Computer says no: justice, accountability and clarity in the age of algorithms


A discussion on the ethical implications of algorithm-supported decisions, how they impact our lives and how they may fundamentally change our existing systems of law

4th March 2016
By Harry Armstrong

On 1 February, Nesta (which publishes The Long + Short) brought together academics from computer science, law and the social sciences to discuss the ethical and legal implications of algorithms and machine-learning systems being used to make more decisions about – and for – us in our everyday lives.

As algorithms make more autonomous decisions, where responsibility falls if something goes wrong becomes a more important and difficult question to answer. One particular focus of the speakers' discussion was how we should think about apportioning responsibility, and how existing laws deal with this issue and may need to adapt.

Here's part of their conversation.


Speakers

Mireille Hildebrandt, Professor of Interfacing Law and Technology, Vrije Universiteit Brussels

Burkhard Schafer, Professor of Computational Legal Theory, University of Edinburgh

Ramon Amaro, Associate Lecturer in Critical Theory and Media Philosophy, Goldsmiths, University of London

Chaired by Lydia Nicholas, Senior Researcher, Nesta


Lydia Nicholas (chair): The machine is not, legally, allowed to make decisions on its own. There has to be someone, at the end, who's ticking the box and saying, yes, I accept this decision. So, in that kind of complex system, can we talk about where responsibility lies and how we apportion responsibility? And even if there are current models of there being one responsible person, is that suitable for our current kind of complex-system world?


Mireille Hildebrandt: The responsibility seems to be distributed and very complex. The role of the law is to clarify this and to attribute responsibility in a way that enables whoever either produces, designs or employs these algorithms to know what he can expect, in terms of being held responsible, and can make decisions based on that.

That means if I'm, let's say, a network operator in a smart grid and I begin to use these algorithms to do load balancing, if I'm aware of the fact that I am liable, under private law, if things go wrong, then I will, for instance, see if I can insure this risk.

Now the insurance company is interested in only one thing and that is whether it will be able to make a profit on me. So it will do the calculations and say, is this a risk for me or not? And then the insurance company might say, "Sorry, you can go ahead and do it, but I'm not going to insure you because I have no idea what will come of this." So this kind of private law liability can stabilise the situation. We have product liability in European law. That means that if you manufacture a product, there is a strict liability. If something goes wrong with that product and causes damage, then you are held liable.


The same machine-learning algorithms that power the things we really enjoy – our Facebook news feed, discounts on our holidays – power things that may be detrimental to society: drones, automated weaponry, and so on


There's a whole discussion about whether we should have a system like that for software. And the argument against it is that this will stifle innovation. This is an argument you can use for very many things – anything stifles innovation – but I think it's very important to see that this is all about checks and balances. So you want to have your cake and eat it, too. And sometimes that is possible.

But what the law does, once again, is not saying, let's go into the complexity. No. It is taking a [step back] and saying, who can we hold accountable such that that entity will be able to make sense, to take the right type of decisions? And that will influence the entire chain before it.


Ramon Amaro: I'm not a lawyer. I'm a social scientist and an engineer, so, as a social scientist, my primary concern is not so much accountability and responsibility, but more the social, political and economic impact on communities and environments, based on who would be held accountable.

And so the same machine-learning algorithms that power the things we really enjoy, such as our Facebook news feed, shopping recommendations, discounts on holidays – those are the same technologies, at their foundation, [as those which] power things that may be detrimental to society, such as drones, automated weaponry and so on and so forth.

In a sense, many of these technologies are two sides of the same coin. So it actually poses different questions when we talk about accountability. What's the relationship between the things that actually benefit us and the things that may actually be causing harm? That's exacerbated by the fact that there are so many players involved, not only in the design of the algorithms, as you mentioned, but, actually, in the deployment.

It takes 168 operators to operate one drone – one militarised drone. And that goes into everything from engineering design, all the way to navigation, and, of course, bringing the drone back. So when we deal with issues of accountability, which one of those 168 is responsible?

And are those 168 automatically caught within other power structures that may not give them free agency to even participate within that particular type of environment? It gets even more complicated when we divorce ourselves from the idea of data and the internet as being some kind of free, loose system. Because if we really look at the mathematics that form the data we're talking about, even when we look at the structure of the internet, it's very much solidified.


It takes 168 operators to operate one militarised drone. When we deal with issues of accountability, which one of those 168 is responsible?


It's based on very hardened databases, which transmit data through very hard wires through a system of protocols that have a set of rules, of logics that aren't very flexible. So in a way, data isn't free. We aren't living in a sort of freeform data society.

But an overall ethical framework has yet to exist because it's such a nuanced type of technology. So, in a way, I'm avoiding the question. Purposely, because, in a way, I believe that the question of accountability shuts down the multitude of other, smaller issues that have a much wider impact on how our relationship with technology has developed into a society where secrecy is a logic of power, where our sense of agency and accountability has to navigate the ownership of a ubiquitous system that no one really understands.

So I think, before we get to the actual "Who's to blame?" or "Who's to be held accountable?", we need to start unravelling the social situations that have merged into a system of, basically, abstract ways of living.


LN: I wonder, Burkhard, if you had any comments on the responsibility question?


Burkhard Schafer: I'm not a lawyer myself. But one of the big problems I have – and this is not a UK problem, it's across the globe – [is with] the way we train lawyers as litigators and judges. People who come in after something really, really bad happened and who then apportion blame.

That is how we train our students. And, in a way, the way governments react to something going wrong is along the same lines. Something bad happened. We need to blame someone and we need to put in legislation. And some of the most disastrous legislative proposals that we have to deal with at the moment are cases where something definitely went wrong, and now there's a rush to legislate. But there's no guarantee that it's going to make things better rather than, potentially, worse.

And that also has a bit to do with how we assess the risk of technology. We are comfortable with risks. We are used to them. We don't notice them any longer. We don't even care about them that much. But if something new comes along, then we notice it. And when something then goes wrong, we want to hold someone responsible.


Rather than rushing in and apportioning blame, ask yourself, would things have been better without the technology? Might they have been worse? What are the consequences that prohibition or problematic allocation of responsibility would bring with it?


And that, very often, can be a harmful response. It's harmful to us ourselves, because it very often closes down, very early on, potentially very, very positive developments. So one important thing, for me, would be to emphasise: yes, things will go wrong.

But rather than rushing in there and apportioning blame, ask yourself, would things have been better without, for instance, the technology? Might they have been worse? What are the consequences that prohibition or problematic allocation of responsibility would bring with it?

My favourite example – I remember that at my university, when we had just started with this whole internet thing, there was a rumour that students had been able to hack into the email system and intercept essay questions. It was never confirmed, but it was a new technology and no one knew what was going on. And the thought was very, very plausible. So there was immediately a rule – no exam questions by email any longer.

What did we do? Well, we went back to the old system – printing them out on printers that sat in open-access corridors. Then we put the hard copies in our desks. And we left for coffee, and tests, and lectures. And we left the offices open, because we never locked them in those days. And then we put them in an open pigeonhole. So no one ever looked at the risks of the old system, the non-technology system, because we knew these risks. We were comfortable with them.

But because something bad had happened or might have happened and it was related to an unknown technology, we rushed into rules that made things worse. So that would be my big, big caveat.

There are relatively easy questions, from a legal perspective, when something goes badly wrong. If you think about an automated car and you have a serious accident, I think the law can deal with that. That's a type of harm we understand. It can be quantified. You have a clear victim. That's unproblematic. What I'm much, much more worried about with machine learning is that we get a type of harm that is much less visible, much less quantifiable, and that sits much less comfortably with our normal rules and procedures.


What I'm much, much more worried about with machine learning is that we get a type of harm that is much less visible, much less quantifiable, and that sits much less comfortably with our normal rules and procedures.


LN: Mireille has wanted to comment since you started to talk about how we train lawyers.


MH: We'll do some lawyer unbashing here. The example that you give is very interesting, because when you revert back to the old use of printed things, you say, "Well, that creates more problems." Yes, but that is because the printer has changed. So it's not that simple. I would say that I'm not thinking now as a lawyer. I am thinking of the system of law, the attribution of responsibility, and the effects of that.

And I totally agree that it is very important, before you begin to attribute liability – which, for me, is not the same as blame – to ask those questions first. In private law, you have strict liability, where the issue is not blame; the issue is a redistribution of the damage. And you sort of try to incentivise key players in the field to take the right sort of decisions.

And instead of telling them, "You have to do this, and then you have to do that, and you're not allowed to do this," you tell them, "Look, if something goes wrong and people suffer damage, you are going to pay." And you allow them then to figure out ways to have their cake and eat it, too.


This transcript has been edited for clarity and length. A full transcript will be published on the Nesta website.

