Human says no


Future Trends: The next big technological controversy will see the public turn against algorithms and machine learning

19th December 2016
By Olivier Usher

Algorithms share many features with previous technological innovations mired in controversy. Just like the introduction of genetically modified crops, vaccines and nuclear power in previous decades, the use of algorithmic decision-making has broad social implications, combined with a lack of transparency, accountability and choice.

In 2017, public disquiet about the decisions that algorithms make, the way they affect us, and the lack of debate around their introduction, will become mainstream.



What are algorithms?

An algorithm is a step-by-step sequence of rules that sets out how to make a decision about something. The concept has existed since the dawn of science (and the word itself is centuries old), but it is in the past half-century that algorithms have become so important: computer programs are nothing but complex algorithms, rules that tell computer hardware how to make decisions.
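
To make the idea concrete, here is a minimal sketch of a traditional rule-based algorithm, written in Python. Everything in it (the function name, the rules, the thresholds) is invented for illustration; it is not any real lender's logic.

```python
# A hand-written, rule-based decision algorithm: every rule below was
# authored by a human and can be read back. The thresholds are invented
# purely for illustration; no real lender's criteria are shown here.

def approve_loan(income: float, existing_debt: float, amount: float) -> bool:
    """Apply fixed, human-authored rules to a loan application."""
    if amount > income * 4:           # never lend more than 4x income
        return False
    if existing_debt > income * 0.5:  # reject heavily indebted applicants
        return False
    return True                       # otherwise, approve

print(approve_loan(income=30_000, existing_debt=5_000, amount=130_000))  # False
print(approve_loan(income=30_000, existing_debt=5_000, amount=90_000))   # True
```

The virtue of this style is legibility: anyone can read the rules and say exactly why an application was refused.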

Machine learning is a more recent invention. Programmed to recognise patterns in data and to promote desirable outcomes, machine-learning algorithms effectively rewrite themselves: a form of artificial intelligence. Instead of simply implementing the rules a human programmer has specified, the computer figures out the best way of achieving an outcome it is told to prioritise.
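
The difference is easiest to see in miniature. In the sketch below (a simple 'perceptron' learner, with invented data), nobody writes the decision rule at all: the program nudges its own numbers until they reproduce the example decisions it is shown.

```python
# A minimal sketch of machine learning (a perceptron, on invented data).
# Nobody writes the rule; the program adjusts its own weights until
# they reproduce the example decisions it is shown.

# Each example: (feature_1, feature_2) -> label (1 = approve, 0 = reject)
examples = [
    ((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.8, 0.3), 1),
    ((0.2, 0.9), 0), ((0.1, 0.8), 0), ((0.3, 1.0), 0),
]

w1, w2, bias = 0.0, 0.0, 0.0  # the "rule", initially blank
learning_rate = 0.1

for _ in range(100):                  # repeatedly nudge the rule...
    for (x1, x2), label in examples:  # ...towards fitting the examples
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = label - prediction
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# The learned weights now *are* the decision rule, and no human wrote
# them down, which is precisely where the transparency problem begins.
print(w1, w2, bias)
print(1 if (w1 * 0.95 + w2 * 0.15 + bias) > 0 else 0)  # prints 1: approve
```

No human can point to the line of code where the decision lives; the rule is just a set of numbers the machine arrived at itself.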


What’s the problem?

In a short space of time, and with remarkably little fanfare, algorithms have replaced human decisions in huge swathes of life. Some of the decisions are good, some bad, but how these decisions are made is rarely transparent.

The public gets very little insight into how proprietary software works, and for machine-learning algorithms, it’s questionable whether even their programmers fully understand how decisions are made, given that the software teaches itself. In one extreme case of mistaken machine learning, an algorithm created to sort photos of dogs from those of wolves actually trained itself to recognise snowy backgrounds instead.
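
That failure mode is easy to reproduce in miniature. The toy sketch below uses invented data and a deliberately simple learner that picks the single most predictive clue; because every training wolf happens to stand on snow, it learns to look at the background rather than the animal.

```python
# A toy reconstruction of the wolf/snow failure; all data is invented.
# Each photo is boiled down to two numbers: how wolf-like the animal
# looks, and how snowy the background is. In the training photos the
# wolves and the snow always coincide.

train = [
    ((0.7, 1.0), 1), ((0.6, 0.9), 1), ((0.8, 1.0), 1),  # wolves, on snow
    ((0.5, 0.0), 0), ((0.6, 0.1), 0), ((0.4, 0.0), 0),  # dogs, on grass
]
FEATURES = ["animal looks wolf-like", "background is snowy"]

def learn_stump(data):
    """Pick the single feature and threshold that best fit the labels."""
    best = (-1, 0, 0.0)  # (examples correct, feature index, threshold)
    for f in range(len(FEATURES)):
        for x, _ in data:
            thresh = x[f]
            correct = sum((xs[f] >= thresh) == bool(y) for xs, y in data)
            if correct > best[0]:
                best = (correct, f, thresh)
    return best[1], best[2]

feature, threshold = learn_stump(train)
print("Learned rule: wolf if", FEATURES[feature], ">=", threshold)
# It keys on the snowy background: the spuriously perfect predictor.

dog_in_winter = (0.5, 1.0)  # clearly a dog, photographed on snow
print("Wolf?", dog_in_winter[feature] >= threshold)  # True: misclassified
```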

Algorithms and machine learning increasingly make decisions that affect our daily lives: decisions far more important than telling wolves and whippets apart.


Next time you apply for a job, there’s a good chance at least part of the assessment will be carried out by a computer. If you’re unfortunate enough to be prosecuted in the USA, an algorithm is likely to recommend to the judge whether or not you’re released on bail. Whether your mortgage application is accepted, or you get a cheap quote for car insurance, the decision won’t be taken by a human. And prototype self-driving cars rely on algorithms to make life-or-death decisions affecting their passengers – as well as other road users.

While removing human biases can make decision-making fairer, that is not necessarily always the case.

A machine-learning algorithm that has trained itself to identify prospective hires based on how similar they are to successful employees in a company might be a great way of eliminating unconscious bias in your recruiters; or it might simply replicate the biases of your workforce with an added sheen of technological neutrality.




Similarly, there are credible (though contested) claims that an algorithm used to decide who goes to jail in the USA is harsher to black petty criminals, and unduly lenient to public menaces who are white.

Self-driving car algorithms are potentially troubling too. There are reasons to worry that biases could be baked into their software. And even if the software is scrupulously fair, one manufacturer has already said the safety of the occupants comes first: reassuring if you are their customer, but rather less so if you plan to walk along one of the roads they drive on.


Fake news

One vexed topic burst into the mainstream in late 2016, without yet being blamed on algorithms: fake news on social media.

The algorithmic curation of news, and filter bubbles of our own making, combine with a lack of transparency over who produces what content and who checks their facts, leaving many of us wondering if we should believe anyone at all.

In the circumstances, it’s hardly surprising that Donald Trump was able to blame the Google News algorithm for bad press (while benefiting from a deluge of fake news stories himself).


The backlash

In the coming year, the backlash against algorithmic decisions will begin in earnest. The trigger could take many forms. It could be a politician forced to resign over fake news pushed by a news algorithm. It might be a murder committed by a violent thug released on bail thanks to court software. It might be an employer successfully sued over a discriminatory recruitment system; or a pedestrian killed by a self-driving car that's protecting its passenger.

But it’s algorithmic decision-making as a whole that will be in the firing line when the controversy comes to a head.

Technologists will be forced to confront the criticism and address some of the more obvious concerns around opacity and bias. As with previous technological controversies, the promise that algorithms can make the world better could be at risk if the response isn't quick and credible.


The flare-up over fake news has already prompted Facebook and Google to respond with technological fixes of their own. Similarly, organisations such as the Trust Project are developing technology that independently ranks the trustworthiness of the media we consume.

Whether we will choose to trust such technological fixes remains to be seen.

If we don't, expect businesses to start advertising algorithm-free services, ranging from mortgages approved by real bank managers to a resurgence of news websites with humans curating the content. (You could even call them ‘editors’.)

Just as customers are willing to pay more for food free of genetically modified ingredients or pesticides, people will place a premium on decisions made by humans if they don't trust the machines.


A version of this article originally appeared on the Nesta website, as part of its 2017 predictions series.
