The brain age
It is the era of the brain. Billions of dollars are being poured into large-scale research projects such as the European Commission's Human Brain Project and the American BRAIN initiative (a recursive acronym for Brain Research through Advancing Innovative Neurotechnologies). Popular books and magazine articles tell us that, finally, neuroscience is answering questions that have puzzled philosophers and other students of human nature for centuries. It used to be that philosophers, theologians and psychologists had the job of interrogating the soul or the mind. Now we assume that only neuroimaging will deliver the answers.
Eventually, decades of heavy investment in neuroscience may bear astonishing fruit. The only problem is what that might mean for society. If some of the dearly held assumptions of today's proselytisers for neuroscience turn out to be true, will we be glad we found out? It may be that brain science eventually lets us see what was previously hidden, and understand what was previously ineffable. It could offer many benefits, but also threaten much of what we hold most dear. Perhaps we ought to be circumspect about ascribing too much power to neuroscientific findings.
Let us peer as best we can into the brain economy of the future. We can start by extrapolating from the aims of current research projects. The Human Brain Project, for example, hopes to build a detailed computer simulation of an entire human brain: all its 86bn or so neurons and the trillions of connections between them. Many believe (or at least hope) that one day some successor to the HBP will provide such thorough maps of inner space that we will be able to identify the 'neural correlates' of thoughts, so that any particular thought could be reliably matched to a particular pattern of neurons firing. If that came to pass, then just by looking at a brain scan you could tell what the subject was thinking, in the same way that Michael Fassbender's android in Ridley Scott's Prometheus watches people's dreams on TV. In this future, we will have not only the perfect lie detector but also the perfect surveillance technology.
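To see what such 'decoding' would even amount to, here is a deliberately toy sketch in Python. It matches a brain-activity vector against a small dictionary of previously mapped patterns, one per thought – and everything in it (the patterns, the four-number 'scan', the very assumption that thoughts have stable signatures) is a hypothetical stand-in for machinery that does not yet exist.

```python
# Toy illustration only: matching a 'scan' to the nearest known activation
# pattern. All data, and the premise itself, are hypothetical stand-ins.
import numpy as np

# Hypothetical library of previously mapped 'neural correlates':
# one activation vector per known thought.
known_patterns = {
    "hungry":  np.array([0.9, 0.1, 0.3, 0.7]),
    "anxious": np.array([0.2, 0.8, 0.6, 0.1]),
    "calm":    np.array([0.1, 0.2, 0.1, 0.2]),
}

def decode_thought(scan: np.ndarray) -> str:
    """Return the known thought whose pattern best matches the scan."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(known_patterns, key=lambda t: cosine(scan, known_patterns[t]))

print(decode_thought(np.array([0.85, 0.15, 0.25, 0.6])))  # -> "hungry"
```

The hard part, of course, is not the lookup but the library: it could only be compiled if every brain encoded 'hungry' in the same way, which is precisely what is in doubt.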
By contrast, the BRAIN initiative, rather than modelling a whole brain, seeks instead to develop better ways of imaging brains and also, intriguingly, of stimulating them. Its current projects include "lasers to trigger the firing of specific brain cells" and "radio waves to stimulate the activity of specific brain circuits". This implies a possible future where, if we know the neural correlates of thoughts, we can implant thoughts into brains through the external stimulation of the neurons. This sounds very science fictional now, but so would any kind of real-time brain imaging have done a century ago. And, after all, humans have had one technology for remotely implanting precise thoughts in other people's heads for thousands of years. It's called written language.
Imagine the day, then, when the technology exists to allow us both to read thoughts and to implant them in another brain. Two people wearing the right kind of head-mounted gadget could indulge in computer-mediated, brain-to-brain communication – or what we would previously have called telepathy.
This sounds fun. But then a lot of things do until you consider the advertising opportunities. The film Minority Report is comparatively utopian in its vision of the hero being bombarded only by audiovisual ads selectively projected at him as he walks down the street. If people are wearing the right kind of receiver, perhaps the future equivalent of a smartphone, adverts can and will be inserted directly into our minds by neuromarketers. Such adverts will not only present products in an attractive way but also actively implant the desire to buy them. A giant new technology advertising company will thus supplant Google and Apple, with their old-fashioned advertising delivery devices mounted on the wrist or face.
We'll probably even consider direct-to-brain advertising a fair price to pay because we'll be so seduced by the future version of autocomplete for search queries: it'll be autocomplete for thoughts.
Self-improvement may one day be similarly enhanced. Today's growing industry of self-help literature is based on a range of psychological tricks to nudge our minds into working in a way that suits us better. Such fuzzy strategies may come to seem redundant if we can address the neurons directly, through clever machinery. Already you can buy a commercial electroencephalograph (EEG) device called a Muse headband, which comes with a smartphone app designed to train you to achieve a state of mental calm. This is done using biofeedback: if the Muse detects that your brain signals are getting noisier, the sound of wind through your earphones grows in volume and you must try to relax in order to calm it down. When I tested the Muse for a week, I didn't notice I was any calmer in normal life, though I was intrigued by its manufacturer's promise that a future version might help you "match wits with your perfect mate", presumably through some kind of telepathic version of Scrabble-based dating.
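The feedback principle at work here is simple enough to caricature in a few lines of code. The sketch below is purely illustrative – read_eeg_noise() and set_wind_volume() are hypothetical stand-ins, not the real Muse SDK – but it captures the loop: the noisier the signal, the louder the wind, until you relax.

```python
# Illustrative biofeedback loop, assuming hypothetical I/O functions.
import random
import time

def read_eeg_noise() -> float:
    """Stand-in for an EEG 'noisiness' reading, normalised to 0..1."""
    return random.random()

def set_wind_volume(level: float) -> None:
    """Stand-in for the app's audio output."""
    print(f"wind volume: {level:.2f}")

CALM_THRESHOLD = 0.3  # assumed cut-off below which the wind falls silent

for _ in range(10):  # one short feedback session, sampling once per second
    noise = read_eeg_noise()
    # Feedback rule: volume tracks signal noisiness; silence when calm.
    set_wind_volume(0.0 if noise < CALM_THRESHOLD else noise)
    time.sleep(1)
```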
Such technologies, we may imagine, will only improve and become better targeted; and neurogadgets will enable us to become not only calmer but also cleverer and more conscientious. This surely sounds attractive – until it becomes mandatory: say, when your health insurer demands it. Already American and European corporations run employee 'wellness' programmes that strongly encourage going to the gym and otherwise staying healthy. It wouldn't be surprising if future employees were subtly coerced into using the latest technology to keep their brains as toned as their biceps.
Most people will probably consider that the boon of such technologies, if they can eventually be developed, will outweigh the annoyances. After all, we don't want to un-invent the internet just because it has also proved to be a slick medium for advertising. But there are profound questions about how smooth the path to such futuristic developments will be, if they are even possible at all.
Today's popular neuroscience, it should be pointed out, is vastly overhyped. We are assured erroneously by evangelising articles and books that we know how decisions are made in the brain, which parts of the brain fulfil which specialised function, and how love or intuition boils down to a handful of neurotransmitter molecules. It is all nonsense (or, as some of us have called it, neurotrash or neurobollocks); and most practising neuroscientists are in fact embarrassed by such inflated claims. Yet, like Fox Mulder in The X-Files, we want to believe.
One recent study by Diego Fernandez-Duque, Jessica Evans, Colton Christian and Sara D Hodges found, as its title states, that 'Superfluous Neuroscience Information Makes Explanations of Psychological Phenomena More Appealing'. Subjects found explanations of human behaviour more credible when they included spurious, sometimes deliberately illogical neurobabble than when they contained factual information from hard sciences such as physics. Evidently, we look to neuroscience today as a uniquely privileged source of revelation about human nature.
Yet some tough problems might be unsolvable even in principle, however many research dollars are thrown at simulation and imaging technologies. Could we really decode the neural correlates of every possible thought? That would depend on every brain having the same thought in exactly the same way, and it's not at all clear that they do, even if we buy the neurocentric thesis that it's the brain that has a thought. It's certainly possible to argue, on the contrary, that only people, not brains, have thoughts.
Indeed, the biggest challenge to some of the more excitable hype surrounding the big-data neuroscience research projects is a problem quite familiar to philosophers: that of consciousness. We simply have no idea how the electrochemical interaction of neurons in the brain gives rise to first-person conscious experience, of the kind you are having right now while you read this article. It's quite possible that no progress at all on this problem will be generated from the existence of a complete map of the brain, just as a detailed atlas of the United States can't predict the music of Lady Gaga.
In this sense, we are firing blind. It's an even more profound problem than when the much-hyped Human Genome Project, despite its successes, failed to lead to many new genetic interventions for common diseases, because it turned out that genes interact in too many complex ways with each other and with the environment. But at least we understand in principle what a gene does: it codes for proteins made in the body. We have absolutely no idea what a neuron does in the sense of how it contributes to consciousness.
Paul Fletcher, professor of health neuroscience at Cambridge University, explains that this is the major obstacle for progress in the field. "Nobody has a credible idea of how brain processes produce mental processes, or even a vocabulary with which to articulate such an idea, should it suddenly come to them in the bath," he says. "Good science is usually about linking levels of description: showing how an observation at one level – say, the genetic – ultimately manifests in a physiological process or behaviour or symptom through a series of intermediary facts each expressed at intervening levels… We just don't have these linkages in brain-mind science; it's like the brain observations are made in one language and the mind observations in another, and there is no clue how to translate between those languages."
Any assumption that neurons are all that matter in understanding our mental life is therefore controversial. Hence the existence – with its implicit rebuke to the Human Brain Project – of the Human Mind Project: a cross-disciplinary group of academics based at the University of London, led by the neuroscientist Colin Blakemore. The group emphasises the role of the arts and humanities in the study of human nature, and "the importance of a comprehensive, interdisciplinary approach to understanding the mind, integrating science and the humanities".
Furthermore, in their Superfluous Neuroscience Information study, Fernandez-Duque et al point out that the science of psychology is "less prestigious than neuroscience but equally pertinent – if not more – to explanations of the mind". Paul Fletcher says: "I don't think it's inevitable that collecting more brain data will inform our knowledge of the mind – though I sincerely doubt that we will ever make progress in gaining useful, applicable knowledge of the mind without paying great attention to brain data."
Whatever its limitations, advanced neuroscience should have profound medical benefits in the field of mental health. Among the many future therapeutic possibilities Paul Fletcher foresees are techniques such as optogenetic stimulation, "whereby a virus is used to carry protein switches onto specific neurons, and these neurons can then be activated or deactivated by light". A better understanding of the brain might also enable us better to identify the causes of its dysfunction. "Perhaps some forms of mental illness may be caused by fundamental disturbances in the structure of certain neural pathways," Fletcher says; "others by inflammatory processes, others as a consequence of plastic changes emerging from trauma and adverse social circumstances." If we understood the brain better in the future, he thinks "we could begin to distinguish different causes of mental distress and set up more complex treatments and interventions, avoiding the one-size-fits-all approach that we're currently reliant on."
But the ambition to understand how distress operates in the brain could give future research some darker implications. If we are allowed to peer into the neurons to examine the causes of mental distress, the next step might be to unpick the reasons for certain behaviour, particularly in the area of criminal justice.
Even now, the crude, broad-brush strokes of modern neuroscience have convinced some that brain scan evidence ought to be used in the criminal courts. If that trend continues, our notions of moral responsibility and justice will need to be overhauled. One detailed vision of how this might work is offered by the neuroscientist David Eagleman in his 2011 book Incognito: the Secret Lives of the Brain.
Eagleman explains that there have already been criminal cases in which the existence of a certain kind of brain tumour, or of another obvious brain dysfunction such as sleepwalking, has led to the accused being found not guilty. In one case, a tumour pressing on a certain part of a man's brain allegedly prompted him suddenly and completely uncharacteristically to start seeking out paedophile images on the internet. When the tumour was removed, it was claimed, he had no further interest in such images and became 'normal' again. Most people would be inclined to agree that the tumour somehow made him do it, and that he was not responsible for his actions in the same way as someone who was not coerced by biology.
But then, aren't we all coerced by our biology, all the time? Moreover, the picture of our brains will only get more detailed. "Currently we can detect only large brain tumours," Eagleman writes, "but in one hundred years we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioural problems."
So the view is that eventually there will be a biological explanation available for all kinds of pathological behaviour. (It already makes intuitive sense that if someone has committed a rape or murder, there must be something wrong with their brain.) At that point, Eagleman argues, we will need to abandon ideas of moral responsibility and retribution altogether. Criminals, he says, "should always be treated as incapable of having acted otherwise." What we need to look at is their 'modifiability', ie, whether an offender's brain has sufficient neuroplasticity for us to be able to rewire the problem out of existence. If so, he/she might be subjected to what Eagleman calls a 'prefrontal workout' to strengthen the area of the brain associated with impulse control.
What happens if the criminal isn't modifiable? Eagleman's answer is both humane and disturbing: humane in that it recognises that punishment would be pointless, because it won't correct the criminal's future behaviour; disturbing in that the recommended course of action is, in his words, to 'warehouse' the offender indefinitely in some location cut off from society. "Someone with frontal lobe damage," he says, "who will never develop the capacity for socialisation, should be incapacitated by the state." He adds, almost casually: "The same goes for the mentally retarded or schizophrenic."
Trust in brain science as a total explanation of behaviour thus leads to apparently well-meaning policy recommendations that seem unaware of just how dystopian they sound. A life sentence for all criminals unfortunate enough to possess the wrong kind of brain certainly sounds harsh. But if recidivism can one day be accurately predicted through neuroimaging (already the subject of a 2013 study by the Mind Research Network in Albuquerque, New Mexico, which one ethicist called 'promising'), then it will be hard to justify freeing someone who will commit more crimes. To refuse to grant parole to a prisoner on such a basis is, of course, to deny them liberty simply because of certain facts about their physiology. It is to punish something even less tangible than George Orwell's 'thoughtcrime': unconscious brain-state crime.
All this is worrying enough if the science is reliable. But what if our confidence in the science is misplaced? As Paul Fletcher says: "One thing that we need to emphasise if we are to harness the technological developments, is to remind ourselves of the limits of what they are telling us and of the level of description that they are informing." The worst-case scenario, perhaps, is that we develop tools of neuronal behaviour analysis and modification that seem to work even though we still don't have a theory of consciousness, so don't understand why they work; in the same way that no one currently understands why modern antidepressant drugs work (on the occasions when they do). Technologies often arise before the theories that explain them: the steam engine preceded thermodynamics. But some technologies are more dangerous than others, which is why we agree to limit research into, say, biowarfare. Perhaps we ought to be more circumspect, too, about rushing to develop technologies which could be used to argue that certain human beings with unlucky brain-wiring ought to be indefinitely 'warehoused'.
After all, if such technologies eventually become possible it is easy to see where the social logic leads. If we can probe people's brains and see crime-causing deformities, why wait until they have committed a crime before 'warehousing' them? To do so would be to be complicit in the crimes they would inevitably commit if left to themselves. So the only sensible thing to do would be to subject all citizens to regular brain scans; and 'warehouse' anyone whose results were bad.
Some civil liberties activists might want to complain about this, but they needn't pose a long-term problem if the logical progression of modification takes its course. Concerned, neuroscience-empowered governments will naturally seek to fix the bits of their 'modifiable' citizens' brains that cause political protest, laziness, even sarcasm. There will, after all, be too strong a temptation to begin using the raft of stimulating technologies pioneered by the BRAIN initiative to make everyone 'better'.
Welcome to the future of the brain. If you don't like it, don't worry. We'll soon change that.