Going dark: the insecure past and future of cyberwar


An interview with Fred Kaplan, author of a new book on the disturbing origins of today's digital battles

4th April 2016

On a Saturday in 1983, Ronald Reagan sat down at Camp David to watch a special presidential early screening of the sci-fi film WarGames. In the film, Matthew Broderick plays a high-school student who, thinking he has found a computer game, unwittingly hacks into NORAD and almost sparks World War III. Back at the White House the following week, the president was perturbed enough by the film to badger the chairman of the Joint Chiefs of Staff to look into the threat of cyberwar. Could this kind of thing actually happen? It was the first time the idea had been raised in the Oval Office. "We'll look into it," he was told. The answer, when it came, was that the threat was much worse than Reagan feared.

From this unlikely anecdote to the revelations of Edward Snowden and the North Korean hack of Sony Pictures, journalist Fred Kaplan traces the history of cyberwar in his new book, Dark Territory. The book, by Slate's national-security columnist, follows the NSA and other organisations as military officers, scientists and spies opened up a new dimension of warfare. The Long + Short spoke with Kaplan about how cybersecurity went from a cinematic anecdote to a $200bn industry that threatens world peace.

Fred Kaplan
You describe the failure to adequately secure internet technologies at their birth as the "bitten apple in the digital Garden of Eden". What happened, and could it have been different?

In 1967, just before the ARPANet (the precursor to the internet) was about to roll out, a computer scientist named Willis Ware – who worked at the RAND Corporation and advised the National Security Agency – wrote a secret memo (it's since been declassified) warning that once you put information in a network, with online access from multiple, unsecure locations, you're creating inherent vulnerabilities; you won't be able to keep secrets any more.

When I was researching my book, I asked the man who was director of ARPA at the time what he thought of Ware's paper. He said his team begged him not to saddle them with a security requirement; it would be like telling the Wright Brothers that their first plane had to carry 20 passengers 50 miles. Let's do this one step at a time, they said. Besides, the Russians won't be able to do this for decades. It took about three decades.

Meanwhile, we'd built up whole systems and networks with no provision for security. There were alternative ways to build these systems: they would have meant less anonymity and slower speeds, but they could have made for a more secure system.

Fifty years after Willis Ware came to the conclusion that any network is inherently vulnerable, how secure are we in the US and the UK?

Really not secure at all. Ware told someone at the time, "The only computer that's completely secure is a computer that no one can use," and that's conventional wisdom today. The military is the most secure realm of society, but the situation there is far from perfect.

In every Pentagon war game that tests whether a Red Team can hack into a command-control network, they always get in. The emphasis now is on detection and resilience: making sure commanders know when someone hacks in and repairing the damage as quickly as possible.

You describe how the first time the White House really discussed cyber-attacks was the result of Reagan watching a fictional film in which a teenage tech whiz unwittingly hacks into military computers and nearly triggers World War III. The common feeling, still, is that governments are out of touch, tech whiz-kids are in control and threats could come from anywhere. How true is that today?

It’s not quite that bad. When Reagan watched WarGames in 1983, computers were still new. Even in the late 1990s, when foreign hacks started happening, governments had no bureaus or protocols for dealing with intrusions. Now they do; there are tens of thousands of officers and officials specialising in cybersecurity or cyber-offensive operations.

Even so, there are also private companies – even freelancers operating out of basements – that make a good living from "zero-day exploits": discovering vulnerabilities (in computers, operating systems, routers and so on) that no one else has found, then selling them to the companies, the government, or (in the case of black-hat hackers) foreign spies or criminals.

Companies and even places like the FBI, NSA and GCHQ have come to value these hackers.

We've built up whole systems and networks with no provision for security. There were alternative ways to build these systems: they would have meant less anonymity and slower speeds, but it could have made for a more secure system

How do you define a term like 'cyberwar', and what does that mean for our understanding today of what, say, now constitutes an act of war? Is cyberwar just a digital component to wars we already wage – is it just a new battleground, or an entirely new form of battle?

In the last two years of the Bush administration, when Robert Gates first became secretary of defence, he asked the Pentagon's legal counsel precisely this question. He didn't get a reply for two years – and it wasn't really a reply. Nobody has thought this through.

Nation-states (and I include the US and UK) are hacking into one another's military networks and critical infrastructure systems all the time. Any future war will definitely have cyber as one of its 'domains' (which could do critical damage to command-control and computer-connected weapons like GPS-guided 'smart bombs').

Yet this hasn't been thoroughly studied by high-level people with strategic outlooks. Partly this is because, until very recently, the technology has been embedded in places like NSA and GCHQ, where only those with the highest security clearances know enough to discuss the issues on even the most basic level. This is just now beginning to change.

The NSA and the US military may be very good at securing themselves, but when targets can include private companies and infrastructure, does anyone have a good idea of how to systematically take care of the rest of us?

When these vulnerabilities were first examined in the 1990s, some White House aides proposed mandatory cyber-security requirements for companies dealing with critical infrastructure (transportation, energy, finance, water supply and so on). This push was resisted by the companies and by economic advisers who thought it would slow down research and development. There are now way too many networks for the government to protect.

The banks are doing a fairly good job, because their business depends on taking our money and earning our trust; plus they have lots of money to spend on security. For the utilities that own power grids, though, there's no such incentive: the cost of cleaning up after an attack isn't that much more than the cost of preventing one, and it's not clear that preventative measures would really work.

Meanwhile, no one's making them do anything. As a result, the government's notion of "defending the nation from a cyber-attack" is to penetrate enemy networks in order to see whether an attack is in the works: the digital equivalent of placing a spy in their systems. But of course, once they're inside, this could be seen as – and, in fact, could be – not just a form of espionage or defence, but also laying the groundwork for our own cyber-attack.

Nation states (and I include the US and UK) are hacking into one another's military networks and critical infrastructure systems all the time. Yet this hasn't been thoroughly studied by high-level people with strategic outlooks

Government agencies' ability to monitor the communications of their citizens, innocent and suspect, is a heated topic. How has working on the book affected your own thinking on this issue?

I've learned this: If you're worried about criminals or mischief-makers hacking into your bank account, there are things you can do – the digital equivalent of installing a better lock or a burglar alarm. But if someone really wants something you have, if he's really good at this, and especially if he has the resources and wherewithal of a nation state, there's almost nothing you can do.

In the light of Apple's recent fight with the FBI over encryption, did your research reveal any historical precedent or shed light on these questions? Is security better served with unbreakable, end-to-end encryption?

There's been cooperation or complicity between telecoms and intelligence agencies going back nearly a century. In the 1920s, an American spy agency persuaded Western Union to give them access to all telegraphs going in and out of the United States.

When telephones came along, AT&T routinely let the FBI and NSA tap phone lines. In the internet age, it's been a two-way street. The NSA hacks into the backbone of networks; in exchange, it helps the companies solve problems. If computer or software companies want to sell their wares to the Defense Department, they have to be vetted for security. A branch of the NSA, called the Information Assurance Directorate, does the vetting. When Microsoft submitted its first Windows operating system for vetting, IAD found 1,500 points of vulnerability. IAD helped Microsoft patch the holes – or most of them: they left a few open, so that, when foreign governments bought the system, the NSA could hack into it.

The fight between Apple and the FBI over Syed Farook's phone has very little to do with that particular phone. The government is trying to create a new legal precedent for perpetuating this age-old arrangement in an era of stronger encryption; Apple is trying to disrupt that arrangement.

We've long feared that "a handful of technical savants, from just down the street or the other side of the globe, could devastate the nation" – so why hasn't it happened yet?

I'm not sure it could "devastate the nation". The US and, to some extent, the UK are sufficiently decentralised that you couldn't "shut off all the lights" with one switch. But you could shut off quite a bit. Why hasn't it happened? Why hasn't a nuclear bomb gone off since 1945?

I think the big powers know that this would cross a Rubicon, and who knows what happens next? Besides, nation states don’t suddenly invade or bomb each other out of the blue; attacks grow out of crises and confrontations. No such crisis has taken place as yet in the cyber-era. But when Russia invaded Georgia, it used not just tanks, airplanes and ships but also cyberweapons simultaneously. Cyber deception has enabled US military campaigns in Iraq, Afghanistan and Bosnia, as well as Israeli campaigns in Syria (to say nothing of the US-Israeli Stuxnet operation against Iran's uranium enrichment plant). If terrorists get hold of cyberweapons, then, of course they might inflict mayhem for its own sake.

The future of cyberwarfare is usually framed as a threat. Scary scenarios in which the US and its western allies are out of control, a battlefield that's not remote, but includes companies, movie studios, banks, everyday infrastructure on home soil – that doesn't involve just soldiers, but all of us. But is it an opportunity, too, something preferable to the battlefield? What are the upsides? In your survey of the history and current state of cyberwar, is there anything to be optimistic about?

It's better, I suppose, to have a power plant shut down through cyber means than for someone to blow it up with a bomb, destroying it for good and probably killing people besides. But it's more likely that a cyber-attack will be one part of a larger war. When planes first took to the air, some 'visionaries' thought aerial warfare would replace the clash of armies on the ground. That didn't happen; it just supplemented those clashes.

Given that there is a lot of money to be made in cybersecurity, are governments – and others – ever overstating the threats? In 50 years' time, will we be speaking ill of a "cybersecurity industrial complex"?

This complex is already going strong. Cybersecurity is at least a $200bn business. The top cadets from the military academies are joining cyberwar corps. It's the fastest-growing segment of the military budget. Executives of every kind of company realise they need to hire people who can defend against cyber-attacks – in part to soothe stockholders and customers, in part to actually ward off the attacks.

Dark Territory: The Secret History of Cyber War, by Fred Kaplan, trade paperback published by Simon & Schuster, £12.99, out now

Image by Carol Dronsfield



