The cybersecurity threat is often described in terms of warfare: firewalls, attack and defence, strengthening our digital borders and bolstering our armour. Whether protecting nation states, corporate servers or the ever more connected and interconnected devices in our personal lives, the rhetoric gets ever more elevated as the challenges grow more difficult and complex. But perhaps we can learn more from cybersecurity's other great well of metaphor: biology.
We speak of computers infected with viruses and programs plagued by bugs. Such analogies are useful. Our bodies are masters of defence, highly evolved to tackle the same sorts of issues that afflict computer systems. Both have to deal with complex, varied and ever-evolving adversaries that continually find new forms of attack.
Our approach to shoring up our cyber defences normally concentrates on a 'defensive wall': all our efforts go into keeping nefarious agents out. But this is a bit like the body relying on its skin alone – what happens when something gets through?
Researchers are now working on systems to tackle attackers once they're 'in'. This approach accepts that, just as we may take the rush-hour tube at the height of flu season, we open ourselves up to attack by clicking on dodgy spam emails or choosing weak passwords. A defensive wall will never keep everything out. And that's not the only problem: it's usually impossible to tell when someone has hacked into your systems, let alone what they are doing there. What if we could use artificial intelligence to develop an artificial immune system for our software systems?
Computer scientists have been experimenting with this idea since the 1980s, but only recent advances in artificial intelligence and machine learning have turned it into a practical reality. Using these algorithms, it's possible to replicate two key characteristics of the immune system – learning and memory – to sense what 'normal' looks like within a network, so that unusual behaviour can be recognised and shut down.
One of the most successful companies exploiting this approach is the aptly named Darktrace. The algorithms Darktrace uses record and learn how each part of a network operates so that it can build a model of normal activity. It takes only a week sitting in the background, observing, before it is ready to start detecting the unusual. Just like the human body, the system has to deal with a lot of noise, but so far the tool has done a good job of identifying the most suspicious-looking behaviour. If a computer on the network were to start downloading unusually large amounts of data from a central server, for example, or connecting to suspicious-looking external sites, Darktrace would immediately flag the activity.
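The core idea – learn a statistical baseline of normal behaviour, then flag anything that deviates sharply from it – can be illustrated with a toy sketch. Darktrace's actual models are far more sophisticated and proprietary; the host names, figures and the simple mean-plus-three-standard-deviations rule below are purely illustrative assumptions:

```python
import statistics

# Hypothetical week of observed daily download volumes (in MB) per host.
# A real system would draw these from live network telemetry.
baseline = {
    "workstation-1": [120, 95, 110, 130, 105, 90, 115],
    "workstation-2": [40, 55, 38, 60, 45, 52, 48],
}

def learn_normal(history):
    """Summarise 'normal' as a mean and spread, one model per host."""
    return {
        host: (statistics.mean(vols), statistics.stdev(vols))
        for host, vols in history.items()
    }

def is_unusual(model, host, observed_mb, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above normal."""
    mean, stdev = model[host]
    return observed_mb > mean + threshold * stdev

model = learn_normal(baseline)
print(is_unusual(model, "workstation-1", 900))  # → True (far above baseline)
print(is_unusual(model, "workstation-2", 50))   # → False (within baseline)
```

Real systems model many signals at once (connection destinations, timing, protocols) rather than a single number, but the learn-then-compare structure is the same.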
It's even possible to set up an automated response so that the system can automatically cut access to sensitive information. You can trap the hacker in a 'honey pot' and observe how they act, see what they are interested in accessing and learn more about them – even the body's immune system isn't that clever.
But Dave Palmer, director of technology at Darktrace, says that fewer than 1 per cent of the organisations his company works with opt in to some kind of automated response. It's unclear why this is, but it might change as the cyber threat grows and it becomes increasingly difficult to deal with the sheer volume of data and attacks.
But, like our body's immune system, could this artificial one also cause problems? There are plenty of instances of anti-virus software identifying important computer programs as malicious and shutting them down – a kind of artificial autoimmunity, if you like. The learning element of the algorithm will make this less likely, but with a lot of noisy data it may on occasion lock out the wrong person. Nonetheless, as with our own immune systems, the benefits should far outweigh the negatives.
As hacking into a system like this becomes more difficult, we may see attackers turning the system against itself, targeting important bits of the network and then making them look suspicious so that they are shut down. There are plenty of viruses and bacteria out there that have the same kind of effect on our own bodies. We'll have to wait and see what route the 'bad' guys take in response to these advances. Hackers can learn from biology, too.
It's never going to be possible to be 100 per cent safe, but having defences that can evolve to tackle new challenges and limit any damage when something does get in will give you the upper hand. It's all about trying to shift the balance in your favour. Artificial immune systems will be increasingly important, but perhaps the most important lesson we should take from biology is the value of multi-level defence. Our skin, immune system and even genetic-level defences are all part of what keeps us safe. To give us the best protection possible in what could be a dangerous digital world, we are going to need firewalls, artificial immune systems, anti-virus software and informed users. Let's hope we get it right before it's too late.