Over the past 30 years, the internet has gone from novelty to necessity. Forecasting how we will stay safe in a hyperconnected world over the next 30 years is difficult, but experts point to smart cities and “deep fakes” as two of the major cyber security challenges ahead.
Already, voice assistants and devices such as smart meters and lights are becoming the norm. Smart cities will take this further, embedding the so-called internet of things into infrastructure and built environments. Possibilities include street lights that change intensity based on the presence of humans (detected by their smartphones) and virtual guides for the elderly should they get lost.
That convenience comes at a cost, says Mariarosaria Taddeo, a research fellow at the Oxford Internet Institute and deputy director of its Digital Ethics Lab. “My speculative idea is that the more you have smart cities . . . the wider the surface of attack.”
Ms Taddeo, who is also a fellow at the Alan Turing Institute, is concerned about the effect these complex networks will have on cyber security. With every additional connection, it becomes harder to figure out where a vulnerability has emerged.
“Your [Google] Nest speaks to your [Amazon] Alexa, and your Alexa tells your fridge what needs to be bought next week — when something goes wrong, how do we map the connection and responsibility?”
Artificial intelligence is a key part of cyber security’s future development, according to Ms Taddeo. However, such technology is a double-edged sword: while security experts use advances in the field to identify and respond to threats more quickly, she says, hackers use the same technology to find weak spots.
In smart cities, the scope for disruption is immense. Hackers could take over the AIs that control critical infrastructure, for instance, putting water or electricity supplies in the hands of malicious actors.
Other experts take hope from the evolution of older software such as Adobe Flash. Previously, hackers could rely on a single flaw, says Ryan Kalember, head of cyber security strategy at Proofpoint, a California-based cyber security company. “With a bug in 2010, I could . . . do something really [effective] because the system was not designed in a resilient way.”
By 2050, Mr Kalember thinks hackers will instead have to exploit a series of system vulnerabilities. This means technical attacks will be the preserve of only the best hackers, such as the Israeli spyware company NSO Group, which last year said it had figured out how to hack iPhones. “Real technical vulnerabilities will be harder and harder to find,” he says.
Mr Kalember derives further optimism from improved security methods: smartphones, for example, already use biometric authentication such as fingerprint or facial recognition instead of passwords. “As you have things like face identification it becomes increasingly absurd that we have dozens . . . of passwords managed in deeply insecure ways,” he says.
This shift is essential, he adds, because although technical vulnerabilities will be harder to exploit in future, humans are already the weakest link in cyber security, with even the most tech-savvy individuals vulnerable to increasingly personalised and complex attacks.
The rise of deep fakes — synthetic audio, video and photos of people generated by algorithms — is one source of vulnerabilities. “[Deep fakes] are becoming increasingly common, increasingly accessible and increasingly realistic,” warns Henry Ajder, head of communications and research analysis at Deep Trace, a start-up that identifies deep fakes.
Hackers can depict someone saying or doing almost anything using ever-decreasing amounts of source material. So far, the technology has mainly been used to create so-called revenge pornography. But the risk of other criminal uses of deep fakes is growing as the technology improves.
In December, Facebook discovered that a pro-Trump US media outlet had used similar technology to generate profile pictures for hundreds of fake accounts that were then used to push political messages. The same techniques could just as easily generate unique avatars that obscure the identities of hackers running scams.
Groups like Deep Trace are working on technology to identify these synthetic images, but it is an arms race — each new iteration of deep fakes is better than the last.
Mr Ajder admits that, at their worst, deep fakes could worsen the disinformation environment, creating a world “where you cannot tell what is real”. One way he envisions countering this is through “semantic passwords”. These use specific information only a close contact would know, such as a memory, to differentiate between humans and digital doppelgängers.
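Mr Ajder does not describe how a semantic password would be implemented, but the idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name, the phrasing of the shared memory, and the use of an HMAC so that the memory itself is never transmitted are all hypothetical, not Deep Trace's design.

```python
import hmac
import hashlib

def semantic_check(shared_memory: str, answer: str, nonce: str) -> bool:
    """Hypothetical 'semantic password' check: both sides hold a private
    shared memory; the caller proves knowledge of it by answering a
    challenge. Comparing HMACs keyed on a per-call nonce means the
    memory is never sent in the clear and replies cannot be replayed."""
    expected = hmac.new(nonce.encode(), shared_memory.lower().encode(),
                        hashlib.sha256).hexdigest()
    given = hmac.new(nonce.encode(), answer.lower().encode(),
                     hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, given)
```

A genuine contact who remembers, say, the holiday you took together passes the check; a digital doppelgänger built from public footage has no way to produce the answer. The weakness, of course, is the same as any shared secret: it only works while the memory stays private.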
With improved systems for identifying deep fakes and higher levels of media literacy among users, Mr Ajder remains optimistic that their impact can be limited.
But cyber security in 2050 should not come at the cost of privacy, warns Ms Taddeo. She points to state-of-the-art systems that can “monitor all of your movements while you’re connected — any keystroke, movement on the trackpad, where your eyes are moving”.
Between pervasive data collection and constant authentication, she says, the risk is that we create a panopticon, where surveillance systems are expanded out of all proportion to threats.