The A-Team, of the eponymous TV show, included military operatives and masters of disguise. A digital version will not be enough to make the internet safe © Nbc-Tv/Kobal/Shutterstock

A new generation of highly persuasive deepfakes designed to manipulate and confuse the public is worming its way through the internet. We may think we are invulnerable, but the sophistication of this new breed is likely to catch even the savviest of us out — in fact, it has probably already done so.

Today’s deepfakes are more than just Twitter accounts controlled by bots or manipulated videos of real people in the public eye. They are being designed to pass as ordinary, unremarkable people, or as journalists. Take the case of Oliver Taylor, a coffee-loving politics junkie who writes freelance editorials for the Jerusalem Post and the Times of Israel. Or so the world thought until a Reuters article in July noted that, despite his ample online footprint and convincing profile picture, “Taylor” does not exist.

It is not clear who was behind the fake persona. The technology to generate deepfakes is now so accessible and cheap that “Taylor” could as easily have been created by a hostile nation state as by a teenage prankster in a basement.

His mission was seemingly to dupe editors into printing stories that promoted his agenda and built credibility for his profile. He was only exposed after an academic he had accused of being a terrorist sympathiser acted on a hunch that something was off about his bio and picture and began to make inquiries. But for that, “Taylor” could still be going about his business unmasked.

Despite an ample online footprint and convincing profile picture, freelance writer Oliver Taylor turned out to be a deepfake © Cyabra/Reuters

His exposure is probably only the tip of the iceberg. In today’s online environment, especially now that working-from-home arrangements mean new professional relationships are being forged exclusively online, we can no longer be sure who we are dealing with.

Some deepfakes could be masking human agents. But others, more worryingly, may be powered by artificial intelligence programs crafted to exploit the personal data we shed online, pinpointing our vulnerabilities, befriending us, and then manipulating us into doing their bidding. They are sinister precisely because they know us, and our weaknesses, so well.

We lack defences against these programs because much of the information that powers them is already out there, being used by private sector algorithms for marketing and advertising. While privacy legislation helps, it cannot protect us entirely because so much of the information that makes us vulnerable is voluntarily given up.

The US is particularly vulnerable because its laws make domestic counter-efforts legally contentious, specifically covert ones that might influence political processes, public opinion, policies or media. Rand Corporation’s Rand Waltzman says you have to go back to the 1980s and President Ronald Reagan’s Active Measures Working Group to find the last official US counter-propaganda programme.

That group was disbanded after the collapse of the Soviet Union. Its final report nonetheless warned that “as long as states and groups interested in manipulating world opinion, limiting US government actions, or generating opposition to US policies and interests continue to use these techniques, there will be a need . . . to systematically monitor, analyse, and counter them.” 

Counter-intelligence efforts by social media platforms or independent verifiers, meanwhile, can only go so far. Many online personas, especially those on platforms that confer background legitimacy, such as LinkedIn, will have been cultivated for years to appear authentic.

LinkedIn understands the problem. Between January and June last year, the company’s artificial intelligence algorithms intercepted some 19.5m fake accounts at the registration stage alone. Another 2m were intercepted after registration, with an additional 67,000 intercepted following reports from other members. How many are getting through these filters, however, is impossible to say. 

This is why protecting the vulnerable online requires active measures by trusted democratic states committed to human rights. That means deploying data-mining techniques to flag up our own online vulnerabilities to us. Think of it as the deployment of trusted digital guardian angels, operating overtly and in plain sight.

Failing that, the only fallback is to hire independent white-hat hacker groups, often made up of former intelligence or military operatives who are already masters of digital disguise: a version of television’s A-Team. Their slogan went: “If you have a problem . . . if no one else can help . . . and if you can find them . . . maybe you can hire them . . . ” But don’t hold your breath.

izabella.kaminska@ft.com


Copyright The Financial Times Limited 2020. All rights reserved.