YouTube has promised more monitoring of content to protect children © Dreamstime.com

Two decades after Google first turned insights about its users’ online behaviour into gold, the basic infrastructure underpinning the data economy still displays worrying flaws. 

The systems and tools required to support the collection and sharing of personal data on a mass scale — while also providing the security and control users might expect — do not always live up to their billing.

The result has been a parade of privacy snafus and regulatory interventions, as a seemingly endless series of failures comes to light.

One reading of this is that, with today’s greater scrutiny, more weaknesses are being exposed, resulting in the systems being tightened. If so, this could turn out to be a transitional period in which old failures are progressively weeded out.

It could also be the case, however, that the systems failures reflect conflicts of interest that will not be resolved so easily — and that more profound changes to business models will be required.

Events of recent days have served to highlight some of the issues at stake. On Tuesday, for instance, Google was fined $170m in a settlement with the US Federal Trade Commission for failing to prevent targeted advertising from being shown to children, as required by US law. As part of that settlement, it undertook to build a new system through which content providers on YouTube can report when their videos are directed at minors, making it possible to limit the types of advertising that can accompany them.
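In outline, such a system amounts to little more than a per-video flag that the ad pipeline must honour. The sketch below is purely illustrative and assumes nothing about YouTube's actual implementation; the names Video, made_for_kids and eligible_ad_formats are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical model: a creator-set flag determines which ad formats a video
# may carry. None of these names reflect YouTube's real systems.

@dataclass
class Video:
    video_id: str
    made_for_kids: bool  # self-reported by the content provider

PERSONALISED_ADS = "personalised"
CONTEXTUAL_ADS = "contextual"

def eligible_ad_formats(video: Video) -> list[str]:
    """Return the ad formats a video may carry under the self-reporting rule."""
    if video.made_for_kids:
        # Child-directed content: no behavioural targeting, contextual ads only.
        return [CONTEXTUAL_ADS]
    return [PERSONALISED_ADS, CONTEXTUAL_ADS]

print(eligible_ad_formats(Video("abc123", made_for_kids=True)))   # ['contextual']
print(eligible_ad_formats(Video("xyz789", made_for_kids=False)))  # ['personalised', 'contextual']
```

The obvious weakness is visible in the code itself: the whole scheme hinges on a boolean that the partner, not the platform, sets.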

This sounds like an inherently weak form of compliance, relying as it does on self-reporting by partners who will have strong incentives not to comply. No doubt that is why YouTube also promised more scanning of content, using machine learning to try to identify videos that have not been adequately flagged. Self-reporting and screening tools like these should improve things. But the former is open to abuse and the latter are based on probabilistic techniques that cannot guarantee 100 per cent compliance.
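To see why such screening cannot reach 100 per cent, consider a toy classifier that scores videos and flags those above a threshold. This is a minimal sketch under invented assumptions, not YouTube's method: wherever the threshold is set, some child-directed videos will score below it.

```python
# Toy illustration of why probabilistic screening cannot guarantee full
# compliance: at any operating point, some child-directed videos slip below
# the threshold (false negatives) or adult content lands above it.

def child_directed_score(title: str, tags: list[str]) -> float:
    """Crude keyword-based score in [0, 1]; a stand-in for a trained model."""
    signals = {"kids", "nursery", "cartoon", "toys", "abc song"}
    text = {title.lower(), *(t.lower() for t in tags)}
    hits = sum(any(s in item for item in text) for s in signals)
    return min(1.0, hits / 3)

THRESHOLD = 0.5  # arbitrary operating point

videos = [
    ("ABC Song for Toddlers", ["kids", "nursery"]),
    ("Unboxing the new GPU", ["hardware", "review"]),
    ("Surprise eggs!!", []),  # child-directed, but gives no keyword signal
]

for title, tags in videos:
    score = child_directed_score(title, tags)
    print(f"{title!r}: score={score:.2f}, flagged={score >= THRESHOLD}")
# 'Surprise eggs!!' slips through: a false negative no threshold choice removes.
```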

On the same day as the FTC fine, a potential Google failure of a completely different kind came to light. According to a complaint to data regulators in Ireland, the search company has been giving advertisers access to unique identifiers about its users that can be used to target ads more effectively — even though it said last year it would no longer do this. Google said it needed to see more details of the complaint before discussing the apparent failure.
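If the complaint is borne out, the underlying concern would be a structural one: any stable, pseudonymous identifier attached to ad requests lets a recipient join those requests into a profile over time. The sketch below is a generic illustration of that mechanism and makes no claim about Google's actual systems.

```python
# Generic illustration of how a stable identifier in ad requests lets a
# recipient accumulate a cross-site profile. Not Google's actual protocol;
# all names and values are invented.

from collections import defaultdict

profiles: dict[str, list[str]] = defaultdict(list)

def receive_bid_request(user_id: str, page_url: str) -> None:
    """An ad buyer logs each request; the shared ID is the join key."""
    profiles[user_id].append(page_url)

# The same identifier arrives with requests from unrelated sites...
receive_bid_request("id-7f3a", "news-site.example/politics")
receive_bid_request("id-7f3a", "health-site.example/symptoms")
receive_bid_request("id-7f3a", "shop.example/baby-products")

# ...so the buyer can reconstruct a browsing history the user never agreed
# to share with it.
print(profiles["id-7f3a"])
```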

Ahead of a full investigation, it is hard to pass judgment. But claims that personal data is being leaked without users' knowledge inevitably draw comparisons with previous failures. Facebook's Cambridge Analytica scandal, for instance, stemmed from a failure to adequately police the liberal data-sharing tools it had developed in a more permissive era. Bringing full transparency to how systems like these operate is overdue.

Tightening up the infrastructure is only part of the corrective, of course. Europe’s GDPR has helped to bring wider agreement on the key principles that should underpin the data economy. But in many cases these principles, which seek to enshrine greater user control, have yet to be turned into detailed rules that can be baked into systems capable of supporting a more robust online data regime.

Facebook’s call this week for wider discussion on the principle of data portability is a case in point. Giving users the power to transfer their personal information to another online service sounds like an important online freedom, and one that could stimulate greater competition. But, as Facebook points out, it can also be a slippery concept: the Cambridge Analytica scandal, for instance, stemmed from making user data more portable.
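At its simplest, portability means giving users a machine-readable export they can hand to a rival service. The following is a minimal sketch with invented field names and a hypothetical export_user_data function, not any platform's real format.

```python
# Minimal sketch of a portability export: a user's own records serialised to
# a machine-readable file another service could import. All names invented.

import json

def export_user_data(user: dict) -> str:
    """Serialise a user's own content and settings for transfer elsewhere."""
    return json.dumps(
        {
            "profile": {"name": user["name"], "bio": user["bio"]},
            "posts": user["posts"],
            "settings": user["settings"],
        },
        indent=2,
    )

alice = {
    "name": "Alice",
    "bio": "Example user",
    "posts": ["First post", "Second post"],
    "settings": {"language": "en"},
}

print(export_user_data(alice))
```

So long as the export covers only the user's own data, the idea is straightforward; the trouble starts when the data describes other people too.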

It is tempting to view Facebook’s argument as self-serving, designed to put the brakes on making personal information more mobile in order to protect its current dominance of social networking. The company questions, for instance, whether users should be free to export their “social graph” — their network of contacts — to other services, since the information in part “belongs” to those contacts themselves.
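The difficulty shows up even in miniature: any export of a user's social graph necessarily discloses identifiers belonging to people who never agreed to the transfer. A hypothetical sketch:

```python
# Toy illustration of the social-graph problem: exporting one user's contact
# list necessarily discloses other people's identities. Names are invented.

social_graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "dave"],
}

def export_social_graph(user: str) -> dict:
    """Export a user's edges; every edge names a third party."""
    return {
        "user": user,
        # These identifiers 'belong', at least in part, to the contacts
        # themselves; bob and carol never consented to this export.
        "contacts": social_graph[user],
    }

print(export_social_graph("alice"))  # {'user': 'alice', 'contacts': ['bob', 'carol']}
```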

But as the Electronic Frontier Foundation has pointed out, Facebook itself benefited greatly in its early days from being able to import entire contact lists from Google’s Gmail. The company then went on to block Twitter from tapping into its own social graph in the same way, hampering a potential competitor.

This inevitably prompts suspicions about Facebook's motivation now. But that doesn't mean there isn't a trade-off to be struck between increasing protections for users and giving them more online freedom. Debates like this are a sign that a more robust data infrastructure is under development. This time, at least, it should come with wider deliberation and greater transparency.

richard.waters@ft.com
