What’s ailing the right to be forgotten (and some thoughts on how to fix it)

The European Court of Justice’s recent “right to be forgotten” ruling is going through growing pains.  “A politician, a pedophile and a would-be killer are among the people who have already asked Google to remove links to information about their pasts.”  Add to that list former Merrill Lynch executive Stan O’Neal, who requested that Google hide links to an unflattering BBC News article about him.

[Screenshot: forget.me]

All told, Google “has removed tens of thousands of links—possibly more than 100,000—from its European search results,” encompassing removal requests from 91,000 individuals (apparently about 50% of all requests are granted).  The company has been pulled into discussions with EU regulators about its implementation of the rules, with one regulator opining that the current system “undermines the right to be forgotten.”

The list of questions EU officials recently sent Google suggests they are more or less in the dark about the way providers are applying the ECJ’s ruling.  Meanwhile, European companies like forget.me (pictured) are looking to reap a profit from the uncertainty surrounding the application of these new rules.  The quote at the end of the Times article sums up the current state of affairs:

“No one really knows what the criteria is,” he said, in reference to Google’s response to people’s online requests. “So far, we’re getting a lot of noes. It’s a complete no man’s land.”

What (if anything) went wrong? As I’ll argue* below, a major flaw in the current implementation is that it puts the initial adjudication of right to be forgotten decisions in the hands of search engine providers, rather than representatives of the public interest.  This process leads to a lack of transparency and potential conflicts of interest in implementing what may otherwise be sound policy.

The EU could address these problems by reforming the current procedures to limit search engine providers’ discretion in day-to-day right to be forgotten determinations.  Inspiration for such an alternative can be found in other areas of law regulating the conduct of third party service providers, including the procedures for takedown of copyright-infringing content under the DMCA and those governing law enforcement requests for online emails.

I’ll get into more detail about the current implementation of the right to be forgotten and some possible alternatives after the jump.

A question of institutional competence

The current implementation of the ECJ’s right to be forgotten decision puts the day-to-day interpretation of a set of imprecise rules pertaining to the exercise of very important freedoms—a process typically reserved for public regulatory bodies and (in the U.S.) administrative agencies—in the hands of a small number of Silicon Valley companies.

The European Court of Justice decision gives EU citizens the right to request the removal of search results from providers like Google, Yahoo, or Bing, provided the results are “inadequate, irrelevant or excessive in relation to the purposes” for which the results were created.  Under the current regime it’s up to the search engine providers to determine whether that standard is met.  The providers are also tasked with balancing the public’s interest in viewing the information against the individual interest in removing the results.

While these determinations are appealable to an EU supervisory body, in practice Google, Microsoft, and Yahoo employees will be on the front line of the decisionmaking process.  As Jeff Rosen anticipated in his New Republic piece “The Delete Squad,” this process shifts responsibility for a whole lot of decisionmaking about the future of free speech to a few companies—a fact that should trouble those who supported the “right to be forgotten” out of concern for corporate overreach.

This is work for policymakers, not corporations. While corporations can and should be good citizens, they are not equipped to adjudicate, on behalf of the public, big questions about the proper balance between privacy and speech.  They have neither the public transparency nor the institutional mission of government agencies; nor are they subject to the systems of political and institutional accountability we rely on to regulate government conduct.

Then there’s the issue of conflicts of interest.  The ECJ’s decision tasks search engine providers with applying a public interest test to determine whether information should be deleted.  But if you work for a search engine provider, chances are you already have some opinions about whether access to information serves the public interest.  Google’s mission is “to organize the world’s information and make it universally accessible and useful.”

It’s in Google’s culture—and in its profit interest—to believe that making more information accessible makes the world a better place.  There’s nothing wrong with that position, but if you’re a regulator looking for an impartial representative of European data subjects’ privacy rights you might start elsewhere.  (This is not a critique of Google, by the way – it’s a commentary on the procedures that force them to play philosopher kings against their own interests).

These conflicts of interest can affect both big and small decisions.  The task of implementing a law doesn’t just entail macro decisions about the meaning or application of the law; it also entails a large number of micro-decisions about format, process, and practical implementation, which in practice are often as important as the big ones. For instance, take a look at the online form Google created for processing right to be forgotten requests:

[Screenshot: Google’s right to be forgotten request form]

Even the form used to process right to be forgotten requests can have a dramatic effect on how the right is implemented.  For instance, how will Google protect against fake or fraudulent requests? (A competing business getting too much exposure on Google or Bing?  Why not apply to get its links taken down?)  It’s not hard to forge electronic proof of identification, particularly when the document can be almost anything (“a passport or … government issued ID is not required”) and you’re allowed to obscure the photo and any identification numbers.  (There’s also some irony in the fact that you have to show ID in order to be forgotten.)

The point is not that Google’s form is inadequate—I don’t know what a better alternative might look like—but rather that in choosing this particular implementation, Google isn’t just mechanically “applying the law.” Instead, the current right to be forgotten regime requires providers to make a large number of micro-policy determinations about what kind of proof of identification to require, what format explanations for a request must take, how narrowly to define the record to be deleted, and so on.
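To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names and defaults, not drawn from Google’s actual form) of an intake policy for removal requests. Each field is a micro-policy knob of the kind described above; nothing in the ECJ’s ruling dictates how any of them should be set.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical sketch: every field of this intake policy is a
# micro-policy choice. None of these names or defaults come from
# Google's actual form or from the ECJ ruling itself.

@dataclass
class IntakePolicy:
    require_government_id: bool = False   # the form says a government ID "is not required"
    allow_redacted_id: bool = True        # photos and ID numbers may be obscured
    require_per_url_explanation: bool = True
    max_urls_per_request: int | None = None  # how narrowly to define the record

@dataclass
class RemovalRequest:
    requester_name: str
    id_document: bytes | None
    id_is_redacted: bool
    urls: list[str] = field(default_factory=list)
    explanations: dict[str, str] = field(default_factory=dict)  # URL -> why it is "irrelevant," etc.

def passes_intake(req: RemovalRequest, policy: IntakePolicy) -> bool:
    """Check only the formal requirements. Under the current regime the
    substantive balancing test happens later, inside the provider."""
    if policy.require_government_id and req.id_document is None:
        return False
    if req.id_is_redacted and not policy.allow_redacted_id:
        return False
    if policy.max_urls_per_request is not None and len(req.urls) > policy.max_urls_per_request:
        return False
    if policy.require_per_url_explanation and set(req.urls) - set(req.explanations):
        return False
    return True
```

Whoever sets these defaults is making policy, however modest each individual choice looks.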

It’s fair to say that even if European regulators review providers’ application of standards like “relevant” or “excessive for the purpose,” most of the micro-decisions made by providers will never be reviewed. Those decisions will, however, have measurable effects on how the right to be forgotten actually affects European citizens’ privacy and expressive rights.

An alternative approach — shift the policymaking to the regulators

EU regulators are currently in the process of formulating additional guidelines for providers subject to takedown requests. Adding more meat to the ECJ’s ruling is a needed first step, but it may fall short of addressing the procedural flaws with the current regime, which puts day-to-day adjudication of right to be forgotten claims in the hands of providers rather than policymakers.

Instead, the EU’s best bet is to change the procedural framework governing right to be forgotten requests in a way that shifts the burden of practical policymaking away from the likes of Google, Yahoo, and Microsoft.  As I’ll discuss below, inspiration for such an alternative can be found in other areas of law governing third party service providers and their users, including DMCA (copyright) takedowns and procedures governing law enforcement requests for email content.**

The right to be forgotten regime governs the relationship between four sets of actors: (1) the content creator (say, BBC News), (2) the person the content is about who wants it taken down (say, Stan O’Neal), (3) the “third party” search engine provider who posts a link to content (say, Google), and (4) the members of the public searching for the content. The current implementation gives too much discretion to one party in this love quadrangle—the search engine provider.

This is by no means the only way to strike the balance. Take, for instance, the system set up for notice and takedown of copyright-infringing materials posted on a third-party service.  In contrast to the right to be forgotten regime, the standards under the DMCA strictly limit the discretion of third-party providers when it comes to determining whether particular materials infringe copyright. The provider’s role is instead merely to perform the takedown—the one thing it is uniquely suited to do.  The substantive and adjudicative decisions are driven and made by the interested parties and by the court system.

This is, of course, by design.  Providers such as YouTube would in theory benefit from valuable pirated content on their services, so the DMCA takes them out of the substantive decisionmaking process about whether that content is in fact infringing. Under DMCA rules, when a copyright holder wants something taken down from (say) YouTube, he or she must send a notice to the provider.  So long as the notice is compliant—that is, so long as it lists certain required information—YouTube must take the content down.

Crucially, YouTube does not adjudicate or analyze the takedown claim.  That burden is instead shifted to the content owner and the content user, as well as to the courts:

In order to ensure that copyright owners do not wrongly insist on the removal of materials that actually do not infringe their copyrights, the safe harbor provisions of the DMCA require service providers to notify the subscribers if their materials have been removed and to provide them with an opportunity to send a written notice to the service provider stating that the material has been wrongly removed. [512(g)] If a subscriber provides a proper “counter-notice” claiming that the material does not infringe copyrights, the service provider must then promptly notify the claiming party of the individual’s objection. [512(g)(2)] If the copyright owner does not bring a lawsuit in district court within 14 days, the service provider is then required to restore the material to its location on its network.

[Quoted from the chillingeffects.org FAQ.]
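To illustrate the division of labor, here is a minimal sketch of that notice and counter-notice flow as a simple state machine. The class and method names are my own invention; the 14-day window is the figure given in the quoted FAQ.

```python
from __future__ import annotations
from datetime import date, timedelta
from enum import Enum, auto

# Sketch of the section 512(g) flow described in the quoted FAQ.
# Names are hypothetical; only the sequence of steps comes from the quote.

class Status(Enum):
    LIVE = auto()
    REMOVED = auto()          # taken down on receipt of a compliant notice
    COUNTER_NOTICED = auto()  # subscriber claims the removal was wrongful
    RESTORED = auto()

class HostedItem:
    def __init__(self) -> None:
        self.status = Status.LIVE
        self.counter_notice_date: date | None = None

    def receive_compliant_notice(self) -> None:
        # The provider does not adjudicate the claim; if the notice is
        # formally compliant, it must take the content down.
        self.status = Status.REMOVED

    def receive_counter_notice(self, today: date) -> None:
        # The provider must also notify the claiming party (not modeled here).
        self.status = Status.COUNTER_NOTICED
        self.counter_notice_date = today

    def check_restoration(self, today: date, lawsuit_filed: bool) -> None:
        # If no suit is filed within the window, the material goes back up.
        if (self.status is Status.COUNTER_NOTICED
                and not lawsuit_filed
                and today - self.counter_notice_date >= timedelta(days=14)):
            self.status = Status.RESTORED
```

Note what is absent: at no point does the provider weigh the merits of the infringement claim. It reacts only to formally compliant filings and the passage of time.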

The DMCA standard tends toward over-removal of content.  If the EU prefers a system that tilts the other way, it can look to models governing law enforcement requests for email content from webmail services like Yahoo Mail and Gmail.  When a law enforcement entity wants a Yahoo Mail user’s email, its first step (depending on the jurisdiction) is to ask a judge for a court order and to serve Yahoo with that order.

At this point Yahoo has a binary decision—it either complies with the court order or challenges it in court.  Crucially, in this regime it’s the judge, rather than the provider, who first determines whether the evidence presented meets (for example) the Fourth Amendment’s probable cause standard. It’s the judge, rather than a Google or Yahoo lawyer, who’s the first line of defense in preserving the right to be secure against unreasonable searches and seizures. Putting the court at the front of the process has the added benefit of reducing the number of improper requests that reach the providers.

The EU would do well to adopt a similar model for its right to be forgotten regime.  In an ideal system, all right to be forgotten requests would be adjudicated, in the first instance, by EU data protection authorities.  Those authorities would set up processes to manage these requests, perhaps including an adversarial process and appeals to handle close cases.   Service providers would of course have to play a clerical role in the takedown process, as they do in the DMCA context.  And providers would probably have to retain their right to appeal decisions under certain circumstances, as they do with court orders for user emails.
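Schematically, the proposed division of labor might look like this (again a sketch with hypothetical names, not a description of any existing system): the data protection authority owns the substantive ruling, and the provider merely executes it, retaining a narrow right of appeal.

```python
from __future__ import annotations
from enum import Enum, auto

class Ruling(Enum):
    GRANTED = auto()
    DENIED = auto()

class DataProtectionAuthority:
    """Hypothetical EU regulator. The substantive balancing test
    (public interest vs. individual privacy) lives here, possibly
    with an adversarial process and appeals for close cases."""

    def adjudicate(self, requester: str, url: str) -> Ruling:
        # Placeholder: a real authority would weigh "relevance,"
        # "excessiveness," and the public's interest in the link.
        raise NotImplementedError

class SearchProvider:
    """The provider plays a clerical role, as in the DMCA context:
    it executes rulings rather than making them."""

    def __init__(self, index: set[str]):
        self.index = index

    def execute(self, url: str, ruling: Ruling) -> None:
        # No balancing test here; just takedown on a granted request.
        if ruling is Ruling.GRANTED:
            self.index.discard(url)

    def appeal(self, url: str, ruling: Ruling) -> None:
        # Providers would retain a right to challenge rulings in some
        # circumstances, as with court orders for user emails.
        ...
```

The design choice mirrors the email-request model: the substantive judgment happens before the provider is ever involved.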

As with the DMCA and law enforcement procedures described above, this change in process would more firmly place responsibility for important questions of liberty and privacy in the hands of those who are accountable to the public.  It would be regulators, rather than private companies, who are tasked with determining, on a day-to-day basis, what is in the public interest and what is not, what is relevant and what is outdated.  And it would be they, rather than Google or Yahoo, who bear responsibility for balancing the right to be forgotten against the right of their citizens to know.

To be sure, the existing right to be forgotten regime does have one advantage for EU citizens: since it transfers most of the administrative burden to service providers, it’s mostly free for EU taxpayers.  But those savings, it turns out, have costs of their own.

[*As always, the views expressed here are my own academic musings and do not necessarily reflect the views of my employer or anyone else]

[**Note that this post isn’t a wholesale endorsement of either of these legal regimes, and indeed much has been written about flaws in the procedures governing the DMCA and law enforcement requests for digital content.]


5 Responses

  1. Brett Bellmore says:

    What’s ailing it is that the whole idea is illegitimate. A right to have OTHER people forget what they know about you, and destroy any records they have on you? Absurd.

    • Joe says:

      Absurd it might be, but as applied here it is not about erasing people’s minds or having them destroy records as a general matter. It involves search engines: “no longer be linked to his name by a list of results displayed following a search made on the basis of his name,” to quote the ruling. This would not necessarily involve trying to delete any record of the person online, and if it did, that would be both rather absurd and very impractical. The ruling also doesn’t make even the right as applied absolute; it provides exceptions. When every intimate matter can be viewed by the world at the touch of a button, something that would truly have seemed absurd not that long ago, some desire to limit such access does not seem totally absurd to me. Though under U.S. law, it would, as applied to public data, seem patently unconstitutional if mandatory.

  2. Frank says:

    Excellent post. I believe institutional precedents in the US suggest a model for RtbF; see the discussion of BBB and FTC here:
    https://www.law.northwestern.edu/lawreview/v104/n1/105/LR104n1Pasquale.pdf
