
Our thoughts on the EU’s “Right to be Forgotten” ruling

This post was originally published on May 16, 2014.

On Tuesday, the European Court of Justice decided that Europeans have what it calls the “right to be forgotten,” meaning that companies like Google “can be made to remove irrelevant or excessive personal information from search engine results.” The court did not specify how “irrelevant” or “excessive” would be judged.

While we value the individual’s right to privacy, we find that this ruling undermines freedom of speech. The Internet was built on the free and open exchange of information; this ruling, for us, is a step backwards.

Unfortunately, since the ruling was made by the EU’s highest court, Google can’t appeal. According to the court:

An internet search engine operator is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.

Thus, if, following a search made on the basis of a person’s name, the list of results displays a link to a web page which contains information on the person in question, that data subject may approach the operator directly and, where the operator does not grant his request, bring the matter before the competent authorities in order to obtain, under certain conditions, the removal of that link from the list of results.

(You can read the press release of the ruling here.)

The ruling stems from the case of Mario Costeja Gonzalez, a Spanish lawyer who Googled himself only to find links to a 1998 legal notice in a Spanish newspaper regarding an auction of his home to settle his debts. He complained to Spain’s privacy agency in 2009, arguing that the links should be removed because the debt had long since been repaid. Google challenged the order, arguing that it shouldn’t have to censor material that was already legally published on a Spanish news site. And thus the case was brought before Europe’s highest court.

What does the ruling mean?

The ruling means that Google is responsible for content that it links to, and may have to purge its search results of links to items in the public domain upon the request of an individual—for example, links to an article in a newspaper archive, a blog post, a comment left on social media, etc. Note that this applies to “ordinary” citizens, and not public figures like politicians or movie stars.

As it is, Google already encourages users who want information removed from Google to contact the webmaster of the page and ask them to delete the content in question altogether (see Google’s guide on how to remove information from Google).

The onus is now on Google to judge and remove the links from its European sites, and not on the original publication to delete the offending information.

According to the BBC, Google has said that “forcing it to remove data amounts to censorship.” And in response to Tuesday’s ruling, Google issued the following statement:

This is a disappointing ruling for search engines and online publishers in general. We now need to take time to analyze the implications.

Only a few days after the ruling, the BBC reports, Google is already fielding scores of requests, among them:

An ex-politician seeking re-election has asked to have links to an article about his behaviour in office removed.

A man convicted of possessing child abuse images has requested that links to pages about his conviction be wiped.

And a doctor wants negative reviews from patients removed from the results.

Implications for free speech

Indeed, while on the surface this looks like a win for privacy advocates—especially as netizens become increasingly vigilant in the wake of Edward Snowden’s NSA revelations—the ruling could, in the words of the New York Times, “undermine press freedoms and free speech.”

In fact, Fred H. Cate, director of the Center for Applied Cybersecurity Research at Indiana University, calls this a nightmare situation not only for search engines but also for regular Internet users:

With today’s decision, Europe has not merely decided that one party can be held liable for facilitating access to material created by another, but that this is true even where the information is true, concerns a matter of public interest, and does not otherwise violate any national law.

Wikipedia founder Jimmy Wales outright condemns the ruling, tweeting that “thinking of this EU censorship decision as being about Google may cloud judgment – it’s about journalism and free flow of information.”

Unintended consequences?

Beyond worries about censorship and obscuring access to public information, Slate’s Joshua Keating cautions that “the ironic result of the decision will be to empower governments and corporations at the expense of individual users.” As Keating notes, “the notion that Google could now have access to data that it is preventing the rest of us from seeing isn’t exactly comforting.”

This means that Google would potentially have a database of everyone who made a request, and all the objectionable links in question. What could Google do with that data? What could anyone do if they were able to obtain it?

Also, this ruling applies only inside Europe: if you ask Google to remove a link about you and it complies, the link would still exist outside of Europe. Europeans could simply use a different search engine or access Google via a VPN, and the links to unflattering articles about themselves would still be there.

Implementation nightmare

From an implementation standpoint, we have to wonder: is Google really expected to weigh millions of requests?

Won’t Google and other Internet companies want to limit their liability by approving most requests to remove information? The result would be the mass de-indexing of useful, legally published information.

Another issue is that the court has provided no practical guidance as to what counts as “irrelevant” or “excessive.” By what metric will this be judged?

Also, it probably won’t end with Google: the Computer & Communications Industry Association (CCIA) warns that this ruling could affect “all companies providing links on the Internet.” Imagine if this ruling extended to Wikipedia, Facebook, Twitter, online periodicals, and archive.org’s Wayback Machine. Could it also extend to newspaper archive websites with links to old issues and articles?

What would the Internet look like then?

Right to Reply: a better solution?

In her piece in Wired, Julia Powles states that “[the] remedy that the court discusses—erasure of lawful material online—is undesirable and problematic in all sorts of ways for freedom of speech and press.” However, “there are great opportunities available for creative, rapid, and adaptable technical solutions.”

Under her proposed solution, “Right to Reply,” individuals are given a mechanism to comment on or respond to content surfaced by search engines. Instead of simply removing links to “inaccurate” or “outdated” information, Right to Reply lets people correct or update that information. You can see more at righttoreply.eu, brought to you by Powles and Jat Singh over at Cambridge Code.

Is this the beginning of the end of the Internet as we know it?

The Internet is what it is today because of the free and open exchange of information.

Search engines play a central role in how we use the Internet and access information. Asking search engines to remove links puts the onus on the search engine—the messenger—and not the original publication. It will produce a version of the Internet that’s been airbrushed to obscure information in the public domain, resulting in a less informed citizenry.

While we value the individual’s right to privacy, we worry about the implications for free speech.

What are your thoughts on this? Let us know in the comments!