The Privacy Law Scholars’ Conference, one of the most important privacy conferences in the world, takes place this week in Berkeley, California. Privacy law scholars from many different countries are gathering to discuss what is becoming an increasingly significant subject all the time.
There are huge issues at stake, from government surveillance to corporate data gathering, from the internet of things to the right to be forgotten. My own contribution arose from the extensive discussions of the right to be forgotten that followed the Google Spain ruling in May 2014, which effectively grants people the right to have old, irrelevant information ‘delisted’ from searches made under their name. It attempts to bring some of these different issues together by taking a look at the internet shorn of some of the illusions that we have about it.
We have many such illusions – and we seem able to hold beliefs that are, to all intents and purposes, contradictory.
We want some people to be able to find out everything about us – but we want privacy from others and see surveillance as deeply intrusive. We think everything on the net lasts forever, yet at the same time we worry about everything being forgotten. We don’t trust anything we read on the internet, yet we treat it as a perfect historical archive that should never be tampered with – not even in the way provided for by the Google Spain ruling. We know that the internet is always changing, but even so we treat it as something that will remain in its current form forever.
We’re great supporters of freedom of speech – think #JeSuisCharlie – but we want cyber-bullies and ‘trolls’ to be found and punished severely. We think anonymity abounds but at the same time that the security services, Facebook and Google know everything there is to know about us.
Google itself is a paradox: at times we treat it as a philanthropic indexer of the internet and champion of freedom of expression, at other times as an evil mega-corporation driven only by profit or trying to control the world and indeed us. We expect it to provide imaginative, innovative and engaging products and services – and we expect it to do so for nothing, and without invading our privacy or gathering our data. Google has its own contradictions too: at times it likes to act as a ‘speaker’ claiming First Amendment protection for freedom of speech, at other times as an apparently neutral indexer, its algorithms organic and generated by the internet and its users themselves, and with no real responsibility for what happens through its systems.
As I argue in my paper, these contradictions and paradoxes are not the result of misconceptions – things that can be resolved if only we find the right approach, the right legal and technical tools, the right way to look at things. Rather, they are just how things are: a dynamic but creative and contradictory mess. All these views, all these perspectives, have something behind them. None of them is a ‘lie’ – but none is the whole truth either. The ‘right to be forgotten’ is a prime example.
Enabling someone to have an old, irrelevant story not appear on search results for their name isn’t just about making sure that old story is not seen – but is about helping other, newer, more relevant stories to get seen. Removing the old stories from the top of the search results makes others rise to the top, and stories that would otherwise only appear on the never-noticed later pages of search results appear on the first page. It’s not just about ‘hiding’ the ‘wrong’ stories, but about helping the ‘right’ stories to be found. It might, in practice, help rather than hinder freedom of expression.
Caught in the middle
What it also does is to start to redress what is, ultimately, the key issue in the internet right now: where the power lies. People are caught between two huge powers: on one side are the corporations who effectively run most of the services that people use on the internet, on the other the governments who surveil, censor, prosecute and control. Both the corporates and the governments can seem so huge and so powerful as to be beyond challenge – but they’re not, and the right to be forgotten is one example of how these challenges can work.
It provides a small tool for individuals – and only individuals, for corporations and governments cannot use it – to have an influence over decisions made about them by and on the internet. It is crude and unrefined, but in a messy, complex and dynamic internet it should be welcomed as part of a way forward rather than seen as a threat to people or to freedom of speech. It is a threat only to the power of the corporate algorithms that dominate the internet – and those corporate algorithms need to be challenged.
Once we see through the illusions and contradictions of the internet, it is easier to see where and how laws can and cannot work, and when and how they should be challenged. The application of this is far broader than the right to be forgotten. It could (but doesn’t) help governments to understand the futility and counterproductive nature of much of their approach to surveillance and encryption, and to the issues surrounding ‘trolling’. Sadly, right now, the illusions are rarely seen through, and surface-level analyses bring about poor law and poor policy. That is something we should very much try to avoid.
Paul Bernal (@PaulbernalUK) is a lecturer in the UEA Law School, specifically in the fields of Information Technology, Intellectual Property and Media Law. His research relates most directly to human rights and the internet, and in particular privacy rights.