Dr. Joss Wright is a Research Fellow at the Oxford Internet Institute (OII), where his current research focuses on analysing Internet censorship and data anonymization. Prior to the OII, Dr. Wright worked at the University of Siegen in Germany examining security and privacy issues in cloud computing. He has a PhD in Computer Science from the University of York.


There is a lot of debate about privacy: where it came from, where it is going, and what it means for society. Undoubtedly, privacy is under threat and will never be the same again. Many people point out that privacy did not really exist in law internationally until quite recently. The first really significant milestone was an 1890 article in the United States by Samuel Warren and Louis Brandeis, who defined privacy as the right to be let alone. However, privacy existed long before this; it was just, and this is slightly controversial, more intrinsic.

We worked on a human scale back then. You said something to someone, and they could re-tell it; a rumour could spread, a story could be told, but it was on a human scale. You would forget, and everyone knew it would change in the telling. Then technology brought about the erosion of a right that had always been very intuitive. Even now, when you ask somebody to define privacy, it's very tricky; but if you describe a scenario and ask, 'has your privacy been violated?', people can instantly say 'yes' or 'no'.

We must also look at the tech companies. Google, for example, makes over 95 percent of its profit from targeted advertising. We are now working on a scale we were not built to predict. As humans, we can't make good privacy decisions; we get short-term, easy rewards like access to Facebook or Gmail. The privacy risks that come with that, the risks of our data being used against us, or being used in a way that is not within our control, are a long-term, probabilistic concern.

I think privacy is something we can still preserve, albeit not to the same extent we used to be able to. I think we should try, not in an attempt to freeze a status quo of 'what is private now should always be private', but to guide us towards a society we want to live in, so that we do not run the risk of all data being shared with everyone, the kind of transparent society David Brin writes about.

We need to build systems that do that, and we need legal backing to impose strong sanctions on companies that do not treat data as they are supposed to, as the European Union is doing with the proposed General Data Protection Regulation, which will hopefully come into force in 2016. These strong sanctions against companies will go a long way.
