Do Physical Rights Translate to Digital Rights?

From facial recognition to data privacy challenges, digital rights have become a subject of intense debate in the contemporary digital world. Digital rights essentially encompass the human rights laid out in the United Nations’ Universal Declaration of Human Rights, applied in the context of the online world. In theory, the protection is there, but in practice, the speed at which technologies advance leaves little room to plan a response to infringements of these rights.

We know that the protection of digital rights is crucial in this day and age. While technology has the potential to be a force for good, helping to turn many of society’s development goals into reality, it can also threaten privacy and fuel inequality. Protecting personal data, for example, is one of the most urgent issues in contemporary digital policy. Every day, companies and governments collect and use data well beyond the context that makes the collection necessary. Arguably, the law is struggling to keep pace with the speed at which new technologies arise.

The following graph (based on data from the Privacy Rights Clearinghouse) shows the increase in the number of records exposed in data breaches involving private information in the USA:

The graph shows how the spread of technology and the growing collection of data have been accompanied by serious data privacy breaches over the last few years. In 2016, we saw one of the largest hacks of user data, with one billion people affected by the attack on Yahoo. Although data breaches are a serious concern, they are certainly not the only one in this contemporary age.
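For readers who want to explore the numbers themselves, a chart like the one above could be reproduced from the Clearinghouse’s published breach dataset along the lines sketched below. The yearly figures in the code are placeholders for illustration only, not actual Privacy Rights Clearinghouse numbers.

```python
import matplotlib.pyplot as plt

# Illustrative sketch only: the values below are placeholders,
# NOT actual Privacy Rights Clearinghouse figures.
years = [2010, 2012, 2014, 2016, 2018]
records_millions = [15, 40, 120, 1100, 450]  # hypothetical totals

plt.bar(years, records_millions)
plt.xlabel("Year")
plt.ylabel("Records breached (millions)")
plt.title("Records exposed in US data breaches (illustrative)")
plt.tight_layout()
plt.show()
```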

Another example is DeepNude, an application that allowed users to virtually “undress” women using artificial intelligence; its creators have since shut it down. Interestingly, the team’s message announcing the shutdown was “the world is not yet ready for DeepNude,” as if expecting that the use of such a tool will become acceptable in the future.

We must consider what these digital rights mean in light of the fact that the majority of this digital power is held by the major players known as GAFAM (Google, Apple, Facebook, Amazon, Microsoft). These companies, which already shape the internet and digital technologies, are quickly amassing even more power with the rise of AI.

Last year, Facebook was fined $5 billion for infringing privacy rights. Although this was a significant first step towards some type of regulation, there are also arguments that the penalty fell short of what it was intended to achieve. Regulators still rely on pre-existing laws to set an adequate penalty, and in this case, the law is simply falling behind. A few months later, the tech giant acquired a startup developing devices that can pick up electrical signals from the brain and transmit them to a computer. In this case, privacy might be the least of our concerns. How will the law respond if such technology is misused?

We are growing accustomed to digital infringements, and some even argue that we no longer expect privacy. Imagine that while you are at work, someone enters your home and looks around. They are quickly able to draw conclusions about your approximate age, wealth, and family size. What should the response to this action be? Undoubtedly, it would be considered an invasion of privacy. While the answer might seem straightforward, consider this: would it change your response if the person entering your home was actually doing so to better assess your needs and provide you with tailored products and recommendations? Probably not. It would still be an unlawful trespass. Why, then, is it acceptable when companies track your online movements for the same reasons?

In Catt v the United Kingdom (European Court of Human Rights, Application No 43514/15, 24 January 2019), the applicant, Mr Catt, requested that the police release any information they held about him under the UK’s Data Protection Act 1998. The Court held that, although the collection of personal data to prevent crime and disorder is lawful, the retention of his data without adequate scheduled reviews was disproportionate and unnecessary.

We have to be careful with exceptions. As I write this post, we are living through the COVID-19 pandemic. While in the health sector, for instance, AI-enabled frontier technologies are helping to save lives, other technologies now being deployed arguably would not be accepted by most of us under different circumstances. In the battle against the pandemic, several governments are deploying new surveillance tools. In China, these tools enable the government to track people’s movements and identify whom they have been in contact with. In Israel, the same technologies usually used to fight terrorism have been deployed to fight the virus. As Yuval Noah Harari puts it:

You might argue that there is nothing new about all this. In recent years both governments and corporations have been using ever more sophisticated technologies to track, monitor, and manipulate people. Yet if we are not careful, the epidemic might nevertheless mark an important watershed in the history of surveillance. Not only because it might normalise the deployment of mass surveillance tools in countries that have so far rejected them, but even more so because it signifies a dramatic transition from “over the skin” to “under the skin” surveillance.

There are alternatives to the above scenario. In Europe, companies have had to adapt to strict privacy laws. The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is developing an approach to contact tracing that maintains privacy. The solution enables the tracking of infection chains across national borders without using any personally identifying information.
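To make the idea concrete, here is a minimal sketch of how decentralised privacy-preserving proximity tracing can work in principle. This is a simplified illustration in the spirit of protocols such as DP-3T, not PEPP-PT’s actual specification; all names and parameters are hypothetical.

```python
import hashlib
import secrets

# Simplified, illustrative sketch of decentralised privacy-preserving
# proximity tracing. NOT the PEPP-PT specification; names and
# parameters here are hypothetical.

def daily_key() -> bytes:
    """Each device generates a fresh random secret key every day."""
    return secrets.token_bytes(32)

def ephemeral_ids(key: bytes, n: int = 96) -> list:
    """Derive short-lived pseudonymous IDs from the daily key.
    These IDs are broadcast over Bluetooth; observers cannot link
    them back to a person without the key."""
    return [hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:16]
            for i in range(n)]

# Phone A goes about its day, broadcasting rotating IDs.
key_a = daily_key()
ids_a = ephemeral_ids(key_a)

# Phone B stores the IDs it has heard nearby (no names, no locations).
heard_by_b = set(ids_a[10:20])

# If A later tests positive, A publishes only its daily key.
published_keys = [key_a]

# B re-derives IDs from the published keys locally and checks for a match.
exposed = any(eid in heard_by_b
              for k in published_keys
              for eid in ephemeral_ids(k))
print("Possible exposure:", exposed)  # True
```

The key design choice is that the matching happens entirely on the user’s device: no central server ever learns who met whom, only which anonymous daily keys belong to people who tested positive.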

We know that, when used appropriately, these technologies, specifically AI, can be a force for good. In the current coronavirus pandemic, for example, some social media platforms are turning to AI to tackle the problem of fake news. In this context, AI can detect the spread of false information that has the potential to cause harm, and flag that content for removal.
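In essence, such systems score each post against a model trained on labelled examples and flag anything above a risk threshold for human review. The sketch below illustrates the principle with a deliberately tiny text classifier; the training examples, model choice, and threshold are purely illustrative, and production systems are far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Purely illustrative training data; a real system would use large
# labelled corpora and far richer models.
posts = [
    "Drinking bleach cures the virus, doctors confirm",
    "Health authorities publish updated hygiene guidance",
    "Secret cure suppressed by governments, share before deleted",
    "New study describes transmission rates in detail",
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = legitimate

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

def flag_for_review(text: str, threshold: float = 0.7) -> bool:
    """Flag content whose predicted misinformation probability
    exceeds the threshold; a human moderator makes the final call."""
    prob = model.predict_proba(vectoriser.transform([text]))[0][1]
    return prob > threshold

print(flag_for_review("Miracle cure they don't want you to know about"))
```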

How we manage these challenges and ensure that these technologies are always used to defend human rights is the subject of much discussion. A good place to start is by recognising our failures so far. It has become clear that the traditional mechanisms for assuring our digital rights have failed us too often. Recent scandals, such as the one involving Cambridge Analytica, have prompted us to question these mechanisms and to ask new questions.

In reality, most of us spend the better part of our days online; the internet has become an extension of our physical world. The rise of the Internet of Things also means that the distinction between online and offline will soon fade away. Our homes, our fridges, and our cars will all be interlinked and connected. If we are living for the most part in the digital world, and if all our possessions are also connected digitally, it only makes sense that we have the same rights, and the same protection, in both worlds.

Perhaps we need to move beyond contractual ideas of privacy to solve the deeper issues associated with handling big data. Ideas such as creating a new legal category that treats digital companies as information fiduciaries, with duties of care, confidentiality, and loyalty toward their end users, are worth investigating.

Our rights to privacy, be it in the digital or the physical world, should be consistent. Digital rights need to mirror our physical rights. We need to develop an awareness of what these rights are, and expect nothing less than the assurance that they will be upheld. So next time you are talking with someone on a videoconference, you can feel safe knowing that they won’t be forcefully undressing you with AI.


Contributors

Dr Evan Shellshear, Head of Analytics at Biarri, a world-leading mathematical and predictive modelling company, is an expert in artificial intelligence with a PhD in Game Theory from the Nobel Prize-winning University of Bielefeld in Germany. He has many years of international experience in the development and design of AI tools for a variety of industries, from manufacturing to retail. He is also the author of a number of books, including the Amazon best-seller Innovation Tools.

Kelly Forbes, Co-Founder of AI Asia Pacific Institute.