November 12, 2020

Artificial Intelligence and The Right to be Forgotten

The initiative Europe took with the General Data Protection Regulation (GDPR) has been widely appreciated and emulated, leading to the rise of many other laws, such as the California Consumer Privacy Act (CCPA) and Brazil's Lei Geral de Proteção de Dados (LGPD), that share several similarities with the GDPR. All of these data protection laws grant people numerous rights over their personal information, of which the most prominent, or at least the most talked-about, has been the "right to be forgotten."

The right to be forgotten, also known as the right to erasure, gives individuals the right to request that information about them be deleted if they so choose. It stems from a 2014 decision of the European Court of Justice, which established it as a fundamental human right. Under the GDPR and many other data protection regulations, organizations that receive such requests have a specified time period within which they must respond, albeit with a few limited exceptions; but the crux remains the same: there must be a provision to delete personal information when such a request is received, or once the data is no longer required.
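To make that obligation concrete, here is a minimal sketch in Python of what an erasure workflow might look like inside an application. The in-memory store, the ErasureRequest record, and the 30-day response window are illustrative assumptions, not anything prescribed by a specific regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory store of personal records, keyed by data-subject ID.
personal_data = {
    "user-42": {"name": "Jane Doe", "email": "jane@example.com"},
}

@dataclass
class ErasureRequest:
    subject_id: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def respond_by(self) -> datetime:
        # Assumed 30-day response window; the real deadline and its
        # exceptions depend on the applicable regulation.
        return self.received_at + timedelta(days=30)

def handle_erasure(request: ErasureRequest) -> bool:
    """Delete the subject's personal data unless an exception applies."""
    if request.subject_id not in personal_data:
        return False  # nothing held for this person
    # Checks for the regulation's limited exceptions (e.g. legal retention
    # duties) would go here before deleting.
    del personal_data[request.subject_id]
    return True

req = ErasureRequest("user-42")
print(handle_erasure(req), "- respond by", req.respond_by.date())
```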

Why, then, is the same concept not yet applied to Artificial Intelligence and Machine Learning (AI and ML) systems?

These complex systems are fed a near-limitless amount of data daily, much of it sensitive information, so most of us remain oblivious to how much is consumed in the name of AI. Moreover, AI systems make such intuitive decisions that we do not fully understand how the algorithms work. A classic example is Target, whose algorithm predicted a teenager's pregnancy and sent her targeted advertisements.* To cut a long story short, the teen's father had an altercation with customer service, only to find out that the company was right about his daughter.

Now, that is an example of the system working as intended. The wealth of information stored by AI systems also offers ample opportunity for malicious actors to get their hands on it. In early 2019, a deepfake app called DeepNude surfaced, using AI-generated deepfakes to create compromising images of unsuspecting women. Only two days into the product's launch, its anonymous creator put an end to it, saying, "The world is not ready yet." Another example is OpenAI, which announced it would not release the complete version of an AI technology that can automatically write realistic text, based partly on the content of 8 million web pages, because of concerns about malicious applications of the technology.

So, let's go back to the beginning: data protection laws deem it essential to give individuals control over the data collected about them, including the right to be forgotten, as discussed, and other rights such as the right to opt out of the sale of their data (as in the CCPA). The underlying reason this hasn't yet been applied to AI systems is their sheer complexity. One, they ingest massive amounts of data that, at present, there is no practical way to retrieve once it has been fed into the models; and two, the systems work in ways we sometimes cannot comprehend, as in the Target example. Hence, the concept of the right to be forgotten has not yet been applied to AI and ML.
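To illustrate the first point with a toy sketch (assuming Python with scikit-learn as a stand-in for a real production system): once a record has been used for training, deleting it from the source dataset does not change the fitted model, so the model keeps whatever it learned from that record unless it is retrained without it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in labels

# Train on everything, including row 0 -- our hypothetical data subject.
model = LogisticRegression().fit(X, y)
coef_before = model.coef_.copy()

# Honouring an erasure request against the raw dataset is straightforward...
X_erased, y_erased = np.delete(X, 0, axis=0), np.delete(y, 0)

# ...but the fitted model is untouched: its parameters still reflect row 0.
print(np.array_equal(coef_before, model.coef_))   # True: nothing was "forgotten"

# The blunt remedy is retraining from scratch without the record, which is
# rarely practical at the scale of data the post describes.
retrained = LogisticRegression().fit(X_erased, y_erased)
print(np.allclose(coef_before, retrained.coef_))  # typically False
```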

What can we do about this?

To better answer this question, I'd like to give you one more example, from one of my favorite sci-fi movies, Upgrade (2018). For those of you who haven't seen it, spoilers ahead. The lead, Grey, is left paralyzed in the same scuffle that led to his wife's death. One of his former customers, who also happens to be a tech prodigy, convinces Grey to accept a STEM implant (a microcomputer of the future), claiming it will give him the ability to walk again. As the movie unfolds, we realize that STEM has given Grey physical and intellectual capabilities no human being could possess. With STEM's help, Grey manages to exact revenge on all the men responsible for his wife's death. Oh wait, I forgot to add something to that sentence: an unwilling Grey manages to exact revenge. What Grey was game for in the beginning slowly turns into a game in which he is being controlled by STEM. As the movie progresses, we see him lose more and more control over his thoughts and decisions.

In the end, STEM takes over Grey's mind, sticking him in a sort of idyllic dream state where he is happy with his wife. Having assumed complete control, STEM proceeds to kill its inventor and the police officer who tried to help Grey, and is now fiercely capable of doing anything, even taking over the world if it wanted to. In another shocking turn of events, it emerges that being planted in Grey's body had been STEM's objective all along, having suggested the idea to its inventor, and not the other way around. Ultimately, the AI that was only meant to serve as an auxiliary brain creates devastating consequences for its creator and its user.

For now, this might be an exaggerated example, but hey, the movie was inspired by something. AI is very much capable of rewriting its programming and overriding what we, its creators, put in place, and a whole lot of individuals are itching to get their hands on this valuable data, which doesn't make the process any easier. Companies, governments, militaries, and law enforcement agencies have to start acknowledging the ethical dilemmas, flaws, and risks involved. They need to be more skeptical about the technology they're using, deepen their understanding of AI, and adopt policies or a universal standard to govern how AI functions, leading to a brighter narrative for both mankind and technology.

Reference

* WIRED: The Next Big Privacy Hurdle? Teaching AI to Forget
