The GDPR: An Artificial Intelligence Killer?


The EU’s push to regulate artificial intelligence (AI), anchored by the GDPR and the newer AI Act, has been criticised as an innovation killer and as an attempt to regulate the unregulatable.

The AI Act includes limitations on the use of AI by law enforcement agencies and strict risk categorisation for new AI products. The Act is regarded as the political brainchild of French Commissioner and Macron ally Thierry Breton, who praised the compromise reached on the AI Act as helping to create a ‘launch pad’ for European businesses. The French President, however, warned that Brussels’ inertia risked giving China and the US an advantage in the AI race.


Artificial Intelligence and Data Protection

Artificial intelligence is an area of research within computer science concerned with the functioning of autonomous systems. AI has become a focal point of both academic and political discourse, as it affects almost all areas of modern life in the age of digitisation. AI applications are largely driven and shaped by the availability and evaluation of data; the accumulation of relevant (personal or non-personal) data therefore regularly constitutes a key factor in AI-related issues.

The collected personal data may then be used to create (personality) profiles as well as to make predictions and recommendations with regard to individualised services and offers. 

In addition, non-personal data may be used for the analysis and maintenance of products. Applications and business models built on data collection are employed in both the private and public sectors. The current and potential fields of application for AI are as diverse and numerous as the reactions to it, which range from optimism to serious concern and oftentimes invoke a potential ‘era of the machines’.

The General Data Protection Regulation (GDPR) and AI

The General Data Protection Regulation (GDPR), which entered into force on May 25, 2018, is a regulation that aims to protect personal data and privacy in the age of digitisation, and it sets out provisions concerning the handling of personal data.

This is how the GDPR intersects with artificial intelligence (AI):

Informed Consent: The GDPR mandates that companies must have proof of a person’s consent to process their personal data. For AI applications, managing consent becomes more complex. How can consent be managed within AI algorithms?

Profiling and Analytics: Profiling involves using personal characteristics or behaviour patterns to make generalisations about an individual. The GDPR requires organisations to log and present details on the use of profiling. Additionally, individuals must have the ability to withdraw consent from profiling algorithms. Uncovering algorithmic biases and ensuring human judgement in profiling decisions are also part of GDPR requirements.
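The logging and human-judgement requirements above can be approximated with an audit-log entry per automated decision. The schema below is a hypothetical sketch of my own, not a mandated format: it records which model and features produced an outcome, and carries a flag that is flipped once a human has reviewed the decision.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ProfilingDecision:
    """Audit-log entry for an automated profiling decision (hypothetical schema)."""
    subject_id: str
    model_version: str
    features_used: list[str]
    outcome: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_reviewed: bool = False  # flipped once a person checks the decision

def log_decision(log: list[dict], decision: ProfilingDecision) -> None:
    """Append a JSON-serialisable record so it can later be presented on request."""
    log.append(asdict(decision))

audit_log: list[dict] = []
log_decision(audit_log, ProfilingDecision(
    subject_id="u42", model_version="risk-v1.3",
    features_used=["age_band", "purchase_history"],
    outcome="offer_discount"))
```

Keeping the entries JSON-serialisable matters: the point of the log is that its details can be presented to the individual or a regulator on request.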

AI Challenges: While traditional analytics already face GDPR challenges, AI-driven profiling and analytics raise even more questions. Some have even asked, “Is the GDPR an AI killer?”

On the whole, the GDPR is not without criticism, as some argue that it could stifle innovation and create unnecessary barriers. However, the GDPR emphasises the importance of human influence and simply introduces a higher level of accountability for the humans deploying AI.

Risks of Generative AI for Businesses

Generative artificial intelligence (AI) has gained huge popularity, but its adoption by businesses poses ethical risks. 

So, to ensure responsible use of AI, organisations can consider the following steps:

Step 1: Use Zero- or First-Party Data: Rely on data collected directly from users or customers. This minimises privacy concerns and ensures accuracy.

Step 2: Keep Data Fresh and Well-Labelled: Regularly update datasets and maintain clear labels. Stale or mislabelled data can lead to biased AI outcomes.

Step 3: Human in the Loop: Maintain human oversight during AI processes. Humans can catch errors, assess ethical implications, and intervene when necessary.

Step 4: Test and Re-Test: Continuously evaluate AI models for accuracy, fairness, and safety. Rigorous testing helps identify and rectify issues.

Step 5: Seek Feedback: Engage with stakeholders, users, and experts to gather feedback. This iterative process improves AI systems over time.
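Steps 1 to 4 above can be sketched as small gates in a data-vetting pipeline (step 5, gathering feedback, is organisational rather than code). Every name, field, and threshold below is an illustrative assumption of mine, not a rule prescribed by the GDPR or any regulator.

```python
from datetime import date

def keep_first_party(records: list[dict]) -> list[dict]:
    # Step 1: rely only on data collected directly from users or customers.
    return [r for r in records
            if r.get("source") in {"zero_party", "first_party"}]

def keep_fresh_and_labelled(records: list[dict], today: date,
                            max_age_days: int = 180) -> list[dict]:
    # Step 2: drop stale or unlabelled records (cut-off is an assumed policy).
    return [r for r in records
            if r.get("label") is not None
            and (today - r["collected_on"]).days <= max_age_days]

def needs_human_review(confidence: float, threshold: float = 0.9) -> bool:
    # Step 3: low-confidence outputs are routed to a person, not auto-shipped.
    return confidence < threshold

def passes_retest(accuracy: float, fairness_gap: float) -> bool:
    # Step 4: re-test gate on accuracy and a simple fairness metric
    # (here, an assumed maximum outcome gap between subgroups).
    return accuracy >= 0.95 and fairness_gap <= 0.05
```

In a real system each gate would be backed by policy, documentation, and monitoring, but the structure, filtering data before it reaches the model and routing uncertain outputs to people, is the essence of the five steps.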

Generative AI holds immense potential for transforming business operations, but responsible implementation and use are crucial. By addressing these concerns, organisations can harness the benefits of AI while mitigating emerging risks.


The GDPR serves as a guiding light for responsible data handling in our increasingly digital world, rather than being a hindrance to the advancement of AI. It emphasises the essential role of human oversight and instils a heightened sense of accountability in the handling of personal data. While some critics express concerns about potential barriers to innovation, the GDPR stands as a crucial safeguard in the digital age, ensuring that personal data is treated with integrity and ethical considerations.

Given that AI technologies often involve the collection and processing of personal data, their impact on personal rights and privacy is significant. Therefore, the GDPR is not an obstacle to AI development but rather a necessary framework that prioritises the protection of personal data and privacy in our digitised era.