The European Data Protection Board (EDPB) has recently issued an opinion on specific data protection aspects of processing personal data in AI models. The opinion, requested by the Irish supervisory authority, addresses several important questions: whether AI models can be considered anonymous, whether “legitimate interest” is a valid legal basis, and what the legal consequences are of unlawfully processing personal data during AI model development. Here are the key takeaways from the EDPB opinion on AI and its implications for AI developers, deployers, and organizations that rely on AI-driven decision-making.
Can AI Models Be Considered Anonymous?
One of the most pressing questions in the EDPB’s opinion is whether AI models trained on personal data can ever be considered truly anonymous. The board concludes that AI models trained on personal data cannot, in all cases, be deemed anonymous.
For an AI model to be considered anonymous, two key conditions must be met:
- The likelihood of extracting personal data directly from the model should be insignificant.
- The probability of obtaining personal data through queries to the model should also be minimal.
According to the EDPB, supervisory authorities (SAs) should assess each claim of anonymity on a case-by-case basis, taking into account the security measures in place and the technical steps taken to prevent data extraction. The opinion emphasizes that organizations must provide substantial documentation demonstrating that their AI models do not allow for personal data retrieval.
Legitimate Interest as a Legal Basis for AI Data Processing
The EDPB also examined whether “legitimate interest” can serve as a legal basis for organizations to process personal data in the development and deployment of AI models. The opinion clarifies that no hierarchy exists between the legal bases provided under the GDPR; however, organizations must rigorously assess whether legitimate interest applies.
The assessment follows a three-step test:
- Legitimate Interest: The controller must identify a clear, lawful, and real interest in processing personal data. Examples include improving AI-driven fraud detection systems or enhancing cybersecurity measures.
- Necessity Test: Organizations must determine whether the processing is necessary to achieve the identified interest. If less intrusive alternatives exist, they must be considered.
- Balancing Test: The interests of the organization must not override the fundamental rights and freedoms of data subjects. This test requires evaluating factors such as the type of data processed, the risks to data subjects, and the transparency of the processing activities.
The opinion also highlights that AI developers and deployers should ensure compliance with the principles of transparency and fairness. Organizations must inform data subjects about how their personal data is processed and for what purposes.
Consequences of Illegal Data Processing in AI Models
The EDPB’s opinion addresses a major concern: the impact of unlawfully processed personal data on the subsequent use of an AI model. The Board outlines three key scenarios:
Scenario 1: The Same Controller Retains and Processes Personal Data
If a controller develops an AI model using unlawfully processed personal data and then deploys it, the legality of the deployment depends on whether the controller used the data for different purposes in the development and deployment phases.
The EDPB instructs supervisory authorities to verify whether the deployment involves separate processing activities and whether the controller has taken corrective actions.
Scenario 2: A Different Controller Uses the AI Model
If a new controller deploys an AI model that retains embedded personal data, that controller must assess the model’s compliance with GDPR requirements.
The controller must evaluate factors such as the source of the training data and any prior regulatory findings of data protection violations.
Scenario 3: The Model is Anonymized Before Deployment
If a controller processes personal data unlawfully during development but later anonymizes the model, the deployment may be lawful as long as no personal data remains in the model.
However, if the model processes new personal data after deployment, the controller must reassess GDPR compliance.
These scenarios make clear that organizations must exercise caution when integrating AI models into their systems and verify that the data sources they rely on are lawful and ethically obtained.
Steps for AI Developers and Deployers to Consider
The EDPB opinion guides steps that AI developers and deployers should take to comply with GDPR requirements. These include:
- Defining Roles and Responsibilities
- Organizations must clearly define whether they act as controllers, joint controllers, or processors when handling personal data in AI models.
- Joint controllers must establish agreements that outline their respective responsibilities under Article 26 of the GDPR.
- Pre-Contractual Due Diligence
- Deployers should request documentation from AI developers, such as data protection impact assessments (DPIAs) and legitimate interest assessments.
- Agreements should include strong guarantees regarding the lawfulness of data processing and liability provisions.
- Transparency and Documentation
- Developers should proactively disclose information about training data sources, data minimization techniques, and compliance measures.
- AI deployers should maintain records of their assessments to demonstrate accountability.
- Implementation of Data Protection Measures
- Organizations must apply appropriate anonymization, pseudonymization, and data minimization techniques to reduce privacy risks.
- Security measures should be in place to prevent data extraction or misuse.
Transparency and Compliance Under the AI Act
The opinion also discusses how the upcoming AI Act will impact AI developers and deployers. Under Article 53 of the AI Act, providers of general-purpose AI models (GPAIMs) must:
- Share technical documentation with downstream providers, including details on training data sources.
- Provide a summary of the datasets used for training to ensure transparency and compliance with data protection laws.
Organizations should start preparing for these requirements by ensuring their documentation aligns with the expected regulatory framework.
FAQs
- Can AI models ever be fully anonymous?
No, the EDPB states that AI models trained on personal data are not automatically anonymous. The likelihood of extracting personal data must be evaluated on a case-by-case basis.
- What is the significance of the EDPB opinion for AI developers?
AI developers must ensure their models comply with GDPR by using lawful data sources, applying privacy-preserving techniques, and documenting compliance efforts.
- How should organizations demonstrate legitimate interest when processing AI data?
They must conduct a three-step test: identifying a lawful interest, proving the necessity of the processing, and ensuring that data subjects’ rights remain protected.
- What happens if an AI model is trained on unlawfully obtained data?
Organizations may face legal repercussions, including fines and restrictions on model deployment, depending on how they use the model.
- What additional requirements will the AI Act impose?
The AI Act will require AI developers to provide transparency about training data sources, risk assessments, and compliance documentation for general-purpose AI models.
Conclusion
The EDPB’s opinion highlights key challenges in data protection for AI models. AI models trained on personal data are not inherently anonymous, and companies must ensure their data processing methods comply with legal standards. Organizations must carefully consider legitimate interest as a legal basis and understand the consequences of using illegally processed data.
By implementing strong data protection measures, maintaining transparency, and adhering to the GDPR and the new AI Act, AI developers and deployers can keep their practices lawful and foster responsible innovation.
Businesses must keep up with the changing AI and data protection regulations to meet compliance challenges. The EDPB’s recent opinion underscores the complexities of GDPR in developing and deploying AI models. Aligning your AI practices with legal standards is essential to mitigate regulatory risks.
Stevens Law Group offers experienced professionals in data privacy and technology law to assist with GDPR compliance, AI model assessment, and best practices to safeguard your business from legal risks.
Schedule a consultation with Stevens Law Group today to ensure your AI initiatives comply with regulatory expectations and avoid operational impacts.
References:
Opinion of the Board (Art. 64)