Lessons from the Hamburg Commissioner for Data Protection and Freedom of Information’s €492,000 Fine
Insight Article | 06 October 2025
UK & Europe | Regulatory movement
A €492,000 fine from the Hamburg Commissioner for Data Protection and Freedom of Information underscores the growing regulatory scrutiny of algorithmic decision-making and the critical importance of transparency and accountability in AI-driven processes. Not only the EU GDPR but also the EU AI Act plays a crucial role for users of AI (and its output) in decision-making.
1. Regulatory spotlight on algorithmic decisions
On 30 September 2025, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) published an interim report outlining administrative fines imposed for violations of the General Data Protection Regulation (Regulation (EU) 2016/679 – “GDPR”) during the current year (available here in German).
Among the cases highlighted, a financial services provider was fined nearly EUR 500,000 for failing to adequately fulfil its obligations under the GDPR. The case concerned automated decision-making in credit card application processes, where applications were rejected despite applicants demonstrating good creditworthiness. The decisions in question were based on algorithms and were made without human oversight. When affected individuals requested information about the reasons for the rejection, the company failed to provide sufficient explanations, thereby breaching its information and access obligations under the GDPR.
This enforcement action underlines the increasing regulatory scrutiny of algorithmic decision-making and the critical importance of transparency and accountability in AI-driven processes. In addition to the GDPR, the Artificial Intelligence Act (Regulation (EU) 2024/1689 – “AI Act”) will also be relevant in the future, as it contains further regulations, particularly regarding the use of high-risk AI systems, which complement the obligations of the GDPR.
2. Legal assessment
2.1 Article 22 GDPR
Under Article 22(1) GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, where such decisions produce legal effects or similarly significantly affect them. Exceptions apply only where the decision is necessary for entering into or performing a contract, authorised by Union or Member State law, or based on the data subject’s explicit consent.
In its landmark decision in SCHUFA Holding AG (C-634/21), the European Court of Justice (ECJ) clarified that credit scoring based on automated processing may fall within the scope of Article 22, particularly where the outcome significantly influences contractual decisions such as loan approvals. The ECJ emphasised that transparency obligations under Articles 13–15 GDPR are essential in such contexts, requiring controllers to provide:
- Meaningful information about the logic involved in the processing;
- The significance of the processing; and
- The envisaged consequences for the data subject.
As further analysed in our Insight on the ECJ’s decision concerning Dun & Bradstreet Austria GmbH (ECJ Ruling on Automated Decision-Making and Data Subject Access), the notion of “meaningful information about the logic involved” requires more than a generic description. It entails a level of detail that enables the data subject to understand the rationale behind the automated decision and to assess its fairness and impact.
In the Hamburg case, the company relied on automated systems to assess creditworthiness but failed to meet these informational obligations. The HmbBfDI found that the company did not adequately explain the logic behind its algorithmic decisions, nor did it provide sufficient access to the underlying rationale when requested. This lack of transparency constituted a breach of the GDPR and justified the imposition of a fine.
The case illustrates the practical implications of the ECJ’s rulings in SCHUFA and Dun & Bradstreet Austria GmbH, reinforcing that organisations deploying automated decision-making must ensure both legal justification and procedural transparency. Failure to do so may result in significant regulatory sanctions.
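To illustrate what "meaningful information about the logic involved" can look like in practice, the following minimal sketch (in Python; the record structure, field names, and wording are our own hypothetical illustration, not drawn from the Hamburg case) shows how a controller might log the decisive factors of each automated credit decision so that a case-specific explanation can be produced on request:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionFactor:
    """One input that influenced the automated decision."""
    name: str       # e.g. "payment_history" (hypothetical factor name)
    value: str      # the applicant-specific value the algorithm used
    influence: str  # plain-language effect, e.g. "lowered the score"

@dataclass
class CreditDecisionRecord:
    """Per-decision audit record, so that access requests under
    Articles 13-15 and 22 GDPR can be answered with case-specific
    information rather than a generic description of the model."""
    application_id: str
    outcome: str                                  # "approved" or "rejected"
    factors: list[DecisionFactor] = field(default_factory=list)
    human_reviewed: bool = False                  # meaningful human involvement?
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def explanation_for_data_subject(self) -> str:
        """Render the rationale in language the applicant can understand."""
        lines = [f"Your application {self.application_id} was {self.outcome}."]
        lines += [f"- {f.name} ({f.value}): {f.influence}" for f in self.factors]
        if not self.human_reviewed:
            lines.append("This decision was taken without human review; "
                         "you may request human intervention.")
        return "\n".join(lines)
```

The design point is that the explanation is reconstructed from the factors actually used in the individual case; a generic description of the scoring methodology would not satisfy the standard described in SCHUFA and Dun & Bradstreet.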
2.2 Beyond the GDPR: How Article 86 AI Act expands the right to explanation
When it comes to automated decision-making, companies must comply not only with the GDPR but also with the AI Act. Article 86 AI Act grants individuals a right to explanation where decisions are taken on the basis of output from high-risk AI systems and significantly affect their health, safety, or fundamental rights. Article 86 AI Act will apply from 2 August 2026 and complements the GDPR, but differs in scope and emphasis:
- GDPR focuses on data protection and individual rights in automated processing.
- AI Act targets systemic risks and technical governance of AI systems, including transparency, human oversight, and documentation.
While the GDPR already implies a right to explanation through the Articles mentioned above, Article 86 AI Act makes this right explicit and ties it to the high-risk AI systems listed in Annex III. It requires deployers to provide "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." This captures AI systems used for decision-making in financial services: "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score" are expressly listed as high-risk (Annex III no. 5(b) AI Act).
However, pursuant to Article 86(3) AI Act, the right to explanation of individual decision-making under paragraph 1 applies only to the extent that it is not otherwise provided for under Union law. Articles 13 to 15 and 22 GDPR already partially cover this right, in which case Article 86(1) AI Act does not apply. This exclusion is, however, not absolute. The GDPR provisions apply only where there is an automated decision in an individual case within the meaning of Article 22(1) GDPR, and such decisions are not identical to individual decisions based on the output of a high-risk AI system under Article 86(1) AI Act. Article 22(1) GDPR covers only decisions "based solely on automated processing" (as further outlined in the SCHUFA ruling), whereas for Article 86(1) AI Act it is sufficient that the deployer's decision is based on the output of an AI system. In addition, the information under Article 86(1) AI Act "on the role of the AI system in the decision-making process" goes beyond Article 15(1)(h) GDPR and beyond the requirements the ECJ has formulated to date for the right to information under the GDPR. This broader scope reflects the AI Act's emphasis on system-level accountability and the need to explain not just the outcome, but the operational context of the AI system. Finally, Article 86(1) AI Act requires the relevant output to originate from a high-risk AI system, a condition that is irrelevant under Article 22(1) GDPR.
Ultimately, Article 86 AI Act further protects the rights of data subjects. If an automated decision falls outside the scope of Article 22 GDPR, it may still be subject to Article 86 AI Act. This dual framework ensures that individuals are not left without recourse, even when automated decisions do not meet the strict criteria of the GDPR.
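The delimitation described above can be reduced to a rough decision rule. The sketch below (our own simplification for illustration; it ignores the Article 22(2) GDPR exceptions, the Annex III point 2 carve-out, and any fact-specific Article 86(3) analysis) shows how the two explanation regimes interlock:

```python
def applicable_explanation_duties(
    solely_automated: bool,     # no meaningful human involvement
    significant_effect: bool,   # legal or similarly significant effects
    high_risk_ai_output: bool,  # based on output of an Annex III high-risk system
) -> list[str]:
    """Rough mapping from a decision's features to explanation duties.

    Simplified on purpose: real cases require analysis of the Art. 22(2)
    GDPR exceptions and the Art. 86(3) AI Act subsidiarity clause.
    """
    duties: list[str] = []
    if solely_automated and significant_effect:
        # GDPR route: Arts. 13-15 and 22 GDPR, incl. the Art. 15(1)(h) access right
        duties.append("GDPR: Arts. 13-15, 22")
    if high_risk_ai_output and significant_effect:
        # Art. 86(1) AI Act; under Art. 86(3) only to the extent the explanation
        # is not already provided for under other Union law. As discussed above,
        # the overlap with the GDPR is only partial.
        duties.append("AI Act: Art. 86(1), subject to Art. 86(3)")
    return duties

# A human formally signs off, but the decision rests on a high-risk
# credit-scoring system: Art. 22 GDPR may not apply, yet Art. 86 AI Act
# still grants an explanation right (from 2 August 2026).
print(applicable_explanation_duties(False, True, True))
# -> ['AI Act: Art. 86(1), subject to Art. 86(3)']
```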
3. Practical takeaways
This case and the evolving legal landscape offer several key lessons:
- Transparency is non-negotiable: Companies must be able to explain automated decisions in a way that is understandable and meaningful to affected individuals.
- Documentation and oversight: Robust internal processes are essential to ensure compliance with both the GDPR and the AI Act.
- Proactive engagement with regulators: Cooperation and remedial action can significantly mitigate penalties. In its press release, the HmbBfDI expressly highlighted the cooperation of the financial services provider, which was taken into account as a significant mitigating factor in determining the fine.
- Prepare for dual compliance: Businesses using AI systems should align their practices with both the GDPR and the AI Act, especially when deploying high-risk systems. This includes conducting impact assessments that address both data protection and AI-specific risks, and ensuring that explanations are tailored to the technical and legal context of each system.
As AI regulation matures, companies must move beyond technical compliance and embrace ethical and human-centric governance of automated decision-making. Even though automated decision-making does not in itself violate the GDPR or the AI Act, the accompanying disclosure requirements mean that companies should keep an eye on both sets of rules (as well as future ones).