ChatGPT faces further investigation from the EU and beyond

  • 04 July 2023
  • UK & Europe

  • Cyber Risk

ChatGPT, the Artificial Intelligence (AI) software developed by OpenAI, is increasingly under scrutiny from data protection authorities across the world. This insight explores some of the current regulatory investigations into the technology and the reasons behind the criticisms.

As the world adapts to the increased prevalence of AI technologies and tools which use those technologies, it is inevitable that there will be some degree of regulatory scrutiny as the boundaries of what can be done, and how it is regulated, are tested. Across the UK and EU, where regulators have already taken a keen interest in how AI interacts with the GDPR, it will be interesting to observe and learn from the response of those regulators and how they interact with organisations, such as OpenAI, that are leveraging AI in their products. 

In the meantime, we are seeing regulators across the UK and EU investing resources in AI as a developing area. For example, the UK ICO has published a guide to AI and a risk toolkit, which is a useful start for organisations when thinking about navigating what is clearly a complex area of risk. 

ChatGPT’s recent suspension in Italy

Following the Italian data protection authority’s temporary ban on ChatGPT’s processing activities, OpenAI has been allowed to resume service in Italy. On 30 March 2023, the Italian data protection authority (the Garante) issued an Order, following an investigation into a reported data breach affecting ChatGPT users, finding that the processing conducted by OpenAI was in violation of various GDPR Articles, specifically:

  • Article 5: Principles relating to processing of personal data
    • The Garante noted that the processing was misaligned with this Article because information provided by the generative AI technology may be incorrect and may not correspond to the real data.
  • Article 6: Lawfulness of processing
    • The Garante stated that there was an absence of an appropriate legal basis in relation to the collection and processing of personal data required to “train” the AI algorithm.
  • Article 8: Conditions applicable to child’s consent in relation to information society services
    • Although OpenAI’s Terms of Service purported to restrict the use of ChatGPT for children under 13, it was noted that there was an absence of any age verification procedures for ChatGPT users. The Garante further submitted that the absence of filters may expose children under 13 to information that is inappropriate given their degree of development and self-awareness.
  • Article 13: Information to be provided where personal data is collected from the data subject
    • The Garante noted that no information was provided to users, nor to the interested parties whose data had been collected.
  • Article 25: Data protection by design and by default.

Pursuant to GDPR Article 58(2)(f), the Garante opted to use a corrective power, imposing an immediate temporary limitation on processing, pending further investigation. This limitation extended to all personal data of the interested parties established in the Italian territory. OpenAI had 20 days to respond to the Garante, confirming the measures that it had taken in response to the breaches identified. Failure to respond could result in an administrative fine of up to €20 million or 4% of total global turnover, whichever is higher.
On 28 April 2023, the Garante confirmed that OpenAI had notified them of such measures, including:

  • The publishing of an information notice on their website, available to users in Europe and elsewhere;
  • The expansion of the privacy policy for users, with accessibility available from the sign-up page;
  • The introduction of a mechanism to opt-out of processing, using an easy online form, and a process whereby data subjects can obtain erasure of information that is considered inaccurate (although the company stated that it is impossible, as of now, to rectify inaccuracies);
  • Clarification that the processing of certain personal data required to enable the performance of its services on a contractual basis will continue. However, the processing of users' personal data for training algorithms will be based on the legal basis of its legitimate interest, without prejudice to users' right to opt-out from such processing;
  • The addition of a button for Italian users to confirm that they are aged above 18 prior to gaining access to the service, or that they are aged above 13 and have obtained consent from their parents or guardians for that purpose; and 
  • The addition of a birth date request in the service sign-up area, blocking access to users aged below 13 and requesting confirmation of the consent given by parents or guardians for users aged between 13 and 18.

On this basis, OpenAI were allowed to resume operations and the processing of Italian users’ data. However, the Garante did note that it plans to continue “its fact-finding activities regarding OpenAI…under the umbrella of the ad-hoc task force that was set up by the European Data Protection Board.”

Following the Italian developments, other supervisory authorities have also become increasingly invested in monitoring the activities of ChatGPT, and AI services more generally. 

Warning signs from elsewhere in Europe 

On 13 April 2023, the HBDI in Hesse, Germany, issued a press release “sharing the same concerns” as the Garante. However, the Commissioner also commented that “hasty assessments would not take sufficient account of the importance of the issues” and therefore, further information was required from OpenAI. 

On 19 April, a second announcement was made, noting that a request had been sent to OpenAI to answer questions regarding ChatGPT’s data processing practices. Similarly, on 24 April, the LfDI in Baden-Württemberg issued a press release noting that the State Commissioner had also approached OpenAI for comment.[1] 

The Commissioner highlighted that questions as to OpenAI’s compliance with data protection laws can only be fully answered once the purposes behind the processing, and the data pool which feeds the algorithm’s knowledge, have been identified. He cited concerns such as the possibility that questions submitted to ChatGPT may reveal information about a person, or another individual, including their “interests in political, religious, ideological or scientific questions, or on his or her family or sexual life situation.”

This follows a warning from the UK Information Commissioner’s Office to AI firms on 3 April 2023, observing that “organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach. This isn’t optional – if you’re processing personal data, it’s the law.” 

The Spanish data protection authority, the AEPD, similarly announced a preliminary investigation into OpenAI for possible GDPR breaches on 13 April 2023. 

In Germany, the HBDI are looking to identify much of the same information as the Garante, and the topics identified by the ICO, such as:

  • Whether the data processing complies with the basic principles of data protection law;
  • Whether it is based on a valid legal basis;
  • Whether it is sufficiently transparent for the data subjects;
  • The protection of children under 16; and
  • How the data is used to train the system.

On 1 June 2023, the HBDI issued a questionnaire to OpenAI seeking to ascertain whether German and European data protection law is sufficiently observed in the data processing carried out by ChatGPT. The HBDI Commissioner has commented that “if it turns out that ChatGPT does not adequately protect the fundamental and data protection rights of the users of the service, the HBDI has a wide range of effective tools at its disposal in response.” OpenAI’s answers are due no later than 30 June 2023.

Based on the Hesse Commissioner’s information, it looks as though there will be a coordinated response to ChatGPT, either by the German supervisory authorities or the EDPB, with the aim of demanding “the same data protection from American AI providers, as from European providers.”

In addition to the German regulators, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens (AP)) explained on 7 June 2023 that it was also “concerned about the handling of personal data by organisations that use so-called generative artificial intelligence (AI), such as ChatGPT.” As the AP described, “ChatGPT is based on an advanced language model trained with data”; however, such data can be drawn from the internet, or from storing and using the questions that people ask. This data may therefore “contain sensitive and very personal information, for example if someone asks for advice about a marital quarrel or about medical matters.” The AP also has concerns about the content generated, which may be “outdated, inaccurate, inappropriate, or offensive and may take on a life of its own.” Like many other regulators, the AP hopes to clarify matters, particularly with regard to how personal data is handled when training the underlying system.

What is next?

Unsurprisingly, due to its unique nature and the way in which it has grabbed the attention of the media and a wide demographic, ChatGPT seems to be at the forefront of data protection regulators’ minds. As the regulation of AI is still very much a developing landscape, the burden will be on organisations developing AI tools, such as ChatGPT, to ensure compliance and to be able to demonstrate that compliance actively to regulators. Equally, we anticipate that data subjects will start to ask more questions as they learn about what these technologies can offer them, and what personal data is being collected and used when they interact with those technologies. Against this backdrop, we anticipate that more activity is to come from both regulators and data subjects as organisations continue to test the boundaries of AI in real-life settings. 

We will continue to monitor the extent to which regulators are commenting on AI and, in particular, if and how those regulators are liaising with each other, particularly across the EU. It is clear from the increased activity at this early stage that regulators are investing more resources in the investigation of AI technologies and on that basis, we anticipate that there will be an uptick in commentary (and potential criticism or at least learning points for organisations) in coming months. 

[1] This comes as a coordinated response from the German data protection authorities in Hesse, Baden-Württemberg, Rheinland-Pfalz and Schleswig-Holstein. 


Additional authors:

Danielle Rodgers, Knowledge Lawyer & Christie Newton, Knowledge Paralegal
