The fine print of AI hype: The legal risks of AI washing
- Market Insight, 14 May 2025
- UK and Europe
- Regulatory & Investigations - Technology Risk
With the first obligations under the European AI Act now applicable, legal uncertainties are emerging - particularly concerning the scope of the definition of artificial intelligence (“AI”). A pending case before the CJEU underscores these uncertainties and highlights the emerging phenomenon of ‘AI washing’, where companies embellish claims about their use of AI. As the prominence of AI increases, so does the scrutiny of its accurate representation.
The first obligations of the European AI Act (Regulation (EU) 2024/1689) have applied since 2 February 2025, and the second case concerning the AI Act is already pending before the CJEU. This case, referred from Poland, is the subject of heated debate as to whether the AI Act applies at all. It concerns software used in the judiciary to distribute incoming cases, and it is questionable whether this software actually constitutes an AI system within the meaning of the AI Act. If even the courts are unsure whether the products in question are AI, this naturally leads to further uncertainty in practice – which is exploited, sometimes unintentionally but also intentionally, in the form of so-called AI washing.
The definition of AI in the AI Act
Article 3 (1) AI Act defines an ‘AI system’ as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The definition is based on an earlier OECD definition. The legal definition itself is quite broad and not necessarily specific to AI, as many traditional software systems also process inputs to generate outputs. However, recital 12 AI Act further specifies characteristics of the term, in particular those that serve to distinguish it from “simpler traditional software systems or programming approaches”. The EU Commission published further guidelines on the definition to address any uncertainties. These are, however, not yet formally adopted.
Although the AI Act’s definition of AI merely identifies which applications fall under its regulation, establishing a consistent understanding could have an impact in other areas. While a clearer definition of AI should help reduce confusion, it also increases the risk of heightened regulatory scrutiny. As public awareness grows, so does the potential for more frequent allegations of fraud.
AI washing is the new greenwashing
In this context, the term ‘AI washing’ is becoming increasingly common. AI washing describes the embellishment of AI-related claims – for example, statements about whether a company uses AI at all, how it handles AI, or the extent of its AI deployment.
Experts have different theories about the roots of this new phenomenon. On the one hand, it is attributed to the lack of a precise definition of AI (or ‘AI system’, to use the AI Act’s terminology), which makes AI washing possible in the first place. Some experts also attribute the problem to a lack of technical understanding at senior management level and the pressure to constantly innovate: many companies fear missing out on the AI hype.
On the other hand, it is believed that many are eager to jump on the bandwagon and exploit the advertising value of the buzzword AI. This can even lead to deliberate misrepresentation: the capabilities of AI are exaggerated, or the term ‘intelligence’ is used in a misleading way. The latter is the case, for example, if the software does not use any learning algorithms and instead makes decisions based on predefined rules, without ever having been trained.
Legal pitfalls of AI washing
The AI Act, while aimed at regulating AI use, is not explicitly crafted to address AI washing. Nevertheless, it represents a positive step toward promoting greater transparency in how companies use AI.
Transparency obligations
If the software in question is indeed an AI system within the meaning of the AI Act, providers and deployers are subject to transparency obligations under Article 50 AI Act, depending on the type of AI used. In the case of a high-risk AI system, the transparency obligation under Article 26(11) AI Act also applies, pursuant to which deployers must inform the natural persons concerned that they are subject to the use of a high-risk AI system. Any violation of these obligations may result in deterrent fines in accordance with Article 99(4)(e) and (g) AI Act.
In addition to the transparency requirements of the AI Act, there are also disclosure requirements under unfair competition law. Unfair competition law covers any commercial practice intended to promote the sale or purchase of goods or services and therefore also applies to the advertising of AI products.
Liability of companies and directors
If a company falsely claims that it deploys or develops AI and thus engages in AI washing, it faces liability risks. First, the company might be liable to its investors. Liability to investors arises mainly from incorrect or incomplete information in prospectuses under the German Securities Prospectus Act (Wertpapierprospektgesetz) and related laws, allowing investors to claim damages. The German Investment Code (Kapitalanlagegesetzbuch) and the German Capital Investment Act (Vermögensanlagengesetz) also regulate liability for misleading information in investment documents.

Under corporate law, misrepresentation in annual reports, especially regarding AI applications, can lead to liability under the German Commercial Code (HGB) and the German Stock Corporation Act (Aktiengesetz). Tort liability may arise under the German Civil Code (BGB) if incorrect information is disseminated to a wide audience, with particularly serious cases falling under sec. 826 BGB for intentional damage contrary to public policy. The company may also face liability to third parties for false public advertising under the Unfair Competition Act (UWG) and under contract-law principles such as culpa in contrahendo.

Additionally, directors may be personally liable to the company for false statements in annual reports and may face administrative sanctions under the German Securities Trading Act (Wertpapierhandelsgesetz), the German Investment Code, the Capital Investment Act and various EU regulations.
Possible criminal liability
In addition to the above-mentioned liability risks, criminal prosecution cannot be completely ruled out either. Beyond administrative offences and criminal provisions under unfair competition law, criminal liability for fraud or capital investment fraud may also arise under certain circumstances. Although there is no case law on this yet, it should be kept in mind.
AI washing gives rise to legal claims
As illustrated above, there are significant legal risks associated with AI washing. These include, of course, the possibility of fines being imposed by the authorities.
This is already happening in the United States. Last year, the U.S. Securities and Exchange Commission (SEC) took action against two investment advisers for making “false and misleading statements about their purported use of artificial intelligence.” Both firms agreed to settle the SEC’s charges and pay a total of $400,000 in civil penalties. The SEC found that the respective companies “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not”.
Investors are also taking action against false claims about AI. In February and March of this year, investors filed securities class action lawsuits against two companies for alleged AI washing in the U.S. The complaints allege, among other things, that the companies misrepresented their position and ability to capitalise on AI, that their statements omitted material facts, and that investors were thereby caused to purchase the companies’ securities at “artificially inflated prices.”
Practical guidance for companies and insurers
To avoid claims arising from AI washing, companies should fact-check any statements made on their behalf. Statements made by companies about AI should be consistent across the company’s communications, including investor presentations and other marketing activities. This applies not only to consumer advertising but to corporate communications generally. Otherwise, there is a risk of legal action from investors and customers.
When a company purchases software, it should thoroughly check the software for the presence of AI – not only to get to the bottom of any false claims about the existence or capability of AI, but also to be able to assess any responsibilities and obligations under the AI Act. When developing software in-house, all statements about it should likewise be checked for accuracy to avoid false claims.