How will AI reshape civil liability?

  • Market insight 17 November 2023
  • Asia-Pacific, Latin America, North America, UK & Europe
  • Technology, outsourcing & data

The rapid development of artificial intelligence (AI) has brought huge opportunities for businesses, but it raises important questions for regulators and legal professionals: how do we ensure the technology doesn’t become too powerful or have adverse impacts, and who should be held accountable if it does? Here, as a follow-up to our Digital Resilience Podcast on IP and privacy issues relating to AI, David Méheut, a partner in our Paris office, discusses how ethical principles are shaping civil liability relating to AI.

AI capabilities have progressed so rapidly in recent years that businesses are rethinking how they build technology, carry out their operations, and deploy their resources in order to maximise the technology’s potential. This sudden race to incorporate AI, however, has also raised significant concerns and sharpened big questions about the degree of autonomy these systems can acquire, our ability to control them, and how civil liability frameworks apply if something goes wrong.

On 14 June 2023, the European Parliament approved its version of the draft EU Artificial Intelligence Act, which, together with other draft legislation at EU level, would form the world’s first comprehensive AI law. But, as with any new technology regulation, the challenge lies in mitigating the risks without stifling progress. Some claim the draft doesn’t go far enough, while others argue that it makes innovation impossible.

A good place to start when thinking about AI and civil liability is the set of ethical principles developed by various institutions, including the CNIL (Commission Nationale de l’Informatique et des Libertés) in France and UNESCO at a global level. At the heart of all of these is the notion that, because AI will have some level of autonomy and is therefore highly unpredictable, anyone using it must measure and monitor its impact for any adverse evolution, be that the development of bias, discrimination, or a loss of control.

We can see these ethical principles reflected in the draft EU Act, which categorises AI use cases according to the risk involved and, for ‘high-risk’ cases, stresses the importance of risk mapping and of monitoring adverse impacts. An example of how this principle has been applied can be found in a French case involving high-frequency trading (HFT), in which a company lost control of an algorithm, affecting the trading price of a security. Although the user denied responsibility, it was ultimately found liable for failing to anticipate the event and to put a contingency plan in place.

This approach also follows a broader trend in compliance, which has moved towards anticipating and mitigating risks rather than expressly specifying the measures that must be in place. What remains unresolved for AI, however, is what happens when an adverse event occurs. Will it be as simple as ‘unplugging’ the AI system and taking back control? It is easy to envisage situations where this may not be possible. In a specialised environment such as healthcare, for example, where lives are at risk, what would happen if no sufficiently qualified human practitioner were available to take over?

For now, organisations considering implementing AI in their work would do well to start from the ethical principles: detailed risk mapping, impact assessments, and a plan for what to do if an AI system goes awry. It is still early days, but given the speed at which AI technology is progressing, there seems little doubt that civil liability cases will follow.
