Reflections on the Clyde & Co and Opus 2 panel at London International Disputes Week 2025

  • Insight Article 17 June 2025
  • UK & Europe
  • Tech & AI evolution
  • Dispute Resolution

The panel discussed Ethical Standards for AI in Arbitration, asking the questions: what’s feasible, what’s viable and what’s desirable? There is little doubt that AI was the “hot topic” during this year’s London International Disputes Week 2025, and the active and engaged audience at the event demonstrated that. The panel comprised Chris Williams of Clyde & Co, Matthew Lavy KC of 4 Pump Court, Dr Karen Seif of Paris Sorbonne University Abu Dhabi, and Natalie Armstrong of Clyde & Co, and was moderated by Kateryna Honcharenko of Opus 2.

Discussing use cases of AI: the risks and benefits

Kateryna Honcharenko introduced the panel and opened the discussion with a thought-provoking quote from Albert Einstein: “The measure of intelligence is the ability to change”. Taking a practical approach, she asked the panellists about some of the use cases of AI, along with the risks and benefits. Matthew Lavy KC described the difference between “lab cases” and “real world cases”, explaining that the former are unlikely to affect most practitioners. Among the real-world cases he highlighted was the use of AI tools to support tasks such as document review, legal research, summarising cases, and preparing chronologies. Matthew explained that these are tasks which AI can handle well (compared to the average human lawyer). The benefits of having an AI tool handle them are obvious: they save time, reduce costs, and allow practitioners to focus on more interesting and challenging work.

Chris Williams built further on Matthew’s point, adding the perspective that “AI can bring great gains to the parties and to arbitral institutions”, whilst cautioning that AI may begin to encroach on human decision-making. Dr Karen Seif provided some additional use case examples, drawing from her own experience. Translation capabilities make AI tools particularly powerful in international arbitration, where many of the relevant laws and supporting documents may be in foreign languages. Karen explained that, in her experience, the translations produced by AI tools were impressive. She suggested that AI tools may be increasingly used in international arbitration and dispute resolution if they “can bring down the language barrier and increase the pool of qualified arbitrators”.

Natalie Armstrong took a slightly different approach and focused on how generative AI tools can be used in a controlled way. There is a plethora of generative AI tools on the market for arbitrators and practitioners alike, and it is important that they know how to use and control them. Practitioners can put up guardrails and lower the “temperature” to reduce the “creativity” of generative AI tools, to avoid them “hallucinating” and inventing answers, as sketched below. Natalie succinctly captured the inherent risk that comes with using generative AI tools: “Inherently these tools want to be helpful and if they can’t find the answer, they will make it up.”
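
By way of illustration only, here is a minimal sketch of what “lowering the temperature” and adding a simple guardrail can look like in practice. It assumes the OpenAI Python client; the model name, prompts and guardrail instruction are hypothetical, and other vendors expose similar parameters.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        temperature=0,   # low temperature: more deterministic, less "creative" output
        messages=[
            # A simple instruction-level guardrail: tell the model not to guess.
            {"role": "system",
             "content": "Answer only from the documents provided. If the answer "
                        "is not in the documents, say so rather than guessing."},
            # Hypothetical user request
            {"role": "user",
             "content": "Summarise the procedural history of the dispute."},
        ],
    )
    print(response.choices[0].message.content)

Setting the temperature to zero does not eliminate hallucinations, but it makes outputs more repeatable and therefore easier to verify.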

Delegation of decision making to AI

Next, Kateryna shifted the focus of the panel to consider whether decision-making powers could be delegated to AI tools. She referenced the recent Silicon Valley Arbitration and Mediation Center (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration, which include a specific guideline on the non-delegation of decision-making powers. She also referred to the recent case of LaPaglia v. Valve Corp, which began as a dispute under American Arbitration Association (AAA) rules, with an award issued a few months later. The claimant has since filed a petition in the US District Court to set aside the award on the grounds that the arbitrator allegedly “outsourced” the decision-making to AI. The decision allegedly contained “tell-tale signs” of AI generation, and purportedly referenced “evidence” not present at the trial or in the parties’ submissions.

The following statement was put to the panel: the use of AI arguably betrays the expectation of a well-reasoned arbitrator. How do arbitrators make sure they leverage AI to enhance the efficiency of their practice without diluting the quality of their decision-making?

Karen responded with words of reassurance, highlighting that even though we are in the early stages of AI adoption, there is already a significant amount of guidance on the non-delegation of the arbitrator’s powers. Comparisons can be made between delegating to judicial secretaries and judicial assistants, and delegating to AI tools. Karen pointed out that delegating every part of the arbitrator’s role and mandate is neither ethical nor acceptable, whether to AI tools or to humans. However, tasks which don’t involve substantive decision-making, such as organising calendars, categorising documents, and other administrative work, would be perfectly safe to delegate to an administrative assistant, or perhaps to AI. Other tasks may also be suitable for an AI tool to undertake, such as correspondence with the parties, drafting procedural orders or organising a simple procedural history. Karen drew parallels with a 2015 Queen Mary University study on delegating tasks to administrative assistants. Where do we draw the line?

Matthew continued the conversation, agreeing that parallels can be drawn in the delegation of tasks. The answer to the question of which tasks can be delegated will evolve over time as attitudes towards AI develop, and what people find “acceptable” with regards to the use of AI in arbitration will likely change. Matthew cautioned, however, that it might be a slippery slope from using AI to draft a chronology and procedural orders to letting it take on a larger role in the decision-making and explanatory process.

Karen posed the question to the audience: how many people would feel comfortable allowing an AI tool to draft a decision on behalf of the tribunal? There were chuckles, but no raised hands.

Tackling the Risk of Bias and Discrimination in AI

Kateryna started the conversation with reference to a 2023 survey by Bryan Cave Leighton Paisner (BCLP), which found that, in the context of arbitration, the risk of bias was high on respondents’ list of concerns. On the topic of bias and discrimination, an important question was posed to the panel: how do we make sure AI is used responsibly, and what kinds of biases should we be aware of in the context of dispute resolution?

Karen opened by explaining the risk of automation bias and complacency: once an AI tool has provided a response, parties become complacent and rely on it without challenge. How do we deal with this issue? One answer is to mitigate the risks of AI usage (such as hallucinations and biases) by ensuring the tools are used responsibly. Matthew expanded on this, flagging the important point that legal education remains essential: the responsible and appropriate use of AI, and how to evaluate its outputs, must be taught.

Matthew developed the point further: it is important to focus on what the output of the AI tool is going to be used for and, if it were to fail, what the implications could be. Building on this, Chris referred to a recent case in which a solicitor and a surveyor had jointly relied on AI to produce a report. The partner signed off on the report, and it was a junior member of the team who double-checked the references within it and noticed the mistake. Just as when a junior member of the team prepares a piece of work, it is the responsibility of the senior lawyers or partners to ensure that it is correct. The same can be said of AI: practitioners must take responsibility for its potentially incorrect outputs when leveraging it.

Natalie picked up on the point and noted that you can test AI tools by asking them to give the source from which they obtained their information. It is important to undertake due diligence on the tool and to review its outputs. When relying on AI tools to undertake document reviews, an easy way to verify the information is to ask the AI for the control number of the document from which it obtained the information.

The topic ended on a positive note as the panel discussed how AI can be used to counter human biases. Karen explained that AI can be used to scan anonymised CVs of potential arbitrators and rank them against certain criteria; this may even widen the pool of arbitrators and overcome human biases. Natalie highlighted that AI could be used to alleviate the contention which often arises when selecting an arbitrator in investor-state disputes, and suggested that we could be relying on AI for this in future. Matthew rounded off the topic by explaining that biases in AI tools come from the data used to train them (“garbage in, garbage out”), and that using AI tools where you can select the training data will help prevent bias.

Regulation of AI: Changing standards of care?

The conversation shifted towards the regulation of AI within legal practice. Chris explained that both the SRA and the Law Society have issued thorough guidance on the use of AI in legal practice, noting that practitioners should have at least a basic understanding of how the tools work. Blaming a mistake on an AI tool could be reputationally damaging for individual practitioners and the firm, and would not be an acceptable excuse to the regulator.

Chris further explained that the standards of skill and care expected of lawyers will not change. Instead, they will need to make sure they maintain the same standards of skill and care even when making use of AI. Matthew expanded on this point and highlighted that there may come a time when it would be negligent not to make use of AI tools. This was supported by Natalie, who made reference to the SRA Code of Conduct for Firms, in particular the obligation under Rule 4.2 to ensure that the service provided to clients is competent, delivered in a timely manner, and appropriate to the client’s needs. There may come a time when choosing not to use AI, and to forgo the benefits it brings, may actually amount to negligence. Karen agreed, using the analogy that an AI tool is like a junior associate: it can be a very helpful resource, but it needs to be closely supervised like any junior team member.

Part of the issue with AI tools, as the panel noted, is that even their developers do not always understand how the AI reaches its conclusions. This lack of a reasoned judgment is a limiting factor in using AI to act as a judge or arbitrator. Notwithstanding the complexity of AI tools, Chris firmly established that practitioners don’t necessarily have to understand the intricacies of how a system is built, but they do need to apply a degree of scrutiny to at least make sure the AI tool is suited to the task.

Disclosure of using AI tools and equality of arms

On the topic of disclosure of the use of AI tools, the panel discussed whether practitioners would be comfortable with an arbitrator using AI to review their determinations. Arbitrators are selected for their expertise; parties choose arbitration rather than the courts in part because they are usually permitted to select the arbitrators who will determine their case. This benefit is undermined if arbitrators use AI. Perhaps one solution is to prohibit arbitrators from using AI tools, even to “sense check” their awards.

Matthew explained that, in the context of disclosure, requirements would likely depend on what the AI is being used for. If it is just drafting simple letters or handling administrative tasks such as internal case analysis or document review, this is unlikely to cross a line. However, if AI is being used to enhance or amend evidence, such as improving the resolution of grainy CCTV footage which might show a suspect brandishing a knife, then all parties need to be aware that AI has been used; the reality may be that the suspect was not holding a knife, but something else. In order for the other parties to appropriately challenge evidence, they need to know whether AI has been used in relation to it. Matthew concluded that disclosure requirements will therefore depend on how AI has been used.

The conversation moved on to the topic of “equality of arms”. Some case studies suggest that AI tools are being used to analyse the previous decisions of arbitrators with a view to identifying which arbitrators might be the most favourable, or where specific arguments have a track record of succeeding before particular arbitrators. Natalie made the important point that one party may be supported by a better-funded law firm which can absorb the high cost of implementing AI tools, against a party which lacks the financial means to retain firms with the same level of technology; those better-funded firms will also have the means to properly train staff in the use of AI. A member of the audience raised the point that there has always been inequality of arms in law. Natalie responded by highlighting the risk that AI may exacerbate that inequality, and suggested that arbitral tribunals could control the extent of AI use by requiring parties to agree to use comparable tools, or by setting guardrails in the first procedural orders.

Closing thoughts

The conversation naturally shifted back towards the regulation of AI systems. At present, there is a patchwork of guidelines across countries and arbitral institutions. These guidelines are not homogeneous, and the technology is developing so rapidly that regulators seem unable to keep up.

The topic of training junior lawyers to use AI was also discussed. Chris Williams highlighted that it will remain incumbent on firms to make sure that junior lawyers and trainees are still trained in key skills such as business development, cognitive skills, and the ability to adapt in this changing world.

Panellists

  1. Chris Williams - Clyde & Co
  2. Matthew Lavy KC - 4 Pump Court
  3. Natalie Armstrong - Clyde & Co
  4. Dr Karen Seif - Paris Sorbonne University Abu Dhabi
  5. Moderated by: Kateryna Honcharenko - Opus 2, Senior Arbitration Consultant

