The FCA’s new AI Live Testing initiative – does it address the elephant in the room?

  • Insight Article 16 May 2025
  • UK & Europe
  • Tech & AI evolution
  • Corporate & Advisory

The FCA's new Engagement Paper explores a fresh approach to safely and responsibly adopting AI in the UK financial services sector.

The recent Engagement Paper on AI Live Testing released by the UK’s Financial Conduct Authority (FCA) seeks to offer a new approach to fostering the safe and responsible adoption of AI within the UK financial services sector.

AI Live Testing is the latest initiative from the FCA AI Lab, which was created to help AI solutions move from proof of concept to live deployment, with the aim of driving economic growth and delivering positive outcomes for UK consumers and markets.

The AI Live Testing initiative focuses on the end of that journey and is specifically designed to support the safe and responsible deployment of advanced AI models in the UK's financial markets, particularly those to be used in customer-facing activities such as product sales.

The Engagement Paper discusses the unique challenges financial services firms encounter during the deployment of advanced AI models and how the FCA wants to engage with firms to address those challenges. In particular, the FCA wants the initiative to address key questions, such as:

  • What input-output validation is needed to build confidence that AI-generated outcomes are likely to meet regulatory expectations
  • How consumer groups, including vulnerable consumers, are likely to be impacted by the new technology
  • What processes are in place to address poor/unintended AI model outcomes when they arise

The FCA is inviting firms with suitable AI deployment use cases to join the initiative from the summer of 2025. It will initially run as a 12-month pilot, but, if successful, may become a permanent feature of the FCA’s innovation services.

Global perspective

Financial services regulators around the world, caught off guard by the speed of AI adoption, continue to grapple with how to regulate the use of AI within financial services markets.

Whilst there is no single approach being taken, there are many examples of AI regulatory initiatives around the world. In the USA, the National Institute of Standards and Technology's AI risk management framework (AI RMF) comprises a variety of profiles intended to represent different combinations of use case and sector. In Singapore, the AI Verify Foundation is developing an AI governance testing framework and software toolkit that aims to provide a set of standardised technical tests for AI.

The FCA’s latest AI initiative, whilst different in scope and focus, forms part of these global efforts to develop methodologies for appropriate AI governance.

Accountability of third party AI providers

Although this initiative aims to give financial services firms the certainty and confidence to invest in AI systems, firms that are regulated by the FCA will ultimately be liable for the outputs of the AI systems they use.

And yet, many of these AI systems are created by unregulated third party providers that have no direct accountability to the FCA. A recent Bank of England and FCA survey on artificial intelligence and machine learning in UK financial services found that a third of AI use cases are implementations of third party products, with the expectation that this third party exposure will grow as AI model complexity rises and outsourcing costs fall.

So the question needs to be asked: are the FCA and other financial services regulators around the world regulating the right entities when it comes to the adoption and use of AI in financial services? Or should the third parties that build and understand the AI models, which are not traditionally in scope of the financial services regulatory regime, also fall within the regulators' remit in order to encourage better and more responsible use of AI within the sector?

In the FCA’s response to the UK Government’s pro-innovation strategy on AI, the FCA alluded to this elephant in the room by suggesting that key third party AI providers could be designated as critical third parties to the financial sector for the purposes of operational resilience, thereby bringing them under the oversight of the FCA, the PRA and the Bank of England.

Conclusion

AI Live Testing is the latest in a range of FCA initiatives that show how UK regulators are looking to engage with and understand the impact of AI on the financial services sector. But it still does not address the fundamental issue of whether there needs to be a sea change in the regulatory framework to allow for better oversight of AI deployment and use within the sector.

The FCA’s current stance is that no change to the underlying framework is needed. However, consistent feedback to the FCA from regulated firms, third party AI providers and Insurtechs alike is that greater clarity on how the regulatory framework applies to the use of AI in financial services is needed to speed up the adoption and roll-out of AI within the sector.

Whether proper regulatory oversight of, and engagement with, third party AI providers is required to help ensure that the benefits of AI use within the sector are realised whilst the risks are managed is just one of a number of key questions that still need to be addressed as the march towards a potentially AI-dominated future continues.


Areas:

  • Market Insight

Additional authors:

Hasith Balapatabendi, Trainee Solicitor, London
