
What the FCA’s AI Live Testing (FS25/5) means for FinTechs deploying AI


The current state of AI deployment in FinTech

AI models have only been widely accessible to businesses and the public for a few years. But in that short time, AI has evolved beyond a handful of small, isolated proofs of concept into a full-blown financial services toolkit.


Like a fussy mother, machine learning now has a hand in credit decisions, fraud alerts, client onboarding, customer insights and the day-to-day operations of fast-growing FinTechs. And in many cases, these models make more real-time decisions than people do.


The Financial Conduct Authority (FCA) has been refreshingly clear about the fact that AI is no excuse for falling outside the rulebook. Whether a decision is made by a human or AI model, your firm is still on the hook for:

  • Senior Managers and Certification Regime (SM&CR) accountability

  • Consumer Duty outcomes

  • Operational resilience

  • All the familiar expectations around fairness and transparency


But many well-meaning firms are stuck on the same problem: the ‘last mile’.


A model might work perfectly in a clean test environment. But real data, customer interactions and dynamic scenarios hit it harder than six shots of tequila – and much like the rest of us, that’s when it becomes a risk.


To help bridge that gap, the FCA has introduced AI Live Testing, a sort of supervised runway for launching AI for financial services in the real world. It gives firms a structured way to test advanced models in live conditions. And it ensures the regulator is involved early enough to manage the risks before they turn into findings, failures or headlines.


Book a free consultation with FinTech Compliance today to ensure your AI models are ready for FCA Live Testing.


The FCA AI Live Testing framework: A new model for collaboration

AI Live Testing sits at the centre of the FCA’s AI Lab, the regulator’s growing toolkit for getting closer to the technology it’s overseeing.


Until recently, firms were left to guess whether their systems complied with UK AI regulatory expectations. But that’s not a great long-term strategy for managing complex models that make high-impact decisions.


That’s where the FS25/5 FCA AI Live Testing feedback statement comes in.


The statement sets out a new service for firms that want to move AI out of the deployment sandbox and into real customer journeys, but need regulatory certainty before flipping the switch.


Instead of presenting a finished model and hoping it aligns with the FCA’s expectations, Live Testing gives you a structured way to test AI in real-world conditions. That way, the regulator is involved from the start rather than just getting a call when something goes wrong.


To be clear, AI Live Testing isn’t the compliance equivalent of a theme park fast-track pass. Nothing about the underlying rules changes, and you’ll still face the cost of non-compliance. You still need proper governance, documented oversight, explainability, fair outcomes, monitoring – the whole supermarket trolley for financial services compliance.


What AI Live Testing offers is early, focused engagement so you can design the right controls before your model meets real customers, not after a complaint lands in your inbox.


For FinTechs deploying AI at speed, this gives you something you’ve never had before: a way to innovate without having to guess where the regulatory landmines are buried.


How AI Live Testing works: What the FCA is actually proposing

Early engagement

Before anything goes live, the FCA works with your firm to identify specific risks linked to your AI model.


That includes:

  • Where it could behave unpredictably

  • Where decision quality might slip

  • Which customer journeys or operational processes it could affect


This isn’t just a casual chat about common regulatory compliance problems. It’s a structured review of what your model does and how it could cause actual harm.


That way, you can hopefully prevent biased outputs, opaque logic and a tendency to behave unpredictably in the face of messy real-world data.


Safeguards and controls

Once you know what risks you face, you and the FCA agree the safeguards and controls that must be put in place.


These should cover:

  • Who monitors the model

  • What gets recorded

  • When interventions happen

  • What governance evidence is required during the test period


This creates a clearly defined operating framework so that everyone knows what ‘good’ looks like before deployment. That way, your team will have a better understanding of responsibilities, how quickly errors must be corrected and what evidence you need to prove your model is behaving itself.
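

To make ‘what gets recorded’ a little more concrete, here’s a deliberately minimal sketch of a per-decision audit log in Python. The field names and structure are illustrative assumptions, not an FCA template – the point is simply that the evidence trail should be designed (and automated) before the test period starts, not reconstructed afterwards.

```python
# Illustrative only: a minimal per-decision audit log.
# Field names are assumptions, not an FCA-mandated schema.
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logger = logging.getLogger("model_audit")

def log_decision(model_id: str, model_version: str, inputs: dict,
                 score: float, decision: str, reviewer: Optional[str] = None) -> None:
    """Record who/what/when for every automated decision, so the evidence
    exists before the regulator (or an auditor) asks for it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,            # or a reference, if the raw inputs contain personal data
        "score": score,
        "decision": decision,
        "human_reviewer": reviewer,  # None means the decision was fully automated
    }
    logger.info(json.dumps(record))
```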


Continuous feedback

Once your model enters AI Live Testing, the FCA expects continuous feedback loops, not the traditional ‘deploy, document later’ approach.


You must actively monitor your model, document anomalies and maintain an ongoing dialogue with the regulator while your model is in operation.


If something unexpected happens – and with AI, something unexpected always happens – that information becomes part of the supervised testing process, not a post-incident confession.


In practice, that means flagging issues as they appear and agreeing any adjustments with the FCA, rather than quietly patching the model and hoping nobody notices.
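

If you’re wondering what ‘flagging issues in real time’ might look like in practice, here’s a deliberately simple Python sketch that compares a credit model’s live approval rate against its validated baseline. The baseline, tolerance and window size are made-up numbers for the sake of the example – yours should come from your own validation work and the thresholds you agree with the FCA.

```python
# Illustrative sketch of one real-time check; thresholds here are placeholders.
from collections import deque

class ApprovalRateMonitor:
    """Flag when the live approval rate drifts away from the validated baseline."""

    def __init__(self, baseline: float = 0.62, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline    # approval rate observed during validation
        self.tolerance = tolerance  # acceptable deviation before someone gets paged
        self.recent = deque(maxlen=window)

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if the deviation needs escalating."""
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return False            # not enough live decisions yet
        live_rate = sum(self.recent) / len(self.recent)
        return abs(live_rate - self.baseline) > self.tolerance
```

In a real deployment, the True branch would feed whatever alerting and escalation route you agreed with the FCA during early engagement.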


Particular areas of regulatory scrutiny

The FCA has been specific about the areas it wants tested and evidenced during Live Testing:

  • Bias and fairness, especially in high-impact decision making

  • Explainability, so you can justify outputs internally and externally

  • Data integrity and model drift, since AI models can quickly shift performance

  • Consumer Duty outcomes to ensure your model isn’t degrading customer experience or fairness

  • Operational resilience, so a glitchy model doesn’t destabilise your core services


Pilot testing

The FCA is currently reviewing industry feedback on FS25/5, with an application window for the second feedback cohort expected to open in January 2026.


Once it has refined AI Live Testing, the regulator expects to begin pilot testing before rolling out the service more broadly.


This gives you a limited but valuable window to prepare your AI governance framework for financial services before Live Testing becomes the norm.


Why AI Live Testing matters: Implications for FinTechs

For most FinTechs, the hardest part of deploying AI isn’t building the model. It’s working out how far they can take it without accidentally breaching UK financial services regulations.


AI Live Testing is designed to close that gap by offering a structured, supervised route to deployment. Instead of trying to interpret broad principles like Consumer Duty or SM&CR, you’ll get the chance to test your model in controlled live settings.


The FCA will be looking at the same risks you are – and, crucially, telling you whether your approach meets their expectations.


This will help you understand:

  • How your model fits within existing FCA requirements: There are currently no AI-specific rules, so the FCA wants firms to map AI risks to their existing framework

  • Whether governance is robust enough for real-world deployment: This includes decision logging, monitoring cadence and establishing who steps in when the model misbehaves

  • How the model interacts with Consumer Duty outcomes: Especially fairness, customer understanding and potential harm caused by automated decisions

  • What Senior Managers remain accountable for under SM&CR: The answer, spoiler alert, is “pretty much all of it”, even if the model is the one pushing the buttons


This structure is particularly valuable for firms building AI-heavy propositions such as credit scoring systems, automated Know Your Customer (KYC) decisioning, behavioural fraud models and risk-scoring tools.


How AI Live Testing strengthens financial services compliance

AI systems create real customer impact very quickly.


That speed is excellent from a commercial standpoint. But if the regulator hasn’t seen your guardrails, it can raise some eyebrows.


AI Live Testing gives you a way to validate a model’s behaviour before it rolls out at scale. That means fewer late-stage rebuilds, fewer unexpected compliance findings and fewer urgent board updates that start with “so, the model did something weird...”


But it’s not just about reducing risk and wasted resources. AI Live Testing also gives you a reputational advantage.


By engaging with the FCA early, you send a strong message to investors: “We’re scaling responsibly, not recklessly”.


In a market where governance is part of every investor’s due-diligence checklist, demonstrating proactive regulatory engagement can make fundraising conversations much, much smoother.


What FinTech firms should do now to prepare

You can’t stroll into AI Live Testing with a half-finished model and a PowerPoint of good intentions.


To get value from the programme – and to avoid awkward conversations with the FCA – you need to start preparing now.


A brutal review of your AI use cases

List every model you’ve got running or in development and flag where things feel a bit... fuzzy.


Can you explain the model’s decisions without summoning a data scientist?


Are there fairness concerns hiding in your training data?


Does governance begin and end with “we’ll keep an eye on it”?


If so, congratulations, you’ve just found your starting point.


Map AI systems to current FCA rules

AI Live Testing doesn’t come with a new rulebook. So you need to map each system to the FCA rules you already know:

  • Consumer Duty: Does the model treat customers fairly and avoid harm wherever possible?

  • SM&CR: Which Senior Manager is accountable when the model goes rogue at 3am?

  • Operational resilience: Could a model failure interrupt essential services?

  • Financial promotions: If the model generates content or decisions communicated to customers, are you certain they’re compliant?


Risk management

Take a look at your model risk management framework, ideally with the kind of curiosity you’d reserve for an unsupervised child wielding a hedge trimmer:

  • Who actually owns each model? (Here’s a hint: If the answer is “everyone”, it’s really “no one”)

  • How do you detect drift, and how quickly could you respond if performance shifts? (See the short sketch after this list)

  • What does your bias monitoring look like, and do you check beyond the obvious?

  • Can you explain the model’s decision in plain English without hiding behind technical jargon?
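

For the drift and bias questions above, here’s a hedged starting point in Python: a population stability index (a common way to quantify drift between training-time and live score distributions) and a first-pass comparison of approval rates across groups. Both are generic techniques rather than anything the FCA prescribes, and the column names are placeholders.

```python
# Generic checks, assuming numpy and pandas are available; names are placeholders.
import numpy as np
import pandas as pd

def population_stability_index(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
    """Quantify drift between the training-time and live score distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def approval_rate_by_group(decisions: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """First-pass fairness check: compare outcomes across a protected or proxy attribute."""
    return decisions.groupby(group_col)[approved_col].mean()
```

A PSI above roughly 0.25 is conventionally treated as significant drift, but whichever thresholds you act on should be the ones you’ve documented and agreed, not industry folklore.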


Third-party vendors

The FCA has repeatedly said that outsourcing AI models to third-party providers doesn’t transfer accountability. So saying “the provider handles that” is a one-way ticket to enforcement action.


You need to know:

  • How their models work

  • How they’re tested

  • How their data is handled

  • What controls they can prove, not just promise


Documentation

While you’re doing all this, start pulling together the documentation you need right away (there’s a sketch of how it might be structured after the list below).


That includes:

  • Data lineage: Where inputs come from and why they’re trustworthy

  • Validation results: Including tests that didn’t go well

  • Governance structures: Who signs off on what, and when

  • Risk assessments: Real ones, not the “everything is low” kind
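

If it helps, here’s one way those artefacts could be pulled into a single, machine-readable record – a minimal Python sketch where every field name and value is an illustrative assumption rather than a regulatory template.

```python
# Illustrative sketch of a minimal, machine-readable model record.
# The fields mirror the bullets above; they are assumptions, not a regulatory template.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    owner: str                            # a named individual, not "the data team"
    data_lineage: list[str]               # where inputs come from and why they're trusted
    validation_results: dict[str, float]  # include the tests that didn't go well
    signed_off_by: str
    sign_off_date: str
    risk_rating: str                      # e.g. "high" - justified, not defaulted to "low"
    known_limitations: list[str] = field(default_factory=list)

record = ModelRecord(
    model_id="credit-scoring-v3",
    owner="Jane Doe (accountable SMF holder: John Smith)",
    data_lineage=["bureau_data_2024Q4", "open_banking_feed"],
    validation_results={"auc": 0.81, "psi_vs_training": 0.07},
    signed_off_by="Model Risk Committee",
    sign_off_date="2025-11-30",
    risk_rating="high",
    known_limitations=["thin-file applicants under-represented in training data"],
)
```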


This is also the moment to pick which of your AI models might be suitable for early participation in FCA Live Testing pilots.


Choose systems that are high impact, high complexity and high risk, as this is where FCA guidance on AI models will make the biggest difference.


Training

Finally, you need to deliver comprehensive training on AI in financial services to your Senior Managers and compliance teams.


Once an AI model goes live, accountability doesn’t magically shift to the algorithm.


Senior management function holders need to understand what the model does, how decisions are monitored, and what to do when something unexpected pops up (which it will, usually on a Friday afternoon).


Preparing now means fewer surprises later, and a smoother path into AI Live Testing when the FCA opens the door.


A turning point for AI innovation: Now’s the time to act

AI Live Testing is a clear signal that the FCA is changing how it supports innovation.


But the firms that benefit most won’t be the ones that simply skim the FS25/5 feedback statement and hope for the best. They’ll be the ones that put proper governance, documentation and oversight in place before the FCA comes knocking.


That’s exactly where FinTech Compliance gives your firm a real advantage.


We’re not a generalist consultancy that dabbles in financial services. We’re the only compliance consultancy in the UK that specialises exclusively in FinTech. That means your models, use cases, scaling challenges and regulatory pressures aren’t niche to us – they’re the entire job.


While you’re preparing for AI Live Testing, we can support you with a wide range of retainer services and ad-hoc compliance requests, including:

  • Pre-deployment AI reviews: We examine your model logic, decision pathways, data dependencies and customer impact so you can identify problems before they become regulatory findings

  • Governance and control framework design: We help you build the oversight structures the FCA expects, including clear ownership, defined escalation routes, documented monitoring and controls that work in real production environments

  • Documentation and model-risk guidance: We prepare the artefacts you’ll need for Live Testing, such as data lineage, validation evidence, fairness assessments, Consumer Duty impact analysis and model-risk evaluations

  • Retainer services and ongoing advisory: Your AI systems will inevitably evolve, drift, and misbehave – but we can be on hand to keep them compliant and stable even in a changing regulatory landscape


Ready to deploy AI with confidence?


Book a free consultation with the experts at FinTech Compliance today.

 
 
 
