From AI adoption to accountability
AI adoption across financial services is no longer a future‑facing ambition. For many firms, it is already embedded in customer interactions, onboarding, monitoring, fraud detection and decision support. What began as tightly scoped experimentation has moved into operational reality, often at speed and scale.
What is changing is not the pace of adoption, but the nature of the challenge it creates. Early conversations centred on efficiency, automation and competitive advantage. Increasingly, however, attention is shifting to whether firms have the capability to govern AI effectively once it becomes business‑critical.
That shift is being driven as much by scrutiny as by innovation. UK Government policy continues to support AI development – reflected in initiatives such as the AI Opportunities Action Plan – but supervisory focus is sharpening on how firms evidence control, manage risk and maintain accountability in environments where decisions are automated, data‑driven and increasingly opaque.
AI governance in financial services: closing the readiness gap
Across the sector, many firms are now well past the point of asking whether AI should be used. The Treasury Select Committee’s recent inquiry and FCA engagement with industry reflect just how widespread adoption has become, particularly among larger institutions.
What varies far more is how effectively AI is governed once it becomes embedded into day‑to‑day operations. In practice, AI governance in financial services is still often built on models designed for more deterministic systems, where decision paths could be traced and challenged with relative ease. As AI is deployed at greater speed and scale, those approaches are being stretched.
Existing regulatory regimes, including Consumer Duty, SM&CR and operational resilience, continue to apply. The challenge for firms is how AI governance is operationalised in environments where decision‑making is less visible, more complex and frequently supported by third‑party technology. This is where gaps between AI use and operational readiness most often emerge, placing new demands on oversight structures, control frameworks and data foundations.
Supervision is becoming more evidence‑led
Regulators are not seeking to halt innovation. The FCA’s 2026–27 annual work programme and initiatives such as the Mills Review suggest a pragmatic approach: examining whether existing frameworks remain fit for purpose, rather than rewriting the rulebook from scratch.
At the same time, supervisory expectations are becoming more demanding. There is growing emphasis on demonstrable outcomes, usable management information (MI) and the ability to explain how risks are identified, managed and escalated once AI systems are live. Regimes such as the Critical Third Parties framework underline that this scrutiny extends beyond firms themselves to their wider technology ecosystems.
For firms, this changes the focus of AI risk from policy design to operational delivery.
Where capability constraints start to show
The hardest issues rarely emerge at the point of deployment. They surface later, once systems are scaled, embedded and relied upon.
Questions that were once theoretical become immediate and practical:
- how outcomes are monitored in fast‑moving processes
- what meaningful oversight looks like when models are complex
- where accountability sits when responsibility is shared across functions, suppliers and automated systems
It is no longer sufficient to acknowledge these risks. Firms are expected to demonstrate, on an ongoing basis, that those risks are being actively managed.
This is where the real gap sits.
What effective AI use depends on
As firms move beyond experimentation and try to operationalise AI, a more fundamental constraint becomes clear. In most cases, the limiting factor is not AI capability itself, but the combination of general‑purpose generative AI (GenAI) tools and weaknesses in the quality, consistency and reliability of the underlying data.
Mainstream GenAI and LLM‑based tools are highly effective at text extraction, summarisation and surface‑level pattern recognition. They can turn large volumes of content into something more digestible. However, they are not designed to support regulated decision‑making. They do not inherently understand what financial advice data represents, how values relate to one another, or why one data point should be trusted over another.
Critically, these models often struggle to provide the explainability, traceability and auditability that regulators expect. They can silently resolve conflicts, obscure data lineage, and produce outputs that sound confident even when they are incomplete or wrong. As a result, firms frequently end up increasing human oversight rather than reducing it – spending more time validating outputs, resolving inconsistencies and evidencing compliance.
By contrast, purpose‑built predictive AI models are designed around structured, trusted data. They are trained to understand how advice data is created, how it changes over time, and when it must be corrected rather than inferred. This enables predictive analysis, consistent MI and defensible insights that can be traced back to source and explained to regulators.
The more reliable, explainable and auditable the data foundation becomes, the more safely AI can be applied. In regulated environments, value does not come from applying GenAI and LLM models to unstructured data, but from combining selective GenAI capabilities with predictive AI operating on trusted, regulator‑ready data. That is what allows automation to scale – and risk to come down.
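To make traceability concrete, here is a minimal sketch of one way a decision record might carry its own data lineage, so that every automated outcome can be traced back to source. It is illustrative only: the names (SourcedValue, DecisionRecord) and fields are hypothetical, not a prescribed design or any particular vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class SourcedValue:
    """A single input value plus a reference to the system it came from."""
    name: str           # e.g. "annual_income"
    value: object
    source_system: str  # e.g. "advice_platform"
    source_ref: str     # record identifier in that system

@dataclass
class DecisionRecord:
    """Captures an automated decision together with its full lineage."""
    model_id: str                  # model name and version used for the decision
    inputs: list[SourcedValue]     # every input, each traceable to source
    output: str
    rationale: str                 # human-readable explanation of the outcome
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def audit_trail(self) -> list[str]:
        """One line per input, tracing the decision back to its sources."""
        return [
            f"{sv.name}={sv.value!r} from {sv.source_system}:{sv.source_ref}"
            for sv in self.inputs
        ]

# Hypothetical usage: a suitability check whose inputs stay traceable end to end.
record = DecisionRecord(
    model_id="suitability-model-v2.1",
    inputs=[
        SourcedValue("risk_profile", "balanced", "fact_find", "FF-1042"),
        SourcedValue("annual_income", 52000, "advice_platform", "AP-88731"),
    ],
    output="recommendation_within_mandate",
    rationale="Risk profile and income both support the recommended product.",
)
print("\n".join(record.audit_trail()))
```

The particular design matters less than the principle: if every output can name the model version, the inputs and the systems those inputs came from, oversight teams spend their time challenging decisions rather than reconstructing them.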
Inside firms, the skills question is becoming central
As AI becomes embedded, responsibility is shifting more visibly onto people. Risk and compliance teams are increasingly involved earlier, while senior managers are being asked to take accountability for systems they may not have designed themselves.
In many cases, AI is exposing long‑standing weaknesses: unclear data ownership, inconsistent documentation or fragile oversight models. When decisions are made at speed and scale, those weaknesses become harder to defend to boards, regulators or customers.
This places a premium on experienced leadership, operational judgement and the ability to embed governance into day‑to‑day delivery, rather than treating it as a parallel control function.
Why the pressure builds post‑adoption
Regulatory expectations around AI continue to evolve, shaped by what supervisors are seeing once systems are live, scaled and embedded into business‑critical processes. That is where many of the most difficult questions begin to surface – not at the point of experimentation, but later, when AI is relied on day to day.
As AI moves deeper into operational workflows, firms are increasingly expected to demonstrate how outcomes are monitored, how risks are identified and mitigated, and how accountability is maintained in practice. In this context, strengthening governance, data foundations and oversight mechanisms tends to matter far more than trying to anticipate the precise direction of future regulation.
For financial services leaders, the issue is no longer whether AI can deliver value. It is whether that value can be sustained – and defended – once scrutiny shifts from intent to evidence.
Momenta is supporting governance, evidencing and operational oversight
As AI becomes embedded in day‑to‑day operations, the challenge for many firms is less about technology and more about capability. Momenta works with financial services organisations to strengthen governance, evidencing and operational oversight by building the leadership and delivery capacity needed to operate safely at scale. This includes deploying experienced interim leaders to embed controls and accountability into operational teams, supporting business‑as‑usual delivery through enhanced oversight models, and helping firms strengthen the governance structures that underpin regulatory confidence.
If you are reviewing how AI is being used across your organisation, or how outcomes are evidenced once systems are live, Momenta can support with practical, operational expertise shaped by real regulatory experience.