The presentation area at Fintech Connect in London last week was packed, every seat taken, for a topic that has moved from peripheral interest to strategic imperative: deploying AI effectively in financial services.
As I spoke about AI quality assurance, and how strong governance frameworks help fintech teams deploy AI safely and with confidence, the questions afterwards made it clear these were people wrestling with real challenges around accuracy, performance, explainability and robustness in their own AI initiatives.
It told me everything I needed to know about where the industry is: past the 'should we?' phase and deep into the 'how do we do this responsibly?' phase. The conversations that followed reinforced this, with specific questions about assurance frameworks, regulatory navigation and scaling challenges.

During the panel debate, a fellow panellist confidently declared that AI agents would never improve at tasks without constant human intervention, and that we'd always need to hold their hand through every iteration. The research tells a different story: on certain tasks, AI agents are already demonstrating the ability to optimise their own strategies, learning and improving autonomously.
That exchange crystallised something I've observed across the sector. Whilst there's a massive desire to harness AI's efficiencies, we're still wrestling with contradictory views about its capabilities and limitations. Financial institutions recognise this is a journey they need to embark on: not just to learn what AI transformation actually looks like, but to capture the productivity gains that are increasingly non-negotiable in today's competitive landscape.
Start small, but start with assurance
For organisations embarking on AI-driven digital transformation, the path forward is clear:
- Begin with proofs of concept – targeting simple, well-defined processes. Think internal query routing, straightforward operational workflows or low-risk back-office tasks like document classification or expense categorisation.
- Develop your assurance model from day one – this isn't something you bolt on after the fact.
- Scale methodically – your proofs of concept and assurance methods should mature in tandem, increasing complexity only as your governance allows.
- Engage specialists early – whether that's partnering with firms like 2i, who specialise in designing assurance models for regulated environments, or building dedicated internal capabilities.
If you reach the end of your proof of concept without an assurance model, you won't be able to scale.
This is perhaps the most critical insight from the event. Too many organisations treat assurance as a compliance checkbox rather than a strategic enabler. They build, they test, they celebrate success and then discover they have no framework for scaling safely. At that point, they're essentially starting over, having learned expensive lessons about what not to do.
2i has seen this pattern repeatedly across both financial services and public sector clients. Drawing on experience from both environments, we've developed assurance frameworks that allow organisations to move quickly while maintaining the rigorous oversight that regulators demand.
Don’t wait for perfect clarity
Here's an uncomfortable truth: financial services will wait a long time for regulators to provide crystal-clear boundaries for every AI use case. The technology is evolving faster than policy frameworks can keep pace. That ambiguity over where you can and cannot deploy AI, and what constitutes 'safe' usage, is unlikely to resolve soon.
Here's the thing: the real risk isn't regulatory punishment for thoughtful experimentation. It's paralysis - organisations waiting for the perfect rulebook will find themselves years behind competitors who learned to operate responsibly within uncertainty.
The FCA has established regulatory sandboxes precisely to help firms navigate this ambiguity. They're actively supportive of innovation when it's approached responsibly, with proper risk management and clear learning objectives. The infrastructure exists to experiment safely. Use it. Start small, leverage these frameworks and expand your AI footprint deliberately as you build confidence and capability.
This is exactly why robust assurance models matter so much. They allow you to move forward without waiting for perfect regulatory clarity, because you've built the governance structures that can flex and adapt as rules evolve.
Learning from the Public Sector's pragmatic approach
Something remarkable is happening in UK government departments - they're outpacing financial services in practical AI adoption. The reason is straightforward. They have an explicit mandate from Number 10: use AI, drive efficiency, do more with existing resources. It's not a suggestion or a strategic aspiration. It's a directive backed by executive authority.
This creates permission to experiment, to fail fast, to learn publicly. Public sector departments are building capability through deliberate practice. Financial services, despite having more resources and often more sophisticated technical teams, is moving more cautiously, sometimes paralysed by the very regulatory ambiguity we discussed.
If financial services adopts this same pragmatic mindset, I expect we'll see a massive productivity surge. The capabilities are there and the technology is ready, but what's needed is organisational commitment and the assurance frameworks that make bold experimentation safe.
Choose your proofs of concept wisely
AI isn't perfect, just as people aren't. The critical skill is targeting it appropriately. Deploy AI against poorly defined processes or ambiguous decision-making contexts and you'll get results that damage trust and waste resources. Choose well-structured, rules-based workflows where success criteria are clear, and you'll build the case for expansion.
Think about straightforward operational tasks: routing customer queries to the right department based on clear classification rules, flagging expense reports that fall outside policy parameters, or extracting standardised data from structured documents. By starting here you can build organisational confidence, demonstrate ROI and create the foundation for tackling more complex challenges.
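To make 'clear success criteria' concrete, here is a minimal sketch of the expense-flagging example above. It is illustrative only: the categories, thresholds and field names are assumptions, not any firm's real policy. The point is that a rules-based workflow like this has an unambiguous, auditable definition of success - something an assurance model can test against.

```python
from dataclasses import dataclass

# Illustrative policy limits per category -- these thresholds are
# assumptions for the sketch, not real policy parameters.
POLICY_LIMITS = {"meals": 50.0, "travel": 250.0, "accommodation": 180.0}

@dataclass
class Expense:
    category: str
    amount: float

def flag_out_of_policy(expenses):
    """Return expenses that exceed their per-category policy limit.

    Expenses in unknown categories are also flagged, so nothing
    slips through unreviewed -- a clear, testable success criterion.
    """
    flagged = []
    for e in expenses:
        limit = POLICY_LIMITS.get(e.category)
        if limit is None or e.amount > limit:
            flagged.append(e)
    return flagged

if __name__ == "__main__":
    claims = [
        Expense("meals", 32.0),      # within policy
        Expense("travel", 400.0),    # over the travel limit
        Expense("crypto", 10.0),     # unknown category
    ]
    for e in flag_out_of_policy(claims):
        print(f"flagged: {e.category} {e.amount:.2f}")
```

Because the rule is deterministic, every flagged (or missed) item can be traced to a specific threshold - exactly the kind of explainability that is far harder to demonstrate for ambiguous, judgement-heavy tasks.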
The efficiency gains are real. The technology is proven. Organisations have the tools to experiment safely. What's needed is the organisational courage to start and the discipline to build robust assurance.