An AI maturity framework for regulated financial services
Introduction
Most firms in Jersey's financial services sector know AI matters. Far fewer know what to do about it.
Self-assessment can be unreliable: people tend to overestimate AI maturity in areas they understand well, such as technology, while missing gaps in the organisational factors that matter most to success.
To help firms, we’ve put together an AI maturity framework for offshore financial services, alongside a free AI maturity assessment that they can use to understand where they stand and where their gaps are.
Many existing frameworks are focused on technology, but research and our experience consistently show that most AI failures are organisational rather than technological. Instead, we've focused on the fundamental factors that best predict success.
The framework
Our framework covers four areas:
- Strategy: Research shows that firms with a clear plan aligned to business priorities are 2-3 times more likely to get real value from AI.
- Governance: AI introduces risk, whether from tools you adopt deliberately or from unauthorised use by staff.
- Automation: How big is the gap between where you are today and where you want to be?
- People: Plans and policies only matter if your people can put them into practice.
Strategy gives you direction, governance reduces risk, automation saves time and money, and your people make it all happen.
We’ve deliberately de-emphasised specific technologies relative to the organisational factors that best predict success, as shown in research on AI in particular1 and on technology adoption more generally2.
1 https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf
Strategy
Buying a few Copilot licences is a purchasing decision, not a strategy. Strategy ensures that AI use is aligned with your business priorities, rather than patchy and ad hoc.
McKinsey found that organisations seeing the largest returns from AI are more likely than others to follow a range of best practices. Of these, three of the top four practices came under strategy. The top-performing firms were twice as likely as other firms to have a clearly defined roadmap, and 1.5 times as likely to have leadership alignment on value creation.3
Similarly, PwC found that firms with the right foundations in place were three times as likely to see simultaneous cost and revenue benefits from AI.4
Potential mistakes include not having a strategy, failing to align the strategy with business goals, or having a strategy that only exists on paper.
Because of this, the assessment questions look at whether there is a strategy in place, how engaged the board is with AI, and how well this is translating to using AI in practice. Senior engagement with AI is one of the best predictors of success, and the ability both to pilot AI projects and to convert those projects to production are key signs of maturity.
Most pilots fail, with estimated success rates consistently below 50%. A good pilot tests a use case that's aligned with business priorities, has clearly defined success criteria, and is structured to maximise learning. Too often, firms pilot whatever seems easiest or most impressive rather than what would deliver the most value.
But running a pilot allows you to surface issues cheaply. Skipping it just moves failures to somewhere more expensive instead. Piloting the wrong thing is a common mistake, but failing to run pilots at all is worse.
Governance
Staff are probably already using AI whether you've sanctioned it or not. Good governance means you can move quickly without creating regulatory exposure.
Without the right controls in place, staff may input personal, sensitive, or confidential information into unsanctioned systems, produce inaccurate output without proper review, or use AI to make decisions in a way that is inconsistent with client or regulatory expectations.
Unauthorised tool use is common. ManageEngine found that most professionals admitted to inputting information into an AI tool without approval, often including confidential information.5
Hallucinations (cases where AI produces plausible but incorrect output) are also common and can be difficult to spot. Numerous cases of lawyers submitting non-existent citations have been documented. Staff may already be using AI for tasks like completing regulatory filings or producing KYC paperwork even if you haven’t permitted it, so what steps are you taking to ensure that those documents don’t contain errors?
Managing these risks requires someone to own them. In the absence of a named individual at a senior level, documents can go stale, responses are uncoordinated, and nobody is incentivised to make the case for appropriate investment and resource. If everyone is responsible, nobody is.
3 https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
4 https://www.pwc.com/gx/en/ceo-survey/2026/pwc-ceo-survey-2026.pdf
Automation
Most companies in the offshore finance space still perform a significant amount of manual work, often because previous approaches to automation weren’t sufficiently flexible to provide value.
The greatest gain comes when AI allows you to fundamentally rethink processes. BCG found that companies that create the most value with AI focus 80% of their investment on redesigning end-to-end workflows and building new products and business models.6 Similarly, McKinsey found that the companies getting the most value from AI were 2.8 times as likely to fundamentally redesign their workflows.7
None of this works without the right infrastructure. Generative AI is better able to deal with untidy data than previous automation technologies were, but it still requires access to the relevant context. OpenAI found that deep system integration and data readiness are two of the six main characteristics of leading firms8, while McKinsey found that the firms getting the most value from AI were more than twice as likely as other firms to have the right infrastructure and architecture in place.9
People
Policies and tools are only as useful as the behaviours around them. Research consistently shows that getting this right matters more than the technology itself.
People covers a range of different areas, such as whether your staff know what they’re allowed to do and whether they’ve had adequate training, in addition to less frequently considered factors such as whether staff feel comfortable using AI and whether use is modelled by senior management.
Basic awareness of appropriate AI use and data protection concerns can be surprisingly poor. ManageEngine found that the top justification given for unauthorised AI use was “I used my own device, so I thought it was fine”, with other common responses including “I didn’t realise I needed approval”.10
Even if staff are aware of the risks, they may still use unauthorised systems if they are not held accountable, no alternative is provided, or they see everyone else using them. In one BCG survey, 54% of respondents said they would use AI tools even if their company had not authorised them.11
When staff do receive training, it is often limited and of low quality. BCG found that only 36% of staff considered their training to be sufficient, and identified factors such as in-person training, access to a coach, and receiving at least five hours of training as predictive of success.12
McKinsey found that senior leaders at companies getting the most benefit from AI are almost twice as likely to actively drive AI adoption and model its use.13 If senior staff aren’t using AI or are using it poorly (e.g., by inputting confidential data to personal ChatGPT accounts) then it is unsurprising if more junior staff show the same behaviours.
Another frequently underappreciated factor is psychological safety. MIT Technology Review found that 84% of executives have observed a connection between psychological safety and AI outcomes.14 Junior staff often have the most exposure to the actual pain points of day-to-day work, so they can be a rich source of ideas, but they will only put those ideas forward if they feel comfortable doing so. A lack of comfort can also lead to staff hiding their AI use or failing to flag risks.
5 https://www.manageengine.com/survey/shadow-ai-surge-enterprises/
6 https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain
7 https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
8 https://openai.com/index/the-state-of-enterprise-ai-2025-report/
9 https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
What the results tell you
Based on your answers, we assign an overall maturity level, alongside a maturity level for each area. It’s a short assessment, so these are a starting point rather than a final verdict.
Most firms will score Foundational, Developing, or Established. These levels represent an opportunity to improve, particularly for firms scoring low on Automation, while lower scores on Governance and People can also be a sign of risk.
Advanced and Leading firms are rarer but are in a strong position to see further risk reductions or even transformative impact on their business model by addressing any remaining weaknesses.
Foundational firms either haven’t started with AI yet or need to put some of the fundamentals in place before proceeding. These firms have the greatest opportunity to improve their maturity but need to be conscious of the risks they might currently be exposed to, such as shadow AI (staff use of unauthorised AI tools).
Developing firms have usually begun experimenting with AI but still lack some of the fundamentals required to get the best value without taking on unnecessary risk. Every firm is different, but perhaps they use a general-purpose chatbot such as Microsoft Copilot and have some basic governance processes in place while lacking an explicit strategy, clear accountability, and adequate training.
Established firms often score highly in some areas and not in others. Maybe they have a clear governance framework in place but still struggle with shadow AI and manual processes, or maybe they have significant automation but aren’t getting full value from it due to its ad hoc nature. Many firms that plateau here could see significant risk reductions or value gains from working on their weaker areas.
Advanced firms are well ahead of most in the sector but still have some gaps. They can move into the next tier by addressing remaining weak points, ensuring that policies exist in practice rather than just on paper, and using AI to reshape processes and unlock new revenue opportunities rather than just as a drop-in that makes existing processes more efficient.
Leading firms score highly across all areas. They have a clear strategy, proper controls, extensive (but appropriate) AI use, and the right processes, training, and environment in place to make it all work. In a rapidly shifting landscape, their main challenge is staying at the frontier as regulations and technology evolve. Very few firms are at this level, so if you are, we would love the opportunity to learn more about what you are doing.
10 https://www.manageengine.com/survey/shadow-ai-surge-enterprises/
11 https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain
12 https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain
13 https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
14 https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era
Next steps
The free assessment is available from https://assessment.sindriconsulting.com/.
There are 12 questions in total, which typically take about five minutes to complete.
We also offer a more detailed diagnostic conversation if you want to go deeper.
You can email us at info@sindriconsulting.com to discuss.
Before AI becomes a regulatory question, assess your AI exposure
Confidential. Designed for regulated firms. No obligation.
