AI regulation and the Goldilocks problem – too hot, too cold, or just right?
Why uncertainty, not overregulation, is the real risk to AI adoption in the UK.


A patchwork of regulation
The EU AI Act recently celebrated its first birthday, with its first phase – banning ‘unacceptable risk’ AI – coming into force in February 2025. While we haven’t yet seen any major fines for non-compliance, the Act’s more significant requirements for high-risk systems apply from 2 August 2026. That’s when we expect the big wave of enforcement to start rolling in.
In contrast, the UK doesn’t have a single, overarching AI framework for financial services. Instead, it takes a principles-led approach to regulating AI, applying existing laws and sector-specific regulations such as UK GDPR, the Data Protection Act and the Data (Use and Access) Act (DUAA).
The FCA expects firms to comply with existing rules – the Consumer Duty, SM&CR, DORA and the PRA’s model risk management principles. In its February update, it confirmed that it doesn’t plan to introduce additional AI-specific regulation.
The result is a fragmented, evolving landscape, open to interpretation – and confusion.
A moving target
As with any new technology, there’s ongoing debate over the regulation of AI. How prescriptive should rules be? Who should enforce them? How do you regulate something that is evolving so quickly?
We’ve seen this pattern before – peer-to-peer lending, payday loans, crypto – and I have no doubt we’ll shortly see it again around quantum computing.
What is clear is that, whilst balancing innovation and risk is always a challenge, the UK is lagging behind Europe, and that lack of regulatory clarity stifles innovation and investment. My job involves advising over 2,000 regulated firms on sourcing, modernising and optimising their technology, and I see this tension in action. Senior technology leaders and boards are keen to take advantage of the efficiencies that AI can deliver, but nervous of regulatory risk, with many adopting a ‘wait and see’ approach. Whilst understandable, this uncertainty slows decision-making and shrinks the window for gaining commercial advantage as an early adopter.
The cost of waiting
The risks of AI are no longer theoretical. There are some headline-grabbing litigation events globally that are worth reflecting on:
- In the UK, Getty Images vs Stability AI is believed to be the first UK generative AI copyright trial.
- In the US, The New York Times vs OpenAI and Microsoft highlights disputes around AI training data.
- Cases involving biometrics, such as those in the US where Clearview AI allegedly scraped billions of images from public websites and social media without consent.
There are also more complex, human-centred risks emerging. Recently, The Sunday Times reported on a pending US lawsuit in Florida alleging that a man died by suicide after developing a delusional romantic attachment to a chatbot.
Individually, these cases are varied; collectively, however, they point to the fact that we’re defining the rules of AI in real time – often through litigation rather than clear, comprehensive regulation. Regulation via litigation is clumsy and time-consuming and, more importantly, wastes the valuable window for first-mover advantage.
I see strong parallels with some of the early litigation against cryptocurrency. As these AI cases reach their conclusions, they’ll shape how organisations deploy AI ethically, and how they build in audit and assurance.
The price of being too early to the party
Regulators and governments find themselves in a difficult spot. AI represents a significant economic opportunity, with research published by PwC in 2025 suggesting that AI adoption could contribute £79.3bn to the UK economy by 2035. On the other hand, poorly governed AI carries ethical, legal and societal risks, so regulators must balance protection from harm with fostering innovation – a dynamic equilibrium that shifts as the technology evolves.
Much noise has been made at government level about ramping up AI adoption, particularly around building the infrastructure AI needs and increasing UK compute capacity. The government’s approach outlines five principles – safety, transparency, fairness, accountability and redress – but we’re still lacking clarity on how those principles translate into day-to-day decisions. This is particularly true in financial services, where AI use cases are both high value and high risk.
The Goldilocks challenge
AI regulation is a balancing act. Such fast-moving, potent technology makes for a constantly shifting landscape, and regulation, like Goldilocks’s porridge, is exquisitely hard to get ‘just right’. Too much regulation, too early, risks stifling innovation; too little, too late, increases the likelihood of harm – and reactive enforcement. The challenge for many organisations is simply uncertainty, and only time will tell how successfully we balance innovation with preventing harm.
Organisations can’t afford to stand still
We understand that organisations can’t afford to stand still while the story unfolds. At Softcat, we help customers move forward with confidence in an uncertain regulatory landscape, combining practical advisory support with a broad ecosystem of specialist vendors to design, implement and govern AI in a way that is responsible, secure and compliant. This means turning principles like transparency, accountability and auditability into day-to-day controls, so organisations can innovate at pace without losing sight of risk and regulatory expectations.
If you’d like to talk through your options, please contact your Softcat Account Manager, or if you’d like to explore our Data, Automation and AI solutions and services, please click here.