When AI Acts: Governing Agents and Agentic AI | Softcat

When AI Acts: Governing Agents and Agentic AI

A follow-on blog responding to questions raised after our previous article on AI governance.

Andrew Pearch

Cyber Assurance Lead

Following our blog on AI governance, there were some brilliant questions raised by readers. We thought it would be helpful to create a follow-on blog to answer these questions and provide further guidance.

When AI stops advising and starts acting

The questions raised after my last blog all centred on the same issue: does existing AI governance still work once AI is no longer just a tool, but an actor? The answer is uncomfortable but clear.

Most current AI governance assumes a simple model: a human prompts. The system responds. A human remains in control. However, AI agents and agentic AI break that model completely. An AI agent can be given an objective, access to systems and permission to act. It can decide what to do next, which tools to use and when to act, often without human intervention. At that point, AI isn’t just offering advice. It’s making decisions and taking actions on the organisation’s behalf.
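To make the shift concrete, here is a minimal sketch in Python of the difference between advising and acting. The tool names and the plan() stand-in are hypothetical; a real agent would choose its next step via a model call.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

def issue_refund(order_id: str) -> str:
    # In a real deployment this call would move money; here it just reports.
    return f"refund issued for {order_id}"

def reset_account(user_id: str) -> str:
    return f"account reset for {user_id}"

TOOLS = {t.name: t for t in (Tool("issue_refund", issue_refund),
                             Tool("reset_account", reset_account))}

def plan(objective: str) -> tuple[str, str]:
    """Stand-in for a model call: a real agent would pick the next step via an LLM."""
    if "refund" in objective:
        return "issue_refund", "order-123"
    return "reset_account", "alice"

def advisory_system(objective: str) -> str:
    tool, arg = plan(objective)
    return f"recommendation: run {tool}({arg})"  # a human decides whether to act

def agent(objective: str) -> str:
    tool, arg = plan(objective)
    return TOOLS[tool].run(arg)  # the agent acts itself; no human in the loop

print(advisory_system("handle refund request"))  # advice only
print(agent("handle refund request"))            # action taken on the org's behalf
```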

This is not theoretical

Agentic AI is already in use across IT operations, security automation, customer interaction, analytics and optimisation. We see it in use across industries – a customer chatbot issuing refunds, or an IT system automatically resetting accounts, for example. The attraction is obvious. Agents reduce latency, remove manual effort and operate continuously.

The governance challenge is equally obvious. When an agent acts, the organisation acts. Accountability does not evaporate because the decision path was automated. After an incident, regulators will not accept ‘the agent decided’ as an explanation. They will ask who approved its use, what authority it was given, what limits were set, how it was monitored and how risk was understood and accepted in advance.

Why AI agents stress-test governance

Traditional AI governance focuses heavily on model accuracy, bias and data quality, and works best when people make decisions, with systems simply following instructions. AI agents blur that line and introduce new pressure points. Accountability becomes unclear because agents can act independently, require significant access to systems and change how they behave over time. Without clear ownership, defined boundaries and ongoing oversight, these systems can operate in ways that are difficult to explain, justify or control.

These are precisely the conditions under which weak governance frameworks fail. The failure is not technological. It is organisational.

Delegated authority is the real governance issue

From a governance perspective, AI agents are not just advanced tools. They operate with authority granted by the organisation, using identities, credentials and access paths that allow them to affect systems, data and outcomes without human approval. This matters because it brings agentic AI back into familiar regulatory territory. The question is no longer “is the model accurate?” but “who authorised this behaviour, under what conditions and with what oversight?” Clear ownership, clear limits and clear oversight are what make delegated authority safe.
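As a sketch of what explicit delegated authority can look like, the example below declares a named owner, an allowlist of permitted actions and a quantified limit, and refuses anything outside that envelope. AgentAuthority and its fields are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass

@dataclass
class AgentAuthority:
    owner: str                   # accountable business owner, not "the IT team"
    allowed_actions: set[str]    # explicit allowlist, not assumed good behaviour
    max_refund_gbp: float = 0.0  # quantified limit on a sensitive action

    def authorise(self, action: str, amount_gbp: float = 0.0) -> bool:
        if action not in self.allowed_actions:
            return False  # outside delegated authority: refuse and escalate
        if action == "issue_refund" and amount_gbp > self.max_refund_gbp:
            return False  # in scope but over the limit: needs a human decision
        return True

authority = AgentAuthority(owner="Head of Customer Service",
                           allowed_actions={"issue_refund"},
                           max_refund_gbp=50.0)

assert authority.authorise("issue_refund", 25.0)       # within delegated limits
assert not authority.authorise("issue_refund", 500.0)  # over limit: escalate
assert not authority.authorise("reset_account")        # never delegated: refuse
```

The design point is that refusal is the default: anything not explicitly delegated, or over the stated limit, escalates to a human rather than proceeding.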

Applying the CAF lens to AI agents

In the UK, one of the most widely used approaches for managing cyber risk is the National Cyber Security Centre’s Cyber Assessment Framework (CAF). While it was not written specifically for AI, it provides a practical way to think about responsibility, boundaries and oversight, which are all issues raised by AI agents.

Under CAF control A.2, organisations must take ownership of the risk created by an agent and define the decisions it is permitted to take. Risk ownership cannot sit with the technology team by default; it must sit with accountable business leadership.

CAF B.1 requires clear policies that define where agents can and cannot operate, rather than relying on assumed good behaviour or technical safeguards alone.

CAF B.5 becomes critical because agents require identities, and those identities must be constrained, monitored and auditable. An all-powerful agent identity is simply a privileged account by another name.

CAF B.6 forces discipline around training data, live data access, outputs, retention and any learning or adaptation over time. Uncontrolled data flows are how agentic systems drift out of tolerance without anyone noticing.
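A hedged illustration of those two controls in combination: the sketch below gives an agent a least-privilege identity and writes an append-only audit record for every attempted action, so denied or drifting behaviour is visible rather than silent. The class and field names are assumptions for illustration.

```python
import json
import time

class AgentIdentity:
    """A scoped agent credential whose every action leaves an audit record."""

    def __init__(self, agent_id: str, scopes: set[str], audit_path: str):
        self.agent_id = agent_id
        self.scopes = scopes          # least privilege: only what the task needs
        self.audit_path = audit_path  # append-only evidence trail

    def act(self, scope: str, detail: str) -> bool:
        allowed = scope in self.scopes
        record = {"ts": time.time(), "agent": self.agent_id,
                  "scope": scope, "detail": detail, "allowed": allowed}
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # reviewable by audit and IR teams
        return allowed

ident = AgentIdentity("refund-agent-01",
                      scopes={"orders:read", "refunds:write"},
                      audit_path="agent_audit.jsonl")
ident.act("refunds:write", "refund order-123")  # permitted and logged
ident.act("customers:export", "bulk export")    # denied and logged, not silent
```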

Evidence matters more than intent

A recurring mistake in early agent deployments is reliance on intent rather than evidence. Organisations may say the agent was designed to behave responsibly, or that guardrails were considered, but that will not withstand scrutiny. Regulators and auditors will look for evidence that controls were designed, implemented, monitored and reviewed. They will expect to see approval decisions, defined authority boundaries, access reviews, monitoring outputs and incident response considerations that explicitly include agent behaviour.

What good governance looks like in practice

Governing AI agents does not need an entirely new set of rules. It requires existing governance disciplines to be applied more clearly and consistently. Good governance for AI agents includes:

  • Clear ownership – with a named senior owner.
  • Defined limits on authority – with a clear description of what the agent can and cannot do.
  • Controlled access – with only required permissions, reviewed regularly.
  • Ongoing monitoring – with clear records, so that unusual behaviour can be spotted.
  • Regular review and challenge – revisited as business needs change.

Most importantly, ownership of the risk agents create should be clear and defensible at board level.
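One way to turn that list into evidence rather than intent is to hold it as structured records that can be checked automatically, for example flagging agents whose reviews have lapsed. The register format below is an illustrative assumption, not a prescribed standard.

```python
from datetime import date, timedelta

# Illustrative register entry mapping directly onto the five points above.
agent_register = {
    "refund-agent-01": {
        "owner": "Head of Customer Service",        # clear ownership
        "authority": ["issue_refund up to £50"],    # defined limits on authority
        "permissions_reviewed": date(2025, 1, 10),  # controlled access
        "monitoring": "agent_audit.jsonl",          # ongoing monitoring
        "next_review": date(2025, 4, 10),           # regular review and challenge
    }
}

def overdue_reviews(register: dict, today: date) -> list[str]:
    """Flag agents whose governance review or access review has lapsed."""
    stale = timedelta(days=90)
    return [name for name, rec in register.items()
            if rec["next_review"] < today
            or today - rec["permissions_reviewed"] > stale]

print(overdue_reviews(agent_register, date(2025, 6, 1)))  # ['refund-agent-01']
```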

The uncomfortable truth

Organisations that treat agentic AI as ‘just another AI tool’ will find their governance models fail under pressure. Those that recognise agents as delegated actors, design authority boundaries deliberately and embed monitoring and assurance from day one will be far better placed to scale agentic AI safely, with confidence.

Agentic AI can deliver real value when it is governed well. Clear ownership, sensible limits and ongoing oversight do not slow innovation; they make it sustainable, allowing organisations to adopt agentic AI with confidence, scale its use responsibly and build trust.

This is not about holding AI back; it is about ensuring that when AI acts, the organisation can explain, justify and stand behind the action and outcome.

If you’d like to find out more about Softcat’s Data, Automation and AI solutions, please click here.