POST 01 · MEASURED

What good AI governance looks like in GB energy.

It is not a slide. It is not a policy. It is a practice that the regulator can see, the operator can run, and the customer never has to think about. Most suppliers have the parts. Few of them have a working assembly.

Ask twenty energy supplier executives what good AI governance looks like and you will get twenty plausible answers and no agreement. Ask their regulators the same question and you will get a quieter, more consistent answer: governance is good when it is visible, when it is rehearsed, and when it produces evidence at the speed the regulator needs it. Most suppliers can produce policy. Far fewer can produce evidence.

Three working tests.

First, the regulator-Monday test. If a regulator wrote on Monday morning asking for the AI Impact Assessment (AIIA), training data lineage and human-override log for a specific live system, could the supplier respond by Wednesday without a war room? In most suppliers today the answer is no. The artefacts exist somewhere, sometimes in three somewheres, and reconciling them is a project. In a supplier with working governance, the same request is a download (sketched after the three tests below).

Second, the frontline-Friday test. If a frontline agent on Friday afternoon takes a complaint that an AI system has produced a decision the customer disputes, can the agent see what the system did, why, and what the override path is — without escalating to engineering? In most suppliers the answer is again no. In a supplier with working governance the agent has the same view of the AI's reasoning as the engineer who built it.

Third, the executive-Thursday test. If an executive on a Thursday board call is asked to confirm, on the record, that no AI system in the supplier is making decisions outside its assured boundary, can they answer with confidence? Confidence here is not optimism. It is provenance: the executive can name the assurance, name the owner, and name the date the last AIIA was reviewed.
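
To make "a download" concrete, here is a minimal sketch of the Monday request answered from a single registry. It is illustrative only: the EvidencePack fields, the registry and the export function are assumptions made for the sketch, not a description of any supplier's actual system.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Illustrative only: these field names are assumptions,
# not any supplier's actual schema.
@dataclass
class EvidencePack:
    use_case: str            # e.g. a live forecasting or triage system
    aiia_path: str           # current AI Impact Assessment
    lineage_path: str        # training data lineage record
    override_log_path: str   # human-override log for the live system
    owner: str               # named accountable person
    last_aiia_review: date

# One entry per live use case, maintained as part of normal operation.
REGISTRY: dict[str, EvidencePack] = {}

def export_for_regulator(use_case: str) -> str:
    """The Monday request becomes a lookup, not a project."""
    pack = REGISTRY[use_case]  # a KeyError here is itself the governance gap
    return json.dumps(asdict(pack), default=str, indent=2)
```

The point of the sketch is the single lookup. If answering the Monday letter means joining three systems by hand, Wednesday becomes the war room.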

What this requires.

It requires that AI Impact Assessments are not one-off documents, written when a use case launches and then quietly allowed to go out of date. They are living artefacts, refreshed when the model changes, when the data changes, when the regulator's expectations change, and on a fixed cadence regardless. It requires that DIAMC (Discover, Identify, Assess, Mitigate, Confirm) is run on every use case, not as a checklist but as a discipline. It requires that named accountability exists at the use-case level, not the system level. And it requires that the supplier's own published Codes of Practice are part of the assurance, because the customer can read those Codes too.
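
As a sketch of that refresh discipline, with hypothetical field names and a 90-day cadence chosen purely for illustration: an AIIA is stale when any trigger has fired since the last review, or when the cadence has elapsed even if nothing has changed.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class DIAMC(Enum):
    """The five stages, run on every use case, in order."""
    DISCOVER = 1
    IDENTIFY = 2
    ASSESS = 3
    MITIGATE = 4
    CONFIRM = 5

REVIEW_CADENCE = timedelta(days=90)  # illustrative figure, not a prescribed interval

@dataclass
class AIIA:
    use_case: str
    owner: str                     # accountability at use-case level, not system level
    last_reviewed: date
    model_changed_since: bool      # model changed since last review?
    data_changed_since: bool       # training or input data changed?
    regulatory_change_since: bool  # regulator's expectations moved?

def needs_refresh(aiia: AIIA, today: date) -> bool:
    """Stale on any trigger, or on cadence regardless."""
    triggered = (aiia.model_changed_since
                 or aiia.data_changed_since
                 or aiia.regulatory_change_since)
    overdue = today - aiia.last_reviewed > REVIEW_CADENCE
    return triggered or overdue
```

The cadence check is deliberately unconditional: a use case nothing has touched still gets re-reviewed, which is what turns the Thursday answer into provenance rather than optimism.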

What it does not require.

It does not require a centralised AI committee that meets quarterly and rubber-stamps decisions. It does not require a 200-page AI policy. It does not require buying a platform if the team has not learned the discipline first. Tooling helps a team that already knows what it is doing. It does not help a team that does not.

Good AI governance, in this market, looks like a quiet, well-rehearsed operating model. It rarely looks impressive. It always looks ready.