There is a quiet test taking shape inside the regulator's casework. It is not written down anywhere. It does not need to be. When an AI-mediated outcome surfaces in a complaint, the case officer asks for the AI Impact Assessment. The supplier produces it, or does not. From that moment, the trajectory of the case is set.
What the test really asks.
Not whether the AIIA is perfect. Not whether the model is best-in-class. The test asks: did the supplier think this through before deployment, write it down in a form the regulator can recognise, refresh it when the world changed, and act on what it found? An AIIA that meets those four asks is not difficult. An AIIA that does not exist is.
Why retrofitting is worse than starting late.
You cannot retrofit an AIIA onto a system that has been in production making decisions for six months. Or rather, you can, but the regulator will mark the date the system went live, and the gap between go-live and AIIA will sit on the record. A supplier that retrofits its AIIA in October 2026 for a system that went live in March 2025 is not assured; it is in mitigation.
What an AIIA produces, beyond compliance.
Cleaner use cases. Fewer abandoned pilots. Faster regulator responses. A lower cost per incident, when one happens. A frontline that knows what the AI is supposed to do. An executive who can answer the Thursday board question without flinching. The compliance value is real. The operating value is larger.
What to do this quarter.
Inventory every AI-mediated decision point in the customer journey. Triage by impact. Write a one-page AIIA for each, against the supplier's published Code of Practice and the relevant regulator's framework. Stand up a monthly review cadence. None of this requires a platform purchase. All of it requires a deliberate decision to start.
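The inventory-and-triage step can be as lightweight as a spreadsheet, but it helps to see the shape of it. A minimal sketch in Python, with entirely hypothetical field names and example decision points (the source prescribes no schema): each AI-mediated decision point is recorded with its impact and go-live date, and triage puts the highest-impact points that still lack an AIIA at the top of the queue.

```python
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    name: str             # where AI touches the customer journey
    customer_impact: int  # 1 (low) to 5 (high), judged by the supplier
    live_since: str       # go-live date, to surface any AIIA gap on the record
    has_aiia: bool = False

def triage(points):
    """Order the queue: points without an AIIA first, highest impact first."""
    return sorted(points, key=lambda p: (p.has_aiia, -p.customer_impact))

# Illustrative inventory only; real entries come from the customer journey.
inventory = [
    DecisionPoint("credit-limit adjustment", 5, "2025-03-01"),
    DecisionPoint("chatbot billing answers", 2, "2025-06-15"),
    DecisionPoint("fraud hold on account", 4, "2024-11-20", has_aiia=True),
]

for p in triage(inventory):
    print(p.name, p.customer_impact, p.has_aiia)
```

The point of the sketch is the ordering rule, not the tooling: nothing here needs a platform purchase, only a list that the monthly review cadence can walk top to bottom.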