Insurable AI
Abstract
Insurable AI sets out the conditions required for AI systems to be underwritable in the real world. Insurance does not price ideals; it prices controls, evidence, and repeatable performance under stress. This publication describes five criteria that move AI from a trust-based posture to an audit-based posture, including governance-embedded design, enforceable control points, and verifiable decision records. It connects these criteria to the practical deployment questions that insurers, auditors, and risk owners care about: what happens when the model is pressured, when policies conflict, when users attempt bypass, and when failure must be proven rather than asserted. The central claim is that insurability requires deterministic infrastructure around the model, including pre-inference enforcement and forensic-grade logging. With the right architecture, systems can produce admissible artefacts, demonstrate consistent behaviour, and support independent verification; without it, safety remains probabilistic and accountability remains unclear. The outcome is a pragmatic framework for building AI systems that can be governed, audited, and insured.
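To make the two mechanisms named above concrete, the following is a minimal sketch of a deterministic pre-inference policy gate paired with a hash-chained decision log. It is illustrative only: the policy rule, the record fields, and all names are assumptions, not the publication's actual criteria or implementation.

```python
import hashlib
import json
import time

# Assumed example policy: deny certain request topics outright.
BLOCKED_TOPICS = {"wire_transfer_override", "credential_export"}

def policy_gate(request: dict) -> tuple[bool, str]:
    """Deterministic check that runs BEFORE the model is invoked,
    so enforcement does not depend on model behaviour."""
    topic = request.get("topic", "")
    if topic in BLOCKED_TOPICS:
        return False, f"blocked: topic '{topic}' denied by policy"
    return True, "allowed"

class AuditLog:
    """Append-only log in which each record hashes its predecessor,
    so altering any past entry breaks the chain and is detectable."""
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> str:
        record = {"prev": self._prev_hash, "ts": time.time(), "event": event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.records.append(record)
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Independent re-check of the whole chain: the kind of
        third-party verification an auditor or insurer could run."""
        prev = "0" * 64
        for r in self.records:
            if r["prev"] != prev:
                return False
            body = {k: r[k] for k in ("prev", "ts", "event")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != r["hash"]:
                return False
            prev = r["hash"]
        return True

log = AuditLog()
allowed, reason = policy_gate({"topic": "credential_export"})
log.append({"decision": reason, "allowed": allowed})
```

Because the gate is a plain function with no model in the loop, its behaviour is repeatable under test, and the chained log gives an artefact that can be verified without trusting the operator.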
Key Findings
Insurability requires controls and evidence, not assurances.
Governance must be embedded in architecture, not added as a wrapper.
Deterministic enforcement reduces variance and improves predictability.
Audit-grade artefacts must be independently verifiable.
Risk evaluation improves when failure modes are measurable and testable.
The PDF version is the canonical downloadable file for archival and printing.
Get the full publication (PDF)