Core Publication

Inference Time Law

Abstract

Inference Time Law is the idea that high-stakes AI governance cannot rely on training-time intentions or probabilistic safety layers. If enforcement exists only as prompts, model-side filters, or post-hoc review, it can fail under contradiction, drift, or adversarial pressure. Governance that must be provable needs an enforcement layer that operates at inference time, before the model processes the request. This publication describes what that looks like in practice, including deterministic pre-inference gating, explicit policy evaluation, and refusal reasons that can be audited. It also explains why evidence matters, including the role of cryptographically signed artefacts that can be verified offline. The goal is simple: move governance from claims to controls, and from controls to evidence. If you want AI systems to be governable, auditable, and insurable, you need enforcement that is deterministic, testable, and repeatable under real conditions.
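
To make the control concrete, here is a minimal sketch of a deterministic pre-inference gate in Python. The `Rule` format, the `Decision` record, and the `evaluate` function are illustrative assumptions, not the publication's implementation. What matters is the shape of the control: rules evaluated in a fixed order before the model sees the request, a pass-or-deny result with no randomness, and a stable rule identifier and reason attached to every denial so refusals can be audited.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical policy rule: a predicate over the request plus a stable
# identifier and a human-readable refusal reason.
@dataclass(frozen=True)
class Rule:
    rule_id: str
    reason: str
    predicate: Callable[[dict], bool]

# Hypothetical decision record: enough to audit why a request was denied.
@dataclass(frozen=True)
class Decision:
    allowed: bool
    rule_id: Optional[str]
    reason: Optional[str]

def evaluate(request: dict, policy: list[Rule]) -> Decision:
    """Deterministic pre-inference gate: rules run in declared order,
    the first violated rule denies, otherwise the request passes."""
    for rule in policy:
        if rule.predicate(request):
            return Decision(allowed=False, rule_id=rule.rule_id, reason=rule.reason)
    return Decision(allowed=True, rule_id=None, reason=None)

# Illustrative policy: deny high-risk requests without an approval token.
POLICY = [
    Rule(
        rule_id="HR-001",
        reason="High-risk request submitted without an approval token.",
        predicate=lambda r: r.get("risk") == "high" and not r.get("approval_token"),
    ),
]

if __name__ == "__main__":
    print(evaluate({"risk": "high"}, POLICY))  # denied, with rule_id and reason
    print(evaluate({"risk": "low"}, POLICY))   # allowed
```

Because evaluation is ordered and involves no sampling or model calls, the same request under the same policy always produces the same decision, which is what makes the gate testable and repeatable under real conditions.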

Key Findings

  • Training-time alignment is not governance and cannot provide operational guarantees.

  • Inference-time enforcement must be code-enforced, not probability-enforced.

  • Deterministic gating enables consistent pass-or-deny decisions under contradiction.

  • Signed, offline-verifiable artefacts provide audit-grade evidence (see the signing sketch after this list).

  • Governance should be evaluated as a control system, not a narrative.
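
To show what audit-grade evidence can look like, below is a minimal signing sketch in Python, assuming the third-party cryptography package (Ed25519). The artefact fields and the canonical JSON encoding are illustrative assumptions; the point is that an auditor who holds only the public key, the artefact, and the signature can verify the evidence offline, with no access to the live system.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical decision artefact, serialized canonically (sorted keys,
# fixed separators) so the signed bytes are reproducible byte-for-byte.
decision = {
    "request_id": "req-0001",
    "allowed": False,
    "rule_id": "HR-001",
    "reason": "High-risk request submitted without an approval token.",
}
payload = json.dumps(decision, sort_keys=True, separators=(",", ":")).encode()

# Sign at decision time. In practice the private key stays inside the
# enforcement layer; only the public key is distributed to auditors.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)
public_key = private_key.public_key()

# Offline verification: needs only the artefact bytes, the signature,
# and the public key. Any tampering raises InvalidSignature.
try:
    public_key.verify(signature, payload)
    print("artefact verified")
except InvalidSignature:
    print("artefact tampered or forged")
```

The canonical encoding is the design choice that matters here: if two parties serialize the same decision differently, signatures stop being comparable, and the evidence loses its offline verifiability.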

The PDF version is the canonical downloadable file for archival and printing.

Get the full publication (PDF)