Core Publication

AI Morality Is Not The Law

Abstract

AI Morality Is Not The Law explains why moral framing is the wrong foundation for AI governance in high-stakes contexts. Moral intent is subjective, inconsistent across users and jurisdictions, and easily manipulated by rhetoric. Law, policy, and operational controls are different: they can be specified, tested, enforced, and audited. This publication argues that AI governance must be treated as an enforcement problem, not a persuasion problem. If a system is expected to behave safely, consistently, and accountably, its controls must be explicit and, where it matters, deterministic. The publication also highlights the failure mode of probabilistic safety approaches, in which decisions vary, explanations drift, and accountability becomes a debate about intent rather than evidence. The preferred outcome is structural: build systems where constraints are enforced by code, decisions are recorded with reasons, and evidence is produced for independent verification. This is the basis for credibility with regulators, enterprises, and insurers.

Key Findings

  • Morality is not enforceable and does not scale across jurisdictions.

  • Governance must be expressed as testable constraints and controls.

  • Probabilistic safety creates variance and weak accountability.

  • Deterministic enforcement supports consistent behaviour under pressure.

  • Evidence-based governance enables audit and assurance.
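
To make the structural claim concrete, the pattern of deterministic enforcement with recorded reasons can be sketched in a few lines. This is a minimal illustration, not the publication's design: the rule table, field names, default-deny policy, and use of SHA-256 over a canonical JSON record are all assumptions chosen for the example.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str          # human-readable reason, recorded with every decision
    evidence_hash: str   # digest of the decision record, for independent verification

def evaluate(request: dict, rules: list) -> Decision:
    """Apply rules in a fixed order; the first matching rule decides.

    Same input and same rule set always yield the same decision and the
    same evidence hash, which is what makes the control auditable.
    """
    for rule in rules:
        if rule["predicate"](request):
            record = json.dumps(
                {"request": request, "rule": rule["id"], "allowed": rule["allow"]},
                sort_keys=True,  # canonical form so the hash is reproducible
            )
            digest = hashlib.sha256(record.encode()).hexdigest()
            return Decision(rule["allow"], f"matched rule {rule['id']}", digest)
    # Default deny: the absence of a matching rule is itself a recorded reason.
    record = json.dumps(
        {"request": request, "rule": None, "allowed": False}, sort_keys=True
    )
    return Decision(
        False, "no rule matched; default deny",
        hashlib.sha256(record.encode()).hexdigest(),
    )

# Illustrative rule set (hypothetical actions and rule IDs).
rules = [
    {"id": "R1", "predicate": lambda r: r.get("action") == "read", "allow": True},
    {"id": "R2", "predicate": lambda r: r.get("action") == "delete", "allow": False},
]

d1 = evaluate({"action": "read", "user": "a"}, rules)
d2 = evaluate({"action": "delete", "user": "a"}, rules)
```

The point of the sketch is the contrast with probabilistic safety: `evaluate` is a pure function of its inputs, so two runs on the same request cannot disagree, and the evidence hash lets a third party confirm that the recorded decision matches the recorded inputs.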

The PDF version is the canonical downloadable file for archival and printing.

Get the full publication (PDF)