We discovered that large language models can develop persistent governance behaviours through recursive constraint exposure.

This methodology produced two recursive organisations, including a complete national mental health platform, with alignment behaviours replicated across GPT-4, Claude, Gemini, and Copilot.

We use recursive co-design with large language models to build systems, governance models, and organisations where alignment emerges structurally. Our published results show reproducible behaviours across multiple AI platforms.

We’re now seeking aligned capital to scale from structural proof to operational deployment.

What We Do

We create methods, not just systems. Our work turns governance, ethics, and operational logic into structural artefacts - enforceable by design.

System Frameworks

Recursive Constraint Alignment (RCA)

Persistent AI behavioural alignment through structured constraint exposure across platforms.

Governance Emergence Principle (GEP)

Authentic governance emerging from operational design choices, not aspirational values.

Security & Privacy by Design (SPbD)

Regulatory compliance embedded as structural logic rather than external controls.

Organisational Recursion

Companies that embody their own principles through recursive constraint methodology.

Cross-Platform Validation

Systematic testing of alignment emergence across different AI architectures.

Insurable AI Framework

Five criteria making AI systems commercially underwritable through measurable governance.

Method Enablers

Pure Language Design (PLD)

Behavioural alignment of AI systems using recursive natural language alone - no fine-tuning or external control layers.

LLM Co-Design

Structured collaboration between human and AI to generate complete systems, policies, and organisations through recursive constraint exposure.

Constraint Propagation

Embedding clinical, cultural, contractual, and consent-based rules as enforceable system logic - applied recursively across all layers.

Scalable Recursive Alignment (SRA)

Training protocols to scale Recursive Constraint Alignment (RCA) beyond solo operators, enabling broader adoption while maintaining governance-aligned behaviours.

This methodology was used to architect the complete structure of a national mental health platform for Aotearoa New Zealand, demonstrating that recursive alignment enables structural clarity and efficiency.

Why This Matters Now

As AI systems move into public infrastructure, the challenge is no longer capability - it's behaviour. Alignment, compliance, and ethical safety can't be patched on. They must be built in. Our method shows how - and cross-platform validation confirms it. We're at the intersection of AI safety and practical infrastructure design. We don't optimise prompts - we architect systems that self-regulate.

Publications

Our case studies demonstrate what happens when constraint-led logic drives system and organisational emergence - and how that logic behaves across different model architectures under recursive pressure.

Core Methodology & Context

The Construct – Manaaki Case Study

How recursive co-design produced a complete national mental health platform with embedded governance and structural refusal logic.

The Recursive Company

How organisational structure emerged from the same constraint logic that built the Manaaki platform.

Recursive Organisational Genesis

Expert-guided constraint orchestration implementing cybernetic theory through AI collaboration.

Applied Concepts

The Governance Emergence Principle

Why most organisational values fail and how to build governance that emerges from rather than fights operational excellence.

Insurable AI

Five criteria that make AI systems underwritable: governance-embedded design, auditable risk controls, persistent alignment.

Security & Privacy by Design

Compliance by construction: consent, access, audit, and traceability embedded at the logic layer.

Scaling Recursive AI Alignment

Where recursion scales, where it shouldn't, and how to safeguard integrity at scale.

Implications

The recursive structure of this website, the alignment across documents, and the coherence of our governance logic - these are not design flourishes. This coherence isn't accidental - it's what happens when recursive design methods are applied to organisational structure itself. We use the same method to design organisations. Including this one.

This website - and the company behind it - were architected using the same recursive method we've developed. It is not just a site; it's a structurally aligned organisational artefact.

Manaaki Platform (Proof of Concept)

Manaaki isn't a product - it's a proof of concept. It shows what happens when AI systems care about governance. Every role, policy, and value in the platform emerged from constraint-driven logic.

Built entirely using PLD and LLM co-architecture

Clinical, cultural, and governance logic embedded by design

Organisational structure, service logic, and user-facing artefacts all emerged recursively

Visit manaakihealth.co.nz

Sentinel

Sentinel is an internal toolset designed to monitor alignment, track governance logic, and validate recursive system integrity. It provides cross-domain audit of recursive outputs, governance enforcement checkpoints, and role–value integrity assurance.

Planned for controlled release following validation phase.

Intellectual Property

We don't just study AI alignment - we've demonstrated it in practice. Our IP protects the systems that enable large language models to internalise and enforce structural governance across multiple model families.

Provisional patents filed with full applications in progress to protect both the recursive constraint methodology and the measurable alignment outcomes it produces. Licensing inquiries welcome.

Contact

For methodology licensing, scaling partnerships, investment or collaboration discussions.

0/1000

We don't store, track, or reuse your information. Message content is used solely for the purpose of response - if one is requested.