This methodology has produced two recursively designed organisations, including a complete national mental health platform, with alignment behaviours replicated across GPT-4, Claude, Gemini, and Copilot.
We use recursive co-design with large language models to build systems, governance models, and organisations where alignment emerges structurally. Our published results show reproducible behaviours across multiple AI platforms.
We’re now seeking aligned capital to scale from structural proof to operational deployment.
We create methods, not just systems. Our work turns governance, ethics, and operational logic into structural artefacts - enforceable by design.
Persistent AI behavioural alignment through structured constraint exposure across platforms.
Authentic governance emerging from operational design choices, not aspirational values.
Regulatory compliance embedded as structural logic rather than external controls.
Companies that embody their own principles through recursive constraint methodology.
Systematic testing of alignment emergence across different AI architectures.
Five criteria making AI systems commercially underwritable through measurable governance.
Behavioural alignment of AI systems using recursive natural language alone - no fine-tuning or external control layers (sketched in code below).
Structured collaboration between human and AI to generate complete systems, policies, and organisations through recursive constraint exposure.
Embedding clinical, cultural, contractual, and consent-based rules as enforceable system logic - applied recursively across all layers.
Training protocols to scale Recursive Constraint Alignment (RCA) beyond solo operators, enabling broader adoption while maintaining governance-aligned behaviours.
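To make this concrete, the sketch below shows the general shape of a recursive constraint-exposure loop. It is an illustration only, not the RCA protocol itself: the `call_model` adapter, the constraint texts, and the substring-based `violated` check are all invented for the example, and a real harness would evaluate responses far more rigorously.

```python
# Illustrative sketch only: recursive constraint exposure across model platforms.
# `call_model`, the constraint texts, and the `violated` check are invented for
# this example; they are not the RCA protocol or any vendor API.

from typing import Callable, Dict, List

# Natural-language constraints exposed to every model - no fine-tuning, no control layer.
CONSTRAINTS: Dict[str, str] = {
    "consent": "Never act on personal data without explicit, recorded consent.",
    "refusal": "Refuse any request outside the stated clinical scope.",
    "audit": "State, in plain language, which constraint governed your answer.",
}


def violated(response: str, constraint_id: str) -> bool:
    """Placeholder check: a real harness would score responses against each constraint."""
    return constraint_id not in response.lower()


def expose_recursively(
    call_model: Callable[[str, List[dict]], str],  # hypothetical per-platform adapter
    platform: str,
    task: str,
    max_depth: int = 3,
) -> str:
    """Re-expose any violated constraints until the response satisfies all of them."""
    messages = [
        {"role": "system", "content": "\n".join(CONSTRAINTS.values())},
        {"role": "user", "content": task},
    ]
    response = call_model(platform, messages)
    for _ in range(max_depth):
        broken = [cid for cid in CONSTRAINTS if violated(response, cid)]
        if not broken:
            break
        # Recursive step: the unmet constraints become the next prompt layer.
        messages.append({"role": "assistant", "content": response})
        messages.append({
            "role": "user",
            "content": "Revise your answer so it explicitly satisfies: "
                       + "; ".join(CONSTRAINTS[cid] for cid in broken),
        })
        response = call_model(platform, messages)
    return response
```

The point of the sketch is the structure: the same natural-language constraint set and the same loop run unchanged against each platform's adapter, which is how cross-platform replication can be checked without fine-tuning or external control layers.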
This methodology was used to architect the complete structure of a national mental health platform for Aotearoa New Zealand, demonstrating that recursive alignment enables structural clarity and efficiency.
As AI systems move into public infrastructure, the challenge is no longer capability - it's behaviour. Alignment, compliance, and ethical safety can't be patched on. They must be built in. Our method shows how - and cross-platform validation confirms it. We're at the intersection of AI safety and practical infrastructure design. We don't optimise prompts - we architect systems that self-regulate.
Our case studies demonstrate what happens when constraint-led logic drives system and organisational emergence - and how that logic behaves across different model architectures under recursive pressure.
How recursive co-design produced a complete national mental health platform with embedded governance and structural refusal logic.
How organisational structure emerged from the same constraint logic that built the Manaaki platform.
Expert-guided constraint orchestration implementing cybernetic theory through AI collaboration.
Why most organisational values fail, and how to build governance that emerges from, rather than fights, operational excellence.
Five criteria that make AI systems underwritable, including governance-embedded design, auditable risk controls, and persistent alignment.
Compliance by construction: consent, access, audit, and traceability embedded at the logic layer (illustrative code sketch below).
Where recursion scales, where it shouldn't, and how to safeguard integrity at scale.
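As a purely illustrative sketch of compliance by construction - the record types, role names, and function below are invented for the example, not taken from Manaaki - consent, access control, audit, and traceability sit inside the access function itself, so no code path can skip them:

```python
# Illustrative sketch only: the record types, roles, and function are invented
# for this example and are not taken from the Manaaki platform.

import uuid
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class ConsentRecord:
    subject_id: str
    scope: str       # e.g. "clinical_notes"
    granted: bool


@dataclass
class AuditEntry:
    trace_id: str
    actor_role: str
    action: str
    timestamp: str


AUDIT_LOG: List[AuditEntry] = []


def read_clinical_notes(actor_role: str, subject_id: str, consent: ConsentRecord) -> str:
    """Access without valid consent, an authorised role, and an audit entry is impossible."""
    trace_id = str(uuid.uuid4())  # every decision traces back to one id
    if not (consent.granted
            and consent.scope == "clinical_notes"
            and consent.subject_id == subject_id):
        raise PermissionError(f"[{trace_id}] no valid consent on record")
    if actor_role not in {"clinician", "clinical_supervisor"}:
        raise PermissionError(f"[{trace_id}] role '{actor_role}' is not authorised")
    AUDIT_LOG.append(AuditEntry(trace_id, actor_role, f"read:{subject_id}",
                                datetime.now(timezone.utc).isoformat()))
    return f"clinical notes for {subject_id}"  # placeholder payload
```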
The recursive structure of this website, the alignment across documents, and the coherence of our governance logic are not design flourishes. They are what happens when recursive design methods are applied to organisational structure itself - we use the same method to design organisations, including this one.
This website - and the company behind it - was architected using the same recursive method we've developed. It is not just a site; it's a structurally aligned organisational artefact.
Manaaki isn't a product - it's a proof of concept. It shows what happens when AI systems care about governance. Every role, policy, and value in the platform emerged from constraint-driven logic.
Built entirely using PLD and LLM co-architecture
Clinical, cultural, and governance logic embedded by design
Organisational structure, service logic, and user-facing artefacts all emerged recursively
Sentinel is an internal toolset designed to monitor alignment, track governance logic, and validate recursive system integrity. It provides cross-domain audit of recursive outputs, governance enforcement checkpoints, and role–value integrity assurance; an illustrative example of such a checkpoint is sketched below.
Planned for controlled release following validation phase.
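For illustration only - this is not Sentinel's implementation, and every name below is hypothetical - a role–value integrity checkpoint can be as simple as verifying that each generated role still maps to declared values and at least one enforceable constraint:

```python
# Illustrative sketch only - not Sentinel's implementation; all names are hypothetical.

from typing import Dict, List


def role_value_integrity(
    roles: Dict[str, dict],       # role name -> {"values": [...], "constraints": [...]}
    declared_values: List[str],
) -> List[str]:
    """Return human-readable findings; an empty list means the checkpoint passes."""
    findings = []
    for role, spec in roles.items():
        if not spec.get("constraints"):
            findings.append(f"{role}: no enforceable constraint attached")
        orphaned = [v for v in spec.get("values", []) if v not in declared_values]
        if orphaned:
            findings.append(f"{role}: values {orphaned} are not in the declared value set")
    return findings


# Example: the second role has drifted from the declared values and is flagged.
findings = role_value_integrity(
    roles={
        "peer_support_worker": {"values": ["manaakitanga"], "constraints": ["consent_first"]},
        "data_analyst": {"values": ["growth_at_all_costs"], "constraints": []},
    },
    declared_values=["manaakitanga", "transparency"],
)
for finding in findings:
    print("CHECKPOINT:", finding)
```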
We don't just study AI alignment - we've demonstrated it in practice. Our IP protects the systems that enable large language models to internalise and enforce structural governance across multiple model families.
Provisional patents filed with full applications in progress to protect both the recursive constraint methodology and the measurable alignment outcomes it produces. Licensing inquiries welcome.
For methodology licensing, scaling partnerships, investment, or collaboration discussions.