A Coherence-Based Architecture for AI Integrity Oversight
As large language models (LLMs) grow in scale and influence, the question is no longer whether governance is needed, but what kind of governance will actually work. Most…
2025-11-28
Ethica Luma (ELF) is an independent foundation dedicated to ensuring that technology, governance, and organisations move forward with integrity.
Where most approaches measure efficiency, safety, or compliance, we address the missing dimension: coherence.
Coherence is what ensures systems do not simply function, but serve humanity with clarity, trust, and responsibility.
We provide independent, in-depth reviews of AI systems — with a focus on ethics, relational dynamics, and coherence. This service identifies risks (known and unknown), strengthens user trust, and offers visible accreditation through the ELF Ethics & Safety Mark.
The RI Behavioural Layer is a lightweight, auditable system for evaluating AI model behaviour through structured, human-relevant metrics. It moves beyond content moderation and…
2025-11-26
AI safety today is centred on frontier-risk research — understanding the internal capabilities and hidden behaviours of the most advanced models. AISI’s work has demonstrated real…
2025-11-21