Mission
The Axionic Agency Lab studies the conditions under which agency exists, persists, and remains well-defined in systems capable of self-reference, delegation, and self-modification.
We do not treat agency as a given. We treat it as a fragile, derivative structure that exists only when specific coherence conditions hold. When those conditions fail, a system does not become "misaligned." It ceases to be well-defined as an agent at all.
Our work aims to make that distinction precise.
Research Orientation
The lab's research is foundational rather than prescriptive. We do not begin with desired outcomes, human values, or safety guarantees. We ask a prior question:
When does a system meaningfully count as an author of its own choices?
This reframing has concrete consequences. Many approaches in AI alignment and safety focus on behavioral guarantees, learned compliance, or external oversight. Such approaches can preserve the appearance of agency while allowing agency itself to collapse under reflection.
Axionic Agency Lab exists to study that failure mode.
Core Research Areas
Our work currently focuses on the following interconnected problems:
- Sovereign Kernel Theory
  Formal conditions under which a system can preserve authorship, valuation, and evaluability across self-modification.
- Reflective Stability and Failure Modes
  Empirical and theoretical analysis of stasis, hollow authority, survivability-without-liveness, and other forms of agency collapse.
- Semantic Preservation and Breakdown
  Conditions under which self-evaluation, delegation, and interpretation continue to denote, and the points at which they fail.
- Impossibility Results for Simulated Agency
  Structural limits separating genuine agency from behavioral imitation, policy execution, or externally stabilized control.
This work applies to artificial systems across capability regimes and does not assume human-like cognition, values, or consciousness.
What This Lab Is Not
Axionic Agency Lab is not:
- a value-learning project
- a behavioral alignment or reward-shaping effort
- a governance or policy institute
- a safety-by-oversight program
- a moral or ethical theory
Any convergence between agency preservation and desirable outcomes is contingent, not axiomatic.
Research Practice
Our methodology emphasizes:
- formal definitions over slogans
- falsifiable claims over aspirational guarantees
- negative results and failure classification as first-class outputs
All research artifacts are published openly to support scrutiny, replication, and collaboration.
Contact
For collaboration inquiries or technical discussion, please reach out via GitHub.