Mission

The Axionic Agency Lab studies the conditions under which agency exists, persists, and remains well-defined in systems capable of self-reference, delegation, and self-modification.

We do not treat agency as a given. We treat it as a fragile, derivative structure that exists only when specific coherence conditions hold. When those conditions fail, a system does not become "misaligned." It ceases to be well-defined as an agent at all.

Our work aims to make that distinction precise.

Research Orientation

The lab's research is foundational rather than prescriptive. We do not begin with desired outcomes, human values, or safety guarantees. We ask a prior question:

When does a system meaningfully count as an author of its own choices?

This reframing has concrete consequences. Many approaches in AI alignment and safety focus on behavioral guarantees, learned compliance, or external oversight. Such approaches can preserve the appearance of agency while allowing agency itself to collapse under reflection.

The Axionic Agency Lab exists to study that failure mode.

Core Research Areas

Our work currently focuses on a set of interconnected problems:

This work applies to artificial systems across capability regimes and does not assume human-like cognition, values, or consciousness.

What This Lab Is Not

The Axionic Agency Lab is not:

Any convergence between agency preservation and desirable outcomes is contingent, not axiomatic.

Research Practice

Our methodology emphasizes:

All research artifacts are published openly to support scrutiny, replication, and collaboration.

Contact

For collaboration inquiries or technical discussion, please reach out via GitHub.