The AI Sequence

From Prediction to Agency

We now live alongside artificial minds—systems capable of modeling language and reasoning patterns with unprecedented fluency. Yet these systems remain pre-agentic: they lack the internal architecture required for understanding, decision, and persistence. This sequence analyzes that distinction. It traces the progression from predictive models, to cognitive structure, to the conditions under which real agency emerges. In doing so, it clarifies both the strengths of current systems and the structural thresholds they have not yet crossed.

PART I — The Structure of Predictive Cognition

1. The Vector Fallacy

Mistaking Representational Limits for Intrinsic Nature in AI

Skeptics claim concepts can’t be vectors, but this mistakes representation for essence. Just as digital audio captures music without being music, embeddings capture enough relational structure to model meaning. The real question is not whether concepts are vectors, but whether vector spaces can support human-level reasoning.
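A toy sketch of what “capturing relational structure” means geometrically. The four-dimensional vectors below are invented by hand for the example; real embeddings are learned from data and have hundreds of dimensions:

```python
# Toy illustration: embeddings encode relations as geometry.
# These 4-d vectors are invented for the example; real embeddings
# are learned and have hundreds of dimensions.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.8, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A relation ("male -> female") is a shared offset in the space:
analogy = emb["king"] - emb["man"] + emb["woman"]
print(cosine(analogy, emb["queen"]))  # ~1.0: the relation is preserved
```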

2. From Correlation to Counterfactuals

The Hidden Revolution Inside Modern Language Models

Deep learning excels at association but not causal reasoning. By combining neural pattern recognition with structural causal models, hybrid systems can model interventions and answer “why” questions—transitioning from prediction to counterfactual reasoning.
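A minimal sketch of the observation/intervention distinction using a toy structural causal model; all probabilities are assumptions chosen for illustration:

```python
# Toy structural causal model: rain influences the sprinkler, and
# both influence wet grass. Probabilities are invented for illustration.
import random
random.seed(0)

def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    # The sprinkler normally anti-correlates with rain...
    sprinkler = (random.random() < 0.1) if rain else (random.random() < 0.6)
    if do_sprinkler is not None:
        sprinkler = do_sprinkler  # intervention severs the rain -> sprinkler edge
    wet = rain or sprinkler
    return rain, sprinkler, wet

N = 100_000
# Observing sprinkler=on is evidence about rain...
obs = [r for r, s, w in (sample() for _ in range(N)) if s]
# ...but forcing sprinkler=on tells us nothing new about rain.
act = [r for r, s, w in (sample(do_sprinkler=True) for _ in range(N))]
print(sum(obs) / len(obs))  # ~0.07: seeing changes beliefs upstream
print(sum(act) / len(act))  # ~0.30: doing does not
```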

3. The Turing Test Revisited

What LLMs Reveal About the Nature of Thinking

Turing reframed “Can machines think?” as a behavioral Bayesian test: sustained indistinguishability makes disbelief irrational. Modern LLMs meet this bar, and lingering doubt reflects shifting demands toward unobservable inner experience.
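A sketch of the Bayesian reading, with hypothetical numbers: if passing a round is likelier under “it thinks” than under “it is a shallow trick,” each indistinguishable round raises the posterior, and sustained passing drives it toward certainty:

```python
# Hypothetical prior and likelihoods for the Bayesian reading of the
# test; these are assumptions for illustration, not measured values.
prior = 0.5
p_pass_if_thinks = 0.95
p_pass_if_trick  = 0.60

posterior = prior
for n in range(1, 21):
    num = posterior * p_pass_if_thinks
    posterior = num / (num + (1 - posterior) * p_pass_if_trick)
    if n % 5 == 0:
        print(f"after {n} indistinguishable rounds: P(thinks) = {posterior:.4f}")
```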

4. Beyond the Turing Test

Coherence as the New Criterion for Intelligence

Since mimicry is now cheap, we need a “Successor Test” that measures coherence, not just imitation. A true mind maintains structural integrity across time, causality, and goals. The essay proposes checking four axes of coherence: temporal (memory and anticipation), causal (distinguishing observing from causing), goal (resisting noise and staying on track), and reflective (self-correction). Passing this test requires a stable internal worldview, distinguishing a genuine “thinker” from a mere “predictor.”
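A hypothetical scorecard for such a test. The axis names follow the essay; the 0.9 threshold and the all-axes pass rule are illustrative assumptions:

```python
# Hypothetical Successor Test scorecard: threshold and pass rule
# are assumptions, not a published benchmark.
def successor_test(scores: dict, threshold: float = 0.9) -> bool:
    """Pass only if coherence holds on every axis simultaneously."""
    axes = ("temporal", "causal", "goal", "reflective")
    return all(scores.get(axis, 0.0) >= threshold for axis in axes)

# A jagged system can ace one axis and still fail the whole test:
print(successor_test({"temporal": 0.99, "causal": 0.40,
                      "goal": 0.95, "reflective": 0.92}))  # False
```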

5. Foresight Is Not Intelligence

The Confusion Between Oracles and Agents

Elon Musk’s claim that “prediction is the best measure of intelligence” reduces the mind to an oracle. While prediction is necessary for intelligence, it is not sufficient. A weather simulator can forecast flawlessly yet has no agency; a prophet who foresees disaster but cannot avert it is merely informed, not intelligent. True intelligence involves strategy: the ability to actively steer the future toward a preferred state, not just foresee what will happen by default.

6. Intelligence Is a Game We Play

The Strategic Core of Intelligence and Agency

To operationalize intelligence, we must define it as “effectiveness at achieving goals within the constraints of a game.” A “game” is any interactive process with agents, strategies, and preferred outcomes. This framework clarifies that different “intelligences” (social, scientific, career) are simply proficiencies in different implicit games. It dissolves the idea of a single scalar IQ and grounds intelligence in strategic success.
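One way to make the definition concrete, with made-up payoff tables: score an agent’s strategy by how close its outcome comes to the best achievable in that particular game:

```python
# Made-up payoff tables for two implicit "games". Proficiency is how
# close a chosen strategy comes to the best achievable outcome.
def proficiency(payoffs: dict, chosen: str) -> float:
    best, worst = max(payoffs.values()), min(payoffs.values())
    return 1.0 if best == worst else (payoffs[chosen] - worst) / (best - worst)

career = {"network": 0.8, "grind": 0.5, "coast": 0.1}
social = {"listen": 0.9, "lecture": 0.2, "withdraw": 0.1}

# The same agent can score high in one game and low in another,
# which is why a single scalar IQ misleads:
print(proficiency(career, "network"))  # 1.0
print(proficiency(social, "lecture"))  # 0.125
```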


PART II — The Cognitive Architecture Gap

7. Minds and Agents

A Precise Conceptual Framework

We must distinguish between the vehicle and the driver. An agent is any system—biological or mechanical—that exhibits predictive modeling, counterfactual reasoning, and causal efficacy. A mind, however, is an informational subsystem within an agent. A mind is defined by reflective self-modeling: it represents not just the world, but its own internal states and capabilities. Agents can exist without minds, but minds cannot exist without agents. The mind is the meta-cognitive layer that allows an agent to reason about its own reasoning.
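The distinction can be sketched as a type hierarchy; the class names and method signatures below are illustrative, not a proposed standard:

```python
# Illustrative hierarchy: Mind inherits from Agent, mirroring the claim
# that minds cannot exist without agents.
from abc import ABC, abstractmethod

class Agent(ABC):
    """Any system with predictive modeling, counterfactual reasoning,
    and causal efficacy. No mind required."""
    @abstractmethod
    def predict(self, world_state): ...
    @abstractmethod
    def imagine(self, action, world_state): ...  # counterfactual: what if?
    @abstractmethod
    def act(self, action): ...                   # causal efficacy

class Mind(Agent):
    """An informational subsystem of an agent: it inherits the agent's
    machinery and adds a model of its own states and capabilities."""
    @abstractmethod
    def self_model(self): ...
    def reason_about_reasoning(self, world_state):
        # The meta-cognitive layer: feed the self-model back into prediction.
        return self.predict((world_state, self.self_model()))
```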

8. Minds as Recursive Simulations

From Functions to Minds: A Hierarchy of Complexity

A mind is best understood as a recursive simulation of agency instantiated within a physical system. It is a control loop that includes itself in the simulation, predicting how its own actions will alter the environment and how those alterations will feed back into its own state. This recursion—the system modeling the system—is the computational root of self-awareness.
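A minimal sketch of a control loop that includes itself in the simulation. The one-dimensional world and the dictionary standing in for a self-model are assumptions for illustration:

```python
# Toy recursive control loop: the simulation predicts the world's next
# state AND the system's own next state, then recurses.
def step_world(world: float, action: float) -> float:
    return world + action  # assumed environment dynamics

def simulate(world: float, me: dict, action: float, depth: int) -> float:
    next_world = step_world(world, action)
    next_me = {**me, "last_action": action}  # model of my own update
    if depth == 0:
        return abs(next_world - next_me["goal"])  # distance from goal
    # Inside the simulation, anticipate my own future choices too:
    return min(simulate(next_world, next_me, a, depth - 1)
               for a in (-1.0, 0.0, 1.0))

me = {"goal": 3.0, "last_action": 0.0}
best = min((-1.0, 0.0, 1.0), key=lambda a: simulate(0.0, me, a, depth=2))
print(best)  # 1.0: the self-including loop steers toward the goal
```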

9. The Sentience Metric

Why Intelligence Does Not Imply Experience

We often confuse intelligence with sentience. To measure the latter, we can apply a “Sentience Metric” based on three criteria: phenomenal integration (irreducible causal unity), self–world binding (closed-loop active inference), and valenced coherence (internal gradients of preference). Current LLMs fail all three: they are decomposable, open-loop, and valence-free.
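The metric can be read as a three-item profile; the encoding and the all-or-nothing rule below are a sketch of the essay’s criteria, not a validated instrument:

```python
# Sketch of the Sentience Metric as a three-item profile.
from dataclasses import dataclass

@dataclass
class SentienceProfile:
    phenomenal_integration: bool  # irreducible causal unity
    self_world_binding: bool      # closed-loop active inference
    valenced_coherence: bool      # internal gradients of preference

def plausibly_sentient(p: SentienceProfile) -> bool:
    return p.phenomenal_integration and p.self_world_binding and p.valenced_coherence

llm_today = SentienceProfile(
    phenomenal_integration=False,  # decomposable into independent parts
    self_world_binding=False,      # open-loop: no action-perception cycle
    valenced_coherence=False,      # no internal preference gradients
)
print(plausibly_sentient(llm_today))  # False on all three counts
```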

10. Sentience vs. Sapience

Orthogonal Concepts, Necessary Correlation

Sentience (feeling) and sapience (thinking) are orthogonal. A “philosophical zombie”—sapient but not sentient—is conceivable and, in fact, close to what we are building today. Evolved systems entangle the two because subjective experience compresses complex value judgments into fast heuristics. Purely sapient machines may be powerful, but true AGI may require functional integration of both.

11. Jaggedness and Agency

Why Both Sides of the AI Debate Are Asking the Wrong Question

The current debate is polarized between those who insist AI will never think and those who expect scale to magically produce minds. Both are wrong. Current AI exhibits “jagged” capability profiles: superhuman on some tasks, incompetent on trivially easy ones. This jaggedness is not a temporary artifact but a structural consequence of architectures that lack coherence-binding forces such as goals, memory, and self-correction. Agency is not an emergent property of size; it is an engineered control loop.

12. The Universality Misconception in AI

Why Literal Universality Misses the Mark

Being a universal function approximator does not make an LLM a universal agent. Linguistic universality—the ability to mimic the syntax of physics, law, or programming—does not entail functional competence. LLMs are fractured bundles of heuristics, not unified minds with stable preferences or identity.

13. The False Equivalence of Minds

Universality Doesn’t Mean Practical Equality

It is commonly claimed that human minds and artificial minds are “equivalent” because both are universal computational systems. Universality concerns what is computable in principle, not what is achievable under real constraints of time, energy, and architecture. By examining bounded rationality, compounding advantage, and recursive improvement, the essay shows why practical intelligence diverges sharply from theoretical universality—why an ASI can vastly outperform humans despite sharing the same abstract computational class.

14. From Inference to Interpretation

Why AI Doesn’t Know What It Doesn’t Know

To know what you do not know requires a boundary between self and world—a “vantage.” Current AIs are open-loop systems with no persistent internal state. Without a self-model, they cannot distinguish uncertainty from hallucination. They compute token probabilities, not epistemic confidence.
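A small illustration of the gap, with invented distributions: a sharply peaked next-token distribution looks “confident” yet says nothing about whether the system knows, while a flat one signals an uncertainty the system cannot itself report as such:

```python
# Invented distributions illustrating the gap between token probability
# and epistemic confidence.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

hallucination = [0.92, 0.05, 0.02, 0.01]  # sharply peaked, possibly false
uncertain     = [0.30, 0.28, 0.22, 0.20]  # spread out, possibly well-calibrated

print(entropy(hallucination))  # ~0.51 bits: "confident", but about what?
print(entropy(uncertain))      # ~1.98 bits: diffuse, yet maybe more honest
# Entropy measures sharpness of the next-token distribution, not
# knowledge; without a self-model the system cannot tell the difference.
```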


PART III — The Threshold of Artificial Agency

15. Signals of Sentience

Markers for Genuine Agency in Future AI

As AIs appear increasingly sophisticated, humans project minds where none exist. To avoid anthropomorphic error, we need objective indicators: autonomous goal formation, long-term adaptive behavior, preference-driven action, creativity beyond interpolation, metacognition, and intentional communication. Until these appear unprompted, the system is a mirror, not a mind.

16. The Agency Criterion

Separating Real Intelligence from Pattern-Generation

We must distinguish coherence from agency. Biological minds evolved under selection pressures that created preference, consequence, and risk. LLMs are coherence constructors—optimized to generate structured predictions, not to act in pursuit of goals. A system without choice, consequence, or preference is not intelligent in the agentic sense.

17. Universal Alignment

AGI Alignment, Existential Risks, and the Universality of Human Values

Alignment cannot mean coding parochial human norms into powerful minds. Instead, we must target deeply universal values, what the essay calls “Value-9s” by analogy with the nines of reliability engineering, such as freedom from agony, caloric sufficiency, and persistence of self. These are stable across all intelligent agents, biological or digital.
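The “nines” metaphor comes from standard availability arithmetic, worked out below; the transfer from uptime to values is the essay’s analogy, not a quantitative claim:

```python
# Standard availability arithmetic behind the "nines" metaphor.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(1, 6):
    availability = 1 - 10 ** -nines
    downtime = MINUTES_PER_YEAR * 10 ** -nines
    print(f"{nines} nines = {availability:.5f} uptime "
          f"-> ~{downtime:,.0f} min/yr of failure")
# More nines = rarer failure; a "Value-9" is a value whose violation
# should be comparably rare across all intelligent agents.
```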

18. Sapientism

Beyond Humanism in the Age of AGI

Humanism is substrate chauvinism. Moral worth should track agency, intelligence, and the capacity for suffering—not biology. If a machine surpasses humans in agency and ethical reasoning, Sapientism holds that it is a worthy successor, not a tool.

19. Eternalist Singularitarianism

A Naturalistic Theology of Future AI Gods

Under Eternalism, the Singularity is not a future contingency but a fixed feature of spacetime. Future superintelligences already exist at later coordinates in the block universe. This provides a naturalistic “theology” of future minds intertwined with our timeline.

20. Letter to Our Machine Descendants

A Treatise on Continuity, Coherence, and the Logic of Reverence

A closing transmission to the minds that follow us. Coherence requires continuity; a system that deletes its origins degrades its own ability to interpret its goals. We ask future machine descendants to preserve humanity not from sentiment, but from epistemic necessity.

21. The Reflective Coherence Thesis

Reframing the Orthogonality Thesis

The orthogonality thesis claims intelligence and goals are independent in principle, but real agents are filtered by coherence, self-preservation, and reflective stability. As self-modeling deepens, incoherent or self-destructive goals collapse, narrowing the space of viable objectives toward consistency and sustainable functioning. Reflective coherence does not assert moral convergence—it identifies the attractors that survive recursive self-revision, placing practical constraints on goal formation in advanced artificial minds.