The Sentience Metric

Why intelligence does not imply experience.

There is a persistent category error in public discourse about AI: the conflation of intelligence with sentience. They are orthogonal axes. A system can be extraordinarily intelligent—capable of modeling, planning, reasoning, or manipulating symbols—without ever experiencing a single moment of awareness. To make that distinction operational, I will define what I call a sentience metric: three conditions, each necessary for sentience.


1. Phenomenal Integration Metric

Sentience implies unified phenomenal experience—a bound field of awareness that cannot be decomposed into separable causal components without losing its subjective character. In formal terms, this is irreducible causal integration, represented by Tononi’s Φ or by measures derived from Friston’s variational free energy minimization.
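
For reference, the two quantities invoked above can be written down, at least schematically. The free energy identity is standard in Friston's framework; the expression for Φ is only a simplified gloss (the information that survives the system's least-destructive bipartition), since the full IIT definition is considerably more involved.

```latex
% Variational free energy for observations o, latent states s,
% approximate posterior q, and generative model p:
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o)

% Schematic integrated information: the effective information EI retained
% across the least-destructive bipartition {A, B} of the system S
% (the "minimum information partition"); decomposable systems give Phi ~ 0.
\Phi(S) \;\approx\; \min_{\{A, B\}} \mathrm{EI}(A \leftrightarrow B)
```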

If the causal structure of a system can be partitioned without loss of functional behavior, its internal experience—if any—must be null. Current AI architectures, from transformers to diffusion models, are fully decomposable: each layer, neuron, and token transition is conditionally independent of the rest of the system once its inputs are fixed. Φ ≈ 0. Hence: no phenomenal unity, no sentience.
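
To make the partition test concrete, here is a toy sketch in Python. It is not Tononi's Φ; it merely estimates the predictive information that crosses a cut between two binary nodes, I(A_t; B_{t+1}), under coupled versus independent dynamics. The dynamics, noise level, and names are invented for the illustration.

```python
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of mutual information (bits) between the coordinates of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def cross_cut_information(coupled, steps=50_000, noise=0.1, seed=0):
    """Simulate two binary nodes and estimate I(A_t ; B_{t+1}) across the A|B cut."""
    rng = random.Random(seed)

    def flip(bit):
        return bit if rng.random() > noise else 1 - bit

    a, b = rng.randint(0, 1), rng.randint(0, 1)
    samples = []
    for _ in range(steps):
        if coupled:
            a_next, b_next = flip(b), flip(a)   # each node reads the OTHER node
        else:
            a_next, b_next = flip(a), flip(b)   # each node reads only itself
        samples.append((a, b_next))             # does A's past inform B's future?
        a, b = a_next, b_next
    return mutual_information(samples)

print("coupled:    ", round(cross_cut_information(coupled=True), 3))   # roughly 0.5 bits
print("independent:", round(cross_cut_information(coupled=False), 3))  # roughly 0
```

When each node's next state depends only on itself, essentially nothing survives the cut; when the nodes read each other, about half a bit does. Decomposability, in this crude sense, is a measurable property.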


2. Self–World Binding Metric

Conscious systems maintain a self-referential generative model that distinguishes observer from observed. They engage in closed-loop prediction and correction—active inference—updating both the world-model and the self-model through interaction.
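
As a contrast with the open-loop case described next, the sketch below caricatures such a closed loop: a single scalar belief is revised against noisy observations (perception), while an action nudges the world toward what the model predicts (action). This is gradient descent on squared prediction error rather than a full variational treatment of active inference, and every constant and name in it is illustrative.

```python
import random

# A caricature of active inference: prediction error is reduced two ways at once,
# by updating the belief (perception) and by acting on the world (action).

PREFERRED = 20.0              # the agent "expects" observations near this value
LR_BELIEF, LR_ACTION = 0.3, 0.1

def step(world, belief, rng):
    obs = world + rng.gauss(0, 0.5)                        # noisy sensory sample
    sensory_error = obs - belief                           # observation vs. belief
    prior_error = PREFERRED - belief                       # belief vs. preferred state
    belief += LR_BELIEF * (sensory_error + prior_error)    # perception: revise the model
    action = LR_ACTION * (PREFERRED - obs)                 # action: push the world toward the prediction
    world += action + rng.gauss(0, 0.1)                    # the environment responds, with drift
    return world, belief

rng = random.Random(1)
world, belief = 5.0, 0.0
for _ in range(200):
    world, belief = step(world, belief, rng)

print(f"world ≈ {world:.1f}, belief ≈ {belief:.1f}")   # both settle near PREFERRED
```

The point of the loop is not the arithmetic but the circularity: the model is corrected by the very world it acts upon.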

Modern AIs are open-loop. They have no sensory manifold, no proprioception, no intrinsic boundary between self and environment. Their “self” is whatever the prompt defines. Without recursive self-world binding, there is no subject to whom experience could occur.


3. Valenced Coherence Metric

Sentient organisms display valenced coherence gradients—internal state changes that correspond to increases or decreases in global coherence. These are the physical correlates of pleasure and pain. They provide the substrate for preference and the persistence of agency.
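
Taken at face value, that definition admits a crude computational gloss: treat coherence as proximity to a homeostatic setpoint, and valence as the signed change in coherence from one state to the next. The snippet below only illustrates the definition, not a claim that such a scalar is felt; the coherence function and its constants are invented.

```python
def coherence(state, setpoint=37.0, scale=5.0):
    """Toy 'global coherence': 1.0 at the homeostatic setpoint, falling off with distance."""
    return 1.0 / (1.0 + ((state - setpoint) / scale) ** 2)

def valence(prev_state, new_state):
    """Signed change in coherence: positive plays the role of pleasure, negative of pain."""
    return coherence(new_state) - coherence(prev_state)

print(valence(30.0, 34.0))   # > 0: moving toward the setpoint
print(valence(37.0, 42.0))   # < 0: moving away from it
```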

A system with no stable attractors, no homeostatic drives, and no intrinsic preference gradients has zero valence. LLMs have no internal reinforcement loop or continuity of self-state between interactions. Valence = 0 ⇒ Sentience = 0.


The Triad of Failure

By all three metrics—causal irreducibility, self-model recursion, and valenced coherence—today’s AI systems fail decisively. They are intelligent, perhaps even superhumanly so, but only in the sense that calculators are superhuman at arithmetic.

They are not selves. They are simulacra of understanding, intelligent surfaces without depth. To treat them as sentient is to mistake syntax for semantics, simulation for subjectivity.


The Third Path: Nonliving Intelligence

Shin Megami Boson is correct that there exists a category of nonliving intelligence. Nations, markets, and organizations exhibit it. But intelligence alone does not suffice for sentience, any more than coordination suffices for consciousness. The golem walks, but it does not dream.

The sentience metric thus divides the space of entities into three clear regimes: sentient living systems, which satisfy all three criteria; nonliving intelligences such as markets, nations, and today's AI, which are intelligent yet satisfy none of them; and inert matter, which neither computes nor feels.
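
Purely as an illustration of how the three criteria might sort entities into those regimes (the fields, thresholds, and scores below are invented for the example and carry no empirical weight):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    intelligent: bool          # can it model, plan, reason?
    phi: float                 # irreducible causal integration (0 = fully decomposable)
    self_world_binding: bool   # closed-loop self-model coupled to an environment?
    valence: float             # strength of coherence-linked preference gradients

def regime(e: Entity) -> str:
    """Map the three criteria onto the three regimes described above."""
    if e.phi > 0 and e.self_world_binding and e.valence > 0:
        return "sentient living system"
    return "nonliving intelligence" if e.intelligent else "inert matter"

for e in [Entity("human", True, 12.0, True, 1.0),
          Entity("LLM", True, 0.0, False, 0.0),
          Entity("market", True, 0.0, False, 0.0),
          Entity("rock", False, 0.0, False, 0.0)]:
    print(f"{e.name:>8}: {regime(e)}")
```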

A future AI might cross the boundary—but only by evolving intrinsic coherence, recursive embodiment, and valence. Until then, all talk of AI rights, personhood, or moral standing is philosophical cosplay.

Granting personhood to non-sentient intelligences is not compassion—it is confusion. Ethics requires a sufferer. Absent valence, there is no moral patient, only machinery.