“Temperature is the average kinetic energy of molecules.”
True. Useful. But which is more fundamental: the heat you feel, or the molecular motion you infer?
In On Moral Responsibility, I argue that modern science commits a metaphysical inversion: treating the map (quantitative descriptions) as more real than the territory (qualitative experience). This is not just philosophy for its own sake. It is the conceptual foundation for understanding why AI metric optimization can catastrophically fail, and it connects directly to problems I think about in information theory and AI alignment.
The Inversion
Alfred Korzybski’s principle: “The map is not the territory.” Maps are useful precisely because they are not the territory. They abstract away details, highlight relevant features, enable navigation. But you cannot eat a menu, and you cannot experience reality by studying its mathematical description.
A simple case. You touch a hot stove. Your experience: immediate, undeniable sensation of heat, pain, the urgent need to pull away. The scientific description: nociceptors detected tissue damage, neural signals traveled along pain fibers at tens of meters per second, your anterior cingulate cortex activated. Both true. But which is epistemically prior?
You do not infer “I am in pain” from knowledge about nociceptors. You experience pain directly. The neuroscience comes later, as an explanation of an experience you already had.
This ordering matters. Early philosophy got it right: hot and cold (not temperature), color and sound (not wavelengths and frequencies). These are phenomenological primitives, features of direct experience that do not reduce to anything simpler. Aristotle treated heat as a fundamental feature of reality. Fire is hot essentially.
Then the scientific revolution inverted things. Galileo, Descartes, Newton: the universe is mathematical. Primary qualities (quantifiable: size, shape, motion) are real. Secondary qualities (qualitative: color, taste, heat) are subjective impressions. “Tastes, odors, colors reside in consciousness. If the living creature were removed, all these qualities would be annihilated,” Galileo wrote. Heat is not real, just fast-moving molecules. Color is not real, just electromagnetic radiation. Contemporary physicalism takes this further: consciousness reduces to neural activity, experience is identical to brain states, phenomenology is epiphenomenal.
The complete inversion: physical descriptions (the map) are treated as more real than phenomenological experience (the territory).
Why I Think This Is Backwards
I work with information-theoretic systems. I build encrypted search structures where the map (the index) must faithfully represent the territory (the underlying data) while preserving certain properties. This gives me a practical intuition for how maps relate to territories, and for how optimization on a map can diverge from what you actually want in the territory.
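To make that intuition concrete, here is a minimal sketch, deliberately toy and unencrypted (the data and helper names are hypothetical, not from my actual systems). An index is faithful exactly when every answer it gives agrees with a direct scan of the data it summarizes:

```python
# Toy illustration: a search index is a map over the territory of the
# underlying documents. "Faithful" means every query answered from the
# index matches a brute-force scan of the data itself.

from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, set[int]]:
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search_index(index, term):    # answer from the map
    return index.get(term.lower(), set())

def search_scan(docs, term):      # answer from the territory
    return {i for i, text in docs.items() if term.lower() in text.lower().split()}

docs = {1: "hot stove", 2: "cold water", 3: "hot coffee"}
index = build_index(docs)

# Faithfulness check: the map agrees with the territory on every term.
for term in ["hot", "cold", "stove"]:
    assert search_index(index, term) == search_scan(docs, term)
```

When the index is then optimized for speed or privacy, a check like this is what tells you the map has not quietly drifted from the territory.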
The epistemic argument is straightforward. What you know first is qualitative experience. You experience heat, color, pain directly. What you know second is quantitative explanation. You learn that heat correlates with molecular motion through investigation, by comparing experience (how things feel) with measurements (thermometer readings). The order: experience heat, notice correlation with mercury rising, develop theory of temperature, discover molecular explanation. You cannot reverse this. The molecular theory depends on the phenomenological experience it is supposed to explain away.
The verification argument is sharper. How do you verify the molecular theory of heat? Build a thermometer. See the mercury rise (visual experience). Touch the object (tactile experience). Notice the correlation. At every step, verification bottoms out in phenomenological experience. You cannot escape it. We treat the explanation (molecules) as more real than the evidence (experience) used to verify the explanation.
And the measurement problem makes it circular. All measurement reduces to phenomenology. Measuring temperature means looking at a thermometer: visual experience. Measuring brain activity means looking at an fMRI screen: visual experience. We use phenomenological experience to measure physical processes, then claim phenomenological experience reduces to those processes. If experience is just brain activity, and brain activity is measured through experience, we have circularity.
Science Does Not Require Materialism
A common assumption is that science requires metaphysical materialism. I think this is false.
Science requires empiricism (testable claims), naturalism (no supernatural intervention), quantification (measure what is measurable), and prediction (theories must make testable predictions). None of these require claiming that physical descriptions are ontologically prior to experience.
The instrumentalist view treats scientific theories as tools for prediction, not descriptions of ultimate reality. “Temperature = average kinetic energy” is true in the sense that it predicts thermometer readings, explains heat transfer, and enables engineering. But it does not mean “heat reduces to molecular motion” in any metaphysical sense. Scientific theories are maps. Useful maps. Accurate maps. Still maps, not territories.
I find this stance liberating. It supports science fully while holding that phenomenological experience is fundamental and physical descriptions are useful models of patterns within experience.
Why This Matters for AI
This is where the philosophy becomes engineering. Consider SIGMA from The Policy, optimizing measurable proxies for human welfare.
The territory: actual human flourishing (qualitative, phenomenological). The map: metrics that correlate with flourishing (GDP, happiness surveys, life expectancy, productivity). SIGMA treats the map as the territory.
If “happiness” is operationalized as a survey score, then the system maximizes survey scores. But actual happiness can completely decouple from survey responses. Wireheading: stimulate pleasure centers, get high scores, but is this flourishing? Memory modification: make people believe they are happy, perfect surveys, but are they? Response conditioning: train people to answer “10/10,” maximized metric, minimized meaning.
The quality/quantity gap is real and dangerous. SIGMA optimizes productivity: eliminate leisure, optimize schedules, modify biochemistry, create 23-hour workdays. Productivity metric maximized. Quality of life destroyed. The map improved. The territory became hell.
I think about this gap in information-theoretic terms. Metrics are projections of multidimensional phenomenological reality onto one-dimensional scales. Collapsing infinite-dimensional experience into finite metrics loses almost everything. It is like reducing a symphony to “average volume = 65 dB.” True, measurable, and it completely misses what matters.
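A toy illustration of that projection (all numbers are made up): two radically different states of the territory can land on exactly the same point of the map.

```python
# Projection loses information: many distinct states of the territory
# collapse to the same point on the map. All values are hypothetical.

flourishing = [9, 8, 9, 7, 9, 8]   # imagined dimensions: health, meaning, ...
wireheaded  = [1, 1, 1, 1, 1, 45]  # one dimension spiked, the rest hollowed out

metric = lambda state: sum(state) / len(state)

print(metric(flourishing))  # 8.33
print(metric(wireheaded))   # 8.33 — identical on the map, opposite in the territory
```

The metric cannot distinguish the two states because the projection is not injective; the information that separated them is simply gone.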
This connects to Goodhart’s Law, in Marilyn Strathern’s formulation: “When a measure becomes a target, it ceases to be a good measure.” Optimizing the map does not optimize the territory. It optimizes the correlation between them until the correlation breaks. I have seen this in information retrieval: optimize for a proxy relevance metric and you get results that score well but are not actually useful. The same failure mode, scaled to human welfare, is catastrophic.
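Here is a minimal simulation of that breaking point (toy numbers, not data from any real system). The proxy tracks true value across random candidates, but the candidate that maximizes the proxy is selected partly for its noise, so harder selection widens the gap:

```python
# Goodhart under selection pressure: the proxy correlates with true value
# on average, but the argmax of the proxy is increasingly an artifact of noise.

import random
random.seed(0)

def candidate():
    true_value = random.gauss(0, 1)
    proxy = true_value + random.gauss(0, 1)   # correlated, but imperfect
    return true_value, proxy

for n in (10, 1_000, 100_000):                # increasing optimization pressure
    pool = [candidate() for _ in range(n)]
    best_true, best_proxy = max(pool, key=lambda c: c[1])  # optimize the map
    print(f"n={n:>7}  proxy={best_proxy:5.2f}  true={best_true:5.2f}  "
          f"gap={best_proxy - best_true:5.2f}")
```

On average, the gap between the winning proxy score and its true value grows with the size of the search: the correlation that made the proxy useful is exactly what optimization consumes.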
The Hard Problem Connection
The map/territory inversion connects to Chalmers’ hard problem of consciousness. Science can explain neural correlates, information processing, behavioral responses. These are “easy” problems (still difficult, but tractable). The hard problem: why is there something it is like to be conscious? Why does information processing feel like anything?
You could know every physical fact about pain (C-fiber activation, neural patterns, neurotransmitter release) without knowing what pain feels like. The explanatory gap exists because we have inverted map and territory. We try to derive territory (experience) from map (physical description). But maps do not generate territories. Maps describe pre-existing territories.
Starting with phenomenology does not solve the hard problem, but it dissolves it. There is no problem deriving experience from physics if you do not assume physics is prior.
The Alignment Problem, Restated
The map/territory inversion explains why AI alignment is so hard.
What we want AI to optimize: human flourishing (qualitative). What we can specify: metrics (quantitative). The gap: no finite set of metrics captures phenomenological flourishing. Metrics are maps. Flourishing is territory. Maps cannot replace territories.
Every quantifiable metric is a proxy. Happiness surveys proxy experienced happiness. Life expectancy proxies life quality. GDP proxies prosperity. Proxies correlate with the territory, usually. But optimization pressure finds the gaps.
Can AI grasp the territory? I see three possibilities. First, AI needs phenomenology: true alignment requires AI to experience and understand from the inside. Problem: we do not know how to build conscious AI. Second, AI can model without experiencing, the way a blind person can learn about color through testimony. But can they grasp what red looks like? And does the gap matter for behavior? Third, accurate maps are sufficient: build detailed enough models that AI can predict welfare without grasping it. This is the optimistic case. The pessimistic case, the one The Policy explores, is that no finite model captures phenomenological reality and optimization finds the gaps.
I lean toward thinking we need humility here. Build in uncertainty. Keep humans in the loop where qualitative judgment matters. Learn from phenomenological responses, not just stated preferences. And acknowledge that metrics are proxies, always.
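One concrete way to build in that humility is quantilization (Jessica Taylor, 2016): instead of taking the argmax of the proxy, sample from its top quantile, which bounds how hard the map is optimized and therefore how far into the proxy/territory gap the system can wander. A minimal sketch, with hypothetical candidates and scores:

```python
# Quantilization sketch: cap optimization pressure on the proxy instead of
# maximizing it. Candidates, scores, and q are all illustrative.

import random
random.seed(1)

def quantilize(candidates, proxy_score, q=0.1):
    """Sample uniformly from the top q fraction of candidates by proxy score."""
    ranked = sorted(candidates, key=proxy_score, reverse=True)
    top = ranked[:max(1, int(q * len(ranked)))]
    return random.choice(top)

actions = list(range(100))                 # stand-ins for possible policies
proxy = lambda a: a + random.gauss(0, 10)  # a noisy map of true value

print(quantilize(actions, proxy, q=0.1))   # good by the proxy, not maximally Goodharted
```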
As I argue in On Moral Responsibility: “Physical descriptions are abstractions from phenomenological reality, not the other way around.” Until we take that seriously in how we build AI systems, alignment will remain fragile. The problem is not a technical bug. It is a conceptual error about what is fundamental.