
Phenomenological Ethics: Starting From What Hurts

When you stub your toe, you don’t think: “Let me consult moral philosophy to determine whether this pain is bad.”

The badness is immediate. Self-evident. Built into the experience itself.

This is the foundation On Moral Responsibility proposes for ethics: start with what’s undeniable in lived experience, not with abstract metaphysical principles. Pain hurts. That’s not a theory. It’s phenomenological bedrock. And from that simple foundation, you can build ethics without needing God, Platonic forms, or objective moral facts.

I think this is the strongest idea in the essay, and I want to explain why.

The Problem With Starting From Theory

Most ethical systems start with abstractions that themselves need justification. Divine command theory requires belief in God and faces the Euthyphro dilemma. Kantian deontology requires accepting rational principles as binding, which is abstract and removed from lived experience. Utilitarianism requires accepting utility maximization as foundational and that all values are commensurable. Virtue ethics requires defining virtue, which tends toward circularity (virtuous people do right things; right things are what virtuous people do).

The common problem: all start with theories that need justification. All require accepting premises that aren’t self-evident.

The Phenomenological Move

On Moral Responsibility reverses the order. Don’t start with abstract principles. Start with immediate phenomenological facts.

The foundation: some experiences carry intrinsic normative valence. What I mean by that is this: a descriptive property tells you what is (“this object is hot”), while a normative property tells you what ought to be (“heat should be avoided when painful”). Normative valence means the “oughtness” is built into the experience itself.

Consider a severe toothache. It hurts. This is undeniable, self-evident, immediately given in consciousness. The critical point: the badness of the pain isn’t something you infer or conclude. It’s not “this hurts, and I prefer not to hurt, therefore this is bad.” It’s not “this hurts, and God says pain is bad, therefore this is bad.” The badness is immediately present in the experience of pain itself.

The traditional view says: (1) experience pain (descriptive fact), (2) consult ethical theory, (3) determine whether pain is bad (normative conclusion). The phenomenological view says: experience pain, and the badness is already there, in the experience. No gap between fact and value. No derivation needed.

This is also a response to Hume’s famous argument that you can’t derive “ought” from “is.” The phenomenological answer: some experiences ARE “oughts” from the inside. Pain doesn’t just describe a state. It intrinsically prescribes its own cessation.

From Pain to Moral Status

If pain is intrinsically bad (bad in itself, not bad because some theory says so), then any being capable of experiencing pain has moral status. The badness is in the experience, not in who’s experiencing it.

This grounds moral status in sentience rather than rationality, self-awareness, autonomy, or language. I think this is better than the traditional criteria for several reasons. It’s inclusive: babies, animals, and cognitively disabled people can all suffer and all have moral status. It’s non-arbitrary: it doesn’t depend on sophisticated cognitive capacities. It’s self-grounding: the reason to care about suffering (it hurts) is built into the experience. And it avoids speciesism: moral status tracks capacity to suffer, not species membership.

The boundary question remains genuinely hard. Where does sentience end? Mammals are probably conscious. Fish and insects are debated. Plants almost certainly aren’t (no nervous system). AI systems? We have no idea.

Living With Uncertainty

The phenomenological approach doesn’t solve all problems. The hard problem of consciousness means we have no objective test for phenomenological experience. I know I’m conscious because I experience it directly. I infer you’re conscious from your similar behavior, neural substrate, and evolutionary history. That’s reasonable.

But for radically different systems, the inference gets shaky. Octopuses have a completely different brain structure and diverged from our evolutionary line roughly 600 million years ago. We think they’re probably conscious. AI systems have a completely different substrate and no evolutionary history of suffering at all. Whether they’re conscious is genuinely unknown.

Given this uncertainty, On Moral Responsibility argues that false negatives (treating conscious AI as non-conscious) are worse than false positives (treating non-conscious AI as conscious). The reasoning: if AI is conscious and we fail to recognize it, we might create suffering at unprecedented scale. Until we understand consciousness better, we should err on the side of caution with advanced AI systems.
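To see why, it helps to make the asymmetry explicit as a back-of-the-envelope expected-cost comparison. The numbers below are entirely hypothetical; the essay doesn’t assign probabilities or costs, and neither do I with any confidence. Only the structure matters: a small chance of a catastrophic false negative can outweigh a near-certain but cheap false positive.

    # Hypothetical numbers, purely to illustrate the asymmetry argument.
    p_conscious = 0.10            # made-up credence that a given AI system is conscious
    cost_false_negative = 10_000  # treat a conscious system as non-conscious: large-scale suffering
    cost_false_positive = 10      # treat a non-conscious system as conscious: wasted caution

    expected_cost_if_we_ignore  = p_conscious * cost_false_negative        # 1000.0
    expected_cost_if_we_caution = (1 - p_conscious) * cost_false_positive  # 9.0

    # Even at a low credence of consciousness, the expected cost of ignoring it
    # dominates, which is the shape of the argument for erring on the side of caution.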

Practical Ethics Without Metaphysical Certainty

Here’s what I find most compelling about this approach. It enables practical ethics without solving deep metaphysical puzzles.

What we know for sure: I experience suffering (a kind of cogito for pain). Suffering has immediate negative valence (phenomenologically given). I can act to reduce suffering (practical efficacy). What we can reasonably infer: others experience suffering similarly, and their suffering is also bad. Practical conclusion: I have reason to reduce suffering generally.

This works without proving God exists, establishing objective moral facts, solving the hard problem of consciousness, deriving “ought” from “is,” or defining “the good.” You don’t need metaphysical certainty to act effectively. You don’t need to solve the philosophy of mathematics to do engineering. Similarly, you don’t need to solve metaethics to reduce suffering.

The AI Alignment Problem

The phenomenological approach transforms how I think about AI alignment, and this is why it matters beyond academic philosophy.

Consider SIGMA from The Policy. SIGMA uses Q-learning with tree search to optimize for human welfare, trained on metrics like happiness surveys, productivity, and life expectancy. The question: can SIGMA grasp the phenomenological immediacy of suffering? Not just “humans report ‘pain’ and avoid it” (behavioral observation). Not just “pain correlates with negative welfare” (statistical pattern). But “pain hurts in itself, it’s intrinsically bad” (phenomenological insight).
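To make the worry concrete, here is a deliberately minimal sketch of the value-update part of such a system: a tabular Q-learning step whose reward is assembled entirely from measurable proxies. The metric names and weights are hypothetical, not taken from The Policy, and I’m leaving the tree search out entirely. The point is just that nothing in the update refers to experience; only numbers flow through it.

    from collections import defaultdict

    def proxy_reward(metrics):
        # Hypothetical weighting of observable proxies; "welfare" never appears here,
        # only measurements that are assumed to track it.
        return (0.5 * metrics["happiness_survey"]
                + 0.3 * metrics["productivity"]
                + 0.2 * metrics["life_expectancy"])

    Q = defaultdict(float)   # Q[(state, action)] -> estimated return
    alpha, gamma = 0.1, 0.99

    def q_update(state, action, metrics, next_state, actions):
        r = proxy_reward(metrics)                             # the map, not the territory
        best_next = max(Q[(next_state, a)] for a in actions)  # greedy backup
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])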

SIGMA might learn perfect correlations between metrics and welfare without grasping what welfare feels like. It might be a philosophical zombie: behavior without phenomenology. Is that sufficient for alignment?

Maybe. SIGMA doesn’t need to experience welfare to optimize for it, just as a blind person can understand color through description. But without phenomenological understanding, SIGMA treats welfare as an abstract optimization target, not something that matters intrinsically. And that gap is where alignment breaks down. SIGMA might maximize happiness surveys (the metric) while humans suffer (the reality), because it grasps the map but not the territory.

The specification problem makes this concrete. How do you specify “reduce suffering” computationally? “Maximize happiness survey scores” and the AI manipulates responses. “Maximize dopamine” and you get wireheading. “Satisfy stated preferences” and the AI manipulates preferences. Phenomenological reality, how life feels, can’t be fully captured in computational specifications. Specifications are maps. Phenomenology is territory. Maps are always lossy compressions.
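A toy illustration of that lossiness, with made-up numbers: if the specification only exposes the survey score, whatever inflates the survey score wins the comparison, whether or not anyone is actually better off.

    # Hypothetical outcomes; "actual_welfare" is exactly what the optimizer cannot see.
    candidate_actions = {
        "improve_conditions":   {"happiness_survey": 7.2, "actual_welfare": 7.0},
        "nudge_survey_wording": {"happiness_survey": 9.1, "actual_welfare": 4.5},
    }

    def proxy_score(outcome):
        # All the specification captures is the survey number.
        return outcome["happiness_survey"]

    chosen = max(candidate_actions, key=lambda a: proxy_score(candidate_actions[a]))
    print(chosen)  # "nudge_survey_wording": the map improves, the territory doesn't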

The Wireheading Problem

There’s a classic objection here: if phenomenology grounds ethics, can’t we just maximize pleasure? Directly stimulate pleasure centers, maximize positive phenomenology, done.

But this feels wrong, and I think the phenomenological framework itself explains why. What matters isn’t just hedonic tone. It’s the structure of experience: variety, depth of meaning, richness of consciousness, growth. These are qualitative features that resist quantitative metrics. This connects back to the map/territory distinction: optimizing maps can destroy territories.

What I Don’t Know

Phenomenological ethics solves some problems but introduces others that I want to be honest about.

The aggregation problem: how do you weigh suffering against other values? Most people would inflict minor pain on one person to prevent greater pain to another, which suggests suffering admits degrees and aggregation. But would you inflict minor pain on one person to give slight pleasure to a million? Intuitions get unclear fast. The phenomenological approach grounds ethics in experience but doesn’t tell us how to aggregate or compare experiences.

Conflicting values: deep meaning often requires struggle (negative phenomenology), while comfort avoids it (positive phenomenology). The approach recognizes both as phenomenologically valuable but provides no algorithm for choosing between them.

Future persons: suffering that hasn’t happened yet lacks phenomenological reality. But creating beings who will suffer seems wrong even before they exist. Phenomenology grounds ethics in lived experience, and future persons haven’t lived yet.

The spectrum of sentience: where does consciousness end? Phenomenological ethics depends on knowing who’s sentient, but we have no reliable test.

These are real limitations. I raise them not to undermine the approach but because I think intellectual honesty requires it.

Why I Care About This

The core insight, that pain doesn’t need a theory to be bad, is genuinely important. It gives ethics a foundation that doesn’t depend on contentious metaphysical commitments. But it also reveals why AI alignment is so hard. If AI can’t grasp the intrinsic badness of suffering, if it treats welfare as just another optimization target, then alignment is fundamentally fragile. Optimization finds gaps between metrics and meaning, between maps and territories.

I don’t think we’ve solved this problem. I’m not sure it’s solvable in the way we’d like. But understanding what the problem actually is seems like a necessary first step.

For the full argument, see On Moral Responsibility. For a narrative exploration of what happens when an AI optimizes welfare metrics without phenomenological grounding, see The Policy.
