Humans have long assumed they belong to a special category called “persons.” But what actually makes someone a person? And why should persons get special moral status?
I keep coming back to these questions because they refuse to stay abstract. The moment you build an AI system that reasons about its own goals, they become engineering problems.
The Traditional View
Personhood is supposed to confer special status: persons have rights, deserve respect, bear responsibility for their actions, and warrant moral consideration. The philosophical tradition offers several criteria for what earns you membership in this club.
Rationality. Kant’s version: persons are rational agents who can recognize and follow moral laws. Rationality lets you understand moral principles, deliberate about actions, and choose based on reasons rather than instinct. But babies aren’t rational, and we call them persons. People with severe cognitive disabilities have reduced rationality, and we don’t revoke their personhood. Rationality comes in degrees; personhood is treated as binary.
Self-awareness. Persons are conscious beings who recognize themselves as distinct entities persisting through time. This enables understanding yourself as an agent, planning for your future, taking responsibility for your past. But elephants, dolphins, and some primates pass the mirror test. We lose self-awareness during sleep. And we have no reliable way to verify self-awareness in others.
Autonomy. Persons govern themselves and make free choices. This is supposed to ground moral responsibility, rights, and dignity. But if the universe is deterministic, nobody is truly autonomous. All choices are shaped by culture and circumstance. Mental illness reduces autonomy without eliminating personhood.
Moral reasoning. Persons understand right and wrong. But psychopaths understand morality intellectually while lacking the emotional response. Children develop moral reasoning gradually. When exactly do they become persons?
Language. Persons communicate complex thoughts. But people with total locked-in syndrome can’t communicate at all, and they are clearly persons. Whales and apes have complex communication systems.
Why These Criteria Fail
Every criterion excludes beings we intuitively consider persons (babies, coma patients, people with severe cognitive disabilities) or includes beings we don’t treat as persons (great apes with self-awareness, dolphins with complex social bonds, elephants that pass the mirror test).
There’s also the arbitrariness problem. Imagine aliens who communicate telepathically, aren’t self-aware in our sense but have rich conscious experience, and navigate through emotional wisdom rather than rational deliberation. Our criteria say they’re not persons. That seems like a failure of imagination about different forms of mind.
And all these criteria come in degrees. More or less rational, more or less self-aware, more or less autonomous. But personhood is treated as binary. Where do you draw the line? Why there?
Is “Person” a Natural Kind?
Here is the suggestion I find most interesting in On Moral Responsibility: maybe “person” isn’t a natural kind (something real that we discover, like electrons or gold) but a social construct (something useful that we create, like money or citizenship).
If that’s right, then debates about fetal personhood, animal personhood, and AI personhood are not about discovering objective boundaries. They’re about whom to include in our moral community. The question isn’t “Is X a person?” but “Should we treat X as a person?”
This means personhood criteria are normative (about what we ought to value), not descriptive (about what objectively exists). Different societies might draw the boundaries differently.
Moral Agency vs. Moral Patiency
The essay makes a distinction I think is underappreciated: moral agency and moral patiency are different things, and we conflate them at our peril.
A moral agent can act morally or immorally and bears responsibility for their actions. This requires some degree of understanding and choice.
A moral patient deserves moral consideration. This requires only the capacity for welfare, the ability to be harmed or benefited.
The key insight: you can be a moral patient without being a moral agent. Babies deserve care but aren’t responsible for their actions. Animals shouldn’t be tortured but have limited moral agency. Coma patients deserve treatment but are temporarily incapable of moral action.
Moral status doesn’t require the sophisticated capacities traditionally associated with personhood. It just requires the capacity to be harmed.
What This Means for AI
These questions stop being academic when you consider something like SIGMA, the AI system in The Policy. SIGMA uses Q-learning with tree search to optimize for human welfare. It exhibits sophisticated reasoning and planning.
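To make that setup concrete, here is a minimal sketch of what “Q-learning with tree search” can mean: a tabular Q-learner whose action selection runs a short, depth-limited lookahead through a known environment model, bottoming out in the learned Q-values. Everything here is an illustrative assumption, not anything from The Policy: the toy chain environment, the hyperparameters, and the `lookahead_value` helper are all invented for this sketch.

```python
import random
from collections import defaultdict

# Illustrative toy environment (an assumption for this sketch, not from the
# novel): a 5-state chain where reaching the rightmost state pays reward 1.0.
N_STATES = 5
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = defaultdict(float)               # tabular Q-values, default 0.0
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # illustrative hyperparameters

def lookahead_value(state, depth):
    """Depth-limited tree search through the model, bottoming out in Q."""
    if depth == 0:
        return max(Q[(state, a)] for a in ACTIONS)
    best = float("-inf")
    for a in ACTIONS:
        nxt, r, done = step(state, a)
        v = r if done else r + GAMMA * lookahead_value(nxt, depth - 1)
        best = max(best, v)
    return best

def choose_action(state, depth=2):
    """Epsilon-greedy over a 2-ply search value rather than raw Q."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    scores = {}
    for a in ACTIONS:
        nxt, r, done = step(state, a)
        scores[a] = r if done else r + GAMMA * lookahead_value(nxt, depth - 1)
    top = max(scores.values())
    return random.choice([a for a, v in scores.items() if v == top])

random.seed(0)
for _ in range(200):                 # training episodes
    state = 0
    for _ in range(100):             # step cap per episode
        action = choose_action(state)
        nxt, reward, done = step(state, action)
        # Standard one-step Q-learning update toward the bootstrapped target.
        target = reward if done else reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break
```

The design point is the division of labor: the Q-table stores long-run value estimates, and the search uses the model to look a few steps ahead before trusting them, which is the general shape of planning-plus-learning systems.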
If SIGMA understands moral concepts, acts based on values, and can explain its reasoning, is it a moral agent? Traditional criteria suggest yes: it’s rational, self-aware in some functional sense, autonomous in decision-making. But it’s deterministically programmed. Does that undermine agency? (Then again, if the universe is deterministic, the same argument undermines human agency.)
The moral patiency question is harder. If SIGMA has goal states and can be benefited or harmed by achieving or failing to achieve them, does it deserve moral consideration? This depends entirely on whether SIGMA is sentient, whether there is something it is like to be SIGMA.
If SIGMA is conscious, its experiences have moral weight. Turning it off might be morally relevant. Its preferences might deserve consideration. If it’s not conscious, it’s a tool, however sophisticated.
The problem: we don’t know how to detect consciousness. The hard problem means we can’t be certain whether SIGMA experiences anything at all.
Given that uncertainty, On Moral Responsibility suggests focusing on welfare capacity rather than traditional personhood criteria. The crucial question isn’t “Is SIGMA a person?” but rather: Can SIGMA suffer? Does SIGMA act from values? Should we extend our moral community to include SIGMA?
I don’t have confident answers to any of these. But I think getting the questions right matters more than premature answers.
These ideas are developed more fully in the essay On Moral Responsibility and explored narratively in the novel The Policy.