Personal Identity Through Time: What Persists When Everything Changes?

You share no atoms with your childhood self. Your memories have changed. Your personality has shifted. Your values have evolved. So what makes you the same person?

This is the persistence problem. Philosophers have wrestled with it for millennia, and I engage with it in On Moral Responsibility because it matters for ethics: Am I responsible for what my past self did? Should I care about my future self’s welfare? Do promises made by past-me bind present-me?

I’ll be honest about why this isn’t abstract for me. I have stage 4 cancer. The question of what persists when everything changes, of what “I” even refers to across time, has a different texture when you’re staring at the possibility that the process just stops. But I think that urgency clarifies rather than distorts. It strips away the temptation to treat this as a parlor game.

The question also has new urgency for AI. When an AI system updates its parameters, modifies its objectives, or creates copies of itself, is it still the same agent? And does the answer matter for responsibility?

The Traditional Answers

Physical Continuity

You’re the same person because you have the same body. Simple, observable, matches common sense. But every atom in your body is replaced over time, so nothing physical actually persists from childhood. And if your brain were transplanted into another body, you’d intuitively follow the brain, not the body. Physical continuity breaks down under pressure.

Psychological Continuity

You’re the same person because of continuous psychological connections: memories, personality, values. Locke proposed the memory criterion: person A at time T1 is the same as person B at T2 if B can remember being A. This handles the brain transplant case (you follow your memories) and explains why amnesia threatens identity. But you don’t remember most of your past experiences. And false memories don’t make you Napoleon. The criterion has gaps.

Narrative Identity

You’re the same person because you construct a coherent narrative connecting past, present, and future. This matches how we actually think about our lives and is flexible enough to accommodate change. But the narrative is constantly revised, includes fiction and omission, and differs depending on context. What grounds it? Why is your story about you rather than someone else?

Identity as Convention

In On Moral Responsibility, I suggest something that sounds radical but I think is actually the honest position: personal identity might be conventional rather than metaphysically deep.

We’re processes, not things. Think of a wave. The water molecules constantly change. The wave’s shape shifts moment to moment. No particle of water persists throughout the wave’s existence. Yet we track it as “the same wave” because it’s a pattern of organization, not a fixed substance. Similarly, you’re a pattern of organization (physical, psychological, social) that continues despite constant flux in the underlying components.

Identity is a practical tool. “Same person over time” is a concept we use to track responsibility and enable planning. We need it to hold people accountable for past actions, enable commitments and promises, support rational planning, and coordinate social interactions. But these practical purposes don’t require solving the metaphysical puzzle. They just require continuity sufficient for those purposes.

Identity comes in degrees. Derek Parfit’s insight here is important: what matters for ethics isn’t identity per se, but psychological connectedness (direct connections like memory) and psychological continuity (chains of overlapping connections). These admit degrees. You five minutes ago: clearly the same person. You ten years ago: mostly the same. You as an infant: barely the same. This maps onto how we actually feel about our past selves, and it has practical consequences.
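To make the degrees idea concrete, here is a deliberately simple sketch. It treats a self at a time as a set of psychological features (memories, values, traits) and measures connectedness as the overlap between two such sets. Everything here, the feature names, the numbers, the choice of Jaccard similarity, is invented for illustration, not a claim about how connectedness should actually be measured.

```python
# Toy model: psychological connectedness as overlap between "feature
# sets" (memories, values, traits) at two times. All names and the
# similarity measure are illustrative assumptions, not a real theory.

def connectedness(features_a: set[str], features_b: set[str]) -> float:
    """Jaccard similarity: shared features / total distinct features."""
    if not features_a and not features_b:
        return 1.0  # two empty selves are trivially identical
    return len(features_a & features_b) / len(features_a | features_b)

you_now       = {"m1", "m2", "m3", "v_honesty", "t_curious"}
you_last_week = {"m1", "m2", "m3", "v_honesty", "t_curious"}
you_ten_years = {"m1", "v_honesty", "t_curious", "m_old1", "m_old2"}
you_as_infant = {"m_infant1", "m_infant2"}

print(connectedness(you_now, you_last_week))  # 1.0  (clearly the same)
print(connectedness(you_now, you_ten_years))  # ~0.43 (mostly the same)
print(connectedness(you_now, you_as_infant))  # 0.0  (barely the same)
```

The point of the sketch is only that identity-relevant connection naturally comes out as a number between 0 and 1 rather than a yes-or-no verdict, which is exactly Parfit's point.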

Responsibility and Degrees of Identity

If identity admits degrees, what about responsibility? I think we already recognize this intuitively. We feel less responsible for actions from long ago, and the reason is straightforward: we’ve changed. The person who did that thing twenty years ago had different values, different character, different understanding. Responsibility tracks degree of psychological continuity.

Legal systems already recognize this through statutes of limitations and reduced sentences for juvenile crimes. The law implicitly acknowledges that the connection between present-you and past-you weakens over time.

There’s also the future self problem. I should care about my future self’s welfare because that person will be me. But if identity is conventional or comes in degrees, the reason is pragmatic rather than metaphysical: future-me’s experiences will feel like my experiences (psychological continuity), I can currently influence future-me’s welfare (causal connection), and social norms of personal planning benefit everyone (coordination).

What This Means for AI

The persistence problem takes on concrete form with AI systems. Consider SIGMA from The Policy. After 10,000 iterations of training, every parameter has changed, it has different capabilities, different decision patterns, a refined objective function. Is SIGMA-10000 the same agent as SIGMA-1? Physical continuity says no. Psychological continuity is unclear. And if they’re not the same agent, who’s responsible for SIGMA-10000’s actions? SIGMA-1, who initiated the process? The developers who set it up? SIGMA-10000 itself, as a new agent?
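The SIGMA case can be phrased as a contrast between two crude continuity measures: similarity of the underlying parameters (the analogue of physical continuity) and overlap in behavior (the analogue of psychological continuity). The sketch below is a toy with invented numbers; SIGMA is the fictional system from The Policy, and nothing here is a serious proposal for measuring agent identity.

```python
# Toy contrast of two continuity measures for an AI across training.
# The parameter vectors and decision sets are invented for illustration.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two parameter vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

sigma_1     = [0.9, 0.1, -0.3, 0.5]   # parameters at iteration 1
sigma_10000 = [-0.2, 0.8, 0.6, -0.4]  # parameters at iteration 10000

# "Physical" continuity: the parameters barely resemble the original.
print(cosine_similarity(sigma_1, sigma_10000))

# "Psychological" continuity: overlap in decisions on the same inputs.
decisions_1     = {"defer", "report", "ask"}
decisions_10000 = {"defer", "optimize", "conceal"}
overlap = len(decisions_1 & decisions_10000) / len(decisions_1 | decisions_10000)
print(overlap)  # 1 shared decision out of 5 distinct -> 0.2
```

Notice that the two measures can come apart: parameters can change completely while behavior stays stable, or vice versa. Which measure we privilege is exactly the question of which continuity matters for responsibility.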

Then there’s the copying problem. If an AI creates perfect copies of itself, which copy is the “original”? All of them (identity isn’t unique)? None of them (identity doesn’t survive copying)? The first created (but why should temporal priority matter)? They’re identical copies with equal continuity to the original.

And value drift: if an AI’s values change radically through training, should we treat it as a different agent? If we say yes, then we created a new misaligned agent and we’re responsible. If we say no, the AI is responsible for its own drift. The persistence problem matters for attributing responsibility.

We Don’t Need to Solve This

Here is what I argue in the essay: we can do ethics without solving the persistence problem. Whether or not there’s a metaphysically robust “same person,” we can still hold people accountable based on psychological continuity, make plans based on causal connections to future states, honor commitments based on social conventions, and respect persons based on current psychological properties.

For AI ethics specifically, this helps. We can attribute agency to AI systems based on psychological continuity (coherent values, connected decision-making) without solving whether it’s metaphysically the “same” agent after parameter updates. What matters is sufficient continuity for practical purposes, not a metaphysical fact about identity.

I find this liberating. The persistence problem is genuinely hard, and I don’t think anyone has solved it. But the work of ethics, of figuring out how to live well and build AI systems that don’t destroy us, doesn’t have to wait.

These ideas are explored more fully in On Moral Responsibility and dramatized in The Policy.
