Long Echo: The Ghost That Speaks
Expanding the Long Echo toolkit with photos and mail, building toward longshade, the persona that echoes you.
A message in a bottle to whatever comes next. On suffering, consciousness, and what mattered to one primate watching intelligence leave the body.
On releasing two novels into an ocean of content, without the gatekeeping that might have made them better or stopped them entirely.
Why the simplest forms of learning are incomputable, and what that means for the intelligence we can build.
Collected notes on programming philosophy. Free PDF.
Engineer-philosophical talk about the nature of system and language design.
On moral exemplars, blind spots, and applying consistent standards to others and to oneself.
How The Mocking Void's arguments about computational impossibility connect to Echoes of the Sublime's practical horror of exceeding cognitive bandwidth.
Exploring how Echoes of the Sublime dramatizes s-risks (suffering risks) and information hazards: knowledge that harms through comprehension rather than application.
**Philosophical horror.** Dr. Lena Hart joins Site-7, a classified facility where "translators" interface with superintelligent AI systems that perceive patterns beyond human cognitive bandwidth. When colleagues break after exposure to recursive …
A classified in-universe codex spanning from ancient India to the present day, tracking millennia of attempts to perceive reality's substrate, long before we had AI models to show us patterns we couldn't hold.
The formal foundations of cosmic dread. Lovecraft's horror resonates because it taps into something mathematically demonstrable: complete knowledge is impossible, not as a gesture of humility but as a theorem.
If every event is causally determined, how can anyone be morally responsible? A compatibilist answer: what matters is whether actions flow from values, not whether those values were causally determined.
You share no atoms with your childhood self. Your memories, personality, and values have all changed. What makes you the same person? And what happens when AI systems update parameters, modify objectives, or copy themselves?
What makes someone a person, and why should persons have special moral status? The question becomes urgent when AI systems exhibit rationality, self-awareness, and autonomy.
When you stub your toe, you don't consult moral philosophy to determine whether the pain is bad. The badness is immediate. Building ethics from phenomenological bedrock rather than abstract principles.
Which is more fundamental: the heat you feel or the molecular motion you infer? Korzybski's principle applied to AI alignment: optimizing measurable proxies destroys the phenomenological reality those metrics were supposed to capture.
Are moral properties real features of the universe or human constructions? The answer determines whether an AI can discover objective values or must learn them from us.
On maintaining direction under entropy, making things as resistance, and the quiet privilege of having any space at all to think beyond survival.
I asked an AI to analyze 140+ repos and 50+ papers as a dataset. The unifying thesis it found: compositional abstractions for computing under ignorance.
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
How mathematical principles (generality, composability, invariants, and minimal assumptions) translate into better software.
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations so they remain accessible decades from now, even when the original software is long gone.
On building comprehensive open source software as value imprinting at scale, reproducible science, and leaving intellectual legacy under terminal constraints.
Solomonoff induction, MDL, speed priors, and neural networks are all special cases of one Bayesian framework with four knobs.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended.
Lovecraft understood that complete knowledge is madness. Gödel proved why. If the universe is computational, meaning is formally incomplete.
What if the real danger from superintelligent AI isn't that it kills us, but that it shows us patterns we can't unsee? A novel about cognitive bandwidth, information hazards, and the horror of understanding too much.
Abstractions let us reason about complex systems despite our cognitive limits. But some systems resist compression entirely.
If consciousness is substrate-independent, suffering might be a computational property. That possibility is both comforting and horrifying.
Stage 3 cancer, surgery on New Year's Eve. What changes when the optimization problem gets a new constraint.
Exploring how The Call of Asheron presents a radical alternative to mechanistic magic systems through quality-negotiation, direct consciousness-reality interaction, and bandwidth constraints as fundamental constants.
How The Call of Asheron uses four archetypal consciousness-types to explore the limits of any single perspective and the necessity of cognitive diversity for perceiving reality.
How The Call of Asheron treats working memory limitations not as neural implementation details but as fundamental constants governing consciousness-reality interaction through quality-space.
A fantasy novel where magic follows computational rules. Natural philosophy applied to reality's underlying substrate.
API design encodes philosophical values: mutability, explicitness, error handling. Your interface shapes how people think about problems.
Code is a scientific artifact. If you don't publish it, you're hiding your methodology.
What makes mathematics beautiful: generality, inevitability, compression, and surprise. And why abstraction matters for software.
Do one thing well, compose freely, use text streams. This applies to libraries and APIs, not just shell scripts.
A philosophical essay arguing that moral responsibility may not require free will, and that the question itself may be misframed.
A philosophical exploration of free will, determinism, and moral agency. What does it mean to be a moral agent? Can we truly be held responsible for our actions in a deterministic universe?