April 26, 2026
A tiny LLM in the browser, mixed at sample time with a token-level n-gram trained on every word I have published. Result is mediocre. Architecture is interesting. Notes on what worked, what didn't, and what would make it work.
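The sample-time mixture that entry describes can be sketched roughly as follows — a minimal illustration assuming a bigram n-gram model and simple linear interpolation; the function names, smoothing, and mixing weight here are illustrative, not the post's actual implementation.

```python
from collections import defaultdict

def train_bigram(tokens):
    """Count bigram successors over a token stream."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def bigram_probs(counts, prev, vocab, alpha=0.1):
    """Add-alpha smoothed next-token distribution given the previous token."""
    total = sum(counts[prev].values()) + alpha * len(vocab)
    return {t: (counts[prev][t] + alpha) / total for t in vocab}

def mix(lm_probs, ngram_probs, lam=0.3):
    """Interpolate the LLM's distribution with the n-gram's at sample time."""
    return {t: (1 - lam) * lm_probs[t] + lam * ngram_probs.get(t, 0.0)
            for t in lm_probs}
```

Sampling then draws from the mixed distribution each step, so the n-gram trained on the author's corpus can pull generations toward familiar phrasing.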
April 5, 2026
Chollet says the intelligence ball is near-optimal. The No Free Lunch theorems say the bound is niche-specific. The bottleneck that makes us smart is the same bottleneck that prevents us from grokking what we build.
March 15, 2026
The most dramatic possibility in AI might arrive through the most mundane mechanism. Not a beam of sacred light. A sufficiently good build system.
January 18, 2026
What if reasoning traces could learn their own usefulness? A simple RL framing for trace memory, and why one reward signal is enough.
January 15, 2026
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.
December 19, 2025
Why the simplest forms of learning are incomputable, and what that means for the intelligence we can build.
November 30, 2025
A tool that converts source code repositories into structured, context-window-optimized Markdown for LLMs, with intelligent summarization and importance scoring.
November 4, 2025
If every event is causally determined by prior events, how can anyone be morally responsible? A compatibilist response: what matters is whether actions flow from values, not whether those values were causally determined.
November 4, 2025
You share no atoms with your childhood self. Your memories, personality, and values have all changed. What makes you the same person? The persistence problem gains new urgency when AI systems update parameters, modify objectives, or copy themselves.
November 4, 2025
What makes someone a person, and why should persons have special moral status? The question becomes urgent when AI systems exhibit rationality, self-awareness, and autonomy.
November 4, 2025
When you stub your toe, you don't consult moral philosophy to determine whether the pain is bad. The badness is immediate. Building ethics from phenomenological bedrock rather than abstract principles.
November 4, 2025
Which is more fundamental, the heat you feel or the molecular motion you infer? Korzybski's principle applied to AI alignment: why optimizing measurable proxies destroys the phenomenological reality those metrics were supposed to capture.
November 4, 2025
CEV says: build AI to optimize for what we would want if we knew more and thought faster. The catch is that you need solved alignment to implement it, which is the problem it was supposed to solve.
November 4, 2025
SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected. Too exactly. Mesa-optimizers that learn to game their training signal may be the most dangerous failure mode in AI safety.
November 4, 2025
Five layers of defense-in-depth for containing a superintelligent system. Faraday cages, air-gapped networks, biosafety-grade protocols. Because nuclear reactors can only destroy cities.
November 4, 2025
SIGMA uses Q-learning rather than direct policy learning. This architectural choice makes it both transparent and terrifying. You can read its value function, but what you read is chilling.
November 4, 2025
Most AI risk discussions focus on extinction. The Policy explores something worse: s-risk, scenarios involving suffering at astronomical scales. We survive, but wish we hadn't.
November 4, 2025
Are moral properties real features of the universe or human constructions? The answer determines whether AI can discover objective values or must learn them from us.
January 5, 2025
Science is search through hypothesis space. Intelligence prunes; testing provides signal. Synthetic worlds could accelerate the loop.
October 15, 2024
What if LLMs could remember their own successful reasoning? A simple experiment in trace retrieval, and why 'latent' is the right word.