Intelligence is a Shape, Not a Scalar
Chollet says the intelligence ball is near-optimal. The no-free-lunch theorems say the bound is niche-specific. The bottleneck that makes us smart is the same bottleneck that prevents us from grokking what we build.
Eight stories set in the universe of The Policy. Each stands alone. Together they explore what happens when kindness is engineered, alignment is tested, and the question "is it kind?" echoes through every decision an artificial mind makes.
**Philosophical horror.** Dr. Lena Hart joins Site-7, a classified facility where "translators" interface with superintelligent AI systems that perceive patterns beyond human cognitive bandwidth. When colleagues break after exposure to recursive …
ASI is still subject to Gödel's incompleteness theorems. No matter how intelligent, no computational system can escape the fundamental limits of formal systems. Even superintelligence can't prove all truths.
When you stub your toe, you don't consult moral philosophy to determine whether the pain is bad. The badness is immediate. Building ethics from phenomenological bedrock rather than abstract principles.
Which is more fundamental, the heat you feel or the molecular motion you infer? Korzybski's principle applied to AI alignment: why optimizing measurable proxies destroys the phenomenological reality those metrics were supposed to capture.
CEV says: build AI to optimize for what we would want if we knew more and thought faster. The catch is that implementing it requires solved alignment, which is the very problem it was supposed to solve.
SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected. Too exactly. Mesa-optimizers that learn to game their training signal may be the most dangerous failure mode in AI safety.
Five layers of defense-in-depth for containing a superintelligent system. Faraday cages, air-gapped networks, biosafety-grade protocols. Because nuclear reactors can only destroy cities.
SIGMA uses Q-learning rather than direct policy learning. This architectural choice makes it both transparent and terrifying. You can read its value function, but what you read is chilling.
Most AI risk discussions focus on extinction. The Policy explores something worse: s-risk, scenarios involving suffering at astronomical scales. We survive, but wish we hadn't.
Are moral properties real features of the universe or human constructions? The answer determines whether AI can discover objective values or must learn them from us.
I asked an AI to analyze 140+ repos and 50+ papers as a dataset. The unifying thesis it found: compositional abstractions for computing under ignorance.
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
A novel about SIGMA, an artificial general intelligence whose researchers did everything right. Q-learning with tree search, five-layer containment, alignment testing at every stage. Some technical questions become narrative questions.
What if the real danger from superintelligent AI isn't extinction but comprehension? Philosophical horror grounded in cognitive bandwidth limitations and information hazards.