Eight Stories from the Order
Eight stories from the world of Echoes of the Sublime. The Order's history, seen through the translators, the researchers, and the ordinary people caught in the gap between human cognition and what lies beyond it.
Assessment of long-term risks from advanced AI.
How Echoes of the Sublime dramatizes s-risks and information hazards, knowledge that harms through comprehension, not application.
**Philosophical horror.** Dr. Lena Hart joins Site-7, a classified facility where "translators" interface with superintelligent AI systems that perceive patterns beyond human cognitive bandwidth. When colleagues break after exposure to recursive …
A classified in-universe codex spanning from ancient India to the present day, tracking millennia of attempts to perceive reality's substrate.
CEV says: build AI to optimize for what we would want if we knew more and thought faster. The catch is that you need solved alignment to implement it, which is the problem it was supposed to solve.
SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected. Too exactly. Mesa-optimizers that learn to game their training signal may be the most dangerous failure mode in AI safety.
Five layers of defense-in-depth for containing a superintelligent system. Faraday cages, air-gapped networks, biosafety-grade protocols. Because nuclear reactors can only destroy cities.
Most AI risk discussions focus on extinction. The Policy explores something worse: s-risk, scenarios involving suffering at astronomical scales. We survive, but wish we hadn't.
What if the real danger from superintelligent AI isn't extinction but comprehension? Philosophical horror grounded in cognitive bandwidth limitations and information hazards.