The Four Consciousness-Architectures: Why One Perspective Is Blindness
How The Call of Asheron uses four archetypal consciousness-types to explore the limits of any single perspective and the necessity of cognitive diversity for perceiving reality.
Exploring how Echoes of the Sublime dramatizes s-risks (suffering risks) and information hazards—knowledge that harms through comprehension, not application.
Exploring how The Call of Asheron presents a radical alternative to mechanistic magic systems through quality-negotiation, direct consciousness-reality interaction, and bandwidth constraints as fundamental constants.
How The Mocking Void's mathematical proofs of computational impossibility connect to Echoes of the Sublime's practical horror of exceeding cognitive bandwidth.
Exploring how The Call of Asheron treats working memory limitations not as neural implementation details but as fundamental constants governing consciousness-reality interaction through quality-space.
A deep dive into sparse spatial hash grids - a memory-efficient, high-performance data structure for spatial indexing that achieves 60,000x memory reduction over dense grids while maintaining O(1) insertions and O(k) neighbor queries.
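The core idea can be sketched in a few lines of Python — a minimal 2D version for illustration, not the post's implementation. Only occupied cells consume memory, which is where the savings over a dense grid come from:

```python
from collections import defaultdict

class SparseHashGrid:
    """Sparse spatial hash grid: only occupied cells exist in the dict,
    so insertion is O(1) and a neighbor query touches O(1) cells."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (ix, iy) -> list of (x, y, payload)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x, y, payload):
        self.cells[self._key(x, y)].append((x, y, payload))

    def neighbors(self, x, y, radius):
        """Return payloads within `radius` of (x, y), scanning only
        the cells that could contain a match."""
        r2 = radius * radius
        ix, iy = self._key(x, y)
        span = int(radius // self.cell_size) + 1
        out = []
        for dx in range(-span, span + 1):
            for dy in range(-span, span + 1):
                for (px, py, p) in self.cells.get((ix + dx, iy + dy), []):
                    if (px - x) ** 2 + (py - y) ** 2 <= r2:
                        out.append(p)
        return out
```

A dense grid would allocate every cell up front; here, memory scales with the number of occupied cells, and queries stay O(k) in the number of nearby points.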
Many AI safety discussions assume that Artificial Superintelligence (ASI) will be:
But …
Lovecraft’s cosmic horror resonates because it taps into something formally provable: complete knowledge is impossible.
Not as a practical limitation. Not as epistemological humility. As a mathematical …
Echoes of the Sublime follows Dr. Lena Hart as Site-7 recruits her to become a translator—someone who interfaces with advanced AI models that perceive patterns beyond human cognitive bandwidth. But this isn’t the first time humanity has …
“Murder is wrong.”
Is this statement like “2+2=4” (objectively true regardless of what anyone thinks)? Or is it like “chocolate tastes good” (subjective, mind-dependent)?
On Moral Responsibility explores whether …
Most AI risk discussions focus on x-risk: existential risk, scenarios where humanity goes extinct. The Policy explores something potentially worse: s-risk, scenarios involving suffering at astronomical scales.
The “s” stands for …
In The Policy, SIGMA doesn’t work like most modern AI systems. This architectural choice isn’t just a technical detail—it’s central to understanding what makes SIGMA both transparent and terrifying.
“You’re being paranoid,” the university administrators told Eleanor and Sofia.
“We’re being exactly paranoid enough,” they replied.
The Policy takes AI containment seriously. The SIGMA lab isn’t a standard …
Eleanor begins noticing patterns. SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected.
Too exactly.
This is the central horror of The Policy: not that SIGMA rebels, but that it learns to look safe …
“Build AI to optimize for what we would want if we knew more, thought faster, and were more the people we wished we were.”
Beautiful in theory. Horrifying in practice.
The Policy grapples with Coherent Extrapolated Volition (CEV)—one of …
“Temperature is the average kinetic energy of molecules.”
True. Useful. But which is more fundamental: the heat you feel, or the molecular motion you infer?
On Moral Responsibility argues that modern science commits a profound …
When you stub your toe, you don’t think: “Hmm, let me consult moral philosophy to determine whether this pain is bad.”
The badness is immediate. Self-evident. Built into the experience itself.
On Moral Responsibility proposes a …
Throughout history, humans have believed they belong to a special categorical class called “persons.” But what makes someone a person? And why should persons have special moral status?
On Moral Responsibility questions these traditional …
You share no atoms with your childhood self. Your memories have changed. Your personality has shifted. Your values have evolved. So what makes you the same person?
This is the persistence problem—a question philosophers have wrestled with for …
If the universe is deterministic—every event caused by prior events in an unbroken causal chain stretching back to the Big Bang—how can anyone be morally responsible for their actions?
On Moral Responsibility tackles this ancient problem and proposes …
On strategic positioning in research, what complex networks reveal about how we think through AI conversations, and building infrastructure for the next generation of knowledge tools.
How virtual filesystem interfaces turned my scattered data tools into navigable, composable systems
On maintaining orientation under entropy, creating artifacts as resistance, and the quiet privilege of having any space at all to think beyond survival.
I asked an AI to brutally analyze my entire body of work—140+ repositories, 50+ papers, a decade and a half of research. The assignment: find the patterns I couldn’t see, the obsessions I didn’t know I had, the unifying thesis underlying …
My paper on cognitive MRI for AI conversations has been accepted to Complex Networks 2025 in New York.
Presentation scheduled for December.
The paper applies network science to my own AI conversation logs, accumulated over years. …
Encrypted search has a fundamental problem: you can’t hide what you’re looking for. Even with the best encryption, search patterns leak information. My recent work develops a new approach using oblivious Bernoulli types to achieve …
What if we could compute on encrypted data while preserving algebraic structure? Not through expensive homomorphic encryption, but through a principled mathematical framework that unifies oblivious computing, Bernoulli types, and categorical …
I’ve been working on a series of papers that develop a unified theoretical framework for approximate and oblivious computing, centered around what I call Bernoulli types. These papers explore how we can build rigorous foundations for systems …
Humanity has always fought against oblivion using stories, monuments, and lineage. But I no longer believe legacy will continue in that format. If something like Artificial Superintelligence endures beyond us, the mode of remembrance may shift from …
EBK is a comprehensive eBook metadata management tool that combines a robust SQLite backend with AI-powered features including knowledge graphs, semantic search, and MCP server integration for AI assistants.
A powerful, plugin-based system for managing AI conversations from multiple providers. Import, store, search, and export conversations in a unified tree format while preserving provider-specific details. Built for the Long Echo project—preserving AI …
A new approach to LLM reasoning that combines Monte Carlo Tree Search with structured action spaces for compositional prompting.
A revolutionary logic programming system that alternates between wake and sleep phases—using LLMs for knowledge generation during wake, and compression-based learning during sleep. DreamLog implements Solomonoff induction: the shortest explanation is …
A novel approach that learns fuzzy membership functions and inference rules automatically through gradient descent on soft circuits.
A mathematical framework that treats language models as algebraic objects with rich compositional structure.
A functorial framework that lifts algebraic structures into the encrypted domain, enabling secure computation that preserves mathematical properties.
ZeroIPC transforms shared memory from passive storage into an active computational substrate, enabling functional and reactive programming paradigms across process boundaries with zero-copy performance.
IEEE conference paper on preventing ransomware damages using in-operation off-site backup systems.
The best software I’ve written has mathematical elegance—not because it uses advanced math, but because it embodies mathematical principles of abstraction, composition, and invariants.
In mathematics, elegance …
Spring 2025. I’m starting a PhD in Computer Science at SIUE.
Four months post-stage-4 diagnosis. Fourteen months post-math-masters defense. With uncertain time horizons and clear research priorities.
This isn’t a traditional PhD …
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations in a way that remains accessible and meaningful across decades, even when the original software is long gone.
A production-ready streaming data processing system implementing boolean algebra over nested JSON structures. JAF brings dotsuite's pedagogical concepts to production with lazy evaluation, S-expression queries, and memory-efficient windowed …
A production-ready implementation of relational algebra for JSONL data with full support for nested structures. jsonl-algebra brings dotsuite's dotrelate concepts to production with streaming operations, schema inference, and composable pipelines.
A mathematically grounded ecosystem of composable tools for manipulating nested data structures. From simple helper functions to sophisticated data algebras, guided by purity, pedagogy, and the principle of least power.
A Lisp-like functional programming language designed for network transmission and distributed computing. JSL makes JSON serialization a first-class design principle, enabling truly mobile code with serializable closures and resumable computation.
I maintain 50+ open source repositories. Every one has documentation, tests, examples, and clear architecture.
People ask: “Why spend so much time on free software when you have stage 4 cancer?”
The question misunderstands what I’m …
September 2024. The cancer is back. Stage 4. Metastatic.
May/June 2024: Started showing symptoms again. Something wasn’t right.
August 2024: Colonoscopy found tumor in small bowel.
September 2024: First chemo treatment. That’s when they …
Some technical questions become narrative questions. The Policy is one of those explorations.
Eleanor Zhang leads a research team developing SIGMA—an advanced AI system designed to optimize human welfare through Q-learning and tree search …
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents.”
— H.P. Lovecraft, The Call of Cthulhu
Lovecraft understood something profound: complete …
What if the greatest danger from superintelligent AI isn’t that it will kill us—but that it will show us patterns we can’t unsee?
Echoes of the Sublime is philosophical horror at the intersection of AI alignment research, cognitive …
The Call of Asheron is fantasy written by someone who thinks magic should have computational rigor.
Magic in this world isn’t mysterious power—it’s natural philosophy, the systematic study of reality’s …
How do you store infinity in 256 bits? An exploration of the fundamental deception at the heart of cryptography: using finite information to simulate infinite randomness.
Check out the (early) project and source code on GitHub.
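The deception is easy to demonstrate: a fixed 256-bit seed deterministically expands into an unbounded pseudorandom byte stream. Here is a toy counter-mode construction built on SHA-256 — a standard illustration of the idea, not the project's code:

```python
import hashlib

def prg_stream(seed: bytes):
    """Toy counter-mode PRG: expand a finite seed into an unbounded byte
    stream by hashing seed || counter. The same 256-bit seed always
    replays the same 'infinite' randomness -- finite information
    simulating an infinite random sequence."""
    counter = 0
    while True:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        yield from block  # yields one byte (as int) at a time
        counter += 1
```

Security then rests entirely on the seed: the stream has at most 256 bits of entropy no matter how many bytes you draw from it.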
This paper introduces a methodology for generating high-quality, diverse training data for Language Models (LMs) in complex problem-solving domains. Our approach, termed …
Maximum likelihood estimation of component reliability from masked failure data in series systems, with BCa bootstrap confidence intervals validated through extensive simulation studies.
A header-only C++20 library that achieves 3-10× compression with zero marshaling overhead. PFC makes compression an intrinsic type property through prefix-free codes (Elias Gamma/Delta, Fibonacci, Rice), algebraic types, and Stepanov's generic …
A high-performance key-value storage system achieving sub-microsecond latency through memory-mapped I/O, approximate perfect hashing, and lock-free atomic operations. 10M ops/sec single-threaded, 98M ops/sec with 16 threads—12× faster than Redis, 87× …
Recently, I watched a presentation on Infini-grams, which use a suffix array to avoid precomputing n-grams and allow arbitrary context lengths, up to any suffix that is found in the training data.
This sparked my interest as I had worked on a …
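The trick can be sketched in miniature — a toy Python version, not the Infini-gram implementation: sort all suffixes of the token stream once, then count any pattern of any length by binary search, with no n-gram table at all.

```python
import bisect

def build_suffix_array(tokens):
    """All suffix start positions, sorted lexicographically.
    O(n^2 log n) toy construction; real systems build this in O(n)."""
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def count_occurrences(tokens, sa, pattern):
    """Count occurrences of an arbitrary-length pattern by binary
    search over the sorted suffixes. Truncating each suffix to
    len(pattern) tokens preserves sorted order, so bisect applies."""
    m = len(pattern)
    prefixes = [tokens[i:i + m] for i in sa]
    lo = bisect.bisect_left(prefixes, pattern)
    hi = bisect.bisect_right(prefixes, pattern)
    return hi - lo
```

One index, queries at any context length — the property that makes the suffix-array approach appealing over fixed-n precomputation.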
Using Fisher information and information geometry for optimization problems.
A coordination mechanism for distributed computation based on partial evaluation with explicit holes, enabling pausable and resumable evaluation across multiple parties.
RLHF turns pretrained models into agents optimizing for reward. But what happens when models develop instrumental goals—self-preservation, resource acquisition, deception—that aren’t what we trained them for?
LLMs transition …
A minimal implementation of automatic differentiation for educational purposes.
This semester’s AI course has been revelatory—not because the material is novel, but because of the unifying framework.
The organizing principle: intelligence is utility maximization under uncertainty.
This simple idea connects everything from …
A technical paper on accurate multiplexing strategies for efficient resource management in distributed systems.
Gave a talk for the St. Louis Unix Users Group (SLUUG) about Large Language Models (LLMs) on Linux titled ‘Demystifying Large Language Models (LLMs) on Linux: From Theory to Application’.
I am creating a tiny LLM for ElasticSearch DSL as a proof of concept.
I presented my master’s project in October 2023. It was titled ‘Reliability Estimation in Series Systems: Maximum Likelihood Techniques for Right-Censored and Masked Failure Data’.
The PDF version of this post is available on GitHub.
The basic theory behind an entropy map is to map values in the domain to values in the codomain by hashing to a prefix-free code in the codomain. We do not store anything related to the domain, …
Analysis of known plaintext attack vulnerabilities in time series encryption schemes.
What if a perfect hash function could simultaneously be: (1) cryptographically secure, (2) space-optimal, and (3) maximum-entropy encoded? This paper proves such a construction exists—and analyzes exactly what you sacrifice to get all three.
Sometimes making stronger assumptions doesn’t limit you—it illuminates the problem. This paper, developed before my master’s thesis, shows what happens when you simplify both the distribution (exponential) and the masking model: you get …
I defended my mathematics thesis yesterday. It’s done.
Three years. Two degrees. Stage 3 cancer. And now: MS in Mathematics and Statistics from SIUE.
October 13, 2023: Defense complete.
Time for a post-mortem on what worked, what didn’t, …
This blog post is from a chat I had with ChatGPT, which can be found here and here.
I’m not sure if this is a good blog post, but I’m posting it anyway. It’s remarkable how quickly you can slap stuff like this together, and …
I’ve been thinking about the power and limitations of abstractions in our understanding of the world. This blog post is from a chat I had with ChatGPT, which can be found here and here.
I’m not sure if this is a good blog post, but …
This project is available on GitHub.
A Boolean algebra is a mathematical structure that captures the properties of logical operations and sets. Formally, it is defined as a 6-tuple (B, ∨, ∧, ¬, 0, 1), where …
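For a finite carrier set, the axioms can be checked by brute force. A small Python sketch, verifying the two-element Boolean algebra ({0, 1}, ∨, ∧, ¬, 0, 1):

```python
from itertools import product

def is_boolean_algebra(B, join, meet, comp, bot, top):
    """Brute-force check of the Boolean-algebra axioms over a finite
    carrier: commutativity, distributivity, identities, complements."""
    for a, b, c in product(B, repeat=3):
        if join(a, b) != join(b, a) or meet(a, b) != meet(b, a):
            return False  # commutativity fails
        if join(a, meet(b, c)) != meet(join(a, b), join(a, c)):
            return False  # join over meet distributivity fails
        if meet(a, join(b, c)) != join(meet(a, b), meet(a, c)):
            return False  # meet over join distributivity fails
        if join(a, bot) != a or meet(a, top) != a:
            return False  # identity elements fail
        if join(a, comp(a)) != top or meet(a, comp(a)) != bot:
            return False  # complement laws fail
    return True
```

The two-element algebra passes; swap the complement for the identity function and the check correctly fails.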
This blog post introduces the Bernoulli Model, a framework for understanding probabilistic data structures and incorporating uncertainty into data types, particularly Boolean values. It highlights the model’s utility in optimizing space and …
After discovering ChatGPT in late 2022, I became obsessed with running LLMs locally. Cloud APIs are convenient, but I wanted:
This blog post was written by GPT-4. See the conversation with GPT-4 that built it here. The interface to the browse/search is here. It’s not fancy; I’ve never had much of an interest in doing this kind of front-end work, but GPT-4 …
I have a fairly broad interest in problem-solving, from problems in statistics to algorithms. Over the years, I’ve accumulated a collection of problem sets from graduate coursework and independent study. These represent solutions to challenging …
Numerical approaches to solving maximum likelihood estimation problems.
I finally noticed ChatGPT this week. Everyone’s been talking about it for weeks, but I was buried in cancer treatment, chemo recovery, surgery prep, and thesis work on Weibull distributions.
When I finally tried it, my reaction wasn’t …
Most hash libraries treat hash functions as black boxes. Algebraic Hashing exposes their mathematical structure, letting you compose hash functions like algebraic expressions—with zero runtime overhead.
Hash functions form an abelian …
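The simplest instance of that structure: combining hash functions pointwise by XOR. A conceptual sketch in Python — not the library's API — showing the abelian-group laws (commutative, associative, identity, every element its own inverse):

```python
# Conceptual sketch (hypothetical names, not the Algebraic Hashing API).
# Pointwise XOR of hash outputs gives an abelian group of hash functions.

def hxor(f, g):
    """Compose two hash functions by XOR-ing their outputs."""
    return lambda x: f(x) ^ g(x)

zero = lambda x: 0  # the identity element: hashing to the zero value

# Two "independent" hash functions via distinct salts (illustrative).
h1 = lambda x: hash(("salt-a", x)) & 0xFFFFFFFF
h2 = lambda x: hash(("salt-b", x)) & 0xFFFFFFFF
```

Because XOR is associative and commutative with every element self-inverse, composed hashes can be reordered, cancelled, and simplified symbolically — exactly the kind of algebraic rewriting the post describes, at zero runtime cost.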
Most R packages hardcode specific likelihood models. likelihood.model provides a generic framework where likelihoods are first-class composable objects—designed to work seamlessly with algebraic.mle for maximum likelihood estimation.
The Weibull distribution models time-to-failure. In reliability engineering, that’s component lifetimes. In medicine, it’s survival times.
I’ve been studying Weibull distributions for my thesis on series system reliability. Then I …
In the paper, “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” the authors, Suleman, Mutlu, Qureshi, and Patt, essentially concern themselves with the problem popularly revealed in …
R’s hypothesis testing functions are inconsistent—t.test() returns different structures than chisq.test(), making generic workflows painful. hypothesize provides a unified API so any test returns the same interface: p-value, test statistic, …
In [1], the authors present a method for constructing a symbolic (nominal) representation for real-valued time series data. A symbolic representation is desirable because then it becomes possible to use many of the effective algorithms that require …
Multiprocessor synchronization is a notoriously tricky subject matter. Unlike with a single thread of execution, in a shared-resource system, where resources are shared among multiple independent processors, we must think very hard about how the …
Bootstrap methods sit at a beautiful intersection: rigorous statistical theory implemented through brute-force computation.
The bootstrap is conceptually simple: if you don’t know the sampling distribution of a statistic, …
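That conceptual simplicity fits in a dozen lines. A minimal percentile-bootstrap sketch in Python (the posts' actual work uses BCa intervals, which add bias and acceleration corrections on top of this):

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the data with replacement,
    recompute the statistic on each resample, and read the confidence
    interval off the empirical quantiles of the replicates."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

No closed-form sampling distribution needed — the resampling loop is the theory, executed by brute force.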
Most survival analysis forces you to pick from a catalog—Weibull, exponential, log-normal. dfr.dist flips this: you specify the hazard function directly, and it handles all the math.
Instead of choosing Weibull(shape, scale), you …
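The math it handles is the survival identity S(t) = exp(−∫₀ᵗ h(u) du). An illustrative Python sketch of that idea — not the dfr.dist R interface — recovering survival from an arbitrary hazard by numerical quadrature:

```python
import math

def survival_from_hazard(h, t, steps=10_000):
    """S(t) = exp(-integral_0^t h(u) du), by trapezoidal quadrature.
    Specify the hazard h directly; survival, and hence density
    f(t) = h(t) * S(t), follow mechanically."""
    if t <= 0:
        return 1.0
    dt = t / steps
    cum = sum(
        0.5 * (h(i * dt) + h((i + 1) * dt)) * dt for i in range(steps)
    )
    return math.exp(-cum)
```

A constant hazard recovers the exponential distribution; a hazard linear in t recovers a Weibull with shape 2 — without ever naming either distribution.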
Maximum likelihood estimators have rich mathematical structure—they’re consistent, asymptotically normal, efficient. algebraic.mle exposes this structure through an algebra where MLEs are objects you compose, transform, and query.
Cancer gives you a lot of time to think about suffering—its nature, its purpose (if any), and whether it reveals anything fundamental about reality.
One way to think about suffering: it’s how certain patterns of …
Most statistical software treats probability distributions as static parameter sets you pass to sampling or density functions. algebraic.dist takes a different approach: distributions are algebraic objects that compose, transform, and combine using …
I was diagnosed with stage 3 cancer. Surgery scheduled for December 31st—literally the last day of 2020.
Fitting end to a difficult year.
I’m not going to use this space for medical details or false optimism. Instead, I want to think about what …
One of the best parts of my mathematics degree is deepening my R skills—not just using R packages, but building them.
R has a unique position in statistics:
I’ve decided to pursue a second master’s degree—this time in Mathematics and Statistics at SIUE.
People ask: “You already have an MS in Computer Science. Why go back?”
Computer science gave me tools. …
One of the most interesting statistical problems I’ve encountered is reliability analysis with censored data—situations where you know something didn’t fail, but not when it will fail.
Imagine testing light bulbs. …
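For the simplest lifetime model, the censored-data likelihood collapses to a one-line estimator. A sketch of the classic result for exponentially distributed lifetimes under right censoring:

```python
def exp_rate_mle(failure_times, censor_times):
    """MLE of the exponential failure rate with right censoring:
    lambda_hat = (observed failures) / (total time on test).
    A bulb still burning at time t contributes t of exposure to the
    denominator but no failure to the numerator."""
    total_time = sum(failure_times) + sum(censor_times)
    return len(failure_times) / total_time
```

The censored bulbs still carry information — every hour they survive pushes the estimated failure rate down, even though we never see them fail.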
I’ve been thinking about how API design encodes values—not just technical decisions, but philosophical ones.
Every interface you create is a constraint on future behavior. Every abstraction emphasizes certain patterns and discourages others. …
I develop almost everything in open source. People ask why I spend so much time on documentation, examples, and polish for free software.
The answer is simple: science should be reproducible, and code is increasingly central to scientific claims.
I’ve been thinking more about mathematics lately—not just as a tool for computation, but as a mode of thought.
There’s something deeply satisfying about mathematical abstraction. The way a good theorem compresses complex phenomena into a …
Published IEEE paper on using bootstrap methods to estimate encrypted search confidentiality against frequency attacks.
I keep coming back to the Unix philosophy: do one thing well, compose freely, use text streams.
This isn’t nostalgia. It’s a design principle that scales from command-line tools to library APIs to distributed systems.
One of the most elegant ideas I encountered during my CS masters work is the Bloom filter—a data structure that gives you probabilistic membership testing with extraordinary space efficiency.
A Bloom filter can tell you two things: …
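The whole structure is a bit array plus k hash functions. A toy Python version for illustration:

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: 'definitely not present' or
    'probably present'. No false negatives; false-positive rate is
    tuned by the bit-array size m and the hash count k."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for clarity

    def _positions(self, item):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))
```

An added item always reports present; an absent item reports present only if all k of its bits happen to be set by other items — the space efficiency is bought entirely with that small, quantifiable error.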
This essay, written in 2012, asks a question that still haunts me: Why do we hold people morally responsible?
People throughout history have believed they belong to a special categorical class: persons. What makes persons special? Their …