About

Alex Towell

PhD student, tool maker, open source developer. I have stage 4 cancer and I keep working. Most of what I make is public.

What I Work On

Statistical computing and reliability — Maximum likelihood estimation for censored and masked failure data. Weibull series systems. My MS in Mathematics was built around this.
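
To give a flavor of the censored-data likelihoods involved, here is a minimal sketch (an illustrative toy, not the thesis code): observed failures contribute the Weibull log-density, while right-censored units contribute only the log-survival term.

```python
import math

# Toy log-likelihood for right-censored Weibull data.
# failures: exact failure times; censored: right-censoring times.
def weibull_loglik(shape, scale, failures, censored):
    ll = 0.0
    for t in failures:  # exact observations contribute log f(t)
        ll += (math.log(shape / scale)
               + (shape - 1) * math.log(t / scale)
               - (t / scale) ** shape)
    for t in censored:  # censored observations contribute log S(t)
        ll -= (t / scale) ** shape
    return ll

# Maximizing this over (shape, scale) gives the MLE; masked series-system
# data adds a mixture over which component caused each failure.
print(weibull_loglik(1.0, 1.0, [1.0], [1.0]))  # -> -2.0
```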

Encryption and privacy — My CS thesis asked: can you search an encrypted index without revealing what you’re looking for? That led to years of work on secure index designs, query obfuscation, and information-theoretic confidentiality. More recently I’ve built practical tools for the same impulse — cryptoid for encrypted content on static sites with user management, and pagevault for password-protecting HTML, Markdown, PDFs, or any content on static hosting.
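
The thesis question has a classic toy form worth sketching (a generic illustration of keyed-hash secure indexes, not the actual designs from the thesis or from cryptoid/pagevault): the index stores keyed hashes of terms rather than the terms themselves, so only a key holder can form a query token.

```python
import hmac
import hashlib

def token(key: bytes, term: str) -> bytes:
    """Derive a deterministic search token for a term under a secret key."""
    return hmac.new(key, term.encode(), hashlib.sha256).digest()

def build_index(key, docs):
    """Map each term's token to the set of documents containing it."""
    index = {}
    for doc_id, terms in docs.items():
        for t in terms:
            index.setdefault(token(key, t), set()).add(doc_id)
    return index

def search(index, key, term):
    """Look up a term; the plaintext term never reaches the index."""
    return index.get(token(key, term), set())

key = b"secret-key"
index = build_index(key, {"doc1": ["weibull", "mle"], "doc2": ["mle"]})
print(sorted(search(index, key, "mle")))  # -> ['doc1', 'doc2']
```

Even this toy shows the core tension the research deals with: the server learns nothing about query terms, but repeated identical tokens still leak access patterns, which is where query obfuscation comes in.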

Digital preservation and personal data — I want the things I make and collect to be readable in a hundred years. I even built a deadman switch. Nobody’s paying attention to this work, and that’s fine. I built an ecosystem of small CLIs for managing my own data: bookmarks, conversations, email, photos, ebooks, notes, repos, metadata. Plain formats (JSONL, SQLite, plaintext), designed to compose through pipes. Tying it all together is longecho — a self-describing, plaintext-first archival philosophy with graceful degradation. It’s self-similar: every data source in a longecho archive is itself longecho-compliant, so operations like site generation, search, and export to JSON apply recursively all the way down. There’s also a universal archive (arkiv), a relational algebra over the format (jsonl-algebra), and a dozen domain-specific tools. They also scratch a mathematical itch: small tools that compose are morphisms in a category of data transformations.
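
The pipes-and-morphisms idea can be sketched in a few lines (a hypothetical toy, not the actual jsonl-algebra API): each tool is a function from JSONL records to JSONL records, so a pipeline is literally function composition.

```python
import json

def parse_jsonl(text):
    """Parse JSONL text into a list of records."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def select(pred):
    """Relational selection: keep records matching a predicate."""
    return lambda records: [r for r in records if pred(r)]

def project(keys):
    """Relational projection: keep only the named fields."""
    return lambda records: [{k: r[k] for k in keys if k in r} for r in records]

def compose(*fs):
    """Chain record transformations left to right, like a shell pipeline."""
    def pipeline(records):
        for f in fs:
            records = f(records)
        return records
    return pipeline

data = parse_jsonl('{"title": "a", "tag": "math", "year": 2020}\n'
                   '{"title": "b", "tag": "code", "year": 2023}')
pipeline = compose(select(lambda r: r["year"] > 2021), project(["title"]))
print(pipeline(data))  # -> [{'title': 'b'}]
```

Because every stage consumes and produces the same plain format, the tools stay independently useful and the composition is associative, which is the categorical point.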

Libraries — C++ interval sets and algebraic hashing. R packages for composable likelihoods and distribution algebras. Python tree structures and fuzzy inference. Published on PyPI and CRAN.

Algorithmic probability — Solomonoff induction, Kolmogorov complexity, the theoretical foundations of prediction. I’ve been reading and writing about this since 2010.

AI tools — I build a lot of tooling for working with AI — Claude Code plugins for academic writing, publication pipelines, site management. I’m also building eidola, which packages all my archived data into a conversable persona — a simulacrum that can represent my thinking after I can’t. The cognitive MRI paper is the academic version of the same idea: mapping the topology of a mind’s conversations.

I also write fiction — a novel about AI alignment, a long-form piece on AI safety and cognitive limits.

Background

I’ve been at SIUE a long time. BS in Computer Science, MS in CS (encrypted search), MS in Mathematics and Statistics (reliability theory), now a PhD. Each degree sent me in a different direction.

I care about open source as scientific practice, algebraic structure in algorithms, and composable tools.

Find Me