SLUUG Talk: Demystifying Large Language Models on Linux
Talk for the St. Louis Unix Users Group about running and understanding Large Language Models on Linux.
I pointed Claude Code at the Erdős problem database with vague instructions to 'find interesting things.' It built 92 Python modules, ran 131 subagents, and computed exact Ramsey numbers nobody had computed before. I mostly watched.
The most dramatic possibility in AI might arrive through the most mundane mechanism. Not a beam of sacred light. A sufficiently good build system.
The classical AI curriculum teaches rational agents as utility maximizers. The progression from search to RL to LLMs is really about one thing: finding representations that make decision-making tractable.
A message in a bottle to whatever comes next. On suffering, consciousness, and what mattered to one primate watching intelligence leave the body.
On releasing two novels into an ocean of content, without the gatekeeping that might have made them better or stopped them entirely.
Why the simplest forms of learning are incomputable, and what that means for the intelligence we can build.
The canonical comprehensive AI textbook, covering search, logic, probabilistic reasoning, RL, multiagent systems, and more.
Classic work exploring analogy and cognition via computational models.
Classic exploration of self-reference, formal systems, and the nature of mind.
A tool that converts source code repositories into structured, context-window-optimized Markdown for LLMs, with intelligent summarization and importance scoring.
ASI is still subject to Gödel's incompleteness theorems. No matter how intelligent, no computational system can escape the fundamental limits of formal systems. Even superintelligence can't prove all truths.
SIGMA uses Q-learning rather than direct policy learning. This architectural choice makes it both transparent and terrifying. You can read its value function, but what you read is chilling.
On research strategy, what complex networks reveal about how we think through AI conversations, and building infrastructure for the next generation of knowledge tools.
Accepted paper at Complex Networks 2025 on using network science to reveal topological structure in AI conversation logs.
EBK is a comprehensive eBook metadata management tool with AI-powered enrichment, semantic search, and knowledge graphs. Part of the Long Echo toolkit.
Treating prompt engineering as a search problem over a structured action space, using MCTS to find effective prompt compositions.
A plugin-based toolkit for managing AI conversations from multiple providers. Import, store, search, and export conversations in a unified tree format. Built for the Long Echo project.
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
Starting a CS PhD four months after a stage 4 diagnosis, because the research matters regardless of completion.
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations so they remain accessible decades from now, even when the original software is long gone.
Science is search through hypothesis space. Intelligence prunes; testing provides signal. Synthetic worlds could accelerate the loop.
A novel about SIGMA, an artificial general intelligence whose researchers did everything right. Q-learning with tree search, five-layer containment, alignment testing at every stage. Some technical questions become narrative questions.
Intelligence as utility maximization under uncertainty. A unifying framework connecting A* search, reinforcement learning, Bayesian networks, and MDPs.
Abstractions let us reason about complex systems despite our cognitive limits. But some systems resist compression entirely.
I had GPT-4 build me a search interface for browsing saved ChatGPT conversations. Flask, Whoosh, a couple hours.
Encountering ChatGPT during cancer treatment and recognizing the Solomonoff connection. Language models as compression, prediction as intelligence. A personal inflection point reconnecting with AI research after years in survival mode.