ZeroIPC reimagines inter-process communication by treating shared memory not as passive storage, but as an active computational substrate. It brings modern programming abstractions—futures, lazy evaluation, reactive streams, and CSP-style channels—to IPC with zero-copy performance.
The Core Insight
Traditional IPC systems treat shared memory as a bucket for data. You serialize, copy, deserialize. Even “zero-copy” systems are often just optimized data containers.
ZeroIPC asks a different question: What if shared memory could hold not just data, but computation itself?
This shift enables:
- Futures that represent computations in progress across processes
- Lazy values that defer expensive work and share cached results
- Reactive streams with functional operators (map, filter, fold)
- CSP channels for Go-style structured concurrency
All with zero serialization overhead and language independence.
Design Philosophy
1. Minimal Metadata
ZeroIPC stores only three pieces of information per structure:
- Name: For discovery
- Offset: Where data starts
- Size: How much memory is allocated
No type information. No schema. No versioning metadata.
This enables true language independence—Python and C++ can both create, read, and write structures. Type safety is enforced per-language (C++ templates, Python NumPy dtypes).
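Concretely, a table entry is just a fixed 40-byte record. A minimal C++ sketch of that record, with the field layout taken from the binary format specification later in this post:

```cpp
#include <cstdint>

// Sketch of a ZeroIPC table entry, matching the 40-byte layout
// described in the binary format section below.
struct TableEntry {
    char     name[32];  // null-terminated name, used for discovery
    uint32_t offset;    // where the structure's data starts
    uint32_t size;      // how much memory is allocated
};
static_assert(sizeof(TableEntry) == 40, "spec requires 40-byte entries");
```

Because there is nothing else to agree on, any language that can map shared memory and read these three fields can participate.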
2. Language Equality
There’s no “primary” language. All implementations are first-class:
C++ Producer:
```cpp
#include <zeroipc/memory.h>
#include <zeroipc/array.h>

zeroipc::Memory mem("/sensor_data", 10*1024*1024);
zeroipc::Array<float> temps(mem, "temperature", 1000);
temps[0] = 23.5f;
```
Python Consumer:
```python
from zeroipc import Memory, Array
import numpy as np

mem = Memory("/sensor_data")
temps = Array(mem, "temperature", dtype=np.float32)
print(temps[0])  # 23.5
```
Same binary format. No bindings. No FFI. Pure implementations following the same specification.
3. Zero Dependencies
Each implementation stands alone:
- C: Pure C99, POSIX only
- C++: Header-only, C++23
- Python: Pure Python with NumPy
No protobuf. No serialization libraries. Just direct memory access.
Traditional Data Structures
ZeroIPC provides lock-free implementations of standard structures:
| Structure | Description | Concurrency |
|---|---|---|
| Array | Fixed-size contiguous storage | Atomic operations |
| Queue | Circular MPMC buffer | Lock-free CAS |
| Stack | LIFO with ABA prevention | Lock-free CAS |
| Map | Hash map with linear probing | Lock-free |
| Set | Hash set for unique elements | Lock-free |
| Pool | Object pool with free list | Lock-free |
| Ring | High-performance streaming | Lock-free |
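As a quick illustration, here is a hedged sketch of cross-process use of the Queue; the `push`/`pop` names and the optional-returning `pop` are assumptions by analogy with the Array example above, not documented API:

```cpp
#include <zeroipc/memory.h>
#include <zeroipc/queue.h>

// Hypothetical sketch: a lock-free MPMC queue shared between processes.
zeroipc::Memory mem("/events", 1024 * 1024);
zeroipc::Queue<int> events(mem, "event_queue", 256);  // capacity 256

events.push(42);              // producer process
if (auto v = events.pop()) {  // consumer process; empty queue -> no value
    handle_event(*v);
}
```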
These are the foundation. The interesting part is what comes next.
Codata: Computation as First-Class Structure
Data vs Codata
Data structures answer “what values are stored?”
- Array: collection of values
- Map: key-value associations
- Queue: FIFO buffer
Codata structures answer “how are values computed?”
- Future: value that will exist
- Lazy: computation deferred
- Stream: potentially infinite sequence
- Channel: communication between processes
ZeroIPC is one of the first IPC systems to treat codata as first-class.
The Four Pillars of Codata
1. Future: Asynchronous Results
Cross-process async/await:
Process A - Computation:
zeroipc::Memory mem("/compute", 10*1024*1024);
zeroipc::Future<double> result(mem, "expensive_calc");
// Perform expensive computation
double value = run_simulation();
result.set_value(value);
Process B - Waiting:
zeroipc::Memory mem("/compute");
zeroipc::Future<double> result(mem, "expensive_calc", true);
// Wait with timeout
if (auto value = result.get_for(std::chrono::seconds(5))) {
process_result(*value);
} else {
handle_timeout();
}
Mathematical model: a future is a write-once cell, roughly `Future<T> ≅ Pending | Ready(T) | Error(E)`; `get` blocks until the state leaves `Pending`.
Key properties:
- Single assignment (immutable once set)
- Multiple readers (many processes can wait)
- Error propagation (can set error state)
- Timeout support (prevent indefinite blocking)
2. Lazy: Deferred Computation with Memoization
Expensive computations cached and shared:
```cpp
// Process A: Define computation (not executed yet!)
zeroipc::Memory mem("/cache", 50*1024*1024);
zeroipc::Lazy<Matrix> inverse(mem, "matrix_inverse");
inverse.set_computation([&original]() {
    return compute_inverse(original);  // Expensive!
});

// Process B: First access triggers computation
zeroipc::Lazy<Matrix> inverse(mem, "matrix_inverse", true);
Matrix m = inverse.get();  // Computes and caches

// Process C: Gets cached result instantly
Matrix m2 = inverse.get();  // Returns immediately
```
Mathematical model: memoized deferral, roughly `Lazy<T> ≅ Thunk(() → T) | Cached(T)`; the first `get` forces the thunk, and every later `get` observes the cached value.
Use cases:
- Configuration values from complex logic
- Shared computation results across processes
- Derived data that might not be needed
3. Stream: Reactive Data Flows
Functional reactive programming across processes:
Process A - Sensor Data:
zeroipc::Memory mem("/sensors", 10*1024*1024);
zeroipc::Stream<double> temperature(mem, "temp_stream", 1000);
while (running) {
double temp = read_sensor();
temperature.emit(temp);
std::this_thread::sleep_for(100ms);
}
Process B - Stream Processing Pipeline:
zeroipc::Memory mem("/sensors");
zeroipc::Stream<double> temperature(mem, "temp_stream");
// Functional transformation pipeline
auto fahrenheit = temperature
.map(mem, "temp_f", [](double c) {
return c * 9/5 + 32;
});
auto warnings = fahrenheit
.filter(mem, "warnings", [](double f) {
return f > 100.0;
})
.window(mem, "5min", 5min)
.map(mem, "avg", [](auto window) {
return std::accumulate(window.begin(), window.end(), 0.0)
/ window.size();
});
warnings.subscribe([](double avg_high_temp) {
if (avg_high_temp > 105.0) {
trigger_emergency_cooling();
}
});
Stream operators:
- `map`: Transform each element
- `filter`: Keep matching elements
- `fold`: Reduce to a single value (sketched below)
- `take`/`skip`: Limit/offset
- `window`: Group into time/count windows
- `merge`/`zip`: Combine streams
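For instance, `fold` keeps a running aggregate over everything a stream emits. A sketch, continuing from the `temperature` stream above and assuming `fold` follows the same `(mem, name, ...)` convention as `map` and `filter` (the exact signature is an assumption):

```cpp
// Hypothetical: reduce the temperature stream to a running sum.
// The (mem, name, init, fn) signature is assumed by analogy with map/filter.
auto running_sum = temperature.fold(mem, "temp_sum", 0.0,
    [](double acc, double t) { return acc + t; });
```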
Mathematical model: a stream is the final coalgebra `Stream<T> ≅ νX. T × X`, an infinite sequence defined by what can be observed next; operators are stream transformers `Stream<A> → Stream<B>` and compose like ordinary functions.
Key properties:
- Backpressure (ring buffer prevents overwhelming consumers)
- Multi-cast (multiple consumers on same stream)
- Composable (operators chain functionally)
- Lazy subscription (processing runs only while subscribers exist)
4. Channel: CSP-Style Communication
Go-style channels for structured concurrency:
Worker Pool Pattern:
zeroipc::Memory mem("/workers", 50*1024*1024);
// Buffered channels
zeroipc::Channel<Task> tasks(mem, "task_queue", 100);
zeroipc::Channel<Result> results(mem, "results", 100);
// Dispatcher process
for (auto& task : work_items) {
tasks.send(task); // Blocks if buffer full
}
tasks.close();
// Worker processes (multiple instances)
while (auto task = tasks.receive()) {
Result r = process_task(*task);
results.send(r);
}
// Aggregator process
std::vector<Result> all_results;
while (auto result = results.receive()) {
all_results.push_back(*result);
}
Channel types:
- Unbuffered: Synchronous rendezvous (sketched below)
- Buffered: Asynchronous up to capacity
- Closing: Signal no more values
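An unbuffered channel behaves like Go's `make(chan T)`: sender and receiver must meet in time. A hypothetical sketch, assuming capacity 0 selects unbuffered mode (that constructor convention is not confirmed by the spec):

```cpp
// Hypothetical: capacity 0 -> synchronous rendezvous (an assumption).
zeroipc::Channel<int> handshake(mem, "handshake", 0);

// Sender blocks here until a receiver arrives...
handshake.send(1);

// ...and in another process, receive() completes the rendezvous.
auto v = handshake.receive();
```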
Mathematical model: CSP-style synchronization; an unbuffered channel is a rendezvous in which `send` and `receive` complete together, and a buffered channel of capacity n behaves as a bounded FIFO between the two processes.
Real-World Applications
High-Frequency Trading
```cpp
using namespace std::chrono_literals;

Stream<MarketData> quotes(mem, "quotes");
auto signals = quotes
    .window(mem, "1min", 1min)
    .map(mem, "vwap", calculate_vwap)
    .filter(mem, "triggers", is_trade_signal);
```
IoT Data Pipeline
```cpp
Stream<SensorData> raw(mem, "raw_sensor");
auto processed = raw
    .filter(mem, "valid", validate)
    .map(mem, "normalized", normalize)
    .window(mem, "5min", 5min)
    .map(mem, "aggregated", aggregate)
    .foreach(store_to_database);
```
Distributed Simulation
```cpp
Future<State> states[N];
for (int i = 0; i < N; i++) {
    states[i] = Future<State>(mem, "state_" + std::to_string(i));
}
auto final_state = Future<State>::all(states).map(combine_states);
```
Binary Format Specification
The magic behind language independence is a simple binary format:
```
[Table Header][Table Entries][Structure 1][Structure 2]...
```
Table Header (16 bytes):
- Magic number: `0x5A49504D` ("ZIPM")
- Version: currently 1
- Entry count: number of active entries
- Next offset: allocation pointer

Table Entry (40 bytes):
- Name: 32 bytes (null-terminated)
- Offset: 4 bytes
- Size: 4 bytes

Data structures: raw binary, with layout determined by the structure type.
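Reading the format back is equally simple. A self-contained illustration of name-based discovery over this layout (the struct field order follows the lists above; this is a sketch, not the library's actual code):

```cpp
#include <cstdint>
#include <cstring>

struct TableHeader { uint32_t magic, version, entry_count, next_offset; };  // 16 bytes
struct TableEntry  { char name[32]; uint32_t offset, size; };               // 40 bytes

// Walk the entry table at the start of the mapped segment and
// return the entry whose name matches, or nullptr if absent.
const TableEntry* find_entry(const uint8_t* base, const char* name) {
    auto* hdr = reinterpret_cast<const TableHeader*>(base);
    auto* entries = reinterpret_cast<const TableEntry*>(base + sizeof(TableHeader));
    for (uint32_t i = 0; i < hdr->entry_count; ++i)
        if (std::strncmp(entries[i].name, name, sizeof(entries[i].name)) == 0)
            return &entries[i];
    return nullptr;
}
```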
All implementations follow this spec exactly. Cross-language tests verify compatibility.
Theoretical Foundation
Category Theory Perspective
In category theory, data and codata are dual concepts:
Data: Initial algebras (constructed bottom-up)
- Built from constructors
- Pattern matching for destruction
- Example: `List = Nil | Cons(head, tail)`
Codata: Final coalgebras (defined by observations)
- Defined by projections
- Copattern matching for construction
- Example: `Stream = {head: T, tail: Stream<T>}`
Coinduction
Streams demonstrate coinduction—defining infinite structures by observations:
```cpp
// Infinite stream of Fibonacci numbers
Stream<int> fibonacci(Memory& mem) {
    return Stream<int>::unfold(mem, "fib",
        std::pair{0, 1},
        [](auto state) {
            auto [a, b] = state;
            return std::pair{a, std::pair{b, a + b}};
        });
}
```
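Because the stream is defined by observation, only what a consumer demands is ever computed. A hypothetical continuation using the `take` operator listed earlier (its exact signature is assumed):

```cpp
#include <iostream>

// Hypothetical: taking ten elements forces only ten unfold steps.
auto first_ten = fibonacci(mem).take(mem, "fib10", 10);
first_ten.subscribe([](int x) { std::cout << x << ' '; });
```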
Performance
- Array Access: Identical to native arrays (zero overhead)
- Queue Operations: Lock-free with atomic CAS
- Memory Allocation: O(1) bump allocation (sketched below)
- Discovery: O(n) where n ≤ max_entries
- Stream Processing: Backpressure prevents memory bloat
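The O(1) allocation claim follows from bump allocation: creating a structure just advances the table's next-offset field, as in this sketch (using an atomic counter is my assumption about how this stays safe across processes):

```cpp
#include <atomic>
#include <cstdint>

// Sketch: O(1) bump allocation. The offset only ever moves forward;
// freed space is not reclaimed. In a real implementation this counter
// would live in the shared-memory table header itself.
uint32_t bump_allocate(std::atomic<uint32_t>& next_offset, uint32_t nbytes) {
    return next_offset.fetch_add(nbytes);  // returns the old offset
}
```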
CLI Tools
The `zeroipc-inspect` tool provides comprehensive debugging:
```bash
# List all shared memory segments
zeroipc-inspect list

# Show detailed information
zeroipc-inspect show /sensor_data

# Monitor a stream in real time
zeroipc-inspect monitor /sensors temperature_stream

# Dump raw memory
zeroipc-inspect dump /compute --offset 0 --size 1024
```
Comparison with Traditional IPC
| Aspect | Traditional IPC | ZeroIPC |
|---|---|---|
| Model | Message passing | Computational substrate |
| Coupling | Tight (sender/receiver) | Loose (producer/consumer) |
| Timing | Synchronous | Asynchronous/Reactive |
| Composition | Manual | Functional operators |
| Patterns | Request-response | Streams, futures, lazy |
| Overhead | Serialization | Zero-copy |
| Types | Schema-based | Duck typing / templates |
What ZeroIPC Is Good For
- ✅ High-frequency sensor data sharing
- ✅ Multi-process simulations
- ✅ Real-time analytics pipelines
- ✅ Cross-language scientific computing
- ✅ Reactive event processing
- ✅ Zero-copy producer-consumer patterns
What ZeroIPC Is Not For
- ❌ General-purpose memory allocation
- ❌ Network-distributed systems
- ❌ Persistent storage
- ❌ Garbage collection
Implementation Languages
C Implementation (c/)
- Pure C99 for portability
- Zero dependencies beyond POSIX
- Static library (`libzeroipc.a`)
- Minimal overhead
C++ Implementation (cpp/)
- Template-based zero overhead
- Header-only library
- Modern C++23 features
- RAII resource management
Python Implementation (python/)
- Pure Python, no compilation
- NumPy integration
- Duck typing flexibility
- mmap for direct memory access
Cross-Language Testing
```bash
# C++ writes, Python reads
cd interop && ./test_interop.sh

# Python writes, C++ reads
./test_reverse_interop.sh
```
All tests verify binary compatibility between implementations.
Future Explorations
The boundary between data and code continues to blur:
- Persistent Data Structures: Immutable structures with structural sharing
- Software Transactional Memory: ACID transactions in shared memory
- Dataflow Programming: Computational graphs in shared memory
- Actors: Message-passing actors with mailboxes
- Continuations: Suspended computations for coroutines
Why This Matters
ZeroIPC demonstrates that shared memory can be more than a data bucket. By treating computation as a first-class structure, we unlock patterns impossible in traditional IPC:
- Functional reactive programming across processes
- Lazy evaluation shared between languages
- Async/await without serialization
- CSP concurrency primitives in shared memory
The key insight: Computation itself becomes structure.
Resources
- Repository: github.com/queelius/zeroipc
- Binary Spec: SPECIFICATION.md
- Codata Guide: docs/codata_guide.md
- API Reference: docs/api_reference.md
License
MIT
Interested in IPC, functional programming, or systems design? ZeroIPC is actively developed and contributions are welcome. The project explores how far we can push modern programming abstractions into the traditionally low-level world of inter-process communication.