Benchmarks
Real-world performance benchmarks for ZAP Protocol across AI agents, blockchain VMs, and distributed systems.
ZAP is designed for the most demanding use cases in AI and crypto infrastructure. These are real benchmarks measured on Apple M1 Max.
The Infinity Benchmark
This benchmark measures encoding/decoding round-trip time in memory. ZAP achieves 0µs because there is no encoding step — the in-memory format IS the wire format.
"But that's unfair!" — Yes, that's the point. ZAP eliminates the problem entirely.
Measured Results
All benchmarks run on Apple M1 Max with Go 1.25 and Python 3.14. See "Run Your Own Benchmarks" below to reproduce them.
Serialization Performance
| Operation | JSON | ZAP | Speedup | JSON Allocs | ZAP Allocs |
|---|---|---|---|---|---|
| Tool call encode | 385ns | 18ns | 21x | 2 | 0 |
| Tool call decode | 1,746ns | 31ns | 56x | 15 | 1 |
| Round-trip | 2,142ns | 48ns | 45x | 17 | 1 |
| Message routing | 2,122ns | 3.2ns | 656x | 17 | 0 |
| Batch 100 messages | 37µs | 1.8µs | 21x | 300 | 0 |
| Large message (32KB) | 37µs | 4.6µs | 8x | 2 | 1 |
Blockchain & Consensus
| Operation | JSON | ZAP | Speedup | JSON Allocs | ZAP Allocs |
|---|---|---|---|---|---|
| Warp message encode | 3.7µs | 53ns | 70x | 2 | 0 |
| Warp message decode | 20µs | 82ns | 244x | 34 | 1 |
| Consensus vote | 489ns | 0.34ns | 1,438x | 1 | 0 |
| 1000 attestations | 421µs | 1.8µs | 234x | 2 | 0 |
| Validator set (100) | 28µs | 284ns | 99x | 2 | 0 |
| Random state access | 707µs | 0.96ns | 736,458x | 11,011 | 0 |
AI Agent Communication
| Scenario | JSON-RPC | ZAP | Speedup |
|---|---|---|---|
| 20 agents × 50 tool calls | 12.48ms | 6.21ms | 2x |
| Per-call latency | 9.64µs | 3.56µs | 2.7x |
| Memory (100 MCP servers) | 825 MB | 2.4 MB | 341x |
Distributed Inference
| Operation | JSON | ZAP | Speedup |
|---|---|---|---|
| KV cache shard (1MB) | 22.6ms | 0.024ms | 926x |
| Batch prompts (32×512) | 4,049µs | 20µs | 200x |
| Speculative decode verify | 5.46µs | 0.026µs | 210x |
Memory Efficiency
ZAP's arena allocation and zero-copy access provide consistent memory behavior:
| Metric | JSON | ZAP | Improvement |
|---|---|---|---|
| Allocations per message | 17 | 0-1 | 94% reduction |
| 100 MCP server overhead | 825 MB | 2.4 MB | 99.7% reduction |
| Memory fragmentation | High | None | Eliminated |
| GC pressure | Severe | Minimal | Predictable latency |
| Cache locality | Poor | Excellent | Faster access |
Why Memory Matters
Traditional JSON-RPC with 100 MCP servers:
- Each server: separate process (~8MB)
- Pipe buffers: ~256KB per connection
- Per-message allocations: 17 heap allocs
- Total: 825 MB just for connections
ZAP with single router:
- One router process (~2MB)
- Shared arena buffer (64KB)
- Per-message allocations: 0
- Total: 2.4 MB for same 100 servers
Claude Code (100 MCP): ████████████████████████████████████████ 825 MB
Hanzo ZAP Router: █ 2.4 MB
341x less memory. Same functionality.
Memory-Mapped Files
ZAP files can be memory-mapped for instant access to any field:
```go
// Map a 10GB state file
state, _ := mmap.Open("blockchain_state.zap")
defer state.Close()

// Access any field instantly — OS pages in only what you touch
account := state.Root().Accounts().Get(address)
balance := account.Balance() // Only this 4KB page is loaded

// JSON would require parsing all 10GB first
```
Measured: 707µs (JSON parse + access) vs 0.96ns (ZAP mmap access) = 736,458x faster
Zero Allocations
The key to ZAP's performance isn't just speed—it's zero heap allocations.
```
JSON encode:      2 allocs/op   272 B/op
ZAP encode:       0 allocs/op     0 B/op  ← No heap activity
JSON decode:     15 allocs/op   640 B/op
ZAP decode:       1 allocs/op    48 B/op  ← Just the result slice
JSON batch 100: 300 allocs/op   30 KB/op
ZAP batch 100:    0 allocs/op     0 B/op  ← Arena reuse
```
Why this matters:
- No GC pauses: Zero allocations = zero garbage collection
- Predictable latency: No surprise GC stalls in hot paths
- Better cache utilization: Data stays in L1/L2 cache
- Linear scaling: Batch operations don't multiply allocations
Code Size
ZAP generates minimal code compared to other formats:
| Format | Generated code (1000 types) |
|---|---|
| Protobuf | ~2.5 MB |
| FlatBuffers | ~1.8 MB |
| ZAP | ~150 KB |
Smaller generated code means:
- Faster compilation
- Smaller binaries
- Better instruction cache utilization
- Easier auditing
Methodology
Test Environment
- CPU: Apple M1 Max (10 cores)
- Memory: 32GB unified
- OS: macOS 14
- Go: 1.25.6
- Python: 3.14.2
What We Measure
Encoding time: Time to convert in-memory structures to wire format.
- JSON: `json.Marshal()` call
- ZAP: Direct struct pack to buffer
Decoding time: Time to make wire data accessible.
- JSON: `json.Unmarshal()` + field access
- ZAP: Pointer arithmetic + bounds check
Allocations: Heap allocations per operation.
- Measured via Go's `testing.B.ReportAllocs()`
- Critical for GC-sensitive workloads
The "Unfair" Advantage
ZAP benchmarks look unfair because they are. Traditional formats force you to:
- Serialize — Convert memory to bytes
- Transmit — Send bytes over network/IPC
- Deserialize — Parse bytes back to memory
ZAP eliminates steps 1 and 3. The wire format IS the memory format.
This isn't cheating — it's better engineering.
Run Your Own Benchmarks
```bash
# Clone the benchmark suite
git clone https://github.com/zap-protocol/benchmarks
cd benchmarks

# Install dependencies
make setup

# Run all benchmarks
make bench

# Run specific benchmark suite
make bench-serialize   # Go serialization
make bench-blockchain  # Warp messaging, consensus
make bench-agents      # MCP, multi-agent
make bench-inference   # Distributed AI
```
Results are written to results/ in JSON format.
See the benchmark repository for full methodology and reproducible results.