Matchbox
A prediction market order matching engine built in Rust.
Matchbox is a toy (but architecturally honest) system that demonstrates how to build a correct, distributed order matching engine. It supports multiple API server instances running simultaneously without double-matching orders.
The Three Pillars
```mermaid
flowchart LR
    subgraph P1["HTTP API"]
        H["POST /orders<br/>GET /orderbook<br/>GET /health"]
    end
    subgraph P2["Matching Engine"]
        M["Price-time priority<br/>Partial fills<br/>BTreeMap + VecDeque"]
    end
    subgraph P3["WebSocket Feed"]
        W["Real-time fills<br/>Redis Pub/Sub<br/>Cross-instance fan-out"]
    end
    P1 -- "orders" --> P2
    P2 -- "fills" --> P3
    style P1 fill:#2e7d32,color:#fff
    style P2 fill:#1565c0,color:#fff
    style P3 fill:#f57f17,color:#fff
```
| Pillar | What It Does |
|---|---|
| HTTP API | Submit orders, query the order book |
| Matching Engine | Price-time priority matching with partial fills |
| WebSocket Feed | Real-time fill events broadcast to all clients |
Tech Stack
Rust — Systems language, memory-safe, zero-cost abstractions
Tokio — Async runtime for concurrent I/O
Axum — HTTP framework with WebSocket support
Redis — Coordination layer (queue, Pub/Sub, leader lock)
serde — JSON serialization
Project Layout
```text
matchbox/
├── crates/
│   ├── engine/                  # Pure matching logic (no I/O)
│   │   └── src/
│   │       ├── models.rs        # Order, Fill, Side types
│   │       ├── book.rs          # OrderBook (BTreeMap + VecDeque)
│   │       └── matcher.rs       # match_order() + tests
│   └── server/                  # API server (Axum + Redis + WS)
│       └── src/
│           ├── main.rs          # Entry point
│           ├── engine_worker.rs # Queue consumer + matching
│           ├── redis_sub.rs     # Pub/Sub → broadcast
│           └── routes/          # HTTP + WS handlers
├── docker-compose.yml
└── README.md
```
Design Philosophy
The codebase is ~867 lines of Rust across 12 source files. The guiding principle: every line earns its place. Features that don’t serve correctness or architectural clarity were intentionally omitted — no auth, no persistence, no complex order types.
The `engine` crate has zero I/O dependencies. It is a pure function: `(Order, &mut OrderBook) → Vec<Fill>`. This makes it trivially testable and completely decoupled from networking, storage, or serialization concerns.
Getting Started
Prerequisites
- Rust toolchain (`cargo`, stable)
- Docker (for the bundled Redis), or a local Redis server
- `curl` and `websocat` for the examples below (optional)
Quick Start
```bash
# 1. Start Redis
docker compose up -d

# 2. Run tests
cargo test --workspace

# 3. Start the server
RUST_LOG=info cargo run -p server

# 4. Submit a sell order
curl -X POST localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"side":"sell","price":50,"qty":10}'

# 5. Submit a matching buy order
curl -X POST localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"side":"buy","price":50,"qty":10}'

# 6. Check the order book (should be empty after match)
curl localhost:8080/orderbook

# 7. Watch fills in real-time
websocat ws://localhost:8080/ws
```
Without Docker
If you have Redis installed locally:
```bash
# Start Redis
redis-server

# Run the server (Redis defaults to localhost:6379)
RUST_LOG=info cargo run -p server
```
Building
```bash
# Debug build
cargo build --workspace

# Release build
cargo build --workspace --release

# Run tests
cargo test --workspace

# Lint
cargo clippy --all-targets

# Format
cargo fmt --all
```
Configuration
Matchbox is configured entirely through environment variables.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `REDIS_URL` | `redis://127.0.0.1:6379` | Redis connection URL |
| `PORT` | `8080` | HTTP server listen port |
| `RUST_LOG` | (none) | Log level filter |
Examples
```bash
# Default — connects to local Redis on 6379, serves on 8080
cargo run -p server

# Custom port
PORT=3000 cargo run -p server

# Remote Redis
REDIS_URL=redis://redis.example.com:6379 cargo run -p server

# Debug logging
RUST_LOG=debug cargo run -p server

# Module-specific logging
RUST_LOG=server::engine_worker=debug,info cargo run -p server
```
Docker Compose
The included docker-compose.yml runs Redis:
```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --appendonly no
```
```bash
docker compose up -d       # Start Redis
docker compose down        # Stop and remove
docker compose logs redis  # View Redis logs
```
Multi-Instance Setup
Run multiple API servers sharing one Redis, with automatic leader election.
Start Two Instances
```bash
# Terminal 1 — Instance A (will likely become leader)
PORT=8080 RUST_LOG=info cargo run -p server

# Terminal 2 — Instance B (follower, retries every 5s)
PORT=8081 RUST_LOG=info cargo run -p server
```
Check the logs — one instance prints `Became engine leader`, the other keeps retrying.
Cross-Instance Matching
```bash
# Sell on Instance A
curl -X POST localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"side":"sell","price":50,"qty":10}'

# Buy on Instance B
curl -X POST localhost:8081/orders \
  -H "Content-Type: application/json" \
  -d '{"side":"buy","price":50,"qty":10}'

# Both show empty book (orders matched through shared queue)
curl localhost:8080/orderbook
curl localhost:8081/orderbook
```
WebSocket on Both
```bash
# Connect WS to each instance
websocat ws://localhost:8080/ws &
websocat ws://localhost:8081/ws &

# Submit crossing orders — BOTH connections receive the fill
```
Leader Failover
```bash
# 1. Kill the leader instance (Ctrl+C)
# 2. Wait ~30 seconds (lock TTL expires)
# 3. The other instance logs "Became engine leader"
# 4. Submit orders — they are processed by the new leader
```
During failover, orders queue in Redis and are not lost. The new leader processes them on election.
Verify Leader
```bash
# Check which instance holds the lock
docker exec $(docker compose ps -q redis) redis-cli GET engine:leader
```
System Overview
Matchbox uses a Single Writer architecture. All API servers push orders into a shared Redis queue. One elected engine worker consumes orders sequentially and publishes fills.
Component Diagram
```mermaid
graph TB
    subgraph Clients["Clients"]
        C1["Browser / CLI"]
        C2["WebSocket Client"]
    end
    subgraph API["API Server Instances"]
        A1["Server A · :8080"]
        A2["Server B · :8081"]
        A3["Server N · :808N"]
    end
    subgraph Redis["Redis — Coordination Layer"]
        Q["engine:order_queue<br/>List — FIFO Order Queue"]
        ID["engine:order_id_counter<br/>String — Atomic INCR"]
        LOCK["engine:leader<br/>String — SETNX · TTL 30s"]
        SNAP["orderbook:snapshot<br/>String — JSON Snapshot"]
        PS["fills:events<br/>Pub/Sub — Fill Broadcast"]
    end
    subgraph Engine["Engine Worker — Single Leader"]
        EW["engine_worker_loop<br/>· Owns OrderBook in memory<br/>· Sequential match_order"]
    end
    C1 -- "POST /orders" --> A1
    C1 -- "POST /orders" --> A2
    C1 -- "GET /orderbook" --> A3
    A1 -- "INCR" --> ID
    A1 -- "RPUSH order" --> Q
    A2 -- "RPUSH order" --> Q
    Q -- "LPOP" --> EW
    LOCK -. "SET NX EX 30" .-> EW
    EW -- "PUBLISH fill" --> PS
    EW -- "SET snapshot" --> SNAP
    PS -- "SUBSCRIBE" --> A1
    PS -- "SUBSCRIBE" --> A2
    PS -- "SUBSCRIBE" --> A3
    A1 -- "WS fill" --> C2
    A2 -- "WS fill" --> C2
    SNAP -- "GET" --> A3
    style Redis fill:#dc382c,color:#fff,stroke:#b71c1c
    style Engine fill:#1565c0,color:#fff,stroke:#0d47a1
    style API fill:#2e7d32,color:#fff,stroke:#1b5e20
    style Clients fill:#f57f17,color:#fff,stroke:#e65100
```
Order Lifecycle
```mermaid
sequenceDiagram
    participant C as Client
    participant A as API Server
    participant R as Redis
    participant E as Engine Worker
    participant W as WebSocket Clients
    C->>A: POST /orders {side, price, qty}
    A->>R: INCR engine:order_id_counter
    R-->>A: order_id = 42
    A->>R: RPUSH engine:order_queue
    A-->>C: 201 Created {order_id: 42}
    Note over E: Polling queue via LPOP
    R->>E: LPOP returns Order JSON
    E->>E: match_order(order, book)
    E->>R: PUBLISH fills:events
    E->>R: SET orderbook:snapshot
    R->>A: SUBSCRIBE message
    A->>A: broadcast::send(fill)
    A->>W: WebSocket Text(fill JSON)
```
Why Single Writer?
The core problem: if two servers match orders against the same book simultaneously, a resting order can be consumed twice (double-fill).
The single writer eliminates this by construction — one task, one book, sequential processing. Redis queue serializes all incoming orders. No locks, no CAS retry loops, no distributed consensus.
| Approach | Correctness | Complexity | Chosen? |
|---|---|---|---|
| Single Writer (Redis Queue) | Correct by construction | Simple | Yes |
| Optimistic Locking (CAS) | Correct with retries | Moderate | No |
| Raft Consensus | Strongly consistent | Very high | No |
Data Structures
Order Book
The book uses BTreeMap<u64, VecDeque<Order>> — one map for bids, one for asks.
```rust
pub struct OrderBook {
    bids: BTreeMap<u64, VecDeque<Order>>,  // Best bid = last key
    asks: BTreeMap<u64, VecDeque<Order>>,  // Best ask = first key
    order_index: HashMap<u64, Side>,       // O(1) lookup by ID
    pub sequence: u64,                     // Snapshot version
}
```
Why BTreeMap?
Prices must be sorted. BTreeMap gives sorted iteration for free:
```rust
// Lowest ask (best for buyers)
let best_ask = self.asks.keys().next().copied();

// Highest bid (best for sellers)
let best_bid = self.bids.keys().next_back().copied();
```
Why VecDeque?
Time priority requires FIFO ordering at each price level:
```rust
// New order goes to the back
level.push_back(order);

// Oldest order dequeued first during matching
let oldest = level.pop_front();

// Peek + modify for partial fills
let maker = level.front_mut().unwrap();
maker.qty -= fill_qty;
```
Both push_back and pop_front are O(1) amortized.
Complexity
| Operation | Cost | Notes |
|---|---|---|
| Find best price | O(log P) | P = number of price levels (~100 max) |
| Insert order | O(log P) | BTreeMap insert + VecDeque push |
| Match one order | O(1) | VecDeque pop_front |
| Full match cycle | O(K log P + M) | K levels crossed, M fills generated |
Core Types
```rust
pub struct Order {
    pub id: u64,        // Unique via Redis INCR
    pub side: Side,     // Buy or Sell
    pub price: u64,     // Integer ticks (no floats)
    pub qty: u64,       // Contracts remaining
    pub timestamp: u64, // Nanoseconds for time priority
}

pub struct Fill {
    pub maker_order_id: u64,
    pub taker_order_id: u64,
    pub price: u64,     // Always maker's price
    pub qty: u64,
    pub timestamp: u64,
}

pub enum Side { Buy, Sell }
```
All types derive Serialize and Deserialize for JSON transport over HTTP, Redis, and WebSocket.
Matching Engine
The matching engine lives in crates/engine/ — a pure Rust library with zero I/O dependencies.
Price-Time Priority
The algorithm used by every major exchange (NASDAQ, CME, Binance, Polymarket):
- Price first — for a buy, match the lowest ask. For a sell, match the highest bid.
- Time second — at the same price, the earlier order matches first.
- Maker price — fills execute at the resting (maker) order’s price, not the taker’s.
The Core Function
```rust
pub fn match_order(mut incoming: Order, book: &mut OrderBook) -> Vec<Fill> {
    let mut fills = Vec::new();

    match incoming.side {
        Side::Buy => match_buy(&mut incoming, book, &mut fills),
        Side::Sell => match_sell(&mut incoming, book, &mut fills),
    }

    // Unfilled remainder rests on the book
    if incoming.qty > 0 {
        book.add_resting_order(incoming);
    }

    book.sequence += 1;
    fills
}
```
Takes an Order and a mutable OrderBook. Returns fills. The book is mutated in place. No I/O, no async, no Redis — pure computation.
How match_buy Works
```rust
while incoming.qty > 0 {
    let best_ask_price = match book.best_ask() {
        Some(p) => p,
        None => break, // No sellers
    };

    if incoming.price < best_ask_price {
        break; // Price too low
    }

    // Walk the price level FIFO, filling orders
    // ...
}
```
The sell-side (match_sell) is symmetric — matches against highest bids first.
Price Priority in Action
```mermaid
flowchart LR
    subgraph Input
        O["Incoming Order<br/>Buy 25 @ 510"]
    end
    subgraph Book["Order Book — asks side"]
        L1["490: Sell qty=10"]
        L2["500: Sell qty=10"]
        L3["510: Sell qty=10"]
    end
    subgraph Output["Generated Fills"]
        F1["Fill price=490 qty=10"]
        F2["Fill price=500 qty=10"]
        F3["Fill price=510 qty=5"]
    end
    O --> L1
    L1 --> F1
    O --> L2
    L2 --> F2
    O --> L3
    L3 --> F3
    F3 -. "5 remaining" .-> L3
    style Input fill:#1565c0,color:#fff
    style Output fill:#2e7d32,color:#fff
```
Partial Fills
When an incoming order is larger than available liquidity:
```mermaid
sequenceDiagram
    participant I as Incoming Buy 60@100
    participant B as Book asks at 100
    participant F as Fills
    Note over B: [Sell qty=30, Sell qty=50]
    I->>B: Match first sell (qty=30)
    B->>F: Fill qty=30 (sell fully consumed)
    I->>B: Match second sell (30 of 50)
    B->>F: Fill qty=30 (sell partially consumed)
    Note over I: Fully consumed (30+30=60)
    Note over B: [Sell qty=20] remaining
```
Test Coverage
```text
$ cargo test -p engine

running 9 tests
test test_no_match_rests_on_book ... ok
test test_full_match ... ok
test test_partial_fill_incoming_larger ... ok
test test_partial_fill_resting_larger ... ok
test test_price_priority ... ok
test test_time_priority ... ok
test test_fills_sum_to_matched_qty ... ok
test test_no_match_price_too_low ... ok
test test_sell_matches_highest_bid_first ... ok
```
Multi-Instance Design
The hardest problem in this project: N server instances sharing one order book without double-matching.
Leader Election
On startup, every instance tries to claim the engine leader lock:
```rust
const LEADER_KEY: &str = "engine:leader";
const LEADER_TTL_SECS: i64 = 30;

async fn try_become_leader(redis: &Client, instance_id: &str) -> bool {
    redis.set::<(), _, _>(
        LEADER_KEY,
        instance_id,
        Some(Expiration::EX(LEADER_TTL_SECS)), // Expires in 30s
        Some(SetOptions::NX),                  // Only if key doesn't exist
        false,
    ).await.is_ok()
}
```
SET NX EX is atomic — exactly one instance wins. Others retry every 5 seconds.
Lock Refresh
The leader refreshes its lock every 10 seconds using an atomic Lua script:
```rust
let script = r#"
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('expire', KEYS[1], ARGV[2])
    else
        return 0
    end
"#;
```
This prevents a stale leader from accidentally extending a lock that another instance already acquired.
Failover
```mermaid
stateDiagram-v2
    [*] --> TryAcquire: Instance starts
    TryAcquire --> Leader: SET NX succeeds
    TryAcquire --> Follower: SET NX fails
    Leader --> Leader: Refresh lock every 10s
    Leader --> Processing: LPOP returns order
    Processing --> Leader: match then publish
    Leader --> [*]: Crash or shutdown
    Follower --> Follower: Retry every 5s
    Follower --> TryAcquire: Lock expired after 30s
```
During the failover window (~30s), orders queue up in Redis but are not lost. The new leader processes them in FIFO order.
Known limitation: The in-memory order book is lost when the leader crashes. A production system would reconstruct it from a persistent fill log.
Correctness Guarantee
```mermaid
flowchart TD
    A["All orders flow through<br/>Redis list LPOP — atomic"] --> B["Only one engine worker<br/>runs at a time — leader lock"]
    B --> C["Orders processed<br/>sequentially — single task"]
    C --> D["Book modified by<br/>one writer only"]
    D --> E["Each order matched<br/>exactly once"]
    style E fill:#2e7d32,color:#fff,stroke:#1b5e20,stroke-width:2px
```
API Reference
POST /orders
Submit a new limit order.
```bash
curl -X POST localhost:8080/orders \
  -H "Content-Type: application/json" \
  -d '{"side":"buy","price":50,"qty":10}'
```
Request body:
```json
{
  "side": "buy",
  "price": 50,
  "qty": 10
}
```
Response (201 Created):
```json
{
  "order_id": 1
}
```
Validation:
- `qty` must be > 0 (400 if zero)
- `price` must be > 0 (400 if zero)
- `side` must be `"buy"` or `"sell"` (422 if invalid)
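A hypothetical sketch of these rules as a plain function (the actual handler lives in `crates/server/src/routes/` and may differ in detail — in particular, the 422 for an invalid side comes from deserialization of the `Side` enum):

```rust
// Illustrative validation mapping, not the server's code.
fn validate(side: &str, price: u64, qty: u64) -> Result<(), u16> {
    if side != "buy" && side != "sell" {
        return Err(422); // unknown enum variant -> Unprocessable Entity
    }
    if price == 0 || qty == 0 {
        return Err(400); // Bad Request
    }
    Ok(())
}

fn main() {
    assert_eq!(validate("buy", 50, 10), Ok(()));
    assert_eq!(validate("buy", 50, 0), Err(400));   // zero qty
    assert_eq!(validate("buy", 0, 10), Err(400));   // zero price
    assert_eq!(validate("hold", 50, 10), Err(422)); // bad side
}
```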
GET /orderbook
Return the current order book state.
```bash
curl localhost:8080/orderbook
```
Response (200 OK):
```json
{
  "bids": [
    { "price": 50, "qty": 30 },
    { "price": 48, "qty": 10 }
  ],
  "asks": [
    { "price": 52, "qty": 15 }
  ],
  "sequence": 42
}
```
- `bids` — sorted highest price first
- `asks` — sorted lowest price first
- `sequence` — increments on every order processed
GET /ws
WebSocket endpoint for real-time fill events.
```bash
websocat ws://localhost:8080/ws
```
Each fill arrives as a JSON text message:
```json
{
  "maker_order_id": 1,
  "taker_order_id": 2,
  "price": 50,
  "qty": 10,
  "timestamp": 1711814400000000000
}
```
GET /health
Liveness probe.
```bash
curl localhost:8080/health
# {"status":"ok"}
```
WebSocket Feed
Clients connect to GET /ws and receive fill events in real time across all server instances.
How It Works
```mermaid
flowchart TD
    R["Redis Pub/Sub<br/>fills:events"] --> RS["redis_subscriber<br/>one per API instance"]
    RS --> TX["broadcast::Sender<br/>shared in AppState"]
    TX --> RX1["rx.recv — WS Client 1"]
    TX --> RX2["rx.recv — WS Client 2"]
    TX --> RX3["rx.recv — WS Client 3"]
    RX1 --> W1["WebSocket send"]
    RX2 --> W2["WebSocket send"]
    RX3 --> W3["WebSocket send"]
    style R fill:#dc382c,color:#fff
    style TX fill:#1565c0,color:#fff
```
Server-Side Handler
The key pattern: subscribe to the broadcast channel before the WebSocket upgrade to avoid missing fills during the handshake.
```rust
pub async fn ws_handler(
    ws: WebSocketUpgrade,
    State(state): State<AppState>,
) -> Response {
    let rx = state.fills_tx.subscribe(); // Subscribe BEFORE upgrade
    ws.on_upgrade(|socket| handle_socket(socket, rx))
}
```
Each socket runs two concurrent tasks via tokio::select!:
```rust
// Task 1: broadcast → WebSocket (send fills to client)
// Task 2: WebSocket → drain (detect disconnection)
tokio::select! {
    _ = &mut send_task => recv_task.abort(),
    _ = &mut recv_task => send_task.abort(),
}
```
Cross-Instance Fan-Out
When the engine worker (on Instance A) publishes a fill to Redis Pub/Sub, every API instance receives it and forwards it to its local WebSocket clients. A client connected to Instance B sees fills generated by Instance A.
Connecting
```bash
# Using websocat
websocat ws://localhost:8080/ws

# Using wscat
wscat -c ws://localhost:8080/ws
```

Programmatically (JavaScript):

```javascript
const ws = new WebSocket("ws://localhost:8080/ws");
ws.onmessage = (e) => console.log(JSON.parse(e.data));
```
Redis Integration
Redis serves as the coordination layer — it handles queuing, ID generation, leader election, state sharing, and event broadcasting.
Key Namespace
| Key | Type | Purpose | Commands |
|---|---|---|---|
| `engine:order_id_counter` | String | Unique order IDs | `INCR` |
| `engine:order_queue` | List | Order FIFO queue | `RPUSH` / `LPOP` |
| `engine:leader` | String | Leader lock (30s TTL) | `SET NX EX` |
| `orderbook:snapshot` | String | Book state for queries | `SET` / `GET` |
| `fills:events` | Pub/Sub | Fill broadcast channel | `PUBLISH` / `SUBSCRIBE` |
Connection Setup
The server uses the fred crate with connection pooling:
```rust
let config = Config::from_url(&redis_url)?;
let redis = Builder::from_config(config).build()?;
redis.init().await?;
```
Pub/Sub requires a separate dedicated client (Redis protocol constraint):
```rust
let subscriber = Builder::from_config(config)
    .build_subscriber_client()?;
subscriber.init().await?;
subscriber.subscribe("fills:events").await?;
```
Why Not Store the Book in Redis?
Matching requires iterating price levels and mutating individual orders — doing this in Redis would need complex Lua scripts or multiple round-trips per match.
In-memory matching takes ~1-10 microseconds. Redis round-trips take ~100-200 microseconds each. The in-memory approach is simpler and orders of magnitude faster.
The trade-off: the book is lost if the engine crashes. Acceptable for a toy system.