Performance
All benchmarks run on native Linux. Raw protocol performance — no TLS, no auth, no persistence overhead.
- ~1M msg/s QoS 0 (1 pub · 1 sub · 64B)
- <5ms p99 latency (1,000 concurrent clients)
- 826KB minimal binary (no TLS, stripped)
- <50ms startup time (to first accepted connection)
Throughput
1 publisher · 1 subscriber · 64-byte payload · loopback · native Linux
| QoS Level | RelayQ | Other brokers* |
|---|---|---|
| QoS 0 — fire and forget | ~1,000,000 msg/s | ~500K–800K msg/s |
| QoS 1 — at least once | ~300,000 msg/s | ~150K–200K msg/s |
| QoS 2 — exactly once | ~50,000 msg/s | ~20K–30K msg/s |
* Published figures for Mosquitto, NanoMQ, and EMQX under their own test conditions — not a controlled head-to-head. Run your own comparison using the instructions below.
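The widening gap across QoS levels follows partly from MQTT's per-message packet counts: QoS 0 delivery is a single PUBLISH, QoS 1 adds a PUBACK, and QoS 2 requires the four-packet PUBLISH/PUBREC/PUBREL/PUBCOMP exchange. A rough sketch of what packet count alone would predict (not a model of RelayQ internals):

```python
# Packets exchanged per delivered message at each MQTT QoS level.
PACKETS_PER_MSG = {0: 1, 1: 2, 2: 4}  # PUBLISH; +PUBACK; +PUBREC/PUBREL/PUBCOMP

def packet_bound(qos0_rate: float, qos: int) -> float:
    """Upper bound on msg/s if per-packet cost were the only factor."""
    return qos0_rate / PACKETS_PER_MSG[qos]

# With ~1M msg/s at QoS 0, packet count alone would allow ~500K msg/s
# at QoS 1 and ~250K at QoS 2.
print(packet_bound(1_000_000, 1))  # 500000.0
print(packet_bound(1_000_000, 2))  # 250000.0
```

The measured QoS 1 and QoS 2 figures sit below these bounds because acknowledged delivery also adds per-message state tracking, not just extra packets.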
Latency (p99)
1,000 concurrent idle clients · loopback · native Linux
| QoS Level | p99 latency |
|---|---|
| QoS 0 end-to-end | <5ms |
| QoS 1 end-to-end | <10ms |
| QoS 2 end-to-end | <10ms |
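p99 here means the 99th percentile of per-message end-to-end latency samples: 99% of messages arrive at least this fast. A minimal sketch of the computation using the nearest-rank method, with hypothetical sample data:

```python
def p99(samples_ms: list[float]) -> float:
    """99th percentile of latency samples by the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = max(1, -(-99 * len(ordered) // 100))  # ceil(0.99 * n)
    return ordered[rank - 1]

# 1,000 hypothetical end-to-end samples: 985 fast, 15 slow outliers.
samples = [1.2] * 985 + [8.0] * 15
print(p99(samples))  # 8.0 -- the slow tail dominates the p99 figure
```

This is why p99 is a stricter yardstick than an average: a handful of slow messages moves it while barely affecting the mean.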
Connection Scalability
QoS 0 throughput and RAM usage as concurrent client count scales
| Concurrent clients | QoS 0 msg/s | RAM usage |
|---|---|---|
| 100 | 950,000 | 12 MB |
| 1,000 | 850,000 | 45 MB |
| 10,000 | 600,000 | 80 MB |
| 50,000 | 400,000 | 250 MB |
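The RAM column grows sublinearly with client count; dividing it out gives the per-client cost at each scale (figures taken from the table above):

```python
# (concurrent clients, measured RAM in MB) from the scalability table.
rows = [(100, 12), (1_000, 45), (10_000, 80), (50_000, 250)]

for clients, ram_mb in rows:
    kb_per_client = ram_mb * 1024 / clients
    print(f"{clients:>6} clients: {kb_per_client:.2f} KB/client")

# At 50,000 clients this works out to ~5 KB/client, in line with the
# ~4 KB per idle connection figure under Resource Usage; the gap at
# small client counts is fixed broker overhead being amortized.
```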
Resource Usage
| Metric | Value |
|---|---|
| Binary — full (with TLS) | 2.3 MB |
| Binary — embedded (no TLS) | 826 KB |
| Container image (scratch) | 2.3 MB |
| RAM per idle connection | ~4 KB |
Methodology
- Each benchmark runs for 10 seconds minimum — median of 5 runs
- Broker and clients on the same machine — loopback only
- No TLS, no auth, no persistence — raw protocol performance
- Production deployments with TLS + auth + persistence: expect ~20–30% lower throughput
- Thread-per-client model trades maximum throughput for predictable latency
- QoS 2 throughput is bounded by the 4-packet handshake, not broker processing
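Taking the median of 5 runs discards outlier runs skewed by scheduler jitter or background load. A minimal sketch of that reduction step, with hypothetical per-run figures:

```python
import statistics

def summarize(runs_msg_per_s: list[int]) -> float:
    """Collapse repeated benchmark runs to a single median figure."""
    return statistics.median(runs_msg_per_s)

# Five hypothetical QoS 0 runs; one was disturbed by background load,
# but the median is unaffected by that outlier.
runs = [1_020_000, 985_000, 1_001_000, 640_000, 1_003_000]
print(summarize(runs))  # 1001000
```

A mean over the same runs would report ~930K msg/s, dragged down by the single bad run; the median reflects typical behavior.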
Want to run these benchmarks in your own environment?
Start a 90-day pilot →