# Ring Benchmark Results
Generated by lib/erlang/bench_ring.sh against sx_server.exe on the
synchronous Erlang-on-SX scheduler.
| N (processes) | Hops | Wall-clock | Throughput |
|---|---|---|---|
| 10 | 10 | 907ms | 11 hops/s |
| 50 | 50 | 2107ms | 24 hops/s |
| 100 | 100 | 3827ms | 26 hops/s |
| 500 | 500 | 17004ms | 29 hops/s |
| 1000 | 1000 | 29832ms | 34 hops/s |
(Each row spawns N processes connected in a ring and passes a single
token N hops in total, i.e. the token completes one full lap.)
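For reference, the shape of the benchmark can be sketched as follows. This is a minimal illustration, not the SX implementation: it uses Python threads and queues in place of Erlang processes, and the function name `ring_benchmark` is hypothetical.

```python
import threading
import queue

def ring_benchmark(n):
    """Spawn n workers connected in a ring; pass one token n hops (one lap)."""
    queues = [queue.Queue() for _ in range(n)]
    hops = []  # records each hop; list.append is thread-safe under the GIL

    def worker(i):
        token = queues[i].get()            # block until the token arrives
        hops.append(i)
        if token > 1:                      # token carries hops remaining
            queues[(i + 1) % n].put(token - 1)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    queues[0].put(n)                       # inject the token with n hops left
    for t in threads:
        t.join()
    return len(hops)

print(ring_benchmark(10))  # -> 10
```

Each worker receives exactly one message, so the run terminates after exactly N hops; wall-clock time divided by N gives the throughput column above.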
## Status of the 1M-process target
Phase 3's stretch goal in plans/erlang-on-sx.md is a million-process
ring benchmark. That target is not met in the current synchronous
scheduler; extrapolating from the table above, 1M hops would take
~30 000 s. Correctness is fine — the program runs at every measured
size — but throughput is bound by per-hop overhead.
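The ~30 000 s figure is a straight extrapolation from the best measured row, which is optimistic since throughput has been rising with N. As a back-of-envelope check:

```python
# Extrapolate a 1M-hop run from the N=1000 row of the table above
# (34 hops/s). Assumes throughput stops improving beyond N=1000.
hops = 1_000_000
throughput = 34                    # hops/s, from the N=1000 row
seconds = hops / throughput
print(f"{seconds:.0f} s (~{seconds / 3600:.1f} h)")  # -> 29412 s (~8.2 h)
```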
Per-hop cost is dominated by:
- `er-env-copy` per fun clause attempt (whole-dict copy each time)
- `call/cc` capture + `raise/guard` unwind on every `receive`
- `er-q-delete-at!` rebuilds the mailbox backing list on every match
- `dict-set!`/`dict-has?` lookups in the global processes table
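The whole-dict copy is the first fix on the list below: a persistent environment makes extension O(1) by sharing the existing bindings instead of copying them. A minimal sketch in Python (hypothetical class names, illustrating the idea rather than the SX data structure):

```python
class Env:
    """Persistent environment: extending allocates one small frame and
    shares the parent chain, instead of copying the whole dict per call."""

    def __init__(self, frame=None, parent=None):
        self.frame = frame if frame is not None else {}
        self.parent = parent

    def bind(self, name, value):
        # O(1): new one-entry frame pointing at the unchanged parent.
        return Env({name: value}, self)

    def lookup(self, name):
        env = self
        while env is not None:            # walk the scope chain
            if name in env.frame:
                return env.frame[name]
            env = env.parent
        raise KeyError(name)

base = Env({"x": 1})
child = base.bind("y", 2)                 # base is untouched
print(child.lookup("x"), child.lookup("y"))  # -> 1 2
```

The trade-off is that lookup walks the chain, but for per-clause binding attempts that is far cheaper than copying every binding up front.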
To reach 1M-process throughput in this architecture would need at least:

- persistent (path-copying) envs
- an inline scheduler that doesn't `call/cc` on the common path (message already in mailbox)
- a linked-list mailbox

None of those are in scope for the Phase 3 checkbox; they are captured here as the floor we're starting from.
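The linked-list mailbox addresses the `er-q-delete-at!` cost: a matched message can be unlinked in place rather than rebuilding the backing list. A sketch of selective receive over a singly-linked mailbox (hypothetical names, not the SX code):

```python
class Node:
    __slots__ = ("msg", "next")
    def __init__(self, msg):
        self.msg = msg
        self.next = None

class Mailbox:
    """Singly-linked mailbox: O(1) append, O(1) unlink at the match point."""

    def __init__(self):
        self.head = None
        self.tail = None

    def put(self, msg):
        node = Node(msg)
        if self.tail is not None:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def receive(self, pred):
        """Unlink and return the first message matching pred, else None."""
        prev, cur = None, self.head
        while cur is not None:
            if pred(cur.msg):
                if prev is not None:       # unlink without rebuilding the list
                    prev.next = cur.next
                else:
                    self.head = cur.next
                if cur is self.tail:
                    self.tail = prev
                return cur.msg
            prev, cur = cur, cur.next
        return None

mb = Mailbox()
for m in ["a", "b", "c"]:
    mb.put(m)
print(mb.receive(lambda m: m == "b"))  # -> b
```

Scanning for a match is still O(length of mailbox), but removal no longer copies the remaining messages, which is what dominates when mailboxes stay short and hops are frequent.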