# S-expression Protocol: Risks and Pitfalls

Bear traps, historical precedents, and an honest assessment of what could go wrong.
## Adoption Chicken-and-Egg

No one builds clients for a protocol no one serves. No one serves a protocol no one has clients for. HTTP won over technically superior alternatives because it was there. The Tier 0 strategy (sexp rendered to HTML by the server) is the right answer — you don't need anyone to adopt anything on day one. But the jump from Tier 0 to Tier 1/2 requires a critical mass of sites serving sexp, and that is historically where alternative protocols die.
## Security Surface Area
Evaluating arbitrary sexp from a remote server is eval() with s-expressions. Sandboxing matters enormously. What can a component do? Can it access localStorage? Make network requests? Read other components' state? HTML's security model (same-origin policy, CSP, CORS) took 20 years of CVEs to get to where it is. You'd need an equivalent — and you'd need it from day one, not after the first exploit. The "components are functions" model is powerful but "functions from strangers" is the oldest trap in computing.
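A default-deny capability model is one way out of that trap. The sketch below is illustrative, not a spec: the type and method names (`Capability`, `Sandbox`, `grant`, `allows`) are assumptions, but the shape — components get no effects unless something explicitly grants them — is the point.

```rust
// Sketch of a default-deny capability model for remote components.
// All names here are hypothetical; only the default-deny shape matters.

use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Capability {
    ReadOwnState, // a component may read its own state
    LocalStorage, // persist data on the client
    Network,      // issue follow-up requests
}

struct Sandbox {
    granted: HashSet<Capability>,
}

impl Sandbox {
    /// Pure by default: a fresh sandbox grants nothing at all.
    fn pure() -> Self {
        Sandbox { granted: HashSet::new() }
    }

    /// Capabilities are granted explicitly, by the user or a site policy.
    fn grant(mut self, cap: Capability) -> Self {
        self.granted.insert(cap);
        self
    }

    /// Every effectful primitive checks before running; deny is the default.
    fn allows(&self, cap: Capability) -> bool {
        self.granted.contains(&cap)
    }
}

fn main() {
    let sandbox = Sandbox::pure().grant(Capability::ReadOwnState);
    assert!(sandbox.allows(Capability::ReadOwnState));
    assert!(!sandbox.allows(Capability::Network)); // denied unless granted
}
```

The important property is that forgetting to grant something fails closed, not open — the opposite of how `eval()` behaves.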
## Accessibility

HTML's semantic elements (`<nav>`, `<main>`, `<article>`, `<button>`) have decades of screenreader support baked in. A sexp `(nav ...)` that renders to the DOM inherits this — but a Tier 2 Rust client rendering natively doesn't. You'd need to build an accessibility layer from scratch, or you ship a fast web that blind users can't use.
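The core of that layer is a mapping from semantic sexp tags to platform accessibility roles. The sketch below uses a hypothetical `Role` enum, not AccessKit's actual API; the design point is that unknown tags must still land in the accessibility tree rather than vanish.

```rust
// Illustrative mapping from semantic sexp tags to accessibility roles.
// `Role` and its variants are hypothetical, not AccessKit's real types.

#[derive(Debug, PartialEq)]
enum Role {
    Navigation,
    Main,
    Article,
    Button,
    Generic, // fallback: still present in the a11y tree
}

fn role_for_tag(tag: &str) -> Role {
    match tag {
        "nav" => Role::Navigation,
        "main" => Role::Main,
        "article" => Role::Article,
        "button" => Role::Button,
        // Unknown tags degrade to a generic node instead of disappearing.
        _ => Role::Generic,
    }
}

fn main() {
    assert_eq!(role_for_tag("nav"), Role::Navigation);
    assert_eq!(role_for_tag("custom-widget"), Role::Generic);
}
```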
## Tooling Desert
View Source, DevTools, Lighthouse, network inspectors, CSS debuggers — the entire web development toolchain assumes HTML/CSS/JS. A sexp protocol starts with zero tooling. Developers won't adopt what they can't debug. You'd need at minimum: a sexp inspector, a component browser, a network viewer that understands sexp streams, and a REPL. Building the protocol is 10% of the work; building the toolchain is 90%.
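A first-cut inspector need not be large, though. This sketch (an assumption about what "minimum viable tooling" looks like, not a real deliverable) just re-indents a raw sexp stream by paren depth — which already beats reading a one-line wire payload:

```rust
// Minimal "View Source" for sexp: re-indent a raw stream by paren depth.
// A toy sketch of what a first inspector might do, nothing more.

fn inspect(src: &str) -> String {
    let mut out = String::new();
    let mut depth: usize = 0;
    // Pad parens with spaces so split_whitespace yields clean tokens.
    for tok in src.replace('(', " ( ").replace(')', " ) ").split_whitespace() {
        match tok {
            "(" => {
                out.push('\n');
                out.push_str(&"  ".repeat(depth)); // two spaces per level
                out.push('(');
                depth += 1;
            }
            ")" => {
                depth = depth.saturating_sub(1);
                out.push(')');
            }
            atom => {
                out.push(' ');
                out.push_str(atom);
            }
        }
    }
    out.trim_start().to_string()
}

fn main() {
    println!("{}", inspect("(page (nav (link home)) (main (p hello)))"));
}
```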
## The Lisp Curse

S-expressions are so flexible that two implementations inevitably diverge. Is an attribute `:key value` or `(key value)`? Are booleans `#t`/`#f` or `true`/`false`? Is the empty list `()` or `nil`? Every Lisp dialect makes different choices. Without a brutally strict spec locked down early, you get fragmentation — which is exactly what killed previous "simple protocol" attempts (XMPP, Atom, etc.).
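The counter-measure is a reader that enforces the locked-down spellings and rejects everything else with a hard error. A minimal sketch, assuming the spec choices named above (`:key value` attributes, `#t`/`#f` booleans, `()` for the empty list) and omitting strings and numbers:

```rust
// Sketch of a deliberately strict reader: one spelling per concept,
// everything else is a hard error. Strings/numbers omitted for brevity.

#[derive(Debug, PartialEq)]
enum Sexp {
    List(Vec<Sexp>),
    Symbol(String),
    Keyword(String), // :key
    Bool(bool),      // #t / #f
}

fn tokenize(src: &str) -> Vec<String> {
    src.replace('(', " ( ")
        .replace(')', " ) ")
        .split_whitespace()
        .map(String::from)
        .collect()
}

fn parse(tokens: &[String], pos: &mut usize) -> Result<Sexp, String> {
    let tok = tokens.get(*pos).ok_or("unexpected end of input")?.clone();
    *pos += 1;
    match tok.as_str() {
        "(" => {
            let mut items = Vec::new();
            while tokens.get(*pos).map(String::as_str) != Some(")") {
                items.push(parse(tokens, pos)?);
            }
            *pos += 1; // consume ")"
            Ok(Sexp::List(items))
        }
        ")" => Err("unbalanced )".into()),
        "#t" => Ok(Sexp::Bool(true)),
        "#f" => Ok(Sexp::Bool(false)),
        // Reject the ambiguous spellings outright instead of guessing.
        "true" | "false" | "nil" => {
            Err(format!("'{tok}' is not in the spec; use #t/#f/()"))
        }
        k if k.starts_with(':') => Ok(Sexp::Keyword(k[1..].to_string())),
        s => Ok(Sexp::Symbol(s.to_string())),
    }
}

fn read(src: &str) -> Result<Sexp, String> {
    let tokens = tokenize(src);
    let mut pos = 0;
    parse(&tokens, &mut pos)
}

fn main() {
    assert_eq!(read("()"), Ok(Sexp::List(vec![])));
    assert!(read("(nav :visible #t)").is_ok());
    assert!(read("(nav :visible true)").is_err()); // one spelling only
}
```

Making the alternatives a parse error, rather than silently normalising them, is what keeps two implementations from drifting apart.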
## Performance Isn't Free
"Parsing sexp is faster than parsing HTML" is true for a naive comparison. But browsers have spent billions of dollars optimising HTML parsing — speculative parsing, streaming tokenisation, GPU-accelerated layout. A sexp evaluator written in a weekend is slower than Chrome's HTML parser in practice, even if it's simpler in theory. The Rust client would need serious engineering to match perceived performance.
## Content-Addressed Caching Breaks on Dynamic Content
Hashing works beautifully for static components and blog posts. It falls apart for personalised content, A/B tests, time-sensitive data, or anything with user state. You'd need a clear boundary between "cacheable structure" and "dynamic bindings" — and that boundary is exactly where complexity creeps back in.
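One way to draw that boundary: hash the structure with binding slots left as named holes, and supply per-request values separately. The sketch below is an assumption about the shape of the split, not a wire format, and it uses `DefaultHasher` where a real system would want a stable cryptographic hash such as SHA-256:

```rust
// Sketch: the cacheable "envelope" is hashed with binding slots left as
// named holes, so personalised values never change the content address.
// Envelope/bindings names and the {slot} syntax are illustrative only.

use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

/// Address the structure only. (A real system would use a stable
/// cryptographic hash; DefaultHasher is enough for a demo.)
fn content_address(envelope: &str) -> u64 {
    let mut h = DefaultHasher::new();
    envelope.hash(&mut h);
    h.finish()
}

/// Fill {slot} holes with per-request bindings at render time.
fn instantiate(envelope: &str, bindings: &BTreeMap<&str, &str>) -> String {
    let mut out = envelope.to_string();
    for (slot, value) in bindings {
        out = out.replace(&format!("{{{slot}}}"), value);
    }
    out
}

fn main() {
    let envelope = "(greeting (text {username}))"; // structure: cacheable
    let alice = BTreeMap::from([("username", "alice")]);
    let bob = BTreeMap::from([("username", "bob")]);

    // Same envelope -> same address, regardless of who is viewing it.
    assert_eq!(content_address(envelope), content_address(envelope));
    // Rendered output still differs per user.
    assert_ne!(instantiate(envelope, &alice), instantiate(envelope, &bob));
}
```

The complexity the section warns about lives exactly here: deciding, for every piece of content, which side of the envelope/bindings line it sits on.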
## The Worse Is Better Problem
HTTP+HTML is objectively a mess. It's also the most successful application protocol in history. Worse is better. Developers know it, frameworks handle the ugly parts, and the ecosystem is vast. A cleaner protocol has to overcome "good enough" — the most lethal competitor in technology.
## Legal and Regulatory Assumptions
Cookie consent, GDPR right to erasure, accessibility mandates (WCAG), eIDAS digital signatures — all of these are written assuming HTTP+HTML. A new protocol potentially falls outside existing frameworks, which is either a feature (no cookie banners!) or a bear trap (non-compliant by default).
## Federation Is Hard
ActivityPub exists and adoption is still tiny after 8 years. The problem isn't the protocol — it's spam, moderation, identity, key management, and social dynamics. Sexp doesn't solve any of those. A beautiful protocol that inherits AP's unsolved problems is still stuck on those problems.
## Honest Summary
The idea is sound and the architecture is elegant. The risk isn't technical — it's that history is littered with technically superior protocols that lost to worse-but-established ones. The Tier 0 on-ramp (server renders HTML, sexp is internal) is the best defence against this, because it means rose-ash doesn't depend on protocol adoption to work. The protocol can grow organically from one working site rather than needing an ecosystem to bootstrap.
## Mitigation Strategy
| Risk | Mitigation |
|---|---|
| Adoption chicken-and-egg | Tier 0 works today on any browser — no adoption needed to start |
| Security | Define capability model before Tier 1 ships; components are pure functions by default |
| Accessibility | Tier 0/1 render to semantic HTML+DOM; Tier 2 Rust client uses platform a11y APIs (e.g. AccessKit) |
| Tooling | Build inspector and REPL as first Tier 1 deliverables, not afterthoughts |
| Lisp Curse | Lock the spec early: `:key value` attrs, `#t`/`#f` booleans, `()` empty list — no alternatives |
| Performance | Tier 2 Rust client competes on parsing; Tier 0/1 lean on browser's optimised DOM |
| Dynamic caching | Separate envelope (cacheable structure) from bindings (dynamic data) at the protocol level |
| Worse Is Better | Don't compete with HTTP — run on top of it (Tier 0/1) and beside it (Tier 2) |
| Legal/regulatory | Tier 0 is standard HTTP — fully compliant; Tier 2 inherits HTTP compliance via QUIC |
| Federation | Solve spam/moderation at the application layer, not the protocol layer |