Add cooperative compute mesh: client-as-node, GPU sharing, IPFS persistence

Members' Rust clients become full peer nodes — AP instances, IPFS nodes,
and artdag GPU workers. The relay server becomes a lightweight matchmaker
(message queue, pinning, peer directory) while all compute, rendering,
and content serving is distributed across members' own hardware. Back
to the original vision of the web: everyone has a server.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:52:17 +00:00
parent 3e29c2a334
commit 19240c6ca3


@@ -707,6 +707,155 @@ Depends on:
---
## Client as Node: Cooperative Compute Mesh
### Everyone Has a Server
The original web was peer-to-peer — everyone ran a server on their workstation. Then we centralised everything into data centres because HTTP was stateless and browsers were passive. The sexp protocol with client-as-node reverses that.
Each member's Rust client is not just a viewer — it's a full peer node:
- An **ActivityPub instance** (keypair, identity, inbox/outbox)
- An **IPFS node** (storing and serving content-addressed data)
- An **artdag worker** (local GPU for media processing)
- A **sexp peer** (bidirectional streams to relay and other peers)
```
┌──────────────┐   ┌──────────────┐   ┌──────────────┐
│    Alice     │   │     Bob      │   │   Charlie    │
│   RTX 4070   │   │  M2 MacBook  │   │   RX 7900    │
│  12GB VRAM   │   │ 16GB unified │   │  20GB VRAM   │
│              │   │              │   │              │
│ artdag node  │   │ artdag node  │   │ artdag node  │
│  IPFS node   │   │  IPFS node   │   │  IPFS node   │
│  sexp peer   │   │  sexp peer   │   │  sexp peer   │
└──────┬───────┘   └──────┬───────┘   └──────┬───────┘
       │                  │                  │
       └────────┬─────────┴─────────┬────────┘
                │                   │
     ┌──────────▼───────────────────▼──────────┐
     │             rose-ash relay              │
     │                                         │
     │  • Message queue (offline inbox)        │
     │  • Capability registry                  │
     │  • IPFS pinning service                 │
     │  • HTTPS gateway (Tier 0)               │
     │  • Peer directory                       │
     │  • Federation bridge (JSON-LD)          │
     └─────────────────────────────────────────┘
```
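The four roles above can be sketched as a single capability record the client advertises to the relay. A minimal std-only Rust sketch; the names (`PeerCapabilities`, `can_render`) and fields are illustrative, not the actual protocol schema:

```rust
/// What a peer might advertise to the relay's capability registry.
/// Field names are illustrative; the wire format is sexp-based.
#[derive(Debug, Clone)]
struct PeerCapabilities {
    actor: String,            // ActivityPub identity, e.g. "alice@rose-ash.com"
    gpu_model: Option<String>, // None when no dedicated GPU
    vram_gb: u32,
    ipfs_storage_gb: u64,     // storage offered to the co-op's pinning pool
}

impl PeerCapabilities {
    /// Can this peer take artdag jobs that need `min_vram` GB?
    fn can_render(&self, min_vram: u32) -> bool {
        self.gpu_model.is_some() && self.vram_gb >= min_vram
    }
}

fn main() {
    let alice = PeerCapabilities {
        actor: "alice@rose-ash.com".into(),
        gpu_model: Some("RTX 4070".into()),
        vram_gb: 12,
        ipfs_storage_gb: 100,
    };
    // Alice can take an 8 GB render job but not a 16 GB one.
    assert!(alice.can_render(8));
    assert!(!alice.can_render(16));
    println!("{} advertises {:?}", alice.actor, alice.gpu_model);
}
```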
### Offline Persistence
When a member's client goes offline, their content persists on IPFS. The relay provides two services:
**IPFS pinning** — members' CIDs are pinned by the cooperative's pinning node, ensuring content stays available even when the author's client is off. This is cheap — just disk storage, no compute.
**Message queuing** — activities addressed to an offline member are held by the relay and drained when they reconnect:
```scheme
;; Alice is offline. Bob sends her a message.
;; The relay holds it.

;; Alice's client comes online, connects to relay
(hello :actor "alice@rose-ash.com" :last-seen "2026-02-28T12:00:00Z")

;; Relay drains the queue
(queued :count 3 :since "2026-02-28T12:00:00Z"
  (Create :actor bob :published "2026-02-28T16:00:00Z"
    (Note (p "See you at the market Saturday!")))
  (Like :actor charlie :object alice-post-42)
  (Follow :actor dave :object alice))

;; Alice's client processes them, sends acknowledgment
(ack :through "2026-02-28T16:00:00Z")

;; Relay clears the queue. Now Alice is live —
;; subsequent activities stream directly peer-to-peer.
```
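The relay side of this exchange reduces to a small per-member queue with a drain and an ack. A std-only Rust sketch, assuming activities are opaque sexp strings keyed by RFC 3339 timestamps (which compare correctly as plain strings); the type and method names are illustrative:

```rust
use std::collections::VecDeque;

/// Per-member offline inbox held by the relay (a sketch, not the real type).
#[derive(Default)]
struct OfflineQueue {
    pending: VecDeque<(String, String)>, // (published timestamp, activity sexp)
}

impl OfflineQueue {
    fn enqueue(&mut self, published: &str, activity: &str) {
        self.pending.push_back((published.into(), activity.into()));
    }

    /// Everything queued after the client's :last-seen timestamp.
    fn drain_since(&self, since: &str) -> Vec<&str> {
        self.pending
            .iter()
            .filter(|(t, _)| t.as_str() > since)
            .map(|(_, a)| a.as_str())
            .collect()
    }

    /// (ack :through ts) — drop everything published at or before ts.
    fn ack_through(&mut self, through: &str) {
        self.pending.retain(|(t, _)| t.as_str() > through);
    }
}

fn main() {
    let mut q = OfflineQueue::default();
    q.enqueue("2026-02-28T16:00:00Z", "(Create :actor bob ...)");
    q.enqueue("2026-02-28T17:30:00Z", "(Like :actor charlie ...)");

    // Alice reconnects with :last-seen 12:00 — both activities replay.
    assert_eq!(q.drain_since("2026-02-28T12:00:00Z").len(), 2);

    // She acks through 16:00 — only the later Like remains queued.
    q.ack_through("2026-02-28T16:00:00Z");
    assert_eq!(q.pending.len(), 1);
}
```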
### Cooperative GPU Sharing
Members contribute idle GPU cycles to the cooperative. The relay acts as a job matchmaker:
```scheme
;; Alice uploads a video. Her laptop has integrated graphics — too slow.
(submit-job
  :type "artdag/render"
  :recipe "bafyrecipe..."
  :input "bafyinput..."
  :requirements (:min-vram 8 :gpu #t))

;; Relay knows Charlie's RX 7900 is online and idle.
;; Job routes to Charlie's client.
(job :id "job-789" :assigned-to charlie
  :type "artdag/render"
  :recipe "bafyrecipe..."
  :input "bafyinput...")

;; Charlie's client runs the job, pins result to IPFS
(job-complete :id "job-789"
  :output "bafyoutput..."
  :duration-ms 4200
  :worker charlie)

;; Alice gets notified
(push! (swap! "#render-status" :inner
  (use "render-complete" :cid "bafyoutput...")))
```
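The relay's matchmaking step above is a simple eligibility filter over the capability registry. A std-only Rust sketch of one plausible policy (greedy: pick the idle online worker with the most VRAM); `Worker` and `assign` are illustrative names, and the real scheduler may weigh other factors:

```rust
/// Relay-side view of a worker, for matchmaking. Illustrative schema.
struct Worker {
    actor: &'static str,
    vram_gb: u32,
    online: bool,
    busy: bool,
}

/// Pick the online, idle worker with the most VRAM that meets the
/// job's :min-vram requirement. Returns None if nobody qualifies.
fn assign(workers: &[Worker], min_vram: u32) -> Option<&'static str> {
    workers
        .iter()
        .filter(|w| w.online && !w.busy && w.vram_gb >= min_vram)
        .max_by_key(|w| w.vram_gb)
        .map(|w| w.actor)
}

fn main() {
    let workers = [
        Worker { actor: "alice",   vram_gb: 12, online: true, busy: true  },
        Worker { actor: "bob",     vram_gb: 0,  online: true, busy: false },
        Worker { actor: "charlie", vram_gb: 20, online: true, busy: false },
    ];
    // The 8 GB render job from the example routes to Charlie.
    assert_eq!(assign(&workers, 8), Some("charlie"));
    // A hypothetical 32 GB job finds no eligible worker and stays queued.
    assert_eq!(assign(&workers, 32), None);
}
```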
This is already how artdag works conceptually. The L1 server is a Celery worker that picks up rendering tasks. Replace "Celery worker on a cloud server" with "Celery worker on a member's desktop" and the architecture barely changes. The task queue just has different workers.
### Economics
| | Centralised (current) | Cooperative mesh |
|---|---|---|
| Image/video processing | Cloud GPU ($2-5/hr) | Member's local GPU (free) |
| Content storage | Server disk + S3 | IPFS (distributed) + pinning |
| Content serving | Server bandwidth | Peer-to-peer + IPFS |
| Server cost | GPU instances + storage + bandwidth | Cheap relay (CPU + disk only) |
| Scaling | More users = more cost | More members = more capacity |
The co-op's infrastructure cost drops to: **one small VPS + IPFS pinning storage.** That's it. All compute — rendering, processing, serving content — is distributed across members' machines.
More members joining makes the network faster and more capable, not more expensive. Like BitTorrent seeding, but for an entire application platform.
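To make the flat-cost claim concrete, a back-of-envelope model. The $3/hr GPU rate is mid-range from the table above; every other number (VPS and pinning prices, monthly render hours) is an assumption for illustration only:

```rust
/// Illustrative monthly costs in USD. All constants are assumptions
/// except the cloud GPU rate, which is mid-range from the table above.
fn centralised_cost(gpu_hours: f64) -> f64 {
    let app_server = 40.0;       // assumed app server with storage, $/month
    app_server + gpu_hours * 3.0 // cloud GPU time at ~$3/hr
}

fn coop_cost(_members: u32) -> f64 {
    let relay_vps = 10.0;    // assumed small CPU-only VPS, $/month
    let pinning_disk = 5.0;  // assumed pinning storage, $/month
    relay_vps + pinning_disk // flat: members supply GPU and bandwidth
}

fn main() {
    // 100 GPU-hours of rendering per month:
    assert_eq!(centralised_cost(100.0), 340.0);
    // The co-op relay costs the same at 10 members or 1,000.
    assert_eq!(coop_cost(10), coop_cost(1000));
}
```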
### The Relay Server's Role
The relay is minimal — a matchmaker and persistence layer, not a compute provider:
- **Peer directory**: who's online, their QUIC address, their GPU capabilities
- **Message queue**: hold activities for offline members
- **IPFS pinning**: persist content when authors are offline
- **HTTPS gateway**: serve HTML to Tier 0 browsers (visitors, search engines)
- **Federation bridge**: translate sexp ↔ JSON-LD for Mastodon/Pleroma peers
- **Job queue**: match GPU-intensive tasks to available peers
- **Capability registry**: what each peer can do (GPU model, VRAM, storage)
The relay does no rendering, no media processing, no content generation. Its cost stays flat regardless of member count.
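The peer directory in the list above needs only an address book plus a liveness rule. A std-only Rust sketch, assuming heartbeat-based liveness with abstract clock ticks; `PeerDirectory` and the window value are illustrative, not the real design:

```rust
use std::collections::HashMap;

/// Sketch of the relay's peer directory: QUIC address plus the tick of
/// the peer's last heartbeat, with a fixed liveness window. Illustrative.
struct PeerDirectory {
    peers: HashMap<String, (String, u64)>, // actor -> (quic addr, last tick)
    liveness_window: u64,
}

impl PeerDirectory {
    fn heartbeat(&mut self, actor: &str, addr: &str, now: u64) {
        self.peers.insert(actor.into(), (addr.into(), now));
    }

    /// A peer counts as online if it heartbeated within the window;
    /// returns its QUIC address for direct peer-to-peer streams.
    fn online(&self, actor: &str, now: u64) -> Option<&str> {
        self.peers.get(actor).and_then(|(addr, seen)| {
            (now.saturating_sub(*seen) <= self.liveness_window)
                .then_some(addr.as_str())
        })
    }
}

fn main() {
    let mut dir = PeerDirectory { peers: HashMap::new(), liveness_window: 30 };
    dir.heartbeat("alice@rose-ash.com", "quic://203.0.113.7:4433", 100);

    // Within the window, Alice is reachable peer-to-peer...
    assert_eq!(dir.online("alice@rose-ash.com", 120),
               Some("quic://203.0.113.7:4433"));
    // ...after it lapses, readers fall back to IPFS or the gateway.
    assert_eq!(dir.online("alice@rose-ash.com", 200), None);
}
```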
### Content Flow
```
Author creates post:
  1. Edit in Rust client (local)
  2. Render media with local GPU (artdag)
  3. Pin content + media to IPFS (local node)
  4. Publish CIDs to relay (for pinning + discovery)
  5. Stream activity to connected followers (peer-to-peer)
  6. Relay queues activity for offline followers

Reader views post:
  1. Fetch sexp from author's client (if online, peer-to-peer)
  2. Or fetch from IPFS by CID (if author offline)
  3. Or fetch from relay gateway as HTML (if Tier 0 browser)
  4. Components resolved from local cache (content-addressed)
  5. Render locally (Rust GPU or sexpr.js in browser)
```
No server rendered anything. No server paid for GPU time. The relay held nothing beyond pins and queued messages. The cooperative's members are the infrastructure.
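The reader-side fallback chain can be sketched as a three-step resolver: live peer first, then IPFS by CID, then the relay's HTML gateway. A std-only Rust sketch with stand-in fetchers in place of real network calls; `fetch_post` and `Source` are illustrative names:

```rust
/// Where a post was ultimately resolved from.
enum Source { Peer, Ipfs, Gateway }

/// Try the author's live peer, then IPFS, then the relay gateway.
/// The three closures stand in for real QUIC/IPFS/HTTPS fetches.
fn fetch_post(
    peer: impl Fn() -> Option<String>,
    ipfs: impl Fn() -> Option<String>,
    gateway: impl Fn() -> String,
) -> (Source, String) {
    if let Some(sexp) = peer() {
        return (Source::Peer, sexp); // author online: direct stream
    }
    if let Some(sexp) = ipfs() {
        return (Source::Ipfs, sexp); // author offline: pinned CID
    }
    (Source::Gateway, gateway()) // Tier 0 browser: relay-rendered HTML
}

fn main() {
    // Author offline but content pinned: IPFS wins.
    let (src, body) = fetch_post(
        || None,
        || Some("(Note (p \"hello\"))".into()),
        || "<p>hello</p>".into(),
    );
    assert!(matches!(src, Source::Ipfs));
    assert_eq!(body, "(Note (p \"hello\"))");
}
```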
---
## Cooperative Angle
- Members install the Rust client → fast native experience, 5MB binary, no app store
@@ -715,6 +864,9 @@ Depends on:
- AI agents speak the protocol natively → components as tool calls, mutations as actions
- Auto-updates via content-addressed components → no gatekeeping
- The component registry is a shared vocabulary across the cooperative network
- Members' desktops are the cloud — contributing GPU, storage, and bandwidth
- The relay server stays cheap and flat-cost regardless of growth
- The original vision of the web: everyone has a server
---
@@ -725,7 +877,8 @@ Depends on:
| `sexpr-js-runtime-plan.md` | The JS library powering Tier 1 (Phases 3-4) |
| `ghost-removal-plan.md` | Posts must be sexp before federation/client rendering adds value |
| `sexpr-ai-integration.md` | AI agents benefit from all tiers and the self-describing schema |
| artdag (`artdag/`) | The media processing engine that runs on member GPUs |
---
*The document is the program. The program is the document. The protocol is both. The network is its members.*