diff --git a/README.md b/README.md
index 545a212..408ad6c 100644
--- a/README.md
+++ b/README.md
@@ -1,212 +1,312 @@
-# Art Celery
+# Art DAG L1 Server
-L1 rendering server for the Art DAG system. Manages distributed rendering jobs via Celery workers.
+L1 rendering server for the Art DAG system. Manages distributed rendering jobs via Celery workers with content-addressable caching and optional IPFS integration.
+
+## Features
+
+- **3-Phase Execution**: Analyze → Plan → Execute pipeline for recipe-based rendering
+- **Content-Addressable Caching**: SHA3-256 hashed content with deduplication
+- **IPFS Integration**: Optional IPFS-primary mode for distributed storage
+- **Storage Providers**: S3, IPFS, and local storage backends
+- **DAG Visualization**: Interactive graph visualization of execution plans
+- **SPA-Style Navigation**: Smooth URL-based navigation without full page reloads
+- **L2 Federation**: Publish outputs to ActivityPub registry
## Dependencies
- **artdag** (GitHub): Core DAG execution engine
- **artdag-effects** (rose-ash): Effect implementations
+- **artdag-common**: Shared templates and middleware
- **Redis**: Message broker, result backend, and run persistence
+- **PostgreSQL**: Metadata storage
+- **IPFS** (optional): Distributed content storage
-## Setup
+## Quick Start
```bash
-# Install Redis
-sudo apt install redis-server
-
-# Install Python dependencies
+# Install dependencies
pip install -r requirements.txt
+# Start Redis
+redis-server
+
# Start a worker
-celery -A celery_app worker --loglevel=info
+celery -A celery_app worker --loglevel=info -E
# Start the L1 server
python server.py
```
-## Web UI
+## Docker Swarm Deployment
-The server provides a web interface at the root URL:
+```bash
+docker stack deploy -c docker-compose.yml artdag
+```
+
+The stack includes:
+- **redis**: Message broker (Redis 7)
+- **postgres**: Metadata database (PostgreSQL 16)
+- **ipfs**: IPFS node (Kubo)
+- **l1-server**: FastAPI web server
+- **l1-worker**: Celery workers (2 replicas)
+- **flower**: Celery task monitoring
+
+## Configuration
+
+### Environment Variables
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `HOST` | `0.0.0.0` | Server bind address |
+| `PORT` | `8000` | Server port |
+| `REDIS_URL` | `redis://localhost:6379/5` | Redis connection |
+| `DATABASE_URL` | `postgresql://artdag:artdag@localhost:5432/artdag` | PostgreSQL connection |
+| `CACHE_DIR` | `~/.artdag/cache` | Local cache directory |
+| `IPFS_API` | `/dns/localhost/tcp/5001` | IPFS API multiaddr |
+| `IPFS_GATEWAY_URL` | `https://ipfs.io/ipfs` | Public IPFS gateway |
+| `IPFS_PRIMARY` | `false` | Enable IPFS-primary mode |
+| `L1_PUBLIC_URL` | `http://localhost:8100` | Public URL for redirects |
+| `L2_SERVER` | - | L2 ActivityPub server URL |
+| `L2_DOMAIN` | - | L2 domain for federation |
+| `ARTDAG_CLUSTER_KEY` | - | Cluster key for trust domains |
+
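How these variables might be consumed can be sketched as below, using the table's defaults; `load_config` and `CONFIG_DEFAULTS` are illustrative names for this README, not the server's actual code:

```python
import os

# Defaults mirror the table above; this is a sketch, not the server's loader.
CONFIG_DEFAULTS = {
    "HOST": "0.0.0.0",
    "PORT": "8000",
    "REDIS_URL": "redis://localhost:6379/5",
    "CACHE_DIR": "~/.artdag/cache",
    "IPFS_GATEWAY_URL": "https://ipfs.io/ipfs",
    "IPFS_PRIMARY": "false",
}

def load_config(env=None):
    env = os.environ if env is None else env
    cfg = {key: env.get(key, default) for key, default in CONFIG_DEFAULTS.items()}
    cfg["PORT"] = int(cfg["PORT"])                            # numeric port
    cfg["CACHE_DIR"] = os.path.expanduser(cfg["CACHE_DIR"])   # resolve "~"
    cfg["IPFS_PRIMARY"] = cfg["IPFS_PRIMARY"].lower() in ("1", "true", "yes")
    return cfg
```

Unset variables fall back to the defaults above, so a bare `load_config()` is enough for local development.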
+### IPFS-Primary Mode
+
+When `IPFS_PRIMARY=true`, all content is stored on IPFS:
+- Input files are added to IPFS on upload
+- Analysis results stored as JSON on IPFS
+- Execution plans stored on IPFS
+- Step outputs pinned to IPFS
+- Local cache becomes a read-through cache
+
+This enables distributed execution across multiple L1 nodes sharing the same IPFS network.
+
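The read-through behavior can be sketched as follows; `ReadThroughCache` and the injected `fetch_from_ipfs` callable are hypothetical names for illustration, not the server's API:

```python
# Sketch of a read-through cache for IPFS-primary mode: local lookups are
# served from the cache, and misses are fetched from IPFS and stored locally.
class ReadThroughCache:
    def __init__(self, fetch_from_ipfs):
        self._local = {}            # content_hash -> bytes
        self._fetch = fetch_from_ipfs

    def get(self, content_hash):
        if content_hash not in self._local:
            # Miss: pull the content from IPFS, then keep a local copy.
            self._local[content_hash] = self._fetch(content_hash)
        return self._local[content_hash]
```

Because every node resolves misses against the same IPFS network, any L1 can serve content another L1 produced.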
+## Web UI
| Path | Description |
|------|-------------|
| `/` | Home page with server info |
| `/runs` | View and manage rendering runs |
-| `/run/{id}` | Run detail page |
+| `/run/{id}` | Run detail with tabs: Plan, Analysis, Artifacts |
+| `/run/{id}/plan` | Interactive DAG visualization |
+| `/run/{id}/analysis` | Audio/video analysis data |
+| `/run/{id}/artifacts` | Cached step outputs |
| `/recipes` | Browse and run available recipes |
| `/recipe/{id}` | Recipe detail page |
+| `/recipe/{id}/dag` | Recipe DAG visualization |
| `/media` | Browse cached media files |
+| `/storage` | Manage storage providers |
| `/auth` | Receive auth token from L2 |
-| `/auth/revoke` | Revoke a specific token |
-| `/auth/revoke-user` | Revoke all tokens for a user (called by L2 on logout) |
| `/logout` | Log out |
| `/download/client` | Download CLI client |
-## Authentication
-
-L1 servers authenticate users via L2 (the ActivityPub registry). No shared secrets are required.
-
-### Configuration
-
-```bash
-export L1_PUBLIC_URL=https://celery-artdag.rose-ash.com
-```
-
-### How it works
-
-1. User clicks "Attach" on L2's Renderers page
-2. L2 creates a **scoped token** bound to this specific L1
-3. User is redirected to L1's `/auth?auth_token=...`
-4. L1 calls L2's `/auth/verify` to validate the token
-5. L2 checks: token valid, not revoked, scope matches this L1
-6. L1 sets a local cookie and records the token
-
-### Token revocation
-
-When a user logs out of L2, L2 calls `/auth/revoke-user` on all attached L1s. L1 maintains a Redis-based token tracking and revocation system:
-
-- Tokens registered per-user when authenticating (`artdag:user_tokens:{username}`)
-- `/auth/revoke-user` revokes all tokens for a username
-- Revoked token hashes stored in Redis with 30-day expiry
-- Every authenticated request checks the revocation list
-- Revoked tokens are immediately rejected
-
-### Security
-
-- **Scoped tokens**: Tokens are bound to a specific L1. A stolen token can't be used on other L1 servers.
-- **L2 verification**: L1 verifies every token with L2, which checks its revocation table.
-- **No shared secrets**: L1 doesn't need L2's JWT secret.
-
-## API
+## API Reference
Interactive docs: http://localhost:8100/docs
-### Endpoints
+### Runs
| Method | Path | Description |
|--------|------|-------------|
-| GET | `/` | Server info |
| POST | `/runs` | Start a rendering run |
-| GET | `/runs` | List all runs |
+| GET | `/runs` | List all runs (paginated) |
| GET | `/runs/{run_id}` | Get run status |
| DELETE | `/runs/{run_id}` | Delete a run |
-| GET | `/cache` | List cached content hashes |
-| GET | `/cache/{hash}` | Download cached content |
-| DELETE | `/cache/{hash}` | Delete cached content |
-| POST | `/cache/import?path=` | Import local file to cache |
-| POST | `/cache/upload` | Upload file to cache |
-| GET | `/assets` | List known assets |
-| POST | `/configs/upload` | Upload a config YAML |
-| GET | `/configs` | List configs |
-| GET | `/configs/{id}` | Get config details |
-| DELETE | `/configs/{id}` | Delete a config |
-| POST | `/configs/{id}/run` | Run a config |
+| GET | `/api/run/{run_id}` | Get run as JSON |
+| GET | `/api/run/{run_id}/plan` | Get execution plan JSON |
+| GET | `/api/run/{run_id}/analysis` | Get analysis data JSON |
-### Configs
+### Recipes
-Configs are YAML files that define reusable DAG pipelines. They can have:
-- **Fixed inputs**: Assets with pre-defined content hashes
-- **Variable inputs**: Placeholders filled at run time
+| Method | Path | Description |
+|--------|------|-------------|
+| POST | `/recipes/upload` | Upload recipe YAML |
+| GET | `/recipes` | List recipes (paginated) |
+| GET | `/recipes/{recipe_id}` | Get recipe details |
+| DELETE | `/recipes/{recipe_id}` | Delete recipe |
+| POST | `/recipes/{recipe_id}/run` | Execute recipe |
+
+### Cache
+
+| Method | Path | Description |
+|--------|------|-------------|
+| GET | `/cache/{hash}` | Get cached content (with preview) |
+| GET | `/cache/{hash}/raw` | Download raw content |
+| GET | `/cache/{hash}/mp4` | Get MP4 video |
+| GET | `/cache/{hash}/meta` | Get content metadata |
+| PATCH | `/cache/{hash}/meta` | Update metadata |
+| POST | `/cache/{hash}/publish` | Publish to L2 |
+| DELETE | `/cache/{hash}` | Delete from cache |
+| POST | `/cache/import?path=` | Import local file |
+| POST | `/cache/upload` | Upload file |
+| GET | `/media` | Browse media gallery |
+
+### IPFS
+
+| Method | Path | Description |
+|--------|------|-------------|
+| GET | `/ipfs/{cid}` | Redirect to IPFS gateway |
+| GET | `/ipfs/{cid}/raw` | Fetch raw content from IPFS |
+
+### Storage Providers
+
+| Method | Path | Description |
+|--------|------|-------------|
+| GET | `/storage` | List storage providers |
+| POST | `/storage` | Add provider (form) |
+| POST | `/storage/add` | Add provider (JSON) |
+| GET | `/storage/{id}` | Get provider details |
+| PATCH | `/storage/{id}` | Update provider |
+| DELETE | `/storage/{id}` | Delete provider |
+| POST | `/storage/{id}/test` | Test connection |
+| GET | `/storage/type/{type}` | Get form for provider type |
+
+### 3-Phase API
+
+| Method | Path | Description |
+|--------|------|-------------|
+| POST | `/api/plan` | Generate execution plan |
+| POST | `/api/execute` | Execute a plan |
+| POST | `/api/run-recipe` | Full pipeline (analyze+plan+execute) |
+
+### Authentication
+
+| Method | Path | Description |
+|--------|------|-------------|
+| GET | `/auth` | Receive auth token from L2 |
+| GET | `/logout` | Log out |
+| POST | `/auth/revoke` | Revoke a specific token |
+| POST | `/auth/revoke-user` | Revoke all user tokens |
+
+## 3-Phase Execution
+
+Recipes are executed in three phases:
+
+### Phase 1: Analyze
+Extract features from input files:
+- **Audio/Video**: Tempo, beat times, energy levels
+- Results cached by content hash
+
+### Phase 2: Plan
+Generate an execution plan:
+- Parse recipe YAML
+- Resolve dependencies between steps
+- Compute cache IDs for each step
+- Skip already-cached steps
+
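Cache-ID computation might look like the following sketch, assuming (per the Features list) SHA3-256 and that a step's identity is its type, config, and resolved input hashes; `step_cache_id` is an illustrative name:

```python
import hashlib
import json

def step_cache_id(step_type, config, input_ids):
    """Derive a deterministic cache ID for a step (illustrative sketch)."""
    payload = json.dumps(
        {"type": step_type, "config": config, "inputs": sorted(input_ids)},
        sort_keys=True,
    ).encode()
    return hashlib.sha3_256(payload).hexdigest()
```

Since the ID depends only on the step's type, config, and input hashes, a step whose ID is already in the cache can be skipped.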
+### Phase 3: Execute
+Run the plan level by level:
+- Steps at each level run in parallel
+- Results cached with content-addressable hashes
+- Progress tracked in Redis
+
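Grouping steps into levels is a topological layering: each level depends only on earlier levels, so its steps can run in parallel. A sketch, where `execution_levels` is an illustrative helper rather than the server's scheduler:

```python
def execution_levels(deps):
    """deps: {step_id: [dependency_ids]} -> list of levels (lists of step ids)."""
    remaining = dict(deps)
    done, levels = set(), []
    while remaining:
        # A step is ready once all of its dependencies are done.
        level = sorted(s for s, d in remaining.items() if set(d) <= done)
        if not level:
            raise ValueError("dependency cycle detected")
        levels.append(level)
        done.update(level)
        for s in level:
            del remaining[s]
    return levels
```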
+## Recipe Format
+
+Recipes define reusable DAG pipelines:
-Example config:
```yaml
-name: my-effect
+name: beat-sync
version: "1.0"
-description: "Apply effect to user image"
+description: "Synchronize video to audio beats"
-registry:
- effects:
- dog:
- hash: "abc123..."
+inputs:
+ video:
+ type: video
+ description: "Source video"
+ audio:
+ type: audio
+ description: "Audio track"
-dag:
- nodes:
- - id: user_image
- type: SOURCE
- config:
- input: true # Variable input
- name: "input_image"
+steps:
+ - id: analyze_audio
+ type: ANALYZE
+ inputs: [audio]
+ config:
+ features: [beats, energy]
- - id: apply_dog
- type: EFFECT
- config:
- effect: dog
- inputs:
- - user_image
+ - id: sync_video
+ type: BEAT_SYNC
+ inputs: [video, analyze_audio]
+ config:
+ mode: stretch
- output: apply_dog
+output: sync_video
```
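A step's `inputs` list doubles as the DAG edges: an entry naming another step id creates a dependency, while entries naming recipe inputs do not. A sketch of edge extraction over a dict mirror of the recipe above (`step_dependencies` is an illustrative name):

```python
# Plain-dict mirror of the beat-sync recipe above.
recipe = {
    "inputs": {"video": {}, "audio": {}},
    "steps": [
        {"id": "analyze_audio", "inputs": ["audio"]},
        {"id": "sync_video", "inputs": ["video", "analyze_audio"]},
    ],
}

def step_dependencies(recipe):
    """Map each step id to the step ids it depends on (recipe inputs excluded)."""
    step_ids = {s["id"] for s in recipe["steps"]}
    return {
        s["id"]: [i for i in s["inputs"] if i in step_ids]
        for s in recipe["steps"]
    }
```

This is the structure the Plan phase resolves before computing cache IDs and grouping steps into levels.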
-### Start a run
-
-```bash
-curl -X POST http://localhost:8100/runs \
- -H "Content-Type: application/json" \
- -d '{"recipe": "dog", "inputs": ["33268b6e..."], "output_name": "my-output"}'
-```
-
-### Check run status
-
-```bash
-curl http://localhost:8100/runs/{run_id}
-```
-
-### Delete a run
-
-```bash
-curl -X DELETE http://localhost:8100/runs/{run_id} \
- -H "Authorization: Bearer
'
+    content = re.sub(r'```(\w*)\n(.*?)```', code_block_replace, content, flags=re.DOTALL)
+
+    # Inline code
+    content = re.sub(r'`([^`]+)`', r'<code>\1</code>', content)
+
+    # Headers
+    content = re.sub(r'^### (.+)$', r'<h3>\1</h3>', content, flags=re.MULTILINE)
+    content = re.sub(r'^## (.+)$', r'<h2>\1</h2>', content, flags=re.MULTILINE)
+    content = re.sub(r'^# (.+)$', r'<h1>\1</h1>', content, flags=re.MULTILINE)
+
+    # Bold and italic
+    content = re.sub(r'\*\*([^*]+)\*\*', r'<strong>\1</strong>', content)
+    content = re.sub(r'\*([^*]+)\*', r'<em>\1</em>', content)
+
+    # Links
+    content = re.sub(r'\[([^\]]+)\]\(([^)]+)\)', r'<a href="\2">\1</a>', content)
+
+    # Tables
+    def table_replace(match):
+        lines = match.group(0).strip().split('\n')
+        if len(lines) < 2:
+            return match.group(0)
+
+        header = lines[0]
+        rows = lines[2:] if len(lines) > 2 else []
+
+        header_cells = [cell.strip() for cell in header.split('|')[1:-1]]
+        header_html = ''.join(f'<th>{cell}</th>' for cell in header_cells)
+
+        rows_html = ''
+        for row in rows:
+            cells = [cell.strip() for cell in row.split('|')[1:-1]]
+            cells_html = ''.join(f'<td>{cell}</td>' for cell in cells)
+            rows_html += f'<tr>{cells_html}</tr>'
+
+        return f'<table><tr>{header_html}</tr>{rows_html}</table>'
+
+    content = re.sub(r'(\|[^\n]+\|\n)+', table_replace, content)
+
+    # Bullet points
+    content = re.sub(r'^- (.+)$', r'<li>\1</li>', content, flags=re.MULTILINE)
+
+    # Paragraphs (lines not starting with < or whitespace)
+    lines = content.split('\n')
+    result = []
+    in_paragraph = False
+    for line in lines:
+        stripped = line.strip()
+        if not stripped:
+            if in_paragraph:
+                result.append('</p>')
+                in_paragraph = False
+        elif not stripped.startswith('<') and not line.startswith((' ', '\t')):
+            if not in_paragraph:
+                result.append('<p>')
+                in_paragraph = True
+        result.append(line)
+    if in_paragraph:
+        result.append('</p>')
+    content = '\n'.join(result)
+
+    return content
+
+
+@app.get("/docs", response_class=HTMLResponse)
+async def docs_index(request: Request):
+    """Documentation index page."""
+    user = await get_optional_user(request)
+
+    html = f"""
') + content = '\n'.join(result) + + return content + + +@app.get("/docs", response_class=HTMLResponse) +async def docs_index(request: Request): + """Documentation index page.""" + user = await get_optional_user(request) + + html = f""" + + +