# Art DAG L1 Server

L1 rendering server for the Art DAG system. It manages distributed rendering jobs via Celery workers, with content-addressable caching and optional IPFS integration.
## Features
- **3-Phase Execution**: Analyze → Plan → Execute pipeline for recipe-based rendering
- **Content-Addressable Caching**: IPFS CIDs with deduplication
- **IPFS Integration**: Optional IPFS-primary mode for distributed storage
- **Storage Providers**: S3, IPFS, and local storage backends
- **DAG Visualization**: Interactive graph visualization of execution plans
- **SPA-Style Navigation**: Smooth URL-based navigation without full page reloads
- **L2 Federation**: Publish outputs to an ActivityPub registry
## Dependencies
- `artdag` (GitHub): Core DAG execution engine
- `artdag-effects` (rose-ash): Effect implementations
- `artdag-common`: Shared templates and middleware
- Redis: Message broker, result backend, and run persistence
- PostgreSQL: Metadata storage
- IPFS (optional): Distributed content storage
## Quick Start

```bash
# Install dependencies
pip install -r requirements.txt

# Start Redis
redis-server

# Start a worker
celery -A celery_app worker --loglevel=info -E

# Start the L1 server
python server.py
```
## Docker Swarm Deployment

```bash
docker stack deploy -c docker-compose.yml artdag
```
The stack includes:
- `redis`: Message broker (Redis 7)
- `postgres`: Metadata database (PostgreSQL 16)
- `ipfs`: IPFS node (Kubo)
- `l1-server`: FastAPI web server
- `l1-worker`: Celery workers (2 replicas)
- `flower`: Celery task monitoring
## Configuration

### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8000` | Server port |
| `REDIS_URL` | `redis://localhost:6379/5` | Redis connection |
| `DATABASE_URL` | (required) | PostgreSQL connection |
| `CACHE_DIR` | `~/.artdag/cache` | Local cache directory |
| `IPFS_API` | `/dns/localhost/tcp/5001` | IPFS API multiaddr |
| `IPFS_GATEWAY_URL` | `https://ipfs.io/ipfs` | Public IPFS gateway |
| `IPFS_PRIMARY` | `false` | Enable IPFS-primary mode |
| `L1_PUBLIC_URL` | `http://localhost:8100` | Public URL for redirects |
| `L2_SERVER` | - | L2 ActivityPub server URL |
| `L2_DOMAIN` | - | L2 domain for federation |
| `ARTDAG_CLUSTER_KEY` | - | Cluster key for trust domains |
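Assuming these are plain environment variables read at process start, a minimal loading sketch (variable names come from the table above; the `load_config` helper and its parsing are illustrative, not the server's actual code):

```python
import os

def load_config() -> dict:
    """Read a few of the settings from the table, with their documented defaults."""
    return {
        "host": os.environ.get("HOST", "0.0.0.0"),
        "port": int(os.environ.get("PORT", "8000")),
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379/5"),
        # Booleans arrive as strings; only the literal "true" enables the mode.
        "ipfs_primary": os.environ.get("IPFS_PRIMARY", "false").lower() == "true",
    }

cfg = load_config()
```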
### IPFS-Primary Mode

When `IPFS_PRIMARY=true`, all content is stored on IPFS:
- Input files are added to IPFS on upload
- Analysis results stored as JSON on IPFS
- Execution plans stored on IPFS
- Step outputs pinned to IPFS
- Local cache becomes a read-through cache
This enables distributed execution across multiple L1 nodes sharing the same IPFS network.
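The read-through behaviour can be sketched as follows. This is a toy model: a dict stands in for the IPFS node, and `ReadThroughCache` is a hypothetical name, not a class from the codebase.

```python
class ReadThroughCache:
    """Local cache that falls back to an authoritative IPFS store on miss."""

    def __init__(self, ipfs: dict):
        self.ipfs = ipfs    # authoritative store, keyed by CID (dict stand-in)
        self.local = {}     # local cache (CACHE_DIR stand-in)

    def get(self, cid: str) -> bytes:
        if cid in self.local:       # local hit: no network round-trip
            return self.local[cid]
        data = self.ipfs[cid]       # miss: fetch from IPFS...
        self.local[cid] = data      # ...and populate the local cache
        return data

ipfs = {"QmExample": b"frame-data"}
cache = ReadThroughCache(ipfs)
cache.get("QmExample")              # first read populates the local cache
```

Because every L1 node resolves misses against the shared IPFS network, any node can serve or continue a run started elsewhere.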
## Web UI
| Path | Description |
|---|---|
| `/` | Home page with server info |
| `/runs` | View and manage rendering runs |
| `/run/{id}` | Run detail with tabs: Plan, Analysis, Artifacts |
| `/run/{id}/plan` | Interactive DAG visualization |
| `/run/{id}/analysis` | Audio/video analysis data |
| `/run/{id}/artifacts` | Cached step outputs |
| `/recipes` | Browse and run available recipes |
| `/recipe/{id}` | Recipe detail page |
| `/recipe/{id}/dag` | Recipe DAG visualization |
| `/media` | Browse cached media files |
| `/storage` | Manage storage providers |
| `/auth` | Receive auth token from L2 |
| `/logout` | Log out |
| `/download/client` | Download CLI client |
## API Reference

Interactive docs: `http://localhost:8100/docs`
### Runs
| Method | Path | Description |
|---|---|---|
| POST | `/runs` | Start a rendering run |
| GET | `/runs` | List all runs (paginated) |
| GET | `/runs/{run_id}` | Get run status |
| DELETE | `/runs/{run_id}` | Delete a run |
| GET | `/api/run/{run_id}` | Get run as JSON |
| GET | `/api/run/{run_id}/plan` | Get execution plan JSON |
| GET | `/api/run/{run_id}/analysis` | Get analysis data JSON |
### Recipes
| Method | Path | Description |
|---|---|---|
| POST | `/recipes/upload` | Upload recipe YAML |
| GET | `/recipes` | List recipes (paginated) |
| GET | `/recipes/{recipe_id}` | Get recipe details |
| DELETE | `/recipes/{recipe_id}` | Delete a recipe |
| POST | `/recipes/{recipe_id}/run` | Execute a recipe |
### Cache
| Method | Path | Description |
|---|---|---|
| GET | `/cache/{cid}` | Get cached content (with preview) |
| GET | `/cache/{cid}/raw` | Download raw content |
| GET | `/cache/{cid}/mp4` | Get MP4 video |
| GET | `/cache/{cid}/meta` | Get content metadata |
| PATCH | `/cache/{cid}/meta` | Update metadata |
| POST | `/cache/{cid}/publish` | Publish to L2 |
| DELETE | `/cache/{cid}` | Delete from cache |
| POST | `/cache/import?path=` | Import local file |
| POST | `/cache/upload` | Upload file |
| GET | `/media` | Browse media gallery |
### IPFS
| Method | Path | Description |
|---|---|---|
| GET | `/ipfs/{cid}` | Redirect to IPFS gateway |
| GET | `/ipfs/{cid}/raw` | Fetch raw content from IPFS |
### Storage Providers
| Method | Path | Description |
|---|---|---|
| GET | `/storage` | List storage providers |
| POST | `/storage` | Add provider (form) |
| POST | `/storage/add` | Add provider (JSON) |
| GET | `/storage/{id}` | Get provider details |
| PATCH | `/storage/{id}` | Update provider |
| DELETE | `/storage/{id}` | Delete provider |
| POST | `/storage/{id}/test` | Test connection |
| GET | `/storage/type/{type}` | Get form for provider type |
### 3-Phase API
| Method | Path | Description |
|---|---|---|
| POST | `/api/plan` | Generate execution plan |
| POST | `/api/execute` | Execute a plan |
| POST | `/api/run-recipe` | Full pipeline (analyze + plan + execute) |
### Authentication
| Method | Path | Description |
|---|---|---|
| GET | `/auth` | Receive auth token from L2 |
| GET | `/logout` | Log out |
| POST | `/auth/revoke` | Revoke a specific token |
| POST | `/auth/revoke-user` | Revoke all user tokens |
## 3-Phase Execution

Recipes are executed in three phases:
### Phase 1: Analyze

Extract features from input files:
- Audio/Video: Tempo, beat times, energy levels
- Results cached by CID
### Phase 2: Plan

Generate an execution plan:
- Parse recipe YAML
- Resolve dependencies between steps
- Compute cache IDs for each step
- Skip already-cached steps
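The cache-skipping step relies on per-step IDs being deterministic. The following sketch shows one way such an ID could be derived; the server's actual hashing scheme may differ, and `step_cache_id` is a hypothetical name:

```python
import hashlib
import json

def step_cache_id(step_type: str, config: dict, input_cids: list[str]) -> str:
    """Hash (type, config, input CIDs) into a deterministic step ID.

    Identical inputs always hash to the same ID, so a step whose ID is
    already in the cache can be skipped during planning.
    """
    canonical = json.dumps(
        {"type": step_type, "config": config, "inputs": input_cids},
        sort_keys=True,  # canonical key order makes the hash stable
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

a = step_cache_id("BEAT_SYNC", {"mode": "stretch"}, ["Qm1", "Qm2"])
b = step_cache_id("BEAT_SYNC", {"mode": "stretch"}, ["Qm1", "Qm2"])
assert a == b  # same step, same ID: the planner can reuse the cached output
```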
### Phase 3: Execute

Run the plan level by level:
- Steps at each level run in parallel
- Results cached with content-addressable hashes
- Progress tracked in Redis
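"Level by level" means grouping steps by dependency depth so that each group has no internal edges and can run in parallel. A minimal sketch of that grouping (the `levels` helper is illustrative, not the server's planner):

```python
from collections import defaultdict

def levels(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group DAG steps into levels; steps in one level can run in parallel."""
    depth: dict[str, int] = {}

    def depth_of(step: str) -> int:
        # A step's depth is one more than its deepest dependency.
        if step not in depth:
            depth[step] = 1 + max((depth_of(p) for p in deps[step]), default=0)
        return depth[step]

    grouped = defaultdict(list)
    for step in deps:
        grouped[depth_of(step)].append(step)
    return [sorted(grouped[k]) for k in sorted(grouped)]

# The beat-sync recipe below: sync_video depends on analyze_audio.
plan = {"analyze_audio": [], "sync_video": ["analyze_audio"]}
assert levels(plan) == [["analyze_audio"], ["sync_video"]]
```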
## Recipe Format

Recipes define reusable DAG pipelines:

```yaml
name: beat-sync
version: "1.0"
description: "Synchronize video to audio beats"

inputs:
  video:
    type: video
    description: "Source video"
  audio:
    type: audio
    description: "Audio track"

steps:
  - id: analyze_audio
    type: ANALYZE
    inputs: [audio]
    config:
      features: [beats, energy]

  - id: sync_video
    type: BEAT_SYNC
    inputs: [video, analyze_audio]
    config:
      mode: stretch

output: sync_video
```
## Storage

### Local Cache

- Location: `~/.artdag/cache/` (or `CACHE_DIR`)
- Content-addressed by IPFS CID
- Subdirectories: `plans/`, `analysis/`

### Redis

- Database 5 (configurable via `REDIS_URL`)
- Keys:
  - `artdag:run:*` - Run state
  - `artdag:recipe:*` - Recipe definitions
  - `artdag:revoked:*` - Token revocation
  - `artdag:user_tokens:*` - User token tracking
### PostgreSQL
- Content metadata
- Storage provider configurations
- Provenance records
## Authentication

L1 servers authenticate via L2 (the ActivityPub registry). No shared secrets are required.
### Flow

1. The user clicks "Attach" on L2's Renderers page
2. L2 creates a scoped token bound to this L1
3. The user is redirected to L1's `/auth?auth_token=...`
4. L1 calls L2's `/auth/verify` to validate the token
5. L1 sets a local cookie and records the token
### Token Revocation

- Tokens are tracked per-user in Redis
- L2 calls `/auth/revoke-user` on logout
- Revoked token hashes are stored with a 30-day expiry
- Every request checks the revocation list
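The revocation check can be sketched in a few lines. A dict stands in for Redis here, and the helper names (`revoke`, `is_revoked`) are illustrative; the real server stores hashes under `artdag:revoked:*` with a Redis TTL instead of checking timestamps itself:

```python
import hashlib
import time

REVOKED: dict[str, float] = {}       # token hash -> expiry timestamp (Redis stand-in)
THIRTY_DAYS = 30 * 24 * 3600

def token_hash(token: str) -> str:
    # Only the hash is stored, never the raw token.
    return hashlib.sha256(token.encode()).hexdigest()

def revoke(token: str) -> None:
    REVOKED[token_hash(token)] = time.time() + THIRTY_DAYS

def is_revoked(token: str) -> bool:
    expiry = REVOKED.get(token_hash(token))
    return expiry is not None and expiry > time.time()

revoke("user-token-abc")
assert is_revoked("user-token-abc")
assert not is_revoked("some-other-token")
```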
## CLI Usage

```bash
# Quick render (effect mode)
python render.py dog cat --sync

# Submit async
python render.py dog cat

# Run a recipe
curl -X POST http://localhost:8100/recipes/beat-sync/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"inputs": {"video": "abc123...", "audio": "def456..."}}'
```
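The same recipe-run call can be issued from Python with the standard library. The request is only constructed here, not sent, since sending it needs a running L1 server and a real token:

```python
import json
import urllib.request

payload = {"inputs": {"video": "abc123", "audio": "def456"}}
req = urllib.request.Request(
    "http://localhost:8100/recipes/beat-sync/run",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder, as in the curl example
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the run and return the response.
assert req.get_method() == "POST"
```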
## Architecture

```
L1 Server (FastAPI)
│
├── Web UI (Jinja2 + HTMX + Tailwind)
│
├── POST /runs → Celery tasks
│   │
│   └── celery_app.py
│       ├── tasks/analyze.py      (Phase 1)
│       ├── tasks/execute.py      (Phase 3 steps)
│       └── tasks/orchestrate.py  (Full pipeline)
│
├── cache_manager.py
│   │
│   ├── Local filesystem (CACHE_DIR)
│   ├── IPFS (ipfs_client.py)
│   └── S3/Storage providers
│
└── database.py (PostgreSQL metadata)
```
## Provenance

Every render produces a provenance record:

```json
{
  "task_id": "celery-task-uuid",
  "rendered_at": "2026-01-07T...",
  "rendered_by": "@giles@artdag.rose-ash.com",
  "output": {"name": "...", "cid": "Qm..."},
  "inputs": [...],
  "effects": [...],
  "infrastructure": {
    "software": {"name": "infra:artdag", "cid": "Qm..."},
    "hardware": {"name": "infra:giles-hp", "cid": "Qm..."}
  }
}
```
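A consumer of these records might sanity-check them for the top-level fields shown above. This is an illustrative check against the example's shape only; the real schema may carry additional fields:

```python
# Top-level fields taken from the example record above.
REQUIRED = {"task_id", "rendered_at", "rendered_by", "output",
            "inputs", "effects", "infrastructure"}

def missing_provenance_fields(record: dict) -> list[str]:
    """Return the names of required top-level fields absent from a record."""
    return sorted(REQUIRED - record.keys())

record = {
    "task_id": "celery-task-uuid",
    "rendered_at": "2026-01-07T00:00:00Z",
    "rendered_by": "@giles@artdag.rose-ash.com",
    "output": {"name": "out.mp4", "cid": "Qm..."},
    "inputs": [],
    "effects": [],
    "infrastructure": {},
}
assert missing_provenance_fields(record) == []
```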