Refactor to use IPFS CID as the primary content identifier:
- Update database schema: content_hash -> cid, output_hash -> output_cid
- Update all services, routers, and tasks to use cid terminology
- Update HTML templates to display CID instead of hash
- Update cache_manager parameter names
- Update README documentation
This completes the transition to CID-only content addressing.
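A minimal sketch of the schema rename as an Alembic-style migration; the `runs` table name is an assumption (only the column names appear in this commit):

```python
# Hypothetical Alembic migration; "runs" is an assumed table name.
from alembic import op

def upgrade():
    op.alter_column("runs", "content_hash", new_column_name="cid")
    op.alter_column("runs", "output_hash", new_column_name="output_cid")

def downgrade():
    op.alter_column("runs", "cid", new_column_name="content_hash")
    op.alter_column("runs", "output_cid", new_column_name="output_hash")
```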
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Upload endpoint returns both CID and content_hash
- Cache manager handles both SHA3-256 hashes and IPFS CIDs
- get_by_cid() fetches from IPFS if not cached locally
- Execute tasks support :cid in addition to :hash
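A rough sketch of the IPFS fallback in get_by_cid(), assuming the local Kubo RPC API on port 5001; the cache interface here is hypothetical:

```python
import requests

IPFS_API = "http://localhost:5001/api/v0"  # assumed local Kubo RPC endpoint

def get_by_cid(cid: str, cache) -> bytes:
    """Return content for a CID, fetching from IPFS when not cached locally."""
    data = cache.get(cid)  # hypothetical local-cache lookup
    if data is not None:
        return data
    # Kubo's RPC API is POST-only; /cat streams the content bytes for a CID.
    resp = requests.post(f"{IPFS_API}/cat", params={"arg": cid}, timeout=30)
    resp.raise_for_status()
    cache.put(cid, resp.content)  # warm the local cache for next time
    return resp.content
```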
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Recipes: now content-addressed only (cache + IPFS); Redis storage removed
- Runs: completed runs stored in PostgreSQL; Redis holds only the task_id mapping
- Add list_runs_by_actor() to database.py for paginated run queries (sketched below)
- Add list_by_type() to cache_manager for filtering by node_type
- Fix upload endpoint to return size and filename fields
- Fix recipe run endpoint with proper DAG input binding
- Fix get_run_service() dependency to pass database module
Storage architecture:
- Redis: Ephemeral only (sessions, task mappings with TTL)
- PostgreSQL: Permanent records (completed runs, metadata)
- Cache: Content-addressed files (recipes, media, outputs)
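A sketch of what list_runs_by_actor() might look like, assuming asyncpg; the runs schema and column names are invented for illustration:

```python
import asyncpg

async def list_runs_by_actor(pool: asyncpg.Pool, actor_id: str,
                             limit: int = 50, offset: int = 0) -> list[dict]:
    """Paginated query over completed runs stored in PostgreSQL."""
    rows = await pool.fetch(
        """
        SELECT run_id, recipe_cid, output_cid, completed_at
          FROM runs
         WHERE actor_id = $1
         ORDER BY completed_at DESC
         LIMIT $2 OFFSET $3
        """,
        actor_id, limit, offset,
    )
    return [dict(r) for r in rows]
```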
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The get_cache_manager() singleton wasn't initializing with Redis,
so workers couldn't see files uploaded via the API server.
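A rough sketch of the fix, assuming a module-level singleton; the Redis URL, cache directory, and constructor arguments are assumptions:

```python
from pathlib import Path
import redis
from cache_manager import L1CacheManager  # project module

CACHE_DIR = Path("./cache")  # assumed location
_cache_manager = None

def get_cache_manager() -> L1CacheManager:
    """Build the shared manager once, now wired to Redis so every
    uvicorn worker sees the same content index."""
    global _cache_manager
    if _cache_manager is None:
        client = redis.Redis.from_url("redis://localhost:6379/0")  # assumed URL
        _cache_manager = L1CacheManager(cache_dir=CACHE_DIR, redis_client=client)
    return _cache_manager
```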
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The cache_manager now uses Redis hashes for the content_index and
ipfs_cids mappings. This allows multiple uvicorn workers to share
state, so files added by one worker are immediately visible to all
others.
- Add redis_client parameter to L1CacheManager
- Index lookups check Redis first, then fall back to in-memory
- Index updates go to both Redis and the JSON file (kept as a backup)
- Migrate existing JSON indexes to Redis on first load
- Re-enable workers=4 in uvicorn
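A sketch of the Redis-first lookup and dual-write update inside L1CacheManager; the Redis key and attribute names are assumptions:

```python
# Methods on L1CacheManager (sketch)
def _index_get(self, content_hash: str) -> str | None:
    if self.redis is not None:
        node_id = self.redis.hget("cache:content_index", content_hash)
        if node_id is not None:
            return node_id.decode()
    return self._content_index.get(content_hash)  # in-memory fallback

def _index_set(self, content_hash: str, node_id: str) -> None:
    self._content_index[content_hash] = node_id
    if self.redis is not None:
        self.redis.hset("cache:content_index", content_hash, node_id)
    self._save_index_json()  # JSON file kept as a backup copy
```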
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add PostgreSQL database for cache metadata storage with schema for
cache_items, item_types, pin_reasons, and l2_shares tables
- Add IPFS integration as durable backing store (local cache as hot storage)
- Add postgres and ipfs services to docker-compose.yml
- Update cache_manager to upload to IPFS and track CIDs (see the sketch below)
- Rename all config references to recipe throughout server.py
- Update API endpoints: /configs/* -> /recipes/*
- Update models: ConfigStatus -> RecipeStatus, ConfigRunRequest -> RecipeRunRequest
- Update UI tabs and pages to show Recipes instead of Configs
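For the cache_manager change above, a minimal sketch of the IPFS upload path, assuming Kubo's HTTP RPC API on port 5001:

```python
import requests

def ipfs_add(data: bytes) -> str:
    """Pin bytes into the local IPFS node and return the resulting CID,
    via Kubo's /api/v0/add endpoint (the endpoint URL is an assumption)."""
    resp = requests.post(
        "http://localhost:5001/api/v0/add",
        files={"file": data},
        params={"pin": "true"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["Hash"]  # Kubo reports the CID under "Hash"
```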
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add PyYAML dependency for parsing config files
- Add Pydantic models: VariableInput, FixedInput, ConfigStatus, ConfigRunRequest
- Add Redis storage functions for configs
- Add config YAML parsing with variable and fixed input detection
- Add config API endpoints: upload, list, get, delete, run
- Add config UI: Configs tab, list page, detail page with run form
- Add HTMX endpoints for config operations
- Add pinning on publish: configs and their fixed inputs are pinned
when runs from configs are published to L2
- Clean up debug logging in cache_manager
Config YAML format supports:
- Fixed inputs: resolve asset hashes from registry
- Variable inputs: marked with `input: true`, filled at run time
- DAG definition with nodes and edges
- Registry of assets and effects
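For illustration, a made-up config in this format, parsed the way the server might detect variable inputs; all field names beyond `input: true` are assumptions:

```python
import yaml

EXAMPLE = """
registry:
  assets:
    logo: "bafyexamplecid"     # fixed input, resolved from the registry
  effects:
    blur: "effects/blur@1.0"
nodes:
  - id: source
    input: true                # variable input, filled at run time
  - id: blurred
    effect: blur
edges:
  - [source, blurred]          # DAG edge: source -> blurred
"""

config = yaml.safe_load(EXAMPLE)
variable_inputs = [n["id"] for n in config["nodes"] if n.get("input")]
print(variable_inputs)  # -> ['source']
```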
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add is_pinned(), pin(), _load_meta(), _save_meta() to L1CacheManager
- Update can_delete() and can_discard_activity() to check local pinned status
- Update run deletion endpoints (API and UI) to check pinned metadata
- Remove L2 shared check fallback from run deletion
- Fix L2SharedChecker to return True on error (fail closed: an error cannot cause accidental deletion)
- Update tests for new pinned behavior
When items are published to L2, the publish flow marks them as pinned
locally. This ensures items remain non-deletable even if L2 is unreachable,
and both outputs AND inputs of published runs are protected.
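A sketch of the local pin bookkeeping, assuming per-node JSON sidecar files; the metadata layout is an assumption:

```python
import json
from pathlib import Path

class L1CacheManager:  # only the pin-related methods, sketched
    def __init__(self, cache_dir: Path):
        self.cache_dir = cache_dir

    def _meta_path(self, node_id: str) -> Path:
        return self.cache_dir / "meta" / f"{node_id}.json"

    def _load_meta(self, node_id: str) -> dict:
        path = self._meta_path(node_id)
        return json.loads(path.read_text()) if path.exists() else {}

    def _save_meta(self, node_id: str, meta: dict) -> None:
        path = self._meta_path(node_id)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(meta))

    def pin(self, node_id: str, reason: str = "published_to_l2") -> None:
        meta = self._load_meta(node_id)
        meta.update(pinned=True, pin_reason=reason)
        self._save_meta(node_id, meta)

    def is_pinned(self, node_id: str) -> bool:
        return bool(self._load_meta(node_id).get("pinned"))
```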
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
cache_manager.py:
- get_by_content_hash() now tries cache.get(content_hash) directly,
  since uploads use content_hash as the node_id
- This works even if the cache index hasn't been reloaded
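In other words, something along these lines; the return type and index helper are assumptions:

```python
def get_by_content_hash(self, content_hash: str):
    """Uploads use content_hash as the node_id, so a direct get works
    even before the cache index has been reloaded."""
    item = self.cache.get(content_hash)      # direct lookup by node_id
    if item is not None:
        return item
    node_id = self._index_get(content_hash)  # fall back to the content index
    return self.cache.get(node_id) if node_id else None
```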
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update list_all() to scan cache_dir for legacy files directly
(old files stored as CACHE_DIR/{hash}, not CACHE_DIR/legacy/)
- Update cache listing endpoints to use cache_manager.list_all()
instead of iterating CACHE_DIR.iterdir() directly
- This ensures uploaded files appear in the cache UI regardless
of whether they're in the old or new cache structure
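A sketch of the dual-layout scan; apart from the flat CACHE_DIR/{hash} form and the nodes/ directory noted in these commits, the details are assumptions:

```python
from pathlib import Path
from typing import Iterator

def list_all(self) -> Iterator[Path]:
    """Yield every cached file: new-structure files under nodes/ plus
    legacy files stored flat as CACHE_DIR/{hash}."""
    nodes_dir = self.cache_dir / "nodes"
    if nodes_dir.exists():
        yield from (p for p in nodes_dir.iterdir() if p.is_file())
    for path in self.cache_dir.iterdir():  # legacy flat files
        if path.is_file():
            yield path
```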
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove the symlink hack from cache_file(); it is no longer needed
- Add get_cache_path() helper for content_hash lookups
- Update all CACHE_DIR / content_hash patterns to use cache_manager
- Fix cache_manager.get_by_content_hash() to check path.exists()
- Fix legacy path lookup (cache_dir not legacy_dir)
- Update upload endpoint to use cache_manager.put()
This ensures cache lookups work correctly for both legacy files
(stored directly in CACHE_DIR) and new files (stored in nodes/).
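The new helper might look like this; the path layout follows the note above, the rest is an assumption:

```python
from pathlib import Path
from typing import Optional

def get_cache_path(self, content_hash: str) -> Optional[Path]:
    """Resolve a content hash to a file on disk, new layout first."""
    node_path = self.cache_dir / "nodes" / content_hash  # new structure
    if node_path.exists():
        return node_path
    legacy_path = self.cache_dir / content_hash          # legacy flat file
    return legacy_path if legacy_path.exists() else None
```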
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add cache_manager.py with L1CacheManager wrapping artdag Cache
- Add L2SharedChecker for checking published status via L2 API
- Update server.py to use cache_manager for storage
- Update DELETE /cache/{content_hash} to enforce deletion rules
- Add DELETE /runs/{run_id} endpoint for discarding runs
- Record activities when runs complete for deletion tracking
- Add comprehensive tests for cache manager
Deletion rules enforced:
- Cannot delete items published to L2
- Cannot delete inputs/outputs of runs
- Can delete orphaned items
- Runs can only be discarded if no items are shared
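A condensed sketch of these rules in can_delete(); is_shared() and _is_run_input_or_output() are hypothetical helper names:

```python
def can_delete(self, node_id: str) -> bool:
    """Apply the deletion rules above (sketch)."""
    if self.l2_checker.is_shared(node_id):     # published to L2
        return False
    if self._is_run_input_or_output(node_id):  # inputs/outputs of runs
        return False
    return True                                # orphaned items may be deleted
```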
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>