- Update save_run_cache to also update actor_id, recipe, inputs on conflict
- Add logging for actor_id when saving runs to run_cache
- Add admin endpoint DELETE /runs/admin/purge-failed to delete all failed runs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add /runs/stream POST endpoint for streaming recipes
- Accepts recipe_sexp, sources_sexp, audio_sexp
- Submits to run_stream Celery task
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove legacy_tasks.py, hybrid_state.py, render.py
- Remove old task modules (analyze, execute, execute_sexp, orchestrate)
- Add streaming interpreter from test repo
- Add sexp_effects with primitives and video effects
- Add streaming Celery task with CID-based asset resolution
- Support both CID and friendly name references for assets
- Add .dockerignore to prevent local clones from conflicting
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use dedicated thread with new event loop for database operations
- Create new database connection per operation to avoid pool conflicts
- Handle both async and sync calling contexts correctly
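A minimal sketch of the pattern, with illustrative names rather than the actual implementation:

```python
import asyncio
import threading

def run_db_operation(coro_factory):
    """Run an async DB operation on a dedicated thread with its own fresh
    event loop, so it works from both sync and async calling contexts.
    `coro_factory` is a zero-arg callable returning a new coroutine."""
    result, error = {}, {}

    def worker():
        loop = asyncio.new_event_loop()
        try:
            asyncio.set_event_loop(loop)
            result["value"] = loop.run_until_complete(coro_factory())
        except Exception as exc:  # capture to re-raise on the caller's thread
            error["value"] = exc
        finally:
            loop.close()

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    if "value" in error:
        raise error["value"]
    return result["value"]
```

Because each call gets its own loop and (in the real code) its own connection, it never touches a pool owned by another loop.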
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Database is the ONLY source of truth for cache_id -> ipfs_cid
- Removed Redis caching layer entirely
- Failures will raise exceptions instead of warning and continuing
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Database (cache_items table) is now source of truth
- Redis used as fast cache on top
- Mapping persists across restarts
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Files in /data/cache/nodes/ are now stored by IPFS CID only
- cache_id parameter creates index from cache_id -> IPFS CID
- Removed deprecated node_id parameter behavior
- get_by_cid(cache_id) still works via index lookup
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
IPFS CIDs are the primary identifiers. If IPFS upload fails,
the operation must fail rather than silently using local hashes.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add plan_cid column to pending_runs table schema
- Add update_pending_run_plan() function to save plan_cid
- Update get_pending_run() to return plan_cid
- Save plan_cid right after storing plan to IPFS (before execution)
- Plan is now available even if run fails
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Upload final output to IPFS after execution completes
- Return success=False if IPFS upload fails
- Previously the run would succeed with output_ipfs_cid=None
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All callers were passing str(path) but the function expected Path objects,
causing 'str' object has no attribute 'parent' errors when fetching from IPFS.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove effect:identity shortcut executor so effects load from IPFS by CID
- COMPOUND nodes now fall back to generic EFFECT executor for dynamic effects
- EFFECT nodes also fall back to generic executor when specific not found
- Update test assertions to match current implementation
- Raise error instead of silently skipping when effect executor not found
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- EFFECT nodes now handled explicitly like SOURCE, COMPOUND, SEQUENCE
- Case-insensitive node type matching throughout
- Fallback executor lookup tries both upper and original case
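The fallback lookup can be sketched as a small helper (name and registry shape are assumptions):

```python
def get_executor(executors: dict, node_type: str):
    """Case-insensitive executor lookup: try the upper-cased type first
    (the registered convention), then the original spelling as a fallback."""
    return executors.get(node_type.upper()) or executors.get(node_type)
```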
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Uses the FFmpeg concat demuxer. Falls back to re-encoding if
stream copy fails (different codecs/formats).
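A sketch of the two-pass approach (the libx264/aac codec choices are assumptions, not necessarily what the task uses):

```python
import subprocess
from pathlib import Path

def concat_list_text(parts) -> str:
    """Build the concat demuxer's input list: one `file '...'` line per clip."""
    return "".join(f"file '{p}'\n" for p in parts)

def concat_videos(parts: list, output: Path) -> None:
    """Join clips with the FFmpeg concat demuxer: try lossless stream copy
    first, then fall back to re-encoding when codecs/formats differ."""
    list_file = output.with_suffix(".txt")
    list_file.write_text(concat_list_text(parts))
    base = ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", str(list_file)]
    # Fast path: copy streams without re-encoding.
    result = subprocess.run(base + ["-c", "copy", str(output)],
                           capture_output=True)
    if result.returncode != 0:
        # Slow path: re-encode so mismatched inputs can still be joined.
        subprocess.run(base + ["-c:v", "libx264", "-c:a", "aac", str(output)],
                       check=True)
```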
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Celery task "succeeds" (no exception) but may return {"success": False}.
Now we check the task result's success field AND output_cid before
marking run as completed.
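The check reduces to something like this (a sketch; field names follow the task's result payload as described above):

```python
def resolve_run_status(result: dict) -> str:
    """A Celery task can return without raising yet still signal failure in
    its payload. Treat the run as completed only when the result reports
    success AND carries an output CID; otherwise mark it failed."""
    if result.get("success") and result.get("output_cid"):
        return "completed"
    return "failed"
```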
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Failed runs were not showing in UI/CLI because list_runs only
included runs with status "pending" or "running", excluding "failed".
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The dog effect was updated to use process() but DogExecutor
was still importing the old effect_dog() function.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Bug: S-expression plans produce lowercase node types (source, compound)
but code was checking uppercase (SOURCE, COMPOUND).
Fix: Use .upper() for node type comparisons.
Add TestNodeTypeCaseSensitivity tests to catch this regression.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- execute_recipe now returns success=False if output_cid is None
- Add TestRecipeOutputRequired tests to catch missing output
- Recipe must produce valid output to be considered successful
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests cover:
- SOURCE node resolution (fixed CID vs user input)
- COMPOUND node filter chain handling
- Cache lookup by code-addressed cache_id vs IPFS CID
- All plan step types (SOURCE, EFFECT, COMPOUND, SEQUENCE)
- Error handling for missing inputs
These tests would have caught the bugs:
- "No executor for node type: SOURCE"
- "No executor for node type: COMPOUND"
- Cache lookup failures by code-addressed hash
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add COMPOUND node handling in execute_recipe for collapsed effect chains
- Index cache entries by node_id (cache_id) when different from IPFS CID
- Fix test_cache_manager.py to unpack put() tuple returns
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- SOURCE nodes with :input true now resolve CID from input_hashes
- Tries multiple name formats: exact, lowercase-dashes, lowercase-underscores
- Only return "completed" status for runs with actual output
- Add integration tests for SOURCE CID resolution
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove execute_level() from tasks/execute.py (defined but never called)
- Remove render_dog_from_cat() from legacy_tasks.py (test convenience, never used)
- Remove duplicate file_hash() from legacy_tasks.py, import from cache_manager
- Remove unused hashlib import
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Major changes:
- Add execute_recipe task that uses S-expression planner
- Recipe S-expression unfolds into plan S-expression with code-addressed cache IDs
- Cache IDs computed from Merkle tree of plan structure (before execution)
- Add ipfs_client.add_string() for storing S-expression plans
- Update run_service.create_run() to use execute_recipe when recipe_sexp available
- Add _sexp_to_steps() to parse S-expression plans for UI visualization
- Plan endpoint now returns both sexp content and parsed steps
The code-addressed hashing means each plan step's cache_id is:
sha3_256({node_type, config, sorted(input_cache_ids)})
This creates deterministic "buckets" for computation results, with IDs
derived entirely from the plan structure, enabling automatic cache reuse.
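A minimal sketch of the hashing scheme (canonical-JSON serialization is an assumption; the real encoding may differ):

```python
import hashlib
import json

def cache_id(node_type: str, config: dict, input_cache_ids: list) -> str:
    """Code-addressed cache ID: a Merkle-style SHA3-256 over the node's
    type, its config, and the sorted cache IDs of its inputs."""
    payload = json.dumps(
        {"node_type": node_type,
         "config": config,
         "inputs": sorted(input_cache_ids)},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha3_256(payload.encode("utf-8")).hexdigest()
```

Because input IDs are sorted and the config is serialized canonically, the same plan structure always yields the same IDs, so results can be looked up before any execution happens.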
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The CLI expects {"steps": [...]} but DAG format stores {"nodes": {...}}.
Added _dag_to_steps() to convert between formats, including topological
sorting so sources appear first.
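The conversion can be sketched with the standard library's topological sorter (field names follow the formats described above):

```python
from graphlib import TopologicalSorter

def dag_to_steps(nodes: dict) -> list:
    """Convert DAG-format {"nodes": {id: {"inputs": [...], ...}}} into the
    CLI's ordered {"steps": [...]} list. The graph maps each node to its
    predecessors, so static_order() yields sources first."""
    graph = {nid: node.get("inputs", []) for nid, node in nodes.items()}
    order = TopologicalSorter(graph).static_order()
    return [{"id": nid, **nodes[nid]} for nid in order if nid in nodes]
```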
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Store plan directly in CACHE_DIR/{cid} instead of CACHE_DIR/legacy/{cid},
which matches what cache_manager.get_by_cid() checks at lines 396-398.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Compiler now generates SHA3-256 hashes for node IDs
- Each hash includes type, config, and input hashes (Merkle tree)
- Same plan = same hashes = automatic cache reuse
Cache changes:
- Remove index.json - filesystem IS the index
- Files at {cache_dir}/{hash}/output.* are source of truth
- Per-node metadata.json for optional stats (not an index)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When content isn't found in local cache, fetch directly from IPFS
using the CID. IPFS is the source of truth for all content-addressed data.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The artdag Cache object doesn't persist state across process restarts,
so cache.get(node_id) returns None even when files exist on disk.
Now we check the filesystem directly at {cache_dir}/nodes/{node_id}/output.*
when the in-memory cache lookup fails but we have a valid node_id from
the Redis index.
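The filesystem fallback amounts to a direct glob under the node's cache directory (a sketch; function name is illustrative):

```python
from pathlib import Path

def find_cached_output(cache_dir: Path, node_id: str):
    """Fall back to disk when the in-memory cache misses: look for
    {cache_dir}/nodes/{node_id}/output.* directly, since files survive
    process restarts even though the Cache object's state does not."""
    matches = sorted((cache_dir / "nodes" / node_id).glob("output.*"))
    return matches[0] if matches else None
```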
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When the Celery worker can't find source content in the local cache,
fetch it from IPFS using the CID. This ensures workers can execute
DAGs even when they don't share the same filesystem as the web server.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Debug why recipes are not found in cache after upload.
Logs now show each step of put() and get_by_cid().
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Media: Only count video/image/audio/unknown types, not effects/recipes
- Effects: Use database count_user_items instead of filesystem scan
- Recipes: Use database count_user_items instead of loading all recipes
This ensures stats reflect user ownership via the item_types table,
and prevents effects from being double-counted as media.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Track when items are being saved to database to diagnose
why recipes show 0 in stats but effects show correctly.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Call get_cache_manager() to get the cache manager instance
before using it in effects and media deletion.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- cache_service.delete_content: Remove user's ownership link first,
only delete actual file if no other owners remain
- cache_manager.discard_activity_outputs_only: Check if outputs and
intermediates are used by other activities before deleting
- run_service.discard_run: Now cleans up run outputs/intermediates
(only if not shared by other runs)
- home.py clear_user_data: Use ownership model for effects and media
deletion instead of directly deleting files
The ownership model ensures:
1. Multiple users can "own" the same cached content
2. Deleting removes the user's ownership link (item_types entry)
3. Actual files only deleted when no owners remain (garbage collection)
4. Shared intermediates between runs are preserved
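The delete path can be sketched as follows (the `db`/`cache` handles and their method names are hypothetical stand-ins for the real services):

```python
def delete_content(db, cache, user_id: str, cid: str) -> bool:
    """Remove the user's ownership link first; delete the underlying file
    only when no other owners remain (garbage collection).
    Returns True if the file itself was deleted."""
    db.remove_item_type(user_id=user_id, cid=cid)  # drop this user's link
    if db.count_owners(cid=cid) == 0:              # orphaned content?
        cache.delete(cid)                          # GC the actual file
        return True
    return False                                   # still shared; keep file
```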
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Upload: Create item_types entry to track user-effect relationship
- List: Query item_types for user's effects instead of scanning filesystem
- Delete: Remove ownership link, only delete files if orphaned (garbage collect)
This matches the ownership model used by recipes and media, where multiple
users can "own" the same cached content through item_types entries.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Upload now creates item_types entry linking user to recipe
- List queries item_types for user's recipes (not all cached)
- Delete removes item_types entry (not the file)
- File only deleted when no users own it (garbage collection)
This allows multiple users to "own" the same recipe CID.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Same reasoning as the list fix: the owner field from recipe content
could be spoofed. For L1, any authenticated user can delete recipes.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The dual-indexing code was calling file_hash(source_path) after
cache.put(move=True) had already moved the file, causing
"No such file or directory" errors on upload.
Now computes local_hash before the move operation.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The delete_recipe() returns (success, error) tuple but
clear-data wasn't checking the result, so failed deletes
weren't being reported.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add plan_cid column to run_cache schema
- Store DAG JSON to IPFS during execute_dag task
- Return plan_cid in run status and list APIs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>