206 Commits

Author SHA1 Message Date
03f0929fdf Fix SX nav morphing, retry error modal, and aria-selected CSS extraction
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 12m18s
- Re-read verb URL from element attributes at execution time so morphed
  nav links navigate to the correct destination
- Reset retry backoff on fresh requests; skip error modal when sx-retry
  handles the failure
- Strip attribute selectors in CSS registry so aria-selected:* classes
  resolve correctly for on-demand CSS
- Add @css annotations for dynamic aria-selected variant classes
- Add SX docs integration test suite (102 tests)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 20:37:17 +00:00
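The attribute-selector stripping in the commit above can be sketched as follows. This is a hypothetical reconstruction, not the project's actual code: `registry_key` and `_ATTR_SELECTOR` are illustrative names, and the assumed input is a Tailwind-style escaped class selector such as `.aria-selected\:bg-sky-100[aria-selected="true"]`.

```python
import re

# Hypothetical sketch of the CSS registry fix: key rules by the bare
# class name, stripping any [attr=...] suffix so dynamic
# "aria-selected:*" variant classes resolve on lookup.
_ATTR_SELECTOR = re.compile(r"\[[^\]]*\]")

def registry_key(selector: str) -> str:
    """Reduce a CSS class selector to the class name it styles."""
    selector = _ATTR_SELECTOR.sub("", selector)  # drop [aria-selected="true"]
    selector = selector.lstrip(".")              # drop the leading class dot
    return selector.replace("\\", "")            # unescape "\:" back to ":"
```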
f551fc7453 Convert last Python fragment handlers to SX defhandlers: 100% declarative fragment API
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 34m5s
- Add dict recursion to _convert_result for service methods returning dict[K, list[DTO]]
- New container-cards.sx: parses post_ids/slugs, calls confirmed-entries-for-posts, emits card-widget markers
- New account-page.sx: dispatches on slug for tickets/bookings panels with status pills and empty states
- Fix blog _parse_card_fragments to handle SxExpr via str() cast
- Remove events Python fragment handlers and simplify app.py to plain auto_mount

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:42:19 +00:00
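The dict recursion added to `_convert_result` might look roughly like the sketch below, assuming DTOs are dataclasses; `convert_result` and `EntryDTO` are illustrative stand-ins, not the project's real types.

```python
from dataclasses import dataclass, asdict, is_dataclass

# Illustrative sketch: recurse into dict values so service methods
# returning dict[key, list[DTO]] serialize cleanly.
def convert_result(value):
    if is_dataclass(value) and not isinstance(value, type):
        return asdict(value)                     # DTO instance -> dict
    if isinstance(value, dict):
        return {k: convert_result(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [convert_result(v) for v in value]
    return value

@dataclass
class EntryDTO:          # hypothetical DTO for demonstration
    slug: str
    confirmed: bool
```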
e30cb0a992 Auto-mount fragment handlers: eliminate fragment blueprint boilerplate across all 8 services
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 16m38s
Fragment read API is now fully declarative — every handler is a defhandler
s-expression dispatched through one shared auto_mount_fragment_handlers()
function. Replaces 8 near-identical blueprint files (~35 lines each) with
a single function call per service. Events Python handlers (container-cards,
account-page) extracted to a standalone module.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:13:15 +00:00
293f7713d6 Auto-mount defpages: eliminate Python route stubs across all 9 services
Some checks failed
Build and Deploy / build-and-deploy (push) Failing after 16s
Defpages are now declared with absolute paths in .sx files and auto-mounted
directly on the Quart app, removing ~850 lines of blueprint mount_pages calls,
before_request hooks, and g.* wrapper boilerplate. A new page = one defpage
declaration, nothing else.

Infrastructure:
- async_eval awaits coroutine results from callable dispatch
- auto_mount_pages() mounts all registered defpages on the app
- g._defpage_ctx pattern passes helper data to layout context

Migrated: sx, account, orders, federation, cart, market, events, blog

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:03:15 +00:00
4ba63bda17 Add server-driven architecture principle and React feature analysis
Documents why sx stays server-driven by default, maps React features
to sx equivalents, and defines targeted escape hatches for the few
interactions that genuinely need client-side state.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 17:48:35 +00:00
0a81a2af01 Convert social and federation profile from Jinja to SX rendering
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 14m34s
Add primitives (replace, strip-tags, slice, csrf-token), convert all
social blueprint routes and federation profile to SX content builders,
delete 12 unused Jinja templates and social_lite layout.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 17:43:47 +00:00
0c9dbd6657 Add attribute detail pages with live demos for SX reference
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m45s
Per-attribute documentation pages at /reference/attributes/<slug> with:
- Live interactive demos (demo components in reference.sx)
- S-expression source code display
- Server handler code shown as s-expressions (defhandlers in handlers/reference.sx)
- Wire response display via OOB swaps on demo interaction
- Linked attribute names in the reference table

Covers all 20 implemented attributes (sx-get/post/put/delete/patch,
sx-trigger/target/swap/swap-oob/select/confirm/push-url/sync/encoding/
headers/include/vals/media/disable/on:*, sx-retry, data-sx, data-sx-env).

Also adds sx-on:* to BEHAVIOR_ATTRS, updates REFERENCE_NAV to link
/reference/attributes, and makes /reference/ an index page.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 17:12:57 +00:00
a4377668be Add isomorphic SX architecture migration plan
Documents the 5-phase plan for making the sx s-expression layer a
universal view language that renders on either client or server, with
pages as cached components and data-only navigation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 16:52:12 +00:00
a98354c0f0 Fix duplicate headers on HTMX nav, editor content loading, and double mount
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m14s
- Nest admin header inside post-header-child (layouts.py/helpers.py) so
  full-page DOM matches OOB swap structure, eliminating duplicate headers
- Clear post-header-child on post layout OOB to remove stale admin rows
- Read SX initial content from #sx-content-input instead of
  window.__SX_INITIAL__ to avoid escaping issues through SX pipeline
- Fix client-side SX parser RE_STRING to handle escaped newlines
- Clear root element in SxEditor.mount() to prevent double content on
  HTMX re-mount
- Remove unused ~blog-editor-sx-initial component

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 16:27:47 +00:00
df8b19ccb8 Convert post edit form from raw HTML to SX expressions
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m29s
Replace _post_edit_content_sx raw HTML builder with sx_call() pattern
matching render_editor_panel. Add ~blog-editor-edit-form,
~blog-editor-publish-js, ~blog-editor-sx-initial components to
editor.sx. Fixes (~sx-editor-styles) rendering as literal text on
the edit page.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 15:53:50 +00:00
544892edd9 Delete 391 dead Jinja templates replaced by sx_components/defpage
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m13s
All app-level templates have been replaced by native sx component builders
and defpage declarative routes. Removes ~15,200 lines of dead HTML.

Kept: shared/browser templates (errors, ap_social, macros, root layout),
account + federation _email/magic_link, federation profile.html chain.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 15:10:56 +00:00
c243d17eeb Migrate all apps to defpage declarative page routes
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m41s
Replace Python GET page handlers with declarative defpage definitions in .sx
files across all 8 apps (sx docs, orders, account, market, cart, federation,
events, blog). Each app now has sxc/pages/ with setup functions, layout
registrations, page helpers, and .sx defpage declarations.

Core infrastructure: add g I/O primitive, PageDef support for auth/layout/
data/content/filter/aside/menu slots, post_author auth level, and custom
layout registration. Remove ~1400 lines of render_*_page/render_*_oob
boilerplate. Update all endpoint references in routes, sx_components, and
templates to defpage_* naming.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:52:34 +00:00
5b4cacaf19 Fix NIL leaking into Python service calls, add mobile navigation menu
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m10s
Strip NIL values at I/O primitive boundaries (frag, query, action, service)
to prevent _Nil objects from reaching Python code that expects None. Add
mobile_nav_sx() helper that auto-populates the hamburger menu from nav_tree
and auth_menu context fragments when no menu slot is provided.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 10:45:52 +00:00
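The NIL stripping at the I/O boundary could be sketched as follows, under the assumption that the evaluator uses a singleton sentinel object; `_Nil`, `NIL`, and `strip_nil` are illustrative names, not the engine's real identifiers.

```python
# Hypothetical sketch: convert the evaluator's NIL sentinel to None (or
# drop it from kwargs) before any Python service code runs.
class _Nil:
    def __repr__(self):
        return "NIL"

NIL = _Nil()

def strip_nil(value):
    if isinstance(value, _Nil):
        return None
    if isinstance(value, dict):
        # drop NIL-valued kwargs entirely so service defaults apply
        return {k: strip_nil(v) for k, v in value.items()
                if not isinstance(v, _Nil)}
    if isinstance(value, list):
        return [strip_nil(v) for v in value]
    return value
```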
a8c0741f54 Add SX editor to post edit page, prevent sx_content clearing on save
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m5s
- Add sx_content to _post_to_edit_dict so edit page receives existing content
- Add SX/Koenig editor tabs, sx-editor mount point, and SxEditor.mount init
- Only pass sx_content to writer_update when form field is present (prevents
  accidental clearing when editing via Koenig-only path)
- Add csrf_exempt to example API POST/DELETE/PUT demo endpoints
- Add defpage infrastructure (pages.py, layouts.py) and sx docs page definitions
- Add defhandler definitions for example API handlers (examples.sx)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 10:23:33 +00:00
0af07f9f2e Replace 5 blog post admin render_template() calls with native sx builders
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m49s
Converts data inspector, entries browser, calendar view, settings form,
and WYSIWYG editor panels from Jinja templates to Python content builders.
Zero render_template() calls remain across blog, events, and orders services.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 09:15:43 +00:00
222738546a Fix value-select: include SELECT element value in GET requests
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m45s
sx.js appended only INPUT values to GET request URLs; SELECT and
TEXTAREA elements with a name attribute were silently ignored,
so the category parameter was never sent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 08:56:48 +00:00
4098c32878 Fix value-select: return raw option elements instead of component
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
The ~value-options component wrapped options in a fragment that didn't
render correctly inside a <select> innerHTML swap. Return plain
(option) elements directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 08:54:28 +00:00
3bd4f4b661 Replace 21 Jinja render_template() calls with sx render functions
Phase 1: Wire 16 events routes to existing sx render functions
- slot, slots, ticket_types, ticket_type, calendar_entries,
  calendar_entry, calendar_entry/admin

Phase 2: Orders checkout return (2 calls)
- New orders/sx/checkout.sx with return page components
- New render_checkout_return_page() in orders/sx/sx_components.py

Phase 3: Blog menu items (3 calls)
- New blog/sx/menu_items.sx with search result components
- New render_menu_item_form() and render_page_search_results()
  in blog/sx/sx_components.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 08:52:32 +00:00
5dd1161816 Move example CSS to basics.css, pretty-print wire response, update sx logo
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m9s
- Move .sx-fade-in and .sx-loading-btn CSS from inline (style) tags to
  basics.css so they go through the on-demand CSS registry
- Pretty-print sx source in wire response display (not all on one line)
- Change sx logo from </> icon to (</>) text

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 08:45:54 +00:00
002cc49f2c Add 21 new interactive examples to sx docs site (27 total)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m26s
Loading: lazy loading, infinite scroll, progress bar
Forms: active search, inline validation, value select, reset on submit
Records: edit row, bulk update
Swap/DOM: swap positions, select filter, tabs
Display: animations, dialogs, keyboard shortcuts
HTTP: PUT/PATCH, JSON encoding, vals & headers
Resilience: loading states, request abort (sync replace), retry

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 02:26:10 +00:00
e6b0849ce3 Add Jinja-to-sx migration plan
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m19s
Documents remaining 24 render_template() calls across events, blog,
and orders services with phased conversion strategy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 02:16:47 +00:00
8024fa5b13 Live wire response + component display with OOB swaps on all examples
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m56s
- All 6 examples show Component and Wire response as placeholders that
  fill with actual content when the demo is triggered (via OOB swaps)
- Wire response shows full wire content including component definitions
  (when not cached) and CSS style block
- Component display only includes defs the client doesn't already have,
  matching real sx_response() behaviour
- Add "Clear component cache" button to reset localStorage + in-memory
  component env so next interaction shows component download
- Rebuild tw.css with Tailwind v3.4.19 including sx content paths
- Optimize sx_response() CSS scanning to only scan sent comp_defs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 01:54:45 +00:00
ea18a402d6 Remove Prism language-* classes from code block components
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m21s
highlight.py handles syntax coloring with Tailwind classes —
Prism classes were conflicting and are not needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 01:14:11 +00:00
e4e43177a8 Fix code blocks + add violet bg classes to tw.css
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
- Pass :code keyword to ~doc-code and ~example-source components
  (highlighted content was positional but components use &key code)
- Rebuild tw.css (v3.4.19) with sx/sxc and sx/content in content paths
  so highlight.py classes (text-violet-600, text-rose-600, etc.) are included
- Add bg-violet-{100-500} classes for the sx app's violet menu bar
- Add highlight.py custom syntax highlighter (sx, python, bash)

IMPORTANT: tw.css must contain bg-violet-{100-500} rules for the sx
app's menu bar. Do not rebuild tw.css without ensuring violet classes
are included (via safelist or content paths).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 01:13:01 +00:00
8445c36270 Remove last Jinja fragment templates, use sx_components directly
Events fragment routes now call render_fragment_container_cards(),
render_fragment_account_tickets(), and render_fragment_account_bookings()
from sx_components instead of render_template(). Account sx_components
handles both SxExpr (text/sx) and HTML (text/html) fragment responses.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 01:07:02 +00:00
5578923242 Fix defhandler to produce sx wire format instead of HTML
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m28s
execute_handler was using async_render() which renders all the way to
HTML. Fragment providers need to return sx source (s-expression strings)
that consuming apps parse and render client-side.

Added async_eval_to_sx() — a new execution mode that evaluates I/O
primitives and control flow but serializes component/tag calls as sx
source instead of rendering them to HTML. This mirrors how the old
Python handlers used sx_call() to build sx strings.

Also fixed: _ASER_FORMS was checked after HTML_TAGS, so "map" (which
is both an HTML tag and an sx special form) was serialized as a tag
instead of evaluated. Moved the _ASER_FORMS check before HTML_TAGS.

Also fixed: empty? primitive now handles non-len()-able types gracefully.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 01:00:00 +00:00
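The `empty?` hardening mentioned above amounts to tolerating values without `len()`. A minimal sketch, assuming the primitive treats None as empty and any non-sized value as non-empty (`is_empty` is an illustrative name for the primitive's implementation):

```python
# Sketch of a hardened "empty?" primitive: objects without len()
# (ints, booleans, None) answer instead of raising TypeError.
def is_empty(value) -> bool:
    if value is None:
        return True
    try:
        return len(value) == 0
    except TypeError:
        return False     # no length -> treat as non-empty
```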
9754b892d6 Fix double-escaping when render forms (<>, HTML tags) appear in eval position
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m30s
Return _RawHTML wrapper so pre-rendered HTML in let bindings isn't
escaped when used in render context.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 00:31:31 +00:00
ab75e505a8 Add macros, declarative handlers (defhandler), and convert all fragment routes to sx
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
Phase 1 — Macros: defmacro + quasiquote syntax (`, ,, ,@) in parser,
evaluator, HTML renderer, and JS mirror. Macro type, expansion, and
round-trip serialization.

Phase 2 — Expanded primitives: app-url, url-for, asset-url, config,
format-date, parse-int (pure); service, request-arg, request-path,
nav-tree, get-children (I/O); jinja-global, relations-from (pure).
Updated _io_service to accept (service "registry-name" "method" :kwargs)
with auto kebab→snake conversion. DTO-to-dict now expands datetime fields
into year/month/day convenience keys. Tuple returns converted to lists.

Phase 3 — Declarative handlers: HandlerDef type, defhandler special form,
handler registry (service → name → HandlerDef), async evaluator+renderer
(async_eval.py) that awaits I/O primitives inline within control flow.
Handler loading from .sx files, execute_handler, blueprint factory.

Phase 4 — Convert all fragment routes: 13 Python fragment handlers across
8 services replaced with declarative .sx handler files. All routes.py
simplified to uniform sx dispatch pattern. Two Jinja HTML handlers
(events/container-cards, events/account-page) kept as Python.

New files: shared/sx/async_eval.py, shared/sx/handlers.py,
shared/sx/tests/test_handlers.py, plus 13 handler .sx files under
{service}/sx/handlers/. MarketService.product_by_slug() added.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 00:22:18 +00:00
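The `(service "registry-name" "method" :kwargs)` dispatch with kebab→snake conversion can be sketched as below. The registry contents and `io_service` signature are assumptions for illustration; only the kebab→snake rule is taken from the commit text.

```python
# Hypothetical sketch of the service I/O primitive's dispatch.
def kebab_to_snake(name: str) -> str:
    return name.replace("-", "_")

class MarketService:                 # illustrative stand-in service
    def product_by_slug(self, slug):
        return {"slug": slug}

REGISTRY = {"market": MarketService()}

def io_service(registry_name, method, **kwargs):
    service = REGISTRY[kebab_to_snake(registry_name)]
    return getattr(service, kebab_to_snake(method))(**kwargs)
```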
13bcf755f6 Add OOB header swaps for sx docs navigation + enable OAuth + fragments
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m10s
- OOB nav updates: AJAX navigation now swaps both menu bar levels
  (main nav highlighting + sub-nav with current page) using the same
  oob_header_sx/oob_page_sx pattern as blog/market/events
- Enable OAuth for sx and test apps (removed from _NO_OAUTH, added sx
  to ALLOWED_CLIENTS, added app_urls for sx/test/orders)
- Fetch real cross-service fragments (cart-mini, auth-menu, nav-tree)
  instead of hardcoding empty values
- Add :selected param to ~menu-row-sx for white text current-page label
- Fix duplicate element IDs: use menu-row-sx child_id/child mechanism
  instead of manual header_child_sx wrappers
- Fix home page copy: "Server-rendered DOM over the wire (no HTML)"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 23:22:01 +00:00
3d55145e5f Fix illegible code blocks: use light background to match Prism theme
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m26s
The existing prism.css sets color:black on code elements. Dark
bg-stone-900 backgrounds made text invisible. Switched to bg-stone-50
with a border to work with the light Prism theme.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 21:38:20 +00:00
8b2785ccb0 Skip OAuth registration for sx docs app
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m0s
sx is a public documentation site like test — no auth needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 21:31:32 +00:00
03196c3ad0 Add sx documentation app (sx.rose-ash.com)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m42s
New public-facing service documenting the s-expression rendering engine.
Modelled on four.htmx.org with violet theme, all content rendered via sx.

Sections: docs, reference, protocols, examples (live demos), essays
(including "sx sucks"). No database — purely static documentation.

Port 8012, Redis DB 10. CI and deploy.sh updated with app_dir() mapping
for sx_docs -> sx/ directory. Caddy reverse proxy entry added separately.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 21:25:52 +00:00
815c5285d5 Add Prism.js syntax highlighting CSS for code blocks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 21:00:58 +00:00
ed30f88f05 Fix missing SxExpr wraps in events + pretty-print sx in dev mode + multi-expr render
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m39s
- Wrap 15 call sites in events/sx_components.py where sx-generating
  functions were passed as plain strings to sx_call(), causing raw
  s-expression source to leak into the rendered page.

- Add dev-mode pretty-printing (RELOAD=true) for sx responses and
  full page sx source — indented output in Network tab and View Source.

- Fix Sx.render to handle multiple top-level expressions by falling
  back to parseAll and returning a DocumentFragment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 20:29:22 +00:00
8aedbc9e62 Add version logging, Markdown card menu item, and oembed card types
- sx-editor prints version on init: [sx-editor] v2026-03-02b-exorcism
- Add Markdown to card insert menu with /markdown and /md slash commands
- Add YouTube, X/Twitter, Vimeo, Spotify, CodePen as dedicated embed
  menu items with brand icons (all create ~kg-embed cards)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 20:18:41 +00:00
8ceb9aee62 Eliminate raw HTML injection: convert ~kg-html/captions to native sx
Add shared/sx/html_to_sx.py (HTMLParser-based HTML→sx converter) and
update lexical_to_sx.py so HTML cards, markdown cards, and captions all
produce native sx expressions instead of opaque HTML strings.

- ~kg-html now wraps native sx children (editor can identify the block)
- New ~kg-md component for markdown card blocks
- Captions are sx expressions, not escaped HTML strings
- kg_cards.sx: replace (raw! caption) with direct caption rendering
- sx-editor.js: htmlToSx() via DOMParser, serializeInline for captions,
  _childrenSx for ~kg-html/~kg-md, new kg-md edit UI
- Migration script (blog/scripts/migrate_sx_html.py) to re-convert
  stored sx_content from lexical source

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 19:57:27 +00:00
4668c30890 Fix parser bug: string values like ")" were confused with delimiter tokens
Some checks failed
Build and Deploy / build-and-deploy (push) Failing after 7s
Both Python and JS parsers used next_token() which returns plain strings
for both delimiter characters and string values, making them
indistinguishable. A string whose value is ")" or "(" would be
misinterpreted as a structural delimiter, causing parse errors.

Fix: use peek() (raw character) for all structural decisions in
parseExpr before consuming via next_token(). Also add enhanced error
logging to sx.js mount/loadComponents for easier future debugging.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 01:18:09 +00:00
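The bug class above can be reproduced in miniature: if the reader decides structure from tokenized values, a string literal `")"` is indistinguishable from the closing delimiter. Peeking at the raw character before consuming avoids it. This toy parser (one parenthesised list of string literals, no escape handling) is illustrative only, not the project's parser:

```python
# Minimal sketch: structural decisions use the raw peeked character, so
# a string whose *value* is ")" is never mistaken for a close-paren.
def parse_list(src: str):
    """Parse one list of string literals, e.g. '("a" ")" "b")'."""
    pos = 1                             # skip the opening "("
    items = []
    while True:
        while src[pos] == " ":
            pos += 1
        ch = src[pos]                   # peek raw character
        if ch == ")":                   # structural close delimiter
            return items
        assert ch == '"', f"unexpected {ch!r}"
        end = src.index('"', pos + 1)
        items.append(src[pos + 1:end])  # string value, may itself be ")"
        pos = end + 1
```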
39f61eddd6 Fix component caching: move data-components check before empty-text guard
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m27s
When server omits component source (cache hit), the script tag has
empty textContent. The early `if (!text.trim()) continue` was
skipping the data-components handler entirely, so components never
loaded from localStorage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 01:02:04 +00:00
5436dfe76c Cache sx component definitions in localStorage across page loads
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m30s
Server computes SHA-256 hash of all component source at startup.
Client signals its cached hash via cookie (sx-comp-hash). On full
page load: cookie match → server sends empty script tag with just
the hash; mismatch → sends full source. Client loads from
localStorage on hit, parses inline + caches on miss.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 00:57:53 +00:00
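The server side of the caching handshake above can be sketched as follows. `components_payload` and the inline component source are illustrative assumptions; only the SHA-256-at-startup and hit/miss behaviour come from the commit text.

```python
import hashlib

# Sketch of the handshake: hash all component source once at startup; a
# request whose sx-comp-hash cookie matches gets an empty payload
# (client loads from localStorage), otherwise the full source + hash.
COMPONENT_SOURCE = "(defcomp ~card (title) (div title))"  # illustrative
COMPONENT_HASH = hashlib.sha256(COMPONENT_SOURCE.encode()).hexdigest()

def components_payload(client_hash):
    if client_hash == COMPONENT_HASH:
        return {"hash": COMPONENT_HASH, "source": ""}   # cache hit
    return {"hash": COMPONENT_HASH, "source": COMPONENT_SOURCE}
```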
4ede0368dc Add admin preview views + fix markdown converter
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m31s
- Fix _markdown() in lexical_to_sx.py: render markdown to HTML with
  mistune.html() before storing in ~kg-html
- Add shared/sx/prettify.py: sx_to_pretty_sx and json_to_pretty_sx
  produce sx AST for syntax-highlighted DOM (uses canonical serialize())
- Add preview tab to admin header nav
- Add GET /preview/ route with 4 views: prettified sx, prettified
  lexical JSON, sx rendered HTML, lexical rendered HTML
- Add ~blog-preview-panel and ~blog-preview-section components
- Add syntax highlight CSS for sx/JSON tokens

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 00:50:57 +00:00
a8e06e87fb Fix extended-text/heading/quote nodes: treat as inline text when inside links
Ghost's extended-text node can appear both as a block (with children) and
inline (with text field). When used as a child of a link node, it has a
text field and should produce a text literal, not a (p ...) wrapper.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:47:54 +00:00
588d240ddc Fix backfill script imports to match actual module paths
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:29:26 +00:00
aa5c251a45 Auto-bust sx.js and body.js via MD5 hash instead of manual version string
Computes file content hash at process startup, cached for lifetime.
Removes manual cache-busting instruction from CLAUDE.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:26:20 +00:00
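Hash-based cache busting as described above can be sketched like this, with `lru_cache` playing the role of "computed at process startup, cached for lifetime". Helper names and the URL shape are illustrative assumptions.

```python
import hashlib
import tempfile
from functools import lru_cache
from pathlib import Path

# Sketch: the asset URL carries an MD5 digest of file content, computed
# once per process, so edits automatically bust browser caches.
@lru_cache(maxsize=None)
def asset_version(path: str) -> str:
    return hashlib.md5(Path(path).read_bytes()).hexdigest()[:8]

def asset_url(path: str) -> str:
    return f"/static/{Path(path).name}?v={asset_version(path)}"

# demo with a throwaway file standing in for sx.js
demo = Path(tempfile.mkdtemp()) / "sx.js"
demo.write_text("console.log('sx');")
```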
7ccb463a8b Wire sx_content through full read/write pipeline
Model: add sx_content column to Post. Writer: accept sx_content in
create_post, create_page, update_post. Routes: read sx_content from form
data in new post, new page, and edit routes. Read pipeline: ghost_db
includes sx_content in public dict, detail/home views prefer sx_content
over html when available, PostDTO includes sx_content.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:22:30 +00:00
341fc4cf28 Add SX block editor with Koenig-quality controls and lexical-to-sx converter
Pure s-expression block editor replacing React/Koenig: single hover + button,
slash commands, full card edit modes (image/gallery/video/audio/file/embed/
bookmark/callout/toggle/button/HTML/code), inline format toolbar, keyboard
shortcuts, drag-drop uploads, oEmbed/bookmark metadata fetching.

Includes lexical_to_sx converter for backfilling existing posts, KG card
components matching Ghost's card CSS, migration for sx_content column, and
31 converter tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:17:49 +00:00
1a5969202e Fix back-button DOM restoration: process OOB swaps on popstate, disable editor font overrides
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m43s
- Process sx-swap-oob and hx-swap-oob elements in the popstate handler
  so sidebar, filter, menu, and headers are restored on back navigation
- Disable the 62.5% base font-size hack that leaked globally and caused
  all fonts to shrink when navigating to/from the editor
- Cache-bust sx.js to v=20260301d

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:14:32 +00:00
3bc5de126d Add cache busting instruction for sx.js to CLAUDE.md
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m38s
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 23:03:51 +00:00
1447122a0c Add on-demand CSS: registry, pre-computed component classes, header compression
Some checks failed
Build and Deploy / build-and-deploy (push) Failing after 42s
- Parse tw.css into per-class lookup registry at startup
- Pre-scan component CSS classes at registration time (avoid per-request regex)
- Compress SX-Css header: 8-char hash replaces full class list (LRU cache)
- Add ;@css comment annotation for dynamically constructed class names
- Safelist bg-sky-{100..400} in Tailwind config for menu-row-sx dynamic shades
- Client sends/receives hash, falls back gracefully on cache miss

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 21:39:57 +00:00
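The SX-Css header compression above (8-char hash replacing the full class list, with a cache for later expansion) could look roughly like this sketch; the plain-dict cache, SHA-1 choice, and function names are illustrative assumptions, and a real version would bound the cache LRU-style.

```python
import hashlib

# Sketch: send a short digest instead of the full class list; keep
# digest -> classes server-side so a later request can expand it, and
# fall back gracefully (None) on a cache miss.
_HASH_TO_CLASSES = {}

def compress_css_header(classes: tuple) -> str:
    digest = hashlib.sha1(" ".join(sorted(classes)).encode()).hexdigest()[:8]
    _HASH_TO_CLASSES[digest] = classes
    return digest

def expand_css_header(digest: str):
    return _HASH_TO_CLASSES.get(digest)   # None -> client resends full list
```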
ab45e21c7c Cache-bust sx.js and disable static file caching
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m29s
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 20:46:26 +00:00
c0d369eb8e Refactor SX templates: shared components, Python migration, cleanup
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 6m0s
- Extract shared components (empty-state, delete-btn, sentinel, crud-*,
  view-toggle, img-or-placeholder, avatar, sumup-settings-form, auth
  forms, order tables/detail/checkout)
- Migrate all Python sx_call() callers to use shared components directly
- Remove 55+ thin wrapper defcomps from domain .sx files
- Remove trivial passthrough wrappers (blog-header-label, market-card-text, etc.)
- Unify duplicate auth flows (account + federation) into shared/sx/templates/auth.sx
- Unify duplicate order views (cart + orders) into shared/sx/templates/orders.sx
- Disable static file caching in dev (SEND_FILE_MAX_AGE_DEFAULT=0)
- Add SX response validation and debug headers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 20:34:34 +00:00
755313bd29 Add market admin CRUD: list, create, and delete marketplaces
Replaces placeholder "Market admin" text with a functional admin panel
that lists marketplaces for a page and supports create/delete via sx,
mirroring the events calendar admin pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 19:16:39 +00:00
01a67029f0 Replace Tailwind CDN with pre-built CSS via standalone CLI
- Add shared/static/styles/tailwind.css as Tailwind v4 input with
  explicit @source paths for all service templates and safelisted
  dynamic classes (bg-{colour}-{shade}, text-{size})
- Build to shared/static/styles/tw.css (93KB minified)
- Replace <script src="cdn.tailwindcss.com"> with <link> to tw.css
  in sx page shell, Jinja _head.html, and ~base-shell component
- Add build-tw.sh convenience script

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:23:20 +00:00
b54f7b4b56 Fix SX history, OOB header swaps, cross-service nav components
- Always re-fetch on popstate (drop LRU cache) for fresh content on back/forward
- Save/restore scroll position via pushState
- Add id="root-header-child" to ~app-body so OOB swaps can target it
- Fix OOB renderers: nest root-row inside the root-header-child swap instead
  of a separate OOB swap that clobbers it
- Fix 3+ header rows being dropped: wrap all headers in a single fragment
  instead of concatenating outside (<> ...)
- Strip <script data-components> from text/sx responses before renderToString
- Fall back to location.assign for cross-origin pushState (SecurityError)
- Move blog/sx/nav.sx to shared/sx/templates/ so all services have nav components

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:17:39 +00:00
5ede32e21c Activate regular script tags after sx swap operations
Scripts inserted via innerHTML/insertAdjacentHTML don't execute.
Add _activateScripts() to _swapContent that recreates script tags
(without type or type=text/javascript) as live elements. This fixes
editor.js not loading when navigating to edit pages via sx-get.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 15:39:07 +00:00
7aea1f1be9 Activate script tags in raw! DOM output
Scripts inserted via innerHTML (template.content) don't execute.
When raw! renders HTML containing <script> tags, recreate them as
live elements so the browser fetches and executes them. Fixes
editor.js not loading on HTMX navigation to edit pages.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 15:25:52 +00:00
0ef4a93a92 Wrap raw Jinja HTML in (raw! "...") for sx source embedding
Post edit, data, entries, and settings pages pass raw Jinja HTML
as content to full_page_sx/oob_page_sx, which wraps it in SxExpr().
This injects unescaped HTML directly into sx source, breaking the
parser. Fix by serializing the HTML into a (raw! "...") expression
that the sx evaluator renders unescaped.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 15:16:42 +00:00
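The serialization step this commit describes can be sketched in Python. A minimal version under assumed names — `raw_expr` is illustrative, and the real helper may escape additional characters:

```python
def raw_expr(html: str) -> str:
    """Wrap pre-rendered HTML in a (raw! "...") sx expression.

    Backslashes and double quotes are escaped so the embedded HTML
    cannot terminate the string literal and break the sx parser.
    """
    escaped = html.replace("\\", "\\\\").replace('"', '\\"')
    return f'(raw! "{escaped}")'
```

The key point is that the HTML enters the sx source as a quoted string literal, not as bare text the parser would trip over.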
48696498ef Wrap multi-expression sx returns in fragments to prevent kwarg truncation
When multiple sx expressions are concatenated and passed as a kwarg
value via SxExpr(), the parser only sees the first as the value — the
rest become extra args silently dropped by the component. Wrap in (<>)
fragments in render_editor_panel() and _page_cards_sx().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 14:56:05 +00:00
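The fragment wrapping amounts to something like this (illustrative sketch; the real code composes SxExpr objects rather than raw strings):

```python
def wrap_fragment(*exprs: str) -> str:
    """Join several sx expressions into one (<> ...) fragment so the
    whole group parses as a single kwarg value, instead of the parser
    taking only the first expression and silently dropping the rest."""
    if len(exprs) == 1:
        return exprs[0]
    return "(<> " + " ".join(exprs) + ")"
```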
b7d95a8b4e Fix sx.js component kwarg evaluation: distinguish expressions from data
Three issues with the eager kwarg evaluation in renderComponentDOM and
renderStrComponent:

1. Data arrays (e.g. tags list of dicts) were being passed to sxEval
   which tried to call a dict as a function — causing blank pages.
   Fix: only evaluate arrays with a Symbol head (actual expressions);
   pass data arrays through as-is.

2. Expression arrays like (get t "src") inside map lambdas lost their
   scope when deferred — causing "get,t,src" URLs. Fix: eagerly evaluate
   these Symbol-headed expressions in the caller's env.

3. Bare symbol `t` used as boolean in editor.sx threw "Undefined symbol".
   Fix: use `true` literal instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 14:51:07 +00:00
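The Symbol-head test from issue 1 can be sketched as follows — a Python stand-in for the JS check; the class and function names are illustrative:

```python
class Symbol(str):
    """Marker type for parsed sx symbols (distinct from string literals)."""

def is_expression(node) -> bool:
    # Only lists whose head is a Symbol are code to evaluate; plain
    # lists/dicts (e.g. a tags list from the server) are data and must
    # be passed through to the component unevaluated.
    return (
        isinstance(node, list)
        and len(node) > 0
        and isinstance(node[0], Symbol)
    )
```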
e7d5c6734b Fix renderDOM swallowing pre-rendered DOM nodes as empty dicts
renderComponentDOM now eagerly renders kwarg values that are render
expressions (HTML tags, <>, ~components) into DOM nodes. But renderDOM
treated any non-array object as a dict and returned an empty fragment,
silently discarding pre-rendered content. Add a nodeType check to pass
DOM nodes through unchanged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 14:41:51 +00:00
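The nodeType pass-through can be sketched like this (a Python stand-in with a fake node; the real code runs in sx.js against live DOM nodes):

```python
class FakeNode:
    """Stand-in for a DOM node: real nodes carry a nodeType attribute."""
    nodeType = 1

def render_dom(node):
    """Dispatch sketch for the fix above."""
    if hasattr(node, "nodeType"):
        return node          # pre-rendered DOM node: pass through unchanged
    if isinstance(node, dict):
        return ""            # data dict: nothing to render (empty fragment)
    return str(node)         # everything else renders as text
```

Before the fix, the dict branch caught pre-rendered nodes too, so their content vanished.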
e4a6d2dfc8 Fix renderStrComponent with same eager-eval pattern as renderComponentDOM
The string renderer's component call had the same deferred-evaluation
bug — and this is the path actually used for blog card rendering via
renderToString. Apply the same _isRenderExpr check to route render-only
forms through renderStr while data expressions go through sxEval.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 13:50:42 +00:00
0a5562243b Fix renderComponentDOM: route render-only forms through renderDOM
The previous fix eagerly evaluated all kwarg expressions via sxEval,
which broke render-only forms (<>, raw!, HTML tags, ~components) that
only exist in the render pipeline. Now detect render expressions by
checking if the head symbol is an HTML/SVG tag, <>, raw!, or ~component,
and route those through renderDOM while data expressions still go
through sxEval for correct scope resolution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 13:45:43 +00:00
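The render-expression detection reads roughly like this (Python sketch; the real tag set covers all HTML/SVG tags, and the names are illustrative):

```python
HTML_TAGS = {"div", "span", "p", "a", "img", "ul", "li"}  # abbreviated set

def is_render_expr(head: str) -> bool:
    """Render-only forms go through the DOM renderer; everything else
    goes through sxEval in the caller's env."""
    return (
        head in HTML_TAGS
        or head == "<>"            # fragment
        or head == "raw!"          # pre-rendered HTML
        or head.startswith("~")    # component invocation
    )
```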
2b41aaa6ce Fix renderComponentDOM evaluating kwarg expressions in wrong scope
renderComponentDOM was deferring evaluation of complex expressions
(arrays) passed as component kwargs, storing raw AST instead.  When the
component body later used these values as attributes, the caller's env
(with lambda params like t, a) was no longer available, producing
stringified arrays like "get,t,src" as attribute values — which browsers
interpreted as relative URLs.

Evaluate all non-literal kwarg values eagerly in the caller's env,
matching the behavior of callComponent and the Python-side renderer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 13:40:50 +00:00
cfe66e5342 Fix back_populates typo in Post.authors relationship
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 13:36:18 +00:00
382d1b7c7a Decouple blog models and BlogService from shared layer
Move Post/Author/Tag/PostAuthor/PostTag/PostUser models from
shared/models/ghost_content.py to blog/models/content.py so blog-domain
models no longer live in the shared layer. Replace the shared
SqlBlogService + BlogService protocol with a blog-local singleton
(blog_service), and switch entry_associations.py from direct DB access
to HTTP fetch_data("blog", "post-by-id") to respect the inter-service
boundary.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 13:28:11 +00:00
a580a53328 Fix alembic revision IDs to match existing naming convention
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 12:38:43 +00:00
0f9af31ffe Phase 0+1: native post writes, Ghost no longer write-primary
- Final sync script with HTML verification + author→user migration
- Make ghost_id nullable on posts/authors/tags, add UUID/timestamp defaults
- Add user profile fields (bio, slug, profile_image, etc.) to User model
- New PostUser M2M table (replaces post_authors for new posts)
- PostWriter service: direct DB CRUD with Lexical rendering, optimistic
  locking, AP federation, tag upsert
- Rewrite create/edit/settings routes to use PostWriter (no Ghost API calls)
- Neuter Ghost webhooks (post/page/author/tag → 204 no-op)
- Disable Ghost startup sync

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 12:33:37 +00:00
e8bc228c7f Rebrand sexp → sx across web platform (173 files)
Rename all sexp directories, files, identifiers, and references to sx.
artdag/ excluded (separate media processing DSL).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 11:06:57 +00:00
17cebe07e7 Add sx-get to cross-domain cart and auth-menu fragment links
Cart mini and auth-menu components were rendering plain <a href>
links for cross-domain navigation. Add sx-get with OOB swap
attributes so these use the SX fetch path instead of full reloads.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 10:47:24 +00:00
82b411f25a Add cross-domain SX navigation with OOB swap
Enable instant cross-subdomain navigation (blog → market, etc.) via
sx-get instead of full page reloads. The server prepends missing
component definitions to OOB responses so the client can render
components from other domains.

- sexp.js: send SX-Components header, add credentials for cross-origin
  fetches to .rose-ash.com/.localhost, process sexp scripts in response
  before OOB swap
- helpers.py: add components_for_request() to diff client/server
  component sets, update sexp_response() to prepend missing defs
- factory.py: add SX-Components to CORS allowed headers, add
  Access-Control-Allow-Methods
- fragments/routes.py: switch nav items from ~blog-nav-item-plain to
  ~blog-nav-item-link (sx-get enabled)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 10:33:12 +00:00
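The component diff behind components_for_request() might look like this — a hedged sketch; the actual signature and storage are assumptions:

```python
def components_to_prepend(server_defs: dict, client_names: set) -> list:
    """Given the component names the client reports via the
    SX-Components header, return the definitions (sx source strings)
    the client is missing, so the server can prepend them to the
    OOB response before the swap runs."""
    return [
        src
        for name, src in sorted(server_defs.items())
        if name not in client_names
    ]
```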
a643b3532d Phase 5 cleanup: remove legacy HTML components, fix nav-tree fragment
- Remove old raw! layout components (~app-head, ~app-layout, ~oob-response,
  ~header-row, ~menu-row, ~oob-header, ~header-child) from layout.sexp
- Convert nav-tree fragment from Jinja HTML to sexp source, fixing the
  "Unexpected character: ." parse error caused by HTML leaking into sexp
- Add _as_sexp() helper to safely coerce HTML fragments to ~rich-text
- Fix federation/sexp/search.sexpr extra closing paren
- Remove dead _html() wrappers from blog and account sexp_components
- Remove stale render import from cart sexp_components
- Add dev_watcher.py to auto-reload on .sexp/.sexpr/.js/.css changes
- Add test_parse_all.py to parse-check all 59 sexpr/sexp files
- Fix test assertions for sx- attribute prefix (was hx-)
- Add sexp.js version logging for cache debugging

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 10:12:03 +00:00
22802bd36b Send all responses as sexp wire format with client-side rendering
- Server sends sexp source text, client (sexp.js) renders everything
- SexpExpr marker class for nested sexp composition in serialize()
- sexp_page() HTML shell with data-mount="body" for full page loads
- sexp_response() returns text/sexp for OOB/partial responses
- ~app-body layout component replaces ~app-layout (no raw!)
- ~rich-text is the only component using raw! (for CMS HTML content)
- Fragment endpoints return text/sexp, auto-wrapped in SexpExpr
- All _*_html() helpers converted to _*_sexp() returning sexp source
- Head auto-hoist: sexp.js moves meta/title/link/script[ld+json]
  from rendered body to document.head automatically
- Unknown components render warning box instead of crashing page
- Component kwargs preserve AST for lazy rendering (fixes <> in kwargs)
- Fix unterminated paren in events/sexp/tickets.sexpr

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 09:45:07 +00:00
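The SexpExpr marker's role in serialize() can be sketched as follows (simplified; the real serializer also handles keywords, dicts, and deeper nesting):

```python
class SexpExpr:
    """Marker wrapping a string that is already sexp source and must be
    embedded verbatim rather than quoted as a string literal."""
    def __init__(self, src: str):
        self.src = src

def serialize(value) -> str:
    if isinstance(value, SexpExpr):
        return value.src                      # nested sexp: emit as-is
    if isinstance(value, str):
        return '"' + value.replace('"', '\\"') + '"'
    if isinstance(value, bool):               # must precede the int check
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, (list, tuple)):
        return "(list " + " ".join(serialize(v) for v in value) + ")"
    raise TypeError(f"unserializable: {type(value)}")
```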
0d48fd22ee Add test service to CI build loop
The test service was missing from the CI app list, so its Docker
image was never rebuilt on push (no Node.js for sexp.js parity tests).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:54:40 +00:00
b92e7a763e Use lazy import for quart.Response in sexp_response helper
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:46:58 +00:00
fec5ecdfb1 Add s-expression wire format support and test detail view
- HTMX beforeSwap hook intercepts text/sexp responses and renders
  them client-side via sexp.js before HTMX swaps the result in
- sexp_response() helper for returning text/sexp from route handlers
- Test detail page (/test/<nodeid>) with clickable test names
- HTMX navigation to detail returns sexp wire format (4x smaller
  than pre-rendered HTML), full page loads render server-side
- ~test-detail component with back link, outcome badge, and
  error traceback display

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:45:28 +00:00
269bcc02be Send test dashboard component definitions to client via sexp.js
Uses client_components_tag() to emit all component definitions as
<script type="text/sexp" data-components> before </body>, making them
available for client-side rendering by sexp.js.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:42:42 +00:00
9f2f0dacaf Add update/hydrate methods and browser auto-init to sexp.js
Adds Sexp.update() for re-rendering data-sexp elements with new data,
Sexp.hydrate() for finding and rendering all [data-sexp] elements,
and auto-init on DOMContentLoaded + htmx:afterSwap integration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:40:14 +00:00
39e013a75e Wire sexp.js into page template with auto-init and HTMX integration
- Load sexp.js in ~app-layout before body.js
- Auto-process <script type="text/sexp"> tags on DOMContentLoaded
- Re-process after htmx:afterSwap for dynamic content
- Sexp.mount(target, expr, env) for rendering into DOM elements
- Sexp.processScripts() picks up data-components and data-mount tags
- client_components_tag() Python helper serializes Component objects
  back to sexp source for client-side consumption
- 37 parity tests all passing

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:36:49 +00:00
2df1014ee3 Add Node.js to test containers for sexp.js parity tests
Node 20 from Debian packages — needed to run test_sexp_js.py which
verifies JS renderer output matches Python renderer output.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:30:17 +00:00
e8a991834b Add sexp.js: client-side s-expression parser, evaluator, and DOM renderer
Vanilla JS (no build tools) counterpart to shared/sexp/ Python modules.
Parses s-expression text, evaluates special forms, and renders to DOM
nodes or HTML strings. Full component system with defcomp/~name.

Includes:
- Parser: tokenizer + parse/parseAll matching Python parser exactly
- Evaluator: all special forms (if, when, cond, let, and, or, lambda,
  defcomp, define, ->, set!), higher-order forms (map, filter, reduce)
- DOM renderer: createElement for HTML tags, SVG namespace support,
  component invocation, raw! for pre-rendered HTML, <> fragments
- String renderer: matches Python html.render output for SSR parity
- ~50 built-in primitives (arithmetic, string, collection, predicates)
- 35 parity tests verifying JS output matches Python output via Node.js

Also fixes Python raw! handler to properly unwrap _RawHTML objects.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:28:21 +00:00
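A toy version of the parser's shape, far smaller than the real one (which also handles keywords, comments, booleans, and precise error positions):

```python
import re

# parens | string literal with escapes | bare atom
TOKEN = re.compile(r'\(|\)|"(?:[^"\\]|\\.)*"|[^\s()"]+')

def parse(src: str):
    """Tiny recursive-descent parser in the spirit of the one above."""
    tokens = TOKEN.findall(src)
    pos = 0

    def read():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            items = []
            while tokens[pos] != ")":
                items.append(read())
            pos += 1                       # consume ")"
            return items
        if tok.startswith('"'):
            return tok[1:-1].replace('\\"', '"')
        try:
            return int(tok)                # numbers become ints
        except ValueError:
            return tok                     # everything else is a symbol

    return read()
```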
bc7a4a5128 Add cross-service URL functions and rights to base_context
blog_url, market_url, cart_url, events_url and g.rights were only
available as Jinja globals, not in the ctx dict passed to sexp
helper functions. This caused all cross-service links in the header
system (post title, cart badge, admin cog, admin nav items) to
produce relative URLs resolving to the current service domain.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:19:42 +00:00
8e4c2c139e Fix duplicate menu rows on HTMX navigation between depth levels
When navigating from a deeper page (e.g. day) to a shallower one
(e.g. calendar) via HTMX, orphaned header rows from the deeper page
persisted in the DOM because OOB swaps only replaced specific child
divs, not siblings. Fix by sending empty OOB swaps to clear all
header row IDs not present at the current depth.

Applied to events (calendars/calendar/day/entry/admin/slots) and
market (market_home/browse/product/admin). Also restore app_label
in root header.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:09:15 +00:00
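The clearing-swap idea, sketched below. The row ids and the sx-swap-oob attribute name are illustrative assumptions, not the project's actual identifiers:

```python
ALL_HEADER_ROW_IDS = ["header-row-1", "header-row-2", "header-row-3"]

def clearing_oob_swaps(present_ids) -> str:
    """Emit empty out-of-band swap divs for every known header row id
    NOT rendered at the current depth, so stale rows left over from a
    deeper page are removed from the DOM."""
    present = set(present_ids)
    return "".join(
        f'<div id="{i}" sx-swap-oob="true"></div>'
        for i in ALL_HEADER_ROW_IDS
        if i not in present
    )
```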
db3f48ec75 Remove app_label text from root header, keep settings cog
The word "settings" (app_label) was showing next to "Rose Ash 2.0"
in the top bar. Removed that label while restoring the settings cog
icon on the right side of the menu bar.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 23:03:46 +00:00
b40f3d124c Remove settings cog from root header bar
The settings page is accessible via its own route; no need for a
persistent cog icon next to Rose Ash 2.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:59:33 +00:00
3809affcab Test dashboard: full menu system, all-service tests, filtering
- Run tests for all 10 services via per-service pytest subprocesses
- Group results by service with section headers
- Clickable summary cards filter by outcome (passed/failed/errors/skipped)
- Service filter nav using ~nav-link buttons in menu bar
- Full menu integration: ~header-row + ~header-child + ~menu-row
- Show logo image via cart-mini rendering
- Mount full service directories in docker-compose for test access
- Add 24 unit test files across 9 services

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:54:25 +00:00
81e51ae7bc Fix settings cog URL: /settings/ not /admin/
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:50:16 +00:00
b6119b7f04 Show settings cog on root header for admin users
Pass settings_url and is_admin to header-row component so the blog
settings cog appears on the root header row for admin users across
all services. Links to blog /admin/.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:47:32 +00:00
75cb5d43b9 Apply generic admin header pattern to all events admin pages
Events admin pages (calendars, calendar admin, day admin, entry admin,
slots, slot detail) now use shared post_admin_header_html with
selected="calendars". Container nav is fetched via fragments so post
header row matches other services.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:46:00 +00:00
f628b35fc3 Make post header row generic: admin cog + container_nav in shared helper
Move admin cog generation and container_nav border wrapping from
blog-specific wrapper into shared post_header_html so all services
render identical post header rows. Blog, events, cart all delegate
to the shared helper now. Cart admin pages fetch container_nav_html
via fragments. Village Hall always links to blog.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:37:24 +00:00
2e4fbd5777 Remove extra cart header row from admin pages, use shared post header
Cart admin pages (admin overview, payments) now use the same header
pattern as blog/market/events: root_header → post_header → admin_header.
The domain name appears via app_label on the root header instead of a
separate level-1 "cart" row.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:17:36 +00:00
b47ad6224b Unify post admin nav across all services
Move post admin header into shared/sexp/helpers.py so blog, cart,
events, and market all render the same admin row with identical nav:
calendars | markets | payments | entries | data | edit | settings.

All links are external (cross-service). The selected item shows
highlighted on the right and as white text next to "admin" on the left.

- blog: delegates to shared helper, removes blog-specific nav builder
- cart: delegates to shared helper for payments admin
- events: adds shared admin row (selected=calendars) to calendar admin
- market: adds /<slug>/admin/ route + page_admin blueprint, delegates
  to shared helper (selected=markets). Fixes 404 on page-level admin.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 22:01:56 +00:00
2d08d6f787 Eliminate payments sub-admin row in cart, show selection on admin label
Same pattern as blog: remove the level-3 payments header row, instead
show "payments" in white text next to "admin" on the admin row.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 21:35:02 +00:00
beebe559cd Show selected sub-page name in white next to admin label
Appends e.g. "settings" in white text next to the admin shield icon
on the left side of the admin row, in addition to the highlighted
nav button on the right.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 21:28:27 +00:00
b63aa72efb Fix admin nav selection: use !important to override text-black
The direct bg-stone-500 text-white classes were losing to text-black
in Tailwind specificity. Use !bg-stone-500 !text-white to ensure
selected admin nav items display correctly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 21:27:02 +00:00
8cfa12de6b Eliminate post sub-admin rows, highlight active nav on admin row
Remove the separate sub-admin header rows (data, entries, edit, settings)
that caused duplicate/stale rows on HTMX navigation and font styling breaks.
Instead, pass selected= to the admin row to highlight the active nav item
via aria-selected styling. External nav items (calendars, markets, payments)
also gain is-selected and select-colours support.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:40:03 +00:00
3dd62bd9bf Bigger text in test dashboard + add deliberate failing test
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:34:19 +00:00
c926e5221d Fix test dashboard: use raw! for pre-rendered table rows
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:32:40 +00:00
d62643312a Skip OAuth/auth for test service (public dashboard)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:24:07 +00:00
8852ab1108 Add test service to OAuth allowed clients
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:09:13 +00:00
1559c5c931 Add test runner dashboard service (test.rose-ash.com)
Public Quart microservice that runs pytest against shared/tests/ and
shared/sexp/tests/, serving an HTMX-powered sexp-rendered dashboard
with pass/fail/running status, auto-refresh polling, and re-run button.
No database — results stored in memory.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 20:08:10 +00:00
00efbc2a35 Add unit test coverage for shared pure-logic modules (240 tests)
Track 1.1 of master plan: expand from sexp-only tests to cover
DTOs, HTTP signatures, HMAC auth, URL utilities, Jinja filters,
calendar helpers, config freeze, activity bus registry, parse
utilities, sexp helpers, error classes, and jinja bridge render API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 19:34:37 +00:00
6c44a5f3d0 Add app label to root header and auto-reload sexp templates in dev
Show current subdomain name (blog, cart, events, etc.) next to the site
title in the root header row. Remove the redundant second "cart" menu row
from cart overview and checkout error pages.

Add dev-mode hot-reload for sexp templates: track file mtimes and re-read
changed files per-request when RELOAD=true, so .sexp edits are picked up
without restarting services.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 19:33:00 +00:00
6d43404b12 Consolidate post header/menu system into shared infrastructure
Replace duplicated _post_header_html, _oob_header_html, and header-child
components across blog/events/market/errors with shared sexpr components
(~post-label, ~page-cart-badge, ~oob-header, ~header-child, ~error-content)
and shared Python helpers (post_header_html, oob_header_html,
header_child_html, error_content_html). App-specific logic (blog container-nav
wrapping, admin cog, events calendar links) preserved via thin wrappers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 19:06:18 +00:00
97c4e25ba7 Fix post-row link on 404: inject Jinja globals into error context
base_context() doesn't include blog_url/cart_url/etc — those live in
Jinja globals. Without them call_url(ctx, "blog_url", ...) falls back
to a relative path instead of https://blog.rose-ash.com/...

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 18:47:03 +00:00
f1b7fdd37d Make rich 404 resilient to cross-service failures
Build a minimal context directly instead of relying on
get_template_context() which runs the full context processor chain
including cross-service fragment fetches. Each step (base_context,
fragments, post hydration) is independently try/excepted so the page
renders with whatever is available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 18:36:11 +00:00
597b0d7a2f Fix relations nav_label URL bug and add rich 404 pages with headers
The relations container-nav fragment was inserting nav_label (e.g.
"calendars", "markets") as a URL path segment, generating wrong links
like /the-village-hall/markets/suma/ instead of /the-village-hall/suma/.
The nav_label is for display only, not URL construction.

Also adds a rich 404 handler that shows site headers and post breadcrumb
when a slug can be resolved from the URL path. Falls back gracefully to
the minimal error page if context building fails.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 18:30:44 +00:00
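The corrected URL construction is simply this (sketch; the function name is illustrative):

```python
def container_nav_url(page_slug: str, child_slug: str,
                      nav_label: str = "") -> str:
    """nav_label ("calendars", "markets", ...) is display text only and
    must never appear as a path segment. The correct link is always
    /<page_slug>/<child_slug>/ regardless of the label."""
    return f"/{page_slug}/{child_slug}/"
```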
ee41e30d5b Move payments admin from events to cart service
Payments config (SumUp credentials per page) is a cart concern since all
checkouts go through the cart service. Moves it from events.rose-ash.com
to cart.rose-ash.com/<page_slug>/admin/payments/ and adds a cart admin
overview page at /<page_slug>/admin/.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 18:15:35 +00:00
5957bd8941 Move calendar blueprint to app level for correct URL routing
The calendar blueprint was nested under calendars (admin), making URLs
/{slug}/admin/{calendar_slug}/ instead of /{slug}/{calendar_slug}/.

Register calendar blueprint directly on the app and update all endpoint
references from calendars.calendar.* to calendar.* (37 in Python,
~50 in templates).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:50:13 +00:00
a8edc26a1d Add external flag to menu-row for cross-subdomain links
Cross-subdomain hx-get breaks due to OAuth redirects. When external=true,
menu-row renders a plain <a href> without HTMX attributes, allowing
normal browser navigation.

Applied to post header links on events and market services which link
back to blog.rose-ash.com.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:46:47 +00:00
6a331e4ad8 Fix payments admin link to cart.rose-ash.com/{slug}/admin/payments/
Payments are managed by the cart service, not events.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:37:41 +00:00
4a99bc56e9 Fix markets admin link to market.rose-ash.com/{slug}/admin/
Markets use the same admin pattern as calendars but on the market
subdomain, not the events subdomain.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:34:54 +00:00
4fe5afe3e6 Move calendar management to /{slug}/admin/ and reserve slug
- Change calendars blueprint prefix from /calendars to /admin
- Simplify routes from /calendars/ to / within blueprint
- Reserve admin, markets, payments, entries as calendar slugs
- Update blog admin nav link to /{slug}/admin/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:31:24 +00:00
efae7f5533 Fix calendars admin link to correct events URL path
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:28:12 +00:00
105f4c4679 Rewrite sprint plan: fit the task to the timescale
Six 2-week sprints, each shipping one or two complete deliverables.
Not 20 weeks crammed into 2 — the right amount of work for the time.
Each sprint is valuable on its own. Stop after any and you've shipped.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:24:59 +00:00
a7cca2f720 Fix admin nav label: calendar → calendars
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:23:04 +00:00
8269977751 Add two-week sprint plan: 90% of the masterplan in 14 days
Ghost killed by day 5, sexp protocol running internally by day 8,
sexpr.js on every page by day 10. Cut Rust client, IPFS mesh, and
browser extension to later. Everything users touch runs on sexp.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:21:13 +00:00
0df932bd94 Fix blog page title showing post name twice
Stop concatenating post title into base_title in route context.
Build proper "Post Title — Site Title" format in meta component instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:20:44 +00:00
c220fe21d6 Add master plan: 9 tracks from stability to federated protocol
Schedules all existing plans into coherent 20-week roadmap with parallel
tracks: platform stability, decoupling, entities/relations, Ghost removal,
sexp pages, internal protocol, client-side runtime, native client, and
scalability. Critical path identified through Ghost removal as linchpin.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:19:39 +00:00
f9d9697c67 Externalize sexp to .sexpr files + render() API
Replace all 676 inline sexp() string calls across 7 services with
render(component_name, **kwargs) calls backed by 46 external .sexpr
component definition files (587 defcomps total).

- Add render() function to shared/sexp/jinja_bridge.py
- Add load_service_components() helper and update load_sexp_dir() for *.sexpr
- Update parser keyword regex to support HTMX hx-on::event syntax
- Convert remaining inline HTML in route files to render() calls
- Add shared/sexp/templates/misc.sexp for cross-service utility components

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 16:14:58 +00:00
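The render() call described here maps Python kwargs onto an sx component invocation, roughly as below (a simplified sketch; real value serialization is richer than this):

```python
def render(component_name: str, **kwargs) -> str:
    """Build an sx component invocation (~name :key "value" ...) from
    Python keyword arguments. Underscores become hyphens, strings are
    quoted, and booleans become true/false."""
    parts = [f"~{component_name}"]
    for key, value in kwargs.items():
        k = key.replace("_", "-")
        v = f'"{value}"' if isinstance(value, str) else str(value).lower()
        parts.append(f":{k} {v}")
    return "(" + " ".join(parts) + ")"
```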
f4c2f4b6b8 Add internal-first strategy for sexpr:// protocol development
Build and battle-test the protocol on the internal microservice mesh
before exposing it publicly. Current fetch_data/call_action/fetch_fragment
map directly to sexp verbs. Same protocol serves internal and public clients.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 15:16:35 +00:00
881ed2cdcc Add doc on sexp as microservice wire format
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 57s
Strongest near-term application: replace lossy dicts and opaque HTML
fragments with structured trees that are both inspectable and renderable.
Includes incremental migration path from current fetch_data/fetch_fragment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 15:11:05 +00:00
2ce2077d14 Add risks and pitfalls analysis for sexp protocol
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 58s
Honest assessment: adoption chicken-and-egg, security surface area,
accessibility gap, tooling desert, Lisp Curse fragmentation, Worse Is
Better problem, and mitigation strategy for each.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 15:08:44 +00:00
8cf834dd55 Add doc on how sexp protocol fundamentally changes the web
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m23s
Covers: APIs as separate concept disappearing, front-end framework
collapse, AI as first-class citizen, browser monopoly breaking,
content portability, client-server blur, computational governance,
and the Unix pipes analogy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 15:04:13 +00:00
4daecabf30 Add open verb system to unified sexp protocol spec
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m47s
Verbs are no longer limited to HTTP's small fixed set of methods — any
symbol is a valid verb. Domain-specific actions (reserve, publish, vote, bid)
read as natural language. Verb behaviour is declared via a schema endpoint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:59:34 +00:00
19240c6ca3 Add cooperative compute mesh: client-as-node, GPU sharing, IPFS persistence
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m18s
Members' Rust clients become full peer nodes — AP instances, IPFS nodes,
and artdag GPU workers. The relay server becomes a lightweight matchmaker
(message queue, pinning, peer directory) while all compute, rendering,
and content serving is distributed across members' own hardware. Back
to the original vision of the web: everyone has a server.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:52:17 +00:00
3e29c2a334 Unify sexp protocol and ActivityPub extension into single spec
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m30s
Merges sexpr-activitypub-extension.md and sexpr-protocol-and-tiered-clients.md
into sexpr-unified-protocol.md — recognising that browsing, federation, and
real-time updates are all the same thing: peers exchanging s-expressions on
a bidirectional stream. One format, one connection, one parser.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:44:39 +00:00
a70d3648ec Add sexp protocol spec and tiered client architecture plan
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m16s
Defines three client tiers (browser HTML, browser extension with
sexpr.js, Rust native client) served from the same route handlers
via content negotiation. Includes native sexp:// protocol design
over QUIC, content-addressed caching, bidirectional streaming,
self-describing schema, and implementation plan from Phase 1
(Quart content negotiation) through Phase 7 (fallback gateway).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:40:18 +00:00
0d1ce92e52 Fix sexp parse errors: avoid literal parentheses in sexp string args
The sexp parser doesn't handle "(" and ")" as string literals
inside expressions. Use raw! with pre-formatted strings instead.
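The workaround can be sketched as a small pre-formatting helper; the function name is illustrative, and the point is only that the parenthesised string is built in Python and handed to the sexp layer as a single variable for (raw! s).

```python
from html import escape

def paren_label(name: str, count: int) -> str:
    """Pre-format a parenthesised label in Python so the sexp source
    never contains literal "(" or ")" inside a string argument."""
    return f"{escape(name)} ({count})"

label = paren_label("Events", 3)
# The caller then emits (raw! label) instead of embedding "(" in sexp.
```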

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:20:41 +00:00
09b5a5b4f6 Convert account, orders, and federation sexp_components.py to pure sexp() calls
Eliminates all f-string HTML from the remaining three services,
completing the migration of all sexp_components.py files to the
s-expression rendering system.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:15:17 +00:00
f0a100fd77 Convert cart sexp_components.py from f-string HTML to pure sexp() calls
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 14:08:36 +00:00
16da08ff05 Fix market and calendar URL routing
Market: blog links now use market_url('/{slug}/') instead of
events_url('/{slug}/markets/'), matching the market service's
actual route structure /<page_slug>/<market_slug>/.

Calendar: flatten route from /<slug>/calendars/<calendar_slug>/
to /<slug>/<calendar_slug>/ by changing the events app blueprint
prefix and moving listing routes to explicit /calendars/ paths.
Update all hardcoded calendar URL paths across blog and events
services (Python + Jinja templates).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:58:05 +00:00
5c6d83f474 Add sexp ActivityPub extension plan with implementation phases
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m2s
Defines a backwards-compatible AP extension using s-expressions as
the wire format: content negotiation, component discovery protocol,
WebSocket streaming, and a path to publishing as a FEP. Includes
bidirectional JSON-LD bridging for Mastodon/Pleroma compatibility.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:40:15 +00:00
da8a766e3f Convert all f-string HTML to sexp() in market/sexp/sexp_components.py
Eliminates every HTML tag from the market service's sexp component layer,
replacing f-string HTML with composable sexp() calls throughout ~30 functions
including product cards, filter panels, nav panels, product detail, meta tags,
cart controls, like buttons, and sentinel infinite-scroll elements.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:38:53 +00:00
9fa3b8800c Add sexp-as-wire-format rationale for AI-driven systems
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 56s
Documents why s-expressions on the wire are a natural fit for
LLM agents: fewer tokens, no closing-tag errors, components as
tool calls, mutations as agent actions, content-addressed caching.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:31:38 +00:00
f24292f99d Convert editor panel <script> block to sexp wrapper
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m4s
Separate JS content from HTML tag — pass JS body into
(script (raw! js)) so zero raw HTML tags remain.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:27:48 +00:00
de3a6e4dde Convert all f-string HTML to sexp() in blog/sexp/sexp_components.py
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m5s
~39 functions converted from f-string HTML to sexp() calls.
Only remaining HTML is the intentional <script> block in
render_editor_panel (complex JS init for WYSIWYG editor).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:24:16 +00:00
0bb57136d2 Add sexpr.js runtime plan and comprehensive Ghost removal plan
Two planning documents for the next major architectural steps:
- sexpr-js-runtime-plan: isomorphic JS s-expression runtime for
  client-side rendering, content-addressed component caching,
  and native hypermedia mutations
- ghost-removal-plan: full Ghost CMS replacement covering content
  (Lexical→sexp), membership, newsletters, Stripe subscriptions,
  and media uploads

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 12:53:12 +00:00
495e6589dc Convert all remaining f-string HTML to sexp() in events/sexp_components.py
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m3s
Eliminates every raw HTML string from the events service component file.
Converted ~30 functions including ticket admin, entry cards, ticket widgets,
view toggles, entry detail, options, buy forms, slots/ticket-type tables,
calendar description forms, nav OOB panels, and cart icon.

Zero HTML tags remain in events/sexp/sexp_components.py.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 12:39:37 +00:00
903193d825 Convert events header/panel f-string HTML to sexp calls
Migrates ~20 functions from f-string HTML construction to sexp():
- _oob_header_html, _post_header_html label/cart badge
- _calendars_header_html, _calendar_header_html, _calendar_nav_html
- _day_header_html, _day_nav_html (entries scroll menu + admin cog)
- _markets_header_html, _payments_header_html labels
- _calendars_main_panel_html + _calendars_list_html
- _calendar_main_panel_html (full month grid with day cells + entry badges)
- _day_main_panel_html + _day_row_html (entries table)
- _calendar_admin_main_panel_html + _calendar_description_display_html
- _markets_main_panel_html + _markets_list_html
- _payments_main_panel_html (SumUp config form)
- _entry_state_badge_html, _ticket_state_badge_html
- _day_admin_main_panel_html

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 12:19:57 +00:00
eda95ec58b Enable cross-subdomain htmx and purify layout to sexp
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m15s
- Disable htmx selfRequestsOnly, add CORS headers for *.rose-ash.com
- Remove same-origin guards from ~menu-row and ~nav-link htmx attrs
- Convert ~app-layout from string-concatenated HTML to pure sexp tree
- Extract ~app-head component, replace ~app-shell with inline structure
- Convert hamburger SVG from Python HTML constant to ~hamburger sexp component
- Fix cross-domain fragment URLs (events_url, market_url)
- Fix starts-with? primitive to handle nil values
- Fix duplicate admin menu rows on OOB swaps
- Add calendar admin nav links (slots, description)
- Convert slots page from Jinja to sexp rendering
- Disable page caching in development mode
- Backfill migration to clean orphaned container_relations
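The cross-subdomain allow-list can be sketched as a header-building predicate. This is an assumption about the shape of the check, not the actual middleware; only the *.rose-ash.com scope comes from the commit.

```python
from urllib.parse import urlparse

ALLOWED_SUFFIX = ".rose-ash.com"  # per the commit: *.rose-ash.com

def cors_headers(origin: str) -> dict:
    """Return CORS headers only for rose-ash.com and its subdomains,
    so cross-subdomain htmx requests are allowed and others are not."""
    host = urlparse(origin).hostname or ""
    if host == "rose-ash.com" or host.endswith(ALLOWED_SUFFIX):
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
            "Vary": "Origin",
        }
    return {}
```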

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 12:09:00 +00:00
d2f1da4944 Migrate callers from attach-child/detach-child to relate/unrelate API
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m18s
Switch all cross-service relation calls to the new registry-aware
relate/unrelate/can-relate actions, and consolidate per-service
container-nav fragment fetches into the generic relations handler.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 09:24:52 +00:00
53c4a0a1e0 Externalize sexp component templates and delete redundant HTML fragments
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m13s
Move 24 defcomp definitions from Python string constants in components.py
to 7 grouped .sexp files under shared/sexp/templates/. Add load_sexp_dir()
to jinja_bridge.py for file-based loading. Migrate events and market
link-card fragment handlers from render_template to sexp. Delete 9
superseded Jinja HTML fragment templates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 08:55:54 +00:00
9c6170ed31 Add SVG child elements (path, circle, rect, etc.) to HTML_TAGS
Fixes EvalError: Undefined symbol: path when rendering ~mobile-filter
component which uses an SVG <path> element.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 08:37:35 +00:00
a0a0f5ebc2 Implement flexible entity relation system (Phases A–E)
Declarative relation registry via defrelation s-expressions with
cardinality enforcement (one-to-one, one-to-many, many-to-many),
registry-aware relate/unrelate/can-relate API endpoints, generic
container-nav fragment, and relation-driven UI components.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 08:35:17 +00:00
6f1d5bac3c Add relation plan
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m31s
2026-02-28 08:23:10 +00:00
b52ef719bf Fix 500 errors and double-slash URLs found during sexp rendering testing
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m18s
- events: fix ImportError for events_url (was importing from shared.utils
  instead of shared.infrastructure.urls)
- blog: add missing ~mobile-filter sexp component (details/summary panel)
- shared: fix double-slash URLs in ~auth-menu, ~cart-mini, ~header-row
  by removing redundant "/" concatenation on URLs that already end in a slash
- blog: fix ghost_sync select UnboundLocalError caused by redundant local
  import shadowing module-level import

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 01:40:02 +00:00
838ec982eb Phase 7: Replace render_template() with s-expression rendering in all POST/PUT/DELETE routes
Eliminates all render_template() calls from POST/PUT/DELETE handlers across
all 7 services. Moves sexp_components.py into sexp/ packages per service.

- Blog: like toggle, snippets, cache clear, features/sumup/entry panels,
  create/delete market, WYSIWYG editor panel (render_editor_panel)
- Federation: like/unlike/boost/unboost, follow/unfollow, actor card,
  interaction buttons
- Events: ticket widget, checkin, confirm/decline/provisional, tickets
  config, posts CRUD, description edit/save, calendar/slot/ticket_type
  CRUD, payments, buy tickets, day main panel, entry page
- Market: like toggle, cart add response
- Account: newsletter toggle
- Cart: checkout error pages (3 handlers)
- Orders: checkout error page (1 handler)

Remaining render_template() calls are exclusively in GET handlers and
internal services (email templates, fragment endpoints).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 01:15:29 +00:00
e65232761b Fix NoneType strftime error in events calendar grid
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 59s
Guard day_date.strftime() call with None check — day_cell.date can
be None for empty grid cells.
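The guard is the standard None-check pattern; the helper name and format string here are illustrative.

```python
from datetime import date

def day_label(day_date):
    """Guarded strftime: day_cell.date can be None for empty grid
    cells, so render an empty label instead of crashing."""
    if day_date is None:
        return ""
    return day_date.strftime("%d")
```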

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 23:31:00 +00:00
1c794b6c0e Fix nested raw! sexp errors and missing container nav in market pages
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m6s
- Fix invalid nested (raw! a (raw! b)) patterns in market and events
  sexp_components — concatenate HTML strings in Python, pass single
  var to (raw! h) instead
- Add container_nav_html fetch to market inject_post context processor
  so page-scoped market pages show calendar/market nav links
- Add qs_filter to base_context for sexp filter URL building
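The raw! fix reduces to joining pre-rendered HTML on the Python side; the helper name is illustrative.

```python
def join_html_parts(*parts: str) -> str:
    """Concatenate pre-rendered HTML in Python so the sexp layer
    receives one variable for a single (raw! h) form, instead of the
    invalid nested (raw! a (raw! b)) shape."""
    return "".join(parts)

h = join_html_parts("<li>a</li>", "<li>b</li>")
# Caller emits (raw! h) once.
```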

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 23:28:11 +00:00
d53b9648a9 Phase 6: Replace render_template() with s-expression rendering in all GET routes
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m15s
Migrate ~52 GET route handlers across all 7 services from Jinja
render_template() to s-expression component rendering. Each service
gets a sexp_components.py with page/oob/cards render functions.

- Add per-service sexp_components.py (account, blog, cart, events,
  federation, market, orders) with full page, OOB, and pagination
  card rendering
- Add shared/sexp/helpers.py with call_url, root_header_html,
  full_page, oob_page utilities
- Update all GET routes to use get_template_context() + render fns
- Fix get_template_context() to inject Jinja globals (URL helpers)
- Add qs_filter to base_context for sexp filter URL building
- Mount sexp_components.py in docker-compose.dev.yml for all services
- Import sexp_components in app.py for Hypercorn --reload watching
- Fix route_prefix import (shared.utils not shared.infrastructure.urls)
- Fix federation choose-username missing actor in context
- Fix market page_markets missing post in context

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 23:19:33 +00:00
8013317b41 Phase 5: Page layouts as s-expressions — components, fragments, error pages
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m14s
Add 9 new shared s-expression components (cart-mini, auth-menu,
account-nav-item, calendar-entry-nav, calendar-link-nav, market-link-nav,
post-card, base-shell, error-page) and wire them into all fragment route
handlers. 404/403 error pages now render entirely via s-expressions as a
full-page proof-of-concept, with Jinja fallback on failure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 15:25:11 +00:00
04419a1ec6 Switch federation link-card fragment to sexp rendering
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m2s
All four services (blog, market, events, federation) now use the shared
~link-card s-expression component instead of per-service Jinja templates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:59:25 +00:00
573aec7dfa Add restart: unless-stopped to all dev services
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m31s
Containers now auto-restart on crash instead of staying down.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:46:25 +00:00
36b5f1d19d Fix blog startup deadlock: use direct DB instead of self-HTTP call
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m30s
ghost_sync was calling blog's own /internal/data/page-config-ensure via
HTTP during startup, but the server isn't listening yet — causing a retry
loop that times out Hypercorn. Replace with direct DB insert using the
existing session.
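The fix pattern, sketched with sqlite in place of the real database so it is self-contained; the table and column names are illustrative, not the blog service's schema.

```python
import sqlite3

def ensure_page_config(conn: sqlite3.Connection, slug: str) -> None:
    """Startup-safe ensure: insert directly with the existing DB
    connection instead of calling the service's own HTTP endpoint,
    which cannot answer before the server starts listening."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS page_configs (slug TEXT PRIMARY KEY)"
    )
    conn.execute(
        "INSERT OR IGNORE INTO page_configs (slug) VALUES (?)", (slug,)
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
ensure_page_config(conn, "home")
ensure_page_config(conn, "home")  # idempotent: second call is a no-op
```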

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:44:04 +00:00
28c66c3650 Wire s-expression rendering into live app — blog link-card
- Add setup_sexp_bridge() and load_shared_components() to factory.py
  so all services get s-expression support automatically
- Create shared/sexp/components.py with ~link-card component definition
  (replaces 5 per-service Jinja link_card.html templates)
- Replace blog's link-card fragment handler to use sexp() instead of
  render_template() — first real s-expression rendered page content

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:38:51 +00:00
5d9f1586af Phase 4: Jinja bridge for incremental s-expression migration
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m14s
Two-way bridge: sexp() Jinja global renders s-expression components in
templates, register_components() loads definitions at startup. Includes
~link-card component test proving unified replacement of 5 per-service
Jinja fragment templates.

19 new tests (218 total).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:34:42 +00:00
fbb7a1422c Phase 3: Async resolver with parallel I/O and graceful degradation
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m14s
Tree walker collects I/O nodes (frag, query, action, current-user,
htmx-request?), dispatches them via asyncio.gather(), substitutes results,
and renders to HTML. Failed I/O degrades gracefully to empty string.
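The gather-with-degradation core can be sketched in a few lines; the `fetch` stand-in is hypothetical, but the return_exceptions pattern is the standard way to get this behaviour from asyncio.

```python
import asyncio

async def fetch(node):
    """Stand-in for one I/O node fetch (frag/query/action)."""
    if node == "bad":
        raise RuntimeError("upstream down")
    return f"<div>{node}</div>"

async def resolve_io_nodes(nodes):
    """Dispatch all collected I/O nodes in parallel; any failure
    degrades to an empty string rather than failing the whole page."""
    results = await asyncio.gather(
        *(fetch(n) for n in nodes), return_exceptions=True
    )
    return ["" if isinstance(r, Exception) else r for r in results]

out = asyncio.run(resolve_io_nodes(["frag", "bad", "query"]))
```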

27 new tests (199 total), all mocked at execute_io boundary — no
infrastructure dependencies needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:22:28 +00:00
09010db70e Phase 2: HSX-style HTML renderer with render-aware evaluation
S-expression AST → HTML string renderer with ~100 HTML tags, void elements,
boolean attributes, XSS escaping, raw!, fragments, and components. Render-aware
special forms (if, when, cond, let, map, etc.) handle HTML tags in control flow
branches correctly by calling _render instead of _eval.
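A toy analogue of such a renderer, showing the void-element and escaping behaviour on a nested-tuple AST. The AST shape and tag set here are assumptions for illustration, not the project's actual renderer.

```python
from html import escape

VOID = {"br", "img", "input", "meta", "hr", "link"}

def render_node(node):
    """Render (tag, attrs, *children) tuples to HTML, escaping text
    children and attribute values; void elements get no closing tag."""
    if isinstance(node, str):
        return escape(node)
    tag, attrs, *children = node
    attr_str = "".join(f' {k}="{escape(str(v))}"' for k, v in attrs.items())
    if tag in VOID:
        return f"<{tag}{attr_str}>"
    inner = "".join(render_node(c) for c in children)
    return f"<{tag}{attr_str}>{inner}</{tag}>"
```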

63 new tests (172 total across parser, evaluator, renderer).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 14:04:35 +00:00
0fb87e3b1c Phase 1: s-expression core library + test infrastructure
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m9s
S-expression parser, evaluator, and primitive registry in shared/sexp/.
109 unit tests covering parsing, evaluation, special forms, lambdas,
closures, components (defcomp), and 60+ pure builtins.

Test infrastructure: Dockerfile.unit (tier 1, fast) and
Dockerfile.integration (tier 2, ffmpeg). Dev watch mode auto-reruns
on file changes. Deploy gate blocks push on test failure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 13:26:18 +00:00
996ddad2ea Fix ticket adjust: commit before cart-summary fetch
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m50s
The tickets adjust_quantity route fetches cart-summary from cart, which
calls back to events for ticket counts. Without committing first, the
callback misses the just-adjusted tickets.
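The ordering fix can be sketched with test doubles; all names here are illustrative, and the only point carried over from the commit is commit-before-fetch.

```python
import asyncio

class FakeSession:
    def __init__(self):
        self.committed = False
    async def commit(self):
        self.committed = True

class FakeTicket:
    quantity = 1

async def adjust_quantity(session, ticket, delta, fetch_fragment):
    """Commit BEFORE the cross-service fetch so the callback, which
    runs in a separate transaction, sees the just-adjusted rows."""
    ticket.quantity += delta
    await session.commit()  # make the change visible to other txns
    return await fetch_fragment("cart", "cart-summary")

async def fake_fetch(service, fragment):
    return f"<div>{service}:{fragment}</div>"

session, ticket = FakeSession(), FakeTicket()
html = asyncio.run(adjust_quantity(session, ticket, 1, fake_fetch))
```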

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 12:34:54 +00:00
f486e02413 Add orders to OAuth ALLOWED_CLIENTS
Checkout return from SumUp redirects to orders.rose-ash.com which needs
to authenticate via the account OAuth flow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 11:04:00 +00:00
69a0989b7a Fix events: return 404 for deleted/missing calendar entries
The before_request handler loaded the entry but didn't abort when it was
None, causing template UndefinedError when building URLs with entry.id.
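The guard shape, sketched without the framework: a stand-in exception plays the role of abort(404), and the lookup names are illustrative.

```python
class NotFound(Exception):
    """Stand-in for the framework's abort(404) in this sketch."""

def load_entry_or_404(entries: dict, entry_id):
    """before_request-style guard: raise 404 when the entry is deleted
    or missing, instead of handing None to templates that use entry.id."""
    entry = entries.get(entry_id)
    if entry is None:
        raise NotFound(entry_id)
    return entry
```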

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:41:40 +00:00
0c4682e4d7 Fix stale cart count: commit transaction before cross-service fragment fetch
The cart-mini fragment relies on cart calling back to events for calendar/
ticket counts. Without committing first, the callback runs in a separate
transaction and misses the just-added entry or ticket adjustment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:30:13 +00:00
bcac8e5adc Fix events: use cart-mini fragment instead of local cart template
Events was trying to render _types/cart/_mini.html locally, which only
exists in the cart service. Replace with fetch_fragment("cart", "cart-mini")
calls and add oob param support to the cart-mini fragment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:22:37 +00:00
e1b47e5b62 Refactor nav_entries_oob into composable shared macro with caller block
Replace the shared fallback template with a Jinja macro that each domain
(blog, events, market) can call with its own domain-specific nav items.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:13:38 +00:00
ae134907a4 Move _nav_entries_oob.html to shared templates instead of duplicating
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m16s
Used by both blog and events — belongs in shared/browser/templates
where the ChoiceLoader fallback resolves it for all apps.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:05:23 +00:00
db7342c7d2 Fix events: add missing _nav_entries_oob.html template
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 57s
Template exists in blog but was missing from events, causing
TemplateNotFound on calendar creation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 10:03:58 +00:00
94b1fca938 Fix entrypoint.sh permissions for new services
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 55s
Mark executable so bind-mounted dev volumes work correctly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:50:23 +00:00
96b02d93df Fix blog: add page_configs migration, fix stale cart reference in ghost_sync
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 43s
- Add 0003_add_page_configs.py migration to create table in db_blog
- Fix ghost_sync.py: fetch_data("cart", "page-config-ensure") → "blog"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:25:19 +00:00
fe34ea8e5b Fix market crash: remove stale toggle_product_like import
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 43s
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:19:49 +00:00
f2d040c323 CI: deploy swarm only on main, dev stack on all branches
- Trigger on all branches (not just main/decoupling)
- Swarm stack deploy gated behind main branch check
- Dev stack (docker compose) always deployed
- Added relations, likes, orders to build loop

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:18:33 +00:00
22460db450 Rewrite CLAUDE.md to reflect full monorepo
Replaces the old art-dag-only docs with comprehensive documentation
covering all web platform services, shared library, art DAG subsystem,
architecture patterns, auth, inter-service communication, and dev/deploy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:17:30 +00:00
1a74d811f7 Incorporate art-dag-mono repo into artdag/ subfolder
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m33s
Merges full history from art-dag/mono.git into the monorepo
under the artdag/ directory. Contains: core (DAG engine),
l1 (Celery rendering server), l2 (ActivityPub registry),
common (shared templates/middleware), client (CLI), test (e2e).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

git-subtree-dir: artdag
git-subtree-mainline: 1a179de547
git-subtree-split: 4c2e716558
2026-02-27 09:07:23 +00:00
1a179de547 Add s-expression architecture transformation plan
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m57s
Vision document for migrating rose-ash to an s-expression-based
architecture where pages, media renders, and LLM-generated content
share a unified DAG execution model with content-addressed caching
on IPFS/IPNS.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:05:02 +00:00
fa431ee13e Split cart into 4 microservices: relations, likes, orders, page-config→blog
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
Phase 1 - Relations service (internal): owns ContainerRelation, exposes
get-children data + attach/detach-child actions. Retargeted events, blog,
market callers from cart to relations.

Phase 2 - Likes service (internal): unified Like model replaces ProductLike
and PostLike with generic target_type/target_slug/target_id. Exposes
is-liked, liked-slugs, liked-ids data + toggle action.

Phase 3 - PageConfig → blog: moved ownership to blog with direct DB queries,
removed proxy endpoints from cart.

Phase 4 - Orders service (public): owns Order/OrderItem + SumUp checkout
flow. Cart checkout now delegates to orders via create-order action.
Webhook/return routes and reconciliation moved to orders.

Phase 5 - Infrastructure: docker-compose, deploy.sh, Dockerfiles updated
for all 3 new services. Added orders_url helper and factory model imports.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 09:03:33 +00:00
76a9436ea1 Fetch product data from market service in cart's add_to_cart route
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 1m54s
The global add_to_cart route was calling find_or_create_cart_item without
denormalized product data, leaving NULL columns. Now fetches product info
via fetch_data("market", "products-by-ids") before creating the cart item.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 22:27:52 +00:00
8f8bc4fad9 Move entry_associations to shared — fix events cross-app import
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m33s
entry_associations only uses HTTP fetch_data/call_action, no direct DB.
Events app imported it via ..post.services which doesn't exist in events.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 18:05:30 +00:00
giles
4c2e716558 Make JAX the primary fused-pipeline path for CPU/GPU parity
JAX via XLA produces identical output on CPU and GPU. Previously,
hand-written CUDA kernels were preferred on GPU, causing visual
differences vs the JAX CPU fallback. Now JAX is always tried first,
with the legacy CuPy/GPUFrame path as a fallback only when JAX is unavailable.

Also adds comprehensive CLAUDE.md for the monorepo.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 19:31:53 +00:00
giles
b788f1f778 Fix CPU HLS streaming (yuv420p) and opt-in middleware for fragments
- Add -pix_fmt yuv420p to multi_res_output.py libx264 path so browsers
  can decode CPU-encoded segments (was producing yuv444p / High 4:4:4).
- Switch silent auth check and coop fragment middlewares from opt-out
  blocklists to opt-in: only run for GET requests with Accept: text/html.
  Prevents unnecessary nav-tree/auth-menu HTTP calls on every HLS segment,
  IPFS proxy, and API request.
- Add opaque grant token verification to L1/L2 dependencies.
- Migrate client CLI to device authorization flow.
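The opt-in condition itself is small; sketched as a predicate with an illustrative name.

```python
def fragment_middleware_applies(method: str, accept: str) -> bool:
    """Opt-in check: only GET requests that accept text/html pay for
    the nav-tree/auth-menu fragment fetches; HLS segments, IPFS proxy
    traffic, and API calls skip them entirely."""
    return method == "GET" and "text/html" in accept
```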

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 18:33:53 +00:00
giles
4f49985cd5 Enable JAX compilation for streaming tasks
The StreamInterpreter was created without use_jax=True, so the JAX
compiler was never activated for production rendering. Desktop testing
had this enabled but the celery task path did not.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:39:29 +00:00
giles
07cae101ad Use JAX for fused pipeline fallback on CPU instead of GPUFrame path
When CUDA fused kernels aren't available, the fused-pipeline primitive
now uses JAX ops (jax_rotate, jax_scale, jax_shift_hue, etc.) instead
of falling back to one-by-one CuPy/GPUFrame operations. Legacy GPUFrame
path retained as last resort when JAX is also unavailable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:35:13 +00:00
giles
c53227d991 Fix multi-res HLS encoding on CPU: fall back to libx264 when NVENC unavailable
The hardcoded h264_nvenc encoder fails on CPU-only workers. Now uses
check_nvenc_available() to auto-detect and falls back to libx264.
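The auto-detect can be sketched by probing ffmpeg's encoder list; this is an assumption about how such a check might work, not the actual check_nvenc_available() implementation.

```python
import shutil
import subprocess

def pick_h264_encoder() -> str:
    """Return h264_nvenc when ffmpeg advertises it, else libx264,
    so CPU-only workers still encode successfully."""
    ffmpeg = shutil.which("ffmpeg")
    if ffmpeg is None:
        return "libx264"
    out = subprocess.run(
        [ffmpeg, "-hide_banner", "-encoders"],
        capture_output=True, text=True,
    ).stdout
    return "h264_nvenc" if "h264_nvenc" in out else "libx264"
```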

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:30:35 +00:00
giles
3bffb97ca1 Add JAX CPU to L1 worker image, CUDA JAX to GPU image
CPU workers can now run GPU-queue rendering tasks via JAX on CPU.
GPU image overrides with jax[cuda12] for full CUDA acceleration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:24:59 +00:00
giles
84e3ff3a91 Route GPU queue to CPU workers for CPU-based job processing
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 15:10:43 +00:00
giles
f1d80a1777 L2: verify auth state with account on each request
When user has artdag_session cookie, periodically (every 30s) check
account's /auth/internal/check-device endpoint. If account says the
device is no longer active (SSO logout), clear the cookie immediately.
Prevents stale sign-in after logging out from another app.
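The 30-second throttle reduces to a timestamp comparison; the function name is illustrative.

```python
CHECK_INTERVAL = 30.0  # seconds between device checks, per the commit

def device_check_due(last_checked: float, now: float) -> bool:
    """Throttle per-request auth verification against account to at
    most one upstream check-device call every 30 seconds."""
    return (now - last_checked) >= CHECK_INTERVAL
```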

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:42:09 +00:00
giles
0e14d2761a Fix L2 deployment: healthcheck, DB deadlock, CI image resolution
- Add /health endpoint (returns 200, skips auth middleware)
- Healthcheck now hits /health instead of / (which 302s to OAuth)
- Advisory lock in db.init_pool() prevents deadlock when 4 uvicorn
  workers race to run schema DDL
- CI: --resolve-image always on docker stack deploy to force re-pull
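The advisory-lock fix can be sketched with a recording connection double; the lock key, table DDL, and asyncpg-style execute signature are assumptions for illustration.

```python
import asyncio

SCHEMA_LOCK_KEY = 715517  # arbitrary app-wide advisory lock id (illustrative)

async def init_schema(conn):
    """Serialize schema DDL behind a Postgres advisory lock so several
    workers racing at startup run it one at a time instead of deadlocking."""
    await conn.execute("SELECT pg_advisory_lock($1)", SCHEMA_LOCK_KEY)
    try:
        await conn.execute(
            "CREATE TABLE IF NOT EXISTS nodes (id BIGSERIAL PRIMARY KEY)"
        )
    finally:
        await conn.execute("SELECT pg_advisory_unlock($1)", SCHEMA_LOCK_KEY)

class _RecordingConn:
    """Test double that records the SQL it is asked to run."""
    def __init__(self):
        self.calls = []
    async def execute(self, sql, *args):
        self.calls.append(sql)

conn = _RecordingConn()
asyncio.run(init_schema(conn))
```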

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:35:11 +00:00
giles
b45a2b6c10 Fix OAuth token exchange: use internal URL, add error logging
The server-to-server token exchange was hitting the external URL
(https://account.rose-ash.com/...) which can fail from inside Docker
due to DNS/hairpin NAT. Now uses INTERNAL_URL_ACCOUNT (already set in
both docker-compose files) for the POST. Adds logging at all three
failure points so silent redirects are diagnosable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 01:20:41 +00:00
giles
3dde4e79ab Add OAuth SSO, device ID, and silent auth to L2
- Replace L2's username/password auth with OAuth SSO via account.rose-ash.com
- Add device_id middleware (artdag_did cookie)
- Add silent auth check (prompt=none with 5-min cooldown)
- Add OAuth config settings and itsdangerous dependency

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:50:31 +00:00
giles
d8206c7b3b Fix gutter width: close header wrapper before dark main area
The max-w-screen-2xl wrapper now only constrains the header/nav,
matching blog layout. Dark content area goes full-width with its
own inner max-w constraint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:45:57 +00:00
giles
a5717ec4d4 Fall back to username for auth-menu email param
Existing sessions have email=None since the field was just added.
Username IS the email in Art-DAG (OAuth returns user.email as username).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:39:19 +00:00
giles
1b4e51c48c Add max-width gutters to match coop layout
Wrap page in max-w-screen-2xl mx-auto py-1 px-1 like blog.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:38:14 +00:00
giles
e7610bed7c Dark content area beneath coop header
Wrap content block in bg-dark-800 so all existing dark-themed
templates render correctly without per-file migration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:34:56 +00:00
giles
e58def135d Add deploy.sh and zap.sh scripts for manual deploys
Ported from old art-dag root, updated for monorepo paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:26:39 +00:00
giles
a13c361dee Configure monorepo build: unified CI, local deps, .dockerignore
- Dockerfiles use monorepo root as build context
- common/ and core/ installed as local packages (no git+https)
- Client tarball built from local client/ dir
- Unified CI with change detection: common/core -> rebuild both
- Per-repo CI workflows removed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 23:16:23 +00:00
giles
5173167f3e Import test/ 2026-02-24 23:10:04 +00:00
giles
c590f2e039 Squashed 'test/' content from commit f2edc20
git-subtree-dir: test
git-subtree-split: f2edc20cba865a6ef67ca807c2ed6cee8e6c2836
2026-02-24 23:10:04 +00:00
giles
1862fe96fc Import client (art-client) as client/ 2026-02-24 23:09:47 +00:00
giles
7784e6b2b0 Squashed 'client/' content from commit 4bb0841
git-subtree-dir: client
git-subtree-split: 4bb084154a4eb4b4f580d52d936cab05ef313ebb
2026-02-24 23:09:47 +00:00
giles
97d4d4ce21 Import core (art-dag) as core/ 2026-02-24 23:09:39 +00:00
giles
cc2dcbddd4 Squashed 'core/' content from commit 4957443
git-subtree-dir: core
git-subtree-split: 4957443184ae0eb6323635a90a19acffb3e01d07
2026-02-24 23:09:39 +00:00
giles
d77241602f Import common/ 2026-02-24 23:08:41 +00:00
giles
ea9015f65b Squashed 'common/' content from commit ff185b4
git-subtree-dir: common
git-subtree-split: ff185b42f0fa577446c3d00da3438dc148ee8102
2026-02-24 23:08:41 +00:00
giles
44694da76f Import L2 (activity-pub) as l2/ 2026-02-24 23:07:31 +00:00
giles
f54b0fb5da Squashed 'l2/' content from commit 79caa24
git-subtree-dir: l2
git-subtree-split: 79caa24e2129bf6e2cee819327d5622425306b67
2026-02-24 23:07:31 +00:00
giles
4dff4cfafb Import L1 (celery) as l1/ 2026-02-24 23:07:19 +00:00
giles
80c94ebea7 Squashed 'l1/' content from commit 670aa58
git-subtree-dir: l1
git-subtree-split: 670aa582df99e87fca7c247b949baf452e8c234f
2026-02-24 23:07:19 +00:00
giles
3ca1c14432 Initial monorepo commit 2026-02-24 23:04:48 +00:00
1364 changed files with 178615 additions and 20086 deletions


@@ -0,0 +1,177 @@
# Sexp Fragment Protocol: Component Defs Between Services
## Context
Fragment endpoints return raw sexp source (e.g., `(~blog-nav-wrapper :items ...)`). The consuming service embeds this in its page sexp, which the client evaluates. But blog-specific components like `~blog-nav-wrapper` are only in blog's `_COMPONENT_ENV` — not in market's. So market's `client_components_tag()` never sends them to the client, causing "Unknown component" errors.
The fix: transfer component definitions alongside fragments. Services tell the provider what they already have; the provider sends only what's missing. The consuming service registers received defs into its `_COMPONENT_ENV` so they're included in `client_components_tag()` output for the client.
## Approach: Structured Sexp Request/Response
Replace the current GET + `X-Fragment-Request` header protocol with POST + sexp body. This aligns with the vision in `docs/sexpr-internal-protocol-first.md`.
### Request format (POST body)
```scheme
(fragment-request
  :type "nav-tree"
  :params (:app-name "market" :path "/")
  :components (~blog-nav-wrapper ~blog-nav-item-link ~header-row-sx ...))
```
`:components` lists component names already in the consumer's `_COMPONENT_ENV`. Provider skips these.
### Response format
```scheme
(fragment-response
  :defs ((defcomp ~blog-nav-wrapper (&key ...) ...) (defcomp ~blog-nav-item-link ...))
  :content (~blog-nav-wrapper :items ...))
```
`:defs` contains only components the consumer doesn't have. `:content` is the fragment sexp (same as current response body).
## Changes
### 1. `shared/infrastructure/fragments.py` — Client side
**`fetch_fragment()`**: Switch from GET to POST with sexp body.
- Build request body using `sexp_call`:
```python
from shared.sexp.helpers import sexp_call, SexpExpr
from shared.sexp.jinja_bridge import _COMPONENT_ENV
comp_names = [k for k in _COMPONENT_ENV if k.startswith("~")]
body = sexp_call("fragment-request",
                 type=fragment_type,
                 params=params or {},
                 components=SexpExpr("(" + " ".join(comp_names) + ")"))
```
- POST to same URL, body as `text/sexp`, keep `X-Fragment-Request` header for backward compat
- Parse response: extract `:defs` and `:content` from the sexp response
- Register defs into `_COMPONENT_ENV` via `register_components()`
- Return `:content` wrapped as `SexpExpr`
**New helper `_parse_fragment_response(text)`**:
- `parse()` the response sexp
- Extract keyword args (reuse the keyword-extraction pattern from `evaluator.py`)
- Return `(defs_source, content_source)` tuple
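The keyword-extraction step above can be sketched as follows. This is a self-contained illustration that assumes a simplified parse representation (keywords arrive as `":name"` strings in a Python list); the real `shared.sexp` parser returns its own `Keyword` objects, so the actual helper will differ in types but not in shape:

```python
def extract_keywords(forms):
    """Collect :keyword value pairs from a parsed sexp body.

    Assumes keywords are ":name" strings (hypothetical representation;
    the real shared.sexp types use Keyword objects).
    """
    kwargs = {}
    it = iter(forms)
    for item in it:
        if isinstance(item, str) and item.startswith(":"):
            kwargs[item[1:]] = next(it, None)
    return kwargs


def parse_fragment_response(parsed):
    """Split a (fragment-response :defs ... :content ...) form."""
    head, *rest = parsed
    if head != "fragment-response":
        raise ValueError(f"unexpected form: {head}")
    kw = extract_keywords(rest)
    return kw.get("defs"), kw.get("content")
```

Either `:defs` or `:content` may be absent, in which case the corresponding tuple slot is `None`.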
### 2. `shared/sexp/helpers.py` — Response builder
**New `fragment_response(content, request_text)`**:
```python
def fragment_response(content: str, request_text: str) -> str:
    """Build a structured fragment response with missing component defs."""
    from .parser import parse, serialize
    from .types import Keyword, Component
    from .jinja_bridge import _COMPONENT_ENV

    # Parse request to get :components list
    req = parse(request_text)
    loaded = set()
    # extract :components keyword value
    ...
    # Diff against _COMPONENT_ENV, serialize missing defs
    defs_parts = []
    for key, val in _COMPONENT_ENV.items():
        if not isinstance(val, Component):
            continue
        if key in loaded or f"~{val.name}" in loaded:
            continue
        defs_parts.append(_serialize_defcomp(val))
    defs_sexp = "(" + " ".join(defs_parts) + ")" if defs_parts else "nil"
    return sexp_call("fragment-response",
                     defs=SexpExpr(defs_sexp),
                     content=SexpExpr(content))
```
### 3. Fragment endpoints — All services
**Generic change in each `bp/fragments/routes.py`**: Update the route handler to accept POST, read sexp body, use `fragment_response()` for the response.
The `get_fragment` handler becomes:
```python
@bp.route("/<fragment_type>", methods=["GET", "POST"])
async def get_fragment(fragment_type: str):
    handler = _handlers.get(fragment_type)
    if handler is None:
        return Response("", status=200, content_type="text/sexp")
    content = await handler()
    # Structured sexp protocol (POST with sexp body)
    request_body = await request.get_data(as_text=True)
    if request_body and request.content_type == "text/sexp":
        from shared.sexp.helpers import fragment_response
        body = fragment_response(content, request_body)
        return Response(body, status=200, content_type="text/sexp")
    # Legacy GET fallback
    return Response(content, status=200, content_type="text/sexp")
```
Since all fragment endpoints follow the identical `_handlers` + `get_fragment` pattern, we can extract this into a shared helper in `fragments.py` or a new `shared/infrastructure/fragment_endpoint.py`.
### 4. Extract shared fragment endpoint helper
To avoid touching every service's fragment routes, create a shared blueprint factory:
**`shared/infrastructure/fragment_endpoint.py`**:
```python
def create_fragment_blueprint(handlers: dict) -> Blueprint:
    """Create a fragment endpoint blueprint with sexp protocol support."""
    bp = Blueprint("fragments", __name__, url_prefix="/internal/fragments")

    @bp.before_request
    async def _require_fragment_header():
        if not request.headers.get(FRAGMENT_HEADER):
            return Response("", status=403)

    @bp.route("/<fragment_type>", methods=["GET", "POST"])
    async def get_fragment(fragment_type: str):
        handler = handlers.get(fragment_type)
        if handler is None:
            return Response("", status=200, content_type="text/sexp")
        content = await handler()
        # Sexp protocol: POST with structured request/response
        if request.method == "POST" and request.content_type == "text/sexp":
            request_body = await request.get_data(as_text=True)
            from shared.sexp.helpers import fragment_response
            body = fragment_response(content, request_body)
            return Response(body, status=200, content_type="text/sexp")
        return Response(content, status=200, content_type="text/sexp")

    return bp
```
Then each service's `register()` just returns `create_fragment_blueprint(_handlers)`. This is a small refactor since they all duplicate the same boilerplate today.
## Files to modify
| File | Change |
|------|--------|
| `shared/infrastructure/fragments.py` | POST sexp body, parse response, register defs |
| `shared/sexp/helpers.py` | `fragment_response()` builder, `_serialize_defcomp()` |
| `shared/infrastructure/fragment_endpoint.py` | **New** — shared blueprint factory |
| `blog/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `market/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `events/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `cart/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `account/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `orders/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `federation/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
| `relations/bp/fragments/routes.py` | Use `create_fragment_blueprint` |
## Verification
1. Start blog + market services: `./dev.sh blog market`
2. Load market page — should fetch nav-tree from blog with sexp protocol
3. Check market logs: no "Unknown component" errors
4. Inspect page source: `client_components_tag()` output includes `~blog-nav-wrapper` etc.
5. Cross-domain sx-get navigation (blog → market) works without reload
6. Run sexp tests: `python3 -m pytest shared/sexp/tests/ -x -q`
7. Second page load: `:components` list in request includes blog nav components, response `:defs` is empty


@@ -0,0 +1,325 @@
# Split Cart into Microservices
## Context
The cart app currently owns too much: CartItem, Order/OrderItem, PageConfig, ContainerRelation, plus all checkout/payment logic. We're splitting it into 4 pieces:
1. **Relations service** — internal only, owns ContainerRelation
2. **Likes service** — internal only, unified generic likes replacing ProductLike + PostLike
3. **PageConfig → blog** — move to blog (which already owns pages)
4. **Orders service** — public (orders.rose-ash.com), owns Order/OrderItem + SumUp checkout
After the split, cart becomes a thin CartItem CRUD + inbox service.
---
## Phase 1: Relations Service (internal only)
### 1.1 Scaffold `relations/`
Create minimal internal-only app (no templates, no context_fn):
| File | Notes |
|------|-------|
| `relations/__init__.py` | Empty |
| `relations/path_setup.py` | Copy from cart |
| `relations/app.py` | `create_base_app("relations")`, register data + actions BPs only |
| `relations/services/__init__.py` | Empty `register_domain_services()` |
| `relations/models/__init__.py` | `from shared.models.container_relation import ContainerRelation` |
| `relations/bp/__init__.py` | Export `register_data`, `register_actions` |
| `relations/bp/data/routes.py` | Move `get-children` handler from `cart/bp/data/routes.py:175-198` |
| `relations/bp/actions/routes.py` | Move `attach-child` + `detach-child` from `cart/bp/actions/routes.py:112-153` |
| `relations/alembic.ini` | Copy from cart, adjust path |
| `relations/alembic/env.py` | MODELS=`["shared.models.container_relation"]`, TABLES=`{"container_relations"}` |
| `relations/alembic/versions/0001_initial.py` | Create `container_relations` table |
| `relations/Dockerfile` | Follow cart pattern, `COPY relations/ ./` |
| `relations/entrypoint.sh` | Standard pattern, db=`db_relations` |
### 1.2 Retarget callers (`"cart"` → `"relations"`)
| File | Lines | Change |
|------|-------|--------|
| `events/bp/calendars/services/calendars.py` | 74, 111, 121 | `call_action("cart", ...)` → `call_action("relations", ...)` |
| `blog/bp/menu_items/services/menu_items.py` | 83, 137, 141, 157 | Same |
| `shared/services/market_impl.py` | 96, 109, 133 | Same |
### 1.3 Clean up cart
- Remove `get-children` from `cart/bp/data/routes.py:175-198`
- Remove `attach-child`, `detach-child` from `cart/bp/actions/routes.py:112-153`
- Remove `"shared.models.container_relation"` and `"container_relations"` from `cart/alembic/env.py`
---
## Phase 2: Likes Service (internal only)
### 2.1 New unified model
Single `likes` table in `db_likes`:
```python
class Like(Base):
    __tablename__ = "likes"

    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(nullable=False, index=True)
    target_type: Mapped[str] = mapped_column(String(32), nullable=False)  # "product" or "post"
    target_slug: Mapped[str | None] = mapped_column(String(255))  # for products
    target_id: Mapped[int | None]  # for posts
    # plus the standard created_at / updated_at / deleted_at timestamp columns

    __table_args__ = (
        UniqueConstraint("user_id", "target_type", "target_slug"),
        UniqueConstraint("user_id", "target_type", "target_id"),
        Index("ix_likes_target", "target_type", "target_slug"),
    )
```
Products use `target_type="product"`, `target_slug=slug`. Posts use `target_type="post"`, `target_id=post.id`.
### 2.2 Scaffold `likes/`
| File | Notes |
|------|-------|
| `likes/__init__.py` | Empty |
| `likes/path_setup.py` | Standard |
| `likes/app.py` | Internal-only, `create_base_app("likes")`, data + actions BPs |
| `likes/services/__init__.py` | Empty `register_domain_services()` |
| `likes/models/__init__.py` | Import Like |
| `likes/models/like.py` | Generic Like model (above) |
| `likes/bp/__init__.py` | Export register functions |
| `likes/bp/data/routes.py` | `is-liked`, `liked-slugs`, `liked-ids` |
| `likes/bp/actions/routes.py` | `toggle` action |
| `likes/alembic.ini` | Standard |
| `likes/alembic/env.py` | MODELS=`["likes.models.like"]`, TABLES=`{"likes"}` |
| `likes/alembic/versions/0001_initial.py` | Create `likes` table |
| `likes/Dockerfile` | Standard pattern |
| `likes/entrypoint.sh` | Standard, db=`db_likes` |
### 2.3 Data endpoints (`likes/bp/data/routes.py`)
- `is-liked`: params `user_id, target_type, target_slug/target_id` → `{"liked": bool}`
- `liked-slugs`: params `user_id, target_type` → `["slug1", "slug2"]`
- `liked-ids`: params `user_id, target_type` → `[1, 2, 3]`
### 2.4 Action endpoints (`likes/bp/actions/routes.py`)
- `toggle`: payload `{user_id, target_type, target_slug?, target_id?}` → `{"liked": bool}`
### 2.5 Retarget market app
**`market/bp/product/routes.py`** (like_toggle, ~line 119):
Replace `toggle_product_like(g.s, user_id, product_slug)` with:
```python
result = await call_action("likes", "toggle", payload={
    "user_id": user_id, "target_type": "product", "target_slug": product_slug,
})
liked = result["liked"]
```
**`market/bp/browse/services/db_backend.py`** (most complex):
- `db_product_full` / `db_product_full_id`: Replace `ProductLike` subquery with `fetch_data("likes", "is-liked", ...)`. Annotate `is_liked` after query.
- `db_products_nocounts` / `db_products_counts`: Fetch `liked_slugs` once via `fetch_data("likes", "liked-slugs", ...)`, filter `Product.slug.in_(liked_slugs)` for `?liked=true`, annotate `is_liked` post-query.
**Delete**: `toggle_product_like` from `market/bp/product/services/product_operations.py`
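The post-query `is_liked` annotation described above amounts to one set lookup per row. A sketch over plain dicts (assumed row shape; the real code annotates ORM/DTO objects):

```python
def annotate_is_liked(products: list[dict], liked_slugs: list[str]) -> list[dict]:
    """Mark each product with is_liked using the slugs returned by a
    single liked-slugs fetch, avoiding a per-row cross-service call."""
    liked = set(liked_slugs)
    for product in products:
        product["is_liked"] = product["slug"] in liked
    return products
```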
### 2.6 Retarget blog app
**`blog/bp/post/routes.py`** (like_toggle):
Replace `toggle_post_like(g.s, user_id, post_id)` with `call_action("likes", "toggle", payload={...})`.
**Delete**: `toggle_post_like` from `blog/bp/post/services/post_operations.py`
### 2.7 Remove old like models
- Remove `ProductLike` from `shared/models/market.py` (lines 118-131) + `Product.likes` relationship (lines 110-114)
- Remove `PostLike` from `shared/models/ghost_content.py` + `Post.likes` relationship
- Remove `product_likes` from market alembic TABLES
- Remove `post_likes` from blog alembic TABLES
---
## Phase 3: PageConfig → Blog
### 3.1 Replace blog proxy endpoints with direct DB queries
**`blog/bp/data/routes.py`** (lines 77-102): Replace the 3 proxy handlers that currently call `fetch_data("cart", ...)` with direct DB queries. Copy logic from `cart/bp/data/routes.py`:
- `page-config` (cart lines 114-134)
- `page-config-by-id` (cart lines 136-149)
- `page-configs-batch` (cart lines 151-172)
- `page-config-ensure` (cart lines 49-81) — add new
Also add the `_page_config_dict` helper (cart lines 203-213).
### 3.2 Move action to blog
**`blog/bp/actions/routes.py`** (~line 40): Replace `call_action("cart", "update-page-config", ...)` proxy with direct handler. Copy logic from `cart/bp/actions/routes.py:51-110`.
### 3.3 Blog callers become local
| File | Current | After |
|------|---------|-------|
| `blog/bp/post/admin/routes.py:34` | `fetch_data("cart", "page-config", ...)` | Direct DB query (blog now owns table) |
| `blog/bp/post/admin/routes.py:87,132` | `call_action("cart", "update-page-config", ...)` | Direct call to local handler |
| `blog/bp/post/services/markets.py:44` | `fetch_data("cart", "page-config", ...)` | Direct DB query |
| `blog/bp/blog/ghost_db.py:295` | `fetch_data("cart", "page-configs-batch", ...)` | Direct DB query |
### 3.4 Retarget cross-service callers (`"cart"` → `"blog"`)
| File | Change |
|------|--------|
| `cart/bp/cart/services/page_cart.py:181` | `fetch_data("cart", "page-configs-batch", ...)` → `fetch_data("blog", "page-configs-batch", ...)` |
| `cart/bp/cart/global_routes.py:274` | `fetch_data("cart", "page-config-by-id", ...)` → `fetch_data("blog", "page-config-by-id", ...)` |
(Note: `checkout.py:117` and `cart/app.py:177` already target `"blog"`)
### 3.5 Update blog alembic
**`blog/alembic/env.py`**: Add `"shared.models.page_config"` to MODELS and `"page_configs"` to TABLES.
### 3.6 Clean up cart
- Remove all `page-config*` handlers from `cart/bp/data/routes.py` (lines 49-172)
- Remove `update-page-config` from `cart/bp/actions/routes.py` (lines 50-110)
- Remove `"shared.models.page_config"` and `"page_configs"` from `cart/alembic/env.py`
---
## Phase 4: Orders Service (public, orders.rose-ash.com)
### 4.1 Scaffold `orders/`
| File | Notes |
|------|-------|
| `orders/__init__.py` | Empty |
| `orders/path_setup.py` | Standard |
| `orders/app.py` | Public app with `context_fn`, templates, fragments, page slug hydration |
| `orders/services/__init__.py` | `register_domain_services()` |
| `orders/models/__init__.py` | `from shared.models.order import Order, OrderItem` |
| `orders/bp/__init__.py` | Export all BPs |
| `orders/bp/order/` | Move from `cart/bp/order/` (single order: detail, pay, recheck) |
| `orders/bp/orders/` | Move from `cart/bp/orders/` (order list + pagination) |
| `orders/bp/checkout/routes.py` | Webhook + return routes from `cart/bp/cart/global_routes.py` |
| `orders/bp/data/routes.py` | Minimal |
| `orders/bp/actions/routes.py` | `create-order` action (called by cart during checkout) |
| `orders/bp/fragments/routes.py` | `account-nav-item` fragment (orders link) |
| `orders/templates/` | Move `_types/order/`, `_types/orders/`, checkout templates from cart |
| `orders/alembic.ini` | Standard |
| `orders/alembic/env.py` | MODELS=`["shared.models.order"]`, TABLES=`{"orders", "order_items"}` |
| `orders/alembic/versions/0001_initial.py` | Create `orders` + `order_items` tables |
| `orders/Dockerfile` | Standard, public-facing |
| `orders/entrypoint.sh` | Standard, db=`db_orders` |
### 4.2 Move checkout services to orders
**Move to `orders/services/`:**
- `checkout.py` — from `cart/bp/cart/services/checkout.py` (move: `create_order_from_cart`, `resolve_page_config`, `build_sumup_*`, `get_order_with_details`. Keep `find_or_create_cart_item` in cart.)
- `check_sumup_status.py` — from `cart/bp/cart/services/check_sumup_status.py`
**`clear_cart_for_order`** stays in cart as new action:
- Add `clear-cart-for-order` to `cart/bp/actions/routes.py`
- Orders calls `call_action("cart", "clear-cart-for-order", payload={user_id, session_id, page_post_id})`
### 4.3 `create-order` action endpoint (`orders/bp/actions/routes.py`)
Cart's `POST /checkout/` calls this:
```
Payload: {cart_items: [{product_id, product_title, product_slug, product_image,
                        product_special_price, product_regular_price, product_price_currency,
                        quantity, market_place_container_id}],
          calendar_entries, tickets, user_id, session_id,
          product_total, calendar_total, ticket_total,
          page_post_id, redirect_url, webhook_base_url}
Returns: {order_id, sumup_hosted_url, page_config_id, sumup_reference, description}
```
### 4.4 Refactor cart's checkout route
`cart/bp/cart/global_routes.py` `POST /checkout/`:
1. Load local cart data (get_cart, calendar entries, tickets, totals)
2. Serialize cart items to dicts
3. `result = await call_action("orders", "create-order", payload={...})`
4. Redirect to `result["sumup_hosted_url"]`
Same for page-scoped checkout in `cart/bp/cart/page_routes.py`.
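The four-step flow above can be sketched as a small async function. `call_action` is injected here so the sketch is self-contained; the real route uses the HMAC-signed `shared.infrastructure` helper, carries the full payload from section 4.3, and issues an HTTP redirect rather than returning the URL:

```python
import asyncio


async def checkout_flow(cart_items: list[dict], totals: dict, call_action) -> str:
    """Sketch of the refactored cart checkout: serialize local cart data,
    delegate order creation to the orders service, then redirect to SumUp."""
    payload = {"cart_items": cart_items, **totals}
    result = await call_action("orders", "create-order", payload=payload)
    # The real route redirects the browser to this hosted-checkout URL.
    return result["sumup_hosted_url"]
```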
### 4.5 Move webhook + return routes to orders
- `POST /checkout/webhook/<order_id>/` → `orders/bp/checkout/routes.py`
- `GET /checkout/return/<order_id>/` → `orders/bp/checkout/routes.py`
- SumUp redirect/webhook URLs must now point to orders.rose-ash.com
### 4.6 Move order list/detail routes
- `cart/bp/order/` → `orders/bp/order/`
- `cart/bp/orders/` → `orders/bp/orders/`
### 4.7 Move startup reconciliation
`_reconcile_pending_orders` from `cart/app.py:209-265` → `orders/app.py`
### 4.8 Clean up cart
- Remove `cart/bp/order/`, `cart/bp/orders/`
- Remove checkout webhook/return from `cart/bp/cart/global_routes.py`
- Remove `_reconcile_pending_orders` from `cart/app.py`
- Remove order templates from `cart/templates/`
- Remove `"shared.models.order"` and `"orders", "order_items"` from `cart/alembic/env.py`
---
## Phase 5: Infrastructure (applies to all new services)
### 5.1 docker-compose.yml
Add 3 new services (relations, likes, orders) with own DATABASE_URL (db_relations, db_likes, db_orders), own REDIS_URL (Redis DB 7, 8, 9).
Add to `x-app-env`:
```yaml
INTERNAL_URL_RELATIONS: http://relations:8000
INTERNAL_URL_LIKES: http://likes:8000
INTERNAL_URL_ORDERS: http://orders:8000
APP_URL_ORDERS: https://orders.rose-ash.com
```
### 5.2 docker-compose.dev.yml
Add all 3 services with dev volumes (ports 8008, 8009, 8010).
Add to `x-sibling-models` for all 3 new services.
### 5.3 deploy.sh
Add `relations likes orders` to APPS list.
### 5.4 Caddyfile (`/root/caddy/Caddyfile`)
Add only orders (public):
```
orders.rose-ash.com { reverse_proxy rose-ash-dev-orders-1:8000 }
```
### 5.5 shared/infrastructure/factory.py
Add to model import loop: `"relations.models", "likes.models", "orders.models"`
### 5.6 shared/infrastructure/urls.py
Add `orders_url(path)` helper.
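A sketch of the helper, assuming it mirrors the existing per-service URL helpers (env-var override with the production default; the exact signature in `shared/infrastructure/urls.py` may differ):

```python
import os


def orders_url(path: str) -> str:
    """Build an absolute URL on the orders app.

    Assumes the convention of the other URL helpers: APP_URL_ORDERS
    overrides the production base.
    """
    base = os.environ.get("APP_URL_ORDERS", "https://orders.rose-ash.com")
    return base.rstrip("/") + "/" + path.lstrip("/")
```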
### 5.7 All existing Dockerfiles
Add sibling model COPY lines for the 3 new services to every existing Dockerfile (blog, market, cart, events, federation, account).
### 5.8 CLAUDE.md
Update project structure and add notes about the new services.
---
## Data Migration (one-time, run before code switch)
1. `container_relations` from `db_cart` → `db_relations`
2. `product_likes` from `db_market` + `post_likes` from `db_blog` → `db_likes.likes`
3. `page_configs` from `db_cart` → `db_blog`
4. `orders` + `order_items` from `db_cart` → `db_orders`
Use `pg_dump`/`pg_restore` or direct SQL for migration.
---
## Post-Split Cart State
After all 4 phases, cart owns only:
- **Model**: CartItem (table in db_cart)
- **Alembic**: `cart_items` only
- **Data endpoints**: `cart-summary`, `cart-items`
- **Action endpoints**: `adopt-cart-for-user`, `clear-cart-for-order` (new)
- **Inbox handlers**: Add/Remove/Update `rose:CartItem`
- **Public routes**: cart overview, page cart, add-to-cart, quantity, delete
- **Fragments**: `cart-mini`
- **Checkout**: POST /checkout/ (creates order via `call_action("orders", "create-order")`, redirects to SumUp)
---
## Verification
1. **Relations**: Blog attach/detach marketplace to page; events attach/detach calendar
2. **Likes**: Toggle product like on market page; toggle post like on blog; `?liked=true` filter
3. **PageConfig**: Blog admin page config update; cart checkout resolves page config from blog
4. **Orders**: Add to cart → checkout → SumUp redirect → webhook → order paid; order list/detail on orders.rose-ash.com
5. No remaining `call_action("cart", "attach-child|detach-child|update-page-config")`
6. No remaining `fetch_data("cart", "page-config*|get-children")`
7. Cart alembic only manages `cart_items` table


@@ -2,7 +2,7 @@ name: Build and Deploy
on:
push:
branches: [main, decoupling]
branches: ['**']
env:
REGISTRY: registry.rose-ash.com:5000
@@ -58,13 +58,22 @@ jobs:
fi
fi
for app in blog market cart events federation account; do
# Map compose service name to source directory
app_dir() {
case \"\$1\" in
sx_docs) echo \"sx\" ;;
*) echo \"\$1\" ;;
esac
}
for app in blog market cart events federation account relations likes orders test sx_docs; do
dir=\$(app_dir \"\$app\")
IMAGE_EXISTS=\$(docker image ls -q ${{ env.REGISTRY }}/\$app:latest 2>/dev/null)
if [ \"\$REBUILD_ALL\" = true ] || echo \"\$CHANGED\" | grep -q \"^\$app/\" || [ -z \"\$IMAGE_EXISTS\" ]; then
if [ \"\$REBUILD_ALL\" = true ] || echo \"\$CHANGED\" | grep -q \"^\$dir/\" || [ -z \"\$IMAGE_EXISTS\" ]; then
echo \"Building \$app...\"
docker build \
--build-arg CACHEBUST=\$(date +%s) \
-f \$app/Dockerfile \
-f \$dir/Dockerfile \
-t ${{ env.REGISTRY }}/\$app:latest \
-t ${{ env.REGISTRY }}/\$app:${{ github.sha }} \
.
@@ -75,13 +84,18 @@ jobs:
fi
done
source .env
docker stack deploy -c docker-compose.yml rose-ash
echo 'Waiting for swarm services to update...'
sleep 10
docker stack services rose-ash
# Deploy swarm stack only on main branch
if [ '${{ github.ref_name }}' = 'main' ]; then
source .env
docker stack deploy -c docker-compose.yml rose-ash
echo 'Waiting for swarm services to update...'
sleep 10
docker stack services rose-ash
else
echo 'Skipping swarm deploy (branch: ${{ github.ref_name }})'
fi
# Deploy dev stack (bind-mounted source + auto-reload)
# Dev stack always deployed (bind-mounted source + auto-reload)
echo 'Deploying dev stack...'
docker compose -p rose-ash-dev -f docker-compose.yml -f docker-compose.dev.yml up -d
echo 'Dev stack deployed'

CLAUDE.md

@@ -1,6 +1,6 @@
# Art DAG Monorepo
# Rose Ash Monorepo
Federated content-addressed DAG execution engine for distributed media processing with ActivityPub ownership and provenance tracking.
Cooperative web platform: federated content, commerce, events, and media processing. Each domain runs as an independent Quart microservice with its own database, communicating via HMAC-signed internal HTTP and ActivityPub events.
## Deployment
@@ -9,71 +9,134 @@ Federated content-addressed DAG execution engine for distributed media processin
## Project Structure
```
core/ # DAG engine (artdag package) - nodes, effects, analysis, planning
l1/ # L1 Celery rendering server (FastAPI + Celery + Redis + PostgreSQL)
l2/ # L2 ActivityPub registry (FastAPI + PostgreSQL)
common/ # Shared templates, middleware, models (artdag_common package)
client/ # CLI client
test/ # Integration & e2e tests
blog/ # Content management, Ghost CMS sync, navigation, WYSIWYG editor
market/ # Product catalog, marketplace pages, web scraping
cart/ # Shopping cart CRUD, checkout (delegates order creation to orders)
events/ # Calendar & event management, ticketing
federation/ # ActivityPub social hub, user profiles
account/ # OAuth2 authorization server, user dashboard, membership
orders/ # Order history, SumUp payment/webhook handling, reconciliation
relations/ # (internal) Cross-domain parent/child relationship tracking
likes/ # (internal) Unified like/favourite tracking across domains
shared/ # Shared library: models, infrastructure, templates, static assets
artdag/ # Art DAG — media processing engine (separate codebase, see below)
```
### Shared Library (`shared/`)
```
shared/
models/ # Canonical SQLAlchemy ORM models for all domains
db/ # Async session management, per-domain DB support, alembic helpers
infrastructure/ # App factory, OAuth, ActivityPub, fragments, internal auth, Jinja
services/ # Domain service implementations + DI registry
contracts/ # DTOs and service protocols
browser/ # Middleware, Redis caching, CSRF, error handlers
events/ # Activity bus + background processor (AP-shaped events)
config/ # YAML config loading (frozen/readonly)
static/ # Shared CSS, JS, images
templates/ # Base HTML layouts, partials (inherited by all apps)
```
### Art DAG (`artdag/`)
Federated content-addressed DAG execution engine for distributed media processing.
```
artdag/
core/ # DAG engine (artdag package) — nodes, effects, analysis, planning
l1/ # L1 Celery rendering server (FastAPI + Celery + Redis + PostgreSQL)
l2/ # L2 ActivityPub registry (FastAPI + PostgreSQL)
common/ # Shared templates, middleware, models (artdag_common package)
client/ # CLI client
test/ # Integration & e2e tests
```
## Tech Stack
Python 3.11+, FastAPI, Celery, Redis, PostgreSQL (asyncpg for L1), SQLAlchemy, Pydantic, JAX (CPU/GPU), IPFS/Kubo, Docker Swarm, HTMX + Jinja2 for web UI.
**Web platform:** Python 3.11+, Quart (async Flask), SQLAlchemy (asyncpg), Jinja2, HTMX, PostgreSQL, Redis, Docker Swarm, Hypercorn.
**Art DAG:** FastAPI, Celery, JAX (CPU/GPU), IPFS/Kubo, Pydantic.
## Key Commands
### Testing
### Development
```bash
cd l1 && pytest tests/ # L1 unit tests
cd core && pytest tests/ # Core unit tests
cd test && python run.py # Full integration pipeline
./dev.sh # Start all services + infra (db, redis, pgbouncer)
./dev.sh blog market # Start specific services + infra
./dev.sh --build blog # Rebuild image then start
./dev.sh down # Stop everything
./dev.sh logs blog # Tail service logs
```
- pytest uses `asyncio_mode = "auto"` for async tests
- Test files: `test_*.py`, fixtures in `conftest.py`
### Linting & Type Checking (L1)
### Deployment
```bash
cd l1 && ruff check . # Lint (E, F, I, UP rules)
cd l1 && mypy app/types.py app/routers/recipes.py tests/
./deploy.sh # Auto-detect changed apps, build + push + restart
./deploy.sh blog market # Deploy specific apps
./deploy.sh --all # Deploy everything
```
- Line length: 100 chars (E501 ignored)
- Mypy: strict on `app/types.py`, `app/routers/recipes.py`, `tests/`; gradual elsewhere
- Mypy ignores imports for: celery, redis, artdag, artdag_common, ipfs_client
### Docker
### Art DAG
```bash
docker build -f l1/Dockerfile -t celery-l1-server:latest .
docker build -f l1/Dockerfile.gpu -t celery-l1-gpu:latest .
docker build -f l2/Dockerfile -t l2-server:latest .
./deploy.sh # Build, push, deploy stacks
cd artdag/l1 && pytest tests/ # L1 unit tests
cd artdag/core && pytest tests/ # Core unit tests
cd artdag/test && python run.py # Full integration pipeline
cd artdag/l1 && ruff check . # Lint
cd artdag/l1 && mypy app/types.py app/routers/recipes.py tests/
```
## Architecture Patterns
### Web Platform
- **App factory:** `create_base_app(name, context_fn, before_request_fns, domain_services_fn)` in `shared/infrastructure/factory.py` — creates the Quart app with DB, Redis, CSRF, OAuth, AP, and session management
- **Blueprint pattern:** Each blueprint exposes `register() -> Blueprint`; handlers stored in a `_handlers` dict
- **Services pattern:** Business logic in `app/services/`, API endpoints in `app/routers/`
- **Types module:** Pydantic models and TypedDicts in `app/types.py`
- **Per-service database:** Each service has its own PostgreSQL DB via PgBouncer; cross-domain data is fetched via HTTP
- **Alembic per-service:** Each service declares `MODELS` and `TABLES` in `alembic/env.py` and delegates to `shared.db.alembic_env.run_alembic()`
- **Inter-service reads:** `fetch_data(service, query, params)` → GET `/internal/data/{query}` (HMAC-signed, 3s timeout)
- **Inter-service writes:** `call_action(service, action, payload)` → POST `/internal/actions/{action}` (HMAC-signed, 5s timeout)
- **Inter-service AP inbox:** `send_internal_activity()` → POST `/internal/inbox` (HMAC-signed, AP-shaped activities for cross-service writes with denormalized data)
- **Fragments:** HTML fragments fetched cross-service via `fetch_fragments()` for composing shared UI (nav, cart mini, auth menu)
- **Soft deletes:** Models use a `deleted_at` column
- **Context processors:** Each app provides its own `context_fn` that assembles template context from the local DB plus cross-service fragments
### Auth
- **Account** is the OAuth2 authorization server; all other apps are OAuth clients
- Per-app first-party session cookies (Safari ITP compatible), synchronized via device ID
- Grant verification: apps check grant validity against the account DB (cached in Redis)
- Silent SSO: `prompt=none` OAuth flow for automatic cross-app login
- ActivityPub: RSA signatures, per-app virtual actor projections sharing the same keypair
### Art DAG
- **3-Phase Execution:** Analyze → Plan → Execute (Celery tasks in `artdag/l1/tasks/`, decorated with `@app.task`)
- **Content-Addressed:** All data identified by SHA3-256 hashes or IPFS CIDs
- **S-Expression Effects:** Composable effect language in `artdag/l1/sexp_effects/`
- **Storage:** Local filesystem, S3, or IPFS backends (`storage_providers.py`)
- **Auth:** L1 ↔ L2 scoped JWT tokens (no shared secrets); L2 password + OAuth SSO, token revocation in Redis (30-day expiry)
- **Federation:** ActivityPub RSA signatures (`core/artdag/activitypub/`)
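The HMAC-signed internal endpoints share one request-signing shape. Below is a minimal sketch of how such signing and verification could work, with hypothetical header names and an inline secret; the real `fetch_data()` / `call_action()` helpers live in `shared/infrastructure/` and may differ in detail:

```python
import hashlib
import hmac
import json
import time

SECRET = b"internal-hmac-secret"  # hypothetical; real services read this from config

def sign_internal_request(service: str, query: str, params: dict) -> dict:
    """Client side: build headers for GET /internal/data/{query} (illustrative only)."""
    ts = str(int(time.time()))
    # Canonical JSON so client and server hash identical bytes
    body = json.dumps(params, sort_keys=True, separators=(",", ":"))
    message = f"{service}:{query}:{ts}:{body}".encode()
    sig = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Internal-Timestamp": ts, "X-Internal-Signature": sig}

def verify_internal_request(service: str, query: str, params: dict,
                            headers: dict, max_age: int = 5) -> bool:
    """Server side: reject stale timestamps, then compare digests in constant time."""
    ts = headers.get("X-Internal-Timestamp", "")
    if not ts.isdigit() or time.time() - int(ts) > max_age:
        return False
    body = json.dumps(params, sort_keys=True, separators=(",", ":"))
    message = f"{service}:{query}:{ts}:{body}".encode()
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("X-Internal-Signature", ""))
```

The timestamp bound limits replay, and `hmac.compare_digest` avoids timing side channels; tampering with any signed field invalidates the signature.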
## Domains
| Service | Public URL | Dev Port |
|---------|-----------|----------|
| blog | blog.rose-ash.com | 8001 |
| market | market.rose-ash.com | 8002 |
| cart | cart.rose-ash.com | 8003 |
| events | events.rose-ash.com | 8004 |
| federation | federation.rose-ash.com | 8005 |
| account | account.rose-ash.com | 8006 |
| relations | (internal only) | 8008 |
| likes | (internal only) | 8009 |
| orders | orders.rose-ash.com | 8010 |
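In development the same services run on localhost ports. A small hypothetical helper showing the resolution rule implied by the table (the real mapping comes from `app_urls` in `_config/app-config.yaml`):

```python
# Dev-port table from the docs; relations and likes have no public URL.
DEV_PORTS = {
    "blog": 8001, "market": 8002, "cart": 8003, "events": 8004,
    "federation": 8005, "account": 8006, "relations": 8008,
    "likes": 8009, "orders": 8010,
}
INTERNAL_ONLY = {"relations", "likes"}

def base_url(service: str, dev: bool = False) -> str:
    """Resolve a service base URL: localhost port in dev, public domain in prod."""
    if service not in DEV_PORTS:
        raise KeyError(f"unknown service: {service}")
    if dev:
        return f"http://localhost:{DEV_PORTS[service]}"
    if service in INTERNAL_ONLY:
        raise ValueError(f"{service} is internal-only and has no public URL")
    return f"https://{service}.rose-ash.com"
```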
## Key Config Files
- `l1/pyproject.toml` — mypy, pytest, ruff config for L1
- `l1/celery_app.py` — Celery initialization
- `l1/database.py` / `l2/db.py` — SQLAlchemy models
- `l1/docker-compose.yml` / `l2/docker-compose.yml` — Swarm stacks
- `docker-compose.yml` / `docker-compose.dev.yml` — service definitions, env vars, volumes
- `deploy.sh` / `dev.sh` — deployment and development scripts
- `shared/infrastructure/factory.py` — app factory (all services use this)
- `{service}/alembic/env.py` — per-service migration config
- `_config/app-config.yaml` — runtime YAML config (mounted into containers)
## Tools

@@ -16,6 +16,9 @@ app_urls:
events: "https://events.rose-ash.com"
federation: "https://federation.rose-ash.com"
account: "https://account.rose-ash.com"
sx: "https://sx.rose-ash.com"
test: "https://test.rose-ash.com"
orders: "https://orders.rose-ash.com"
cache:
fs_root: /app/_snapshot # <- absolute path to your snapshot dir
categories:

@@ -38,6 +38,12 @@ COPY events/__init__.py ./events/__init__.py
COPY events/models/ ./events/models/
COPY federation/__init__.py ./federation/__init__.py
COPY federation/models/ ./federation/models/
COPY relations/__init__.py ./relations/__init__.py
COPY relations/models/ ./relations/models/
COPY likes/__init__.py ./likes/__init__.py
COPY likes/models/ ./likes/models/
COPY orders/__init__.py ./orders/__init__.py
COPY orders/models/ ./orders/models/
# ---------- Runtime setup ----------
COPY account/entrypoint.sh /usr/local/bin/entrypoint.sh

@@ -0,0 +1,43 @@
"""Add author profile fields to users table.
Merges Ghost Author profile data into User — bio, profile_image, cover_image,
website, location, facebook, twitter, slug, is_admin.
Revision ID: acct_0003
Revises: acct_0002
"""
from alembic import op
import sqlalchemy as sa
revision = "acct_0003"
down_revision = "acct_0002"
branch_labels = None
depends_on = None
def upgrade():
op.add_column("users", sa.Column("slug", sa.String(191), nullable=True))
op.add_column("users", sa.Column("bio", sa.Text(), nullable=True))
op.add_column("users", sa.Column("profile_image", sa.Text(), nullable=True))
op.add_column("users", sa.Column("cover_image", sa.Text(), nullable=True))
op.add_column("users", sa.Column("website", sa.Text(), nullable=True))
op.add_column("users", sa.Column("location", sa.Text(), nullable=True))
op.add_column("users", sa.Column("facebook", sa.Text(), nullable=True))
op.add_column("users", sa.Column("twitter", sa.Text(), nullable=True))
op.add_column("users", sa.Column(
"is_admin", sa.Boolean(), nullable=False, server_default=sa.text("false"),
))
op.create_index("ix_users_slug", "users", ["slug"], unique=True)
def downgrade():
op.drop_index("ix_users_slug")
op.drop_column("users", "is_admin")
op.drop_column("users", "twitter")
op.drop_column("users", "facebook")
op.drop_column("users", "location")
op.drop_column("users", "website")
op.drop_column("users", "cover_image")
op.drop_column("users", "profile_image")
op.drop_column("users", "bio")
op.drop_column("users", "slug")

@@ -1,5 +1,6 @@
from __future__ import annotations
import path_setup # noqa: F401 # adds shared/ to sys.path
import sx.sx_components as sx_components # noqa: F401 # ensure Hypercorn --reload watches this file
from pathlib import Path
from quart import g, request
@@ -7,7 +8,7 @@ from jinja2 import FileSystemLoader, ChoiceLoader
from shared.infrastructure.factory import create_base_app
from bp import register_account_bp, register_auth_bp, register_fragments
from bp import register_account_bp, register_auth_bp
async def account_context() -> dict:
@@ -43,14 +44,14 @@ async def account_context() -> dict:
if ident["session_id"] is not None:
cart_params["session_id"] = ident["session_id"]
cart_mini_html, auth_menu_html, nav_tree_html = await fetch_fragments([
cart_mini, auth_menu, nav_tree = await fetch_fragments([
("cart", "cart-mini", cart_params or None),
("account", "auth-menu", {"email": user.email} if user else None),
("blog", "nav-tree", {"app_name": "account", "path": request.path}),
])
ctx["cart_mini_html"] = cart_mini_html
ctx["auth_menu_html"] = auth_menu_html
ctx["nav_tree_html"] = nav_tree_html
ctx["cart_mini"] = cart_mini
ctx["auth_menu"] = auth_menu
ctx["nav_tree"] = nav_tree
return ctx
@@ -71,10 +72,22 @@ def create_app() -> "Quart":
app.jinja_loader,
])
# Setup defpage routes
import sx.sx_components # noqa: F811 — ensure components loaded
from sxc.pages import setup_account_pages
setup_account_pages()
# --- blueprints ---
app.register_blueprint(register_auth_bp())
app.register_blueprint(register_account_bp())
app.register_blueprint(register_fragments())
account_bp = register_account_bp()
app.register_blueprint(account_bp)
from shared.sx.pages import auto_mount_pages
auto_mount_pages(app, "account")
from shared.sx.handlers import auto_mount_fragment_handlers
auto_mount_fragment_handlers(app, "account")
from bp.actions.routes import register as register_actions
app.register_blueprint(register_actions())

@@ -1,3 +1,2 @@
from .account.routes import register as register_account_bp
from .auth.routes import register as register_auth_bp
from .fragments import register_fragments

@@ -1,107 +1,34 @@
"""Account pages blueprint.
Moved from federation/bp/auth — newsletters, fragment pages (tickets, bookings).
Mounted at root /.
Mounted at root /. GET page handlers replaced by defpage.
"""
from __future__ import annotations
from quart import (
Blueprint,
request,
render_template,
make_response,
redirect,
g,
)
from sqlalchemy import select
from shared.models import UserNewsletter
from shared.models.ghost_membership_entities import GhostNewsletter
from shared.infrastructure.urls import login_url
from shared.infrastructure.fragments import fetch_fragment, fetch_fragments
oob = {
"oob_extends": "oob_elements.html",
"extends": "_types/root/_index.html",
"parent_id": "root-header-child",
"child_id": "auth-header-child",
"header": "_types/auth/header/_header.html",
"parent_header": "_types/root/header/_header.html",
"nav": "_types/auth/_nav.html",
"main": "_types/auth/_main_panel.html",
}
from shared.infrastructure.fragments import fetch_fragments
from shared.sx.helpers import sx_response
def register(url_prefix="/"):
account_bp = Blueprint("account", __name__, url_prefix=url_prefix)
@account_bp.context_processor
async def context():
@account_bp.before_request
async def _prepare_page_data():
"""Fetch account_nav fragments for layout."""
events_nav, cart_nav, artdag_nav = await fetch_fragments([
("events", "account-nav-item", {}),
("cart", "account-nav-item", {}),
("artdag", "nav-item", {}),
], required=False)
return {"oob": oob, "account_nav_html": events_nav + cart_nav + artdag_nav}
@account_bp.get("/")
async def account():
from shared.browser.app.utils.htmx import is_htmx_request
if not g.get("user"):
return redirect(login_url("/"))
if not is_htmx_request():
html = await render_template("_types/auth/index.html")
else:
html = await render_template("_types/auth/_oob_elements.html")
return await make_response(html)
@account_bp.get("/newsletters/")
async def newsletters():
from shared.browser.app.utils.htmx import is_htmx_request
if not g.get("user"):
return redirect(login_url("/newsletters/"))
result = await g.s.execute(
select(GhostNewsletter).order_by(GhostNewsletter.name)
)
all_newsletters = result.scalars().all()
sub_result = await g.s.execute(
select(UserNewsletter).where(
UserNewsletter.user_id == g.user.id,
)
)
user_subs = {un.newsletter_id: un for un in sub_result.scalars().all()}
newsletter_list = []
for nl in all_newsletters:
un = user_subs.get(nl.id)
newsletter_list.append({
"newsletter": nl,
"un": un,
"subscribed": un.subscribed if un else False,
})
nl_oob = {**oob, "main": "_types/auth/_newsletters_panel.html"}
if not is_htmx_request():
html = await render_template(
"_types/auth/index.html",
oob=nl_oob,
newsletter_list=newsletter_list,
)
else:
html = await render_template(
"_types/auth/_oob_elements.html",
oob=nl_oob,
newsletter_list=newsletter_list,
)
return await make_response(html)
g.account_nav = events_nav + cart_nav + artdag_nav
@account_bp.post("/newsletter/<int:newsletter_id>/toggle/")
async def toggle_newsletter(newsletter_id: int):
@@ -128,42 +55,7 @@ def register(url_prefix="/"):
await g.s.flush()
return await render_template(
"_types/auth/_newsletter_toggle.html",
un=un,
)
# Catch-all for fragment-provided pages — must be last
@account_bp.get("/<slug>/")
async def fragment_page(slug):
from shared.browser.app.utils.htmx import is_htmx_request
from quart import abort
if not g.get("user"):
return redirect(login_url(f"/{slug}/"))
fragment_html = await fetch_fragment(
"events", "account-page",
params={"slug": slug, "user_id": str(g.user.id)},
)
if not fragment_html:
abort(404)
w_oob = {**oob, "main": "_types/auth/_fragment_panel.html"}
if not is_htmx_request():
html = await render_template(
"_types/auth/index.html",
oob=w_oob,
page_fragment_html=fragment_html,
)
else:
html = await render_template(
"_types/auth/_oob_elements.html",
oob=w_oob,
page_fragment_html=fragment_html,
)
return await make_response(html)
from sx.sx_components import render_newsletter_toggle
return sx_response(render_newsletter_toggle(un))
return account_bp

@@ -12,7 +12,6 @@ from datetime import datetime, timezone, timedelta
from quart import (
Blueprint,
request,
render_template,
redirect,
url_for,
session as qsession,
@@ -45,7 +44,7 @@ from .services import (
SESSION_USER_KEY = "uid"
ACCOUNT_SESSION_KEY = "account_sid"
ALLOWED_CLIENTS = {"blog", "market", "cart", "events", "federation", "artdag", "artdag_l2"}
ALLOWED_CLIENTS = {"blog", "market", "cart", "events", "federation", "orders", "test", "sx", "artdag", "artdag_l2"}
def register(url_prefix="/auth"):
@@ -275,7 +274,11 @@ def register(url_prefix="/auth"):
if g.get("user"):
redirect_url = pop_login_redirect_target()
return redirect(redirect_url)
return await render_template("auth/login.html")
from shared.sx.page import get_template_context
from sx.sx_components import render_login_page
ctx = await get_template_context()
return await render_login_page(ctx)
@rate_limit(
key_func=lambda: request.headers.get("X-Forwarded-For", request.remote_addr),
@@ -288,28 +291,20 @@ def register(url_prefix="/auth"):
is_valid, email = validate_email(email_input)
if not is_valid:
return (
await render_template(
"auth/login.html",
error="Please enter a valid email address.",
email=email_input,
),
400,
)
from shared.sx.page import get_template_context
from sx.sx_components import render_login_page
ctx = await get_template_context(error="Please enter a valid email address.", email=email_input)
return await render_login_page(ctx), 400
# Per-email rate limit: 5 magic links per 15 minutes
from shared.infrastructure.rate_limit import _check_rate_limit
try:
allowed, _ = await _check_rate_limit(f"magic_email:{email}", 5, 900)
if not allowed:
return (
await render_template(
"auth/check_email.html",
email=email,
email_error=None,
),
200,
)
from shared.sx.page import get_template_context
from sx.sx_components import render_check_email_page
ctx = await get_template_context(email=email, email_error=None)
return await render_check_email_page(ctx), 200
except Exception:
pass # Redis down — allow the request
@@ -329,11 +324,10 @@ def register(url_prefix="/auth"):
"Please try again in a moment."
)
return await render_template(
"auth/check_email.html",
email=email,
email_error=email_error,
)
from shared.sx.page import get_template_context
from sx.sx_components import render_check_email_page
ctx = await get_template_context(email=email, email_error=email_error)
return await render_check_email_page(ctx)
@auth_bp.get("/magic/<token>/")
async def magic(token: str):
@@ -346,20 +340,17 @@ def register(url_prefix="/auth"):
user, error = await validate_magic_link(s, token)
if error:
return (
await render_template("auth/login.html", error=error),
400,
)
from shared.sx.page import get_template_context
from sx.sx_components import render_login_page
ctx = await get_template_context(error=error)
return await render_login_page(ctx), 400
user_id = user.id
except Exception:
return (
await render_template(
"auth/login.html",
error="Could not sign you in right now. Please try again.",
),
502,
)
from shared.sx.page import get_template_context
from sx.sx_components import render_login_page
ctx = await get_template_context(error="Could not sign you in right now. Please try again.")
return await render_login_page(ctx), 502
assert user_id is not None
@@ -688,8 +679,11 @@ def register(url_prefix="/auth"):
@auth_bp.get("/device/")
async def device_form():
"""Browser form where user enters the code displayed in terminal."""
from shared.sx.page import get_template_context
from sx.sx_components import render_device_page
code = request.args.get("code", "")
return await render_template("auth/device.html", code=code)
ctx = await get_template_context(code=code)
return await render_device_page(ctx)
@auth_bp.post("/device")
@auth_bp.post("/device/")
@@ -699,22 +693,20 @@ def register(url_prefix="/auth"):
user_code = (form.get("code") or "").strip().replace("-", "").upper()
if not user_code or len(user_code) != 8:
return await render_template(
"auth/device.html",
error="Please enter a valid 8-character code.",
code=form.get("code", ""),
), 400
from shared.sx.page import get_template_context
from sx.sx_components import render_device_page
ctx = await get_template_context(error="Please enter a valid 8-character code.", code=form.get("code", ""))
return await render_device_page(ctx), 400
from shared.infrastructure.auth_redis import get_auth_redis
r = await get_auth_redis()
device_code = await r.get(f"devflow_uc:{user_code}")
if not device_code:
return await render_template(
"auth/device.html",
error="Code not found or expired. Please try again.",
code=form.get("code", ""),
), 400
from shared.sx.page import get_template_context
from sx.sx_components import render_device_page
ctx = await get_template_context(error="Code not found or expired. Please try again.", code=form.get("code", ""))
return await render_device_page(ctx), 400
if isinstance(device_code, bytes):
device_code = device_code.decode()
@@ -728,17 +720,23 @@ def register(url_prefix="/auth"):
# Logged in — approve immediately
ok = await _approve_device(device_code, g.user)
if not ok:
return await render_template(
"auth/device.html",
error="Code expired or already used.",
), 400
from shared.sx.page import get_template_context
from sx.sx_components import render_device_page
ctx = await get_template_context(error="Code expired or already used.")
return await render_device_page(ctx), 400
return await render_template("auth/device_approved.html")
from shared.sx.page import get_template_context
from sx.sx_components import render_device_approved_page
ctx = await get_template_context()
return await render_device_approved_page(ctx)
@auth_bp.get("/device/complete")
@auth_bp.get("/device/complete/")
async def device_complete():
"""Post-login redirect — completes approval after magic link auth."""
from shared.sx.page import get_template_context
from sx.sx_components import render_device_page, render_device_approved_page
device_code = request.args.get("code", "")
if not device_code:
@@ -750,11 +748,12 @@ def register(url_prefix="/auth"):
ok = await _approve_device(device_code, g.user)
if not ok:
return await render_template(
"auth/device.html",
ctx = await get_template_context(
error="Code expired or already used. Please start the login process again in your terminal.",
), 400
)
return await render_device_page(ctx), 400
return await render_template("auth/device_approved.html")
ctx = await get_template_context()
return await render_device_approved_page(ctx)
return auth_bp

@@ -1 +0,0 @@
from .routes import register as register_fragments

@@ -1,52 +0,0 @@
"""Account app fragment endpoints.
Exposes HTML fragments at ``/internal/fragments/<type>`` for consumption
by other coop apps via the fragment client.
Fragments:
auth-menu Desktop + mobile auth menu (sign-in or user link)
"""
from __future__ import annotations
from quart import Blueprint, Response, request, render_template
from shared.infrastructure.fragments import FRAGMENT_HEADER
def register():
bp = Blueprint("fragments", __name__, url_prefix="/internal/fragments")
# ---------------------------------------------------------------
# Fragment handlers
# ---------------------------------------------------------------
async def _auth_menu():
user_email = request.args.get("email", "")
return await render_template(
"fragments/auth_menu.html",
user_email=user_email,
)
_handlers = {
"auth-menu": _auth_menu,
}
# ---------------------------------------------------------------
# Routing
# ---------------------------------------------------------------
@bp.before_request
async def _require_fragment_header():
if not request.headers.get(FRAGMENT_HEADER):
return Response("", status=403)
@bp.get("/<fragment_type>")
async def get_fragment(fragment_type: str):
handler = _handlers.get(fragment_type)
if handler is None:
return Response("", status=200, content_type="text/html")
html = await handler()
return Response(html, status=200, content_type="text/html")
return bp

@@ -54,6 +54,7 @@ fi
RELOAD_FLAG=""
if [[ "${RELOAD:-}" == "true" ]]; then
RELOAD_FLAG="--reload"
python3 -m shared.dev_watcher &
echo "Starting Hypercorn (${APP_MODULE:-app:app}) with auto-reload..."
else
echo "Starting Hypercorn (${APP_MODULE:-app:app})..."

account/sx/auth.sx (new file, 29 lines)
@@ -0,0 +1,29 @@
;; Auth page components (device auth — account-specific)
;; Login and check-email components are shared: see shared/sx/templates/auth.sx
(defcomp ~account-device-error (&key error)
(when error
(div :class "bg-red-50 border border-red-200 text-red-700 p-3 rounded mb-4"
error)))
(defcomp ~account-device-form (&key error action csrf-token code)
(div :class "py-8 max-w-md mx-auto"
(h1 :class "text-2xl font-bold mb-6" "Authorize device")
(p :class "text-stone-600 mb-4" "Enter the code shown in your terminal to sign in.")
error
(form :method "post" :action action :class "space-y-4"
(input :type "hidden" :name "csrf_token" :value csrf-token)
(div
(label :for "code" :class "block text-sm font-medium mb-1" "Device code")
(input :type "text" :name "code" :id "code" :value code :placeholder "XXXX-XXXX"
:required true :autofocus true :maxlength "9" :autocomplete "off" :spellcheck "false"
:class "w-full border border-stone-300 rounded px-3 py-3 text-center text-2xl tracking-widest font-mono uppercase focus:outline-none focus:ring-2 focus:ring-stone-500"))
(button :type "submit"
:class "w-full bg-stone-800 text-white py-2 px-4 rounded hover:bg-stone-700 transition"
"Authorize"))))
(defcomp ~account-device-approved ()
(div :class "py-8 max-w-md mx-auto text-center"
(h1 :class "text-2xl font-bold mb-4" "Device authorized")
(p :class "text-stone-600" "You can close this window and return to your terminal.")))

account/sx/dashboard.sx (new file, 43 lines)
@@ -0,0 +1,43 @@
;; Account dashboard components
(defcomp ~account-error-banner (&key error)
(when error
(div :class "rounded-lg border border-red-200 bg-red-50 text-red-800 px-4 py-3 text-sm"
error)))
(defcomp ~account-user-email (&key email)
(when email
(p :class "text-sm text-stone-500 mt-1" email)))
(defcomp ~account-user-name (&key name)
(when name
(p :class "text-sm text-stone-600" name)))
(defcomp ~account-logout-form (&key csrf-token)
(form :action "/auth/logout/" :method "post"
(input :type "hidden" :name "csrf_token" :value csrf-token)
(button :type "submit"
:class "inline-flex items-center gap-2 rounded-full border border-stone-300 px-4 py-2 text-sm font-medium text-stone-700 hover:bg-stone-50 transition"
(i :class "fa-solid fa-right-from-bracket text-xs") " Sign out")))
(defcomp ~account-label-item (&key name)
(span :class "inline-flex items-center rounded-full border border-stone-200 px-3 py-1 text-xs font-medium bg-white/60"
name))
(defcomp ~account-labels-section (&key items)
(when items
(div
(h2 :class "text-base font-semibold tracking-tight mb-3" "Labels")
(div :class "flex flex-wrap gap-2" items))))
(defcomp ~account-main-panel (&key error email name logout labels)
(div :class "w-full max-w-3xl mx-auto px-4 py-6"
(div :class "bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-8"
error
(div :class "flex items-center justify-between"
(div
(h1 :class "text-xl font-semibold tracking-tight" "Account")
email
name)
logout)
labels)))

@@ -0,0 +1,8 @@
;; Account auth-menu fragment handler
;;
;; Renders the desktop + mobile auth menu (sign-in or user link).
(defhandler auth-menu (&key email)
(~auth-menu
:user-email (when email email)
:account-url (app-url "account" "")))

account/sx/newsletters.sx (new file, 31 lines)
@@ -0,0 +1,31 @@
;; Newsletter management components
(defcomp ~account-newsletter-desc (&key description)
(when description
(p :class "text-xs text-stone-500 mt-0.5 truncate" description)))
(defcomp ~account-newsletter-toggle (&key id url hdrs target cls checked knob-cls)
(div :id id :class "flex items-center"
(button :sx-post url :sx-headers hdrs :sx-target target :sx-swap "outerHTML"
:class cls :role "switch" :aria-checked checked
(span :class knob-cls))))
(defcomp ~account-newsletter-item (&key name desc toggle)
(div :class "flex items-center justify-between py-4 first:pt-0 last:pb-0"
(div :class "min-w-0 flex-1"
(p :class "text-sm font-medium text-stone-800" name)
desc)
(div :class "ml-4 flex-shrink-0" toggle)))
(defcomp ~account-newsletter-list (&key items)
(div :class "divide-y divide-stone-100" items))
(defcomp ~account-newsletter-empty ()
(p :class "text-sm text-stone-500" "No newsletters available."))
(defcomp ~account-newsletters-panel (&key list)
(div :class "w-full max-w-3xl mx-auto px-4 py-6"
(div :class "bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-6"
(h1 :class "text-xl font-semibold tracking-tight" "Newsletters")
list)))

account/sx/sx_components.py (new file, 339 lines)
@@ -0,0 +1,339 @@
"""
Account service s-expression page components.
Renders account dashboard, newsletters, fragment pages, login, and device
auth pages. Called from route handlers in place of ``render_template()``.
"""
from __future__ import annotations
import os
from typing import Any
from shared.sx.jinja_bridge import load_service_components
from shared.sx.helpers import (
call_url, sx_call, SxExpr,
root_header_sx, full_page_sx,
)
# Load account-specific .sx components + handlers at import time
load_service_components(os.path.dirname(os.path.dirname(__file__)),
service_name="account")
# ---------------------------------------------------------------------------
# Header helpers
# ---------------------------------------------------------------------------
def _auth_nav_sx(ctx: dict) -> str:
"""Auth section desktop nav items."""
parts = [
sx_call("nav-link",
href=call_url(ctx, "account_url", "/newsletters/"),
label="newsletters",
select_colours=ctx.get("select_colours", ""),
)
]
account_nav = ctx.get("account_nav")
if account_nav:
parts.append(account_nav)
return "(<> " + " ".join(parts) + ")"
def _auth_header_sx(ctx: dict, *, oob: bool = False) -> str:
"""Build the account section header row."""
return sx_call(
"menu-row-sx",
id="auth-row", level=1, colour="sky",
link_href=call_url(ctx, "account_url", "/"),
link_label="account", icon="fa-solid fa-user",
nav=SxExpr(_auth_nav_sx(ctx)),
child_id="auth-header-child", oob=oob,
)
def _auth_nav_mobile_sx(ctx: dict) -> str:
"""Mobile nav menu for auth section."""
parts = [
sx_call("nav-link",
href=call_url(ctx, "account_url", "/newsletters/"),
label="newsletters",
select_colours=ctx.get("select_colours", ""),
)
]
account_nav = ctx.get("account_nav")
if account_nav:
parts.append(account_nav)
return "(<> " + " ".join(parts) + ")"
# ---------------------------------------------------------------------------
# Account dashboard (GET /)
# ---------------------------------------------------------------------------
def _account_main_panel_sx(ctx: dict) -> str:
"""Account info panel with user details and logout."""
from quart import g
from shared.browser.app.csrf import generate_csrf_token
user = getattr(g, "user", None)
error = ctx.get("error", "")
error_sx = sx_call("account-error-banner", error=error) if error else ""
user_email_sx = ""
user_name_sx = ""
if user:
user_email_sx = sx_call("account-user-email", email=user.email)
if user.name:
user_name_sx = sx_call("account-user-name", name=user.name)
logout_sx = sx_call("account-logout-form", csrf_token=generate_csrf_token())
labels_sx = ""
if user and hasattr(user, "labels") and user.labels:
label_items = " ".join(
sx_call("account-label-item", name=label.name)
for label in user.labels
)
labels_sx = sx_call("account-labels-section",
items=SxExpr("(<> " + label_items + ")"))
return sx_call(
"account-main-panel",
error=SxExpr(error_sx) if error_sx else None,
email=SxExpr(user_email_sx) if user_email_sx else None,
name=SxExpr(user_name_sx) if user_name_sx else None,
logout=SxExpr(logout_sx),
labels=SxExpr(labels_sx) if labels_sx else None,
)
# ---------------------------------------------------------------------------
# Newsletters (GET /newsletters/)
# ---------------------------------------------------------------------------
def _newsletter_toggle_sx(un: Any, account_url_fn: Any, csrf_token: str) -> str:
"""Render a single newsletter toggle switch."""
nid = un.newsletter_id
toggle_url = account_url_fn(f"/newsletter/{nid}/toggle/")
if un.subscribed:
bg = "bg-emerald-500"
translate = "translate-x-6"
checked = "true"
else:
bg = "bg-stone-300"
translate = "translate-x-1"
checked = "false"
return sx_call(
"account-newsletter-toggle",
id=f"nl-{nid}", url=toggle_url,
hdrs=f'{{"X-CSRFToken": "{csrf_token}"}}',
target=f"#nl-{nid}",
cls=f"relative inline-flex h-6 w-11 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-emerald-500 focus:ring-offset-2 {bg}",
checked=checked,
knob_cls=f"inline-block h-4 w-4 rounded-full bg-white shadow transform transition-transform {translate}",
)
def _newsletter_toggle_off_sx(nid: int, toggle_url: str, csrf_token: str) -> str:
"""Render an unsubscribed newsletter toggle (no subscription record yet)."""
return sx_call(
"account-newsletter-toggle",
id=f"nl-{nid}", url=toggle_url,
hdrs=f'{{"X-CSRFToken": "{csrf_token}"}}',
target=f"#nl-{nid}",
cls="relative inline-flex h-6 w-11 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-emerald-500 focus:ring-offset-2 bg-stone-300",
checked="false",
knob_cls="inline-block h-4 w-4 rounded-full bg-white shadow transform transition-transform translate-x-1",
)
def _newsletters_panel_sx(ctx: dict, newsletter_list: list) -> str:
"""Newsletters management panel."""
from shared.browser.app.csrf import generate_csrf_token
account_url_fn = ctx.get("account_url") or (lambda p: p)
csrf = generate_csrf_token()
if newsletter_list:
items = []
for item in newsletter_list:
nl = item["newsletter"]
un = item.get("un")
desc_sx = sx_call(
"account-newsletter-desc", description=nl.description
) if nl.description else ""
if un:
toggle = _newsletter_toggle_sx(un, account_url_fn, csrf)
else:
toggle_url = account_url_fn(f"/newsletter/{nl.id}/toggle/")
toggle = _newsletter_toggle_off_sx(nl.id, toggle_url, csrf)
items.append(sx_call(
"account-newsletter-item",
name=nl.name,
desc=SxExpr(desc_sx) if desc_sx else None,
toggle=SxExpr(toggle),
))
list_sx = sx_call(
"account-newsletter-list",
items=SxExpr("(<> " + " ".join(items) + ")"),
)
else:
list_sx = sx_call("account-newsletter-empty")
return sx_call("account-newsletters-panel", list=SxExpr(list_sx))
# ---------------------------------------------------------------------------
# Auth pages (login, device, check_email)
# ---------------------------------------------------------------------------
def _login_page_content(ctx: dict) -> str:
"""Login form content."""
from shared.browser.app.csrf import generate_csrf_token
from quart import url_for
error = ctx.get("error", "")
email = ctx.get("email", "")
action = url_for("auth.start_login")
error_sx = sx_call("auth-error-banner", error=error) if error else ""
return sx_call(
"auth-login-form",
error=SxExpr(error_sx) if error_sx else None,
action=action,
csrf_token=generate_csrf_token(), email=email,
)
def _device_page_content(ctx: dict) -> str:
"""Device authorization form content."""
from shared.browser.app.csrf import generate_csrf_token
from quart import url_for
error = ctx.get("error", "")
code = ctx.get("code", "")
action = url_for("auth.device_submit")
error_sx = sx_call("account-device-error", error=error) if error else ""
return sx_call(
"account-device-form",
error=SxExpr(error_sx) if error_sx else None,
action=action,
csrf_token=generate_csrf_token(), code=code,
)
def _device_approved_content() -> str:
"""Device approved success content."""
return sx_call("account-device-approved")
# ---------------------------------------------------------------------------
# Public API: Account dashboard
# ---------------------------------------------------------------------------
def _fragment_content(frag: object) -> str:
"""Convert a fragment response to sx content string.
SxExpr (from text/sx responses) is embedded as-is; plain strings
(from text/html) are wrapped in ``~rich-text``.
"""
from shared.sx.parser import SxExpr
if isinstance(frag, SxExpr):
return frag.source
s = str(frag) if frag else ""
if not s:
return ""
return f'(~rich-text :html "{_sx_escape(s)}")'
# ---------------------------------------------------------------------------
# Public API: Auth pages (login, device)
# ---------------------------------------------------------------------------
async def render_login_page(ctx: dict) -> str:
"""Full page: login form."""
hdr = root_header_sx(ctx)
return full_page_sx(ctx, header_rows=hdr,
content=_login_page_content(ctx),
meta_html='<title>Login \u2014 Rose Ash</title>')
async def render_device_page(ctx: dict) -> str:
"""Full page: device authorization form."""
hdr = root_header_sx(ctx)
return full_page_sx(ctx, header_rows=hdr,
content=_device_page_content(ctx),
meta_html='<title>Authorize Device \u2014 Rose Ash</title>')
async def render_device_approved_page(ctx: dict) -> str:
"""Full page: device approved."""
hdr = root_header_sx(ctx)
return full_page_sx(ctx, header_rows=hdr,
content=_device_approved_content(),
meta_html='<title>Device Authorized \u2014 Rose Ash</title>')
# ---------------------------------------------------------------------------
# Public API: Check email page (POST /start/ success)
# ---------------------------------------------------------------------------
def _check_email_content(email: str, email_error: str | None = None) -> str:
"""Check email confirmation content."""
from markupsafe import escape
error_sx = sx_call(
"auth-check-email-error", error=str(escape(email_error))
) if email_error else ""
return sx_call(
"auth-check-email",
email=str(escape(email)),
error=SxExpr(error_sx) if error_sx else None,
)
async def render_check_email_page(ctx: dict) -> str:
"""Full page: check email after magic link sent."""
email = ctx.get("email", "")
email_error = ctx.get("email_error")
hdr = root_header_sx(ctx)
return full_page_sx(ctx, header_rows=hdr,
content=_check_email_content(email, email_error),
meta_html='<title>Check your email \u2014 Rose Ash</title>')
# ---------------------------------------------------------------------------
# Public API: Fragment renderers for POST handlers
# ---------------------------------------------------------------------------
def render_newsletter_toggle(un) -> str:
"""Render a newsletter toggle switch for POST response (uses account_url)."""
from shared.browser.app.csrf import generate_csrf_token
from quart import g
account_url_fn = getattr(g, "_account_url", None)
if account_url_fn is None:
from shared.infrastructure.urls import account_url
account_url_fn = account_url
return _newsletter_toggle_sx(un, account_url_fn, generate_csrf_token())
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _sx_escape(s: str) -> str:
"""Escape a string for embedding in sx string literals."""
return s.replace("\\", "\\\\").replace('"', '\\"')

account/sxc/__init__.py Normal file
@@ -0,0 +1,134 @@
"""Account defpage setup — registers layouts, page helpers, and loads .sx pages."""
from __future__ import annotations
from typing import Any
def setup_account_pages() -> None:
"""Register account-specific layouts, page helpers, and load page definitions."""
_register_account_layouts()
_register_account_helpers()
_load_account_page_files()
def _load_account_page_files() -> None:
import os
from shared.sx.pages import load_page_dir
load_page_dir(os.path.dirname(__file__), "account")
# ---------------------------------------------------------------------------
# Layouts
# ---------------------------------------------------------------------------
def _register_account_layouts() -> None:
from shared.sx.layouts import register_custom_layout
register_custom_layout("account", _account_full, _account_oob, _account_mobile)
def _account_full(ctx: dict, **kw: Any) -> str:
from shared.sx.helpers import root_header_sx, header_child_sx
from sx.sx_components import _auth_header_sx
root_hdr = root_header_sx(ctx)
hdr_child = header_child_sx(_auth_header_sx(ctx))
return "(<> " + root_hdr + " " + hdr_child + ")"
def _account_oob(ctx: dict, **kw: Any) -> str:
from shared.sx.helpers import root_header_sx
from sx.sx_components import _auth_header_sx
return "(<> " + _auth_header_sx(ctx, oob=True) + " " + root_header_sx(ctx, oob=True) + ")"
def _account_mobile(ctx: dict, **kw: Any) -> str:
from shared.sx.helpers import mobile_menu_sx, mobile_root_nav_sx, sx_call, SxExpr
from sx.sx_components import _auth_nav_mobile_sx
ctx = _inject_account_nav(ctx)
auth_section = sx_call("mobile-menu-section",
label="account", href="/", level=1, colour="sky",
items=SxExpr(_auth_nav_mobile_sx(ctx)))
return mobile_menu_sx(auth_section, mobile_root_nav_sx(ctx))
def _inject_account_nav(ctx: dict) -> dict:
"""Ensure account_nav is in ctx from g.account_nav."""
if "account_nav" not in ctx:
from quart import g
ctx = dict(ctx)
ctx["account_nav"] = getattr(g, "account_nav", "")
return ctx
# ---------------------------------------------------------------------------
# Page helpers
# ---------------------------------------------------------------------------
def _register_account_helpers() -> None:
from shared.sx.pages import register_page_helpers
register_page_helpers("account", {
"account-content": _h_account_content,
"newsletters-content": _h_newsletters_content,
"fragment-content": _h_fragment_content,
})
def _h_account_content(**kw):
from sx.sx_components import _account_main_panel_sx
return _account_main_panel_sx({})
async def _h_newsletters_content(**kw):
from quart import g
from sqlalchemy import select
from shared.models import UserNewsletter
from shared.models.ghost_membership_entities import GhostNewsletter
result = await g.s.execute(
select(GhostNewsletter).order_by(GhostNewsletter.name)
)
all_newsletters = result.scalars().all()
sub_result = await g.s.execute(
select(UserNewsletter).where(
UserNewsletter.user_id == g.user.id,
)
)
user_subs = {un.newsletter_id: un for un in sub_result.scalars().all()}
newsletter_list = []
for nl in all_newsletters:
un = user_subs.get(nl.id)
newsletter_list.append({
"newsletter": nl,
"un": un,
"subscribed": un.subscribed if un else False,
})
if not newsletter_list:
from shared.sx.helpers import sx_call
return sx_call("account-newsletter-empty")
from sx.sx_components import _newsletters_panel_sx
ctx = {"account_url": getattr(g, "_account_url", None)}
if ctx["account_url"] is None:
from shared.infrastructure.urls import account_url
ctx["account_url"] = account_url
return _newsletters_panel_sx(ctx, newsletter_list)
async def _h_fragment_content(slug=None, **kw):
from quart import g, abort
from shared.infrastructure.fragments import fetch_fragment
if not slug or not g.get("user"):
return ""
fragment_html = await fetch_fragment(
"events", "account-page",
params={"slug": slug, "user_id": str(g.user.id)},
)
if not fragment_html:
abort(404)
from sx.sx_components import _fragment_content
return _fragment_content(fragment_html)

@@ -0,0 +1,31 @@
;; Account app — declarative page definitions
;; ---------------------------------------------------------------------------
;; Account dashboard
;; ---------------------------------------------------------------------------
(defpage account-dashboard
:path "/"
:auth :login
:layout :account
:content (account-content))
;; ---------------------------------------------------------------------------
;; Newsletters
;; ---------------------------------------------------------------------------
(defpage newsletters
:path "/newsletters/"
:auth :login
:layout :account
:content (newsletters-content))
;; ---------------------------------------------------------------------------
;; Fragment pages (tickets, bookings, etc. from events service)
;; ---------------------------------------------------------------------------
(defpage fragment-page
:path "/<slug>/"
:auth :login
:layout :account
:content (fragment-content slug))

@@ -1,44 +0,0 @@
<div class="w-full max-w-3xl mx-auto px-4 py-6">
<div class="bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-6">
<h1 class="text-xl font-semibold tracking-tight">Bookings</h1>
{% if bookings %}
<div class="divide-y divide-stone-100">
{% for booking in bookings %}
<div class="py-4 first:pt-0 last:pb-0">
<div class="flex items-start justify-between gap-4">
<div class="min-w-0 flex-1">
<p class="text-sm font-medium text-stone-800">{{ booking.name }}</p>
<div class="mt-1 flex flex-wrap items-center gap-x-3 gap-y-1 text-xs text-stone-500">
<span>{{ booking.start_at.strftime('%d %b %Y, %H:%M') }}</span>
{% if booking.end_at %}
<span>&ndash; {{ booking.end_at.strftime('%H:%M') }}</span>
{% endif %}
{% if booking.calendar_name %}
<span>&middot; {{ booking.calendar_name }}</span>
{% endif %}
{% if booking.cost %}
<span>&middot; &pound;{{ booking.cost }}</span>
{% endif %}
</div>
</div>
<div class="flex-shrink-0">
{% if booking.state == 'confirmed' %}
<span class="inline-flex items-center rounded-full bg-emerald-50 border border-emerald-200 px-2.5 py-0.5 text-xs font-medium text-emerald-700">confirmed</span>
{% elif booking.state == 'provisional' %}
<span class="inline-flex items-center rounded-full bg-amber-50 border border-amber-200 px-2.5 py-0.5 text-xs font-medium text-amber-700">provisional</span>
{% else %}
<span class="inline-flex items-center rounded-full bg-stone-50 border border-stone-200 px-2.5 py-0.5 text-xs font-medium text-stone-600">{{ booking.state }}</span>
{% endif %}
</div>
</div>
</div>
{% endfor %}
</div>
{% else %}
<p class="text-sm text-stone-500">No bookings yet.</p>
{% endif %}
</div>
</div>

@@ -1 +0,0 @@
{{ page_fragment_html | safe }}

@@ -1,49 +0,0 @@
<div class="w-full max-w-3xl mx-auto px-4 py-6">
<div class="bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-8">
{% if error %}
<div class="rounded-lg border border-red-200 bg-red-50 text-red-800 px-4 py-3 text-sm">
{{ error }}
</div>
{% endif %}
{# Account header #}
<div class="flex items-center justify-between">
<div>
<h1 class="text-xl font-semibold tracking-tight">Account</h1>
{% if g.user %}
<p class="text-sm text-stone-500 mt-1">{{ g.user.email }}</p>
{% if g.user.name %}
<p class="text-sm text-stone-600">{{ g.user.name }}</p>
{% endif %}
{% endif %}
</div>
<form action="/auth/logout/" method="post">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<button
type="submit"
class="inline-flex items-center gap-2 rounded-full border border-stone-300 px-4 py-2 text-sm font-medium text-stone-700 hover:bg-stone-50 transition"
>
<i class="fa-solid fa-right-from-bracket text-xs"></i>
Sign out
</button>
</form>
</div>
{# Labels #}
{% set labels = g.user.labels if g.user is defined and g.user.labels is defined else [] %}
{% if labels %}
<div>
<h2 class="text-base font-semibold tracking-tight mb-3">Labels</h2>
<div class="flex flex-wrap gap-2">
{% for label in labels %}
<span class="inline-flex items-center rounded-full border border-stone-200 px-3 py-1 text-xs font-medium bg-white/60">
{{ label.name }}
</span>
{% endfor %}
</div>
</div>
{% endif %}
</div>
</div>

@@ -1,7 +0,0 @@
{% import 'macros/links.html' as links %}
{% call links.link(account_url('/newsletters/'), hx_select_search, select_colours, True, aclass=styles.nav_button) %}
newsletters
{% endcall %}
{% if account_nav_html %}
{{ account_nav_html | safe }}
{% endif %}

@@ -1,17 +0,0 @@
<div id="nl-{{ un.newsletter_id }}" class="flex items-center">
<button
hx-post="{{ account_url('/newsletter/' ~ un.newsletter_id ~ '/toggle/') }}"
hx-headers='{"X-CSRFToken": "{{ csrf_token() }}"}'
hx-target="#nl-{{ un.newsletter_id }}"
hx-swap="outerHTML"
class="relative inline-flex h-6 w-11 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-emerald-500 focus:ring-offset-2
{% if un.subscribed %}bg-emerald-500{% else %}bg-stone-300{% endif %}"
role="switch"
aria-checked="{{ 'true' if un.subscribed else 'false' }}"
>
<span
class="inline-block h-4 w-4 rounded-full bg-white shadow transform transition-transform
{% if un.subscribed %}translate-x-6{% else %}translate-x-1{% endif %}"
></span>
</button>
</div>

@@ -1,46 +0,0 @@
<div class="w-full max-w-3xl mx-auto px-4 py-6">
<div class="bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-6">
<h1 class="text-xl font-semibold tracking-tight">Newsletters</h1>
{% if newsletter_list %}
<div class="divide-y divide-stone-100">
{% for item in newsletter_list %}
<div class="flex items-center justify-between py-4 first:pt-0 last:pb-0">
<div class="min-w-0 flex-1">
<p class="text-sm font-medium text-stone-800">{{ item.newsletter.name }}</p>
{% if item.newsletter.description %}
<p class="text-xs text-stone-500 mt-0.5 truncate">{{ item.newsletter.description }}</p>
{% endif %}
</div>
<div class="ml-4 flex-shrink-0">
{% if item.un %}
{% with un=item.un %}
{% include "_types/auth/_newsletter_toggle.html" %}
{% endwith %}
{% else %}
{# No subscription row yet — show an off toggle that will create one #}
<div id="nl-{{ item.newsletter.id }}" class="flex items-center">
<button
hx-post="{{ account_url('/newsletter/' ~ item.newsletter.id ~ '/toggle/') }}"
hx-headers='{"X-CSRFToken": "{{ csrf_token() }}"}'
hx-target="#nl-{{ item.newsletter.id }}"
hx-swap="outerHTML"
class="relative inline-flex h-6 w-11 items-center rounded-full transition-colors focus:outline-none focus:ring-2 focus:ring-emerald-500 focus:ring-offset-2 bg-stone-300"
role="switch"
aria-checked="false"
>
<span class="inline-block h-4 w-4 rounded-full bg-white shadow transform transition-transform translate-x-1"></span>
</button>
</div>
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% else %}
<p class="text-sm text-stone-500">No newsletters available.</p>
{% endif %}
</div>
</div>

@@ -1,29 +0,0 @@
{% extends 'oob_elements.html' %}
{# OOB elements for HTMX navigation - all elements that need updating #}
{# Import shared OOB macros #}
{% from '_types/root/_oob_menu.html' import mobile_menu with context %}
{# Header with app title - includes cart-mini, navigation, and market-specific header #}
{% block oobs %}
{% from '_types/root/_n/macros.html' import oob_header with context %}
{{oob_header('root-header-child', 'auth-header-child', '_types/auth/header/_header.html')}}
{% from '_types/root/header/_header.html' import header_row with context %}
{{ header_row(oob=True) }}
{% endblock %}
{% block mobile_menu %}
{% include '_types/auth/_nav.html' %}
{% endblock %}
{% block content %}
{% include oob.main %}
{% endblock %}

@@ -1,44 +0,0 @@
<div class="w-full max-w-3xl mx-auto px-4 py-6">
<div class="bg-white/70 backdrop-blur rounded-2xl shadow border border-stone-200 p-6 sm:p-8 space-y-6">
<h1 class="text-xl font-semibold tracking-tight">Tickets</h1>
{% if tickets %}
<div class="divide-y divide-stone-100">
{% for ticket in tickets %}
<div class="py-4 first:pt-0 last:pb-0">
<div class="flex items-start justify-between gap-4">
<div class="min-w-0 flex-1">
<a href="{{ events_url('/tickets/' ~ ticket.code ~ '/') }}"
class="text-sm font-medium text-stone-800 hover:text-emerald-700 transition">
{{ ticket.entry_name }}
</a>
<div class="mt-1 flex flex-wrap items-center gap-x-3 gap-y-1 text-xs text-stone-500">
<span>{{ ticket.entry_start_at.strftime('%d %b %Y, %H:%M') }}</span>
{% if ticket.calendar_name %}
<span>&middot; {{ ticket.calendar_name }}</span>
{% endif %}
{% if ticket.ticket_type_name %}
<span>&middot; {{ ticket.ticket_type_name }}</span>
{% endif %}
</div>
</div>
<div class="flex-shrink-0">
{% if ticket.state == 'checked_in' %}
<span class="inline-flex items-center rounded-full bg-blue-50 border border-blue-200 px-2.5 py-0.5 text-xs font-medium text-blue-700">checked in</span>
{% elif ticket.state == 'confirmed' %}
<span class="inline-flex items-center rounded-full bg-emerald-50 border border-emerald-200 px-2.5 py-0.5 text-xs font-medium text-emerald-700">confirmed</span>
{% else %}
<span class="inline-flex items-center rounded-full bg-amber-50 border border-amber-200 px-2.5 py-0.5 text-xs font-medium text-amber-700">{{ ticket.state }}</span>
{% endif %}
</div>
</div>
</div>
{% endfor %}
</div>
{% else %}
<p class="text-sm text-stone-500">No tickets yet.</p>
{% endif %}
</div>
</div>

@@ -1,33 +0,0 @@
{% extends "_types/root/index.html" %}
{% block content %}
<div class="w-full max-w-md">
<div class="bg-white/70 dark:bg-neutral-900/70 backdrop-blur rounded-2xl shadow p-6 sm:p-8 border border-neutral-200 dark:border-neutral-800">
<h1 class="text-2xl font-semibold tracking-tight">Check your email</h1>
<p class="text-base text-stone-700 dark:text-stone-300 mt-3">
If an account exists for
<strong class="text-stone-900 dark:text-white">{{ email }}</strong>,
you'll receive a link to sign in. It expires in 15 minutes.
</p>
{% if email_error %}
<div
class="mt-4 rounded-lg border border-red-300 bg-red-50 text-red-700 text-sm px-3 py-2 flex items-start gap-2"
role="alert"
>
<span class="font-medium">Heads up:</span>
<span>{{ email_error }}</span>
</div>
{% endif %}
<p class="mt-6 text-sm">
<a
href="{{ blog_url('/auth/login/') }}"
class="text-stone-600 dark:text-stone-300 hover:underline"
>
← Back
</a>
</p>
</div>
</div>
{% endblock %}

@@ -1,12 +0,0 @@
{% import 'macros/links.html' as links %}
{% macro header_row(oob=False) %}
{% call links.menu_row(id='auth-row', oob=oob) %}
{% call links.link(account_url('/'), hx_select_search ) %}
<i class="fa-solid fa-user"></i>
<div>account</div>
{% endcall %}
{% call links.desktop_nav() %}
{% include "_types/auth/_nav.html" %}
{% endcall %}
{% endcall %}
{% endmacro %}

@@ -1,18 +0,0 @@
{% extends "_types/root/_index.html" %}
{% block root_header_child %}
{% from '_types/root/_n/macros.html' import index_row with context %}
{% call index_row('auth-header-child', '_types/auth/header/_header.html') %}
{% block auth_header_child %}
{% endblock %}
{% endcall %}
{% endblock %}
{% block _main_mobile_menu %}
{% include "_types/auth/_nav.html" %}
{% endblock %}
{% block content %}
{% include '_types/auth/_main_panel.html' %}
{% endblock %}

@@ -1,18 +0,0 @@
{% extends oob.extends %}
{% block root_header_child %}
{% from '_types/root/_n/macros.html' import index_row with context %}
{% call index_row(oob.child_id, oob.header) %}
{% block auth_header_child %}
{% endblock %}
{% endcall %}
{% endblock %}
{% block _main_mobile_menu %}
{% include oob.nav %}
{% endblock %}
{% block content %}
{% include oob.main %}
{% endblock %}

@@ -1,46 +0,0 @@
{% extends "_types/root/index.html" %}
{% block content %}
<div class="w-full max-w-md">
<div class="bg-white/70 dark:bg-neutral-900/70 backdrop-blur rounded-2xl shadow p-6 sm:p-8 border border-neutral-200 dark:border-neutral-800">
<h1 class="text-2xl font-semibold tracking-tight">Sign in</h1>
<p class="mt-2 text-sm text-neutral-600 dark:text-neutral-400">
Enter your email and we'll email you a one-time sign-in link.
</p>
{% if error %}
<div class="mt-4 rounded-lg border border-red-200 bg-red-50 text-red-800 dark:border-red-900/40 dark:bg-red-950/40 dark:text-red-200 px-4 py-3 text-sm">
{{ error }}
</div>
{% endif %}
<form
method="post" action="{{ blog_url('/auth/start/') }}"
class="mt-6 space-y-5"
>
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div>
<label for="email" class="block text-sm font-medium text-neutral-700 dark:text-neutral-300">
Email
</label>
<input
type="email"
id="email"
name="email"
value="{{ email or '' }}"
required
class="mt-2 block w-full rounded-lg border border-neutral-300 dark:border-neutral-700 bg-white dark:bg-neutral-900 px-3 py-2 text-neutral-900 dark:text-neutral-100 shadow-sm focus:outline-none focus:ring-2 focus:ring-offset-0 focus:ring-neutral-900 dark:focus:ring-neutral-200"
autocomplete="email"
inputmode="email"
>
</div>
<button
type="submit"
class="inline-flex w-full items-center justify-center rounded-lg bg-neutral-900 px-4 py-2.5 text-sm font-medium text-white hover:bg-neutral-800 focus:outline-none focus:ring-2 focus:ring-neutral-900 disabled:opacity-50 dark:bg-neutral-50 dark:text-neutral-900 dark:hover:bg-white"
>
Send link
</button>
</form>
</div>
</div>
{% endblock %}

@@ -1,19 +0,0 @@
{% extends "_types/root/_index.html" %}
{% block meta %}{% endblock %}
{% block title %}Check your email — Rose Ash{% endblock %}
{% block content %}
<div class="py-8 max-w-md mx-auto text-center">
<h1 class="text-2xl font-bold mb-4">Check your email</h1>
<p class="text-stone-600 mb-2">
We sent a sign-in link to <strong>{{ email }}</strong>.
</p>
<p class="text-stone-500 text-sm">
Click the link in the email to sign in. The link expires in 15 minutes.
</p>
{% if email_error %}
<div class="bg-yellow-50 border border-yellow-200 text-yellow-700 p-3 rounded mt-4">
{{ email_error }}
</div>
{% endif %}
</div>
{% endblock %}

@@ -1,41 +0,0 @@
{% extends "_types/root/_index.html" %}
{% block meta %}{% endblock %}
{% block title %}Authorize Device — Rose Ash{% endblock %}
{% block content %}
<div class="py-8 max-w-md mx-auto">
<h1 class="text-2xl font-bold mb-6">Authorize device</h1>
<p class="text-stone-600 mb-4">Enter the code shown in your terminal to sign in.</p>
{% if error %}
<div class="bg-red-50 border border-red-200 text-red-700 p-3 rounded mb-4">
{{ error }}
</div>
{% endif %}
<form method="post" action="{{ url_for('auth.device_submit') }}" class="space-y-4">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div>
<label for="code" class="block text-sm font-medium mb-1">Device code</label>
<input
type="text"
name="code"
id="code"
value="{{ code | default('') }}"
placeholder="XXXX-XXXX"
required
autofocus
maxlength="9"
autocomplete="off"
spellcheck="false"
class="w-full border border-stone-300 rounded px-3 py-3 text-center text-2xl tracking-widest font-mono uppercase focus:outline-none focus:ring-2 focus:ring-stone-500"
>
</div>
<button
type="submit"
class="w-full bg-stone-800 text-white py-2 px-4 rounded hover:bg-stone-700 transition"
>
Authorize
</button>
</form>
</div>
{% endblock %}

@@ -1,9 +0,0 @@
{% extends "_types/root/_index.html" %}
{% block meta %}{% endblock %}
{% block title %}Device Authorized — Rose Ash{% endblock %}
{% block content %}
<div class="py-8 max-w-md mx-auto text-center">
<h1 class="text-2xl font-bold mb-4">Device authorized</h1>
<p class="text-stone-600">You can close this window and return to your terminal.</p>
</div>
{% endblock %}

@@ -1,36 +0,0 @@
{% extends "_types/root/_index.html" %}
{% block meta %}{% endblock %}
{% block title %}Login — Rose Ash{% endblock %}
{% block content %}
<div class="py-8 max-w-md mx-auto">
<h1 class="text-2xl font-bold mb-6">Sign in</h1>
{% if error %}
<div class="bg-red-50 border border-red-200 text-red-700 p-3 rounded mb-4">
{{ error }}
</div>
{% endif %}
<form method="post" action="{{ url_for('auth.start_login') }}" class="space-y-4">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div>
<label for="email" class="block text-sm font-medium mb-1">Email address</label>
<input
type="email"
name="email"
id="email"
value="{{ email | default('') }}"
required
autofocus
class="w-full border border-stone-300 rounded px-3 py-2 focus:outline-none focus:ring-2 focus:ring-stone-500"
>
</div>
<button
type="submit"
class="w-full bg-stone-800 text-white py-2 px-4 rounded hover:bg-stone-700 transition"
>
Send magic link
</button>
</form>
</div>
{% endblock %}

@@ -1,36 +0,0 @@
{# Desktop auth menu #}
<span id="auth-menu-desktop" class="hidden md:inline-flex">
{% if user_email %}
<a
href="{{ account_url('/') }}"
class="justify-center cursor-pointer flex flex-row items-center p-3 gap-2 rounded bg-stone-200 text-black"
data-close-details
>
<i class="fa-solid fa-user"></i>
<span>{{ user_email }}</span>
</a>
{% else %}
<a
href="{{ account_url('/') }}"
class="justify-center cursor-pointer flex flex-row items-center p-3 gap-2 rounded bg-stone-200 text-black"
data-close-details
>
<i class="fa-solid fa-key"></i>
<span>sign in or register</span>
</a>
{% endif %}
</span>
{# Mobile auth menu #}
<span id="auth-menu-mobile" class="block md:hidden text-md font-bold">
{% if user_email %}
<a href="{{ account_url('/') }}" data-close-details>
<i class="fa-solid fa-user"></i>
<span>{{ user_email }}</span>
</a>
{% else %}
<a href="{{ account_url('/') }}">
<i class="fa-solid fa-key"></i>
<span>sign in or register</span>
</a>
{% endif %}
</span>

@@ -0,0 +1,39 @@
"""Unit tests for account auth operations."""
from __future__ import annotations
import pytest
from account.bp.auth.services.auth_operations import validate_email
class TestValidateEmail:
def test_valid_email(self):
ok, email = validate_email("user@example.com")
assert ok is True
assert email == "user@example.com"
def test_uppercase_lowered(self):
ok, email = validate_email("USER@EXAMPLE.COM")
assert ok is True
assert email == "user@example.com"
def test_whitespace_stripped(self):
ok, email = validate_email(" user@example.com ")
assert ok is True
assert email == "user@example.com"
def test_empty_string(self):
ok, email = validate_email("")
assert ok is False
def test_no_at_sign(self):
ok, email = validate_email("notanemail")
assert ok is False
def test_just_at(self):
ok, email = validate_email("@")
assert ok is True # has "@", passes the basic check
def test_spaces_only(self):
ok, email = validate_email(" ")
assert ok is False

@@ -0,0 +1,164 @@
"""Unit tests for Ghost membership helpers."""
from __future__ import annotations
from datetime import datetime
import pytest
from account.services.ghost_membership import (
_iso, _to_str_or_none, _member_email,
_price_cents, _sanitize_member_payload,
)
class TestIso:
def test_none(self):
assert _iso(None) is None
def test_empty(self):
assert _iso("") is None
def test_z_suffix(self):
result = _iso("2024-06-15T12:00:00Z")
assert isinstance(result, datetime)
assert result.year == 2024
def test_offset(self):
result = _iso("2024-06-15T12:00:00+00:00")
assert isinstance(result, datetime)
class TestToStrOrNone:
def test_none(self):
assert _to_str_or_none(None) is None
def test_dict(self):
assert _to_str_or_none({"a": 1}) is None
def test_list(self):
assert _to_str_or_none([1, 2]) is None
def test_bytes(self):
assert _to_str_or_none(b"hello") is None
def test_empty_string(self):
assert _to_str_or_none("") is None
def test_whitespace_only(self):
assert _to_str_or_none(" ") is None
def test_valid_string(self):
assert _to_str_or_none("hello") == "hello"
def test_int(self):
assert _to_str_or_none(42) == "42"
def test_strips_whitespace(self):
assert _to_str_or_none(" hi ") == "hi"
def test_set(self):
assert _to_str_or_none({1, 2}) is None
def test_tuple(self):
assert _to_str_or_none((1,)) is None
def test_bytearray(self):
assert _to_str_or_none(bytearray(b"x")) is None
class TestMemberEmail:
def test_normal(self):
assert _member_email({"email": "USER@EXAMPLE.COM"}) == "user@example.com"
def test_none(self):
assert _member_email({"email": None}) is None
def test_empty(self):
assert _member_email({"email": ""}) is None
def test_whitespace(self):
assert _member_email({"email": " "}) is None
def test_missing_key(self):
assert _member_email({}) is None
def test_strips(self):
assert _member_email({"email": " a@b.com "}) == "a@b.com"
class TestPriceCents:
def test_valid(self):
assert _price_cents({"price": {"amount": 1500}}) == 1500
def test_string_amount(self):
assert _price_cents({"price": {"amount": "2000"}}) == 2000
def test_missing_price(self):
assert _price_cents({}) is None
def test_missing_amount(self):
assert _price_cents({"price": {}}) is None
def test_none_amount(self):
assert _price_cents({"price": {"amount": None}}) is None
def test_nested_none(self):
assert _price_cents({"price": None}) is None
class TestSanitizeMemberPayload:
def test_email_lowercased(self):
result = _sanitize_member_payload({"email": "USER@EXAMPLE.COM"})
assert result["email"] == "user@example.com"
def test_empty_email_excluded(self):
result = _sanitize_member_payload({"email": ""})
assert "email" not in result
def test_name_included(self):
result = _sanitize_member_payload({"name": "Alice"})
assert result["name"] == "Alice"
def test_note_included(self):
result = _sanitize_member_payload({"note": "VIP"})
assert result["note"] == "VIP"
def test_subscribed_bool(self):
result = _sanitize_member_payload({"subscribed": 1})
assert result["subscribed"] is True
def test_labels_with_id(self):
result = _sanitize_member_payload({
"labels": [{"id": "abc"}, {"name": "VIP"}]
})
assert result["labels"] == [{"id": "abc"}, {"name": "VIP"}]
def test_labels_empty_items_excluded(self):
result = _sanitize_member_payload({
"labels": [{"id": None, "name": None}]
})
assert "labels" not in result
def test_newsletters_with_id(self):
result = _sanitize_member_payload({
"newsletters": [{"id": "n1", "subscribed": True}]
})
assert result["newsletters"] == [{"subscribed": True, "id": "n1"}]
def test_newsletters_default_subscribed(self):
result = _sanitize_member_payload({
"newsletters": [{"name": "Weekly"}]
})
assert result["newsletters"][0]["subscribed"] is True
def test_dict_email_excluded(self):
result = _sanitize_member_payload({"email": {"bad": "input"}})
assert "email" not in result
def test_id_passthrough(self):
result = _sanitize_member_payload({"id": "ghost-member-123"})
assert result["id"] == "ghost-member-123"
def test_empty_payload(self):
result = _sanitize_member_payload({})
assert result == {}

artdag/.dockerignore Normal file
@@ -0,0 +1,8 @@
.git
.gitea
**/.env
**/.env.gpu
**/__pycache__
**/.pytest_cache
**/*.pyc
test/

@@ -0,0 +1,114 @@
name: Build and Deploy
on:
push:
branches: [main]
env:
REGISTRY: registry.rose-ash.com:5000
ARTDAG_DIR: /root/art-dag-mono
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install tools
run: |
apt-get update && apt-get install -y --no-install-recommends openssh-client
- name: Set up SSH
env:
SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
run: |
mkdir -p ~/.ssh
echo "$SSH_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan -H "$DEPLOY_HOST" >> ~/.ssh/known_hosts 2>/dev/null || true
- name: Build and deploy
env:
DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
run: |
ssh "root@$DEPLOY_HOST" "
cd ${{ env.ARTDAG_DIR }}
OLD_HEAD=\$(git rev-parse HEAD 2>/dev/null || echo none)
git fetch origin main
git reset --hard origin/main
NEW_HEAD=\$(git rev-parse HEAD)
# Change detection
BUILD_L1=false
BUILD_L2=false
if [ \"\$OLD_HEAD\" = \"none\" ] || [ \"\$OLD_HEAD\" = \"\$NEW_HEAD\" ]; then
BUILD_L1=true
BUILD_L2=true
else
CHANGED=\$(git diff --name-only \$OLD_HEAD \$NEW_HEAD)
# common/ or core/ change -> rebuild both
if echo \"\$CHANGED\" | grep -qE '^(common|core)/'; then
BUILD_L1=true
BUILD_L2=true
fi
if echo \"\$CHANGED\" | grep -q '^l1/'; then
BUILD_L1=true
fi
if echo \"\$CHANGED\" | grep -q '^l2/'; then
BUILD_L2=true
fi
if echo \"\$CHANGED\" | grep -q '^client/'; then
BUILD_L1=true
fi
fi
# Build L1
if [ \"\$BUILD_L1\" = true ]; then
echo 'Building L1...'
docker build \
--build-arg CACHEBUST=\$(date +%s) \
-f l1/Dockerfile \
-t ${{ env.REGISTRY }}/celery-l1-server:latest \
-t ${{ env.REGISTRY }}/celery-l1-server:${{ github.sha }} \
.
docker push ${{ env.REGISTRY }}/celery-l1-server:latest
docker push ${{ env.REGISTRY }}/celery-l1-server:${{ github.sha }}
else
echo 'Skipping L1 (no changes)'
fi
# Build L2
if [ \"\$BUILD_L2\" = true ]; then
echo 'Building L2...'
docker build \
--build-arg CACHEBUST=\$(date +%s) \
-f l2/Dockerfile \
-t ${{ env.REGISTRY }}/l2-server:latest \
-t ${{ env.REGISTRY }}/l2-server:${{ github.sha }} \
.
docker push ${{ env.REGISTRY }}/l2-server:latest
docker push ${{ env.REGISTRY }}/l2-server:${{ github.sha }}
else
echo 'Skipping L2 (no changes)'
fi
# Deploy stacks (--resolve-image always forces re-pull of :latest)
if [ \"\$BUILD_L1\" = true ]; then
cd l1 && source .env && docker stack deploy --resolve-image always -c docker-compose.yml celery && cd ..
echo 'L1 stack deployed'
fi
if [ \"\$BUILD_L2\" = true ]; then
cd l2 && source .env && docker stack deploy --resolve-image always -c docker-compose.yml activitypub && cd ..
echo 'L2 stack deployed'
fi
sleep 10
echo '=== L1 Services ==='
docker stack services celery
echo '=== L2 Services ==='
docker stack services activitypub
"

artdag/CLAUDE.md

@@ -0,0 +1,74 @@
# Art DAG Monorepo
Federated content-addressed DAG execution engine for distributed media processing with ActivityPub ownership and provenance tracking.
## Project Structure
```
core/ # DAG engine (artdag package) - nodes, effects, analysis, planning
l1/ # L1 Celery rendering server (FastAPI + Celery + Redis + PostgreSQL)
l2/ # L2 ActivityPub registry (FastAPI + PostgreSQL)
common/ # Shared templates, middleware, models (artdag_common package)
client/ # CLI client
test/ # Integration & e2e tests
```
## Tech Stack
Python 3.11+, FastAPI, Celery, Redis, PostgreSQL (asyncpg for L1), SQLAlchemy, Pydantic, JAX (CPU/GPU), IPFS/Kubo, Docker Swarm, HTMX + Jinja2 for web UI.
## Key Commands
### Testing
```bash
cd l1 && pytest tests/ # L1 unit tests
cd core && pytest tests/ # Core unit tests
cd test && python run.py # Full integration pipeline
```
- pytest uses `asyncio_mode = "auto"` for async tests
- Test files: `test_*.py`, fixtures in `conftest.py`
### Linting & Type Checking (L1)
```bash
cd l1 && ruff check . # Lint (E, F, I, UP rules)
cd l1 && mypy app/types.py app/routers/recipes.py tests/
```
- Line length: 100 chars (E501 ignored)
- Mypy: strict on `app/types.py`, `app/routers/recipes.py`, `tests/`; gradual elsewhere
- Mypy ignores imports for: celery, redis, artdag, artdag_common, ipfs_client
### Docker
```bash
docker build -f l1/Dockerfile -t celery-l1-server:latest .
docker build -f l1/Dockerfile.gpu -t celery-l1-gpu:latest .
docker build -f l2/Dockerfile -t l2-server:latest .
./deploy.sh # Build, push, deploy stacks
```
## Architecture Patterns
- **3-Phase Execution**: Analyze -> Plan -> Execute (tasks in `l1/tasks/`)
- **Content-Addressed**: All data identified by SHA3-256 hashes or IPFS CIDs
- **Services Pattern**: Business logic in `app/services/`, API endpoints in `app/routers/`
- **Types Module**: Pydantic models and TypedDicts in `app/types.py`
- **Celery Tasks**: In `l1/tasks/`, decorated with `@app.task`
- **S-Expression Effects**: Composable effect language in `l1/sexp_effects/`
- **Storage**: Local filesystem, S3, or IPFS backends (`storage_providers.py`)
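The 3-phase split can be sketched as plain functions (a toy sketch with hypothetical names; the real implementation lives in `l1/tasks/` as Celery tasks):

```python
import hashlib
import json

def content_hash(data: bytes) -> str:
    # All data in the DAG is content-addressed (SHA3-256 here)
    return hashlib.sha3_256(data).hexdigest()

def analyze(recipe: dict) -> dict:
    # Phase 1: inspect the recipe, collect the nodes that need work
    return {"nodes": list(recipe["nodes"])}

def plan(analysis: dict, inputs: dict) -> list[dict]:
    # Phase 2: turn the analysis into an ordered list of steps
    return [{"node": node, "inputs": inputs} for node in analysis["nodes"]]

def execute(steps: list[dict]) -> dict:
    # Phase 3: run each step; outputs are keyed by content hash of the step
    results = {}
    for step in steps:
        payload = json.dumps(step, sort_keys=True).encode()
        results[step["node"]] = content_hash(payload)
    return results

recipe = {"nodes": ["source", "effect", "output"]}
outputs = execute(plan(analyze(recipe), {"video": "bafk..."}))
```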
## Auth
- L1 <-> L2: scoped JWT tokens (no shared secrets)
- L2: password + OAuth SSO, token revocation in Redis (30-day expiry)
- Federation: ActivityPub RSA signatures (`core/artdag/activitypub/`)
## Key Config Files
- `l1/pyproject.toml` - mypy, pytest, ruff config for L1
- `l1/celery_app.py` - Celery initialization
- `l1/database.py` / `l2/db.py` - SQLAlchemy models
- `l1/docker-compose.yml` / `l2/docker-compose.yml` - Swarm stacks
## Tools
- Use Context7 MCP for up-to-date library documentation
- Playwright MCP is available for browser automation/testing

artdag/client/.gitignore

@@ -0,0 +1,5 @@
__pycache__/
*.py[cod]
.venv/
venv/
.scripts

artdag/client/README.md

@@ -0,0 +1,263 @@
# Art DAG Client
CLI for interacting with the Art DAG L1 rendering server.
## Setup
```bash
pip install -r requirements.txt
```
## Configuration
```bash
# Set L1 server URL (default: http://localhost:8100)
export ARTDAG_SERVER=http://localhost:8100
# Set L2 server URL for auth (default: http://localhost:8200)
export ARTDAG_L2=https://artdag.rose-ash.com
# Or pass with commands
./artdag.py --server http://localhost:8100 --l2 https://artdag.rose-ash.com <command>
```
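The same resolution order (environment variable, then default) can be mirrored in a small helper when scripting against the servers; a sketch, with `artdag.py` as the authoritative implementation:

```python
import os

def server_urls() -> tuple[str, str]:
    # Defaults match the README: L1 on :8100, L2 on :8200
    l1 = os.getenv("ARTDAG_SERVER", "http://localhost:8100")
    l2 = os.getenv("ARTDAG_L2", "http://localhost:8200")
    return l1.rstrip("/"), l2.rstrip("/")
```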
## Authentication
Most commands require authentication. After login, the auth token is stored locally in `~/.artdag/token.json`.
```bash
# Register a new account
artdag register <username> [--email user@example.com]
# Login
artdag login <username>
# Check current user
artdag whoami
# Logout
artdag logout
```
## Commands Reference
### Server & Stats
```bash
# Show server info
artdag info
# Show user stats (counts of runs, recipes, effects, media, storage)
artdag stats
# List known named assets
artdag assets
```
### Runs
```bash
# List runs (with pagination)
artdag runs [--limit N] [--offset N]
# Start a run
artdag run <recipe> <input_cid> [--name output_name] [--wait]
# Get run status
artdag status <run_id>
# Get detailed run info
artdag status <run_id> --plan # Show execution plan with steps
artdag status <run_id> --artifacts # Show output artifacts
artdag status <run_id> --analysis # Show audio analysis data
# Delete a run
artdag delete-run <run_id> [--force]
```
### Recipes
```bash
# List recipes (with pagination)
artdag recipes [--limit N] [--offset N]
# Show recipe details
artdag recipe <recipe_id>
# Upload a recipe (YAML or S-expression)
artdag upload-recipe <filepath>
# Run a recipe with inputs
artdag run-recipe <recipe_id> -i node_id:cid [--wait]
# Delete a recipe
artdag delete-recipe <recipe_id> [--force]
```
### Effects
```bash
# List effects (with pagination)
artdag effects [--limit N] [--offset N]
# Show effect details
artdag effect <cid>
# Show effect with source code
artdag effect <cid> --source
# Upload an effect (.py file)
artdag upload-effect <filepath>
```
### Media / Cache
```bash
# List cached content (with pagination and type filter)
artdag cache [--limit N] [--offset N] [--type all|image|video|audio]
# View/download cached content
artdag view <cid> # Show metadata (size, type, friendly name)
artdag view <cid> --raw # Get raw content info
artdag view <cid> -o output.mp4 # Download raw file
artdag view <cid> -o - | mpv - # Pipe raw content to player
# Upload file to cache and IPFS
artdag upload <filepath>
# Import local file to cache (local server only)
artdag import <filepath>
# View/update metadata
artdag meta <cid> # View metadata
artdag meta <cid> -d "Description" # Set description
artdag meta <cid> -t "tag1,tag2" # Set tags
artdag meta <cid> --publish "my-video" # Publish to L2
# Delete cached content
artdag delete-cache <cid> [--force]
```
### Storage Providers
```bash
# List storage providers
artdag storage list
# Add a provider (interactive)
artdag storage add <type> [--name friendly_name] [--capacity GB]
# Types: pinata, web3storage, nftstorage, infura, filebase, storj, local
# Test provider connectivity
artdag storage test <id>
# Delete a provider
artdag storage delete <id> [--force]
```
### Folders & Collections
```bash
# Folders
artdag folder list
artdag folder create <path>
artdag folder delete <path>
# Collections
artdag collection list
artdag collection create <name>
artdag collection delete <name>
```
### v2 API (3-Phase Execution)
```bash
# Generate execution plan
artdag plan <recipe_file> -i name:cid [--features beats,energy] [--output plan.json]
# Execute a plan
artdag execute-plan <plan_file> [--wait]
# Run recipe (plan + execute in one step)
artdag run-v2 <recipe_file> -i name:cid [--wait]
# Check v2 run status
artdag run-status <run_id>
```
### Publishing to L2
```bash
# Publish a run output to L2
artdag publish <run_id> <output_name>
```
### Data Management
```bash
# Clear all user data (preserves storage configs)
artdag clear-data [--force]
```
## Example Workflows
### Basic Rendering
```bash
# Login
artdag login myuser
# Check available assets
artdag assets
# Run an effect on an input
artdag run dog cat --wait
# View runs
artdag runs
# Download result
artdag view <output_cid> -o result.mp4
```
### Recipe-Based Processing
```bash
# Upload a recipe
artdag upload-recipe my-recipe.yaml
# View recipes
artdag recipes
# Run with inputs
artdag run-recipe <recipe_id> -i video:bafkrei... --wait
# View run plan
artdag status <run_id> --plan
```
### Managing Storage
```bash
# Add Pinata storage
artdag storage add pinata --name "My Pinata"
# Test connection
artdag storage test 1
# View all providers
artdag storage list
```
### Browsing Media
```bash
# List all media
artdag cache
# Filter by type
artdag cache --type video --limit 20
# View with pagination
artdag cache --offset 20 --limit 20
```

artdag/client/artdag.py
(diff suppressed: file too large)


@@ -0,0 +1,3 @@
click>=8.0.0
requests>=2.31.0
PyYAML>=6.0


@@ -0,0 +1,38 @@
;; GPU Effects Performance Test
;; Tests rotation, zoom, hue-shift, ripple
(stream "gpu_effects_test"
:fps 30
:width 1920
:height 1080
:seed 42
;; Load primitives
(require-primitives "geometry")
(require-primitives "core")
(require-primitives "math")
(require-primitives "image")
(require-primitives "color_ops")
;; Frame pipeline - test GPU effects
(frame
(let [;; Create a base gradient image
r (+ 0.5 (* 0.5 (math:sin (* t 1))))
g (+ 0.5 (* 0.5 (math:sin (* t 1.3))))
b (+ 0.5 (* 0.5 (math:sin (* t 1.7))))
color [(* r 255) (* g 255) (* b 255)]
base (image:make-image 1920 1080 color)
;; Apply rotation (this is the main GPU bottleneck we optimized)
angle (* t 30)
rotated (geometry:rotate base angle)
;; Apply hue shift
hue-shift (* 180 (math:sin (* t 0.5)))
hued (color_ops:hue-shift rotated hue-shift)
;; Apply brightness based on time
brightness (+ 0.8 (* 0.4 (math:sin (* t 2))))
bright (color_ops:brightness hued brightness)]
bright)))


@@ -0,0 +1,26 @@
;; Simple Test - No external assets required
;; Just generates a color gradient that changes over time
(stream "simple_test"
:fps 30
:width 720
:height 720
:seed 42
;; Load standard primitives
(require-primitives "geometry")
(require-primitives "core")
(require-primitives "math")
(require-primitives "image")
(require-primitives "color_ops")
;; Frame pipeline - animated gradient
(frame
(let [;; Time-based color cycling (0-1 range)
r (+ 0.5 (* 0.5 (math:sin (* t 1))))
g (+ 0.5 (* 0.5 (math:sin (* t 1.3))))
b (+ 0.5 (* 0.5 (math:sin (* t 1.7))))
;; Convert to 0-255 range and create solid color frame
color [(* r 255) (* g 255) (* b 255)]
frame (image:make-image 720 720 color)]
frame)))

artdag/common/README.md

@@ -0,0 +1,293 @@
# artdag-common
Shared components for Art-DAG L1 (celery) and L2 (activity-pub) servers.
## Features
- **Jinja2 Templating**: Unified template environment with shared base templates
- **Reusable Components**: Cards, tables, pagination, DAG visualization, media preview
- **Authentication Middleware**: Cookie and JWT token parsing
- **Content Negotiation**: HTML/JSON/ActivityPub format detection
- **Utility Functions**: Hash truncation, file size formatting, status colors
## Installation
```bash
pip install -e /path/to/artdag-common
# Or add to requirements.txt
-e file:../common
```
## Quick Start
```python
from fastapi import FastAPI, Request
from artdag_common import create_jinja_env, render
app = FastAPI()
# Initialize templates with app-specific directory
templates = create_jinja_env("app/templates")
@app.get("/")
async def home(request: Request):
return render(templates, "home.html", request, title="Home")
```
## Package Structure
```
artdag_common/
├── __init__.py # Package exports
├── constants.py # CDN URLs, colors, configs
├── rendering.py # Jinja2 environment and helpers
├── middleware/
│ ├── auth.py # Authentication utilities
│ └── content_negotiation.py # Accept header parsing
├── models/
│ ├── requests.py # Shared request models
│ └── responses.py # Shared response models
├── utils/
│ ├── formatting.py # Text/date formatting
│ ├── media.py # Media type detection
│ └── pagination.py # Pagination helpers
└── templates/
├── base.html # Base layout template
└── components/
├── badge.html # Status/type badges
├── card.html # Info cards
├── dag.html # Cytoscape DAG visualization
├── media_preview.html # Video/image/audio preview
├── pagination.html # HTMX pagination
└── table.html # Styled tables
```
## Jinja2 Templates
### Base Template
The `base.html` template provides:
- Dark theme with Tailwind CSS
- HTMX integration
- Navigation slot
- Content block
- Optional Cytoscape.js block
```html
{% extends "base.html" %}
{% block title %}My Page{% endblock %}
{% block content %}
<h1>Hello World</h1>
{% endblock %}
```
### Reusable Components
#### Card
```html
{% include "components/card.html" %}
```
```html
<!-- Usage in your template -->
<div class="...card styles...">
{% block card_title %}Title{% endblock %}
{% block card_content %}Content{% endblock %}
</div>
```
#### Badge
Status and type badges with appropriate colors:
```html
{% from "components/badge.html" import status_badge, type_badge %}
{{ status_badge("completed") }} <!-- Green -->
{{ status_badge("failed") }} <!-- Red -->
{{ type_badge("video") }}
```
#### DAG Visualization
Interactive Cytoscape.js graph:
```html
{% include "components/dag.html" %}
```
Requires passing `nodes` and `edges` data to template context.
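For example, a route might build that context like this (a sketch: the Cytoscape.js `{"data": {...}}` element shape is assumed here; check the keys `components/dag.html` actually reads):

```python
def dag_context(nodes: list[dict], edges: list[dict]) -> dict:
    # Cytoscape.js wraps each node/edge in a {"data": {...}} element
    return {
        "nodes": [
            {"data": {"id": n["id"], "label": n["label"], "type": n["type"]}}
            for n in nodes
        ],
        "edges": [
            {"data": {"source": e["src"], "target": e["dst"]}}
            for e in edges
        ],
    }
```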
#### Media Preview
Responsive media preview with format detection:
```html
{% include "components/media_preview.html" %}
```
Supports video, audio, and image formats.
#### Pagination
HTMX-powered infinite scroll pagination:
```html
{% include "components/pagination.html" %}
```
## Template Rendering
### Full Page Render
```python
from artdag_common import render
@app.get("/runs/{run_id}")
async def run_detail(run_id: str, request: Request):
run = get_run(run_id)
return render(templates, "runs/detail.html", request, run=run)
```
### Fragment Render (HTMX)
```python
from artdag_common import render_fragment
@app.get("/runs/{run_id}/status")
async def run_status_fragment(run_id: str):
run = get_run(run_id)
html = render_fragment(templates, "components/status.html", status=run.status)
return HTMLResponse(html)
```
## Authentication Middleware
### UserContext
```python
from artdag_common.middleware.auth import UserContext, get_user_from_cookie
@app.get("/profile")
async def profile(request: Request):
user = get_user_from_cookie(request)
if not user:
return RedirectResponse("/login")
return {"username": user.username, "actor_id": user.actor_id}
```
### Token Parsing
```python
from artdag_common.middleware.auth import get_user_from_header, decode_jwt_claims
@app.get("/api/me")
async def api_me(request: Request):
user = get_user_from_header(request)
if not user:
raise HTTPException(401, "Not authenticated")
return {"user": user.username}
```
## Content Negotiation
Detect what response format the client wants:
```python
from artdag_common.middleware.content_negotiation import wants_html, wants_json, wants_activity_json
@app.get("/users/{username}")
async def user_profile(username: str, request: Request):
user = get_user(username)
if wants_activity_json(request):
return ActivityPubActor(user)
elif wants_json(request):
return user.dict()
else:
return render(templates, "users/profile.html", request, user=user)
```
## Constants
### CDN URLs
```python
from artdag_common import TAILWIND_CDN, HTMX_CDN, CYTOSCAPE_CDN
# Available in templates as globals:
# {{ TAILWIND_CDN }}
# {{ HTMX_CDN }}
# {{ CYTOSCAPE_CDN }}
```
### Node Colors
```python
from artdag_common import NODE_COLORS
# {
# "SOURCE": "#3b82f6", # Blue
# "EFFECT": "#22c55e", # Green
# "OUTPUT": "#a855f7", # Purple
# "ANALYSIS": "#f59e0b", # Amber
# "_LIST": "#6366f1", # Indigo
# "default": "#6b7280", # Gray
# }
```
### Status Colors
```python
STATUS_COLORS = {
"completed": "bg-green-600",
"cached": "bg-blue-600",
"running": "bg-yellow-600",
"pending": "bg-gray-600",
"failed": "bg-red-600",
}
```
## Custom Jinja2 Filters
The following filters are available in all templates:
| Filter | Usage | Description |
|--------|-------|-------------|
| `truncate_hash` | `{{ hash\|truncate_hash }}` | Shorten hash to 16 chars with ellipsis |
| `format_size` | `{{ bytes\|format_size }}` | Format bytes as KB/MB/GB |
| `status_color` | `{{ status\|status_color }}` | Get Tailwind class for status |
Example:
```html
<span class="{{ run.status|status_color }}">
{{ run.status }}
</span>
<code>{{ content_hash|truncate_hash }}</code>
<span>{{ file_size|format_size }}</span>
```
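Roughly equivalent pure-Python implementations, for reference (a sketch; the real filters live in `artdag_common/utils/formatting.py`, and details such as the exact ellipsis may differ):

```python
STATUS_COLORS = {
    "completed": "bg-green-600",
    "cached": "bg-blue-600",
    "running": "bg-yellow-600",
    "pending": "bg-gray-600",
    "failed": "bg-red-600",
}

def truncate_hash(value: str, length: int = 16) -> str:
    # Shorten long hashes for display
    return value if len(value) <= length else value[:length] + "..."

def format_size(num: float) -> str:
    # Walk up units until the value fits
    for unit in ("B", "KB", "MB", "GB"):
        if num < 1024 or unit == "GB":
            return f"{num:.1f} {unit}"
        num /= 1024
    return f"{num:.1f} GB"

def status_color(status: str) -> str:
    # Fall back to the 'pending' gray for unknown statuses
    return STATUS_COLORS.get(status, "bg-gray-600")
```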
## Development
```bash
cd /root/art-dag/common
# Install in development mode
pip install -e .
# Run tests
pytest
```
## Dependencies
- `fastapi>=0.100.0` - Web framework
- `jinja2>=3.1.0` - Templating engine
- `pydantic>=2.0.0` - Data validation


@@ -0,0 +1,18 @@
"""
Art-DAG Common Library
Shared components for L1 (celery) and L2 (activity-pub) servers.
"""
from .constants import NODE_COLORS, TAILWIND_CDN, HTMX_CDN, CYTOSCAPE_CDN
from .rendering import create_jinja_env, render, render_fragment
__all__ = [
"NODE_COLORS",
"TAILWIND_CDN",
"HTMX_CDN",
"CYTOSCAPE_CDN",
"create_jinja_env",
"render",
"render_fragment",
]


@@ -0,0 +1,76 @@
"""
Shared constants for Art-DAG servers.
"""
# CDN URLs
TAILWIND_CDN = "https://cdn.tailwindcss.com?plugins=typography"
HTMX_CDN = "https://unpkg.com/htmx.org@1.9.10"
CYTOSCAPE_CDN = "https://cdnjs.cloudflare.com/ajax/libs/cytoscape/3.28.1/cytoscape.min.js"
DAGRE_CDN = "https://cdnjs.cloudflare.com/ajax/libs/dagre/0.8.5/dagre.min.js"
CYTOSCAPE_DAGRE_CDN = "https://cdn.jsdelivr.net/npm/cytoscape-dagre@2.5.0/cytoscape-dagre.min.js"
# Node colors for DAG visualization
NODE_COLORS = {
"SOURCE": "#3b82f6", # Blue - input sources
"EFFECT": "#22c55e", # Green - processing effects
"OUTPUT": "#a855f7", # Purple - final outputs
"ANALYSIS": "#f59e0b", # Amber - analysis nodes
"_LIST": "#6366f1", # Indigo - list aggregation
"default": "#6b7280", # Gray - unknown types
}
# Status colors
STATUS_COLORS = {
"completed": "bg-green-600",
"cached": "bg-blue-600",
"running": "bg-yellow-600",
"pending": "bg-gray-600",
"failed": "bg-red-600",
}
# Tailwind dark theme configuration
TAILWIND_CONFIG = """
<script>
tailwind.config = {
darkMode: 'class',
theme: {
extend: {
colors: {
dark: {
600: '#374151',
700: '#1f2937',
800: '#111827',
900: '#030712',
}
}
}
}
}
</script>
<style type="text/tailwindcss">
@layer utilities {
.prose-invert {
--tw-prose-body: #d1d5db;
--tw-prose-headings: #f9fafb;
--tw-prose-lead: #9ca3af;
--tw-prose-links: #60a5fa;
--tw-prose-bold: #f9fafb;
--tw-prose-counters: #9ca3af;
--tw-prose-bullets: #6b7280;
--tw-prose-hr: #374151;
--tw-prose-quotes: #f3f4f6;
--tw-prose-quote-borders: #374151;
--tw-prose-captions: #9ca3af;
--tw-prose-code: #f9fafb;
--tw-prose-pre-code: #e5e7eb;
--tw-prose-pre-bg: #1f2937;
--tw-prose-th-borders: #4b5563;
--tw-prose-td-borders: #374151;
}
}
</style>
"""
# Default pagination settings
DEFAULT_PAGE_SIZE = 20
MAX_PAGE_SIZE = 100


@@ -0,0 +1,91 @@
"""Fragment client for fetching HTML fragments from coop apps.
Lightweight httpx-based client (no Quart dependency) for Art-DAG to consume
coop app fragments like nav-tree, auth-menu, and cart-mini.
"""
from __future__ import annotations
import asyncio
import logging
import os
from typing import Sequence
import httpx
log = logging.getLogger(__name__)
FRAGMENT_HEADER = "X-Fragment-Request"
_client: httpx.AsyncClient | None = None
_DEFAULT_TIMEOUT = 2.0
def _get_client() -> httpx.AsyncClient:
global _client
if _client is None or _client.is_closed:
_client = httpx.AsyncClient(
timeout=httpx.Timeout(_DEFAULT_TIMEOUT),
follow_redirects=False,
)
return _client
def _internal_url(app_name: str) -> str:
"""Resolve internal base URL for a coop app.
Looks up ``INTERNAL_URL_{APP}`` first, falls back to ``http://{app}:8000``.
"""
env_key = f"INTERNAL_URL_{app_name.upper()}"
return os.getenv(env_key, f"http://{app_name}:8000").rstrip("/")
async def fetch_fragment(
app_name: str,
fragment_type: str,
*,
params: dict | None = None,
timeout: float = _DEFAULT_TIMEOUT,
required: bool = False,
) -> str:
"""Fetch an HTML fragment from a coop app.
Returns empty string on failure by default (required=False).
"""
base = _internal_url(app_name)
url = f"{base}/internal/fragments/{fragment_type}"
try:
resp = await _get_client().get(
url,
params=params,
headers={FRAGMENT_HEADER: "1"},
timeout=timeout,
)
if resp.status_code == 200:
return resp.text
msg = f"Fragment {app_name}/{fragment_type} returned {resp.status_code}"
log.warning(msg)
if required:
raise RuntimeError(msg)
return ""
except RuntimeError:
raise
except Exception as exc:
msg = f"Fragment {app_name}/{fragment_type} failed: {exc}"
log.warning(msg)
if required:
raise RuntimeError(msg) from exc
return ""
async def fetch_fragments(
requests: Sequence[tuple[str, str, dict | None]],
*,
timeout: float = _DEFAULT_TIMEOUT,
required: bool = False,
) -> list[str]:
"""Fetch multiple fragments concurrently."""
return list(await asyncio.gather(*(
fetch_fragment(app, ftype, params=params, timeout=timeout, required=required)
for app, ftype, params in requests
)))


@@ -0,0 +1,16 @@
"""
Middleware and FastAPI dependencies for Art-DAG servers.
"""
from .auth import UserContext, get_user_from_cookie, get_user_from_header, require_auth
from .content_negotiation import wants_html, wants_json, ContentType
__all__ = [
"UserContext",
"get_user_from_cookie",
"get_user_from_header",
"require_auth",
"wants_html",
"wants_json",
"ContentType",
]


@@ -0,0 +1,276 @@
"""
Authentication middleware and dependencies.
Provides common authentication patterns for L1 and L2 servers.
Each server can extend or customize these as needed.
"""
from dataclasses import dataclass
from typing import Callable, Optional, Awaitable, Any
import base64
import json
from fastapi import Request, HTTPException, Depends
from fastapi.responses import RedirectResponse
@dataclass
class UserContext:
"""User context extracted from authentication."""
username: str
actor_id: str # Full actor ID like "@user@server.com"
token: Optional[str] = None
l2_server: Optional[str] = None # L2 server URL for this user
email: Optional[str] = None # User's email address
@property
def display_name(self) -> str:
"""Get display name (username without @ prefix)."""
return self.username.lstrip("@")
def get_user_from_cookie(request: Request) -> Optional[UserContext]:
"""
Extract user context from session cookie.
Supports two cookie formats:
1. artdag_session: base64-encoded JSON {"username": "user", "actor_id": "@user@server.com"}
2. auth_token: raw JWT token (used by L1 servers)
Args:
request: FastAPI request
Returns:
UserContext if valid cookie found, None otherwise
"""
# Try artdag_session cookie first (base64-encoded JSON)
cookie = request.cookies.get("artdag_session")
if cookie:
try:
data = json.loads(base64.b64decode(cookie))
username = data.get("username", "")
actor_id = data.get("actor_id", "")
if not actor_id and username:
actor_id = f"@{username}"
return UserContext(
username=username,
actor_id=actor_id,
email=data.get("email", ""),
)
except (json.JSONDecodeError, ValueError, KeyError):
pass
# Try auth_token cookie (raw JWT, used by L1)
token = request.cookies.get("auth_token")
if token:
claims = decode_jwt_claims(token)
if claims:
username = claims.get("username") or claims.get("sub", "")
actor_id = claims.get("actor_id") or claims.get("actor")
if not actor_id and username:
actor_id = f"@{username}"
return UserContext(
username=username,
actor_id=actor_id or "",
token=token,
email=claims.get("email", ""),
)
return None
def get_user_from_header(request: Request) -> Optional[UserContext]:
"""
Extract user context from Authorization header.
Supports:
- Bearer <token> format (JWT or opaque token)
- Basic <base64(user:pass)> format
Args:
request: FastAPI request
Returns:
UserContext if valid header found, None otherwise
"""
auth_header = request.headers.get("Authorization", "")
if auth_header.startswith("Bearer "):
token = auth_header[7:]
# Attempt to decode JWT claims
claims = decode_jwt_claims(token)
if claims:
username = claims.get("username") or claims.get("sub", "")
actor_id = claims.get("actor_id") or claims.get("actor")
# Default actor_id to @username if not provided
if not actor_id and username:
actor_id = f"@{username}"
return UserContext(
username=username,
actor_id=actor_id or "",
token=token,
)
return None
def decode_jwt_claims(token: str) -> Optional[dict]:
"""
Decode JWT claims without verification.
This is useful for extracting user info from a token
when full verification is handled elsewhere.
Args:
token: JWT token string
Returns:
Claims dict if valid JWT format, None otherwise
"""
try:
parts = token.split(".")
if len(parts) != 3:
return None
# Decode payload (second part)
payload = parts[1]
# Add padding if needed
padding = 4 - len(payload) % 4
if padding != 4:
payload += "=" * padding
return json.loads(base64.urlsafe_b64decode(payload))
except (json.JSONDecodeError, ValueError):
return None
def create_auth_dependency(
token_validator: Optional[Callable[[str], Awaitable[Optional[dict]]]] = None,
allow_cookie: bool = True,
allow_header: bool = True,
):
"""
Create a customized auth dependency for a specific server.
Args:
token_validator: Optional async function to validate tokens with backend
allow_cookie: Whether to check cookies for auth
allow_header: Whether to check Authorization header
Returns:
FastAPI dependency function
"""
async def get_current_user(request: Request) -> Optional[UserContext]:
ctx = None
# Try header first (API clients)
if allow_header:
ctx = get_user_from_header(request)
if ctx and token_validator:
# Validate token with backend
validated = await token_validator(ctx.token)
if not validated:
ctx = None
# Fall back to cookie (browser)
if ctx is None and allow_cookie:
ctx = get_user_from_cookie(request)
return ctx
return get_current_user
async def require_auth(request: Request) -> UserContext:
"""
Dependency that requires authentication.
Raises HTTPException 401 if not authenticated.
Use with Depends() in route handlers.
Example:
@app.get("/protected")
async def protected_route(user: UserContext = Depends(require_auth)):
return {"user": user.username}
"""
# Try header first
ctx = get_user_from_header(request)
if ctx is None:
ctx = get_user_from_cookie(request)
if ctx is None:
# Check Accept header to determine response type
accept = request.headers.get("accept", "")
if "text/html" in accept:
raise HTTPException(
status_code=302,
headers={"Location": "/login"}
)
raise HTTPException(
status_code=401,
detail="Authentication required"
)
return ctx
def require_owner(resource_owner_field: str = "username"):
"""
Dependency factory that requires the user to own the resource.
Args:
resource_owner_field: Field name on the resource that contains owner username
Returns:
Dependency function
Example:
@app.delete("/items/{item_id}")
async def delete_item(
item: Item = Depends(get_item),
user: UserContext = Depends(require_owner("created_by"))
):
...
"""
async def check_ownership(
request: Request,
user: UserContext = Depends(require_auth),
) -> UserContext:
# The actual ownership check must be done in the route
# after fetching the resource
return user
return check_ownership
def set_auth_cookie(response: Any, user: UserContext, max_age: int = 86400 * 30) -> None:
"""
Set authentication cookie on response.
Args:
response: FastAPI response object
user: User context to store
max_age: Cookie max age in seconds (default 30 days)
"""
cookie_data = {
"username": user.username,
"actor_id": user.actor_id,
}
if user.email:
cookie_data["email"] = user.email
data = json.dumps(cookie_data)
cookie_value = base64.b64encode(data.encode()).decode()
response.set_cookie(
key="artdag_session",
value=cookie_value,
max_age=max_age,
httponly=True,
samesite="lax",
secure=True, # Require HTTPS in production
)
def clear_auth_cookie(response: Any) -> None:
"""Clear authentication cookie."""
response.delete_cookie(key="artdag_session")


@@ -0,0 +1,174 @@
"""
Content negotiation utilities.
Helps determine what response format the client wants.
"""
from enum import Enum
from typing import Optional
from fastapi import Request
class ContentType(Enum):
"""Response content types."""
HTML = "text/html"
JSON = "application/json"
ACTIVITY_JSON = "application/activity+json"
XML = "application/xml"
def wants_html(request: Request) -> bool:
"""
Check if the client wants HTML response.
Returns True if:
- Accept header contains text/html
- Accept header contains application/xhtml+xml
- No Accept header (browser default)
Args:
request: FastAPI request
Returns:
True if HTML is preferred
"""
accept = request.headers.get("accept", "")
# No accept header usually means browser
if not accept:
return True
# Check for HTML preferences
if "text/html" in accept:
return True
if "application/xhtml" in accept:
return True
return False
def wants_json(request: Request) -> bool:
"""
Check if the client wants JSON response.
Returns True if:
- Accept header contains application/json
- Accept header does NOT contain text/html
- Request path ends with `.json` (convention)
Args:
request: FastAPI request
Returns:
True if JSON is preferred
"""
accept = request.headers.get("accept", "")
# Explicit JSON preference
if "application/json" in accept:
# But not if HTML is also requested (browsers often send both)
if "text/html" not in accept:
return True
# Check URL suffix convention
if request.url.path.endswith(".json"):
return True
return False
def wants_activity_json(request: Request) -> bool:
"""
Check if the client wants ActivityPub JSON-LD response.
Used for federation with other ActivityPub servers.
Args:
request: FastAPI request
Returns:
True if ActivityPub format is preferred
"""
accept = request.headers.get("accept", "")
if "application/activity+json" in accept:
return True
if "application/ld+json" in accept:
return True
return False
def get_preferred_type(request: Request) -> ContentType:
"""
Determine the preferred content type from Accept header.
Args:
request: FastAPI request
Returns:
ContentType enum value
"""
if wants_activity_json(request):
return ContentType.ACTIVITY_JSON
if wants_json(request):
return ContentType.JSON
return ContentType.HTML
def is_htmx_request(request: Request) -> bool:
"""
Check if this is an HTMX request (partial page update).
HTMX requests set the HX-Request header.
Args:
request: FastAPI request
Returns:
True if this is an HTMX request
"""
return request.headers.get("HX-Request") == "true"
def get_htmx_target(request: Request) -> Optional[str]:
"""
Get the HTMX target element ID.
Args:
request: FastAPI request
Returns:
Target element ID or None
"""
return request.headers.get("HX-Target")
def get_htmx_trigger(request: Request) -> Optional[str]:
"""
Get the HTMX trigger element ID.
Args:
request: FastAPI request
Returns:
Trigger element ID or None
"""
return request.headers.get("HX-Trigger")
def is_ios_request(request: Request) -> bool:
"""
Check if request is from iOS device.
Useful for video format selection (iOS prefers MP4).
Args:
request: FastAPI request
Returns:
True if iOS user agent detected
"""
user_agent = request.headers.get("user-agent", "").lower()
return "iphone" in user_agent or "ipad" in user_agent


@@ -0,0 +1,25 @@
"""
Shared Pydantic models for Art-DAG servers.
"""
from .requests import (
PaginationParams,
PublishRequest,
StorageConfigRequest,
MetadataUpdateRequest,
)
from .responses import (
PaginatedResponse,
ErrorResponse,
SuccessResponse,
)
__all__ = [
"PaginationParams",
"PublishRequest",
"StorageConfigRequest",
"MetadataUpdateRequest",
"PaginatedResponse",
"ErrorResponse",
"SuccessResponse",
]


@@ -0,0 +1,74 @@
"""
Request models shared across L1 and L2 servers.
"""
from typing import Optional, List, Dict, Any
from pydantic import BaseModel, Field
from ..constants import DEFAULT_PAGE_SIZE, MAX_PAGE_SIZE
class PaginationParams(BaseModel):
"""Common pagination parameters."""
page: int = Field(default=1, ge=1, description="Page number (1-indexed)")
limit: int = Field(
default=DEFAULT_PAGE_SIZE,
ge=1,
le=MAX_PAGE_SIZE,
description="Items per page"
)
@property
def offset(self) -> int:
"""Calculate offset for database queries."""
return (self.page - 1) * self.limit
class PublishRequest(BaseModel):
"""Request to publish content to L2/storage."""
name: str = Field(..., min_length=1, max_length=255)
description: Optional[str] = Field(default=None, max_length=2000)
tags: List[str] = Field(default_factory=list)
storage_id: Optional[str] = Field(default=None, description="Target storage provider")
class MetadataUpdateRequest(BaseModel):
"""Request to update content metadata."""
name: Optional[str] = Field(default=None, max_length=255)
description: Optional[str] = Field(default=None, max_length=2000)
tags: Optional[List[str]] = Field(default=None)
metadata: Optional[Dict[str, Any]] = Field(default=None)
class StorageConfigRequest(BaseModel):
"""Request to configure a storage provider."""
provider_type: str = Field(..., description="Provider type (pinata, web3storage, local, etc.)")
name: str = Field(..., min_length=1, max_length=100)
api_key: Optional[str] = Field(default=None)
api_secret: Optional[str] = Field(default=None)
endpoint: Optional[str] = Field(default=None)
config: Optional[Dict[str, Any]] = Field(default_factory=dict)
is_default: bool = Field(default=False)
class RecipeRunRequest(BaseModel):
"""Request to run a recipe."""
recipe_id: str = Field(..., description="Recipe content hash or ID")
inputs: Dict[str, str] = Field(..., description="Map of input name to content hash")
features: List[str] = Field(
default=["beats", "energy"],
description="Analysis features to extract"
)
class PlanRequest(BaseModel):
"""Request to generate an execution plan."""
recipe_yaml: str = Field(..., description="Recipe YAML content")
input_hashes: Dict[str, str] = Field(..., description="Map of input name to content hash")
features: List[str] = Field(default=["beats", "energy"])
class ExecutePlanRequest(BaseModel):
"""Request to execute a generated plan."""
plan_json: str = Field(..., description="JSON-serialized execution plan")
run_id: Optional[str] = Field(default=None, description="Optional run ID for tracking")
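The `offset` property on `PaginationParams` maps 1-indexed page numbers to database row offsets. A dependency-free sketch (a plain dataclass stands in for the Pydantic model; the `Field` bounds validation is omitted):

```python
from dataclasses import dataclass


# Plain-dataclass stand-in for PaginationParams; same offset arithmetic,
# without Pydantic's ge/le bounds checking.
@dataclass
class Pagination:
    page: int = 1
    limit: int = 50

    @property
    def offset(self) -> int:
        # Page 1 starts at row 0, page 2 at row `limit`, and so on
        return (self.page - 1) * self.limit


assert Pagination(page=1, limit=50).offset == 0
assert Pagination(page=3, limit=20).offset == 40
```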

View File

@@ -0,0 +1,96 @@
"""
Response models shared across L1 and L2 servers.
"""
from typing import Optional, List, Dict, Any, Generic, TypeVar
from pydantic import BaseModel, Field
T = TypeVar("T")
class PaginatedResponse(BaseModel, Generic[T]):
"""Generic paginated response."""
data: List[T] = Field(default_factory=list)
pagination: Dict[str, Any] = Field(default_factory=dict)
@classmethod
def create(
cls,
items: List[T],
page: int,
limit: int,
total: int,
) -> "PaginatedResponse[T]":
"""Create a paginated response."""
return cls(
data=items,
pagination={
"page": page,
"limit": limit,
"total": total,
"has_more": page * limit < total,
"total_pages": (total + limit - 1) // limit,
}
)
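The metadata that `create()` derives from `page`, `limit`, and `total` is worth seeing in isolation: `has_more` compares rows consumed so far against the total, and `total_pages` is a ceiling division.

```python
# The pagination arithmetic from PaginatedResponse.create(), extracted:
def pagination_meta(page: int, limit: int, total: int) -> dict:
    return {
        "page": page,
        "limit": limit,
        "total": total,
        # More items exist if rows consumed so far fall short of total
        "has_more": page * limit < total,
        # Ceiling division without math.ceil
        "total_pages": (total + limit - 1) // limit,
    }


meta = pagination_meta(page=2, limit=10, total=25)
# → {'page': 2, 'limit': 10, 'total': 25, 'has_more': True, 'total_pages': 3}
```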
class ErrorResponse(BaseModel):
"""Standard error response."""
error: str = Field(..., description="Error message")
detail: Optional[str] = Field(default=None, description="Detailed error info")
code: Optional[str] = Field(default=None, description="Error code")
class SuccessResponse(BaseModel):
"""Standard success response."""
success: bool = Field(default=True)
message: Optional[str] = Field(default=None)
data: Optional[Dict[str, Any]] = Field(default=None)
class RunStatus(BaseModel):
"""Run execution status."""
run_id: str
status: str = Field(..., description="pending, running, completed, failed")
recipe: Optional[str] = None
plan_id: Optional[str] = None
output_hash: Optional[str] = None
output_ipfs_cid: Optional[str] = None
total_steps: int = 0
cached_steps: int = 0
completed_steps: int = 0
error: Optional[str] = None
class CacheItemResponse(BaseModel):
"""Cached content item response."""
content_hash: str
media_type: Optional[str] = None
size: Optional[int] = None
name: Optional[str] = None
description: Optional[str] = None
tags: List[str] = Field(default_factory=list)
ipfs_cid: Optional[str] = None
created_at: Optional[str] = None
class RecipeResponse(BaseModel):
"""Recipe response."""
recipe_id: str
name: str
description: Optional[str] = None
inputs: List[Dict[str, Any]] = Field(default_factory=list)
outputs: List[str] = Field(default_factory=list)
node_count: int = 0
created_at: Optional[str] = None
class StorageProviderResponse(BaseModel):
"""Storage provider configuration response."""
storage_id: str
provider_type: str
name: str
is_default: bool = False
is_connected: bool = False
usage_bytes: Optional[int] = None
pin_count: int = 0

View File

@@ -0,0 +1,160 @@
"""
Jinja2 template rendering system for Art-DAG servers.
Provides a unified template environment that can load from:
1. The shared artdag_common/templates directory
2. App-specific template directories
Usage:
from artdag_common import create_jinja_env, render
# In app initialization
templates = create_jinja_env("app/templates")
# In route handler
return render(templates, "runs/detail.html", request, run=run, user=user)
"""
from pathlib import Path
from typing import Any, Optional, Union
from fastapi import Request
from fastapi.responses import HTMLResponse
from jinja2 import Environment, ChoiceLoader, FileSystemLoader, PackageLoader, select_autoescape
from .constants import (
TAILWIND_CDN,
HTMX_CDN,
CYTOSCAPE_CDN,
DAGRE_CDN,
CYTOSCAPE_DAGRE_CDN,
TAILWIND_CONFIG,
NODE_COLORS,
STATUS_COLORS,
)
def create_jinja_env(*template_dirs: Union[str, Path]) -> Environment:
"""
Create a Jinja2 environment with the shared templates and optional app-specific dirs.
Args:
*template_dirs: Additional template directories to search (app-specific)
Returns:
Configured Jinja2 Environment
Example:
env = create_jinja_env("/app/templates", "/app/custom")
"""
loaders = []
# Add app-specific directories first (higher priority)
for template_dir in template_dirs:
path = Path(template_dir)
if path.exists():
loaders.append(FileSystemLoader(str(path)))
# Add shared templates from this package (lower priority, fallback)
loaders.append(PackageLoader("artdag_common", "templates"))
env = Environment(
loader=ChoiceLoader(loaders),
autoescape=select_autoescape(["html", "xml"]),
trim_blocks=True,
lstrip_blocks=True,
)
# Add global context available to all templates
env.globals.update({
"TAILWIND_CDN": TAILWIND_CDN,
"HTMX_CDN": HTMX_CDN,
"CYTOSCAPE_CDN": CYTOSCAPE_CDN,
"DAGRE_CDN": DAGRE_CDN,
"CYTOSCAPE_DAGRE_CDN": CYTOSCAPE_DAGRE_CDN,
"TAILWIND_CONFIG": TAILWIND_CONFIG,
"NODE_COLORS": NODE_COLORS,
"STATUS_COLORS": STATUS_COLORS,
})
# Add custom filters
env.filters["truncate_hash"] = truncate_hash
env.filters["format_size"] = format_size
env.filters["status_color"] = status_color
return env
def render(
env: Environment,
template_name: str,
request: Request,
status_code: int = 200,
**context: Any,
) -> HTMLResponse:
"""
Render a template to an HTMLResponse.
Args:
env: Jinja2 environment
template_name: Template file path (e.g., "runs/detail.html")
request: FastAPI request object
status_code: HTTP status code (default 200)
**context: Template context variables
Returns:
HTMLResponse with rendered content
"""
template = env.get_template(template_name)
html = template.render(request=request, **context)
return HTMLResponse(html, status_code=status_code)
def render_fragment(
env: Environment,
template_name: str,
**context: Any,
) -> str:
"""
Render a template fragment to a string (for HTMX partial updates).
Args:
env: Jinja2 environment
template_name: Template file path
**context: Template context variables
Returns:
Rendered HTML string
"""
template = env.get_template(template_name)
return template.render(**context)
# Custom Jinja2 filters
def truncate_hash(value: str, length: int = 16) -> str:
"""Truncate a hash to specified length with ellipsis."""
if not value:
return ""
if len(value) <= length:
return value
return f"{value[:length]}..."
def format_size(size_bytes: Optional[int]) -> str:
"""Format file size in human-readable form."""
if size_bytes is None:
return "Unknown"
if size_bytes < 1024:
return f"{size_bytes} B"
elif size_bytes < 1024 * 1024:
return f"{size_bytes / 1024:.1f} KB"
elif size_bytes < 1024 * 1024 * 1024:
return f"{size_bytes / (1024 * 1024):.1f} MB"
else:
return f"{size_bytes / (1024 * 1024 * 1024):.1f} GB"
def status_color(status: str) -> str:
"""Get Tailwind CSS class for a status."""
return STATUS_COLORS.get(status, STATUS_COLORS["pending"])
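The custom filters registered on the environment are plain functions, so their edge cases can be checked directly. Copied from the module above:

```python
from typing import Optional


def truncate_hash(value: str, length: int = 16) -> str:
    """Truncate a hash to specified length with ellipsis."""
    if not value:
        return ""
    if len(value) <= length:
        return value
    return f"{value[:length]}..."


def format_size(size_bytes: Optional[int]) -> str:
    """Format file size in human-readable form."""
    if size_bytes is None:
        return "Unknown"
    if size_bytes < 1024:
        return f"{size_bytes} B"
    elif size_bytes < 1024 * 1024:
        return f"{size_bytes / 1024:.1f} KB"
    elif size_bytes < 1024 * 1024 * 1024:
        return f"{size_bytes / (1024 * 1024):.1f} MB"
    else:
        return f"{size_bytes / (1024 * 1024 * 1024):.1f} GB"


assert truncate_hash("a" * 40) == "a" * 16 + "..."
assert truncate_hash("abc") == "abc"
assert format_size(1536) == "1.5 KB"
assert format_size(None) == "Unknown"
```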

View File

@@ -0,0 +1,96 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}Art-DAG{% endblock %}</title>
<!-- Tailwind CSS (same CDN as coop) -->
<script src="https://cdn.tailwindcss.com?plugins=typography"></script>
<script>
tailwind.config = {
theme: {
extend: {
colors: {
dark: {
600: '#374151',
700: '#1f2937',
800: '#111827',
900: '#030712',
}
}
}
}
}
</script>
<!-- HTMX -->
<script src="https://unpkg.com/htmx.org@2.0.8"></script>
<!-- Hyperscript (for nav-tree scrolling arrows) -->
<script src="https://unpkg.com/hyperscript.org@0.9.12"></script>
<!-- Font Awesome (for auth-menu + nav icons) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.1/css/all.min.css">
{% block head %}{% endblock %}
<style>
/* HTMX loading indicator */
.htmx-indicator { display: none; }
.htmx-request .htmx-indicator { display: inline-flex; }
img { max-width: 100%; height: auto; }
.no-scrollbar::-webkit-scrollbar { display: none; }
.no-scrollbar { -ms-overflow-style: none; scrollbar-width: none; }
.scrollbar-hide::-webkit-scrollbar { display: none; }
.scrollbar-hide { -ms-overflow-style: none; scrollbar-width: none; }
</style>
<script>
if (matchMedia('(hover: hover) and (pointer: fine)').matches) {
document.documentElement.classList.add('hover-capable');
}
</script>
</head>
<body class="bg-stone-50 text-stone-900 min-h-screen">
<div class="max-w-screen-2xl mx-auto py-1 px-1">
{% block header %}
{# Coop-style header: sky banner with title, nav-tree, auth-menu, cart-mini #}
<div class="w-full">
<div class="flex flex-col items-center md:flex-row justify-center md:justify-between w-full p-1 bg-sky-500">
<div class="w-full flex flex-row items-start">
{# Cart mini #}
{% block cart_mini %}{% endblock %}
{# Site title #}
<div class="font-bold text-5xl flex-1">
<a href="/" class="flex justify-center md:justify-start">
<h1>{% block brand %}Art-DAG{% endblock %}</h1>
</a>
</div>
{# Desktop nav: nav-tree + auth-menu #}
<nav class="hidden md:flex gap-4 text-sm ml-2 justify-end items-center flex-none">
{% block nav_tree %}{% endblock %}
{% block auth_menu %}{% endblock %}
</nav>
</div>
</div>
{# Mobile auth #}
<div class="block md:hidden text-md font-bold">
{% block auth_menu_mobile %}{% endblock %}
</div>
</div>
{% endblock %}
{# App-specific sub-nav (Runs, Recipes, Effects, etc.) #}
{% block sub_nav %}{% endblock %}
</div>{# close max-w-screen-2xl wrapper #}
<main class="bg-dark-800 text-gray-100 min-h-screen">
<div class="max-w-screen-2xl mx-auto px-4 py-4">
{% block content %}{% endblock %}
</div>
</main>
{% block footer %}{% endblock %}
{% block scripts %}{% endblock %}
</body>
</html>

View File

@@ -0,0 +1,64 @@
{#
Badge component for status and type indicators.
Usage:
{% from "components/badge.html" import badge, status_badge, type_badge %}
{{ badge("Active", "green") }}
{{ status_badge("completed") }}
{{ type_badge("EFFECT") }}
#}
{% macro badge(text, color="gray", class="") %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-{{ color }}-600/20 text-{{ color }}-400 {{ class }}">
{{ text }}
</span>
{% endmacro %}
{% macro status_badge(status, class="") %}
{% set colors = {
"completed": "green",
"cached": "blue",
"running": "yellow",
"pending": "gray",
"failed": "red",
"active": "green",
"inactive": "gray",
} %}
{% set color = colors.get(status, "gray") %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-{{ color }}-600/20 text-{{ color }}-400 {{ class }}">
{% if status == "running" %}
<svg class="animate-spin -ml-0.5 mr-1.5 h-3 w-3" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
{% endif %}
{{ status | capitalize }}
</span>
{% endmacro %}
{% macro type_badge(node_type, class="") %}
{% set colors = {
"SOURCE": "blue",
"EFFECT": "green",
"OUTPUT": "purple",
"ANALYSIS": "amber",
"_LIST": "indigo",
} %}
{% set color = colors.get(node_type, "gray") %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-{{ color }}-600/20 text-{{ color }}-400 {{ class }}">
{{ node_type }}
</span>
{% endmacro %}
{% macro role_badge(role, class="") %}
{% set colors = {
"input": "blue",
"output": "purple",
"intermediate": "gray",
} %}
{% set color = colors.get(role, "gray") %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-{{ color }}-600/20 text-{{ color }}-400 {{ class }}">
{{ role | capitalize }}
</span>
{% endmacro %}

View File

@@ -0,0 +1,45 @@
{#
Card component for displaying information.
Usage:
{% from "components/card.html" import card, stat_card, info_card %}
{{ stat_card("Status", "Active", class="col-span-2") }}
Or as a block:
{% call card(title="Details") %}
<p>Card content here</p>
{% endcall %}
#}
{% macro card(title=None, class="") %}
<div class="bg-dark-600 rounded-lg p-4 {{ class }}">
{% if title %}
<h3 class="text-sm font-medium text-gray-400 mb-2">{{ title }}</h3>
{% endif %}
<div class="text-white">
{{ caller() if caller else "" }}
</div>
</div>
{% endmacro %}
{% macro stat_card(title, value, color="white", class="") %}
<div class="bg-dark-600 rounded-lg p-4 text-center {{ class }}">
<div class="text-2xl font-bold text-{{ color }}-400">{{ value }}</div>
<div class="text-sm text-gray-400">{{ title }}</div>
</div>
{% endmacro %}
{% macro info_card(title, items, class="") %}
<div class="bg-dark-600 rounded-lg p-4 {{ class }}">
{% if title %}
<h3 class="text-sm font-medium text-gray-400 mb-3">{{ title }}</h3>
{% endif %}
<dl class="space-y-2">
{% for label, value in items %}
<div class="flex justify-between">
<dt class="text-gray-400">{{ label }}</dt>
<dd class="text-white font-mono text-sm">{{ value }}</dd>
</div>
{% endfor %}
</dl>
</div>
{% endmacro %}

View File

@@ -0,0 +1,176 @@
{#
Cytoscape.js DAG visualization component.
Usage:
{% from "components/dag.html" import dag_container, dag_scripts, dag_legend %}
In the head block:
{{ dag_scripts() }}
In the content block:
{{ dag_container(id="plan-dag", height="400px") }}
{{ dag_legend() }}
In the scripts block:
<script>
initDag('plan-dag', {{ nodes | tojson }}, {{ edges | tojson }});
</script>
#}
{% macro dag_scripts() %}
<script src="{{ CYTOSCAPE_CDN }}"></script>
<script src="{{ DAGRE_CDN }}"></script>
<script src="{{ CYTOSCAPE_DAGRE_CDN }}"></script>
<script>
// Global Cytoscape instance for WebSocket updates
window.artdagCy = null;
function initDag(containerId, nodes, edges) {
const nodeColors = {{ NODE_COLORS | tojson }};
window.artdagCy = cytoscape({
container: document.getElementById(containerId),
elements: {
nodes: nodes,
edges: edges
},
style: [
{
selector: 'node',
style: {
'label': 'data(label)',
'text-valign': 'center',
'text-halign': 'center',
'background-color': function(ele) {
return nodeColors[ele.data('nodeType')] || nodeColors['default'];
},
'color': '#fff',
'font-size': '10px',
'width': 80,
'height': 40,
'shape': 'round-rectangle',
'text-wrap': 'wrap',
'text-max-width': '70px',
}
},
{
selector: 'node[status="cached"], node[status="completed"]',
style: {
'border-width': 3,
'border-color': '#22c55e'
}
},
{
selector: 'node[status="running"]',
style: {
'border-width': 3,
'border-color': '#eab308',
'border-style': 'dashed'
}
},
{
selector: 'node:selected',
style: {
'border-width': 3,
'border-color': '#3b82f6'
}
},
{
selector: 'edge',
style: {
'width': 2,
'line-color': '#6b7280',
'target-arrow-color': '#6b7280',
'target-arrow-shape': 'triangle',
'curve-style': 'bezier'
}
}
],
layout: {
name: 'dagre',
rankDir: 'TB',
nodeSep: 50,
rankSep: 80,
padding: 20
},
userZoomingEnabled: true,
userPanningEnabled: true,
boxSelectionEnabled: false
});
// Click handler for node details
window.artdagCy.on('tap', 'node', function(evt) {
const node = evt.target;
const data = node.data();
showNodeDetails(data);
});
return window.artdagCy;
}
function showNodeDetails(data) {
const panel = document.getElementById('node-details');
if (!panel) return;
panel.innerHTML = `
<h4 class="font-medium text-white mb-2">${data.label || data.id}</h4>
<dl class="space-y-1 text-sm">
<div class="flex justify-between">
<dt class="text-gray-400">Type</dt>
<dd class="text-white">${data.nodeType || 'Unknown'}</dd>
</div>
<div class="flex justify-between">
<dt class="text-gray-400">Status</dt>
<dd class="text-white">${data.status || 'pending'}</dd>
</div>
${data.cacheId ? `
<div class="flex justify-between">
<dt class="text-gray-400">Cache ID</dt>
<dd class="text-white font-mono text-xs">${data.cacheId.substring(0, 16)}...</dd>
</div>
` : ''}
${data.level !== undefined ? `
<div class="flex justify-between">
<dt class="text-gray-400">Level</dt>
<dd class="text-white">${data.level}</dd>
</div>
` : ''}
</dl>
`;
panel.classList.remove('hidden');
}
// Future WebSocket support: update node status in real-time
function updateNodeStatus(stepId, status, cacheId) {
if (!window.artdagCy) return;
const node = window.artdagCy.getElementById(stepId);
if (node && node.length > 0) {
node.data('status', status);
if (cacheId) {
node.data('cacheId', cacheId);
}
}
}
</script>
{% endmacro %}
{% macro dag_container(id="dag-container", height="400px", class="") %}
<div id="{{ id }}" class="w-full bg-dark-700 rounded-lg {{ class }}" style="height: {{ height }};"></div>
<div id="node-details" class="hidden mt-4 p-4 bg-dark-600 rounded-lg"></div>
{% endmacro %}
{% macro dag_legend(node_types=None) %}
{% set types = node_types or ["SOURCE", "EFFECT", "_LIST"] %}
<div class="flex gap-4 text-sm flex-wrap mt-4">
{% for type in types %}
<span class="flex items-center gap-2">
<span class="w-4 h-4 rounded" style="background-color: {{ NODE_COLORS.get(type, NODE_COLORS.default) }}"></span>
{{ type }}
</span>
{% endfor %}
<span class="flex items-center gap-2">
<span class="w-4 h-4 rounded border-2 border-green-500 bg-dark-600"></span>
Cached
</span>
</div>
{% endmacro %}
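`initDag` expects `nodes` and `edges` whose `data` fields match what the style callbacks read (`label`, `nodeType`, `status`, and optionally `cacheId`). A hedged sketch of building that payload server-side — the `steps`/`deps` shapes are hypothetical illustration data, only the Cytoscape element format comes from the script above:

```python
import json

# Illustration data; the real route would derive this from a plan.
steps = [
    {"id": "src1", "label": "clip.mp4", "type": "SOURCE", "status": "cached"},
    {"id": "fx1", "label": "color-grade", "type": "EFFECT", "status": "running"},
]
deps = [("src1", "fx1")]

# Cytoscape elements: each node/edge wraps its fields in a "data" dict.
nodes = [
    {"data": {"id": s["id"], "label": s["label"],
              "nodeType": s["type"], "status": s["status"]}}
    for s in steps
]
edges = [
    {"data": {"id": f"{a}->{b}", "source": a, "target": b}}
    for a, b in deps
]

# Serialized the way the template's `| tojson` filter would emit it
payload = json.dumps({"nodes": nodes, "edges": edges})
```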

View File

@@ -0,0 +1,98 @@
{#
Media preview component for videos, images, and audio.
Usage:
{% from "components/media_preview.html" import media_preview, video_player, image_preview, audio_player %}
{{ media_preview(content_hash, media_type, title="Preview") }}
{{ video_player(src="/cache/abc123/mp4", poster="/cache/abc123/thumb") }}
#}
{% macro media_preview(content_hash, media_type, title=None, class="", show_download=True) %}
<div class="bg-dark-600 rounded-lg overflow-hidden {{ class }}">
{% if title %}
<div class="px-4 py-2 border-b border-dark-500">
<h3 class="text-sm font-medium text-gray-400">{{ title }}</h3>
</div>
{% endif %}
<div class="aspect-video bg-dark-700 flex items-center justify-center">
{% if media_type == "video" %}
{{ video_player("/cache/" + content_hash + "/mp4") }}
{% elif media_type == "image" %}
{{ image_preview("/cache/" + content_hash + "/raw") }}
{% elif media_type == "audio" %}
{{ audio_player("/cache/" + content_hash + "/raw") }}
{% else %}
<div class="text-gray-400 text-center p-4">
<svg class="w-12 h-12 mx-auto mb-2" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12h6m-6 4h6m2 5H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
<p>Preview not available</p>
</div>
{% endif %}
</div>
{% if show_download %}
<div class="px-4 py-2 border-t border-dark-500">
<a href="/cache/{{ content_hash }}/raw" download
class="text-blue-400 hover:text-blue-300 text-sm">
Download original
</a>
</div>
{% endif %}
</div>
{% endmacro %}
{% macro video_player(src, poster=None, autoplay=False, muted=True, loop=False, class="") %}
<video
class="w-full h-full object-contain {{ class }}"
controls
playsinline
{% if poster %}poster="{{ poster }}"{% endif %}
{% if autoplay %}autoplay{% endif %}
{% if muted %}muted{% endif %}
{% if loop %}loop{% endif %}
>
<source src="{{ src }}" type="video/mp4">
Your browser does not support the video tag.
</video>
{% endmacro %}
{% macro image_preview(src, alt="", class="") %}
<img
src="{{ src }}"
alt="{{ alt }}"
class="w-full h-full object-contain {{ class }}"
loading="lazy"
>
{% endmacro %}
{% macro audio_player(src, class="") %}
<div class="w-full px-4 {{ class }}">
<audio controls class="w-full">
<source src="{{ src }}">
Your browser does not support the audio element.
</audio>
</div>
{% endmacro %}
{% macro thumbnail(content_hash, media_type, size="w-24 h-24", class="") %}
<div class="bg-dark-700 rounded {{ size }} flex items-center justify-center overflow-hidden {{ class }}">
{% if media_type == "image" %}
<img src="/cache/{{ content_hash }}/raw" alt="" class="w-full h-full object-cover" loading="lazy">
{% elif media_type == "video" %}
<svg class="w-8 h-8 text-gray-400" fill="currentColor" viewBox="0 0 24 24">
<path d="M8 5v14l11-7z"/>
</svg>
{% elif media_type == "audio" %}
<svg class="w-8 h-8 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 19V6l12-3v13M9 19c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zm12-3c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zM9 10l12-3"/>
</svg>
{% else %}
<svg class="w-8 h-8 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12h6m-6 4h6m2 5H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/>
</svg>
{% endif %}
</div>
{% endmacro %}

View File

@@ -0,0 +1,82 @@
{#
Pagination component with HTMX infinite scroll support.
Usage:
{% from "components/pagination.html" import infinite_scroll_trigger, page_links %}
Infinite scroll (HTMX):
{{ infinite_scroll_trigger(url="/items?page=2", colspan=3, has_more=True) }}
Traditional pagination:
{{ page_links(current_page=1, total_pages=5, base_url="/items") }}
#}
{% macro infinite_scroll_trigger(url, colspan=1, has_more=True, target=None) %}
{% if has_more %}
<tr hx-get="{{ url }}"
hx-trigger="revealed"
hx-swap="afterend"
{% if target %}hx-target="{{ target }}"{% endif %}
class="htmx-indicator-row">
<td colspan="{{ colspan }}" class="text-center py-4">
<span class="text-gray-400 htmx-indicator">
<svg class="animate-spin h-5 w-5 inline mr-2" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
Loading more...
</span>
</td>
</tr>
{% endif %}
{% endmacro %}
{% macro page_links(current_page, total_pages, base_url, class="") %}
<nav class="flex items-center justify-center space-x-2 {{ class }}">
{# Previous button #}
{% if current_page > 1 %}
<a href="{{ base_url }}?page={{ current_page - 1 }}"
class="px-3 py-2 rounded-lg bg-dark-600 text-gray-300 hover:bg-dark-500 transition-colors">
&larr; Previous
</a>
{% else %}
<span class="px-3 py-2 rounded-lg bg-dark-700 text-gray-500 cursor-not-allowed">
&larr; Previous
</span>
{% endif %}
{# Page numbers #}
<div class="flex items-center space-x-1">
{% for page in range(1, total_pages + 1) %}
{% if page == current_page %}
<span class="px-3 py-2 rounded-lg bg-blue-600 text-white">{{ page }}</span>
{% elif page == 1 or page == total_pages or (page >= current_page - 2 and page <= current_page + 2) %}
<a href="{{ base_url }}?page={{ page }}"
class="px-3 py-2 rounded-lg bg-dark-600 text-gray-300 hover:bg-dark-500 transition-colors">
{{ page }}
</a>
{% elif page == current_page - 3 or page == current_page + 3 %}
<span class="px-2 text-gray-500">...</span>
{% endif %}
{% endfor %}
</div>
{# Next button #}
{% if current_page < total_pages %}
<a href="{{ base_url }}?page={{ current_page + 1 }}"
class="px-3 py-2 rounded-lg bg-dark-600 text-gray-300 hover:bg-dark-500 transition-colors">
Next &rarr;
</a>
{% else %}
<span class="px-3 py-2 rounded-lg bg-dark-700 text-gray-500 cursor-not-allowed">
Next &rarr;
</span>
{% endif %}
</nav>
{% endmacro %}
{% macro page_info(page, limit, total) %}
<div class="text-sm text-gray-400">
Showing {{ (page - 1) * limit + 1 }}-{{ [page * limit, total] | min }} of {{ total }}
</div>
{% endmacro %}
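The `page_links` macro always shows the first and last pages, a ±2 window around the current page, and an ellipsis at the ±3 boundaries. The same selection logic in plain Python, for checking which links render:

```python
# Mirrors the page-number branch in the page_links macro.
def visible_pages(current: int, total: int) -> list:
    out = []
    for page in range(1, total + 1):
        if page == current:
            out.append(page)
        elif page == 1 or page == total or (current - 2 <= page <= current + 2):
            out.append(page)
        elif page == current - 3 or page == current + 3:
            out.append("...")
    return out


print(visible_pages(current=7, total=12))
# → [1, '...', 5, 6, 7, 8, 9, '...', 12]
```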

View File

@@ -0,0 +1,51 @@
{#
Table component with dark theme styling.
Usage:
{% from "components/table.html" import table, table_row %}
{% call table(columns=["Name", "Status", "Actions"]) %}
{% for item in items %}
{{ table_row([item.name, item.status, actions_html]) }}
{% endfor %}
{% endcall %}
#}
{% macro table(columns, class="", id="") %}
<div class="overflow-x-auto {{ class }}" {% if id %}id="{{ id }}"{% endif %}>
<table class="w-full text-sm">
<thead class="text-gray-400 border-b border-dark-600">
<tr>
{% for col in columns %}
<th class="text-left py-3 px-4 font-medium">{{ col }}</th>
{% endfor %}
</tr>
</thead>
<tbody class="divide-y divide-dark-600">
{{ caller() }}
</tbody>
</table>
</div>
{% endmacro %}
{% macro table_row(cells, class="", href=None) %}
<tr class="hover:bg-dark-600/50 transition-colors {{ class }}">
{% for cell in cells %}
<td class="py-3 px-4">
{% if href and loop.first %}
<a href="{{ href }}" class="text-blue-400 hover:text-blue-300">{{ cell }}</a>
{% else %}
{{ cell | safe }}
{% endif %}
</td>
{% endfor %}
</tr>
{% endmacro %}
{% macro empty_row(colspan, message="No items found") %}
<tr>
<td colspan="{{ colspan }}" class="py-8 text-center text-gray-400">
{{ message }}
</td>
</tr>
{% endmacro %}

View File

@@ -0,0 +1,19 @@
"""
Utility functions shared across Art-DAG servers.
"""
from .pagination import paginate, get_pagination_params
from .media import detect_media_type, get_media_extension, is_streamable
from .formatting import format_date, format_size, truncate_hash, format_duration
__all__ = [
"paginate",
"get_pagination_params",
"detect_media_type",
"get_media_extension",
"is_streamable",
"format_date",
"format_size",
"truncate_hash",
"format_duration",
]

View File

@@ -0,0 +1,165 @@
"""
Formatting utilities for display.
"""
from datetime import datetime
from typing import Optional, Union
def format_date(
value: Optional[Union[str, datetime]],
length: int = 10,
include_time: bool = False,
) -> str:
"""
Format a date/datetime for display.
Args:
value: Date string or datetime object
length: Length to truncate to (default 10 for YYYY-MM-DD)
include_time: Whether to include time portion
Returns:
Formatted date string
"""
if value is None:
return ""
if isinstance(value, str):
# Parse ISO format string
try:
if "T" in value:
dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
else:
return value[:length]
except ValueError:
return value[:length]
else:
dt = value
if include_time:
return dt.strftime("%Y-%m-%d %H:%M")
return dt.strftime("%Y-%m-%d")
def format_size(size_bytes: Optional[int]) -> str:
"""
Format file size in human-readable form.
Args:
size_bytes: Size in bytes
Returns:
Human-readable size string (e.g., "1.5 MB")
"""
if size_bytes is None:
return "Unknown"
if size_bytes < 0:
return "Unknown"
if size_bytes == 0:
return "0 B"
units = ["B", "KB", "MB", "GB", "TB"]
unit_index = 0
size = float(size_bytes)
while size >= 1024 and unit_index < len(units) - 1:
size /= 1024
unit_index += 1
if unit_index == 0:
return f"{int(size)} {units[unit_index]}"
return f"{size:.1f} {units[unit_index]}"
def truncate_hash(value: str, length: int = 16, suffix: str = "...") -> str:
"""
Truncate a hash or long string with ellipsis.
Args:
value: String to truncate
length: Maximum length before truncation
suffix: Suffix to add when truncated
Returns:
Truncated string
"""
if not value:
return ""
if len(value) <= length:
return value
return f"{value[:length]}{suffix}"
def format_duration(seconds: Optional[float]) -> str:
"""
Format duration in human-readable form.
Args:
seconds: Duration in seconds
Returns:
Human-readable duration string (e.g., "2m 30s")
"""
if seconds is None or seconds < 0:
return "Unknown"
if seconds < 1:
return f"{int(seconds * 1000)}ms"
if seconds < 60:
return f"{seconds:.1f}s"
minutes = int(seconds // 60)
remaining_seconds = int(seconds % 60)
if minutes < 60:
if remaining_seconds:
return f"{minutes}m {remaining_seconds}s"
return f"{minutes}m"
hours = minutes // 60
remaining_minutes = minutes % 60
if remaining_minutes:
return f"{hours}h {remaining_minutes}m"
return f"{hours}h"
def format_count(count: int) -> str:
"""
Format a count with abbreviation for large numbers.
Args:
count: Number to format
Returns:
Formatted string (e.g., "1.2K", "3.5M")
"""
if count < 1000:
return str(count)
if count < 1000000:
return f"{count / 1000:.1f}K"
if count < 1000000000:
return f"{count / 1000000:.1f}M"
return f"{count / 1000000000:.1f}B"
def format_percentage(value: float, decimals: int = 1) -> str:
"""
Format a percentage value.
Args:
value: Percentage value (0-100 or 0-1)
decimals: Number of decimal places
Returns:
Formatted percentage string
"""
# Treat values <= 1 as a 0-1 fraction and scale to percent
if value <= 1:
value *= 100
if decimals == 0:
return f"{int(value)}%"
return f"{value:.{decimals}f}%"
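The duration and count formatters switch units at fixed thresholds (ms below 1s, seconds below 60s, then m/h; counts abbreviate at 1K/1M/1B). Copied from the module above so the thresholds can be checked:

```python
def format_duration(seconds):
    """Format duration: ms below 1s, then s, m, h."""
    if seconds is None or seconds < 0:
        return "Unknown"
    if seconds < 1:
        return f"{int(seconds * 1000)}ms"
    if seconds < 60:
        return f"{seconds:.1f}s"
    minutes = int(seconds // 60)
    remaining_seconds = int(seconds % 60)
    if minutes < 60:
        if remaining_seconds:
            return f"{minutes}m {remaining_seconds}s"
        return f"{minutes}m"
    hours = minutes // 60
    remaining_minutes = minutes % 60
    if remaining_minutes:
        return f"{hours}h {remaining_minutes}m"
    return f"{hours}h"


def format_count(count: int) -> str:
    """Abbreviate large counts: 1.2K, 3.5M, ..."""
    if count < 1000:
        return str(count)
    if count < 1000000:
        return f"{count / 1000:.1f}K"
    if count < 1000000000:
        return f"{count / 1000000:.1f}M"
    return f"{count / 1000000000:.1f}B"


assert format_duration(0.25) == "250ms"
assert format_duration(150) == "2m 30s"
assert format_duration(3600) == "1h"
assert format_count(1234) == "1.2K"
```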

View File

@@ -0,0 +1,166 @@
"""
Media type detection and handling utilities.
"""
from pathlib import Path
from typing import Optional
import mimetypes
# Initialize mimetypes database
mimetypes.init()
# Media type categories
VIDEO_TYPES = {"video/mp4", "video/webm", "video/quicktime", "video/x-msvideo", "video/avi"}
IMAGE_TYPES = {"image/jpeg", "image/png", "image/gif", "image/webp", "image/svg+xml"}
AUDIO_TYPES = {"audio/mpeg", "audio/wav", "audio/ogg", "audio/flac", "audio/aac", "audio/mp3"}
# File extension mappings
EXTENSION_TO_CATEGORY = {
# Video
".mp4": "video",
".webm": "video",
".mov": "video",
".avi": "video",
".mkv": "video",
# Image
".jpg": "image",
".jpeg": "image",
".png": "image",
".gif": "image",
".webp": "image",
".svg": "image",
# Audio
".mp3": "audio",
".wav": "audio",
".ogg": "audio",
".flac": "audio",
".aac": "audio",
".m4a": "audio",
}
def detect_media_type(path: Path) -> str:
"""
Detect the media category for a file.
Args:
path: Path to the file
Returns:
Category string: "video", "image", "audio", or "unknown"
"""
if path is None:
return "unknown"
# Try extension first
ext = path.suffix.lower()
if ext in EXTENSION_TO_CATEGORY:
return EXTENSION_TO_CATEGORY[ext]
# Try mimetypes
mime_type, _ = mimetypes.guess_type(str(path))
if mime_type:
if mime_type in VIDEO_TYPES or mime_type.startswith("video/"):
return "video"
if mime_type in IMAGE_TYPES or mime_type.startswith("image/"):
return "image"
if mime_type in AUDIO_TYPES or mime_type.startswith("audio/"):
return "audio"
return "unknown"
def get_mime_type(path: Path) -> str:
"""
Get the MIME type for a file.
Args:
path: Path to the file
Returns:
MIME type string or "application/octet-stream"
"""
mime_type, _ = mimetypes.guess_type(str(path))
return mime_type or "application/octet-stream"
def get_media_extension(media_type: str) -> str:
"""
Get the typical file extension for a media type.
Args:
media_type: Media category or MIME type
Returns:
File extension with dot (e.g., ".mp4")
"""
if media_type == "video":
return ".mp4"
if media_type == "image":
return ".png"
if media_type == "audio":
return ".mp3"
# Try as MIME type
ext = mimetypes.guess_extension(media_type)
return ext or ""
def is_streamable(path: Path) -> bool:
"""
Check if a file type is streamable (video/audio).
Args:
path: Path to the file
Returns:
True if the file can be streamed
"""
media_type = detect_media_type(path)
return media_type in ("video", "audio")
def needs_conversion(path: Path, target_format: str = "mp4") -> bool:
"""
Check if a video file needs format conversion.
Args:
path: Path to the file
target_format: Target format (default mp4)
Returns:
True if conversion is needed
"""
media_type = detect_media_type(path)
if media_type != "video":
return False
ext = path.suffix.lower().lstrip(".")
return ext != target_format
def get_video_src(
content_hash: str,
original_path: Optional[Path] = None,
is_ios: bool = False,
) -> str:
"""
Get the appropriate video source URL.
For iOS devices, prefer MP4 format.
Args:
content_hash: Content hash for the video
original_path: Optional original file path
is_ios: Whether the client is iOS
Returns:
URL path for the video source
"""
if is_ios:
return f"/cache/{content_hash}/mp4"
if original_path and original_path.suffix.lower() in (".mp4", ".webm"):
return f"/cache/{content_hash}/raw"
return f"/cache/{content_hash}/mp4"
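A runnable sketch of the extension-first, MIME-fallback detection implemented above (trimmed mapping and hypothetical file names, so it stands alone):

```python
import mimetypes
from pathlib import Path

# Trimmed copy of the mapping so the sketch runs standalone.
EXTENSION_TO_CATEGORY = {".mp4": "video", ".png": "image", ".mp3": "audio"}

def detect_media_type(path: Path) -> str:
    # Extension lookup first, then fall back to mimetypes prefixes.
    ext = path.suffix.lower()
    if ext in EXTENSION_TO_CATEGORY:
        return EXTENSION_TO_CATEGORY[ext]
    mime, _ = mimetypes.guess_type(str(path))
    if mime:
        for category in ("video", "image", "audio"):
            if mime.startswith(category + "/"):
                return category
    return "unknown"

assert detect_media_type(Path("clip.mp4")) == "video"    # direct extension hit
assert detect_media_type(Path("cover.jpeg")) == "image"  # via mimetypes fallback
assert detect_media_type(Path("notes.txt")) == "unknown"
```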



@@ -0,0 +1,85 @@
"""
Pagination utilities.
"""
from typing import List, Any, Tuple, Optional
from fastapi import Request
from ..constants import DEFAULT_PAGE_SIZE, MAX_PAGE_SIZE
def get_pagination_params(request: Request) -> Tuple[int, int]:
"""
Extract pagination parameters from request query string.
Args:
request: FastAPI request
Returns:
Tuple of (page, limit)
"""
try:
page = int(request.query_params.get("page", 1))
page = max(1, page)
except ValueError:
page = 1
try:
limit = int(request.query_params.get("limit", DEFAULT_PAGE_SIZE))
limit = max(1, min(limit, MAX_PAGE_SIZE))
except ValueError:
limit = DEFAULT_PAGE_SIZE
return page, limit
def paginate(
items: List[Any],
page: int = 1,
limit: int = DEFAULT_PAGE_SIZE,
) -> Tuple[List[Any], dict]:
"""
Paginate a list of items.
Args:
items: Full list of items
page: Page number (1-indexed)
limit: Items per page
Returns:
Tuple of (paginated items, pagination info dict)
"""
total = len(items)
start = (page - 1) * limit
end = start + limit
paginated = items[start:end]
return paginated, {
"page": page,
"limit": limit,
"total": total,
"has_more": end < total,
"total_pages": (total + limit - 1) // limit if total > 0 else 1,
}
def calculate_offset(page: int, limit: int) -> int:
"""Calculate database offset from page and limit."""
return (page - 1) * limit
def build_pagination_info(
page: int,
limit: int,
total: int,
) -> dict:
"""Build pagination info dictionary."""
return {
"page": page,
"limit": limit,
"total": total,
"has_more": page * limit < total,
"total_pages": (total + limit - 1) // limit if total > 0 else 1,
}
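The slice arithmetic in `paginate` can be checked with a standalone sketch (same formulas, toy data):

```python
def paginate(items, page=1, limit=10):
    # start follows the same (page - 1) * limit arithmetic as above.
    total = len(items)
    start = (page - 1) * limit
    paginated = items[start:start + limit]
    info = {
        "page": page,
        "limit": limit,
        "total": total,
        "has_more": start + limit < total,
        "total_pages": (total + limit - 1) // limit if total > 0 else 1,
    }
    return paginated, info

page2, info = paginate(list(range(25)), page=2, limit=10)
assert page2 == list(range(10, 20))
assert info["has_more"] is True and info["total_pages"] == 3
```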


@@ -0,0 +1,22 @@
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "artdag-common"
version = "0.1.3"
description = "Shared components for Art-DAG L1 and L2 servers"
requires-python = ">=3.10"
dependencies = [
"fastapi>=0.100.0",
"jinja2>=3.1.0",
"pydantic>=2.0.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0.0",
]
[tool.hatch.build.targets.wheel]
packages = ["artdag_common"]

artdag/core/.gitignore

@@ -0,0 +1,47 @@
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Testing
.pytest_cache/
.coverage
htmlcov/
# IDE
.idea/
.vscode/
*.swp
*.swo
# Environment
.env
.venv
env/
venv/
# Private keys (ActivityPub secrets)
.cache/
# Test outputs
test_cache/
test_plan_output.json
analysis.json
plan.json
plan_with_analysis.json

artdag/core/README.md

@@ -0,0 +1,110 @@
# artdag
Content-addressed DAG execution engine with ActivityPub ownership.
## Features
- **Content-addressed nodes**: `node_id = SHA3-256(type + config + inputs)` for automatic deduplication
- **Quantum-resistant hashing**: SHA-3 throughout for future-proof integrity
- **ActivityPub ownership**: Cryptographically signed ownership claims
- **Federated identity**: `@user@artdag.rose-ash.com` style identities
- **Pluggable executors**: Register custom node types
- **Built-in video primitives**: SOURCE, SEGMENT, RESIZE, TRANSFORM, SEQUENCE, MUX, BLEND
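The content-addressing rule above can be sketched as follows; the exact canonical encoding artdag uses internally is an assumption here, but the deduplication property is the same:

```python
import hashlib
import json

def node_id(node_type: str, config: dict, inputs: list) -> str:
    # Canonical JSON (sorted keys, fixed separators) so equal nodes
    # hash identically; artdag's real encoding may differ.
    payload = json.dumps(
        {"type": node_type, "config": config, "inputs": inputs},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha3_256(payload.encode()).hexdigest()

a = node_id("RESIZE", {"width": 1920, "height": 1080}, ["src123"])
b = node_id("RESIZE", {"height": 1080, "width": 1920}, ["src123"])
assert a == b  # key order does not matter: same node, same id
assert a != node_id("RESIZE", {"width": 1280, "height": 720}, ["src123"])
```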
## Installation
```bash
pip install -e .
```
### Optional: External Effect Tools
Some effects can use external tools for better performance:
**Pixelsort** (glitch art pixel sorting):
```bash
# Rust CLI (recommended - fast)
cargo install --git https://github.com/Void-ux/pixelsort.git pixelsort
# Or Python CLI
pip install git+https://github.com/Blotz/pixelsort-cli
```
**Datamosh** (video glitch/corruption):
```bash
# FFglitch (recommended)
./scripts/install-ffglitch.sh
# Or Python CLI
pip install git+https://github.com/tiberiuiancu/datamoshing
```
Check available tools:
```bash
python -m artdag.sexp.external_tools
```
## Quick Start
```python
from artdag import Engine, DAGBuilder, Registry
from artdag.activitypub import OwnershipManager
# Create ownership manager
manager = OwnershipManager("./my_registry")
# Create your identity
actor = manager.create_actor("alice", "Alice")
print(f"Created: {actor.handle}") # @alice@artdag.rose-ash.com
# Register an asset with ownership (content is addressed by its SHA-3 hash)
import hashlib
from pathlib import Path

image_path = Path("/path/to/image.jpg")
cid = hashlib.sha3_256(image_path.read_bytes()).hexdigest()
asset, activity = manager.register_asset(
    actor=actor,
    name="my_image",
    cid=cid,
    local_path=image_path,
    tags=["photo", "art"],
)
print(f"Owned: {asset.name} (hash: {asset.cid})")
# Build and execute a DAG
engine = Engine("./cache")
builder = DAGBuilder()
source = builder.source(str(asset.local_path))
resized = builder.resize(source, width=1920, height=1080)
builder.set_output(resized)
result = engine.execute(builder.build())
print(f"Output: {result.output_path}")
```
## Architecture
```
artdag/
├── dag.py # Node, DAG, DAGBuilder
├── cache.py # Content-addressed file cache
├── executor.py # Base executor + registry
├── engine.py # DAG execution engine
├── activitypub/ # Identity + ownership
│ ├── actor.py # Actor identity with RSA keys
│ ├── activity.py # Create, Announce activities
│ ├── signatures.py # RSA signing/verification
│ └── ownership.py # Links actors to assets
├── nodes/ # Built-in executors
│ ├── source.py # SOURCE
│ ├── transform.py # SEGMENT, RESIZE, TRANSFORM
│ ├── compose.py # SEQUENCE, LAYER, MUX, BLEND
│ └── effect.py # EFFECT (identity, etc.)
└── effects/ # Effect implementations
└── identity/ # The foundational identity effect
```
## Related Repos
- **Registry**: https://git.rose-ash.com/art-dag/registry - Asset registry with ownership proofs
- **Recipes**: https://git.rose-ash.com/art-dag/recipes - DAG recipes using effects
## License
MIT


@@ -0,0 +1,61 @@
# artdag - Content-addressed DAG execution engine with ActivityPub ownership
#
# A standalone execution engine that processes directed acyclic graphs (DAGs)
# where each node represents an operation. Nodes are content-addressed for
# automatic caching and deduplication.
#
# Core concepts:
# - Node: An operation with type, config, and inputs
# - DAG: A graph of nodes with a designated output node
# - Executor: Implements the actual operation for a node type
# - Engine: Executes DAGs by resolving dependencies and running executors
from .dag import Node, DAG, DAGBuilder, NodeType
from .cache import Cache, CacheEntry
from .executor import Executor, register_executor, get_executor
from .engine import Engine
from .registry import Registry, Asset
from .activities import Activity, ActivityStore, ActivityManager, make_is_shared_fn
# Analysis and planning modules (optional, require extra dependencies)
try:
from .analysis import Analyzer, AnalysisResult
except ImportError:
Analyzer = None
AnalysisResult = None
try:
from .planning import RecipePlanner, ExecutionPlan, ExecutionStep
except ImportError:
RecipePlanner = None
ExecutionPlan = None
ExecutionStep = None
__all__ = [
# Core
"Node",
"DAG",
"DAGBuilder",
"NodeType",
"Cache",
"CacheEntry",
"Executor",
"register_executor",
"get_executor",
"Engine",
"Registry",
"Asset",
"Activity",
"ActivityStore",
"ActivityManager",
"make_is_shared_fn",
# Analysis (optional)
"Analyzer",
"AnalysisResult",
# Planning (optional)
"RecipePlanner",
"ExecutionPlan",
"ExecutionStep",
]
__version__ = "0.1.0"


@@ -0,0 +1,371 @@
# artdag/activities.py
"""
Persistent activity (job) tracking for cache management.
Activities represent executions of DAGs. They track:
- Input node IDs (sources)
- Output node ID (terminal node)
- Intermediate node IDs (everything in between)
This enables deletion rules:
- Shared items (ActivityPub published) cannot be deleted
- Inputs/outputs of activities cannot be deleted
- Intermediates can be deleted (reconstructible)
- Activities can only be discarded if no items are shared
"""
import json
import logging
import time
import uuid
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Set
from .cache import Cache, CacheEntry
from .dag import DAG
logger = logging.getLogger(__name__)
def make_is_shared_fn(activitypub_store: "ActivityStore") -> Callable[[str], bool]:
"""
Create an is_shared function from an ActivityPub ActivityStore.
Args:
activitypub_store: The ActivityPub activity store
(from artdag.activitypub.activity)
Returns:
Function that checks if a cid has been published
"""
def is_shared(cid: str) -> bool:
activities = activitypub_store.find_by_object_hash(cid)
return any(a.activity_type == "Create" for a in activities)
return is_shared
@dataclass
class Activity:
"""
A recorded execution of a DAG.
Tracks which cache entries are inputs, outputs, and intermediates
to enforce deletion rules.
"""
activity_id: str
input_ids: List[str] # Source node cache IDs
output_id: str # Terminal node cache ID
intermediate_ids: List[str] # Everything in between
created_at: float
status: str = "completed" # pending|running|completed|failed
dag_snapshot: Optional[Dict[str, Any]] = None # Serialized DAG for reconstruction
def to_dict(self) -> Dict[str, Any]:
return {
"activity_id": self.activity_id,
"input_ids": self.input_ids,
"output_id": self.output_id,
"intermediate_ids": self.intermediate_ids,
"created_at": self.created_at,
"status": self.status,
"dag_snapshot": self.dag_snapshot,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Activity":
return cls(
activity_id=data["activity_id"],
input_ids=data["input_ids"],
output_id=data["output_id"],
intermediate_ids=data["intermediate_ids"],
created_at=data["created_at"],
status=data.get("status", "completed"),
dag_snapshot=data.get("dag_snapshot"),
)
@classmethod
def from_dag(cls, dag: DAG, activity_id: Optional[str] = None) -> "Activity":
"""
Create an Activity from a DAG.
Classifies nodes as inputs, output, or intermediates.
"""
if activity_id is None:
activity_id = str(uuid.uuid4())
# Find input nodes (nodes with no inputs - sources)
input_ids = []
for node_id, node in dag.nodes.items():
if not node.inputs:
input_ids.append(node_id)
# Output is the terminal node
output_id = dag.output_id
# Intermediates are everything else
intermediate_ids = []
for node_id in dag.nodes:
if node_id not in input_ids and node_id != output_id:
intermediate_ids.append(node_id)
return cls(
activity_id=activity_id,
input_ids=sorted(input_ids),
output_id=output_id,
intermediate_ids=sorted(intermediate_ids),
created_at=time.time(),
status="completed",
dag_snapshot=dag.to_dict(),
)
@property
def all_node_ids(self) -> List[str]:
"""All node IDs involved in this activity."""
return self.input_ids + [self.output_id] + self.intermediate_ids
class ActivityStore:
"""
Persistent storage for activities.
Provides methods to check deletion eligibility and perform deletions.
"""
def __init__(self, store_dir: Path | str):
self.store_dir = Path(store_dir)
self.store_dir.mkdir(parents=True, exist_ok=True)
self._activities: Dict[str, Activity] = {}
self._load()
def _index_path(self) -> Path:
return self.store_dir / "activities.json"
def _load(self):
"""Load activities from disk."""
index_path = self._index_path()
if index_path.exists():
try:
with open(index_path) as f:
data = json.load(f)
self._activities = {
a["activity_id"]: Activity.from_dict(a)
for a in data.get("activities", [])
}
except (json.JSONDecodeError, KeyError) as e:
logger.warning(f"Failed to load activities: {e}")
self._activities = {}
def _save(self):
"""Save activities to disk."""
data = {
"version": "1.0",
"activities": [a.to_dict() for a in self._activities.values()],
}
with open(self._index_path(), "w") as f:
json.dump(data, f, indent=2)
def add(self, activity: Activity) -> None:
"""Add an activity."""
self._activities[activity.activity_id] = activity
self._save()
def get(self, activity_id: str) -> Optional[Activity]:
"""Get an activity by ID."""
return self._activities.get(activity_id)
def remove(self, activity_id: str) -> bool:
"""Remove an activity record (does not delete cache entries)."""
if activity_id not in self._activities:
return False
del self._activities[activity_id]
self._save()
return True
def list(self) -> List[Activity]:
"""List all activities."""
return list(self._activities.values())
def find_by_input_ids(self, input_ids: List[str]) -> List[Activity]:
"""Find activities with the same inputs (for UI grouping)."""
sorted_inputs = sorted(input_ids)
return [
a for a in self._activities.values()
if sorted(a.input_ids) == sorted_inputs
]
def find_using_node(self, node_id: str) -> List[Activity]:
"""Find all activities that reference a node ID."""
return [
a for a in self._activities.values()
if node_id in a.all_node_ids
]
def __len__(self) -> int:
return len(self._activities)
class ActivityManager:
"""
Manages activities and cache deletion with sharing rules.
Deletion rules:
1. Shared items (ActivityPub published) cannot be deleted
2. Inputs/outputs of activities cannot be deleted
3. Intermediates can be deleted (reconstructible)
4. Activities can only be discarded if no items are shared
"""
def __init__(
self,
cache: Cache,
activity_store: ActivityStore,
is_shared_fn: Callable[[str], bool],
):
"""
Args:
cache: The L1 cache
activity_store: Activity persistence
is_shared_fn: Function that checks if a cid is shared
(published via ActivityPub)
"""
self.cache = cache
self.activities = activity_store
self._is_shared = is_shared_fn
def record_activity(self, dag: DAG) -> Activity:
"""Record a completed DAG execution as an activity."""
activity = Activity.from_dag(dag)
self.activities.add(activity)
return activity
def is_shared(self, node_id: str) -> bool:
"""Check if a cache entry is shared (published via ActivityPub)."""
entry = self.cache.get_entry(node_id)
if not entry or not entry.cid:
return False
return self._is_shared(entry.cid)
def can_delete_cache_entry(self, node_id: str) -> bool:
"""
Check if a cache entry can be deleted.
Returns False if:
- Entry is shared (ActivityPub published)
- Entry is an input or output of any activity
"""
# Check if shared
if self.is_shared(node_id):
return False
# Check if it's an input or output of any activity
for activity in self.activities.list():
if node_id in activity.input_ids:
return False
if node_id == activity.output_id:
return False
# It's either an intermediate or orphaned - can delete
return True
def can_discard_activity(self, activity_id: str) -> bool:
"""
Check if an activity can be discarded.
Returns False if any cache entry (input, output, or intermediate)
is shared via ActivityPub.
"""
activity = self.activities.get(activity_id)
if not activity:
return False
# Check if any item is shared
for node_id in activity.all_node_ids:
if self.is_shared(node_id):
return False
return True
def discard_activity(self, activity_id: str) -> bool:
"""
Discard an activity and delete its intermediate cache entries.
Returns False if the activity cannot be discarded (has shared items).
When discarded:
- Intermediate cache entries are deleted
- The activity record is removed
- Inputs remain (may be used by other activities)
- Output is deleted if orphaned (not shared, not used elsewhere)
"""
if not self.can_discard_activity(activity_id):
return False
activity = self.activities.get(activity_id)
if not activity:
return False
output_id = activity.output_id
intermediate_ids = list(activity.intermediate_ids)
# Remove the activity record first
self.activities.remove(activity_id)
# Delete intermediates
for node_id in intermediate_ids:
self.cache.remove(node_id)
logger.debug(f"Deleted intermediate: {node_id}")
# Check if output is now orphaned
if self._is_orphaned(output_id) and not self.is_shared(output_id):
self.cache.remove(output_id)
logger.debug(f"Deleted orphaned output: {output_id}")
# Inputs remain - they may be used by other activities
# But check if any are orphaned now
for input_id in activity.input_ids:
if self._is_orphaned(input_id) and not self.is_shared(input_id):
self.cache.remove(input_id)
logger.debug(f"Deleted orphaned input: {input_id}")
return True
def _is_orphaned(self, node_id: str) -> bool:
"""Check if a node is not referenced by any activity."""
for activity in self.activities.list():
if node_id in activity.all_node_ids:
return False
return True
def get_deletable_entries(self) -> List[CacheEntry]:
"""Get all cache entries that can be deleted."""
deletable = []
for entry in self.cache.list_entries():
if self.can_delete_cache_entry(entry.node_id):
deletable.append(entry)
return deletable
def get_discardable_activities(self) -> List[Activity]:
"""Get all activities that can be discarded."""
return [
a for a in self.activities.list()
if self.can_discard_activity(a.activity_id)
]
def cleanup_intermediates(self) -> int:
"""
Delete all intermediate cache entries.
Intermediates are safe to delete as they can be reconstructed
from inputs using the DAG.
Returns:
Number of entries deleted
"""
deleted = 0
for activity in self.activities.list():
for node_id in activity.intermediate_ids:
if self.cache.has(node_id):
self.cache.remove(node_id)
deleted += 1
return deleted
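The input/output/intermediate split that `Activity.from_dag` performs can be sketched on a toy DAG, here expressed as `{node_id: input_ids}` plus a designated output (node names are hypothetical):

```python
def classify(nodes: dict, output_id: str):
    # Inputs are sources (no dependencies); the output is the terminal
    # node; everything else is a reconstructible intermediate.
    inputs = sorted(n for n, deps in nodes.items() if not deps)
    intermediates = sorted(
        n for n in nodes if n not in inputs and n != output_id
    )
    return inputs, output_id, intermediates

nodes = {
    "src_a": [],
    "src_b": [],
    "resize": ["src_a"],
    "blend": ["resize", "src_b"],
}
assert classify(nodes, "blend") == (["src_a", "src_b"], "blend", ["resize"])
```

Under the deletion rules above, only `resize` would be eligible for cleanup: `src_a`/`src_b` are inputs, `blend` is the output.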


@@ -0,0 +1,33 @@
# primitive/activitypub/__init__.py
"""
ActivityPub implementation for Art DAG.
Provides decentralized identity and ownership for assets.
Domain: artdag.rose-ash.com
Core concepts:
- Actor: A user identity with cryptographic keys
- Object: An asset (image, video, etc.)
- Activity: An action (Create, Announce, Like, etc.)
- Signature: Cryptographic proof of authorship
"""
from .actor import Actor, ActorStore
from .activity import Activity, CreateActivity, ActivityStore
from .signatures import sign_activity, verify_signature, verify_activity_ownership
from .ownership import OwnershipManager, OwnershipRecord
__all__ = [
"Actor",
"ActorStore",
"Activity",
"CreateActivity",
"ActivityStore",
"sign_activity",
"verify_signature",
"verify_activity_ownership",
"OwnershipManager",
"OwnershipRecord",
]
DOMAIN = "artdag.rose-ash.com"


@@ -0,0 +1,203 @@
# primitive/activitypub/activity.py
"""
ActivityPub Activity types.
Activities represent actions taken by actors on objects.
Key activity types for Art DAG:
- Create: Actor creates/claims ownership of an object
- Announce: Actor shares/boosts an object
- Like: Actor endorses an object
"""
import json
import time
import uuid
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional
from .actor import Actor, DOMAIN
def _generate_id() -> str:
"""Generate unique activity ID."""
return str(uuid.uuid4())
@dataclass
class Activity:
"""
Base ActivityPub Activity.
Attributes:
activity_id: Unique identifier
activity_type: Type (Create, Announce, Like, etc.)
actor_id: ID of the actor performing the activity
object_data: The object of the activity
published: ISO timestamp
signature: Cryptographic signature (added after signing)
"""
activity_id: str
activity_type: str
actor_id: str
object_data: Dict[str, Any]
published: str = field(default_factory=lambda: time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()))
signature: Optional[Dict[str, Any]] = None
def to_activitypub(self) -> Dict[str, Any]:
"""Return ActivityPub JSON-LD representation."""
activity = {
"@context": "https://www.w3.org/ns/activitystreams",
"type": self.activity_type,
"id": f"https://{DOMAIN}/activities/{self.activity_id}",
"actor": self.actor_id,
"object": self.object_data,
"published": self.published,
}
if self.signature:
activity["signature"] = self.signature
return activity
def to_dict(self) -> Dict[str, Any]:
"""Serialize for storage."""
return {
"activity_id": self.activity_id,
"activity_type": self.activity_type,
"actor_id": self.actor_id,
"object_data": self.object_data,
"published": self.published,
"signature": self.signature,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Activity":
"""Deserialize from storage."""
return cls(
activity_id=data["activity_id"],
activity_type=data["activity_type"],
actor_id=data["actor_id"],
object_data=data["object_data"],
published=data.get("published", ""),
signature=data.get("signature"),
)
@dataclass
class CreateActivity(Activity):
"""
Create activity - establishes ownership of an object.
Used when an actor creates or claims an asset.
"""
activity_type: str = field(default="Create", init=False)
@classmethod
def for_asset(
cls,
actor: Actor,
asset_name: str,
cid: str,
asset_type: str = "Image",
metadata: Optional[Dict[str, Any]] = None,
) -> "CreateActivity":
"""
Create a Create activity for an asset.
Args:
actor: The actor claiming ownership
asset_name: Name of the asset
cid: SHA-3 hash of the asset content
asset_type: ActivityPub object type (Image, Video, Audio, etc.)
metadata: Additional metadata
Returns:
CreateActivity establishing ownership
"""
object_data = {
"type": asset_type,
"name": asset_name,
"id": f"https://{DOMAIN}/objects/{cid}",
"contentHash": {
"algorithm": "sha3-256",
"value": cid,
},
"attributedTo": actor.id,
}
if metadata:
object_data["metadata"] = metadata
return cls(
activity_id=_generate_id(),
actor_id=actor.id,
object_data=object_data,
)
class ActivityStore:
"""
Persistent storage for activities.
Activities are stored as an append-only log for auditability.
"""
def __init__(self, store_dir: Path | str):
self.store_dir = Path(store_dir)
self.store_dir.mkdir(parents=True, exist_ok=True)
self._activities: List[Activity] = []
self._load()
def _log_path(self) -> Path:
return self.store_dir / "activities.json"
def _load(self):
"""Load activities from disk."""
log_path = self._log_path()
if log_path.exists():
with open(log_path) as f:
data = json.load(f)
self._activities = [
Activity.from_dict(a) for a in data.get("activities", [])
]
def _save(self):
"""Save activities to disk."""
data = {
"version": "1.0",
"activities": [a.to_dict() for a in self._activities],
}
with open(self._log_path(), "w") as f:
json.dump(data, f, indent=2)
def add(self, activity: Activity) -> None:
"""Add an activity to the log."""
self._activities.append(activity)
self._save()
def get(self, activity_id: str) -> Optional[Activity]:
"""Get an activity by ID."""
for a in self._activities:
if a.activity_id == activity_id:
return a
return None
def list(self) -> List[Activity]:
"""List all activities."""
return list(self._activities)
def find_by_actor(self, actor_id: str) -> List[Activity]:
"""Find activities by actor."""
return [a for a in self._activities if a.actor_id == actor_id]
def find_by_object_hash(self, cid: str) -> List[Activity]:
"""Find activities referencing an object by hash."""
results = []
for a in self._activities:
obj_hash = a.object_data.get("contentHash", {})
if isinstance(obj_hash, dict) and obj_hash.get("value") == cid:
results.append(a)
elif a.object_data.get("contentHash") == cid:
results.append(a)
return results
def __len__(self) -> int:
return len(self._activities)


@@ -0,0 +1,206 @@
# primitive/activitypub/actor.py
"""
ActivityPub Actor management.
An Actor is an identity with:
- Username and display name
- RSA key pair for signing
- ActivityPub-compliant JSON-LD representation
"""
import json
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, Optional
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding
DOMAIN = "artdag.rose-ash.com"
def _generate_keypair() -> tuple[bytes, bytes]:
"""Generate RSA key pair for signing."""
private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
)
private_pem = private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.PKCS8,
encryption_algorithm=serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
return private_pem, public_pem
@dataclass
class Actor:
"""
An ActivityPub Actor (identity).
Attributes:
username: Unique username (e.g., "giles")
display_name: Human-readable name
public_key: PEM-encoded public key
private_key: PEM-encoded private key (kept secret)
created_at: Timestamp of creation
"""
username: str
display_name: str
public_key: bytes
private_key: bytes
created_at: float = field(default_factory=time.time)
domain: str = DOMAIN
@property
def id(self) -> str:
"""ActivityPub actor ID (URL)."""
return f"https://{self.domain}/users/{self.username}"
@property
def handle(self) -> str:
"""Fediverse handle."""
return f"@{self.username}@{self.domain}"
@property
def inbox(self) -> str:
"""ActivityPub inbox URL."""
return f"{self.id}/inbox"
@property
def outbox(self) -> str:
"""ActivityPub outbox URL."""
return f"{self.id}/outbox"
@property
def key_id(self) -> str:
"""Key ID for HTTP Signatures."""
return f"{self.id}#main-key"
def to_activitypub(self) -> Dict[str, Any]:
"""Return ActivityPub JSON-LD representation."""
return {
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://w3id.org/security/v1",
],
"type": "Person",
"id": self.id,
"preferredUsername": self.username,
"name": self.display_name,
"inbox": self.inbox,
"outbox": self.outbox,
"publicKey": {
"id": self.key_id,
"owner": self.id,
"publicKeyPem": self.public_key.decode("utf-8"),
},
}
def to_dict(self) -> Dict[str, Any]:
"""Serialize for storage."""
return {
"username": self.username,
"display_name": self.display_name,
"public_key": self.public_key.decode("utf-8"),
"private_key": self.private_key.decode("utf-8"),
"created_at": self.created_at,
"domain": self.domain,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Actor":
"""Deserialize from storage."""
return cls(
username=data["username"],
display_name=data["display_name"],
public_key=data["public_key"].encode("utf-8"),
private_key=data["private_key"].encode("utf-8"),
created_at=data.get("created_at", time.time()),
domain=data.get("domain", DOMAIN),
)
@classmethod
def create(cls, username: str, display_name: str = None) -> "Actor":
"""Create a new actor with generated keys."""
private_pem, public_pem = _generate_keypair()
return cls(
username=username,
display_name=display_name or username,
public_key=public_pem,
private_key=private_pem,
)
class ActorStore:
"""
Persistent storage for actors.
Structure:
store_dir/
actors.json # Index of all actors
keys/
<username>.private.pem
<username>.public.pem
"""
def __init__(self, store_dir: Path | str):
self.store_dir = Path(store_dir)
self.store_dir.mkdir(parents=True, exist_ok=True)
self._actors: Dict[str, Actor] = {}
self._load()
def _index_path(self) -> Path:
return self.store_dir / "actors.json"
def _load(self):
"""Load actors from disk."""
index_path = self._index_path()
if index_path.exists():
with open(index_path) as f:
data = json.load(f)
self._actors = {
username: Actor.from_dict(actor_data)
for username, actor_data in data.get("actors", {}).items()
}
def _save(self):
"""Save actors to disk."""
data = {
"version": "1.0",
"domain": DOMAIN,
"actors": {
username: actor.to_dict()
for username, actor in self._actors.items()
},
}
with open(self._index_path(), "w") as f:
json.dump(data, f, indent=2)
def create(self, username: str, display_name: str = None) -> Actor:
"""Create and store a new actor."""
if username in self._actors:
raise ValueError(f"Actor {username} already exists")
actor = Actor.create(username, display_name)
self._actors[username] = actor
self._save()
return actor
def get(self, username: str) -> Optional[Actor]:
"""Get an actor by username."""
return self._actors.get(username)
def list(self) -> list[Actor]:
"""List all actors."""
return list(self._actors.values())
def __contains__(self, username: str) -> bool:
return username in self._actors
def __len__(self) -> int:
return len(self._actors)
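The URL properties on `Actor` all derive from one base ID; a standalone sketch of that composition (the username "alice" is hypothetical, the domain is the module default):

```python
DOMAIN = "artdag.rose-ash.com"

def actor_urls(username: str, domain: str = DOMAIN) -> dict:
    # Mirrors Actor's id/handle/inbox/outbox/key_id properties.
    actor_id = f"https://{domain}/users/{username}"
    return {
        "id": actor_id,
        "handle": f"@{username}@{domain}",
        "inbox": f"{actor_id}/inbox",
        "outbox": f"{actor_id}/outbox",
        "key_id": f"{actor_id}#main-key",
    }

urls = actor_urls("alice")
assert urls["handle"] == "@alice@artdag.rose-ash.com"
assert urls["key_id"] == "https://artdag.rose-ash.com/users/alice#main-key"
```

The `#main-key` fragment is what HTTP Signature verification uses to locate the public key on the actor document.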


@@ -0,0 +1,226 @@
# primitive/activitypub/ownership.py
"""
Ownership integration between ActivityPub and Registry.
Connects actors, activities, and assets to establish provable ownership.
"""
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, List, Optional
from .actor import Actor, ActorStore
from .activity import Activity, CreateActivity, ActivityStore
from .signatures import sign_activity, verify_activity_ownership
from ..registry import Registry, Asset
@dataclass
class OwnershipRecord:
"""
A verified ownership record linking actor to asset.
Attributes:
actor_handle: The actor's fediverse handle
asset_name: Name of the owned asset
cid: SHA-3 hash of the asset
activity_id: ID of the Create activity establishing ownership
verified: Whether the signature has been verified
"""
actor_handle: str
asset_name: str
cid: str
activity_id: str
verified: bool = False
class OwnershipManager:
"""
Manages ownership relationships between actors and assets.
Integrates:
- ActorStore: Identity management
- Registry: Asset storage
- ActivityStore: Ownership activities
"""
def __init__(self, base_dir: Path | str):
self.base_dir = Path(base_dir)
self.base_dir.mkdir(parents=True, exist_ok=True)
# Initialize stores
self.actors = ActorStore(self.base_dir / "actors")
self.activities = ActivityStore(self.base_dir / "activities")
self.registry = Registry(self.base_dir / "registry")
def create_actor(self, username: str, display_name: str = None) -> Actor:
"""Create a new actor identity."""
return self.actors.create(username, display_name)
def get_actor(self, username: str) -> Optional[Actor]:
"""Get an actor by username."""
return self.actors.get(username)
def register_asset(
self,
actor: Actor,
name: str,
cid: str,
url: str = None,
local_path: Path | str = None,
tags: List[str] = None,
metadata: Dict[str, Any] = None,
) -> tuple[Asset, Activity]:
"""
Register an asset and establish ownership.
Creates the asset in the registry and a signed Create activity
proving the actor's ownership.
Args:
actor: The actor claiming ownership
name: Name for the asset
cid: SHA-3-256 hash of the content
url: Public URL (canonical location)
local_path: Optional local path
tags: Optional tags
metadata: Optional metadata
Returns:
Tuple of (Asset, signed CreateActivity)
"""
# Add to registry
asset = self.registry.add(
name=name,
cid=cid,
url=url,
local_path=local_path,
tags=tags,
metadata=metadata,
)
# Create ownership activity
activity = CreateActivity.for_asset(
actor=actor,
asset_name=name,
cid=asset.cid,
asset_type=self._asset_type_to_ap(asset.asset_type),
metadata=metadata,
)
# Sign the activity
signed_activity = sign_activity(activity, actor)
# Store the activity
self.activities.add(signed_activity)
return asset, signed_activity
def _asset_type_to_ap(self, asset_type: str) -> str:
"""Convert registry asset type to ActivityPub type."""
type_map = {
"image": "Image",
"video": "Video",
"audio": "Audio",
"unknown": "Document",
}
return type_map.get(asset_type, "Document")
def get_owner(self, asset_name: str) -> Optional[Actor]:
"""
Get the owner of an asset.
Finds the earliest Create activity for the asset and returns
the actor if the signature is valid.
"""
asset = self.registry.get(asset_name)
if not asset:
return None
# Find Create activities for this asset
activities = self.activities.find_by_object_hash(asset.cid)
create_activities = [a for a in activities if a.activity_type == "Create"]
if not create_activities:
return None
# Get the earliest (first owner)
earliest = min(create_activities, key=lambda a: a.published)
# Extract username from actor_id
# Format: https://artdag.rose-ash.com/users/{username}
actor_id = earliest.actor_id
if "/users/" in actor_id:
username = actor_id.split("/users/")[-1]
actor = self.actors.get(username)
if actor and verify_activity_ownership(earliest, actor):
return actor
return None
def verify_ownership(self, asset_name: str, actor: Actor) -> bool:
"""
Verify that an actor owns an asset.
Checks for a valid signed Create activity linking the actor
to the asset.
"""
asset = self.registry.get(asset_name)
if not asset:
return False
activities = self.activities.find_by_object_hash(asset.cid)
for activity in activities:
if activity.activity_type == "Create" and activity.actor_id == actor.id:
if verify_activity_ownership(activity, actor):
return True
return False
def list_owned_assets(self, actor: Actor) -> List[Asset]:
"""List all assets owned by an actor."""
activities = self.activities.find_by_actor(actor.id)
owned = []
for activity in activities:
if activity.activity_type == "Create":
# Find asset by hash
obj_hash = activity.object_data.get("contentHash", {})
if isinstance(obj_hash, dict):
hash_value = obj_hash.get("value")
else:
hash_value = obj_hash
if hash_value:
asset = self.registry.find_by_hash(hash_value)
if asset:
owned.append(asset)
return owned
def get_ownership_records(self) -> List[OwnershipRecord]:
"""Get all ownership records."""
records = []
for activity in self.activities.list():
if activity.activity_type != "Create":
continue
# Extract info
actor_id = activity.actor_id
username = actor_id.split("/users/")[-1] if "/users/" in actor_id else "unknown"
actor = self.actors.get(username)
obj_hash = activity.object_data.get("contentHash", {})
hash_value = obj_hash.get("value") if isinstance(obj_hash, dict) else obj_hash
records.append(OwnershipRecord(
actor_handle=actor.handle if actor else f"@{username}@unknown",
asset_name=activity.object_data.get("name", "unknown"),
cid=hash_value or "unknown",
activity_id=activity.activity_id,
verified=verify_activity_ownership(activity, actor) if actor else False,
))
return records


@@ -0,0 +1,163 @@
# primitive/activitypub/signatures.py
"""
Cryptographic signatures for ActivityPub.
Uses RSA-SHA256 signatures compatible with HTTP Signatures spec
and Linked Data Signatures for ActivityPub.
"""
import base64
import hashlib
import json
import time
from typing import Any, Dict
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature
from .actor import Actor
from .activity import Activity
def _canonicalize(data: Dict[str, Any]) -> str:
"""
Canonicalize JSON for signing.
Approximates JCS (JSON Canonicalization Scheme, RFC 8785): sorted keys, no
whitespace. Full JCS also normalizes number serialization, which json.dumps
does not guarantee.
"""
return json.dumps(data, sort_keys=True, separators=(",", ":"))
def _hash_sha256(data: str) -> bytes:
"""Hash string with SHA-256."""
return hashlib.sha256(data.encode()).digest()
def sign_activity(activity: Activity, actor: Actor) -> Activity:
"""
Sign an activity with the actor's private key.
Uses Linked Data Signatures with RsaSignature2017.
Args:
activity: The activity to sign
actor: The actor whose key signs the activity
Returns:
Activity with signature attached
"""
# Load private key
private_key = serialization.load_pem_private_key(
actor.private_key,
password=None,
)
# Create signature options
created = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
# Canonicalize the activity (without signature)
activity_data = activity.to_activitypub()
activity_data.pop("signature", None)
canonical = _canonicalize(activity_data)
# Create the data to sign: hash of options + hash of document
options = {
"@context": "https://w3id.org/security/v1",
"type": "RsaSignature2017",
"creator": actor.key_id,
"created": created,
}
options_hash = _hash_sha256(_canonicalize(options))
document_hash = _hash_sha256(canonical)
to_sign = options_hash + document_hash
# Sign with RSA-SHA256
signature_bytes = private_key.sign(
to_sign,
padding.PKCS1v15(),
hashes.SHA256(),
)
signature_value = base64.b64encode(signature_bytes).decode("utf-8")
# Attach signature to activity
activity.signature = {
"type": "RsaSignature2017",
"creator": actor.key_id,
"created": created,
"signatureValue": signature_value,
}
return activity
def verify_signature(activity: Activity, public_key_pem: bytes) -> bool:
"""
Verify an activity's signature.
Args:
activity: The activity with signature
public_key_pem: PEM-encoded public key
Returns:
True if signature is valid
"""
if not activity.signature:
return False
try:
# Load public key
public_key = serialization.load_pem_public_key(public_key_pem)
# Reconstruct signature options
options = {
"@context": "https://w3id.org/security/v1",
"type": activity.signature["type"],
"creator": activity.signature["creator"],
"created": activity.signature["created"],
}
# Canonicalize activity without signature
activity_data = activity.to_activitypub()
activity_data.pop("signature", None)
canonical = _canonicalize(activity_data)
# Recreate signed data
options_hash = _hash_sha256(_canonicalize(options))
document_hash = _hash_sha256(canonical)
signed_data = options_hash + document_hash
# Decode and verify signature
signature_bytes = base64.b64decode(activity.signature["signatureValue"])
public_key.verify(
signature_bytes,
signed_data,
padding.PKCS1v15(),
hashes.SHA256(),
)
return True
except (InvalidSignature, KeyError, ValueError):
return False
def verify_activity_ownership(activity: Activity, actor: Actor) -> bool:
"""
Verify that an activity was signed by the claimed actor.
Args:
activity: The activity to verify
actor: The claimed actor
Returns:
True if the activity was signed by this actor
"""
if not activity.signature:
return False
# Check creator matches actor
if activity.signature.get("creator") != actor.key_id:
return False
# Verify signature
return verify_signature(activity, actor.public_key)
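The signing input shared by sign_activity and verify_signature — the SHA-256 hash of the canonicalized signature options concatenated with the hash of the canonicalized document — can be reproduced with the standard library alone. The RSA key material and the actual signing step are omitted here; only the byte construction is shown:

```python
import hashlib
import json

def canonicalize(data: dict) -> str:
    # Sorted keys, no whitespace — the same recipe as _canonicalize above
    return json.dumps(data, sort_keys=True, separators=(",", ":"))

def signing_input(options: dict, document: dict) -> bytes:
    """Bytes that get RSA-signed: hash(options) || hash(document)."""
    options_hash = hashlib.sha256(canonicalize(options).encode()).digest()
    document_hash = hashlib.sha256(canonicalize(document).encode()).digest()
    return options_hash + document_hash

options = {"@context": "https://w3id.org/security/v1", "type": "RsaSignature2017"}
doc = {"type": "Create", "actor": "https://example.org/users/alice"}
assert len(signing_input(options, doc)) == 64  # two concatenated SHA-256 digests
```

Because both sides canonicalize before hashing, verification is insensitive to key order or whitespace in the transported JSON.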


@@ -0,0 +1,26 @@
# artdag/analysis - Audio and video feature extraction
#
# Provides the Analysis phase of the 3-phase execution model:
# 1. ANALYZE - Extract features from inputs
# 2. PLAN - Generate execution plan with cache IDs
# 3. EXECUTE - Run steps with caching
from .schema import (
AnalysisResult,
AudioFeatures,
VideoFeatures,
BeatInfo,
EnergyEnvelope,
SpectrumBands,
)
from .analyzer import Analyzer
__all__ = [
"Analyzer",
"AnalysisResult",
"AudioFeatures",
"VideoFeatures",
"BeatInfo",
"EnergyEnvelope",
"SpectrumBands",
]


@@ -0,0 +1,282 @@
# artdag/analysis/analyzer.py
"""
Main Analyzer class for the Analysis phase.
Coordinates audio and video feature extraction with caching.
"""
import json
import logging
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional
from .schema import AnalysisResult, AudioFeatures, VideoFeatures
from .audio import analyze_audio, FEATURE_ALL as AUDIO_ALL
from .video import analyze_video, FEATURE_ALL as VIDEO_ALL
logger = logging.getLogger(__name__)
class AnalysisCache:
"""
Simple file-based cache for analysis results.
Stores results as JSON files keyed by analysis cache_id.
"""
def __init__(self, cache_dir: Path):
self.cache_dir = Path(cache_dir)
self.cache_dir.mkdir(parents=True, exist_ok=True)
def _path_for(self, cache_id: str) -> Path:
"""Get cache file path for a cache_id."""
return self.cache_dir / f"{cache_id}.json"
def get(self, cache_id: str) -> Optional[AnalysisResult]:
"""Retrieve cached analysis result."""
path = self._path_for(cache_id)
if not path.exists():
return None
try:
with open(path, "r") as f:
data = json.load(f)
return AnalysisResult.from_dict(data)
except (json.JSONDecodeError, KeyError) as e:
logger.warning(f"Failed to load analysis cache {cache_id}: {e}")
return None
def put(self, result: AnalysisResult) -> None:
"""Store analysis result in cache."""
path = self._path_for(result.cache_id)
with open(path, "w") as f:
json.dump(result.to_dict(), f, indent=2)
def has(self, cache_id: str) -> bool:
"""Check if analysis result is cached."""
return self._path_for(cache_id).exists()
def remove(self, cache_id: str) -> bool:
"""Remove cached analysis result."""
path = self._path_for(cache_id)
if path.exists():
path.unlink()
return True
return False
class Analyzer:
"""
Analyzes media inputs to extract features.
The Analyzer is the first phase of the 3-phase execution model.
It extracts features from inputs that inform downstream processing.
Example:
analyzer = Analyzer(cache_dir=Path("./analysis_cache"))
# Analyze a music file for beats
result = analyzer.analyze(
input_path=Path("/path/to/music.mp3"),
input_hash="abc123...",
features=["beats", "energy"]
)
print(f"Tempo: {result.tempo} BPM")
print(f"Beats: {result.beat_times}")
"""
def __init__(
self,
cache_dir: Optional[Path] = None,
content_cache: Optional["Cache"] = None, # artdag.Cache for input lookup
):
"""
Initialize the Analyzer.
Args:
cache_dir: Directory for analysis cache. If None, no caching.
content_cache: artdag Cache for looking up inputs by hash
"""
self.cache = AnalysisCache(cache_dir) if cache_dir else None
self.content_cache = content_cache
def get_input_path(self, input_hash: str, input_path: Optional[Path] = None) -> Path:
"""
Resolve input to a file path.
Args:
input_hash: Content hash of the input
input_path: Optional direct path to file
Returns:
Path to the input file
Raises:
ValueError: If input cannot be resolved
"""
if input_path and input_path.exists():
return input_path
if self.content_cache:
entry = self.content_cache.get(input_hash)
if entry:
return Path(entry.output_path)
raise ValueError(f"Cannot resolve input {input_hash}: no path provided and not in cache")
def analyze(
self,
input_hash: str,
features: List[str],
input_path: Optional[Path] = None,
media_type: Optional[str] = None,
) -> AnalysisResult:
"""
Analyze an input file and extract features.
Args:
input_hash: Content hash of the input (for cache key)
features: List of features to extract:
Audio: "beats", "tempo", "energy", "spectrum", "onsets"
Video: "metadata", "motion_tempo", "scene_changes"
Meta: "all" (extracts all relevant features)
input_path: Optional direct path to file
media_type: Optional hint ("audio", "video", or None for auto-detect)
Returns:
AnalysisResult with extracted features
"""
# Compute cache ID
temp_result = AnalysisResult(
input_hash=input_hash,
features_requested=sorted(features),
)
cache_id = temp_result.cache_id
# Check cache
if self.cache and self.cache.has(cache_id):
cached = self.cache.get(cache_id)
if cached:
logger.info(f"Analysis cache hit: {cache_id[:16]}...")
return cached
# Resolve input path
path = self.get_input_path(input_hash, input_path)
logger.info(f"Analyzing {path} for features: {features}")
# Detect media type if not specified
if media_type is None:
media_type = self._detect_media_type(path)
# Extract features
audio_features = None
video_features = None
# Normalize features
if "all" in features:
audio_features_list = [AUDIO_ALL]
video_features_list = [VIDEO_ALL]
else:
audio_features_list = [f for f in features if f in ("beats", "tempo", "energy", "spectrum", "onsets")]
video_features_list = [f for f in features if f in ("metadata", "motion_tempo", "scene_changes")]
if media_type in ("audio", "video") and audio_features_list:
try:
audio_features = analyze_audio(path, features=audio_features_list)
except Exception as e:
logger.warning(f"Audio analysis failed: {e}")
if media_type == "video" and video_features_list:
try:
video_features = analyze_video(path, features=video_features_list)
except Exception as e:
logger.warning(f"Video analysis failed: {e}")
result = AnalysisResult(
input_hash=input_hash,
features_requested=sorted(features),
audio=audio_features,
video=video_features,
analyzed_at=datetime.now(timezone.utc).isoformat(),
)
# Cache result
if self.cache:
self.cache.put(result)
return result
def analyze_multiple(
self,
inputs: Dict[str, Path],
features: List[str],
) -> Dict[str, AnalysisResult]:
"""
Analyze multiple inputs.
Args:
inputs: Dict mapping input_hash to file path
features: Features to extract from all inputs
Returns:
Dict mapping input_hash to AnalysisResult
"""
results = {}
for input_hash, input_path in inputs.items():
try:
results[input_hash] = self.analyze(
input_hash=input_hash,
features=features,
input_path=input_path,
)
except Exception as e:
logger.error(f"Analysis failed for {input_hash}: {e}")
raise
return results
def _detect_media_type(self, path: Path) -> str:
"""
Detect if file is audio or video.
Args:
path: Path to media file
Returns:
"audio" or "video"
"""
import subprocess  # json is already imported at module level
cmd = [
"ffprobe", "-v", "quiet",
"-print_format", "json",
"-show_streams",
str(path)
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
data = json.loads(result.stdout)
streams = data.get("streams", [])
has_video = any(s.get("codec_type") == "video" for s in streams)
has_audio = any(s.get("codec_type") == "audio" for s in streams)
if has_video:
return "video"
elif has_audio:
return "audio"
else:
return "unknown"
except (subprocess.CalledProcessError, json.JSONDecodeError):
# Fall back to extension-based detection
ext = path.suffix.lower()
if ext in (".mp4", ".mov", ".avi", ".mkv", ".webm"):
return "video"
elif ext in (".mp3", ".wav", ".flac", ".ogg", ".m4a", ".aac"):
return "audio"
return "unknown"


@@ -0,0 +1,336 @@
# artdag/analysis/audio.py
"""
Audio feature extraction.
Uses librosa for beat detection, energy analysis, and spectral features.
Falls back to basic ffprobe if librosa is not available.
"""
import json
import logging
import subprocess
from pathlib import Path
from typing import List, Optional, Tuple
from .schema import AudioFeatures, BeatInfo, EnergyEnvelope, SpectrumBands
logger = logging.getLogger(__name__)
# Feature names for requesting specific analysis
FEATURE_BEATS = "beats"
FEATURE_TEMPO = "tempo"
FEATURE_ENERGY = "energy"
FEATURE_SPECTRUM = "spectrum"
FEATURE_ONSETS = "onsets"
FEATURE_ALL = "all"
def _get_audio_info_ffprobe(path: Path) -> Tuple[float, int, int]:
"""Get basic audio info using ffprobe."""
cmd = [
"ffprobe", "-v", "quiet",
"-print_format", "json",
"-show_streams",
"-select_streams", "a:0",
str(path)
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
data = json.loads(result.stdout)
if not data.get("streams"):
raise ValueError("No audio stream found")
stream = data["streams"][0]
duration = float(stream.get("duration", 0))
sample_rate = int(stream.get("sample_rate", 44100))
channels = int(stream.get("channels", 2))
return duration, sample_rate, channels
except (subprocess.CalledProcessError, json.JSONDecodeError, KeyError) as e:
logger.warning(f"ffprobe failed: {e}")
raise ValueError(f"Could not read audio info: {e}")
def _extract_audio_to_wav(path: Path, duration: Optional[float] = None) -> Path:
"""Extract audio to temporary WAV file for librosa processing."""
import os
import tempfile
# mkstemp avoids the race condition inherent in the deprecated tempfile.mktemp;
# ffmpeg -y below happily overwrites the pre-created file
fd, tmp_name = tempfile.mkstemp(suffix=".wav")
os.close(fd)
wav_path = Path(tmp_name)
cmd = ["ffmpeg", "-y", "-i", str(path)]
if duration:
cmd.extend(["-t", str(duration)])
cmd.extend([
"-vn", # No video
"-acodec", "pcm_s16le",
"-ar", "22050", # Resample to 22050 Hz for librosa
"-ac", "1", # Mono
str(wav_path)
])
try:
subprocess.run(cmd, capture_output=True, check=True)
return wav_path
except subprocess.CalledProcessError as e:
logger.error(f"Audio extraction failed: {e.stderr}")
raise ValueError(f"Could not extract audio: {e}")
def analyze_beats(path: Path, sample_rate: int = 22050) -> BeatInfo:
"""
Detect beats and tempo using librosa.
Args:
path: Path to audio file (or pre-extracted WAV)
sample_rate: Sample rate for analysis
Returns:
BeatInfo with beat times, tempo, and confidence
"""
try:
import librosa
except ImportError:
raise ImportError("librosa required for beat detection. Install with: pip install librosa")
# Load audio
y, sr = librosa.load(str(path), sr=sample_rate, mono=True)
# Detect tempo and beats
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
# Convert frames to times
beat_times = librosa.frames_to_time(beat_frames, sr=sr).tolist()
# Estimate confidence from onset strength consistency
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
beat_strength = onset_env[beat_frames] if len(beat_frames) > 0 else []
confidence = float(beat_strength.mean() / onset_env.max()) if len(beat_strength) > 0 and onset_env.max() > 0 else 0.5
# Detect downbeats (first beat of each bar)
# Use beat phase to estimate bar positions
downbeat_times = None
if len(beat_times) >= 4:
# Assume 4/4 time signature, downbeats every 4 beats
downbeat_times = [beat_times[i] for i in range(0, len(beat_times), 4)]
# librosa may return tempo as a float or an ndarray depending on version;
# float() on a multi-element array raises TypeError, so coerce defensively
try:
tempo_value = float(tempo)
except (TypeError, ValueError):
tempo_value = float(tempo[0]) if len(tempo) > 0 else 120.0
return BeatInfo(
beat_times=beat_times,
tempo=tempo_value,
confidence=min(1.0, max(0.0, confidence)),
downbeat_times=downbeat_times,
time_signature=4,
)
def analyze_energy(path: Path, window_ms: float = 50.0, sample_rate: int = 22050) -> EnergyEnvelope:
"""
Extract energy (loudness) envelope.
Args:
path: Path to audio file
window_ms: Analysis window size in milliseconds
sample_rate: Sample rate for analysis
Returns:
EnergyEnvelope with times and normalized values
"""
try:
import librosa
import numpy as np
except ImportError:
raise ImportError("librosa and numpy required. Install with: pip install librosa numpy")
y, sr = librosa.load(str(path), sr=sample_rate, mono=True)
# Calculate frame size from window_ms
hop_length = int(sr * window_ms / 1000)
# RMS energy
rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
# Normalize to 0-1
rms_max = rms.max()
if rms_max > 0:
rms_normalized = rms / rms_max
else:
rms_normalized = rms
# Generate time points
times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop_length)
return EnergyEnvelope(
times=times.tolist(),
values=rms_normalized.tolist(),
window_ms=window_ms,
)
def analyze_spectrum(
path: Path,
band_ranges: Optional[dict] = None,
window_ms: float = 50.0,
sample_rate: int = 22050
) -> SpectrumBands:
"""
Extract frequency band envelopes.
Args:
path: Path to audio file
band_ranges: Dict mapping band name to (low_hz, high_hz)
window_ms: Analysis window size
sample_rate: Sample rate
Returns:
SpectrumBands with bass, mid, high envelopes
"""
try:
import librosa
import numpy as np
except ImportError:
raise ImportError("librosa and numpy required")
if band_ranges is None:
band_ranges = {
"bass": (20, 200),
"mid": (200, 2000),
"high": (2000, 20000),
}
y, sr = librosa.load(str(path), sr=sample_rate, mono=True)
hop_length = int(sr * window_ms / 1000)
# Compute STFT
n_fft = 2048
stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
# Frequency bins
freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
def band_energy(low_hz: float, high_hz: float) -> List[float]:
"""Sum energy in frequency band."""
mask = (freqs >= low_hz) & (freqs <= high_hz)
if not mask.any():
return [0.0] * stft.shape[1]
band = stft[mask, :].sum(axis=0)
# Normalize
band_max = band.max()
if band_max > 0:
band = band / band_max
return band.tolist()
times = librosa.frames_to_time(np.arange(stft.shape[1]), sr=sr, hop_length=hop_length)
return SpectrumBands(
bass=band_energy(*band_ranges["bass"]),
mid=band_energy(*band_ranges["mid"]),
high=band_energy(*band_ranges["high"]),
times=times.tolist(),
band_ranges=band_ranges,
)
def analyze_onsets(path: Path, sample_rate: int = 22050) -> List[float]:
"""
Detect onset times (note/sound starts).
Args:
path: Path to audio file
sample_rate: Sample rate
Returns:
List of onset times in seconds
"""
try:
import librosa
except ImportError:
raise ImportError("librosa required")
y, sr = librosa.load(str(path), sr=sample_rate, mono=True)
# Detect onsets
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)
return onset_times.tolist()
def analyze_audio(
path: Path,
features: Optional[List[str]] = None,
) -> AudioFeatures:
"""
Extract audio features from file.
Args:
path: Path to audio/video file
features: List of features to extract. Options:
- "beats": Beat detection (tempo, beat times)
- "energy": Loudness envelope
- "spectrum": Frequency band envelopes
- "onsets": Note onset times
- "all": All features
Returns:
AudioFeatures with requested analysis
"""
if features is None:
features = [FEATURE_ALL]
# Normalize features
if FEATURE_ALL in features:
features = [FEATURE_BEATS, FEATURE_ENERGY, FEATURE_SPECTRUM, FEATURE_ONSETS]
# Get basic info via ffprobe
duration, sample_rate, channels = _get_audio_info_ffprobe(path)
result = AudioFeatures(
duration=duration,
sample_rate=sample_rate,
channels=channels,
)
# Check if librosa is available for advanced features
try:
import librosa # noqa: F401
has_librosa = True
except ImportError:
has_librosa = False
if not has_librosa:
# only warn when advanced features were actually requested
if any(f in features for f in [FEATURE_BEATS, FEATURE_ENERGY, FEATURE_SPECTRUM, FEATURE_ONSETS]):
logger.warning("librosa not available, skipping advanced audio features")
return result
# Extract audio to WAV for librosa
wav_path = None
try:
wav_path = _extract_audio_to_wav(path)
if FEATURE_BEATS in features or FEATURE_TEMPO in features:
try:
result.beats = analyze_beats(wav_path)
except Exception as e:
logger.warning(f"Beat detection failed: {e}")
if FEATURE_ENERGY in features:
try:
result.energy = analyze_energy(wav_path)
except Exception as e:
logger.warning(f"Energy analysis failed: {e}")
if FEATURE_SPECTRUM in features:
try:
result.spectrum = analyze_spectrum(wav_path)
except Exception as e:
logger.warning(f"Spectrum analysis failed: {e}")
if FEATURE_ONSETS in features:
try:
result.onsets = analyze_onsets(wav_path)
except Exception as e:
logger.warning(f"Onset detection failed: {e}")
finally:
# Clean up temporary WAV file
if wav_path and wav_path.exists():
wav_path.unlink()
return result
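The window_ms parameter threads through every extractor above as a hop length, and the frame-index to timestamp mapping that librosa performs reduces to simple arithmetic. A sketch of that relationship (not librosa's API, just the math it implements):

```python
def hop_length_for(window_ms: float, sample_rate: int) -> int:
    """Samples per analysis frame for a given window size, as in analyze_energy."""
    return int(sample_rate * window_ms / 1000)

def frame_times(n_frames: int, sample_rate: int, hop_length: int) -> list[float]:
    """Timestamp of each analysis frame, mirroring librosa.frames_to_time."""
    return [i * hop_length / sample_rate for i in range(n_frames)]

hop = hop_length_for(50.0, 22050)  # 1102 samples per ~50 ms frame
times = frame_times(4, 22050, hop)
print(times)  # one hop (~0.04998 s) between consecutive frames
```

The truncation in int() means frames are fractionally shorter than the nominal window, which is why envelope times are derived from hop_length rather than from window_ms directly.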


@@ -0,0 +1,352 @@
# artdag/analysis/schema.py
"""
Data structures for analysis results.
Analysis extracts features from input media that inform downstream processing.
Results are cached by: analysis_cache_id = SHA3-256(input_hash + sorted(features))
"""
import hashlib
import json
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple
def _stable_hash(data: Any, algorithm: str = "sha3_256") -> str:
"""Create stable hash from arbitrary data."""
json_str = json.dumps(data, sort_keys=True, separators=(",", ":"))
hasher = hashlib.new(algorithm)
hasher.update(json_str.encode())
return hasher.hexdigest()
@dataclass
class BeatInfo:
"""
Beat detection results.
Attributes:
beat_times: List of beat positions in seconds
tempo: Estimated tempo in BPM
confidence: Tempo detection confidence (0-1)
downbeat_times: First beat of each bar (if detected)
time_signature: Detected or assumed time signature (e.g., 4)
"""
beat_times: List[float]
tempo: float
confidence: float = 1.0
downbeat_times: Optional[List[float]] = None
time_signature: int = 4
def to_dict(self) -> Dict[str, Any]:
return {
"beat_times": self.beat_times,
"tempo": self.tempo,
"confidence": self.confidence,
"downbeat_times": self.downbeat_times,
"time_signature": self.time_signature,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "BeatInfo":
return cls(
beat_times=data["beat_times"],
tempo=data["tempo"],
confidence=data.get("confidence", 1.0),
downbeat_times=data.get("downbeat_times"),
time_signature=data.get("time_signature", 4),
)
@dataclass
class EnergyEnvelope:
"""
Energy (loudness) over time.
Attributes:
times: Time points in seconds
values: Energy values (0-1, normalized)
window_ms: Analysis window size in milliseconds
"""
times: List[float]
values: List[float]
window_ms: float = 50.0
def to_dict(self) -> Dict[str, Any]:
return {
"times": self.times,
"values": self.values,
"window_ms": self.window_ms,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "EnergyEnvelope":
return cls(
times=data["times"],
values=data["values"],
window_ms=data.get("window_ms", 50.0),
)
def at_time(self, t: float) -> float:
"""Interpolate energy value at given time."""
if not self.times:
return 0.0
if t <= self.times[0]:
return self.values[0]
if t >= self.times[-1]:
return self.values[-1]
# Binary search for bracketing indices
lo, hi = 0, len(self.times) - 1
while hi - lo > 1:
mid = (lo + hi) // 2
if self.times[mid] <= t:
lo = mid
else:
hi = mid
# Linear interpolation
t0, t1 = self.times[lo], self.times[hi]
v0, v1 = self.values[lo], self.values[hi]
alpha = (t - t0) / (t1 - t0) if t1 != t0 else 0
return v0 + alpha * (v1 - v0)
@dataclass
class SpectrumBands:
"""
Frequency band envelopes over time.
Attributes:
bass: Low frequency envelope (20-200 Hz typical)
mid: Mid frequency envelope (200-2000 Hz typical)
high: High frequency envelope (2000-20000 Hz typical)
times: Time points in seconds
band_ranges: Frequency ranges for each band in Hz
"""
bass: List[float]
mid: List[float]
high: List[float]
times: List[float]
band_ranges: Dict[str, Tuple[float, float]] = field(default_factory=lambda: {
"bass": (20, 200),
"mid": (200, 2000),
"high": (2000, 20000),
})
def to_dict(self) -> Dict[str, Any]:
return {
"bass": self.bass,
"mid": self.mid,
"high": self.high,
"times": self.times,
"band_ranges": self.band_ranges,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SpectrumBands":
return cls(
bass=data["bass"],
mid=data["mid"],
high=data["high"],
times=data["times"],
band_ranges=data.get("band_ranges", {
"bass": (20, 200),
"mid": (200, 2000),
"high": (2000, 20000),
}),
)
@dataclass
class AudioFeatures:
"""
All extracted audio features.
Attributes:
duration: Audio duration in seconds
sample_rate: Sample rate in Hz
channels: Number of audio channels
beats: Beat detection results
energy: Energy envelope
spectrum: Frequency band envelopes
onsets: Note/sound onset times
"""
duration: float
sample_rate: int
channels: int
beats: Optional[BeatInfo] = None
energy: Optional[EnergyEnvelope] = None
spectrum: Optional[SpectrumBands] = None
onsets: Optional[List[float]] = None
def to_dict(self) -> Dict[str, Any]:
return {
"duration": self.duration,
"sample_rate": self.sample_rate,
"channels": self.channels,
"beats": self.beats.to_dict() if self.beats else None,
"energy": self.energy.to_dict() if self.energy else None,
"spectrum": self.spectrum.to_dict() if self.spectrum else None,
"onsets": self.onsets,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "AudioFeatures":
return cls(
duration=data["duration"],
sample_rate=data["sample_rate"],
channels=data["channels"],
beats=BeatInfo.from_dict(data["beats"]) if data.get("beats") else None,
energy=EnergyEnvelope.from_dict(data["energy"]) if data.get("energy") else None,
spectrum=SpectrumBands.from_dict(data["spectrum"]) if data.get("spectrum") else None,
onsets=data.get("onsets"),
)
@dataclass
class VideoFeatures:
"""
Extracted video features.
Attributes:
duration: Video duration in seconds
frame_rate: Frames per second
width: Frame width in pixels
height: Frame height in pixels
codec: Video codec name
motion_tempo: Estimated tempo from motion analysis (optional)
scene_changes: Times of detected scene changes
"""
duration: float
frame_rate: float
width: int
height: int
codec: str = ""
motion_tempo: Optional[float] = None
scene_changes: Optional[List[float]] = None
def to_dict(self) -> Dict[str, Any]:
return {
"duration": self.duration,
"frame_rate": self.frame_rate,
"width": self.width,
"height": self.height,
"codec": self.codec,
"motion_tempo": self.motion_tempo,
"scene_changes": self.scene_changes,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VideoFeatures":
return cls(
duration=data["duration"],
frame_rate=data["frame_rate"],
width=data["width"],
height=data["height"],
codec=data.get("codec", ""),
motion_tempo=data.get("motion_tempo"),
scene_changes=data.get("scene_changes"),
)
@dataclass
class AnalysisResult:
"""
Complete analysis result for an input.
Combines audio and video features with metadata for caching.
Attributes:
input_hash: Content hash of the analyzed input
features_requested: List of features that were requested
audio: Audio features (if input has audio)
video: Video features (if input has video)
cache_id: Computed cache ID for this analysis
analyzed_at: Timestamp of analysis
"""
input_hash: str
features_requested: List[str]
audio: Optional[AudioFeatures] = None
video: Optional[VideoFeatures] = None
cache_id: Optional[str] = None
analyzed_at: Optional[str] = None
def __post_init__(self):
"""Compute cache_id if not provided."""
if self.cache_id is None:
self.cache_id = self._compute_cache_id()
def _compute_cache_id(self) -> str:
"""
Compute cache ID from input hash and requested features.
cache_id = SHA3-256(input_hash + sorted(features_requested))
"""
content = {
"input_hash": self.input_hash,
"features": sorted(self.features_requested),
}
return _stable_hash(content)
def to_dict(self) -> Dict[str, Any]:
return {
"input_hash": self.input_hash,
"features_requested": self.features_requested,
"audio": self.audio.to_dict() if self.audio else None,
"video": self.video.to_dict() if self.video else None,
"cache_id": self.cache_id,
"analyzed_at": self.analyzed_at,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "AnalysisResult":
return cls(
input_hash=data["input_hash"],
features_requested=data["features_requested"],
audio=AudioFeatures.from_dict(data["audio"]) if data.get("audio") else None,
video=VideoFeatures.from_dict(data["video"]) if data.get("video") else None,
cache_id=data.get("cache_id"),
analyzed_at=data.get("analyzed_at"),
)
def to_json(self) -> str:
"""Serialize to JSON string."""
return json.dumps(self.to_dict(), indent=2)
@classmethod
def from_json(cls, json_str: str) -> "AnalysisResult":
"""Deserialize from JSON string."""
return cls.from_dict(json.loads(json_str))
# Convenience accessors
@property
def tempo(self) -> Optional[float]:
"""Get tempo if beats were analyzed."""
return self.audio.beats.tempo if self.audio and self.audio.beats else None
@property
def beat_times(self) -> Optional[List[float]]:
"""Get beat times if beats were analyzed."""
return self.audio.beats.beat_times if self.audio and self.audio.beats else None
@property
def downbeat_times(self) -> Optional[List[float]]:
"""Get downbeat times if analyzed."""
return self.audio.beats.downbeat_times if self.audio and self.audio.beats else None
@property
def duration(self) -> float:
"""Get duration from video or audio."""
if self.video:
return self.video.duration
if self.audio:
return self.audio.duration
return 0.0
@property
def dimensions(self) -> Optional[Tuple[int, int]]:
"""Get video dimensions if available."""
if self.video:
return (self.video.width, self.video.height)
return None
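The cache key recipe documented at the top of this file — SHA3-256 over the input hash plus the sorted feature list — is order-insensitive by construction, which is what lets analyze() hit the cache regardless of how the caller orders the features argument:

```python
import hashlib
import json

def cache_id(input_hash: str, features: list[str]) -> str:
    """Replicates AnalysisResult._compute_cache_id via _stable_hash."""
    content = {"input_hash": input_hash, "features": sorted(features)}
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha3_256(canonical.encode()).hexdigest()

# Feature order does not change the key; a different input hash does
assert cache_id("abc", ["energy", "beats"]) == cache_id("abc", ["beats", "energy"])
assert cache_id("abc", ["beats"]) != cache_id("def", ["beats"])
```

Sorting happens both here and in analyze() before constructing AnalysisResult, so either site alone is sufficient to keep keys stable.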


@@ -0,0 +1,266 @@
# artdag/analysis/video.py
"""
Video feature extraction.
Uses ffprobe for basic metadata and optional OpenCV for motion analysis.
"""
import json
import logging
import subprocess
from fractions import Fraction
from pathlib import Path
from typing import List, Optional
from .schema import VideoFeatures
logger = logging.getLogger(__name__)
# Feature names
FEATURE_METADATA = "metadata"
FEATURE_MOTION_TEMPO = "motion_tempo"
FEATURE_SCENE_CHANGES = "scene_changes"
FEATURE_ALL = "all"
def _parse_frame_rate(rate_str: str) -> float:
"""Parse frame rate string like '30000/1001' or '30'."""
try:
if "/" in rate_str:
frac = Fraction(rate_str)
return float(frac)
return float(rate_str)
except (ValueError, ZeroDivisionError):
return 30.0 # Default
def analyze_metadata(path: Path) -> VideoFeatures:
"""
Extract video metadata using ffprobe.
Args:
path: Path to video file
Returns:
VideoFeatures with basic metadata
"""
cmd = [
"ffprobe", "-v", "quiet",
"-print_format", "json",
"-show_streams",
"-show_format",
"-select_streams", "v:0",
str(path)
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
data = json.loads(result.stdout)
except (subprocess.CalledProcessError, json.JSONDecodeError) as e:
raise ValueError(f"Could not read video info: {e}")
if not data.get("streams"):
raise ValueError("No video stream found")
stream = data["streams"][0]
fmt = data.get("format", {})
# Get duration from format or stream
duration = float(fmt.get("duration", stream.get("duration", 0)))
# Parse frame rate
frame_rate = _parse_frame_rate(stream.get("avg_frame_rate", "30"))
return VideoFeatures(
duration=duration,
frame_rate=frame_rate,
width=int(stream.get("width", 0)),
height=int(stream.get("height", 0)),
codec=stream.get("codec_name", ""),
)
def analyze_scene_changes(path: Path, threshold: float = 0.3) -> List[float]:
"""
Detect scene changes using ffmpeg scene detection.
Args:
path: Path to video file
threshold: Scene change threshold (0-1, lower = more sensitive)
Returns:
List of scene change times in seconds
"""
cmd = [
"ffmpeg", "-i", str(path),
"-vf", f"select='gt(scene,{threshold})',showinfo",
"-f", "null", "-"
]
try:
result = subprocess.run(cmd, capture_output=True, text=True)
stderr = result.stderr
except subprocess.CalledProcessError as e:
logger.warning(f"Scene detection failed: {e}")
return []
# Parse scene change times from ffmpeg output
scene_times = []
for line in stderr.split("\n"):
if "pts_time:" in line:
try:
# Extract pts_time value
for part in line.split():
if part.startswith("pts_time:"):
time_str = part.split(":")[1]
scene_times.append(float(time_str))
break
except (ValueError, IndexError):
continue
return scene_times
def analyze_motion_tempo(path: Path, sample_duration: float = 30.0) -> Optional[float]:
"""
Estimate tempo from video motion periodicity.
Analyzes optical flow or frame differences to detect rhythmic motion.
This is useful for matching video speed to audio tempo.
Args:
path: Path to video file
sample_duration: Duration to analyze (seconds)
Returns:
Estimated motion tempo in BPM, or None if not detectable
"""
try:
import cv2
import numpy as np
except ImportError:
logger.warning("OpenCV not available, skipping motion tempo analysis")
return None
cap = cv2.VideoCapture(str(path))
if not cap.isOpened():
logger.warning(f"Could not open video: {path}")
return None
try:
fps = cap.get(cv2.CAP_PROP_FPS)
if fps <= 0:
fps = 30.0
max_frames = int(sample_duration * fps)
frame_diffs = []
prev_gray = None
frame_count = 0
while frame_count < max_frames:
ret, frame = cap.read()
if not ret:
break
# Convert to grayscale and resize for speed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, (160, 90))
if prev_gray is not None:
# Calculate frame difference
diff = cv2.absdiff(gray, prev_gray)
frame_diffs.append(np.mean(diff))
prev_gray = gray
frame_count += 1
if len(frame_diffs) < 60: # Need at least 2 seconds at 30fps
return None
# Convert to numpy array
motion = np.array(frame_diffs)
# Normalize
motion = motion - motion.mean()
if motion.std() > 0:
motion = motion / motion.std()
# Autocorrelation to find periodicity
n = len(motion)
acf = np.correlate(motion, motion, mode="full")[n-1:]
        if acf[0] <= 0:  # All-zero motion signal: nothing to normalize
            return None
        acf = acf / acf[0]  # Normalize
# Find peaks in autocorrelation (potential beat periods)
# Look for periods between 0.3s (200 BPM) and 2s (30 BPM)
min_lag = int(0.3 * fps)
max_lag = min(int(2.0 * fps), len(acf) - 1)
if max_lag <= min_lag:
return None
# Find the highest peak in the valid range
search_range = acf[min_lag:max_lag]
if len(search_range) == 0:
return None
peak_idx = np.argmax(search_range) + min_lag
peak_value = acf[peak_idx]
# Only report if peak is significant
if peak_value < 0.1:
return None
# Convert lag to BPM
period_seconds = peak_idx / fps
bpm = 60.0 / period_seconds
# Sanity check
if 30 <= bpm <= 200:
return round(bpm, 1)
return None
finally:
cap.release()
def analyze_video(
path: Path,
features: Optional[List[str]] = None,
) -> VideoFeatures:
"""
Extract video features from file.
Args:
path: Path to video file
features: List of features to extract. Options:
- "metadata": Basic video info (always included)
- "motion_tempo": Estimated tempo from motion
- "scene_changes": Scene change detection
- "all": All features
Returns:
VideoFeatures with requested analysis
"""
if features is None:
features = [FEATURE_METADATA]
if FEATURE_ALL in features:
features = [FEATURE_METADATA, FEATURE_MOTION_TEMPO, FEATURE_SCENE_CHANGES]
# Basic metadata is always extracted
result = analyze_metadata(path)
if FEATURE_MOTION_TEMPO in features:
try:
result.motion_tempo = analyze_motion_tempo(path)
except Exception as e:
logger.warning(f"Motion tempo analysis failed: {e}")
if FEATURE_SCENE_CHANGES in features:
try:
result.scene_changes = analyze_scene_changes(path)
except Exception as e:
logger.warning(f"Scene change detection failed: {e}")
return result
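
Two of the numeric conversions above are easy to sanity-check in isolation. The sketch below is stdlib-only and the function names are local stand-ins, not the module's API; it mirrors `_parse_frame_rate` and the lag-to-BPM step at the end of `analyze_motion_tempo`:

```python
from fractions import Fraction

def parse_frame_rate(rate_str: str) -> float:
    # Mirrors _parse_frame_rate: '30000/1001' -> ~29.97, '30' -> 30.0
    try:
        return float(Fraction(rate_str)) if "/" in rate_str else float(rate_str)
    except (ValueError, ZeroDivisionError):
        return 30.0  # Same default as the module

def lag_to_bpm(lag_frames: int, fps: float) -> float:
    # An autocorrelation peak at lag L frames means one motion cycle
    # every L/fps seconds, i.e. 60 / (L/fps) beats per minute
    return 60.0 / (lag_frames / fps)

assert abs(parse_frame_rate("30000/1001") - 29.97) < 0.01
assert parse_frame_rate("bogus") == 30.0
assert lag_to_bpm(15, 30.0) == 120.0  # half-second period -> 120 BPM
```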

artdag/core/artdag/cache.py

@@ -0,0 +1,464 @@
# primitive/cache.py
"""
Content-addressed file cache for node outputs.
Each node's output is stored at: cache_dir / node_id / output_file
This enables automatic reuse when the same operation is requested.
"""
import json
import logging
import shutil
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional
logger = logging.getLogger(__name__)
def _file_hash(path: Path, algorithm: str = "sha3_256") -> str:
"""
Compute content hash of a file.
Uses SHA-3 (Keccak) by default for quantum resistance.
"""
import hashlib
hasher = hashlib.new(algorithm)
with open(path, "rb") as f:
for chunk in iter(lambda: f.read(65536), b""):
hasher.update(chunk)
return hasher.hexdigest()
@dataclass
class CacheEntry:
"""Metadata about a cached output."""
node_id: str
output_path: Path
created_at: float
size_bytes: int
node_type: str
cid: str = "" # Content identifier (IPFS CID or local hash)
execution_time: float = 0.0
def to_dict(self) -> Dict:
return {
"node_id": self.node_id,
"output_path": str(self.output_path),
"created_at": self.created_at,
"size_bytes": self.size_bytes,
"node_type": self.node_type,
"cid": self.cid,
"execution_time": self.execution_time,
}
@classmethod
def from_dict(cls, data: Dict) -> "CacheEntry":
# Support both "cid" and legacy "content_hash"
cid = data.get("cid") or data.get("content_hash", "")
return cls(
node_id=data["node_id"],
output_path=Path(data["output_path"]),
created_at=data["created_at"],
size_bytes=data["size_bytes"],
node_type=data["node_type"],
cid=cid,
execution_time=data.get("execution_time", 0.0),
)
@dataclass
class CacheStats:
"""Statistics about cache usage."""
total_entries: int = 0
total_size_bytes: int = 0
hits: int = 0
misses: int = 0
hit_rate: float = 0.0
def record_hit(self):
self.hits += 1
self._update_rate()
def record_miss(self):
self.misses += 1
self._update_rate()
def _update_rate(self):
total = self.hits + self.misses
self.hit_rate = self.hits / total if total > 0 else 0.0
class Cache:
"""
Code-addressed file cache.
The filesystem IS the index - no JSON index files needed.
Each node's hash is its directory name.
Structure:
cache_dir/
<hash>/
output.ext # Actual output file
metadata.json # Per-node metadata (optional)
"""
def __init__(self, cache_dir: Path | str):
self.cache_dir = Path(cache_dir)
self.cache_dir.mkdir(parents=True, exist_ok=True)
self.stats = CacheStats()
def _node_dir(self, node_id: str) -> Path:
"""Get the cache directory for a node."""
return self.cache_dir / node_id
def _find_output_file(self, node_dir: Path) -> Optional[Path]:
"""Find the output file in a node directory."""
if not node_dir.exists() or not node_dir.is_dir():
return None
for f in node_dir.iterdir():
if f.is_file() and f.name.startswith("output."):
return f
return None
def get(self, node_id: str) -> Optional[Path]:
"""
Get cached output path for a node.
Checks filesystem directly - no in-memory index.
Returns the output path if cached, None otherwise.
"""
node_dir = self._node_dir(node_id)
output_file = self._find_output_file(node_dir)
if output_file:
self.stats.record_hit()
logger.debug(f"Cache hit: {node_id[:16]}...")
return output_file
self.stats.record_miss()
return None
def put(self, node_id: str, source_path: Path, node_type: str,
execution_time: float = 0.0, move: bool = False) -> Path:
"""
Store a file in the cache.
Args:
node_id: The code-addressed node ID (hash)
source_path: Path to the file to cache
node_type: Type of the node (for metadata)
execution_time: How long the node took to execute
move: If True, move the file instead of copying
Returns:
Path to the cached file
"""
node_dir = self._node_dir(node_id)
node_dir.mkdir(parents=True, exist_ok=True)
# Preserve extension
ext = source_path.suffix or ".out"
output_path = node_dir / f"output{ext}"
# Copy or move file (skip if already in place)
source_resolved = Path(source_path).resolve()
output_resolved = output_path.resolve()
if source_resolved != output_resolved:
if move:
shutil.move(source_path, output_path)
else:
shutil.copy2(source_path, output_path)
        # Compute the result's content hash (SHA3-256 hexdigest, stored in the CID field)
cid = _file_hash(output_path)
# Store per-node metadata (optional, for stats/debugging)
metadata = {
"node_id": node_id,
"output_path": str(output_path),
"created_at": time.time(),
"size_bytes": output_path.stat().st_size,
"node_type": node_type,
"cid": cid,
"execution_time": execution_time,
}
metadata_path = node_dir / "metadata.json"
with open(metadata_path, "w") as f:
json.dump(metadata, f, indent=2)
logger.debug(f"Cached: {node_id[:16]}... ({metadata['size_bytes']} bytes)")
return output_path
def has(self, node_id: str) -> bool:
"""Check if a node is cached (without affecting stats)."""
return self._find_output_file(self._node_dir(node_id)) is not None
def remove(self, node_id: str) -> bool:
"""Remove a node from the cache."""
node_dir = self._node_dir(node_id)
if node_dir.exists():
shutil.rmtree(node_dir)
return True
return False
def clear(self):
"""Clear all cached entries."""
for node_dir in self.cache_dir.iterdir():
if node_dir.is_dir() and not node_dir.name.startswith("_"):
shutil.rmtree(node_dir)
self.stats = CacheStats()
def get_stats(self) -> CacheStats:
"""Get cache statistics (scans filesystem)."""
stats = CacheStats()
for node_dir in self.cache_dir.iterdir():
if node_dir.is_dir() and not node_dir.name.startswith("_"):
output_file = self._find_output_file(node_dir)
if output_file:
stats.total_entries += 1
stats.total_size_bytes += output_file.stat().st_size
stats.hits = self.stats.hits
stats.misses = self.stats.misses
stats.hit_rate = self.stats.hit_rate
return stats
def list_entries(self) -> List[CacheEntry]:
"""List all cache entries (scans filesystem)."""
entries = []
for node_dir in self.cache_dir.iterdir():
if node_dir.is_dir() and not node_dir.name.startswith("_"):
entry = self._load_entry_from_disk(node_dir.name)
if entry:
entries.append(entry)
return entries
def _load_entry_from_disk(self, node_id: str) -> Optional[CacheEntry]:
"""Load entry metadata from disk."""
node_dir = self._node_dir(node_id)
metadata_path = node_dir / "metadata.json"
output_file = self._find_output_file(node_dir)
if not output_file:
return None
if metadata_path.exists():
try:
with open(metadata_path) as f:
data = json.load(f)
return CacheEntry.from_dict(data)
except (json.JSONDecodeError, KeyError):
pass
# Fallback: create entry from filesystem
return CacheEntry(
node_id=node_id,
output_path=output_file,
created_at=output_file.stat().st_mtime,
size_bytes=output_file.stat().st_size,
node_type="unknown",
cid=_file_hash(output_file),
)
def get_entry(self, node_id: str) -> Optional[CacheEntry]:
"""Get cache entry metadata (without affecting stats)."""
return self._load_entry_from_disk(node_id)
def find_by_cid(self, cid: str) -> Optional[CacheEntry]:
"""Find a cache entry by its content hash (scans filesystem)."""
for entry in self.list_entries():
if entry.cid == cid:
return entry
return None
    def prune(self, max_size_bytes: Optional[int] = None, max_age_seconds: Optional[float] = None) -> int:
"""
Prune cache based on size or age.
Args:
max_size_bytes: Remove oldest entries until under this size
max_age_seconds: Remove entries older than this
Returns:
Number of entries removed
"""
removed = 0
now = time.time()
entries = self.list_entries()
# Remove by age first
if max_age_seconds is not None:
for entry in entries:
if now - entry.created_at > max_age_seconds:
self.remove(entry.node_id)
removed += 1
        # Then by size (remove oldest first)
        if max_size_bytes is not None:
            stats = self.get_stats()
            if stats.total_size_bytes > max_size_bytes:
                # Age pruning above may already have removed some entries
                remaining = [e for e in entries if self.has(e.node_id)]
                sorted_entries = sorted(remaining, key=lambda e: e.created_at)
                total_size = stats.total_size_bytes
                for entry in sorted_entries:
                    if total_size <= max_size_bytes:
                        break
                    self.remove(entry.node_id)
                    total_size -= entry.size_bytes
                    removed += 1
return removed
def get_output_path(self, node_id: str, extension: str = ".mkv") -> Path:
"""Get the output path for a node (creates directory if needed)."""
node_dir = self._node_dir(node_id)
node_dir.mkdir(parents=True, exist_ok=True)
return node_dir / f"output{extension}"
# Effect storage methods
def _effects_dir(self) -> Path:
"""Get the effects subdirectory."""
effects_dir = self.cache_dir / "_effects"
effects_dir.mkdir(parents=True, exist_ok=True)
return effects_dir
def store_effect(self, source: str) -> str:
"""
Store an effect in the cache.
Args:
source: Effect source code
Returns:
Content hash (cache ID) of the effect
"""
import hashlib as _hashlib
# Compute content hash
cid = _hashlib.sha3_256(source.encode("utf-8")).hexdigest()
# Try to load full metadata if effects module available
try:
from .effects.loader import load_effect
loaded = load_effect(source)
meta_dict = loaded.meta.to_dict()
dependencies = loaded.dependencies
requires_python = loaded.requires_python
except ImportError:
# Fallback: store without parsed metadata
meta_dict = {}
dependencies = []
requires_python = ">=3.10"
effect_dir = self._effects_dir() / cid
effect_dir.mkdir(parents=True, exist_ok=True)
# Store source
source_path = effect_dir / "effect.py"
source_path.write_text(source, encoding="utf-8")
# Store metadata
metadata = {
"cid": cid,
"meta": meta_dict,
"dependencies": dependencies,
"requires_python": requires_python,
"stored_at": time.time(),
}
metadata_path = effect_dir / "metadata.json"
with open(metadata_path, "w") as f:
json.dump(metadata, f, indent=2)
        effect_name = meta_dict.get("name", cid[:16])
        logger.info(f"Stored effect '{effect_name}' with hash {cid[:16]}...")
return cid
def get_effect(self, cid: str) -> Optional[str]:
"""
Get effect source by content hash.
Args:
cid: SHA3-256 hash of effect source
Returns:
Effect source code if found, None otherwise
"""
effect_dir = self._effects_dir() / cid
source_path = effect_dir / "effect.py"
if not source_path.exists():
return None
return source_path.read_text(encoding="utf-8")
def get_effect_path(self, cid: str) -> Optional[Path]:
"""
Get path to effect source file.
Args:
cid: SHA3-256 hash of effect source
Returns:
Path to effect.py if found, None otherwise
"""
effect_dir = self._effects_dir() / cid
source_path = effect_dir / "effect.py"
if not source_path.exists():
return None
return source_path
def get_effect_metadata(self, cid: str) -> Optional[dict]:
"""
Get effect metadata by content hash.
Args:
cid: SHA3-256 hash of effect source
Returns:
Metadata dict if found, None otherwise
"""
effect_dir = self._effects_dir() / cid
metadata_path = effect_dir / "metadata.json"
if not metadata_path.exists():
return None
try:
with open(metadata_path) as f:
return json.load(f)
except (json.JSONDecodeError, KeyError):
return None
def has_effect(self, cid: str) -> bool:
"""Check if an effect is cached."""
effect_dir = self._effects_dir() / cid
return (effect_dir / "effect.py").exists()
def list_effects(self) -> List[dict]:
"""List all cached effects with their metadata."""
effects = []
effects_dir = self._effects_dir()
if not effects_dir.exists():
return effects
for effect_dir in effects_dir.iterdir():
if effect_dir.is_dir():
metadata = self.get_effect_metadata(effect_dir.name)
if metadata:
effects.append(metadata)
return effects
def remove_effect(self, cid: str) -> bool:
"""Remove an effect from the cache."""
effect_dir = self._effects_dir() / cid
if not effect_dir.exists():
return False
shutil.rmtree(effect_dir)
logger.info(f"Removed effect {cid[:16]}...")
return True
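
The chunked hashing in `_file_hash` generalises to any binary stream. A minimal stdlib-only sketch (no artdag imports; `stream_hash` is a hypothetical stand-in) showing that chunked SHA3-256 matches a one-shot digest:

```python
import hashlib
import io

def stream_hash(stream, algorithm: str = "sha3_256", chunk_size: int = 65536) -> str:
    # Same pattern as Cache's _file_hash: feed fixed-size chunks to the hasher
    hasher = hashlib.new(algorithm)
    for block in iter(lambda: stream.read(chunk_size), b""):
        hasher.update(block)
    return hasher.hexdigest()

data = b"\x00" * 200_000  # spans multiple 64 KiB chunks
assert stream_hash(io.BytesIO(data)) == hashlib.sha3_256(data).hexdigest()
```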

artdag/core/artdag/cli.py

@@ -0,0 +1,724 @@
#!/usr/bin/env python3
"""
Art DAG CLI
Command-line interface for the 3-phase execution model:
artdag analyze - Extract features from inputs
artdag plan - Generate execution plan
artdag execute - Run the plan
artdag run-recipe - Full pipeline
Usage:
artdag analyze <recipe> -i <name>:<hash>[@<path>] [--features <list>]
artdag plan <recipe> -i <name>:<hash> [--analysis <file>]
artdag execute <plan.json> [--dry-run]
artdag run-recipe <recipe> -i <name>:<hash>[@<path>]
"""
import argparse
import json
import sys
from pathlib import Path
from typing import Dict, List, Optional, Tuple
def parse_input(input_str: str) -> Tuple[str, str, Optional[str]]:
"""
Parse input specification: name:hash[@path]
Returns (name, hash, path or None)
"""
if "@" in input_str:
name_hash, path = input_str.rsplit("@", 1)
else:
name_hash = input_str
path = None
if ":" not in name_hash:
raise ValueError(f"Invalid input format: {input_str}. Expected name:hash[@path]")
name, hash_value = name_hash.split(":", 1)
return name, hash_value, path
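
The `name:hash[@path]` grammar can be exercised standalone. A self-contained mirror of `parse_input` (the input values are hypothetical), showing that the last `@` separates the optional path:

```python
def parse_input(input_str):
    # name:hash[@path] -- rsplit takes the LAST '@' as the path separator
    if "@" in input_str:
        name_hash, path = input_str.rsplit("@", 1)
    else:
        name_hash, path = input_str, None
    if ":" not in name_hash:
        raise ValueError(f"Invalid input format: {input_str}. Expected name:hash[@path]")
    name, hash_value = name_hash.split(":", 1)
    return name, hash_value, path

assert parse_input("song:deadbeef") == ("song", "deadbeef", None)
assert parse_input("clip:abc123@/media/clip.mkv") == ("clip", "abc123", "/media/clip.mkv")
```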
def parse_inputs(input_list: List[str]) -> Tuple[Dict[str, str], Dict[str, str]]:
"""
Parse list of input specifications.
Returns (input_hashes, input_paths)
"""
input_hashes = {}
input_paths = {}
for input_str in input_list:
name, hash_value, path = parse_input(input_str)
input_hashes[name] = hash_value
if path:
input_paths[name] = path
return input_hashes, input_paths
def cmd_analyze(args):
"""Run analysis phase."""
from .analysis import Analyzer
# Parse inputs
input_hashes, input_paths = parse_inputs(args.input)
# Parse features
features = args.features.split(",") if args.features else ["all"]
# Create analyzer
cache_dir = Path(args.cache_dir) if args.cache_dir else Path("./analysis_cache")
analyzer = Analyzer(cache_dir=cache_dir)
# Analyze each input
results = {}
for name, hash_value in input_hashes.items():
path = input_paths.get(name)
if path:
path = Path(path)
print(f"Analyzing {name} ({hash_value[:16]}...)...")
result = analyzer.analyze(
input_hash=hash_value,
features=features,
input_path=path,
)
results[hash_value] = result.to_dict()
# Print summary
if result.audio and result.audio.beats:
print(f" Tempo: {result.audio.beats.tempo:.1f} BPM")
print(f" Beats: {len(result.audio.beats.beat_times)}")
if result.video:
print(f" Duration: {result.video.duration:.1f}s")
print(f" Dimensions: {result.video.width}x{result.video.height}")
# Write output
output_path = Path(args.output) if args.output else Path("analysis.json")
with open(output_path, "w") as f:
json.dump(results, f, indent=2)
print(f"\nAnalysis saved to: {output_path}")
def cmd_plan(args):
"""Run planning phase."""
from .analysis import AnalysisResult
from .planning import RecipePlanner, Recipe
# Load recipe
recipe = Recipe.from_file(Path(args.recipe))
print(f"Recipe: {recipe.name} v{recipe.version}")
# Parse inputs
input_hashes, _ = parse_inputs(args.input)
# Load analysis if provided
analysis = {}
if args.analysis:
with open(args.analysis, "r") as f:
analysis_data = json.load(f)
for hash_value, data in analysis_data.items():
analysis[hash_value] = AnalysisResult.from_dict(data)
# Create planner
planner = RecipePlanner(use_tree_reduction=not args.no_tree_reduction)
# Generate plan
print("Generating execution plan...")
plan = planner.plan(
recipe=recipe,
input_hashes=input_hashes,
analysis=analysis,
)
# Print summary
print(f"\nPlan ID: {plan.plan_id[:16]}...")
print(f"Steps: {len(plan.steps)}")
steps_by_level = plan.get_steps_by_level()
max_level = max(steps_by_level.keys()) if steps_by_level else 0
print(f"Levels: {max_level + 1}")
for level in sorted(steps_by_level.keys()):
steps = steps_by_level[level]
print(f" Level {level}: {len(steps)} steps (parallel)")
# Write output
output_path = Path(args.output) if args.output else Path("plan.json")
with open(output_path, "w") as f:
f.write(plan.to_json())
print(f"\nPlan saved to: {output_path}")
def cmd_execute(args):
"""Run execution phase."""
from .planning import ExecutionPlan
from .cache import Cache
from .executor import get_executor
from .dag import NodeType
from . import nodes # Register built-in executors
# Load plan
with open(args.plan, "r") as f:
plan = ExecutionPlan.from_json(f.read())
print(f"Executing plan: {plan.plan_id[:16]}...")
print(f"Steps: {len(plan.steps)}")
if args.dry_run:
print("\n=== DRY RUN ===")
# Check cache status
cache = Cache(Path(args.cache_dir) if args.cache_dir else Path("./cache"))
steps_by_level = plan.get_steps_by_level()
cached_count = 0
pending_count = 0
for level in sorted(steps_by_level.keys()):
steps = steps_by_level[level]
print(f"\nLevel {level}:")
for step in steps:
if cache.has(step.cache_id):
print(f" [CACHED] {step.step_id}: {step.node_type}")
cached_count += 1
else:
print(f" [PENDING] {step.step_id}: {step.node_type}")
pending_count += 1
print(f"\nSummary: {cached_count} cached, {pending_count} pending")
return
# Execute locally (for testing - production uses Celery)
cache = Cache(Path(args.cache_dir) if args.cache_dir else Path("./cache"))
cache_paths = {}
for name, hash_value in plan.input_hashes.items():
if cache.has(hash_value):
            cached_path = cache.get(hash_value)
            cache_paths[hash_value] = str(cached_path)
steps_by_level = plan.get_steps_by_level()
executed = 0
cached = 0
for level in sorted(steps_by_level.keys()):
steps = steps_by_level[level]
print(f"\nLevel {level}: {len(steps)} steps")
for step in steps:
if cache.has(step.cache_id):
cached_path = cache.get(step.cache_id)
cache_paths[step.cache_id] = str(cached_path)
cache_paths[step.step_id] = str(cached_path)
print(f" [CACHED] {step.step_id}")
cached += 1
continue
print(f" [RUNNING] {step.step_id}: {step.node_type}...")
# Get executor
try:
node_type = NodeType[step.node_type]
except KeyError:
node_type = step.node_type
executor = get_executor(node_type)
if executor is None:
print(f" ERROR: No executor for {step.node_type}")
continue
# Resolve inputs
input_paths = []
for input_id in step.input_steps:
if input_id in cache_paths:
input_paths.append(Path(cache_paths[input_id]))
else:
input_step = plan.get_step(input_id)
if input_step and input_step.cache_id in cache_paths:
input_paths.append(Path(cache_paths[input_step.cache_id]))
if len(input_paths) != len(step.input_steps):
print(f" ERROR: Missing inputs")
continue
# Execute
output_path = cache.get_output_path(step.cache_id)
try:
result_path = executor.execute(step.config, input_paths, output_path)
cache.put(step.cache_id, result_path, node_type=step.node_type)
cache_paths[step.cache_id] = str(result_path)
cache_paths[step.step_id] = str(result_path)
print(f" [DONE] -> {result_path}")
executed += 1
except Exception as e:
print(f" [FAILED] {e}")
# Final output
output_step = plan.get_step(plan.output_step)
output_path = cache_paths.get(output_step.cache_id) if output_step else None
print(f"\n=== Complete ===")
print(f"Cached: {cached}")
print(f"Executed: {executed}")
if output_path:
print(f"Output: {output_path}")
def cmd_run_recipe(args):
"""Run complete pipeline: analyze → plan → execute."""
from .analysis import Analyzer, AnalysisResult
from .planning import RecipePlanner, Recipe
from .cache import Cache
from .executor import get_executor
from .dag import NodeType
from . import nodes # Register built-in executors
# Load recipe
recipe = Recipe.from_file(Path(args.recipe))
print(f"Recipe: {recipe.name} v{recipe.version}")
# Parse inputs
input_hashes, input_paths = parse_inputs(args.input)
# Parse features
features = args.features.split(",") if args.features else ["beats", "energy"]
cache_dir = Path(args.cache_dir) if args.cache_dir else Path("./cache")
# Phase 1: Analyze
print("\n=== Phase 1: Analysis ===")
analyzer = Analyzer(cache_dir=cache_dir / "analysis")
analysis = {}
for name, hash_value in input_hashes.items():
path = input_paths.get(name)
if path:
path = Path(path)
print(f"Analyzing {name}...")
result = analyzer.analyze(
input_hash=hash_value,
features=features,
input_path=path,
)
analysis[hash_value] = result
if result.audio and result.audio.beats:
print(f" Tempo: {result.audio.beats.tempo:.1f} BPM, {len(result.audio.beats.beat_times)} beats")
# Phase 2: Plan
print("\n=== Phase 2: Planning ===")
# Check for cached plan
plans_dir = cache_dir / "plans"
plans_dir.mkdir(parents=True, exist_ok=True)
# Generate plan to get plan_id (deterministic hash)
planner = RecipePlanner(use_tree_reduction=True)
plan = planner.plan(
recipe=recipe,
input_hashes=input_hashes,
analysis=analysis,
)
plan_cache_path = plans_dir / f"{plan.plan_id}.json"
if plan_cache_path.exists():
print(f"Plan cached: {plan.plan_id[:16]}...")
from .planning import ExecutionPlan
with open(plan_cache_path, "r") as f:
plan = ExecutionPlan.from_json(f.read())
else:
# Save plan to cache
with open(plan_cache_path, "w") as f:
f.write(plan.to_json())
print(f"Plan saved: {plan.plan_id[:16]}...")
print(f"Plan: {len(plan.steps)} steps")
steps_by_level = plan.get_steps_by_level()
print(f"Levels: {len(steps_by_level)}")
# Phase 3: Execute
print("\n=== Phase 3: Execution ===")
cache = Cache(cache_dir)
# Build initial cache paths
cache_paths = {}
for name, hash_value in input_hashes.items():
path = input_paths.get(name)
if path:
cache_paths[hash_value] = path
cache_paths[name] = path
executed = 0
cached = 0
for level in sorted(steps_by_level.keys()):
steps = steps_by_level[level]
print(f"\nLevel {level}: {len(steps)} steps")
for step in steps:
if cache.has(step.cache_id):
cached_path = cache.get(step.cache_id)
cache_paths[step.cache_id] = str(cached_path)
cache_paths[step.step_id] = str(cached_path)
print(f" [CACHED] {step.step_id}")
cached += 1
continue
# Handle SOURCE specially
if step.node_type == "SOURCE":
cid = step.config.get("cid")
if cid in cache_paths:
cache_paths[step.cache_id] = cache_paths[cid]
cache_paths[step.step_id] = cache_paths[cid]
print(f" [SOURCE] {step.step_id}")
continue
print(f" [RUNNING] {step.step_id}: {step.node_type}...")
try:
node_type = NodeType[step.node_type]
except KeyError:
node_type = step.node_type
executor = get_executor(node_type)
if executor is None:
print(f" SKIP: No executor for {step.node_type}")
continue
# Resolve inputs
input_paths_list = []
for input_id in step.input_steps:
if input_id in cache_paths:
input_paths_list.append(Path(cache_paths[input_id]))
else:
input_step = plan.get_step(input_id)
if input_step and input_step.cache_id in cache_paths:
input_paths_list.append(Path(cache_paths[input_step.cache_id]))
if len(input_paths_list) != len(step.input_steps):
print(f" ERROR: Missing inputs for {step.step_id}")
continue
output_path = cache.get_output_path(step.cache_id)
try:
result_path = executor.execute(step.config, input_paths_list, output_path)
cache.put(step.cache_id, result_path, node_type=step.node_type)
cache_paths[step.cache_id] = str(result_path)
cache_paths[step.step_id] = str(result_path)
print(f" [DONE]")
executed += 1
except Exception as e:
print(f" [FAILED] {e}")
# Final output
output_step = plan.get_step(plan.output_step)
output_path = cache_paths.get(output_step.cache_id) if output_step else None
print(f"\n=== Complete ===")
print(f"Cached: {cached}")
print(f"Executed: {executed}")
if output_path:
print(f"Output: {output_path}")
def cmd_run_recipe_ipfs(args):
"""Run complete pipeline with IPFS-primary mode.
Everything stored on IPFS:
- Inputs (media files)
- Analysis results (JSON)
- Execution plans (JSON)
- Step outputs (media files)
"""
import hashlib
import shutil
import tempfile
from .analysis import Analyzer, AnalysisResult
from .planning import RecipePlanner, Recipe, ExecutionPlan
from .executor import get_executor
from .dag import NodeType
from . import nodes # Register built-in executors
# Check for ipfs_client
try:
from art_celery import ipfs_client
except ImportError:
# Try relative import for when running from art-celery
try:
import ipfs_client
except ImportError:
print("Error: ipfs_client not available. Install art-celery or run from art-celery directory.")
sys.exit(1)
# Check IPFS availability
if not ipfs_client.is_available():
print("Error: IPFS daemon not available. Start IPFS with 'ipfs daemon'")
sys.exit(1)
print("=== IPFS-Primary Mode ===")
print(f"IPFS Node: {ipfs_client.get_node_id()[:16]}...")
# Load recipe
recipe_path = Path(args.recipe)
recipe = Recipe.from_file(recipe_path)
print(f"\nRecipe: {recipe.name} v{recipe.version}")
# Parse inputs
input_hashes, input_paths = parse_inputs(args.input)
# Parse features
features = args.features.split(",") if args.features else ["beats", "energy"]
# Phase 0: Register on IPFS
print("\n=== Phase 0: Register on IPFS ===")
# Register recipe
recipe_bytes = recipe_path.read_bytes()
recipe_cid = ipfs_client.add_bytes(recipe_bytes)
print(f"Recipe CID: {recipe_cid}")
# Register inputs
input_cids = {}
for name, hash_value in input_hashes.items():
path = input_paths.get(name)
if path:
cid = ipfs_client.add_file(Path(path))
if cid:
input_cids[name] = cid
print(f"Input '{name}': {cid}")
else:
print(f"Error: Failed to add input '{name}' to IPFS")
sys.exit(1)
# Phase 1: Analyze
print("\n=== Phase 1: Analysis ===")
# Create temp dir for analysis
work_dir = Path(tempfile.mkdtemp(prefix="artdag_ipfs_"))
analysis_cids = {}
analysis = {}
try:
for name, hash_value in input_hashes.items():
input_cid = input_cids.get(name)
if not input_cid:
continue
print(f"Analyzing {name}...")
# Fetch from IPFS to temp
temp_input = work_dir / f"input_{name}.mkv"
if not ipfs_client.get_file(input_cid, temp_input):
print(f" Error: Failed to fetch from IPFS")
continue
# Run analysis
analyzer = Analyzer(cache_dir=None)
result = analyzer.analyze(
input_hash=hash_value,
features=features,
input_path=temp_input,
)
if result.audio and result.audio.beats:
print(f" Tempo: {result.audio.beats.tempo:.1f} BPM, {len(result.audio.beats.beat_times)} beats")
# Store analysis on IPFS
analysis_cid = ipfs_client.add_json(result.to_dict())
if analysis_cid:
analysis_cids[hash_value] = analysis_cid
analysis[hash_value] = result
print(f" Analysis CID: {analysis_cid}")
# Phase 2: Plan
print("\n=== Phase 2: Planning ===")
planner = RecipePlanner(use_tree_reduction=True)
plan = planner.plan(
recipe=recipe,
input_hashes=input_hashes,
analysis=analysis if analysis else None,
)
# Store plan on IPFS
import json
plan_dict = json.loads(plan.to_json())
plan_cid = ipfs_client.add_json(plan_dict)
print(f"Plan ID: {plan.plan_id[:16]}...")
print(f"Plan CID: {plan_cid}")
print(f"Steps: {len(plan.steps)}")
steps_by_level = plan.get_steps_by_level()
print(f"Levels: {len(steps_by_level)}")
# Phase 3: Execute
print("\n=== Phase 3: Execution ===")
# CID results
cid_results = dict(input_cids)
step_cids = {}
executed = 0
cached = 0
for level in sorted(steps_by_level.keys()):
steps = steps_by_level[level]
print(f"\nLevel {level}: {len(steps)} steps")
for step in steps:
# Handle SOURCE
if step.node_type == "SOURCE":
source_name = step.config.get("name") or step.step_id
cid = cid_results.get(source_name)
if cid:
step_cids[step.step_id] = cid
print(f" [SOURCE] {step.step_id}")
continue
print(f" [RUNNING] {step.step_id}: {step.node_type}...")
try:
node_type = NodeType[step.node_type]
except KeyError:
node_type = step.node_type
executor = get_executor(node_type)
if executor is None:
print(f" SKIP: No executor for {step.node_type}")
continue
# Fetch inputs from IPFS
input_paths_list = []
for i, input_step_id in enumerate(step.input_steps):
input_cid = step_cids.get(input_step_id) or cid_results.get(input_step_id)
if not input_cid:
print(f" ERROR: Missing input CID for {input_step_id}")
continue
temp_path = work_dir / f"step_{step.step_id}_input_{i}.mkv"
if not ipfs_client.get_file(input_cid, temp_path):
print(f" ERROR: Failed to fetch {input_cid}")
continue
input_paths_list.append(temp_path)
if len(input_paths_list) != len(step.input_steps):
print(f" ERROR: Missing inputs")
continue
# Execute
output_path = work_dir / f"step_{step.step_id}_output.mkv"
try:
result_path = executor.execute(step.config, input_paths_list, output_path)
# Add to IPFS
output_cid = ipfs_client.add_file(result_path)
if output_cid:
step_cids[step.step_id] = output_cid
print(f" [DONE] CID: {output_cid}")
executed += 1
else:
print(f" [FAILED] Could not add to IPFS")
except Exception as e:
print(f" [FAILED] {e}")
# Final output
output_step = plan.get_step(plan.output_step)
output_cid = step_cids.get(output_step.step_id) if output_step else None
print(f"\n=== Complete ===")
print(f"Executed: {executed}")
if output_cid:
print(f"Output CID: {output_cid}")
print(f"Fetch with: ipfs get {output_cid}")
# Summary of all CIDs
print(f"\n=== All CIDs ===")
print(f"Recipe: {recipe_cid}")
print(f"Plan: {plan_cid}")
for name, cid in input_cids.items():
print(f"Input '{name}': {cid}")
for hash_val, cid in analysis_cids.items():
print(f"Analysis '{hash_val[:16]}...': {cid}")
if output_cid:
print(f"Output: {output_cid}")
finally:
# Cleanup temp
shutil.rmtree(work_dir, ignore_errors=True)
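The level-by-level loop above memoizes every step's output CID in `step_cids` before any dependent step runs. A minimal standalone sketch of that pattern (the `fake_cid` helper and the step tuples are invented for illustration; the real pipeline stores results in IPFS):

```python
import hashlib

def fake_cid(data: bytes) -> str:
    # Stand-in for ipfs_client.add_file(): a content-addressed ID.
    return "Qm" + hashlib.sha256(data).hexdigest()[:16]

# Steps grouped by level: each step consumes CIDs produced at earlier levels.
steps_by_level = {
    0: [("load_a", [], b"clip-a"), ("load_b", [], b"clip-b")],
    1: [("concat", ["load_a", "load_b"], None)],
}
step_cids = {}
for level in sorted(steps_by_level):
    for step_id, inputs, payload in steps_by_level[level]:
        if payload is None:
            # Derive this step's output from its input CIDs, as the executor loop does.
            payload = "+".join(step_cids[i] for i in inputs).encode()
        step_cids[step_id] = fake_cid(payload)

print(sorted(step_cids))  # → ['concat', 'load_a', 'load_b']
```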
def main():
parser = argparse.ArgumentParser(
prog="artdag",
description="Art DAG - Declarative media composition",
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# analyze command
analyze_parser = subparsers.add_parser("analyze", help="Extract features from inputs")
analyze_parser.add_argument("recipe", help="Recipe YAML file")
analyze_parser.add_argument("-i", "--input", action="append", required=True,
help="Input: name:hash[@path]")
analyze_parser.add_argument("--features", help="Features to extract (comma-separated)")
analyze_parser.add_argument("-o", "--output", help="Output file (default: analysis.json)")
analyze_parser.add_argument("--cache-dir", help="Analysis cache directory")
# plan command
plan_parser = subparsers.add_parser("plan", help="Generate execution plan")
plan_parser.add_argument("recipe", help="Recipe YAML file")
plan_parser.add_argument("-i", "--input", action="append", required=True,
help="Input: name:hash")
plan_parser.add_argument("--analysis", help="Analysis JSON file")
plan_parser.add_argument("-o", "--output", help="Output file (default: plan.json)")
plan_parser.add_argument("--no-tree-reduction", action="store_true",
help="Disable tree reduction optimization")
# execute command
execute_parser = subparsers.add_parser("execute", help="Execute a plan")
execute_parser.add_argument("plan", help="Plan JSON file")
execute_parser.add_argument("--dry-run", action="store_true",
help="Show what would execute")
execute_parser.add_argument("--cache-dir", help="Cache directory")
# run-recipe command
run_parser = subparsers.add_parser("run-recipe", help="Full pipeline: analyze → plan → execute")
run_parser.add_argument("recipe", help="Recipe YAML file")
run_parser.add_argument("-i", "--input", action="append", required=True,
help="Input: name:hash[@path]")
run_parser.add_argument("--features", help="Features to extract (comma-separated)")
run_parser.add_argument("--cache-dir", help="Cache directory")
run_parser.add_argument("--ipfs-primary", action="store_true",
help="Use IPFS-primary mode (everything on IPFS, no local cache)")
args = parser.parse_args()
if args.command == "analyze":
cmd_analyze(args)
elif args.command == "plan":
cmd_plan(args)
elif args.command == "execute":
cmd_execute(args)
elif args.command == "run-recipe":
if getattr(args, 'ipfs_primary', False):
cmd_run_recipe_ipfs(args)
else:
cmd_run_recipe(args)
else:
parser.print_help()
sys.exit(1)
if __name__ == "__main__":
main()
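The dispatch in `main()` is the standard argparse subcommand pattern: one subparser per command, then branch on `args.command`. A self-contained sketch, trimmed to a single command for brevity (the prog name is illustrative):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # One subparser per command; dispatch on args.command after parsing.
    parser = argparse.ArgumentParser(prog="artdag-demo")
    sub = parser.add_subparsers(dest="command")
    analyze = sub.add_parser("analyze", help="Extract features from inputs")
    analyze.add_argument("recipe")
    # action="append" lets -i repeat, collecting inputs into a list
    analyze.add_argument("-i", "--input", action="append", required=True)
    return parser

args = build_parser().parse_args(["analyze", "recipe.yaml", "-i", "clip:abc123"])
print(args.command, args.recipe, args.input)  # → analyze recipe.yaml ['clip:abc123']
```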


@@ -0,0 +1,201 @@
# primitive/client.py
"""
Client SDK for the primitive execution server.
Provides a simple API for submitting DAGs and retrieving results.
Usage:
client = PrimitiveClient("http://localhost:8080")
# Build a DAG
builder = DAGBuilder()
source = builder.source("/path/to/video.mp4")
segment = builder.segment(source, duration=5.0)
builder.set_output(segment)
dag = builder.build()
# Execute and wait for result
result = client.execute(dag)
print(f"Output: {result.output_path}")
"""
import json
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Optional
from urllib.request import urlopen, Request
from urllib.error import HTTPError, URLError
from .dag import DAG, DAGBuilder
@dataclass
class ExecutionResult:
"""Result from server execution."""
success: bool
output_path: Optional[Path] = None
error: Optional[str] = None
execution_time: float = 0.0
nodes_executed: int = 0
nodes_cached: int = 0
@dataclass
class CacheStats:
"""Cache statistics from server."""
total_entries: int = 0
total_size_bytes: int = 0
hits: int = 0
misses: int = 0
hit_rate: float = 0.0
class PrimitiveClient:
"""
Client for the primitive execution server.
Args:
base_url: Server URL (e.g., "http://localhost:8080")
timeout: Request timeout in seconds
"""
def __init__(self, base_url: str = "http://localhost:8080", timeout: float = 300):
self.base_url = base_url.rstrip("/")
self.timeout = timeout
def _request(self, method: str, path: str, data: dict = None) -> dict:
"""Make HTTP request to server."""
url = f"{self.base_url}{path}"
if data is not None:
body = json.dumps(data).encode()
headers = {"Content-Type": "application/json"}
else:
body = None
headers = {}
req = Request(url, data=body, headers=headers, method=method)
try:
with urlopen(req, timeout=self.timeout) as response:
return json.loads(response.read().decode())
except HTTPError as e:
error_body = e.read().decode()
try:
error_data = json.loads(error_body)
raise RuntimeError(error_data.get("error", str(e)))
except json.JSONDecodeError:
raise RuntimeError(f"HTTP {e.code}: {error_body}")
except URLError as e:
raise ConnectionError(f"Failed to connect to server: {e}")
def health(self) -> bool:
"""Check if server is healthy."""
try:
result = self._request("GET", "/health")
return result.get("status") == "ok"
except Exception:
return False
def submit(self, dag: DAG) -> str:
"""
Submit a DAG for execution.
Args:
dag: The DAG to execute
Returns:
Job ID for tracking
"""
result = self._request("POST", "/execute", dag.to_dict())
return result["job_id"]
def status(self, job_id: str) -> str:
"""
Get job status.
Args:
job_id: Job ID from submit()
Returns:
Status: "pending", "running", "completed", or "failed"
"""
result = self._request("GET", f"/status/{job_id}")
return result["status"]
def result(self, job_id: str) -> Optional[ExecutionResult]:
"""
Get job result (non-blocking).
Args:
job_id: Job ID from submit()
Returns:
ExecutionResult if complete, None if still running
"""
data = self._request("GET", f"/result/{job_id}")
if not data.get("ready", False):
return None
return ExecutionResult(
success=data.get("success", False),
output_path=Path(data["output_path"]) if data.get("output_path") else None,
error=data.get("error"),
execution_time=data.get("execution_time", 0),
nodes_executed=data.get("nodes_executed", 0),
nodes_cached=data.get("nodes_cached", 0),
)
def wait(self, job_id: str, poll_interval: float = 0.5) -> ExecutionResult:
"""
Wait for job completion and return result.
Args:
job_id: Job ID from submit()
poll_interval: Seconds between status checks
Returns:
ExecutionResult
"""
while True:
result = self.result(job_id)
if result is not None:
return result
time.sleep(poll_interval)
def execute(self, dag: DAG, poll_interval: float = 0.5) -> ExecutionResult:
"""
Submit DAG and wait for result.
Convenience method combining submit() and wait().
Args:
dag: The DAG to execute
poll_interval: Seconds between status checks
Returns:
ExecutionResult
"""
job_id = self.submit(dag)
return self.wait(job_id, poll_interval)
def cache_stats(self) -> CacheStats:
"""Get cache statistics."""
data = self._request("GET", "/cache/stats")
return CacheStats(
total_entries=data.get("total_entries", 0),
total_size_bytes=data.get("total_size_bytes", 0),
hits=data.get("hits", 0),
misses=data.get("misses", 0),
hit_rate=data.get("hit_rate", 0.0),
)
def clear_cache(self) -> None:
"""Clear the server cache."""
self._request("DELETE", "/cache")
# Re-export DAGBuilder for convenience
__all__ = ["PrimitiveClient", "ExecutionResult", "CacheStats", "DAGBuilder"]
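The `wait()`/`result()` contract above — `None` while the job is running, a value once it is ready — can be exercised without a server. `StubJob` here is invented for illustration:

```python
import time
from typing import Optional

class StubJob:
    """Pretends to be a server-side job that completes after N polls."""
    def __init__(self, polls_until_done: int, payload: str):
        self.remaining = polls_until_done
        self.payload = payload

    def result(self) -> Optional[str]:
        # Mirrors PrimitiveClient.result(): None while running, value when done.
        if self.remaining > 0:
            self.remaining -= 1
            return None
        return self.payload

def wait(job: StubJob, poll_interval: float = 0.01) -> str:
    # Mirrors PrimitiveClient.wait(): poll until a non-None result appears.
    while True:
        res = job.result()
        if res is not None:
            return res
        time.sleep(poll_interval)

print(wait(StubJob(3, "output.mkv")))  # → output.mkv
```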

artdag/core/artdag/dag.py

@@ -0,0 +1,344 @@
# primitive/dag.py
"""
Core DAG data structures.
Nodes are content-addressed: node_id = hash(type + config + input_ids)
This enables automatic caching and deduplication.
"""
import hashlib
import json
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any, Dict, List, Optional
class NodeType(Enum):
"""Built-in node types."""
# Source operations
SOURCE = auto() # Load file from path
# Transform operations
SEGMENT = auto() # Extract time range
RESIZE = auto() # Scale/crop/pad
TRANSFORM = auto() # Visual effects (color, blur, etc.)
# Compose operations
SEQUENCE = auto() # Concatenate in time
LAYER = auto() # Stack spatially (overlay)
MUX = auto() # Combine video + audio streams
BLEND = auto() # Blend two inputs
AUDIO_MIX = auto() # Mix multiple audio streams
SWITCH = auto() # Time-based input switching
# Analysis operations
ANALYZE = auto() # Extract features (audio, motion, etc.)
# Generation operations
GENERATE = auto() # Create content (text, graphics, etc.)
def _stable_hash(data: Any, algorithm: str = "sha3_256") -> str:
"""
Create a stable hash from arbitrary data.
Uses SHA3-256 by default and keeps the full digest (no truncation)
to preserve collision resistance.
Args:
data: Data to hash (will be JSON serialized)
algorithm: Hash algorithm (default: sha3_256)
Returns:
Full hex digest
"""
# Convert to JSON with sorted keys for stability
json_str = json.dumps(data, sort_keys=True, separators=(",", ":"))
hasher = hashlib.new(algorithm)
hasher.update(json_str.encode())
return hasher.hexdigest()
@dataclass
class Node:
"""
A node in the execution DAG.
Attributes:
node_type: The operation type (NodeType enum or string for custom types)
config: Operation-specific configuration
inputs: List of input node IDs (resolved during execution)
node_id: Content-addressed ID (computed from type + config + inputs)
name: Optional human-readable name for debugging
"""
node_type: NodeType | str
config: Dict[str, Any] = field(default_factory=dict)
inputs: List[str] = field(default_factory=list)
node_id: Optional[str] = None
name: Optional[str] = None
def __post_init__(self):
"""Compute node_id if not provided."""
if self.node_id is None:
self.node_id = self._compute_id()
def _compute_id(self) -> str:
"""Compute content-addressed ID from node contents."""
type_str = self.node_type.name if isinstance(self.node_type, NodeType) else str(self.node_type)
content = {
"type": type_str,
"config": self.config,
"inputs": sorted(self.inputs), # Sort for stability
}
return _stable_hash(content)
def to_dict(self) -> Dict[str, Any]:
"""Serialize node to dictionary."""
type_str = self.node_type.name if isinstance(self.node_type, NodeType) else str(self.node_type)
return {
"node_id": self.node_id,
"node_type": type_str,
"config": self.config,
"inputs": self.inputs,
"name": self.name,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Node":
"""Deserialize node from dictionary."""
type_str = data["node_type"]
try:
node_type = NodeType[type_str]
except KeyError:
node_type = type_str # Custom type as string
return cls(
node_type=node_type,
config=data.get("config", {}),
inputs=data.get("inputs", []),
node_id=data.get("node_id"),
name=data.get("name"),
)
@dataclass
class DAG:
"""
A directed acyclic graph of nodes.
Attributes:
nodes: Dictionary mapping node_id -> Node
output_id: The ID of the final output node
metadata: Optional metadata about the DAG (source, version, etc.)
"""
nodes: Dict[str, Node] = field(default_factory=dict)
output_id: Optional[str] = None
metadata: Dict[str, Any] = field(default_factory=dict)
def add_node(self, node: Node) -> str:
"""Add a node to the DAG, returning its ID."""
if node.node_id in self.nodes:
# Node already exists (deduplication via content addressing)
return node.node_id
self.nodes[node.node_id] = node
return node.node_id
def set_output(self, node_id: str) -> None:
"""Set the output node."""
if node_id not in self.nodes:
raise ValueError(f"Node {node_id} not in DAG")
self.output_id = node_id
def get_node(self, node_id: str) -> Node:
"""Get a node by ID."""
if node_id not in self.nodes:
raise KeyError(f"Node {node_id} not found")
return self.nodes[node_id]
def topological_order(self) -> List[str]:
"""Return nodes in topological order (dependencies first). Raises ValueError on a cycle."""
visited = set()
in_progress = set()
order = []
def visit(node_id: str):
if node_id in visited:
return
if node_id in in_progress:
raise ValueError(f"Cycle detected at node {node_id}")
in_progress.add(node_id)
node = self.nodes[node_id]
for input_id in node.inputs:
visit(input_id)
in_progress.discard(node_id)
visited.add(node_id)
order.append(node_id)
# Visit all nodes (not just output, in case of disconnected components)
for node_id in self.nodes:
visit(node_id)
return order
def validate(self) -> List[str]:
"""Validate DAG structure. Returns list of errors (empty if valid)."""
errors = []
if self.output_id is None:
errors.append("No output node set")
elif self.output_id not in self.nodes:
errors.append(f"Output node {self.output_id} not in DAG")
# Check all input references are valid
for node_id, node in self.nodes.items():
for input_id in node.inputs:
if input_id not in self.nodes:
errors.append(f"Node {node_id} references missing input {input_id}")
# Check for cycles (skip if we already found missing inputs)
if not any("missing" in e for e in errors):
try:
self.topological_order()
except (ValueError, RecursionError, KeyError):
errors.append("DAG contains cycles or invalid references")
return errors
def to_dict(self) -> Dict[str, Any]:
"""Serialize DAG to dictionary."""
return {
"nodes": {nid: node.to_dict() for nid, node in self.nodes.items()},
"output_id": self.output_id,
"metadata": self.metadata,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "DAG":
"""Deserialize DAG from dictionary."""
dag = cls(metadata=data.get("metadata", {}))
for node_data in data.get("nodes", {}).values():
dag.add_node(Node.from_dict(node_data))
dag.output_id = data.get("output_id")
return dag
def to_json(self) -> str:
"""Serialize DAG to JSON string."""
return json.dumps(self.to_dict(), indent=2)
@classmethod
def from_json(cls, json_str: str) -> "DAG":
"""Deserialize DAG from JSON string."""
return cls.from_dict(json.loads(json_str))
class DAGBuilder:
"""
Fluent builder for constructing DAGs.
Example:
builder = DAGBuilder()
source = builder.source("/path/to/video.mp4")
segment = builder.segment(source, duration=5.0)
builder.set_output(segment)
dag = builder.build()
"""
def __init__(self):
self.dag = DAG()
def _add(self, node_type: NodeType | str, config: Dict[str, Any],
inputs: List[str] = None, name: str = None) -> str:
"""Add a node and return its ID."""
node = Node(
node_type=node_type,
config=config,
inputs=inputs or [],
name=name,
)
return self.dag.add_node(node)
# Source operations
def source(self, path: str, name: str = None) -> str:
"""Add a SOURCE node."""
return self._add(NodeType.SOURCE, {"path": path}, name=name)
# Transform operations
def segment(self, input_id: str, duration: float = None,
offset: float = 0, precise: bool = True, name: str = None) -> str:
"""Add a SEGMENT node."""
config = {"offset": offset, "precise": precise}
if duration is not None:
config["duration"] = duration
return self._add(NodeType.SEGMENT, config, [input_id], name=name)
def resize(self, input_id: str, width: int, height: int,
mode: str = "fit", name: str = None) -> str:
"""Add a RESIZE node."""
return self._add(
NodeType.RESIZE,
{"width": width, "height": height, "mode": mode},
[input_id],
name=name
)
def transform(self, input_id: str, effects: Dict[str, Any],
name: str = None) -> str:
"""Add a TRANSFORM node."""
return self._add(NodeType.TRANSFORM, {"effects": effects}, [input_id], name=name)
# Compose operations
def sequence(self, input_ids: List[str], transition: Dict[str, Any] = None,
name: str = None) -> str:
"""Add a SEQUENCE node."""
config = {"transition": transition or {"type": "cut"}}
return self._add(NodeType.SEQUENCE, config, input_ids, name=name)
def layer(self, input_ids: List[str], configs: List[Dict] = None,
name: str = None) -> str:
"""Add a LAYER node."""
return self._add(
NodeType.LAYER,
{"inputs": configs or [{}] * len(input_ids)},
input_ids,
name=name
)
def mux(self, video_id: str, audio_id: str, shortest: bool = True,
name: str = None) -> str:
"""Add a MUX node."""
return self._add(
NodeType.MUX,
{"video_stream": 0, "audio_stream": 1, "shortest": shortest},
[video_id, audio_id],
name=name
)
def blend(self, input1_id: str, input2_id: str, mode: str = "overlay",
opacity: float = 0.5, name: str = None) -> str:
"""Add a BLEND node."""
return self._add(
NodeType.BLEND,
{"mode": mode, "opacity": opacity},
[input1_id, input2_id],
name=name
)
def audio_mix(self, input_ids: List[str], gains: List[float] = None,
normalize: bool = True, name: str = None) -> str:
"""Add an AUDIO_MIX node to mix multiple audio streams."""
config = {"normalize": normalize}
if gains is not None:
config["gains"] = gains
return self._add(NodeType.AUDIO_MIX, config, input_ids, name=name)
# Output
def set_output(self, node_id: str) -> "DAGBuilder":
"""Set the output node."""
self.dag.set_output(node_id)
return self
def build(self) -> DAG:
"""Build and validate the DAG."""
errors = self.dag.validate()
if errors:
raise ValueError(f"Invalid DAG: {errors}")
return self.dag
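Content addressing is what makes `add_node` deduplicate: two nodes with the same type, config, and inputs hash to the same ID regardless of dict key order. A standalone sketch mirroring the `_stable_hash` construction (sorted keys, compact separators, SHA3-256):

```python
import hashlib
import json

def stable_hash(data) -> str:
    # Same construction as _stable_hash: sorted keys, compact separators.
    payload = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha3_256(payload.encode()).hexdigest()

a = stable_hash({"type": "SEGMENT", "config": {"offset": 0, "duration": 5.0}, "inputs": ["src1"]})
b = stable_hash({"inputs": ["src1"], "config": {"duration": 5.0, "offset": 0}, "type": "SEGMENT"})
print(a == b)  # → True: key order does not affect the ID
```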


@@ -0,0 +1,55 @@
"""
Cacheable effect system.
Effects are single Python files with:
- PEP 723 embedded dependencies
- @-tag metadata in docstrings
- Frame-by-frame or whole-video API
Effects are cached by content hash (SHA3-256) and executed in
sandboxed environments for determinism.
"""
from .meta import EffectMeta, ParamSpec, ExecutionContext
from .loader import load_effect, load_effect_file, LoadedEffect, compute_cid
from .binding import (
AnalysisData,
ResolvedBinding,
resolve_binding,
resolve_all_bindings,
bindings_to_lookup_table,
has_bindings,
extract_binding_sources,
)
from .sandbox import Sandbox, SandboxConfig, SandboxResult, is_bwrap_available, get_venv_path
from .runner import run_effect, run_effect_from_cache, EffectExecutor
__all__ = [
# Meta types
"EffectMeta",
"ParamSpec",
"ExecutionContext",
# Loader
"load_effect",
"load_effect_file",
"LoadedEffect",
"compute_cid",
# Binding
"AnalysisData",
"ResolvedBinding",
"resolve_binding",
"resolve_all_bindings",
"bindings_to_lookup_table",
"has_bindings",
"extract_binding_sources",
# Sandbox
"Sandbox",
"SandboxConfig",
"SandboxResult",
"is_bwrap_available",
"get_venv_path",
# Runner
"run_effect",
"run_effect_from_cache",
"EffectExecutor",
]
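The PEP 723 block an effect file carries can be located with a small scan over the source. The effect source and `@`-tags below are illustrative only (the real parsing lives in the loader module):

```python
import re

EFFECT_SOURCE = '''\
# /// script
# dependencies = ["numpy"]
# requires-python = ">=3.10"
# ///
"""Glow effect (illustrative).

@name glow
@param intensity float
"""
'''

# PEP 723: metadata lives between '# /// script' and the closing '# ///' line.
match = re.search(r"^# /// script\n(.*?)^# ///\s*$", EFFECT_SOURCE, re.M | re.S)
meta_block = match.group(1)
requires = re.search(r'requires-python = "([^"]+)"', meta_block).group(1)
print(requires)  # → >=3.10
```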


@@ -0,0 +1,311 @@
"""
Parameter binding resolution.
Resolves bind expressions to per-frame lookup tables at plan time.
Binding options:
- :range [lo hi] - map 0-1 to output range
- :smooth N - smoothing window in seconds
- :offset N - time offset in seconds
- :on-event V - value on discrete events
- :decay N - exponential decay after event
- :noise N - add deterministic noise (seeded)
- :seed N - explicit RNG seed
"""
import hashlib
import math
import random
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple
@dataclass
class AnalysisData:
"""
Analysis data for binding resolution.
Attributes:
frame_rate: Video frame rate
total_frames: Total number of frames
features: Dict mapping feature name to per-frame values
events: Dict mapping event name to list of frame indices
"""
frame_rate: float
total_frames: int
features: Dict[str, List[float]] # feature -> [value_per_frame]
events: Dict[str, List[int]] # event -> [frame_indices]
def get_feature(self, name: str, frame: int) -> float:
"""Get feature value at frame, interpolating if needed."""
if name not in self.features:
return 0.0
values = self.features[name]
if not values:
return 0.0
if frame >= len(values):
return values[-1]
return values[frame]
def get_events_in_range(
self, name: str, start_frame: int, end_frame: int
) -> List[int]:
"""Get event frames in range."""
if name not in self.events:
return []
return [f for f in self.events[name] if start_frame <= f < end_frame]
@dataclass
class ResolvedBinding:
"""
Resolved binding with per-frame values.
Attributes:
param_name: Parameter this binding applies to
values: List of values, one per frame
"""
param_name: str
values: List[float]
def get(self, frame: int) -> float:
"""Get value at frame."""
if frame >= len(self.values):
return self.values[-1] if self.values else 0.0
return self.values[frame]
def resolve_binding(
binding: Dict[str, Any],
analysis: AnalysisData,
param_name: str,
cache_id: str = None,
) -> ResolvedBinding:
"""
Resolve a binding specification to per-frame values.
Args:
binding: Binding spec with source, feature, and options
analysis: Analysis data with features and events
param_name: Name of the parameter being bound
cache_id: Cache ID for deterministic seeding
Returns:
ResolvedBinding with values for each frame
"""
feature = binding.get("feature")
if not feature:
raise ValueError(f"Binding for {param_name} missing feature")
# Get base values
values = []
is_event = feature in analysis.events
if is_event:
# Event-based binding
on_event = binding.get("on_event", 1.0)
decay = binding.get("decay", 0.0)
values = _resolve_event_binding(
analysis.events.get(feature, []),
analysis.total_frames,
analysis.frame_rate,
on_event,
decay,
)
else:
# Continuous feature binding
feature_values = analysis.features.get(feature, [])
if not feature_values:
# No data, use zeros
values = [0.0] * analysis.total_frames
else:
# Extend to total frames if needed
values = list(feature_values)
while len(values) < analysis.total_frames:
values.append(values[-1] if values else 0.0)
# Apply offset
offset = binding.get("offset")
if offset:
offset_frames = int(offset * analysis.frame_rate)
values = _apply_offset(values, offset_frames)
# Apply smoothing
smooth = binding.get("smooth")
if smooth:
window_frames = int(smooth * analysis.frame_rate)
values = _apply_smoothing(values, window_frames)
# Apply range mapping
range_spec = binding.get("range")
if range_spec:
lo, hi = range_spec
values = _apply_range(values, lo, hi)
# Apply noise
noise = binding.get("noise")
if noise:
seed = binding.get("seed")
if seed is None and cache_id:
# Derive seed from cache_id for determinism
seed = int(hashlib.sha256(cache_id.encode()).hexdigest()[:8], 16)
values = _apply_noise(values, noise, seed or 0)
return ResolvedBinding(param_name=param_name, values=values)
def _resolve_event_binding(
event_frames: List[int],
total_frames: int,
frame_rate: float,
on_event: float,
decay: float,
) -> List[float]:
"""
Resolve event-based binding with optional decay.
Args:
event_frames: List of frame indices where events occur
total_frames: Total number of frames
frame_rate: Video frame rate
on_event: Value at event
decay: Decay time constant in seconds (0 = instant)
Returns:
List of values per frame
"""
values = [0.0] * total_frames
if not event_frames:
return values
event_set = set(event_frames)
if decay <= 0:
# No decay - just mark event frames
for f in event_frames:
if 0 <= f < total_frames:
values[f] = on_event
else:
# Apply exponential decay
decay_frames = decay * frame_rate
for f in event_frames:
if f < 0 or f >= total_frames:
continue
# Apply decay from this event forward
for i in range(f, total_frames):
elapsed = i - f
decayed = on_event * math.exp(-elapsed / decay_frames)
if decayed < 0.001:
break
values[i] = max(values[i], decayed)
return values
def _apply_offset(values: List[float], offset_frames: int) -> List[float]:
"""Shift values by offset frames (positive = delay)."""
if offset_frames == 0:
return values
n = len(values)
result = [0.0] * n
for i in range(n):
src = i - offset_frames
if 0 <= src < n:
result[i] = values[src]
return result
def _apply_smoothing(values: List[float], window_frames: int) -> List[float]:
"""Apply moving average smoothing."""
if window_frames <= 1:
return values
n = len(values)
result = []
half = window_frames // 2
for i in range(n):
start = max(0, i - half)
end = min(n, i + half + 1)
avg = sum(values[start:end]) / (end - start)
result.append(avg)
return result
def _apply_range(values: List[float], lo: float, hi: float) -> List[float]:
"""Map values from 0-1 to lo-hi range."""
return [lo + v * (hi - lo) for v in values]
def _apply_noise(values: List[float], amount: float, seed: int) -> List[float]:
"""Add deterministic noise to values."""
rng = random.Random(seed)
return [v + rng.uniform(-amount, amount) for v in values]
def resolve_all_bindings(
config: Dict[str, Any],
analysis: AnalysisData,
cache_id: str = None,
) -> Dict[str, ResolvedBinding]:
"""
Resolve all bindings in a config dict.
Looks for values with _binding: True marker.
Args:
config: Node config with potential bindings
analysis: Analysis data
cache_id: Cache ID for seeding
Returns:
Dict mapping param name to resolved binding
"""
resolved = {}
for key, value in config.items():
if isinstance(value, dict) and value.get("_binding"):
resolved[key] = resolve_binding(value, analysis, key, cache_id)
return resolved
def bindings_to_lookup_table(
bindings: Dict[str, ResolvedBinding],
) -> Dict[str, List[float]]:
"""
Convert resolved bindings to simple lookup tables.
Returns dict mapping param name to list of per-frame values.
This format is JSON-serializable for inclusion in execution plans.
"""
return {name: binding.values for name, binding in bindings.items()}
def has_bindings(config: Dict[str, Any]) -> bool:
"""Check if config contains any bindings."""
for value in config.values():
if isinstance(value, dict) and value.get("_binding"):
return True
return False
def extract_binding_sources(config: Dict[str, Any]) -> List[str]:
"""
Extract all analysis source references from bindings.
Returns list of node IDs that provide analysis data.
"""
sources = []
for value in config.values():
if isinstance(value, dict) and value.get("_binding"):
source = value.get("source")
if source and source not in sources:
sources.append(source)
return sources
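The smoothing and range stages compose exactly as `resolve_binding` applies them: smooth first, then map the 0-1 feature into the output range. A standalone sketch mirroring `_apply_smoothing` and `_apply_range` on a single-frame spike:

```python
def apply_smoothing(values, window_frames):
    # Centered moving average, same as _apply_smoothing above.
    if window_frames <= 1:
        return values
    n = len(values)
    half = window_frames // 2
    out = []
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def apply_range(values, lo, hi):
    # Linear map of 0-1 features into an output range, same as _apply_range.
    return [lo + v * (hi - lo) for v in values]

raw = [0.0, 0.0, 1.0, 0.0, 0.0]  # a single-frame spike
smoothed = apply_smoothing(raw, 3)  # spike spreads across its neighbors
mapped = apply_range(smoothed, 2.0, 10.0)
print(mapped[0], round(mapped[1], 3))
```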


@@ -0,0 +1,347 @@
"""
FFmpeg pipe-based frame processing.
Processes video through Python frame-by-frame effects using FFmpeg pipes:
FFmpeg decode -> Python process_frame -> FFmpeg encode
This avoids writing intermediate frames to disk.
"""
import logging
import subprocess
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Tuple
import numpy as np
logger = logging.getLogger(__name__)
@dataclass
class VideoInfo:
"""Video metadata."""
width: int
height: int
frame_rate: float
total_frames: int
duration: float
pixel_format: str = "rgb24"
def probe_video(path: Path) -> VideoInfo:
"""
Get video information using ffprobe.
Args:
path: Path to video file
Returns:
VideoInfo with dimensions, frame rate, etc.
"""
cmd = [
"ffprobe",
"-v", "error",
"-select_streams", "v:0",
"-show_entries", "stream=width,height,r_frame_rate,nb_frames,duration",
"-of", "csv=p=0",
str(path),
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
raise RuntimeError(f"ffprobe failed: {result.stderr}")
parts = result.stdout.strip().split(",")
if len(parts) < 4:
raise RuntimeError(f"Unexpected ffprobe output: {result.stdout}")
width = int(parts[0])
height = int(parts[1])
# Parse frame rate (could be "30/1" or "30")
fr_parts = parts[2].split("/")
if len(fr_parts) == 2:
frame_rate = float(fr_parts[0]) / float(fr_parts[1])
else:
frame_rate = float(fr_parts[0])
# nb_frames might be N/A
total_frames = 0
duration = 0.0
try:
total_frames = int(parts[3])
except (ValueError, IndexError):
pass
try:
duration = float(parts[4]) if len(parts) > 4 else 0.0
except (ValueError, IndexError):
pass
if total_frames == 0 and duration > 0:
total_frames = int(duration * frame_rate)
return VideoInfo(
width=width,
height=height,
frame_rate=frame_rate,
total_frames=total_frames,
duration=duration,
)
FrameProcessor = Callable[[np.ndarray, Dict[str, Any], Any], Tuple[np.ndarray, Any]]
def process_video(
input_path: Path,
output_path: Path,
process_frame: FrameProcessor,
params: Dict[str, Any],
bindings: Dict[str, List[float]] = None,
initial_state: Any = None,
pixel_format: str = "rgb24",
output_codec: str = "libx264",
output_options: List[str] = None,
) -> Tuple[Path, Any]:
"""
Process video through frame-by-frame effect.
Args:
input_path: Input video path
output_path: Output video path
process_frame: Function (frame, params, state) -> (frame, state)
params: Static parameter dict
bindings: Per-frame parameter lookup tables
initial_state: Initial state for process_frame
pixel_format: Pixel format for frame data
output_codec: Video codec for output
output_options: Additional ffmpeg output options
Returns:
Tuple of (output_path, final_state)
"""
bindings = bindings or {}
output_options = output_options or []
# Probe input
info = probe_video(input_path)
logger.info(f"Processing {info.width}x{info.height} @ {info.frame_rate}fps")
# Calculate bytes per frame
if pixel_format == "rgb24":
bytes_per_pixel = 3
elif pixel_format == "rgba":
bytes_per_pixel = 4
else:
bytes_per_pixel = 3 # Default to RGB
frame_size = info.width * info.height * bytes_per_pixel
# Start decoder process
decode_cmd = [
"ffmpeg",
"-i", str(input_path),
"-f", "rawvideo",
"-pix_fmt", pixel_format,
"-",
]
# Start encoder process
encode_cmd = [
"ffmpeg",
"-y",
"-f", "rawvideo",
"-pix_fmt", pixel_format,
"-s", f"{info.width}x{info.height}",
"-r", str(info.frame_rate),
"-i", "-",
"-i", str(input_path), # For audio
"-map", "0:v",
"-map", "1:a?",
"-c:v", output_codec,
"-c:a", "aac",
*output_options,
str(output_path),
]
logger.debug(f"Decoder: {' '.join(decode_cmd)}")
logger.debug(f"Encoder: {' '.join(encode_cmd)}")
decoder = subprocess.Popen(
decode_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
)
encoder = subprocess.Popen(
encode_cmd,
stdin=subprocess.PIPE,
stderr=subprocess.PIPE,
)
state = initial_state
frame_idx = 0
try:
while True:
# Read frame from decoder
raw_frame = decoder.stdout.read(frame_size)
if len(raw_frame) < frame_size:
break
# Convert to numpy
frame = np.frombuffer(raw_frame, dtype=np.uint8)
frame = frame.reshape((info.height, info.width, bytes_per_pixel))
# Build per-frame params
frame_params = dict(params)
for param_name, values in bindings.items():
if frame_idx < len(values):
frame_params[param_name] = values[frame_idx]
# Process frame
processed, state = process_frame(frame, frame_params, state)
# Ensure correct shape and dtype
if processed.shape != frame.shape:
raise ValueError(
f"Frame shape mismatch: {processed.shape} vs {frame.shape}"
)
processed = processed.astype(np.uint8)
# Write to encoder
encoder.stdin.write(processed.tobytes())
frame_idx += 1
if frame_idx % 100 == 0:
logger.debug(f"Processed frame {frame_idx}")
except Exception as e:
logger.error(f"Frame processing failed at frame {frame_idx}: {e}")
raise
finally:
decoder.stdout.close()
decoder.wait()
# communicate() closes stdin and drains stderr while waiting,
# so a full stderr pipe buffer cannot deadlock ffmpeg
_, stderr_data = encoder.communicate()
if encoder.returncode != 0:
raise RuntimeError(f"Encoder failed: {stderr_data.decode(errors='replace')}")
logger.info(f"Processed {frame_idx} frames")
return output_path, state
def process_video_batch(
input_path: Path,
output_path: Path,
process_frames: Callable[[List[np.ndarray], Dict[str, Any]], List[np.ndarray]],
params: Dict[str, Any],
batch_size: int = 30,
pixel_format: str = "rgb24",
output_codec: str = "libx264",
) -> Path:
"""
Process video in batches for effects that need temporal context.
Args:
input_path: Input video path
output_path: Output video path
process_frames: Function (frames_batch, params) -> processed_batch
params: Parameter dict
batch_size: Number of frames per batch
pixel_format: Pixel format
output_codec: Output codec
Returns:
Output path
"""
info = probe_video(input_path)
if pixel_format == "rgb24":
bytes_per_pixel = 3
elif pixel_format == "rgba":
bytes_per_pixel = 4
else:
bytes_per_pixel = 3
frame_size = info.width * info.height * bytes_per_pixel
decode_cmd = [
"ffmpeg",
"-i", str(input_path),
"-f", "rawvideo",
"-pix_fmt", pixel_format,
"-",
]
encode_cmd = [
"ffmpeg",
"-y",
"-f", "rawvideo",
"-pix_fmt", pixel_format,
"-s", f"{info.width}x{info.height}",
"-r", str(info.frame_rate),
"-i", "-",
"-i", str(input_path),
"-map", "0:v",
"-map", "1:a?",
"-c:v", output_codec,
"-c:a", "aac",
str(output_path),
]
decoder = subprocess.Popen(
decode_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
)
encoder = subprocess.Popen(
encode_cmd,
stdin=subprocess.PIPE,
stderr=subprocess.PIPE,
)
batch = []
total_processed = 0
try:
while True:
# Pipe reads can return fewer bytes than requested; accumulate a full frame
raw_frame = b""
while len(raw_frame) < frame_size:
chunk = decoder.stdout.read(frame_size - len(raw_frame))
if not chunk:
break
raw_frame += chunk
if len(raw_frame) < frame_size:
# Process remaining batch
if batch:
processed = process_frames(batch, params)
for frame in processed:
encoder.stdin.write(frame.astype(np.uint8).tobytes())
total_processed += 1
break
# frombuffer yields a read-only view; copy so process_frames may mutate in place
frame = np.frombuffer(raw_frame, dtype=np.uint8).copy()
frame = frame.reshape((info.height, info.width, bytes_per_pixel))
batch.append(frame)
if len(batch) >= batch_size:
processed = process_frames(batch, params)
for frame in processed:
encoder.stdin.write(frame.astype(np.uint8).tobytes())
total_processed += 1
batch = []
finally:
decoder.stdout.close()
decoder.wait()
encoder.stdin.close()
encoder.wait()
if encoder.returncode != 0:
stderr = encoder.stderr.read().decode() if encoder.stderr else ""
raise RuntimeError(f"Encoder failed: {stderr}")
logger.info(f"Processed {total_processed} frames in batches of {batch_size}")
return output_path
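The drain-and-flush batching logic above can be exercised without ffmpeg. A minimal sketch (`run_batches` and the doubling `process_frames` here are illustrative stand-ins, not part of the module):

```python
def run_batches(frames, process_frames, batch_size=3):
    """Accumulate frames into batches, flushing the final partial batch."""
    out, batch = [], []
    for frame in frames:
        batch.append(frame)
        if len(batch) >= batch_size:
            out.extend(process_frames(batch))
            batch = []
    if batch:  # mirror of the EOF path: process whatever remains
        out.extend(process_frames(batch))
    return out

processed = run_batches(list(range(7)), lambda b: [x * 2 for x in b], batch_size=3)
print(processed)  # [0, 2, 4, 6, 8, 10, 12]
```

The final `if batch:` flush is the same role the `len(raw_frame) < frame_size` branch plays in the real loop: temporal effects still see the trailing frames even when they do not fill a whole batch.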

"""
Effect file loader.
Parses effect files with:
- PEP 723 inline script metadata for dependencies
- @-tag docstrings for effect metadata
- META object for programmatic access
"""
import ast
import hashlib
import re
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from .meta import EffectMeta, ParamSpec
@dataclass
class LoadedEffect:
"""
A loaded effect with all metadata.
Attributes:
source: Original source code
cid: SHA3-256 hash of source
meta: Extracted EffectMeta
dependencies: List of pip dependencies
requires_python: Python version requirement
module: Compiled module (if loaded)
"""
source: str
cid: str
meta: EffectMeta
dependencies: List[str] = field(default_factory=list)
requires_python: str = ">=3.10"
module: Any = None
def has_frame_api(self) -> bool:
"""Check if effect has frame-by-frame API."""
return self.meta.api_type == "frame"
def has_video_api(self) -> bool:
"""Check if effect has whole-video API."""
return self.meta.api_type == "video"
def compute_cid(source: str) -> str:
"""Compute SHA3-256 hash of effect source."""
return hashlib.sha3_256(source.encode("utf-8")).hexdigest()
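The CID is a plain SHA3-256 hex digest of the source text, so it is deterministic and 64 hex characters long. A quick illustration (the sample source is arbitrary):

```python
import hashlib

source = "def process_frame(frame, params):\n    return frame\n"
cid = hashlib.sha3_256(source.encode("utf-8")).hexdigest()
print(len(cid))  # 64 hex characters
# Same source always hashes to the same CID
assert cid == hashlib.sha3_256(source.encode("utf-8")).hexdigest()
```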
def parse_pep723_metadata(source: str) -> Tuple[List[str], str]:
"""
Parse PEP 723 inline script metadata.
Looks for:
# /// script
# requires-python = ">=3.10"
# dependencies = ["numpy", "opencv-python"]
# ///
Returns:
Tuple of (dependencies list, requires_python string)
"""
dependencies = []
requires_python = ">=3.10"
# Match the script block
pattern = r"# /// script\n(.*?)# ///"
match = re.search(pattern, source, re.DOTALL)
if not match:
return dependencies, requires_python
block = match.group(1)
# Parse dependencies
deps_match = re.search(r'# dependencies = \[(.*?)\]', block, re.DOTALL)
if deps_match:
deps_str = deps_match.group(1)
# Extract quoted strings
dependencies = re.findall(r'"([^"]+)"', deps_str)
# Parse requires-python
python_match = re.search(r'# requires-python = "([^"]+)"', block)
if python_match:
requires_python = python_match.group(1)
return dependencies, requires_python
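The extraction above can be seen end-to-end on a sample effect header (the `SAMPLE` source below is illustrative; the regexes mirror the ones in `parse_pep723_metadata`):

```python
import re

SAMPLE = '''# /// script
# requires-python = ">=3.10"
# dependencies = ["numpy", "opencv-python"]
# ///
def process_frame(frame, params):
    return frame
'''

# Isolate the script block, then pull out the two fields
block = re.search(r"# /// script\n(.*?)# ///", SAMPLE, re.DOTALL).group(1)
deps_str = re.search(r'# dependencies = \[(.*?)\]', block, re.DOTALL).group(1)
dependencies = re.findall(r'"([^"]+)"', deps_str)
requires_python = re.search(r'# requires-python = "([^"]+)"', block).group(1)
print(dependencies)     # ['numpy', 'opencv-python']
print(requires_python)  # >=3.10
```

Note this is a simplified reader: real PEP 723 blocks embed TOML, which this regex approach only approximates for the two fields it needs.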
def parse_docstring_metadata(docstring: str) -> Dict[str, Any]:
"""
Parse @-tag metadata from docstring.
Supports:
@effect name
@version 1.0.0
@author @user@domain
@temporal false
@description
Multi-line description text.
@param name type
@range lo hi
@default value
Description text.
@example
(fx effect :param value)
Returns:
Dictionary with extracted metadata
"""
if not docstring:
return {}
result = {
"name": "",
"version": "1.0.0",
"author": "",
"temporal": False,
"description": "",
"params": [],
"examples": [],
}
lines = docstring.strip().split("\n")
i = 0
current_param = None
while i < len(lines):
line = lines[i].strip()
if line.startswith("@effect "):
result["name"] = line[8:].strip()
elif line.startswith("@version "):
result["version"] = line[9:].strip()
elif line.startswith("@author "):
result["author"] = line[8:].strip()
elif line.startswith("@temporal "):
val = line[10:].strip().lower()
result["temporal"] = val in ("true", "yes", "1")
elif line.startswith("@description"):
# Collect multi-line description
desc_lines = []
i += 1
while i < len(lines):
next_line = lines[i]
if next_line.strip().startswith("@"):
i -= 1 # Back up to process this tag
break
desc_lines.append(next_line)
i += 1
result["description"] = "\n".join(desc_lines).strip()
elif line.startswith("@param "):
# Parse parameter: @param name type
parts = line[7:].split()
if len(parts) >= 2:
current_param = {
"name": parts[0],
"type": parts[1],
"range": None,
"default": None,
"description": "",
}
# Collect param details
desc_lines = []
i += 1
while i < len(lines):
next_line = lines[i]
stripped = next_line.strip()
if stripped.startswith("@range "):
range_parts = stripped[7:].split()
if len(range_parts) >= 2:
try:
current_param["range"] = (
float(range_parts[0]),
float(range_parts[1]),
)
except ValueError:
pass
elif stripped.startswith("@default "):
current_param["default"] = stripped[9:].strip()
elif stripped.startswith("@param ") or stripped.startswith("@example"):
i -= 1 # Back up
break
elif stripped.startswith("@"):
i -= 1
break
elif stripped:
desc_lines.append(stripped)
i += 1
current_param["description"] = " ".join(desc_lines)
result["params"].append(current_param)
current_param = None
elif line.startswith("@example"):
# Collect example
example_lines = []
i += 1
while i < len(lines):
next_line = lines[i]
if next_line.strip().startswith("@"):
i -= 1  # Back up to process this tag (including a following @example)
break
example_lines.append(next_line)
i += 1
example = "\n".join(example_lines).strip()
if example:
result["examples"].append(example)
i += 1
return result
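The scalar-tag portion of the scan reduces to a simple prefix dispatch. A simplified sketch (the full parser above also handles `@description`, `@param` blocks, and `@example` collection; `doc` is sample input):

```python
doc = """@effect glow
@version 2.1.0
@temporal true
"""

result = {}
for raw in doc.splitlines():
    line = raw.strip()
    if line.startswith("@effect "):
        result["name"] = line[len("@effect "):].strip()
    elif line.startswith("@version "):
        result["version"] = line[len("@version "):].strip()
    elif line.startswith("@temporal "):
        # Truthy spellings accepted, matching the parser above
        result["temporal"] = line[len("@temporal "):].strip().lower() in ("true", "yes", "1")

print(result)  # {'name': 'glow', 'version': '2.1.0', 'temporal': True}
```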
def extract_meta_from_ast(source: str) -> Optional[Dict[str, Any]]:
"""
Extract META object from source AST.
Looks for:
META = EffectMeta(...)
Returns the keyword arguments if found.
"""
try:
tree = ast.parse(source)
except SyntaxError:
return None
for node in ast.walk(tree):
if isinstance(node, ast.Assign):
for target in node.targets:
if isinstance(target, ast.Name) and target.id == "META":
if isinstance(node.value, ast.Call):
return _extract_call_kwargs(node.value)
return None
def _extract_call_kwargs(call: ast.Call) -> Dict[str, Any]:
"""Extract keyword arguments from an AST Call node."""
result = {}
for keyword in call.keywords:
if keyword.arg is None:
continue
value = _ast_to_value(keyword.value)
if value is not None:
result[keyword.arg] = value
return result
def _ast_to_value(node: ast.expr) -> Any:
"""Convert AST node to Python value."""
if isinstance(node, ast.Constant):
# Constant covers str/num/bool/None on all supported Pythons (>=3.10);
# the legacy ast.Str/ast.Num/ast.NameConstant aliases are deprecated
return node.value
elif isinstance(node, ast.List):
return [_ast_to_value(elt) for elt in node.elts]
elif isinstance(node, ast.Tuple):
return tuple(_ast_to_value(elt) for elt in node.elts)
elif isinstance(node, ast.Dict):
return {
_ast_to_value(k): _ast_to_value(v)
for k, v in zip(node.keys, node.values)
if k is not None
}
elif isinstance(node, ast.Call):
# Handle ParamSpec(...) calls
if isinstance(node.func, ast.Name) and node.func.id == "ParamSpec":
return _extract_call_kwargs(node)
return None
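The point of the AST approach is that `EffectMeta` never has to be importable when the metadata is read; the name is just text in the parsed source. A compact illustration of the kwargs extraction (sample `src` below; constants unwrap via `ast.Constant.value`):

```python
import ast

src = 'META = EffectMeta(name="blur", version="1.2.0", temporal=False)'
call = ast.parse(src).body[0].value  # the EffectMeta(...) Call node
kwargs = {kw.arg: kw.value.value for kw in call.keywords}
print(kwargs)  # {'name': 'blur', 'version': '1.2.0', 'temporal': False}
```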
def get_module_docstring(source: str) -> str:
"""Extract the module-level docstring from source."""
try:
tree = ast.parse(source)
except SyntaxError:
return ""
# ast.get_docstring handles the Expr/Constant unwrapping on all
# supported Pythons (>=3.10), including indentation cleanup
return ast.get_docstring(tree) or ""
def load_effect(source: str) -> LoadedEffect:
"""
Load an effect from source code.
Parses:
1. PEP 723 metadata for dependencies
2. Module docstring for @-tag metadata
3. META object for programmatic metadata
Priority: META object > docstring > defaults
Args:
source: Effect source code
Returns:
LoadedEffect with all metadata
Raises:
ValueError: If effect is invalid
"""
cid = compute_cid(source)
# Parse PEP 723 metadata
dependencies, requires_python = parse_pep723_metadata(source)
# Parse docstring metadata
docstring = get_module_docstring(source)
doc_meta = parse_docstring_metadata(docstring)
# Try to extract META from AST
ast_meta = extract_meta_from_ast(source)
# Build EffectMeta, preferring META object over docstring
name = ""
if ast_meta and "name" in ast_meta:
name = ast_meta["name"]
elif doc_meta.get("name"):
name = doc_meta["name"]
if not name:
raise ValueError("Effect must have a name (@effect or META.name)")
def _pick(key: str, default: Any) -> Any:
# Prefer META object values; fall back to docstring metadata, then default
# (a META object missing a field must not shadow a docstring value)
if ast_meta and key in ast_meta:
return ast_meta[key]
return doc_meta.get(key, default)
version = _pick("version", "1.0.0")
temporal = _pick("temporal", False)
author = _pick("author", "")
description = _pick("description", "")
examples = _pick("examples", [])
# Build params
params = []
if ast_meta and "params" in ast_meta:
for p in ast_meta["params"]:
if isinstance(p, dict):
type_map = {"float": float, "int": int, "bool": bool, "str": str}
param_type = type_map.get(p.get("param_type", "float"), float)
if isinstance(p.get("param_type"), type):
param_type = p["param_type"]
params.append(
ParamSpec(
name=p.get("name", ""),
param_type=param_type,
default=p.get("default"),
range=p.get("range"),
description=p.get("description", ""),
)
)
elif doc_meta.get("params"):
for p in doc_meta["params"]:
type_map = {"float": float, "int": int, "bool": bool, "str": str}
param_type = type_map.get(p.get("type", "float"), float)
default = p.get("default")
if default is not None:
try:
default = param_type(default)
except (ValueError, TypeError):
pass
params.append(
ParamSpec(
name=p["name"],
param_type=param_type,
default=default,
range=p.get("range"),
description=p.get("description", ""),
)
)
# Determine API type by checking for function definitions
api_type = "frame" # default
try:
tree = ast.parse(source)
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
if node.name == "process":
api_type = "video"
break
elif node.name == "process_frame":
api_type = "frame"
break
except SyntaxError:
pass
meta = EffectMeta(
name=name,
version=version if isinstance(version, str) else "1.0.0",
temporal=bool(temporal),
params=params,
author=author if isinstance(author, str) else "",
description=description if isinstance(description, str) else "",
examples=examples if isinstance(examples, list) else [],
dependencies=dependencies,
requires_python=requires_python,
api_type=api_type,
)
return LoadedEffect(
source=source,
cid=cid,
meta=meta,
dependencies=dependencies,
requires_python=requires_python,
)
def load_effect_file(path: Path) -> LoadedEffect:
"""Load an effect from a file path."""
source = path.read_text(encoding="utf-8")
return load_effect(source)
def compute_deps_hash(dependencies: List[str]) -> str:
"""
Compute hash of sorted dependencies.
Used for venv caching - same deps = same hash = reuse venv.
"""
sorted_deps = sorted(dep.lower().strip() for dep in dependencies)
deps_str = "\n".join(sorted_deps)
return hashlib.sha3_256(deps_str.encode("utf-8")).hexdigest()
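Because the dependency list is normalized (lowercased, stripped, sorted) before hashing, ordering and casing differences do not defeat the venv cache. A self-contained sketch mirroring that normalization:

```python
import hashlib

def deps_hash(dependencies):
    # Same normalization as compute_deps_hash: lowercase, strip, sort
    sorted_deps = sorted(dep.lower().strip() for dep in dependencies)
    return hashlib.sha3_256("\n".join(sorted_deps).encode("utf-8")).hexdigest()

a = deps_hash(["NumPy", "opencv-python "])
b = deps_hash(["opencv-python", "numpy"])
print(a == b)  # True: order, case, and whitespace don't matter, so the venv is reused
```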

"""
Effect metadata types.
Defines the core dataclasses for effect metadata:
- ParamSpec: Parameter specification with type, range, and default
- EffectMeta: Complete effect metadata including params and flags
"""
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple, Type, Union
@dataclass
class ParamSpec:
"""
Specification for an effect parameter.
Attributes:
name: Parameter name (used in recipes as :name)
param_type: Python type (float, int, bool, str)
default: Default value if not specified
range: Optional (min, max) tuple for numeric types
description: Human-readable description
choices: Optional list of allowed values (for enums)
"""
name: str
param_type: Type
default: Any = None
range: Optional[Tuple[float, float]] = None
description: str = ""
choices: Optional[List[Any]] = None
def validate(self, value: Any) -> Any:
"""
Validate and coerce a parameter value.
Args:
value: Input value to validate
Returns:
Validated and coerced value
Raises:
ValueError: If value is invalid
"""
if value is None:
if self.default is not None:
return self.default
raise ValueError(f"Parameter '{self.name}' requires a value")
# Type coercion
try:
if self.param_type == bool:
if isinstance(value, str):
value = value.lower() in ("true", "1", "yes")
else:
value = bool(value)
elif self.param_type == int:
value = int(value)
elif self.param_type == float:
value = float(value)
elif self.param_type == str:
value = str(value)
else:
value = self.param_type(value)
except (ValueError, TypeError) as e:
raise ValueError(
f"Parameter '{self.name}' expects {self.param_type.__name__}, "
f"got {type(value).__name__}: {e}"
)
# Range check for numeric types
if self.range is not None and self.param_type in (int, float):
min_val, max_val = self.range
if value < min_val or value > max_val:
raise ValueError(
f"Parameter '{self.name}' must be in range "
f"[{min_val}, {max_val}], got {value}"
)
# Choices check
if self.choices is not None and value not in self.choices:
raise ValueError(
f"Parameter '{self.name}' must be one of {self.choices}, got {value}"
)
return value
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
d = {
"name": self.name,
"type": self.param_type.__name__,
"description": self.description,
}
if self.default is not None:
d["default"] = self.default
if self.range is not None:
d["range"] = list(self.range)
if self.choices is not None:
d["choices"] = self.choices
return d
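The validate flow (default substitution, type coercion, range check) can be sketched for the float case. `validate_float` below is a hypothetical stand-in for illustration; the real logic lives on `ParamSpec.validate` and dispatches on `param_type`:

```python
def validate_float(name, value, rng=None, default=None):
    if value is None:
        if default is not None:
            return default
        raise ValueError(f"Parameter '{name}' requires a value")
    value = float(value)  # coercion, as in the type dispatch above
    if rng is not None and not (rng[0] <= value <= rng[1]):
        raise ValueError(f"Parameter '{name}' must be in range {rng}, got {value}")
    return value

print(validate_float("radius", "2.5", rng=(0.0, 10.0)))  # 2.5 (string coerced)
print(validate_float("radius", None, default=1.0))       # 1.0 (default applied)
```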
@dataclass
class EffectMeta:
"""
Complete metadata for an effect.
Attributes:
name: Effect name (used in recipes)
version: Semantic version string
temporal: If True, effect needs complete input (can't be collapsed)
params: List of parameter specifications
author: Optional author identifier
description: Human-readable description
examples: List of example S-expression usages
dependencies: List of Python package dependencies
requires_python: Minimum Python version
api_type: "frame" for frame-by-frame, "video" for whole-video
"""
name: str
version: str = "1.0.0"
temporal: bool = False
params: List[ParamSpec] = field(default_factory=list)
author: str = ""
description: str = ""
examples: List[str] = field(default_factory=list)
dependencies: List[str] = field(default_factory=list)
requires_python: str = ">=3.10"
api_type: str = "frame" # "frame" or "video"
def get_param(self, name: str) -> Optional[ParamSpec]:
"""Get a parameter spec by name."""
for param in self.params:
if param.name == name:
return param
return None
def validate_params(self, params: Dict[str, Any]) -> Dict[str, Any]:
"""
Validate all parameters.
Args:
params: Dictionary of parameter values
Returns:
Dictionary with validated/coerced values including defaults
Raises:
ValueError: If any parameter is invalid
"""
result = {}
for spec in self.params:
value = params.get(spec.name)
result[spec.name] = spec.validate(value)
return result
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"name": self.name,
"version": self.version,
"temporal": self.temporal,
"params": [p.to_dict() for p in self.params],
"author": self.author,
"description": self.description,
"examples": self.examples,
"dependencies": self.dependencies,
"requires_python": self.requires_python,
"api_type": self.api_type,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "EffectMeta":
"""Create from dictionary."""
params = []
for p in data.get("params", []):
# Map type name back to Python type
type_map = {"float": float, "int": int, "bool": bool, "str": str}
param_type = type_map.get(p.get("type", "float"), float)
params.append(
ParamSpec(
name=p["name"],
param_type=param_type,
default=p.get("default"),
range=tuple(p["range"]) if p.get("range") else None,
description=p.get("description", ""),
choices=p.get("choices"),
)
)
return cls(
name=data["name"],
version=data.get("version", "1.0.0"),
temporal=data.get("temporal", False),
params=params,
author=data.get("author", ""),
description=data.get("description", ""),
examples=data.get("examples", []),
dependencies=data.get("dependencies", []),
requires_python=data.get("requires_python", ">=3.10"),
api_type=data.get("api_type", "frame"),
)
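Serialization stores types by name (`to_dict` writes `param_type.__name__`, `from_dict` maps it back), with unknown names falling back to float. A minimal sketch of that round trip:

```python
type_map = {"float": float, "int": int, "bool": bool, "str": str}

def name_to_type(name):
    return type_map.get(name, float)  # unknown names fall back to float

for t in (float, int, bool, str):
    # Each supported type survives the to_dict/from_dict round trip
    assert name_to_type(t.__name__) is t
print(name_to_type("vector"))  # an unsupported name falls back to float
```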
@dataclass
class ExecutionContext:
"""
Context passed to effect execution.
Provides controlled access to resources within sandbox.
"""
input_paths: List[str]
output_path: str
params: Dict[str, Any]
seed: int # Deterministic seed for RNG
frame_rate: float = 30.0
width: int = 1920
height: int = 1080
# Resolved bindings (frame -> param value lookup)
bindings: Dict[str, List[float]] = field(default_factory=dict)
def get_param_at_frame(self, param_name: str, frame: int) -> Any:
"""
Get parameter value at a specific frame.
If parameter has a binding, looks up the bound value.
Otherwise returns the static parameter value.
"""
if param_name in self.bindings:
binding_values = self.bindings[param_name]
if frame < len(binding_values):
return binding_values[frame]
# Past end of binding data, use last value
return binding_values[-1] if binding_values else self.params.get(param_name)
return self.params.get(param_name)
def get_rng(self) -> "random.Random":
"""Get a seeded random number generator."""
import random
return random.Random(self.seed)
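The per-frame lookup can be sketched with sample data (`params`/`bindings` below are illustrative; the real lookup is `ExecutionContext.get_param_at_frame`):

```python
params = {"opacity": 1.0}
bindings = {"opacity": [0.0, 0.5, 1.0]}  # one bound value per frame

def param_at_frame(name, frame):
    values = bindings.get(name)
    if values:
        # Clamp to the last bound value once past the end of the data
        return values[min(frame, len(values) - 1)]
    return params.get(name)

print(param_at_frame("opacity", 1))   # 0.5
print(param_at_frame("opacity", 99))  # 1.0 (clamped to last value)
print(param_at_frame("blur", 0))      # None (no binding, no static param)
```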
