giles c361946974 perf: deadline tweaks (tcl 2400→300s, erlang 120→600s); plan + Phase 1 findings
Phase 1 of the jit-perf-regression plan investigated and quantified the alleged
30× substrate slowdown across 5 guests (tcl, lua, erlang, prolog, haskell). On
a quiet machine all five suites pass cleanly:

  tcl test.sh         57.8s wall, 16.3s user, 376/376 ✓
  lua test.sh         27.3s wall,  4.2s user, 185/185 ✓
  erlang conformance  3m25s wall, 36.8s user, 530/530 ✓ (needs ≥600s budget)
  prolog conformance  3m54s wall, 1m08s user, 590/590 ✓
  haskell conformance 6m59s wall, 2m37s user, 156/156 ✓

Per-test user-time at architecture HEAD vs pre-substrate-merge baseline
(83dbb595) is essentially flat (tcl 0.83×, lua 1.4×, prolog 0.82×). The
symptoms reported in the plan (test timeouts, OOMs, 30-min hangs) were caused
by heavy CPU contention from concurrent loops plus one undersized internal
`timeout 120` in erlang's conformance script. There is no substrate regression
to bisect.

Changes:

* lib/tcl/test.sh: `timeout 2400` → `timeout 300`. The original 180s deadline
  is comfortable on a quiet machine (3.1× headroom); 300s gives some safety
  margin for moderate contention without masking real regressions.
* lib/erlang/conformance.sh: `timeout 120` → `timeout 600`. The 120s budget
  was actually too tight for the full 9-suite chain even before this work.
* lib/erlang/scoreboard.{json,md}: 0/0 → 530/530 — populated by a successful
  conformance run with the new deadline. The previous 0/0 was a stale
  artefact of the run timing out before parsing any markers.
* plans/jit-perf-regression.md: full Phase 1 progress log including
  per-guest perf table, quiet-machine re-measurement, and conclusion.

Phases 2–4 (bisect, diagnose, fix) skipped — there is no substrate regression
to find. Phase 6 (perf-regression alarm) still planned to catch the next
quadratic blow-up early instead of via watchdog bumps.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 14:05:29 +00:00

Rose Ash

Monorepo for the Rose Ash cooperative platform — six Quart microservices sharing a common infrastructure layer, a single PostgreSQL database, and an ActivityPub federation layer.

Services

Service      URL                        Description
blog         blog.rose-ash.com          Content management, Ghost sync, navigation, editor
market       market.rose-ash.com        Product listings, scraping, market pages
cart         cart.rose-ash.com          Shopping cart, checkout, orders, SumUp payments
events       events.rose-ash.com        Calendar, event entries, container widgets
federation   federation.rose-ash.com    OAuth2 authorization server, ActivityPub hub, social features
account      account.rose-ash.com       User dashboard, newsletters, tickets, bookings

All services are Python 3.11 / Quart apps served by Hypercorn, deployed as a Docker Swarm stack.

Repository structure

rose-ash/
├── shared/              # Common code: models, services, infrastructure, templates
│   ├── models/          # Canonical SQLAlchemy ORM models (all domains)
│   ├── services/        # Domain service implementations + registry
│   ├── contracts/       # DTOs, protocols, widget contracts
│   ├── infrastructure/  # App factory, OAuth, ActivityPub, fragments, Jinja setup
│   ├── templates/       # Shared base templates and partials
│   ├── static/          # Shared CSS, JS, images
│   ├── editor/          # Prose editor (Node build, blog only)
│   └── alembic/         # Database migrations
├── blog/                # Blog app
├── market/              # Market app
├── cart/                # Cart app
├── events/              # Events app
├── federation/          # Federation app
├── account/             # Account app
├── docker-compose.yml   # Swarm stack definition
├── deploy.sh            # Local build + restart script
├── .gitea/workflows/    # CI: build changed apps + deploy
├── _config/             # Runtime config (app-config.yaml)
├── schema.sql           # Reference schema snapshot
└── .env                 # Environment variables (not committed)

Each app follows the same layout:

{app}/
├── app.py               # App entry point (creates Quart app)
├── path_setup.py        # Adds project root + app dir to sys.path
├── entrypoint.sh        # Container entrypoint (wait for DB, run migrations, start)
├── Dockerfile           # Build instructions (monorepo context)
├── bp/                  # Blueprints (routes, handlers)
│   └── fragments/       # Fragment endpoints for cross-app composition
├── models/              # Re-export stubs pointing to shared/models/
├── services/            # App-specific service wiring
├── templates/           # App-specific templates (override shared/)
└── config/              # App-specific config

Key architecture patterns

Shared models — All ORM models live in shared/models/. Each app's models/ directory contains thin re-export stubs. factory.py imports all six apps' models at startup so SQLAlchemy relationship references resolve across domains.

Service contracts — Apps communicate through typed protocols (shared/contracts/protocols.py) and frozen dataclass DTOs (shared/contracts/dtos.py), wired via a singleton registry (shared/services/registry.py). No direct HTTP calls between apps for domain logic.
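As a hedged sketch of that contract pattern (the names `CartSummaryDTO`, `CartServiceProtocol`, and `InMemoryCartService` are illustrative, not the repo's actual definitions):

```python
from dataclasses import dataclass
from typing import Protocol, runtime_checkable

# Frozen dataclass DTO: an immutable value passed across app boundaries.
@dataclass(frozen=True)
class CartSummaryDTO:
    item_count: int
    total_pence: int

# Typed protocol: the contract an implementation must satisfy.
@runtime_checkable
class CartServiceProtocol(Protocol):
    def cart_summary(self, user_id: int) -> CartSummaryDTO: ...

class ServiceRegistry:
    """Process-wide singleton mapping service names to implementations."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._services = {}
        return cls._instance

    def register(self, name: str, impl) -> None:
        self._services[name] = impl

    def get(self, name: str):
        return self._services[name]

class InMemoryCartService:
    def cart_summary(self, user_id: int) -> CartSummaryDTO:
        return CartSummaryDTO(item_count=2, total_pence=1500)

registry = ServiceRegistry()
registry.register("cart", InMemoryCartService())
summary = registry.get("cart").cart_summary(user_id=1)
```

Because the registry is in-process and the DTOs are frozen, callers get cross-domain data without HTTP hops or shared mutable state.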

Fragment composition — Apps expose HTML fragments at /internal/fragments/<type> for cross-app UI composition. The blog fetches cart, account, navigation, and event fragments to compose its pages. Fragments are cached in Redis with short TTLs.
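The caching side of fragment composition can be sketched like this (a stand-in for the real Redis-backed cache; class and function names here are assumptions):

```python
import time

class FragmentCache:
    """Tiny in-memory stand-in for the Redis fragment cache."""
    def __init__(self):
        self._store = {}  # key -> (expires_at, html)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None

    def set(self, key, html, ttl=30):
        self._store[key] = (time.monotonic() + ttl, html)

def fetch_fragment(cache, fragment_type, render):
    # In production `render` would be an HTTP GET to another app's
    # /internal/fragments/<type> endpoint; here it is a plain callable.
    key = f"fragment:{fragment_type}"
    html = cache.get(key)
    if html is None:
        html = render()
        cache.set(key, html, ttl=30)  # short TTL keeps fragments fresh
    return html
```

The short TTL bounds staleness while letting a burst of page renders reuse one cross-app fetch.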

OAuth SSO — Federation is the OAuth2 authorization server. All other apps are OAuth clients with per-app first-party session cookies (Safari ITP compatible). Login/callback/logout routes are auto-registered via shared/infrastructure/oauth.py.
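The per-app cookie side of this can be sketched as follows (an illustrative helper, not the repo's actual `shared/infrastructure/oauth.py` API): each app names its own first-party session cookie so nothing depends on third-party cookies Safari's ITP would block.

```python
def session_cookie_config(app_name: str) -> dict:
    """Illustrative per-app session cookie settings (names assumed)."""
    return {
        # Distinct cookie per app avoids collisions across subdomains.
        "SESSION_COOKIE_NAME": f"{app_name}_session",
        # First-party, Lax cookies survive Safari ITP.
        "SESSION_COOKIE_SAMESITE": "Lax",
        "SESSION_COOKIE_SECURE": True,
    }
```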

ActivityPub — Each app has its own AP actor (a virtual projection of the same keypair). The federation app is the social hub (timeline, compose, follow, notifications). Activities are emitted to the ap_activities table and processed by EventProcessor.
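The emit-then-process flow can be sketched with an outbox-style table (SQLite here as a stand-in for the shared PostgreSQL database; the real schema and EventProcessor internals may differ):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the shared PostgreSQL DB
conn.execute("""
    CREATE TABLE ap_activities (
        id INTEGER PRIMARY KEY,
        actor TEXT,
        type TEXT,
        payload TEXT,
        processed INTEGER DEFAULT 0
    )
""")

def emit_activity(actor: str, activity_type: str, obj: dict) -> None:
    # Apps only append rows; delivery happens asynchronously.
    conn.execute(
        "INSERT INTO ap_activities (actor, type, payload) VALUES (?, ?, ?)",
        (actor, activity_type, json.dumps(obj)),
    )

def process_pending() -> int:
    # EventProcessor-style loop: pick up unprocessed rows, deliver
    # (delivery elided here), then mark them done.
    rows = conn.execute(
        "SELECT id, payload FROM ap_activities WHERE processed = 0"
    ).fetchall()
    for row_id, _payload in rows:
        conn.execute(
            "UPDATE ap_activities SET processed = 1 WHERE id = ?", (row_id,)
        )
    return len(rows)
```

Decoupling emission from delivery keeps request handlers fast and makes federation retries a property of the processor, not of each app.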

Development

Quick deploy (skip CI)

# Rebuild + restart one app
./deploy.sh blog

# Rebuild + restart multiple apps
./deploy.sh blog market

# Rebuild all
./deploy.sh --all

# Auto-detect changes from git
./deploy.sh

Full stack deploy

source .env
docker stack deploy -c docker-compose.yml coop

Build a single app image

docker build -f blog/Dockerfile -t registry.rose-ash.com:5000/blog:latest .

Run migrations

Migrations run automatically at blog service startup when RUN_MIGRATIONS=true is set (only the blog service runs migrations; all other apps skip them).

# Manual migration
docker exec -it $(docker ps -qf name=coop_blog) bash -c "cd shared && alembic upgrade head"
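The gate each entrypoint applies boils down to something like this (an illustrative helper; the actual entrypoint logic lives in shell and may differ):

```python
import os

def should_run_migrations() -> bool:
    # Only the blog container applies Alembic migrations at startup;
    # the other five apps start serving against the existing schema.
    return os.environ.get("RUN_MIGRATIONS", "").strip().lower() == "true"
```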

CI/CD

A single Gitea Actions workflow (.gitea/workflows/ci.yml) handles all six apps:

  1. Detects which files changed since the last deploy
  2. If shared/ or docker-compose.yml changed, rebuilds all apps
  3. Otherwise rebuilds only apps with changes (or missing images)
  4. Pushes images to the private registry
  5. Runs docker stack deploy to update the swarm
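Steps 2–3 amount to a small mapping from changed paths to apps, which can be sketched as (illustrative only; the workflow's actual detection logic may differ):

```python
APPS = {"blog", "market", "cart", "events", "federation", "account"}

def apps_to_rebuild(changed_files):
    """Map changed paths to the set of apps needing a rebuild."""
    # Shared code or the stack definition affects every image.
    if any(f.startswith("shared/") or f == "docker-compose.yml"
           for f in changed_files):
        return set(APPS)
    # Otherwise rebuild only apps whose own directory changed.
    return {f.split("/", 1)[0] for f in changed_files} & APPS
```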

Required secrets

Secret           Value
DEPLOY_SSH_KEY   Private SSH key for root access to the deploy host
DEPLOY_HOST      Hostname or IP of the deploy server

Infrastructure

  • Runtime: Python 3.11, Quart (async Flask), Hypercorn
  • Database: PostgreSQL 16 (shared by all apps)
  • Cache: Redis 7 (page cache, fragment cache, sessions)
  • Orchestration: Docker Swarm
  • Registry: registry.rose-ash.com:5000
  • CI: Gitea Actions
  • Reverse proxy: Caddy (external, not in this repo)