Update recipes to use IPFS CIDs
- Change :hash to :cid throughout
- Update cat asset: QmXrj6tSSn1vQXxxEY2Tyoudvt4CeeqR9gGQwSt7WFrhMZ
- Update dog effect: QmT99H4MC5p18MGuxAeKGeXD71cGCzMNRxFfvt4FuCwpn6
- Update invert effect: QmPWaW5E5WFrmDjT6w8enqvtJhM8c5jvQu7XN1doHA3Z7J

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
PRIMITIVES.md (new file, 149 lines)
@@ -0,0 +1,149 @@
# Art DAG Primitive Language

## Overview

Primitives enable declarative composition of audio-reactive video. The key insight: **data flows alongside media**.

```
audio → ANALYZE → data → BIND/MAP/COMPUTE → parameters → TRANSFORM → video
```

## Two Types of Flow

| Flow | Examples | Description |
|------|----------|-------------|
| **Media** | video, audio files | Actual content that gets transformed |
| **Data** | beat times, tempo, energy envelope | Analysis results that drive parameters |

## Primitives

### Source Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `SOURCE` | ✅ Implemented | Load single media file |
| `SOURCE_LIST` | ❌ Not implemented | Collect multiple inputs into a list |
| `PARAM` | ❌ Not implemented | Recipe parameter (data, not media) |

### Analysis Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `ANALYZE` | ❌ Not implemented | Extract features from media |

**ANALYZE features:**

| Feature | Output | Description |
|---------|--------|-------------|
| `beats` | `{beat_times: [], tempo: float}` | Beat positions |
| `downbeats` | `{downbeat_times: []}` | First beat of each bar |
| `tempo` | `{bpm: float, confidence: float}` | Tempo detection |
| `energy` | `{envelope: [{time, value}...]}` | Loudness over time |
| `spectrum` | `{bass: [], mid: [], high: []}` | Frequency bands over time |
| `onsets` | `{onset_times: []}` | Note/sound starts |
| `motion_tempo` | `{motion_bpm: float}` | Video motion periodicity |

### Data Processing Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `GROUP` | ❌ Not implemented | Chunk data (e.g., beats → measures) |
| `COMPUTE` | ❌ Not implemented | Arithmetic/expressions on data |
| `SELECT` | ❌ Not implemented | Conditional data selection |
| `BIND` | ❌ Not implemented | Map data ranges to parameter ranges |

**BIND example:**

```yaml
- source: energy.envelope   # 0.0 → 1.0
  target: saturation        # mapped to 1.0 → 2.0
  range: [1.0, 2.0]
  attack_ms: 10             # response shaping
  release_ms: 100
```
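The mapping BIND describes is just a linear rescale of a normalized data stream into a parameter range. A minimal Python sketch, with a hypothetical `bind_value` helper (not part of any implementation):

```python
def bind_value(x, out_range):
    """Linearly map a normalized data value (0.0-1.0) into a parameter range.

    out_range may be inverted (e.g. [3, 0]) to express "more input → less output".
    """
    lo, hi = out_range
    x = min(max(x, 0.0), 1.0)   # clamp to the normalized input domain
    return lo + (hi - lo) * x

# energy 0.5 mapped into saturation range [1.0, 2.0], as in the YAML above
print(bind_value(0.5, [1.0, 2.0]))  # → 1.5
```

Inverted ranges fall out of the same formula: `bind_value(1.0, [3, 0])` yields `0`, which is how "more highs = less blur" can be expressed without a special case.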
### Iteration Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `MAP` | ❌ Not implemented | Apply operation to each item in list |
| `RANDOM_SLICE` | ❌ Not implemented | Extract random segment from random pool item |
| `SEGMENT_AT` | ❌ Not implemented | Cut media at specified times |

**MAP operations:**

- `ANALYZE` - analyze each item
- `TRANSFORM` - apply effects to each item
- `COMPUTE` - calculate value for each item
- `RANDOM_SLICE` - extract random segment for each item

### Transform Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `SEGMENT` | ✅ Implemented | Extract time range |
| `RESIZE` | ✅ Implemented | Scale/crop/pad |
| `TRANSFORM` | ✅ Implemented | Static effects (color, blur, speed) |
| `TRANSFORM_DYNAMIC` | ❌ Not implemented | Time-varying effects from BIND |

### Compose Primitives

| Primitive | Status | Description |
|-----------|--------|-------------|
| `SEQUENCE` | ✅ Implemented | Concatenate in time |
| `LAYER` | ✅ Implemented | Stack spatially |
| `MUX` | ✅ Implemented | Combine video + audio |
| `BLEND` | ✅ Implemented | Blend two inputs |

## Patterns

### Pattern 1: Beat-Synced Cuts

```
music → ANALYZE(beats) → GROUP(4) → MAP(RANDOM_SLICE, videos) → SEQUENCE → MUX(music)
```

Audio drives cut timing, videos provide content.
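The GROUP(4) step is plain data manipulation: beat times in, `{start, duration}` segments out. A sketch in Python (the `group_beats` name is illustrative, not an existing API):

```python
def group_beats(beat_times, size=4):
    """Chunk beat times into measures of `size` beats, emitting one
    {start, duration} segment per complete measure."""
    segments = []
    for i in range(0, len(beat_times) - size, size):
        start = beat_times[i]
        segments.append({"start": start,
                         "duration": beat_times[i + size] - start})
    return segments

# 125 BPM-ish beats, 0.5 s apart → two 2-second measures
beats = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
print(group_beats(beats))
# → [{'start': 0.0, 'duration': 2.0}, {'start': 2.0, 'duration': 2.0}]
```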
### Pattern 2: Energy-Reactive Effects

```
music → ANALYZE(energy) → BIND(saturation, brightness) → TRANSFORM_DYNAMIC(video) → MUX
```

Audio amplitude drives visual intensity.

### Pattern 3: Tempo Matching

```
music  → ANALYZE(tempo) ─────────────┐
                                     ├→ COMPUTE(speed_factor) → TRANSFORM(speed) → SEQUENCE
videos → MAP(ANALYZE(motion_tempo)) ─┘
```

Video speed adjusted to match audio tempo.
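The COMPUTE(speed_factor) step reduces to a ratio of the two detected tempos; a sketch (the `speed_factor` name is hypothetical):

```python
def speed_factor(music_bpm, motion_bpm):
    """Multiplier that retimes a clip so its motion periodicity
    matches the music tempo."""
    return music_bpm / motion_bpm

# 125 BPM music against a clip with 100 BPM motion → play the clip 1.25x faster
print(speed_factor(125.0, 100.0))  # → 1.25
```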
### Pattern 4: Spectrum-Driven Layers

```
music → ANALYZE(spectrum) → BIND(bass→layer1_opacity, high→layer2_opacity)
                                      ↓
video1 ────────────────────→ LAYER ← video2
```

Different frequency bands control different visual layers.

## Design Principles

1. **Separation of concerns**: ANALYZE extracts data, BIND maps it, TRANSFORM applies it
2. **Composability**: Small primitives combine into complex behaviors
3. **Declarative**: Describe *what* you want, not *how* to compute it
4. **Reproducibility**: Seeds and deterministic operations ensure same inputs → same output
5. **Data as first-class**: Analysis results flow through the DAG like media

## Implementation Priority

1. `ANALYZE` (beats, energy, tempo) - foundation for audio-reactive
2. `BIND` - connects analysis to effects
3. `TRANSFORM_DYNAMIC` - applies time-varying effects
4. `MAP` - enables iteration over lists
5. `SOURCE_LIST` - multiple input handling
6. `GROUP`, `COMPUTE`, `SELECT` - data manipulation
README.md (191 lines)
@@ -1,30 +1,185 @@
# Art DAG Recipes

Declarative media composition using content-addressed primitives and effects.
Recipes transform assets using effects from [art-dag](https://github.com/gilesbradshaw/art-dag).

## Recipes
| Recipe | Description | Inputs | Status |
|--------|-------------|--------|--------|
| [identity-cat](recipes/identity-cat/) | Apply identity effect to cat | fixed | ✅ Working |
| [identity-then-dog](recipes/identity-then-dog/) | Chain identity → dog effects | fixed | ✅ Working |
| [dog-concat](recipes/dog-concat/) | Dog video + user video concatenated | fixed + variable | ✅ Working |
| [beat-cuts](recipes/beat-cuts/) | Cut between videos on beats | variable | 🔮 Future |
| [energy-reactive](recipes/energy-reactive/) | Effects pulse with music energy | variable | 🔮 Future |
| [tempo-match](recipes/tempo-match/) | Speed-match videos to music tempo | variable | 🔮 Future |

## Quick Start

```bash
# Upload a recipe
artdag upload-recipe recipes/dog-concat/recipe.yaml

# Upload your video
artdag upload /path/to/my-video.mp4
# → returns content_hash

# Run with variable input
artdag run-recipe <recipe_id> -i source_second:<content_hash>
```
## Recipe Structure

```
recipes/
├── identity-cat/
│   ├── recipe.yaml      # DAG definition
│   └── README.md        # Documentation
├── dog-concat/
│   └── recipe.yaml
└── beat-cuts/           # Future: audio-reactive
    └── recipe.yaml
```

## Registry References

Recipes reference assets and effects by content hash from:

- **Assets**: https://github.com/gilesbradshaw/art-dag/blob/main/registry/registry.json
- **Effects**: https://github.com/gilesbradshaw/art-dag/tree/main/effects

## Recipe Schema

A recipe is a DAG with:

- `name`: Unique recipe identifier
- `inputs`: Assets referenced by content_hash
- `nodes`: DAG nodes using primitives (SOURCE, TRANSFORM, etc.) and effects
- `output`: Expected output hash (for verification)

```yaml
name: recipe-name
version: "1.0"
description: "What this recipe does"

# Content-addressed references
registry:
  assets:
    cat:
      hash: "33268b6e..."
      url: "https://..."
  effects:
    dog:
      hash: "d048fe31..."

# DAG definition
dag:
  nodes:
    - id: source_cat
      type: SOURCE
      config:
        asset: cat            # Fixed: from registry

    - id: user_video
      type: SOURCE
      config:
        input: true           # Variable: supplied at runtime
        name: "User Video"
        description: "Your video file"

    - id: result
      type: SEQUENCE
      inputs:
        - source_cat
        - user_video

  output: result

owner: "@giles@artdag.rose-ash.com"
```

## Primitives

### Implemented

| Primitive | Description | Example |
|-----------|-------------|---------|
| `SOURCE` | Load media file | `config: { asset: cat }` or `{ input: true }` |
| `SEGMENT` | Extract time range | `config: { offset: 0, duration: 5.0 }` |
| `RESIZE` | Scale/crop/pad | `config: { width: 1920, height: 1080, mode: fit }` |
| `TRANSFORM` | Visual effects | `config: { effects: { saturation: 1.5 } }` |
| `SEQUENCE` | Concatenate in time | `config: { transition: { type: cut } }` |
| `LAYER` | Stack spatially | `config: { inputs: [{}, {opacity: 0.5}] }` |
| `MUX` | Combine video + audio | `config: { shortest: true }` |
| `BLEND` | Blend two inputs | `config: { mode: overlay, opacity: 0.5 }` |
| `EFFECT` | Apply registered effect | `config: { effect: dog }` |

### Future (Audio-Reactive)

| Primitive | Description | Example |
|-----------|-------------|---------|
| `ANALYZE` | Extract audio features | `config: { feature: beats }` |
| `BIND` | Map data → parameters | `config: { source: energy, target: saturation }` |
| `MAP` | Apply op to each item | `config: { operation: RANDOM_SLICE }` |
| `TRANSFORM_DYNAMIC` | Time-varying effects | Effects driven by BIND output |
| `SOURCE_LIST` | Multiple inputs as list | `config: { input: true, min_items: 2 }` |
| `GROUP` | Chunk data | `config: { size: 4, output: segments }` |
| `COMPUTE` | Arithmetic on data | `config: { expression: "tempo / 120" }` |

See [PRIMITIVES.md](PRIMITIVES.md) for full design documentation.

## Input Types

### Fixed Inputs

Referenced by content hash from the registry. Always the same.

```yaml
config:
  asset: cat   # Resolved from registry.assets.cat.hash
```

### Variable Inputs

Supplied at runtime by the user.

```yaml
config:
  input: true
  name: "My Video"
  description: "Video to process"
```

## DAG Patterns

### Chain Effects

```
SOURCE → EFFECT → EFFECT → output
```

### Concatenate

```
SOURCE ──┐
         ├→ SEQUENCE → output
SOURCE ──┘
```

### Mux Audio + Video

```
video SOURCE ──┐
               ├→ MUX → output
audio SOURCE ──┘
```

### Audio-Reactive (Future)

```
audio → ANALYZE → BIND ──┐
                         ├→ TRANSFORM_DYNAMIC → MUX → output
video ───────────────────┘
```

## Content Addressing

Everything is identified by a SHA3-256 hash:

- **Assets**: `33268b6e167deaf018cc538de12dbe562612b33e89a749391cef855b320a269b`
- **Effects**: `d048fe313433eb4e38f0e24194ffae91b896ca3e6eed3e50b2cc37b7be495555`
- **Nodes**: `hash(type + config + inputs)` - automatic deduplication
- **Recipes**: Hash of the YAML file

Same inputs + same recipe = same output. Always.
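A sketch of how such node addressing could work. The `node_hash` helper is hypothetical, and canonical JSON serialization is an assumption here, not a documented detail:

```python
import hashlib
import json

def node_hash(node_type, config, input_hashes):
    """hash(type + config + inputs): canonical JSON (sorted keys, no
    whitespace) makes the digest independent of dict ordering."""
    payload = json.dumps(
        {"type": node_type, "config": config, "inputs": input_hashes},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha3_256(payload.encode()).hexdigest()

# Identical nodes collapse to one hash → automatic deduplication
a = node_hash("TRANSFORM", {"effects": {"saturation": 1.5}}, ["dummy-input-hash"])
b = node_hash("TRANSFORM", {"effects": {"saturation": 1.5}}, ["dummy-input-hash"])
assert a == b
```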

## Ownership

Recipes are signed by ActivityPub actors (e.g., `@giles@artdag.rose-ash.com`).

Ownership enables:

- Provenance tracking
- Revenue distribution down the creation chain
- Federated sharing across L2 servers

## Registry References

- **Assets**: https://git.rose-ash.com/art-dag/registry
- **Effects**: https://git.rose-ash.com/art-dag/effects
- **Art Source**: https://git.rose-ash.com/art-dag/art-source

recipes/beat-cuts/recipe.sexp (new file, 58 lines)
@@ -0,0 +1,58 @@
; beat-cuts recipe
; Analyzes music for beats, cuts between videos every N beats
; Demonstrates: ANALYZE → GROUP → MAP → SEQUENCE → MUX
; NOTE: Uses future primitives not yet implemented

(recipe "beat-cuts"
  :version "1.0"
  :description "Cut between videos on every 4 beats of the music"
  :owner "@giles@artdag.rose-ash.com"

  ; === INPUTS ===

  (def music
    (source :input "Music"
            :description "Audio file to analyze for beats"))

  ; Multiple variable inputs (SOURCE_LIST not yet implemented)
  (def video-pool
    (source-list :input "Videos"
                 :description "Video clips to cut between"
                 :min-items 1))

  ; === ANALYSIS ===

  ; Detect beats → outputs data { beat_times: [...], tempo: 125.0 }
  (def beats
    (-> music
        (analyze :beats)))

  ; Group beats into measures of 4
  (def measures
    (-> beats
        (group :size 4 :output "segments")))

  ; === VIDEO PROCESSING ===

  ; For each measure, extract random slice from random video
  (def slices
    (map measures
         :operation "random-slice"
         :pool video-pool
         :seed-from music))

  ; === COMPOSITION ===

  ; Concatenate all slices
  (def video-concat
    (-> slices (sequence)))

  ; Combine with original music
  (mux video-concat music :shortest true))

; === NOTES ===
; New primitives needed:
;   source-list    - collect multiple inputs
;   analyze :beats - beat detection
;   group          - chunk data into groups
;   map            - apply operation to list
recipes/beat-cuts/recipe.yaml (new file, 100 lines)
@@ -0,0 +1,100 @@
# beat-cuts recipe
# Analyzes music for beats, cuts between videos every N beats
# Demonstrates: ANALYZE → GROUP → MAP → SEQUENCE → MUX

name: beat-cuts
version: "1.0"
description: "Cut between videos on every 4 beats of the music"

dag:
  nodes:
    # === INPUTS ===

    - id: music
      type: SOURCE
      config:
        input: true
        name: "Music"
        description: "Audio file to analyze for beats"

    # Video pool - multiple variable inputs
    - id: video_pool
      type: SOURCE_LIST          # NOT IMPLEMENTED: collects multiple inputs into a list
      config:
        input: true
        name: "Videos"
        description: "Video clips to cut between"
        min_items: 1

    # === ANALYSIS ===

    # Detect beats in audio → outputs data, not media
    - id: beats
      type: ANALYZE              # NOT IMPLEMENTED: needs beat detection
      config:
        feature: beats           # What to extract: beats, tempo, energy, spectrum, onsets
        # Output: { beat_times: [0.0, 0.48, 0.96, ...], tempo: 125.0 }
      inputs:
        - music

    # Group beats into measures of 4
    - id: measures
      type: GROUP                # NOT IMPLEMENTED: groups data into chunks
      config:
        size: 4
        output: segments         # Convert to [{start, duration}, ...]
      inputs:
        - beats

    # === VIDEO PROCESSING ===

    # For each measure, extract a random slice from a random video
    - id: slices
      type: MAP                  # NOT IMPLEMENTED: applies operation to each item
      config:
        operation: RANDOM_SLICE  # For each segment: pick random video, random offset
        seed_from: music         # Deterministic based on music hash
      inputs:
        items: measures          # The segments to iterate over
        pool: video_pool         # The videos to sample from

    # === COMPOSITION ===

    # Concatenate all slices
    - id: video_concat
      type: SEQUENCE
      config:
        transition:
          type: cut
      inputs:
        - slices                 # SEQUENCE would need to accept list output from MAP

    # Combine with original music
    - id: final
      type: MUX
      config:
        video_stream: 0
        audio_stream: 1
        shortest: true
      inputs:
        - video_concat
        - music

  output: final

owner: "@giles@artdag.rose-ash.com"

# === NOTES ===
#
# New primitives needed:
#   SOURCE_LIST             - collect multiple inputs into a list
#   ANALYZE(feature: beats) - detect beats, output { beat_times, tempo }
#   GROUP                   - chunk data into groups, output segments
#   MAP                     - apply operation to each item in a list
#   RANDOM_SLICE            - extract random segment from random pool item
#
# Data flow:
#   music → ANALYZE → beat_times[] → GROUP → segments[] → MAP → video_slices[] → SEQUENCE
#
# The key insight: ANALYZE outputs DATA that flows to GROUP/MAP,
# while media files flow through SEQUENCE/MUX
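
# The RANDOM_SLICE + seed_from combination can be sketched in a few lines
# (all names below are hypothetical; the point is that seeding the RNG from
# the music's content hash makes the "random" cuts reproducible):

```python
import random

def random_slices(segments, pool, seed):
    """For each {start, duration} segment, pick a random video from the
    pool and a random offset inside it. Same seed → same choices."""
    rng = random.Random(seed)          # deterministic, isolated RNG
    slices = []
    for seg in segments:
        video = rng.choice(pool)
        offset = rng.uniform(0.0, max(0.0, video["duration"] - seg["duration"]))
        slices.append({"video": video["name"],
                       "offset": offset,
                       "duration": seg["duration"]})
    return slices

pool = [{"name": "a.mp4", "duration": 30.0}, {"name": "b.mp4", "duration": 12.0}]
segs = [{"start": 0.0, "duration": 2.0}, {"start": 2.0, "duration": 2.0}]
# Reproducibility: the same seed yields the same cut list
assert random_slices(segs, pool, "music-hash") == random_slices(segs, pool, "music-hash")
```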
recipes/dog-concat/README.md (new file, 50 lines)
@@ -0,0 +1,50 @@
# dog-concat

Creates the dog video (from cat) and concatenates it with a user-supplied video.

## Variable Inputs

| Node ID | Name | Description |
|---------|------|-------------|
| `source_second` | Second Video | Video to concatenate after the dog video |

## Fixed Inputs

| Asset | Content Hash |
|-------|--------------|
| `cat` | `33268b6e167deaf018cc538de12dbe562612b33e89a749391cef855b320a269b` |

## DAG Structure

```
source_cat (SOURCE:fixed)          source_second (SOURCE:variable)
        ↓                                      ↓
apply_identity (EFFECT:identity)               │
        ↓                                      │
apply_dog (EFFECT:dog)                         │
        ↓                                      │
        └────────────────┬─────────────────────┘
                         ↓
              concat_result (SEQUENCE)
                         ↓
                      output
```

## Usage

```bash
# 1. Upload recipe
artdag upload-recipe recipes/dog-concat/recipe.yaml

# 2. Upload your second video to get its content hash
artdag upload /path/to/my-video.mp4

# 3. Run the recipe with the variable input
artdag run-recipe <recipe_id> -i source_second:<content_hash_of_video>
```

## Output

A video containing:

1. The dog video (generated from cat.jpg via identity→dog effects)
2. Followed by your supplied second video (cut transition)
recipes/dog-concat/recipe.sexp (new file, 35 lines)
@@ -0,0 +1,35 @@
; dog-concat recipe
; Creates dog video from cat, then concatenates with a user-supplied video
; Demonstrates: def bindings, variable inputs, branching DAG

(recipe "dog-concat"
  :version "1.0"
  :description "Create dog video from cat, concatenate with another video"
  :owner "@giles@artdag.rose-ash.com"

  ; Registry
  (asset cat
    :cid "QmXrj6tSSn1vQXxxEY2Tyoudvt4CeeqR9gGQwSt7WFrhMZ"
    :url "https://rose-ash.com/content/images/2026/01/cat.jpg")

  (effect identity
    :cid "640ea11ee881ebf4101af0a955439105ab11e763682b209e88ea08fc66e1cc03"
    :url "https://github.com/gilesbradshaw/art-dag/tree/main/effects/identity")

  (effect dog
    :cid "QmT99H4MC5p18MGuxAeKGeXD71cGCzMNRxFfvt4FuCwpn6"
    :url "https://github.com/gilesbradshaw/art-dag/tree/main/effects/dog")

  ; Create dog video from cat
  (def dog-video
    (-> (source cat)
        (effect identity)
        (effect dog)))

  ; User-supplied second video
  (def second-video
    (source :input "Second Video"
            :description "Video to concatenate after the dog video"))

  ; Concatenate: dog first, then user video
  (sequence dog-video second-video))
recipes/dog-concat/recipe.yaml (new file, 67 lines)
@@ -0,0 +1,67 @@
# dog-concat recipe
# Creates dog video from cat, then concatenates with a user-supplied video
# Demonstrates: SOURCE → EFFECT → EFFECT → SEQUENCE (with variable input) → output

name: dog-concat
version: "1.0"
description: "Create dog video from cat, concatenate with another video"

# Registry references (by content hash)
registry:
  assets:
    cat:
      hash: "33268b6e167deaf018cc538de12dbe562612b33e89a749391cef855b320a269b"
      url: "https://rose-ash.com/content/images/2026/01/cat.jpg"
  effects:
    identity:
      hash: "640ea11ee881ebf4101af0a955439105ab11e763682b209e88ea08fc66e1cc03"
      url: "https://github.com/gilesbradshaw/art-dag/tree/main/effects/identity"
    dog:
      hash: "d048fe313433eb4e38f0e24194ffae91b896ca3e6eed3e50b2cc37b7be495555"
      url: "https://github.com/gilesbradshaw/art-dag/tree/main/effects/dog"

# DAG definition
dag:
  nodes:
    # First: create the dog video (same as identity-then-dog)
    - id: source_cat
      type: SOURCE
      config:
        asset: cat

    - id: apply_identity
      type: EFFECT
      config:
        effect: identity
      inputs:
        - source_cat

    - id: apply_dog
      type: EFFECT
      config:
        effect: dog
      inputs:
        - apply_identity

    # Second: load the user-supplied video (variable input)
    - id: source_second
      type: SOURCE
      config:
        input: true
        name: "Second Video"
        description: "Video to concatenate after the dog video"

    # Concatenate: dog video first, then second video
    - id: concat_result
      type: SEQUENCE
      config:
        transition:
          type: cut
      inputs:
        - apply_dog
        - source_second

  output: concat_result

# Ownership
owner: "@giles@artdag.rose-ash.com"
@@ -9,17 +9,16 @@

 ; Registry
 (asset cat
-  :hash "33268b6e167deaf018cc538de12dbe562612b33e89a749391cef855b320a269b"
-  :url "https://rose-ash.com/content/images/2026/01/cat.jpg")
+  :cid "QmXrj6tSSn1vQXxxEY2Tyoudvt4CeeqR9gGQwSt7WFrhMZ")

 (effect identity
-  :hash "8d8dc76b311e8146371a4dc19450c3845109928cf646333b43eea067f36e2bba")
+  :cid "640ea11ee881ebf4101af0a955439105ab11e763682b209e88ea08fc66e1cc03")

 (effect dog
-  :hash "84e7c6d79a1a8cbc8241898b791683f796087af3ee3830c1421291d24ddce2cf")
+  :cid "QmT99H4MC5p18MGuxAeKGeXD71cGCzMNRxFfvt4FuCwpn6")

 (effect invert
-  :hash "9144a60cdb73b9d3bf5ba2b4333e5b7e381ab02d2b09ee585375b4fa68d35327")
+  :cid "QmPWaW5E5WFrmDjT6w8enqvtJhM8c5jvQu7XN1doHA3Z7J")

 ; Create dog video from cat
 (def dog-video
recipes/energy-reactive/recipe.sexp (new file, 80 lines)
@@ -0,0 +1,80 @@
; energy-reactive recipe
; Analyzes audio energy over time, applies visual effects driven by loudness
; Demonstrates: ANALYZE → BIND → TRANSFORM_DYNAMIC
; NOTE: Uses future primitives not yet implemented

(recipe "energy-reactive"
  :version "1.0"
  :description "Video effects that pulse with the music's energy"
  :owner "@giles@artdag.rose-ash.com"

  ; === INPUTS ===

  (def music
    (source :input "Music"
            :description "Audio file to analyze"))

  (def video
    (source :input "Video"
            :description "Video to apply reactive effects to"))

  ; === ANALYSIS ===

  ; Extract energy envelope (loudness over time)
  (def energy
    (-> music
        (analyze :energy
                 :window-ms 50
                 :normalize true)))

  ; Extract frequency bands
  (def spectrum
    (-> music
        (analyze :spectrum
                 :bands {:bass [20 200]
                         :mid  [200 2000]
                         :high [2000 20000]})))

  ; === EFFECT BINDING ===

  ; Map analysis data to effect parameters
  (def effects-bound
    (bind
      ; energy → saturation boost
      {:source "energy.envelope"
       :target "saturation"
       :range [1.0 2.0]}

      ; energy → brightness pulse
      {:source "energy.envelope"
       :target "brightness"
       :range [0.0 0.3]}

      ; bass → zoom pulse
      {:source "spectrum.bass"
       :target "scale"
       :range [1.0 1.1]
       :attack-ms 10
       :release-ms 100}

      ; highs → clarity (inverse blur)
      {:source "spectrum.high"
       :target "blur"
       :range [3 0]}))

  ; === VIDEO PROCESSING ===

  ; Apply time-varying effects
  (def reactive-video
    (-> video
        (transform-dynamic effects-bound)))

  ; === OUTPUT ===

  (mux reactive-video music :shortest true))

; === NOTES ===
; New primitives:
;   analyze :energy/:spectrum - audio feature extraction
;   bind                      - map data streams to effect parameters
;   transform-dynamic         - effects with time-varying params
recipes/energy-reactive/recipe.yaml (new file, 128 lines)
@@ -0,0 +1,128 @@
# energy-reactive recipe
# Analyzes audio energy over time, applies visual effects driven by loudness
# Demonstrates: ANALYZE → BIND → TRANSFORM_DYNAMIC

name: energy-reactive
version: "1.0"
description: "Video effects that pulse with the music's energy"

dag:
  nodes:
    # === INPUTS ===

    - id: music
      type: SOURCE
      config:
        input: true
        name: "Music"
        description: "Audio file to analyze"

    - id: video
      type: SOURCE
      config:
        input: true
        name: "Video"
        description: "Video to apply reactive effects to"

    # === ANALYSIS ===

    # Extract energy envelope from audio
    - id: energy
      type: ANALYZE              # NOT IMPLEMENTED
      config:
        feature: energy
        window_ms: 50            # Analysis window size
        normalize: true          # 0.0 to 1.0
        # Output: { envelope: [{time: 0.0, value: 0.3}, {time: 0.05, value: 0.7}, ...] }
      inputs:
        - music

    # Extract frequency bands for more control
    - id: spectrum
      type: ANALYZE              # NOT IMPLEMENTED
      config:
        feature: spectrum
        bands:
          bass: [20, 200]        # Hz ranges
          mid: [200, 2000]
          high: [2000, 20000]
        # Output: { bass: [...], mid: [...], high: [...] }
      inputs:
        - music

    # === EFFECT BINDING ===

    # Bind analysis data to effect parameters
    # This creates a time-varying parameter stream
    - id: effects_bound
      type: BIND                 # NOT IMPLEMENTED: connects data → parameters
      config:
        mappings:
          # energy 0→1 maps to saturation 1.0→2.0
          - source: energy.envelope
            target: saturation
            range: [1.0, 2.0]

          # energy maps to brightness pulse
          - source: energy.envelope
            target: brightness
            range: [0.0, 0.3]

          # bass hits → zoom pulse
          - source: spectrum.bass
            target: scale
            range: [1.0, 1.1]
            attack_ms: 10        # Fast attack
            release_ms: 100      # Slower release

          # high frequencies → blur reduction (clarity on highs)
          - source: spectrum.high
            target: blur
            range: [3, 0]        # Inverse: more highs = less blur
      inputs:
        - energy
        - spectrum

    # === VIDEO PROCESSING ===

    # Apply time-varying effects to video
    - id: reactive_video
      type: TRANSFORM_DYNAMIC    # NOT IMPLEMENTED: TRANSFORM with time-varying params
      config:
        effects_source: effects_bound
        # Effects applied per-frame based on bound parameters
      inputs:
        - video
        - effects_bound

    # === COMPOSITION ===

    - id: final
      type: MUX
      config:
        shortest: true
      inputs:
        - reactive_video
        - music

  output: final

owner: "@giles@artdag.rose-ash.com"

# === NOTES ===
#
# New primitives needed:
#   ANALYZE(feature: energy)   - extract loudness envelope over time
#   ANALYZE(feature: spectrum) - extract frequency band envelopes
#   BIND                       - map analysis data to effect parameter ranges
#   TRANSFORM_DYNAMIC          - apply effects with time-varying parameters
#
# This pattern:
#   audio → ANALYZE → data streams → BIND → parameter streams → TRANSFORM_DYNAMIC → video
#
# The BIND primitive is key: it's a declarative way to say
# "when bass is loud, zoom in" without writing code
#
# Attack/release in BIND smooths the response:
# - attack_ms: how fast to respond to increases
# - release_ms: how fast to return to baseline
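
# One way to realize that attack/release shaping is a one-pole envelope
# follower with separate rise and fall time constants; a hypothetical sketch
# (the `smooth` helper is illustrative, not an existing implementation):

```python
import math

def smooth(values, dt_ms, attack_ms, release_ms):
    """One-pole attack/release smoothing of a parameter stream:
    rise quickly toward increases (attack), fall slowly back (release)."""
    out, y = [], 0.0
    for x in values:
        tau = attack_ms if x > y else release_ms   # pick time constant
        coeff = math.exp(-dt_ms / max(tau, 1e-6))
        y = x + (y - x) * coeff                    # move y toward x
        out.append(y)
    return out

# A single bass hit: near-instant rise (attack_ms=10), gradual decay (release_ms=100)
env = smooth([0, 1, 0, 0, 0], dt_ms=50, attack_ms=10, release_ms=100)
```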
recipes/identity-cat/recipe.sexp (new file, 21 lines)
@@ -0,0 +1,21 @@
; identity-cat recipe
; Applies the identity effect to the foundational cat image
; Demonstrates: SOURCE → EFFECT → output

(recipe "identity-cat"
  :version "1.0"
  :description "Apply identity effect to cat - output equals input"
  :owner "@giles@artdag.rose-ash.com"

  ; Registry
  (asset cat
    :cid "QmXrj6tSSn1vQXxxEY2Tyoudvt4CeeqR9gGQwSt7WFrhMZ"
    :url "https://rose-ash.com/content/images/2026/01/cat.jpg")

  (effect identity
    :cid "640ea11ee881ebf4101af0a955439105ab11e763682b209e88ea08fc66e1cc03"
    :url "https://github.com/gilesbradshaw/art-dag/tree/main/effects/identity")

  ; DAG: source → effect
  (-> (source cat)
      (effect identity)))
recipes/identity-then-dog/recipe.sexp (new file, 26 lines)
@@ -0,0 +1,26 @@
|
||||
```lisp
; identity-then-dog recipe
; Chains identity effect followed by dog effect
; Demonstrates: SOURCE → EFFECT → EFFECT → output

(recipe "identity-then-dog"
  :version "1.0"
  :description "Apply identity then dog effect to cat - makes a dog video"
  :owner "@giles@artdag.rose-ash.com"

  ; Registry
  (asset cat
    :cid "QmXrj6tSSn1vQXxxEY2Tyoudvt4CeeqR9gGQwSt7WFrhMZ"
    :url "https://rose-ash.com/content/images/2026/01/cat.jpg")

  (effect identity
    :cid "640ea11ee881ebf4101af0a955439105ab11e763682b209e88ea08fc66e1cc03"
    :url "https://github.com/gilesbradshaw/art-dag/tree/main/effects/identity")

  (effect dog
    :cid "QmT99H4MC5p18MGuxAeKGeXD71cGCzMNRxFfvt4FuCwpn6"
    :url "https://github.com/gilesbradshaw/art-dag/tree/main/effects/dog")

  ; DAG: source → identity → dog
  (-> (source cat)
      (effect identity)
      (effect dog)))
```
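The `(-> ...)` form threads the source through each effect in order, like a pipeline. A rough Python analogue, with hypothetical stand-in effects (the real effects operate on media, not strings):

```python
from functools import reduce

def thread(value, *steps):
    # Rough analogue of the recipe's (-> ...) form: feed the source
    # through each effect left to right.
    return reduce(lambda acc, step: step(acc), steps, value)

identity = lambda frame: frame                   # output equals input
dog = lambda frame: frame.replace("cat", "dog")  # hypothetical stand-in

result = thread("cat picture", identity, dog)    # → "dog picture"
```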
92 recipes/tempo-match/recipe.sexp Normal file
@@ -0,0 +1,92 @@
```lisp
; tempo-match recipe
; Detects music tempo, adjusts video playback speed to match
; Demonstrates: ANALYZE → COMPUTE → TRANSFORM with conditional logic
; NOTE: Uses future primitives not yet implemented

(recipe "tempo-match"
  :version "1.0"
  :description "Speed-match videos to music tempo, sync cuts to downbeats"
  :owner "@giles@artdag.rose-ash.com"

  ; === INPUTS ===

  (def music
    (source :input "Music"
            :description "Audio file - tempo will be detected"))

  (def video-pool
    (source-list :input "Videos"
                 :description "Videos to tempo-match and sequence"
                 :min-items 1))

  ; Optional parameter override
  (def target-bpm
    (param :name "Target BPM"
           :description "Override detected tempo (optional)"
           :type "number"
           :required false))

  ; === ANALYSIS ===

  ; Detect tempo from audio
  (def tempo
    (-> music
        (analyze :tempo)))

  ; Detect downbeats (first beat of each bar)
  (def downbeats
    (-> music
        (analyze :downbeats :time-signature 4)))

  ; === COMPUTATION ===

  ; Use override if provided, otherwise detected tempo
  (def final-tempo
    (select :condition "target-bpm != null"
            :if-true target-bpm
            :if-false "tempo.bpm"))

  ; Detect motion tempo in each video
  (def video-tempos
    (map video-pool
         :operation "analyze"
         :feature "motion-tempo"))

  ; Compute speed multiplier for each video
  (def speed-factors
    (map video-tempos
         :operation "compute"
         :expression "final-tempo / item.motion-tempo"
         :clamp [0.5 2.0]))

  ; === VIDEO PROCESSING ===

  ; Apply speed adjustment
  (def tempo-matched
    (map video-pool
         :operation "transform"
         :speed speed-factors))

  ; Segment at downbeats, cycling through videos
  (def downbeat-segments
    (segment-at tempo-matched
                :times downbeats
                :distribute "round-robin"))

  ; Concatenate segments
  (def video-final
    (-> downbeat-segments (sequence)))

  ; === OUTPUT ===

  (mux video-final music))

; === NOTES ===
; New primitives:
;   param       - recipe parameter (data, not media)
;   source-list - multiple media inputs
;   analyze     - :tempo / :downbeats / :motion-tempo
;   select      - conditional data selection
;   compute     - arithmetic expressions
;   map         - apply to list items
;   segment-at  - cut at specific times
```
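The `speed-factors` computation above is just a clamped ratio. A sketch of what the `compute` expression with `:clamp [0.5 2.0]` would evaluate per item (the function name is illustrative):

```python
def speed_factor(final_tempo, motion_tempo, lo=0.5, hi=2.0):
    # Target tempo over the video's detected motion tempo,
    # clamped to [0.5, 2.0] as in :clamp above.
    return max(lo, min(hi, final_tempo / motion_tempo))

speed_factor(128.0, 40.0)   # 3.2, clamped to 2.0
speed_factor(60.0, 150.0)   # 0.4, clamped to 0.5
```

The clamp keeps any single video from being sped up or slowed down so far that motion looks unnatural.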
152 recipes/tempo-match/recipe.yaml Normal file
@@ -0,0 +1,152 @@
```yaml
# tempo-match recipe
# Detects music tempo, adjusts video playback speed to match
# Demonstrates: ANALYZE → COMPUTE → TRANSFORM

name: tempo-match
version: "1.0"
description: "Speed-match videos to music tempo, sync cuts to downbeats"

dag:
  nodes:
    # === INPUTS ===

    - id: music
      type: SOURCE
      config:
        input: true
        name: "Music"
        description: "Audio file - tempo will be detected"

    - id: video_pool
      type: SOURCE_LIST  # NOT IMPLEMENTED
      config:
        input: true
        name: "Videos"
        description: "Videos to tempo-match and sequence"
        min_items: 1

    - id: target_bpm
      type: PARAM  # NOT IMPLEMENTED: recipe parameter (not media)
      config:
        name: "Target BPM"
        description: "Override detected tempo (optional)"
        type: number
        required: false

    # === ANALYSIS ===

    - id: tempo
      type: ANALYZE  # NOT IMPLEMENTED
      config:
        feature: tempo
        # Output: { bpm: 128.0, confidence: 0.95 }
      inputs:
        - music

    - id: downbeats
      type: ANALYZE  # NOT IMPLEMENTED
      config:
        feature: downbeats   # First beat of each bar
        time_signature: 4    # Assume 4/4
        # Output: { downbeat_times: [0.0, 1.88, 3.76, ...] }
      inputs:
        - music

    # === COMPUTATION ===

    # Decide which tempo to use
    - id: final_tempo
      type: SELECT  # NOT IMPLEMENTED: conditional/fallback
      config:
        condition: "target_bpm != null"
        if_true: target_bpm
        if_false: tempo.bpm
      inputs:
        - target_bpm
        - tempo

    # For each video, detect its "natural" tempo (motion analysis)
    - id: video_tempos
      type: MAP  # NOT IMPLEMENTED
      config:
        operation: ANALYZE
        feature: motion_tempo  # Estimate tempo from visual motion
      inputs:
        items: video_pool

    # Compute speed multiplier for each video
    - id: speed_factors
      type: MAP  # NOT IMPLEMENTED
      config:
        operation: COMPUTE
        expression: "final_tempo / item.motion_tempo"
        clamp: [0.5, 2.0]  # Limit speed range
      inputs:
        items: video_tempos
        params:
          - final_tempo

    # === VIDEO PROCESSING ===

    # Apply speed adjustment to each video
    - id: tempo_matched
      type: MAP  # NOT IMPLEMENTED
      config:
        operation: TRANSFORM
        effects:
          speed: "{{speed_factor}}"  # From speed_factors
      inputs:
        items: video_pool
        params: speed_factors

    # Segment videos to fit between downbeats
    - id: downbeat_segments
      type: SEGMENT_AT  # NOT IMPLEMENTED: cut at specific times
      config:
        times_from: downbeats.downbeat_times
        distribute: round_robin  # Cycle through videos
      inputs:
        - tempo_matched

    # Sequence the segments
    - id: video_final
      type: SEQUENCE
      config:
        transition:
          type: cut
      inputs:
        - downbeat_segments

    # === OUTPUT ===

    - id: final
      type: MUX
      inputs:
        - video_final
        - music

output: final

owner: "@giles@artdag.rose-ash.com"

# === NOTES ===
#
# New primitives needed:
#   PARAM                          - recipe parameter that's data, not media
#   SOURCE_LIST                    - collect multiple media inputs
#   ANALYZE(feature: tempo)        - detect BPM
#   ANALYZE(feature: downbeats)    - detect bar starts
#   ANALYZE(feature: motion_tempo) - estimate tempo from video motion
#   SELECT                         - conditional data selection
#   COMPUTE                        - arithmetic/expressions on data
#   MAP                            - apply operation to list items
#   SEGMENT_AT                     - cut media at specified times
#
# This recipe shows:
#   1. Optional parameter override (target_bpm)
#   2. Analysis of BOTH audio AND video
#   3. Computation combining multiple data sources
#   4. Speed-matching videos to a common tempo
#   5. Cutting on musical boundaries (downbeats)
#
# The pattern generalizes: any audio feature can drive any video parameter
```
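The `SEGMENT_AT` node's `distribute: round_robin` behaviour can be sketched as follows, using the example `downbeat_times` from the ANALYZE node's output comment. The `(video_index, start, end)` triple is an assumed representation, not the engine's actual segment type:

```python
def round_robin_segments(downbeat_times, num_videos):
    # Assign each interval between consecutive downbeats to a video
    # in round-robin order, cycling through the pool.
    return [(i % num_videos, downbeat_times[i], downbeat_times[i + 1])
            for i in range(len(downbeat_times) - 1)]

round_robin_segments([0.0, 1.88, 3.76, 5.64], 2)
# → [(0, 0.0, 1.88), (1, 1.88, 3.76), (0, 3.76, 5.64)]
```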