Blueprint Authoring Guide
This comprehensive guide explains how to author Renku blueprints: the YAML schema, how producers compose into workflows, and the rules the planner and runner enforce.
Overview
Blueprints are YAML files that define complete video generation workflows. This guide is for advanced users who want to:
- Create custom blueprints for specific use cases
- Build reusable producers for new AI models
- Understand the internal workings of the planner and runner
If you’re new to Renku, start with the Quick Start guide first.
Core Concepts
System Inputs
System inputs are special inputs that Renku automatically recognizes by name. You do not need to declare these in your blueprint’s inputs: section - they are automatically available when referenced in connections.
| System Input | Type | Description |
|---|---|---|
| Duration | int | Total duration of the movie in seconds. Provided in input YAML. |
| NumOfSegments | int | Number of segments to generate. Provided in input YAML. |
| SegmentDuration | int | Duration of each segment. Auto-computed as Duration / NumOfSegments. |
| MovieId | string | Unique identifier for the movie. Auto-injected by the system. |
| StorageRoot | string | Root directory for storage. Auto-injected from CLI config. |
| StorageBasePath | string | Base path within storage root. Auto-injected from CLI config. |
How it works:
- System inputs are automatically injected when referenced in blueprint connections
- Duration and NumOfSegments values are provided in the input YAML file
- SegmentDuration is auto-computed as Duration / NumOfSegments (unless you provide it explicitly)
- Timeline composers and video exporters automatically receive MovieId, StorageRoot, StorageBasePath
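For example, a minimal input file only needs to supply the two user-provided system inputs; the rest are injected or derived (see Input Files Reference for the full format):

```yaml
inputs:
  Duration: 60        # total movie duration in seconds
  NumOfSegments: 3    # number of segments to generate
  # SegmentDuration is derived as Duration / NumOfSegments = 20
  # MovieId, StorageRoot, StorageBasePath are injected automatically
```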
Blueprint vs Producer
Blueprints are workflow definitions that:
- Define user-facing inputs and final artifacts
- Import and connect multiple producers
- Define loops for parallel execution
- Specify collectors for fan-in aggregation
Producers are execution units that:
- Accept inputs and produce artifacts
- Map inputs to specific AI model parameters
- Support multiple model variants
- Can be reused across multiple blueprints
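As a rough sketch of how the two relate (file names here are illustrative; the full schemas follow below), a blueprint imports a producer by name and wires data into it, while the producer only declares its own inputs, artifacts, and model mappings:

```yaml
# blueprint.yaml - workflow definition (illustrative sketch)
producers:
  - name: ScriptProducer
    producer: prompt/script        # imports the reusable producer below
connections:
  - from: InquiryPrompt
    to: ScriptProducer.InquiryPrompt
---
# script.yaml - producer definition (execution unit, reusable across blueprints)
inputs:
  - name: InquiryPrompt
    type: string
artifacts:
  - name: NarrationScript
    type: array
    itemType: string
```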
Data Flow
Data flows through the system via connections:
```
Blueprint Input → Producer Input → Producer → Artifact → Next Producer Input
                                              │
                                              └→ Blueprint Artifact
```
For looped producers:
```
ScriptProducer.NarrationScript[0] → AudioProducer[0].TextInput
ScriptProducer.NarrationScript[1] → AudioProducer[1].TextInput
ScriptProducer.NarrationScript[2] → AudioProducer[2].TextInput
```
Blueprint YAML Reference
Complete Schema
```yaml
meta:
  name: <string>          # Human-readable name (required)
  description: <string>   # Purpose and behavior
  id: <string>            # Unique identifier in PascalCase (required)
  version: <semver>       # Semantic version (e.g., 0.1.0)
  author: <string>        # Creator name
  license: <string>       # License type (e.g., MIT)

inputs:
  - name: <string>        # Input identifier in PascalCase (required)
    description: <string> # Purpose and usage
    type: <string>        # Data type (required)
    required: <boolean>   # Whether mandatory (default: true)

artifacts:
  - name: <string>        # Artifact identifier in PascalCase (required)
    description: <string> # Purpose and content
    type: <string>        # Output type (required)
    itemType: <string>    # Element type for arrays
    countInput: <string>  # Input that determines array size

loops:
  - name: <string>          # Loop identifier in lowercase (required)
    description: <string>   # Purpose and behavior
    countInput: <string>    # Input that determines iteration count (required)
    countInputOffset: <int> # Offset added to count
    parent: <string>        # Parent loop for nesting

producers:
  - name: <string>      # Producer instance name in PascalCase (required)
    producer: <string>  # Qualified name: "category/name" (preferred)
    path: <string>      # Relative path to producer YAML (for custom producers)
    loop: <string>      # Loop dimension(s) (e.g., "segment")

connections:
  - from: <string>  # Source reference (required)
    to: <string>    # Target reference (required)
    if: <string>    # Condition name for conditional execution

conditions:
  <conditionName>:
    when: <string>        # Path to artifact field (e.g., Producer.Artifact.Field[dim])
    is: <any>             # Equals this value
    isNot: <any>          # Does not equal this value
    contains: <string>    # String contains value
    greaterThan: <number> # Greater than
    lessThan: <number>    # Less than
    exists: <boolean>     # Field exists and is truthy
    matches: <string>     # Regex pattern
    all: <array>          # AND: all sub-conditions must pass
    any: <array>          # OR: at least one sub-condition must pass

collectors:
  - name: <string>     # Collector identifier (required)
    from: <string>     # Source artifact reference (required)
    into: <string>     # Target input reference (required)
    groupBy: <string>  # Loop dimension for grouping (required)
    orderBy: <string>  # Loop dimension for ordering
```
Input Types
| Type | Description |
|---|---|
| string | Text value |
| int | Integer number |
| image | Image file reference |
| audio | Audio file reference |
| video | Video file reference |
| json | Structured JSON data |
| collection | Array of items (used with fanIn: true) |
Artifact Types
| Type | Description |
|---|---|
| string | Text output |
| json | Structured JSON with virtual property expansion (see JSON Artifacts) |
| image | Image file |
| audio | Audio file |
| video | Video file |
| array | Array of items (requires itemType) |
| multiDimArray | Multi-dimensional array |
Example Blueprint
```yaml
meta:
  name: Video Only Narration
  description: Generate video segments from a textual inquiry.
  id: Video
  version: 0.1.0
  author: Renku
  license: MIT

inputs:
  # Note: Duration, NumOfSegments, SegmentDuration are system inputs
  # They don't need to be declared - just provide values in input YAML
  - name: InquiryPrompt
    description: The prompt describing the movie script.
    type: string
    required: true
  - name: Style
    description: Visual style for the video.
    type: string
    required: true

artifacts:
  - name: SegmentVideo
    description: Generated video for each segment.
    type: array
    itemType: video
    countInput: NumOfSegments  # References system input

loops:
  - name: segment
    description: Iterates over narration segments.
    countInput: NumOfSegments  # References system input

producers:
  - name: ScriptProducer
    producer: prompt/script
  - name: VideoPromptProducer
    producer: prompt/video
  - name: VideoProducer
    producer: asset/text-to-video
    loop: segment

connections:
  # Wire inputs to ScriptProducer
  - from: InquiryPrompt
    to: ScriptProducer.InquiryPrompt
  - from: Duration
    to: ScriptProducer.Duration        # System input - no declaration needed
  - from: NumOfSegments
    to: ScriptProducer.NumOfSegments   # System input - no declaration needed

  # Wire script to VideoPromptProducer (looped)
  - from: ScriptProducer.NarrationScript[segment]
    to: VideoPromptProducer[segment].NarrativeText
  - from: Style
    to: VideoPromptProducer[segment].Style
  - from: SegmentDuration
    to: VideoPromptProducer[segment].SegmentDuration  # Auto-computed

  # Wire prompts to VideoProducer (looped)
  - from: VideoPromptProducer.VideoPrompt[segment]
    to: VideoProducer[segment].Prompt
  - from: SegmentDuration
    to: VideoProducer[segment].Duration  # Auto-computed

  # Wire output to blueprint artifact
  - from: VideoProducer[segment].SegmentVideo
    to: SegmentVideo[segment]
```
Producer YAML Reference
Complete Schema
```yaml
meta:
  name: <string>          # Human-readable name (required)
  description: <string>   # Purpose and behavior
  id: <string>            # Unique identifier in PascalCase (required)
  version: <semver>       # Semantic version
  author: <string>        # Creator name
  license: <string>       # License type
  promptFile: <string>    # Path to TOML prompt config (LLM producers only)
  outputSchema: <string>  # Path to JSON schema for structured output (LLM producers only)

inputs:
  - name: <string>        # Input identifier (required)
    description: <string> # Purpose and usage
    type: <string>        # Data type (required)
    fanIn: <boolean>      # Is this a fan-in collection input
    dimensions: <string>  # Dimension labels (e.g., "segment")

artifacts:
  - name: <string>        # Artifact identifier (required)
    description: <string> # Purpose and content
    type: <string>        # Output type (required)
    itemType: <string>    # Element type for arrays
    countInput: <string>  # Input for array size

mappings:
  <provider>:                          # Provider name: replicate, fal-ai, openai, renku
    <model>:                           # Model identifier
      <ProducerInput>: <providerField> # Simple mapping
      <ProducerInput>:                 # Object form
        field: <providerField>
        transform:                     # Value transformation
          <inputValue>: <providerValue>
        expand: <boolean>              # Spread object into payload
        combine:                       # Combine multiple inputs
          inputs: [<input1>, <input2>]
          table:
            "<val1>+<val2>": <result>
        conditional:                   # Conditional mapping
          when:
            input: <inputName>
            notEmpty: <boolean>
          then: <mapping>
```
Note on LLM Producers: For producers that use LLMs (OpenAI, etc.), the promptFile and outputSchema fields are defined in the meta: section of the producer YAML file. These are intrinsic to the producer’s functionality.
Example: Script Producer (OpenAI)
```yaml
meta:
  name: Script Generation
  description: Generate documentary scripts.
  id: ScriptProducer
  version: 0.1.0
  promptFile: ./script.toml            # Prompt template for LLM
  outputSchema: ./script-output.json   # JSON schema for structured output

inputs:
  - name: InquiryPrompt
    type: string
  - name: Duration
    type: int
  - name: NumOfSegments
    type: int
  - name: Audience
    type: string

artifacts:
  - name: MovieTitle
    type: string
  - name: MovieSummary
    type: string
  - name: NarrationScript
    type: array
    itemType: string
    countInput: NumOfSegments
```
Note: LLM producers define promptFile and outputSchema in the meta: section. The model selection (e.g., gpt-4o) is specified in the input template file.
Example: Video Producer (with Mappings)
```yaml
meta:
  name: Video Generation
  id: VideoProducer
  version: 0.1.0

inputs:
  - name: Prompt
    type: string
  - name: AspectRatio
    type: string
  - name: Resolution
    type: string

artifacts:
  - name: SegmentVideo
    type: video

mappings:
  replicate:
    bytedance/seedance-1-pro-fast:
      Prompt: prompt
      AspectRatio: aspect_ratio
      Resolution: resolution
    google/veo-3.1-fast:
      Prompt: prompt
      AspectRatio: aspect_ratio
  fal-ai:
    bytedance/seedream/v4.5/text-to-image:
      Prompt: prompt
      AspectRatio:
        field: image_size
        transform:
          "16:9": landscape_16_9
          "9:16": portrait_16_9
          "1:1": square_hd
```
Derived Video Artifacts
All video-generating producers automatically support derived artifacts that are extracted from generated videos using ffmpeg. These enable powerful workflows like video-to-video chaining.
Available Derived Artifacts
| Artifact | Type | Description | Use Cases |
|---|---|---|---|
| FirstFrame | image (PNG) | First frame of the video | Visual preview, thumbnails |
| LastFrame | image (PNG) | Last frame of the video | Seamless video transitions, end-frame matching |
| AudioTrack | audio (WAV) | Audio track from video | Audio post-processing, mixing |
How It Works
- Declare derived artifacts in your producer YAML (already included in catalog producers)
- Connect them to downstream producers in your blueprint
- Automatic extraction happens when the video is downloaded - only connected artifacts are extracted
- Graceful fallback: if ffmpeg is not installed, a warning is shown but the blueprint still runs
Example: Seamless Video Transitions
Chain videos together using the last frame of one video as the start image for the next:
```yaml
loops:
  - name: segment
    countInput: NumOfSegments

producers:
  - name: TextToVideoProducer
    producer: asset/text-to-video
  - name: ImageToVideoProducer
    producer: asset/image-to-video
    loop: segment

connections:
  # First segment: text-to-video
  - from: Prompt
    to: TextToVideoProducer.Prompt

  # Use LastFrame from first video as StartImage for transitions
  - from: TextToVideoProducer.LastFrame
    to: ImageToVideoProducer[0].StartImage

  # Chain subsequent segments using previous segment's LastFrame
  - from: ImageToVideoProducer[segment].LastFrame
    to: ImageToVideoProducer[segment+1].StartImage
```
Transcription and Karaoke Subtitles
Add speech transcription and karaoke-style animated subtitles to your videos. This creates engaging effects similar to Instagram and TikTok, with word-level highlighting synchronized to the audio.
Components
| Component | Description |
|---|---|
| TranscriptionProducer | Converts audio segments to word-level transcripts with precise timestamps |
| VideoExporter | Renders the transcript as animated karaoke subtitles on the final video |
TranscriptionProducer
```yaml
meta:
  name: Speech Transcription
  id: TranscriptionProducer
  version: 0.1.0

inputs:
  - name: Timeline
    description: Timeline document with audio clip timing information.
    type: json
  - name: AudioSegments
    description: Audio clips to transcribe (from Audio track).
    type: collection
    itemType: audio
    dimensions: segment
    fanIn: true
  - name: LanguageCode
    description: ISO 639-3 language code (e.g., eng, spa, fra).
    type: string

artifacts:
  - name: Transcription
    description: Word-level transcription aligned to video timeline.
    type: json
```
Blueprint Wiring
Add transcription to an existing video + audio blueprint:
```yaml
producers:
  # ... existing producers ...
  - name: TranscriptionProducer
    producer: asset/transcription
  - name: VideoExporter
    producer: composition/video-exporter

connections:
  # ... existing connections ...

  # Wire timeline to transcription producer
  - from: TimelineComposer.Timeline
    to: TranscriptionProducer.Timeline

  # Wire audio segments to transcription producer
  - from: AudioProducer[segment].GeneratedAudio
    to: TranscriptionProducer.AudioSegments

  # Wire transcription to exporter
  - from: TranscriptionProducer.Transcription
    to: VideoExporter.Transcription

  # Wire timeline to exporter (required)
  - from: TimelineComposer.Timeline
    to: VideoExporter.Timeline

collectors:
  # ... existing collectors ...

  # CRITICAL: Fan-in requires BOTH connection AND collector
  - name: TranscriptionAudio
    from: AudioProducer[segment].GeneratedAudio
    into: TranscriptionProducer.AudioSegments
    groupBy: segment
```
Karaoke Configuration
Configure appearance in your input template:
```yaml
models:
  - model: speech/transcription
    provider: renku
    producerId: TranscriptionProducer
    config:
      sttProvider: fal-ai
      sttModel: elevenlabs/speech-to-text

  - model: ffmpeg/native-render
    provider: renku
    producerId: VideoExporter
    config:
      karaoke:
        fontSize: 48                # Font size in pixels
        fontColor: white            # Default text color
        highlightColor: "#FFD700"   # Highlighted word color (gold)
        bottomMarginPercent: 10     # Position from bottom
        maxWordsPerLine: 8          # Words per line
        highlightAnimation: pop     # Animation: none, pop, spring, pulse
        animationScale: 1.15        # Animation peak scale
```
Animation Types
| Animation | Description | Best For |
|---|---|---|
| none | Static highlight, no animation | Professional, minimal style |
| pop | Quick scale up then settle | Subtle, professional feel (default) |
| spring | Bouncy scale with oscillation | Dynamic, playful content |
| pulse | Gentle continuous sine wave | Rhythmic, musical content |
Track Type Handling
- Audio track (kind: Audio): Always transcribed - contains speech/narration
- Music track (kind: Music): Never transcribed - background music only
- Video track with audio: Audio track extracted and transcribed
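The track names come from the timeline composer configuration in the input template (see Model Selection below). As a sketch, assuming the configured track list is what assigns each clip its kind, declaring a separate Music track keeps background music out of transcription:

```yaml
models:
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio", "Music"]   # Audio carries narration and is transcribed; Music is not
      masterTracks: ["Audio"]
```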
Example: Timeline Composer (Fan-In)
```yaml
meta:
  name: Timeline Composer
  id: TimelineComposer
  version: 0.1.0

inputs:
  - name: VideoSegments
    type: collection
    itemType: video
    dimensions: segment
    fanIn: true
  - name: AudioSegments
    type: collection
    itemType: audio
    dimensions: segment
    fanIn: true
  - name: Duration
    type: int

artifacts:
  - name: Timeline
    type: json

# Timeline composer uses renku provider - no field mappings needed
# Model selection and config are specified in the input template
```
Connections
Connections define data flow between nodes.
Direct Connections
```yaml
connections:
  - from: InquiryPrompt
    to: ScriptProducer.InquiryPrompt
```
Array Indexing
```yaml
connections:
  - from: ScriptProducer.NarrationScript[segment]
    to: AudioProducer[segment].TextInput
```
Expands to:
- NarrationScript[0] → AudioProducer[0].TextInput
- NarrationScript[1] → AudioProducer[1].TextInput
- etc.
Multi-Dimensional Indexing (Nested Loops)
When you have nested loops (a loop with a parent), use multiple indices:
```yaml
# Define nested loops
loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    parent: segment                      # image is nested inside segment
    countInput: NumOfImagesPerNarrative

# Producer runs for each segment × image combination
producers:
  - name: ImageProducer
    producer: asset/text-to-image
    loop: segment.image                  # Dot notation for nested loops

# Connections use multiple indices
connections:
  - from: ImagePromptProducer.ImagePrompt[segment][image]
    to: ImageProducer[segment][image].Prompt
  - from: ImageProducer[segment][image].GeneratedImage
    to: SegmentImage[segment][image]
```
If NumOfSegments = 3 and NumOfImagesPerNarrative = 2, this creates 6 producer instances: [0][0], [0][1], [1][0], [1][1], [2][0], [2][1].
Offset Selectors
```yaml
connections:
  - from: ImageProducer[image].SegmentImage
    to: ImageToVideoProducer[segment].InputImage1
  - from: ImageProducer[image+1].SegmentImage
    to: ImageToVideoProducer[segment].InputImage2
```
Creates sliding window patterns. For N segments, you need N+1 images.
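The extra image comes from giving the image loop an offset, as in the Loop with Offset section below:

```yaml
loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1   # N+1 images for the sliding window
```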
Indexed Collection Binding
When a producer has a collection input, you can connect different artifacts to specific indices:
```yaml
connections:
  # Different artifacts bound to specific collection indices
  - from: CharacterImageProducer.GeneratedImage
    to: VideoProducer[clip].ReferenceImages[0]
  - from: ProductImageProducer.GeneratedImage
    to: VideoProducer[clip].ReferenceImages[1]
```
This pattern is useful when:
- A producer accepts multiple reference images as a collection
- Each reference image comes from a different upstream producer
- The same images should be used for all loop instances
How it works:
- The planner creates element-level input bindings: ReferenceImages[0] → artifact ID
- At runtime, the SDK reconstructs the array from these element-level bindings
- The producer receives ReferenceImages: [CharacterImage, ProductImage]
Comparison with whole-collection binding:
| Pattern | Syntax | Use Case |
|---|---|---|
| Whole-collection | AllImages → ReferenceImages | Connect an entire array artifact |
| Element-level | Image1 → ReferenceImages[0] | Connect individual artifacts to specific indices |
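For contrast, a sketch of the whole-collection form, assuming it is written as an ordinary connection from an array artifact (ImagePromptProducer and AllImages are hypothetical names here):

```yaml
connections:
  # Whole-collection: the entire array artifact feeds the collection input
  - from: ImagePromptProducer.AllImages
    to: VideoProducer[clip].ReferenceImages
```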
Broadcast Connections
A scalar input broadcasts to all loop instances:
```yaml
connections:
  - from: Style
    to: VideoPromptProducer[segment].Style
```
Connecting to JSON Artifact Properties
When a producer outputs a type: json artifact with a schema, you can connect to nested properties using dot-path syntax:
```yaml
connections:
  # Connect to top-level property
  - from: DocProducer.VideoScript.Title
    to: TitleRenderer.Title

  # Connect to array element property
  - from: DocProducer.VideoScript.Segments[segment].Script
    to: AudioProducer[segment].TextInput

  # Connect to nested array property
  - from: DocProducer.VideoScript.Segments[segment].ImagePrompts[image].Prompt
    to: ImageProducer[segment][image].Prompt
```
See JSON Artifacts for full details on defining JSON artifacts with schemas.
Conditional Connections
Connections can be made conditional based on runtime values from upstream artifacts. This enables dynamic workflow branching where different producers execute depending on the data produced earlier in the pipeline.
Defining Conditions
Define named conditions in the conditions: section of your blueprint:
```yaml
conditions:
  isImageNarration:
    when: DocProducer.VideoScript.Segments[segment].NarrationType
    is: "ImageNarration"
  isAudioNeeded:
    any:
      - when: DocProducer.VideoScript.Segments[segment].NarrationType
        is: "TalkingHead"
      - when: DocProducer.VideoScript.Segments[segment].UseNarrationAudio
        is: true
  isTalkingHead:
    when: DocProducer.VideoScript.Segments[segment].NarrationType
    is: "TalkingHead"
```
Condition Path Format
The when field references a path to a value in an upstream artifact:
```
<Producer>.<Artifact>.<FieldPath>[dimension]
```
- Producer: The producer that creates the artifact (e.g., DocProducer)
- Artifact: The artifact name (e.g., VideoScript)
- FieldPath: Dot-separated path to the field (e.g., Segments[segment].NarrationType)
- Dimensions: Use dimension placeholders like [segment] for per-instance evaluation
Condition Operators
| Operator | Description |
|---|---|
| is | Equals the specified value |
| isNot | Does not equal the specified value |
| contains | String contains the value |
| greaterThan | Greater than (numeric) |
| lessThan | Less than (numeric) |
| greaterOrEqual | Greater than or equal (numeric) |
| lessOrEqual | Less than or equal (numeric) |
| exists | Field exists and is truthy |
| matches | Matches a regular expression |
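A sketch using a few of the other operators (the MusicCue and Kind fields are illustrative, following the DocProducer examples above):

```yaml
conditions:
  hasMusicCue:
    when: DocProducer.VideoScript.Segments[segment].MusicCue
    exists: true
  isLongEnough:
    when: DocProducer.VideoScript.Segments[segment].Duration
    greaterOrEqual: 5
  isNotIntro:
    when: DocProducer.VideoScript.Segments[segment].Kind
    isNot: "Intro"
```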
Condition Groups
Combine multiple conditions with all (AND) or any (OR):
```yaml
conditions:
  needsAudio:
    any:
      - when: DocProducer.VideoScript.Segments[segment].NarrationType
        is: "TalkingHead"
      - when: DocProducer.VideoScript.Segments[segment].UseNarrationAudio
        is: true

  isHighQuality:
    all:
      - when: DocProducer.VideoScript.Segments[segment].Quality
        is: "high"
      - when: DocProducer.VideoScript.Segments[segment].Duration
        greaterThan: 10
```
Using Conditions on Connections
Reference a named condition using the if: attribute:
```yaml
connections:
  # ImageProducer only runs when NarrationType is "ImageNarration"
  - from: DocProducer.VideoScript.Segments[segment].ImagePrompts[image]
    to: ImageProducer[segment][image].Prompt
    if: isImageNarration

  # AudioProducer runs when TalkingHead OR UseNarrationAudio is true
  - from: DocProducer.VideoScript.Segments[segment].Script
    to: AudioProducer[segment].TextInput
    if: isAudioNeeded
```
Runtime Behavior
When a condition is evaluated at runtime:
- Condition Evaluation: The runner resolves the artifact data and evaluates the condition for each dimension instance
- Input Filtering: If a condition is not satisfied, that input is filtered out
- Job Skipping: If ALL conditional inputs for a job are not satisfied, the job is skipped
- Artifact Absence: Skipped jobs produce no artifacts - those artifact IDs are absent from the manifest
Example: With 3 segments where NarrationType = ["ImageNarration", "TalkingHead", "ImageNarration"]:
- ImageProducer[0] and ImageProducer[2] execute (ImageNarration)
- ImageProducer[1] is skipped (TalkingHead)
- AudioProducer[1] executes (TalkingHead)
- AudioProducer[0] and AudioProducer[2] may be skipped (unless UseNarrationAudio=true)
JSON Artifacts
JSON artifacts store structured data as a single blob while exposing nested properties as virtual artifacts for granular connections. This enables:
- Granular connections: Wire individual properties to downstream producers
- Efficient caching: Only re-run downstream jobs when specific properties change
- Schema validation: Structured output from LLMs with JSON schema enforcement
Defining JSON Artifacts in Producers
```yaml
meta:
  id: DocumentaryPromptProducer
  name: Documentary Script Generation
  promptFile: ./documentary-prompt.toml            # Prompt template
  outputSchema: ./documentary-prompt-output.json   # JSON schema for validation

inputs:
  - name: InquiryPrompt
    type: string
  - name: NumOfSegments
    type: int
  - name: NumOfImagesPerSegment
    type: int
    default: 1

artifacts:
  - name: VideoScript
    type: json
    description: The generated video script
    arrays:
      - path: Segments
        countInput: NumOfSegments
      - path: Segments.ImagePrompts
        countInput: NumOfImagesPerSegment
```
Note: The model selection (e.g., gpt-4o) is specified in the input template file, not in the producer.
The arrays Field
The arrays field maps JSON array paths to input variables that determine their sizes:
```yaml
arrays:
  - path: Segments                     # Top-level array
    countInput: NumOfSegments          # Sized by NumOfSegments input
  - path: Segments.ImagePrompts        # Nested array within each segment
    countInput: NumOfImagesPerSegment  # Sized by NumOfImagesPerSegment input
```
This enables the planner to:
- Create dimension placeholders for arrays (e.g., [segment], [image])
- Expand virtual artifacts for each array element
- Track granular changes for incremental re-runs
Schema Association
The outputSchema from the producer’s meta: section is automatically associated with type: json artifacts. The schema defines the structure of the JSON output:
```json
{
  "name": "VideoScript",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "Title": { "type": "string" },
      "Segments": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "Script": { "type": "string" },
            "ImagePrompts": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "Prompt": { "type": "string" }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
Virtual Artifact Paths
The planner expands JSON schemas into virtual artifacts that can be referenced in connections:
| Virtual Artifact ID | Description |
|---|---|
| Producer.VideoScript.Title | Top-level string property |
| Producer.VideoScript.Segments[segment].Script | Script for each segment |
| Producer.VideoScript.Segments[segment].ImagePrompts[image].Prompt | Image prompt for each segment/image |
Granular Dirty Tracking
Each virtual artifact gets its own content hash. When you edit a JSON artifact:
- Only virtual artifacts whose content actually changed are marked dirty
- Only downstream jobs consuming those specific properties re-run
- Other downstream jobs use cached results
Example: If you edit only Segments[1].Script, only AudioProducer[1] re-runs. AudioProducer[0] and AudioProducer[2] use cached results.
Loops and Dimensions
Section titled “Loops and Dimensions”Basic Loop
```yaml
loops:
  - name: segment
    countInput: NumOfSegments
```
If NumOfSegments = 3, looped producers run 3 times.
Loop with Offset
```yaml
loops:
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1
```
Count is NumOfSegments + 1. Use for sliding window patterns.
Nested Loops
```yaml
loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    parent: segment
    countInput: NumOfImagesPerSegment
```
Creates two-dimensional iteration. If NumOfSegments = 3 and NumOfImagesPerSegment = 2, you get 6 instances.
Assigning Producers to Loops
```yaml
producers:
  - name: ScriptProducer
    producer: prompt/script         # No loop - runs once

  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment                   # Runs per segment

  - name: ImageProducer
    producer: asset/text-to-image
    loop: segment.image             # Runs per segment × image
```
Collectors and Fan-In
Collectors aggregate multiple artifacts for downstream processing.
Basic Collector
```yaml
collectors:
  - name: TimelineVideo
    from: VideoProducer[segment].SegmentVideo
    into: TimelineComposer.VideoSegments
    groupBy: segment
```
Collector with Ordering
```yaml
collectors:
  - name: TimelineImages
    from: ImageProducer[segment][image].SegmentImage
    into: TimelineComposer.ImageSegments
    groupBy: segment
    orderBy: image
```
Fan-In Inputs
The target input must have fanIn: true:
```yaml
inputs:
  - name: VideoSegments
    type: collection
    itemType: video
    dimensions: segment
    fanIn: true
```
Canonical IDs
Canonical IDs are fully qualified node identifiers.
Format
```
Type:path.to.name[index0][index1]...
```
Examples
| ID | Description |
|---|---|
| Input:InquiryPrompt | Blueprint input |
| Input:ScriptProducer.Duration | Producer input |
| Artifact:VideoProducer.SegmentVideo[0] | First video |
| Artifact:ImageProducer.SegmentImage[2][1] | Image at [2][1] |
| Producer:AudioProducer[0] | First AudioProducer instance |
Planner and Runner
Section titled “Planner and Runner”Planner Process
- Load blueprint tree - Blueprint + all producer imports
- Build graph - Nodes for inputs, artifacts, producers
- Resolve dimensions - Calculate loop sizes
- Expand instances - Create concrete instances per dimension
- Align edges - Match dimension indices
- Build execution layers - Topological sort into parallel groups
Runner Process
- Execute layer by layer - Jobs in same layer can run in parallel
- Resolve artifacts - Load upstream data for each job
- Materialize fan-in - Group and order collection items
- Invoke providers - Call AI APIs
- Store artifacts - Persist to blob storage
Dirty Tracking
For incremental runs:
- Compare input hashes against manifest
- Find changed or missing artifacts
- Mark affected jobs as dirty
- Propagate through dependency graph
- Only execute dirty jobs
Input Files Reference
Section titled “Input Files Reference”Structure
```yaml
inputs:
  # User-provided inputs
  InquiryPrompt: "Your topic here"
  Style: "Documentary"
  AspectRatio: "16:9"

  # System inputs - provide values here, no blueprint declaration needed
  Duration: 60        # Total movie duration in seconds
  NumOfSegments: 3    # Number of segments to generate
  # SegmentDuration is auto-computed as Duration/NumOfSegments (20 seconds)
  # You can override it: SegmentDuration: 15

models:
  - model: gpt-5-mini
    provider: openai
    producerId: ScriptProducer
    config:
      text_format: json_schema
  - model: minimax/speech-2.6-hd
    provider: replicate
    producerId: AudioProducer
  - model: bytedance/seedance-1-pro-fast
    provider: replicate
    producerId: VideoProducer
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio"]
      masterTracks: ["Audio"]
```
Model Selection
Select which model to use for each producer. The config field passes provider-specific options:
```yaml
models:
  # LLM with structured output
  - model: gpt-5-mini
    provider: openai
    producerId: ScriptProducer
    config:
      text_format: json_schema

  # Video generation
  - model: google/veo-3.1-fast
    provider: replicate
    producerId: VideoProducer

  # Timeline composition with custom config
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio", "Music"]
      masterTracks: ["Audio"]
      musicClip:
        volume: 0.4
```
Note: Input-to-provider field mappings are defined in the producer YAML’s mappings: section, not in the input template. This keeps input files simple.
Validation Rules
Section titled “Validation Rules”Blueprint Validation
| Rule | Error |
|---|---|
| Meta section required | Blueprint must have a meta section |
| Meta.id required | Blueprint meta must have an id |
| At least one artifact | Blueprint must declare at least one artifact |
| Cannot mix models and producers | Cannot define both models and producer imports |
Input Validation
| Rule | Error |
|---|---|
| Optional inputs need defaults | Optional input must declare a default value |
| Input name required | Input must have a name |
| Input type required | Input must have a type |
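For example, a sketch of an optional blueprint input that satisfies the default rule (assuming blueprint inputs accept the same default: field shown for producer inputs in the JSON Artifacts example):

```yaml
inputs:
  - name: Style
    type: string
    required: false
    default: "Documentary"   # required because the input is optional
```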
Connection Validation
| Rule | Error |
|---|---|
| Valid dimension syntax | Invalid dimension selector |
| Known dimension | Unknown loop symbol |
| Dimension count match | Inconsistent dimension counts |
Common Patterns
Section titled “Common Patterns”Audio-Only Narration
```yaml
loops:
  - name: segment
    countInput: NumOfSegments

producers:
  - name: ScriptProducer
    producer: prompt/script
  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment

connections:
  - from: ScriptProducer.NarrationScript[segment]
    to: AudioProducer[segment].TextInput
  - from: AudioProducer[segment].SegmentAudio
    to: SegmentAudio[segment]
```
Image-to-Video (Sliding Window)
```yaml
loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1   # N+1 images

producers:
  - name: ImageProducer
    producer: asset/text-to-image
    loop: image
  - name: ImageToVideoProducer
    producer: asset/image-to-video
    loop: segment

connections:
  - from: ImageProducer[image].SegmentImage
    to: ImageToVideoProducer[segment].InputImage1
  - from: ImageProducer[image+1].SegmentImage
    to: ImageToVideoProducer[segment].InputImage2
```
Full Timeline
```yaml
producers:
  - name: VideoProducer
    producer: asset/text-to-video
    loop: segment
  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment
  - name: TimelineComposer
    producer: composition/timeline-composer

collectors:
  - name: TimelineVideo
    from: VideoProducer[segment].SegmentVideo
    into: TimelineComposer.VideoSegments
    groupBy: segment
  - name: TimelineAudio
    from: AudioProducer[segment].SegmentAudio
    into: TimelineComposer.AudioSegments
    groupBy: segment
```
Debugging
Validate Blueprint
```bash
renku blueprints:validate ./my-blueprint.yaml
```
Dry Run
```bash
renku generate --inputs=./inputs.yaml --blueprint=./blueprint.yaml --dry-run
```
Inspect Execution Plan
```bash
cat {builds}/{movie}/runs/rev-0001-plan.json
```
Common Issues
| Problem | Cause | Solution |
|---|---|---|
| Fan-in empty | Missing collector | Add collector from source to target |
| Dimension error | Mismatched dimensions | Verify source/target dimensions match |
| Optional input error | Missing default | Add default: to optional inputs |
| Missing artifacts | Wrong build path | Check dist/ per package |
Directory Structure
Section titled “Directory Structure”Catalog Organization
```
catalog/
├── blueprints/
│   ├── audio-only/
│   │   ├── audio-only.yaml
│   │   └── input-template.yaml
│   └── video-only/
│       ├── video-only.yaml
│       └── input-template.yaml
├── producers/
│   ├── asset/                    # Media generation producers
│   │   ├── text-to-image.yaml
│   │   ├── text-to-video.yaml
│   │   ├── text-to-speech.yaml
│   │   └── image-to-video.yaml
│   ├── prompt/                   # LLM-based producers
│   │   ├── script/
│   │   │   ├── script.yaml
│   │   │   └── script.toml
│   │   └── video/
│   │       └── video.yaml
│   └── composition/              # Composition producers
│       └── timeline-composer.yaml
└── models/
    └── *.yaml
```
Producer Categories:
- asset/ - Media generation: images, video, audio, music
- prompt/ - LLM-based script and prompt generation
- composition/ - Timeline composition and video export
Naming Conventions
| Item | Convention | Example |
|---|---|---|
| Blueprint files | kebab-case | image-to-video.yaml |
| Producer files | kebab-case | script.yaml |
| IDs | PascalCase | id: ImageToVideo |
| Loop names | lowercase | name: segment |
| Input/Artifact names | PascalCase | name: InquiryPrompt |
Producer References
Preferred: Qualified names resolve from the catalog:
```yaml
# Works from any location - resolves from catalog
producers:
  - name: ScriptProducer
    producer: prompt/script          # → catalog/producers/prompt/script/script.yaml
  - name: VideoProducer
    producer: asset/text-to-video    # → catalog/producers/asset/text-to-video.yaml
```
Legacy: Relative paths for custom local producers:
```yaml
# For custom producers not in the catalog
producers:
  - name: MyCustomProducer
    path: ./my-producers/custom.yaml   # Relative to blueprint file
```