
Blueprint Authoring Guide

This comprehensive guide explains how to author Renku blueprints: the YAML schema, how producers compose into workflows, and the rules the planner and runner enforce.

Blueprints are YAML files that define complete video generation workflows. This guide is for advanced users who want to:

  • Create custom blueprints for specific use cases
  • Build reusable producers for new AI models
  • Understand the internal workings of the planner and runner

If you’re new to Renku, start with the Quick Start guide first.

System inputs are special inputs that Renku automatically recognizes by name. You do not need to declare these in your blueprint’s inputs: section - they are automatically available when referenced in connections.

System Input | Type | Description
Duration | int | Total duration of the movie in seconds. Provided in input YAML.
NumOfSegments | int | Number of segments to generate. Provided in input YAML.
SegmentDuration | int | Duration of each segment. Auto-computed as Duration / NumOfSegments.
MovieId | string | Unique identifier for the movie. Auto-injected by the system.
StorageRoot | string | Root directory for storage. Auto-injected from CLI config.
StorageBasePath | string | Base path within storage root. Auto-injected from CLI config.

How it works:

  • System inputs are automatically injected when referenced in blueprint connections
  • Duration and NumOfSegments values are provided in the input YAML file
  • SegmentDuration is auto-computed as Duration / NumOfSegments (unless you provide it explicitly)
  • Timeline composers and video exporters automatically receive MovieId, StorageRoot, StorageBasePath
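
For example, a minimal input YAML that supplies the user-provided system inputs might look like this (values are illustrative; this mirrors the full input template shown later in this guide):

inputs:
  InquiryPrompt: "Your topic here"   # regular blueprint input
  Duration: 60                       # system input: total movie length in seconds
  NumOfSegments: 3                   # system input: number of segments
  # SegmentDuration is auto-computed as Duration / NumOfSegments (20 here) unless overridden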

Blueprints are workflow definitions that:

  • Define user-facing inputs and final artifacts
  • Import and connect multiple producers
  • Define loops for parallel execution
  • Specify collectors for fan-in aggregation

Producers are execution units that:

  • Accept inputs and produce artifacts
  • Map inputs to specific AI model parameters
  • Support multiple model variants
  • Can be reused across multiple blueprints

Data flows through the system via connections:

Blueprint Input → Producer Input → Producer → Artifact → Next Producer Input
└→ Blueprint Artifact

For looped producers:

ScriptProducer.NarrationScript[0] → AudioProducer[0].TextInput
ScriptProducer.NarrationScript[1] → AudioProducer[1].TextInput
ScriptProducer.NarrationScript[2] → AudioProducer[2].TextInput
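
This expansion comes from a single looped connection in the blueprint, for example:

connections:
  - from: ScriptProducer.NarrationScript[segment]
    to: AudioProducer[segment].TextInput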

A blueprint YAML file has the following structure:

meta:
  name: <string>              # Human-readable name (required)
  description: <string>       # Purpose and behavior
  id: <string>                # Unique identifier in PascalCase (required)
  version: <semver>           # Semantic version (e.g., 0.1.0)
  author: <string>            # Creator name
  license: <string>           # License type (e.g., MIT)
inputs:
  - name: <string>            # Input identifier in PascalCase (required)
    description: <string>     # Purpose and usage
    type: <string>            # Data type (required)
    required: <boolean>       # Whether mandatory (default: true)
artifacts:
  - name: <string>            # Artifact identifier in PascalCase (required)
    description: <string>     # Purpose and content
    type: <string>            # Output type (required)
    itemType: <string>        # Element type for arrays
    countInput: <string>      # Input that determines array size
loops:
  - name: <string>            # Loop identifier in lowercase (required)
    description: <string>     # Purpose and behavior
    countInput: <string>      # Input that determines iteration count (required)
    countInputOffset: <int>   # Offset added to count
    parent: <string>          # Parent loop for nesting
producers:
  - name: <string>            # Producer instance name in PascalCase (required)
    producer: <string>        # Qualified name: "category/name" (preferred)
    path: <string>            # Relative path to producer YAML (for custom producers)
    loop: <string>            # Loop dimension(s) (e.g., "segment")
connections:
  - from: <string>            # Source reference (required)
    to: <string>              # Target reference (required)
    if: <string>              # Condition name for conditional execution
conditions:
  <conditionName>:
    when: <string>            # Path to artifact field (e.g., Producer.Artifact.Field[dim])
    is: <any>                 # Equals this value
    isNot: <any>              # Does not equal this value
    contains: <string>        # String contains value
    greaterThan: <number>     # Greater than
    lessThan: <number>        # Less than
    exists: <boolean>         # Field exists and is truthy
    matches: <string>         # Regex pattern
    all: <array>              # AND: all sub-conditions must pass
    any: <array>              # OR: at least one sub-condition must pass
collectors:
  - name: <string>            # Collector identifier (required)
    from: <string>            # Source artifact reference (required)
    into: <string>            # Target input reference (required)
    groupBy: <string>         # Loop dimension for grouping (required)
    orderBy: <string>         # Loop dimension for ordering

Input types:

Type | Description
string | Text value
int | Integer number
image | Image file reference
audio | Audio file reference
video | Video file reference
json | Structured JSON data
collection | Array of items (used with fanIn: true)

Artifact types:

Type | Description
string | Text output
json | Structured JSON with virtual property expansion (see JSON Artifacts)
image | Image file
audio | Audio file
video | Video file
array | Array of items (requires itemType)
multiDimArray | Multi-dimensional array

A complete example blueprint:

meta:
  name: Video Only Narration
  description: Generate video segments from a textual inquiry.
  id: Video
  version: 0.1.0
  author: Renku
  license: MIT
inputs:
  # Note: Duration, NumOfSegments, SegmentDuration are system inputs
  # They don't need to be declared - just provide values in input YAML
  - name: InquiryPrompt
    description: The prompt describing the movie script.
    type: string
    required: true
  - name: Style
    description: Visual style for the video.
    type: string
    required: true
artifacts:
  - name: SegmentVideo
    description: Generated video for each segment.
    type: array
    itemType: video
    countInput: NumOfSegments   # References system input
loops:
  - name: segment
    description: Iterates over narration segments.
    countInput: NumOfSegments   # References system input
producers:
  - name: ScriptProducer
    producer: prompt/script
  - name: VideoPromptProducer
    producer: prompt/video
    loop: segment
  - name: VideoProducer
    producer: asset/text-to-video
    loop: segment
connections:
  # Wire inputs to ScriptProducer
  - from: InquiryPrompt
    to: ScriptProducer.InquiryPrompt
  - from: Duration
    to: ScriptProducer.Duration        # System input - no declaration needed
  - from: NumOfSegments
    to: ScriptProducer.NumOfSegments   # System input - no declaration needed
  # Wire script to VideoPromptProducer (looped)
  - from: ScriptProducer.NarrationScript[segment]
    to: VideoPromptProducer[segment].NarrativeText
  - from: Style
    to: VideoPromptProducer[segment].Style
  - from: SegmentDuration
    to: VideoPromptProducer[segment].SegmentDuration   # Auto-computed
  # Wire prompts to VideoProducer (looped)
  - from: VideoPromptProducer.VideoPrompt[segment]
    to: VideoProducer[segment].Prompt
  - from: SegmentDuration
    to: VideoProducer[segment].Duration   # Auto-computed
  # Wire output to blueprint artifact
  - from: VideoProducer[segment].SegmentVideo
    to: SegmentVideo[segment]

Producer YAML files have the following structure:

meta:
  name: <string>              # Human-readable name (required)
  description: <string>       # Purpose and behavior
  id: <string>                # Unique identifier in PascalCase (required)
  version: <semver>           # Semantic version
  author: <string>            # Creator name
  license: <string>           # License type
  promptFile: <string>        # Path to TOML prompt config (LLM producers only)
  outputSchema: <string>      # Path to JSON schema for structured output (LLM producers only)
inputs:
  - name: <string>            # Input identifier (required)
    description: <string>     # Purpose and usage
    type: <string>            # Data type (required)
    fanIn: <boolean>          # Is this a fan-in collection input
    dimensions: <string>      # Dimension labels (e.g., "segment")
artifacts:
  - name: <string>            # Artifact identifier (required)
    description: <string>     # Purpose and content
    type: <string>            # Output type (required)
    itemType: <string>        # Element type for arrays
    countInput: <string>      # Input for array size
mappings:
  <provider>:                 # Provider name: replicate, fal-ai, openai, renku
    <model>:                  # Model identifier
      <ProducerInput>: <providerField>   # Simple mapping
      <ProducerInput>:        # Object form
        field: <providerField>
        transform:            # Value transformation
          <inputValue>: <providerValue>
        expand: <boolean>     # Spread object into payload
        combine:              # Combine multiple inputs
          inputs: [<input1>, <input2>]
          table:
            "<val1>+<val2>": <result>
        conditional:          # Conditional mapping
          when:
            input: <inputName>
            notEmpty: <boolean>
          then: <mapping>

Note on LLM Producers: For producers that use LLMs (OpenAI, etc.), the promptFile and outputSchema fields are defined in the meta: section of the producer YAML file. These are intrinsic to the producer’s functionality.

meta:
  name: Script Generation
  description: Generate documentary scripts.
  id: ScriptProducer
  version: 0.1.0
  promptFile: ./script.toml            # Prompt template for LLM
  outputSchema: ./script-output.json   # JSON schema for structured output
inputs:
  - name: InquiryPrompt
    type: string
  - name: Duration
    type: int
  - name: NumOfSegments
    type: int
  - name: Audience
    type: string
artifacts:
  - name: MovieTitle
    type: string
  - name: MovieSummary
    type: string
  - name: NarrationScript
    type: array
    itemType: string
    countInput: NumOfSegments

Note: LLM producers define promptFile and outputSchema in the meta: section. The model selection (e.g., gpt-4o) is specified in the input template file.
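
For example, the input template might select the model for this producer like so (gpt-4o is an illustrative model id; any supported LLM works):

models:
  - model: gpt-4o               # illustrative model id
    provider: openai
    producerId: ScriptProducer
    config:
      text_format: json_schema   # request structured output matching the outputSchema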

An example asset producer with provider mappings:

meta:
  name: Video Generation
  id: VideoProducer
  version: 0.1.0
inputs:
  - name: Prompt
    type: string
  - name: AspectRatio
    type: string
  - name: Resolution
    type: string
artifacts:
  - name: SegmentVideo
    type: video
mappings:
  replicate:
    bytedance/seedance-1-pro-fast:
      Prompt: prompt
      AspectRatio: aspect_ratio
      Resolution: resolution
    google/veo-3.1-fast:
      Prompt: prompt
      AspectRatio: aspect_ratio
  fal-ai:
    bytedance/seedream/v4.5/text-to-image:
      Prompt: prompt
      AspectRatio:
        field: image_size
        transform:
          "16:9": landscape_16_9
          "9:16": portrait_16_9
          "1:1": square_hd

All video-generating producers automatically support derived artifacts that are extracted from generated videos using ffmpeg. These enable powerful workflows like video-to-video chaining.

Artifact | Type | Description | Use Cases
FirstFrame | image (PNG) | First frame of the video | Visual preview, thumbnails
LastFrame | image (PNG) | Last frame of the video | Seamless video transitions, end-frame matching
AudioTrack | audio (WAV) | Audio track from video | Audio post-processing, mixing

  1. Declare derived artifacts in your producer YAML (already included in catalog producers)
  2. Connect them to downstream producers in your blueprint
  3. Extraction happens automatically when the video is downloaded - only connected artifacts are extracted
  4. Graceful fallback: if ffmpeg is not installed, a warning is shown and the blueprint still runs
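
As a rough sketch, the derived artifacts might appear in a video producer's artifact list like this (the exact declarations shipped with catalog producers may differ):

artifacts:
  - name: SegmentVideo
    type: video
  # Derived artifacts, extracted with ffmpeg only when connected downstream
  - name: FirstFrame
    type: image
  - name: LastFrame
    type: image
  - name: AudioTrack
    type: audio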

Chain videos together using the last frame of one video as the start image for the next:

loops:
  - name: segment
    countInput: NumOfSegments
producers:
  - name: TextToVideoProducer
    producer: asset/text-to-video
  - name: ImageToVideoProducer
    producer: asset/image-to-video
    loop: segment
connections:
  # First segment: text-to-video
  - from: Prompt
    to: TextToVideoProducer.Prompt
  # Use LastFrame from first video as StartImage for transitions
  - from: TextToVideoProducer.LastFrame
    to: ImageToVideoProducer[0].StartImage
  # Chain subsequent segments using previous segment's LastFrame
  - from: ImageToVideoProducer[segment].LastFrame
    to: ImageToVideoProducer[segment+1].StartImage

Add speech transcription and karaoke-style animated subtitles to your videos. This creates engaging effects similar to Instagram and TikTok, with word-level highlighting synchronized to the audio.

Component | Description
TranscriptionProducer | Converts audio segments to word-level transcripts with precise timestamps
VideoExporter | Renders the transcript as animated karaoke subtitles on the final video

The transcription producer:

meta:
  name: Speech Transcription
  id: TranscriptionProducer
  version: 0.1.0
inputs:
  - name: Timeline
    description: Timeline document with audio clip timing information.
    type: json
  - name: AudioSegments
    description: Audio clips to transcribe (from Audio track).
    type: collection
    itemType: audio
    dimensions: segment
    fanIn: true
  - name: LanguageCode
    description: ISO 639-3 language code (e.g., eng, spa, fra).
    type: string
artifacts:
  - name: Transcription
    description: Word-level transcription aligned to video timeline.
    type: json

Add transcription to an existing video + audio blueprint:

producers:
  # ... existing producers ...
  - name: TranscriptionProducer
    producer: asset/transcription
  - name: VideoExporter
    producer: composition/video-exporter
connections:
  # ... existing connections ...
  # Wire timeline to transcription producer
  - from: TimelineComposer.Timeline
    to: TranscriptionProducer.Timeline
  # Wire audio segments to transcription producer
  - from: AudioProducer[segment].GeneratedAudio
    to: TranscriptionProducer.AudioSegments
  # Wire transcription to exporter
  - from: TranscriptionProducer.Transcription
    to: VideoExporter.Transcription
  # Wire timeline to exporter (required)
  - from: TimelineComposer.Timeline
    to: VideoExporter.Timeline
collectors:
  # ... existing collectors ...
  # CRITICAL: Fan-in requires BOTH connection AND collector
  - name: TranscriptionAudio
    from: AudioProducer[segment].GeneratedAudio
    into: TranscriptionProducer.AudioSegments
    groupBy: segment

Configure appearance in your input template:

models:
  - model: speech/transcription
    provider: renku
    producerId: TranscriptionProducer
    config:
      sttProvider: fal-ai
      sttModel: elevenlabs/speech-to-text
  - model: ffmpeg/native-render
    provider: renku
    producerId: VideoExporter
    config:
      karaoke:
        fontSize: 48                   # Font size in pixels
        fontColor: white               # Default text color
        highlightColor: "#FFD700"      # Highlighted word color (gold)
        boxColor: "[email protected]"   # Background with opacity
        bottomMarginPercent: 10        # Position from bottom
        maxWordsPerLine: 8             # Words per line
        highlightAnimation: pop        # Animation: none, pop, spring, pulse
        animationScale: 1.15           # Animation peak scale

Animation | Description | Best For
none | Static highlight, no animation | Professional, minimal style
pop | Quick scale up then settle | Subtle, professional feel (default)
spring | Bouncy scale with oscillation | Dynamic, playful content
pulse | Gentle continuous sine wave | Rhythmic, musical content

  • Audio track (kind: Audio): Always transcribed - contains speech/narration
  • Music track (kind: Music): Never transcribed - background music only
  • Video track with audio: Audio track extracted and transcribed

The timeline composer producer:

meta:
  name: Timeline Composer
  id: TimelineComposer
  version: 0.1.0
inputs:
  - name: VideoSegments
    type: collection
    itemType: video
    dimensions: segment
    fanIn: true
  - name: AudioSegments
    type: collection
    itemType: audio
    dimensions: segment
    fanIn: true
  - name: Duration
    type: int
artifacts:
  - name: Timeline
    type: json
# Timeline composer uses renku provider - no field mappings needed
# Model selection and config are specified in the input template
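
In the input template, the timeline composer is selected via the renku provider, as in the full template example later in this guide:

models:
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio"]
      masterTracks: ["Audio"]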

Connections define data flow between nodes.

A simple connection wires a blueprint input to a producer input:

connections:
  - from: InquiryPrompt
    to: ScriptProducer.InquiryPrompt

A looped connection uses a dimension index:

connections:
  - from: ScriptProducer.NarrationScript[segment]
    to: AudioProducer[segment].TextInput

Expands to:

  • NarrationScript[0] → AudioProducer[0].TextInput
  • NarrationScript[1] → AudioProducer[1].TextInput
  • etc.

When you have nested loops (a loop with a parent), use multiple indices:

# Define nested loops
loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    parent: segment                 # image is nested inside segment
    countInput: NumOfImagesPerNarrative

# Producer runs for each segment × image combination
producers:
  - name: ImageProducer
    producer: asset/text-to-image
    loop: segment.image             # Dot notation for nested loops

# Connections use multiple indices
connections:
  - from: ImagePromptProducer.ImagePrompt[segment][image]
    to: ImageProducer[segment][image].Prompt
  - from: ImageProducer[segment][image].GeneratedImage
    to: SegmentImage[segment][image]

If NumOfSegments = 3 and NumOfImagesPerNarrative = 2, this creates 6 producer instances: [0][0], [0][1], [1][0], [1][1], [2][0], [2][1].

connections:
  - from: ImageProducer[image].SegmentImage
    to: ImageToVideoProducer[segment].InputImage1
  - from: ImageProducer[image+1].SegmentImage
    to: ImageToVideoProducer[segment].InputImage2

This creates a sliding-window pattern: for N segments, you need N+1 images (see the loop definition below).
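
The extra image comes from a loop that uses a count offset, as in the sliding-window pattern shown later in this guide:

loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1      # N+1 images
producers:
  - name: ImageProducer
    producer: asset/text-to-image
    loop: image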

When a producer has a collection input, you can connect different artifacts to specific indices:

connections:
  # Different artifacts bound to specific collection indices
  - from: CharacterImageProducer.GeneratedImage
    to: VideoProducer[clip].ReferenceImages[0]
  - from: ProductImageProducer.GeneratedImage
    to: VideoProducer[clip].ReferenceImages[1]

This pattern is useful when:

  • A producer accepts multiple reference images as a collection
  • Each reference image comes from a different upstream producer
  • The same images should be used for all loop instances

How it works:

  1. The planner creates element-level input bindings: ReferenceImages[0] → artifact ID
  2. At runtime, the SDK reconstructs the array from these element-level bindings
  3. The producer receives ReferenceImages: [CharacterImage, ProductImage]
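
This assumes the producer declares the target as a collection input; a minimal sketch of such a declaration (field names follow the producer input schema above):

inputs:
  - name: ReferenceImages
    type: collection
    itemType: image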

Comparison with whole-collection binding:

Pattern | Syntax | Use Case
Whole-collection | AllImages → ReferenceImages | Connect an entire array artifact
Element-level | Image1 → ReferenceImages[0] | Connect individual artifacts to specific indices

A scalar input broadcasts to all loop instances:

connections:
  - from: Style
    to: VideoPromptProducer[segment].Style

When a producer outputs a type: json artifact with a schema, you can connect to nested properties using dot-path syntax:

connections:
  # Connect to top-level property
  - from: DocProducer.VideoScript.Title
    to: TitleRenderer.Title
  # Connect to array element property
  - from: DocProducer.VideoScript.Segments[segment].Script
    to: AudioProducer[segment].TextInput
  # Connect to nested array property
  - from: DocProducer.VideoScript.Segments[segment].ImagePrompts[image].Prompt
    to: ImageProducer[segment][image].Prompt

See JSON Artifacts for full details on defining JSON artifacts with schemas.

Connections can be made conditional based on runtime values from upstream artifacts. This enables dynamic workflow branching where different producers execute depending on the data produced earlier in the pipeline.

Define named conditions in the conditions: section of your blueprint:

conditions:
  isImageNarration:
    when: DocProducer.VideoScript.Segments[segment].NarrationType
    is: "ImageNarration"
  isAudioNeeded:
    any:
      - when: DocProducer.VideoScript.Segments[segment].NarrationType
        is: "TalkingHead"
      - when: DocProducer.VideoScript.Segments[segment].UseNarrationAudio
        is: true
  isTalkingHead:
    when: DocProducer.VideoScript.Segments[segment].NarrationType
    is: "TalkingHead"

The when field references a path to a value in an upstream artifact:

<Producer>.<Artifact>.<FieldPath>[dimension]
  • Producer: The producer that creates the artifact (e.g., DocProducer)
  • Artifact: The artifact name (e.g., VideoScript)
  • FieldPath: Dot-separated path to the field (e.g., Segments[segment].NarrationType)
  • Dimensions: Use dimension placeholders like [segment] for per-instance evaluation

Operator | Description
is | Equals the specified value
isNot | Does not equal the specified value
contains | String contains the value
greaterThan | Greater than (numeric)
lessThan | Less than (numeric)
greaterOrEqual | Greater than or equal (numeric)
lessOrEqual | Less than or equal (numeric)
exists | Field exists and is truthy
matches | Matches a regular expression

Combine multiple conditions with all (AND) or any (OR):

conditions:
  needsAudio:
    any:
      - when: DocProducer.VideoScript.Segments[segment].NarrationType
        is: "TalkingHead"
      - when: DocProducer.VideoScript.Segments[segment].UseNarrationAudio
        is: true
  isHighQuality:
    all:
      - when: DocProducer.VideoScript.Segments[segment].Quality
        is: "high"
      - when: DocProducer.VideoScript.Segments[segment].Duration
        greaterThan: 10

Reference a named condition using the if: attribute:

connections:
  # ImageProducer only runs when NarrationType is "ImageNarration"
  - from: DocProducer.VideoScript.Segments[segment].ImagePrompts[image]
    to: ImageProducer[segment][image].Prompt
    if: isImageNarration
  # AudioProducer runs when TalkingHead OR UseNarrationAudio is true
  - from: DocProducer.VideoScript.Segments[segment].Script
    to: AudioProducer[segment].TextInput
    if: isAudioNeeded

When a condition is evaluated at runtime:

  1. Condition Evaluation: The runner resolves the artifact data and evaluates the condition for each dimension instance
  2. Input Filtering: If a condition is not satisfied, that input is filtered out
  3. Job Skipping: If ALL conditional inputs for a job are not satisfied, the job is skipped
  4. Artifact Absence: Skipped jobs produce no artifacts - those artifact IDs are absent from the manifest

Example: With 3 segments where NarrationType = ["ImageNarration", "TalkingHead", "ImageNarration"]:

  • ImageProducer[0] and ImageProducer[2] execute (ImageNarration)
  • ImageProducer[1] is skipped (TalkingHead)
  • AudioProducer[1] executes (TalkingHead)
  • AudioProducer[0] and AudioProducer[2] may be skipped (unless UseNarrationAudio=true)

JSON artifacts store structured data as a single blob while exposing nested properties as virtual artifacts for granular connections. This enables:

  • Granular connections: Wire individual properties to downstream producers
  • Efficient caching: Only re-run downstream jobs when specific properties change
  • Schema validation: Structured output from LLMs with JSON schema enforcement

An example producer that emits a JSON artifact:

meta:
  id: DocumentaryPromptProducer
  name: Documentary Script Generation
  promptFile: ./documentary-prompt.toml            # Prompt template
  outputSchema: ./documentary-prompt-output.json   # JSON schema for validation
inputs:
  - name: InquiryPrompt
    type: string
  - name: NumOfSegments
    type: int
  - name: NumOfImagesPerSegment
    type: int
    default: 1
artifacts:
  - name: VideoScript
    type: json
    description: The generated video script
    arrays:
      - path: Segments
        countInput: NumOfSegments
      - path: Segments.ImagePrompts
        countInput: NumOfImagesPerSegment

Note: The model selection (e.g., gpt-4o) is specified in the input template file, not in the producer.

The arrays field maps JSON array paths to input variables that determine their sizes:

arrays:
  - path: Segments                      # Top-level array
    countInput: NumOfSegments           # Sized by NumOfSegments input
  - path: Segments.ImagePrompts         # Nested array within each segment
    countInput: NumOfImagesPerSegment   # Sized by NumOfImagesPerSegment input

This enables the planner to:

  1. Create dimension placeholders for arrays (e.g., [segment], [image])
  2. Expand virtual artifacts for each array element
  3. Track granular changes for incremental re-runs

The outputSchema from the producer’s meta: section is automatically associated with type: json artifacts. The schema defines the structure of the JSON output:

{
  "name": "VideoScript",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "Title": { "type": "string" },
      "Segments": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "Script": { "type": "string" },
            "ImagePrompts": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "Prompt": { "type": "string" }
                }
              }
            }
          }
        }
      }
    }
  }
}

The planner expands JSON schemas into virtual artifacts that can be referenced in connections:

Virtual Artifact ID | Description
Producer.VideoScript.Title | Top-level string property
Producer.VideoScript.Segments[segment].Script | Script for each segment
Producer.VideoScript.Segments[segment].ImagePrompts[image].Prompt | Image prompt for each segment/image

Each virtual artifact gets its own content hash. When you edit a JSON artifact:

  • Only virtual artifacts whose content actually changed are marked dirty
  • Only downstream jobs consuming those specific properties re-run
  • Other downstream jobs use cached results

Example: If you edit only Segments[1].Script, only AudioProducer[1] re-runs. AudioProducer[0] and AudioProducer[2] use cached results.


loops:
  - name: segment
    countInput: NumOfSegments

If NumOfSegments = 3, looped producers run 3 times.

loops:
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1

Count is NumOfSegments + 1. Use for sliding window patterns.

loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    parent: segment
    countInput: NumOfImagesPerSegment

Creates two-dimensional iteration. If NumOfSegments = 3 and NumOfImagesPerSegment = 2, you get 6 instances.

producers:
  - name: ScriptProducer
    producer: prompt/script
    # No loop - runs once
  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment
    # Runs per segment
  - name: ImageProducer
    producer: asset/text-to-image
    loop: segment.image
    # Runs per segment × image

Collectors aggregate multiple artifacts for downstream processing.

collectors:
  - name: TimelineVideo
    from: VideoProducer[segment].SegmentVideo
    into: TimelineComposer.VideoSegments
    groupBy: segment

With nested loops, orderBy controls item ordering within each group:

collectors:
  - name: TimelineImages
    from: ImageProducer[segment][image].SegmentImage
    into: TimelineComposer.ImageSegments
    groupBy: segment
    orderBy: image

The target input must have fanIn: true:

inputs:
  - name: VideoSegments
    type: collection
    itemType: video
    dimensions: segment
    fanIn: true

Canonical IDs are fully qualified node identifiers.

Type:path.to.name[index0][index1]...

ID | Description
Input:InquiryPrompt | Blueprint input
Input:ScriptProducer.Duration | Producer input
Artifact:VideoProducer.SegmentVideo[0] | First video
Artifact:ImageProducer.SegmentImage[2][1] | Image at [2][1]
Producer:AudioProducer[0] | First AudioProducer instance

The planner:

  1. Load blueprint tree - Blueprint + all producer imports
  2. Build graph - Nodes for inputs, artifacts, producers
  3. Resolve dimensions - Calculate loop sizes
  4. Expand instances - Create concrete instances per dimension
  5. Align edges - Match dimension indices
  6. Build execution layers - Topological sort into parallel groups

The runner:

  1. Execute layer by layer - Jobs in same layer can run in parallel
  2. Resolve artifacts - Load upstream data for each job
  3. Materialize fan-in - Group and order collection items
  4. Invoke providers - Call AI APIs
  5. Store artifacts - Persist to blob storage

For incremental runs:

  1. Compare input hashes against manifest
  2. Find changed or missing artifacts
  3. Mark affected jobs as dirty
  4. Propagate through dependency graph
  5. Only execute dirty jobs

An input template provides input values and model selections for a blueprint:

inputs:
  # User-provided inputs
  InquiryPrompt: "Your topic here"
  Style: "Documentary"
  AspectRatio: "16:9"
  # System inputs - provide values here, no blueprint declaration needed
  Duration: 60        # Total movie duration in seconds
  NumOfSegments: 3    # Number of segments to generate
  # SegmentDuration is auto-computed as Duration/NumOfSegments (20 seconds)
  # You can override it: SegmentDuration: 15
models:
  - model: gpt-5-mini
    provider: openai
    producerId: ScriptProducer
    config:
      text_format: json_schema
  - model: minimax/speech-2.6-hd
    provider: replicate
    producerId: AudioProducer
  - model: bytedance/seedance-1-pro-fast
    provider: replicate
    producerId: VideoProducer
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio"]
      masterTracks: ["Audio"]

Select which model to use for each producer. The config field passes provider-specific options:

models:
  # LLM with structured output
  - model: gpt-5-mini
    provider: openai
    producerId: ScriptProducer
    config:
      text_format: json_schema
  # Video generation
  - model: google/veo-3.1-fast
    provider: replicate
    producerId: VideoProducer
  # Timeline composition with custom config
  - model: timeline/ordered
    provider: renku
    producerId: TimelineComposer
    config:
      tracks: ["Video", "Audio", "Music"]
      masterTracks: ["Audio"]
      musicClip:
        volume: 0.4

Note: Input-to-provider field mappings are defined in the producer YAML’s mappings: section, not in the input template. This keeps input files simple.
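
For reference, such a mapping lives in the producer YAML, as in the video producer example earlier in this guide:

mappings:
  replicate:
    bytedance/seedance-1-pro-fast:
      Prompt: prompt
      AspectRatio: aspect_ratio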


Blueprint-level rules:

Rule | Error
Meta section required | Blueprint must have a meta section
Meta.id required | Blueprint meta must have an id
At least one artifact | Blueprint must declare at least one artifact
Cannot mix models and producers | Cannot define both models and producer imports

Input rules:

Rule | Error
Optional inputs need defaults | Optional input must declare a default value
Input name required | Input must have a name
Input type required | Input must have a type

Dimension rules:

Rule | Error
Valid dimension syntax | Invalid dimension selector
Known dimension | Unknown loop symbol
Dimension count match | Inconsistent dimension counts
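
For example, the optional-input rule is satisfied by declaring a default (the input name and value here are illustrative):

inputs:
  - name: Audience
    type: string
    required: false
    default: "general"   # optional inputs must declare a default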

A basic looped pipeline:

loops:
  - name: segment
    countInput: NumOfSegments
producers:
  - name: ScriptProducer
    producer: prompt/script
  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment
connections:
  - from: ScriptProducer.NarrationScript[segment]
    to: AudioProducer[segment].TextInput
  - from: AudioProducer[segment].SegmentAudio
    to: SegmentAudio[segment]

A sliding-window image-to-video pattern:

loops:
  - name: segment
    countInput: NumOfSegments
  - name: image
    countInput: NumOfSegments
    countInputOffset: 1   # N+1 images
producers:
  - name: ImageProducer
    producer: asset/text-to-image
    loop: image
  - name: ImageToVideoProducer
    producer: asset/image-to-video
    loop: segment
connections:
  - from: ImageProducer[image].SegmentImage
    to: ImageToVideoProducer[segment].InputImage1
  - from: ImageProducer[image+1].SegmentImage
    to: ImageToVideoProducer[segment].InputImage2

A fan-in composition pattern:

producers:
  - name: VideoProducer
    producer: asset/text-to-video
    loop: segment
  - name: AudioProducer
    producer: asset/text-to-speech
    loop: segment
  - name: TimelineComposer
    producer: composition/timeline-composer
collectors:
  - name: TimelineVideo
    from: VideoProducer[segment].SegmentVideo
    into: TimelineComposer.VideoSegments
    groupBy: segment
  - name: TimelineAudio
    from: AudioProducer[segment].SegmentAudio
    into: TimelineComposer.AudioSegments
    groupBy: segment

Validate a blueprint:

renku blueprints:validate ./my-blueprint.yaml

Preview the plan with a dry run:

renku generate --inputs=./inputs.yaml --blueprint=./blueprint.yaml --dry-run

Inspect the generated plan:

cat {builds}/{movie}/runs/rev-0001-plan.json

Problem | Cause | Solution
Fan-in empty | Missing collector | Add a collector from source to target
Dimension error | Mismatched dimensions | Verify source/target dimensions match
Optional input error | Missing default | Add default: to optional inputs
Missing artifacts | Wrong build path | Check dist/ per package

catalog/
├── blueprints/
│   ├── audio-only/
│   │   ├── audio-only.yaml
│   │   └── input-template.yaml
│   └── video-only/
│       ├── video-only.yaml
│       └── input-template.yaml
├── producers/
│   ├── asset/                    # Media generation producers
│   │   ├── text-to-image.yaml
│   │   ├── text-to-video.yaml
│   │   ├── text-to-speech.yaml
│   │   └── image-to-video.yaml
│   ├── prompt/                   # LLM-based producers
│   │   ├── script/
│   │   │   ├── script.yaml
│   │   │   └── script.toml
│   │   └── video/
│   │       └── video.yaml
│   └── composition/              # Composition producers
│       └── timeline-composer.yaml
└── models/
    └── *.yaml

Producer Categories:

  • asset/ - Media generation: images, video, audio, music
  • prompt/ - LLM-based script and prompt generation
  • composition/ - Timeline composition and video export

Item | Convention | Example
Blueprint files | kebab-case | image-to-video.yaml
Producer files | kebab-case | script.yaml
IDs | PascalCase | id: ImageToVideo
Loop names | lowercase | name: segment
Input/Artifact names | PascalCase | name: InquiryPrompt

Preferred: Qualified names resolve from the catalog:

# Works from any location - resolves from catalog
producers:
  - name: ScriptProducer
    producer: prompt/script         # → catalog/producers/prompt/script/script.yaml
  - name: VideoProducer
    producer: asset/text-to-video   # → catalog/producers/asset/text-to-video.yaml

Legacy: Relative paths for custom local producers:

# For custom producers not in the catalog
producers:
  - name: MyCustomProducer
    path: ./my-producers/custom.yaml   # Relative to blueprint file