# Generating a Hero Frame then Animating It
Image-to-video models follow a starting frame more faithfully than text-to-video models follow a long prompt, giving you more control over the look of the scene. This workflow generates a hero frame with Flux Schnell, then feeds it directly into Wan 2.2 Lightning image-to-video, all in a single API call.
*(Figure: the generated hero frame alongside the animated 5-second clip; images not reproduced here.)*
## Project Setup

```shell
# Create a project directory.
mkdir prodia-animate-hero-workflow
cd prodia-animate-hero-workflow
```

Install Node (if not already installed):

macOS:

```shell
brew install node
# Close the current terminal and open a new one so that node is available.
```

Ubuntu:

```shell
apt install nodejs
# Close the current terminal and open a new one so that node is available.
```

Windows:

```shell
winget install -e --id OpenJS.NodeJS.LTS
# Close the current terminal and open a new one so that node is available.
```

Create the project skeleton:

```shell
# Requires node --version >= 18
# Initialize the project with npm.
npm init -y

# Install the prodia-js library.
npm install prodia --save
```

Install Python (if not already installed):

macOS:

```shell
brew install python
# Close the current terminal and open a new one so that python is available.
```

Ubuntu:

```shell
apt install python3 python3-venv python-is-python3
# Close the current terminal and open a new one so that python is available.
```

Windows:

```shell
winget install -e --id Python.Python.3.12
# Close the current terminal and open a new one so that python is available.
```

Set up a virtual environment and install dependencies:

```shell
# Requires python --version >= 3.12
python -m venv venv
source venv/bin/activate
pip install requests
```

Install curl (if not already installed):

macOS:

```shell
brew install curl
# Close the current terminal and open a new one so that curl is available.
```

Ubuntu:

```shell
apt install curl
# Close the current terminal and open a new one so that curl is available.
```

Windows:

```shell
# NOTE: Windows 10 and up have curl installed by default and this can be
# skipped.
winget install -e --id cURL.cURL
# Close the current terminal and open a new one so that curl is available.
```

Export your token so it can be used by the main code:

```shell
export PRODIA_TOKEN=your-token-here
```

Your token is exported to an environment variable. If you close or switch your shell you'll need to run `export PRODIA_TOKEN=your-token-here` again.
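If the variable is missing, API calls will typically fail later with an authentication error. A small guard in Python (a pattern sketch of ours, not part of the official setup) fails fast with a clearer message:

```python
import os
import sys

def require_token():
    """Return the Prodia API token, exiting early if it is missing."""
    token = os.getenv("PRODIA_TOKEN")
    if not token:
        # Fail fast instead of letting the API reject the request later.
        sys.exit("PRODIA_TOKEN is not set; run: export PRODIA_TOKEN=your-token-here")
    return token
```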
Create a main file for your project. For Node, `main.js`:

```javascript
const { createProdia } = require("prodia/v2");

const prodia = createProdia({
  token: process.env.PRODIA_TOKEN // get it from environment
});
```

For Python, create the following `main.py`:

```python
from requests.adapters import HTTPAdapter, Retry
import os
import requests
import sys

prodia_token = os.getenv('PRODIA_TOKEN')
prodia_url = 'https://inference.prodia.com/v2/job'

session = requests.Session()
retries = Retry(allowed_methods=None, status_forcelist=Retry.RETRY_AFTER_STATUS_CODES)
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))
session.headers.update({'Authorization': f"Bearer {prodia_token}"})
```

For the shell version, start `main.sh` with:

```shell
set -euo pipefail
```

You're now ready to make some API calls!
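The `Retry` policy configured for the requests session retries only the status codes that urllib3 associates with a `Retry-After` header, and `allowed_methods=None` extends retries to all HTTP verbs, including the POST that submits the job. You can inspect the status set directly (assuming urllib3, which requests installs, is available):

```python
from urllib3.util.retry import Retry

# urllib3's retry-after-capable statuses: 413 (payload too large),
# 429 (rate limited), and 503 (service unavailable).
print(sorted(Retry.RETRY_AFTER_STATUS_CODES))  # [413, 429, 503]
```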
## Generate then animate (in a single workflow)

The first job generates the hero frame. The second job receives that image as its starting frame and produces a 5-second 720p MP4. Wan 2.2 Lightning is the fastest image-to-video option on Prodia (~22s per generation).
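The workflow payload itself is plain JSON: an ordered list of jobs where each job's output feeds the next, which is why the img2vid job never names an input image. A small Python sketch of the payload's shape (the `serial_workflow` helper is ours, not part of any SDK):

```python
def serial_workflow(*jobs):
    """Chain jobs so each one's output feeds the next (helper name is ours)."""
    return {"type": "workflow.serial.v1", "config": {"jobs": list(jobs)}}

payload = serial_workflow(
    {"type": "inference.flux-fast.schnell.txt2img.v2",
     "config": {"prompt": "a tropical beach at sunrise", "seed": 42}},
    {"type": "inference.wan2-2.lightning.img2vid.v0",
     "config": {"prompt": "soft waves rolling in", "resolution": "720p", "seed": 42}},
)

# The img2vid job carries no image field: the serial workflow supplies
# the previous job's output as its starting frame.
assert "image" not in payload["config"]["jobs"][1]["config"]
```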
`main.js`:

```javascript
const { createProdia } = require("prodia/v2");
const fs = require("node:fs/promises");

const prodia = createProdia({
  token: process.env.PRODIA_TOKEN,
});

(async () => {
  const job = await prodia.job(
    {
      type: "workflow.serial.v1",
      config: {
        jobs: [
          {
            type: "inference.flux-fast.schnell.txt2img.v2",
            config: {
              prompt:
                "a tropical beach at sunrise with calm turquoise waves, palm trees swaying gently, photorealistic, cinematic lighting",
              seed: 42,
            },
          },
          {
            type: "inference.wan2-2.lightning.img2vid.v0",
            config: {
              prompt:
                "soft waves rolling in, palm tree leaves swaying in the breeze, the sun rising slowly",
              resolution: "720p",
              seed: 42,
            },
          },
        ],
      },
    },
    {
      accept: "video/mp4",
    },
  );

  const video = await job.arrayBuffer();
  await fs.writeFile("beach.mp4", new Uint8Array(video));
  // open beach.mp4
})();
```

Run it:

```shell
node main.js
```

`main.py`:

```python
from requests.adapters import HTTPAdapter, Retry
import os
import requests
import sys

prodia_token = os.getenv('PRODIA_TOKEN')
prodia_url = 'https://inference.prodia.com/v2/job'

session = requests.Session()
retries = Retry(allowed_methods=None, status_forcelist=Retry.RETRY_AFTER_STATUS_CODES)
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))
session.headers.update({'Authorization': f"Bearer {prodia_token}"})

headers = {
    'Accept': 'video/mp4',
}

job = {
    'type': 'workflow.serial.v1',
    'config': {
        'jobs': [
            {
                'type': 'inference.flux-fast.schnell.txt2img.v2',
                'config': {
                    'prompt': 'a tropical beach at sunrise with calm turquoise waves, palm trees swaying gently, photorealistic, cinematic lighting',
                    'seed': 42,
                },
            },
            {
                'type': 'inference.wan2-2.lightning.img2vid.v0',
                'config': {
                    'prompt': 'soft waves rolling in, palm tree leaves swaying in the breeze, the sun rising slowly',
                    'resolution': '720p',
                    'seed': 42,
                },
            },
        ],
    },
}

res = session.post(prodia_url, headers=headers, json=job, timeout=240)
print(f"Request ID: {res.headers['x-request-id']}")
print(f"Status: {res.status_code}")

if res.status_code != 200:
    print(res.text)
    sys.exit(1)

with open('beach.mp4', 'wb') as f:
    f.write(res.content)
```

Run it:

```shell
python main.py
```

`main.sh`:

```shell
set -euo pipefail

cat <<EOF > job.json
{
  "type": "workflow.serial.v1",
  "config": {
    "jobs": [
      {
        "type": "inference.flux-fast.schnell.txt2img.v2",
        "config": {
          "prompt": "a tropical beach at sunrise with calm turquoise waves, palm trees swaying gently, photorealistic, cinematic lighting",
          "seed": 42
        }
      },
      {
        "type": "inference.wan2-2.lightning.img2vid.v0",
        "config": {
          "prompt": "soft waves rolling in, palm tree leaves swaying in the breeze, the sun rising slowly",
          "resolution": "720p",
          "seed": 42
        }
      }
    ]
  }
}
EOF

curl -sSf --retry 3 --max-time 240 \
  -H "Authorization: Bearer $PRODIA_TOKEN" \
  -H 'Accept: video/mp4' \
  -H 'Content-Type: application/json' \
  --data-binary @job.json \
  --output beach.mp4 \
  https://inference.prodia.com/v2/job
```

Run it:

```shell
bash main.sh
```

Then open the result:

```shell
open beach.mp4      # macOS
xdg-open beach.mp4  # Linux
start beach.mp4     # Windows
```

A few notes:

- **Two prompts, two purposes.** The first prompt describes the scene; the second describes the motion. Keep the image prompt static and visual ("at sunrise", "palm trees", "cinematic lighting"), and let the video prompt focus on what moves ("waves rolling in", "leaves swaying").
- **Resolution.** Wan 2.2 Lightning supports `"720p"` (1280x720) and `"480p"` (832x480). 720p is the default.
- **Pinning seeds.** Both jobs accept a `seed` for reproducibility: useful when you want the same output every time, or when iterating on one prompt while keeping the other fixed.
- **Long-running jobs.** Image-to-video runs end-to-end in ~25–35 seconds for this chain. Set generous timeouts on your HTTP client (the curl example uses `--max-time 240`). If you run many of these concurrently, prefer the async API and poll for completion.
- **Other video models.** For higher quality at the cost of time, swap in Seedance Pro (`inference.seedance.pro.img2vid.v1`, ~60s, 1080p) or Veo (`inference.veo.fast.img2vid.v2` for fast, `inference.veo.img2vid.v2` for quality).
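The note on long-running jobs mentions polling for completion with the async API. This guide does not show the async endpoints, so the sketch below is only a generic poll loop; `status_url`, the `status` field, and the state names are placeholders to adapt to the actual API:

```python
import time

def poll_until_done(session, status_url, interval=2.0, timeout=300.0):
    """Poll a job-status URL until the job leaves a pending state.

    NOTE: status_url, the 'status' field, and the state names below are
    placeholders for illustration, not Prodia's documented async API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = session.get(status_url, timeout=30).json()
        if state.get("status") not in ("queued", "running"):
            return state
        time.sleep(interval)
    raise TimeoutError(f"job did not finish within {timeout}s")
```

Because the helper only calls `session.get(...).json()`, it works with any requests-compatible session and is easy to exercise against a stub in tests.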
