
Generating a Hero Frame then Animating It

Image-to-video models follow a starting frame more faithfully than text-to-video models follow a long prompt — you get more control over the look of the scene. This Workflow generates a hero frame with Flux Schnell, then feeds it directly into Wan 2.2 Lightning image-to-video, all in a single API call.

[Images: the generated hero frame alongside the animated 5-second clip]
Hero frame: tropical beach at sunrise with palm trees and turquoise water
Terminal window
# Create a project directory.
mkdir prodia-animate-hero-workflow
cd prodia-animate-hero-workflow

Install Node with Homebrew if it isn't already installed (macOS; on other platforms, install Node 18+ from nodejs.org):

Terminal window
brew install node
# Close the current terminal and open a new one so that node is available.

Create project skeleton:

Terminal window
# Requires node --version >= 18
# Initialize the project with npm.
npm init -y
# Install the prodia-js library.
npm install prodia --save
Export your Prodia API token as an environment variable so the code can read it:

Terminal window
# Export your token so it can be used by the main code.
export PRODIA_TOKEN=your-token-here

Your token is exported to an environment variable. If you close or switch your shell you’ll need to run export PRODIA_TOKEN=your-token-here again.
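If you want the token to survive new shell sessions, you can append the export to your shell profile instead. A minimal sketch, assuming zsh (use ~/.bashrc if you use bash):

```shell
# Persist the token across shell sessions by adding it to the zsh profile.
# (~/.zshrc is an assumption — adjust for your shell.)
echo 'export PRODIA_TOKEN=your-token-here' >> ~/.zshrc

# Reload the profile so the current shell picks it up.
source ~/.zshrc
```

Remember to replace your-token-here with your actual token before appending.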

Create a main file for your project:

main.js
const { createProdia } = require("prodia/v2");

const prodia = createProdia({
  token: process.env.PRODIA_TOKEN, // read the token from the environment
});

You’re now ready to make some API calls!

Generate then animate (in a single workflow)


The first job generates the hero frame. The second job receives that image as its starting frame and produces a 5-second 720p MP4. Wan 2.2 Lightning is the fastest image-to-video option on Prodia (~22s per generation).

main.js
const { createProdia } = require("prodia/v2");
const fs = require("node:fs/promises");

const prodia = createProdia({
  token: process.env.PRODIA_TOKEN,
});

(async () => {
  const job = await prodia.job(
    {
      type: "workflow.serial.v1",
      config: {
        jobs: [
          {
            type: "inference.flux-fast.schnell.txt2img.v2",
            config: {
              prompt:
                "a tropical beach at sunrise with calm turquoise waves, palm trees swaying gently, photorealistic, cinematic lighting",
              seed: 42,
            },
          },
          {
            type: "inference.wan2-2.lightning.img2vid.v0",
            config: {
              prompt:
                "soft waves rolling in, palm tree leaves swaying in the breeze, the sun rising slowly",
              resolution: "720p",
              seed: 42,
            },
          },
        ],
      },
    },
    {
      accept: "video/mp4",
    },
  );

  const video = await job.arrayBuffer();
  await fs.writeFile("beach.mp4", new Uint8Array(video));
  // open beach.mp4 to view the result
})();
Run the workflow:

Terminal window
node main.js
Then open the result (macOS):

Terminal window
open beach.mp4
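If something goes wrong mid-chain, the bytes you saved may not be a video at all. A quick sanity check you can add yourself (this helper is our own, not part of prodia-js): a valid MP4 file begins with a box whose type field at bytes 4–7 reads "ftyp".

```javascript
// Helper (not part of prodia-js): check whether a buffer looks like an MP4.
// An MP4 file starts with a box whose type field (bytes 4-7) is "ftyp".
function looksLikeMp4(buffer) {
  return buffer.length >= 8 && buffer.toString("ascii", 4, 8) === "ftyp";
}

// Usage after running main.js:
// const fs = require("node:fs");
// console.log(looksLikeMp4(fs.readFileSync("beach.mp4")));
```

If the check fails, log the response body as text — error payloads from the API are usually JSON, which makes the failure easy to diagnose.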
  • Two prompts, two purposes. The first prompt describes the scene — the second describes the motion. Keep the image prompt static and visual (“at sunrise”, “palm trees”, “cinematic lighting”), and let the video prompt focus on what moves (“waves rolling in”, “leaves swaying”).
  • Resolution. Wan 2.2 Lightning supports "720p" (1280x720) and "480p" (832x480). 720p is the default.
  • Pinning seeds. Both jobs accept a seed for reproducibility — useful when you want the same output every time, or when iterating on one prompt while keeping the other fixed.
  • Long-running jobs. This chain runs end-to-end in roughly 25–35 seconds. Set generous timeouts on your HTTP client if you call the REST API directly. If you run many of these concurrently, prefer the async API and poll for completion.
  • Other video models. For higher quality at the cost of time, swap in Seedance Pro (inference.seedance.pro.img2vid.v1, ~60s, 1080p) or Veo (inference.veo.fast.img2vid.v2 for fast, inference.veo.img2vid.v2 for quality).
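Swapping models only changes the second job in the workflow. A sketch of the jobs array with Seedance Pro substituted — note that the video job's config fields here are assumptions, so check each model's reference for its exact parameters:

```javascript
// Sketch: the same serial workflow, with the video step swapped to
// Seedance Pro. Only the second job's type (and model-specific config)
// changes; the hero-frame job is untouched.
const jobs = [
  {
    type: "inference.flux-fast.schnell.txt2img.v2",
    config: {
      prompt: "a tropical beach at sunrise, photorealistic, cinematic lighting",
      seed: 42,
    },
  },
  {
    type: "inference.seedance.pro.img2vid.v1", // ~60s, 1080p
    config: {
      // Assumed field — verify against the model's documentation.
      prompt: "soft waves rolling in, palm leaves swaying in the breeze",
    },
  },
];

console.log(jobs.map((j) => j.type).join(" -> "));
// -> inference.flux-fast.schnell.txt2img.v2 -> inference.seedance.pro.img2vid.v1
```

The rest of the call — wrapping the array in a workflow.serial.v1 job and accepting video/mp4 — stays exactly as in main.js above.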