Available workflows
- Text to video — Generate video from a text prompt
- Reference to video — Use reference images, videos, or audio to guide generation
- First-last-frame to video — Provide a starting and ending frame to generate the video between them
Realism character reference is coming soon. Sign up for early access to get notified when it launches.
Model strengths
Multimodal reference-based generation
Add up to 9 reference images, 3 reference videos, and 3 reference audio files. The model pulls from all of them to create a single coherent output. Object details, textures, visual styles, timbres, and character features carry through the entire generation.
- Video references — Upload a clip and the model replicates its camera movements, action choreography, editing rhythm, and visual effects
- Audio references — The model syncs visuals to music, dialogue, or sound effects at a phoneme level, with support for multiple languages and dialects
- Image references — Lock in character identity, product appearance, and stylistic consistency. Faces, clothing, and material textures stay stable across the full video
Precise, targeted video editing
Edit existing video content directly without regenerating from scratch.
- Subject replacement — Swap characters in an existing clip while keeping the original motion, camera work, and composition intact
- Object-level editing — Add, remove, or change specific elements in a scene. The rest of the video stays untouched
- Inpainting — Reconstruct regions of existing video with context-aware generation that maintains temporal coherence and visual consistency
Seamless video extension
Turn short clips into longer sequences with natural continuity.
- Video extension — Extend any clip forward in time. Character appearance, lighting, and motion blend smoothly with the existing footage
- Preceding scene generation — Generate footage that leads into an existing clip, extending video backward to build narrative context
- Interpolation completion — Bridge gaps between two separate clips with generated intermediate footage that respects the visual and temporal logic of both endpoints
Get started
Try on Comfy Cloud
Open Seedance 2.0 in Comfy Cloud
- Update ComfyUI to the latest version, or access Comfy Cloud
- Find the Seedance 2.0 node in the Node Library, or load the Seedance 2.0 template from Templates
- Drop the node on your canvas and start creating
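If you prefer scripting to the canvas, a locally running ComfyUI instance also accepts workflows over its HTTP API via the `/prompt` endpoint. The sketch below is a minimal illustration, not the official integration: the `Seedance2TextToVideo` class type and its input names are placeholders — export the real API-format workflow from the Seedance 2.0 template to get the actual node identifiers.

```python
import json
from urllib import request

def build_text_to_video_workflow(prompt_text: str) -> dict:
    """Build a minimal API-format workflow for a text-to-video run.

    "Seedance2TextToVideo" is a hypothetical class_type used for
    illustration; replace it with the node name from the exported
    Seedance 2.0 template.
    """
    return {
        "1": {
            "class_type": "Seedance2TextToVideo",
            "inputs": {"prompt": prompt_text},
        },
    }

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> None:
    """Submit the workflow to a running ComfyUI server's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```

To try it, start ComfyUI locally, then call `queue_prompt(build_text_to_video_workflow("a fox running through snow"))`; the job appears in the ComfyUI queue like any canvas-submitted run.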