Seedance 2.0 turns text, images, video, and audio into high-quality video with synced audio, consistent characters, and cinematic camera control. The model is now available inside ComfyUI.
To use the API nodes, you must be logged in and on a permitted network. See the API Nodes Overview section of the documentation for the specific requirements.
Make sure your ComfyUI is up to date. The workflows in this guide can be found in the Workflow Templates; if you can't find them there, your ComfyUI may be outdated (Desktop version updates may lag behind). If nodes are missing when loading a workflow, possible reasons are:
  1. You are not using the latest ComfyUI version (Nightly version)
  2. Some nodes failed to import at startup

Available workflows

  • Text to video — Generate video from a text prompt
  • Reference to video — Use reference images, videos, or audio to guide generation
  • First-last-frame to video — Provide a starting and ending frame to generate the video between them
Realism character reference is coming soon. Sign up for early access to get notified when it launches.

Model strengths

Multimodal reference-based generation

Add up to 9 reference images, 3 reference videos, and 3 reference audio files. The model pulls from all of them to create a single coherent output. Object details, textures, visual styles, timbres, and character features carry through the entire generation.
  • Video references — Upload a clip and the model replicates its camera movements, action choreography, editing rhythm, and visual effects
  • Audio references — The model syncs visuals to music, dialogue, or sound effects at a phoneme level, with support for multiple languages and dialects
  • Image references — Lock in character identity, product appearance, and stylistic consistency. Faces, clothing, and material textures stay stable across the full video
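If you assemble reference inputs programmatically, it can help to check them against the documented limits (9 images, 3 videos, 3 audio files) before submitting a job. The sketch below is illustrative only: the function name and dict layout are not part of the ComfyUI or Seedance API; the limits themselves come from this guide.

```python
# Hypothetical pre-flight check for Seedance 2.0 reference inputs.
# The limits (9 images, 3 videos, 3 audio files) are the ones stated
# in this guide; everything else here is illustrative.

REFERENCE_LIMITS = {"images": 9, "videos": 3, "audio": 3}

def validate_references(images=(), videos=(), audio=()):
    """Return a list of human-readable errors; an empty list means the inputs fit."""
    counts = {"images": len(images), "videos": len(videos), "audio": len(audio)}
    return [
        f"too many {kind}: {n} provided, limit is {REFERENCE_LIMITS[kind]}"
        for kind, n in counts.items()
        if n > REFERENCE_LIMITS[kind]
    ]

# Example: 10 images exceeds the limit; videos and audio are within bounds.
errors = validate_references(images=[f"img{i}.png" for i in range(10)])
```

Running the check before uploading saves a round trip to the server when a batch of references would be rejected anyway.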

Precise, targeted video editing

Edit existing video content directly without regenerating from scratch.
  • Subject replacement — Swap characters in an existing clip while keeping the original motion, camera work, and composition intact
  • Object-level editing — Add, remove, or change specific elements in a scene. The rest of the video stays untouched
  • Inpainting — Reconstruct regions of existing video with context-aware generation that maintains temporal coherence and visual consistency

Seamless video extension

Turn short clips into longer sequences with natural continuity.
  • Video extension — Extend any clip forward in time. Character appearance, lighting, and motion blend smoothly with the existing footage
  • Preceding scene generation — Generate footage that leads into an existing clip, extending video backward to build narrative context
  • Interpolation completion — Bridge gaps between two separate clips with generated intermediate footage that respects the visual and temporal logic of both endpoints

Get started

Try on Comfy Cloud

Open Seedance 2.0 in Comfy Cloud
  1. Update ComfyUI to the latest version, or access Comfy Cloud
  2. Find the Seedance 2.0 node in the Node Library, or load the Seedance 2.0 template from Templates
  3. Drop the node on your canvas and start creating
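Beyond the canvas, a saved workflow can also be queued against a running ComfyUI server over its standard HTTP API. The sketch below builds the JSON body that ComfyUI's `/prompt` endpoint expects; note that the node class name `Seedance2` and its input names are placeholders, not the real node definition. Check the actual Seedance 2.0 node in your Node Library for the correct `class_type` and inputs.

```python
import json

# Sketch: wrap a node graph in the payload shape ComfyUI's /prompt endpoint
# accepts ({"prompt": <graph>, "client_id": ...}). The node class name
# "Seedance2" and its inputs below are placeholders, not the real node.

def build_prompt_payload(graph, client_id="docs-example"):
    """Wrap a node graph in the JSON body for ComfyUI's /prompt endpoint."""
    return {"prompt": graph, "client_id": client_id}

graph = {
    "1": {  # node ids are strings in ComfyUI's API workflow format
        "class_type": "Seedance2",  # placeholder class name
        "inputs": {"prompt": "a cat surfing at sunset"},
    },
}

body = json.dumps(build_prompt_payload(graph))

# To actually submit (requires a running local ComfyUI server):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:8188/prompt",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req)
```

Exporting a workflow in API format from ComfyUI gives you the exact graph dict to drop in place of the placeholder above.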