Wan-Animate is a unified framework for character animation and replacement developed by the WAN Team. The model can animate any character based on a performer’s video, precisely replicating the performer’s facial expressions and movements to generate highly realistic character videos. It can also replace characters in a video with animated characters, preserving their expressions and movements while replicating the original lighting and color tone for seamless environmental integration.

Model Highlights

  • Dual-Mode Functionality: A single architecture supports both animation and replacement, making it easy to switch between the two modes.
  • Advanced Body Motion Control: Uses spatially aligned skeleton signals for accurate body movement replication.
  • Precise Motion and Expression: Accurately reproduces the movements and facial expressions from the reference video.
  • Natural Environment Integration: Seamlessly blends the replaced character into the original video environment.
  • Smooth Long Video Generation: Iterative generation ensures consistent motion and visual flow in extended videos.

ComfyOrg Wan2.2 Animate stream replay

Make sure your ComfyUI is updated. The workflows in this guide can be found in the Workflow Templates; if you can’t find them there, your ComfyUI may be outdated (the Desktop version’s updates can lag behind). If nodes are missing when loading a workflow, possible reasons are:
  1. You are not using the latest (Nightly) version of ComfyUI.
  2. You are using the Stable or Desktop version, which may not include the latest changes.
  3. Some nodes failed to import at startup.

About Wan2.2 Animate workflow

In this doc, we provide two workflows:
  1. A workflow that uses only core nodes (it is incomplete; you need to preprocess the inputs yourself first).
  2. A workflow that includes some custom nodes (it is complete and can be used directly, but new users may not know how to install the custom nodes).

Wan2.2 Animate ComfyUI Workflow (with custom nodes)

1. Download Workflow File

Download the following workflow file and drag it into ComfyUI to load the workflow.

Download JSON Workflow

Download the materials below as input:
  • Reference Image
  • Input Video

2. Download Models

Download the following models and save them to the corresponding folders under ComfyUI/models/:
ComfyUI/
├───📂 models/
│   ├───📂 diffusion_models/
│   │   ├─── Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors
│   │   └─── wan2.2_animate_14B_bf16.safetensors
│   ├───📂 loras/
│   │   └─── lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
│   ├───📂 text_encoders/
│   │   └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors
│   ├───📂 clip_vision/
│   │   └─── clip_vision_h.safetensors
│   └───📂 vae/
│       └─── wan_2.1_vae.safetensors
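If you want to double-check that everything landed in the right folders, here is a minimal Python sketch (not part of the official guide). It assumes ComfyUI/ is your install directory and checks for the fp8 diffusion model; swap in wan2.2_animate_14B_bf16.safetensors if that is the variant you downloaded.

```python
# Minimal sketch: verify the model files above are where ComfyUI expects them.
# Assumptions: ComfyUI/ is the install directory; you downloaded the fp8
# diffusion model (swap in wan2.2_animate_14B_bf16.safetensors otherwise).
from pathlib import Path

COMFYUI_DIR = Path("ComfyUI")

EXPECTED = {
    "diffusion_models": "Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors",
    "loras": "lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors",
    "text_encoders": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "clip_vision": "clip_vision_h.safetensors",
    "vae": "wan_2.1_vae.safetensors",
}

for folder, filename in EXPECTED.items():
    path = COMFYUI_DIR / "models" / folder / filename
    print(f"[{'OK' if path.exists() else 'MISSING'}] {path}")
```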

3. Install custom nodes

We need to install the following custom nodes: comfyui_controlnet_aux (for the DWPose Estimator node) and ComfyUI-KJNodes (for the Points Editor node). If you have ComfyUI-Manager installed, you can simply click the Install missing nodes button after loading the workflow. If you don’t know how to install custom nodes, please refer to How to install custom nodes.

4. Workflow Instructions

The Wan2.2 Animate workflow has two modes: Mix and Move.
  • Mix: Uses the reference image to replace the character in the input video.
  • Move: Uses the character movement from the input video to animate the character in the reference image (like Wan2.2 Fun Control).

4.1 Mix mode

Workflow Instructions
  1. If you are running this workflow for the first time, use a small size for video generation in case you don’t have enough VRAM. Due to a limitation of the WanAnimateToVideo node, the video width and height must be multiples of 16 (see the sketch after this list).
  2. Make sure all the models are loaded correctly.
  3. Update the prompt if you want.
  4. Upload the reference image; the character in this image will be the target character.
  5. You can use the videos we provide as input videos for your first run; the DWPose Estimator node from comfyui_controlnet_aux will preprocess the input video into pose and face control videos.
  6. The Points Editor is from KJNodes. By default, this node will not load the first frame from the input video; you need to run the workflow once or upload the first frame manually.
    • Below the Points Editor node, we have added a note about how this node works and how to edit it; please refer to it.
  7. The “Video Extend” group is there to extend the output video length:
    • Each Video Extend group adds another 77 frames (around 4.8125 seconds; see the sketch after this list).
    • If your input video is shorter than 5 s, you might not need it.
    • To extend further, copy and paste the group as many times as needed, and link both batch_images and video_frame_offset from each Video Extend group to the next.
  8. Click the Run button, or use the shortcut Ctrl(cmd) + Enter, to execute the video generation.
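Two numbers in the steps above are easy to get wrong: the width and height must be multiples of 16, and each Video Extend group adds 77 frames (4.8125 seconds, which implies 16 fps). The Python sketch below is a hypothetical helper, not part of the workflow; it snaps a resolution to valid values and estimates how many Video Extend groups a target duration needs, assuming the base generation is also 77 frames.

```python
# Hypothetical helper (not a ComfyUI node) for two constraints noted above:
# WanAnimateToVideo requires width/height in multiples of 16, and each
# "Video Extend" group adds 77 frames. 77 frames = 4.8125 s implies 16 fps.
FPS = 16           # inferred from 77 frames = 4.8125 s; treat as an assumption
CHUNK_FRAMES = 77  # frames per generation segment / per Video Extend group

def snap_to_16(value: int) -> int:
    """Round a dimension down to the nearest multiple of 16 (minimum 16)."""
    return max(16, (value // 16) * 16)

def extend_groups_needed(target_seconds: float) -> int:
    """Extra Video Extend groups needed, assuming the base run is 77 frames."""
    total_frames = int(target_seconds * FPS)
    extra = max(0, total_frames - CHUNK_FRAMES)
    return -(-extra // CHUNK_FRAMES)  # ceiling division

print(snap_to_16(720), snap_to_16(1080))  # 720 1072
print(extend_groups_needed(15))           # 15 s = 240 frames -> 3 extra groups
```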

4.2 Move mode

We use a subgraph in the Wan2.2 Animate workflow. To switch to Move mode, disconnect the background_video and character_mask inputs from the Video Sampling and output (Subgraph) node.