Model Highlights
- Dual Mode Functionality: A single architecture supports both animation and replacement functions, enabling easy operation switching.
- Advanced Body Motion Control: Uses spatially-aligned skeleton signals for accurate body movement replication.
- Precise Motion and Expression: Accurately reproduces the movements and facial expressions from the reference video.
- Natural Environment Integration: Seamlessly blends the replaced character with the original video environment.
- Smooth Long Video Generation: Iterative generation ensures consistent motion and visual flow in extended videos.
ComfyOrg Wan2.2 Animate stream replay
Make sure your ComfyUI is updated. Workflows in this guide can be found in the Workflow Templates.
If you can't find them in the templates, your ComfyUI may be outdated (the Desktop version's updates may lag behind). If nodes are missing when loading a workflow, possible reasons include:
- You are not using the latest ComfyUI version (Nightly version)
- You are using the Stable or Desktop version (latest changes may not be included)
- Some nodes failed to import at startup
About Wan2.2 Animate workflow
In this guide, we provide two workflows:
- A workflow that only uses core nodes (it is incomplete; you need to preprocess the image yourself first)
- A workflow that includes some custom nodes (it is complete and can be used directly, but new users might not know how to install the custom nodes)
Wan2.2 Animate ComfyUI native workflow (with custom nodes)
1. Download Workflow File
Download the following workflow file and drag it into ComfyUI to load the workflow.
Download JSON Workflow
Download the materials below as input:
Reference Image:
2. Model links
diffusion_models
- Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors: the fp8 scaled model from Kijai's repo
- wan2.2_animate_14B_bf16.safetensors: the original model weights
- lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors: a 4-step acceleration LoRA
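As a reference, assuming the default ComfyUI folder layout, these files would typically be placed as follows (note that the acceleration LoRA goes in the loras folder, not diffusion_models):

```
ComfyUI/
└── models/
    ├── diffusion_models/
    │   ├── Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors
    │   └── wan2.2_animate_14B_bf16.safetensors
    └── loras/
        └── lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
```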
3. Install custom nodes
Download the following workflow file and drag it into ComfyUI to load it. If you have ComfyUI-Manager installed, you can just click the Install missing nodes button to install the missing nodes.
We need to install the following custom nodes:
- ComfyUI-KJNodes
- comfyui_controlnet_aux
If you don't know how to install custom nodes, please refer to How to install custom nodes.
4. Workflow Instructions
The Wan2.2 Animate workflow has two modes: Mix and Move.
- Mix: use the reference image to replace the character in the video
- Move: use the character movement from the input video to animate the character in the reference image (like Wan2.2 Fun Control)
4.1 Mix mode

- If you are running this workflow for the first time, please use a small size for video generation in case you don't have enough VRAM to run the workflow; also, due to a limitation of the WanAnimateToVideo node, the video width and height should be multiples of 16 (see the sizing sketch after this list).
- Make sure all the models are loaded correctly
- Update the prompt if you want
- Upload the reference image; the character in this image will be the target character
- You can use the videos we provided as input videos for the first time; the DWPose Estimator node in comfyui_controlnet_aux will preprocess the input video into pose and face control videos
- The Points Editor node is from KJNodes. By default this node will not load the first frame from the input video; you need to run the workflow once or manually upload the first frame.
- Below the Points Editor node, we have added a note about how this node works and how to edit it; please refer to it.
- The “Video Extend” group is used to extend the output video length (the sketch after this list shows the frame math):
  - Each Video Extend group will extend the output by another 77 frames (around 4.8125 seconds). If your input video is less than 5 s, you might not need it.
  - If you want to extend further, copy and paste the group multiple times, and link the batch_images output from the last Video Extend to the next one, as well as the video_frame_offset from the last Video Extend to the next one.
- Click the Run button or use the shortcut Ctrl(cmd) + Enter to execute video generation
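The sizing and extension rules above can be sanity-checked with a small sketch. It is not part of the workflow itself; the helper names and the 16 fps assumption (implied by 77 frames ≈ 4.8125 seconds) are ours, so treat it as an illustration only.

```python
import math

# Assumptions from the notes above: the base run and each Video Extend group cover 77 frames,
# and 77 frames ~= 4.8125 s implies a 16 fps output.
FRAMES_PER_SEGMENT = 77
FPS = 16

def snap_to_16(value: int) -> int:
    """WanAnimateToVideo expects width and height to be multiples of 16."""
    return max(16, (value // 16) * 16)

def extend_groups_needed(input_seconds: float) -> int:
    """How many Video Extend groups to chain after the base generation."""
    total_frames = math.ceil(input_seconds * FPS)
    extra_frames = max(0, total_frames - FRAMES_PER_SEGMENT)
    return math.ceil(extra_frames / FRAMES_PER_SEGMENT)

print(snap_to_16(850))              # 848 -> use 848 instead of 850
print(extend_groups_needed(4.0))    # 0   -> under ~4.8 s, no Video Extend needed
print(extend_groups_needed(15.0))   # 3   -> chain three Video Extend groups
```

For example, a 15-second input at 16 fps is 240 frames, so three chained Video Extend groups would be enough (77 + 3 × 77 = 308 frames ≥ 240).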
4.2 Move mode
We used a subgraph in the Wan2.2 Animate workflow. Here is how to switch to Move mode: disconnect the background_video and character_mask inputs from the Video Sampling and output(Subgraph) node.