Official usage guide for the Alibaba Cloud Tongyi Wanxiang 2.2 (Wan2.2) video generation models in ComfyUI
| Model Type | Model Name | Parameters | Main Function | Model Repository |
| --- | --- | --- | --- | --- |
| Hybrid Model | Wan2.2-TI2V-5B | 5B | Hybrid version supporting both text-to-video and image-to-video; a single model covers both core tasks | 🤗 Wan2.2-TI2V-5B |
| Image-to-Video | Wan2.2-I2V-A14B | 14B | Converts static images into dynamic videos while maintaining content consistency and smooth motion | 🤗 Wan2.2-I2V-A14B |
| Text-to-Video | Wan2.2-T2V-A14B | 14B | Generates high-quality videos from text descriptions, with cinematic-level aesthetic control and precise semantic compliance | 🤗 Wan2.2-T2V-A14B |
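The workflows below load repackaged .safetensors files rather than the full repositories above. In a default ComfyUI installation these files are expected in the standard model folders; the layout shown here is a typical arrangement (adjust if you use extra_model_paths.yaml or a shared model directory):

```
ComfyUI/
├── models/
│   ├── diffusion_models/
│   │   ├── wan2.2_ti2v_5B_fp16.safetensors
│   │   ├── wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
│   │   ├── wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
│   │   ├── wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors
│   │   └── wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors
│   ├── text_encoders/
│   │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
│   └── vae/
│       ├── wan2.2_vae.safetensors
│       └── wan_2.1_vae.safetensors
```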
## Wan2.2 TI2V 5B Hybrid Version Workflow

Go to Workflow -> Browse Templates -> Video and find "Wan2.2 5B video generation" to load the workflow.
Download JSON Workflow File
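When you load the template, ComfyUI will usually offer to download any missing models. If you prefer to fetch the files yourself, the sketch below uses huggingface_hub; the repository id Comfy-Org/Wan_2.2_ComfyUI_Repackaged and the split_files/... in-repo paths are assumptions to verify on Hugging Face (you can also download the files manually from the repositories listed in the table above), and the target paths assume you run the script from the folder containing your ComfyUI install.

```python
# Sketch: download the three files needed by the 5B workflow into ComfyUI's model folders.
# Repo id and in-repo paths are assumptions; check them on Hugging Face first.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

REPO = "Comfy-Org/Wan_2.2_ComfyUI_Repackaged"  # assumed repackaged repo id

FILES = [
    ("split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors", "ComfyUI/models/diffusion_models"),
    ("split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors", "ComfyUI/models/text_encoders"),
    ("split_files/vae/wan2.2_vae.safetensors", "ComfyUI/models/vae"),
]

for remote_path, target_dir in FILES:
    cached = hf_hub_download(repo_id=REPO, filename=remote_path)  # downloads into the HF cache
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, Path(target_dir) / Path(remote_path).name)
    print(f"placed {Path(remote_path).name} in {target_dir}")
```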
Then complete the workflow step by step:

1. Make sure the Load Diffusion Model node loads the wan2.2_ti2v_5B_fp16.safetensors model.
2. Make sure the Load CLIP node loads the umt5_xxl_fp8_e4m3fn_scaled.safetensors model.
3. Make sure the Load VAE node loads the wan2.2_vae.safetensors model.
4. For image-to-video, use the Load image node to upload an image; skip this step for text-to-video.
5. In the Wan22ImageToVideoLatent node, you can adjust the size settings and the total number of video frames (length).
6. Edit your prompt in the CLIP Text Encoder node.
7. Click the Run button, or use the shortcut Ctrl(cmd) + Enter, to execute video generation.
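If you would rather queue the generation from a script than click Run, ComfyUI exposes an HTTP endpoint for this. The sketch below assumes the default server address 127.0.0.1:8188 and a workflow exported from ComfyUI in API format to a hypothetical file named wan2_2_5b_api.json (the template/UI JSON downloaded above is not API format and cannot be posted directly).

```python
# Sketch: queue an API-format workflow against a running ComfyUI instance.
# Assumes the default listen address; the JSON filename is a placeholder.
import json
import urllib.request

with open("wan2_2_5b_api.json", "r", encoding="utf-8") as f:
    graph = json.load(f)  # node graph in ComfyUI's API format

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt_id
```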
## Wan2.2 14B T2V Text-to-Video Workflow

Go to Workflow -> Browse Templates -> Video and find "Wan2.2 14B T2V" to load the workflow.
Or update your ComfyUI to the latest version, then download the following video and drag it into ComfyUI to load the workflow.
Download JSON Workflow File
Then complete the workflow step by step:

1. Make sure the first Load Diffusion Model node loads the wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors model.
2. Make sure the second Load Diffusion Model node loads the wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors model.
3. Make sure the Load CLIP node loads the umt5_xxl_fp8_e4m3fn_scaled.safetensors model.
4. Make sure the Load VAE node loads the wan_2.1_vae.safetensors model.
5. In the EmptyHunyuanLatentVideo node, you can adjust the size settings and the total number of video frames (length).
6. Edit your prompt in the CLIP Text Encoder node.
7. Click the Run button, or use the shortcut Ctrl(cmd) + Enter, to execute video generation.
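The length value in the latent node is the number of frames to generate, so the clip duration follows directly from it and the frame rate set on the video output node. The numbers below (81 frames, 16 fps) are assumed defaults; read the actual values from your workflow.

```python
# Quick sanity check: clip duration from frame count ("length") and output fps.
# 81 frames and 16 fps are assumptions; use your workflow's actual settings.
def clip_duration_seconds(length_frames: int, fps: int) -> float:
    return length_frames / fps

print(clip_duration_seconds(81, 16))   # ~5.1 s with the assumed defaults
print(clip_duration_seconds(121, 16))  # a longer length means a proportionally longer, costlier clip
```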
## Wan2.2 14B I2V Image-to-Video Workflow

Go to Workflow -> Browse Templates -> Video and find "Wan2.2 14B I2V" to load the workflow.
Or update your ComfyUI to the latest version, then download the following video and drag it into ComfyUI to load the workflow.
Download JSON Workflow File
You can use the following image as input:

Then complete the workflow step by step:

1. Make sure the first Load Diffusion Model node loads the wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors model.
2. Make sure the second Load Diffusion Model node loads the wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors model.
3. Make sure the Load CLIP node loads the umt5_xxl_fp8_e4m3fn_scaled.safetensors model.
4. Make sure the Load VAE node loads the wan_2.1_vae.safetensors model.
5. In the Load Image node, upload the image to be used as the initial frame.
6. Edit your prompt in the CLIP Text Encoder node.
7. In the EmptyHunyuanLatentVideo node, you can adjust the size settings and the total number of video frames (length).
8. Click the Run button, or use the shortcut Ctrl(cmd) + Enter, to execute video generation.
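The initial frame keeps its composition best when its aspect ratio matches the video size you set in step 7; otherwise it may be cropped or stretched during generation. A minimal pre-processing sketch with Pillow (the 1280x704 target and the file names are placeholder assumptions):

```python
# Sketch: center-crop and resize an input image to the intended video resolution
# before uploading it in the Load Image node. Target size and file names are placeholders.
from PIL import Image, ImageOps

TARGET_W, TARGET_H = 1280, 704  # assumed example resolution; match your latent size settings

img = Image.open("input.png").convert("RGB")
img = ImageOps.fit(img, (TARGET_W, TARGET_H), method=Image.Resampling.LANCZOS)  # crop to aspect, then resize
img.save("input_prepped.png")
```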