Wan-Move is a motion-controllable video generation framework developed by Alibaba’s Tongyi Lab. It enables users to control object motion in generated videos by specifying point trajectories on the input image, making image-to-video generation more precise and controllable.

Key Features:
  • High-Quality 5s 480p Motion Control: Generates 5-second, 480p videos with fine-grained motion controllability
  • Latent Trajectory Guidance: Represents motion conditions by propagating first frame features along trajectories
  • Fine-grained Point-level Control: Object motions are represented with dense point trajectories, enabling precise region-level control
  • No Architecture Changes: Seamlessly integrates into Wan-I2V-14B without extra motion modules
Wan-Move image-to-video workflow
  • Download JSON Workflow File
  • Run on ComfyUI Cloud

Make sure your ComfyUI is updated. Workflows in this guide can be found in the Workflow Templates. If you can’t find them in the templates, your ComfyUI may be outdated (the Desktop version receives updates with some delay). If nodes are missing when loading a workflow, possible reasons:
  1. You are not using the latest ComfyUI version (Nightly version)
  2. Some nodes failed to import at startup
Model Storage Location
📂 ComfyUI/
├── 📂 models/
│   ├── 📂 text_encoders/
│   │      └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
│   ├── 📂 clip_vision/
│   │      └── clip_vision_h.safetensors
│   ├── 📂 loras/
│   │      └── lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
│   ├── 📂 diffusion_models/
│   │      └── Wan21-WanMove_fp8_scaled_e4m3fn_KJ.safetensors
│   └── 📂 vae/
│          └── wan_2.1_vae.safetensors