Wan-Move is a motion-controllable video generation framework developed by Alibaba’s Tongyi Lab. It enables users to control object motion in generated videos by specifying point trajectories on the input image, making image-to-video generation more precise and controllable.
Key Features:
- High-Quality 5s 480p Motion Control: Generates 5-second, 480p videos with fine-grained motion controllability
- Latent Trajectory Guidance: Represents motion conditions by propagating first frame features along trajectories
- Fine-grained Point-level Control: Object motions are represented with dense point trajectories, enabling precise region-level control
- No Architecture Changes: Seamlessly integrates into Wan-I2V-14B without extra motion modules
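To make the "Latent Trajectory Guidance" idea concrete, here is an illustrative sketch, not the actual Wan-Move implementation: the motion condition can be pictured as sampling each tracked point's feature from the first frame and copying it along that point's trajectory into every later frame. The function name, shapes, and the scatter-style propagation below are all assumptions for illustration.

```python
import numpy as np

def propagate_features(first_frame_feats, trajectories):
    """Toy version of propagating first-frame features along trajectories.

    first_frame_feats: (H, W, C) feature map of the first frame.
    trajectories: (N, T, 2) integer (y, x) positions of N tracked points
        over T frames.
    Returns: (T, H, W, C) sparse per-frame condition maps.
    """
    H, W, C = first_frame_feats.shape
    N, T, _ = trajectories.shape
    cond = np.zeros((T, H, W, C), dtype=first_frame_feats.dtype)
    for n in range(N):
        # Sample the feature once, at the point's first-frame location...
        y0, x0 = trajectories[n, 0]
        feat = first_frame_feats[y0, x0]
        # ...then write it at the point's position in every frame.
        for t in range(T):
            y, x = trajectories[n, t]
            if 0 <= y < H and 0 <= x < W:
                cond[t, y, x] = feat
    return cond

# Example: one point moving diagonally across a 4x4 feature map.
feats = np.arange(4 * 4 * 2, dtype=np.float32).reshape(4, 4, 2)
traj = np.array([[[0, 0], [1, 1], [2, 2]]])  # (N=1, T=3, (y, x))
cond = propagate_features(feats, traj)
```

Because the condition maps live in the same spatial layout as the video latents, a representation like this can be fed to the base model without adding a separate motion module, which matches the "no architecture changes" claim above.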
Related Links:
Wan-Move image-to-video workflow
Download JSON Workflow File
Run on ComfyUI Cloud
Make sure your ComfyUI is updated. The workflows in this guide can be found in the Workflow Templates.
If you can’t find them in the templates, your ComfyUI may be outdated (the Desktop version’s updates may lag behind). If nodes are missing when loading a workflow, possible reasons:
- You are not using the latest ComfyUI version (Nightly version)
- Some nodes failed to import at startup
- The Desktop version is based on the ComfyUI stable release; it auto-updates when a new Desktop stable release is available.
- The Cloud version updates after each ComfyUI stable release.
So, if a core node mentioned in this document appears to be missing, it may not yet be included in the latest stable release. Please wait for the next stable release.
Model links
text_encoders
clip_vision
loras
diffusion_models
vae
Model Storage Location
📂 ComfyUI/
├── 📂 models/
│   ├── 📂 text_encoders/
│   │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
│   ├── 📂 clip_vision/
│   │   └── clip_vision_h.safetensors
│   ├── 📂 loras/
│   │   └── lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
│   ├── 📂 diffusion_models/
│   │   └── Wan21-WanMove_fp8_scaled_e4m3fn_KJ.safetensors
│   └── 📂 vae/
│       └── wan_2.1_vae.safetensors
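If the workflow reports missing models, a quick way to confirm everything is in place is to check the paths above programmatically. This small helper is not part of ComfyUI; it simply looks for the filenames listed in the tree under `ComfyUI/models/`.

```python
from pathlib import Path

# Expected subfolder -> filename pairs, taken from the storage tree above.
EXPECTED = {
    "text_encoders": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "clip_vision": "clip_vision_h.safetensors",
    "loras": "lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors",
    "diffusion_models": "Wan21-WanMove_fp8_scaled_e4m3fn_KJ.safetensors",
    "vae": "wan_2.1_vae.safetensors",
}

def check_models(comfyui_root):
    """Return the expected model files missing under <root>/models/<subdir>/."""
    root = Path(comfyui_root) / "models"
    return [f"{sub}/{name}" for sub, name in EXPECTED.items()
            if not (root / sub / name).is_file()]

missing = check_models("ComfyUI")  # adjust to your ComfyUI install path
print("All models present" if not missing else f"Missing: {missing}")
```

Run it from the directory containing your ComfyUI folder (or pass the full install path); any file it lists still needs to be downloaded from the model links above.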