- “ComfyUI/models/text_encoders/”
- “ComfyUI/models/clip/”
If you save a model after ComfyUI has started, you'll need to refresh the ComfyUI frontend to get the latest model file path list.

Supported model formats:
- .ckpt
- .pt
- .pt2
- .bin
- .pth
- .safetensors
- .pkl
- .sft
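As a minimal sketch of how extension-based filtering like this can work, the snippet below checks a filename against the supported list. The function name and structure are illustrative only, not ComfyUI's actual API.

```python
import os

# Extensions the loader accepts, per the list above.
SUPPORTED_EXTENSIONS = {
    ".ckpt", ".pt", ".pt2", ".bin", ".pth", ".safetensors", ".pkl", ".sft",
}

def is_supported_model_file(filename: str) -> bool:
    """Return True if the file has a supported model extension (case-insensitive)."""
    _, ext = os.path.splitext(filename)
    return ext.lower() in SUPPORTED_EXTENSIONS
```

For example, `is_supported_model_file("clip_l.safetensors")` returns `True`, while a `.txt` file is rejected.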
Inputs
| Parameter | Data Type | Description |
| --- | --- | --- |
| clip_name | COMBO[STRING] | Specifies the name of the CLIP model to load. This name is used to locate the model file within the predefined directory structure. |
| type | COMBO[STRING] | Determines the type of CLIP model to load. As ComfyUI supports more models, new types will be added here; check the CLIPLoader class definition in nodes.py for details. |
| device | COMBO[STRING] | Chooses the device for loading the CLIP model. default runs the model on the GPU, while cpu forces loading on the CPU. |
Device Options Explained
When to choose “default”:
- You have sufficient GPU memory
- You want the best performance
- You want the system to optimize memory usage automatically

When to choose “cpu”:
- Insufficient GPU memory
- Need to reserve GPU memory for other models (such as the UNet)
- Running in a low-VRAM environment
- Debugging or special-purpose needs
Supported Combinations
| Model Type | Corresponding Encoder |
| --- | --- |
| stable_diffusion | clip-l |
| stable_cascade | clip-g |
| sd3 | t5 xxl / clip-g / clip-l |
| stable_audio | t5 base |
| mochi | t5 xxl |
| cosmos | old t5 xxl |
| lumina2 | gemma 2 2B |
| wan | umt5 xxl |
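For reference, a CLIPLoader node entry in ComfyUI's API-format workflow JSON might look like the sketch below. The filename is a placeholder, and the exact set of accepted `type` values depends on your ComfyUI version.

```python
import json

# Illustrative CLIPLoader node entry for an API-format workflow.
# "t5xxl_fp16.safetensors" is a placeholder filename, not a required name.
clip_loader_node = {
    "class_type": "CLIPLoader",
    "inputs": {
        "clip_name": "t5xxl_fp16.safetensors",  # placeholder; any file in text_encoders/
        "type": "sd3",        # must match the loaded encoder per the combinations table
        "device": "default",  # or "cpu" to force CPU loading
    },
}

print(json.dumps(clip_loader_node, indent=2))
```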
Outputs
| Parameter | Data Type | Description |
| --- | --- | --- |
| clip | CLIP | The loaded CLIP model, ready for use in downstream tasks or further processing. |