The CLIPTextEncode node converts text prompts into an AI-understandable ‘language’ for image generation.
CLIP Text Encode (CLIPTextEncode) acts like a translator, converting your creative text prompts into a special “language” that the AI can understand, helping it accurately interpret what kind of image you want to create.
Imagine communicating with an artist who speaks a different language: you need a translator to convey the artwork you want accurately. This node is that translator. It uses the CLIP model (an AI model trained on vast numbers of image-text pairs) to understand your text description and convert it into “instructions” that the image generation model can follow.
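For readers curious what this “translation” looks like in code, below is a rough sketch of the node's encode step, modeled on the CLIPTextEncode class in ComfyUI's nodes.py (the actual source may differ between versions): it tokenizes the prompt, runs it through the CLIP text encoder, and packages the result as conditioning.

```python
# Sketch of CLIPTextEncode's core logic, modeled on ComfyUI's nodes.py.
# Method names follow ComfyUI's CLIP wrapper; details may vary by version.
class CLIPTextEncode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "text": ("STRING", {"multiline": True}),  # the prompt
            "clip": ("CLIP",),                        # a loaded CLIP model
        }}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"

    def encode(self, clip, text):
        # 1. Tokenize: split the prompt into CLIP's vocabulary of tokens.
        tokens = clip.tokenize(text)
        # 2. Encode: run the tokens through the text encoder to get the
        #    per-token embedding tensor plus a pooled summary vector.
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # 3. Package as CONDITIONING for downstream nodes (e.g. a sampler).
        return ([[cond, {"pooled_output": pooled}]],)
```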
| Parameter | Data Type | Input Method | Default | Range | Description |
|---|---|---|---|---|---|
| text | STRING | Text Input | Empty | Any text | Like detailed instructions to an artist: enter your image description here. Supports multi-line text for detailed descriptions. |
| clip | CLIP | Model Selection | None | Loaded CLIP models | Like choosing a specific translator: different CLIP models interpret artistic styles slightly differently. |
| Output Name | Data Type | Description |
|---|---|---|
| CONDITIONING | CONDITIONING | The translated “painting instructions”: detailed creative guidance that tells the AI model how to create an image matching your description. |
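To see how these inputs and the CONDITIONING output are wired in practice, here is a hedged sketch of queuing a workflow through ComfyUI's HTTP API. The node IDs, checkpoint filename, and server address are assumptions for illustration, and a complete workflow would also need sampler, VAE decode, and save nodes.

```python
import json
import urllib.request

# Minimal workflow fragment showing how CLIPTextEncode is wired.
# Node IDs ("4", "6"), the checkpoint filename, and the server address
# are assumptions for illustration.
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_v1-5.safetensors"},  # assumed filename
    },
    "6": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a watercolor painting of a fox in a forest",
            # "clip" is wired to output slot 1 (the CLIP model) of node "4"
            "clip": ["4", 1],
        },
    },
    # A runnable workflow also needs KSampler, VAEDecode, SaveImage, etc.
}

# Queue the workflow on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```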
To use an embedding model, place its file in the `ComfyUI/models/embeddings` folder, then reference it with `embedding:model_name` in your text. Example: if you have a model called `EasyNegative.pt`, you can use it like this: `embedding:EasyNegative`

You can also adjust how strongly each part of your prompt counts:

- `(beautiful:1.2)` will make the “beautiful” feature more prominent.
- Plain parentheses `()` have a default weight of 1.1.
- Select text and press `ctrl + up/down arrow` to quickly adjust weights.
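As an illustration of how the `(text:weight)` syntax is interpreted, here is a simplified, hypothetical parser sketch. It is not ComfyUI's actual parser (which also handles nesting, escapes, and `embedding:` references); it only shows the idea of attaching a weight to a span of text.

```python
import re

def parse_weighted_segments(prompt: str):
    """Split a prompt into (text, weight) pairs.

    Simplified sketch: handles only flat (text:weight) groups and gives
    everything else the default weight 1.0.
    """
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    segments = []
    pos = 0
    for m in pattern.finditer(prompt):
        if m.start() > pos:
            # Unweighted text before this group.
            segments.append((prompt[pos:m.start()], 1.0))
        # The weighted span, e.g. ("beautiful", 1.2).
        segments.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        segments.append((prompt[pos:], 1.0))
    return segments

print(parse_weighted_segments("a (beautiful:1.2) landscape"))
# [('a ', 1.0), ('beautiful', 1.2), (' landscape', 1.0)]
```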