
This node applies a style model to a given conditioning, enhancing or altering its style based on the output of a CLIP vision model. It merges the style model’s conditioning into the existing conditioning, producing a seamless blend of styles in the generation process.

Inputs

Required

| Parameter | Comfy dtype | Description |
| --- | --- | --- |
| `conditioning` | `CONDITIONING` | The original conditioning data to which the style model’s conditioning will be applied. It defines the base context or style that will be enhanced or altered. |
| `style_model` | `STYLE_MODEL` | The style model used to generate new conditioning from the CLIP vision model’s output. It defines the new style to be applied. |
| `clip_vision_output` | `CLIP_VISION_OUTPUT` | The output of a CLIP vision model, which the style model uses to generate new conditioning. It provides the visual context needed for style application. |

Outputs

| Parameter | Comfy dtype | Description |
| --- | --- | --- |
| `conditioning` | `CONDITIONING` | The enhanced or altered conditioning, incorporating the style model’s output. It is the final, styled conditioning, ready for further processing or generation. |
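Conceptually, the node blends styles by appending the style model’s token sequence (derived from the CLIP vision output) to the base conditioning’s token sequence. Below is a minimal sketch of that blending step, using NumPy arrays in place of real tensors; the `apply_style` helper and the array shapes are illustrative assumptions, not ComfyUI’s actual API:

```python
import numpy as np

def apply_style(conditioning, style_tokens):
    """Append style tokens to the base conditioning along the token axis.

    conditioning: (batch, n_tokens, dim) base conditioning tokens
    style_tokens: (batch, m_tokens, dim) tokens the style model derives
                  from a CLIP vision output
    Returns an array of shape (batch, n_tokens + m_tokens, dim).
    """
    return np.concatenate([conditioning, style_tokens], axis=1)

# Illustrative shapes: 77 text-conditioning tokens plus 16 style tokens,
# each a 768-dimensional embedding (hypothetical sizes).
base = np.zeros((1, 77, 768))
style = np.ones((1, 16, 768))
styled = apply_style(base, style)
print(styled.shape)  # (1, 93, 768)
```

The original conditioning tokens are left untouched; the style information rides along as extra tokens, which is why the output remains a regular `CONDITIONING` usable by downstream nodes.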