This article explains how to use the text-to-image and image-to-image capabilities of the Stability AI Stable Diffusion 3.5 API node in ComfyUI.
Text-to-image workflow:

1. The workflow image contains workflow metadata. Please download and drag it into ComfyUI to load the corresponding workflow.
2. Edit the `prompt` parameter in the `Stability AI Stable Diffusion 3.5 Image` node to input your desired image description. More detailed prompts often result in better image quality.
3. Use the `model` parameter to choose which SD 3.5 model version to use.
4. Use the `style_preset` parameter to control the visual style of the image. Different presets produce images with different stylistic characteristics, such as "cinematic" or "anime". Select "None" to not apply any specific style.
5. Edit the `String(Multiline)` node to modify negative prompts, specifying elements you don't want to appear in the generated image.
6. Click the `Run` button or use the shortcut `Ctrl(cmd) + Enter` to execute the image generation.
7. Once generation completes, the result appears in the `Save Image` node. The image will also be saved to the `ComfyUI/output/` directory.

By default, the `Load Image` node is in "Bypass" mode. To enable it, refer to the step guide, then right-click the node and set "Mode" to "Always" to enable image input, switching the workflow to image-to-image mode. Note that `image_denoise` has no effect when there is no input image.

Image-to-image workflow:

1. The workflow image contains workflow metadata. Please download and drag it into ComfyUI to load the corresponding workflow.
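Under the hood, the node calls Stability AI's hosted Stable Diffusion 3.5 service. The sketch below shows how the text-to-image parameters above could map onto a request to Stability's public `v2beta/stable-image/generate/sd3` endpoint; the helper name and defaults are illustrative assumptions, not the node's actual implementation.

```python
# Sketch: mapping the node's text-to-image parameters onto a Stability AI
# SD3.5 request. Field names follow Stability's public API; the helper
# function and its defaults are assumptions for illustration.
STABILITY_SD3_URL = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

def build_t2i_fields(prompt, model="sd3.5-large",
                     style_preset=None, negative_prompt=None):
    """Assemble the form fields for a text-to-image call."""
    if not prompt:
        raise ValueError("prompt must not be empty")
    fields = {
        "prompt": prompt,
        "model": model,              # SD 3.5 model version, as in the node
        "mode": "text-to-image",
        "output_format": "png",
    }
    # "None" in the node means: do not send any style preset.
    if style_preset and style_preset != "None":
        fields["style_preset"] = style_preset  # e.g. "cinematic", "anime"
    if negative_prompt:
        fields["negative_prompt"] = negative_prompt
    return fields

# The fields would then be POSTed as multipart/form-data with an API key:
# requests.post(STABILITY_SD3_URL, data=build_t2i_fields("a red fox"),
#               headers={"authorization": f"Bearer {key}", "accept": "image/*"},
#               files={"none": ""})
```

Keeping the payload construction separate from the network call makes the parameter mapping easy to inspect and test without an API key.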
2. Load a reference image in the `Load Image` node, which will serve as the basis for generation.
3. Edit the `prompt` parameter in the `Stability AI Stable Diffusion 3.5 Image` node to describe elements you want to change or enhance in the reference image.
4. Use the `style_preset` parameter to control the visual style of the image. Different presets produce images with different stylistic characteristics.
5. Use the `image_denoise` parameter (range 0.0-1.0) to control how much the original image is modified:
   - Values closer to 0.0 keep the result closer to the input image.
   - Values closer to 1.0 allow larger changes, approaching pure text-to-image generation.
6. Edit the `String(Multiline)` node to modify negative prompts, specifying elements you don't want to appear in the generated image.
7. Click the `Run` button or use the shortcut `Ctrl(cmd) + Enter` to execute the image generation.
8. Once generation completes, the result appears in the `Save Image` node. The image will also be saved to the `ComfyUI/output/` directory.
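The image-to-image variant can be sketched the same way. In Stability's public SD3 API, the comparable knob to the node's `image_denoise` is a `strength` field sent alongside the reference image; this mapping, like the helper below, is an assumption for illustration.

```python
# Sketch: mapping the node's image-to-image parameters onto a Stability AI
# SD3.5 request. The image_denoise -> strength mapping is an assumption.
def build_i2i_fields(prompt, image_denoise, model="sd3.5-large",
                     negative_prompt=None):
    """Assemble the form fields for an image-to-image call."""
    if not 0.0 <= image_denoise <= 1.0:
        raise ValueError("image_denoise must be in [0.0, 1.0]")
    fields = {
        "prompt": prompt,
        "model": model,
        "mode": "image-to-image",
        # 0.0 keeps the input image nearly intact; 1.0 lets the prompt dominate.
        "strength": str(image_denoise),
        "output_format": "png",
    }
    if negative_prompt:
        fields["negative_prompt"] = negative_prompt
    return fields

# The reference image is attached as a file part of the multipart request:
# requests.post(url, data=build_i2i_fields("add falling snow", 0.6),
#               headers={"authorization": f"Bearer {key}", "accept": "image/*"},
#               files={"image": open("input.png", "rb")})
```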