This article explains how to use the text-to-image and image-to-image capabilities of the Stability AI Stable Diffusion 3.5 API node in ComfyUI.
The Stability AI Stable Diffusion 3.5 Image node allows you to use Stability AI’s Stable Diffusion 3.5 model to create high-quality, detail-rich image content through text prompts or reference images.
In this guide, we will show you how to set up workflows for both text-to-image and image-to-image generation using this node.
To use the API nodes, make sure you are logged in and working from a permitted network environment. Please refer to the API Nodes Overview section of the documentation for the specific requirements.
If you find missing nodes when loading the workflow files below, make sure you have updated ComfyUI to the latest Development (Nightly) version. See the How to Update ComfyUI section to learn how to update ComfyUI.
The image below contains workflow information in its metadata. Please download it and drag it into ComfyUI to load the corresponding workflow.
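As an aside, the drag-and-drop loading works because ComfyUI embeds the workflow JSON in the PNG's text metadata. If you want to check whether an image actually carries a workflow before dragging it in, here is a minimal sketch, assuming Pillow is installed; the "workflow" and "prompt" key names are an assumption based on how ComfyUI typically stores them:

```python
# Sketch: inspect the ComfyUI workflow embedded in a PNG's metadata.
# Assumes Pillow is installed (pip install pillow); the "workflow"/"prompt"
# key names are an assumption based on how ComfyUI typically stores them.
import json
from PIL import Image

def embedded_workflow(png_path: str):
    """Return the embedded workflow as a dict, or None if the PNG has none."""
    with Image.open(png_path) as im:
        text_chunks = getattr(im, "text", {})  # PNG tEXt/iTXt chunks
    raw = text_chunks.get("workflow") or text_chunks.get("prompt")
    return json.loads(raw) if raw else None

if __name__ == "__main__":
    wf = embedded_workflow("sd35_text_to_image.png")  # hypothetical file name
    print("workflow embedded:", wf is not None)
```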
You can follow the numbered steps in the image to complete the basic text-to-image workflow:
1. Modify the `prompt` parameter in the `Stability AI Stable Diffusion 3.5 Image` node to input your desired image description. More detailed prompts often result in better image quality.
2. Modify the `model` parameter to choose which SD 3.5 model version to use.
3. Modify the `style_preset` parameter to control the visual style of the image. Different presets produce images with different stylistic characteristics, such as "cinematic" or "anime". Select "None" to not apply any specific style.
4. Edit the `String(Multiline)` node to modify the negative prompt, specifying elements you don't want to appear in the generated image.
5. Click the `Run` button or use the shortcut `Ctrl(Cmd) + Enter` to execute the image generation.
6. After generation completes, you can view the result in the `Save Image` node. The image will also be saved to the `ComfyUI/output/` directory.

Note: the `Load Image` node is in "Bypass" mode. To enable it, refer to the step guide in the image and right-click the node to set "Mode" to "Always"; this enables the image input and switches the workflow to image-to-image mode. The `image_denoise` parameter has no effect when there is no input image.
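For context, the parameters in the steps above (prompt, negative prompt, model) broadly mirror the fields of Stability AI's public Stable Image SD3 REST endpoint. The sketch below calls that endpoint directly with your own Stability API key rather than through the ComfyUI API node or its account system; the model name, prompts, and output path are illustrative assumptions, and this is not the exact code the node runs:

```python
# Sketch: text-to-image via Stability AI's Stable Image (SD3 / SD3.5) REST endpoint.
# Assumes the `requests` library and a STABILITY_API_KEY environment variable;
# this illustrates the underlying API, not the exact code the ComfyUI node runs.
import os
import requests

API_URL = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

response = requests.post(
    API_URL,
    headers={
        "authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
        "accept": "image/*",  # return the raw image bytes
    },
    files={"none": ""},  # forces multipart/form-data encoding
    data={
        "prompt": "a cinematic photo of a fox in a snowy forest",  # your image description
        "negative_prompt": "blurry, low quality",                  # elements to avoid
        "model": "sd3.5-large",                                    # SD 3.5 model version
        "output_format": "png",
    },
    timeout=120,
)
response.raise_for_status()

with open("sd35_text_to_image_api.png", "wb") as f:  # hypothetical output path
    f.write(response.content)
```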
The image below contains workflow information in its metadata. Please download it and drag it into ComfyUI to load the corresponding workflow.
Download the image below to use as the input image:

1. Make sure the `Load Image` node is enabled and has the input image loaded (see the note above about switching it out of "Bypass" mode).
2. Adjust the `image_denoise` parameter to control how much the original image is modified.
3. Edit the `String(Multiline)` node to modify the negative prompt, specifying elements you don't want to appear in the generated image.
4. Click the `Run` button or use the shortcut `Ctrl(Cmd) + Enter` to execute the image generation.
5. After generation completes, you can view the result in the `Save Image` node. The image will also be saved to the `ComfyUI/output/` directory.
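The steps above can also be mirrored against the same REST endpoint in image-to-image mode. In the sketch below, the endpoint's `strength` field stands in for the node's `image_denoise` parameter; treating them as equivalent is an assumption for illustration, as are the file names and prompt:

```python
# Sketch: image-to-image via the same Stability AI endpoint.
# Assumes `requests` and STABILITY_API_KEY as before; `strength` is used as a
# rough analogue of the node's image_denoise parameter (an assumption).
import os
import requests

API_URL = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

with open("input.png", "rb") as image_file:  # the reference image downloaded above
    response = requests.post(
        API_URL,
        headers={
            "authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
            "accept": "image/*",
        },
        files={"image": image_file},
        data={
            "prompt": "turn the scene into a watercolor painting",
            "negative_prompt": "blurry, low quality",
            "model": "sd3.5-large",
            "mode": "image-to-image",   # required when sending a reference image
            "strength": 0.65,           # higher values change more of the original
            "output_format": "png",
        },
        timeout=120,
    )
response.raise_for_status()

with open("sd35_image_to_image_api.png", "wb") as f:
    f.write(response.content)
```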
The image below shows a comparison of results with and without an input image, using the same parameter settings:
Image Denoise: This parameter determines how much of the original image’s features are preserved during generation. It’s the most crucial adjustment parameter in image-to-image mode. The image below shows the effects of different denoising strengths:
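To reproduce this kind of comparison outside of ComfyUI, you could sweep the strength value in a loop, reusing the same direct-API assumptions as the sketches above:

```python
# Sketch: sweep several strength values against the same input image to compare
# how much of the original is preserved. Same assumptions as the sketches above.
import os
import requests

API_URL = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

for strength in (0.3, 0.5, 0.7, 0.9):
    with open("input.png", "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={
                "authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
                "accept": "image/*",
            },
            files={"image": image_file},
            data={
                "prompt": "turn the scene into a watercolor painting",
                "model": "sd3.5-large",
                "mode": "image-to-image",
                "strength": strength,
                "output_format": "png",
            },
            timeout=120,
        )
    response.raise_for_status()
    with open(f"denoise_{strength:.1f}.png", "wb") as out:
        out.write(response.content)
```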
You can refer to the documentation below for detailed parameter settings of the corresponding node:
Stability Stable Diffusion 3.5 Image API Node Documentation