OpenAI DALL·E 2 Node
Learn how to use the OpenAI DALL·E 2 API node to generate images in ComfyUI
OpenAI DALL·E 2 is part of the ComfyUI API Nodes series, allowing users to generate images through OpenAI’s DALL·E 2 model.
This node supports:
- Text-to-image generation
- Image editing functionality (inpainting through masks)
Node Overview
The OpenAI DALL·E 2 node generates images synchronously through OpenAI’s image generation API. It receives text prompts and returns images that match the description.
To use the API nodes, you must be logged in and on a permitted network. See the API Nodes Overview section of the documentation for the specific requirements.
Parameter Description
Required Parameters
Parameter | Description |
---|---|
prompt | Text prompt describing the image content you want to generate |
Widget Parameters
Parameter | Description | Options/Range | Default Value |
---|---|---|---|
seed | Seed value for image generation (currently not implemented in the backend) | 0 to 2^31-1 | 0 |
size | Output image dimensions | "256x256", "512x512", "1024x1024" | "1024x1024" |
n | Number of images to generate | 1 to 8 | 1 |
Optional Parameters
Parameter | Description | Options/Range | Default Value |
---|---|---|---|
image | Optional reference image for image editing | Any image input | None |
mask | Optional mask for local inpainting | Mask input | None |
Usage
Workflow Examples
This API node currently supports two workflows:
- Text to Image
- Inpainting
The image-to-image workflow is not supported.
Text to Image Example
The image below contains a simple text-to-image workflow. Download the image and drag it into ComfyUI to load the workflow.
The example is straightforward: load the `OpenAI DALL·E 2` node, enter the description of the image you want to generate in the `prompt` node, connect a `Save Image` node, and run the workflow.
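Under the hood, the node calls OpenAI's image generation endpoint with the parameters described above. As a rough sketch (assuming the official `openai` Python SDK; `dalle2_request` is a hypothetical helper mirroring the node's widget parameters, not the node's actual code):

```python
def dalle2_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble keyword arguments for a DALL-E 2 generation call.
    Hypothetical helper; validates the ranges listed in the parameter tables."""
    if size not in {"256x256", "512x512", "1024x1024"}:
        raise ValueError(f"unsupported size: {size}")
    if not 1 <= n <= 8:
        raise ValueError("n must be between 1 and 8")
    return {"model": "dall-e-2", "prompt": prompt, "size": size, "n": n}

# Actual call (requires an OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# resp = client.images.generate(**dalle2_request("a watercolor fox"))
# print(resp.data[0].url)
```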
Inpainting Workflow
DALL·E 2 supports image editing functionality, allowing you to use a mask to specify the area to be replaced. Below is a simple inpainting workflow example:
1. Workflow File Download
Download the image below and drag it into ComfyUI to load the corresponding workflow.
We will use the image below as input:
2. Workflow File Usage Instructions
This workflow is simple; if you want to build it manually, follow these steps:
- Use the `Load Image` node to load the image
- Right-click the Load Image node and select `MaskEditor`
- In the mask editor, use the brush to paint the area you want to redraw
- Connect the loaded image to the `image` input of the OpenAI DALL·E 2 node
- Connect the mask to the `mask` input of the OpenAI DALL·E 2 node
- Edit the prompt in the `prompt` node
- Run the workflow
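The mask step above hinges on a format detail: OpenAI's image edit endpoint expects an RGBA mask whose fully transparent pixels mark the region to regenerate, while mask editors typically paint the repaint region white. A sketch of that conversion (assuming Pillow; `build_edit_mask` is a hypothetical helper, not the node's actual code):

```python
from io import BytesIO

from PIL import Image


def build_edit_mask(mask: Image.Image, size: tuple) -> bytes:
    """Turn a grayscale mask (white = repaint) into the RGBA PNG that
    OpenAI's edit endpoint expects (transparent = repaint)."""
    # Invert so painted (white, 255) pixels get alpha 0 (transparent).
    alpha = mask.convert("L").resize(size).point(lambda v: 255 - v)
    rgba = Image.new("RGBA", size, (0, 0, 0, 255))
    rgba.putalpha(alpha)
    buf = BytesIO()
    rgba.save(buf, format="PNG")
    return buf.getvalue()

# The bytes could then be passed to the edit call, e.g.:
# client.images.edit(image=image_png, mask=build_edit_mask(m, (1024, 1024)),
#                    prompt="replace the sky with a sunset")
```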
Notes
- To use the image editing functionality, you must provide both an image and a mask; neither is optional
- The mask and image must be the same size
- When a large image is input, the node automatically resizes it to an appropriate size
- The URLs returned by the API are only valid for a short time, so save the results promptly
- Each generation consumes credits, billed according to image size and quantity