The OpenAI DALL·E 2 node is part of the ComfyUI API Nodes series and lets you generate images through OpenAI’s DALL·E 2 model.

This node supports:

  • Text-to-image generation
  • Image editing functionality (inpainting through masks)

Node Overview

The OpenAI DALL·E 2 node generates images synchronously through OpenAI’s image generation API. It receives text prompts and returns images that match the description.
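For reference, the request this node makes is roughly equivalent to calling OpenAI’s image generation endpoint directly. The sketch below uses the official `openai` Python SDK purely for illustration; the node itself handles authentication and billing through your Comfy Org account, so you do not need your own OpenAI API key to use it.

```python
# Conceptual illustration only: the ComfyUI node performs this call for you
# via Comfy Org. Shown here with the official OpenAI Python SDK for reference.
from openai import OpenAI

client = OpenAI()  # running this directly would require your own OPENAI_API_KEY

result = client.images.generate(
    model="dall-e-2",
    prompt="A watercolor painting of a lighthouse at dusk",
    n=1,                 # number of images (1 to 8 for DALL·E 2)
    size="1024x1024",    # "256x256", "512x512", or "1024x1024"
)
print(result.data[0].url)  # result URLs are temporary; download images promptly
```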

Prerequisites

Before using this node, you need:

  1. A logged-in Comfy Org account
  2. Sufficient credit balance in your account

If you haven’t set this up yet, please refer to the related setup documentation first.

Parameter Description

Required Parameters

| Parameter | Description |
| --------- | ----------- |
| prompt    | Text prompt describing the image content you want to generate |

Widget Parameters

| Parameter | Description | Options/Range | Default Value |
| --------- | ----------- | ------------- | ------------- |
| seed | Seed value for image generation (currently not implemented in the backend) | 0 to 2^31-1 | 0 |
| size | Output image dimensions | "256x256", "512x512", "1024x1024" | "1024x1024" |
| n | Number of images to generate | 1 to 8 | 1 |

Optional Parameters

| Parameter | Description | Options/Range | Default Value |
| --------- | ----------- | ------------- | ------------- |
| image | Optional reference image for image editing | Any image input | None |
| mask | Optional mask for local inpainting | Mask input | None |
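When both image and mask are supplied, the node uses OpenAI’s image edit capability instead of plain generation. For illustration only, the inputs map roughly onto OpenAI’s edit endpoint as sketched below; the node makes this call for you, and presumably converts its ComfyUI mask input into the transparent-area format the endpoint expects. The file names here are placeholders.

```python
# Rough mapping of the node's image/mask inputs onto OpenAI's image edit API.
# Illustration only; the node handles this request itself.
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    image=open("input.png", "rb"),   # the reference image (the "image" input)
    mask=open("mask.png", "rb"),     # transparent areas mark the region to repaint
    prompt="Replace the masked area with a bouquet of sunflowers",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```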

Usage Method

Workflow Examples

This API node currently supports two workflows:

  • Text to Image
  • Inpainting

Image-to-image workflows are not supported.

Text to Image Example

The image below contains a simple text-to-image workflow. Download it and drag it into ComfyUI to load the workflow.

The example itself is very simple: load the OpenAI DALL·E 2 node, enter the description of the image you want to generate in the prompt, connect a Save Image node, and then run the workflow.
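If you prefer to drive the same workflow from a script rather than the graph editor, the sketch below posts a minimal prompt to a locally running ComfyUI instance. The node’s class_type (`OpenAIDalle2`) and the server address are assumptions; check the names in your own API-format export (Export (API) in the ComfyUI menu) before relying on them.

```python
# Minimal sketch: submit a text-to-image prompt to a local ComfyUI server.
# The class_type "OpenAIDalle2" is an assumption; verify it against your own
# API-format workflow export before using.
import json
import urllib.request

workflow = {
    "1": {
        "class_type": "OpenAIDalle2",        # assumed node class name
        "inputs": {
            "prompt": "A cozy cabin in a snowy forest, oil painting",
            "seed": 0,
            "size": "1024x1024",
            "n": 1,
        },
    },
    "2": {
        "class_type": "SaveImage",
        "inputs": {"images": ["1", 0], "filename_prefix": "dalle2"},
    },
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",          # default local ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())              # returns the queued prompt id
```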

Inpainting Workflow

DALL·E 2 supports image editing functionality, allowing you to use a mask to specify the area to be replaced. Below is a simple inpainting workflow example:

1. Workflow File Download

Download the image below and drag it into ComfyUI to load the corresponding workflow.

We will use the image below as input:

2. Workflow File Usage Instructions

This workflow is relatively simple. If you want to build it manually yourself, follow the steps below:

  1. Use the Load Image node to load the image
  2. Right-click on the Load Image node and select MaskEditor
  3. In the mask editor, use the brush to draw the area you want to redraw
  4. Connect the loaded image to the image input of the OpenAI DALL·E 2 node
  5. Connect the mask to the mask input of the OpenAI DALL·E 2 node
  6. Edit the prompt in the prompt node
  7. Run the workflow
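If you would rather prepare the mask outside ComfyUI (for example in a batch pipeline) instead of painting it in MaskEditor, you can create a mask image the same size as the input with Pillow and load it as a mask. The sketch below is a hypothetical example; the file names and the masked region are placeholders.

```python
# Hypothetical alternative to painting a mask in MaskEditor: create a
# rectangular mask with Pillow. White = area to repaint, black = keep.
from PIL import Image, ImageDraw

src = Image.open("input.png")                     # the image you will edit
mask = Image.new("L", src.size, 0)                # same size as the source image
draw = ImageDraw.Draw(mask)
draw.rectangle((256, 256, 768, 768), fill=255)    # region to be redrawn
mask.save("mask.png")
```

Preview the mask in ComfyUI before running the workflow; if the wrong area gets repainted, invert the mask.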

Notes

  • To use the image editing functionality, you must provide both an image and a mask
  • The mask and image must be the same size
  • When inputting large images, the node will automatically resize the image to an appropriate size
  • The URLs returned by the API are only valid for a short period; save your results promptly
  • Each generation consumes credits, billed according to image size and quantity
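Because the image and mask must match in size and the returned URLs expire quickly, a small helper like the hypothetical sketch below can be useful when preparing inputs or archiving outputs outside ComfyUI. The file paths and URL are placeholders.

```python
# Hypothetical helpers: resize a mask to match its image, and save a result
# image from a temporary URL before it expires. Paths and URL are placeholders.
from PIL import Image
import urllib.request

def match_mask_to_image(image_path: str, mask_path: str) -> None:
    """Resize the mask so its dimensions match the image exactly."""
    img = Image.open(image_path)
    mask = Image.open(mask_path)
    if mask.size != img.size:
        mask.resize(img.size, Image.NEAREST).save(mask_path)

def save_result(url: str, out_path: str) -> None:
    """Download a generated image immediately; result URLs are short-lived."""
    urllib.request.urlretrieve(url, out_path)

match_mask_to_image("input.png", "mask.png")
save_result("https://example.com/generated.png", "dalle2_result.png")
```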