OpenAI GPT-Image-1 is part of the ComfyUI API nodes series, allowing users to generate images through OpenAI’s GPT-Image-1 model. This is the same model that powers image generation in ChatGPT (GPT-4o).

This node supports:

  • Text-to-image generation
  • Image editing functionality (inpainting through masks)

Node Overview

The OpenAI GPT-Image-1 node synchronously generates images through OpenAI’s image generation API. It receives text prompts and returns images matching the description. GPT-Image-1 is OpenAI’s most advanced image generation model currently available, capable of creating highly detailed and realistic images.
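
For reference, what the node does is conceptually equivalent to calling OpenAI’s Images API directly. The sketch below is a minimal example using the official openai Python SDK, not the node’s internal code; the prompt text and output filename are placeholders.

```python
# Minimal sketch of the equivalent direct API call (placeholder prompt/filename).
# Assumes OPENAI_API_KEY is set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A cozy cabin in a snowy forest at dusk, warm light in the windows",
)

# gpt-image-1 returns base64-encoded image data rather than URLs
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(image_bytes)
```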

Prerequisites

Before using this node, you need:

  1. ComfyUI updated to the latest commit (not the stable release); if you use the desktop version, update it to the latest version as well
  2. A logged-in Comfy Org account
  3. Sufficient credit balance in your account

If you haven’t completed this setup yet, please refer to the relevant setup guides first.

Parameter Description

Required Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| prompt    | Text | Text prompt describing the image content you want to generate |

Widget Parameters

| Parameter  | Type    | Options                               | Default | Description |
| ---------- | ------- | ------------------------------------- | ------- | ----------- |
| seed       | Integer | 0-2147483647                          | 0       | Random seed used to control generation results |
| quality    | Option  | low, medium, high                     | low     | Image quality setting; affects cost and generation time |
| background | Option  | opaque, transparent                   | opaque  | Whether the returned image has a background |
| size       | Option  | auto, 1024x1024, 1024x1536, 1536x1024 | auto    | Size of the generated image |
| n          | Integer | 1-8                                   | 1       | Number of images to generate |
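
These widget options map closely onto the parameters of OpenAI’s image generation endpoint. The sketch below (openai Python SDK, placeholder prompt and filenames) shows one plausible mapping; the public Images API does not document a seed parameter, so the seed widget is presumably handled on the ComfyUI side.

```python
# Sketch of how the widget values might map to a direct Images API call.
# Values mirror the node defaults; "transparent" backgrounds require an
# output format that supports transparency (e.g. PNG).
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A product photo of a ceramic mug on a white background",
    quality="low",        # low / medium / high
    background="opaque",  # opaque / transparent
    size="auto",          # auto / 1024x1024 / 1024x1536 / 1536x1024
    n=1,                  # number of images per request
    # seed: not part of the public Images API; the widget is ComfyUI-side
)

for i, item in enumerate(result.data):
    with open(f"result_{i}.png", "wb") as f:
        f.write(base64.b64decode(item.b64_json))
```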

Optional Parameters

| Parameter | Type  | Options         | Default | Description |
| --------- | ----- | --------------- | ------- | ----------- |
| image     | Image | Any image input | None    | Optional reference image for image editing |
| mask      | Mask  | Mask input      | None    | Optional mask for inpainting (white areas will be replaced) |

Usage Examples

Text-to-Image Example

The image below contains a simple text-to-image workflow. Please download the image and drag it into ComfyUI to load the corresponding workflow.

The corresponding workflow is very simple:

You only need to add the OpenAI GPT-Image-1 node, enter the description of the image you want to generate in the prompt input, connect a Save Image node, and then run the workflow.

Image-to-Image Example

The image below contains a simple image-to-image workflow. Please download the image and drag it into ComfyUI to load the corresponding workflow.

We will use the image below as input:

In this workflow, the Load Image node loads the input image, and its output is connected to the image input of the OpenAI GPT-Image-1 node, which then generates the edited result.
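
Providing a reference image corresponds to OpenAI’s image edit endpoint rather than plain generation. A hedged sketch of the equivalent direct API call (filenames and prompt are placeholders):

```python
# Sketch: image-to-image via the Images edit endpoint (placeholder filenames/prompt).
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=open("input.png", "rb"),  # the reference image
    prompt="Turn this photo into a watercolor painting",
)

with open("edited.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```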

Multiple Image Input Example

Please download the image below and drag it into ComfyUI to load the corresponding workflow.

Use the hat image below as an additional input image.

The corresponding workflow is shown in the image below:

The Batch Images node combines the multiple input images into a single batch, which is fed into the image input of the OpenAI GPT-Image-1 node.
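
On the API side, gpt-image-1’s edit endpoint accepts multiple input images, which is what the batched image input corresponds to. A minimal sketch with placeholder filenames (the hat image is assumed to be saved as hat.png):

```python
# Sketch: multiple reference images passed to the edit endpoint (placeholder filenames).
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=[                     # gpt-image-1 accepts a list of input images
        open("character.png", "rb"),
        open("hat.png", "rb"),
    ],
    prompt="Put the hat from the second image on the person in the first image",
)

with open("combined.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```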

Inpainting Workflow

GPT-Image-1 also supports image editing functionality, allowing you to specify areas to replace using a mask. Below is a simple inpainting workflow example:

Download the image below and drag it into ComfyUI to load the corresponding workflow. We will continue to use the input image from the image-to-image workflow section.

The corresponding workflow is shown in the image below:

Compared to the image-to-image workflow, we open the MaskEditor from the Load Image node’s right-click menu to draw a mask, then connect the mask output to the mask input of the OpenAI GPT-Image-1 node to complete the workflow.
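
For reference, the direct API equivalent passes a mask alongside the image. Note that the raw Images API expects a mask whose fully transparent pixels mark the region to replace, while the ComfyUI node works with its usual white-means-replace mask convention and presumably converts it for you. A sketch with placeholder filenames:

```python
# Sketch: inpainting via the edit endpoint with a mask (placeholder filenames).
# For the raw API, the mask is a PNG the same size as the image whose fully
# transparent pixels mark the area to be replaced.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",
    image=open("input.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Replace the masked area with a bouquet of sunflowers",
)

with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```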

Notes

  • The mask and image must be the same size
  • When inputting large images, the node will automatically resize the image to an appropriate size
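
If you prepare a mask outside ComfyUI, a quick way to ensure it matches the image size is to resize it beforehand. A small sketch using Pillow (placeholder filenames):

```python
# Sketch: make sure a hand-made mask matches the input image size (Pillow).
from PIL import Image

image = Image.open("input.png")
mask = Image.open("mask.png")

if mask.size != image.size:
    # NEAREST keeps the mask hard-edged rather than introducing gray pixels
    mask = mask.resize(image.size, Image.NEAREST)
    mask.save("mask.png")
```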