Getting Started
This page will take you step-by-step through the process of creating a custom node.
Our example will take a batch of images, and return one of the images. Initially, the node will return the image which is, on average, the lightest in color; we’ll then extend it to have a range of selection criteria, and then finally add some client side code.
This page assumes very little knowledge of Python or Javascript.
After this walkthrough, dive into the details of backend code, and frontend code.
Write a basic node
Prerequisites
- A working ComfyUI installation. For development, we recommend installing ComfyUI manually.
- A working comfy-cli installation.
Setting up
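With comfy-cli installed, a new node package is typically scaffolded from the command line. A sketch, assuming a manual ComfyUI install and the `comfy node scaffold` subcommand (run it from your `custom_nodes` directory; it will prompt you with a few questions):

```bash
cd ComfyUI/custom_nodes
comfy node scaffold
```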
After answering a few questions, you’ll have a new directory set up.
Defining the node
Add the following code to the end of `src/nodes.py`:
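A minimal sketch of such a class, covering the four required pieces described below (the class name `ImageSelector` and the category string `"example"` are assumptions; the `choose_image` method itself is added later in this walkthrough):

```python
class ImageSelector:
    CATEGORY = "example"

    @classmethod
    def INPUT_TYPES(cls):
        # The node takes a single required input: a batch of images
        return {"required": {"images": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "choose_image"
```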
A custom node is defined using a Python class, which must include these four things: `CATEGORY`, which specifies where in the add new node menu the custom node will be located; `INPUT_TYPES`, a class method defining what inputs the node will take (see later for details of the dictionary returned); `RETURN_TYPES`, which defines what outputs the node will produce; and `FUNCTION`, the name of the function that will be called when the node is executed.
Note that the input and output types are both `IMAGE` (singular) even though we expect to receive a batch of images and return just one. In Comfy, `IMAGE` means image batch, and a single image is treated as a batch of size 1.
The main function
The main function, `choose_image`, receives named arguments as defined in `INPUT_TYPES`, and returns a tuple as defined in `RETURN_TYPES`. Since we're dealing with images, which are internally stored as `torch.Tensor`, first add `import torch` at the top of `src/nodes.py`.
Then add the function to your class. The datatype for image is `torch.Tensor` with shape `[B,H,W,C]`, where `B` is the batch size and `C` is the number of channels (3, for RGB). If we iterate over such a tensor, we get a series of `B` tensors of shape `[H,W,C]`. The `.flatten()` method turns this into a one-dimensional tensor of length `H*W*C`, `torch.mean()` takes the mean, and `.item()` turns a single-value tensor into a Python float.
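Putting those steps together, a sketch of the method (shown here as a standalone function for clarity; inside the class it takes `self` as its first parameter):

```python
import torch

def choose_image(images):
    # Mean brightness of each image in the batch
    brightness = [torch.mean(image.flatten()).item() for image in images]
    brightest = brightness.index(max(brightness))
    # Re-insert the batch dimension so the result has shape [1, H, W, C]
    result = images[brightest].unsqueeze(0)
    return (result,)  # trailing comma: the node must return a tuple
```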
Notes on those last two lines:
- `images[brightest]` will return a Tensor of shape `[H,W,C]`. `.unsqueeze` is used to insert a (length 1) dimension at, in this case, dimension zero, to give us `[B,H,W,C]` with `B=1`: a single image.
- In `return (result,)`, the trailing comma is essential to ensure you return a tuple.
Register the node
To make Comfy recognize the new node, it must be available at the package level. Modify the `NODE_CLASS_MAPPINGS` variable at the end of `src/nodes.py`. You must restart ComfyUI to see any changes.
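A sketch of the registration (the display-name mapping is optional; the key strings are an assumption and just need to be unique across your installed nodes):

```python
class ImageSelector:  # stub standing in for the class defined earlier in src/nodes.py
    pass

# Maps an internal node name to its class
NODE_CLASS_MAPPINGS = {
    "Image Selector": ImageSelector,
}

# Optional: maps the internal name to a human-readable menu label
NODE_DISPLAY_NAME_MAPPINGS = {
    "Image Selector": "Image Selector",
}
```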
Add some options
That node is maybe a bit boring, so we might add some options: a widget that allows you to choose the brightest image, or the reddest, bluest, or greenest. Edit your `INPUT_TYPES` to look like:
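A sketch, adding a second required input named `mode` (the option list and its order are assumptions; in Comfy, a list as the first element of the input tuple produces a dropdown widget):

```python
class ImageSelector:
    CATEGORY = "example"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "images": ("IMAGE",),
                "mode": (["brightest", "reddest", "greenest", "bluest"],),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "choose_image"
```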
Then update the main function. We'll use a fairly naive definition of 'reddest' as being the average `R` value of the pixels divided by the average of all three colors. So:
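A sketch of the updated method, again shown as a standalone function (the channel-to-mode mapping assumes RGB channel order, and the small epsilon guarding against division by zero is an implementation choice):

```python
import torch

def choose_image(images, mode):
    # Mean brightness of each image in the batch
    brightness = [torch.mean(image.flatten()).item() for image in images]
    if mode == "brightest":
        scores = brightness
    else:
        # Channels 0, 1, 2 = R, G, B in a [B,H,W,C] tensor
        channel = {"reddest": 0, "greenest": 1, "bluest": 2}[mode]
        absolute = [torch.mean(image[:, :, channel].flatten()).item() for image in images]
        # e.g. 'reddest' = mean R divided by mean of all channels
        scores = [a / (b + 1e-8) for a, b in zip(absolute, brightness)]
    best = scores.index(max(scores))
    result = images[best].unsqueeze(0)
    return (result,)
```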
Tweak the UI
Maybe we’d like a bit of visual feedback, so let’s send a little text message to be displayed.
Send a message from server
This requires two lines to be added to the Python code:
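The first is an import at the top of `src/nodes.py`. This resolves only inside a running ComfyUI (the `server` module is part of ComfyUI itself), so it is a fragment rather than standalone code:

```python
from server import PromptServer
```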
and, at the end of the `choose_image` method, add a line to send a message to the front end (`send_sync` takes a message type, which should be unique, and a dictionary).
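Something like the following, placed just before the `return` (the message type string and the payload text are assumptions, and `best` refers to the index chosen in the updated method; this call only works inside a running ComfyUI):

```python
PromptServer.instance.send_sync("example.imageselector.textmessage",
                                {"message": f"Picked image {best + 1}"})
```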
Write a client extension
To add some Javascript to the client, create a subdirectory, `web/js`, in your custom node directory, and modify the end of `__init__.py` to tell Comfy about it by exporting `WEB_DIRECTORY`:
The client extension is saved as a `.js` file in the `web/js` subdirectory, so create `image_selector/web/js/imageSelector.js` with the code below. (For more, see client side coding.)
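A sketch of `imageSelector.js`, assuming the message type used on the Python side and ComfyUI's client `app` object; the extension name is arbitrary but should be unique, and this code runs only in the ComfyUI frontend:

```javascript
import { app } from "../../scripts/app.js";

app.registerExtension({
    name: "example.imageselector",
    async setup() {
        function messageHandler(event) {
            // The dictionary sent by send_sync arrives as event.detail
            alert(event.detail.message);
        }
        app.api.addEventListener("example.imageselector.textmessage", messageHandler);
    },
});
```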
All we've done is register an extension and, in its `setup()` method, add a listener for the message type we are sending. The listener reads the dictionary we sent (which is stored in `event.detail`).
Stop the Comfy server, start it again, reload the webpage, and run your workflow.
The complete example
The complete example is available here. You can download the example workflow JSON file or view it below: