ComfyUI Flux Kontext Dev Native Workflow Example
FLUX.1 Kontext is a breakthrough multimodal image editing model from Black Forest Labs that accepts text and image input simultaneously, intelligently understanding the image context and performing precise edits. Its development version, FLUX.1 Kontext [Dev], is an open-source diffusion transformer with 12 billion parameters, offering excellent context understanding and character consistency, so key elements such as character features and composition layout remain stable even across multiple iterative edits.
It shares the same core capabilities as the FLUX.1 Kontext suite:
While the previously released API version offers the highest fidelity and speed, FLUX.1 Kontext [Dev] runs entirely on local machines, providing unparalleled flexibility for developers, researchers, and advanced users who wish to experiment.
Currently, ComfyUI supports all of these versions: the Pro and Max versions can be called through API nodes, while for the open-source Dev version, please refer to the instructions in this guide.
In this tutorial, we cover two types of workflows, which are essentially the same:
The main advantage of using group nodes is workflow conciseness: you can encapsulate a complex workflow in a group node and quickly reuse it. Additionally, in the new version of the frontend, we’ve added a quick way to add the Flux.1 Kontext Dev group node:
This feature is currently experimental and may be adjusted in future versions.
If you find missing nodes when loading the workflow file below, it may be due to the following situations:
Please make sure you have successfully updated ComfyUI to the latest Development (Nightly) version. See the How to Update ComfyUI section to learn how to update ComfyUI.
To run the workflows in this guide, you first need to download the following model files. You can also get the download links directly from the corresponding workflows, which already embed the model download information.
Diffusion Model
If you want to use the original weights, you can visit Black Forest Labs’ related repository to obtain and use the original model weights.
VAE
Text Encoder
Model save location
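The files above go into the standard ComfyUI model folders. A typical layout (adjust to your install; the fp8 T5 encoder is an alternative to the fp16 one):

```
ComfyUI/
├── models/
│   ├── diffusion_models/
│   │   └── flux1-dev-kontext_fp8_scaled.safetensors
│   ├── text_encoders/
│   │   ├── clip_l.safetensors
│   │   ├── t5xxl_fp16.safetensors
│   │   └── t5xxl_fp8_e4m3fn_scaled.safetensors
│   └── vae/
│       └── ae.safetensors
```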
This is a standard workflow, but it uses the Load Image (from output) node to load the image to be edited, making it more convenient to pick up the edited result for multiple rounds of editing.
Download the following files and drag them into ComfyUI to load the corresponding workflow
Input Image
You can refer to the numbers in the image to complete the workflow run:
1. In the Load Diffusion Model node, load the flux1-dev-kontext_fp8_scaled.safetensors model
2. In the DualCLIPLoader node, ensure that clip_l.safetensors and either t5xxl_fp16.safetensors or t5xxl_fp8_e4m3fn_scaled.safetensors are loaded
3. In the Load VAE node, ensure that the ae.safetensors model is loaded
4. In the Load Image (from output) node, load the provided input image
5. In the CLIP Text Encode node, modify the prompt; only English is supported
6. Click the Queue button, or use the shortcut Ctrl(cmd) + Enter, to run the workflow

This workflow uses the FLUX.1 Kontext Image Edit group node, making the interface simpler and the workflow easier to reuse.
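The Queue step can also be driven programmatically: ComfyUI exposes a small local HTTP API, and pressing Queue corresponds to POSTing the workflow (exported in API/JSON format) to the /prompt endpoint. A minimal sketch, assuming the default server at 127.0.0.1:8188:

```python
import json
import urllib.request

def build_queue_payload(workflow):
    # ComfyUI's /prompt endpoint expects the workflow graph
    # (API-format JSON) under the "prompt" key.
    return json.dumps({"prompt": workflow})

def queue_prompt(workflow, server="http://127.0.0.1:8188"):
    # Equivalent to pressing the Queue button in the UI.
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_queue_payload(workflow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

queue_prompt requires a running ComfyUI instance; build_queue_payload only assembles the request body.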
This example also uses two images as input: the Image Stitch node combines them into a single image, which Flux.1 Kontext then edits.
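Conceptually, Image Stitch places the two inputs side by side on one canvas. A standalone sketch of the same operation using Pillow (this is an illustration of the idea, not the node's actual implementation):

```python
from PIL import Image

def stitch_horizontal(left, right):
    """Place two images side by side, resizing the shorter one
    (keeping aspect ratio) so the heights match."""
    h = max(left.height, right.height)
    if left.height != h:
        left = left.resize((round(left.width * h / left.height), h))
    if right.height != h:
        right = right.resize((round(right.width * h / right.height), h))
    canvas = Image.new("RGB", (left.width + right.width, h))
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width, 0))
    return canvas
```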
Download the following files and drag them into ComfyUI to load the corresponding workflow
Input Images
You can refer to the numbers in the image to complete the workflow run:
1. In the Load VAE node, load the ae.safetensors model
2. In the first Load Image node, load the first provided input image
3. In the second Load Image node, load the second provided input image
4. Click the Queue button, or use the shortcut Ctrl(cmd) + Enter, to run the workflow

To make it easier to edit with Flux.1 Kontext Dev, we have added a selection toolbox feature that lets you quickly and conveniently add the FLUX.1 Kontext Image Edit group node.
You can watch the video demo below. When you select the Load Image node, you will find the new edit button in the selection toolbox.
This feature is currently experimental and may be adjusted in future versions.
"Change the car color to red"
"Change to daytime while maintaining the same style of the painting"
Principles:
"Transform to Bauhaus art style"
"Transform to oil painting with visible brushstrokes, thick paint texture"
"Change to Bauhaus style while maintaining the original composition"
Framework:
"The woman with short black hair" instead of "she"
"while maintaining the same facial features, hairstyle, and expression"
"Replace 'joy' with 'BFL'"
"Replace text while maintaining the same font style"
❌ Wrong: "Transform the person into a Viking"
✅ Correct: "Change the clothes to be a Viking warrior while preserving facial features"
❌ Wrong: "Put him on a beach"
✅ Correct: "Change the background to a beach while keeping the person in the exact same position, scale, and pose"
❌ Wrong: "Make it a sketch"
✅ Correct: "Convert to pencil sketch with natural graphite lines, cross-hatching, and visible paper texture"
Object Modification:
"Change [object] to [new state], keep [content to preserve] unchanged"
Style Transfer:
"Transform to [specific style], while maintaining [composition/character/other] unchanged"
Background Replacement:
"Change the background to [new background], keep the subject in the exact same position and pose"
Text Editing:
"Replace '[original text]' with '[new text]', maintain the same font style"
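The four frameworks above are fill-in-the-blank templates. As an illustration only (the template keys and function name here are hypothetical, not part of Kontext), they can be expressed as a small helper:

```python
# Hypothetical helper mirroring the four prompt frameworks above.
TEMPLATES = {
    "object": "Change {object} to {new_state}, keep {preserve} unchanged",
    "style": "Transform to {style}, while maintaining {preserve} unchanged",
    "background": ("Change the background to {background}, keep the subject "
                   "in the exact same position and pose"),
    "text": "Replace '{old}' with '{new}', maintain the same font style",
}

def build_prompt(kind, **fields):
    # Fill the chosen framework with concrete values.
    return TEMPLATES[kind].format(**fields)
```

For example, build_prompt("object", object="the car", new_state="red", preserve="the background") yields a complete object-modification prompt.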
Remember: The more specific, the better. Kontext excels at understanding detailed instructions and maintaining consistency.