ComfyUI Flux Kontext Dev Native Workflow Example
About FLUX.1 Kontext Dev
FLUX.1 Kontext is a breakthrough multimodal image editing model from Black Forest Labs that supports simultaneous text and image input, intelligently understanding image context and performing precise editing. Its development version is an open-source diffusion transformer model with 12 billion parameters, featuring excellent context understanding and character consistency maintenance, ensuring that key elements such as character features and composition layout remain stable even after multiple iterative edits.
It shares the same core capabilities as the FLUX.1 Kontext suite:
- Character Consistency: Preserves unique elements in images across multiple scenes and environments, such as reference characters or objects in the image.
- Editing: Makes targeted modifications to specific elements in the image without affecting other parts.
- Style Reference: Generates novel scenes while preserving the unique style of the reference image according to text prompts.
- Interactive Speed: Minimal latency in image generation and editing.
While the previously released API version offers the highest fidelity and speed, FLUX.1 Kontext [Dev] runs entirely on local machines, providing unparalleled flexibility for developers, researchers, and advanced users who wish to experiment.
Version Information
- FLUX.1 Kontext [pro] - Commercial version, focused on rapid iterative editing
- FLUX.1 Kontext [max] - Experimental version with stronger prompt adherence
- FLUX.1 Kontext [dev] - Open source version (used in this tutorial), 12B parameters, mainly for research
All of these versions are currently available in ComfyUI: the Pro and Max versions can be called through API nodes, while the open-source Dev version is covered by the instructions in this guide.
Workflow Description
In this tutorial, we cover two types of workflows, which are essentially the same:
- A workflow using the FLUX.1 Kontext Image Edit group node, making the interface and workflow reuse simpler
- Another workflow without using group nodes, showing the complete original workflow.
The main advantage of using group nodes is workflow conciseness - you can reuse group nodes to implement complex workflows and quickly reuse node groups. Additionally, in the new version of the frontend, we’ve added a quick group node addition feature for Flux.1 Kontext Dev.
This feature is currently experimental and may be adjusted in future versions.
If you find missing nodes when loading the workflow file below, it may be due to the following situations:
- You are not using the latest Development (Nightly) version of ComfyUI.
- You are using the Stable (Release) version or Desktop version of ComfyUI (which does not include the latest feature updates).
- You are using the latest Commit version of ComfyUI, but some nodes failed to import during startup.
Please make sure you have successfully updated ComfyUI to the latest Development (Nightly) version. See: How to Update ComfyUI section to learn how to update ComfyUI.
Model Download
To run the workflows in this guide successfully, you first need to download the following model files. You can also directly get the model download links from the corresponding workflows, which already contain the model file download information.
Diffusion Model
If you want to use the original weights, you can visit Black Forest Labs’ related repository to obtain and use the original model weights.
VAE
Text Encoder
Model save location
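For reference, the files above are typically placed in ComfyUI's models directory as follows (folder names follow the standard ComfyUI layout; adjust the paths if your installation differs):

```
ComfyUI/
├── models/
│   ├── diffusion_models/
│   │   └── flux1-dev-kontext_fp8_scaled.safetensors
│   ├── text_encoders/
│   │   ├── clip_l.safetensors
│   │   ├── t5xxl_fp16.safetensors
│   │   └── t5xxl_fp8_e4m3fn_scaled.safetensors
│   └── vae/
│       └── ae.safetensors
```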
Flux.1 Kontext Dev Basic Workflow
This workflow is a standard workflow, but it uses the Load Image(from output) node to load the image to be edited, making it more convenient to access the edited image for multiple rounds of editing.
1. Workflow and Input Image Download
Download the following files and drag them into ComfyUI to load the corresponding workflow.
Input Image
2. Complete the workflow step by step
You can refer to the numbers in the image to complete the workflow run:
- In the `Load Diffusion Model` node, load the `flux1-dev-kontext_fp8_scaled.safetensors` model
- In the `DualCLIPLoader` node, ensure that `clip_l.safetensors` and either `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn_scaled.safetensors` are loaded
- In the `Load VAE` node, ensure the `ae.safetensors` model is loaded
- In the `Load Image(from output)` node, load the provided input image
- In the `CLIP Text Encode` node, modify the prompt; only English is supported
- Click the `Queue` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
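Beyond the Queue button, ComfyUI also exposes an HTTP API for queueing workflows programmatically. The sketch below is a minimal, hedged example: it assumes ComfyUI is running locally on its default port (8188) and that you exported the workflow via ComfyUI's API-format JSON export; the node id `"6"` is a placeholder — look up the real node ids in your own exported file.

```python
import json
import urllib.request

def build_queue_request(workflow: dict, server: str = "127.0.0.1:8188") -> urllib.request.Request:
    """Wrap a workflow graph in the payload shape ComfyUI's /prompt endpoint expects."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Example: edit the prompt text before queueing. The node id "6" and the
# "clip" input wiring are placeholders taken from a hypothetical export.
workflow = {
    "6": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "Change the car color to red", "clip": ["38", 0]},
    }
}
req = build_queue_request(workflow)
# To actually submit it (requires a running ComfyUI server):
# urllib.request.urlopen(req)
```

This only builds the request; sending it is left commented out so the sketch stays self-contained without a live server.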
Flux.1 Kontext Dev Grouped Workflow
This workflow uses the FLUX.1 Kontext Image Edit group node, making the interface and workflow reuse simpler.
This example also uses two images as input: the `Image Stitch` node combines the two images into one, which is then edited with Flux.1 Kontext.
1. Workflow and Input Image Download
Download the following files and drag them into ComfyUI to load the corresponding workflow.
Input Images
2. Complete the workflow step by step
You can refer to the numbers in the image to complete the workflow run:
- In the `Load VAE` node, load the `ae.safetensors` model
- In the first `Load Image` node, load the first provided input image
- In the second `Load Image` node, load the second provided input image
- Since the other models and related nodes are packaged inside the group node, follow the references in the step diagram to ensure the corresponding models are loaded correctly, and write your prompts
- Click the `Queue` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
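Conceptually, the `Image Stitch` node concatenates two images along one axis. The toy sketch below illustrates the idea (horizontal stitching of two same-height images) using nested lists of pixel values in place of real image tensors; it is an illustration, not the node's actual implementation.

```python
def stitch_horizontal(left, right):
    """Concatenate two row-major 'images' side by side; heights must match."""
    if len(left) != len(right):
        raise ValueError("images must have the same height to stitch horizontally")
    return [row_l + row_r for row_l, row_r in zip(left, right)]

img_a = [[1, 1], [1, 1]]         # 2x2 "image"
img_b = [[2, 2, 2], [2, 2, 2]]   # 2x3 "image"
stitched = stitch_horizontal(img_a, img_b)
# stitched is 2 rows x 5 columns: [[1, 1, 2, 2, 2], [1, 1, 2, 2, 2]]
```

The stitched result is what Flux.1 Kontext then receives as a single input image for editing.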
Flux Kontext Prompt Techniques
1. Basic Modifications
- Simple and direct: "Change the car color to red"
- Maintain style: "Change to daytime while maintaining the same style of the painting"
2. Style Transfer
Principles:
- Clearly name the style: "Transform to Bauhaus art style"
- Describe characteristics: "Transform to oil painting with visible brushstrokes, thick paint texture"
- Preserve composition: "Change to Bauhaus style while maintaining the original composition"
3. Character Consistency
Framework:
- Specific description: "The woman with short black hair" instead of "she"
- Preserve features: "while maintaining the same facial features, hairstyle, and expression"
- Step-by-step modifications: change the background first, then the actions
4. Text Editing
- Use quotes: "Replace 'joy' with 'BFL'"
- Maintain format: "Replace text while maintaining the same font style"
Common Problem Solutions
Character Changes Too Much
❌ Wrong: "Transform the person into a Viking"
✅ Correct: "Change the clothes to be a viking warrior while preserving facial features"
Composition Position Changes
❌ Wrong: "Put him on a beach"
✅ Correct: "Change the background to a beach while keeping the person in the exact same position, scale, and pose"
Style Application Inaccuracy
❌ Wrong: "Make it a sketch"
✅ Correct: "Convert to pencil sketch with natural graphite lines, cross-hatching, and visible paper texture"
Core Principles
- Be Specific and Clear - Use precise descriptions, avoid vague terms
- Step-by-step Editing - Break complex modifications into multiple simple steps
- Explicit Preservation - State what should remain unchanged
- Verb Selection - Use “change”, “replace” rather than “transform”
Best Practice Templates
- Object Modification: "Change [object] to [new state], keep [content to preserve] unchanged"
- Style Transfer: "Transform to [specific style], while maintaining [composition/character/other] unchanged"
- Background Replacement: "Change the background to [new background], keep the subject in the exact same position and pose"
- Text Editing: "Replace '[original text]' with '[new text]', maintain the same font style"
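If you queue edits programmatically, the templates above can be wrapped in small helper functions. These helpers are purely illustrative — they are not part of ComfyUI or Kontext, just string formatting over the template patterns:

```python
def object_modification(obj: str, new_state: str, preserve: str) -> str:
    """Template: change an object while naming what stays fixed."""
    return f"Change {obj} to {new_state}, keep {preserve} unchanged"

def style_transfer(style: str, maintain: str) -> str:
    """Template: apply a named style while preserving chosen aspects."""
    return f"Transform to {style}, while maintaining {maintain} unchanged"

def background_replacement(new_background: str) -> str:
    """Template: swap the background while pinning the subject in place."""
    return (f"Change the background to {new_background}, "
            "keep the subject in the exact same position and pose")

def text_edit(original: str, new: str) -> str:
    """Template: replace quoted text while keeping the font style."""
    return f"Replace '{original}' with '{new}', maintain the same font style"

prompt = object_modification("the car", "red", "the background")
# -> "Change the car to red, keep the background unchanged"
```

Filling the templates mechanically like this keeps prompts consistent across multiple rounds of editing.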
Remember: The more specific, the better. Kontext excels at understanding detailed instructions and maintaining consistency.