ComfyUI Native HiDream-I1 Text-to-Image Workflow Example
This guide will walk you through a ComfyUI native HiDream-I1 text-to-image workflow example.
HiDream-I1 is a text-to-image model officially open-sourced by HiDream-ai on April 7, 2025. The model has 17B parameters and is released under the MIT license, supporting personal projects, scientific research, and commercial use. It currently performs excellently in multiple benchmark tests.
Model Features
Hybrid Architecture Design
The model combines the Diffusion Transformer (DiT) and Mixture of Experts (MoE) architectures:
- Based on Diffusion Transformer (DiT), with dual-stream MMDiT modules processing multimodal information and single-stream DiT modules optimizing global consistency.
- Dynamic routing mechanism flexibly allocates computing resources, enhancing complex scene processing capabilities and delivering excellent performance in color restoration, edge processing, and other details.
Multimodal Text Encoder Integration
The model integrates four text encoders:
- OpenCLIP ViT-bigG, OpenAI CLIP ViT-L (visual semantic alignment)
- T5-XXL (long text parsing)
- Llama-3.1-8B-Instruct (instruction understanding)
This combination achieves SOTA performance in complex semantic parsing of colors, quantities, spatial relationships, and more, with Chinese prompt support significantly outperforming similar open-source models.
Original Model Versions
HiDream-ai provides three versions of the HiDream-I1 model to meet different needs. Below are the links to the original model repositories:
| Model Name | Description | Inference Steps | Repository Link |
|---|---|---|---|
| HiDream-I1-Full | Full version | 50 | 🤗 HiDream-I1-Full |
| HiDream-I1-Dev | Distilled development version | 28 | 🤗 HiDream-I1-Dev |
| HiDream-I1-Fast | Distilled fast version | 16 | 🤗 HiDream-I1-Fast |
About This Workflow Example
In this example, we will use the repackaged version from ComfyOrg. You can find all the model files used in this example in the HiDream-I1_ComfyUI repository.
Before starting, please update ComfyUI to a version at or after this commit to make sure it has native support for HiDream-I1.
HiDream-I1 Workflow
The model requirements for the different ComfyUI native HiDream-I1 workflows are basically the same; only the diffusion model files differ.
If you donโt know which version to choose, please refer to the following suggestions:
- HiDream-I1-Full can generate the highest quality images
- HiDream-I1-Dev balances high-quality image generation with speed
- HiDream-I1-Fast can generate images in just 16 steps, suitable for scenarios requiring real-time iteration
For the dev and fast versions, negative prompts are not needed, so please set the `cfg` parameter to `1.0` during sampling. We have noted the corresponding parameter settings in the relevant workflows.
The full-precision weights of all three versions require a lot of VRAM; you may need more than 27GB of VRAM to run them smoothly. In the corresponding workflow tutorials, we will use the fp8 versions as demonstration examples to ensure that most users can run them. However, we will still provide download links for the other model variants in each example, and you can choose the appropriate file based on your available VRAM.
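If you are not sure how much VRAM your GPU has, you can check it with PyTorch. This is a minimal sketch, assuming an NVIDIA GPU and an existing PyTorch installation (ComfyUI already ships with one):

```python
import torch

# Print the total VRAM of the first CUDA device so you can decide
# between the fp8 and full-precision model files.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device detected.")
```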
Model Installation
The following model files are shared by all the workflows in this guide. Please click the corresponding links to download them, and save them according to the model file save location shown below.
text_encoders:
- clip_l_hidream.safetensors
- clip_g_hidream.safetensors
- t5xxl_fp8_e4m3fn_scaled.safetensors (this file is used in many other workflows, so you may have already downloaded it)
- llama_3.1_8b_instruct_fp8_scaled.safetensors
vae:
- ae.safetensors (this is Flux's VAE model; if you have used a Flux workflow before, you may already have this file)
diffusion_models:
We will guide you to download the corresponding model file in each workflow section below.
Model file save location
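The shared files above go into the standard ComfyUI model folders. Assuming a default ComfyUI installation, the layout looks like this (the diffusion model files are added in the workflow sections below):

```
ComfyUI/
└── models/
    ├── text_encoders/
    │   ├── clip_l_hidream.safetensors
    │   ├── clip_g_hidream.safetensors
    │   ├── t5xxl_fp8_e4m3fn_scaled.safetensors
    │   └── llama_3.1_8b_instruct_fp8_scaled.safetensors
    ├── vae/
    │   └── ae.safetensors
    └── diffusion_models/
        └── (downloaded per workflow below)
```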
HiDream-I1 Full Version Workflow
1. Model File Download
Please select the appropriate version based on your hardware, click the link, and save the downloaded model file to the `ComfyUI/models/diffusion_models/` folder. If you prefer to script the download, see the sketch after this list.
- FP8 version: hidream_i1_full_fp8.safetensors requires more than 16GB of VRAM
- Full version: hidream_i1_full_f16.safetensors requires more than 27GB of VRAM
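The following sketch uses the huggingface_hub Python package to fetch the fp8 file. The repository id and file path are assumptions based on the repackaged HiDream-I1_ComfyUI repository mentioned earlier; check the repository page for the exact paths before running it:

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Assumed repo id / file path for the repackaged model -- verify them on the
# repository page and adjust if the layout differs.
cached = hf_hub_download(
    repo_id="Comfy-Org/HiDream-I1_ComfyUI",
    filename="split_files/diffusion_models/hidream_i1_full_fp8.safetensors",
)

# Copy the cached file into the ComfyUI diffusion_models folder.
target = Path("ComfyUI/models/diffusion_models") / Path(cached).name
target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(cached, target)
print(f"Saved to {target}")
```

The same approach works for the dev and fast model files in the sections below; just swap in the corresponding filename.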
2. Workflow File Download
Please download the image below and drag it into ComfyUI to load the corresponding workflow.
3. Complete the Workflow Step by Step
Follow the steps below to complete the workflow:
- Make sure the `Load Diffusion Model` node is using the `hidream_i1_full_fp8.safetensors` file
- Make sure the four corresponding text encoders in `QuadrupleCLIPLoader` are loaded correctly:
  - clip_l_hidream.safetensors
  - clip_g_hidream.safetensors
  - t5xxl_fp8_e4m3fn_scaled.safetensors
  - llama_3.1_8b_instruct_fp8_scaled.safetensors
- Make sure the `Load VAE` node is using the `ae.safetensors` file
- For the full version, you need to set the `shift` parameter in `ModelSamplingSD3` to `3.0`
- For the `KSampler` node, make the following settings:
  - Set `steps` to `50`
  - Set `cfg` to `5.0`
  - (Optional) Set `sampler` to `lcm`
  - (Optional) Set `scheduler` to `normal`
- Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to execute the image generation
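Besides clicking `Run` in the UI, you can also queue the same workflow programmatically through ComfyUI's HTTP API, which is handy for batch generation. This is a minimal sketch: it assumes ComfyUI is running locally on the default port 8188 and that you have exported this workflow from ComfyUI in API format to a file named `hidream_full_api.json` (a hypothetical filename), and the node id `"3"` is a placeholder you should replace with your actual KSampler node id:

```python
import json
import urllib.request

# Load a workflow exported from ComfyUI in API format (a JSON mapping of
# node ids to their class types and inputs).
with open("hidream_full_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Apply the KSampler settings described above. The node id "3" is a
# placeholder -- look up the real id of your KSampler node in the export.
workflow["3"]["inputs"]["steps"] = 50
workflow["3"]["inputs"]["cfg"] = 5.0

# Queue the prompt on a locally running ComfyUI instance (default port 8188).
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```

The same pattern works for the dev and fast workflows below; adjust `steps` and set `cfg` to `1.0` as noted in those sections.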
HiDream-I1 Dev Version Workflow
1. Model File Download
Please select the appropriate version based on your hardware, click the link, and save the downloaded model file to the `ComfyUI/models/diffusion_models/` folder.
- FP8 version: hidream_i1_dev_fp8.safetensors requires more than 16GB of VRAM
- Full version: hidream_i1_dev_bf16.safetensors requires more than 27GB of VRAM
2. Workflow File Download
Please download the image below and drag it into ComfyUI to load the corresponding workflow.
3. Complete the Workflow Step by Step
Follow the steps below to complete the workflow:
- Make sure the `Load Diffusion Model` node is using the `hidream_i1_dev_fp8.safetensors` file
- Make sure the four corresponding text encoders in `QuadrupleCLIPLoader` are loaded correctly:
  - clip_l_hidream.safetensors
  - clip_g_hidream.safetensors
  - t5xxl_fp8_e4m3fn_scaled.safetensors
  - llama_3.1_8b_instruct_fp8_scaled.safetensors
- Make sure the `Load VAE` node is using the `ae.safetensors` file
- For the dev version, you need to set the `shift` parameter in `ModelSamplingSD3` to `6.0`
- For the `KSampler` node, make the following settings:
  - Set `steps` to `28`
  - (Important) Set `cfg` to `1.0`
  - (Optional) Set `sampler` to `lcm`
  - (Optional) Set `scheduler` to `normal`
- Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to execute the image generation
HiDream-I1 Fast Version Workflow
1. Model File Download
Please select the appropriate version based on your hardware, click the link, and save the downloaded model file to the `ComfyUI/models/diffusion_models/` folder.
- FP8 version: hidream_i1_fast_fp8.safetensors requires more than 16GB of VRAM
- Full version: hidream_i1_fast_bf16.safetensors requires more than 27GB of VRAM
2. Workflow File Download
Please download the image below and drag it into ComfyUI to load the corresponding workflow.
3. Complete the Workflow Step by Step
Follow the steps below to complete the workflow:
- Make sure the `Load Diffusion Model` node is using the `hidream_i1_fast_fp8.safetensors` file
- Make sure the four corresponding text encoders in `QuadrupleCLIPLoader` are loaded correctly:
  - clip_l_hidream.safetensors
  - clip_g_hidream.safetensors
  - t5xxl_fp8_e4m3fn_scaled.safetensors
  - llama_3.1_8b_instruct_fp8_scaled.safetensors
- Make sure the `Load VAE` node is using the `ae.safetensors` file
- For the fast version, you need to set the `shift` parameter in `ModelSamplingSD3` to `3.0`
- For the `KSampler` node, make the following settings:
  - Set `steps` to `16`
  - (Important) Set `cfg` to `1.0`
  - (Optional) Set `sampler` to `lcm`
  - (Optional) Set `scheduler` to `normal`
- Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to execute the image generation
Other Related Resources
GGUF Version Models
You need to use the `Unet Loader (GGUF)` node in City96's ComfyUI-GGUF to replace the `Load Diffusion Model` node.
NF4 Version Models
- HiDream-I1-nf4
- Use the ComfyUI-HiDream-Sampler custom node to run the NF4 version models.