Core API Enhancement & Performance Optimizations
This release introduces significant backend improvements and performance optimizations that enhance workflow execution and node development capabilities:
ComfyAPI Core v0.0.2: Major update to the core API framework, providing improved stability and extensibility for custom node development and third-party integrations
Partial Execution Support: New backend support for partial workflow execution, enabling more efficient processing of complex multi-stage workflows by allowing selective node execution
WAN Camera Memory Optimization: Enhanced memory management for WAN-based camera workflows, reducing VRAM usage during video processing operations
WanFirstLastFrameToVideo Fix: Resolved a critical issue that prevented proper video generation when CLIP Vision components are unavailable, improving workflow reliability
VAE Nonlinearity Enhancement: Replaced manual activation functions with optimized torch.silu in VAE operations, providing better performance and numerical stability for image encoding/decoding
WAN VAE Optimizations: Additional fine-tuning optimizations for WAN VAE operations, improving processing speed and memory efficiency in video generation workflows
V3 Node Schema Definition: Initial implementation of next-generation node schema system, laying the groundwork for enhanced node type definitions and improved workflow validation
Template Updates: Multiple template version updates (0.1.44, 0.1.45) ensuring compatibility with latest node development standards and best practices
Enhanced Video Workflows: Improved stability and performance for video generation pipelines, particularly those using WAN-based models
Better Memory Management: Optimized memory usage patterns enable more complex workflows on systems with limited VRAM
Improved API Reliability: Core API enhancements provide more stable foundation for custom node development and workflow automation
Partial Execution Flexibility: New partial execution capabilities allow for more efficient debugging and iterative workflow development
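The VAE nonlinearity change above can be illustrated with a small sketch: SiLU is simply x · sigmoid(x), and swapping a hand-rolled version for the library's fused implementation computes the same values with less overhead. The code below is a pure-Python illustration of that identity, not ComfyUI's actual VAE code.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function used by the manual activation."""
    return 1.0 / (1.0 + math.exp(-x))

def manual_silu(x: float) -> float:
    """Hand-rolled nonlinearity: x * sigmoid(x)."""
    return x * sigmoid(x)

# A fused library op (e.g. torch.nn.functional.silu) computes this same
# function in one kernel, which is faster and numerically stabler on tensors.
for x in (-2.0, 0.0, 3.5):
    print(f"silu({x}) = {manual_silu(x):.6f}")
```
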
These foundational improvements strengthen ComfyUI’s core architecture while delivering immediate benefits for video processing workflows and memory-intensive operations, making the platform more robust for both casual creators and professional AI development workflows.
Memory Optimization & Large Model Performance
This release focuses on critical memory optimizations for large model workflows, particularly improving performance with WAN 2.2 models and enhancing VRAM management for high-end systems:
Reduced Memory Footprint: Eliminated unnecessary memory clones in WAN 2.2 VAE operations, significantly reducing memory usage during image encoding/decoding workflows
5B I2V Model Support: Major memory optimization for WAN 2.2 5B image-to-video models, making these large-scale models more accessible for creators with limited VRAM
Windows Large Card Support: Added extra reserved VRAM allocation for high-end graphics cards on Windows, preventing system instability during intensive generation workflows
Better Memory Allocation: Improved memory management for users working with multiple large models simultaneously
Faster VAE Processing: WAN 2.2 VAE operations now run more efficiently with reduced memory overhead, enabling smoother image generation pipelines
Stable Large Model Inference: Enhanced stability when working with billion-parameter models, crucial for professional AI art creation and research workflows
Improved Batch Processing: Memory optimizations enable better handling of batch operations with large models
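The idea behind the reserved-VRAM change can be sketched in a few lines: keep a safety reserve out of the free-memory budget before deciding whether a model fits. The function and numbers below are illustrative, not ComfyUI's internals.

```python
def fits_in_vram(model_bytes: int, total_vram: int, used_vram: int,
                 reserved: int) -> bool:
    """Return True if the model fits after subtracting a safety reserve.

    Keeping a reserve (as the Windows large-card change does) leaves
    headroom for the OS and driver, avoiding out-of-memory crashes
    mid-generation. All names and numbers here are hypothetical.
    """
    free = total_vram - used_vram - reserved
    return model_bytes <= free

GiB = 1024 ** 3
# 24 GiB card, 2 GiB already in use, 1 GiB reserved for the system:
print(fits_in_vram(20 * GiB, 24 * GiB, 2 * GiB, 1 * GiB))  # True
print(fits_in_vram(22 * GiB, 24 * GiB, 2 * GiB, 1 * GiB))  # False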
These targeted optimizations make ComfyUI more reliable for professional workflows involving large-scale models, particularly benefiting creators working with cutting-edge image-to-video generation and high-resolution image processing tasks.
Hardware Acceleration & Audio Processing Improvements
This release focuses on expanding hardware support and enhancing audio processing capabilities for workflow creators:
Documentation Updates: Enhanced README with HiDream E1.1 examples and updated model integration guides
Line Ending Fixes: Improved cross-platform compatibility by standardizing line endings in workflows
Code Cleanup: Removed deprecated code and optimized various components for better maintainability
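Line-ending standardization like the fix above usually amounts to converting Windows (CRLF) and old-Mac (CR) endings to Unix newlines. A minimal sketch of that normalization, not ComfyUI's actual code:

```python
def normalize_line_endings(text: str) -> str:
    """Convert Windows (\r\n) and old-Mac (\r) line endings to Unix (\n).

    Normalizing stored workflow files this way avoids spurious diffs and
    parse differences across platforms. Illustrative sketch only.
    """
    return text.replace("\r\n", "\n").replace("\r", "\n")

mixed = "node_a\r\nnode_b\rnode_c\n"
print(repr(normalize_line_endings(mixed)))  # 'node_a\nnode_b\nnode_c\n'
```
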
These improvements make ComfyUI more accessible across different hardware platforms while enhancing audio processing capabilities essential for modern multimedia AI workflows.
Advanced Sampling & Training Infrastructure Improvements
This release introduces significant enhancements to sampling algorithms, training capabilities, and node functionality for AI researchers and workflow creators:
Async Node Support: Full support for asynchronous node functions with earlier execution optimization, improving workflow performance for I/O intensive operations
Chroma Flexibility: Made Chroma's patch_size parameter configurable instead of hardcoded, allowing better adaptation to different model configurations
LTXV VAE Decoder: Switched to improved default padding mode for better image quality with LTXV models
Safetensors Memory Management: Added workaround for mmap issues, improving reliability when loading large model files
Warning Systems: Added torch import mistake warnings to catch common configuration issues
Template Updates: Multiple template version updates (0.1.36, 0.1.37, 0.1.39) for improved custom node development
Documentation: Enhanced fast_fp16_accumulation documentation in portable configurations
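Async node support means a node's function can be a coroutine, so independent I/O-bound nodes can overlap instead of blocking one another. A minimal asyncio sketch with a hypothetical I/O call, not ComfyUI's actual node API:

```python
import asyncio

async def fetch_metadata(name: str) -> str:
    """Stand-in for an I/O-bound call (network, disk). Hypothetical."""
    await asyncio.sleep(0.01)  # simulate latency without blocking the loop
    return f"metadata for {name}"

async def run_nodes() -> list:
    # With async node functions, independent I/O-bound operations can be
    # awaited concurrently rather than executed one after another.
    names = ["model_a", "model_b", "model_c"]
    return await asyncio.gather(*(fetch_metadata(n) for n in names))

results = asyncio.run(run_nodes())
print(results)
```
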
These improvements make ComfyUI more robust for production workflows while introducing powerful new sampling techniques and training capabilities essential for advanced AI research and creative applications.
Advanced Sampling & Model Control Enhancements
This release delivers significant improvements to sampling algorithms and model control systems, particularly benefiting advanced AI researchers and workflow creators:
These improvements make ComfyUI more robust for production workflows while expanding creative possibilities for AI artists working with advanced sampling techniques and model control systems.
Enhanced Model Support & Workflow Reliability
This release brings significant improvements to model compatibility and workflow stability:
Expanded Model Documentation: Added comprehensive support documentation for Flux Kontext and Omnigen 2 models, making it easier for creators to integrate these powerful models into their workflows
VAE Encoding Improvements: Removed unnecessary random noise injection during VAE encoding, resulting in more consistent and predictable outputs across workflow runs
Memory Management Fix: Resolved a critical memory estimation bug specifically affecting Kontext model usage, preventing out-of-memory errors and improving workflow stability
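The VAE encoding change above amounts to dropping the stochastic noise term so the same input always yields the same latent. A toy scalar sketch of the two behaviors, with made-up numbers, not ComfyUI's VAE code:

```python
import random

def encode_sampled(mean: float, std: float, rng: random.Random) -> float:
    """Stochastic encode: latent = mean + std * noise (varies per run)."""
    return mean + std * rng.gauss(0.0, 1.0)

def encode_deterministic(mean: float, std: float) -> float:
    """Deterministic encode: just use the distribution mean.

    Dropping the noise term makes repeated runs of the same workflow
    produce identical latents. Toy scalar sketch only.
    """
    return mean

rng = random.Random(0)
print(encode_sampled(0.5, 0.1, rng), encode_sampled(0.5, 0.1, rng))  # differ
print(encode_deterministic(0.5, 0.1), encode_deterministic(0.5, 0.1))  # equal
```
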
These changes enhance the reliability of advanced model workflows while maintaining ComfyUI’s flexibility for AI artists and researchers working with cutting-edge generative models.
Cosmos Predict2 Support: Full implementation for both text-to-image (2B and 14B models) and image-to-video generation workflows, expanding video creation capabilities
Enhanced Flux Compatibility: Chroma Text Encoder now works seamlessly with regular Flux models, improving text conditioning quality
LoRA Training Integration: New native LoRA training node using weight adapter scheme, enabling direct model fine-tuning within ComfyUI workflows
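The weight-adapter scheme behind LoRA can be sketched as a low-rank update merged into a weight matrix: W' = W + (alpha/rank) · B·A, where B is (out × rank) and A is (rank × in). A pure-Python toy with tiny matrices, assuming nothing about ComfyUI's actual implementation:

```python
def matmul(a, b):
    """Naive matrix multiply for small lists-of-lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_lora(W, A, B, alpha: float, rank: int):
    """Merge a LoRA update: W' = W + (alpha/rank) * B @ A.

    The product B @ A is a low-rank delta, which is why LoRA training
    touches far fewer parameters than full fine-tuning. Illustrative
    sketch only, not ComfyUI's weight-adapter code.
    """
    scale = alpha / rank
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 base weight
B = [[1.0], [0.0]]            # 2x1
A = [[0.0, 2.0]]              # 1x2
print(apply_lora(W, A, B, alpha=1.0, rank=1))  # [[1.0, 2.0], [0.0, 1.0]]
```
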
Performance & Hardware Optimizations
AMD GPU Enhancements: Enabled FP8 operations and PyTorch attention on GFX1201 and other compatible AMD GPUs for faster inference
Apple Silicon Fixes: Addressed long-standing FP16 attention issues on Apple devices, improving stability for Mac users
Flux Model Stability: Resolved black image generation issues with certain Flux models in FP16 precision
Advanced Sampling Improvements
Rectified Flow (RF) Samplers: Added SEEDS and multistep DPM++ SDE samplers with RF support, providing more sampling options for cutting-edge models
ModelSamplingContinuousEDM: New cosmos_rflow option for enhanced sampling control with Cosmos models
Memory Optimization: Improved memory estimation for Cosmos models with uncapped resolution support
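Rectified-flow samplers like those above work, in principle, by integrating an ODE dx/dt = v(x, t) from noise toward data, where the model supplies the velocity. A toy Euler integrator with an analytically known straight-line velocity field, purely illustrative:

```python
def euler_rf_sample(x0: float, velocity, steps: int) -> float:
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with Euler steps.

    Rectified-flow samplers follow this pattern: the model predicts a
    velocity and the sampler integrates it. Toy scalar sketch only.
    """
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        x += velocity(x, t) * dt
    return x

# For a straight-line (rectified) flow the true velocity is constant,
# v = x1 - x0, so even coarse Euler integration lands on the target.
x0, x1 = 0.0, 3.0
v = lambda x, t: x1 - x0
print(euler_rf_sample(x0, v, steps=8))
```
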
Developer & Integration Features
SQLite Database Support: Enhanced data management capabilities for custom nodes and workflow storage
PyProject.toml Integration: Automatic web folder registration and settings configuration from pyproject files
Frontend Flexibility: Support for semver suffixes and prerelease frontend versions for custom deployments
Tokenizer Enhancements: Configurable min_length settings with tokenizer_data for better text processing
Quality of Life Improvements
Kontext Aspect Ratio Fix: Resolved widget-only limitation, now works properly in all connection modes
SaveLora Consistency: Standardized filename format across all save nodes for better file organization
Python Version Warnings: Added alerts for outdated Python installations to prevent compatibility issues
WebcamCapture Fixes: Corrected IS_CHANGED signature for reliable live input workflows
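The IS_CHANGED hook mentioned above is, roughly, a classmethod that receives the node's inputs and returns a value ComfyUI compares between runs to decide whether the node must re-execute. A minimal hypothetical node illustrating that contract (not the real WebcamCapture implementation):

```python
import hashlib

class ExampleCaptureNode:
    """Minimal sketch of a custom node with an IS_CHANGED hook.

    When IS_CHANGED's return value differs from the previous run, the
    node is re-executed; when it matches, the cached result is reused.
    The class and its input are hypothetical.
    """

    @classmethod
    def IS_CHANGED(cls, source: str) -> str:
        # Hash the input so an identical source yields the same token
        # and a new source forces re-execution.
        return hashlib.sha256(source.encode()).hexdigest()

print(ExampleCaptureNode.IS_CHANGED("frame-001"))
```
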
This release significantly expands ComfyUI’s model ecosystem support while delivering crucial stability improvements and enhanced hardware compatibility across different platforms.
Pillow Compatibility: Updated deprecated API calls to maintain compatibility with latest image processing libraries
ROCm Support: Improved version detection for AMD GPU users
Template Updates: Enhanced project templates for custom node development
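Robust version detection, as in the ROCm improvement above, typically means parsing a dotted version string into comparable components rather than comparing raw strings. A generic sketch, not ComfyUI's actual detection code:

```python
def parse_version(version: str) -> tuple:
    """Parse a dotted version string like "6.0.2" into a comparable tuple.

    Comparing tuples instead of raw strings avoids lexicographic bugs
    such as "10.0" < "9.0". Illustrative sketch only.
    """
    return tuple(int(part) for part in version.split("."))

print(parse_version("6.0.2") >= parse_version("5.7"))    # True
print("10.0" < "9.0")                                    # True (string bug)
print(parse_version("10.0") < parse_version("9.0"))      # False (fixed)
```
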
These updates strengthen ComfyUI’s foundation for complex AI workflows while making the platform more accessible to new users through improved documentation and helper tools.