A node that generates high-quality images using Stability AI's Stable Image Ultra model
The Stability Stable Image Ultra node generates high-quality images through Stability AI's Stable Image Ultra API. It supports both text-to-image and image-to-image generation, producing detailed, visually expressive results from text prompts.
Required parameters

Parameter | Type | Default | Description |
---|---|---|---|
prompt | String | "" | Text prompt describing what to generate. A strong, descriptive prompt that clearly defines elements, colors, and subjects will lead to better results. You can control the weight of a specific word with the `(word:weight)` format, where the weight is a value between 0 and 1. For example, `The sky was a crisp (blue:0.3) and (green:0.8)` conveys a sky that is blue and green, but more green than blue. |
aspect_ratio | Selection | "1:1" | Aspect ratio of the output image |
style_preset | Selection | "None" | Optional preset style for the generated image |
seed | Integer | 0 | Random seed used for creating the noise, in the range 0-4294967294 |
Optional parameters

Parameter | Type | Default | Description |
---|---|---|---|
image | Image | - | Input image for image-to-image generation |
negative_prompt | String | "" | Text describing what you do not wish to see in the output image. This is an advanced feature |
image_denoise | Float | 0.5 | Denoising strength applied to the input image, in the range 0.0-1.0. A value of 0.0 yields an image identical to the input, while 1.0 behaves as if no image was provided at all |
Output | Type | Description |
---|---|---|
IMAGE | Image | The generated image |
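
The IMAGE output uses ComfyUI's standard image tensor convention. If you need to inspect or save it outside of a workflow, a minimal sketch (assuming the usual `[batch, height, width, channel]` float layout with values in 0-1; `save_comfy_image` is a hypothetical helper, not part of the node) could look like this:

```python
import numpy as np
from PIL import Image

def save_comfy_image(image_tensor, path="output.png"):
    # Take the first image in the batch, scale to 8-bit, and write it as a PNG.
    arr = (image_tensor[0].cpu().numpy() * 255.0).clip(0, 255).astype(np.uint8)
    Image.fromarray(arr).save(path)
```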
Stability AI Stable Image Ultra workflow example
[Node source code (updated 2025-05-03)]
class StabilityStableImageUltraNode:
"""
Generates images synchronously based on prompt and resolution.
"""
RETURN_TYPES = (IO.IMAGE,)
DESCRIPTION = cleandoc(__doc__ or "") # Handle potential None value
FUNCTION = "api_call"
API_NODE = True
CATEGORY = "api node/image/stability"
@classmethod
def INPUT_TYPES(s):
return {
"required": {
"prompt": (
IO.STRING,
{
"multiline": True,
"default": "",
"tooltip": "What you wish to see in the output image. A strong, descriptive prompt that clearly defines" +
"What you wish to see in the output image. A strong, descriptive prompt that clearly defines" +
"elements, colors, and subjects will lead to better results. " +
"To control the weight of a given word use the format `(word:weight)`," +
"where `word` is the word you'd like to control the weight of and `weight`" +
"is a value between 0 and 1. For example: `The sky was a crisp (blue:0.3) and (green:0.8)`" +
"would convey a sky that was blue and green, but more green than blue."
},
),
"aspect_ratio": ([x.value for x in StabilityAspectRatio],
{
"default": StabilityAspectRatio.ratio_1_1,
"tooltip": "Aspect ratio of generated image.",
},
),
"style_preset": (get_stability_style_presets(),
{
"tooltip": "Optional desired style of generated image.",
},
),
"seed": (
IO.INT,
{
"default": 0,
"min": 0,
"max": 4294967294,
"control_after_generate": True,
"tooltip": "The random seed used for creating the noise.",
},
),
},
"optional": {
"image": (IO.IMAGE,),
"negative_prompt": (
IO.STRING,
{
"default": "",
"forceInput": True,
"tooltip": "A blurb of text describing what you do not wish to see in the output image. This is an advanced feature."
},
),
"image_denoise": (
IO.FLOAT,
{
"default": 0.5,
"min": 0.0,
"max": 1.0,
"step": 0.01,
"tooltip": "Denoise of input image; 0.0 yields image identical to input, 1.0 is as if no image was provided at all.",
},
),
},
"hidden": {
"auth_token": "AUTH_TOKEN_COMFY_ORG",
},
}
def api_call(self, prompt: str, aspect_ratio: str, style_preset: str, seed: int,
negative_prompt: str=None, image: torch.Tensor = None, image_denoise: float=None,
auth_token=None):
# prepare image binary if image present
image_binary = None
if image is not None:
image_binary = tensor_to_bytesio(image, 1504 * 1504).read()
else:
image_denoise = None
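        # Normalize optional inputs: an empty negative prompt and the "None" style preset are treated as unset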
if not negative_prompt:
negative_prompt = None
if style_preset == "None":
style_preset = None
files = {
"image": image_binary
}
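        # Build a synchronous request to Stability's Stable Image Ultra endpoint via the comfy.org proxy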
operation = SynchronousOperation(
endpoint=ApiEndpoint(
path="/proxy/stability/v2beta/stable-image/generate/ultra",
method=HttpMethod.POST,
request_model=StabilityStableUltraRequest,
response_model=StabilityStableUltraResponse,
),
request=StabilityStableUltraRequest(
prompt=prompt,
negative_prompt=negative_prompt,
aspect_ratio=aspect_ratio,
seed=seed,
strength=image_denoise,
style_preset=style_preset,
),
files=files,
content_type="multipart/form-data",
auth_token=auth_token,
)
response_api = operation.execute()
if response_api.finish_reason != "SUCCESS":
raise Exception(f"Stable Image Ultra generation failed: {response_api.finish_reason}.")
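        # Decode the base64-encoded image returned by the API into a ComfyUI IMAGE tensor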
image_data = base64.b64decode(response_api.image)
returned_image = bytesio_to_image_tensor(BytesIO(image_data))
return (returned_image,)
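
The node itself sends these fields through the comfy.org proxy using your ComfyUI credentials. For comparison, a minimal sketch of calling Stability AI's public Stable Image Ultra endpoint directly with `requests` might look like the following; the API key placeholder, the helper name `stable_image_ultra`, and the `output_format` choice are illustrative assumptions, not part of the node:

```python
import requests

STABILITY_API_KEY = "sk-..."  # placeholder; use your own Stability AI key

def stable_image_ultra(prompt, aspect_ratio="1:1", seed=0, negative_prompt=None,
                       style_preset=None, image_path=None, image_denoise=0.5):
    # Mirror the node's inputs as multipart form fields.
    data = {"prompt": prompt, "aspect_ratio": aspect_ratio, "seed": seed, "output_format": "png"}
    if negative_prompt:
        data["negative_prompt"] = negative_prompt
    if style_preset and style_preset != "None":
        data["style_preset"] = style_preset
    files = {"none": ""}  # dummy entry so the request is still sent as multipart/form-data
    if image_path is not None:
        files = {"image": open(image_path, "rb")}
        data["strength"] = image_denoise  # the node's image_denoise maps to the API's strength field
    resp = requests.post(
        "https://api.stability.ai/v2beta/stable-image/generate/ultra",
        headers={"authorization": f"Bearer {STABILITY_API_KEY}", "accept": "image/*"},
        data=data,
        files=files,
    )
    resp.raise_for_status()
    return resp.content  # raw image bytes (PNG here)

# Example usage:
# png_bytes = stable_image_ultra("The sky was a crisp (blue:0.3) and (green:0.8)")
# open("ultra.png", "wb").write(png_bytes)
```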