The Kling Start-End Frame to Video node lets you create smooth video transitions between two images. It automatically generates all the intermediate frames to produce a fluid transformation.

Parameters

Required Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| start_frame | Image | Starting image for the video |
| end_frame | Image | Ending image for the video |
| prompt | String | Text describing video content and transition |
| negative_prompt | String | Elements to avoid in the video |
| cfg_scale | Float | Controls how closely to follow the prompt |
| aspect_ratio | Select | Output video aspect ratio |
| mode | Select | Video generation settings (mode / duration / model) |

Mode Options

Available mode combinations:

  • standard mode / 5s duration / kling-v1
  • standard mode / 5s duration / kling-v1-5
  • pro mode / 5s duration / kling-v1
  • pro mode / 5s duration / kling-v1-5
  • pro mode / 5s duration / kling-v1-6
  • pro mode / 10s duration / kling-v1-5
  • pro mode / 10s duration / kling-v1-6

Default: “pro mode / 5s duration / kling-v1”
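Each mode string is resolved into a (mode, duration, model_name) tuple before the request is made, as the source code below shows. A minimal sketch of that lookup (values taken from the mapping in the source; the print is illustrative):

mapping = KlingStartEndFrameNode.get_mode_string_mapping()
mode, duration, model_name = mapping["pro mode / 10s duration / kling-v1-6"]
print(mode, duration, model_name)  # -> pro 10 kling-v1-6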

Output

| Output | Type | Description |
|--------|------|-------------|
| VIDEO | Video | Generated video |

How It Works

The node analyzes the start and end frames to create a smooth transition sequence between them. It sends the images and parameters to Kling’s API server, which generates all necessary intermediate frames for a fluid transformation.

The transition style and content can be guided using prompts, while negative prompts help avoid unwanted elements.
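For illustration, here is a hypothetical direct call to the node's api_call method (whose signature appears in the source below). In normal use ComfyUI's graph executor supplies these inputs; the tensor shapes, parameter values, and auth token here are placeholder assumptions:

import torch

node = KlingStartEndFrameNode()
video = node.api_call(
    # ComfyUI IMAGE tensors are [batch, height, width, channels] floats in 0-1
    start_frame=torch.rand(1, 512, 512, 3),
    end_frame=torch.rand(1, 512, 512, 3),
    prompt="A flower bud opening into full bloom, timelapse",
    negative_prompt="blurry, distorted",
    cfg_scale=0.5,
    aspect_ratio="16:9",
    mode="pro mode / 5s duration / kling-v1",
    auth_token="YOUR_COMFY_ORG_TOKEN",  # normally filled in as a hidden input
)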

Source Code

Node Source Code (Updated 2025-05-03)



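# Imports needed to run this excerpt. typing and torch are standard; the
# ComfyUI-internal names (KlingImage2VideoNode, KlingImage2VideoRequest,
# AspectRatio, IO, model_field_to_node_input) are defined elsewhere in the
# package, and the exact module paths suggested below are assumptions:
from typing import Optional

import torch

# from comfy.comfy_types import IO
# from comfy_api_nodes.apis import AspectRatio, KlingImage2VideoRequest
# from comfy_api_nodes.mapper_utils import model_field_to_node_input
# (KlingImage2VideoNode is defined earlier in the same source file)
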
class KlingStartEndFrameNode(KlingImage2VideoNode):
    """
    Kling First Last Frame Node. This node allows creation of a video from a first and last frame. It calls the normal image to video endpoint, but only allows the subset of input options that support the `image_tail` request field.
    """

    @staticmethod
    def get_mode_string_mapping() -> dict[str, tuple[str, str, str]]:
        """
        Returns a mapping of mode strings to their corresponding (mode, duration, model_name) tuples.
        Only includes config combos that support the `image_tail` request field.
        """
        return {
            "standard mode / 5s duration / kling-v1": ("std", "5", "kling-v1"),
            "standard mode / 5s duration / kling-v1-5": ("std", "5", "kling-v1-5"),
            "pro mode / 5s duration / kling-v1": ("pro", "5", "kling-v1"),
            "pro mode / 5s duration / kling-v1-5": ("pro", "5", "kling-v1-5"),
            "pro mode / 5s duration / kling-v1-6": ("pro", "5", "kling-v1-6"),
            "pro mode / 10s duration / kling-v1-5": ("pro", "10", "kling-v1-5"),
            "pro mode / 10s duration / kling-v1-6": ("pro", "10", "kling-v1-6"),
        }

    @classmethod
    def INPUT_TYPES(s):
        modes = list(KlingStartEndFrameNode.get_mode_string_mapping().keys())
        return {
            "required": {
                "start_frame": model_field_to_node_input(
                    IO.IMAGE, KlingImage2VideoRequest, "image"
                ),
                "end_frame": model_field_to_node_input(
                    IO.IMAGE, KlingImage2VideoRequest, "image_tail"
                ),
                "prompt": model_field_to_node_input(
                    IO.STRING, KlingImage2VideoRequest, "prompt", multiline=True
                ),
                "negative_prompt": model_field_to_node_input(
                    IO.STRING,
                    KlingImage2VideoRequest,
                    "negative_prompt",
                    multiline=True,
                ),
                "cfg_scale": model_field_to_node_input(
                    IO.FLOAT, KlingImage2VideoRequest, "cfg_scale"
                ),
                "aspect_ratio": model_field_to_node_input(
                    IO.COMBO,
                    KlingImage2VideoRequest,
                    "aspect_ratio",
                    enum_type=AspectRatio,
                ),
                "mode": (
                    modes,
                    {
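                        # modes[2] == "pro mode / 5s duration / kling-v1", the documented default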
                        "default": modes[2],
                        "tooltip": "The configuration to use for the video generation following the format: mode / duration / model_name.",
                    },
                ),
            },
            "hidden": {"auth_token": "AUTH_TOKEN_COMFY_ORG"},
        }

    DESCRIPTION = "Generate a video sequence that transitions between your provided start and end images. The node creates all frames in between, producing a smooth transformation from the first frame to the last."

    def parse_inputs_from_mode(self, mode: str) -> tuple[str, str, str]:
        """Parses the mode input into a tuple of (model_name, duration, mode)."""
        return KlingStartEndFrameNode.get_mode_string_mapping()[mode]

    def api_call(
        self,
        start_frame: torch.Tensor,
        end_frame: torch.Tensor,
        prompt: str,
        negative_prompt: str,
        cfg_scale: float,
        aspect_ratio: str,
        mode: str,
        auth_token: Optional[str] = None,
    ):
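        # Unpack the combined mode string into the separate mode, duration,
        # and model_name arguments expected by the parent image-to-video node.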
        mode, duration, model_name = self.parse_inputs_from_mode(mode)
        return super().api_call(
            prompt=prompt,
            negative_prompt=negative_prompt,
            model_name=model_name,
            start_frame=start_frame,
            cfg_scale=cfg_scale,
            mode=mode,
            aspect_ratio=aspect_ratio,
            duration=duration,
            end_frame=end_frame,
            auth_token=auth_token,
        )