The workflows are based on the extracted models from https://huggingface.co/Kijai/LTXV2_comfy. The extracted models (as separate files) are easier to run and also add GGUF support, among other benefits.

(You can easily swap the model loader for ComfyUI's default checkpoint loader if you want to load the "all in one" checkpoint with the VAE built in.)


More workflows:

ComfyUI official workflows: https://docs.comfy.org/tutorials/video/ltx/ltx-2

LTX-Video official workflows: https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows

RunComfy (workflows can be downloaded for local use):

LTX-2 ControlNet (pose, depth, etc.): https://www.runcomfy.com/comfyui-workflows/ltx-2-controlnet-in-comfyui-depth-controlled-video-workflow

LTX-2 First/Last Frame: https://www.runcomfy.com/comfyui-workflows/ltx-2-first-last-frame-in-comfyui-audio-visual-motion-control
