TODOS:

  • Quants with fixed config
  • Demo videos for different quants on the same noise seed

Notice: the F16 quant is not working properly because it doesn't have a config, but it can still be used for converting (after converting, do not forget to add the config field to the GGUF metadata).

This is an untested GGUF version of LTX-2 Rapid, converted in Colab using city96's GGUF conversion instructions.

Downloads last month: 2,737
Format: GGUF
Model size: 19B params
Architecture: ltxv

Available quantizations: 3-bit, 4-bit, 5-bit, 8-bit, 16-bit


Model tree for 3ndetz/LTX2-Rapid-Merges-GGUF

Base model: Lightricks/LTX-2 (this model is a quantized version of it)