XPU not working: "No backend can handle 'dequantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}"
#3
by AI-Joe-git
No backend can handle 'dequantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}
System Information
- ComfyUI Version: 0.7.0
- Arguments: main.py --output-directory C:\Users\uscha\Documents\AI-Playground\media --front-end-version Comfy-Org/ComfyUI_frontend@latest --max-upload-size 500
- OS: win32
- Python Version: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
- Embedded Python: false
- PyTorch Version: 2.11.0.dev20251215+xpu
Devices
- Name: xpu:0 Intel(R) Arc(TM) 140V GPU
- Type: xpu
- VRAM Total: 27230117888 bytes (~25.4 GiB)
- VRAM Free: 27230116864 bytes (~25.4 GiB)
- Torch VRAM Total: 2097152 bytes (2 MiB)
- Torch VRAM Free: 2096128 bytes (~2 MiB)
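For what it's worth, the device list shows the XPU itself is detected fine; the failure comes from the 'dequantize_per_tensor_fp8' op, whose eager backend only accepts 'cuda' and 'cpu' tensors (that's literally what the error message is saying). A quick sanity check with PyTorch's standard torch.xpu API confirms the build really does see the Arc GPU:

```python
import torch

# Sanity-check that this PyTorch build has XPU support and can see the Arc GPU.
print(torch.__version__)              # e.g. 2.11.0.dev20251215+xpu
print(torch.xpu.is_available())       # should print True on this system
print(torch.xpu.device_count())       # number of visible XPU devices
print(torch.xpu.get_device_name(0))   # e.g. Intel(R) Arc(TM) 140V GPU
```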
Yes, I do, but ComfyUI doesn't even fall back to CPU when CUDA is unavailable. My only hope is to wait for a proper GGUF version that might resolve the issue. I've successfully run inference with the --cpu flag, but as you can imagine, it's so slow that it's not practical.
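Until an XPU kernel for this op lands, a possible stopgap is to round-trip the fp8 tensor through the CPU for dequantization and move the fp32 result back to the XPU. This is only a minimal sketch: the helper name and the per-tensor scale argument are assumptions for illustration, not ComfyUI's actual code path:

```python
import torch

def dequantize_fp8_on_cpu(x_fp8: torch.Tensor, scale: float) -> torch.Tensor:
    # Hypothetical workaround: the eager dequant kernel only accepts
    # CUDA/CPU tensors, so round-trip the fp8 tensor through the CPU.
    device = x_fp8.device                              # e.g. xpu:0
    out = x_fp8.to("cpu").to(torch.float32) * scale    # per-tensor dequantization
    return out.to(device)                              # move the fp32 result back

# Usage sketch with an assumed float8_e4m3fn tensor on the XPU:
# w = torch.randn(4, 4).to(torch.float8_e4m3fn).to("xpu")
# w_fp32 = dequantize_fp8_on_cpu(w, scale=0.02)
```

The obvious cost is an extra host round-trip per dequantized tensor, so it's slow, but not as slow as running the whole pipeline with --cpu.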
UPDATE: It works now, and it's incredibly fast!