# mlx-community/whisper-large-v3-asr-8bit

This model was converted to MLX format from `./whisper-large-v3` using mlx-audio version 0.2.10. Refer to the original model card for more details on the model.
## Use with mlx-audio

```bash
pip install -U mlx-audio
```
**CLI example:**

```bash
python -m mlx_audio.stt.generate --model mlx-community/whisper-large-v3-asr-8bit --audio "audio.wav"
```
**Python example:**

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

model = load_model("mlx-community/whisper-large-v3-asr-8bit")

transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)

print(transcription.text)
```
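Whisper models operate on 16 kHz audio internally, so it can be worth sanity-checking a recording's sample rate, channel count, and duration before transcribing. The sketch below uses only Python's standard-library `wave` module; the `wav_info` helper is illustrative and not part of mlx-audio. It writes a one-second silent clip as a stand-in for a real recording, then inspects it:

```python
import wave


def wav_info(path):
    """Return (sample_rate, channels, duration_seconds) for a WAV file."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        duration = wf.getnframes() / rate
    return rate, channels, duration


# Write one second of 16 kHz mono silence as a stand-in for a real recording.
with wave.open("audio.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)  # 16-bit PCM samples
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)

rate, channels, duration = wav_info("audio.wav")
print(f"{rate} Hz, {channels} channel(s), {duration:.1f} s")
```

A file reporting an unexpected rate or zero duration is a quick signal that the recording is corrupt or empty before a transcription run is started.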