# mlx-community/mxbai-embed-large-v1
This model, mlx-community/mxbai-embed-large-v1, was converted to MLX format from mixedbread-ai/mxbai-embed-large-v1 using mlx-lm version 0.0.3.
## Use with mlx

```bash
pip install mlx-embeddings
```

```python
from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("mlx-community/mxbai-embed-large-v1")

# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # normalized embeddings

# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
```
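Because the returned embeddings are L2-normalized, the matrix product above directly yields cosine similarities. A minimal NumPy sketch (with toy stand-in vectors, not real model outputs) illustrating why the dot product of normalized vectors equals cosine similarity:

```python
import numpy as np

# Toy stand-ins for two embedding vectors (not actual model outputs)
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.5, 1.0])

# L2-normalize, as the model does for text_embeds
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

# Dot product of the normalized vectors...
dot = float(a_n @ b_n)

# ...matches cosine similarity computed from the raw vectors
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

assert abs(dot - cos) < 1e-9
```

This is why no explicit normalization step is needed before `mx.matmul(embeddings, embeddings.T)`.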
## Model tree for mlx-community/mxbai-embed-large-v1

Base model: mixedbread-ai/mxbai-embed-large-v1

## Evaluation results
| Metric | Dataset (MTEB) | Split | Source | Value |
|---|---|---|---|---|
| accuracy | AmazonCounterfactualClassification (en) | test | self-reported | 75.045 |
| ap | AmazonCounterfactualClassification (en) | test | self-reported | 37.736 |
| f1 | AmazonCounterfactualClassification (en) | test | self-reported | 68.927 |
| accuracy | AmazonPolarityClassification | test | self-reported | 93.840 |
| ap | AmazonPolarityClassification | test | self-reported | 90.932 |
| f1 | AmazonPolarityClassification | test | self-reported | 93.830 |
| accuracy | AmazonReviewsClassification (en) | test | self-reported | 49.184 |
| f1 | AmazonReviewsClassification (en) | test | self-reported | 48.742 |
| map_at_1 | ArguAna | test | self-reported | 41.252 |
| map_at_10 | ArguAna | test | self-reported | 57.778 |
| map_at_100 | ArguAna | test | self-reported | 58.233 |
| map_at_1000 | ArguAna | test | self-reported | 58.237 |
| map_at_3 | ArguAna | test | self-reported | 53.450 |
| map_at_5 | ArguAna | test | self-reported | 56.376 |
| mrr_at_1 | ArguAna | test | self-reported | 41.679 |
| mrr_at_10 | ArguAna | test | self-reported | 57.927 |
| mrr_at_100 | ArguAna | test | self-reported | 58.389 |
| mrr_at_1000 | ArguAna | test | self-reported | 58.392 |
| mrr_at_3 | ArguAna | test | self-reported | 53.651 |
| mrr_at_5 | ArguAna | test | self-reported | 56.521 |