# BERT Hash Pico Embeddings
This is a BERT Hash Pico model fine-tuned using sentence-transformers. It maps sentences & paragraphs to an 80-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model is an alternative to MUVERA fixed-dimensional encoding with ColBERT models. MUVERA encodes the multi-vector outputs of ColBERT into single dense vectors. While this is a great step, the main issue with MUVERA is that it tends to need wide vectors to be effective (5K to 10K dimensions). bert-hash-pico-embeddings outputs 80-dimensional vectors.
The training dataset is a subset of this embedding training collection. The training workflow was a two-stage distillation process, carried out with the following steps; a minimal sketch of the first stage is shown after the list.
- Distill embeddings from the larger bert-hash-nano-embeddings model using this model distillation script from Sentence Transformers.
- Build a distilled dataset of teacher scores using the mixedbread-ai/mxbai-rerank-xsmall-v1 cross-encoder for a random sample of the training dataset mentioned above.
- Further fine-tune the model on the distilled dataset using KLDivLoss.
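As a rough sketch of the first stage, embedding distillation with sentence-transformers can look like the code below. The dataset, batch size, and training settings are illustrative assumptions rather than the actual training configuration, and the sketch assumes the teacher and student output the same dimensionality; see the linked distillation script for the full workflow.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Teacher: the larger embeddings model, Student: the smaller base checkpoint being trained
teacher = SentenceTransformer("neuml/bert-hash-nano-embeddings", trust_remote_code=True)
student = SentenceTransformer("neuml/bert-hash-pico", trust_remote_code=True)

# Placeholder text dataset; the real training data is the embedding collection subset above
dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train[:10000]")
dataset = dataset.select_columns(["anchor"]).rename_column("anchor", "sentence")

# Label each sentence with the teacher embedding the student should reproduce
dataset = dataset.map(
    lambda batch: {"label": teacher.encode(batch["sentence"])},
    batched=True,
    batch_size=256
)

# MSELoss pulls the student embeddings toward the teacher embeddings
trainer = SentenceTransformerTrainer(
    model=student,
    train_dataset=dataset,
    loss=losses.MSELoss(student)
)
trainer.train()
```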
## Usage (txtai)
This model can be used to build embeddings databases with txtai for semantic search and/or as a knowledge source for retrieval augmented generation (RAG).
```python
import txtai

# Create the embeddings database
embeddings = txtai.Embeddings(
    path="neuml/bert-hash-pico-embeddings",
    content=True,
    vectors={"trust_remote_code": True}
)

# Index a collection of documents
embeddings.index(documents())

# Run a query
embeddings.search("query to run")
```
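In the example above, documents() stands in for any iterable of content to index. txtai accepts plain strings, dicts, or (id, text, tags) tuples, so a minimal generator might look like this (the sample data is illustrative):

```python
def documents():
    # Illustrative in-memory data; in practice this could stream from files or a database
    data = [
        "An 80-dimensional embedding keeps the vector index small",
        "Distillation transfers knowledge from a larger teacher model"
    ]

    # Yield (id, text, tags) tuples as expected by embeddings.index
    for uid, text in enumerate(data):
        yield (uid, text, None)
```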
## Usage (Sentence-Transformers)
Alternatively, the model can be loaded with sentence-transformers.
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("neuml/bert-hash-pico-embeddings", trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings)
```
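With recent sentence-transformers releases, the model's built-in similarity method can score the encoded vectors directly:

```python
# Pairwise similarity scores (cosine by default) between the encoded sentences
scores = model.similarity(embeddings, embeddings)
print(scores)
```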
## Usage (Hugging Face Transformers)
The model can also be used directly with Transformers.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def meanpooling(output, mask):
    embeddings = output[0] # First element of the model output contains all token embeddings
    mask = mask.unsqueeze(-1).expand(embeddings.size()).float()
    return torch.sum(embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("neuml/bert-hash-pico-embeddings", trust_remote_code=True)
model = AutoModel.from_pretrained("neuml/bert-hash-pico-embeddings", trust_remote_code=True)

# Tokenize sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    output = model(**inputs)

# Perform pooling. In this case, mean pooling.
embeddings = meanpooling(output, inputs['attention_mask'])

print("Sentence embeddings:")
print(embeddings)
```
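As a generic follow-on, cosine similarities between the pooled embeddings can be computed with plain PyTorch:

```python
import torch.nn.functional as F

# L2-normalize the pooled embeddings, then pairwise cosine similarity is a matrix product
normalized = F.normalize(embeddings, p=2, dim=1)
print(normalized @ normalized.T)
```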
## Evaluation
The following tables show a subset of BEIR scored with the txtai benchmarks script.
The evaluation compares this model against the ColBERT MUVERA series of models.
Scores reported are ndcg@10, grouped into the following three categories.
**BERT Hash Embeddings vs MUVERA**
| Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
|---|---|---|---|---|---|
| BERT Hash Pico Embeddings | 0.4M | 0.2075 | 0.0812 | 0.3912 | 0.2266 |
| ColBERT MUVERA Pico | 0.4M | 0.1926 | 0.0564 | 0.4424 | 0.2305 |
**BERT Hash Embeddings vs MUVERA with maxsim re-ranking of the top 100 results, per the MUVERA paper**
| Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
|---|---|---|---|---|---|
| BERT Hash Pico Embeddings | 0.4M | 0.2702 | 0.1104 | 0.5965 | 0.3257 |
| ColBERT MUVERA Pico | 0.4M | 0.2821 | 0.1004 | 0.6090 | 0.3305 |
**Compare to other models**
| Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
|---|---|---|---|---|---|
| ColBERT MUVERA Pico (full multi-vector maxsim) | 0.4M | 0.3005 | 0.1117 | 0.6452 | 0.3525 |
| all-MiniLM-L6-v2 | 22.7M | 0.3089 | 0.2164 | 0.6527 | 0.3927 |
| mxbai-embed-xsmall-v1 | 24.1M | 0.3186 | 0.2155 | 0.6598 | 0.3980 |
In analyzing the results, bert-hash-pico-embeddings scores slightly worse than MUVERA with colbert-muvera-pico. The difference in output width is significant, though: standard MUVERA encoding produces 10240-dimensional vectors vs 80 dimensions here, so 10K standard float32 vectors need roughly 400 MB of storage vs 3.2 MB.
Keeping in mind this is only a 448K parameter model, the performance is still impressive given it has only ~2% of the parameters of popular small embeddings models.
While this isn't a state-of-the-art model, it's an extremely competitive method for building vectors on edge and low-resource devices.
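As a back-of-the-envelope check of the storage comparison above (plain arithmetic, not a benchmark):

```python
# Storage estimate for 10K float32 vectors (4 bytes per dimension)
def storage_mb(dimensions, vectors=10_000, bytes_per_value=4):
    return dimensions * vectors * bytes_per_value / 1_000_000

print(f"MUVERA (10240d): {storage_mb(10240):.1f} MB")  # roughly 400 MB
print(f"Pico (80d):      {storage_mb(80):.1f} MB")     # 3.2 MB
```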
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertHashModel'})
  (1): Pooling({'word_embedding_dimension': 80, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## More Information
Read more about this model and how it was built in this article.