
# MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (Safetensors Checkpoint)

MobileBERT is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks. See the original MobileBERT release for the TensorFlow checkpoint; this repository is simply that checkpoint converted to the safetensors format.
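
The bottleneck design is visible directly in the model configuration. Here is a minimal sketch (field names as defined by `MobileBertConfig` in `transformers`; the values in the comments assume the stock `google/mobilebert-uncased` configuration):

```python
from transformers import MobileBertConfig

config = MobileBertConfig.from_pretrained("google/mobilebert-uncased")

# The 512-dim hidden states are projected down to a 128-dim bottleneck
# inside each layer, and each layer stacks several small feed-forward
# networks to rebalance compute between attention and the FFNs.
print(config.hidden_size)               # 512
print(config.intra_bottleneck_size)     # 128
print(config.num_feedforward_networks)  # 4
print(config.use_bottleneck)            # True
```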

## Example usage in transformers

```python
from transformers import MobileBertTokenizer, MobileBertForMaskedLM
import torch

# The tokenizer is shared with the original google/mobilebert-uncased release.
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")

# Load the safetensors weights from this repository.
model = MobileBertForMaskedLM.from_pretrained(
    "vysri/mobilebert-uncased-pytorch"
)
model.eval()

sentence = "The capital of France is [MASK]."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and take the highest-scoring token there.
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = outputs.logits[0, mask_token_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"Input: {sentence}")
print(f"Prediction: {predicted_token}")
```