Paper: Leveraging Domain Adaptation and Data Augmentation to Improve Qur'anic IR in English and Arabic (arXiv:2312.02803)
Model Description: BIOptimus v.0.4 is a BERT-like biomedical language model pre-trained on PubMed abstracts using contextualized weight distillation and curriculum learning. It achieves state-of-the-art performance on several biomedical NER datasets from the BLURB benchmark.
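For context on what "BERT-like pre-training" involves, the sketch below shows the standard BERT masked-language-model corruption step (the 80/10/10 masking scheme). This is generic BERT methodology, not BIOptimus's specific contextualized weight distillation or curriculum schedule; the `MASK_ID` and `VOCAB_SIZE` constants are the usual values for the original BERT uncased vocabulary and are assumptions here.

```python
import random

MASK_ID = 103      # [MASK] id in the standard BERT uncased vocabulary (assumed)
VOCAB_SIZE = 30522 # standard BERT uncased vocabulary size (assumed)

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """Apply BERT-style masked-language-model corruption.

    Each token is selected with probability `mask_prob`. A selected token is
    replaced by [MASK] 80% of the time, by a random token 10% of the time,
    and left unchanged 10% of the time. Returns the corrupted sequence and
    per-position labels: the original id at selected positions, -100 (the
    conventional "ignore" index for the loss) elsewhere.
    """
    rng = rng or random.Random()
    corrupted, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            labels.append(tok)  # the model must predict the original token here
            roll = rng.random()
            if roll < 0.8:
                corrupted.append(MASK_ID)          # 80%: replace with [MASK]
            elif roll < 0.9:
                corrupted.append(rng.randrange(VOCAB_SIZE))  # 10%: random token
            else:
                corrupted.append(tok)              # 10%: keep unchanged
        else:
            labels.append(-100)  # position ignored by the training loss
            corrupted.append(tok)
    return corrupted, labels
```

During pre-training this corruption is applied to each batch, and the model is trained to recover the original ids at the selected positions only.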