Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with NousResearch/DeepHermes-3-Llama-3-8B-Preview as the base.
The following models were included in the merge:
- Nexesenex/Llama_3.1_8b_Smarteaz_0.1b
- huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
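For intuition, the Model Stock paper merges each weight tensor by interpolating between the average of the fine-tuned weights and the pretrained base weight, with the ratio t set by the angle between the fine-tuned "task vectors" (tuned minus base): t = k·cosθ / ((k−1)·cosθ + 1) for k models. The sketch below is an illustrative re-implementation of that per-tensor rule, not mergekit's exact code:

```python
import torch

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor with the Model Stock rule (arXiv:2403.19522)."""
    k = len(tuned)
    assert k >= 2, "Model Stock needs at least two fine-tuned models"
    # Task vectors: displacement of each fine-tuned model from the base.
    deltas = [(w - base).flatten().float() for w in tuned]
    # Average pairwise cosine similarity between task vectors.
    cos_vals = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = k*cos(theta) / ((k-1)*cos(theta) + 1)
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)
    w_avg = torch.stack([w.float() for w in tuned]).mean(dim=0)
    return (t * w_avg + (1 - t) * base.float()).to(base.dtype)
```

mergekit applies this kind of rule tensor by tensor across every layer of the listed checkpoints.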
The following YAML configuration was used to produce this model:
merge_method: model_stock
models:
  - model: Nexesenex/Llama_3.1_8b_Smarteaz_0.1b
    parameters:
      weight: 1.0
  - model: huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
    parameters:
      weight: 1.0
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
dtype: bfloat16
normalize: true
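To reproduce the merge, mergekit also exposes a Python entry point; a minimal sketch is below (the config path ./model_stock.yml and output directory ./merged are placeholders, and MergeOptions fields may vary between mergekit versions):

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration above, saved to a placeholder path.
with open("./model_stock.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; model weights are fetched from the Hugging Face Hub as needed.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```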
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 26.65 |
| IFEval (0-shot) | 68.09 |
| BBH (3-shot) | 31.12 |
| MATH Lvl 5 (4-shot) | 18.66 |
| GPQA (0-shot) | 5.48 |
| MuSR (0-shot) | 9.46 |
| MMLU-PRO (5-shot) | 27.08 |
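Once merged, the model loads with the standard transformers API; a minimal inference sketch, assuming the placeholder ./merged output directory from the snippet above and the chat template inherited from the base model's tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged" is the placeholder output directory from the merge sketch above.
tokenizer = AutoTokenizer.from_pretrained("./merged")
model = AutoModelForCausalLM.from_pretrained(
    "./merged", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the Model Stock merge method in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```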