Paper: [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522)
This is a merge of pre-trained language models created with [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the Model Stock merge method, with artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond as the base.
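In Model Stock, each weight tensor of the fine-tuned models is averaged and then pulled back toward the pretrained base, with an interpolation ratio derived from the angle between the fine-tuned weight deltas. Below is a minimal per-tensor sketch of that rule from the paper; mergekit's actual implementation operates over full checkpoints and handles edge cases, so this is illustrative only:

```python
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor using the Model Stock interpolation rule."""
    k = len(finetuned)
    # Each fine-tuned model's delta from the pretrained anchor.
    deltas = [(w - base).flatten() for w in finetuned]
    # Average pairwise cosine of the deltas estimates cos(theta) in the paper.
    cos_vals = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

The larger the disagreement (angle) between the fine-tuned deltas, the smaller t becomes, so noisier fine-tunes are weighted closer to the pretrained anchor.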
The following models were included in the merge:

- pankajmathur/orca_mini_v9_6_1B-instruct
- cognitivecomputations/Dolphin3.0-Llama3.2-1B
The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: pankajmathur/orca_mini_v9_6_1B-instruct
    parameters:
      weight: 1.0
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
base_model: artificialguybr/LLAMA3.2-1B-Synthia-II-Redmond
dtype: bfloat16
normalize: false
chat_template: auto
tokenizer:
  source: union
```
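Assuming the configuration above is saved as `config.yaml` (the filename is illustrative), the merge can be reproduced with the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged-model`) or through mergekit's Python API, sketched below per mergekit's README:

```python
# Sketch of reproducing the merge via mergekit's Python API
# (paths and options here are illustrative, not from this model card).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML shown above, saved locally
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",              # illustrative output directory
    options=MergeOptions(copy_tokenizer=True),
)
```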
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Avg. | 5.87 |
| IFEval (0-shot) | 24.31 |
| BBH (3-shot) | 3.65 |
| MATH Lvl 5 (4-shot) | 2.34 |
| GPQA (0-shot) | 2.01 |
| MuSR (0-shot) | 1.60 |
| MMLU-PRO (5-shot) | 1.29 |
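Because the configuration unions the source tokenizers (`tokenizer: source: union`) and selects a chat template automatically (`chat_template: auto`), the merged model can be queried in chat format. A minimal inference sketch with transformers, assuming the `./merged-model` output directory from the merge sketch above (any local path or Hub id would work):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path assumes the merge output from the sketch above.
tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained("./merged-model", torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```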