Microtron-4B

Microtron-4B is a merge of the following pre-trained language models, created with mergekit using the model_stock merge method and nvidia/Llama-3.1-Minitron-4B-Width-Base as the base model:

- anthracite-org/magnum-v2-4b
- rasyosef/Llama-3.1-Minitron-4B-Chat
- nvidia/Llama-3.1-Minitron-4B-Width-Base (base)

🧩 Configuration

```yaml
models:
  - model: anthracite-org/magnum-v2-4b
  - model: rasyosef/Llama-3.1-Minitron-4B-Chat
  - model: nvidia/Llama-3.1-Minitron-4B-Width-Base
merge_method: model_stock
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
dtype: bfloat16
```
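
The merge can be reproduced by saving the configuration above to a file and passing it to mergekit's `mergekit-yaml` command. Once merged, the model loads like any other Hugging Face causal language model. The snippet below is a minimal sketch using the standard transformers API; the repository id `bunnycore/Microtron-4B` and the bfloat16 dtype come from this card, while the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch: load the merged model with transformers.
# Assumes a recent transformers release and hardware with bfloat16 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Microtron-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

# The parent models are Llama-3.1 chat variants, so a chat template is expected.
messages = [{"role": "user", "content": "Explain what a model_stock merge does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)  # illustrative length
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```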