```yaml
base_model: failspy/Llama-3-8B-Instruct-MopeyMule+PJMixers-Archive/LLaMa-3-Instruct-ToxicQAFinal-ORPO-8B-QDoRA
chat_template: llama3
dtype: float32
merge_method: sce
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 32]
            model: Cas-Archive/L3-Penumbral-Mind-RP-8B
          - layer_range: [0, 32]
            model: ChaoticNeutrals/T-900-8B
          - layer_range: [0, 32]
            model: failspy/Llama-3-8B-Instruct-MopeyMule+PJMixers-Archive/LLaMa-3-Instruct-ToxicQAFinal-ORPO-8B-QDoRA
parameters:
  select_topk: 0.25
tokenizer:
  pad_to_multiple_of: 32
```