𓌳 REAP 𓌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression
📄 Paper • 💻 Code • 📝 Blog

MiniMax-M2.1-REAP-50

✨ Highlights

50% Expert-Pruned MiniMax-M2.1 optimized for code generation and function calling.

  • 50% Expert Pruning: ~80 experts remaining per layer
  • Calibrated for Code & Tools: Same calibration mix as GLM-4.7 REAP models
  • One-Shot Compression: No fine-tuning required

πŸ™ Acknowledgments


📋 Model Specifications

| Property | Value |
|----------|-------|
| Base Model | MiniMax-M2.1 |
| Compression | 50% experts removed |
| Parameters | ~220B |
| Experts per Layer | ~80 |
| Precision | BF16 |
| Disk Size | ~420GB |

🔬 Calibration Dataset: Deep Dive

REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.

Why These 3 Datasets?

| Dataset | Samples | Purpose | Why It Matters |
|---------|---------|---------|----------------|
| evol-codealpaca-v1 | 700 | Code generation | 51% of mix; code tasks activate specific expert pathways, and pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | Function/tool calling | 24% of mix; tool use requires structured JSON output, so experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | Agentic multi-turn | 24% of mix; real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
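The sample counts above determine the per-dataset shares; a quick sanity check of the mix proportions (plain arithmetic, not tied to any loading code):

```python
# Calibration mix sample counts, as listed in the table above
mix = {
    "evol-codealpaca-v1": 700,
    "xlam-function-calling-60k": 330,
    "SWE-smith-trajectories": 330,
}

total = sum(mix.values())  # 1360 calibration samples in total
shares = {name: round(100 * n / total) for name, n in mix.items()}
print(shares)  # {'evol-codealpaca-v1': 51, 'xlam-function-calling-60k': 24, 'SWE-smith-trajectories': 24}
```

The rounded percentages reproduce the 51/24/24 split quoted in the table.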

The Science Behind Dataset Selection

```
REAP Algorithm:
1. Forward pass calibration samples through model
2. Record which experts activate and their magnitudes
3. Compute saliency = router_weight × activation_norm
4. Prune lowest-saliency experts

Key Insight: Experts are TASK-SPECIFIC
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context

If calibration lacks code → code-specialized experts appear "unused" → get pruned → model loses coding ability
```
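The saliency rule above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the REAP reference implementation; the array names and shapes are assumptions:

```python
import numpy as np

def reap_saliency(router_weights, expert_out_norms):
    """Per-expert saliency: mean over calibration tokens of
    (router gate weight x norm of the expert's output)."""
    # router_weights:   (tokens, experts) gate probabilities
    # expert_out_norms: (tokens, experts) L2 norms of expert outputs
    return (router_weights * expert_out_norms).mean(axis=0)

def prune_experts(saliency, compression_ratio=0.5):
    """Return sorted indices of experts to KEEP (highest saliency)."""
    n_keep = int(len(saliency) * (1 - compression_ratio))
    return np.sort(np.argsort(saliency)[-n_keep:])

rng = np.random.default_rng(42)
gates = rng.random((1024, 8))   # toy: 1024 tokens, 8 experts
norms = rng.random((1024, 8))
keep = prune_experts(reap_saliency(gates, norms), compression_ratio=0.5)
print(keep)  # indices of the 4 most salient experts
```

An expert that never fires on code-free calibration data gets near-zero saliency here, which is exactly why the mix must cover the target tasks.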

Cerebras' Original Mix (from paper)

Cerebras used the same 3 datasets in their GLM-4.6 REAP experiments:

  • evol-codealpaca-v1 for code generation
  • xlam-function-calling-60k for tool calling
  • SWE-smith-trajectories for agentic tasks

We followed this exact recipe for reproducibility.

Combined Dataset

Our calibration mix: 0xSero/glm47-reap-calibration-v2


📦 Related Models

| Model | Compression | Experts per Layer | Disk Size |
|-------|-------------|-------------------|-----------|
| MiniMax-M2.1-REAP-25 | 25% | ~120 | ~620GB |
| MiniMax-M2.1-REAP-30 | 30% | ~112 | ~580GB |
| MiniMax-M2.1-REAP-40 | 40% | ~96 | ~500GB |
| MiniMax-M2.1-REAP-50 | 50% | ~80 | ~420GB |
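The expert counts across these variants are mutually consistent with a base of ~160 routed experts per layer. That figure is inferred from the table itself, not from official MiniMax specs:

```python
BASE_EXPERTS = 160  # inferred: 120/0.75 = 112/0.70 = 96/0.60 = 80/0.50 = 160

for ratio, expected in [(0.25, 120), (0.30, 112), (0.40, 96), (0.50, 80)]:
    remaining = round(BASE_EXPERTS * (1 - ratio))
    assert remaining == expected
    print(f"{int(ratio * 100)}% pruned -> {remaining} experts/layer")
```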

🚀 Deployment

vLLM

```bash
vllm serve 0xSero/MiniMax-M2.1-REAP-50 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --dtype bfloat16
```
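Once served, vLLM exposes an OpenAI-compatible endpoint. Since this model is calibrated for function calling, a minimal tool-calling request body might look like the following; the `get_weather` tool schema is a made-up example, not part of the model:

```python
import json

request = {
    "model": "0xSero/MiniMax-M2.1-REAP-50",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

body = json.dumps(request)
# POST body to http://localhost:8000/v1/chat/completions
print(body[:60])
```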

🧩 Reproduction

REAP Pruning

```bash
#!/bin/bash
# MiniMax REAP - same calibration as GLM-4.7

export MODEL_DIR=/path/to/MiniMax-M2.1
export REAP_DATASET=0xSero/glm47-reap-calibration-mix
export REAP_SAMPLES_PER_CATEGORY=999
export REAP_MODEL_MAX_LENGTH=2048

python src/reap/prune.py \
    --model-name $MODEL_DIR \
    --dataset-name $REAP_DATASET \
    --compression-ratio 0.50 \
    --prune-method reap \
    --seed 42 \
    --distance_measure cosine
```

βš–οΈ License

Apache 2.0


🧾 Citation

```bibtex
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
```