---
title: Model Tools
emoji: 📚
colorFrom: pink
colorTo: yellow
sdk: static
pinned: false
---

# Model Tools by Naphula

Tools to enhance LLM quantization and merging. Merge and audit large language models with low VRAM.

# [graph_v18.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18.py)
- Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that would otherwise hit OOM (70B models, or large 7B merges with `--cuda`). [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)
- Update: v18 is much faster than v4 and replaces the trial-and-error loop with an adaptive math-based calculator (using GrimJim's `measure.py` logic).

# config.py
- Replace line 13 to allow custom filepath strings within parameter settings:
  - BEFORE: `ScalarOrGradient: TypeAlias = Union[float, List[float]]`
  - AFTER: `ScalarOrGradient: TypeAlias = Union[float, List[float], str, bool]`

# [enable_fix_mistral_regex_true.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/enable_fix_mistral_regex_true.md)
- Merge models with extreme tokenizer incompatibility. Requires modifying the `tokenizer` section of `mergekit.yaml` and adding `--fix-mistral-regex` to your merge commands. (Note: do not use `token_surgeon.py`, `gen_id_patcher.py`, or `vocab_id_patcher.py` with this; they are now obsolete.) Configured for MN 12B by default. Follow the steps in the guide to modify these scripts:
  - `mergekit/merge.py`
  - `mergekit/options.py`
  - `mergekit/scripts/moe.py`
  - `mergekit/scripts/tokensurgeon.py`
  - `mergekit/tokenizer/build.py`

# [audit_della.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/audit_della.py)
- Audit the compatibility of donor models for `Della` merges before merging.
  See: [example chart Asmodeus](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Asmodeus_Audit.png), [example log Asmodeus](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Asmodeus_Audit.log), [example chart Slimaki](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Slimaki_Audit.png), [example log Slimaki](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Slimaki_Audit.log)

# [audit_karcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/audit_karcher.py)
- Audit the compatibility of donor models for `Karcher` merges before merging. See: [example chart Goetia](https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/nSuSM6v_BQBP4tAWK9rGQ.png)

# [generalized_task_arithmetic.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/generalized_task_arithmetic.py)
- Live audit reports of **actual contribution magnitude** on a per-layer basis for `Della` merges. See: [example audit Asmodeus](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Asmodeus_Live_Audit.png), [example audit Slimaki](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Slimaki_Live_Audit.png)

# [model_stock.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/model_stock.py)
- Live audit reports of **actual contribution magnitude** on a per-layer basis for `Model_Stock` merges.

# [metadata_audit.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/metadata_audit.py)
- Checks multiple models within subdirectories for vocab or rope mismatches (useful for large merges). Calibrated for Mistral Nemo 12B by default.

# llama moe
- Add support for Llama Mixture of Experts.
  If you want to merge custom Llama MoE models, add these scripts to your mergekit environment:
  - [mergekit-main\mergekit\architecture\moe_defs.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/moe_defs.py)
  - [mergekit-main\mergekit\__init__.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/__init__.py)
  - [mergekit-main\mergekit\moe\llama.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/llama.py)
  - Then assign `num_experts_per_tok` in `config.json` (or the `config.yaml`).

# [tokensurgeon.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokensurgeon.py)
- Uses adaptive VRAM sizing from GrimJim's `measure.py`, like `graph_v18`, to prevent OOM. Use the recommended [batch file](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fix_tokenizers.bat) or adapt it into a shell script. This avoids "Potemkin village" fake patches like `gen_id_patcher` and `vocab_id_patcher`. For this to work properly, you must also run [shield_embeddings.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/shield_embeddings.py) and [shield_norms.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/shield_norms.py) on any merges made from models patched with tokensurgeon.

# [tokeninspector.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokeninspector.py)
- Audit your tokensurgeon results.

# [arcee_fusion_salience_scanner.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/arcee_fusion_salience_scanner.py)
- Scan the salience % of your arcee_fusion merges. The default `tukey_fence` value is 1.5, which results in 12.5% salience, but [this can be adjusted (see guide here)](modify_arcee_fusion_tukey_fence_parameter.md).

# [eos_scanner.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/eos_scanner.py)
- Updated! This tool scans the tokenizer JSONs to detect any mismatches with EOS tokens, which cause early-termination bugs.
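The actual code of `eos_scanner.py` is not reproduced here, but the core idea of an EOS mismatch check can be sketched with only the standard library: read the EOS token id each config file claims and flag any disagreement (the function name and return shape below are invented for this sketch):

```python
import json
from pathlib import Path

def scan_eos(model_dir: str) -> dict:
    """Report the EOS token id each config file claims, plus a mismatch flag."""
    sources = ("config.json", "generation_config.json")
    found = {}
    for name in sources:
        path = Path(model_dir) / name
        if path.is_file():
            data = json.loads(path.read_text(encoding="utf-8"))
            if "eos_token_id" in data:
                found[name] = data["eos_token_id"]
    # eos_token_id may be an int or a list of ints; normalize for comparison
    ids = {tuple(v) if isinstance(v, list) else v for v in found.values()}
    found["mismatch"] = len(ids) > 1
    return found
```

A folder whose `config.json` says `"eos_token_id": 2` but whose `generation_config.json` says something else would come back with `"mismatch": True`, which is exactly the situation that produces early-termination bugs after a merge.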
  You can then use [gen_id_patcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gen_id_patcher.py) and [vocab_id_patcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_id_patcher.py), or [chatml_to_mistral.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/chatml_to_mistral.py), to patch missing `generation_config.json` files for the EOS token. See [this post](https://huggingface.co/Naphula/Q0_Bench/discussions/1?not-for-all-audiences=true#6987717c762f0a45f672e250) as well as the [EOS Scanner ReadMe](https://huggingface.co/spaces/Naphula/model_tools/blob/main/eos_scanner_readme.md) for more info.

# [weight_counter.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/weight_counter.py)
- Counts the number of models in a YAML and adds up the total weight values. Useful for large della/ties merges.

# [fp32_to_bf16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_bf16.py)
- Converts FP32 safetensors to BF16.

# [fp32_to_fp16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_fp16.py)
- Converts FP32 safetensors to FP16.

# [pytorch_to_safetensors.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/pytorch_to_safetensors.py)
- Converts PyTorch `.bin` files to safetensors format.

# [textonly_ripper_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper_v2.py)
- Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at [textonly_ripper.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper.md)

# [json_reverter.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/json_reverter.py)
- Reverts changes to all JSON files made by `gen_id_patcher.py`, `vocab_id_patcher.py`, or other scripts within a specified root folder. It re-downloads the source files from the HF repo.
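The tallying idea behind `weight_counter.py` (described above) can be sketched as follows. This is not the script's actual implementation; it uses a plain regex over the YAML text so the sketch has no dependencies, and the function name and example config are invented:

```python
import re

def count_models_and_weight(yaml_text: str) -> tuple:
    """Count `model:` entries and sum `weight:` values in a mergekit-style YAML."""
    models = re.findall(r"^\s*-?\s*model:\s*\S+", yaml_text, flags=re.MULTILINE)
    weights = [float(w) for w in
               re.findall(r"^\s*weight:\s*([0-9.]+)", yaml_text, flags=re.MULTILINE)]
    return len(models), sum(weights)

# Hypothetical della config for illustration.
example = """\
models:
  - model: author/model-a
    parameters:
      weight: 0.4
  - model: author/model-b
    parameters:
      weight: 0.7
merge_method: della
"""
n, total = count_models_and_weight(example)  # n == 2, total ~ 1.1
```

Seeing the total at a glance (here roughly 1.1) is what makes this useful for sanity-checking large della/ties merges, where per-model weights are easy to lose track of.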
# [vocab_resizer.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_resizer.py)
- Converts models with larger vocab_sizes to a standard size (default 131072, Mistral 24B) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.

# [lm_head_remover.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/lm_head_remover.py)
- This script will load a "fat" 18.9GB model (default Gemma 9B), force it to tie the weights (deduplicating the `lm_head`), and re-save it. This drops the file size to ~17.2GB and makes it compatible with the others.

# [model_index_json_generator.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator.py)
- Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size. [Single-tensor variant here.](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator_SingleTensor.py)

# [folder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder_content_combiner_anyfiles.py)
- Combines all files in the script's current directory into a single output file, sorted alphabetically.

# [folder+subfolder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder+subfolder_content_combiner_anyfiles.py)
- Combines all files in the script's directory, including all files within subdirectories (excluding blacklisted formats), into a single output file, sorted alphabetically.

# [GGUF Repo Suite](https://huggingface.co/spaces/Naphula/gguf-repo-suite)
- Create and quantize Hugging Face models.

# [Markdown Viewer](https://huggingface.co/spaces/Naphula/Portable_Offline_Markdown_Viewer)
- Portable Offline Markdown Viewer.

# [Markdown to SMF](https://huggingface.co/spaces/Naphula/model_tools/blob/main/md_to_smf.py)
- Converts a Markdown string to an SMF-compatible BBCode string.
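A Markdown-to-BBCode conversion like the one `md_to_smf.py` performs boils down to pattern rewrites; the toy rule below (the function name is invented, and this is not the script's real code) shows the bold case, and also why regex-based converters are fragile around adjacent or nested markers:

```python
import re

def bold_md_to_bbcode(text: str) -> str:
    """Convert **bold** spans to [b]...[/b] (one rule of an md-to-BBCode pass).

    Non-greedy matching handles separate bold spans correctly, but adjacent
    or doubled-up markers are exactly the inputs that trip such converters.
    """
    return re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", text)

print(bold_md_to_bbcode("a **bold** word"))  # a [b]bold[/b] word
```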
  Not perfect; it sometimes misses double bold tags.

# [Quant Clone](https://github.com/electroglyph/quant_clone)
- A tool that lets you recreate UD quants such as Q8_K_XL. Examples: [Mistral 24B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Mistral-Small-3.2-24B-Instruct-2506-UD-Q8_K_XL_UD.txt), [Mistral 7B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Warlock-7B-v2-Q8_K_XL.txt)

# [Text Analysis Suite v1.5](https://huggingface.co/spaces/Naphula/TAS_1.5)
- Analyze text files with advanced metrics.

---

# Not Functional

# [Failed Experiment gguf_to_safetensors_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gguf_to_safetensors_v2.py)
- Unsuccessful attempt by Gemini to patch the gguf_to_safetensors script. Missing JSON files are hard to reconstruct. Also see [safetensors_meta_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/safetensors_meta_ripper_v1.py) and [tokenizer_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokenizer_ripper_v1.py)

# [IQ5_NL.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/IQ5_NL.md)
- Note: not functional yet. Includes the code needed to quantize IQ5_NL GGUFs using block size 32.