---
title: Model Tools
emoji: 📚
colorFrom: pink
colorTo: yellow
sdk: static
pinned: false
---
Model Tools by Naphula
Tools to enhance LLM quantization and merging
graph_v18.py
- Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that are otherwise impossible without OOM (70B models, or large 7B merges with --cuda). More details here. Update: v18 is much faster than v4 and replaces the trial-and-error loop with an adaptive, math-based calculator (using GrimJim's measure.py logic); a rough sketch of the idea follows.
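The sketch below only illustrates the general idea of an adaptive calculator (sizing GPU work to the VRAM that is actually free, instead of retrying after OOM); the function, names, and example numbers are assumptions, not graph_v18.py's actual logic.

```python
# Illustrative sketch only -- not graph_v18.py itself.
import torch

def tensors_per_pass(tensor_bytes: int, n_models: int, safety: float = 0.8) -> int:
    free, _total = torch.cuda.mem_get_info()    # bytes currently free on the GPU
    budget = int(free * safety)                 # leave headroom for merge workspace
    per_tensor = tensor_bytes * (n_models + 1)  # inputs plus the merged output
    return max(1, budget // per_tensor)

# e.g. ~0.35 GB per bf16 tensor, merging 3 source models at once
print(tensors_per_pass(tensor_bytes=350_000_000, n_models=3))
```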
metadata_audit.py
- Checks multiple models within subdirectories for vocab_size or RoPE mismatches (useful for large merges). Calibrated for Mistral Nemo 12B by default; a minimal sketch of the check follows.
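A minimal sketch of the audit idea, assuming each model sits in its own subdirectory with a config.json; the reference values below are an assumed Mistral Nemo 12B baseline, not necessarily what the script ships with.

```python
# Hedged sketch of the audit idea (not metadata_audit.py itself): compare
# vocab_size and rope_theta across every config.json one directory down.
import json
from pathlib import Path

REFERENCE = {"vocab_size": 131072, "rope_theta": 1000000.0}  # assumed baseline

for cfg_path in sorted(Path(".").glob("*/config.json")):
    cfg = json.loads(cfg_path.read_text())
    for key, expected in REFERENCE.items():
        actual = cfg.get(key)
        if actual != expected:
            print(f"{cfg_path.parent.name}: {key}={actual} (expected {expected})")
```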
config.py
- Simply replace line 13 to allow for custom filepath strings within parameter settings:
  - BEFORE: `ScalarOrGradient: TypeAlias = Union[float, List[float]]`
  - AFTER: `ScalarOrGradient: TypeAlias = Union[float, List[float], str, bool]`
fp32_to_fp16.py
- Converts FP32 safetensors to FP16; a minimal sketch of the conversion follows.
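A minimal sketch of what such a conversion looks like for a single file, assuming the safetensors and torch packages; the filenames are placeholders.

```python
# Sketch of a single-file FP32 -> FP16 conversion; IN_PATH/OUT_PATH are placeholders.
import torch
from safetensors.torch import load_file, save_file

IN_PATH = "model-fp32.safetensors"
OUT_PATH = "model-fp16.safetensors"

tensors = load_file(IN_PATH)  # dict of tensor name -> torch.Tensor
halved = {name: t.to(torch.float16) if t.dtype == torch.float32 else t
          for name, t in tensors.items()}
save_file(halved, OUT_PATH, metadata={"format": "pt"})
```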
textonly_ripper_v2.py
- Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at textonly_ripper.md
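The general idea, sketched for one shard; the vision tensor prefixes and filenames below are assumptions and vary by architecture, so see textonly_ripper.md for the real procedure.

```python
# Hedged sketch of the text-only idea for a single shard; not textonly_ripper_v2.py itself.
from pathlib import Path
from safetensors.torch import load_file, save_file

VISION_PREFIXES = ("vision_tower.", "multi_modal_projector.")  # assumed tensor name prefixes

Path("text_only").mkdir(exist_ok=True)
tensors = load_file("model-00001-of-00004.safetensors")
text_only = {name: t for name, t in tensors.items()
             if not name.startswith(VISION_PREFIXES)}
save_file(text_only, "text_only/model-00001-of-00004.safetensors",
          metadata={"format": "pt"})
```

A real conversion also has to update model.safetensors.index.json and config.json to match the dropped tensors.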
vocab_resizer.py
- Converts models with larger vocab_sizes to a standard size (default 131072, Mistral 24B) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.
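A hedged sketch of the resize step using transformers; the model path is a placeholder, and 131072 matches the default mentioned above.

```python
# Sketch of padding/truncating the vocab to a target size with transformers;
# "path/to/model" is a placeholder, and this is not vocab_resizer.py itself.
import torch
from transformers import AutoModelForCausalLM

TARGET_VOCAB = 131072  # default noted above (Mistral 24B)

model = AutoModelForCausalLM.from_pretrained("path/to/model", torch_dtype=torch.bfloat16)
model.resize_token_embeddings(TARGET_VOCAB)  # resizes embed_tokens (and lm_head if untied)
model.save_pretrained("fixed", safe_serialization=True)
```

As the note above says, only the weights are touched here, so tokenizer.model still has to be copied into /fixed/ by hand.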
lm_head_remover.py
- Loads a "fat" 18.9 GB model (default Gemma 9B), forces it to tie the weights (deduplicating the lm_head), and re-saves it. This drops the file size to ~17.2 GB and makes it compatible with the others; see the sketch below.
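A minimal sketch of the weight-tying step with transformers; the checkpoint id and output directory are stand-ins, not the script's defaults.

```python
# Hedged sketch of tying lm_head to the input embeddings and re-saving.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b", torch_dtype=torch.bfloat16)
model.config.tie_word_embeddings = True
model.tie_weights()  # lm_head now shares storage with the embedding matrix
model.save_pretrained("gemma-9b-tied", safe_serialization=True)
```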
model_index_json_generator.py
- Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size.
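A minimal sketch of rebuilding the index from the shards already on disk; approximating total_size from file sizes is an assumption, not necessarily what the generator does.

```python
# Sketch of regenerating model.safetensors.index.json from existing shards.
import glob
import json
import os
from safetensors import safe_open

weight_map, total_size = {}, 0
for shard in sorted(glob.glob("model-*-of-*.safetensors")):
    total_size += os.path.getsize(shard)  # approximation of metadata.total_size
    with safe_open(shard, framework="pt") as f:
        for name in f.keys():
            weight_map[name] = os.path.basename(shard)

index = {"metadata": {"total_size": total_size}, "weight_map": weight_map}
with open("model.safetensors.index.json", "w") as fh:
    json.dump(index, fh, indent=2)
```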
folder_content_combiner_anyfiles.py
- Combines all files in the script's current directory into a single output file, sorted alphabetically.
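The idea in a few lines, assuming text-like files and a combined.txt output name (both assumptions, not the script's actual defaults).

```python
# Simple sketch: concatenate every regular file in the current directory,
# alphabetically, into combined.txt (skipping the output file itself).
from pathlib import Path

OUT = Path("combined.txt")
with OUT.open("w", encoding="utf-8") as out:
    for path in sorted(Path(".").iterdir()):
        if path.is_file() and path.name != OUT.name:
            out.write(f"===== {path.name} =====\n")
            out.write(path.read_text(encoding="utf-8", errors="replace"))
            out.write("\n")
```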
GGUF Repo Suite
- Create and quantize Hugging Face models
Failed Experiment gguf_to_safetensors_v2.py
- Unsuccessful attempt by Gemini to patch the gguf_to_safetensors script; the missing JSON files are hard to reconstruct. Also see safetensors_meta_ripper_v1.py and tokenizer_ripper_v1.py.
Markdown Viewer
- Portable Offline Markdown Viewer
Markdown to SMF
- Converts a Markdown string to an SMF-compatible BBCode string. Not perfect; it sometimes misses double bold tags.
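A toy illustration of the conversion (not the Space's code); only bold, italics, and inline code are handled, and the SMF tag names are assumptions.

```python
# Toy Markdown -> SMF BBCode converter; handles only a few inline styles.
import re

def md_to_smf(text: str) -> str:
    text = re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", text)  # **bold**
    text = re.sub(r"\*(.+?)\*", r"[i]\1[/i]", text)       # *italic*
    text = re.sub(r"`([^`]+)`", r"[tt]\1[/tt]", text)     # `inline code`
    return text

print(md_to_smf("**bold**, *italic*, and `code`"))
```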
Quant Clone
- A tool that recreates UD quants such as Q8_K_XL. Examples: Mistral 24B, Mistral 7B.
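A hedged sketch of the first step in "cloning" a quant: read the per-tensor quantization types out of an existing GGUF with the gguf package so the same layout can be reproduced. The filename is a placeholder.

```python
# Sketch of listing per-tensor quant types from an existing GGUF.
from gguf import GGUFReader

reader = GGUFReader("model-UD-Q8_K_XL.gguf")  # placeholder path
for tensor in reader.tensors:
    print(f"{tensor.name}: {tensor.tensor_type.name}")
```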
Text Analysis Suite v1.5
- Analyze text files with advanced metrics