# TX-12G

*Enhanced reasoning and code generation. Runs on 12 GB RAM.*

TX-12G is TARX's mid-tier model, offering significantly stronger reasoning and code generation than TX-8G while remaining accessible to users with 12 GB or more of RAM.
## Model Details
| Property | Value |
|---|---|
| Parameters | 12B |
| Quantization | Optimized mixed precision |
| RAM Required | 12 GB minimum |
| GPU VRAM | 8 GB+ recommended |
| Context Length | 16,384 tokens |
| License | Apache 2.0 |
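As a rough sanity check on the 12 GB figure (our back-of-envelope estimate, not an official number), the weights alone fit comfortably if the average precision is in the range of the Q6_K GGUF offered below:

```python
# Back-of-envelope RAM estimate. Assumptions (ours, not from the card):
# ~6.5 bits/weight, roughly what a Q6_K quantization averages, plus a
# ~1.5 GB allowance for a 16K-token KV cache and runtime overhead.
params = 12e9
bits_per_weight = 6.5
weights_gb = params * bits_per_weight / 8 / 1e9   # ≈ 9.8 GB of weights
overhead_gb = 1.5
print(f"~{weights_gb + overhead_gb:.1f} GB total")  # ≈ 11.2 GB, under 12 GB
```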
## Capabilities

- ✅ Everything TX-8G does, plus:
- ✅ Complex multi-step reasoning
- ✅ Advanced code generation & debugging
- ✅ Nuanced writing with style matching
- ✅ Technical documentation
- ✅ Data analysis & interpretation
## When to Use TX-12G vs TX-8G

| Use Case | TX-8G | TX-12G |
|---|---|---|
| Quick questions | ✅ | Overkill |
| Email drafting | ✅ | ✅ |
| Simple code | ✅ | ✅ |
| Complex debugging | ⚠️ | ✅ |
| Multi-file refactoring | ❌ | ✅ |
| Technical writing | ⚠️ | ✅ |
| Research synthesis | ⚠️ | ✅ |
## Usage

### With TARX Desktop

Settings → Model → TX-12G

Model downloads automatically on first use (~8 GB download).
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Weights are fetched from the Hugging Face Hub on first load
model = AutoModelForCausalLM.from_pretrained(
    "Tarxxxxxx/TX-12G",
    device_map="auto",   # spread layers across available GPU/CPU memory
    torch_dtype="auto",  # keep the checkpoint's native precision
)
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-12G")
```
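A minimal generation call using the objects loaded above (the plain-string prompt is an assumption; the card does not document a chat template):

```python
prompt = "Debug this Python function:\n\ndef add(a, b):\n    return a - b"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```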
### With llama.cpp
```bash
wget https://huggingface.co/Tarxxxxxx/TX-12G/resolve/main/tx-12g.Q6_K.gguf
./main -m tx-12g.Q6_K.gguf -p "Debug this Python function:" -n 512
```
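The same GGUF can also be driven from Python through the llama-cpp-python bindings; the sketch below is our example, not an official TARX integration, with `n_ctx` set to the 16,384-token context listed above:

```python
from llama_cpp import Llama

# Load the quantized GGUF downloaded above (assumption: the file sits in
# the working directory); n_ctx matches the model's 16,384-token context.
llm = Llama(model_path="tx-12g.Q6_K.gguf", n_ctx=16384)

result = llm("Debug this Python function:", max_tokens=512)
print(result["choices"][0]["text"])
```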
## Hardware Requirements

| Hardware | Performance |
|---|---|
| Apple M1 Pro/Max (16 GB+) | ⭐⭐⭐⭐⭐ Excellent |
| Apple M2/M3 (16 GB+) | ⭐⭐⭐⭐⭐ Excellent |
| NVIDIA RTX 3080+ | ⭐⭐⭐⭐⭐ Excellent |
| Intel i7 + 32 GB RAM | ⭐⭐⭐⭐ Good |
| AMD Ryzen 7 + 32 GB | ⭐⭐⭐⭐ Good |
Built by TARX | tarx.com