TX-12G

Enhanced reasoning and code generation. Runs on 12GB RAM.

TX-12G is TARX's mid-tier model, offering stronger reasoning and code generation than TX-8G while remaining accessible to users with 12 GB+ of RAM.

Model Details

Property | Value
Parameters | 12B
Quantization | Optimized mixed precision
RAM Required | 12 GB minimum
GPU VRAM | 8 GB+ recommended
Context Length | 16,384 tokens
License | Apache 2.0
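
The 12 GB floor is easy to sanity-check from the parameter count. The sketch below is a back-of-envelope estimate only: it assumes roughly 6.6 bits per weight (in line with the Q6_K GGUF linked further down) plus a couple of gigabytes of KV-cache and runtime overhead, neither of which is an official figure.

# Rough RAM estimate; bits-per-weight and overhead are assumptions, not published specs
params = 12e9             # 12B parameters
bits_per_weight = 6.6     # approximate footprint of a Q6_K quantization
weights_gb = params * bits_per_weight / 8 / 1e9
overhead_gb = 2.0         # assumed KV cache + runtime overhead
print(f"~{weights_gb:.1f} GB weights + ~{overhead_gb:.0f} GB overhead ≈ {weights_gb + overhead_gb:.0f} GB")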

Capabilities

  • βœ… Everything TX-8G does, plus:
  • βœ… Complex multi-step reasoning
  • βœ… Advanced code generation & debugging
  • βœ… Nuanced writing with style matching
  • βœ… Technical documentation
  • βœ… Data analysis & interpretation

When to Use TX-12G vs TX-8G

Use Case | TX-8G | TX-12G
Quick questions | βœ… | Overkill
Email drafting | βœ… | βœ…
Simple code | βœ… | βœ…
Complex debugging | ⚠️ | βœ…
Multi-file refactoring | ❌ | βœ…
Technical writing | ⚠️ | βœ…
Research synthesis | ⚠️ | βœ…

Usage

With TARX Desktop

Settings β†’ Model β†’ TX-12G

The model downloads automatically on first use (~8 GB download).

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Tarxxxxxx/TX-12G",
    device_map="auto",
    torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-12G")
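
From there, generation uses the standard Transformers generate() API. The sketch below is illustrative only: the prompt and sampling settings are not tuned recommendations, and if the repo ships a chat template you may prefer tokenizer.apply_chat_template instead.

# Minimal generation sketch with the model and tokenizer loaded above
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))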

With llama.cpp

wget https://huggingface.co/Tarxxxxxx/TX-12G/resolve/main/tx-12g.Q6_K.gguf
./main -m tx-12g.Q6_K.gguf -p "Debug this Python function:" -n 512
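
The same GGUF file can also be driven from Python via the llama-cpp-python bindings (a separate `pip install llama-cpp-python`); a minimal sketch, assuming the file sits in the working directory:

from llama_cpp import Llama

# Load the Q6_K GGUF downloaded above; n_ctx matches the 16,384-token context window
llm = Llama(model_path="tx-12g.Q6_K.gguf", n_ctx=16384)

out = llm("Debug this Python function:", max_tokens=512)
print(out["choices"][0]["text"])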

Hardware Requirements

Hardware | Performance
Apple M1 Pro/Max (16GB+) | ⭐⭐⭐⭐⭐ Excellent
Apple M2/M3 (16GB+) | ⭐⭐⭐⭐⭐ Excellent
NVIDIA RTX 3080+ | ⭐⭐⭐⭐⭐ Excellent
Intel i7 + 32GB RAM | ⭐⭐⭐⭐ Good
AMD Ryzen 7 + 32GB RAM | ⭐⭐⭐⭐ Good

Built by TARX | tarx.com
