🦥 Unsloth Training Scripts for HF Jobs

UV scripts for fine-tuning LLMs and VLMs using Unsloth on HF Jobs (on-demand cloud GPUs). UV handles dependency installation automatically, so you can run these scripts directly without any local setup.
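
Each script declares its dependencies as inline metadata (PEP 723), which is what lets uv resolve and install them on the fly. The header of such a script looks roughly like this (the package list here is illustrative, not the scripts' exact one):

# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "unsloth",
#     "datasets",
#     "trl",
# ]
# ///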

Prerequisites

  • A Hugging Face account with a write token (the scripts push trained models to the Hub; a quick check follows this list)
  • The HF CLI: curl -LsSf https://hf.co/cli/install.sh | bash
  • A dataset on the Hub (see format requirements below)
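
Before launching a paid job, it's worth confirming your token is picked up. A minimal check using the huggingface_hub client:

# Prints your username if the saved token (or HF_TOKEN) is valid
from huggingface_hub import whoami

print(whoami()["name"])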

Data Format

VLM Fine-tuning

Requires images and messages columns:

{
    "images": [<PIL.Image>],  # List of images
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"}
            ]
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "A golden retriever playing fetch in a park."}
            ]
        }
    ]
}

See davanstrien/iconclass-vlm-sft for a working dataset example, and davanstrien/iconclass-vlm-qwen3-best for a model trained with these scripts.
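
If you're assembling such a dataset yourself, here is a minimal sketch using the datasets library (the image path and repo name are placeholders):

import PIL.Image
from datasets import Dataset, Image, Sequence

example = {
    "images": [PIL.Image.open("dog.jpg")],  # placeholder image path
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"},
            ],
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "A golden retriever playing fetch in a park."},
            ],
        },
    ],
}

ds = Dataset.from_list([example])
# Store the images column with the datasets Image feature so it round-trips on the Hub
ds = ds.cast_column("images", Sequence(Image()))
ds.push_to_hub("your-username/your-vlm-dataset")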

Continued Pretraining

Any dataset with a text column:

{"text": "Your domain-specific text here..."}

Use --text-column if your column has a different name.
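
Pushing such a corpus to the Hub is straightforward; a sketch with placeholder names:

from datasets import Dataset

ds = Dataset.from_list([
    {"text": "Your domain-specific text here..."},
    {"text": "Another document from your corpus..."},
])
ds.push_to_hub("your-username/domain-corpus")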

Usage

View available options for any script:

uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py --help

VLM fine-tuning

hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
  --flavor a100-large --secrets HF_TOKEN --timeout 4h \
  -- --dataset your-username/your-vlm-dataset \
     --num-epochs 1 \
     --eval-split 0.2 \
     --output-repo your-username/my-vlm
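
Once the job finishes, the result lands in the --output-repo. One way to load it for inference, sketched under the assumption that the script pushes a LoRA adapter on top of the Qwen3-VL-8B instruct checkpoint (check the script's --help for the exact base model):

from peft import PeftModel
from transformers import AutoModelForImageTextToText, AutoProcessor

base = "Qwen/Qwen3-VL-8B-Instruct"  # assumed base checkpoint
model = AutoModelForImageTextToText.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "your-username/my-vlm")  # the --output-repo above
processor = AutoProcessor.from_pretrained(base)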

Continued pretraining

hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/continued-pretraining.py \
  --flavor a100-large --secrets HF_TOKEN \
  -- --dataset your-username/domain-corpus \
     --text-column content \
     --max-steps 1000 \
     --output-repo your-username/domain-llm
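
As with the VLM runs, the default upload is a LoRA adapter (pass --merge-model to push merged weights you can load directly). A generation sketch assuming the adapter layout and the Qwen3-0.6B base listed in the Scripts table below:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Qwen/Qwen3-0.6B"  # base model used by continued-pretraining.py
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "your-username/domain-llm")

inputs = tokenizer("In our domain,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))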

With Trackio monitoring

hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
  --flavor a100-large --secrets HF_TOKEN \
  -- --dataset your-username/dataset \
     --trackio-space your-username/trackio \
     --output-repo your-username/my-model

Scripts

Script                     Base Model    Task
sft-qwen3-vl.py            Qwen3-VL-8B   VLM fine-tuning
sft-gemma3-vlm.py          Gemma 3 4B    VLM fine-tuning (smaller)
continued-pretraining.py   Qwen3-0.6B    Domain adaptation

Common Options

Option                    Description                               Default
--dataset                 HF dataset ID                             required
--output-repo             Where to save trained model               required
--max-steps               Number of training steps                  500
--num-epochs              Train for N epochs instead of steps       -
--eval-split              Fraction for evaluation (e.g., 0.2)       0 (disabled)
--batch-size              Per-device batch size                     2
--gradient-accumulation   Gradient accumulation steps               4
--lora-r                  LoRA rank                                 16
--learning-rate           Learning rate                             2e-4
--merge-model             Upload merged model (not just adapter)    false
--trackio-space           HF Space for live monitoring              -
--run-name                Custom name for Trackio run               auto
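
Note that the effective batch size is --batch-size × --gradient-accumulation, so the defaults give 2 × 4 = 8 examples per optimizer step.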

Tips

  • Use --max-steps 10 to verify everything works before a full run
  • --eval-split 0.1 helps detect overfitting
  • Run hf jobs hardware to see GPU pricing (A100-large ~$2.50/hr, L40S ~$1.80/hr)
  • Add --streaming for very large datasets
  • First training step may take a few minutes (CUDA kernel compilation)
