Alif-1.0-8B Emergency Response (Urdu)
Fine-tuned version of Alif-1.0-8B-Instruct for Urdu emergency call handling.
Model Description
This model is an Urdu conversational language model fine-tuned to understand and respond to emergency call scenarios.
It is adapted from Alif-1.0-8B-Instruct using QLoRA on a curated Urdu emergency calls dataset (~5k samples).
- Base model: large-traversaal/Alif-1.0-8B-Instruct
- Language: Urdu
- Task: Emergency call understanding and response
- Fine-tuning: QLoRA (4-bit NF4, LoRA adapters)
Intended Use
Intended:
- Research on Urdu emergency dialogue systems
- Prototyping and academic experimentation
- Conversational AI demos
Not intended:
- Real-world emergency response
- Medical, legal, or safety-critical decision-making
- Autonomous deployment without human supervision
Training Data
Fine-tuned on the Urdu Emergency Calls Dataset:
https://huggingface.co/datasets/hamza-amin/urdu-emergency-calls
The dataset contains instruction-style emergency conversations in Urdu.
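The dataset card does not spell out its record schema here, so the field names below are illustrative only; an instruction-style conversation record might look roughly like this:

```python
# Hypothetical record shape; field names are illustrative, not confirmed
# against the actual urdu-emergency-calls schema.
record = {
    "instruction": "میں ملتان کے شاہ رکن عالم کالونی میں ہوں، ایک شخص کنویں میں گر گیا ہے۔",  # caller's Urdu message
    "output": "کیا آپ صحیح مقام بتا سکتے ہیں؟",  # dispatcher-style Urdu response
}
```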
Training Details
- LoRA rank: 8
- LoRA alpha: 16
- Target modules: q_proj, v_proj
- Epochs: 1
- Learning rate: 2e-4
- Precision: bf16
- Optimizer: paged_adamw_8bit
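The hyperparameters above map onto a standard peft/transformers setup roughly as follows. This is a sketch, not the published training script; the LoRA dropout value is an assumption, since it is not reported.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter configuration matching the reported hyperparameters.
lora_config = LoraConfig(
    r=8,                                   # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,                     # assumed; not reported above
    task_type="CAUSAL_LM",
)

# One bf16 epoch with the paged 8-bit AdamW optimizer.
training_args = TrainingArguments(
    output_dir="alif-emergency-finetuned",
    num_train_epochs=1,
    learning_rate=2e-4,
    bf16=True,
    optim="paged_adamw_8bit",
)
```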
Evaluation
- Validation loss: ~0.39
- Perplexity: ~1.5
These results indicate stable training and effective domain adaptation for a small, task-specific dataset.
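The two metrics are consistent with each other: perplexity is the exponential of the mean cross-entropy loss.

```python
import math

val_loss = 0.39                    # reported validation loss (mean cross-entropy, in nats)
perplexity = math.exp(val_loss)    # perplexity = exp(loss)
print(round(perplexity, 2))        # 1.48, i.e. the reported ~1.5
```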
Limitations
- Trained on a limited, curated dataset
- May fail on ambiguous or unseen emergency scenarios
- Not suitable for real emergency use without human oversight
Quick Start
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model_name = "large-traversaal/Alif-1.0-8B-Instruct"
adapter_name = "hamza-amin/alif-emergency-finetuned"

# Load the tokenizer from the base model.
tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)

# 4-bit NF4 quantization, matching the QLoRA fine-tuning setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)

# Load the quantized base model, then attach the fine-tuned LoRA adapters.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, adapter_name)
model.eval()  # inference mode
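Once loaded, the model can be queried along the following lines. The base model's exact chat template is not documented here, so the Alpaca-style `build_prompt` format below is an assumption to verify before use.

```python
def build_prompt(query: str) -> str:
    # Assumed Alpaca-style instruction format; check against the base
    # model's actual chat template before relying on it.
    return f"### Instruction:\n{query}\n\n### Response:\n"

def respond(model, tokenizer, query: str, max_new_tokens: int = 128) -> str:
    # Tokenize the prompt, generate, and decode only the new tokens.
    inputs = tokenizer(build_prompt(query), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```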
Example Outputs
Query: میں ملتان کے شاہ رکن عالم کالونی میں ہوں، ایک شخص کنویں میں گر گیا ہے۔
(English: "I am in Shah Rukn-e-Alam Colony, Multan; a person has fallen into a well.")
Response: کیا آپ صحیح مقام بتا سکتے ہیں؟
(English: "Can you tell me the exact location?")
Query: جی ہاں، میں نے بس حادثے میں زخمیوں کی مدد شروع کر دی ہے۔
(English: "Yes, I have started helping the injured in the bus accident.")
Response: ایمبولینس روانہ ہو چکی ہے، آپ زخمیوں کا خیال رکھیں۔
(English: "An ambulance has been dispatched; please look after the injured.")
Citation
@misc{alif-emergency-urdu,
author = {Hamza Amin},
title = {Alif Emergency Response Model for Urdu},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/hamza-amin/alif-emergency-finetuned}
}
Acknowledgments
- Base model: Alif-1.0-8B-Instruct
- Dataset: urdu-emergency-calls
- Training platform: Kaggle