# Humigence Command - Ready to Use! 🚀

## ✅ **Yes, you can launch this with `humigence`!**

The Humigence training pipeline has been refactored and is now ready to use via the `humigence` command.

## 🎯 **How to Use**

### **Launch the Humigence CLI**

```bash
humigence
```

### **What You'll See**

```
────────────────
Humigence — Your AI. Your pipeline. Zero code.
────────────────
A complete MLOps suite built for makers, teams, and enterprises.

Options:
1. Supervised Fine-Tuning ✅
2. RAG Implementation (coming soon)
3. EnterpriseGPT (coming soon)
4. Batch Inference (coming soon)
5. Context Length (coming soon)
6. Exit

Select an option:
```

### **Training Options**

1. **Select "1. Supervised Fine-Tuning"**
2. **Choose Setup Mode**: Basic or Advanced
3. **Select Model**: TinyLlama, Qwen, Phi-2, etc.
4. **Choose Training Recipe**: LoRA, QLoRA, etc.
5. **Select Dataset**: your available datasets
6. **Choose Training Mode**: Multi-GPU or Single-GPU
7. **Confirm Configuration**: review and start training

## 🚀 **What's New (Accelerate Refactor)**

### **Clean Architecture**

- **Hugging Face Accelerate**: stable DDP training
- **Single-GPU Evaluation**: always runs on `cuda:0`
- **No More NCCL Errors**: robust distributed training
- **Clean Code**: over-engineering removed

### **Key Features**

- ✅ **Multi-GPU Training**: supports 2× RTX 5090s
- ✅ **Single-GPU Fallback**: automatic fallback if needed
- ✅ **LoRA/QLoRA Support**: parameter-efficient fine-tuning
- ✅ **Structured Logging**: clean, readable output
- ✅ **Error Handling**: robust error management

## 📋 **Training Modes**

### **Multi-GPU Training (Recommended)**

- Uses `accelerate launch` with 2× RTX 5090s
- Stable DDP training with the NCCL backend
- Automatic device management
- Mixed precision (bf16/fp16)

### **Single-GPU Training**

- Uses `python train.py` on a single GPU
- Fallback option if multi-GPU training fails
- Same functionality on a single device

## 🎯 **Usage Examples**

### **Interactive CLI**

```bash
humigence
# Select option 1
# Choose Multi-GPU Training
# Follow the configuration wizard
```

### **Direct Training (Advanced)**

```bash
# Multi-GPU
accelerate launch --config_file accelerate_config.yaml train.py --config_file config.json

# Single-GPU
python train.py --config_file config.json
```

## 🔧 **Technical Details**

### **Files Created/Updated**

- **`train.py`** - clean Accelerate-based training script
- **`accelerate_config.yaml`** - multi-GPU configuration
- **`cli/main.py`** - updated CLI integration
- **`humigence`** - command-line entry point

### **Dependencies**

- **Hugging Face Accelerate** - distributed training
- **Transformers** - model loading and training
- **PEFT** - LoRA/QLoRA support
- **Rich** - CLI interface

## 🎉 **Ready to Use!**

The Humigence training pipeline is now:

- ✅ **Refactored** with Hugging Face Accelerate
- ✅ **Tested** and working correctly
- ✅ **Installed** as the `humigence` command
- ✅ **Ready** for production use

**Just run `humigence` and start training!** 🚀

## 📊 **What You Get**

1. **Clean CLI Interface** - easy to use
2. **Stable Multi-GPU Training** - no more NCCL errors
3. **Single-GPU Evaluation** - no device mismatches
4. **Structured Reporting** - clear training summaries
5. **Error Handling** - robust error management
6. **Production Ready** - works with your 2× RTX 5090s

**The refactored Humigence pipeline is ready for your AI training needs!** 🎯
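## 📎 **Appendix: Config Sketch**

The multi-GPU path reads `accelerate_config.yaml`. The repository's actual file is not shown here, but a minimal Hugging Face Accelerate config for two local GPUs with bf16 mixed precision might look like this (values are illustrative assumptions, not the shipped config):

```yaml
# Illustrative accelerate config for 2 local GPUs (not the repo's actual file)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
machine_rank: 0
num_processes: 2        # one process per GPU
mixed_precision: bf16
```

You can also generate a config interactively with `accelerate config`, which writes the same kind of file for you.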
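The single-GPU fallback described above can be sketched as a small launcher script. This is a hypothetical illustration of the idea (the CLI's real fallback logic lives in `cli/main.py` and is not reproduced here): count the visible GPUs, then pick the multi-GPU or single-GPU command.

```shell
#!/usr/bin/env sh
# Hypothetical launcher sketch: choose multi-GPU or single-GPU training
# based on how many GPUs nvidia-smi reports. Not the CLI's actual logic.
NUM_GPUS=$(nvidia-smi -L 2>/dev/null | wc -l)

if [ "$NUM_GPUS" -ge 2 ]; then
  CMD="accelerate launch --config_file accelerate_config.yaml train.py --config_file config.json"
else
  CMD="python train.py --config_file config.json"
fi

echo "Launching: $CMD"
# eval "$CMD"   # uncomment to actually start training
```

Keeping the two commands in one wrapper means a failed multi-GPU environment degrades to the single-GPU path instead of aborting the run.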