# Emotion Classification with Fine-tuned DistilBERT
This repository provides a fine-tuned DistilBERT model for multi-class emotion classification. The model is trained on a six-category emotion dataset and achieves strong performance compared to traditional machine learning baselines.
> ⚠️ **Note:** This project was developed as part of a course assignment for educational and research purposes.
## Emotion Labels
- sadness
- joy
- love
- anger
- fear
- surprise
## Model Details

- Base model: `distilbert-base-uncased`
- Task: sequence classification
- Number of labels: 6
- Framework: Hugging Face Transformers
- Training method: full fine-tuning
## Performance (Validation Set)
- Accuracy: ~94.3%
- F1-score (weighted): ~94.4%
The fine-tuned Transformer significantly outperforms classical baselines such as Logistic Regression, SVM, Random Forest, and Gradient Boosting, all trained on frozen hidden-state features extracted from the base model.
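The frozen-feature baseline setup can be sketched as follows: hidden states from the un-finetuned base model serve as fixed input features for a classical classifier. To keep the sketch self-contained and runnable, synthetic vectors stand in for the real DistilBERT hidden states (which would have shape `[n_samples, 768]`); the data-generation details here are illustrative, not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for frozen [CLS] hidden states: one Gaussian
# cluster per emotion class in a 768-dim space (DistilBERT's width).
rng = np.random.default_rng(0)
n_samples, hidden_dim, n_labels = 600, 768, 6
centers = rng.normal(size=(n_labels, hidden_dim))
y = rng.integers(0, n_labels, size=n_samples)
X = centers[y] + rng.normal(scale=1.0, size=(n_samples, hidden_dim))

# Fit a classical baseline on the frozen features and score it
# with the same weighted-F1 metric reported above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted_f1 = f1_score(y_te, clf.predict(X_te), average="weighted")
print(f"weighted F1 on held-out split: {weighted_f1:.3f}")
```

Because the feature extractor is never updated, this baseline caps out wherever the frozen representations happen to separate the classes; full fine-tuning adapts those representations to the emotion task itself, which is where the accuracy gap comes from.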
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "zhangxiaoxin/distilbert-base-uncased-finetuned-emotion"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "zhangxiaoxin/distilbert-base-uncased-finetuned-emotion"
)
```
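A forward pass through the loaded model yields one logit per emotion class; turning those logits into a label is a small post-processing step. The helper below (name and hard-coded label order are illustrative; in practice, read the mapping from `model.config.id2label`) assumes label ids follow the order listed above.

```python
import math

# Assumed label order; verify against model.config.id2label.
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def logits_to_label(logits, labels=LABELS):
    """Softmax the raw logits and return (top label, its probability)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=probs.__getitem__)
    return labels[best], probs[best]

# With the tokenizer and model loaded as shown above:
#   inputs = tokenizer("I can't stop smiling today!", return_tensors="pt")
#   logits = model(**inputs).logits[0].tolist()
#   print(logits_to_label(logits))
```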
## License
This project is released under the MIT License.