Emotion Classification with Fine-tuned DistilBERT

This repository provides a fine-tuned DistilBERT model for multi-class emotion classification. The model is trained on a six-category emotion dataset and achieves strong performance compared to traditional machine learning baselines.

⚠️ Note: This project was developed as part of a course assignment for educational and research purposes.

🔖 Emotion Labels

  • sadness
  • joy
  • love
  • anger
  • fear
  • surprise
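
These six labels match the common six-category emotion taxonomy; the integer ids below are a plausible ordering, but the authoritative mapping is the `id2label` entry in the model's own config (`model.config.id2label`). A minimal sketch of such a mapping:

```python
# Hypothetical id <-> label mapping; verify against model.config.id2label,
# since the checkpoint's config is the source of truth.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]
id2label = {i: name for i, name in enumerate(EMOTIONS)}
label2id = {name: i for name, i in zip(EMOTIONS, range(len(EMOTIONS)))}

print(id2label[1])  # "joy" under this assumed ordering
```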

🚀 Model Details

  • Base model: distilbert-base-uncased
  • Task: Sequence Classification
  • Number of labels: 6
  • Framework: Hugging Face Transformers
  • Training method: Full fine-tuning

📊 Performance (Validation Set)

  • Accuracy: ~94.3%
  • F1-score (weighted): ~94.4%

The fine-tuned Transformer substantially outperforms classical baselines (Logistic Regression, SVM, Random Forest, and Gradient Boosting) trained on frozen DistilBERT hidden-state features.
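The classical-baseline setup can be sketched as follows: extract fixed (non-trainable) hidden-state features, then fit a linear classifier on top. Here random vectors stand in for real DistilBERT [CLS] embeddings, so the numbers are illustrative only:

```python
# Sketch of the frozen-feature baseline: a linear classifier trained on
# fixed 768-dimensional embeddings. Random data substitutes for real
# DistilBERT hidden states in this self-contained example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim, n_labels = 200, 768, 6
X = rng.normal(size=(n, dim))          # placeholder for frozen [CLS] features
y = rng.integers(0, n_labels, size=n)  # placeholder emotion ids

clf = LogisticRegression(max_iter=1000).fit(X, y)
preds = clf.predict(X[:5])
print(preds)
```

In the full pipeline, `X` would come from running the base `distilbert-base-uncased` encoder with gradients disabled and taking the first token's last hidden state.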

📦 Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("zhangxiaoxin/distilbert-base-uncased-finetuned-emotion")
model = AutoModelForSequenceClassification.from_pretrained(
    "zhangxiaoxin/distilbert-base-uncased-finetuned-emotion"
)

# Classify a sentence: map the top logit to its label name from the model config
inputs = tokenizer("I am so happy today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```
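
For quick experiments, the Transformers `pipeline` helper wraps tokenization, the forward pass, and label decoding in one call (downloading the checkpoint requires network access):

```python
from transformers import pipeline

# One-line text classification; the pipeline reads the label names
# from the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="zhangxiaoxin/distilbert-base-uncased-finetuned-emotion",
)
result = classifier("I can't believe this happened!")
print(result)  # a list of {"label": ..., "score": ...} dicts
```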

📜 License

This project is released under the MIT License.
