---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- Emo-Codec/CREMA-D_synth
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilhubert-tone-classification
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: CREMA-D
      type: Emo-Codec/CREMA-D_synth
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7024128686327078
    - name: Precision
      type: precision
      value: 0.7036509389001218
    - name: Recall
      type: recall
      value: 0.7024128686327078
    - name: F1
      type: f1
      value: 0.6970142752522046
---

# distilhubert-tone-classification

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the CREMA-D dataset.
It achieves the following results on the evaluation set:

- Loss: 1.1479
- Accuracy: 0.7024
- Precision: 0.7037
- Recall: 0.7024
- F1: 0.6970

An inference example is given under "How to use" at the end of this card.

## Model description

[ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) is a distilled version of the HuBERT speech encoder. This checkpoint fine-tunes it with a classification head to predict the emotional tone of short speech clips from CREMA-D.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned and evaluated on [Emo-Codec/CREMA-D_synth](https://huggingface.co/datasets/Emo-Codec/CREMA-D_synth), a Hugging Face Hub version of the CREMA-D emotional speech dataset. Split details are not recorded in this card.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8

A `TrainingArguments` sketch consistent with these settings is given under "Training configuration (sketch)" below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.339         | 1.0   | 442  | 1.3491          | 0.4987   | 0.5533    | 0.4987 | 0.4664 |
| 1.0008        | 2.0   | 884  | 1.0219          | 0.6408   | 0.6668    | 0.6408 | 0.6373 |
| 0.7673        | 3.0   | 1326 | 0.9572          | 0.6676   | 0.6870    | 0.6676 | 0.6557 |
| 0.5888        | 4.0   | 1768 | 0.8830          | 0.6890   | 0.6930    | 0.6890 | 0.6889 |
| 0.4396        | 5.0   | 2210 | 1.0893          | 0.6810   | 0.7064    | 0.6810 | 0.6738 |
| 0.2987        | 6.0   | 2652 | 1.0561          | 0.6810   | 0.6892    | 0.6810 | 0.6738 |
| 0.2009        | 7.0   | 3094 | 1.1421          | 0.6836   | 0.6944    | 0.6836 | 0.6769 |
| 0.1345        | 8.0   | 3536 | 1.1479          | 0.7024   | 0.7037    | 0.7024 | 0.6970 |

### Framework versions

- Transformers 4.50.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
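
### Training configuration (sketch)

The snippet below is a minimal `TrainingArguments` sketch consistent with the hyperparameters listed above; it is not the exact training script. `output_dir` and the eval/logging strategies are assumptions that are not recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-tone-classification",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",        # AdamW (torch); betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=8,
    eval_strategy="epoch",      # assumption: the results above are logged once per epoch
    logging_strategy="epoch",
)
```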
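
### Metric computation (sketch)

Accuracy, precision, recall, and F1 can be produced with a `compute_metrics` function such as the one below. Weighted averaging is an assumption (it is consistent with recall matching accuracy in the reported numbers), not something recorded in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" averaging is an assumption; adjust if a different average was used.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```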
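
## How to use

A minimal inference sketch using the `audio-classification` pipeline. The model path and input file are placeholders: point `model=` at the actual Hub repo id or a local checkpoint directory, and pass any short speech clip (DistilHuBERT expects 16 kHz mono audio).

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="distilhubert-tone-classification",  # placeholder: actual repo id or local path
)

# "sample.wav" is a hypothetical speech clip.
predictions = classifier("sample.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```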