CMOSE: Comprehensive Multi-Modality Online Student Engagement Dataset with High-Quality Labels
Project page is here
Video clip name
Each video clip is named videoX_Y_personZ, meaning it is the Yth clip of the Zth subject from coaching session X.
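A minimal parsing sketch, assuming X, Y, and Z are plain integers in the identifier (the example clip name is hypothetical):

```python
import re

def parse_clip_name(name: str) -> dict:
    """Split an identifier such as 'video3_12_person5' into its parts."""
    m = re.fullmatch(r"video(\d+)_(\d+)_person(\d+)", name)
    if m is None:
        raise ValueError(f"Unexpected clip name: {name}")
    session, clip_idx, person = map(int, m.groups())
    return {"session": session, "clip": clip_idx, "person": person}

print(parse_clip_name("video3_12_person5"))
# {'session': 3, 'clip': 12, 'person': 5}
```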
OpenFace
We extract second-level features with OpenFace. The extracted files are stored under "secondfeature/videoX_Y_personZ.csv". These features include the following (a loading sketch follows the list):
Gaze Direction and Angles
- Three coordinates describing the gaze direction of the left and right eyes, respectively
- Two scalars describing the horizontal and vertical gaze angles
Head Position
- Three coordinates describing the position of the head relative to the camera
Head Rotation
- Rotation of the head described with pitch, yaw, and roll
Facial Action Units (AUs)
- Intensities of 17 AUs represented as scalars
- Presence of 18 AUs represented as binary indicators (0/1)
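As a sketch, the per-clip CSV can be loaded with pandas; the column groups below assume the standard OpenFace naming (gaze_*, pose_*, AU*_r, AU*_c), and the clip identifier is hypothetical:

```python
import pandas as pd

clip = "video1_1_person1"  # hypothetical clip identifier
df = pd.read_csv(f"secondfeature/{clip}.csv")
df.columns = df.columns.str.strip()  # OpenFace headers sometimes carry leading spaces

# Column groups, assuming default OpenFace naming conventions.
gaze_cols = [c for c in df.columns if c.startswith("gaze_")]  # directions and angles
head_cols = [c for c in df.columns if c.startswith("pose_")]  # position (Tx/Ty/Tz) and rotation (Rx/Ry/Rz)
au_intensity_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]  # AU intensities
au_presence_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_c")]   # AU presence (0/1)

print(len(gaze_cols), len(head_cols), len(au_intensity_cols), len(au_presence_cols))
```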
I3D
We use the I3D Repository to extract the I3D vectors. One I3D vector is extracted for each clip. The features are stored in "final_data_1.json".
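The exact layout of "final_data_1.json" is not documented above; the keys in this sketch (a clip name mapping to a record with an "i3d" entry) are assumptions to illustrate access:

```python
import json
import numpy as np

with open("final_data_1.json") as f:
    data = json.load(f)

clip = "video1_1_person1"            # hypothetical clip identifier
record = data[clip]                  # assumed: top-level keys are clip names
i3d_vector = np.asarray(record["i3d"], dtype=np.float32)  # assumed field name
print(i3d_vector.shape)              # one I3D vector per clip
```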
Acoustics
We use Parselmouth to extract the acoustic features. They are stored in "label_results_w_audio_final.json". We also calculate high-level features such as the percentage of high/low volume, the percentage of high/low pitch, and the standard deviation of volume and pitch. These are stored in "new_bert_ac_dict.json".
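This is not the authors' exact pipeline, but a sketch of how Parselmouth exposes pitch and intensity tracks and how high-level statistics (percentage of high/low pitch and volume, standard deviation) can be derived; the audio path and the median thresholds are assumptions:

```python
import numpy as np
import parselmouth

snd = parselmouth.Sound("video1_1_person1.wav")  # hypothetical audio path

pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                  # keep voiced frames only

intensity = snd.to_intensity()
db = intensity.values.flatten()

def high_low_stats(x: np.ndarray, threshold: float) -> dict:
    """Fraction of frames above/below a threshold, plus the standard deviation."""
    return {
        "pct_high": float(np.mean(x > threshold)),
        "pct_low": float(np.mean(x <= threshold)),
        "std": float(np.std(x)),
    }

# Median thresholds are illustrative; the dataset may use different cutoffs.
print("pitch:", high_low_stats(f0, float(np.median(f0))))
print("volume:", high_low_stats(db, float(np.median(db))))
```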
Narrations
We collect the narrations from Zoom's Live Transcript function. They are stored in "label_results_w_audio_final.json". We also extract BERT features from the narrations and store them in "new_bert_ac_dict.json".
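The BERT variant and pooling strategy are not specified above; this sketch assumes bert-base-uncased and uses the [CLS] embedding as the narration feature:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

narration = "So today we will go over the homework from last week."  # example text
inputs = tokenizer(narration, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token embedding as a sentence-level feature (shape: 1 x 768).
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)
```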
Data split
Split information can be found in "final_data_1.json". The "split" field takes one of three values: "train", "unlabel", or "test". We use the "unlabel" split for validation purposes.
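A sketch of grouping clips by split; it assumes "final_data_1.json" maps clip names to records containing a "split" field (adjust the keys to the actual structure):

```python
import json
from collections import defaultdict

with open("final_data_1.json") as f:
    data = json.load(f)

splits = defaultdict(list)
for clip_name, record in data.items():   # assumed: top-level keys are clip names
    splits[record["split"]].append(clip_name)

for name in ("train", "unlabel", "test"):
    print(name, len(splits[name]))       # "unlabel" serves as the validation split
```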