MarkChenX committed
Commit 6739e1a · 0 Parent(s)

Initial commit: Multi-TW Chinese language learning dataset

.gitattributes ADDED
@@ -0,0 +1,2 @@
+ validation/*.arrow filter=lfs diff=lfs merge=lfs -text
+ validation/*.parquet filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,83 @@
+ # Multi-TW: Traditional Chinese Language Learning Dataset
+
+ ## Dataset Description
+
+ Multi-TW is a Traditional Chinese language-learning and assessment dataset of 900 multiple-choice questions with multimedia content, designed for evaluating multi-modal language models on Traditional Chinese comprehension tasks.
+
+ ## Dataset Structure
+
+ The dataset provides a single validation split of 900 samples, intended for benchmarking.
+
+ ### Data Fields
+
+ - `id`: Unique identifier for each question
+ - `instruction`: Task instructions in Chinese
+ - `question`: The question text in Chinese
+ - `option1`: Multiple-choice option A
+ - `option2`: Multiple-choice option B
+ - `option3`: Multiple-choice option C
+ - `option4`: Multiple-choice option D (may be empty)
+ - `answer`: Correct answer (A, B, C, or D)
+ - `image`: PIL Image object (for visual questions)
+ - `audio`: Audio data with sampling rate (for audio questions)
+
+ ### Data Composition
+
+ - **Total samples**: 900
+ - **Samples with images**: 450
+ - **Samples with audio**: 450
+ - **Answer distribution**: A: 249, B: 261, C: 263, D: 127
+ - **Question types**: L (Listening): 660, R (Reading): 240
+
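+ These composition figures can be reproduced with a quick tally (a minimal sketch, assuming the `load_dataset` call shown under Usage below; iterating decodes the image and audio columns, so the pass is not instant):
+
+ ```python
+ from collections import Counter
+ from datasets import load_dataset
+
+ validation_data = load_dataset("ntuai/multi-tw")["validation"]
+
+ # Tally answer labels and count which samples carry each modality
+ answers = Counter(validation_data["answer"])
+ n_image = n_audio = 0
+ for sample in validation_data:
+     n_image += sample["image"] is not None
+     n_audio += sample["audio"] is not None
+
+ print(len(validation_data))  # expected: 900
+ print(dict(answers))         # expected: {'A': 249, 'B': 261, 'C': 263, 'D': 127}
+ print(n_image, n_audio)      # expected: 450 450
+ ```
+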
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("ntuai/multi-tw")
+ validation_data = dataset["validation"]
+
+ # Access a sample
+ sample = validation_data[0]
+ print(f"Question: {sample['question']}")
+ print(f"Options: {sample['option1']}, {sample['option2']}, {sample['option3']}")
+ print(f"Answer: {sample['answer']}")
+
+ # Check whether the sample carries an image or audio
+ if sample['image'] is not None:
+     # Process image (a PIL.Image.Image)
+     image = sample['image']
+
+ if sample['audio'] is not None:
+     # Process audio (decoded waveform plus sampling rate)
+     audio_array = sample['audio']['array']
+     sampling_rate = sample['audio']['sampling_rate']
+ ```
+
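+ To work on one modality at a time, the split can be partitioned with `filter` (a sketch building on the snippet above; the predicate decodes each sample, so expect a one-off pass over the data):
+
+ ```python
+ # Partition into the image-only and audio-only subsets
+ image_subset = validation_data.filter(lambda s: s["image"] is not None)
+ audio_subset = validation_data.filter(lambda s: s["audio"] is not None)
+ print(len(image_subset), len(audio_subset))  # expected: 450 450
+ ```
+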
+ ## Dataset Characteristics
+
+ The dataset covers several aspects of Chinese language learning:
+
+ - **Visual comprehension**: Questions requiring image understanding
+ - **Audio comprehension**: Questions requiring audio understanding
+ - **Multiple-choice format**: 3-4 options per question
+ - **Answer balance**: Choices A-C are roughly even; D appears less often because some questions offer only three options
+
+ ## License
+
+ MIT License
+
+ ## Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @dataset{multi_tw_2024,
+   title={Multi-TW: Chinese Language Learning Dataset},
+   author={NTUAI Club},
+   year={2024},
+   publisher={Hugging Face},
+   url={https://huggingface.co/datasets/ntuai/multi-tw}
+ }
+ ```
dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["validation"]}
validation/data-00000-of-00002.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edfdaad0c85876c7173c18ffad9d6631684d4ac16ab773f8e9ae12aa79765f67
+ size 406990848
validation/data-00001-of-00002.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7b9745944184b60e04eb17b088f70a47e6ae0cdc47ef6a019f36b3d95fb8da1
+ size 466324984
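
The two pointer files above record the SHA-256 and size of the real Arrow shards, which Git LFS downloads on checkout. A fetched shard can be checked against its pointer like this (a sketch; the path is the hypothetical location inside a local clone with LFS objects pulled):

```python
import hashlib

# Hash an LFS-fetched Arrow shard and compare with the pointer's oid
path = "validation/data-00000-of-00002.arrow"  # hypothetical local path
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())  # should equal the sha256 oid in the pointer
```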
validation/dataset_info.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "id": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "instruction": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "question": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "option1": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "option2": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "option3": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "option4": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "answer": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "image": {
+       "_type": "Image"
+     },
+     "audio": {
+       "_type": "Audio"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
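
The features block above corresponds to a `datasets.Features` declaration; a minimal sketch of the equivalent in code, useful when rebuilding or validating the schema:

```python
from datasets import Audio, Features, Image, Value

# Mirror of the "features" section in dataset_info.json
features = Features({
    "id": Value("string"),
    "instruction": Value("string"),
    "question": Value("string"),
    "option1": Value("string"),
    "option2": Value("string"),
    "option3": Value("string"),
    "option4": Value("string"),
    "answer": Value("string"),
    "image": Image(),
    "audio": Audio(),
})
```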
validation/state.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00002.arrow"
+     },
+     {
+       "filename": "data-00001-of-00002.arrow"
+     }
+   ],
+   "_fingerprint": "6b41480435b97c9a",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
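
Taken together, `dataset_dict.json`, the Arrow shards, `dataset_info.json`, and `state.json` form the on-disk layout written by `DatasetDict.save_to_disk`, so a local clone (with LFS objects pulled) can also be opened without the Hub loader (a sketch; "multi-tw" stands in for the hypothetical clone directory):

```python
from datasets import load_from_disk

# Load the saved Arrow files directly from the cloned repository
ds = load_from_disk("multi-tw")["validation"]
print(ds)
```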