# Open Telco Leaderboard Dataset

Benchmark scores for LLMs evaluated on telecommunications-specific tasks.

| model | teleqna | telelogs | telemath | 3gpp_tsg | date | tci |
|---|---|---|---|---|---|---|
| gpt-5.2 (Openai) | [83.6, 1.17, 1000] | [75, 4.35, 100] | [39, 4.90, 100] | [54, 5.01, 100] | 2026-01-09 | [126.9, 1.56, 0] |
| gemini-3-flash-preview (Google) | [83.6, 1.17, 1000] | [53, 5.02, 100] | [38, 4.88, 100] | [64, 4.82, 100] | 2026-01-09 | [123.8, 1.59, 0] |
| deepseek-v3.2 (Deepseek) | [80.1, 1.26, 1000] | [53, 5.02, 100] | [34, 4.76, 100] | [55, 5.00, 100] | 2026-01-09 | [119.9, 1.62, 0] |
| claude-opus-4.5 (Anthropic) | [84.6, 1.14, 1000] | [57, 4.98, 100] | [35, 4.79, 100] | [63, 4.85, 100] | 2026-01-09 | [124.2, 1.58, 0] |
| ministral-8b-2512 (Mistral) | [69.6, 1.46, 1000] | [19, 3.94, 100] | [25, 4.35, 100] | [33, 4.73, 100] | 2026-01-11 | [102.5, 1.78, 0] |
| gpt-5-mini (openai) | [52.2, 1.58, 1000] | [55, 5.00, 100] | [37, 4.85, 100] | [45, 5.00, 100] | 2026-01-13 | [112.2, 1.69, 0] |
| gpt-oss-120b (Openai) | [78.6, 1.30, 1000] | [46, 5.01, 100] | [30, 4.61, 100] | [37, 4.85, 100] | 2026-01-11 | [113.6, 1.68, 0] |
| claude-haiku-4.5 (Anthropic) | [78.2, 1.31, 1000] | [27, 4.46, 100] | [32, 4.69, 100] | [51, 5.02, 100] | 2026-01-11 | [112.5, 1.69, 0] |
| gpt-oss-20b (Openai) | [78.3, 1.30, 1000] | [38, 4.88, 100] | [31, 4.65, 100] | [25, 4.35, 100] | 2026-01-11 | [109.3, 1.72, 0] |
## Schema
| Column | Type | Description |
|---|---|---|
| `model` | string | Model name with provider |
| `teleqna` | list | `[score, stderr, n_samples]` |
| `telelogs` | list | `[score, stderr, n_samples]` |
| `telemath` | list | `[score, stderr, n_samples]` |
| `3gpp_tsg` | list | `[score, stderr, n_samples]` |
| `tci` | list | `[score, stderr, n_samples]` |
| `date` | string | Evaluation date |
Each benchmark column contains a three-element list: `[score (0-100), standard_error, number_of_samples]`.
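As a concrete reading of one triple, here is a minimal sketch. The numbers are copied from the gpt-5.2 `teleqna` entry in the table above; the ±1.96·stderr interval is an illustrative normal-approximation confidence interval, not a value the dataset ships:

```python
# Unpack a benchmark triple: [score, stderr, n_samples].
# Values copied from the gpt-5.2 "teleqna" entry above.
score, stderr, n_samples = [83.6, 1.17, 1000]

# Rough 95% confidence interval via the normal approximation.
half_width = 1.96 * stderr
print(f"TeleQnA: {score:.1f} ± {half_width:.2f} (n={n_samples})")
# -> TeleQnA: 83.6 ± 2.29 (n=1000)
```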
## Benchmarks
| Benchmark | Description |
|---|---|
| TeleQnA | Q&A pairs testing telecom knowledge |
| TeleMath | Mathematical reasoning in telecommunications |
| TeleLogs | Root cause analysis for 5G network issues |
| 3GPP-TSG | Classification of 3GPP technical documents |
## Usage
```python
from datasets import load_dataset

ds = load_dataset("GSMA/leaderboard")
df = ds["train"].to_pandas()

# Extract scores from the [score, stderr, n_samples] lists
for bench in ['teleqna', 'telelogs', 'telemath', '3gpp_tsg']:
    df[f'{bench}_score'] = df[bench].apply(lambda x: x[0])
    df[f'{bench}_stderr'] = df[bench].apply(lambda x: x[1])
    df[f'{bench}_n'] = df[bench].apply(lambda x: x[2])

# Calculate the mean score across benchmarks and rank models by it
score_cols = ['teleqna_score', 'telelogs_score', 'telemath_score', '3gpp_tsg_score']
df['mean'] = df[score_cols].mean(axis=1)
df['rank'] = df['mean'].rank(ascending=False).astype(int)
print(df[['model', 'mean', 'rank']].sort_values('rank'))
```
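Because the benchmarks differ in sample size (1000 TeleQnA questions versus 100 for the others), it can help to report each score with its 95% interval rather than the bare mean. A sketch continuing from the snippet above; the `_ci95` column names are illustrative, not part of the dataset:

```python
# Continues from the Usage snippet: df already holds *_score and *_stderr columns.
for bench in ['teleqna', 'telelogs', 'telemath', '3gpp_tsg']:
    # Half-width of a normal-approximation 95% confidence interval
    df[f'{bench}_ci95'] = 1.96 * df[f'{bench}_stderr']
    leader = df.loc[df[f'{bench}_score'].idxmax()]
    print(f"{bench}: {leader['model']} at "
          f"{leader[f'{bench}_score']:.1f} ± {leader[f'{bench}_ci95']:.2f}")
```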