# German BERT for Legal NER
F1-Score: 99.762
This model is fine-tuned on the German LER (Legal Entity Recognition) dataset, introduced by Leitner et al. (2019). The dataset provides annotations for 19 fine-grained legal entity classes, capturing the complexity of German legal texts.
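The model can be loaded directly with the transformers token-classification pipeline. Below is a minimal usage sketch; the example sentence is hypothetical, and the exact entity labels returned depend on the model's label config:

```python
from transformers import pipeline

# Load the fine-tuned NER model from the Hugging Face Hub.
ner = pipeline(
    "token-classification",
    model="harshildarji/JuraNER",
    aggregation_strategy="simple",  # merge subword tokens into whole entities
)

# Hypothetical example sentence mentioning a court (GRT) and a law (GS).
text = "Das Bundesverfassungsgericht entschied nach § 32 Abs. 1 BVerfGG."

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```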
## Class-wise Performance Metrics
The table below summarizes the class-wise performance metrics of our improved model:
| Abbreviation | Class | Share of dataset (%) | F1-Score (%) |
|---|---|---|---|
| PER | Person | 3.26 | 94.47 |
| RR | Judge | 2.83 | 99.56 |
| AN | Lawyer | 0.21 | 92.31 |
| LD | Country | 2.66 | 96.30 |
| ST | City | 1.31 | 91.53 |
| STR | Street | 0.25 | 95.05 |
| LDS | Landscape | 0.37 | 88.24 |
| ORG | Organization | 2.17 | 93.72 |
| UN | Company | 1.97 | 98.16 |
| INN | Institution | 4.09 | 97.73 |
| GRT | Court | 5.99 | 98.32 |
| MRK | Brand | 0.53 | 98.65 |
| GS | Law | 34.53 | 99.46 |
| VO | Ordinance | 1.49 | 95.72 |
| EUN | European legal norm | 2.79 | 97.79 |
| VS | Regulation | 1.13 | 89.73 |
| VT | Contract | 5.34 | 99.22 |
| RS | Court decision | 23.46 | 99.76 |
| LIT | Legal literature | 5.60 | 98.09 |
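Per-class scores of this kind are conventionally computed at the entity level over BIO-tagged sequences, for example with seqeval. The following is an illustrative sketch with hypothetical gold and predicted tags, not the actual evaluation code behind the table:

```python
from seqeval.metrics import classification_report

# Hypothetical gold and predicted BIO tag sequences for two sentences.
y_true = [["B-GRT", "O", "O", "B-GS", "I-GS"], ["B-PER", "I-PER", "O"]]
y_pred = [["B-GRT", "O", "O", "B-GS", "I-GS"], ["B-PER", "O", "O"]]

# Entity-level precision, recall, and F1 per class, as in the table above.
print(classification_report(y_true, y_pred, digits=2))
```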
## Comparison of F1 Scores
Below is a comparison of F1 scores between our previous model, gbert-legal-ner, and JuraNER:

*(Figure: per-class F1-score comparison of gbert-legal-ner and JuraNER.)*
## Base Model
JuraNER is fine-tuned from google-bert/bert-base-german-cased.
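For readers who want to reproduce the setup, a minimal fine-tuning starting point from this base model might look as follows. The label count assumes a BIO tagging scheme over the 19 classes plus an O tag; this is an assumption, as the card does not state the scheme explicitly:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumption: BIO scheme -> one B- and one I- tag per class, plus "O".
num_labels = 19 * 2 + 1  # 39

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-german-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "google-bert/bert-base-german-cased",
    num_labels=num_labels,
)
# From here, tokenize the LER data with word-to-token label alignment and
# train with the standard transformers Trainer.
```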