Update README.md

README.md
```
pip install lens-metric
```

```python
from lens import download_model, LENS_SALSA

lens_salsa_path = download_model("davidheineman/lens-salsa")
lens_salsa = LENS_SALSA(lens_salsa_path)

complex = [
    "They are culturally akin to the coastal peoples of Papua New Guinea."
]
simple = [
    "They are culturally similar to the people of Papua New Guinea."
]

scores, word_level_scores = lens_salsa.score(complex, simple, batch_size=8, devices=[0])
print(scores) # [72.40909337997437]

# LENS-SALSA also returns an error-identification tagging; recover_output() returns the tagged output
tagged_output = lens_salsa.recover_output(word_level_scores, threshold=0.5)
print(tagged_output)
```
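
Since `score()` returns one score per input pair, the per-sentence scores can be used directly to triage outputs. The snippet below is an illustrative sketch (not part of the original quickstart) that reuses the `complex`, `simple`, and `scores` variables from the block above; the cutoff of 50 is an arbitrary placeholder, not a calibrated threshold.

```python
# Illustrative sketch: flag simplifications whose sentence-level score falls
# below an arbitrary cutoff so they can be inspected manually.
CUTOFF = 50.0  # placeholder value, not a calibrated threshold

for source, simplification, score in zip(complex, simple, scores):
    status = "LOW" if score < CUTOFF else "OK"
    print(f"{status} ({score:.1f}): {simplification}")
```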

For an example, please see the [quick demo Google Colab notebook](https://colab.research.google.com/drive/1rIYrbl5xzL5b5sGUQ6zFBfwlkyIDg12O?usp=sharing).

## Intended uses
Our model is intended to be used for **reference-free simplification evaluation**. Given a source text and its simplification, it outputs a single score between 0 and 100, where 100 represents a perfect simplification and 0 a random simplification. LENS-SALSA was trained on edit annotations from the SimpEval dataset, which covers manually-written simplifications of complex Wikipedia text. We have not evaluated our model on non-English languages or non-Wikipedia domains.
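
Because the metric is reference-free, evaluating a simplification system over a corpus only requires the source sentences and the system outputs; no gold simplifications are needed. Below is a minimal sketch under that assumption, reusing the API shown in the quickstart above (the sentence pairs are invented placeholders):

```python
from lens import download_model, LENS_SALSA

# Reference-free corpus evaluation: only sources and system outputs are scored.
lens_salsa = LENS_SALSA(download_model("davidheineman/lens-salsa"))

sources = [
    "The legislation was enacted to mitigate the deleterious effects of pollution.",
    "He relinquished his position following the inquiry.",
]
system_outputs = [
    "The law was passed to reduce the harmful effects of pollution.",
    "He gave up his job after the investigation.",
]

scores, _ = lens_salsa.score(sources, system_outputs, batch_size=8, devices=[0])
print(f"System-level LENS-SALSA score: {sum(scores) / len(scores):.2f}")
```

Averaging the per-sentence scores is one simple way to obtain a single system-level number; other aggregation schemes can of course be substituted.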