Create README.md

---
datasets:
- UKPLab/DARA-Agentbench
---

# DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs

## Model Information
This model is a fine-tuned semantic parsing LLM agent for question answering over knowledge graphs (KGQA). It was obtained by fine-tuning Llama-2-7B on our curated reasoning trajectories in the **AgentBench format**: https://huggingface.co/datasets/UKPLab/dara-agentbench.
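
If you want to inspect the training trajectories, the dataset can usually be pulled with the `datasets` library. The snippet below is a minimal sketch and assumes the dataset's default configuration loads directly:

```python
from datasets import load_dataset

# Minimal sketch: assumes the dataset's default configuration loads as-is.
trajectories = load_dataset("UKPLab/dara-agentbench")
print(trajectories)
```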

## Model Usage

```python
import torch
from transformers import AutoModelForCausalLM

# Load the fine-tuned agent in half precision, placed automatically across available devices
model = AutoModelForCausalLM.from_pretrained("UKPLab/agentbench-7b", torch_dtype=torch.float16, device_map="auto", cache_dir="cache")
```
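
The snippet above only loads the weights. To actually query the agent you also need a tokenizer and a generation call; the following is a minimal sketch that assumes the model repository ships the matching Llama-2 tokenizer (not shown in the original snippet):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UKPLab/agentbench-7b", cache_dir="cache")

prompt = "..."  # an AgentBench-style task prompt; see the dataset linked above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```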

For more information, please see the GitHub repository: https://github.com/UKPLab/acl2024-DARA

## Hyperparameters
- Learning rate: 2e-5
- Batch size: 4
- Training epochs: 10
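
The full fine-tuning code is in the GitHub repository linked above. Purely as an illustration, these hyperparameters would map onto a Hugging Face `TrainingArguments` configuration roughly as follows; the output directory, the per-device reading of the batch size, and the precision flag are assumptions, not values taken from the repository:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dara-llama2-7b",    # hypothetical output path
    learning_rate=2e-5,             # Learning rate: 2e-5
    per_device_train_batch_size=4,  # Batch size: 4 (assumed to be per device)
    num_train_epochs=10,            # Training epochs: 10
    bf16=True,                      # assumption; the precision setting is not documented here
)
```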