HALoGEN: Fantastic LLM Hallucinations and Where to Find Them Paper • 2501.08292 • Published Jan 14, 2025
SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models Paper • 2502.09604 • Published Feb 13, 2025
MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs Paper • 2505.24858 • Published May 30, 2025
Investigating Hallucination in Conversations for Low Resource Languages Paper • 2507.22720 • Published Jul 30, 2025