
Chirag Agarwal

AikyamLab · https://chirag-agarwall.github.io/
  • _cagarwal
  • chirag-agarwal-0a6a43a1

AI & ML interests

Explainability and Interpretability; AI Safety; AI Alignment

Recent Activity

upvoted a paper about 1 month ago
CLINIC: Evaluating Multilingual Trustworthiness in Language Models for Healthcare
upvoted a paper about 1 month ago
Polarity-Aware Probing for Quantifying Latent Alignment in Language Models
liked a dataset about 1 month ago
SabrinaSadiekh/not_hate_dataset

Organizations

LLM-XAI · Aikyam Lab

upvoted 2 papers about 1 month ago

CLINIC: Evaluating Multilingual Trustworthiness in Language Models for Healthcare

Paper • arXiv:2512.11437 • Published Dec 12, 2025 • 3 upvotes

Polarity-Aware Probing for Quantifying Latent Alignment in Language Models

Paper • arXiv:2511.21737 • Published Nov 21, 2025 • 1 upvote
upvoted a collection about 1 month ago

Polarity-Aware Probing Datasets

Collection · Datasets for PA-Probing, described in "Polarity-Aware Probing for Quantifying Latent Alignment in Language Models" (https://www.arxiv.org/pdf/2511.21737) • 2 items • Updated Dec 7, 2025 • 1 upvote
upvoted a paper almost 2 years ago

Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models

Paper • arXiv:2402.04614 • Published Feb 7, 2024 • 3 upvotes