AI Bias and Use of Language Metaphor Matching
Cool toxicity demo. Lately I've been studying how to predict bias in data using AI. The toxicity datasets from Kaggle were good for posing training examples, yet I found the model couldn't make the leap to a general understanding of the groups that suffer from toxicity or a lack of fairness. Sarcasm was also hard to detect, as was picking out the specific term in the toxicity data that made a comment offensive or unfair enough to cause alienation.
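To frame the question, here is a minimal sketch of the kind of toxicity scoring I mean, using a pretrained classifier through the Hugging Face transformers pipeline. The model name and example sentences are just illustrative choices on my part, not necessarily what either of our setups used:

```python
# Minimal toxicity-scoring sketch (model name is an illustrative assumption;
# any classifier fine-tuned on Kaggle-style toxic-comment data fills the same role).
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

for text in ["You people never learn anything.", "Thanks for the helpful demo!"]:
    result = toxicity(text)[0]  # top label and its score for this input
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```

A classifier like this gives a single score per comment, which is exactly where it struggles to say *which* term, or which targeted group, drove the prediction.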
Do you think you could do syntactic similarity with a list of population groups that are commonly subject to bias or toxicity?
The groups I am interested in are listed below. Surprisingly, they score nearly 50/50, mostly because of the distance from the term to the nearest matching term in the dataset (see the sketch after the list).
Racial Groups
Language
Sex and Gender
Sexual Identity
Age
Ableism
Overweight and Obesity
Socioeconomic Status
Education
Geographic Location
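For the similarity idea, here is a minimal sketch of what I have in mind: embed each group label and compare a flagged term against them by cosine similarity. This assumes the sentence-transformers library; the model name, the example terms, and the `nearest_group` helper are illustrative choices, not part of the demo itself:

```python
# Minimal sketch: match a flagged term to the closest population group above
# using embedding cosine similarity (sentence-transformers assumed installed).
from sentence_transformers import SentenceTransformer, util

GROUPS = [
    "racial groups",
    "language",
    "sex and gender",
    "sexual identity",
    "age",
    "ableism",
    "overweight and obesity",
    "socioeconomic status",
    "education",
    "geographic location",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
group_embeddings = model.encode(GROUPS, convert_to_tensor=True)

def nearest_group(term: str):
    """Return the group with the highest cosine similarity to `term`."""
    term_embedding = model.encode(term, convert_to_tensor=True)
    scores = util.cos_sim(term_embedding, group_embeddings)[0]
    best = int(scores.argmax())
    return GROUPS[best], float(scores[best])

print(nearest_group("wheelchair user"))   # expected to land near "ableism"
print(nearest_group("immigrant accent"))  # expected to land near "language"
```

Embedding similarity rather than purely syntactic matching might help when the offending term never appears verbatim in the training data, which seems to be what drives the near 50/50 scores I'm seeing.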