Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments likely made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, Cornell researchers found that these systems “flag” tweets that likely come from black social media users at higher rates, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team suggested that the unexpected findings could be explained by “systematic racial bias” on the part of the human annotators who helped label offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
One of the study’s authors said that “internal biases” may explain why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, alleging inconsistencies in how it scored comparable statements.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.
Story cited here.









