Researchers at Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments likely made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, Cornell researchers found that such systems “flag” tweets likely written by black social media users more often than others, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team averred that the unexpected findings could be explained by “systematic racial bias” displayed by the human annotators who helped label content as offensive.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
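In practice, the disparity the abstract describes is a gap in per-group flag rates produced by a trained classifier. The short Python sketch below illustrates how such a gap would be measured; the classifier, field names, and dialect labels are hypothetical placeholders and do not reproduce the researchers’ actual code or data.

```python
# Hypothetical sketch: measuring per-group "flag rates" for an abusive-language
# classifier. The classifier, field names, and labels are illustrative only and
# do not reproduce the Cornell study's actual models or datasets.
from collections import defaultdict

def flag_rates(tweets, classify_abusive, threshold=0.5):
    """Return the fraction of tweets flagged as abusive, broken out by dialect group.

    `tweets` is an iterable of dicts like {"text": ..., "dialect": "AAE" | "white-aligned"}.
    `classify_abusive` is any model returning a probability that the text is abusive.
    """
    counts = defaultdict(lambda: {"flagged": 0, "total": 0})
    for tweet in tweets:
        group = counts[tweet["dialect"]]
        group["total"] += 1
        if classify_abusive(tweet["text"]) >= threshold:
            group["flagged"] += 1
    return {
        dialect: group["flagged"] / group["total"]
        for dialect, group in counts.items()
        if group["total"] > 0
    }

# A large gap between, say, rates["AAE"] and rates["white-aligned"] is the kind
# of disproportionate impact the study's abstract warns about.
```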
One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
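Perspective is exposed through Google’s Comment Analyzer API, which returns a toxicity score between 0 and 1 for a submitted piece of text. The minimal Python sketch below shows roughly what such a request looks like; the endpoint and field names follow Google’s published documentation as best recalled here, and the API key is a placeholder.

```python
# Hedged sketch of a request to Google's Perspective (Comment Analyzer) API.
# Endpoint and JSON shape follow Google's public documentation, but treat the
# details as an approximation; API_KEY is a placeholder you must supply.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Ask Perspective how 'toxic' a phrase is; returns a score in [0, 1]."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Scoring parallel phrases aimed at different groups side by side is the
    # sort of comparison critics of the tool have run.
    print(toxicity("You are a terrible person."))
```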
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, alleging inconsistencies in implementation.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.









