Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments purportedly made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, researchers at Cornell discovered that such systems “flag” tweets likely written by black social media users more often, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team averred that the unexpected findings could be explained by “systematic racial bias” displayed by the human beings who assisted in spotting offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
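The kind of measurement the researchers describe, comparing how often a trained classifier flags tweets from different dialect-aligned groups, can be illustrated with a short sketch. The classifier and the tweet lists below are hypothetical placeholders for illustration only, not the study’s actual code or data.

```python
# Hypothetical illustration of measuring differential flag rates.
# `classify_abusive` stands in for any trained abusive-language classifier;
# the tweet lists are placeholders, not the study's data.

def flag_rate(tweets, classify_abusive):
    """Fraction of tweets the classifier marks as abusive."""
    flagged = sum(1 for t in tweets if classify_abusive(t))
    return flagged / len(tweets) if tweets else 0.0

def disparity(group_a, group_b, classify_abusive):
    """Ratio of flag rates between two groups (>1 means group A is flagged more often)."""
    rate_a = flag_rate(group_a, classify_abusive)
    rate_b = flag_rate(group_b, classify_abusive)
    return rate_a / rate_b if rate_b else float("inf")

if __name__ == "__main__":
    # Toy keyword-based stand-in classifier, purely for demonstration.
    toy_classifier = lambda text: "bad" in text.lower()
    group_a_tweets = ["this is bad fr", "all good here"]
    group_b_tweets = ["what a nice day", "all good here"]
    print(disparity(group_a_tweets, group_b_tweets, toy_classifier))
```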
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, alleging inconsistencies in implementation.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.
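For readers curious how testers like Tormental probe such a system, the sketch below shows one way a toxicity score might be requested from Perspective and compared across paired phrases. It follows the shape of Perspective’s publicly documented Comment Analyzer endpoint, but the API key is a placeholder and the details may have changed, so treat it as an assumption-laden illustration rather than official usage.

```python
# Hedged sketch of querying Google's Perspective API for a toxicity score.
# Endpoint and request shape follow Perspective's public Comment Analyzer
# documentation; API_KEY is a placeholder and details may have changed.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Comparing scores for otherwise-equivalent phrases aimed at different
    # groups is how testers looked for inconsistencies.
    print(toxicity_score("example phrase one"))
    print(toxicity_score("example phrase two"))
```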