Researchers at Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments likely made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, researchers at Cornell found that these systems “flag” tweets likely written by black social media users more often, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team suggested that the unexpected findings could be explained by “systematic racial bias” on the part of the human annotators who helped label offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
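The disparity the study describes can be illustrated with a short sketch: given a classifier’s flag decisions and a dialect label for each tweet, compare the rate at which each group’s tweets are marked abusive. The data and group labels below are invented for illustration and are not drawn from the study itself.

```python
# Hypothetical illustration of the kind of disparity the study reports:
# compute, per group, the fraction of tweets a classifier flags as abusive.

def flag_rates(predictions, groups):
    """Return the fraction of tweets flagged as abusive for each group."""
    totals, flagged = {}, {}
    for is_flagged, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        if is_flagged:
            flagged[group] = flagged.get(group, 0) + 1
    return {g: flagged.get(g, 0) / totals[g] for g in totals}

# Toy data (invented for illustration; not from the study).
preds = [True, False, True, True, False, False, True, False]
groups = ["AAE", "AAE", "AAE", "AAE",
          "white-aligned", "white-aligned", "white-aligned", "white-aligned"]

rates = flag_rates(preds, groups)
# With this toy data, AAE tweets are flagged at triple the rate of
# white-aligned tweets — the shape of the imbalance the abstract describes.
```

A disparity like this in a deployed system is what the authors mean by a “disproportionate negative impact” on one group of users.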
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
Shortly after the release of Perspective, YouTube user Tormental posted a video of the program at work, showing apparent inconsistencies in how it scored comments.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.
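Perspective is exposed as a REST API: a client sends a comment and a list of requested attributes, and the service returns a score per attribute. The sketch below builds a request body for a TOXICITY score; the endpoint URL and response field path follow Google’s public documentation at the time of writing and should be treated as assumptions, and no network call is made here.

```python
import json

# Assumed endpoint from Perspective's public documentation (verify before use).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_request(text):
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

# A comparison like Tormental's would send one request per phrase and
# compare the returned attributeScores["TOXICITY"]["summaryScore"]["value"]
# fields for otherwise-identical sentences naming different groups.
body = json.dumps(toxicity_request("example comment"))
```

Pairing sentences that differ only in the group named is a simple way to surface the kind of scoring discrepancy described above.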