
Colleges Create AI to Identify ‘Hate Speech’ – Turns Out Minorities Are the Worst Offenders

Researchers at Cornell University discovered that artificial intelligence systems designed to identify offensive "hate speech" flag comments purportedly made by minorities "at substantially higher rates" than remarks made by whites.

Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post "hate speech." In a study published in May, the Cornell researchers found that such systems "flag" tweets likely written by black social media users more often than others, according to Campus Reform.

The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”


The study also found that the classifiers rate "black-aligned tweets" as "sexist at almost twice the rate of white-aligned tweets."

The research team averred that the unexpected findings could be explained by “systematic racial bias” displayed by the human beings who assisted in spotting offensive content.


“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”

One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
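
To make the finding concrete, here is a minimal sketch of the kind of comparison the study describes: run a classifier over tweets grouped by inferred dialect and compare how often each group gets flagged. This is not the study's code; `classify_toxic` and the sample data are hypothetical placeholders standing in for any real abuse-detection model and dataset.

```python
# Illustrative sketch (not the study's code): compare how often a toxicity
# classifier flags tweets from different dialect-aligned groups.

def classify_toxic(text: str) -> bool:
    """Stand-in for any hate-speech/abuse classifier; returns True if flagged."""
    offensive_markers = {"hate", "stupid"}  # toy rule, purely for illustration
    return any(word in text.lower() for word in offensive_markers)

def flag_rates(tweets_by_group: dict) -> dict:
    """Fraction of tweets flagged as abusive, per dialect-aligned group."""
    rates = {}
    for group, tweets in tweets_by_group.items():
        flagged = sum(classify_toxic(t) for t in tweets)
        rates[group] = flagged / len(tweets) if tweets else 0.0
    return rates

sample = {
    "AAE-aligned": ["example tweet one", "example tweet two"],    # hypothetical data
    "white-aligned": ["example tweet three", "example tweet four"],
}
print(flag_rates(sample))  # a large gap between groups indicates disparate impact
```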

Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
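
Perspective is exposed as a public REST API: a request submits a piece of text and asks for attribute scores such as TOXICITY, and the response returns a probability-style score between 0 and 1. As a rough sketch, assuming an API key with Google's Comment Analyzer API enabled and following the endpoint and payload shape in Google's published documentation, a request looks roughly like this:

```python
# Rough sketch of a Perspective API request for a TOXICITY score.
# Requires a Google API key with the Comment Analyzer API enabled.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "you are a terrible person"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")  # 0.0 (benign) to 1.0 (very toxic)
```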

Shortly after the release of Perspective, YouTube user Tormental posted a video of the program at work, alleging inconsistencies in how it scored comparable comments.

According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.

Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.

