Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments likely made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, researchers at Cornell discovered that such systems “flag” tweets likely written by black social media users more often than others, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team averred that the unexpected findings could be explained by “systematic racial bias” on the part of the human annotators who labeled the training data as offensive or not.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
One of the study’s authors said that “internal biases” may be to blame for why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
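The mechanism the authors describe is straightforward: a classifier trained on human-applied labels reproduces whatever patterns, including biases, those labels encode. Below is a minimal illustrative sketch of that effect using toy data and a simple bag-of-words model; the phrases, labels, and model are invented for demonstration and are not the study’s actual datasets or classifiers.

```python
# Illustrative only: a toy "abusive language" classifier showing how
# biased training labels propagate into predictions. The examples and
# labels below are hypothetical, not drawn from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated training data. If annotators disproportionately
# labeled one dialect's phrasing as abusive, the model learns that rule.
train_texts = [
    "you are a fool",        # labeled abusive
    "have a great day",      # labeled not abusive
    "that talk was trash",   # dialect-marked phrasing, labeled abusive
    "that talk was poor",    # same sentiment, labeled not abusive
]
train_labels = [1, 0, 1, 0]  # 1 = abusive, 0 = not abusive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The classifier now scores dialect-marked wording as more "abusive"
# than semantically equivalent standard wording.
for text in ["that movie was trash", "that movie was poor"]:
    prob = model.predict_proba([text])[0][1]
    print(f"{text!r}: P(abusive) = {prob:.2f}")
```

In a sketch like this, the word that appeared only in abusive-labeled examples drives the score up, regardless of what the sentence actually means; at the scale of real datasets, the same dynamic produces the disparities the study measured.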
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
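Perspective is exposed as a public web API that returns a toxicity score for a piece of text. The following is a sketch of a query against it; the endpoint and request shape follow Google’s published documentation at the time of writing and may have changed, and the API key shown is a placeholder you would need to obtain from Google.

```python
# A sketch of querying Google's Perspective API for a toxicity score.
# Requires a real API key; request/response shapes follow the API's
# public documentation and may change over time.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; obtain one from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "you are a fool"},
    "requestedAttributes": {"TOXICITY": {}},
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The summary score is a 0-1 estimate of how "toxic" readers
# would likely find the comment.
score = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")
```

Comparisons like the one described next amount to sending paired phrases through calls like this and comparing the returned scores.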
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, alleging that it rated comparable statements inconsistently.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.