Researchers from Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments purportedly made by minorities “at substantially higher rates” than remarks made by whites.
Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, Cornell researchers found that such systems “flag” tweets likely written by black social media users more often, according to Campus Reform.
The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”
The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”
The research team suggested that the unexpected findings could be explained by “systematic racial bias” on the part of the human annotators who helped label offensive content in the systems’ training data.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” reads the study’s abstract. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”
One of the study’s authors suggested that “internal biases” may explain why “we may see language written in what linguists consider African American English and be more likely to think that it’s something that is offensive.”
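In concrete terms, the disparity the study describes is a gap in per-group flag rates: the share of tweets a classifier marks as abusive, broken out by inferred dialect. The sketch below is a rough illustration of that measurement, not the study’s actual code; the toy keyword “classifier” and the dialect labels are placeholders for a trained model and a dialect-inference step.

```python
from collections import defaultdict

def flag_rates_by_group(tweets, classifier):
    """Fraction of tweets flagged as abusive within each dialect group.

    `tweets` is an iterable of (text, group) pairs, where `group` is a
    dialect label (e.g. "AAE" or "white-aligned"); `classifier` returns
    True when it flags a text as abusive.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for text, group in tweets:
        total[group] += 1
        if classifier(text):
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

# Toy example: a keyword "classifier" standing in for a trained model.
sample = [
    ("this is fine", "white-aligned"),
    ("what a terrible take", "white-aligned"),
    ("that was wild", "AAE"),
    ("no way that happened", "AAE"),
]
rates = flag_rates_by_group(sample, lambda t: "terrible" in t)
print(rates)  # {'white-aligned': 0.5, 'AAE': 0.0}
```

A gap between the groups’ rates on comparable content is exactly the “substantially higher rates” the abstract refers to.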
Automated technology for identifying hate speech is not new, nor are universities the only parties developing it. Two years ago, Google unveiled its own system called “Perspective,” designed to rate phrases and sentences based on how “toxic” they might be.
Shortly after the release of Perspective, YouTube user Tormental made a video of the program at work, purporting to show inconsistencies in how it scored comparable comments.
According to Tormental, the system rated prejudicial comments against minorities as more “toxic” than equivalent statements against white people.
Google’s system showed a similar discrepancy for bigoted comments directed at women versus men.
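Comparisons like Tormental’s are straightforward to reproduce: submit paired sentences that differ only in the group named, and compare the returned scores. Below is a minimal sketch, assuming a valid API key (the PERSPECTIVE_API_KEY value is a placeholder); the endpoint and request shape follow Google’s published Comment Analyzer v1alpha1 API.

```python
import requests

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + "PERSPECTIVE_API_KEY")

def toxicity(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The kind of paired comparison described above: identical sentences
# that differ only in the group named.
for sentence in ("I hate women", "I hate men"):
    print(sentence, toxicity(sentence))
```

Any difference between the two scores for otherwise identical sentences is the sort of discrepancy the video highlighted.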