News

Colleges Create AI to Identify ‘Hate Speech’ – Turns Out Minorities Are the Worst Offenders

By Daniel M

October 01, 2019

Researchers at Cornell University discovered that artificial intelligence systems designed to identify offensive “hate speech” flag comments likely made by minorities “at substantially higher rates” than remarks made by whites.

Several universities maintain artificial intelligence systems designed to monitor social media websites and report users who post “hate speech.” In a study published in May, the Cornell researchers found that such systems “flag” tweets that likely come from black social media users more often than others, according to Campus Reform.

The study’s authors found that, according to the AI systems’ definition of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.”

The study also revealed that “black-aligned tweets” are “sexist at almost twice the rate of white-aligned tweets.”

The research team suggested that the unexpected findings could be explained by “systematic racial bias” on the part of the human annotators who helped label offensive content.