News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.

