News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.
