News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a host of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.