News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.

The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries as well. It said it opposes any use of these systems.