
Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.

“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms meant to help police determine who should be granted bail, parole or probation, and to help judges make sentencing decisions, are potentially biased, opaque and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries too. It said it opposes any use of these systems.

Story cited here.
