
Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a whole flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions like anger, fear and surprise based on micro-expressions that are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.



The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.

