News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a host of entrepreneurial ventures are working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.



The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms meant to help police determine who should be granted bail, parole or probation, and to help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.

