
Face-Reading AI Will Tell Police When Suspects Are Hiding Truth


American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.



The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.
