
Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a whole flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.

