News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

The U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, but the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions that are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms used to help police determine who should be granted bail, parole or probation, and to help judges make sentencing decisions, are potentially biased, opaque and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.

