Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a host of entrepreneurial ventures are working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.
