Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.

“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries as well. It said it opposes any use of these systems.
