American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a crop of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.









