American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
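To give a rough sense of what an emotion classifier of this kind does, here is a minimal sketch in Python with PyTorch. It is not Facesoft’s system: the network architecture, the 48x48 grayscale input size (borrowed from the public FER-2013 dataset), and the label set are all illustrative assumptions.

```python
# Hypothetical sketch: classify a face crop into a handful of emotion labels.
# Architecture, input size, and labels are assumptions, not Facesoft's design.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "fear", "surprise", "neutral"]  # assumed label set

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = EmotionCNN().eval()
    face = torch.rand(1, 1, 48, 48)  # stand-in for one grayscale face crop
    with torch.no_grad():
        probs = torch.softmax(model(face), dim=1)
    for label, p in zip(EMOTIONS, probs[0].tolist()):
        print(f"{label}: {p:.2f}")
```

In practice a system like the one described would be trained on a large labeled face database and would first detect and align faces before classifying them; the sketch above shows only the final classification step.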
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries, too. It said it opposes any use of these systems.









