American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.