American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent and a whole flock of entrepreneurial ventures are working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
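Facesoft has not published details of its model, but systems of this kind generally follow a familiar pattern: a face is detected and cropped, then scored by a convolutional neural network against a fixed set of emotion labels. The sketch below, in Python with PyTorch, illustrates that pattern only; the ResNet-18 backbone, the seven emotion labels and the 224x224 input are assumptions drawn from common facial-expression-recognition setups, not from Facesoft's product, and genuine micro-expression analysis would typically work on high-frame-rate video rather than a single still image.

```python
# Minimal sketch of a face-to-emotion classifier. Illustrative only:
# the backbone, the label set and the input size are assumptions,
# not a description of Facesoft's system.
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

# A convolutional network repurposed as a 7-way emotion classifier.
# In practice the backbone would be pretrained and then fine-tuned on
# labelled facial-expression images.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))
model.eval()

# Stand-in for a detected, cropped and normalised face (1 x RGB x 224 x 224).
face = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    scores = torch.softmax(model(face), dim=1).squeeze(0)

for label, p in zip(EMOTIONS, scores.tolist()):
    print(f"{label}: {p:.3f}")
```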
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police decide who should be granted bail, parole or probation, and at helping judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.