American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries as well. The group said it opposes any use of these systems.