American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a whole flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.
Story cited here.