American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.
Story cited here.