More than four decades ago, American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system for monitoring crowds to detect the evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.