American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a number of entrepreneurial ventures are working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system for monitoring crowds to detect the evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The group, the Partnership on AI, found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.