American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.