American psychologist Paul Ekman’s research on facial expressions spawned an entire profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.