American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system for monitoring crowds to detect the evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.