American psychologist Paul Ekman’s research on facial expressions spawned an entire profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a host of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions that are often invisible to the casual observer.
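The article does not describe how Facesoft's model is built, but systems of this kind are typically framed as image classifiers that map a face crop to an emotion label. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the architecture, emotion labels, input size and model name are all assumptions, not Facesoft's actual software.

```python
# Illustrative sketch only: a small convolutional classifier for facial-emotion labels.
# NOT Facesoft's system; the architecture, labels and input size are assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "fear", "surprise", "happiness", "sadness", "disgust", "neutral"]

class EmotionNet(nn.Module):
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        # Two convolutional blocks followed by a linear classifier.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)  # assumes 48x48 grayscale crops

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify one 48x48 grayscale face crop (a random tensor stands in for a real image).
model = EmotionNet().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 1, 48, 48))
print(EMOTIONS[logits.argmax(dim=1).item()])
```

In practice such a model would be trained on a large labeled face dataset; detecting fleeting micro-expressions would additionally require high-frame-rate video and temporal modeling, which this single-image sketch does not attempt.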
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms among police has stirred controversy recently. The Partnership on AI, a research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc., published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The group found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of them.
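The report's bias concern is often checked with simple audits that compare how a risk tool treats different groups. The following is a minimal, hypothetical sketch of one such check, a false-positive-rate gap between two groups; the data and group names are invented for illustration and do not come from any real system.

```python
# Illustrative sketch only: one simple fairness audit (false-positive-rate gap between groups)
# of the kind used to argue that risk-scoring tools may be biased.
# The records below are made up; no real system or dataset is being evaluated.
from collections import defaultdict

# Each record: (group label, predicted "high risk" flag, actually reoffended)
records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    negatives = [flagged for _, flagged, reoffended in rows if not reoffended]
    return sum(negatives) / len(negatives) if negatives else 0.0

by_group = defaultdict(list)
for group, flagged, reoffended in records:
    by_group[group].append((group, flagged, reoffended))

rates = {g: false_positive_rate(rows) for g, rows in by_group.items()}
print(rates)  # a large gap between groups is one signal the tool treats them unequally
```

Audits like this capture only one narrow notion of fairness; the report's broader objections about opacity and effectiveness cannot be settled by a single metric.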
Story cited here.









