American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a number of entrepreneurial ventures are working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.