American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a number of entrepreneurial ventures are working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.
Story cited here.