American psychologist Paul Ekman’s research on facial expressions spawned a whole new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries too. It said it opposes any use of these systems.