More than four decades ago, American psychologist Paul Ekman’s research on facial expressions spawned a new profession: human lie detectors. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions that are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.
Story cited here.









