American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
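Facesoft has not published technical details of its system. As a rough illustration of how emotion recognition from face images generally works, the sketch below defines a small convolutional classifier over aligned face crops that outputs probabilities for a handful of emotion categories. The architecture, input size and label set are assumptions chosen for demonstration, not a description of Facesoft's product.

```python
# Illustrative sketch only: a minimal facial-emotion classifier of the general
# kind described above. Architecture, labels and input size are assumptions;
# Facesoft's actual system is not public.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "fear", "surprise", "happiness", "sadness", "disgust", "neutral"]

class EmotionNet(nn.Module):
    """Small convolutional network over 48x48 grayscale face crops."""
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = EmotionNet().eval()
    face_crop = torch.rand(1, 1, 48, 48)  # stand-in for a detected, aligned face image
    with torch.no_grad():
        probs = torch.softmax(model(face_crop), dim=1)
    label = EMOTIONS[int(probs.argmax(dim=1))]
    print(f"predicted emotion: {label} ({probs.max():.2f})")
```

In practice such a network would be trained on a large labeled dataset of face crops; detecting fleeting micro-expressions, as described in the quote above, would additionally require analyzing sequences of video frames rather than single images.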
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms, which aim to help police determine who should be granted bail, parole or probation and to help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.
Story cited here.