Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company's system can identify emotions such as anger, fear and surprise based on micro-expressions that are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and at helping judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries too. It said it opposes any use of these systems.
