Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.

The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.
