
Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of them created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.



The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries. It said it opposes any use of these systems.

