News

Face-Reading AI Will Tell Police When Suspects Are Hiding Truth

American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.


“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.


The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.


The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.

