American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a whole flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased and opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries, too. It said it opposes any use of these systems.