American psychologist Paul Ekman’s research on facial expressions spawned a new profession of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.
While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent, and a flock of entrepreneurial ventures is working to make it more efficient and less prone to false signals.
Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which were created by an AI system modeled on the human brain, The Times reported. The company’s system can identify emotions such as anger, fear and surprise based on micro-expressions, which are often invisible to the casual observer.
“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.
Facesoft has approached police in Mumbai about using the system to monitor crowds and detect evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.
The use of AI algorithms by police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.
The Partnership on AI found that such systems are already in widespread use in the U.S. and were gaining a foothold in other countries too. It said it opposes any use of these systems.