The Biden administration is pursuing regulations for artificial intelligence systems that would require government audits to ensure they produce trustworthy outputs, which could include assessments of whether AI is promoting “misinformation” and “disinformation.”
Alan Davidson, assistant secretary of communications and information at the Commerce Department’s National Telecommunications and Information Administration (NTIA), said in a speech at the University of Pittsburgh this week that government audits of AI systems are one way to build trust in this emerging technology.
“Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy,” he said in his prepared remarks. “Policy was necessary to make that happen in the finance sector, and it may be necessary for AI.”
“Our initiative will help build an ecosystem of AI audits, assessments and other mechanisms to help assure businesses and the public that AI systems can be trusted,” he added.
Davidson said audits would help regulators assess whether AI systems perform as advertised and respect privacy, and whether they lead to “discriminatory outcomes or reflect unacceptable levels of bias.”
Audits may also be used to determine whether AI systems “promote misinformation, disinformation, or other misleading content,” he added.
The goal of policing disinformation created by AI is likely to present both technical and political challenges to federal regulators. Government officials are only just beginning to consider how to regulate AI – for example, NTIA just this week said it was seeking public comment on how it should approach AI regulation.
And the issue of misinformation and disinformation has been highly controversial under the Biden administration, since Republicans and Democrats seem incapable of agreeing on a definition. Last year, the Department of Homeland Security set up a “Disinformation Governance Board” but quickly dismantled it after withering criticism from Republicans, who worried that the board would simply attack views and political opinions that run contrary to the Biden administration’s positions.
As difficult as the task may be, worries about biased AI have been growing over the last year as the technology advances, and the issue could prove to be a fertile area for government regulation. Tesla and SpaceX CEO Elon Musk made it clear last year that he was worried AI systems like ChatGPT were being designed to avoid any output that might offend anyone.
Musk said this was a recipe for a “woke” AI. “The danger of training AI to be woke – in other words, lie – is deadly,” he tweeted in December.
Others, like Davidson of the NTIA, are worried about bias within AI that might lead to unfair outcomes for minorities.
“Hidden biases in mortgage-approval algorithms have led to higher denial rates for communities of color,” he said in Pittsburgh. “Algorithmic hiring tools screening for ‘personality traits’ have been found non-compliant with the Americans with Disabilities Act.”
So far, the Biden administration has put out some preliminary guidelines on how AI should be developed, such as an AI Bill of Rights and a voluntary risk management framework published by the National Institute of Standards and Technology. Other agencies are also dipping a toe in – the Federal Trade Commission, for example, has warned AI developers that it is watching to make sure consumers aren’t being sold on imaginary benefits that AI can’t yet achieve.
Davidson said regulatory proposals are emerging quickly in part because “AI technologies are moving very fast.”
“The Biden administration supports the advancement of trustworthy AI,” he said. “We are devoting a lot of energy to that goal across government.”