
Free Speech Puts U.S. On ‘A Collision Course’ With Global Limits On Big Tech.


WASHINGTON — When Mark Zuckerberg of Facebook called for regulating harmful internet content in an opinion column last month, Republicans in Washington expressed outrage that he was calling on the government to regulate speech.

Within hours, the company’s top lobbyists started spreading another message to conservatives: Don’t take his suggestion too seriously.

In a flurry of calls and emails to regulators, consumer groups and think tanks — as well as in person, at a weekly breakfast gathering of influential conservatives — the operatives said Mr. Zuckerberg was not encouraging new limits on speech in the United States. His target was mostly overseas regulators, they said, and he has other ideas for Washington.


“Mark believes that by updating the rules for the internet, we can preserve what’s best about it — the freedom for people to express themselves and entrepreneurs to build new things,” one Facebook lobbyist wrote in an email widely distributed to conservative groups. The lobbyists’ actions were described by two people who encountered the outreach and shared the emails with The New York Times but would speak only on the condition of anonymity.

Mr. Zuckerberg’s call for action, and his lobbyists’ response, encapsulate why the United States is on an island of its own when it comes to managing violent and racist speech online.

Britain, Germany, Australia, New Zealand and India have adopted or are considering laws that require stricter content moderation by tech platforms. But none of them need to work around free speech protections like the First Amendment in the United States.

For tech businesses like Facebook, that means navigating fraught political terrain — and trying to play on both sides of the issue.

The companies face increasing pressure, particularly from Democrats but also from some Republicans, to stem the spread of messages that can lead to real-world violence. But many Republicans, including President Trump, complain that tech companies like Facebook and Google already curtail too many voices — and that any new limits would only make matters worse. The two parties recently had dueling congressional hearings to help make their case.


“American law and judges are united, but all the cultural and social pressures around the world are in the opposite direction,” said Jeffrey Rosen, president of the National Constitution Center in Philadelphia. “The protections of the American Constitution and the demands of countries and consumers around the world are on a collision course.”

Navigating the various approaches to speech will require different solutions, said Kevin Martin, Facebook’s head of lobbying in the United States.

“Mark and Facebook recognize, and support, and are strong defenders of the First Amendment,” Mr. Martin said. That nuance was lost because the opinion piece, which ran in The Washington Post, The Independent in Britain and elsewhere, was written to speak to a global audience, he said.

Tech companies, as private businesses, have the right to choose what speech exists on their sites, much as a newspaper can pick which letters to the editor to publish.

The sites already remove some content for breaking their rules. Facebook and Google employ tens of thousands of content moderators to root out hate speech and false information, for example. The companies also use artificial intelligence and machine learning to identify content that violates their terms of service.

But many recent events, like the mosque shootings in New Zealand, show the limits of those resources and tools, and have led to more demands for regulation. A live video by a gunman in the New Zealand massacre was viewed 4,000 times before Facebook was notified. By then, copies of the video had been uploaded on several sites like 8Chan, and Facebook struggled to take down slightly altered versions.

“For the first time, I’m seeing the left and right agree that something has gotten out of control, and there is a lot of consensus on the harms created by fake news, terrorist content and election interference,” said Nicole Wong, a former deputy chief technology officer in the Obama administration.


Getting consensus on basic definitions of what constitutes harmful content, though, has been difficult. And American lawmakers have been little help.

In his opinion column, Mr. Zuckerberg outlined several ideas to rid sites of harmful content. He noted that the company was putting together a group of outsiders who would evaluate harmful speech on its services and whether it should be removed. He also suggested that the government help define harmful speech.

“Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum,” he wrote.

Brendan Carr, a Republican commissioner at the Federal Communications Commission, which oversees the telecommunications, broadband and television industries, responded on Twitter that such regulation would be a direct violation of the First Amendment.

“They are trying to pass the buck,” Mr. Carr said in a later interview. “But he is asking for the government to censor speech.”

Brad Parscale, President Trump’s 2020 campaign manager, wrote on Twitter: “Every single regulatory measure Zuckerberg is calling for would benefit his company, his political allies, and himself personally.”

Some civil rights groups also raised concerns. “It is extremely difficult to define ‘harmful content,’ much less implement standards consistently and fairly for billions of users, across the entire spectrum of contemporary thought and belief,” wrote Corynne McSherry and Gennie Gebhart of the Electronic Frontier Foundation, a nonprofit group that advocates open and free expression online. Their article was headlined “Mark Zuckerberg Does Not Speak for the Internet.”

So far, Facebook has stood alone in its call for regulation of harmful speech. Google, Amazon, Twitter and Apple did not comment for this article but have been stalwart in their support of free speech online.

With a limited appetite for the government to step in and ban certain content online, regulators and some lawmakers have increasingly warned that they will crack down on internet companies that do a poor job of enforcing their own policies. To do that, the government would most likely need to take away a legal immunity for internet companies, established in 1996, that shields them from liability for content posted by users.


The law, Section 230 of the Communications Decency Act, is often held up as central to the tech industry’s growth over the past two decades. But lawmakers have begun to weigh whether the legal protection extends too far. Last year, Congress passed a law that weakened it by holding social networks and other websites liable for knowingly hosting sex trafficking ads, the first exception carved into the law.

Senator Ron Wyden, a Democrat from Oregon and an author of Section 230, said it allowed sites to moderate content without fear of liability.

But even Mr. Wyden has said the law was not intended to protect tech giants. Senator Joe Manchin III, Democrat of West Virginia, has warned that he would pursue a carve-out of Section 230 for sites that hosted the sale of opioids.

“If these platforms don’t fix their problems, you bet I’m looking at 230,” Mr. Manchin said. “Everything is on the table.”

The building frustration in Washington is being watched with anxiety by big and small tech companies alike, as well as by investors. Some warn that new regulations would hurt start-ups, which lack the resources of companies like Facebook and Google to hire staff to enforce the rules.

“Be mad at tech, that’s understandable,” Alex Feerst, head of legal policy for the online publishing site Medium, said at a recent event on the “unintentional harms” of speech regulations.

But adding new restrictions on online speech, or new liabilities for platforms, would come with downsides, he warned.

“Companies will simply over-remove out of risk aversion,” he said.

