The New York Times warns that free speech is under threat

Are federal officials violating the First Amendment when they pressure social media to crack down on “disinformation”? That’s the question raised by a federal lawsuit filed last May by the state attorneys general of Missouri and Louisiana.

The New York Times reporter Steven Lee Myers warns that the lawsuit “could derail the Biden administration’s already tortuous efforts to combat disinformation.” He worries that “the First Amendment has become, for better or for worse, an obstacle to almost all government efforts to quell a problem that, in the case of a pandemic, threatens public health and, in the case of election integrity, even democracy itself.” As Myers frames the question, freedom of speech is a threat to “public health” and “even democracy itself.”

It cannot be denied that when people are free to express their opinions, no matter how misguided, ill-informed or hateful, they will say things that are misleading, patently false or divisive. The First Amendment nevertheless guarantees their right to say these things, based on the premise that the dangers of unfettered speech are preferable to the dangers of government attempts to regulate speech in what it perceives to be the public interest.

Myers may disagree with that calculation or recoil at its implications. But the First Amendment plainly prohibits the government from banning speech it deems dangerous to public health or democracy. The plaintiffs in Missouri v. Biden, who include individual social media users represented by the New Civil Liberties Alliance (NCLA), argue that federal officials have violated the First Amendment by trying to achieve that goal indirectly, blurring the distinction between private moderation and state censorship. The government “can’t use third parties to do what it can’t do,” NCLA attorney Jenin Younes told the Times.

Myers doesn’t accept that premise. He believes the private communications that the plaintiffs cite as evidence of censorship by proxy actually show that social media platforms made independent decisions about which speech and speakers they were willing to allow on their platforms.

Those emails were produced during discovery in response to orders from U.S. District Judge Terry A. Doughty, whom Myers portrays as biased against the Biden administration. He notes that Doughty was appointed by Donald Trump in 2017 and “previously blocked the Biden administration’s national vaccination mandate for health care workers and overturned a ban on new federal leases for oil and gas drilling.” In this case, Myers says, Doughty “granted the plaintiffs’ request for extensive discovery even before considering their request for a preliminary injunction.”

Myers also suggests that the plaintiffs are motivated by dubious ideological grievances. “Their claims,” he says, “reflect a narrative that has taken root among conservatives that the nation’s social media companies have colluded with government officials to discriminate against them, despite evidence to the contrary.”

While Myers implies that the case and Doughty’s handling of it were driven by partisan animus, he notes that “many of the examples cited in the lawsuit also involved official actions taken during the Trump administration, including efforts to combat disinformation ahead of the 2020 presidential election.” This suggests that the plaintiffs’ objections to government interference in moderation decisions go beyond a desire to score political points.

Emails uncovered by the lawsuit, as well as internal Twitter communications Elon Musk shared with reporters, show that social media platforms have generally been receptive to concerns about content raised by public health and law enforcement officials. They responded promptly to takedown requests and solicited additional suggestions. The tone of these exchanges was, for the most part, cordial and collaborative.

The plaintiffs in Missouri v. Biden see that coziness as troubling. But Myers emphasizes the exceptions. “The growing trail of internal communications,” he writes, “suggests a more intricate and agonizing struggle between government officials frustrated by the spread of dangerous falsehoods and corporate officials who resented and often resisted the government’s pleas.”

Myers admits that “government officials” tried to prevent “the spread of dangerous falsehoods” by encouraging Facebook et al. to delete certain posts and banish certain users. He also acknowledges that the people running those platforms “resented and often resisted” those efforts. But he doesn’t think those facts justify concern that officials used their positions to shape moderation decisions, resulting in less speech than the platforms otherwise would have allowed.

Myers obscures the context of these exchanges, which is important in assessing the extent to which they amounted to government-driven suppression of disfavored speech. He notes a June 16, 2021, text message in which Nick Clegg, Facebook’s vice president of global affairs, “irritably” told Surgeon General Vivek Murthy, “It’s not nice to be accused of killing people.”

According to Myers, the remark was prompted by Murthy’s conclusion that “misinformation” about COVID-19 had resulted in “avoidable illness and death,” which led him to demand “greater transparency and accountability” from social media companies. Myers fails to mention that Clegg sent the message after President Joe Biden publicly accused Facebook and other platforms of “killing people” by failing to suppress misinformation about COVID-19 vaccines. Myers also fails to mention that Murthy had just issued an advisory calling for “whole of society” efforts to combat the “urgent threat to public health” posed by “health misinformation,” including, presumably, “appropriate legislative and regulatory measures.”

Myers also omits something else Clegg said in that text message: He was “keen to find a way to de-escalate and work together.” What Myers presents as evidence that Facebook “resented and often resisted” the government’s entreaties, in other words, is actually evidence that the platform was desperately trying to assuage the president’s anger.

To that end, Facebook did what Biden and Murthy demanded. “Thanks again for taking the time to meet earlier today,” Clegg said in an email to Murthy a week later. “I wanted to make sure you saw the steps we took just this past week to adjust policies on what we remove with respect to misinformation, as well as steps taken to further address the ‘disinfo dozen.’” He boasted that his company had removed objectionable pages, groups, and Instagram accounts; made several pages and profiles “harder to find on our platform”; and “expanded the group of false claims we remove to keep up with recent trends.”

As White House spokeswoman Robyn M. Patterson describes it, the administration is merely asking Facebook et al. to enforce “their own policies to deal with misinformation and disinformation.” But federal officials have also pressured social media platforms to expand their definitions of those categories. And according to Clegg, Facebook responded to Biden’s homicide charge by “adjust[ing] policies on what we remove with respect to misinformation.”

Myers thinks there’s nothing to see here. “The legal challenge for the plaintiffs is to show that the government used its statutory or regulatory authority to punish companies that failed to comply,” he says. But the companies generally did “comply,” and it is no stretch to suggest that they did so because they foresaw how that “statutory or regulatory authority” might be deployed against them.

“As evidence of the pressure,” Myers writes, “the lawsuit cites instances where administration officials publicly suggested that the companies might face more regulation.” In her interview with the Times, for example, Patterson “repeated President Biden’s call for Congress to reform Section 230 of the Communications Decency Act, a law that broadly shields Internet companies from liability for what users post on their sites.” But Myers suggests that fear of losing those protections is implausible, because the Biden administration “couldn’t overturn the law on its own” and “Congress has shown little appetite to revisit the issue, despite calls from Mr. Biden and others for greater accountability for social media companies.”

Since curtailing or repealing Section 230 is a bipartisan goal, it is hardly crazy to think that angering federal officials by rebuffing their requests to “work together” would make such legislation more likely. Complaints of rampant misinformation would bolster Biden’s argument that “greater accountability” requires greater exposure to liability, and Congress might be more inclined to agree.

Even without new legislation, the administration could make life difficult for social media companies through regulation, litigation and antitrust enforcement. As Myers sees it, that wouldn’t be a problem unless officials threatened companies with retaliation and then followed through on that threat. That standard would leave the government free to regulate speech on the Internet as long as it does not engage in explicit extortion.
