Experts in public health praised Twitter’s efforts to tamp down on covid misinformation. U.S. Surgeon General Dr. Vivek Murthy cited Twitter’s policy as an example of how tech companies should go about combating misinformation in a 2021 advisory report to technology platforms.
“Health misinformation is a serious threat to public health,” Murthy wrote. “It can cause confusion, sow mistrust, harm people’s health, and undermine public health efforts. Limiting the spread of health misinformation is a moral and civic imperative that will require a whole-of-society effort.”
However, the company struggled to police misinformation accurately, at times labeling accurate information about covid-19 as misinformation and banning scientists and researchers who attempted to warn the public about the long-term harm of covid on the body. As of last weekend, many tweets promoting anti-vaccine content and covid misinformation remained on the platform.
“That is a real danger of setting yourself up with the task of deciding what is true and what is false,” Emily Dreyfuss, co-author of “Meme Wars: The Untold Story of the Online Battles Upending Democracy in America,” said of Twitter’s fumbles surrounding covid misinformation.
But she said that was all the more reason to improve the process and policies, not scrap them altogether.
“During the pandemic, social media companies finally realized misinformation is a life or death issue because medical misinformation about covid had such dire consequences it could not be ignored,” she said. “Musk getting rid of these policies is backtracking on years and years of painfully won lessons on how to make the internet safe and not harmful.”
“I’m doing internal medicine and I see a lot of patients in primary care clinic,” said Max Jordan Nguemeni, a resident at Brigham and Women’s Hospital in Boston. “A lot of what I do when I offer vaccines is combating disinformation. The spread of misinformation online on platforms people rely on for news, like Twitter, worries me, especially when I think about my patients who are more vulnerable, older, or non-English-speaking.”
The move comes as Musk appears to be shifting more of the responsibility for policing misinformation to users themselves through the company’s Birdwatch program, which allows Twitter users to rate and add corrections to tweets. Lately, however, as Birdwatch has scaled to more users, incorrect information about covid has been added to tweets simply because a mass of users upvoted it. This is dangerous, Dreyfuss said.
“Musk is scrapping a misinformation policy that was imperfect, and replacing it with a new system that’s much more easily hacked and gamed,” she said. “What he’s doing with this policy is washing his hands of Twitter’s responsibility of determining fact or fiction and giving it over to the users of Twitter, which we know is not going to be an effective strategy at all. They will make true whatever they want to make true.”
Yoel Roth, Twitter’s former head of trust and safety, said Musk’s decision to stop policing covid misinformation was “bad and damaging” and likely not “tenable going forward.” “You simply cannot do that if you are operating what you want to be a commercially viable consumer service,” he said.
Musk himself has spread covid misinformation. He famously claimed in 2020 that covid cases would be “close to zero” by that April. He also told SpaceX workers in March 2020, as the world was just beginning to shut down over the pandemic, that they were more likely to die in a car crash than from covid. That June, he reopened the Tesla plant in Fremont, Calif., against county health and safety orders, but promised employees they could stay home without penalty if they felt ill. Employees who did stay home sick with covid, however, were promptly fired.
Musk also called virus restrictions “fascist” on a 2020 Tesla earnings call. During a podcast appearance in September 2020, Musk said he would not get vaccinated and downplayed covid’s death toll. “Everybody dies,” he said.