Shadowbanning is real: How social media decides who to silence

Social media companies all decide which users’ posts to amplify — and reduce. Here’s how we get them to come clean about it.

(Daniel Diosdado For The Washington Post)


For eight years, art teacher Jennifer Bloomer used Instagram to share activism-themed artwork and announce classes. Then last fall, as she tried to promote a class called “Raising anti-racist kids through art,” her online megaphone stopped working.

It’s not that her account got suspended. Rather, she noticed her likes dwindling and the number of people seeing her posts dropping by as much as 90 percent, according to her Instagram dashboard.


Bloomer, it appears, had been “shadowbanned,” a form of online censorship where you’re still allowed to speak, but hardly anyone gets to hear you. Even more maddening, no one tells you it’s happening.

“It felt like I was being punished,” says Bloomer, 42, whose Radici Studios in Berkeley, Calif., struggled with how to sign up students without reaching them through Instagram. “Is the word anti-racist not okay with Instagram?”

She never got answers. Nor have countless other people who’ve experienced shadowbans on Instagram, Facebook, TikTok, Twitter, YouTube and other forms of social media.

Like Bloomer, you might have been shadowbanned if one of these companies has deemed what you post problematic, but not enough to ban you. There are signs, but rarely proof — that’s what makes it shadowy. You might notice a sudden drop in likes and replies, your Facebook group appears less in members’ feeds or your name no longer shows in the search box. The practice made headlines this month when Twitter owner Elon Musk released evidence intended to show shadowbanning was being used to suppress conservative views.

Two decades into the social media revolution, it’s now clear that moderating content is important to keep people safe and conversation civil. But we the users want our digital public squares to use moderation techniques that are transparent and give us a fair shot at being heard. Musk’s exposé may have cherry-picked examples to cast conservatives as victims, but he is right about this much: Companies need to tell us exactly when and why they’re suppressing our megaphones, and give us tools to appeal the decision.

The question is, how do you do that in an era in which invisible algorithms now decide which voices to amplify and which to reduce?

First we have to agree that shadowbanning exists. Even victims are filled with self-doubt bordering on paranoia: How can you know if a post isn’t getting shared because it’s been shadowbanned or because it isn’t very good? When Black Lives Matter activists accused TikTok of shadowbanning during the George Floyd protests, TikTok said it was a glitch. As recently as 2020, Instagram’s head, Adam Mosseri, said shadowbanning was “not a thing” on his social network, though he appeared to be using a historical definition of selectively choosing accounts to mute.

Shadowbanning is real. While the term may be imprecise and sometimes misused, most social media companies now employ moderation techniques that limit people’s megaphones without telling them, including suppressing what companies call “borderline” content.

And even though shadowbanning is a popular Republican talking point, it has a much wider impact. A recent survey by the Center for Democracy and Technology found nearly one in 10 Americans on social media suspect they’ve been shadowbanned. When I asked about it on Instagram, I heard from people whose main offense appeared to be living or working on the margins of society: Black creators, sex educators, fat activists and drag performers. “There is this looming threat of being invisible,” says Brooke Erin Duffy, a professor at Cornell University who studies social media.

Social media companies are also starting to acknowledge it, though they prefer to use terms such as “deamplification” and “reducing reach.” On Dec. 7, Instagram unveiled a new feature called Account Status that lets its professional users know when their content has been deemed “not eligible” to be recommended to other users, and lets them appeal. “We want people to understand the reach their content gets,” says Claire Lerner, a spokeswoman for Facebook and Instagram parent Meta.

It’s a very good, and very late, step in the right direction. Unraveling what happened to Bloomer, the art teacher, helped me see how we can have a more productive understanding of shadowbanning — and also points to some ways we could hold tech companies accountable for how they do it.

Seek out Bloomer’s Instagram profile, filled with paintings of people and progressive causes, and you’ll find nothing actually got taken down. None of her posts were flagged for violating Instagram’s “community guidelines,” which spell out how accounts get suspended. She could still speak freely.

That’s because there’s an important difference between Bloomer’s experience and how we typically think about censorship. The most common form of content moderation is the power to remove. We all understand that big social media companies delete content or ban people, such as @realDonaldTrump.

Shadowbanning victims experience a kind of moderation we might call silent reduction, a term coined by Tarleton Gillespie, author of the book “Custodians of the Internet.”

“When people say ‘shadowbanning’ or ‘censorship’ or ‘pulling levers,’ they’re trying to put into words that something feels off, but they can’t see from the outside what it is, and feel they have little power to do anything about it,” Gillespie says. “That’s why the language is imprecise and angry — but not wrong.”

Reduction happens in the least-understood part of social media: recommendations. These are the algorithms that sort through the endless sea of photos, videos and comments to curate what shows up in our feeds. TikTok’s personalized “For You” section does such a good job of picking the right stuff, it’s got the world hooked.

Reduction occurs when an app puts its thumb on the algorithmic scales to say certain topics or people should get seen less.

“The single biggest reason someone’s reach goes down is how interested others are in what they’re posting — and as more people post more content, it becomes more competitive as to what others find interesting. We also demote posts if we predict they likely violate our policies,” Meta’s Lerner says.

Reduction started as an effort to tamp down spam, but its use has expanded to content that doesn’t violate the rules but gets close to it, from miracle cures and clickbait to false claims about Sept. 11 and dangerous stunts. Facebook documents brought forth by whistleblower Frances Haugen revealed a complex system for ranking content, with algorithms scoring posts based on factors such as their predicted risk to societal health or their potential to be misinformation, and then demoting them in the Facebook feed.
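
To make that mechanism concrete, here is a minimal sketch of what a scoring-and-demotion step in a feed-ranking pipeline could look like. The factor names, weights and thresholds are illustrative assumptions, not Meta’s actual system.

```python
# Hypothetical sketch of a "reduction" step in feed ranking.
# Factor names, weights and thresholds are illustrative assumptions,
# not Meta's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    engagement_score: float        # predicted interest from likes, shares, comments
    misinfo_probability: float     # classifier output, 0.0 to 1.0
    borderline_probability: float  # how close the post is to violating policy


def rank_score(post: Post) -> float:
    """Start from predicted engagement, then demote risky posts."""
    score = post.engagement_score
    if post.misinfo_probability > 0.8:
        score *= 0.2   # heavy demotion for likely misinformation
    elif post.borderline_probability > 0.6:
        score *= 0.5   # milder demotion for "borderline" content
    return score


def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # Higher score means shown earlier; demoted posts sink without being removed.
    return sorted(posts, key=rank_score, reverse=True)[:limit]
```

The key property is that demoted posts are never deleted; they simply sort lower, so fewer people ever scroll far enough to see them.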

Musk’s “Twitter Files” expose some new details on Twitter’s reduction systems, which it internally called “visibility filtering.” Musk frames this as an inherently partisan act — an effort to tamp down right-leaning tweets and disfavored accounts such as @libsoftiktok. But it is also evidence of a social network wrestling with where to draw the lines for what not to promote on important topics that include intolerance for LGBTQ people.

Meta and Google’s YouTube have most clearly articulated their effort to tamp down the spread of problematic content, each dubbing it “borderline.” Meta CEO Mark Zuckerberg has argued it is important to reduce the reach of this borderline content because otherwise its inherent extremeness makes it more likely to go viral.

You, Zuckerberg and I might not agree about what should count as borderline, but as private companies, social media firms can exercise their own editorial judgment.

The problem is, how do they make their choices visible enough that we will trust them?

How you get shadowbanned

Bloomer, the art teacher, says she never got notice from Instagram that she’d done something wrong. There was no customer service agent who would take a call. She had to do her own investigation, scouring data sources like the Insights dashboard Instagram offers to professional accounts.

She was angry and assumed it was the product of a decision by Instagram to censor her fight against racism. “Instagram seems to be taking a stand against the free class we have worked so hard to create,” she wrote in a post.

It’s my job to investigate how tech works, and even I could only guess what happened. At the time her traffic dropped, Bloomer had attempted to pay Instagram to boost her post about the “raising anti-racist kids” art class as an ad. Instagram rejected that request, saying it was “political.” (Instagram requires that people who run political ads, including ones about social issues, go through an authorization process.) When she changed the phrase to “inclusive kids,” the ad got approved.

Is it possible that the ad system’s reading of “anti-racist” ended up flagging her whole account as borderline, and thus no longer recommendable? Instagram’s vague “recommendation guidelines” say nothing about social issues, but do specify it won’t recommend accounts that have been banned from running ads.

I asked Instagram. It said that ad rejection didn’t impact Bloomer’s account. But it wouldn’t tell me what happened to her account, citing user privacy.

Most social networks just leave us guessing like this. Many of the people I spoke with about shadowbanning live with a kind of algorithmic anxiety, not sure about what invisible line they might have crossed to warrant being reduced.

Not coming clean also hurts the companies. “It prevents users from knowing what the norms of the platform are — and either act within them, or if they don’t like them, leave,” says Gabriel Nicholas, who conducted CDT’s research on shadowbanning.

Some people think the key to avoiding shadowbans is to use workarounds, such as not using certain images, keywords or hashtags, or by using coded language known as algospeak.

Perhaps. But recommendation systems, trained through machine learning, can also just make dumb mistakes. Nathalie Van Raemdonck, a PhD student at the Free University of Brussels who researches disinformation, told me she suspects she got shadowbanned on Instagram after a post of hers countering vaccine misinformation got inaccurately flagged as containing misinformation.

As a free-speech issue, we should be particularly concerned that there are some groups that, just based on the way an algorithm understands their identity, are more likely to be interpreted as crossing the line. In the CDT survey, the people who said they were victims were disproportionately male, Republican, Hispanic, or non-cisgender. Academics and journalists have documented shadowbanning’s impact on Black and trans people, artists, educators and sex workers.

Case in point: Syzygy, a San Francisco drag performer, told me they noticed a significant drop in likes and in people viewing their posts after posting a photo of themselves, presenting as female with digital emoji stickers over their private areas, throwing a disco ball into the air.

Instagram’s guidelines say it will not recommend content that “may be sexually explicit or suggestive.” But how do its algorithms read the body of someone in drag? Instagram says its technology is trained to find female nipples, which are allowed only in specific circumstances such as women actively engaged in breastfeeding.
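
As a rough illustration of how an automated check like that could gate recommendations, here is a hypothetical sketch; the classifier score, threshold and allowed contexts are assumptions for illustration, not Instagram’s real model.

```python
# Hypothetical sketch of how an automated nudity check could gate
# recommendation eligibility. The score, threshold and allowed contexts
# are illustrative assumptions, not Instagram's actual system.

ALLOWED_CONTEXTS = {"breastfeeding"}  # example of a permitted circumstance


def recommendable(nudity_score: float, detected_context: str | None) -> bool:
    """Return False when the image looks 'sexually suggestive' to the model,
    unless it falls into an explicitly allowed context."""
    if nudity_score < 0.7:  # below threshold: treated as fine
        return True
    return detected_context in ALLOWED_CONTEXTS
```

A model like this has no concept of drag; it only sees how closely an image matches what it was trained to flag, which is exactly where misreadings of identity creep in.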

Rebuilding our trust in social media isn’t as simple as passing a law saying social media companies can’t make choices about what to amplify or reduce.

Reduction is actually useful for content moderation. It lets jerks say jerky things while making sure they’re not filling up everyone else’s feeds with their nonsense. Free speech does not mean free reach, to borrow a phrase coined by misinformation researchers.

What needs to change is how social media makes visible its power. “Reducing visibility of content without telling people has become the norm, and it shouldn’t be,” says CDT’s Nicholas.

As a start, he says, the industry needs to clearly acknowledge that it reduces content without notice, so users don’t feel “gaslit.” Companies could disclose high-level data about how many accounts and posts they moderate, and for what reasons.

Building transparency into algorithmic systems that weren’t designed to explain themselves won’t be easy. For everything you post, suggests Gillespie, there ought to be a little information screen that gives you all the key information about whether it was ever taken down, or reduced in visibility — and if so, what rule it broke. (There could be limited exceptions when companies are trying to stop the reverse-engineering of moderation systems.)
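
Here is a hypothetical sketch of what such a per-post status record might contain; the field names and wording are assumptions, not any platform’s actual interface.

```python
# Hypothetical per-post transparency record along the lines of Gillespie's
# proposal. Field names and wording are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationStatus:
    post_id: str
    removed: bool = False
    reduced: bool = False
    rule_violated: Optional[str] = None  # e.g. "recommendation guidelines: suggestive content"
    applied_at: Optional[str] = None     # ISO-8601 timestamp
    appealable: bool = True


def explain(status: ModerationStatus) -> str:
    """Render the record as the plain-language notice a user would see."""
    if status.removed:
        return f"Post {status.post_id} was removed for: {status.rule_violated}."
    if status.reduced:
        return (f"Post {status.post_id} is not being recommended because it may "
                f"break this rule: {status.rule_violated}. You can appeal this decision.")
    return f"Post {status.post_id} is eligible for full distribution."
```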

Musk said earlier in December he would bring something along these lines to Twitter, though so far he’s only delivered a “view count” for tweets that gives you a sense of their reach.

Instagram’s new Account Status menu may be our closest working version of shadowbanning transparency, though it’s limited in reach to people with professional accounts — and you have to really dig to find it. We’ve also yet to determine how forthcoming it is: Bloomer reports hers says, “You haven’t posted anything that is affecting your account status.”

I know many social media companies aren’t likely to voluntarily invest in transparency. A bipartisan bill introduced in the Senate in December could give them a needed push. The Platform Accountability and Transparency Act would require them to regularly disclose to the public data on viral content and moderation calls, as well as turn over more data to outside researchers.

Last but not least, we the users also need the power to push back when algorithms misunderstand us or make the wrong call. Shortly after I contacted Instagram about Bloomer’s account, the art teacher says her account returned to its regular audience. But knowing a journalist isn’t a very scalable solution.

Instagram’s new Account Status menu does have an appeal button, though the company’s response times to all kinds of customer-service queries are notoriously slow.

Offering everyone due process over shadowbans is an expensive proposition, because you need humans to respond to each request and investigate. But that’s the cost of taking full responsibility for the algorithms that want to run our public squares.


