When the big internet platforms like YouTube and Facebook began to kick hate speech and harmful misinformation off their sites last year, a process that sped up as the pandemic flared and protests against police violence roiled the country, something comedian Sarah Silverman would often say about audiences came to mind: “People go towards love.”
The gist of her point is that if you begin to push people away from a community for believing bad things, those people, being human, will look for people who accept them, who will show them love, and who will welcome them inside. It’s a nuanced take, one that comes from a place of almost cringeworthy empathy, and it feels relevant as we grapple with communities that have nurtured right-wing extremism and harmful misinformation. Their members, suddenly cut off from the online public square, have found other platforms where they feel more welcome.
Case in point: What did QAnon believers do after YouTube, Twitter and Facebook banned their content? They did not disappear. They went to Parler, briefly, and then to sites like Twitch, where, according to the New York Times, 20 large communities of QAnon and Q-adjacent subscribers have sprung up since last fall. Twitch, which is owned by Amazon, doesn’t think QAnon is a hate group. And, until the Times asked about it, it didn’t think the Proud Boys counted as a hate group either.
No shade at Twitch specifically: it faces the same cycle of content-moderation headaches that befell Facebook and Twitter as groups with aggressively controversial and hateful viewpoints colonized small parts of its server space. Twitch is currently in what I call the “Whac-a-Mole arms race” phase: it is dealing with savvy ideological entrepreneurs who know how to manipulate hashtags and discourse markers (change a single letter!) to circumvent any attempt to moderate whatever pops up.
Misinformation researchers have worried about the “moving toward love” problem for a while. Censoring views does not, in the networked commons, get rid of them. Far from it, in fact. It is not illegal to believe racist things, nor is it possible to shame the people who believe them out of seeking others who think as they do. As much as we might complain about Facebook and Twitter and their echo chambers, the reality was that people with extremist views were exposed to contrary opinions fairly frequently, especially when the platforms’ algorithms seemed tuned for engagement.
On the one hand, seeing the other side take umbrage at your content is a great motivator; on the other, it reminds you, on a subtle level, that you’re still part of a larger community and bear some responsibility to it. But if you suddenly find yourself in a smaller space where people believe the same things you do, your responsibility narrows.
Your beliefs become more virulent, even if you’re not able to spread them as quickly. When you do recruit someone new — say, to a Twitch stream, or to a Discord group, or to a private Telegram chat — they’re likely to be more like you, a true believer who has been kicked out of some other community.
And then there are the folks who decide to get violent. Say what you will about Facebook’s inability to take down #StoptheSteal groups before the January insurrection — and there’s a lot to say — at least Facebook had visibility, which meant it could (and did) work with law enforcement to find its users who used the platform to organize the storming of the U.S. Capitol.
Banned by Facebook, many turned to platforms where the anonymity of the content and the security of the interactions were the point. Even Zoom, the go-to app for pandemic meetups, wasn’t immune: PBS found a steady migration of militia content to the teleconferencing platform in the wake of Facebook’s post-election crackdown on election misinformation.
Similarly, Telegram and other apps that offer closed groups have become prolific spreaders of harmful misinformation. I don’t endorse wide surveillance of speech as a matter of principle, but there is no easy way to monitor aspirational violent movements after they’ve been forced onto platforms that can’t moderate their content without a tip-off.
The same goes for misinformation: A doctor friend of mine found his way into a closed Telegram group of teenagers who shared beauty tips. My friend tried to correct what he considered to be harmful misinformation in real time, only to be kicked out of the group by the moderator. When I suggested that he might have taken a different tack — maybe contacting the moderator privately or building a community within that community of teens who wanted medically correct information — he told me that he simply did not have the time.
Indeed, who does have the time and the bandwidth? It takes significant, often herculean psychological resilience to stand up to purveyors of misinformation, and the costs to people who try can be significant.
If I could re-engineer the internet, I would endorse the “protocols, not platforms” approach that Mike Masnick, editor of the Techdirt blog, recently described. Masnick suggests far fewer restrictions on speech and far more interoperability between platforms, in a way that lets people curate their own experiences with more agency. In other words, make it easier for people inside closed communities to pull in alternative sources of information. Dropping the “walls” that platforms have put up (think of how Twitter and Facebook operate as entirely separate digital universes) and allowing a marketplace of filtering mechanisms could change our entire online experience.
The end result would be that fewer bad actors would be kicked off major platforms, but also that they wouldn’t end up nurturing their grievances in private corners of the internet. This would require us to extend our empathy to people with atrocious viewpoints; but it could also mean the internet would, in the main, be a lot safer.