The Battle for Free Speech on Social Media
This week, X (Twitter) was officially banned in Brazil after refusing to comply with content takedown requests from the Brazilian government. Just last week, the founder of Telegram was arrested in France for his alleged role in hosting illegal content on the platform’s channels.
Both are major events in the world of social media that signal a significant shift in the way governments are thinking about digital content.
By some accounts, Brazil is X’s fourth-largest market, and it has become increasingly important as usage has declined in North America. Following a hotly contested election, Brazil’s new government requested that social media platforms remove hundreds of posts it deemed false or dangerous. The outgoing Brazilian leader claimed election fraud, and those posts fuelled unrest. X refused to comply, leading Brazil to impose a $3 million fine. When X failed to pay or adhere to the takedown requests, the service was shut down across the country.
Telegram has gained significant influence due to its strong privacy features and lack of moderation. Telegram hosts a wide range of content, from sports and pop culture to more controversial topics like political unrest, hate speech, and terrorist activities. It has become a hub for groups that share information not allowed on other platforms.
The Real-World Implications of Digital Content
In the past few years we’ve seen social media’s impact on mental health, elections, and social movements. It has become clear that the online world is directly connected to offline consequences.
It has also been an incredible force for good, connecting people and exposing them to ideas they might otherwise never have encountered.
Which raises the question: When does social media free speech go too far?
Free Speech vs Moderation
On one side, there are those who advocate for absolute free speech. They believe that social media platforms should be neutral and have no role in censorship of any form of speech.
On the other side, proponents of moderation argue that some restrictions on free speech are necessary. Just as it’s illegal to shout “fire” in a crowded theatre, most democracies consider it a crime to incite violence, harass people, or spread hate.
Where that line should be drawn is a nearly impossible question, and the battle to define the line between free speech and moderation is what we’re witnessing today.
What has been true for as long as social media has existed is that platforms and governments regularly work together to remove or restrict content they deem harmful. India has famously banned accounts over perceived threats to its democracy; Canada has imposed restrictions on news sharing; and the US government has issued guidance, or orders, on thousands of posts ranging from medical disinformation to political violence.
However, we had yet to see a platform be entirely banned in a major market like Brazil, nor had a platform’s founder been jailed for failing to moderate content, until now.
Where do we go from here?
Two extreme outcomes are possible:
- Free speech advocates could win, absolving social media platforms of any responsibility to moderate content. This could lead to a surge in bullying, hate speech, and disinformation, but it would also preserve the right to free expression without fear of censorship.
- Moderation advocates could win, forcing platforms to ensure that content complies with local laws and regulations. However, this approach risks placing too much power in the hands of both governments and tech companies, potentially suppressing the open exchange of ideas.
As with most conflicts, the likely outcome will fall somewhere in the middle, with both positive and negative consequences. We may lose some platforms to regulatory pressure, while others may retreat to less regulated areas of the internet, beyond government control.
One thing is certain: The coming months and years will be tumultuous as the world grapples with the reality that digital content has tangible, real-world impacts.