The Problem with Social Media and 'Free Speech'

Free speech comes up often in discussions of social media: some praise it as a great place to express thoughts and opinions, some believe the so-called “free speech police” stop people from expressing themselves, and others are concerned about the rise of misinformation and hate speech on social platforms.

The debate is always raging, but can social media platforms ever find the right balance when it comes to free speech?

Private business or public platform?

When it comes to how social media companies deal with what is published, the question becomes whether platforms should act as private businesses, with the right to remove whoever they choose, or as public information domains where free speech and political speech should not be silenced (although there should still be repercussions for hate speech and incitement to violence, as these are crimes).

Many people believe that social media platforms such as Facebook and Twitter have become far more than neutral publishers of public information: they profit from their users, and therefore have a responsibility to take a stand on certain political issues and to condemn anything that goes against their apparent ideals of democracy and liberalism.

Just as consumers increasingly expect the companies they buy from to take a stance on social and environmental issues, the same applies to the social platforms we support.

Fighting hate speech and violence

When it comes to hate speech and violence, it is clear that social media platforms should remove those who publish hate speech or facilitate violence, as these are against the law.

Social media clearly has the potential to be powerful and dangerous if left unregulated, which is why so many people believe that the platforms must take responsibility for the content that is published on their websites and apps.

It is positive that big tech companies are taking some responsibility for eliminating dangerous discourse. For example, Parler, a right-wing social platform thought to have incited hatred and violence before and after the Capitol riots, was dropped by Apple, Google and AWS.

Apple stated that it was banned for “including posts that encouraged violence, denigrated various ethnic groups, races and religions, glorified Nazism, and called for violence against specific people.” The app has since been reinstated on the App Store following conversations about moderation and compliance.

Does big tech really have the capacity to regulate?

However, many concerns have been raised over the ability of these social media platforms to regulate the content on their websites. They are not the police, after all; they are private companies. And given the vast and expansive nature of platforms like Facebook, Instagram and Twitter, do they actually have the capacity to regulate the content that is published?

They have tried in certain areas but failed in others. For example, many people will have found that social media posts about the Covid-19 pandemic are accompanied by warnings and disclaimers to combat misinformation, and many of these posts may be restricted or taken down. However, racism is still prolific on social media platforms, and the companies have seemingly not yet found a way to crack down on it.

For example, Instagram came under criticism for allowing comments and accounts to remain live after the England footballers Marcus Rashford, Bukayo Saka and Jadon Sancho received a torrent of racist abuse following the Euros final. While Instagram did remove some posts and suspend some accounts, this often came too late, and some abusive content was found to have remained.

After the Capitol riots, Facebook and Twitter banned Donald Trump for inciting violence. However, they cannot ban everyone who has responded so positively to ‘Trumpism’ and who might be engaging in similar violence and hate speech.

This symbolic banning also slightly misses the point: the platforms themselves were part of the problem in allowing this to happen in the first place. On top of this, you could argue that even if every dangerous user were removed from Twitter or Facebook, they may simply be driven underground or elsewhere.

Proactive rather than reactive

It can be argued that social platforms themselves are built in a way that creates an ‘ecosystem of disinformation, extremism, rage and bigotry’ and changes are needed from the foundations in order to find a way to mitigate the many issues they are facing.

It is hard to say exactly what changes are needed to make social platforms safer while still preserving people's ability to express themselves and connect. Some have campaigned for ID to be required to set up a social media account, which would reduce trolling and increase accountability. Others have pointed out that this would put at risk vulnerable people who rely on the community social media gives them while keeping their identity safe, and could prevent those without official ID from accessing the services.

What is clear, however, is that many want social platforms to be more proactive rather than reactive – putting measures in place to safeguard against racial hatred and other forms of hatred and violence now, rather than simply implementing bans and taking posts down when it is already too late.

It is a difficult balance to strike: we want the freedom, community and connection that social media provides, while also needing to protect those who are victims of online hatred and abuse. This is why free speech, and the ability of social media platforms to regulate it, remains an ongoing problem.


SB.