Should social media platforms be the arbiter of truth?

01 Jul 2021

Short answer? It depends…

A version of this story first appeared in PR Week.

In May this year, Facebook’s Oversight Board ruled that the company was right to ban Donald Trump from its platform in January 2021, as his posts clearly violated Facebook’s community standards.

However, the Board criticised the ‘indefinite’ nature of the ban and the lack of uniform moderation standards, and recommended that Facebook review the decision within six months and establish a ‘proportionate’ penalty procedure that is applied equally to all users.

Unsurprisingly, the Oversight Board’s decision has ignited a storm of debate, which I have been following closely and with much interest. It is fascinating, among other reasons, because it demands an answer to an important question that social media companies have so far tried to skirt around.

Should social media platforms be the arbiter of truth, and if so, how far should they go?

From social sharing to social influencing

Most social media platforms originated as seemingly harmless tools to keep in touch with others, make new friends and entertain ourselves. Facebook was built as a social networking service for Harvard students. Twitter began as a way to broadcast short, SMS-style status updates across channels and devices. Instagram was launched out of a love of photos and the desire to share them with others.

Today, in this hyper-connected and information-rich age, social media has evolved from an innocuous way to keep in touch into a powerful platform to shape thoughts and ideas and break news. With its easy accessibility and the capacity to broadcast messages instantaneously to the world, social media has given everyone a voice and a stage.

But the enduring debate around the right to freedom of speech is not just whether all voices have an equal right to be heard, but also who should be the gatekeeper of that right.

Enabler vs arbiter

Historically, social media companies have been keen to distance themselves from this debate, as taking a side would saddle them with enormous responsibilities and expose them to serious repercussions.

They often frame their reluctance to moderate or remove problematic content as a neutral, non-interventionist stance that protects freedom of speech. Their message is one of enabling and empowering, not judging moral decency or factual accuracy.

But the right to freedom of speech comes with a duty to behave responsibly and respect the rights of others. When social media has been shown to be a direct accomplice in devastating events such as the Capitol insurrection, by amplifying clear hate speech and fake news, it can no longer throw its hands up and plead ignorance or innocence.

Social media companies cannot choose to be champions of free speech while being absolved of the attendant duty to enforce responsible behaviour.

Attempts at accountability

Big Tech platforms are clearly aware of this shift in expectations and have, to their credit, taken steps to shoulder some of the responsibility.

All popular social media platforms have some form of community guidelines and channels to report offensive or problematic content. They employ teams of moderators, and Facebook, as mentioned earlier, has even established an external Oversight Board as a sort of ‘Supreme Court’ to ostensibly provide a check and balance. Blue ‘verified’ badges on Facebook, Instagram and Twitter pages and profiles also signal a user’s authenticity.

But it is not enough. The Oversight Board is fully funded by Facebook, which in itself suggests an inherent bias. Many have criticised Facebook’s tendency to push the actual decision-making on difficult issues to the Board, and the Board itself is well aware of this, as shown by its recent judgment, which punted the responsibility back to Facebook. There are plenty of horror stories about the traumatic conditions under which Facebook’s moderators operate, which at the most basic level means that moderation is inaccurate, inconsistent and ineffective.

Given the record-breaking billion-dollar revenues of these social media platforms, it’s hard to see these non-committal stances and half-hearted efforts as anything other than a business decision. Not taking sides keeps the platform open to a wider audience that consumes more content, which translates into more ad revenue. If top decision-makers such as Mark Zuckerberg and Jack Dorsey wanted their companies to do better, those changes would be visible. The fact that they aren’t speaks volumes.

With social media use now prevalent in society, there is little meaningful distinction left between online and offline spaces. Thus, social media companies should now be held accountable to their users in the same way that public utility providers are. They can design better digital spaces by emulating how offline public spaces are built — by having regulations and norms that govern behaviour, which are collectively determined by regulators, stakeholders and users to build healthier digital communities.

Who will fill the void?

In late April this year, major football clubs, sporting bodies, players and athletes joined a four-day social media boycott to draw attention to the constant abuse and discrimination they face on these platforms. The first line of the statement issued by the English Premier League (EPL) said simply: “Social media companies must do more to stop online abuse.”

Vitriol towards sports figures and bodies on social media is far from new. Check any public post by any football club after an EPL match and you’ll see thousands of messages directing abuse at everyone involved. Clubs have tried to engage with social media platforms, asking them to moderate this clearly abusive content, but the platforms have shown themselves to be either unwilling or unable to do so. The boycott was a last-ditch attempt to raise awareness and shame these platforms into taking responsibility. I find it unlikely, though, that it will reverse more than a decade of inaction.

If social media companies refuse to step up, then who will? My bet is on governments, and that will come at a cost. India now requires social media platforms to appoint local representatives and is pushing new rules that would make messages traceable to their originators. Twitter’s offices in India were raided by police after the company put a ‘manipulated media’ label on tweets from members of the ruling party. The government of Belarus diverted a plane to arrest an outspoken dissident who used social media to criticise it.

The implications are clear: making governments the arbiter of truth can and will have serious ramifications for the neutrality and independence that social media companies so proudly protect.

The legal question

Much of the debate around “who should be the gatekeeper?” boils down to legalities. Are these social media platforms publishers themselves, or does that role belong solely to the person or company posting the content?

These questions are made more difficult by the fact that the digital world is evolving faster than our legal frameworks can adapt. Many of the issues we now face with social media are unprecedented, which is a recipe for heated debate and slow judgments.

It’s fair to say that social media platforms should not necessarily be the sole and ultimate authority on truth. In an ideal world, it would be a collaborative relationship between them and progressive governments, robust legal frameworks and responsible users. While we navigate as a society towards this utopian understanding, what’s clear is that social media companies should — and they have the capacity to — do more right now.

In this high-speed era of information, listening to the wrong voice could be a matter of life and death. Social media has given everyone a voice. Now it must take ownership of the consequences — both good and bad.
