Foreign Policy

Big Tech's Nice Problem

By removing U.S. President Donald Trump from their services, big tech companies like Twitter and Facebook have firmly established the principle that they own their platforms and that they decide who may speak there. This breaks new ground and raises profound questions about freedom of expression and corporate responsibility. Social media platforms create the illusion of being public squares, but they are what they are: private property. They can set rules of conduct, and just as the owner of a bar can throw out an unruly patron, they have now shown that they can remove people who break their rules.

This principle seems sound, but the question is whether it can be applied consistently. Trump's January 6 speech prompted his supporters to march on the U.S. Capitol, an unprecedented attack, while lawmakers went through the formalities of declaring Joe Biden the winner of the 2020 presidential election. But Trump's call to action was hardly the first incendiary thing he had said on social media. The companies' explanations and justifications for removing him sounded self-servingly cynical; like a weather vane, they seemed to be responding to a shift in the political wind.

Trump has made many past remarks that condoned or threatened violence, such as his comments after the events in Charlottesville, Virginia, in 2017 and during the Black Lives Matter demonstrations last year. Big Tech executives, however, portrayed themselves as free speech advocates and insisted that they operate platforms, not news organizations. Yet their record of upholding freedom of speech in other countries is spotty. They have blocked the accounts of politicians, human rights defenders, and journalists who are inconvenient to some governments, and they have allowed politically powerful actors to vilify and dehumanize minority groups while claiming to have no tolerance for hate speech.

By removing Trump, they set a precedent: when the political ground shifts, the companies will change their position. Applying such rules consistently is difficult even within one country. Trying to apply them globally is impossible for companies that have neither the mandate, the expertise, the capacity, nor the authority to uphold a right as contested and fragile as freedom of expression.

Big tech companies own their respective platforms. They have the right to set terms of service and standards. These are decided unilaterally, even if some companies consult affected stakeholders in detail. But their actions do not follow due process, and they rarely explain their decisions clearly. Information on how these policies are developed, how they are enforced, whether a complaints mechanism exists, who assesses those complaints, and what remedies and redress are available to users is not readily accessible, and the companies lack the resources to respond to every complainant.

Arbitrariness is the norm, not the exception. While social media companies in the United States promptly banned figures such as Milo Yiannopoulos and Alex Jones, in other parts of the world liberals and human rights defenders were suspended while right-wing voices had a field day. The companies seem to bow to power, not ideology.

Facebook has been criticized for letting its platform spread hatred against minorities in Myanmar and has admitted that it was used to incite violence. It subsequently blocked accounts belonging to Myanmar's military. Facebook has also apologized for its role in anti-Muslim riots in Sri Lanka in 2018, and its hate speech policy runs counter to its business priorities in India, its largest market by number of users. Twitter has been accused of complicity in India's crackdown on media reports from Kashmir. As Russian opposition leader Alexei Navalny pointed out in a series of tweets criticizing Twitter's ban on Trump, authoritarian leaders have used the platform without restriction. Navalny has himself received threats, but Twitter has not acted (nor has he complained). Twitter also blocked Sanjay Hegde, an Indian lawyer who posted a photograph of a lone German refusing to salute Hitler, under its rules against the glorification of Nazi ideology. (The photograph in fact celebrates the man who stood alone.)

I have witnessed Twitter's suspension process firsthand: early last month, after I published a poem I wrote a decade ago lamenting the destruction of the Babri Masjid in India in 1992, I was suspended for about two days. Several leading writers were outraged. A Hindu nationalist user claimed credit for getting me removed. Twitter later informed me that I was suspended for compiling a list of Twitter accounts whose title violated Twitter's abuse policy (which was never explained to me). I could not change the title; I had to delete the list to be reinstated. Eventually I managed to transfer the data to another list, rename it, and return about 36 hours later.

These experiences show the uneven application of the guidelines that social media companies enforce in different contexts. Their actions are one-sided, often arbitrary, and do not follow due process. It is no surprise that Parler is suing Amazon for removing the social platform from Amazon Web Services. Due process matters, as the web services company Cloudflare explained when it terminated the neo-Nazi Daily Stormer website's account in 2017.

Trump's speech was certainly a call to action. Whether it amounted to incitement to violence is a matter for lawyers to debate, but those cheering Trump's departure from social media should ask what happens when the leaders they support are removed from these platforms without due process.

It seems simple: a mall has the right to remove displays that could offend its customers, such as graphic anti-abortion posters or messages about abortion rights, in a society as divided as the United States over a woman's right to abortion. Like that mall, which wants customers to wander in and linger, social media companies want users to stay on their sites for a long time and have a soothing experience that keeps them calm so that they click the ads and buy the products.

But Facebook and Twitter are near-monopolies. Sure, there are alternatives, but those alternatives simply lack the critical mass to become viable competitors. Parler is now homeless, and this is not the first time Amazon Web Services has taken such action: in 2010, it removed WikiLeaks from its servers. A similar fate could befall Mastodon, a social network favored by many leftists, especially in India.

In the United States, the First Amendment prevents the government from restricting freedom of expression or freedom of the press, but that restriction does not apply to private entities. Yet when a private entity is a monopoly, as most big tech companies are; when there are no public or private alternatives; and when that dominant private entity appears to do the bidding of powerful interests at lawmakers' behest, the line between state and private sector blurs.

Big Tech claims it prevents hate speech from spreading too widely. The scholar Susan Benesch, who has studied the subject closely, distinguishes hate speech from dangerous speech. The former should be allowed; the latter may need to be regulated. Speech is dangerous, she says, when the speaker commands a large following; when the audience is vulnerable and lacks access to, or faith in, accurate information; when the speech dehumanizes a group; when it encourages the audience to nurse its grievances; when it invokes ethnic purity and blames outsiders for polluting that purity; and when it is coded, like a dog whistle, with imagery that carries special meaning for the audience. By that measure, as the writer Seth Abramson noted in a 200-tweet thread, Trump's speech was dangerous. Even the Wall Street Journal, no fan of the Democrats, found Trump's speech indefensible.

But on what basis do companies decide that a particular speech poses such a "clear and present" danger? Several Republican leaders, outraged by the power of tech companies, have echoed Trump's call to strip social media platforms of the protection of Section 230 of the Communications Decency Act, which grants them immunity from liability as carriers rather than publishers. Indeed, in developing and enforcing their standards and guidelines, the companies already exercise a degree of editorial control over the content they host.

They cannot have it both ways. The Trumpian solution, scrapping Section 230, is ironic: without that provision, Trump might have been removed from social media much sooner.

Companies need solid rules and due process. We live in a global village, and the rules must be applied fairly everywhere. As David Kaye, the former United Nations Special Rapporteur on freedom of expression, argues in his book "Speech Police," well-intentioned companies need to work with rights-respecting governments as well as with experts on free expression and human rights in a coordinated multi-stakeholder approach to determine what free speech means, what hate speech means, what limits might apply, and how they should be enforced: consistently and without discrimination.

This is an impossible task for a private company alone. Newspapers and magazines know how to do it. They have internal controls to decide what appears on their platforms, publishing opinion that is clearly labeled as such and news that has been fact-checked and verified. Social media companies have more than enough resources to invest in the infrastructure that would let them behave like what they de facto are: publishers and editors. They want users, credibility, and trust.

They have to earn it.