Authored By: Rhea Redkar

In today’s day and age, everything and everyone has seemingly moved online. Access to the internet and social media is no longer a luxury but an indispensable part of the daily lives of billions of people across the globe. Owing to its popular appeal, wide reach, and ease of use, it has become an important tool for mobilising the masses across the world. However, this virtual connector is also increasingly being used to peddle narratives of hate that are hard to distinguish, and often go unaccounted for, amidst the vast amount of content posted online every minute. Hate speech and fake news that lead to actual, physical violence are not a new category of offence. The use of social media as a weapon to disseminate hate to unsuspecting masses, however, is a relatively recent phenomenon. Hate and harassment on the internet, or ‘trolling’, at the very least promote a culture of bullying and defamation, and at their worst manifest in real-world violence.

The July 2018 lynching of five men in Maharashtra, India is one of many such attacks that have taken place due to the spread of hate and unchecked fake news on messaging and social media platforms such as WhatsApp and Facebook. The incident was a particularly harrowing scene of mob frenzy unleashed on an unsuspecting group of travellers, fuelled by fake images of kidnappings circulated over WhatsApp. The BJP Government was quick to shirk responsibility and urged the tech giant to curb messages spreading unreliable news. However, the BJP itself boasts a robust online presence, with its infamous IT Cell often spreading manufactured, inflammatory content to polarise the masses on issues of religion. At a September 2018 rally in Kota, Rajasthan, BJP president Amit Shah boasted that the party had “3.2 million supporters in its WhatsApp groups” who could make “sweet or sour, true or fake” messages go viral to the masses.

The United States of America’s struggle with racially motivated mass shootings also has direct connections to unmoderated hate speech on online forums. Mass shooters such as Dylann Roof, of the 2015 Charleston church shooting, and Wade Michael Page, who shot and killed six people at a Gurdwara in Wisconsin in 2012, were found to be white supremacists with a consistent online presence. “White supremacists are increasingly opting to operate mainly online, where the danger of public exposure and embarrassment is far lower, where younger people tend to gather, and where it requires virtually no effort or cost to join in the conversation,” a Southern Poverty Law Center report stated.

In Myanmar, a Buddhist-majority country, online hate and racist slurs against the ethnic minority of Rohingya Muslims are commonplace. Such rampant hate campaigns on popular social media websites such as Facebook drive and amplify the already suspicious popular narrative about the Rohingyas. Alan Davis, of the Institute for War and Peace Reporting, writes about his time in Myanmar in a piece titled “How Social Media Spurred Myanmar’s Latest Violence”. The lack of repercussions and fact-checking for social media campaigns peddling fake news about the Rohingyas, accusing them of elaborate terrorist attacks, contributed to the formation of a group branding itself the Arakan Rohingya Salvation Army (ARSA), whose attacks on state security posts elicited a “brutal army response”.

Gab, created in 2016 by programmer Andrew Torba, was promoted as the ‘free speech championing’ alternative to social media giants such as Twitter and Facebook, which, in recent times, have bolstered their response to hate speech through moderation and censorship. “The platform’s intentionally slim rule book attracted a crowd of extremists, including white nationalists and neo-Nazis, who had been banned from other social platforms.” The company came under fire in 2018 after the Pittsburgh synagogue shooter, Robert Bowers, was found to have posted about his plan of action on Gab hours before the massacre. While partner service providers such as GoDaddy and PayPal cut ties with the controversial website, citing its colossal failure to moderate hate speech, CEO Torba confirmed that no changes would be made to the website’s policies.

The ability of virtually anyone to post content online, clubbed with the privilege of anonymity, gives rise to the question of accountability. Can the internet service providers (ISPs) and social media websites that provide a platform for such content be held liable for the unlawful actions of their users, in the context of hate speech? The liability and complicity of these ISPs fall under the ambit of intermediary liability, which forms an important part of the information technology laws of most countries. Intermediary liability is rooted in the principle of vicarious liability, wherein a party is held partly responsible for the acts committed by a third party.

The evolution of intermediary liability law in India can be traced from the Information Technology (IT) Act of 2000 to the landmark 2015 Supreme Court case of Shreya Singhal v. Union of India. The IT Act defines an “intermediary” as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, web-hosting service providers, search engines, online payment sites, online auction sites, online market places and cyber cafes”. In theory, therefore, host websites fall under the ambit of intermediary liability and would be accountable in law for the actions of their users. However, section 79(1) of the IT Act provides conditional immunity to intermediaries, provided the requirements laid down in sections 79(2) and 79(3) are satisfied. Under section 79(2), intermediaries whose involvement is limited to being a mere host, who are not active participants, and who discharge their due diligence are exempt from liability. Conversely, under section 79(3), an intermediary that aids or abets an unlawful act of its user, or fails to act upon receiving actual knowledge of it, loses the protection of section 79(1).

Draft amendments to the Intermediary Liability Guidelines under the IT Act, released in 2018, place renewed importance on privacy policies and user agreements. They also require intermediaries with “more than fifty lakh users in India” to (i) be incorporated under the Companies Act 1956 or 2013, (ii) maintain a physical office in the country, and (iii) appoint a nodal point of contact and an alternate senior designated functionary in India, to ensure smooth coordination with law-enforcement agencies. Rule 5 of the Guidelines requires intermediaries to provide any information sought by governmental agencies within 72 hours of such a request.

Shreya Singhal v. Union of India is a landmark 2015 Supreme Court case in which the court championed the scope of free speech online, striking down section 66A of the IT Act, which prescribed punishment for sending offensive messages through communication services. Before Shreya Singhal, section 66A was notoriously misused in several instances, more often than not by powerful political parties, to police citizens’ dissent online. The court also read down the threshold of “actual knowledge” required under section 79(3)(b) of the Act to mean “actual knowledge from a court order or on being notified by the appropriate government or its agency”. The apex court specifically reasoned that “it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not”.

In the USA, intermediary liability on the internet is shaped by the widely known section 230 of the Communications Decency Act of 1996. The section reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, and is often credited with “creating the internet”. It provides blanket immunity to ISPs from intermediary liability. Beyond this legal framework, ISPs are free to frame their own rules and regulations for their users in order to filter the content posted. However, owing to the subjectivity around what qualifies as hate speech or harassment, several posts slip through the cracks of these community guidelines. A report on Facebook found that “the company’s hate-speech rules tend to favour elites and governments over grassroots activists and racial minorities (and) in so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens”.

In the European Union, member countries have differing laws on intermediary liability in the context of online hate speech. The concept of ‘hosts’ is widespread: ISPs and other websites that merely act as storage for their users’ content, without direct involvement, escape liability for it. These intermediaries can invoke the “exemption defence” against legal liability by proving that they had no actual knowledge of unlawful content and were prompt to take it down on being made aware of it. In 2016, major platforms including Facebook, Microsoft, Twitter and YouTube became parties to the European Commission’s “Code of Conduct on Countering Illegal Hate Speech Online”. By 2019, other tech giants such as Instagram, Snapchat, and Google+ had also signed on to the Code. The platforms “agreed to assess the majority of users’ notifications in 24h also respecting EU and national legislation on hate speech and committed to remove, if necessary, those messages assessed illegal”.

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 deals with electronic commerce and the legal liability of intermediary websites, which it refers to as “information society services”. Article 12 of the Directive absolves such services from liability for the mere transmission of information between users, provided that the transmission was not initiated or modified by the intermediary. Article 13 refers to “caching”, the storage of information provided by a recipient of the service, and exempts the intermediary if the storage was performed for the sole purpose of making its own service more efficient, without modification of the information.
Article 14 of the Directive likewise excuses information society services from legal ramifications arising out of “hosting” the stored information of their users, provided that the intermediary has no actual knowledge of illegal activity and is prompt to remove such content on being notified of it. Intermediaries are, however, expected to comply with orders issued by a court or administrative authority of a member state, over and above the protections laid down in the Directive.

The global perspective on intermediary liability is split between exempting host websites on one hand and holding them liable for the unlawful online activities of their users on the other. A common theme, however, is accountability. Even in countries where ISPs are granted immunity from intermediary liability, the websites have their own rules or community guidelines that police the content posted on them. These, in many cases, have been recorded to stifle free speech online. A fine balance must be struck between punishing hateful content and protecting freedom of speech on the internet, and effective intermediary liability laws will prove to be the best tool for the same.

The author is an undergraduate student at Jindal Global Law School, Sonipat.

