Casteist remarks on social media now punishable: What fighting online hate speech in India entails
This article was first published on First Post here and is republished here with the permission of the author.
On 3 July, the Delhi High Court ruled that posting casteist insults on Facebook against members of the Scheduled Castes/Scheduled Tribes (SC/ST) communities is punishable under the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act, 1989. In the case, a Rajput woman had allegedly posted derogatory remarks against the ‘Dhobi’ community on her Facebook wall. The high court quashed the complaint on the grounds that, for the 1989 Act to apply, the insults must be directed against particular individual(s) of the SC/ST community and not the community as a whole. The court, however, made an important observation that social media is a space within ‘public view’ and that insults posted on a Facebook wall against specific individuals of the SC/ST community would therefore attract criminal sanctions. The case highlights how social media is steadily becoming a platform for hate speech content.
Recent incidents of online hate speech in India
Verbal abuse or attacks targeting a community on the basis of personal attributes such as race, ethnicity, religion or sexual orientation are increasingly being witnessed online, and are commonly classified as hate speech. Hate speech also covers content which may not be abusive in nature but is sufficient to incite violence against a particular section of society. A few days ago, an allegedly offensive post by a Class 11 student on Facebook sparked communal violence in West Bengal. Similarly, in May 2017, violence erupted in Saharanpur between the Thakur and Dalit communities, fuelled by rumours and provocative posts on Facebook. Recently in Karnataka, hate messages on Facebook were also circulated via popular messaging services like WhatsApp, which contributed to violence against the targeted communities.
A 2015 UNESCO report titled ‘Countering Online Hate Speech’ highlights the features that distinguish online hate speech from its offline counterpart: its permanence and itinerancy (hate speech persists across various sites for a long time as the content is shared by multiple users online), its anonymity, and its cross-jurisdictional nature (hate speech affecting people in one region may be posted by an internet user in another country or region).
Social media policies on hate speech
Social media platforms have taken various initiatives to curb hate speech content on their websites. According to Facebook’s community standards, “content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease” is considered hate speech and disallowed by Facebook. However, Facebook’s policy states that it will allow “clear attempts at humour or satire” that would otherwise be considered a potential threat or attack.
Twitter has an advertisement policy in place which applies to promoted tweets and prohibits the promotion of “hate content, sensitive topics, and violence globally”. In February 2017, Twitter rolled out its ‘safe search’ feature which allows users to hide content which is deemed to be ‘sensitive’. In July 2017, Twitter banned the editor of a right-wing site for “participating in or instigating targeted abuse of individuals” and also suspended the accounts of prominent leaders of a movement aimed at spreading racism, xenophobia and sexism.
Following reports that one of the London Bridge attackers had been influenced by YouTube videos of an American Islamic preacher, YouTube made changes to its policy and placed various restrictions on offensive videos which do not necessarily meet the standard for removal. For instance, under YouTube’s new policy, offensive videos (which do not have the effect of inciting violence), cannot be commented on or recommended by users. YouTube also precludes such videos from being monetised through advertising (a restriction which also existed under YouTube’s earlier policy).
Laws against online hate speech in India
Most of the laws against hate speech in India were enacted at a time when the internet had not been conceived of. The Indian Penal Code (IPC) provisions which address hate speech are Sections 153A, 295A and 505. Section 295A punishes deliberate and malicious acts, expressed in written or spoken words, intended to outrage the religious feelings of any class of citizens by insulting its religion or religious beliefs. Section 153A applies to words (written or spoken) aimed at promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc. Section 505 makes it punishable to publish or circulate any statement, rumour or report with intent to incite (or which is likely to incite) any class or community of persons to commit any offence against any other class or community.
The only provision in the Information Technology Act (IT Act) dealing with the transmission of offensive messages through a communication service was Section 66A, which was struck down by the Supreme Court in Shreya Singhal v UOI over concerns surrounding its misuse.
The IPC provisions against hate speech are limited in their application because, as traditionally understood, statements attract criminal liability only if they are grave enough to provoke violence against members of the targeted community. Earlier this year, the Law Commission of India, in its 267th Report, recommended reforms to hate speech laws in India. One of the recommendations was to broaden the definition of hate speech to also cover content which may not necessarily provoke violence.
One of the biggest criticisms of anti-hate speech laws is that they curb the freedom of speech of internet users and lead to online censorship; this has also been one of the main reasons why internet companies have been hesitant to regulate offensive content on their platforms. However, we must recognise that while freedom of speech is a fundamental right guaranteed under the Indian Constitution, it is not absolute and is subject to reasonable restrictions such as the maintenance of public order. Further, freedom of speech should not allow one to hurt the sentiments of the members of a particular community. Germany recently enacted a hate speech law that holds Facebook liable for failing to remove offensive posts; the law was motivated by the need to prevent the stirring of racist abuse and anti-immigrant sentiment in the country. India would do well to follow Germany’s lead and ensure that social media does not become a vehicle for promoting enmity and hatred in the country.