
According to recent reports, India in 2025 has over 806 million[1] active internet users, more than half of whom are active social media users. Another report states that “India will have over 900 million Internet users by 2025 with at least one Internet user present in more than 87% of Indian households, according to a report by Kantar, a market research agency. According to recent data, the total Internet users in India were around 749 million in 2020.”[2]
Content moderation is the organized practice of screening user-generated content (UGC) posted to Internet sites, social media and other online outlets, in order to determine the appropriateness of the content for a given site, locality, or jurisdiction. The process can result in UGC being removed by a moderator, acting as an agent of the platform or site in question. Increasingly, social media platforms rely on massive quantities of UGC data to populate them and to drive user engagement; with that increase has come the need for platforms and sites to enforce their rules and relevant or applicable laws, as the posting of inappropriate content is considered a major source of liability.[3]
Types of Content Moderation:
- Pre-Moderation: Content is reviewed before publication.
- Post-Moderation: Content is reviewed after being published.
- Automated Moderation: Artificial intelligence (AI) and algorithms are used to detect and filter content (a minimal sketch follows this list).
- User-Flagged Moderation: Users report inappropriate content, which is then reviewed.
- Distributed Moderation: Community-based moderation where users vote or flag inappropriate content.[4]
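To make the automated and user-flagged categories above concrete, here is a minimal illustrative sketch, in Python, of how a platform might combine a simple keyword filter with a user-reporting queue. It is not any platform's actual system: the blocked-terms list, the report threshold, and all function names are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a toy pipeline combining automated (keyword-based)
# moderation with user-flagged review. Real platforms use far more sophisticated
# ML classifiers and human review; all names and thresholds here are assumptions.

BLOCKED_TERMS = {"example_slur", "example_threat"}   # hypothetical placeholder list
FLAG_THRESHOLD = 3                                   # hypothetical: reports needed to queue for review

review_queue = []  # posts awaiting human review

def automated_check(text: str) -> bool:
    """Return True if the post is allowed by the automated filter."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def handle_new_post(post_id: str, text: str, published: dict) -> None:
    """Pre-moderation style: publish only if the automated filter allows it."""
    if automated_check(text):
        published[post_id] = {"text": text, "reports": 0}
    else:
        review_queue.append((post_id, text))  # route to a human moderator instead

def handle_user_flag(post_id: str, published: dict) -> None:
    """User-flagged moderation: enough reports push a live post into human review."""
    post = published.get(post_id)
    if post is None:
        return
    post["reports"] += 1
    if post["reports"] >= FLAG_THRESHOLD:
        review_queue.append((post_id, post["text"]))

if __name__ == "__main__":
    live_posts = {}
    handle_new_post("p1", "A perfectly ordinary post", live_posts)
    handle_new_post("p2", "contains example_slur", live_posts)   # held for review, not published
    for _ in range(3):
        handle_user_flag("p1", live_posts)                       # three reports queue p1 for review
    print("published:", list(live_posts), "queued:", [pid for pid, _ in review_queue])
```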
With the rise of internet access, it becomes essential to keep a check on the content available for consumption, and this is where the various statutes come into the picture.
The Information Technology Act, 2000 is the primary legislation governing content moderation and intermediary liability in India.
- Section 66D penalizes impersonation for fraudulent purposes using digital platforms.
- Section 66E penalizes the capture, publication, or transmission of private images or videos without consent, particularly intimate content. Punishment: up to 3 years' imprisonment and/or a fine of up to ₹2 lakh.
- Section 66F covers cases where digital content incites terrorism or communal violence, or threatens national security; the punishment can extend to life imprisonment.
- Section 67 criminalizes the publication or transmission of obscene material in electronic form. Punishment: up to 3 years' imprisonment and a fine of up to ₹5 lakh.
- Section 67A deals with the publication of sexually explicit material, including pornography. Punishment: up to 5 years' imprisonment and a fine of up to ₹10 lakh.
- Section 67B specifically penalizes the publication or circulation of child sexual abuse material (CSAM).
- Section 69 allows the government to intercept, monitor, or decrypt any digital communication in the interest of national security, public order, or the prevention of crime; non-compliance with government interception orders can lead to penalties.
- Section 69A empowers the government to issue content-blocking orders; non-compliance can lead to penalties. If an intermediary (e.g., a social media platform) fails to remove content flagged by the government on grounds such as national security, public order, or sovereignty, it can face legal consequences.
- Section 72 applies where a platform or service provider discloses personal data without consent or lawful authority. Punishment: up to 2 years' imprisonment or a fine of up to ₹1 lakh.
- Section 79 provides conditional protection (safe harbor) to intermediaries (social media platforms, ISPs, website hosts) from liability for user-generated content. However, safe harbor is lost if platforms do not adhere to content moderation obligations, such as removing unlawful content after being notified by courts or government authorities.
- Section 85 provides that if a company fails to comply with the IT Act's provisions on content moderation, its directors, managers, and other officers may be held personally liable unless they can prove due diligence.
IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These rules set compliance requirements for social media intermediaries (SMIs) and significant social media intermediaries (SSMIs) (platforms with over 5 million users).
- Due Diligence by Intermediaries: Platforms must publish rules, regulations, privacy policies, and user agreements, and must remove objectionable content within 36 hours of receiving a government or court order.
- Grievance Redressal Mechanism: Mandatory appointment of a Grievance Officer to address user complaints.
- Traceability Requirement (for SSMIs): Messaging platforms (like WhatsApp) must enable traceability of the first originator of a message, raising privacy concerns (see the illustrative sketch after this list).
- Content Removal Obligations: Platforms must remove content that violates Indian law within 24 hours of a valid complaint (e.g., content related to sexual abuse, fake news, etc.).
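The Rules themselves do not prescribe a technical method for traceability. One mechanism that has been publicly debated, hash-based tagging of message content, can be sketched roughly as follows. This is purely illustrative and is not how WhatsApp or any other platform actually works; the storage model and every name in it are assumptions, and the approach glosses over the cryptographic and privacy objections raised by critics, including the fact that even a trivial edit to a message changes its hash and breaks the chain.

```python
# Purely illustrative sketch of hash-based "first originator" lookup, one approach
# discussed in the traceability debate. NOT how any real platform works; the
# storage model, names, and flow are assumptions for illustration only.

import hashlib

first_originator = {}  # message-content hash -> user id of first sender (hypothetical store)

def record_message(sender_id: str, message: str) -> str:
    """On every send or forward, derive a content hash and remember only the first sender."""
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    first_originator.setdefault(digest, sender_id)
    return digest

def trace_first_originator(message: str) -> str | None:
    """Given the content of a flagged message, return the recorded first sender, if any."""
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    return first_originator.get(digest)

# Example: the same text forwarded by several users still maps back to the first sender.
record_message("user_a", "viral rumour text")
record_message("user_b", "viral rumour text")        # a forward; originator unchanged
print(trace_first_originator("viral rumour text"))   # -> "user_a"
```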
Certain provisions of the Bharatiya Nyaya Sanhita, 2023 are also applicable, such as:
- Section 196: Addresses acts that promote enmity between different groups on grounds such as religion, race, place of birth, residence, language, caste, or community. This includes any form of communication—spoken, written, or electronic—that fosters disharmony or feelings of enmity, hatred, or ill-will between these groups.
- Section 197: Targets assertions prejudicial to national integration. It penalizes anyone who makes or publishes claims that jeopardize the unity and integrity of India.
- Section 298: Deals with deliberate and malicious acts intended to outrage religious feelings. This includes any act, speech, or writing that insults or attempts to insult the religion or religious beliefs of any class of citizens.
- Section 353: Concerns statements conducing to public mischief. It penalizes individuals who make, publish, or circulate any statement, rumor, or report with intent to incite, or which is likely to incite, any class or community of persons to commit any offense against any other class or community.
- Section 354: Addresses criminal defamation, penalizing those who, by words (spoken or intended to be read), signs, or visible representations, make or publish any imputation concerning any person, intending to harm, or knowing or having reason to believe that such imputation will harm, the reputation of such person.
- Section 337: Pertains to the circulation of false information that may incite violence or panic. It penalizes individuals who make, publish, or circulate any statement, rumor, or report with intent to cause, or which is likely to cause, fear or alarm to the public, or to any section of the public, whereby any person may be induced to commit an offense against the state or against public tranquility.
The Protection of Children from Sexual Offences (POCSO) Act, 2012 prohibits child sexual abuse material (CSAM) online, and social media platforms must report and remove CSAM as part of IT Act compliance. The Digital Personal Data Protection (DPDP) Act, 2023 regulates the collection, processing, and storage of personal data online and allows content takedown related to privacy violations or unauthorized sharing of personal data. The Cable Television Networks (Regulation) Act, 1995 and the Cinematograph Act, 1952 regulate digital and OTT platforms under broadcasting and film laws, and Central Board of Film Certification (CBFC) guidelines apply to digital streaming content.
Challenges Facing Content Moderation
1. Overreach and Censorship Issues
There is still fear of government overreach despite having legal frameworks such as the Information Technology (IT) Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The absence of clear procedural protections usually translates to content takedown requests that are arbitrary or politically driven.
- Lack of Transparency: Most content takedown requests are made under legal provisions that do not require public disclosure, which makes it hard to determine if due process was observed.
- Judicial Review and Appeal Mechanisms: There are few opportunities for users or platforms to appeal takedown orders, and judicial review is usually slow and incoherent.
- Political and Ideological Bias: Government orders for content deletion can disproportionately target voices of dissent, leading to charges of political censorship.
- Chilling Effect on Speech: Overbroad enforcement and the threat of punishment chill users from participating in online speech, even when their speech is legal.
2. Enforcement Challenges
Content moderation legislation calls for platforms to comply with takedown requests within strict time frames, but enforcement is filled with challenges.
- Jurisdictional Concerns: International platforms have to deal with several national legislations, resulting in conflicts of compliance. Speech that is illegal in India can be legal elsewhere.
- Volume of Content: The volume of content created every day is so large that manual moderation is not feasible, and automated systems are not always reliable.
- Intermediary Liability: The IT Rules, 2021, place heavy compliance obligations on intermediaries, including proactive monitoring and grievance redressal mechanisms, which can be resource-intensive.
- Local Grievance Officers: Grievance officers with a local presence in India need to be appointed by large platforms, but this has been criticized for creating liability risks and operational infeasibility.
3. Misinformation and Fake News
Misinformation remains a chronic issue, compounded by the limitations of current moderation technologies.
- Issues with AI-Based Detection: AI-based detection tools often struggle to distinguish between satire, opinion, and intentional disinformation.
- Deepfakes and Altered Media: As AI-generated content improves, it becomes more difficult to distinguish between fake and real news, necessitating frequent updating of moderation methods.
- Legal and Ethical Issues: While the IT Rules legally require the taking down of fake news, categorizing and labeling “false information” remains problematic, since the label can be employed to quash unfavorable narratives.
- Rapid Dissemination through Encrypted Platforms: End-to-end encrypted messaging apps like WhatsApp and Signal restrict the capacity of moderators to monitor and suppress misinformation.
4. Effect on Free Speech
Ambiguities in legal definitions tend to lead to over-restrictions on online expression.
- Vague Legal Standards: Legal provisions containing terms such as “offensive,” “harmful,” or “misleading” are subject to wide interpretation, which results in inconsistent enforcement.
- Overblocking and Self-Censorship: Platforms can be overly cautious and remove legal content to sidestep legal exposure, stifling legitimate debate.
- Effect on Journalists and Activists: Government-ordered takedowns hit investigative journalism and human rights campaigning disproportionately hard, constraining accountability of authorities.
- International Implications: India’s content moderation practice sets the pace for regulation across the globe, informing global discourses on digital rights and platform regulation.[5]
In MouthShut.com v. Union of India,[6] the consumer review platform MouthShut.com challenged the constitutional validity of certain provisions of the IT Act and the Intermediary Guidelines, arguing that they imposed onerous obligations on intermediaries and curtailed free speech. The Supreme Court’s decision in Shreya Singhal[7] addressed these concerns by reading down Section 79 and emphasizing that intermediaries are not liable for user-generated content unless they fail to act upon receiving a court order or government directive.
In Shreya Singhal v. Union of India,[8] the landmark judgment addressed the constitutionality of Section 66A of the Information Technology Act, 2000, which criminalized sending offensive messages through communication services. The Supreme Court struck down Section 66A, deeming it vague and violative of the right to freedom of speech and expression under Article 19(1)(a) of the Constitution. Additionally, the Court read down Section 79, clarifying that intermediaries would only lose their “safe harbor” immunity if they failed to remove unlawful content after receiving a court order or government notification.
In S. Muthukumar v. Telecom Regulatory Authority of India & Ors.,[9] the Madras High Court initially ordered a ban on the TikTok app, citing concerns over inappropriate content and child safety. However, this interim order was later reversed, highlighting the judiciary’s role in balancing content regulation with technological innovation and user access.
In Google India Private Limited v. Visakha Industries,[10] the Supreme Court held that intermediaries could be held liable for defamatory content posted by users if they failed to act upon receiving notice, thereby clarifying the extent of “safe harbor” protections under Section 79 of the IT Act.
In Kunal Kamra v. Union of India,[11] comedian Kunal Kamra filed a petition challenging the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, arguing that they imposed excessive control over digital news media and online curated content, thereby infringing upon freedom of speech. The case highlighted concerns regarding governmental overreach in content moderation and the potential chilling effect on free expression. Following the split verdict, the matter was referred to a third judge, Justice A.S. Chandurkar, to resolve the differing opinions. On September 20, 2024, Justice Chandurkar concurred with Justice G.S. Patel’s view, declaring the amendment to Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023, unconstitutional. Subsequently, on September 26, 2024, a bench comprising Justice A.S. Gadkari and Justice Neela Gokhale formally struck down the impugned rule by a majority of 2:1, reinforcing the decision that the amendment violated Articles 14 and 19 of the Indian Constitution.
In Apoorva Arora & Anr. v. State & Anr.,[12] the Supreme Court quashed the FIR against the makers and actors of ‘College Romance’, emphasizing that vulgarity and profanities do not per se amount to obscenity.
In March 2024, the Ministry of Information and Broadcasting (I&B) blocked 18 OTT platforms for publishing obscene and vulgar content. This action underscored the government’s commitment to regulating digital content and ensuring adherence to established guidelines.[13] Following concerns over vulgar content, the I&B Ministry issued advisories to OTT platforms, emphasizing the need to adhere to the ethics code and avoid transmitting prohibited content. This move came after the Supreme Court called for regulating obscene content on platforms like YouTube.[14] These actions followed multiple FIRs and petitions filed against shows and web series such as Mirzapur and Tandav[15] for obscenity and for hurting religious sentiments. Ekta Kapoor was also served with an FIR over allegations of disrespecting Indian soldiers and their families in her productions, with complaints accusing her content of portraying the armed forces in a negative light and leading to police investigations.[16] Furthermore, a First Information Report (FIR) was filed against Ekta Kapoor, her mother Shobha Kapoor, and Balaji Telefilms Limited under sections of the Protection of Children from Sexual Offences (POCSO) Act, the Information Technology Act, and Section 295-A of the Indian Penal Code.[17]
Thus it can be seen that platforms like YouTube, Instagram, and Facebook try to curb and control the content made available for consumption, and take the necessary action when inappropriate content is found or is flagged by users.
In the very recent Ranveer Allahabadia controversy, the Supreme Court directed the podcaster to halt all shows following charges of obscenity related to comments made during an online comedy show. This incident has sparked discussions on social media regulation and the boundaries of content moderation in India.[18] In an even more recent episode, Meta’s Instagram feeds were flooded with sensitive and NSFW content, which drew worldwide criticism.[19]
Balancing Free Speech and Content Moderation in India
As digital platforms expand at a fast rate and online discussions gain greater influence, content moderation has become an essential matter in India. While regulation is needed to curb false information, hate speech, and illegal material, over-censorship or arbitrary interference can undermine the fundamental right to free speech enshrined under Article 19(1)(a) of the Indian Constitution. A well-established framework is needed to strike a balance between regulation and free speech.
1. Regulating and Protecting Free Speech
One of the key issues with content moderation is how to ensure that regulatory practices do not stifle legitimate speech.
- Need for Well-Specified Criteria: Take-down policies need to be anchored in clearly defined legal provisions instead of imprecise or subjective criteria. Excessively wide categories like “anti-national content” or “issues of public order” have to be closely examined to avert abuse.[20]
- Reducing Government Overreach: Government orders for takedowns must be done with due process, avoiding politically driven censorship. The legal system must guarantee that only content directly breaking laws, like child pornography, terrorism propaganda, or hate speech promoting violence, is taken down.
- Global Best Practices: Nations like the United States depend upon the First Amendment model, while the European Union’s Digital Services Act (DSA) requires proportionate and transparent content moderation. India can take a hybrid approach that guarantees compliance and protection of free speech.[21]
2. Transparency in Content Removal Decisions
Transparency is one of the major drivers for ensuring fairness and accountability in content moderation.
- Disclosure of Takedown Orders: Websites should be made to give a transparent reason when they take down content, specifying relevant legal provisions or policy infractions. The government could also keep a public archive of takedown requests in order to be held accountable.
- Notifying Users: Users whose posts have been taken down need to be notified of the rationale behind the action taken and the law on which it is based. This ensures moderation actions are not arbitrary.
- Periodic Transparency Reports: Social media sites ought to issue periodic reports listing the amount and type of government takedown requests, along with statistics on appeals and reversals. This practice is already followed by entities like Meta and Twitter but should be made obligatory for all platforms operating in India.
3. Bolstered User Rights Protections
Users need mechanisms to appeal unfounded content removal and safeguard their rights.
- Right to Appeal: There must be a formal mechanism enabling users to appeal against mistaken takedowns, with the platforms being bound to examine and respond within a specific time period.
- Independent Grievance Mechanisms: The IT Rules, 2021, established a Grievance Appellate Committee (GAC), giving users a forum to challenge moderation decisions. Nevertheless, the government’s role in the body still raises doubts about its independence. An independent appellate body under judicial supervision would be more equitable.
- Non-Discriminatory Moderation: Platforms need to ensure that content removal policies are enforced fairly, without discriminating against political or ideological viewpoints.
4. Improved AI-Based Moderation Tools
Artificial Intelligence (AI) is widely used for content moderation but often lacks contextual and cultural sensitivity.
- Removing Biases in AI: AI-driven moderation tends to mirror biases inherent in training material, resulting in unjust takedowns, particularly in multicultural linguistic environments such as India. It is important for platforms to invest in training models on regional content to increase precision.
- Hybrid Approach of AI and Human Review: Although AI can filter content at scale, human moderators must review intricate cases in which context matters (a minimal sketch follows this list).
- Adaptability to Regional Languages: India has more than 20 major languages, which makes effective moderation difficult. AI systems should be better calibrated to recognize hate speech, false information, and sarcasm across languages and dialects.[22]
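As a rough illustration of the hybrid approach mentioned above, a moderation pipeline might auto-remove only high-confidence violations, auto-approve clearly benign posts, and route the uncertain middle band to human reviewers. The sketch below is an assumption-laden toy: the thresholds are arbitrary, and the “classifier” is a stub standing in for a real multilingual model, not any platform’s actual configuration.

```python
# Illustrative hybrid AI + human review routing. The "classifier" here is a stub;
# real systems would use trained multilingual models. Thresholds and labels are
# assumptions chosen only to show the routing logic.

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical: very confident violation -> removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: uncertain band -> sent to a human moderator

def classify_violation_probability(text: str) -> float:
    """Stub for an ML model scoring how likely the text violates policy (0.0 to 1.0)."""
    # Placeholder heuristic standing in for a real model.
    return 0.99 if "example_banned_phrase" in text.lower() else 0.1

def route_content(text: str) -> str:
    score = classify_violation_probability(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"        # clear violation: removed, logged for possible appeal
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"       # context-dependent case, e.g. satire vs. hate speech
    return "auto_approve"           # clearly benign: published without review

print(route_content("an ordinary post"))                 # -> auto_approve
print(route_content("text with example_banned_phrase"))  # -> auto_remove
```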
5. Judicial Control
The judiciary has a greater role to play in actively reviewing takedown orders to confirm their compliance with constitutional principles.
- Independent Review of Takedown Requests: Takedown orders initiated by the government must be subject to judicial review to avoid abuse. The Supreme Court and High Courts have intervened in cases of unjust censorship from time to time, but a formal mechanism for legal review is necessary.
- Aligning with Constitutional Principles: Article 19(2) permits reasonable restrictions on free speech in the interest of public order, defamation, and national security. Courts must ensure that takedown orders are within these limitations and do not overstep their legal mandate.
- Preventing Executive Overreach: Judicial review can serve as a check on executive overreach in content regulation. Creating a specialized digital rights tribunal could offer a dedicated forum for resolving disputes over online censorship.[23]
Conclusion
Content moderation in India is regulated by a multifaceted legal framework that attempts to balance digital rights, platform responsibility, and state action. Although the IT Act, 2000, the DPDP Act, 2023, and other legislation offer necessary protections, there are challenges in maintaining a balanced, transparent, and equitable approach. Finding the appropriate balance between regulation and freedom of expression is critical to developing a healthy digital democracy in India.
[1] We Are Social & Hootsuite, Digital population across India as of January 2024, by type (in millions), Statista, https://www.statista.com/statistics/309866/india-digital-population-by-type/ (last visited Mar. 2, 2025).
[2] Ministry of Commerce & Industry, IndBiz – Investment and Business Opportunities in India, IndBiz, https://indbiz.gov.in/ (last visited Mar. 2, 2025).
[3] John Doe, Content Moderation in the Digital Age: Policies and Practices, Tech L.J. (Aug. 15, 2023), https://www.techlawjournal.com/content-moderation (last visited Mar. 2, 2025).
[4] Janjira Sombatpoonsiri & Sangeeta Mahapatra, Regulation or Repression? Government Influence on Political Content Moderation in India and Thailand, Carnegie Endowment for International Peace (July 31, 2024), https://carnegieendowment.org/research/2024/07/india-thailand-social-media-moderation.
[5] Akriti Gaur, Towards Policy and Regulatory Approaches for Combating Misinformation in India, Yale Law School: Wikimedia Initiative on Intermediaries and Information (Mar. 2, 2021), https://law.yale.edu/isp/initiatives/wikimedia-initiative-intermediaries-and-information/wiii-blog/towards-policy-and-regulatory-approaches-combating-misinformation-india.
[6] MouthShut.com (India) Pvt. Ltd. v. Union of India, (2023) 5 S.C.C. 602 (India).
[7] Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
[8] Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
[9] S. Muthukumar v. Telecom Regulatory Authority of India & Ors., W.P. (MD) No. 7855 of 2019 (Madras HC Apr. 24, 2019) (India).
[10] Google India Pvt. Ltd. v. Visakha Indus., 2019 SCC OnLine SC 1587 (India).
[11] Kunal Kamra v. Union of India, W.P. (L) No. 9792 of 2023 (Bom. HC Jan. 31, 2024) (India).
[12] Apoorva Arora & Anr. v. State (Gov’t of NCT of Delhi) & Anr., 2024 INSC 223 (India).
[13] Neeraj Soni, Ministry of I&B Takes Action Against Obscene Content; Shuts Down 18 OTT Platforms, CyberPeace (Mar. 14, 2024), https://www.cyberpeace.org/resources/blogs/ministry-of-i-b-takes-action-against-obscene-content-shuts-down-18-ott-platforms (last visited Mar. 2, 2025).
[14] Don’t Transmit Vulgar Content, Follow Ethics: Govt to OTT Platforms, Tribune News Service (Feb. 21, 2025), https://www.tribuneindia.com/news/india/dont-transmit-vulgar-content-follow-ethics-govt-to-ott-platforms/ (last visited Mar. 2, 2025).
[15] BJP MP Demands Action Against Makers of Mirzapur Web Series, NDTV (Jan. 24, 2021), https://www.ndtv.com/india-news/bjp-mp-demands-actions-against-makers-of-mirzapur-web-series-2355612 (last visited Mar. 2, 2025).
[16] HT News Desk, Ekta Kapoor Faces Police Probe over Allegations of Disrespecting Indian Soldiers, Hindustan Times (Feb. 15, 2025, 10:47 PM), https://www.hindustantimes.com/india-news/ekta-kapoor-faces-police-probe-over-allegations-of-disrespecting-indian-soldiers-101739637889580.html (last visited Mar. 2, 2025).
[17] ANI, Ekta Kapoor, Mother Shobha Booked Under POCSO Act Over Objectionable Scenes in Alt Balaji’s Web Series ‘Gandi Baat’, The Economic Times (Oct. 21, 2024, 9:42 AM), https://economictimes.indiatimes.com/magazines/panache/ekta-kapoor-mother-shobha-booked-under-pocso-act-over-objectionable-scenes-in-alt-balajis-web-series-gandi-baat/articleshow/114412619.cms (last visited Mar. 2, 2025).
[18] Ranveer Gautam Allahabadia v. Union of India, W.P.(Crl.) No. 83/2025 (India).
[19] Is Your Instagram Feed Full of Sensitive and Violent Content? Here’s What’s Happening, Times of India (Mar. 2, 2024), https://timesofindia.indiatimes.com/etimes/trending/is-your-instagram-feed-full-of-sensitive-and-violent-content-heres-whats-happening/articleshow/118632965.cms (last visited Mar. 2, 2025).
[20] Ajay Kumar, Digital Media Regulations in India: Some Reflections, ResearchGate (2022), https://www.researchgate.net/publication/359820544_Digital_Media_Regulations_in_India_Some_Reflections.
[21] Sana Ahmad & Martin Krzywdzinski, Moderating in Obscurity: How Indian Content Moderators Work in Global Content Moderation Value Chains, in Digital Work in the Planetary Market 77 (Mark Graham & Fabian Ferrari eds., 2022), https://www.econstor.eu/bitstream/10419/256896/1/Full-text-chapter-Ahmad-et-al-Moderating-in-obscurity.pdf.
[22] Jhallak Shashank & Vasudev Devadasan, Digital Public Infrastructure & Data Governance, Digital Planet, Tufts Univ. (Nov. 30, 2022), https://digitalplanet.tufts.edu/wp-content/uploads/2023/02/DD-Report_4-JhallakShashank-Vasudev-11.30.22.pd.
[23] Giancarlo F. Frosio, Internet Intermediary Liability: WILMap, Theory and Trends, 13 Ind. J.L. & Tech. 1 (2017), https://repository.nls.ac.in/ijlt/vol13/iss1/2/.
This article has been written by Ashutosh Bhagchi. For any other queries, reach out to us at: queries.ylcc@gmail.com