India Tightens Grip on AI-Generated Content with New Regulations

India has implemented new regulations that mandate clear labeling for all AI-generated content and impose a strict three-hour deadline for social media platforms to remove flagged illegal or deceptive material. These measures aim to curb the misuse of artificial intelligence, particularly in the creation of deepfakes and other harmful synthetic media.
The GreyLens Editorial Team
India has moved decisively to regulate the burgeoning field of artificial intelligence, introducing stringent new rules aimed at curbing the spread of harmful AI-generated content. The Ministry of Electronics and Information Technology (MeitY) has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing synthetically generated information (SGI), including deepfakes and AI-generated audio-visuals, under a formal regulatory framework. These amendments, effective from February 20, 2026, impose significant new obligations on social media platforms and providers of AI tools.

Mandatory Labeling and Rapid Takedowns

A cornerstone of the new regulations is the mandatory labeling of all AI-generated content. Platforms such as Facebook, Instagram, YouTube, and X (formerly Twitter) must now prominently display labels on synthetically generated information, ensuring users can instantly identify such material. Furthermore, the government has set a strict three-hour deadline for these platforms to remove any flagged unlawful content once it is reported by the government or ordered by a court. This rapid response mechanism is designed to significantly curtail the spread of misinformation and malicious content before it can gain traction.

The amendments also introduce a requirement for synthetic material to carry embedded identifiers, ensuring its provenance can be traced. Crucially, platforms are barred from allowing the removal or suppression of these AI labels or associated metadata once they have been applied. This move is intended to enhance transparency and accountability in the digital space, making it more difficult for malicious actors to disguise AI-generated content as authentic.

Proactive Detection and User Warnings

To further combat misuse, social media companies and AI tool providers are required to deploy automated tools to proactively detect and prevent the circulation of illegal, sexually exploitative, or deceptive AI-generated content. This includes content that violates laws related to child exploitation, sexual abuse, false documents, or deceptive portrayals of individuals and events. Platforms offering SGI tools must also provide enhanced disclosures, warning users that the misuse of synthetic content may lead to severe legal consequences, including liability under various statutes.

Users will also receive regular warnings about the repercussions of violating rules related to AI misuse. These warnings must be issued at least once every three months, reinforcing the seriousness of the regulations and the potential penalties for non-compliance. The amendments also shorten user grievance redressal timelines, aiming to provide a more efficient process for addressing complaints.

A Techno-Legal Approach to AI Governance

IT Minister Ashwini Vaishnaw has emphasized the need for a techno-legal approach to tackle harmful AI-generated content, stating that a global consensus is emerging on how AI should be used for good. The new regulations represent India's commitment to stronger governance in the rapidly evolving AI landscape. This comprehensive regulatory overhaul is seen as a crucial step in balancing the innovative potential of AI with the need to protect citizens from its potential harms, aligning India with global trends in AI governance.

The amendments are a direct response to growing concerns over the misuse of AI for creating deepfakes, non-consensual intimate imagery, and other deceptive content online. By mandating labeling, imposing swift takedown obligations, and requiring proactive detection measures, India aims to foster a more responsible and secure digital environment, ensuring that the advancement of AI technology does not come at the expense of public safety and trust. The government's stance signals a broader trend towards increased digital oversight and a willingness to regulate emerging technologies before their harms become entrenched.

This article was researched and written with AI assistance based on publicly available news sources. All content is reviewed for accuracy by The GreyLens editorial team. For corrections or feedback: news@thegreylens.com
