New Era of AI Governance Dawns in India
India has ushered in a new era of digital governance with the enforcement of stringent regulations aimed at controlling the proliferation of artificial intelligence-generated content. Effective February 20, 2026, amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, now explicitly bring "synthetically generated information" under the country's intermediary framework. This move signifies a proactive stance by the government to address growing concerns surrounding deepfakes, impersonation, and digitally supercharged misinformation. The Ministry of Electronics and Information Technology (MeitY) has mandated that all AI-generated content, whether audio, visual, or audio-visual, must carry clear and persistent labels. Platforms are further required to embed permanent provenance metadata to trace the origin of such content, a measure designed to combat disinformation campaigns. These regulations apply to major social media platforms such as Instagram, YouTube, and Facebook, compelling them to implement automated tools for verifying user declarations about AI-generated content. The government's objective is to foster greater transparency and accountability within India's burgeoning digital ecosystem.
Accelerated Takedown Timelines and Platform Accountability
A cornerstone of the new regulations is the drastic reduction in the time platforms have to remove flagged content. Previously operating under a 36-hour window, social media intermediaries now have just three hours to take down AI-generated content and deepfakes upon notification from authorities. For particularly egregious violations, such as non-consensual intimate deepfakes or deceptive impersonation, this timeline reportedly shrinks to two hours. Failure to comply with these accelerated takedown mandates can result in the immediate loss of safe-harbor protection for platforms, a significant punitive measure. The government has also granted itself more direct blocking powers, allowing MeitY to issue legally binding directions to platforms without requiring a prior public complaint. The Advertising Standards Council of India (ASCI) has likewise released draft guidelines, aligned with these IT rules, for the responsible labeling of AI-generated advertising content, introducing a risk-based classification system for advertisements. These guidelines, open for consultation until June 13, 2026, focus on consumer outcomes and classify advertising content as high-risk (prohibited), medium-risk (mandatory labeling), or low-risk (no disclosure needed).
User Responsibilities and Industry Concerns
The new regulatory framework extends obligations not only to platforms but also to individual users. Users uploading synthetic media may be required to declare its AI-generated nature, with penalties for misrepresentation that can include account termination or reporting to law enforcement. The aim is to increase user awareness and responsibility in the digital space. However, industry bodies such as Nasscom and digital rights groups have raised concerns, arguing that the draft IT second amendment rules might create obligations outside the existing IT Act's framework and could lead to over-censorship. They emphasize that advisories should not bypass parliamentary review, and that extending the rules' scope to ordinary users and AI-generated content presents significant technical and legal challenges, particularly around keeping labels visible once content is edited or reposted across platforms. The Internet Freedom Foundation (IFF) has highlighted the technical difficulty of maintaining label integrity across varied sharing and editing processes. In seeking to curb misuse, the government is navigating a delicate balance between promoting AI innovation and ensuring robust safeguards against its potential harms.
