The Central government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to formally bring AI-generated content within India's intermediary regulation framework. The amendments were notified by the Union Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and will come into force on February 20.
The amendments have been issued under the Centre's rule-making powers under the Information Technology Act, 2000. They clarify that "information" used for unlawful acts under the IT Rules includes synthetically generated information, extending intermediary due-diligence, takedown and enforcement obligations to AI-generated content.
The changes also strengthen the user-notification obligations of intermediaries. Platforms are now required to periodically inform users, at least once every three months, that non-compliance with platform rules, privacy policies or user agreements may result in immediate suspension or termination of access, removal of content or both.
The Rules further require intermediaries to warn users that unlawful activity may attract penalties or punishment under applicable laws and that offences requiring mandatory reporting, including those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences (POCSO) Act, will be reported to the appropriate authorities.
Under the amended rules, AI-generated content is regulated through the newly inserted definition of “synthetically generated information.” The term covers audio-visual content that is artificially or algorithmically created, generated, modified or altered using computer resources in a manner that appears real or authentic and is likely to be perceived as indistinguishable from a natural person or real-world event.
The notification clarifies that routine or good-faith activities such as editing, formatting, transcription, translation, accessibility improvements, educational and training materials, and research outputs will not fall within the scope of synthetically generated information, provided they do not result in false or misleading electronic records.
Significant social media intermediaries must require users to declare whether content is AI-generated before it is displayed, uploaded or published on their platforms. Platforms must also deploy appropriate technical measures, including automated tools, to verify the accuracy of such declarations. Where content is confirmed to be AI-generated, it must be displayed with a clear and prominent notice indicating its synthetic nature.
The amendments also significantly tighten multiple enforcement and compliance timelines under the IT Rules, accelerating the pace at which intermediaries are required to act on unlawful content and user grievances.
The amendments also clarify that intermediaries must act expeditiously when they become aware of violations involving synthetically generated information, whether on their own or upon receipt of a complaint. Such action may include disabling access to the content, suspending user accounts and reporting the matter to the appropriate authorities where required by law.