Government tightens AI regulations, mandating rapid takedown and clear labelling of deepfake and synthetic content on social media platforms from 20 February 2026.
Science & Tech

New IT Amendment Targets Deepfakes and Misinformation: Centre Tightens Digital Media Norms; Mandates Rapid Takedown and Labelling of AI-Generated Content

India amends its IT Rules to regulate AI-generated content, requiring social media platforms to label synthetic media and remove deepfakes within 2–3 hours from 20 February 2026

Author : NewsGram Desk
Edited by : Dhruv Sharma

Key Points:

Social media platforms must remove illegal AI content and non-consensual deepfakes within 2–3 hours from 20 February 2026.
Platforms must ensure users disclose AI-generated content, prominently label synthetic media, and cannot remove or hide AI labels or metadata.
Non-compliance may lead to loss of safe harbor protection, exposing platforms to civil and criminal liability.

The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to regulate AI-generated content on social media. Under the new framework, which comes into effect on 20 February 2026, social media platforms must remove flagged AI-generated and deepfake content within two to three hours.

Under the amended rules, if a court or an “appropriate government” declares content illegal, it must be taken down within three hours, while sensitive material such as non-consensual nudity and deepfakes must be removed within two hours. Previously, the compliance window was 24–36 hours.

The amendments also introduce a formal definition of “synthetically generated content” and direct prominent labelling of photorealistic AI-generated material. They further prohibit the removal or suppression of AI labels or related metadata once applied to content.

Definition of synthetic content

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetic content as:

Audio, visual or audio-visual information artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears real and depicts any individual or event indistinguishable from a natural person or real-world event.

Officials have clarified that this definition is narrower than the one in the October 2025 draft; routine camera enhancements, such as automatic smartphone touch-ups, are now excluded.

Mandatory Disclosure and Labelling

Users must disclose AI-generated content before uploading it to social media. If a user fails to disclose the synthetic origin, the platform must either label the content itself or, in cases involving non-consensual deepfakes, remove it.

Following industry feedback, the final rules give platforms flexibility in how labels are presented; the October 2025 draft had proposed a fixed requirement that the label cover 10% of an image, while mandating “prominent” labelling of AI-generated imagery. The rules also close a loophole that previously allowed content to be reposted without its AI labels or metadata: removing or suppressing them is now prohibited.

If a platform fails to comply, it could lose safe harbour protection — the legal immunity that shields platforms from being treated as publishers of user-generated content. Under the IT framework, an intermediary that knowingly permits, promotes, or fails to act against prohibited synthetic content is deemed to have failed its due-diligence obligations, exposing the company to potential civil and criminal liability for user-posted content.

The October 2025 draft had allowed each State to authorize a single officer to issue takedown orders; the final rules permit multiple officers, helping larger States manage higher complaint volumes.

By combining tighter timelines with mandatory labelling, the move aims to curb the rapid spread of AI-generated misinformation, impersonation, and non-consensual imagery, especially during elections and public emergencies. It ranks among the most stringent regulatory frameworks for synthetic media globally.

(SY)
