Journalism begins where hype ends

"By far, the greatest danger of AI is that people conclude too early that they understand it."

— Eliezer Yudkowsky

Spot the AI: India Plans Tougher Rules on Synthetic Content

India's MeitY proposes stricter rules for AI-generated synthetic content
April 22, 2026 03:43 PM IST | Written by Neelam Sharma | Edited by Vaibhav Jha

India’s Ministry of Electronics and Information Technology (MeitY) has proposed stricter regulations on the misuse of artificial intelligence, under which all AI-generated content must carry a label that remains visible for the entire duration of the content.

No more quick disclaimers that flash and disappear. If something is created or altered using AI, viewers should know it from start to finish, the IT ministry said.

The latest amendment proposal is a change to the Information Technology Rules, 2021, which earlier only required “prominent visibility” of such labels. In simple terms, platforms could show a warning briefly and still be compliant. Now, the government wants that warning to stick around the whole time, making it impossible to miss.

The latest proposal also adds mandatory compliance requirements for independent news creators to the existing draft amendments, bringing such creators under the Centre’s purview. With the new additional amendment now in the public domain, the deadline for stakeholders’ comments has been extended from April 29, 2026 to a revised date of May 7, 2026.

Among the existing draft amendments is the following provision: “Rule 3(3)(a)(ii) – A label for all synthetically generated information is to be continuously displayed on-screen and clearly visible for the duration of the content in question,” states the IT Ministry.

The IT Ministry said the new addition to the draft amendments is intended to strengthen the consultative process and give stakeholders the opportunity to review each set of proposed amendments.

Why the stricter approach? Because AI-generated videos, deepfakes, and synthetic media are becoming harder to spot. From fake speeches to edited clips, misleading content can spread quickly, especially on social media. Continuous labels are meant to make things clearer for everyday users.

The proposal doesn’t just target big tech platforms. It could also apply to influencers, creators, and anyone sharing AI-made or AI-edited content, especially if it relates to news or current affairs.

By extending the public feedback deadline to May 7, 2026, the government is giving companies, experts, and everyday users a further opportunity to make submissions before the rules are finalized.

Also Read: U.S. Supreme Court Declines Review, No Copyright Protection for AI-Generated Works

Authors

  • Neelam Sharma

Neelam Sharma is a passionate storyteller and journalist with over a decade of experience across leading Indian media houses.
    Known for her calm presence on screen and powerful storytelling off it, Neelam brings a rare blend of credibility, creativity, and empathy to journalism. Her strength lies in ground reporting and research-driven narratives that connect with the heart of the audience. Whether covering social issues, human-interest features, or breaking news, she combines factual depth with a human touch, making every story both informative and engaging.

  • Vaibhav Jha

Vaibhav Jha is an Editor and Co-founder of AI FrontPage. In his decade-long career in journalism, Vaibhav has reported for publications including The Indian Express, Hindustan Times, and The New York Times, covering the intersection of technology, policy, and society. Outside work, he’s usually trying to persuade people to watch Anurag Kashyap films.