India's new AI content rules: what they mean for social media platforms and users

The government has tightened the rules on how AI-generated content can appear on social media, putting new legal obligations on platforms to identify, label and, in some cases, block such material.

The changes, notified by the Ministry of Electronics and Information Technology (MeitY) as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, and effective from February 20, bring “synthetically generated information” into the compliance net.

Here is what the changes mean and why they matter.

What counts as AI-generated under the new rules?

The rules target synthetically generated content: audio, video or images created or altered using AI in a way that looks real and can mislead people.

The government has clarified that basic edits such as cropping, colour correction, translation, formatting, subtitles or accessibility improvements are not deepfakes. The focus is on fake or deceptive content that can cause harm.

What must social media companies do now?

Platforms like Facebook, Instagram, X, YouTube and others will have to be more proactive. They must now:

  • Ask users to declare if the content is AI-generated
  • Use technology to check if that declaration is correct
  • Clearly label AI-generated content before showing it to others
  • Add permanent markers or metadata to trace where the content came from
  • Ensure that labels or markers cannot be removed or hidden

If a platform knowingly allows harmful AI content or fails to act, it can be treated as not following due diligence rules.

What kind of AI content is not allowed?

Platforms must stop AI tools from being used to create or share content that:

  • Includes child sexual abuse material or non-consensual intimate images
  • Creates fake documents or false records
  • Relates to weapons, explosives or ammunition
  • Falsely shows a real person saying or doing something they never did

This directly targets deepfake videos, fake voice clips and misleading AI visuals.

What happens if users break the rules?

Users can face:

  • Immediate removal of the content
  • Suspension or termination of their accounts
  • Disclosure of identity to victims in some cases
  • Reporting to law enforcement agencies if the content is criminal

Platforms must also remind users of these consequences at least once every three months.

Are platforms required to act faster now?

Yes. The timelines are much tighter:

  • Orders from the government or police must be acted on within three hours
  • Grievance and takedown processes have been sped up

What does this mean for everyday users?

For users, this means:

  • It will be easier to spot AI-generated content
  • Less chance of being fooled by deepfakes
  • More responsibility when posting or sharing AI-created material

Uploading or sharing fake AI content can now clearly lead to penalties.

Why now?

India is one of the world’s largest social media markets and a testing ground for generative AI products. As AI tools become common, the government wants to stop misuse before it spreads widely on social media. 

The amendments come amid a surge in deepfakes and AI-generated content flooding the internet, from cloned celebrity voices and fabricated political videos to non-consensual intimate imagery.  

