New AI Content Rules Start Today

MySandesh

The central government has amended the rules governing AI-generated content. The new rules came into force on 20 February 2026.

Now, sharing AI-generated content on social media or the internet without proper labeling can lead to penalties.

The Ministry of Electronics and Information Technology (MeitY) issued a notification about the rule on 10 February 2026. The change has been made by amending the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

In the amendment, the government has clearly defined synthetically generated or AI-generated content and has also laid down the responsibilities of social media platforms.

The rules also explain how action can be taken against users who share such content.

During the AI Impact Summit held at Bharat Mandapam in New Delhi, Prime Minister Narendra Modi also spoke about AI safety. He said that deepfakes and fake content are affecting society.

He stressed the need for watermarking and clear source identification for such content. He also called for stronger vigilance regarding child safety online.

What is Synthetically Generated Information (SGI)?

According to the new rules, any digital content created or altered using AI or computer software will be treated as Synthetically Generated Information (SGI) if it appears to depict a real person, event, or place.

Such content must be labeled or watermarked before being shared online so that people can clearly see it is AI-generated. Ordinary photos or videos with only basic editing, however, will not be considered SGI and do not need labeling or watermarking.

Three Major Changes in the New Rules

The government has taken a strict stand against deepfake images and videos and introduced three major changes:

Mandatory labeling of AI content
Any AI-generated image or video must be labeled before sharing. Once labeled as AI, the tag cannot be removed.

Verification tools by social media platforms
Platforms must develop tools to verify whether uploaded content is AI-generated so that unverified AI content cannot be shared.

Regular user warnings
Social media companies must warn users every three months that misuse of AI may lead to fines or legal penalties. This will work as an awareness campaign.

Apart from these changes, the government has also placed some content in a No-Go zone, including:

Child sexual abuse material

Fake documents or electronic records

Information related to weapons and ammunition

Deepfake photos and videos

Responsibility of Social Media Platforms Increased

MeitY has increased the accountability of social media platforms. If authorities order the removal of any content, platforms must remove it within 3 hours, down from the earlier limit of 36 hours.

Platforms have also been asked to embed technical identifiers in AI content so that the platform or tool that created it can be traced.

Immediate action rules have been applied to violent and sexually exploitative content involving children, and the response time for such cases has been reduced to 12 hours.

Provision for Legal Action

The government has also provided for legal action if SGI or AI-generated content rules are violated. Offences will be prosecuted under:

The Indian Penal Code (IPC)

The Bharatiya Nagarik Suraksha Sanhita (BNSS)

The Protection of Children from Sexual Offences (POCSO) Act

At the same time, the government clarified that platforms using automated tools to remove or disable access to SGI content will not lose their safe-harbour protection under Section 79 of the IT Act. Such actions will be treated as compliant with the new regulations.
