Centre proposes mandatory labelling of AI-generated content to curb deepfakes

New Delhi, Oct 23: The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology Rules that would make clear labelling of content generated through artificial intelligence (AI) mandatory, LiveLaw.in reported.

The proposed changes aim to strengthen accountability among major social media platforms such as Facebook, YouTube, and X (formerly Twitter) to tackle the growing threat of deepfakes and deceptive digital content. According to the draft rules released on Thursday, the amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 introduce the concept of “synthetically generated information.” This refers to content that is artificially or algorithmically created, modified, or altered using a computer resource in a manner that makes it appear authentic.

The Ministry warned that deepfake audio and video content, as well as other misleading material circulating online, can cause serious harm, including damaging reputations, influencing elections, and enabling financial fraud. Under the proposed framework, intermediaries that provide tools or resources to create or modify synthetic content will be required to clearly label such information or embed a unique metadata identifier revealing its synthetic nature. For visual content, the label must cover at least 10% of the display surface, while for audio, it must be declared during at least 10% of the duration. Intermediaries must also ensure that such labelling or metadata cannot be removed or suppressed.
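As a rough illustration only: the 10% thresholds described above lend themselves to a simple programmatic check. The sketch below (function names, parameters, and the pixel/second inputs are assumptions for illustration, not anything specified in the draft rules) computes a label's share of the display area and of an audio clip's duration.

```python
def meets_visual_label_rule(label_width_px: int, label_height_px: int,
                            frame_width_px: int, frame_height_px: int,
                            min_fraction: float = 0.10) -> bool:
    """Check whether a visible label covers at least the required
    fraction (10% under the draft rules) of the display surface."""
    label_area = label_width_px * label_height_px
    frame_area = frame_width_px * frame_height_px
    return frame_area > 0 and label_area / frame_area >= min_fraction


def meets_audio_label_rule(disclosure_seconds: float,
                           total_seconds: float,
                           min_fraction: float = 0.10) -> bool:
    """Check whether an audible disclosure spans at least the required
    fraction (10% under the draft rules) of the clip's duration."""
    return total_seconds > 0 and disclosure_seconds / total_seconds >= min_fraction


if __name__ == "__main__":
    # A 1920x1080 frame with a 640x360 label covers about 11% of the area.
    print(meets_visual_label_rule(640, 360, 1920, 1080))   # True
    # A 3-second disclosure in a 60-second clip covers only 5% of the duration.
    print(meets_audio_label_rule(3.0, 60.0))                # False
```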

Significant social media intermediaries (SSMIs) like Meta, X, and YouTube will be required to obtain a declaration from users at the time of uploading content, confirming whether the material is synthetically generated.

They must also employ “reasonable and appropriate technical measures,” including automated tools, to verify these declarations. If synthetic content is detected, platforms must display a clear label or notice indicating that it has been algorithmically generated. The draft rules specify that intermediaries may be held liable for due diligence violations if they knowingly permit, promote, or fail to act upon misleading synthetic content.

The proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 are currently open for public consultation. Feedback and suggestions can be submitted via email to itrules.consultation@meity.gov.in by November 6, 2025, before the amendments are finalised.
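As a purely illustrative aside, the declaration-plus-verification step described above could take a shape like the following minimal sketch. The class, field names, detector, and threshold are hypothetical placeholders chosen for the example; the draft rules do not prescribe any particular tool or cut-off.

```python
from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # hypothetical classifier output in [0, 1]


def label_decision(upload: Upload, threshold: float = 0.5) -> dict:
    """Decide whether an upload should carry a 'synthetically generated'
    notice, combining the user's declaration with an automated check."""
    flagged_by_detector = upload.detector_score >= threshold
    must_label = upload.user_declared_synthetic or flagged_by_detector
    return {
        "content_id": upload.content_id,
        "show_synthetic_notice": must_label,
        # A mismatch (user declared 'not synthetic' but the detector disagrees)
        # is the situation the verification requirement is aimed at.
        "declaration_mismatch": flagged_by_detector and not upload.user_declared_synthetic,
    }


if __name__ == "__main__":
    print(label_decision(Upload("vid-001", user_declared_synthetic=False, detector_score=0.82)))
```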
