New IT Rules Aim to Tackle Deepfakes in India
The Indian government is introducing new Information Technology (IT) rules aimed at addressing the growing misuse of AI-generated content, particularly deepfakes. The proposed rules would require social media platforms such as YouTube and Instagram to clearly label AI-generated content. The initiative responds to concerns about the potential harms of deepfakes, which can misrepresent individuals and damage reputations.
Under the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, social media platforms will need to obtain a declaration from users stating whether the content they upload is synthetically generated. When users post videos or images created with AI tools, they must indicate this clearly, and the platforms will be responsible for verifying those declarations.
One of the notable requirements is that AI-generated content must be prominently labeled. For visual content, the label must cover at least 10% of the total surface area; for audio content, it must occupy the initial 10% of the duration. This ensures that users are immediately aware of what they are consuming, enabling them to make informed decisions.
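The 10% thresholds described above can be made concrete with a short, illustrative sketch (these helper functions are hypothetical, not part of any official compliance tooling) that computes the minimum label size for a video frame and the minimum label duration for an audio clip:

```python
# Illustrative sketch of the draft rules' 10% labeling thresholds.
# Function names and signatures are assumptions for demonstration only.

def min_visual_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area in pixels: at least 10% of the frame's surface area."""
    return 0.10 * width_px * height_px

def min_audio_label_seconds(duration_s: float) -> float:
    """Minimum label duration: the initial 10% of the clip's length."""
    return 0.10 * duration_s

# Example: a 1920x1080 video frame and a 60-second audio clip.
print(min_visual_label_area(1920, 1080))  # 207360.0 square pixels
print(min_audio_label_seconds(60.0))      # 6.0 seconds
```

For a full-HD frame, the label would need to cover roughly 207,000 square pixels (for instance, a band about 108 pixels tall across the full 1920-pixel width), and a one-minute audio clip would need the disclosure in its first six seconds.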
IT Minister Ashwini Vaishnaw highlighted the importance of transparency in content creation, noting that the public has raised concerns about the impact of deepfakes on personal privacy and the spread of misleading information. The government's action seeks to equip users to know whether the content they encounter is real or synthetic.
The proposed changes come in the wake of various incidents, including a viral deepfake video involving a popular Indian actress. Earlier, Prime Minister Narendra Modi had also referred to deepfakes as a new crisis that needs to be addressed. By implementing these rules, the government aims to create a safer digital environment.
The government is accepting feedback on the draft amendments until November 6. If finalized, the regulations would not only enhance user awareness but also place significant responsibilities on social media platforms. Failure to comply could cost platforms the legal protections that currently shield them from liability for user-generated content.
In a global context, the move aligns with similar regulations emerging in other countries, such as China, which has implemented its own AI-content labeling rules. As the technology continues to evolve, such measures are vital for maintaining integrity and trust in digital communications.