India's New AI Content Labelling Rules
In an effort to combat the increasing misuse of AI-generated content, the Indian government has proposed new rules requiring social media platforms to label synthetic information clearly. This initiative aims to ensure that users declare whether the content they upload is generated by artificial intelligence, promoting transparency on platforms such as YouTube and Instagram.
As deepfake incidents rise, these regulations seek to protect individuals and society from the adverse effects of misinformation. IT Minister Ashwini Vaishnaw highlighted the importance of distinguishing between real and synthetic content, allowing users to make informed decisions in a democratic environment.
The draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate that platforms that facilitate the creation of AI content ensure proper labelling. For instance, a deepfake video on YouTube would carry two labels—one embedded within the video itself and another displayed on the platform page.
The IT Ministry emphasized that incidents of deepfake audio and videos going viral demonstrate the potential harms of generative AI. These falsehoods can misrepresent individuals, leading to significant reputational damage or misinformation that could influence elections and financial decisions.
According to the proposed regulations, social media platforms must require users to declare if their uploaded content is synthetically generated. They must also implement technical measures to verify these declarations. If a platform fails to comply, it risks losing the legal immunity that protects it from third-party content liabilities.
The new amendments define “synthetically generated information” as content that is artificially created or modified using computer resources in a way that makes it appear authentic. The definition is deliberately broad, so that the full range of AI-generated content falls within its scope.
With the deadline for public feedback on these draft amendments set for November 6, 2025, the government is open to suggestions to refine these regulations. This initiative aligns with a global trend of implementing AI labelling rules, as observed in countries such as China, which recently enforced similar content labelling requirements.
In India, the issue of deepfakes gained widespread attention in November 2023, when a fabricated video of actress Rashmika Mandanna went viral. Following this, Prime Minister Narendra Modi stated that deepfakes pose a new crisis, underscoring the urgency of regulations to address such challenges effectively.