
Artificial Intelligence (AI) Breakthrough

Government to tighten AI labelling rules for social media over ‘unsatisfactory compliance’

Why in the News?

The government’s decision to tighten AI labelling rules marks a clear step-up in digital regulation, triggered by poor compliance from platforms like YouTube, Instagram, and X. Earlier, platforms only needed to show “prominent” labels, but now they must display continuous and clearly visible labels throughout the content, making the rules much stricter. This change is important because cases of harmful AI content, such as deepfake images of women created by X’s Grok, have exposed serious gaps in regulation, raising concerns about privacy, dignity, and large-scale misinformation.

What are the AI Content labelling rules for social media?

  1. The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (effective February 20, 2026), making AI content labelling mandatory on social media platforms. These rules are designed to curb the spread of deepfakes, misinformation, non-consensual intimate imagery, and child sexual abuse material (CSAM).
  2. AI content labelling on social media is the mandatory or voluntary tagging of images, videos, and audio created or altered by artificial intelligence (AI) to distinguish them from human-made content. 
  3. It aims to increase transparency, reduce misinformation (deepfakes), and comply with regulations by using visible labels (e.g., “AI-generated”) or hidden metadata.

Key Features of the Amended IT Rules (2026):

  1. Mandatory Labelling: Social media platforms must prominently label “synthetically generated” or AI-generated images and videos that appear realistic.
  2. User Declaration: Platforms with over five million users must obtain a user declaration for AI-generated content and conduct technical verification before publishing.
  3. Excluded Content: Routine smartphone photo editing, filters, and film special effects are exempt from mandatory labelling.
  4. Permanent Metadata: Platforms must try to embed permanent metadata or watermarks to trace the origin of AI content.
  5. Takedown Timelines:
    1. 2 hours: Non-consensual deepfakes and intimate imagery must be removed within 2 hours of a complaint.
    2. 3 hours: Other illegal content must be removed within 3 hours of a court/government order.
  6. Loss of Safe Harbour: Non-compliance with these rules can result in the loss of safe harbour protection under Section 79 of the IT Act, making platforms liable for the content.

Key Proposed AI Labelling Amendments (April 2026): How do they strengthen the accountability of intermediaries?

  1. Continuous On-Screen Labels: The new proposal mandates that AI labels remain continuously and clearly visible throughout the entire duration of the video or audio content, rather than just in the beginning or occasionally.
  2. Expansion of Scope: The labelling requirement applies to “synthetically generated information” (SGI), which includes text, audio, images, and videos created or altered via AI to appear authentic.
  3. Platform Accountability: Social media intermediaries must ensure these labels are present. Failure to comply could lead to a loss of “safe harbour” protection, meaning platforms could be held liable for user-generated content.
  4. User Responsibilities: Users are required to declare if content is AI-generated upon uploading, which platforms must then verify using “reasonable and proportionate technical measures”.
  5. Stricter Takedown Timelines: The proposal includes a heavily reduced takedown timeline, requiring platforms to remove illegal, non-consensual deepfakes within 2 to 3 hours of a lawful order.
  6. Feedback Deadline Extended: The deadline for public feedback on these proposed changes has been extended to May 7, 2026. 

These moves, which follow the initial rules notified in February 2026, are designed to combat the rising misuse of deepfakes and misinformation, ensuring that AI-generated material is easily distinguishable from real content.

What regulatory gap prompted stricter AI labelling norms?

The primary regulatory gap was the vagueness of the earlier “prominent visibility” standard, which platforms interpreted loosely. The stricter norms replace it with a mandate for “continuous and clearly visible display” of labels throughout the entire duration of the content.

  1. Unsatisfactory compliance: Social media platforms failed to ensure consistent labelling despite the February 2026 notification. For instance, only about 30% of AI-generated test posts were correctly flagged across major platforms.
  2. Inconsistent visibility: Labels appeared briefly or were not prominently displayed throughout content duration.
    1. Under earlier guidelines, AI labels often appeared only briefly or were placed in a way that was easily missed by users. The new 2026 amendments specifically aim to eliminate “blink-and-miss” disclaimers by requiring the label to remain on screen from start to finish.
  3. Regulatory dilution: An earlier proposal mandating that labels occupy 10% of the display area was diluted, reducing their effectiveness.
  4. Traceability Gaps: To prevent the removal of disclosures, the new norms mandate embedding permanent metadata or unique identifiers into synthetic content to ensure it remains traceable even when shared. 

What is the significance of redefining Synthetically Generated Information (SGI)?

Redefining Synthetically Generated Information (SGI) under India’s IT Rules 2026 is significant because it shifts from a reactive, general content moderation model to a proactive, AI-specific regulatory framework.

  1. Definition of SGI (Feb 2026 Rules): Refers to information created, modified, or generated using AI tools that can mimic real persons, events, or content.
    1. Includes deepfakes, AI-generated videos, audio, images, or text that appear real.
    2. Focuses on content that can mislead users or distort reality.
  2. Scope in February 2026 Rules:
    1. Broad coverage: Any AI-generated content that resembles real-world entities.
    2. Mandatory labelling: Required “prominent” disclosure, but no clarity on duration or format.
    3. Carve-outs included: Routine editing (filters, enhancement, dubbing) excluded as “good-faith use”.

What changes in the Proposed New Rules?

  1. Stricter visibility requirement:
    1. Continuous and clearly visible labelling throughout the content duration.
    2. Removes ambiguity of “prominent” labels.
  2. Sharper focus on harm:
    1. Targets SGI that violates laws or leads to misrepresentation of identity/events.
    2. Expands regulatory intent from mere disclosure to the prevention of misuse.
  3. Platform accountability strengthened:
    1. Requires verification of user declarations about SGI.
    2. Mandates technical safeguards to detect and prevent harmful SGI.
  4. Enforcement mechanism: Platforms must take immediate action (remove, disable access, suspend accounts) upon detection.

Why is this significant?

  1. Clear classification: Defines AI-generated content as SGI, ensuring regulatory clarity.
  2. Carve-outs provision: Excludes routine and good-faith editing (audio/video enhancement) from SGI definition.
  3. Misrepresentation control: Targets content that violates laws or misrepresents real-world events or identities.

What risks associated with AI-generated content triggered regulatory urgency?

  1. Deepfake misuse: Grok-generated images of women in revealing clothing raised dignity and privacy concerns.
  2. Misinformation threat: AI content risks distorting facts and influencing public perception.
  3. Identity manipulation: Enables impersonation and false representation of individuals.
  4. Global backlash: Incident led to bans in some countries and forced platform-level corrective measures.

How does the amendment impact Big Tech platforms?

  1. Enhanced compliance burden: Requires continuous monitoring and enforcement mechanisms.
  2. Liability exposure: Failure to act may attract legal consequences under IT Rules.
  3. User accountability integration: Platforms must ensure users disclose AI-generated content.
  4. Content moderation expansion: Strengthens obligations for proactive detection and removal.

What are the implications for digital governance in India?

  1. Regulatory evolution: Moves from reactive to proactive AI governance.
  2. Platform responsibility shift: Transfers greater accountability to intermediaries.
  3. Rights protection: Strengthens safeguards for privacy, dignity, and authenticity.
  4. Policy alignment: Aligns with global concerns on AI ethics and misinformation control.

Conclusion

The proposed amendments signal a decisive shift towards stricter AI governance, emphasizing transparency and accountability. Effective implementation will determine whether India can balance innovation with safeguards against misinformation and digital harm.

PYQ Relevance

[UPSC 2024] Social media and encrypting messaging services pose a serious security challenge. What measures have been adopted at various levels to address the security implications of social media? Also suggest any other remedies to address the problem.

Linkage: AI labelling rules and SGI regulation fall under GS-3 (Cyber Security, Emerging Technologies), focusing on risks like deepfakes, misinformation, and platform accountability. They also link to GS-2 (Governance) through regulation of intermediaries and GS-4 (Ethics) via concerns of privacy, dignity, and responsible AI use.
