India's AI Content Labeling Law Just Went Live, and It's the World's First

As of February 20, 2026, every AI-generated image, video, and audio file published to a platform serving Indian users must carry a visible label identifying it as synthetic. The metadata must be permanent. Removing it is a criminal offense. And if a government order flags your content for takedown, the platform has three hours to comply.

India's amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on February 10, 2026, and effective ten days later, make India the first country with a binding statutory mandate for AI content provenance. Not a voluntary framework. Not a proposal moving through committee. Active law, with teeth.

What Actually Changed

The amendment creates a legal category called "synthetically generated information" (SGI): any audio, image, or video content created or materially altered by an algorithm so that it "appears real or authentic and is likely to be perceived as indistinguishable from a real person or real-world event."

If your content fits that definition, five obligations kick in:

  1. Visible labeling. Every piece of SGI must carry a prominent on-screen label identifying it as AI-generated. Audio-only content requires a spoken disclaimer.
  2. Permanent metadata. Platforms must embed persistent metadata and unique identifiers into AI files so the provenance chain survives downloads, re-uploads, and edits. Removing, suppressing, or modifying these identifiers is prohibited.
  3. User self-declaration. Uploaders must declare whether content was created or materially altered using AI. On platforms with more than 5 million registered users in India (classified as Significant Social Media Intermediaries, or SSMIs), that self-declaration must be cross-verified using automated tools. A user claiming content is "original" doesn't satisfy compliance if the platform's own detection says otherwise.
  4. Compressed takedown timelines. Government or court-ordered removals must be actioned within three hours (down from 36). Non-consensual intimate deepfakes, morphed imagery, and sexual impersonation content: two hours.
  5. Quarterly user notification. Platforms must inform users every quarter about SGI obligations and enforcement consequences.
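Obligation 2 mandates a "permanent, unique identifier" but names no schema, so any concrete implementation is a platform's own design choice. As a minimal sketch (the record fields here are hypothetical, not drawn from the rules), one approach is a provenance record keyed to a SHA-256 hash of the content bytes, which can be re-verified after download and re-upload:

```python
import hashlib

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record for a synthetic media file.

    The schema is hypothetical: the amended IT Rules require permanent
    metadata and a unique identifier but reference no standard, so each
    platform defines its own. A content hash gives a stable identifier
    tied to the exact bytes of the file.
    """
    digest = hashlib.sha256(content).hexdigest()
    return {
        "sgi": True,               # synthetically generated information flag
        "content_sha256": digest,  # unique identifier bound to the bytes
        "generator": generator,    # tool that produced the content
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that a record still matches the file it was issued for."""
    return record.get("content_sha256") == hashlib.sha256(content).hexdigest()
```

Note the limitation: a raw byte hash changes whenever a platform re-encodes the file, which is exactly why standards such as C2PA bind signed assertions into embedded metadata rather than relying on the bytes alone. The sketch illustrates the identifier requirement, not a transcoding-proof design.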

The rules carve out "routine editing and formatting," accessibility enhancements like translation, and "good-faith document creation." But each exclusion requires demonstrating good faith and no material distortion. If you're running AI-assisted color correction on stock footage, you're probably fine. If you're face-swapping a CEO into a product demo, you're not.

Who This Hits

The scope is extraterritorial. If a platform offers services to Indian users or targets the Indian market, it must comply regardless of where it's incorporated. Meta, Google, X, and every generative AI platform with Indian users fall under these rules. So does any SaaS company whose customers publish AI content to Indian audiences.

SSMIs (the 5-million-user threshold) face additional requirements: appointing an India-resident compliance officer, maintaining 24/7 law enforcement contacts, and publishing monthly compliance reports. Officers face potential personal liability. For smaller platforms, the obligations are lighter but the labeling and metadata requirements still apply.

The Missing Spec Sheet

Here's the catch practitioners should pay attention to: the rules mandate permanent metadata and provenance mechanisms but don't reference any technical standard. No C2PA. No JPEG Trust. No ISO 22144. Each platform is left to build its own scheme, which means provenance data won't travel across platforms or borders.

The "reasonable and appropriate" standard for automated verification tools is similarly undefined. No accuracy benchmarks. No approved detection model list. No acceptable false-positive rate. The Internet Freedom Foundation has flagged this gap, warning that broad enforcement discretion combined with compressed timelines will push platforms toward aggressive over-removal.

That concern is not hypothetical. A three-hour removal window for government orders means platforms need fully automated compliance pipelines. When your system has to make a remove-or-face-liability decision in under 180 minutes, the incentive structure rewards removing first and reviewing later.
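The deadline arithmetic such a pipeline has to enforce is simple but unforgiving. A minimal sketch, assuming a pipeline that tags incoming orders by category (the category names here are hypothetical; the two- and three-hour windows are from the rules):

```python
from datetime import datetime, timedelta, timezone

# Removal windows under the amended rules: two hours for non-consensual
# intimate or impersonation deepfakes, three hours for other government
# or court-ordered takedowns. Category keys are this sketch's invention.
REMOVAL_WINDOWS = {
    "intimate_deepfake": timedelta(hours=2),
    "government_order": timedelta(hours=3),
}

def removal_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time the platform may action the order."""
    return received_at + REMOVAL_WINDOWS[category]
```

Everything upstream of that deadline (intake, classification, human review if any) has to fit inside the window, which is the structural reason review tends to get pushed to after removal.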

What Practitioners Should Be Checking Now

If you're building content pipelines that touch AI generation (marketing automation, social media tools, video production), here's a concrete checklist:

  • Audit your output for the SGI definition. Does your pipeline produce content that "appears real" and could be "perceived as indistinguishable" from reality? If yes, your content needs labels and metadata before reaching any platform serving Indian users.
  • Check your metadata stack. Can you embed persistent provenance data that survives re-encoding and re-upload? If your tooling strips EXIF or XMP data during export, that's a compliance gap.
  • Review your upload workflows. If you publish to platforms operating in India, expect self-declaration checkboxes and automated verification at upload. Build those steps into your process now, not when platforms start enforcing.
  • Watch for platform-specific implementations. Without a universal standard, Meta, Google, and others will each implement their own labeling and metadata schemes. Your tooling may need to support multiple formats.
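The upload-workflow step above reduces to reconciling two signals: the user's self-declaration and the platform's automated detector. The rules require the cross-check but set no accuracy bar, so the reconciliation policy below (treat any AI signal as SGI, escalate contradictions) is one assumed design, not mandated behavior:

```python
def cross_verify(user_declared_ai: bool, detector_says_ai: bool) -> str:
    """Reconcile a user's self-declaration with automated detection.

    SSMIs must verify declarations with 'reasonable and appropriate'
    tools; the policy here is an assumption. Either positive signal
    marks the upload as SGI, and a user claiming 'original' against a
    positive detection is escalated rather than trusted.
    """
    if user_declared_ai or detector_says_ai:
        if not user_declared_ai and detector_says_ai:
            return "label_and_flag"  # declaration contradicts detection
        return "label"               # labeled and published as SGI
    return "publish"                 # no AI signal from either side
```

In practice the detector call would be a model inference with a false-positive rate, which is where the over-removal incentive discussed earlier enters: the cheapest way to avoid liability is to treat borderline detections as positive.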

Why This Matters Beyond India

India has over 800 million internet users and more than 500 million active social media accounts. Global platforms already optimize for Indian compliance on other regulations. This will be no different.

The structural pattern is worth watching: India didn't try to regulate AI models or training data. It regulated the output layer, the content itself, at the point of publication. That's a template other large markets can adopt without needing to solve the harder problem of regulating model development.

The EU AI Act takes effect in stages through 2027 with broader ambitions but slower timelines. India's approach is narrower and already enforceable. For teams running AI content operations at scale, India just became the compliance floor, not the ceiling.