
Deepfake Laws in India 2026: Legal Remedies for Victims Under the New IT Rules

Writer: Kaustav Chowdhury

Deepfakes — AI-generated videos, images, and audio that convincingly impersonate real people — have become one of the most serious emerging threats to personal dignity, electoral integrity, and corporate reputation in India. India does not yet have standalone deepfake legislation, but the legal framework has evolved rapidly. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, notified by MeitY on 10 February 2026 and effective from 20 February 2026, now impose a strict three-hour takedown window for AI-generated content flagged as harmful. Combined with the Bharatiya Nyaya Sanhita, 2023, the Digital Personal Data Protection Act, 2023, and the IT Act, 2000, victims of deepfakes have multiple legal avenues to seek redress.


The 2026 IT Rules Amendment significantly strengthens platform accountability. Significant Social Media Intermediaries (SSMIs) are now required to: (1) prominently label AI-generated content, with a visible watermark covering at least 10% of the image or video surface, or an audible notice lasting 10% of the audio's duration; and (2) remove harmful AI-generated or deepfake content within three hours of receiving a complaint or government notification, failing which they lose their safe harbour protection under Section 79 of the IT Act, 2000, and can be held liable as if they had created the content themselves. Beyond the IT Rules, the BNS, 2023 covers cheating by personation (Section 319), criminal intimidation (Section 351), and defamation (Section 356). Deepfakes that process biometric data without consent also violate the DPDP Act, 2023, attracting penalties of up to Rs 250 crore. For sexually explicit deepfakes, Sections 66E, 67, and 67A of the IT Act, 2000, read together with the BNS provisions protecting women, offer specific criminal remedies.


If you are a victim of a deepfake, whether a celebrity whose likeness has been misused, a business professional whose statements have been fabricated, or a private individual targeted for harassment, take these practical steps:

  1. File a complaint immediately on the SSMI's platform and send a legal notice simultaneously; this triggers the three-hour clock under the 2026 Rules.
  2. File an FIR with the Cyber Crime cell of the police, or report the content through the National Cyber Crime Reporting Portal (cybercrime.gov.in).
  3. Approach the High Court for an urgent injunction ordering takedown and restraining further dissemination. Indian courts have granted such relief in celebrity deepfake cases (including the NTR Jr. case), establishing that AI-generated impersonation falls within existing IP and personality rights law.
  4. For biometric data misuse, file a complaint with the Data Protection Board of India once it is operational.


India's deepfake legal landscape is moving fast — the 2026 Amendment Rules are among the most aggressive platform liability provisions globally, and their implementation will be tested in the coming months. Businesses, public figures, and individuals need proactive legal strategies: drafting take-down protocols, registering personality rights, and understanding the intersection of AI law, data privacy, and criminal liability. Sansa Kanoon Pranali Partners advises clients on AI and technology law, IT Act compliance, deepfake takedown strategies, and personality rights protection. If you have been targeted by deepfake content or need a policy framework for your platform, contact us at sansalegal.com.
