
AI Liability in India: Who Is Responsible When Artificial Intelligence Causes Harm

  • Writer: Kaustav Chowdhury
  • 2 min read

When an AI-powered medical diagnosis system gives a wrong recommendation, or an autonomous trading algorithm causes financial losses, or a hiring algorithm discriminates against certain candidates, the critical legal question arises: who is liable? In India, this question has no straightforward answer yet, because Indian law does not recognize artificial intelligence as a legal entity capable of bearing responsibility. As AI systems become more autonomous and their decisions more consequential, establishing clear liability rules is one of the most pressing legal challenges facing Indian businesses, consumers, and regulators. Understanding the current legal position helps you assess your exposure and take protective measures.


Since AI has no legal personality under Indian law, liability for AI-caused harm must be attributed to a human or corporate actor. The existing legal framework applies only by analogy, through several statutes:

  • Indian Penal Code (now the Bharatiya Nyaya Sanhita 2023): criminal liability requires a guilty mind (mens rea) and voluntary conduct (actus reus), neither of which a non-sentient AI system can possess, leaving a significant gap.
  • Consumer Protection Act 2019: allows claims against defective products, but the victim must prove a causal link between the defect and the harm suffered, which is extremely difficult with opaque AI decision-making.
  • IT Act 2000: addresses cybersecurity breaches but was not designed for AI-specific harms.
  • DPDPA 2023: provides data privacy safeguards but does not directly address AI liability.

The India AI Governance Guidelines released by MeitY in November 2025 recommend a proportionate liability model: responsibility is distributed according to the role each actor plays in the AI value chain (developers, deployers, and end-users), the risk of harm, and the due diligence observed by each party.


For businesses deploying AI systems, the practical approach is to implement strong risk management now rather than wait for legislation:

  • Document all AI system capabilities, limitations, and known risks comprehensively.
  • Maintain detailed records of training data, model decisions, and testing results to demonstrate due diligence if liability questions arise.
  • Implement human oversight mechanisms for high-risk AI applications, particularly in healthcare, lending, hiring, and legal advice.
  • Consider obtaining professional liability insurance that covers AI-related claims.
  • Allocate liability clearly between AI developers, deployers, and users through well-drafted technology agreements.

For consumers who have suffered harm from an AI system, potential avenues include filing a complaint under the Consumer Protection Act 2019 or pursuing a tort claim for negligence against the company that deployed the system.


Several recent developments signal that binding AI liability rules are approaching:

  • The AI Safety Institute (AISI), established in January 2025, is developing risk assessment frameworks and advising policymakers.
  • MeitY is drafting an AI Accountability and Ethical Use Bill that will address liability directly.
  • A Supreme Court PIL filed in 2025 demands regulation of algorithmic discrimination in hiring.
  • In February 2026, the government amended the IT Intermediary Guidelines to require labelling of AI-generated content, establishing a precedent for content accountability.

Together, these developments indicate that India is moving toward a comprehensive liability framework. The Sansa Kanoon Pranali Partners team specializes in AI governance, technology liability, and regulatory compliance. Whether you are an AI developer, a deployer, or a business integrating AI into your operations, we can help you structure your liability protections, draft appropriate contracts, and prepare for the evolving regulatory landscape.

