AI Governance in India 2026: Legal Framework, DPDP Act, and the Ethics Bill

  • Writer: Kaustav Chowdhury
  • 2 days ago
  • 4 min read

India does not yet have a standalone artificial intelligence law. Unlike the European Union, which enacted the AI Act in 2024 with binding rules across all member states, India has adopted a principles-first, risk-based approach that relies on existing legislation, sector-specific regulation, and voluntary guidelines to govern AI deployment. The key legislation affecting AI systems is the Digital Personal Data Protection Act, 2023, with its Rules notified in 2025 for phased rollout through 2027. In addition, a Private Member's Bill titled the Artificial Intelligence (Ethics and Accountability) Bill, 2025, was introduced in Parliament in December 2025, proposing a statutory ethics committee and mandatory algorithmic audits. This article examines the current legal landscape for AI in India and what businesses need to know.

The DPDP Act and Its Impact on AI Systems

The Digital Personal Data Protection Act, 2023 is the most significant piece of legislation affecting AI operations in India. The Act applies to any processing of digital personal data, which directly covers AI systems that are trained on, process, or generate outputs from personal data. Key provisions relevant to AI include the requirement for free, specific, and informed consent before processing personal data, with the purpose of processing clearly stated. This poses a challenge for AI models that process large volumes of data for multiple purposes, including training, inference, and analytics. Section 7 of the Act permits processing for certain 'legitimate uses' without fresh consent (the enacted successor to the 2022 draft Bill's 'deemed consent' concept), but whether AI training falls within these legitimate uses remains subject to interpretation. Data fiduciaries (entities that determine the purpose and means of processing) bear the primary compliance obligations, including maintaining accuracy of data, implementing security safeguards, and ensuring purpose limitation. The Act imposes penalties of up to Rs 250 crore for significant breaches and establishes the Data Protection Board of India as the adjudicatory body.
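As a rough illustration of what purpose limitation means in practice, a data fiduciary might key each consent record to the specific purposes the data principal agreed to, and check the record before every processing activity. The sketch below is purely illustrative; the names (`ConsentRecord`, `permits`) are hypothetical and the DPDP Act does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Illustrative consent record tied to specific, stated purposes."""
    principal_id: str
    purposes: set[str]  # purposes the data principal actually consented to
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn: bool = False

    def permits(self, purpose: str) -> bool:
        """Purpose limitation: allow processing only for a consented purpose."""
        return not self.withdrawn and purpose in self.purposes

# A fiduciary gates each processing activity on the recorded consent.
record = ConsentRecord("user-42", purposes={"loan_underwriting"})
assert record.permits("loan_underwriting")
assert not record.permits("model_training")  # training would need its own consent
```

The point of the sketch is the last line: under a strict reading of the Act, reusing data collected for one purpose (say, underwriting) to train a model is a separate purpose requiring its own legal basis.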

The AI Ethics and Accountability Bill 2025

The Artificial Intelligence (Ethics and Accountability) Bill, 2025, introduced as a Private Member's Bill in Lok Sabha in December 2025, proposes a structured regulatory framework for AI. The Bill envisages the creation of a statutory Ethics Committee for AI that would oversee ethical reviews of high-risk AI systems. Key proposals include mandatory ethical reviews and bias audits before deploying AI systems in sectors such as healthcare, criminal justice, employment, and financial services. Developer obligations would include documenting training data sources, model architecture, and performance metrics. Restrictions are proposed on certain AI uses in law enforcement and employment decisions, such as autonomous decision-making without human oversight. Penalties for non-compliance could extend up to Rs 5 crore. As a Private Member's Bill, it does not have government backing and faces long odds of being passed in its current form. However, its introduction signals parliamentary awareness of the regulatory gap and may influence the government's approach when it formulates its own AI governance framework.

Sector-Specific AI Regulation Already in Place

While there is no horizontal AI law, several sector regulators have issued guidelines that directly affect AI deployment. The RBI has issued guidance on the use of AI and machine learning in credit assessment, requiring banks and NBFCs to maintain explainability in automated lending decisions and to ensure that AI-driven credit scoring does not produce discriminatory outcomes. SEBI has addressed algorithmic trading through its algo trading circulars, requiring registration of trading algorithms, audit trails, and kill switches to prevent market disruption. IRDAI has issued advisories on the use of AI in insurance underwriting and claims processing. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 oblige social media intermediaries to identify and remove AI-generated deepfakes, and MeitY advisories issued in late 2023 and early 2024 directed platforms to label AI-generated content, with an initial (later diluted) requirement to seek government approval before deploying under-tested AI models that could affect electoral integrity. These sectoral approaches collectively create a patchwork of AI obligations that vary by industry.

India's AI Governance Principles and Upcoming Bodies

India's overarching approach to AI governance is outlined in the India AI Governance Guidelines, which are based on seven principles: safety and reliability, inclusivity and non-discrimination, privacy and security, transparency and explainability, accountability, positive human values, and protection of intellectual property. These principles are voluntary and serve as a guiding framework rather than binding obligations. The government has announced the formation of two advisory bodies: the Artificial Intelligence Governance Group (AIGG) and the Technology and Policy Expert Committee (TPEC), which will provide policy recommendations on AI governance. The Digital India Act, which is expected to replace the Information Technology Act, 2000, is likely to include specific provisions for AI and emerging technologies, including deepfake regulation, algorithmic transparency requirements, and a risk-based classification of digital platforms. Until this comprehensive legislation is enacted, businesses deploying AI must navigate the existing framework of DPDP Act compliance, sector-specific regulations, and voluntary governance principles.

Practical Steps for Businesses Using AI in India

Businesses deploying AI systems in India should begin by mapping their AI operations against DPDP Act requirements, particularly consent management and purpose limitation. Any AI system processing personal data must comply with the Act's provisions regardless of whether it is explicitly labelled as an AI system. Companies should implement algorithmic audit mechanisms, especially for AI systems used in credit decisions, hiring, insurance underwriting, or customer-facing automated decisions, as these are the areas most likely to attract regulatory scrutiny. Documenting training data sources, model performance metrics, and decision-making logic is advisable even in the absence of a statutory mandate, as it demonstrates good-faith compliance should regulations be tightened. Businesses should also monitor the progress of the Digital India Act and the AI Ethics Bill, as either could significantly alter the compliance landscape. The current regulatory window offers an opportunity to establish governance frameworks proactively rather than reactively.
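One concrete way to start the documentation habit described above is to keep a machine-readable audit record per deployed model, covering the same items the Ethics Bill proposes to mandate (training data sources, performance metrics, human oversight). This is a minimal sketch under that assumption; the record structure and field names are hypothetical, not drawn from any Indian regulation.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelAuditRecord:
    """Hypothetical documentation record for an AI system making automated decisions."""
    system_name: str
    intended_purpose: str
    training_data_sources: list[str]
    performance_metrics: dict[str, float]
    human_oversight: bool  # is a human reviewer in the loop for final decisions?

    def to_json(self) -> str:
        """Serialise the record for filing or regulator review."""
        return json.dumps(asdict(self), indent=2, sort_keys=True)

record = ModelAuditRecord(
    system_name="credit-scoring-v3",
    intended_purpose="retail credit assessment",
    training_data_sources=["internal loan book 2019-2024"],
    performance_metrics={"auc": 0.87, "approval_rate_gap": 0.02},
    human_oversight=True,
)
print(record.to_json())
```

Keeping such records versioned alongside each model release costs little now and maps directly onto the developer obligations the Bill contemplates, should they later become binding.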
