Data Privacy and Cyber security in the Age of Artificial Intelligence: A Critical Analysis of the Indian Legal Framework|Volume VI Issue I(January-February 2026) | Author : Dr. Bhupender Kumar Jodhta (Professor & Principal ,Awasthi College of Law, Nalagarh, Solan) |

ABSTRACT

The proliferation of Artificial Intelligence systems has fundamentally transformed the data privacy and cybersecurity landscape in India. With over 850 million internet users and a booming digital economy, India faces unique challenges in balancing innovation with fundamental rights protection. This paper critically examines the Indian legal framework governing data privacy and cybersecurity in the AI era, analysing the interplay between the Digital Personal Data Protection Act, 2023, the Information Technology Act, 2000, and emerging AI-specific regulations up to March 2026. The paper argues that while India has constructed a comprehensive legislative architecture, significant gaps remain in implementation, enforcement, and doctrinal coherence. It concludes with recommendations for strengthening India’s legal framework to protect data privacy and ensure cybersecurity in the age of artificial intelligence.

 Keywords: Data Privacy, Cybersecurity, Artificial Intelligence, DPDP Act 2023, IT Act 2000, India

 1. INTRODUCTION

The age of artificial intelligence has ushered in unprecedented opportunities and challenges for data privacy and cybersecurity. AI systems depend fundamentally on vast quantities of data for training, deployment, and improvement, raising profound questions about consent, purpose limitation, and algorithmic accountability.[1]

India stands at a critical juncture in this global transformation. As the world’s most populous nation and one of its fastest-growing digital economies, India confronts unique governance challenges.[2] The IndiaAI Mission, launched in March 2024 with a ₹20,000 crore allocation, aims to position India as a global leader in AI development.[3] Simultaneously, the government has constructed an ambitious legislative framework—the Digital Personal Data Protection Act, 2023 (DPDP Act), the DPDP Rules 2025, the Artificial Intelligence (Ethics and Accountability) Bill 2025, and the Deepfake Regulation Bill 2025.

This paper critically examines whether this emerging legal architecture adequately addresses AI-related data privacy and cybersecurity challenges. It argues that while India has made remarkable progress, significant gaps remain in implementation, enforcement, and doctrinal coherence.

1.1 Research Questions

  1. How does India’s constitutional framework protect data privacy in the AI context?
  2. What obligations does the DPDP Act 2023 impose on AI developers?
  3. How does the IT Act 2000 address cybersecurity challenges from AI?
  4. Do emerging AI-specific legislations fill existing gaps?

1.2 Methodology

This paper employs a doctrinal research methodology, analysing constitutional provisions, statutes, subordinate legislation, and judicial decisions up to March 2026.[4]

  1. REVIEW OF LITERATURE

The scholarly literature on data privacy in India has evolved significantly. Early scholarship focused on the limitations of the Information Technology Act, 2000, which was designed for an era of e-commerce rather than AI-driven data processing.[5] Mali observes that a law “born in the flicker of cathode-ray screens and dial-up modems” was never intended to “govern deepfakes, AI-driven fraud, or algorithmic manipulation.”[6]

The constitutional foundation of privacy received authoritative treatment in Justice K.S. Puttaswamy v. Union of India (2017), where the Supreme Court recognised privacy as a fundamental right under Article 21.[7] This judgment provides the constitutional bedrock for subsequent data protection legislation.

The enactment of the DPDP Act, 2023 has generated substantial commentary. Divan and Rosencranz note that the Act represents “India’s first comprehensive, cross-sectoral law dedicated to personal data protection.”[8] However, they critique the exemptions for government processing and delayed enforcement.[9]

The intersection of AI and data protection has attracted increasing attention. Rana, Gandhi andChandgothia examine the tension between the right to erasure and AI architecture, observing that personal information once embedded in AI systems “is not stored in a manner that permits straightforward retrieval, indexing, or deletion.”[10]

Comparative scholarship notes that while the DPDP Act is “largely modelled upon the EU’s GDPR,” significant distinctions exist regarding processor liability and extraterritorial application.[11]

Despite this literature, the 2025 legislative developments remain underexamined. This paper addresses this gap.

  1. CONSTITUTIONAL AND STATUTORY FOUNDATIONS

3.1 Article 21 and the Right to Privacy

The Constitution does not expressly guarantee data privacy. However, in Puttaswamy, the Supreme Court recognised privacy as a fundamental right emanating from Article 21. The Court held that privacy is “an intrinsic element of liberty and dignity,” forming “the foundation of a large number of other fundamental rights.”[12]

This recognition has profound implications for AI. As the Court observed, “the rapid development of technology has brought into focus the need to protect privacy in contexts that were unimaginable at the time of framing of the Constitution.”[13]

3.2 The Information Technology Act, 2000

The IT Act, 2000, enacted for electronic commerce and digital signatures, has served as India’s foundational cyber law for twenty-five years.[14] The 2008 amendments added provisions on identity theft (Section 66C), privacy violations (Section 66E), and cyber terrorism (Section 66F).[15]

Section 43A introduced “reasonable security practices and procedures” for bodies corporate handling sensitive data. Section 72A addressed disclosure in breach of contract.

Section 79 grants intermediaries “safe harbour” from liability for third-party content, provided they do not initiate transmission, select recipients, or modify information.[16] This immunity has been crucial for platforms hosting AI-generated content.

3.3 The IT Rules, 2021

The IT Rules, 2021, as amended, operationalise the IT Act’s intermediary framework.[17] Rule 3(1)(b) restricts content that “misleads or deceives, including through deepfakes.”[18] Intermediaries must act expeditiously to remove unlawful content—within 72 hours generally, 24 hours for privacy violations.[19]

The Grievance Appellate Committee mechanism, established in 2023, allows users to appeal intermediary decisions online.[20]

3.4 The Bharatiya Nyaya Sanhita, 2023

The BNS contains provisions relevant to AI-generated harms. Section 353 penalises false statements causing public mischief. Section 111 addresses organised cybercrimes, including those involving deepfakes.[21] These provisions are “technology-neutral” and apply “irrespective of whether the content is AI-generated.”[22]

  1. THE DIGITAL PERSONAL DATA PROTECTION ACT, 2023

4.1 Overview

The DPDP Act, enacted in August 2023, represents India’s first comprehensive data protection law.[23] It applies to digital personal data processed within India and to processing outside India if offering goods or services to data principals in India.[24]

Key concepts include Data Principal (the individual), Data Fiduciary (entity determining purpose and means), and Consent Manager.[25] Processing is lawful only with consent for specified purposes or for certain legitimate uses.[26]

Data Principals enjoy rights to access, correction, erasure, and grievance redressal.[27] Data Fiduciaries must implement security safeguards, notify breaches, and process only for specified purposes.[28] Significant Data Fiduciaries bear additional obligations, including data protection impact assessments.[29]

The Data Protection Board investigates breaches and imposes penalties up to ₹250 crore.[30]

4.2 The DPDP Rules, 2025

On January 3, 2025, the Draft DPDP Rules were published.[31] Rule 13 imposes additional obligations on Significant Data Fiduciaries regarding “algorithmic software.” Such entities must “observe due diligence to verify that technical measures including algorithmic software adopted by it are not likely to pose a risk to the rights of Data Principals.”[32] This provision explicitly brings AI systems within the Act’s ambit.[33]

4.3 Application to AI Systems: Key Challenges

4.3.1 Training Data Compliance

AI training data frequently contains personal information. Under the DPDP Act, specific consent must be obtained for collecting personal data for AI training.[34] The IndiaAIGovernance Guidelines confirm that “the use of personal data without user consent to train AI models is governed by the DPDP Act.”[35]

Compliance presents formidable challenges. Identifying personal information in large, unstructured datasets is technically difficult.[36] Obtaining consent at scale from individuals whose data appears in web-scraped datasets is often infeasible.[37] The research exemption may cover academic study but likely excludes commercial deployment.[38]

4.3.2 The Public Data Fallacy

Publicly accessible data is not automatically exempt. While Section 3(c)(ii) excludes personal data voluntarily made public by the Data Principal, “the extent of permissible downstream reuse, particularly for large-scale, commercial AI-training, remains legally unsettled.”[39] Context, reasonable expectations, and intended use remain relevant.[40]

4.3.3 The Right to Erasure in AI Systems

The right to erasure under Section 12 presents profound challenges. Large language models do not preserve information in discrete, accessible rows. Once incorporated, data “is not stored in a manner that permits straightforward retrieval, indexing, or deletion.”[41] The influence of specific data is “diffused throughout the model’s architecture” as “statistical impressions rather than identifiable records.”[42] No practical mechanism exists to surgically excise an individual’s contribution without complete retraining.[43]

Technical solutions like differential privacy and machine unlearning exist but have significant limitations—reduced accuracy, high cost, and uncertain effectiveness.[44] The DPDP Act’s reasonableness framework may provide flexibility: “If compliance would cripple innovation or trade, alternative safeguards may be accepted.”[45]

4.3.4 Vendor and Cloud Dependency

AI development involves multiple third parties. Under the DPDP Act, the principal Data Fiduciary remains accountable even when processing is outsourced.[46] Contracts must include DPDP-aligned obligations, confidentiality requirements, and breach reporting timelines.[47]

4.4 The OpenAI v. ANI Case

The pending Delhi High Court case between ANI and OpenAI illustrates emerging conflicts. ANI alleges that OpenAI used its copyrighted content to train ChatGPT without authorisation.[48] From a data protection perspective, OpenAI acts as a Data Fiduciary under the DPDP Act, as it “determines the purpose and means of processing of personal data” of Indian citizens.[49] The Act applies extraterritorially, as ChatGPT offers services in India.[50] The right to erasure applies, yet OpenAI refused to remove content, citing conflict with US legal obligations.[51] The case highlights the urgent need for DPDP Act enforcement.[52]

  1. THE CYBERSECURITY FRAMEWORK

5.1 CERT-In and Incident Response

CERT-In, established under Section 70B of the IT Act, serves as the national nodal agency for cyber incident response.[53] CERT-In mandates reporting of defined cyber incidents within six hours of detection—among the fastest response windows globally.[54]

For AI systems, CERT-In has issued specific guidance. In November 2024, it published an advisory on deepfake threats and protective measures.[55]

5.2 Protected Systems and Critical Information Infrastructure

Section 70 empowers the government to declare any computer system as a “protected system.” NCIIPC, established under Section 70A, protects critical information infrastructure.[56] India has declared at least ten systems as protected, including Aadhaar, UPI infrastructure, and core banking systems.[57]

5.3 AI-Specific Cybersecurity Threats

AI systems present novel vulnerabilities distinct from traditional IT infrastructure:[58]

  • Data Poisoning: Manipulating training data can cause systems to behave unpredictably.[59]
  • Model Extraction: Repeated queries can allow attackers to reconstruct proprietary models.[60]
  • API Exploits: Attackers targeting APIs can disrupt or hijack AI workflows.[61]
  • Prompt Injection: Adversarial prompts can manipulate AI systems into unintended behaviour.[62]
  • Shadow AI: Employees using unsanctioned AI tools upload sensitive data to uncontrolled platforms.[63]

CERT-In’s advisory on deepfakes is a positive step, but comprehensive guidance on securing AI pipelines remains needed.

  1. EMERGING AI-SPECIFIC LEGISLATION

6.1 The Artificial Intelligence (Ethics and Accountability) Bill, 2025

Introduced on December 5, 2025, this Bill marks a shift from voluntary guidelines to binding legislation.[64] Key features include:

  • Establishment of an AI Ethics Committee for developing ethical guidelines and monitoring compliance.[65]
  • High-risk AI systems requiring prior approval and ethical review before deployment.[66]
  • Penalties up to ₹50 million for violations.[67]

6.2 The Deepfake Regulation Bill, 2025

Also introduced on December 5, 2025, this Bill criminalises creating and sharing deepfakes intended to humiliate, harass, commit fraud, or create sexual content without consent.[68]

6.3 The IndiaAI Governance Guidelines, 2025

Released in November 2025, these Guidelines articulate seven core ethical principles: trust, human-centricity, innovation, fairness, accountability, intelligibility, and safety.[69] Crucially, they clarify that “the Government intends to extend the application of existing law rather than enacting a new law to regulate AI systems.”[70]

6.4 The IndiaAI Mission and AI Safety Institute

The IndiaAI Mission, launched in March 2024, aims to build comprehensive AI infrastructure.[71] In January 2025, MeitY established the IndiaAI Safety Institute to develop safety tools and ensure ethical AI application contextualised to India’s diversity.[72]

6.5 Sectoral AI Regulation

  • RBI’s FREE-AI Framework: Mandates infrastructure for indigenous AI, multi-stakeholder governance, and audit mechanisms in the financial sector.[73]
  • State AI Policies: Odisha and Rajasthan are developing policies combining infrastructure, ethical AI, and digital law frameworks tailored to local contexts.[74]
  1. CRITICAL ANALYSIS AND CHALLENGES

7.1 Implementation Gaps and Delayed Enforcement

The most significant challenge is delayed enforcement. The DPDP Act, enacted in August 2023, remains unenforced pending final Rules.[75] Citizens and companies have been “informed of rights and obligations that currently have no legal force behind them.”[76]

7.2 Safe Harbour Concerns

Proposed IT Rules amendments would mandate intermediaries to ensure visible labelling and verification of synthetic content.[77] Legal commentators raise concerns about ultra vires: Section 79 grants safe harbour only where intermediaries do not modify content. The draft amendments “effectively redefin[e] the scope of conduct that Parliament has expressly identified as disqualifying for safe-harbour.”[78]

7.3 The Adjudicatory Framework

The IT Act’s adjudicatory mechanism under Section 46 has been largely ineffective. “Nearly half the states in India still do not have an AO, and in many others, the AO exists only on paper.”[79] The DPDP Act’s Data Protection Board must avoid replicating these failures.

7.4 Low Conviction Rates

Conviction rates under the IT Act remain alarmingly low—0.93% in 2021, 1.7% in 2022.[80] For AI-related harms, these enforcement deficits are particularly concerning.

7.5 Government Exemptions

The DPDP Act exempts government processing for various purposes, including “prevention, detection, investigation or prosecution of any offence.”[81] Critics warn that these exemptions risk surveillance and undermine privacy protections.[82]

  1. CONCLUSION AND RECOMMENDATIONS

India has constructed an ambitious legal framework for data privacy and cybersecurity in the AI era. The constitutional foundation laid by Puttaswamy, the DPDP Act, the IT Act, and emerging AI-specific legislation collectively represent a remarkable legislative achievement.

Yet significant challenges remain: enforcement delays, safe harbour uncertainty, weak adjudicatory mechanisms, and low conviction rates.

Recommendations:

  1. Enforce the DPDP Act without further delay. The 18-month transition period provides adequate time for compliance.[83]
  2. Amend safe harbour through primary legislation, not subordinate rules. Doctrinal coherence requires parliamentary action.[84]
  3. Establish the Data Protection Board with adequate resources and commitment to online dispute resolution.[85]
  4. Develop AI-specific cybersecurity guidance addressing the full lifecycle of AI systems.[86]
  5. Build capacity for investigation and prosecution of AI-related crimes. Low conviction rates undermine deterrence.[87]
  6. Narrowly construe government exemptions subject to independent oversight. Trust requires state accountability.[88]

India’s legal framework provides the tools to protect data privacy and ensure cybersecurity in the AI age. The task now is implementation—translating legislative ambition into lived reality for India’s 850 million internet users.[89]

 

REFERENCES

Primary Sources

  • Constitution of India, Article 21
  • Information Technology Act, 2000(Act 21 of 2000)
  • Digital Personal Data Protection Act, 2023(Act 22 of 2023)
  • Bharatiya Nyaya Sanhita, 2023(Act 45 of 2023)
  • Artificial Intelligence (Ethics and Accountability) Bill, 2025(Bill No. 59 of 2025)
  • Deepfake Regulation Bill, 2025(Bill No. 60 of 2025)
  • Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
  • Digital Personal Data Protection Rules, 2025(Draft)
  • Justice K.S. Puttaswamy v. Union of India(2017) 10 SCC 1
  • OpenAI v. ANI Media Pvt. Ltd., Delhi High Court (pending)

Secondary Sources

  • Divan S and Rosencranz A, Environmental Law and Policy in India(3rd edn, OUP 2022)
  • Mali P, ’25 Years of the IT Act, 2000: India’s Digital Law at a Crossroads’ (2025) NLIU Law Review
  • Rana V, Gandhi A and Chandgothia P, ‘Effect of Digital Personal Data Protection Rules, 2025 on AI Regulation’ (S.S. Rana & Co., 24 November 2025)
  • Sahni R, ‘OpenAI vs ANI: A Data Protection Perspective’ (IIPRD, 26 May 2025)
  • Khurana and Khurana, ‘AI Training Data under India’s DPDP Regime’ (29 January 2026)
  • Saikrishna& Associates, ‘Is the Safe Harbour Really Safe?’ (Lexology, 27 October 2025)
  • De Penning and De Penning, ‘Protecting India’s Digital Rights’ (23 December 2025)
  • MeitY, ‘IndiaAI Governance Guidelines’ (November 2025)
  • MeitY, ‘India well-equipped to tackle evolving online harms’ (PIB, 8 August 2025)
  • RBI, ‘Framework for Responsible and Ethical Enablement of AI (FREE-AI)’ (August 2025)
  • CERT-In, ‘Advisory on Deepfake Threats’ (November 2024)

[1]Khurana and Khurana, ‘AI Training Data under India’s DPDP Regime’ (29 January 2026).

[2]De Penning and De Penning, ‘Protecting India’s Digital Rights’ (23 December 2025).

[3]ibid.

[4]L Lammasniemi, Law Dissertations: A Step-by-Step Guide (2nd edn, Routledge 2022) 5.

[5]P Mali, ’25 Years of the IT Act, 2000′ (2025) NLIU Law Review.

[6]ibid.

[7]Justice K.S. Puttaswamy v. Union of India (2017) 10 SCC 1.

[8]S Divan and A Rosencranz, Environmental Law and Policy in India (3rd edn, OUP 2022).

[9]R Sahni, ‘OpenAI vs ANI: A Data Protection Perspective’ (IIPRD, 26 May 2025).

[10]V Rana, A Gandhi and P Chandgothia, ‘Effect of Digital Personal Data Protection Rules, 2025 on AI Regulation’ (S.S. Rana & Co., 24 November 2025).

[11]Sahni (n 9).

[12]Puttaswamy (n 7) para 123.

[13]ibid para 45.

[14]Mali (n 5).

[15]ibid.

[16]Information Technology Act 2000, s 79.

[17]MeitY, ‘India well-equipped to tackle evolving online harms’ (PIB, 8 August 2025).

[18]IT Rules 2021, r 3(1)(b).

[19]ibid.

[20]Mali (n 5).

[21]Bharatiya Nyaya Sanhita 2023, ss 353, 111.

[22]PIB (n 17).

[23]Digital Personal Data Protection Act 2023.

[24]ibid s 3.

[25]ibid s 2.

[26]ibid ss 4-6.

[27]ibid ss 11-14.

[28]ibid ss 8-10.

[29]ibid s 10.

[30]ibid s 15.

[31]Draft Digital Personal Data Protection Rules 2025.

[32]ibid r 13.

[33]Rana, Gandhi and Chandgothia (n 10).

[34]MeitY, ‘IndiaAI Governance Guidelines’ (November 2025).

[35]ibid.

[36]Khurana and Khurana (n 1).

[37]ibid.

[38]ibid.

[39]ibid.

[40]ibid.

[41]Rana, Gandhi and Chandgothia (n 10).

[42]ibid.

[43]ibid.

[44]ibid.

[45]ibid.

[46]Khurana and Khurana (n 1).

[47]ibid.

[48]Sahni (n 9).

[49]ibid.

[50]DPDP Act 2023, s 3.

[51]Sahni (n 9).

[52]ibid.

[53]Information Technology Act 2000, s 70B.

[54]Mali (n 5).

 

[55]PIB (n 17).

[56]Mali (n 5).

[57]ibid.

[58]ibid.

[59]ibid.

[60]ibid.

[61]ibid.

[62]ibid.

[63]ibid.

[64]Artificial Intelligence (Ethics and Accountability) Bill 2025.

[65]ibid.

[66]ibid.

[67]ibid.

[68]Deepfake Regulation Bill 2025.

[69]IndiaAI Governance Guidelines (n 34).

[70]ibid.

[71]De Penning (n 2).

[72]ibid.

[73]RBI, ‘FREE-AI Framework’ (August 2025).

[74]De Penning (n 2).

[75]Sahni (n 9).

[76]ibid.

[77]Saikrishna& Associates, ‘Is the Safe Harbour Really Safe?’ (Lexology, 27 October 2025).

[78]ibid.

[79]Mali (n 5).

[80]ibid.

[81]DPDP Act 2023, s 17.

[82]De Penning (n 2).

[83]Author’s recommendation.

[84]ibid.

[85]ibid.

[86]ibid.

[87]ibid.

[88]ibid.

[89]ibid.

LEAVE A REPLY

Please enter your comment!
Please enter your name here