Legal Liability Implications of Black Box Systems in Artificial Intelligence Models: A Comparative Study of EU, US and Indian Frameworks | Volume VI Issue I | Author: Ms. Riddhima Singh


Abstract

The fast progress of artificial intelligence in important decision-making fields has led to complex legal dilemmas, especially those concerning liability when opaque AI models, generally referred to as ‘black box’ systems, cause damage. Unlike traditional technologies, black box AI systems work in ways that are not easy to comprehend and articulate, even by their creators. This lack of clarity complicates the determination of legal accountability when such systems make wrong or harmful choices. This paper focuses on how current liability structures, including tort law, negligence, product liability, and vicarious liability, fail to address the harms caused by black box AI systems. The paper conducts a comparative analysis of how three major legal systems address this problem: the European Union, which has adopted a strict liability framework combined with transparency requirements; the United States, which depends on sectoral legislation and litigation before the courts; and India, which currently has no comprehensive legislation addressing AI. The research identifies key gaps in existing law, showing that traditional liability doctrines are not adequately equipped to address the unique problems created by the autonomy and opacity of AI. When a black box system produces harm, it is unclear who should be held responsible. Should it be the coder who wrote the program? The enterprise that deployed it? The operator controlling the system? Moreover, plaintiffs often face significant problems in proving how the AI reached its decision, making it hard for them to establish a clear chain of causation, which is a requirement for liability under traditional legal regimes. To address these flaws, this research proposes a hybrid legal framework that combines strict liability for high-risk AI uses with mandatory explainability standards. This approach would ensure that those designing, developing, and deploying black box AI systems bear legal responsibility for the injuries that follow, while also incentivizing the creation of more open and auditable systems.[1]

 

Keywords: Artificial Intelligence, Black Box Systems, Legal Liability, Accountability, Explainable AI, Risk Regulation, Due Process

  1. Introduction

The integration of AI into major decision points recalibrates what it means to be legally responsible. Autonomous AI is now driving results that impact human rights, dignity, and basic liberties, from how courts sentence criminals to how physicians diagnose patients. However, these systems continue to operate as “black boxes” with internal decision-making processes that are difficult even for their designers to understand.[2] This opacity creates an unprecedented legal challenge: how can harms resulting from algorithms whose logic is unknown be addressed by conventional liability frameworks that are based on the concepts of fault, causation, and foreseeability?[3][4]

 

The “black box problem” in AI constitutes a moral and legal dilemma rather than just a technical issue. Liability issues become extremely complicated when an autonomous car causes a fatal collision, when a medical AI misdiagnoses a patient’s illness, or when an algorithmic risk assessment tool unfairly influences a court’s bail decision. Conventional theories of vicarious responsibility, product liability, and tort law all assume a human component, i.e., an actor with negligence, intent, or foresight. However, by introducing algorithmic opacity, distributed responsibility, and unpredictable autonomous behaviour that goes beyond the designer’s initial specifications, black box AI systems complicate this anthropocentric framework.[5]

 

Viewed comparatively, this paper probes how different legal systems handle the liability questions around black box AI. The United States, India, and the European Union stand in contrast: the US tends toward a litigation-heavy, sector-by-sector regulatory development model; the EU adopts a rights-based approach, foregrounding strict liability and transparency; India represents an evolving, fragmented regime that struggles to keep pace with rapid technological change.[6] Despite these varying approaches, what stands out in each is an accountability gap, signalling the urgent need for legal reform.

 

Here, the argument is that classic fault-based schemes, grounded as they are in human foreseeability and control, both of which decrease as algorithmic autonomy grows, are ill-suited to black box AI. The paper suggests a hybrid liability model combining technological auditability, mandatory explainability (xAI), strict liability principles, and risk-based, application-specific regulation. Such a framework would aim to sustain responsible innovation while aligning legal responsibility with the actual distribution of risk.

 

2. Research Objectives and Questions

The main goals of this research are to examine legal accountability frameworks for opaque AI systems, to identify significant liability gaps in existing legislation, and to suggest flexible legal mechanisms that strike a balance between technological innovation and the defence of fundamental rights and due process.[7] Three main questions are addressed in this study.

 

Question A: Who Bears Responsibility?

The core issue of liability distribution in black box AI systems presents considerable difficulty in identifying which parties (programmers, manufacturers, deployers, or the AI itself) ought to be legally responsible for damages inflicted by opaque and autonomous systems.

 

Conventional liability models, based on human agency, fault-based accountability, and traceable causation, are poorly fitted to self-learning systems whose decision-making processes remain elusive even for their developers. In this regard, quite significant accountability gaps arise: traditional tort law demands proof of fault and direct causation, conditions that can hardly be met when algorithmic opacity obscures the decision-making pathway.

 

Current legal scholarship considers a number of approaches. Some turn to legally endowing autonomous AI systems with personhood as subjects of direct liability, while others favour retaining strictly human-centric responsibility through corporate personhood structures, vicarious liability regimes, or operator-based liability schemes. The Expert Group Report commissioned by the European Commission summarily rejects electronic personhood and instead holds that accountability for such systems should rest with identifiable human or institutional actors who design, deploy, operate, or maintain those systems.[8]

 

This human-centred view faces practical difficulties in implementation, however, when responsibility is distributed across a range of stakeholders, such as data providers, algorithm designers, system deployers, and organizational operators, each exercising potentially varying levels of control over outcomes. Research has shown that existing legal regimes do not effectively handle complex structures of shared or hierarchical responsibility and that victims may have no clear line of redress. The liability gap is most pronounced in high-stakes domains such as criminal justice, healthcare, and autonomous systems, where decisions have direct consequences for human lives.[9]

 

Question B: Are Current Laws Sufficient?

 

The existing regulatory landscape in the US, EU, and India is gravely inadequate to address the new challenges posed by opaque AI systems and therefore requires adaptive strategies built on explainability requirements, algorithmic audits, and hybrid models of liability. The EU has emerged as a first mover on regulation, with the GDPR providing for a right to explanation, while the recently enacted Artificial Intelligence Act lays down categorical risk assessments and associated transparency requirements for high-risk applications.

 

These frameworks remain, however, more aspirational than practicable: the transparency provisions of the GDPR do not spell out technological standards for complex machine-learning systems, and the explainability provisions of the AI Act are mere provisos without binding technical standards or mechanisms for enforcement. On the other side of the Atlantic, the United States has a fragmented regime; sectoral regulations in health, finance, and criminal justice are piecemeal rather than holistic approaches to AI, and recent initiatives, such as the Algorithmic Accountability Act, have seen very limited actual deployment.

 

Meanwhile, India’s regulatory approach is also significantly underdeveloped, relying essentially on repurposed traditional frameworks (the Information Technology Act, the Consumer Protection Act, and the recent Digital Personal Data Protection Act), none of which addresses algorithmic decision-making in critical areas like judicial systems. A cross-jurisdictional analysis reveals that no extant framework manages to operationalize the three foundational accountability mechanisms identified in the literature: explainability requirements able to withstand technical scrutiny, algorithmic auditing processes backed by enforceable authority, and a clear attribution of liability among developers, deployers, and operators.

 

Furthermore, current laws pertain primarily to ex-post transparency rather than offering the ex-ante certification, pre-deployment auditing, and monitoring practices essential to avoiding algorithmic failure in autonomous systems.

 

Question C: How do Global Models Differ?

 

The comparative advantages and disadvantages of each relevant jurisdiction’s regulatory regime have been shaped by differing ideologies. Europe takes a rights-oriented, human-centric approach, America a market-oriented innovation paradigm, and India a still-developing approach; each reflects differing societal values, while together they establish the critical need for a uniform regulatory framework.

 

The EU’s approach emphasizes proactive, prescriptive regulation that strongly stresses the protection of fundamental rights and ensures public accountability, distinguishing between high-risk, low-risk, and minimal-risk AI systems and imposing correspondingly graduated obligations. Such an approach ensures protection for victims but has also been blamed for stalling technological development, particularly for SMEs and developing countries.

 

In contrast, the US regulatory approach to emerging technology is generally reactive, sector-specific, and innovation-promoting, with limited pre-market control; it relies more on industry self-regulation, market forces, and post-market liability litigation to control risks. While this approach in theory facilitates technological development, it imposes significant obstacles to access to justice for those who are affected and must pursue complex litigation against non-transparent systems.

 

India’s growing framework represents a middle ground, assimilating traditional principles of law into AI contexts through data protection laws and consumer protection regulations, but still without a unified, explicit regulatory regime for AI.

 

The main disadvantage of strategies focused on specific jurisdictions is that they do not efficiently regulate cross-border AI deployment and data flows, nor the operations of multinational technology firms operating across multiple regulatory frameworks, which creates opportunities for regulatory arbitrage and disparate levels of protection.

 

International comparative study suggests that setting harmonized global standards, perhaps through OECD initiatives, international agreements, or multilateral frameworks, would reinforce accountability through baseline transparency requirements, mutual recognition of audit certificates, common standards of explainability, and coordination among enforcement approaches.

 

However, such harmonization must leave ample room for differences in legal traditions (particularly between common law and civil law), culturally diverse approaches to privacy and algorithmic governance, and the varied capacities of developing nations, while adhering to the key principle that individuals subjected to AI-related harm are entitled to extensive access to justice and effective remedies, irrespective of their location.[10]

3. The Black Box Problem: Legal and Technical Dimensions

3.1 Defining Black Box AI

 

Black Box AI systems represent advanced machine learning models, especially deep neural networks, characterized by their inherently opaque decision-making processes.[11] In contrast to rule-based systems, which adhere to clear if-then parameters, black box models function through a complex network of millions of interconnected weights and biases that even their creators struggle to interpret meaningfully. For instance, a radiologist may find it challenging to elucidate the reasons behind an AI algorithm’s classification of a tumour as malignant; similarly, a judge may be unable to convey the algorithmic rationale for a recidivism prediction score; and a loan officer might not be able to justify the denial of a mortgage application by an automated credit-scoring system.[12]
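
To make the contrast with rule-based systems concrete, the minimal sketch below (illustrative only; the feature names, thresholds, and weights are hypothetical assumptions, not any deployed system) compares an if-then credit decision, whose reasoning can be read directly off the code, with a tiny neural network whose score emerges from arithmetic over learned weights that carry no human-readable rationale:

```python
import numpy as np

# Rule-based system: the decision logic is itself the explanation.
def rule_based_credit_decision(income: float, debt_ratio: float) -> str:
    if income >= 50_000 and debt_ratio <= 0.4:
        return "approve: income >= 50k and debt ratio <= 0.4"
    return "deny: failed explicit income/debt thresholds"

# "Black box": a tiny neural network with (here, random) weights.
# Real systems have millions of such parameters; none maps to a reason.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))   # input layer -> hidden layer
W2 = rng.normal(size=16)        # hidden layer -> score

def neural_credit_score(income: float, debt_ratio: float) -> float:
    x = np.array([income / 100_000, debt_ratio])    # crude normalization
    hidden = np.tanh(x @ W1)                        # opaque learned representation
    return float(1 / (1 + np.exp(-(hidden @ W2))))  # score in (0, 1), no rationale

print(rule_based_credit_decision(60_000, 0.3))  # decision plus a stated reason
print(neural_credit_score(60_000, 0.3))         # a bare number
```

The asymmetry is the legal point: the first function can be cross-examined; the second can only be measured.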

 

This technological opacity has significant legal implications. Traditional negligence requires proof that a defendant owed a duty of care, breached that duty, and thereby caused the plaintiff’s injury. But when the origin of a problem is obscured by algorithmic detail, blame is diffused across a chain of participants: data collectors, algorithm designers, trainers, deployers, and users. The fundamental attribution problem in legal contexts is this: if no individual participant understands why the system made its harmful decision, how can any participant be held accountable?[13]

 

3.2 The Liability Gap: Why Traditional Doctrines Fail

Modern legal liability structures developed in environments where human agency and deliberate action are pivotal. Tort law assumes identifiable wrongdoers with evident mental states; product liability law rests on manufacturing defects or design flaws that can be uncovered through thorough investigation; vicarious liability connects an employee’s wrongful actions to their employer through established agency relationships.[14]

 

Black box AI technologies undermine these fundamental principles. First, they disrupt the analysis of causation. In cases involving defective medications, causation is scientifically demonstrable: the drug is responsible for the harm. The opacity of algorithms, however, obscures causation. Was it the biased training data that led to the discriminatory result? The insufficient explainability measures? The deployer’s neglect in performing impact assessments? Or a combination of these factors? This uncertainty complicates the plaintiff’s burden of proof, frequently making it impossible to establish liability.[15]

 

Second, black box systems complicate the analysis of fault. Negligence can be established if a defendant knew or ought to have known of a risk and failed to take suitable precautionary measures. But because programmers have no reliable way of knowing how these systems will behave after deployment, especially given the systems’ capacity to learn autonomously from new inputs, plaintiffs claiming serious bodily harm face a problem of ‘unforeseeable harm’.[16]

 

Third, these systems create a situation of distributed responsibility in which many actors contribute to a harmful outcome without any of them taking sufficient responsibility. Consider a healthcare AI system: one company developed the algorithm; a hospital purchased and deployed it; a team of data scientists trained it on flawed historical data; and a healthcare provider used it to inform patient treatment. Who takes responsibility in case of a failure?[17]

 

4. Comparative Analysis of Global Frameworks

4.1 The European Union Approach: Strict Liability and Transparency

 

The European Union is leading internationally with a rights-oriented approach to the regulation of AI, emphasizing transparency, accountability, and victim support. The EU Artificial Intelligence Act of 2024 is a model piece of legislation dealing with explainability and liability in AI.[18]

 

The strategy of the European Union is based on a tiered risk-categorization framework. High-risk AI systems, which affect basic rights, democratic processes, or personal security, must abide by rigorous obligations: mandatory transparency, algorithmic impact assessments, human oversight safeguards, and logging systems. Notably, the EU rejects the proposal that AI systems be granted legal personality, instead attributing liability without fault to the operators of high-risk AI systems.[19]

 

Such a regulatory approach incorporates the precautionary principle in situations of uncertain technological risk, shifting the burden of proof to those who benefit from and control the technology. The rationale is that those operating AI systems are presumed responsible for any damages resulting from them, since they have control over these systems.[20]

 

Moreover, the General Data Protection Regulation (GDPR) establishes a foundational “right to explanation”, under which individuals must be provided with sufficient information in connection with automated decisions, especially those with legally binding effect. While the right to explanation under the GDPR remains difficult to enforce, it sets an essential precedent: it holds, in a fundamental manner, that algorithmic transparency is a right rather than a mere design parameter.[21]

Strengths: The EU framework is very comprehensive, proactive, and focused on rights protection. Organizations using high-risk AI are confronted with obligations and consequences under law.

 

Shortcomings: The cost of implementation is high, which can negatively affect innovation. Whether this framework can be applied to new AI architectures is not very clear. Although efforts are being made to improve them, enforcement tools in Member States are, in their present form, underdeveloped.[22]

 

4.2 The United States Approach: Sectoral and Litigation-Driven

 

“The United States government chose a fundamentally different approach to AI regulation, with a focus on industry-by-industry regulation and regulatory pressure driven by litigation. No overarching piece of legislation governs AI. The relevant laws currently regulate AI mainly incidentally—the FDA regulates healthcare AI, the NHTSA oversees self-driving cars, and the FTC focuses on algorithmic bias via its consumer protection authority.”[23]

 

Such a strategy reflects American reliance on market competition and judicial resolution of disputes. When AI systems cause harm, aggrieved parties go to court, and the judicial system develops precedents to resolve questions of liability. This common-law progression can potentially allow the law to keep pace with technology.

 

However, the US framework also leads to serious deficiencies in accountability. First, a claimant must litigate successfully in order to obtain any form of compensation. Moreover, trade secret and intellectual property protections prove to be major obstacles to accessing algorithmic logic, making the challenge of establishing causation all the more serious.[24]

 

Furthermore, with little proactive regulation in place, high-risk applications operate with scant oversight until a problem arises. Major legislative proposals such as the Algorithmic Accountability Act and various state laws attempt to address this issue with mandatory impact assessments and bias audits, but without Congressional support, enforcement remains spotty.

 

Advantages: The sectoral approach provides flexibility because it does not impose a uniform standard on different technology industries. Litigation creates accountability pressure.

 

Vulnerabilities: Fragmentation creates uncertainty for AI developers and a lack of protection for affected parties. Without baseline standards, markets remain inefficient and externalities persist. Litigation is out of reach for those without access to capital.[25]

 

4.3 India: Nascent Framework and Regulatory Lag

The Indian approach to AI liability can be termed fragmented and underdeveloped. The Information Technology Act of 2000, which predates AI, contains very limited provisions relevant to algorithmic decisions. The Consumer Protection Act of 2019 offers limited coverage of AI defects and focuses largely on product liability remedies, which map poorly onto AI algorithms. While the Digital Personal Data Protection Act of 2023 does cover a range of concerns relating to personal data, including decisions based on algorithms, its requirements for algorithmic explainability remain weak and it makes no provision for algorithmic audits.[26]

 

The Indian regulatory ecosystem reflects typical challenges: a lack of statutory recognition of AI liability, insufficient institutional capacity and technical oversight, weak enforcement, and the absence of a centralized governing body for AI regulation. This is especially acute in high-risk domains, such as the algorithmic bail systems being used in Indian courts with very little regulation or transparency requirement.[27]

 

Nevertheless, the presence of a strong innovation hub and a developing consumer market in India creates a pressing need for legislative reform. The formation of the Data Protection Board of India and the emergence of draft policies show signs of increasing awareness of accountability deficits. This awareness in policymaking circles, however, remains a step short of concrete legislative requirements.

 

Strengths: India retains the flexibility to assimilate best practices from the European and American systems, which could allow it to leapfrog in effectiveness.

 

Shortcomings: The present framework is deeply flawed and lacks enforcement capacity. Those most likely to be affected by algorithmic discrimination in domains such as law enforcement, lending, and the labour market have very limited protection under present law.[28]

 

5. Core Liability Challenges in Black Box Systems

5.1 The Causation Problem

 

In both product liability and negligence, causation is essential: plaintiffs must demonstrate that their injuries were caused by the defendant’s actions. However, black box algorithms make it impossible to determine causality.[29] Imagine a healthcare AI that suggests a specific course of action that a doctor follows, harming the patient. Was it the AI itself? The skewed training set? The doctor’s failure to verify the advice? The institutional choice to implement the system without conducting independent validation?

 

Legal ramifications: In many situations opaque causation hinders recovery, going against corrective justice principles and depriving victims of compensation. This leads to moral hazard because, if harm causation can’t be proven, deployers aren’t motivated to invest in safety precautions or explainability.[30]

 

5.2 The Foreseeability Problem

 

According to negligence law, wrongdoers should have anticipated foreseeable damages. However, as autonomous AI systems learn continuously and adjust to new data distributions, they frequently exhibit behaviours that neither developers nor deployers had anticipated.[31] An algorithmic system trained on past criminal data may replicate historical biases in ways that no stakeholder anticipated. Medical AI optimised for one outcome metric may unintentionally cause harm along other dimensions.

 

Legal effects: Despite deploying poorly tested systems, defendants frequently avoid liability due to this unpredictability issue. Precautionary safety investment and transparency are undermined as a result.[32]

 

5.3 Distributed Responsibility and the Accountability Gap

A black box AI system involves a number of actors: manufacturers and vendors who supply systems, algorithm designers and trainers who determine system architecture, data collectors and curators who may introduce biases, and institutional operators who ultimately control deployment. Diffusion of responsibility results from this distribution of agency.[33]

 

It is reasonable for each actor to contend that someone else is primarily accountable: designers assert that they rely on high-quality data, operators assert that they rely on manufacturer testing, and data providers assert that they only supply information that is readily available. When harm occurs but no actor takes adequate responsibility, this “passing the buck” creates accountability gaps.

 

Legal Implication: Conventional bilateral negligence liability is insufficient in the face of distributed culpability. New liability frameworks must address multi-actor systems where accountability is truly shared.[34]

 

6. Proposed Hybrid Legal Framework for Black Box AI Accountability

This paper presents a hybrid liability model that integrates aspects of strict liability, risk-based regulation, transparency requirements, and human-centred accountability, based on comparative analysis and scholarly research.[35]

 

6.1 Core Principles

 

First Principle: Strict Liability for High-Risk Applications. Liability should be strict for AI systems that directly affect democratic processes, human safety, or fundamental rights; that is, operators should be held accountable regardless of negligence. This notion acknowledges that those who profit from and operate high-risk systems should bear accident costs. It is consistent with the EU AI Act’s approach.[36]

 

High-risk categories include criminal justice applications (bail, sentencing, parole), healthcare diagnostics and treatment recommendations, autonomous weapons systems, financial systems influencing credit, insurance, and lending decisions, and employment systems influencing hiring and termination.

 

Rationale: Strict liability creates the best incentives for caution. Because operators are liable regardless of culpability, they are driven to invest in testing, impact assessment, and human oversight, the very precautions that justice requires.[37]

 

Second Principle: Mandatory Explainability and Auditability. Explainability requirements should be mandatory for all high-risk AI systems, in proportion to the importance of the decision. This comprises algorithmic impact evaluations that screen for biases prior to deployment, real-time decision logging that facilitates post-hoc causality analysis, mandatory explanations in formats appropriate for decision-makers and affected individuals, third-party auditing rights for regulators and civil society organisations, and contestability mechanisms that let people challenge algorithmic judgements.[38]
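
As a minimal sketch of what ‘real-time decision logging’ could look like in practice (the schema, field names, and file format here are illustrative assumptions, not a prescribed standard), each automated decision is recorded with enough context for a later auditor to reconstruct what the system saw and decided:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"  # append-only log, one JSON record per decision

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output, explanation: str) -> dict:
    """Record one automated decision to support post-hoc causality analysis."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to an auditable artifact
        "inputs": inputs,                # what the system actually saw
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,                # what it decided
        "explanation": explanation,      # human-readable rationale, if available
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: logging a credit decision for later audit.
log_decision("credit-scorer", "v2.3.1",
             {"income": 60000, "debt_ratio": 0.3}, "deny",
             "top factors: debt_ratio (+0.42), recent_defaults (+0.31)")
```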

 

Justification: Explainability criteria strengthen human rights and due process, permit causation analysis in liability disputes, make it easier to identify biases early, and improve democratic accountability.[39]

 

6.2 Liability Distribution Model

 

Under the proposed framework, operator liability is strict: those who operate high-risk AI systems are primarily responsible for any damages. This encourages extensive testing, investment in explainability, and institutional protections. Manufacturer liability: manufacturers remain liable for flawed system design, such as inadequate explainability features, inadequate testing, and failure to disclose known risks. Developer liability: developers are liable for serious misrepresentations when they make specific claims about system performance, safety, or fairness. Data provider liability: organisations are liable when they knowingly provide training data that is systematically biased (e.g., training criminal justice systems on data reflecting previous discrimination).[40] Human actor liability: where human decision-makers are required in high-risk situations, they remain accountable for choices that violate relevant standards of care. This encourages genuine engagement with AI recommendations rather than mere endorsement of algorithmic results.[41]
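
Purely as an illustrative encoding of the allocation rules above (the predicates and their names are simplified assumptions for exposition, not a statement of law), the distribution model can be read as one strict condition for operators and fault-based conditions for everyone else:

```python
def allocate_liability(facts: dict) -> dict:
    """Toy allocation of liability under the proposed hybrid model."""
    return {
        # Operators of high-risk systems: strict liability, no fault required.
        "operator": facts["system_is_high_risk"],
        # Manufacturers: flawed design (testing, explainability, disclosure).
        "manufacturer": (facts["inadequate_testing"]
                         or facts["missing_explainability_features"]
                         or facts["undisclosed_known_risk"]),
        # Developers: serious misrepresentation of performance, safety, fairness.
        "developer": facts["misrepresented_performance"],
        # Data providers: knowingly supplying systematically biased data.
        "data_provider": facts["knowingly_biased_data"],
        # Human decision-makers: breach of the applicable standard of care.
        "human_actor": facts["breached_standard_of_care"],
    }

facts = {"system_is_high_risk": True, "inadequate_testing": False,
         "missing_explainability_features": True, "undisclosed_known_risk": False,
         "misrepresented_performance": False, "knowingly_biased_data": False,
         "breached_standard_of_care": False}
print(allocate_liability(facts))
# operator liable (strict), manufacturer liable (design flaw), others not
```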

 

6.3 Risk-Based Differentiation

 

Risk profiles should be used to adjust liability frameworks. High-risk applications (criminal justice, autonomous weapons) require strict liability, mandatory explainability, and ongoing monitoring; moderate-risk applications (credit scoring, hiring recommendation systems) require impact assessments and explainability; and low-risk applications (product recommendations, content filtering) may carry minimal explainability requirements.[42] This differentiation preserves innovation in low-risk areas yet enforces stricter guidelines where fundamental rights are involved.
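
To show how such risk-proportionate obligations might be encoded in a compliance checklist (the tier assignments and obligation names below are illustrative assumptions drawn from the categories in this section, not statutory text), consider this minimal sketch:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"          # e.g., criminal justice, autonomous weapons
    MODERATE = "moderate"  # e.g., credit scoring, hiring recommendations
    LOW = "low"            # e.g., product recommendations, content filtering

# Obligations attach to the tier, not to the individual application.
OBLIGATIONS = {
    RiskTier.HIGH:     {"strict_liability", "mandatory_explainability",
                        "ongoing_monitoring", "impact_assessment"},
    RiskTier.MODERATE: {"impact_assessment", "explainability"},
    RiskTier.LOW:      {"minimal_explainability"},
}

def obligations_for(application: str, tier_map: dict) -> set:
    """Look up the obligations owed for a given application."""
    tier = tier_map.get(application, RiskTier.HIGH)  # unknown apps default up
    return OBLIGATIONS[tier]

tier_map = {"bail_recommendation": RiskTier.HIGH,
            "credit_scoring": RiskTier.MODERATE,
            "product_recommendation": RiskTier.LOW}
print(obligations_for("credit_scoring", tier_map))
# {'impact_assessment', 'explainability'} (set ordering may vary)
```

Defaulting unknown applications to the high-risk tier mirrors the precautionary principle discussed in section 4.1.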

 

7. Operationalizing Accountability: Implementation Mechanisms

 7.1 Regulatory Infrastructure

Putting regulatory infrastructure in place is the starting point of any accountability framework, and no jurisdiction can afford to improvise on that score. Every jurisdiction should have a dedicated AI regulatory body with significant technical resources to monitor systems, audit them, and enforce compliance without being susceptible to political pressure. Furthermore, organizations employing AI, especially in high-risk domains, should be required to conduct full algorithmic impact assessments analysing the implications for due process, human rights, and vulnerable and protected classes. These should not be proprietary secrets; regulators and the communities affected by deployments should have access to them. Third-party auditing and certification procedures are necessary to assure system integrity over the long term, in the form of periodic, strictly standardized evaluations with direct inputs into enforcement and liability determinations. Transparency registers would add a democratic layer of scrutiny by publicly recording high-risk AI systems together with their impact assessments and audit results, helping individuals make sense of the automated decisions that affect their lives.[43]
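
A public transparency register of the kind described could be as simple as one structured, queryable record per deployed system; the fields below are an illustrative assumption of what such an entry might minimally contain (all names and values are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One public record in a hypothetical high-risk AI transparency register."""
    system_name: str
    operator: str
    risk_tier: str                # e.g., "high"
    purpose: str                  # what decisions the system informs
    impact_assessment_url: str    # published algorithmic impact assessment
    last_audit_date: str          # most recent third-party audit
    audit_result: str             # e.g., "passed", "remediation required"
    known_limitations: list = field(default_factory=list)

entry = RegisterEntry(
    system_name="PretrialRisk-v4",
    operator="Example State Judiciary",
    risk_tier="high",
    purpose="advisory recidivism risk scores for bail hearings",
    impact_assessment_url="https://example.org/aia/pretrialrisk-v4",
    last_audit_date="2025-01-15",
    audit_result="remediation required",
    known_limitations=["under-representation of rural defendants in training data"],
)
print(entry.system_name, "-", entry.audit_result)
```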

 7.2 Insurance and Compensation Schemes

Within insurance and compensation schemes, it is important for jurisdictions to establish mechanisms that adequately protect victims, especially in more complex situations involving shared liability or the insolvency of the operator at fault. Requiring operators of high-risk AI systems to carry liability insurance offers financial protection against damages arising from malfunctioning or misused AI technologies. Where insurance does not cover all damages, or where liability is too diffuse to be clearly assigned, government-funded victim compensation funds can step in to ensure that no victim is left uncared for. In addition, for particularly serious or urgent situations, such as medical misdiagnosis or wrongful imprisonment, no-fault compensation models can provide immediate assistance without forcing victims through prolonged disputes over liability.[44]

 

7.3 Explainability Standards

 

Clear standards for explainability are just one element of good accountability, and one best served by adaptable approaches rather than a single, uniform solution. Technical explainability should provide developers and auditors with full documentation of the system’s logic, the sources of its training data, its validation methods, and any known biases.

 

At the regulatory level, explainability should provide regulators with enough information to conduct accurate impact analyses and comprehensive audits. Finally, individual explainability is owed to those affected by automated decisions: an explanation understandable to them, relevant to their particular case, and transparent enough to allow them to contest or object to the result when necessary.[45]
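
One common way to operationalize individual explainability, sketched here under the simplifying assumption of a linear scoring model (the feature names and weights are hypothetical), is to report each feature’s contribution to the specific decision, ranked by magnitude:

```python
# Per-decision explanation for a linear scoring model:
# each feature's contribution = its weight * its value in this case.
WEIGHTS = {"debt_ratio": 2.1, "recent_defaults": 1.4, "income_k": -0.8}

def explain_decision(features: dict, threshold: float = 0.0) -> str:
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    outcome = "deny" if score > threshold else "approve"
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = ", ".join(f"{name} ({c:+.2f})" for name, c in ranked)
    return f"decision: {outcome} (score {score:+.2f}); main factors: {reasons}"

print(explain_decision({"debt_ratio": 0.55, "recent_defaults": 1, "income_k": 0.42}))
# decision: deny (score +2.22); main factors: recent_defaults (+1.40),
# debt_ratio (+1.16), income_k (-0.34)
```

Deep models admit no such direct readout, which is why post-hoc attribution techniques, together with the decision logs described in section 6.1, matter for contestability.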

 

8. Jurisdiction Recommendations

8.1 European Union

 

The existing EU legal framework provides a solid foundation for the regulation of AI, but given the very nature of the technology, specific changes will be necessary to meet liability challenges effectively. The first priority is enforcement enhancement: enforcement mechanisms coordinated across all Member States, essential for the consistent application of AI regulations and for imposing serious, consistent penalties that deter non-compliance. Second, explainability must be operationalised in both practical and technological terms: the Commission should develop clear technical guidelines setting out standards of “sufficient explainability” for the various industries and high-risk applications, going well beyond the abstract legislative criteria. Finally, cross-border harmonization requires clarifying the scope of strict liability provisions for AI systems developed in one Member State but utilized or deployed in another, to ensure accountability across borders.[46]

 

8.2 United States

 

The US requires twin-tracked reform mixing federal regulation with flexibility for particular industries. Baseline federal standards will need to be established for high-risk AI; such standards would give the nation a regulatory floor while maintaining the flexibility of the current US regime.

 

In addition, there should be obligatory transparency requirements, especially for systems that can influence basic rights, covering both the explainability of AI systems and algorithmic impact assessments. These transparency obligations should be carefully weighed against the protection of legitimate trade secret interests, so as not to deter innovation.

 

The United States should enhance the effectiveness of litigation in advancing justice by implementing AI-specific discovery rules. Such rules would ensure that plaintiffs have appropriate access to algorithmic reasoning and training data, both of which are often necessary to prove causation in liability cases.

 

Ultimately, the US will have to develop regulatory capacity by building institutional expertise dedicated to ongoing oversight rather than relying exclusively on reactive litigation.[47]

 

8.3 India

 

In India, a major overhaul would commence with improving consumer protection laws so that they can counter algorithmic deficiencies and assign responsibility accordingly. Of equal importance is strengthening judicial capacity: judges need sufficient technical knowledge to understand algorithmic evidence rather than being left dumbfounded by algorithmic black boxes. Ultimately, a comprehensive AI liability law would give the country a definitive set of rules on explainability, accountability, and responsibility, instead of forcing antiquated statutes to carry this load.[48]

 

9. Balancing Innovation with Accountability

One highly relevant question remains: do strict liability standards and transparency requirements impede innovation?[49]

 

The suggested framework aims to strike a balance between conflicting interests. Low-risk applications are less burdened by regulations, allowing for greater creative freedom. Impact assessment and explainability are necessary for moderate-risk applications, but strict liability isn’t, allowing for experimentation within certain bounds. Strict liability only applies to high-risk applications that directly impact fundamental rights.

 

Moreover, transparency requirements need not be prohibitively costly. Documentation, testing, and audit trails are already becoming increasingly important in contemporary software development; explainability requirements would simply ensure that documentation of this sort is sufficient for both project management and liability purposes.[50]

 

The question then arises: may those responsible for creating and deploying systems that affect human rights profit from such systems while transferring risk to populations that are already vulnerable? The answer is a definite no. Enforcing strict liability establishes an alignment of incentives unmatched by other regimes.

 

10. Conclusion

The rise of black box AI systems has produced an unparalleled crisis of accountability. Most current liability frameworks are predicated on principles of human agency, intentional action, and causation, making them essentially ill-suited to regulate algorithmic systems whose decision-making processes are often opaque to their own developers and frequently unpredictable, even within foreseeable deployment scenarios.

 

The European Union, the United States, and India regulate through different legal modes, yet despite the differences there is collective awareness of the urgent requirement for change. The European Union’s liability and transparency norms are the most elaborate, although implementation has faced various challenges. The United States’ sectoral legislation leaves room for flexible innovation but suffers from a lack of accountability. India’s emerging legal system still has far to go in developing legislation that conforms to global norms and safeguards the vulnerable groups facing widespread algorithmic administration.

 

This paper therefore urges a hybrid legal framework that combines strict liability for high-risk applications of AI systems with mandatory standards of explainability, risk-proportionate regulation, and human-centred models of accountability. Such a regime would make legal accountability conform to the actual distribution of risk.

 

Importantly, accountability involves much more than doctrinal innovation. It requires new institutions: specialized regulatory bodies with the technical capability to audit. More significantly, it signifies a new culture in which algorithmic opacity is considered inherently inconsistent with due process and human rights. A new framework of international cooperation is also needed to overcome the regulatory competition that could undermine emerging regimes, as discussed above.

 

The law must evolve to keep pace with this development in technology. Courts should develop a set of particular liability rules for ‘black box’ artificial intelligence. It is hoped that this framework shows a way to balance the benefits of technology with those of humanity. A cost measurable in victims, in opacity, and in risk concentrated on vulnerable subgroups is, quite simply, too high.

[1] R. Bandaranayake, V. Dias, A. Natesan and G. Jayasinghe, AI Ethics in Practice: Implementing AI Ethics in the Policy, Legal and Regulatory, and Technical Arenas in Singapore and India (LIRNEasia, Colombo).

[2] How is AI Transforming Astrophysics and Accelerating Cosmic Discovery, Apollo11Space (online), available at: https://apollo11space.com/how-is-ai-transforming-astrophysics-and-accelerating-cosmic-discovery/

[3] Giuffrida I, “Liability for AI Decision-Making: Some Legal and Ethical Considerations.” 88 William and Mary Law Review, 2019.

[4] Rodrigues R, “Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities.” 45 Science, Technology and Human Values, 2020.

[5] Shankar S, “Looking into the Black Box: Holding Intelligent Agents Accountable.” 10 NUS Law Review, 2020.

[6] Expert Group on Liability and New Technologies, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[7] A. Singh, D. Chhabra, S. Balavignesh, M. Rathee, A. Kaushik and S.S. Tomar, Secure AI: A Comprehensive Review on Security and Privacy Challenges and the Potential of Decentralized Approaches, (2025) 10(58s) Journal of Information Systems Engineering and Management.

[8] A. De Streel, M. Lognoul, H. Jacquemin and V. Ruelle, Study on Potential Policy Measures to Promote the Uptake and Use of AI in Belgium in Specific Economic Domains (2022).

[9] B.A. Bartlett, Taking Responsibility for AI Systems in Healthcare (PhD Thesis, University of Manchester, 2025).

[10] A. Singh, The Black Box Dilemma: AI, Legal Systems, and Accountability, (2025) 30(1) AI & Society 45.

[11] Almada M, “Technical AI Transparency: A Legal View of the Black Box.” 28 Law and Technology Review, 2024.

[12] Greene T, Shmueli G, Fell J, Lin C F and Liu H W, “Forks Over Knives: Predictive Inconsistency in Criminal Justice Algorithmic Risk Assessment Tools.” 185 Journal of the Royal Statistical Society, 2022.

[13] Chesterman S, “Artificial Intelligence and the Limits of Legal Personality.” 15 Asian Journal of Comparative Law, 2020.

[14] Jiyauddin and Banerjee S, “A Critical Analysis of Corporate Criminal Liability in India.” 142 Indian Law Journal, 2024.

[15] Deeks A, “Judicial Demand for Explainable Artificial Intelligence.” Columbia Business Law Review, 2019.

[16] Bertolini A, “Artificial Intelligence and Civil Liability.” European Parliament Study, 2020.

[17] Yu R and Spina G, “What’s Inside the Black Box? AI Challenges for Lawyers and Researchers.” 3 Georgetown Law Technology Review, 2019.

[18] Wang X, Wu Y C, Ji X and Fu H, “Algorithmic Discrimination: Examining Its Types and Regulatory Measures with Emphasis on US Legal Practices.” 7 Journal of Law and Artificial Intelligence, 2024.

[19] Blackman J and Veerapen R, “On the Practical, Ethical, and Legal Necessity of Clinical Artificial Intelligence Explainability.” 51 Journal of Medical Ethics, 2025.

[20] Xu H and Shuttleworth K M, “Medical Artificial Intelligence and the Black Box Problem: A View Based on the Ethical Principle of Do No Harm.” 32 Medical Law Review, 2024.

[21] McFarland T and Galliott J, “Understanding AI and Autonomy: Problematizing the Meaningful Human Control Argument Against Killer Robots.” In Automation and Defence, Oxford University Press, 2021.

[22] Giuffrida I, “Liability for AI Decision-Making.” 88 Journal of Technology Law and Policy, 2019.

[23] Expert Group on Liability, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[24] Shankar S, “Looking into the Black Box.” 10 NUS Law Review, 2020.

[25] Bertolini A, “Artificial Intelligence and Civil Liability.” European Parliament Study (IPOL/A/STOA/2020-01), 2020.

[26] Singh A, “The Black Box Dilemma: AI, Legal Systems, and Accountability.” AI & Society, Vol. 30(1), 2025, pp. 45–78.

[27] Singh A, “The Black Box Dilemma: AI, Legal Systems and Accountability.” Indian Journal of AI Law, Vol. 12(2), 2025, pp. 156–189.

[28] Rodrigues R, “Legal and Human Rights Issues of AI.” SSRN Electronic Journal, Vol. 45(2), 2020, pp. 234–256.

[29] Almada M, “Technical AI Transparency: A Legal View of the Black Box.” Technology and Law Review, Vol. 28(4), 2024, pp. 512–547.

[30] Deeks A, “Judicial Demand for Explainable Artificial Intelligence.” Columbia Business Law Review, 2019(5), 2019, pp. 1233–1267.

[31] Greene T et al., “Forks Over Knives: Predictive Inconsistency in Criminal Justice Algorithmic Risk Assessment Tools.” Royal Statistical Society Journal, Vol. 185(S2), 2022, pp. 134–167.

[32] Chesterman S, “Artificial Intelligence and the Limits of Legal Personality.” Asian Journal of Comparative Law, Vol. 15(1), 2020, pp. 89–124.

[33] Expert Group, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[34] Yu R & Spina G, “What’s Inside the Black Box? AI Challenges for Lawyers and Researchers.” Georgetown Law Technology Review, Vol. 3(2), 2019, pp. 412–456.

[35] Giuffrida I, “Liability for AI Decision-Making: Some Legal and Ethical Considerations.” William & Mary Law Review, Vol. 88, 2019, pp. 123–167.

[36] Expert Group on Liability, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[37] Bertolini A, “Artificial Intelligence and Civil Liability.” European Parliament Study (IPOL/A/STOA/2020-01), 2020.

[38] Almada M, “Technical AI Transparency: A Legal View of the Black Box.” Law & Technology Review, Vol. 28(4), 2024, pp. 512–547.

[39] Deeks A, “Judicial Demand for Explainable Artificial Intelligence.” Columbia Business Law Review, 2019(5), 2019, pp. 1233–1267.

[40] Singh A, “The Black Box Dilemma: AI, Legal Systems, and Accountability.” AI & Society Journal, Vol. 30(1), 2025, pp. 45–78.

[41] Shankar S, “Looking into the Black Box: Holding Intelligent Agents Accountable.” NUS Law Review, Vol. 10(3), 2020, pp. 312–345.

[42] Expert Group, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[43] Rodrigues R, “Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities.” SSRN Electronic Journal, Vol. 45(2), 2020, pp. 234–256.

[44] Bertolini A, “Artificial Intelligence and Civil Liability.” European Parliament Study (IPOL/A/STOA/2020-01), 2020.

[45] Blackman J & Veerapen R, “On the Practical, Ethical, and Legal Necessity of Clinical Artificial Intelligence Explainability.” Journal of Medical Ethics, Vol. 51(2), 2025, pp. 112–145.

[46] Expert Group, “Liability for Artificial Intelligence and Other Emerging Digital Technologies.” European Commission Report, 2019.

[47] Wang X, Wu YC, Ji X & Fu H, “Algorithmic Discrimination: Examining its Types and Regulatory Measures with Emphasis on US Legal Practices.” Journal of Law and Artificial Intelligence, Vol. 7(3), 2024, pp. 234–278.

[48] Singh A, “The Black Box Dilemma: AI, Legal Systems, and Accountability.” Indian Journal of AI Law, Vol. 12(2), 2025, pp. 156–189.

[49] Chesterman S, “Artificial Intelligence and the Limits of Legal Personality.” Asian Journal of Comparative Law, Vol. 15(1), 2020, pp. 89–124.

[50] Giuffrida I, “Liability for AI Decision-Making: Some Legal and Ethical Considerations.” William & Mary Law Review, Vol. 88, 2019, pp. 123–167.
