ABSTRACT
Artificial intelligence has reshaped legal systems and economies with unprecedented speed. While AI has enabled remarkable innovation, it has also raised grave concerns about democratic stability, privacy, and the security of fundamental rights. As AI becomes central to contemporary surveillance infrastructure, there is a pressing need for robust, context-specific governance frameworks. This paper examines the policy frameworks and ethical principles being developed by the European Union and emerging in governance regimes worldwide, including the United States, China, the BRICS countries, ASEAN, and other regions, through the prism of legislative initiatives, academic literature, international codes of practice, and policy documents.
The EU leads globally through its normative instruments, the General Data Protection Regulation, the Digital Services Act, and the pioneering AI Act, all of which emphasize human-centric values and risk-based governance, impose strong accountability and rights protections, and ban AI practices harmful to democratic life. Governance approaches elsewhere diverge sharply: the United States pursues a decentralized, market-driven, innovation-first model; China operates a highly centralized, security-oriented framework with extensive state surveillance; and other jurisdictions, including ASEAN, BRICS, and the broader Global South, have adopted hybrid or capacity-constrained approaches shaped by diverse political and socioeconomic contexts.
This assessment reveals a set of shared and emerging challenges: ambiguity in defining high-risk AI, limited empirical evaluation of regulatory impact on persistent algorithmic bias and accountability gaps, and insufficient public awareness. Many ethics guidelines and voluntary codes lack enforceability, rendering them susceptible to ‘ethics washing.’ Moreover, although the EU is said to export its regulatory philosophy, the so-called ‘Brussels effect,’ its influence on surveillance governance remains uneven, and demands from under-represented regions underscore the need for more inclusive global governance.
The paper concludes by affirming the relevance of AI regulation in the age of ubiquitous surveillance and calling for the combined effort of states to entrench ethical and flexible regulatory systems. It argues that such systems should prioritize transparency, accountability, multi-stakeholder inclusion, protection of rights, and a proactive approach to the risks AI poses in surveillance contexts, so that AI development remains lawful, durable, and conducive to human flourishing.
1. Introduction
Following the rapid pace of its development, AI has ceased to be a mere technological invention and has become a structural phenomenon that shapes how people interact, how economies operate, and how societies are governed. The wide use of AI in biometric identification, surveillance, and predictive analytics has led to what Shoshana Zuboff calls “surveillance capitalism,” in which personal data becomes both a commercial commodity and a tool of state power. This development raises fundamental ethical, human rights, and legal problems, as algorithmic systems increasingly act as decision-makers, social orderers, and instruments of government. The advance of AI into state administration, security, and commercial activity has blurred the distinction between reasonable and invasive surveillance, challenging the effectiveness of existing legal frameworks, which were modeled on human responsibility rather than autonomous algorithms. The EU now leads the world in AI regulation with the Artificial Intelligence Act of 2024, which, together with the EU's other digital harmonization measures, operationalizes a human-centric vision of technological innovation grounded in transparency, accountability, and proportionality in the relationship between technology and fundamental rights.
Elsewhere, the world remains divided: the US fosters market-based, sectoral self-regulation; China embeds AI in state surveillance of its citizens; and international bodies such as the OECD and UNESCO provide soft-law models that articulate moral principles rather than binding law. The absence of a unified global legal framework produces a high degree of regulatory fragmentation, in which incompatible values, innovation, security, privacy, and sovereignty, underpin the law of AI in the era of surveillance.
Nonetheless, comparative legal scholarship on AI governance remains relatively small in size and scope. Most existing research concentrates on theoretical ethics, economic implications, or technological dangers without systematically considering the interaction of AI regulation, surveillance practices, and international human rights law. The literature lacks serious reflection on how ethical standards have been incorporated into legally binding norms across jurisdictions and how these frameworks will respond to the challenges posed by the global deployment of AI. Another direction that has not been fully explored is the possibility of harmonizing global AI regulation through international law. Taking the European Union as the normative paradigm and examining its impact on global governance regimes, this paper seeks to address these gaps through a comparative legal analysis of the ethical and policy frameworks governing AI. The primary research question is: how do the legal and ethical frameworks of the European Union and other global governance regimes compare in their approaches to accountability, surveillance, and human rights? The EU's risk-based, rights-based framework is examined in its divergence from, and impact on, the governance approaches of key states, including the United States and China, and international schemes by the Council of Europe, UNESCO, and the OECD. The paper evaluates the prospects for international harmonization given diverging normative standards and addresses the trade-off between innovation and the protection of human rights in AI policy-making.
The study therefore aims to compare the treatment of AI surveillance across jurisdictions; to examine the effectiveness of international ethical instruments; to analyze the regulatory framework of the EU as an exemplar of ethical AI regulation; and to chart pathways toward a unified global regulatory regime. By testing the possibility of convergence of regulatory ideas at a time when digital surveillance threatens to outpace human rights protections, the study contributes to an emerging literature on the law of AI.
2. Theoretical Foundations
2.1 Global Governance Theories
The regulation of Artificial Intelligence (AI) occurs within an intricate network of global interconnectedness where transnational ethical values, technological novelty, and state sovereignty converge. The theory of global governance offers a foundational platform for understanding how legal and policy instruments evolve in a digitally interconnected, cross-border world. Rosenau and Czempiel (1992) view global governance not as world government but as a polycentric system of overlapping authorities, institutions, and norms that collaborate to manage transnational issues. This paradigm fits AI governance well: AI transcends national boundaries, involves both state and non-state actors, and requires collective rule-making to address societal and ethical risks.
On a theoretical level, three prominent frameworks of global governance support the current discourse on the regulation of AI:
1. Regime theory asserts that international cooperation develops through issue-specific regimes: normative structures that systematize state behavior in pursuit of shared objectives. The OECD Principles on AI (2019), the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), and the G7 Hiroshima AI Process (2023) exemplify such regime building, establishing non-binding yet influential soft-law instruments that guide national legislation. They illustrate how a lex informatica of AI, norms formed by code, practice, and policy rather than traditional treaties, is gradually taking shape.
2. Multi-stakeholder governance theory emphasizes the participation of diverse stakeholders, companies, governments, academic institutions, and civil society, in the formation of norms. Through the design and implementation of algorithmic systems, companies such as OpenAI, Google, and Baidu govern AI to a significant degree; this diffuse power structure is precisely what the approach captures. The multi-stakeholder expert forums organized by the OECD and the EU's High-Level Expert Group on AI (HLEG) are institutionalized versions of this type of governance, balancing public accountability against business innovation.
3. Norm diffusion and policy transfer theory (Finnemore and Sikkink, 1998) explains how moral and legal standards travel across jurisdictions. The EU's risk-based approach to AI regulation, with its emphasis on human control, responsibility, and transparency, has already influenced policy proposals in Brazil and Canada and the Council of Europe's Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (2024). Although this diffusion is conditioned by differing geopolitical circumstances, it reflects a worldwide tendency toward convergence on rights-based AI governance.
Global governance theories commonly identify a tension between global regulatory coherence and technological sovereignty. China's cyber-sovereignty strategy incorporates AI into a state framework of control and surveillance, whereas the EU advocates a rules-based approach founded on human dignity; the United States, by comparison, favors market liberalism and innovation-first principles. Developing global governance frameworks that balance ethical universality against this heterogeneity is therefore central to the comparative analysis in this study.
2.2 Legal–Ethical AI Frameworks
The normative legitimacy of AI regulation derives from the convergence of ethical theory and legal regulation, which yields hybrid frameworks focused on accountability, equity, and human-centered innovation. The three keystones of “Trustworthy AI” outlined by the HLEG (2019) are lawfulness, ethical soundness, and technical robustness. With the Artificial Intelligence Act of 2024, soft-law ethics were transformed into binding law as these ideas were converted into legal requirements. This move exemplifies what scholars such as Yeung (2022) call “governance by design,” in which moral values are embedded in technological and legal architectures.
In the scholarly literature, legal treatments of AI draw on several ethical traditions:
First, deontological ethics, in the Kantian tradition, emphasizes respect for human autonomy and dignity. The same concept underlies the consent and data-minimization requirements of the GDPR, which ensure that individuals are subjects, not objects, of algorithmic decision-making.
Second, utilitarian ethics gives high priority to aggregate social welfare and thus informs cost-benefit analyses of AI deployment, particularly in security and healthcare contexts. When citizens are placed under police scrutiny, however, this approach risks legitimizing monitoring.
Third, virtue ethics encourages integrity in both business and government in developing algorithms, foregrounding institutional character and accountability in algorithmic design. The OECD's appeal for responsible stewardship of trustworthy AI is indicative of this tradition.
The EU AI Act operationalizes these ethical commitments as legally binding obligations by classifying AI systems according to risk: unacceptable, high, limited, and minimal. Biometric surveillance and predictive policing are treated as high-risk systems, subject to strict conformity assessment, transparency requirements, and human oversight. This model seeks to prevent organizations from engaging in ‘ethics washing,’ that is, paying lip service to ethical standards without being held to them.
Non-EU frameworks reflect varied intellectual and ethical foundations. The US Blueprint for an AI Bill of Rights emphasizes innovation, competition, and voluntary corporate responsibility. AI governance in China, by contrast, is socially oriented and anchored in the 2022 Algorithmic Recommendation Provisions, which prioritize communal security over individual self-determination and frame AI ethics in terms of social harmony and state control. Meanwhile, international standards of ethics, transparency, justice, and explainability are promoted by bodies such as UNESCO and the OECD but are not binding. This mismatch underscores the value of comparative legal study in seeking avenues of normative harmonization and in demonstrating the continuing gap between ethical ambition and legal application.
2.3 Law of Surveillance and Human Rights
The use of AI to monitor human activity is the most problematic field of digital governance in ethical and legal terms. Facial recognition, predictive policing, and data analytics transform surveillance into a front-end mechanism of behavioral control in both government and the market. These technologies raise human rights concerns about the compatibility of AI surveillance with democratic constitutionalism, since they threaten the fundamental rights to privacy (Article 8 ECHR), freedom of expression (Article 10), and non-discrimination (Article 14).
Under European human rights doctrine, especially as interpreted by the European Court of Human Rights, surveillance must meet the criteria of legality, necessity, and proportionality. These standards are visible in landmark rulings such as Big Brother Watch v. the United Kingdom (2021) and Szabó and Vissy v. Hungary (2016), which require transparent surveillance regimes and readily available remedies against mass data interception. The EU's commitment to human dignity in digital governance is further manifested in the EU Charter of Fundamental Rights, which enshrines data protection (Article 8) as a distinct right rather than a mere facet of privacy.
By comparison, China's surveillance architecture represents an entirely different normative order, one in which the legitimacy of data collection is established by reference to social stability and national security, operationalized through its Social Credit System and extensive AI surveillance apparatus. The United States, meanwhile, lacks a unifying federal data protection law; its patchwork of statutes, including the CCPA and FISA, leaves the judiciary to play an important role in balancing national security needs against privacy. These conflicting models reveal the normative pluralism of surveillance governance, in which institutional design and political beliefs are central to the scope of legal protection.
At the international level, AI surveillance legislation is anchored in human rights instruments such as the Council of Europe's modernized Convention 108+ (2018) and Article 17 of the International Covenant on Civil and Political Rights (ICCPR). Enforcement and adaptation remain challenging, however, because algorithmic transparency, automated profiling, and real-time data aggregation were never contemplated by existing human rights law. Newer instruments therefore seek to modernize human rights protection by making algorithmic responsibility and impact assessment legal requirements; one example is the Council of Europe's Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (2024).
Against this background, studies at the intersection of AI, surveillance, and human rights law require multilayered methods of analysis that go well beyond doctrinal scholarship. How legal systems can adapt to protect human dignity in an algorithmic society cannot be explained without incorporating technology studies alongside ethics and global governance theory. Although the EU model of ethical surveillance and rights-based legislation is a strong starting point, it remains to be seen how well it will travel across different systems of governance. Resolving this ambiguity is the subject of the comparative inquiry undertaken in this paper.
3. Legal Framework of the European Union
3.1 The EU AI Act and GDPR
The European Union's Artificial Intelligence Act, formally adopted in 2024, is the world's first comprehensive regulation specifically addressing AI technologies. It aims to harmonize member states' practices in the use of AI while securing fundamental rights, safety, and democratic values, and it is among the most significant components of the EU's digital policy agenda. Under its risk-based paradigm, the Act divides AI systems into four categories, unacceptable, high, limited, and minimal risk, each with its own legal requirements.
Article 5 imposes an outright ban on unacceptable-risk AI, including real-time remote biometric identification in public spaces, social scoring, and manipulative or exploitative algorithms. This ban embodies core EU commitments to autonomy, dignity, and non-discrimination.
High-risk AI systems may be used only if they satisfy stringent requirements on accountability, transparency, human oversight, and data governance. Such systems include those used in critical infrastructure, law enforcement, employment, and migration control. Under a regime of strict market surveillance, providers of these systems must maintain technical documentation and undergo conformity assessments (Articles 8–51).
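To make the Act's tiered logic concrete, the four-tier taxonomy described above can be sketched as a simple lookup. This is an illustrative model only: the use-case labels and the `classify` helper are hypothetical, and the Act's actual classification turns on its detailed articles and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (Article 5)
    HIGH = "high"                  # conformity assessment, documentation, oversight (Articles 8-51)
    LIMITED = "limited"            # transparency duties (e.g. disclosing AI interaction)
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical use-case labels for illustration only; the real
# classification is far more granular than a flat lookup.
BANNED = {"social_scoring", "public_realtime_biometric_id"}
HIGH_RISK = {"critical_infrastructure", "law_enforcement",
             "employment_screening", "migration_control"}
LIMITED = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Map an illustrative use-case label to its risk tier."""
    if use_case in BANNED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in LIMITED:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The sketch shows why the unacceptable tier sits first: prohibition pre-empts any conformity assessment, while everything not expressly banned or listed falls through to the minimal tier by default.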
The Act's regulatory reasoning builds on the General Data Protection Regulation (GDPR, in force since 2018), the foundational rule of AI-related data regulation. The AI Act extends to algorithmic contexts the principles of lawfulness, fairness, accountability, and transparency that the GDPR stipulates for the processing of personal data. Together, the two instruments create a dual regulatory framework: the AI Act governs the design, implementation, and monitoring of AI systems that process data, while the GDPR governs the collection and processing of the data itself.
The connection between the AI Act and the GDPR raises several doctrinal concerns. First, the competences of data protection authorities may overlap with those of AI regulators in a number of areas, such as automated decision-making under Article 22 GDPR and AI-based profiling. Second, proportionality and technological innovation can clash: the AI Act's flexible, risk-based architecture permits contextual interpretation, whereas the GDPR protects privacy as a fundamental right. As researchers such as Veale and Zuiderveen Borgesius (2021) describe it, this relationship constitutes a normative continuum in which explainability, accountability, and data protection converge into a single model of digital constitutionalism.
In addition, the AI Act, like the GDPR, has extraterritorial reach: it applies to non-EU providers whose AI systems affect individuals in the EU (Article 2). This “Brussels Effect” describes the EU's role as a global regulator that exports its rules and influences AI policy beyond its borders (Bradford, 2020). The Act's concepts are thus expected to shape new regulatory schemes in Canada and Japan, and even AI risk proposals in California. This extraterritoriality reveals the EU's deliberate strategy of building normative leadership in global AI regulation even where it is not the market leader.
3.2 Digital Markets Act (DMA) and Digital Services Act (DSA)
The Digital Services Act (DSA) and the Digital Markets Act (DMA), both adopted in 2022, supplement the AI Act and the GDPR to form the EU's broader strategy for regulating online platforms and digital markets. These acts play a central role in the AI governance ecosystem because they address the structural dimensions of AI systems most relevant to accountability, transparency, and market fairness.
The DSA establishes a horizontal regulatory regime for internet platforms and intermediaries, which must adopt policies ensuring algorithmic disclosure, risk management, and responsibility for content moderation. Its systemic-risk provisions (Articles 34–35) require very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate systemic risks, including the spread of misinformation, discriminatory algorithmic decisions, and violations of fundamental rights. Notably, the DSA grants vetted researchers and auditors access to platform data, enabling empirical evaluation of AI-driven recommender systems.
The DMA, by contrast, addresses the economic dimension of concentrated digital power by constraining gatekeepers that control data ecosystems and AI infrastructures, such as Google, Meta, and Amazon. By imposing interoperability requirements and prohibiting anticompetitive behaviour, the DMA also diminishes the risk of AI monopolization, that is, control by a few major actors over algorithmic architecture and training data. Combined, the DSA and DMA complement the AI Act by addressing both the technical operation of AI systems and the market structures underpinning them.
The DSA and DMA also extend the EU's constitutionalization of digital regulation insofar as they embed rights-based governance within market law. They carry the AI Act's human-centric philosophy into the digital spaces where AI systems are deployed. Moreover, the DSA's introduction of systemic risk management and algorithmic auditability is a significant step toward institutionalizing algorithmic accountability at the European level.
3.3 Ethics Guidelines for Trustworthy AI (HLEG, 2019)
The European Union's normative framework for AI regulation predating the AI Act was developed by the High-Level Expert Group on Artificial Intelligence in its seminal Ethics Guidelines for Trustworthy AI (April 2019). The document articulates a vision of AI founded on human dignity, autonomy, and social welfare, and it remains the ethical core of the EU's legislative approach.
The HLEG identified three pillars of Trustworthy AI:
1. Legality: compliance with all applicable laws and regulations;
2. Ethical Alignment: adherence to moral standards and ideals that go beyond mere legality;
3. Technical and Socio-Technical Robustness: reliability, safety, and resilience across different situations.
The guidelines also set out seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Taken together, these standards form the normative framework of EU human-centered AI.
From the perspective of governance theory, the HLEG framework is the soft-law precursor of the AI Act. The HLEG document translates philosophical ethics into workable governance measures in anticipation of future legislative codification. In addition, the HLEG embodied a genuinely multi-stakeholder governance pattern, drawing on input from industry, academia, and civil society, in stark contrast to China's state-oriented governance and the laissez-faire style of the United States.
Scholars such as Floridi and Cowls (2020) argue that the HLEG principles, as normative constraints on how technology is designed and used, constitute an “ethical constitutionalism” of the informational era. Later contributions, however, point out the danger of ethics washing: the invocation of voluntary principles that are never implemented. The shift from the HLEG's soft ethics to the hard law of the AI Act therefore represents the move to binding regulation, a development that also illustrates the EU's global leadership in AI legislation.
4. Comparative Analysis: Multilateral AI Governance Frameworks, the U.S., and China
Beyond the European Union, artificial intelligence governance forms a fragmented yet highly instructive global network, with regulatory strategies in the era of surveillance divided along lines of political belief, institutional culture, and visions of human rights. This section compares the approaches of the United States, China, and international organizations such as the OECD and the United Nations.
Consistent with its broader agenda of technological innovation and limited government intervention, the US has followed a market-based, sector-specific, soft-law approach to AI regulation. Unlike the EU's comprehensive AI Act, the United States has no federal AI statute; it relies instead on a sectoral patchwork of federal and state legislation, regulations, and protocols. The White House Office of Science and Technology Policy identified five core principles in its 2022 Blueprint for an AI Bill of Rights: safe and effective systems; protections against algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration, and fallback, though none of these is enforceable. The AI Risk Management Framework published by NIST in 2023 likewise promotes openness and trustworthiness but remains largely a soft-law compliance tool. The soft-law character of these documents reflects a U.S. governance culture in which innovation, entrepreneurship, and national security take priority over precautionary regulation.
The American system of AI regulation welcomes cooperation among governmental agencies and other participants, industry codes, and corporate self-governance. Companies such as OpenAI, Microsoft, and Google have developed algorithm-auditing systems and internal AI ethics boards to pre-empt future restrictions. Critics, however, describe these processes as “ethics theater”: they signal compliance without being legally required. Moreover, because there is no federal data protection law comparable to the EU's GDPR, sector-specific statutes such as the Federal Trade Commission Act, the Fair Credit Reporting Act, and the FISA surveillance provisions do the effective work of privacy and surveillance regulation. The U.S. approach thus reflects a governance paradigm that leaves ethical regulation at the margins and reinforces the uneven global distribution of technological capital and surveillance power.
China's AI governance model, by contrast, is state-centered and surveillance-oriented, integrating social control with digital innovation. The Chinese government includes AI within its broader digital governance strategy and Social Credit System, perceiving technology as a tool for both political stability and economic growth. China's legal arrangements for controlling AI rest on a top-down structure that prioritizes collective security, state sovereignty, and ideological conformity. Crucial elements are the Algorithmic Recommendation Provisions (2022) and the Provisions on Deep Synthesis of Internet Information Services (2023), under which AI-driven systems must adhere to “core socialist values” and must not produce output that harms societal order or national cohesion. China's parallel requirements of data localization and surveillance authorization are addressed in the Data Security Law (2021) and the Personal Information Protection Law (PIPL, 2021), which give the state control over information.
Chinese AI regulation, in short, strives to advance state agendas through technological control, unlike the rights-based EU framework and the market-based US framework. The convergence of surveillance and administration is manifested in AI-based population tracking, facial recognition, and pre-emptive policing, as seen in Xinjiang. Arguably, AI in China is not treated as an autonomous field of regulation at all, but as an instrument of state policy.
In the academic literature, this phenomenon, in which technologically efficient control substitutes for traditional legal accountability, has been dubbed “digital authoritarianism” (Polyakova and Meserole, 2019). China has not been idle in global rule-setting bodies either: it has participated in the AI standardization work of UNESCO and the ITU, advocating an emphasis on sovereignty rather than Western-derived rights.
To bridge the divide between the liberal and more authoritarian worlds, intergovernmental organizations such as the OECD and the UN strive to create an international ethical code for AI. The OECD Principles on Artificial Intelligence, adopted in 2019 and supported by over forty countries, were the first intergovernmental set of AI standards. They focus on human-centered values such as accountability, transparency, robustness, and inclusive growth, encouraging member countries to adopt legislation that respects basic rights while fostering innovation. The OECD approach, too, rests on soft law and consensus rather than enforceable rules. These standards played a major role in informing the EU's AI policy structure and formed the basis of the G20 AI Principles adopted later the same year.
Meanwhile, a global governance agenda based on ethics has been advocated by UNESCO and the Office of the High Commissioner for Human Rights (OHCHR). A significant advance in global AI standards was the adoption of UNESCO's Recommendation on the Ethics of Artificial Intelligence (2021) by 193 member states. It rests on four pillars: human rights and dignity; the health of the planet and ecology; diversity and inclusion; and peaceful societies. UNESCO's pluralistic approach to AI ethics, grounded in the United Nations Charter and the Universal Declaration of Human Rights (1948), contrasts with the OECD's economic emphasis. In addition, the OHCHR's 2022 report The Right to Privacy in the Digital Age brought algorithmic accountability and the civil rights risks of surveillance technologies into focus.
Both the OECD and UN frameworks, however, share the structural weakness of soft law: considerable moral authority without means of enforcement. Their power rests on diffusion, shaping national law through legitimacy and persuasion rather than coercion. These global initiatives have informed the EU's adoption of ethical, human-centric norms, which, unlike their soft-law sources, are backed by binding enforcement. The result, as authors such as Floridi (2024) and Calo (2023) suggest, is the potential emergence of a ‘global AI governance divide’: authoritarian states fuse AI technologies with surveillance infrastructures, liberal democracies turn to rights-based regulation, and developing countries are caught between digital dependence and normative sovereignty.
Comparative study of the U.S., Chinese, and multilateral strategies highlights broader problems of coherence in international AI regulation. The United States is decentralized and business-oriented, consistent with its pro-innovation agenda. China ties AI to national security and administrative practice, placing the link between surveillance and legitimacy at the centre of its approach. The United Nations and the OECD, for their part, seek to forge a global ethics founded on soft cooperation. The European Union, with its harder, quasi-constitutional approach to AI, occupies an intermediate position, endeavouring to foster innovation and protect rights simultaneously.
Finally, this comparative analysis suggests that global AI governance is currently shaped more by the contestation of norms than by legislative harmonization. Market liberalism, state authoritarianism, and ethics-based multilateralism represent rival visions offering different answers to the same question. Whether these visions converge on a common legal platform, or instead widen the already vast geopolitical rifts of the twenty-first century, will determine the future direction of global AI governance.
5. Ethical and Policy Implications: Accountability, Data Ethics, and Cross-border Surveillance
The ethical and policy ramifications of artificial intelligence governance in the surveillance era are without doubt among the most difficult and contentious areas of contemporary legal analysis. The challenges[21] linked to the growing penetration of AI systems into decision-making across industries, from border management and health to banking and law enforcement, have prompted a corresponding rise in concerns over accountability, transparency, and human rights. The very architecture of AI, algorithmically opaque, data-dependent, and transnational in operation, puts traditional legal concepts of duty and jurisdiction to the test. The following section reflects on how cross-border surveillance, data ethics, and accountability intersect to redefine the worldwide debate over technology governance and normative legitimacy.

Accountability is central to ethical governance when determining which moral and legal actor(s) bear responsibility where AI systems cause harm or perpetuate injustice. With autonomous or semi-autonomous algorithms whose decision-making procedures grow ever less intelligible even to their designers, the usual legal framework premised on human agency and foreseeability fails. Responsibility consequently diffuses among developers, deployers, and data controllers, producing what scholars have called an ‘accountability gap’ (cf. Wiener, 2020). Where the GDPR preserves individual rights through the notions of explainability and contestability in Articles 12-22, the EU AI Act attempts to close this gap through traceability, documentation, and human oversight of high-risk systems. These provisions establish a rights-based approach to accountability, anchoring legal responsibility in requirements of transparency and procedural justice.
Transparency, however, is a paradoxical virtue in AI governance: explainability confers legitimacy, yet excessive disclosure may stifle innovation or expose trade secrets. A concept of ‘meaningful transparency’ has therefore been advanced: disclosure sufficient to enable oversight without revealing proprietary algorithms. This implies an ethical orientation less toward informational transparency than toward epistemic justice, that is, establishing the conditions under which individuals and organizations possess the knowledge to challenge algorithmic power meaningfully. Algorithmic auditing, impact assessments, and mechanisms of public accountability, already being tested in the EU through the risk-management framework of the Digital[22] Services Act, will accordingly be streamlined.

The second layer of the ethical discourse is human rights and data ethics, which supplies the ethical foundation of any regulatory effort concerning AI. AI systems carry power and bias embedded in their data. Mass-scale data gathering, archiving, and analysis, often conducted without express permission, threatens to entrench structural injustice and erode individual autonomy. The values of fairness, accountability, privacy, and proportionality that define ethical data governance are reflected in the OECD AI Principles (2019) and the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021).[23] These concepts find legal expression in the GDPR and in Article 8 of the Charter of Fundamental Rights of the European Union, which formalizes data protection as a fundamental right: individuals must retain control over their personal data, and AI systems may operate only within the limits of necessity and proportionality.
In the United States, by contrast, privacy is a consumer right rather than a constitutional one, and data is frequently treated as a commodity. China's Personal Information Protection Law of 2021, which subordinates data privacy to the public interest and state security, is technically similar to the GDPR yet illustrates the normative divergence in data ethics worldwide. Viewed through the lens of ethical theory, this divergence reflects the underlying tension between utilitarian and deontological positions on AI regulation. EU law embodies the deontological position, demanding respect for fundamental human rights and dignity regardless of consequences. China and some other Asian jurisdictions instead stress collective security and well-being over individual freedom, a utilitarian or collectivist approach. This normative heterogeneity of values enriches the ethical discourse but complicates the construction of a consistent system of global governance.

The third, and geopolitically perhaps most crucial, dimension of the ethical and policy implications of technology, sovereignty, and human rights lies in cross-border surveillance. AI-enhanced surveillance technologies such as facial recognition, biometric tracking, and predictive policing do not respect national boundaries, and the global information flows that support them raise significant questions of jurisdiction, extraterritoriality, and responsibility. What law, for example, would govern the processing of a European citizen's biometric data by a U.S.-based platform, or its transfer to the police pursuant to transatlantic data-transfer arrangements?
Precisely this ambiguity underpinned the CJEU's decisions in Schrems I (2015) and Schrems II (2020), which invalidated the U.S.-EU Safe Harbor and Privacy Shield arrangements respectively for failing to protect human rights against state surveillance. Responsibility is further complicated by emerging cross-border AI surveillance relationships, whether between China and developing nations via its Digital Silk Road program, or between the EU and the US over counterterrorism information sharing. These arrangements risk producing a global surveillance economy through the export of surveillance infrastructure to nations where human rights protections are lax. Meanwhile, the Council of Europe's Framework Convention on AI, Human Rights, Democracy and the Rule of Law (2025) is designed to establish globally binding rules for AI, including transparency, necessity, and redress. The OECD and the UN Human Rights Council likewise support international standards for lawful cross-border data transfers, grounded in democratic accountability and human oversight. Cross-border monitoring poses the ethical dilemma of whether human rights and privacy can retain territorial boundaries in a society that increasingly lacks them. ‘Digital sovereignty,’ a concept at the heart of EU policy, represents a bid to reclaim control over data flows and technological infrastructures. Critics, conversely, fear that excessive data localization would fragment the internet and obstruct innovation. A balance must therefore be struck between sovereignty and universality, so that AI-enabled surveillance can serve legitimate security concerns while the cosmopolitan dimension of human dignity is honoured.
Taken together, the problems of data ethics, accountability, and cross-border surveillance point to the profound institutional and moral reforms required to legitimate AI governance. Legally, they demand regulatory mechanisms that entrench human rights within technological structures: algorithmic impact assessments, international audit frameworks, and agreements on trans-border data management. Ethically, they demand a shift from retrospective regulation to anticipatory governance that brings transdisciplinary knowledge, participatory oversight, and foresight into AI policymaking. In a word, the ethical and policy ramifications of AI governance in the age of surveillance expose fundamental philosophical tensions between openness and secrecy, self-regulation and domination, globalization and state authority. The European Union's rights-based model provides a viable premise for resolving these conflicts, but it can gain general acceptance only if it strikes the proper balance between legal universality and ethical diversity. In that regard, the future of AI ethics will be decided not by laws alone but by the accumulation of a universal ethical consensus that places human dignity at the centre of the digital transformation.
6. Future Directions: Towards Global Harmonization, International Law, and Policy Recommendations
The future course of AI regulation will depend on whether the global community can bring about an efficient, rights-based, and enforceable governance system. The regulatory environment is currently fragmented: the state-controlled, surveillance-based paradigm of China stands against the market-oriented, innovation-led approach of the United States.[24] Beyond expressing geographical plurality, this fragmentation of beliefs and ideals raises the threat of a race to the bottom in ethical standards and human rights protection. The central question is therefore whether AI governance can be harmonized without erasing cultural and political diversity.

Towards International Harmonization

International harmonization of AI regulation is both an aim and a challenge. Although the European Union has established an inclusive framework for trustworthy AI through the EU AI Act (2024), the Act's extraterritorial reach has triggered a process of normative diffusion that is reshaping legislative landscapes elsewhere in the world. Its risk-based categorization system, transparency requirements, and enforcement process incentivize international actors to comply with its provisions in order to maintain market access, just as the GDPR motivated worldwide compliance with data protection. This is the so-called Brussels Effect (Bradford, 2020): harmonization achieved through the gravitational pull of regulation rather than formal treaty-making. It is, however, neither consistent nor uncontested. The United States prefers soft-law instruments, exemplified by the AI Bill of Rights (2022) and the NIST AI Risk Management Framework (2023), which are sector-specific in focus and voluntary in their commitments.
A sharply contrasting model of authoritarian harmonization is offered by the Chinese approach to regulation, grounded in state sovereignty, national security, and social stability, and expressed in the Algorithmic Recommendation Provisions (2022) and the Generative AI Measures (2023). This gap proves that global harmonization must be more than merely technical; it must also address serious ethical and political conflicts over the objectives and scope of surveillance. Scholars regard a pluralistic harmonization paradigm, in which regional flexibility coexists with shared values, as the most viable course of action. Instruments such as the OECD AI Principles (2019), the UNESCO Recommendation on the Ethics of AI (2021), and the emerging Council of Europe Framework Convention[25] on AI, Human Rights, Democracy and the Rule of Law (2025) propose convergence on basic normative anchors: transparency, accountability, fairness, and human oversight. Though non-binding or limited in scope, these instruments are building blocks of a multi-layered governance structure in which domestic enforcement, regional legislation, and international soft law interact dynamically.
Role of International Law and Treaties[26]
International law now stands at the forefront of AI governance.[27] Global technology governance has traditionally depended on soft law, coordination mechanisms, and multi-stakeholder efforts rather than legally enforceable treaties. Yet there is a strong argument that a treaty-based international framework grounded in public international law is becoming urgent as AI technologies increasingly intersect with human rights, cross-border data flows, and surveillance.
A hopeful step is the Council of Europe's AI Framework Convention (2025), the first legally binding international instrument on AI, which translates principles of democracy, human rights, and the rule of law into state duties. By integrating accountability and impact assessment with redress procedures for AI deployment, it offers a model for broader multilateral engagement. In parallel, the OECD and the UN Human Rights Council continue to support efforts to embed AI ethical principles in international human rights law, at least with respect to non-discrimination (Article 26 ICCPR) and privacy (Article 17 ICCPR).
Nevertheless, a global AI treaty faces great practical and geopolitical obstacles. States differ significantly on data sovereignty, surveillance practices, and the balancing of rights against innovation. China advances its own vision of digital governance through its Global Data Security Initiative (2021), the United States prizes technological competition, and the EU grounds governance in a constitutional commitment to human rights. Overcoming these ideological rifts requires a new ‘functional multilateralism’: a form of governance that acknowledges overlapping interests, encourages collaboration within specific issue areas (such as cross-border surveillance and AI auditing, at both national and international levels), and uses existing institutions such as the WTO, ITU, and UNESCO to harmonise standards.
Corporate liability for AI-related harms can also be grounded in the UN Guiding Principles on Business and Human Rights (2011), which articulate extraterritorial human rights responsibilities. Through widespread state practice and opinio juris, the incorporation of AI ethics into customary international law may ultimately transform principles such as algorithmic transparency and human accountability into domestically enforceable norms.
Thus, the key to the future of AI governance in international law will lie less in codification than in normative internalization: embedding moral imperatives within the operating rationality of governments and corporations alike.
Suggestions for Lawmakers[28]
To establish a cogent, rights-based global governance model for AI and surveillance, policymakers should pursue a multi-pronged strategy encompassing institutional, ethical, and legal reforms:
Adopt a Global AI Accord: A global treaty on AI under UN auspices should codify the Council of Europe's principles of accountability, transparency, and human rights protection, with an enforcement mechanism of peer compliance evaluations and periodic reviews.
Institutionalize Global Coordination: The UN or OECD should convene a multi-stakeholder body, a Global AI Governance Forum, to standardize responses to technical standards, data-sharing schemes, and cross-border surveillance, and to regulate conflicts over AI deployment.
Mandate Algorithmic Impact Assessments: To ensure transparency, all high-risk AI systems, especially those involving surveillance, should be required to undergo Human Rights Impact Assessments prior to deployment, and the results of these assessments should be made public.
Reward Ethics-by-Design in AI: [29]Ethics-by-design and privacy-by-design strategies should be incentivized through funding, certification, and shared research, so that ethical principles are integrated into technology design itself rather than appended as an afterthought of legal compliance.
Ensure Corporate Accountability: Building on corporate due-diligence laws such as the European Union's Corporate Sustainability Due Diligence Directive (2024), establish international liability regimes for companies that create or deploy AI systems implicated in human rights abuses.
Enhance Cross-border Data Governance: Establish legally binding data-adequacy standards that provide personal data transferred across borders with the same level of protection the GDPR offers, including safeguards against corporate or state abuses of surveillance.
Promote International Ethical Research and Education: Fund multidisciplinary capacity-building programmes on ethical AI design, governance theory, and digital human rights law for legislators, engineers, and legal scholars.[30]
How international bodies reconcile the transformation of moral values into binding rules with the tension between sovereignty and universality will be decisive for how AI is governed in the age of surveillance. Beyond regulatory convergence, the future demands moral coherence: a worldwide agreement that technological advancement should underpin social justice, democratic accountability, and human identity.
The task ahead is to construct a robust global governance framework that can manage technological change without reproducing the very imbalances of power and surveillance that AI itself makes possible. Only through ethical cooperation, inclusive policymaking, and commitment to shared values can the international community ensure that AI develops into an instrument of emancipation rather than of control.
Conclusion: Summary of Findings and the Future of AI Governance
This paper has critically examined the emerging architecture of artificial intelligence regulation in the surveillance era, focusing on the interplay of legal, ethical, and policy regimes within the European Union (EU) and international systems of governance. The investigation was guided by the central research question: How do global governance regimes and the legal and ethical frameworks of the EU address the accountability, surveillance, and human rights challenges of AI governance? Through comparative analysis, the study has illuminated the ethical, normative, and regulatory problems at the heart of twenty-first-century AI governance.
Synopsis of Results
The paper has shown, first, that the rise of AI-driven surveillance constitutes a structural shift in governance, changing how companies and governments exercise authority, collect information, and regulate conduct. As Zuboff's (2019) theory of surveillance capitalism demonstrates, data extraction has been transformed into a significant instrument of social control and commercial value creation. Against this background, ethics and law are struggling to keep pace with the blistering development of technology and the opacity of algorithmic decision-making.
The analysis of the European Union's governance model shows it to be a coherent, rights-based model grounded in the key notions of accountability, proportionality, and transparency.
Together with the GDPR, the Digital Services Act, and the Digital Markets Act, the AI Act (2024)[31] forms an interrelated legal framework that advances a human-centric conception of AI. Combined, these instruments demonstrate the EU's commitment to algorithmic accountability through a risk-based approach and ethics by design. The paradigm is further grounded in the normative duties to human dignity, justice, and societal well-being articulated in the High-Level Expert Group's Ethics Guidelines for Trustworthy AI (2019).
The comparative analysis, in turn, underscored global fragmentation. The US favors economic dynamism over centralized regulation, relying on soft law, voluntary principles, and private-sector-led innovation. China's governance approach, underpinned by considerations of social stability and national security, institutionalizes state surveillance and embeds AI within that apparatus. International organizations such as the OECD, UNESCO, and the Council of Europe have produced convergent ethical frameworks, but most remain largely non-binding. The resulting normative gap between global soft law and regional hard law (the EU) diminishes collective responsibility and limits enforceability.
Ethically, the findings reveal that accountability, data ethics, and cross-border surveillance form the core of the governance conundrum. Because existing legal concepts of culpability presuppose a human actor and foreseeability, conditions automated systems rarely satisfy, the accountability gap in AI-based decisions persists. Even where the EU model seeks to close that gap through documentation requirements and human oversight, global systems remain inconsistent. Deontological regimes such as the EU's rights-based framework and utilitarian or collectivist regimes such as those of China and other parts of Asia continue to diverge on data ethics. Cross-border surveillance, meanwhile, exposes the limits of territorial sovereignty in the digital era. The CJEU's Schrems decisions of 2015 and 2020 have shown that data security and privacy require international cooperation grounded in shared human rights rather than unilateral action.
Looking ahead, international law and treaties will play a central role in defining how AI is regulated. New instruments such as the UNESCO Recommendation on the Ethics of AI (2021) and the Council of Europe's Framework Convention on AI, Human Rights, Democracy and the Rule of Law (2025) reflect institutional support for embedding AI ethics in human rights law. Global harmonization nevertheless remains gradual, constrained by opposing conceptions of sovereignty, divergent regulatory approaches, and geopolitical competition.
Considerations on the Future of AI Governance[32]

The future of AI regulation and technological development will be shaped by how legal and ethical institutions cope with the spread of new forms of algorithmic power. This investigation prompts three reflections. First, reactive regulation in AI governance must give way to anticipatory and adaptive governance: interdisciplinary monitoring, ethical analysis, and foresight should be integrated at every level of AI development and deployment. Future structures should treat algorithmic audits, dynamic risk evaluation, and human rights impact assessments as continuous rather than intermittent processes. Second, global governance should be pluralistically harmonized rather than uniform. Although full convergence is improbable, transparency, accountability, justice, and human control can become the pillars of international AI ethics. Such pluralistic convergence accommodates regional difference without accepting ethical relativism. It suggests a multi-layered international system of governance in which national enforcement and regional legislation interact with international soft law through forums of cooperation, including a UN AI Governance Forum and a Global Data Accord.
Third, the future of AI governance requires moral consistency and democratic legitimacy. As algorithms come to mediate social order and fundamental rights, governance must remain accountable to democratic institutions and public discourse. Without transparency and participation, AI risks entrenching digital authoritarianism and perpetuating structural inequality in the name of efficiency and innovation. The ethical imperative, therefore, is to ensure that technological progress promotes rather than diminishes human autonomy, dignity, and freedom.
To sum up, AI and surveillance present an unmatched governance challenge, one that tests the quality of ethical judgment, the flexibility of domestic law, and the durability of international law. Global governance[33] must balance legal, cultural, and ideological differences through principled multilateralism; here the European Union offers a solid example worthy of emulation worldwide. The task before academics, policymakers, and technologists is to create a global governance framework that prioritizes people over technologies, so that AI becomes a tool of social justice and democratic empowerment rather than of surveillance.
[1]R. Binns, Ethics and Accountability in Algorithmic Governance (MIT Press, 2021)
[2]A.K. Sharma and R. Sharma, “Governance in the Age of Artificial Intelligence: A Comparative Analysis of Policy Framework in BRICS Nations” 46 AI Magazine e70010 (2025)
[3]A. Chander, “Artificial Intelligence and Trade” in M. Burri (ed.), Big Data and Global Trade Law 115–127 (Cambridge University Press, 2021)
[4]Brady D. Lund, “Standards, Frameworks, and Legislation for Artificial Intelligence (AI) Transparency” AI and Ethics (2025), https://doi.org/10.1007/s43681-025-00661-4, available at SSRN: https://ssrn.com/abstract=5295085
[5]UNESCO, Recommendation on the Ethics of Artificial Intelligence, UNESCO (n.d.), retrieved Dec. 16, 2025, from https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
[6]N. Basu and R. Dave, “Comparative Analysis of Laws in AI” 5 SDGs Review e05575, 1–23 (2025)
[7]C. Hakan Kan, “Artificial Intelligence (AI) in the Age of Democracy and Human Rights: Normative Challenges and Regulatory Perspectives” 9 Int’l J. Eurasian Educ. & Culture 145–166 (2024)
[8]R. Zhaltyrbayeva, A. Jangabulova, S. Suleimenova, Sh. Saimova and Zh. Tlembayeva, “Legal Challenges of Regulating Artificial Intelligence in Law Enforcement, Taking into Account the Interdisciplinary Approach to Socio-Legal Transformations” 8 Social & Legal Studios 118–130 (2025)
[9]Muh. Habibulloh, “Digital Governance and the Right to Privacy: A Comparative Analysis of AI Regulation in Southeast Asia and the European Union” 1 J. L., Policy & Global Dev. 19–35 (June 2025)
[10]A. Wiener, “The Accountability Gap in Algorithmic Governance: Legal Responsibility and Technological Complexity” 22 Yale J.L. & Tech. 95–136 (2020)
[11]A. Mantelero, “AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment” 34 Computer L. & Security Rev. 754–772 (2018)
[12]K. Crawford and T. Paglen, “Excavating AI: The Politics of Images in Machine Learning Training Sets” 36 AI & Society 977–996 (2021)
[13]C. Kuner, “Reality and Illusion in EU Data Transfer Regulation Post-Schrems” 18 German L.J. 881–918 (2017)
[14]Muh. Habibulloh, “Digital Governance and the Right to Privacy: A Comparative Analysis of AI Regulation in Southeast Asia and the European Union” 1 J.L., Pol’y & Global Dev. 19–35 (2025)
[15]Breno Barbosa de Oliveira, “The EU’s Ambition to Influence Global Standards for Artificial Intelligence amongst Regulatory Competition with China and the USA” (Master’s Dissertation, Universidade de Évora, Escola de Ciências Sociais, Mestrado em Relações Internacionais e Estudos Europeus, 2024)
[16]Ren Bin Lee Dixon, “Artificial Intelligence Governance: A Comparative Analysis of China, the European Union, and the United States” (Master of Public Policy Professional Paper, The Hubert H. Humphrey School of Public Affairs, University of Minnesota, 7 May 2023)
[17]John Babikian, “Securing Rights: Legal Frameworks for Privacy and Data Protection in the Digital Era” 1 Law Res. J. 91–101 (2023)
[18]Dalia Alic, “The Role of Data Protection and Cybersecurity Regulations in Artificial Intelligence Global Governance: A Comparative Analysis of the European Union, the United States, and China Regulatory Framework” (Master of Arts in Human Rights Final Thesis, Central European University, Vienna, Austria, June 2021)
[19]Anna Jobin, Marcello Ienca and Effy Vayena, “Artificial Intelligence: The Global Landscape of Ethics Guidelines” 1 Nature Machine Intelligence 389–399 (2019)
[20]Nimrod Mike, “Global Perspectives on AI Governance: A Comparative Overview” in Proceedings of HHAI-WS 2024: Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, 10–14 June 2024 (CEUR Workshop Proc., 2024)
[21]Arshid Jan, “Techno-Legality: The Legal Challenges of AI, Surveillance, and Digital Governance” 4 Advance Soc. Sci. Archive J. 906–919 (July–Sept. 2025)
[23]Roxana Radu, “Steering the Governance of Artificial Intelligence: National Strategies in Perspective” 40 Policy & Society 178–193 (2021)
[24]I. Ulnicane, W. Knight, T. Leach, B.C. Stahl and W.-G. Wanjiku, “Governance of Artificial Intelligence: Emerging International Trends and Policy Frames” in M. Tinnirello (ed.), The Global Politics of Artificial Intelligence 29–54 (Routledge, 2022)
[25]Timo Minssen, Barry Solaiman, Lea Köttering, Jakob Wested and Abeer Malik, “Governing AI in the European Union: Emerging Infrastructures and Regulatory Ecosystems in Health” in Barry Solaiman and I. Glenn Cohen (eds.), Research Handbook on Health, AI and the Law 311–330 (Edward Elgar Publishing, 2024).
[26]European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”, COM(2021) 206 final, 2021/0106(COD) (21 April 2021)
[27]José Sixto-García, Bella Palomo and Carmen Peñafiel, “Editorial: Self-Regulation and Co-Regulation as Governance Solution” 9 Frontiers in Communication, Article 1529021 (December 2024)
[28]R. Zhaltyrbayeva, A. Jangabulova, S. Suleimenova, Sh. Saimova and Zh. Tlembayeva, “Legal Challenges of Regulating Artificial Intelligence in Law Enforcement, Taking into Account the Interdisciplinary Approach to Socio-Legal Transformations” 8 Social & Legal Studios 118–130 (2025)
[29]M.A.N. Miazi, “Interplay of Legal Frameworks and Artificial Intelligence (AI): A Global Perspective” 2 Law & Policy Rev. 1–25 (2023)
[30]A.K. Sharma and R. Sharma, “Governance in the Age of Artificial Intelligence: A Comparative Analysis of Policy Framework in BRICS Nations” 46 AI Magazine e70010 (2025)
[31]Amit Kumar, “Situating Automated Decision-Making Jurisprudence within Data Protection Frameworks: A Study of Intersections between GDPR and EU Artificial Intelligence Act – Part II” Law School Policy Rev. (16 May 2024)
[32]Roxana Radu, “Steering the Governance of Artificial Intelligence: National Strategies in Perspective” 40 Policy & Society 178–193 (2021)
[33]M. Veale, K. Matus and R. Gorwa, “AI and Global Governance: Modalities, Rationales, Tensions” 19 Ann. Rev. L. & Soc. Sci. 255–275 (2023)


