This essay explores the evolving landscape of international AI governance, arguing for a new comprehensive approach to address the complex legal, ethical, and geopolitical challenges posed by artificial intelligence. The rapid integration of AI into global systems challenges traditional legal frameworks, especially regarding state sovereignty and regulatory authority. The essay contends that the current patchwork of regulations is inadequate to address AI’s global nature and proposes a multi-tiered framework fostering international cooperation on core governance issues while allowing flexibility for national adaptation.
The essay examines AI’s far-reaching implications across economic, social, and geopolitical dimensions, highlighting both its potential as a driver of innovation and the risks it presents, from algorithmic bias to threats to privacy. It analyzes the current international legal landscape, including soft law instruments, emerging regional regulations, and ongoing discussions within international organizations. The proposed governance framework aims to balance the need for global coordination with respect for national sovereignty, recognizing that certain AI applications require sector-specific governance tailored to regional contexts.
By advocating for this nuanced approach, the essay contributes to the ongoing discourse on navigating AI’s challenges and opportunities within the international legal order. It emphasizes the urgency of developing cohesive legal solutions to address AI’s impact on economic power, human rights, and the foundational principles of sovereignty in the digital age.
Keywords
Artificial Intelligence Law, AI Governance, Digital Sovereignty, AI Diplomacy, AI Risk Management, Global AI Regulation, AI Legal Challenges, Algorithmic Accountability, Comparative AI Law, Digital Geopolitics, International Law
The rise of artificial intelligence (“AI”) represents one of the most transformative developments in international law and governance of the 21st century. As AI technology becomes increasingly integrated into everyday life—transcending borders and jurisdictions—it challenges traditional legal frameworks, especially regarding state sovereignty, regulatory authority, and global cooperation.1 This article explores the evolving legal landscape of international AI governance and argues for a new, comprehensive approach to address the complex legal, ethical, and geopolitical questions AI raises.
AI’s broad implications extend across economic, social, and geopolitical dimensions, making it a crucial driver of global innovation and power.2 However, its rapid growth presents new risks, ranging from algorithmic bias and threats to privacy to the potential for autonomous systems to make decisions without human intervention.3 Furthermore, the concentration of AI innovation in a few nations heightens fears of technological monopolies and digital imperialism.4 This imbalance has spurred debate about how to ensure equitable global participation in AI development while addressing divergent national interests.
This essay contends that the current patchwork of regulations—composed of national laws, regional initiatives, and non-binding soft law—is inadequate to address the global nature of AI technologies. It advocates for a multi-tiered framework that fosters international cooperation on core AI governance issues while allowing flexibility for national adaptation. This approach respects both international cooperation and the need for localized regulation, recognizing that AI applications in areas such as law enforcement or healthcare may require sector-specific governance depending on regional contexts.
Part I of this article will provide an overview of AI’s key characteristics and its role in reshaping global dynamics.5 Part II will critically assess the existing international legal landscape governing AI, with particular attention to current regulatory efforts and gaps in international coordination.6 Part III will propose a layered governance framework that accounts for AI’s global implications, balancing the need for international cooperation with national sovereignty.7
AI’s far-reaching impact demands urgent attention and cohesive legal solutions. By proposing this governance framework, this article aims to contribute to the ongoing discourse on how best to navigate the challenges and opportunities AI presents for the international legal order. The stakes are high, with AI poised to redefine economic power, human rights, and even the foundational principles of sovereignty in the digital age.
I. AI’s Key Characteristics and Global Role

Artificial Intelligence refers to computer systems capable of performing tasks that typically require human intelligence.8 AI technologies encompass machine learning, natural language processing, and robotics.9 At its core, AI’s power lies in its ability to process vast amounts of data, identify patterns, and make predictions or decisions based on that analysis.10
In the digital economy, AI has emerged as a fundamental driver of innovation and economic growth.11 The United Nations Conference on Trade and Development (“UNCTAD”) has identified AI as a core component of the digital economy, alongside digital services and platform economics.12 AI’s capacity to extract value from data has made it a critical asset in the global economic landscape.13
AI’s international implications are far-reaching. These technologies have the potential to reshape global value chains, alter the nature of work, and influence geopolitical power dynamics.14 The concentration of AI capabilities in a few countries, particularly the United States and China, has raised concerns about digital colonialism and technological monopolies.15
Moreover, AI’s application in military contexts has sparked debates about autonomous weapons systems and their compatibility with international humanitarian law.16 The use of facial recognition technology and drones in armed conflicts, as seen in the Russia-Ukraine conflict, exemplifies the challenges AI poses to traditional legal frameworks.17
The global nature of AI development and deployment necessitates international cooperation and governance. As AI systems operate across borders and affect multiple jurisdictions simultaneously, unilateral regulatory approaches are insufficient.18 This reality underscores the need for a cohesive international legal framework to address AI’s challenges and opportunities. However, not all aspects of AI require the same level of international regulation. AI’s application in specific sectors, such as law enforcement, healthcare, or military use, may vary significantly based on national legal cultures and societal values. For example, an AI model designed for law enforcement in the United States may be wholly unsuitable for use in China or the European Union due to differing legal systems and social contexts.19 Similarly, China’s large language models may not be applicable in Taiwan due to fundamental differences in language, culture, and political systems.20 Thus, a multi-layered approach, with both global cooperation on foundational principles and local governance for specific applications, could provide a more practical solution.
II. Current International Legal Landscape for AI
The international legal framework governing AI is still in its nascent stages, characterized by a patchwork of soft law instruments, emerging regional regulations, and ongoing discussions within international organizations.21 Soft law instruments have played a prominent role in shaping AI governance due to the rapid pace of AI development and the challenges of traditional international lawmaking.22 These non-binding guidelines and principles aim to establish common norms and ethical standards for AI development and deployment.23
The UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021, represents the broadest consensus on AI ethics at the governmental level.24 It provides a framework for responsible AI development that respects human rights and fundamental freedoms.25 Other notable soft law initiatives include the OECD Principles on Artificial Intelligence,26 the G20 AI Principles,27 and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.28
In the absence of a comprehensive international treaty on AI, regional and national regulatory efforts have emerged as important sources of AI governance.29 The European Union has been at the forefront with its AI Act, adopted in 2024, which establishes a risk-based regulatory framework for AI systems.30 Other countries, such as the United States31 and China,32 have also developed national AI strategies and regulatory frameworks, reflecting different priorities and values.
The intersection of AI and international trade law has become increasingly important. Recent trade agreements, such as the Digital Economy Partnership Agreement (“DEPA”)33 and the Singapore-Australia Digital Economy Agreement (“SADEA”),34 have begun to address AI-related issues. However, these agreements also highlight the challenges of categorizing AI within existing trade frameworks.
Several international forums are actively engaged in discussions on AI governance, including the UN Inter-Agency Working Group on Artificial Intelligence (“IAWG-AI”), the World Trade Organization (“WTO”), and various UN human rights bodies.35 These ongoing discussions reflect the growing recognition of the need for international cooperation in AI governance, but they also reveal the challenges of achieving consensus in a rapidly evolving technological landscape.
III. Challenges and Solutions for Global AI Governance
International AI governance faces significant challenges that must be addressed to ensure effective and equitable regulation of AI technologies. One fundamental challenge is the tension between traditional notions of state sovereignty and the borderless nature of digital technologies. The concept of “data sovereignty” has emerged as countries seek to assert control over data within their jurisdictions, leading to what is sometimes called the “balkanization” of the internet.36 While this fragmentation of governance raises concerns about the free flow of data and technological interoperability, it may also reflect legitimate national interests.37 AI technologies are deeply embedded in local infrastructures and may serve national security or economic priorities, which justifies differentiated regulatory approaches.38 Yet this fragmentation of the digital space sits uneasily with the inherently global nature of AI technologies and the internet’s original ethos of openness and decentralization.39 Rather than treating balkanization purely as a normative problem, it may be more productive to develop international frameworks that allow nations flexibility to exercise sovereignty over specific AI applications while cooperating on shared global concerns such as AI safety, ethical standards, and human rights protections.
The rapid evolution of AI technologies also poses challenges for existing international legal frameworks, particularly in international trade law. The distinction between goods and services, fundamental to WTO law, becomes blurred when applied to AI systems that combine hardware, software, and services.40 Similarly, questions arise regarding the legal status of data used to train AI systems and whether such data can be considered an “investment” under international investment law.41
The “black box” nature of many AI systems presents significant challenges for legal accountability and transparency.42 International trade agreements have begun to address issues of source code disclosure, but these provisions may not adequately address the complexities of modern AI systems.43 Balancing the need for algorithmic transparency with the protection of intellectual property rights and trade secrets remains a challenge.44
AI technologies raise numerous ethical and human rights concerns that international law must address, including issues of privacy, non-discrimination, freedom of expression, and the right to human dignity.45 The potential for AI systems to perpetuate or exacerbate existing biases and inequalities poses a significant challenge to international human rights law.46 Moreover, the use of AI in autonomous weapons systems raises complex questions about compliance with international humanitarian law.47
To address these challenges, international law must evolve its conception of sovereignty. A multi-layered approach to AI sovereignty could recognize the interconnected nature of AI technologies, encompassing application layer sovereignty, data layer sovereignty, and algorithm layer sovereignty.48 This nuanced approach can help balance national interests with the need for international cooperation in AI governance.
Given the rapid pace of AI development, international legal frameworks must be sufficiently flexible to adapt to technological changes. A layered approach to international AI regulation could establish foundational principles through binding international agreements, develop sector-specific regulations, and encourage the development of international technical standards through multi-stakeholder processes.49 This layered approach should recognize the necessity of domestic governance for certain AI applications that are deeply tied to national legal and cultural contexts. While broad international cooperation can be achieved on issues like algorithmic transparency, anti-discrimination, and AI safety, domestic legislative efforts must address context-specific concerns. International organizations should provide guidance, but the actual implementation of AI governance should reflect each country’s unique legal system and values.
To address the “black box” problem of AI systems, international standards for AI transparency and explainability should be developed.50 This could include mandating the use of “explainable AI” techniques in high-risk applications, establishing international certification processes for AI systems, and creating international mechanisms for algorithmic auditing and impact assessments.51
International AI governance must be firmly grounded in human rights law.52 This could involve developing AI-specific interpretations of existing human rights treaties, creating new international instruments to address AI-specific human rights challenges, and establishing an international body to monitor and report on the human rights implications of AI technologies.53
To address power imbalances and ensure inclusive AI governance, an international AI technology transfer program could support developing countries. Creating inclusive multi-stakeholder forums for AI governance discussions and developing international AI education and training programs would help democratize AI development and governance, mitigating concerns about digital colonialism.54
Recognizing the intrinsic link between AI and data, international AI governance should be closely integrated with data governance efforts. This could involve developing comprehensive international frameworks for data flows that consider AI-specific issues, creating international standards for data quality and representativeness in AI training datasets, and establishing mechanisms for international data sharing for AI research and development while respecting privacy and security concerns.55
The governance of artificial intelligence presents one of the most significant challenges to the international legal order in the 21st century.56 The global nature of AI technologies, their rapid evolution, and their profound impact on various aspects of human life necessitate a coordinated international response. While the current international legal landscape for AI governance is fragmented and largely dominated by soft law instruments, there is a growing recognition of the need for more robust and binding international frameworks.57
A multi-faceted approach to international AI governance that combines evolving concepts of sovereignty, flexible legal frameworks, enhanced transparency and accountability mechanisms, strengthened human rights protections, inclusive governance structures, and integrated data governance is crucial. By adopting such an approach, the international community can work towards harnessing the benefits of AI while mitigating its risks and ensuring its development aligns with fundamental principles of international law.
The path forward will require unprecedented levels of international cooperation, technical expertise, and legal innovation. As AI continues to reshape our world, the development of an effective international legal framework for its governance is essential for ensuring a future in which AI serves the collective interests of humanity.
The rapid development and global impact of artificial intelligence present unprecedented challenges to the international legal order. This essay has explored the complex landscape of AI governance, highlighting the tension between traditional notions of state sovereignty and the borderless nature of digital technologies. The current patchwork of soft law instruments, regional regulations, and ongoing discussions within international organizations is insufficient to address the multifaceted challenges posed by AI.58
To effectively govern AI on a global scale, we propose a multi-layered approach that balances international cooperation with respect for national sovereignty. This approach recognizes the need for foundational principles established through binding international agreements while allowing flexibility for domestic governance of context-specific AI applications. Key elements of this framework include evolving concepts of digital sovereignty, flexible legal mechanisms adaptable to rapid technological changes, enhanced transparency and accountability standards, strengthened human rights protections, and integrated data governance. By adopting this nuanced strategy, the international community can work towards harnessing the benefits of AI while mitigating its risks and ensuring its development aligns with fundamental principles of international law.
1. See generally H. Akin Ünver, Artificial Intelligence (AI) and Human Rights: Using AI as a Weapon of Repression and its Impact on Human Rights (2024); see also Cheng-chi (Kirin) Chang, The First Global AI Treaty: Analyzing the Framework Convention on Artificial Intelligence and the EU AI Act, 2024 U. Ill. L. Rev. Online 86, 90, https://illinoislawreview.org/wp-content/uploads/2024/12/Chang2.pdf (“The inherently transnational nature of AI technologies and the need for global cooperation in addressing their challenges.”).
2. Id.
3. Id.
4. Id.
5. See infra Part I.
6. See infra Part II.
7. See infra Part III.
8. Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1307–08 (2019) (“[W]hen engineers automate an activity that requires cognitive activity when performed by humans, it is common to describe this as an application of AI.”).
9. Weiyu Wang & Keng Siau, Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda, 30 J. Database Manag. 61 (2019) (“The exponential advancement in artificial intelligence [AI], machine learning, robotics, and automation are rapidly transforming industries and societies across the world.”).
10. Sara Brown, Machine Learning, Explained, MIT Sloan (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained [https://perma.cc/Z83M-5DFS] (According to a guide by MIT Sloan, AI systems, especially those using machine learning algorithms, can process large volumes of data to identify patterns and make predictions or decisions.).
11. Philippe Aghion, Benjamin F. Jones & Charles I. Jones, Artificial Intelligence and Economic Growth (Nat’l Bureau of Econ. Rsch., Working Paper No. 23928, 2017) (“A.I. may be deployed in the ordinary production of goods and services, potentially impacting economic growth and income shares.”).
12. See generally UN Trade and Development, Digital Economy Report 2024 (2024) (In its reports, UNCTAD highlights AI’s role in shaping digital transformations across various sectors, emphasizing its potential to drive economic growth, enhance productivity, and foster innovation, particularly in developing countries.).
13. Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui, & Raoul Joshi, Modeling the Global Economic Impact of AI, McKinsey & Co. 3 (Sept. 4, 2018), https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/notes%20from%20the%20frontier%20modeling%20the%20impact%20of%20ai%20on%20the%20world%20economy/mgi-notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy-september-2018.ashx [https://perma.cc/U7E4-YK4X] (McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year.).
14. Karen Hao, Artificial Intelligence Is Creating a New Colonial World Order, MIT Tech. Rev. (Apr. 19, 2022), https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/ [https://perma.cc/N7EH-FTXW].
15. Id.
16. Yihan Deng, AI & The Future of Conflict, Geo. J. Int’l Affs. (July 12, 2024), https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/ [https://perma.cc/8NJE-4ZNU].
17. Paresh Dave & Jeffrey Dastin, Exclusive: Ukraine has Started Using Clearview AI’s Facial Recognition During War, Reuters, https://www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/ (Mar. 14, 2022, 4:12 PM) [https://perma.cc/4RD3-EWVC].
18. Id.
19. Hope Reese, What Happens When Police Use AI to Predict and Prevent Crime?, JSTOR Daily (Feb. 23, 2022), https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/ [https://perma.cc/38AV-9PJV] (In China, AI is often used for surveillance purposes with limited public awareness or objection.); Chris Jay Hoofnagle, Bart van der Sloot & Frederik Zuiderveen Borgesius, The European Union General Data Protection Regulation: What It Is and What It Means, 28 Info. & Commc’n Tech. L. 65, 75 (2019) (“The Police Directive, which entered into force at the same time as the GDPR, sets rules for data processing by law enforcement agencies, such as the police. The rules in this Directive allow for more limitations than the general framework provided by the GDPR.”).
20. Emily Feng, Why China, and Now Taiwan, Are Making Their Own Chatbots Using Their Own Data, NPR (May 29, 2024, 3:53 AM), https://www.npr.org/2024/05/29/nx-s1-4939615/why-china-and-now-taiwan-are-making-their-own-chatbots-using-their-own-data [https://perma.cc/TB5Y-B2SJ]; Yixuan Lin, Can Taiwan’s First Mandarin LLM Prevent an Invasion of Chinese Mandarin AI?, CommonWealth Mag. (Jan. 31, 2024), https://english.cw.com.tw/article/article.action?id=3614 [https://perma.cc/FP3L-ZUXR].
21. Carlos Ignacio Gutierrez & Gary Marchant, How Soft Law Is Used in AI Governance, Brookings (May 27, 2021), https://www.brookings.edu/articles/how-soft-law-is-used-in-ai-governance/ [https://perma.cc/6DXV-KPSA].
22. Gary Marchant, Why Soft Law Is the Best Way to Approach the Pacing Problem in AI, Carnegie Council for Ethics in Int’l Affs. (Sept. 29, 2021), https://www.carnegiecouncil.org/media/article/why-soft-law-is-the-best-way-to-approach-the-pacing-problem-in-ai [https://perma.cc/57Q9-CR37].
23. John Villasenor, Soft Law as a Complement to AI Regulation, Brookings (July 31, 2020), https://www.brookings.edu/articles/soft-law-as-a-complement-to-ai-regulation/ [https://perma.cc/VL47-UNV7].
24. UNESCO, Recommendation on the Ethics of Artificial Intelligence (2022).
25. Id.
26. AI Principles Overview, OECD.AI, https://oecd.ai/en/ai-principles (last visited Jan. 31, 2025) [https://perma.cc/EB3C-SN83].
27. G20 Ministerial Statement on Trade and Digital Economy, G20 Info. Ctr. (June 9, 2019), https://g20.utoronto.ca/2019/2019-g20-trade.html (last visited Feb. 25, 2025) [https://perma.cc/MLZ6-H73N].
28. The IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems, IEEE Standards Ass’n, https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/ (last visited Feb. 26, 2025) [https://perma.cc/W8ER-9CBK].
29. Press Release, European Parliament, Artificial Intelligence Act: MEPs Adopt Landmark Law (Mar. 13, 2024).
30. Id.
31. Recent U.S. Efforts on AI Policy, CISA, https://www.cisa.gov/ai/recent-efforts (last visited Feb. 26, 2025) [https://perma.cc/TFS2-D64L].
32. Matt Sheehan, China’s AI Regulations and How They Get Made, Carnegie Endowment for Int’l Peace (July 10, 2023), https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en [https://perma.cc/22G8-2Q6V].
33. Background: Canada’s Possible Accession to the Digital Economy Partnership Agreement, Glob. Affs. Can., https://web.archive.org/web/20241103054108/https://www.international.gc.ca/trade-commerce/consultations/depa-apen/background-information.aspx?lang=eng (Aug. 25, 2022) [https://perma.cc/9B5Q-H92M] (“The Digital Economy Partnership Agreement [DEPA], . . . addresses a range of emerging digital economy issues including: artificial intelligence.”).
34. Australia-Singapore Digital Economy Agreement, Australian Gov’t Dep’t of Foreign Affs. & Trade, https://www.dfat.gov.au/trade/services-and-digital-trade/australia-and-singapore-digital-economy-agreement (last visited Feb. 26, 2025) [https://perma.cc/AU4Q-EXUZ] (“The Australian Government and the Government of the Republic of Singapore will cooperate on Artificial Intelligence [AI] capabilities, including new AI technologies, talent development and ethical standards to support the positive commercial application of AI in the digital economy.”).
35. Inter-Agency Working Group on Artificial Intelligence, United Nations, https://unsceb.org/inter-agency-working-group-artificial-intelligence (last visited Feb. 26, 2025) [https://perma.cc/R6VY-EZ5Y]; Volker Türk, Keynote Address at Stanford University: The Human Rights Dimensions of Generative AI: Guiding the Way Forward (Feb. 14, 2024) (transcript available at https://www.ohchr.org/en/statements-and-speeches/2024/02/human-rights-must-be-core-generative-ai-technologies-says-turk [https://perma.cc/L3NM-2E5U]).
36. Roxana Vatanparast, Data Governance and the Elasticity of Sovereignty, 46 Brook. J. Int’l L. 1, 30 (2020) (“The risk that data localization and strong data sovereignty impose is that this will lead to fragmentation and the rise of digital borders, or what is sometimes referred to as ‘Internet Balkanization.’”).
37. Id.
38. Id.
39. Id.
40. See generally Han-Wei Liu & Ching-Fu Lin, Artificial Intelligence and Global Trade Governance: A Pluralist Agenda, 61 Harv. Int’l L.J. 407 (2020).
41. See generally Mark McLaughlin, Regulating Artificial Intelligence in International Investment Law, 24 J. World Inv. & Trade 256 (2023).
42. Bartosz Brożek, Michał Furman, Marek Jakubiec, & Bartłomiej Kucharzyk, The Black Box Problem Revisited. Real and Imaginary Challenges for Automated Legal Decision Making, 32 AI L. 427 (2024) (“This paper addresses the black-box problem in artificial intelligence [AI], and the related problem of explainability of AI in the legal context.”).
43. Cosmina Dorobantu, Florian Ostmann, & Christina Hitrova, Source Code Disclosure: A Primer for Trade Negotiators, in Addressing Impediments to Digital Trade 105 (Ingo Borchert & L. Alan Winters, eds., 2021).
44. See generally Ulla-Maija Mylly, Transparent AI? Navigating Between Rules on Trade Secrets and Access to Information, 54 IIC – Int’l Rev. Intell. Prop. & Competition L. 1013 (2023).
45. Rowena Rodrigues, Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities, 4 J. Responsible Tech., 100005 (2020).
46. See generally Anna Su, The Promise and Perils of International Human Rights Law for AI Governance, 4 L., Tech. & Hum. 166 (2022).
47. See generally Neil Davison, Autonomous Weapon Systems under International Humanitarian Law, in UNODA Occasional Papers No. 30, at 5 (2017); Shin-Shin Hua, Machine Learning Weapons and International Humanitarian Law: Rethinking Meaningful Human Control, 51 Geo. J. Int’l L. 117 (2020).
48. Jufang Wang, Claire Milne, Jess Zichen Hu, & Furqan Khan, OXGS Report: Navigating Geopolitics in AI Governance (2024).
49. Global Trends in AI Governance, World Bank Group 9 (2024).
50. See generally Greg Adamson, Can We Use Non-Transparent Artificial Intelligence Technologies for Legal Purposes?, in 2020 IEEE Int’l Symp. on Tech. & Soc’y 43 (2020) (The paper argues that non-transparent “black box” AI systems should not be used for legal purposes without human oversight, as post hoc explanations are insufficient for upholding the rule of law.).
51. Carlos Zednik, Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence, 34 Phil. & Tech. 265 (2019) (The paper proposes a normative framework to evaluate explainable AI techniques and address the “black box” problem of opaque AI systems.).
52. Türk, supra note 35.
53. Maria Paz Canales, Ian Barber, & Jacqueline Rowe, What Would a Human Rights-Based Approach to AI Governance Look Like?, Glob. Partners Digit. (Sept. 19, 2023), https://www.gp-digital.org/what-would-a-human-rights-based-approach-to-ai-governance-look-like/ [https://perma.cc/N4WE-9WLD].
54. Governing AI for Humanity, United Nations 24 n.3 (2024).
55. See Philipp Hacker, A Legal Framework for AI Training Data – From First Principles to the Artificial Intelligence Act, 13 L., Innovation & Tech. 257, 300 (2021) (“The analysis has shown that three risks are crucial for a legal framework for Al training data: data quality, discrimination and innovation risks.”).
56. Alex Krasodomski, Introduction – the Need to Future-Proof AI Governance, in Artificial Intelligence and the Challenge for Global Governance 7 (2024).
57. See Vatanparast, supra note 36.
58. Id.
* Cheng-chi (Kirin) Chang (張正麒) is the Associate Director & Academic Fellow, AI and the Future of Work Program at Emory University School of Law. I extend my appreciation to Dr. Ifeoma Ajunwa, J.D., LL.M., Ph.D., Yinn-ching Lu, Rachel Cohen, Yilin (Jenny) Lu, Nanfeng Li, Yenpo Tseng, Jeffrey Chang, Wolf (Chun-Ting) Cho, Zih-Ting You, Youyang Zhong, Ssu-Yuan (Iris) Yang, Ya-jou Liu, Arron Fang, Edison Li, Shijie Xu, and Yizhang (Yilia) Shen, as well as the esteemed scholars who participated in The Inaugural Emory Global AI and Law Colloquium, including but not limited to Gabriela Arriagada-Bruneau, Anupam Chander, Colleen Chien, Ignacio Cofone, Jake Okechukwu Effoduh, Nikolas Guggenberger, Vivek Krishnamurthy, Ernest Lim, Paul Ohm, Catherine Sharkey, Katherine Strandburg, and Angela Zhang, for their valuable insights and contributions. Their expertise and engagement have significantly enriched this work and broader discussions in the field. Additionally, I appreciate Eli Goldstein, Michael Cerota, and the other editors of the University of Illinois Law Review for their diligent efforts in bringing this article to publication. Any errors or oversights are my sole responsibility. The views expressed in this article are solely my own and do not represent those of any affiliated institutions.