I. Introduction
In early 2023, the legal profession went into a frenzy when ChatGPT made global headlines by not just passing but excelling on the U.S. bar examination.1 Mere months later, a similar furor arose when ChatGPT made its way into the judicial system, first being used by a judge in a legal opinion2 and then by lawyers in their legal submissions.3
The reason behind the uproar is that ChatGPT is a form of generative artificial intelligence (AI) capable of creating entirely new content.4 Some lawyers have lauded generative AI as a cost-saving device,5 and one federal judge has suggested there is “nothing inherently improper about using a reliable artificial intelligence tool for assistance.”6 However, there are many in the legal profession who view generative AI as extremely problematic.7
Numerous concerns arise with respect to generative AI, including those relating to due process and procedural fairness.8 However, three issues rise to the fore. First, in its current form, generative AI has no safeguards requiring it to produce information that is true and correct, resulting in documents that are replete with fictitious legal authorities known as hallucinations.9 Second, generative AI frequently misinterprets and misapplies source material, thereby casting doubt on references to legitimate legal authorities.10 Third, it is difficult or impossible to tell, simply by looking at the face of the document, that it has been created by a computer rather than a human.11
Taken together, these factors suggest that judges and litigants cannot rely on anything contained in a document created by generative AI. Though users of generative AI may claim they are making the litigation process more efficient, timely, and cost-effective, they are actually requiring judges and other litigants to double-check their work, thereby shifting the time, cost, and burden of legal analysis to other participants in the litigation process.12 This approach is not only rife with inefficiencies but also extremely likely to erode public confidence in the civil and criminal justice systems.13
The technology industry has already called for immediate regulation of AI,14 and legal institutions around the world are responding. In the United States, individual judges are amending their rules to indicate the extent to which generative AI is permitted in party submissions,15 while in Canada, the Supreme Court is considering the adoption of a practice note concerning use of AI in its proceedings.16 The United Kingdom has issued a White Paper on AI,17 while the European Union (EU) and China have already drafted legislation on the subject.18
As welcome as these initiatives may be, they are more reactive than proactive. Rather than relying on piecemeal responses, the legal profession needs to address generative AI holistically. The first step in this endeavor involves identifying which entities or individuals are best-positioned to respond to the problems associated with generative AI at both the national and international levels.
This Article analyzes the narrow issue of which public and private bodies are best-suited to respond to the challenges of generative AI in domestic and cross-border litigation (Section III).19 Rather than proposing content-based solutions to the issues facing the criminal and civil justice systems, this Article focuses on identifying who can and should act in the short, medium, and long terms.
The only way to properly evaluate the various options is to compare each alternative against a standard set of criteria. The Article therefore begins with a short discussion of the factors used to identify which entities or individuals are best-placed to provide a fair, effective, and appropriate legal response to generative AI in litigation (Section II). The Article concludes by tying together the various strands of argument and recommending how the national and international legal community should proceed (Section IV).
II. Factors Affecting Choice of Actor
Though there are many ways to evaluate the relative merits of different actors capable of responding to the challenges of generative AI, this Article focuses on four key factors. The first is consistency, meaning consistency across individual litigations within a particular court as well as between different courts, both within and across jurisdictional lines. Consistency is important because it protects core principles of procedural fairness, such as the equal treatment of parties. Consistency also has practical benefits, in that it reduces inefficiencies and errors by providing advance notice of what will be required of parties and their counsel.20 Consistency implicitly includes an element of transparency, thereby promoting public confidence in both the criminal and civil justice systems.21
The second factor is speed. The longer generative AI remains unregulated, the more likely it is that injustices and errors will arise on both the individual and institutional levels, thereby affecting the perceived legitimacy of judicial proceedings. Delay can also lead to cognitive distortions (such as the status quo bias or the anchoring bias) that make it harder to regulate problematic behavior in the future.22
The third factor is flexibility. Though flexibility can be seen as being in tension with speed, the field of AI is changing rapidly, and it is important to avoid calcifying the law in an immature or undesirable state. However, such concerns cannot excuse inaction; rather, they simply mean that the techniques used to address generative AI need to be agile.
The fourth and final factor is accountability. If a rule or law indicates that generative AI may not be used in a particular manner, there needs to be some means of ensuring that provision has been complied with. Furthermore, any sanctions need to be narrowly targeted toward the party responsible for creating the difficulties.
III. Determining Who Should Act to Address Generative AI in Litigation
Thus far, the legal community has been most concerned about whether and to what extent generative AI can be used by practitioners.23 However, lawyers are not the only ones who might use generative AI during the litigation process. Pro se litigants, judicial clerks, and even judges themselves might seek to rely on this type of technology.
The following discussion describes the various entities that might be capable of regulating generative AI in criminal and civil litigation. The first subsection focuses on domestic options while the second subsection focuses on international options.
A. Domestic Options
1. Judicial action
So far, the predominant means of addressing generative AI in litigation is through amendments to the rules of individual judges or local rules of court.24 This approach, which focuses exclusively on lawyers and litigants, has the benefit of speed, in that fewer individuals have to agree on a particular course of action, and flexibility, in that the rules can be amended easily and rapidly now and in the future. Accountability can arise through court sanctions. The problem is the lack of consistency within and across particular judicial systems.
Consistency could be increased at the federal level by amending the Federal Rules of Civil Procedure, Federal Rules of Criminal Procedure, Federal Rules of Appellate Procedure, and/or Rules of the Supreme Court of the United States, though that is neither a speedy nor flexible approach, given that the amendment process takes at least two to three years to complete.25 However, the Federal Judicial Center (FJC)—the research and education arm of the U.S. federal judiciary—could increase consistency in either the short or long term by facilitating discussions among federal judges and/or by publishing research concerning possible judicial responses to generative AI.26
Consistency within individual state court systems could arise through the amendment of state rules of procedure, though there is no way to ensure consistency between different states. While principles of federalism permit diversity among different state approaches, inefficiencies and errors might arise in interstate disputes, including in cases when attorneys appear pro hac vice in other state courts.27 Extreme divergence between different states could also lead to concerns about due process and procedural fairness.28
A certain amount of harmonization could arise with the help of organizations like the National Center for State Courts (NCSC), a judicial research and education organization similar to the FJC,29 and the Conference of Chief Justices, which facilitates discussions between chief justices of the various state courts on matters of common concern.30 While such efforts would improve consistency, it is unclear how long they would take to implement.
All of the options discussed in this subsection have the benefit of providing for accountability through judicial sanctions against both lawyers and pro se litigants. However, care must be taken to ensure that any sanctions are addressed to the offending individuals only. In particular, the wrongdoing of a lawyer should not affect the rights and interests of a party represented by that lawyer.31
Lawyers and litigants are not the only ones whose conduct needs to be considered. Judges and judicial clerks might also be tempted to rely on generative AI to help them draft judicial decisions and opinions.32 Indeed, judges in other countries have already used various forms of AI, including ChatGPT, in their work.33
Some in the United States may think that U.S. judges would never resort to AI in drafting their judgments, but U.S. judges have been known to take questionable shortcuts in their work in the past.34 For example, some U.S. judges engage in “judicial plagiarism,” whereby they copy materials from party submissions into judicial decisions and opinions.35
Concerns about judicial independence have traditionally precluded the imposition of external restrictions on judicial behavior, even though critics claim that judicial self-regulation is inherently problematic.36 The only binding means of addressing U.S. judges’ use of generative AI would be through amendments to the Code of Conduct for United States Judges and its state analogues, along with a separate code for Justices of the Supreme Court of the United States (something Chief Justice Roberts has thus far pursued unsuccessfully).37 Amendments to the Code of Conduct for Judicial Employees38 and rules of ethics imposed by individual judges39 could address the actions of federal law clerks, with the behavior of state law clerks being addressed through similar means.
While such efforts might increase accountability and consistency, it is unclear how speedy and flexible they would be. One alternative might be to require judges and clerks at both the state and federal levels to undertake mandatory coursework about the problems associated with generative AI, but that approach is not fail-safe. Federal judges are not required to complete any form of judicial education after being elevated to the bench, as is also true of many state judges.40 Even those judges who are required to complete a certain number of continuing education credits per reporting period are not told which courses to take.41 Thus, judicial education alone will not have much effect on the behavior of judges and clerks.
2. Legislative action
Another means of addressing generative AI in litigation is through legislation, as the EU and China have done.42 Though U.S. state and federal legislatures do not frequently enact laws addressing civil procedure, they can act in appropriate circumstances.43
While a legislative response is possible, it may not be ideal, since such efforts are neither speedy nor flexible. Questions of accountability could also arise, particularly with respect to who would bear the cost of non-compliance. Much would depend on how the legislation was drafted.
Additional problems arise with respect to consistency. Congress could regulate the use of generative AI in litigation only by successfully asserting that generative AI negatively affects interstate commerce under the Commerce Clause, individual procedural rights under the U.S. Constitution, or the implementation of an international treaty or convention.44 Even then, the treatment of generative AI could end up differing between state and federal courts, or between individual state courts.45
Non-binding methods of promoting interstate consistency do exist. For example, the Uniform Law Commission (ULC) has sought to harmonize state law for decades by promulgating model laws for individual state legislatures to consider.46 Although the ULC typically avoids involving itself in judicial proceedings, it has promulgated thirty-two instruments relating to civil procedure and the courts.47 Therefore, it is not impossible for the ULC to engage with the issue of generative AI in litigation.
3. Action by licensing authorities
Another possible approach involves characterizing the use of generative AI as affecting rules of ethics and professional conduct established by lawyers’ licensing authorities (i.e., the bar).48 Amendments to the rules of professional responsibility would provide direct accountability for behavior undertaken by lawyers (including transactional lawyers, though this Article focuses exclusively on litigators) and indirect accountability for behavior involving judges and law clerks. A properly drafted rule might even be able to affect pro se litigants.
Changes to the rules of professional conduct can be implemented and subsequently amended relatively quickly, thus meeting the criteria for both speed and flexibility.49 Consistency would exist within each individual licensing territory, though divergences could arise between different jurisdictions. However, the American Bar Association (ABA) could play a useful role in helping to harmonize rules across the United States.50
Since the vast majority of judges and law clerks are also lawyers, a properly drafted rule could govern use of generative AI by judges and clerks. Care would need to be taken in the drafting of those provisions to address any potential objections from those concerned about judicial independence.
One issue that has not yet been discussed by the legal community involves the possibility that pro se litigants might seek to introduce documents created by generative AI in court. On the one hand, it could be argued that self-represented individuals are better off with the assistance of generative AI than without it, thereby improving access to justice. However, that view fails to appreciate both the errors that can and do arise in legal submissions drafted by generative AI and the burden placed on other participants in the process to review and correct those errors.51 It also assumes that all forms of generative AI are equal, when that is very much not the case.52
While professional licensing bodies cannot regulate the behavior of pro se litigants directly, they might be able to do so indirectly by claiming that creators of generative AI are engaging in the unauthorized practice of law by “advising” pro se litigants through technological means.53 While such an argument may be difficult to assert in light of recent opinions from the Supreme Court of the United States,54 it has the benefit of placing the burden of accountability squarely on the persons responsible for creating the problem (i.e., the designers of generative AI) and could provide an incentive to programmers to block generative AI from attempting legal analyses.
4. Complementary actions
While the entities discussed in the preceding subsections have the ability to act, many will hesitate to do so because of concerns about the content of the relevant rules, laws, or regulations. Fortunately, there are a number of bodies that can help by undertaking empirical and policy-oriented research designed to identify appropriate content. The benefit of this approach is that it can (and indeed should) start immediately.
The discussion above has identified several bodies (e.g., the ABA, FJC, NCSC, and ULC) that may be able to undertake research in this field. One group that has not yet been mentioned is the American Law Institute (ALI), best known as the body responsible for the Restatements of Law. Any work done by the ALI on generative AI in litigation would doubtless prove useful to law- and rule-making authorities seeking to determine how best to proceed. Studies conducted by the ALI could also form the basis for the ALI’s own work, which could take the form of a restatement, a set of guiding principles, or a model code.55
The major problem with the ALI is the amount of time it would take to complete a new project.56 However, smaller and more agile bodies—such as state and local bar organizations—could undertake similar studies. Indeed, bar organizations might be particularly well-placed to undertake studies of practitioner beliefs and behaviors, since the bar has broad access to lawyers across various specialties.
Professional organizations for judges (such as the Conference of Chief Justices, the American Judges Association, or the ABA’s Judicial Division) could undertake similar studies of judges. Indeed, judicial organizations may be the only entities capable of gaining the judicial perspective, since judges are often hesitant to participate in studies conducted by academics or others.57
Any empirical studies would of course have to comply with best practices in data collection and analysis.58 Optimally, bar and judicial organizations would enlist the assistance of academics with the necessary research design skills. Alternatively, academics could initiate their own studies and enlist the assistance of bar associations and judges’ groups.
B. International Options
Domestic efforts can and should be supplemented by similar initiatives at the international level. International courts like the International Criminal Court, the International Court of Justice, and the European Court of Justice can implement rules of court to address generative AI in their proceedings, while supranational legislative bodies like the European Parliament can pursue legislative solutions affecting their constituent Member States.59 However, the absence of a single global sovereign with judicial or legislative jurisdiction over cross-border litigation makes it difficult to ensure consistency through binding law, at least in the short term. Fortunately, there are several intergovernmental and private bodies that can help craft a response to the challenges of generative AI in cross-border litigation.
The first institution to consider is the Hague Conference on Private International Law (Hague Conference). Not only has the Hague Conference promulgated numerous international conventions dealing with cross-border litigation,60 it also conducts research into various matters affecting private international law.61 While concerns about speed and flexibility may preclude the Hague Conference from initiating work on a hard or soft law instrument in the short term,62 a global research study into generative AI in litigation would be very useful in promoting cross-border consistency.
Another intergovernmental organization that might take on a project in this field is the International Institute for the Unification of Private Law (UNIDROIT). Though UNIDROIT focuses primarily on substantive law, it has some experience with civil procedure, having promulgated the Principles and Rules of Transnational Civil Procedure in 2006 in cooperation with the ALI (ALI/UNIDROIT Principles and Rules).63 That project was subsequently transformed in 2020 by the European Law Institute (ELI) into the ELI/UNIDROIT Principles and Rules.64 Both are soft law instruments, yet each took a considerable amount of time (seven years) to complete, suggesting that UNIDROIT’s involvement would arise in the medium or long term rather than the short term.65
Although there is no international licensing authority that can impose a rule of professional conduct on lawyers working internationally, the International Bar Association (IBA) is very active in matters involving cross-border dispute resolution, promulgating a variety of soft law instruments including practice rules, guidelines, and principles relating to various aspects of cross-border legal practice.66 The IBA is also active in convening conferences and task forces that study and discuss issues of interest, which might very well include concerns about generative AI.67
IV. Conclusion
As a general rule, public justice systems evolve slowly so as to ensure that all relevant issues receive due care. Occasionally, however, an event arises that requires urgent attention. The advent of generative AI in criminal and civil litigation is just such an event.
There are those in the legal profession who believe that generative AI can be a useful tool that should be allowed in litigation, and there are those who believe it to be a dangerous innovation that can and will lead to inefficiency, inequity, and an erosion of public confidence in the judicial system.68 Regardless of where one stands on the future use of generative AI in litigation, what is clear is that the legal profession must put some standards in place immediately to avoid injury and unfairness on both an individual and institutional level.
The first step toward addressing generative AI in civil and criminal litigation is determining which entities or individuals are best-suited to respond. Unfortunately, as the preceding pages have shown, there is no single body that can address generative AI in domestic or cross-border litigation with sufficient consistency, speed, flexibility, and accountability. Instead, it is necessary to pursue several different actions, both simultaneously and seriatim, to achieve the desired results.
In the short term, the best means of proceeding would involve a combination of the rules of court with the rules of professional responsibility. Courts and bar organizations can adopt and implement rule changes speedily and flexibly, and the cross-cutting approach to accountability ensures coverage of all the key participants in litigation (lawyers, judges, judicial clerks, and pro se litigants) while also possibly inspiring the creators of generative AI to pursue necessary safeguards at the technical level.
The one shortcoming of the suggested approach involves consistency across jurisdictional boundaries. That problem could be solved by amending various rules of procedure, though those reforms would take several years to take effect, at the very least. Fortunately, organizations like the FJC, NCSC, ULC, ALI, ABA, IBA, UNIDROIT, and the Hague Conference can help promote harmonization efforts in the meantime by undertaking research initiatives that help various rule-making bodies identify appropriate content and by promoting their own soft-law initiatives that help shape judicial and litigant behavior.
Long-term projects would involve hard law instruments undertaken by legislatures or intergovernmental bodies, though such efforts should not be pursued until the law and practice have had an opportunity to develop and evolve. Indeed, it might not even be necessary to move to this final level of regulation if initial efforts to address generative AI in litigation are successful.
Although legislatures and intergovernmental bodies should not pursue hard law initiatives right away, they should keep a watchful eye on the development of law in this field. If too much time passes, path dependency and cognitive distortions such as the status quo bias and anchoring bias could make subsequent legislative and intergovernmental action difficult or impossible to achieve. Therefore, legislatures and intergovernmental bodies should consider forming working groups in the short term as a means of demonstrating that generative AI is an issue of legitimate legislative concern.69
Generative AI is still in a very early stage of development, and there will be those who caution against regulation lest the wrong standards be put in place.70 However, it can take years for some legal entities to respond to new challenges, and significant damage can be done in the meantime, both to individual litigants and to the criminal and civil justice systems as a whole. The legal community should certainly proceed with caution, but should not wait to start the process of considering how to deal with generative AI in litigation. To do so would work an injustice on the very people the justice system is meant to serve and protect.
* Ph.D. (law), University of Cambridge (U.K.); D.Phil., University of Oxford (U.K.); J.D., Duke University. The author, who is qualified to practice as an attorney in New York and Illinois and as a solicitor in England and Wales and in Ireland, is Professor of Comparative and Private International Law at the University of Sydney. The author would like to thank Mihaela Apostol, Amy Schmitz, Tim Schnabel, and Takashi Takashima for their insights during the drafting of this Article. All errors remain the author’s own.
1. See Samantha Murphy Kelly, ChatGPT Passes Exams from Law and Business Schools, CNN (Jan. 26, 2023, 1:35 PM), https://edition.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html [https://perma.cc/X4Z8-GMSR]; Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score Nearing 90th Percentile, ABA J. (Mar. 16, 2023, 1:59 PM), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile [https://perma.cc/QN8R-RVKF].
2. See Lauren Croft, Use of ChatGPT in Courts Should be Approached “With Great Caution”, Lawyers Weekly (Feb. 13, 2023), https://www.lawyersweekly.com.au/wig-chamber/36657-use-of-chatgpt-in-courts-should-be-approached-with-great-caution [https://perma.cc/LS3B-5RTB].
3. See Mata v. Avianca, Inc., No. 1:2022cv01461, 2023 WL 4138427 (S.D.N.Y. June 22, 2023); Molly Bohannon, Judge Fines Two Lawyers for Using Fake Cases from ChatGPT, Forbes (June 22, 2023, 4:59 PM), https://www.forbes.com/sites/mollybohannon/2023/06/22/judge-fines-two-lawyers-for-using-fake-cases-from-chatgpt/?sh=66b4f9ad516c [https://perma.cc/6GFB-WFDH].
4. See Dan Milmo, Claude 2: ChatGPT Rival Launches Chatbot that can Summarise a Novel, Guardian (July 12, 2023, 9:19 AM), https://www.theguardian.com/technology/2023/jul/12/claude-2-anthropic-launches-chatbot-rival-chatgpt [https://perma.cc/5LW3-MFWY].
5. See Ben Edwards, More than Half of In-House Lawyers Back ChatGPT’s Use for Legal Work, Study Shows, Global Leg. Post (June 22, 2023), https://www.globallegalpost.com/news/more-than-half-of-in-house-lawyers-back-chatgpts-use-for-legal-work-study-shows-2042534205 [https://perma.cc/3LQR-9SA2].
6. See Bohannon, supra note 3.
7. See Croft, supra note 2; Edwards, supra note 5.
8. See, e.g., European Commission for the Efficiency of Justice, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environments (December 3-4, 2018), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c [https://perma.cc/P49D-VF7R]; The Generative AI Revolution: Key Legal Considerations for the Nonprofit & Trade Association Industry, Nat’l L. Rev. (June 7, 2023), https://www.natlawreview.com/article/generative-ai-revolution-key-legal-considerations-nonprofit-trade-association [https://perma.cc/FJ4C-XVUG]. For non-derogable procedural norms in criminal and civil proceedings, see Anthony J. Colangelo, Procedural Jus Cogens, 60 Colum. J. Transnat’l L. 377, 435–42 (2022) (discussing criminal procedure); S.I. Strong, General Principles of Procedural Law and Procedural Jus Cogens, 122 Penn St. L. Rev. 347, 399–403 (2018) [hereinafter Strong, Jus Cogens] (discussing civil procedure).
9. See Bohannon, supra note 3.
10. See id.
11. See Clare Duffy & Kenneth Uzquiano, Bot or Not? How to Tell When You’re Reading Something Written by AI, CNN (July 11, 2023), https://edition.cnn.com/interactive/2023/07/business/detect-ai-text-human-writing/ [https://perma.cc/7B93-3UXE].
12. See Bohannon, supra note 3.
13. See id.
14. See John Naughton, A Lawyer Got ChatGPT to Do his Research, but He Isn’t AI’s Biggest Fool, Guardian (June 3, 2023, 11:00 AM), https://www.theguardian.com/commentisfree/2023/jun/03/lawyer-chatgpt-research-avianca-statement-ai-risk-openai-deepmind [https://perma.cc/VD4C-5P6A].
15. See Practice Direction, Court of King’s Bench of Manitoba, Re: Use of Artificial Intelligence in Court Submissions (June 23, 2023), https://www.manitobacourts.mb.ca/site/assets/files/2045/practice_direction_-_use_of_artificial_intelligence_in_court_submissions.pdf [https://perma.cc/GDP7-RZJ5]; Devin Coldewey, No ChatGPT in My Court: Judge Orders All AI-Generated Content Must Be Declared and Checked, TechCrunch (May 30, 2023, 6:32 PM), https://techcrunch.com/2023/05/30/no-chatgpt-in-my-court-judge-orders-all-ai-generated-content-must-be-declared-and-checked/ [https://perma.cc/6EX7-TGYS].
16. See Cristin Schmitz, SCC Considers Possible Practice Direction on Use of AI in Top Court as More Trial Courts Weigh In, Law360 Canada (July 7, 2023, 1:00 PM), https://www.law360.ca/civillitigation/articles/48377/scc-considers-possible-practice-direction-on-use-of-ai-in-top-court-as-more-trial-courts-weigh-in?nl_pk=cc24d041-186e-4a1b-88bb-f7d641e1810f&utm_source=newsletter&utm_medium=email&utm_campaign=civillitigation [https://perma.cc/AZE3-U6SJ].
17. See A Pro-Innovation Approach to AI Regulation, UK Gov’t (Aug. 3, 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper [https://perma.cc/RG6P-D2K3].
18. See Proposal for a Regulation of the European Parliament and Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 [hereinafter EU AI Act] (discussing administration of justice) (last visited Aug. 29, 2023) [https://perma.cc/FS2L-G7XD]; Laura He, China Takes Major Step in Regulating Generative AI Services Like ChatGPT, CNN (July 14, 2023, 4:03 AM), https://edition.cnn.com/2023/07/14/tech/china-ai-regulation-intl-hnk/index.html [https://perma.cc/A67L-3Z3J].
19. For a discussion of similar issues in arbitration, see S.I. Strong, Regulating Generative Artificial Intelligence in Domestic and International Arbitration: A Content-Neutral Blueprint for Action, 34 Am. Rev. Int’l L. (forthcoming 2024).
20. See generally John E. Coons, Consistency, 75 Calif. L. Rev. 59 (1987).
21. See generally id.
22. See Mirit Eyal-Cohen, Unintended Legal Inertia, 55 Ga. L. Rev. 1193, 1270–71 (2020).
23. See Bohannon, supra note 3; Coldewey, supra note 15. “Practice notes” that supplement rules of civil procedure can also be amended relatively quickly and with high degrees of flexibility. See Schmitz, supra note 16.
24. See supra notes 15–16 and accompanying text.
25. See Overview for the Bench, Bar, and Public, U.S. Courts, https://www.uscourts.gov/rules-policies/about-rulemaking-process/how-rulemaking-process-works/overview-bench-bar-and-public (noting the process requires two to three years at a minimum) (last visited Aug. 29, 2023) [https://perma.cc/Q4F9-JMMM].
26. See generally Federal Judicial Center, Homepage, fjc.gov (last visited Aug. 29, 2023) [https://perma.cc/96KX-GR4J].
27. See Why do States Have Different Laws?, Legal Match, https://www.legalmatch.com/law-library/article/why-do-states-have-different-laws.html (last visited Aug. 29, 2023) [https://perma.cc/PFX5-HW5R].
28. European Commission for the Efficiency of Justice, supra note 8.
29. See Priorities & Strategic Plan, Nat’l Ctr. for State Cts., https://www.ncsc.org/about-us/priorities-and-strategic-plan (last visited Aug. 29, 2023) [https://perma.cc/6HBU-MA2Y].
30. See Conf. of Chief Justs., https://ccj.ncsc.org/ (last visited Aug. 29, 2023) [https://perma.cc/SJE5-QMEW].
31. See Bohannon, supra note 3 (noting that the lawyers who used ChatGPT were fined, but also that the underlying case was dismissed as untimely).
32. Judges routinely ask law clerks to write first drafts of decisions and opinions, though that practice has been criticized as an improper delegation of judicial authority. See Nadine J. Wichern, A Court of Clerks, Not of Men, 49 DePaul L. Rev. 621, 662 (1999).
33. See Croft, supra note 2. Courts in India currently use an AI mechanism (SUPACE) to process facts, though it purportedly does not affect decision-making. See Express News Service, CJI Launches Top Court’s AI-Driven Research Portal, Indian Express (Apr. 7, 2021, 2:55 PM), https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821/ [https://perma.cc/KQR6-A9EL]. However, factual determinations are central to legal decision-making. See S.I. Strong, Legal Reasoning Across Commercial Disputes: Comparing Judicial and Arbitral Analyses 85, 295 (2020) (citing two different empirical studies).
34. See Strong, supra note 33, at 221, n.89.
35. See Douglas R. Richmond, Unoriginal Sin: The Problem of Judicial Plagiarism, 45 Ariz. St. L.J. 1077, 1079–80 (2013).
36. See Anthony D’Amato, Self-Regulation of Judicial Misconduct Could Be Mis-Regulation, 89 Mich. L. Rev. 609, 609–10 (1990); S.I. Strong, Judicial Education and Regulatory Capture: Does the Current System of Educating Judges Promote a Well-Functioning Judiciary and Adequately Serve the Public Interest?, 2015 J. Disp. Resol. 1, 5–6 (2015) [hereinafter Strong, Regulatory Capture].
37. See United States Courts, Code of Conduct for United States Judges, https://www.uscourts.gov/judges-judgeships/code-conduct-united-states-judges (last visited Aug. 29, 2023) [https://perma.cc/VE64-VDWE]; Joan Biskupic, John Roberts Can’t Get a Supreme Court Ethics Code. Alito’s Interview Shows Why, CNN (July 31, 2023), https://edition.cnn.com/2023/07/31/politics/supreme-court-ethics-alito-roberts/index.html [https://perma.cc/27GD-J8LX]; see also Ruth Marcus, A Former Judge Explains How to Fix the Supreme Court’s Ethics Problem, Wash. Post (July 17, 2023, 7:30 AM) (interviewing Judge Jeremy Fogel, former director of the FJC), https://www.washingtonpost.com/opinions/2023/07/17/supreme-court-legal-ethics-jeremy-fogel/ [https://perma.cc/EU3Z-GU7N]. Judges in civil law countries may be less likely to use generative AI, since doing so could negatively affect prospects of promotion to higher and better-paid judicial positions. Judges in civil law countries also undergo extensive, specialized training, starting in law school and continuing throughout their careers, which allows for standardized education on generative AI, thereby facilitating consistency in judicial practices. Many thanks to Takashi Takashima, a retired Japanese judge, for this point.
38. See United States Cts., Code of Conduct for Judicial Employees, https://www.uscourts.gov/rules-policies/judiciary-policies/code-conduct/code-conduct-judicial-employees (last visited Aug. 29, 2023) [https://perma.cc/5D84-K8QK].
39. See Federal Judicial Center, Maintaining the Public Trust: Ethics for Federal Judicial Clerks, https://cafc.uscourts.gov/wp-content/uploads/HR/Forms/Maintaining-the-Public-Trust_2019-Revised-Fourth-Edition.pdf (last visited Aug. 29, 2023) [https://perma.cc/EVW6-8S3Y].
40. See Strong, Regulatory Capture, supra note 36, at 3–4.
41. See id. at 14–16.
42. See EU AI Act, supra note 18; He, supra note 18.
43. See, e.g., Class Action Fairness Act, 28 U.S.C. §§1711–15.
44. See U.S. Const., art. I, § 8; amend. V; amend. XIV. A number of international conventions address cross-border litigation, and a similar instrument could be promulgated regarding the use of generative AI in courts. See infra note 49 and accompanying text.
45. See Federalism-Based Limitations on Congressional Power: An Overview, Cong. Rsch. Serv. (Jan. 31, 2023) (discussing the “anti-commandeering” doctrine), https://crsreports.congress.gov/product/pdf/R/R45323 [https://perma.cc/W2LY-JC9L].
46. See Current Acts, Unif. L. Comm’n, https://www.uniformlaws.org/acts/catalog/current (last visited Aug. 29, 2023) [https://perma.cc/2Z98-57XL].
47. See id. (search on civil procedure and courts).
48. See Fred C. Zacharias, The Purpose of Lawyer Discipline, 45 Wm. & Mary L. Rev. 675, 688–89 (2003) (characterizing lawyer discipline as an administrative function rather than a criminal function). However, licensing authorities tend to guard their power zealously and can be averse to increasing the scope or severity of sanctions visited upon their members. See Lubna Shuja, LSB to Review Enforcement Tools Available to Regulators, Law Soc’y (Aug. 4, 2023), https://www.lawsociety.org.uk/topics/regulation/lsb-review-on-regulator-enforcement-tools [https://perma.cc/KB34-SHWV].
49. See Policy & Initiatives, Am. Bar Ass’n, https://www.americanbar.org/groups/professional_responsibility/policy/ (last visited Aug. 30, 2023) [https://perma.cc/JQ59-LKG9].
50. Id.
51. See Bohannon, supra note 3.
52. Subscription-based AI is far superior to free forms of generative AI. See Lexis + AI, LexisNexis, https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page (last visited Aug. 29, 2023) [https://perma.cc/KM8V-U8VV].
53. See, e.g., Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023); Gonzalez v. Google, 143 S. Ct. 1191 (2023) (Mem.).
54. Id.
55. The ALI has undertaken such studies in the past. See Shop ALI Publications, Am. L. Inst., https://www.ali.org/publications/#publication-type-model-codes (last visited Aug. 29, 2023) [https://perma.cc/6PTW-D76J].
56. Some ALI projects take a decade or more to complete. See Project Life Cycle, Am. L. Inst., https://www.ali.org/projects/project-life-cycle/ (last visited Aug. 29, 2023) [https://perma.cc/8TRY-QCH3].
57. See Strong, supra note 33, at 356–57.
58. See Lee Epstein & Gary King, The Rules of Inference, 69 U. Chi. L. Rev. 1, 54–114 (2002) (describing best practices in empirical legal research).
59. See About the Court, Int’l Crim. Crt., https://www.icc-cpi.int/about/the-court (last visited Aug. 30, 2023) [https://perma.cc/6X5L-R88L]; How the Court Works, Int’l Crt. of J., https://www.icj-cij.org/how-the-court-works (last visited Aug. 30, 2023) [https://perma.cc/QV9P-7SFJ]; Principles, Countries, History, Eur. Union, https://european-union.europa.eu/principles-countries-history_en (last visited Aug. 30, 2023) [https://perma.cc/M9UE-6Z4Q].
60. See Conventions and Other Instruments, Hague Conf. on Priv. Int’l L., https://www.hcch.net/en/instruments/conventions (last visited Aug. 30, 2023) [https://perma.cc/UXN7-VDEF].
61. See Studies, Hague Conf. on Priv. Int’l L., https://www.hcch.net/en/publications-and-studies/studies (last visited Aug. 30, 2023) [https://perma.cc/22K9-Y4RD].
62. For example, the Judgments Project took nearly thirty years to complete. See Overview of the Judgments Project, Hague Conf. on Priv. Int’l L., https://www.hcch.net/en/publications-and-studies/details4/?pid=6843&dtid=61 (last visited Aug. 30, 2023) [https://perma.cc/YCK4-53CW].
63. See Civil Procedure, UNIDROIT, https://www.unidroit.org/instruments/civil-procedure/ (last visited Aug. 29, 2023) [https://perma.cc/S3UH-7WLL].
64. See id.
65. See Preparatory Work, UNIDROIT, https://www.unidroit.org/instruments/civil-procedure/ali-unidroit-principles/preparatory-work/ (last visited Aug. 29, 2023) [https://perma.cc/G56Z-THV5]; ELI – UNIDROIT European Rules, UNIDROIT, https://www.unidroit.org/instruments/civil-procedure/eli-unidroit-rules/eli-unidroit-european-rules/ (last visited Aug. 30, 2023) [https://perma.cc/P65L-WQGW].
66. See IBA Guides and Reports, Int’l Bar Ass’n, https://www.ibanet.org/resources (last visited Aug. 30, 2023) [https://perma.cc/6SMY-RDTZ].
67. IBA Task Forces, Int’l Bar Ass’n, https://www.ibanet.org/Task-Forces (last visited Aug. 30, 2023) [https://perma.cc/FHZ2-MN37].
68. See Edwards, supra note 5; Croft, supra note 2.
69. But see Eyal-Cohen, supra note 22, at 1270–71 (cautioning against temporary legislation).
70. Technical solutions might also arise. See, e.g., Milmo, supra note 4 (discussing “constitutional AI,” which is designed to comply with various human rights instruments); Lexis + AI, supra note 52 (discussing generative AI with limited source material).