Higher education is often in a state of flux. Each year, teachers pore over their classroom materials reflecting on what transpired last year, spending countless hours considering the value and merit of the textbooks they used, the presentations they shared with their students, and the assessments they utilized to measure the students’ absorption of their teachings. Administrators,
too, are accustomed to change as teaching policies, strategies and resources persist in a constant state of flux. Yet, a more impactful and sudden change has swept across all schools—elementary to universities alike—that has abruptly disrupted pedagogical models and practices in unprecedented ways. The winds of artificial intelligence (AI), particularly generative AI, are quickly picking up intensity and show few signs of slowing down.
Generative AI refers to AI technology that is capable of creating original content, such as text, images, video and sound, from high-level instructions (referred to as prompts).i In short, AI isn’t merely a tech tool that can help students; it’s capable of taking the student’s place in creating work-product with no supervision and minimal direction. It relies on complex machine-learning algorithms and large language models (LLMs) to provide powerful capabilities for content creation—all with a user-friendly interface. Although there are now many examples of generative AI platforms, one of the most popular, OpenAI’s ChatGPT, launched in November 2022 and quickly became the “fastest-growing consumer application in history,” as it took less than two months to reach 100 million users.ii
Having attended many conferences and panels focused on the AI storm in higher education, I’ve observed a spectrum of excitement, confidence and concern among faculty. A vigorous debate has emerged: If the goal is student learning, are these headwinds or tailwinds toward that destination? While the jury on that question is still out, few would deny the powerful capabilities and potential that AI has already demonstrated to transform the way in which students are taught. And as is true for all significant storms, ignoring them is unlikely to yield positive results. To the contrary, inaction all but guarantees catastrophic danger for both students and teachers.
Moreover, there can be little question that AI will have a significant role in shaping the future of higher education. Thus, the relevant question isn’t whether AI should be part of higher-ed—the question is how. In my view, there will be three key aspects to harness the power of AI winds and bring higher education into the future: (1) attention, (2) adaptation and (3) implementation.
ATTENTION
Every level of the education system has been uniquely impacted by generative AI. Early education teachers face different concerns and opportunities than high school teachers and higher-ed instructors. Similarly, in higher education, each discipline has been rewarded with its own advantages and plagued with its own challenges.
For instance, those teaching college writing courses have been asked to grow the knowledge and abilities of students who—with generative AI—possess the capability of generating a 10-page paper on any topic faster than it previously took a student to ready their typewriter or boot their computer.
This newfound capability surely provides students with increased potential for efficiency, but it also presents instructors with significant pedagogical challenges. When awarding a grade, how is a faculty member to know if the merit should go to the student or a well-trained algorithm?iii Similarly, university history professors have suddenly found themselves with a cadre of students who can submit an assignment summarizing or “reflecting” on a period of history with little assurance the knowledge put to paper was actually gained by working through the assigned material. Did the student create their work-product by laboring through the course materials or by the instant gratification provided through ChatGPT?
Those teaching language courses (ChatGPT is fluent in many languages),iv computer science courses (ChatGPT not only writes but also debugs code)v and math courses (ChatGPT gives you the right answer and shows its work)vi have also not been spared.
Law Schools & Generative AI
In my specific discipline, law schools are now directly facing the AI winds in ways very few could have predicted just a few years ago. In addition to being able to write faster than any human can possibly type, understand complex Latin phrases and instantly summarize long texts, generative AI programs have proven themselves capable of understanding logic, precedent and legal analysis.vii Thus, absent severing their students’ internet connection, law school professors face many of the same challenges as peers across higher education.
Was a student’s legal memo generated after long nights of reading dense legal material and writing and rewriting (then again revising) drafts? Or was it a product of a few well-executed prompts? Was the question just answered by a student in class the result of a well-prepared student or the craftiness of a quick-witted student who received a helpful generative whisper without ever talking to a single (human) classmate?
AI impacts not just the legal writing courses critical to a law student’s transformation into a lawyer, but also the doctrinal courses without which students would have little value to offer a client—or even gain admission into a bar to share those skills with a client. Knowledge of substantive and procedural law—and the ability to effectively communicate that knowledge—are indispensable for any lawyer (and their clients) to find success. As of now, none of this can be “downloaded” to a student in the way fantasized in “The Matrix.” Acquiring those skills and knowledge can be achieved only through immense effort and iterative practice. The chief concern, however, is that generative AI allows law students to produce the work-product without much of the effort (or learning) that happens during the process.
Of course, concerns of plagiarism are hardly new. For many years, students have been able to submit papers authored by someone else, look up answers to common or reused questions in online resources, translate large quantities of text with online tools or copy the computer code/legal analysis of students before them. But aside from the ethical barriers that stood between a student and ill-gained work-product, there were practical challenges that made those practices the exception rather than the rule. No centralized source or tool could accomplish all of this as successfully, and with such minimal investment, as generative AI does. With a mere stretch of a few fingertips and some sinister motivation, every student can be tempted into ChatGPT’s intuitive interface for nearly instantaneous work-product that can often outperform an average student.
Refocusing on Process
Regardless of intentions or motivations, students who rely on generative AI can suffer a significant negative impact on their learning. Course assessments are often based strictly on the final work-product submitted by the student, even though the learning value for almost all disciplines lies not in the work-product itself but in the process of generating it. In many fields, generative AI has eliminated the requirement for that process to create the final work-product; the process can now be replaced by a student’s short prompt and AI’s near instantaneous creation. Yet, ChatGPT’s perfectly written English paper, accurate accounting of an historical period or well-organized legal memorandum offer little value for a student looking to learn how to create high-quality works. A student can produce great work-product with generative AI, but ask them to explain the characteristics contributing to its quality or to explain why it embodies what they were learning, and you may be faced with a blank stare. So, students taking advantage of that process shortcut aren’t just cheating their professor; they are cheating themselves out of an opportunity to learn.
Nor should it be comforting to students that clients usually care most about the quality of the final work-product and that after graduation the student will be able to rely more freely on generative AIviii—because so will the student’s future clients. For example, in some law firms, entry-level lawyers are now in competition with AI programs, as some of the tasks traditionally performed by junior lawyers have been outsourced to automated technologies—often at the request of clients looking to reduce costs from high billing rates. Vendors of these AI products are often quick to promote them without necessarily considering the full impact of implementation. Thus, the value a student can offer his or her future client will not be as an e-deliverer of AI-generated work-product—it will be in providing something more than what AI alone can generate. Likely that value will derive in some form from the student’s developed expertise in the underlying subject matter, as well as an appreciation and understanding of how AI can positively contribute to the end-result, whether through efficiency or quality.
With a deep focus on grades, however, the pedagogical value of the process is often lost on too many students during their learning. A focus for many students is achieving a good grade—not necessarily learning the material. As a result, a dangerous dichotomy begins to emerge: Faculty face growing challenges in assessing progress in student learning, while students (and employers) gain a false confidence in performance based on work-product that no longer serves the needs or objectives of either participant.
Setting Expectations
Even more problematic than mere access to this disruptive technology is the lack of agreement on, and notice of, what faculty view as welcome, suspect or prohibited use of AI in their courses. In the absence of a universal AI ethical code—and in many cases minimal guidance in a course syllabus—even those students looking to do the “right thing” have found it difficult to understand exactly where that threshold lies. Given the high stakes accompanying grades for many students’ future career prospects, failing to utilize available tools that classmates (that is, fellow competitors for jobs) rely on to gain an advantage might prove too costly. To be sure, existing academic policies provide extensive remedies against students who bend the rules too far; after all, plagiarism isn’t defined or limited by whether the author from whom the student plagiarized is human. Yet, despite the flexibility these policies provide, resorting to their after-the-fact penal nature (for the few who get caught) offers little satisfaction to students, teachers and administrators. Moreover, students who rely on generative AI in good faith to support their learning in the context of unclear boundaries should hardly be faulted for their ingenuity and boldness.
In the end, the complicated questions, concerns and opportunities that have emerged through the AI storm are not for students to solve. Students have never been the ones expected to, or responsible for, leading the development of pedagogical models. That responsibility falls squarely on the faculty and administrators guiding them through their educational journey. Yet not all faculty have even taken notice of the challenges and opportunities directly before them. Those who have paid attention quickly discovered that the questions and issues evolved and gave rise to even more questions. None of this is reason to look away. And the shared goal to face the AI storm requires the same first step for every educator and administrator, in every discipline and every level of education: Give AI—and its impact on education—the attention it deserves.
ADAPTATION
With challenges come opportunities. For those who have been paying attention, the risks and challenges above paint a bleak picture. Indeed, despair is a common initial reaction. The sudden popularity of ChatGPT (or more broadly, generative AI) provided a significant shock to the system, with reactions such as: Students will never read any assigned material again; students will never submit anything originally drafted by them again; and, in short, students will never learn anything again. It wasn’t just that the sky was falling; seemingly, the entire education world was crumbling.
Quickly, though, those educators (both faculty and administrators) who embraced the challenge at the outset and took on the work of better understanding what had changed and how we must adapt with it began to understand that the education world wasn’t crumbling. It was merely changing—perhaps evolving. The change was sudden, significant and scary, so few were familiar with it, much less prepared for it.
Yet, the change was likely necessary. It is clear by virtually any available projection that AI will impact the world in every industry and workforce sector. Whether we like it or not, whether we see it as good or bad, that is the world our students will walk into after graduation. Although we certainly do not know—nor can foresee—all the different ways AI will impact and change the world, little doubt exists that AI will be an incredibly important part of the future.
Thus, if educators must prepare students to be successful in that future, they can afford to ignore AI no more than they can afford to avoid electricity or the internet. Some future and current AI applications will surely have a negative effect on education, just as on society. It will be up to the participants in each discipline to establish the policies by which AI use will be governed and to guide the students through the application of those policies. That will take significant time and resources. Therefore, waiting until that work is fully completed is bound to be a poor decision with even more negative consequences.
No Time to Wait
While we wait for the “perfect” AI playbook, faculty must recognize and embrace the circumstances and challenges that have been brought upon the world of education and more purposefully build their pedagogical goals and strategies in light of the new realities, taking into account AI’s capabilities and our students’ increasing inclination to use them. The opportunity for student learning still exists in every classroom.
Of course, when necessary, faculty can always unplug the metaphorical cable and cut off a student’s access to generative AI technologies. Many professors do so when it comes time for final course assessments. While this measure assures genuine authorship and provides more confident measures for student assessment, exclusive reliance on such “high stakes” assessment presents significant concerns that have been well-documented,ix particularly for students who do not thrive in such situations. Moreover, cutting off students entirely from AI is neither a prudent nor realistic path, given the world that awaits them.
In other words, education will need to rethink its assessments models: How can educators determine whether students are learning?
Although the impact and adaptation strategies will look different across disciplines, an emerging common theme is greater focus on the process of learning rather than the end work-product generated by students. One adaptation might be more formative assessments and fewer summative assessments. The renewed focus on process will be important for many disciplines because AI’s work-product essentially creates a new baseline for quality. Every student is now capable of producing “adequate” work-product for many assignments—even if they do not understand the material—simply by having generative AI create it. This not only creates challenges for assessing student learning but also shrinks the grading spectrum. If generative AI can instantly produce C+ work-product, the range of grades might not be A to F; it might be A to C. As such, in some assignments, the goal of educators might not be to help students reach the baseline (which can be accomplished by generative AI in seconds); it might be to teach them how to improve the baseline. For example, in teaching students to write, the focus might be on a student’s progress through iterative drafts and exercises, or on editing an AI-generated draft, rather than on a final paper.x
There are many resources emerging to help educators looking to adapt their instruction with the advantages of AI.xi And even though the context for assessment in the age of AI might be different, the path faculty must follow is generally the same: Identify the course objectives, isolate the skills students are expected to gain and design a curriculum/assessment that works toward realizing/measuring those goals.
However, thinking about how to change things is not enough. Just as contemplating a plan of action for an impending storm won’t accomplish much unless the strategy is implemented, faculty and administrators—even those who pay attention and contemplate how to adapt—will need to take concrete actions on their educational approaches.
IMPLEMENTATION
Given how suddenly the AI winds began to blow across academia, many institutions and faculty were caught flat-footed. In fairness, so was the rest of the world. Thus, those who fell behind (or have yet to start) can be forgiven. Rather than regret missed early opportunities, educators’ focus now should be on moving forward. After paying attention and adapting learning methodologies to the new realities, faculty and administrators will need to implement those adaptations into still-evolving circumstances. To find an example of the successful transition from adaptation to implementation, one need look no further than our university system.
As one of the windiest states in the Union, North Dakota is used to dealing with powerful winds.xii So it shouldn’t be a surprise that our higher-ed leaders have been at the forefront of implementing preparations for the AI storm since its early forecast. For example, in early 2023, the North Dakota University System (NDUS) created an AI Forum to explore AI’s intersection with higher education.xiii Composed of university presidents, state administrators and higher-ed faculty, the forum meets regularly to share ideas and advice relating to AI and education. Similarly, NDUS leadership and the State Board of Higher Education convened various study groups to formulate Envision 2035,xiv the state’s strategic plan for the future of higher education with a focus on “major expansion of activity related to AI.”xv
At the University of North Dakota (UND), there has been a conscious focus and effort since the start to learn about, grow with and put into action meaningful AI initiatives. In 2023, numerous faculty panels were held to discuss AI’s potential, risks and implementation. UND then published “Initial Guidelines for Using Generative AI Tools” before many schools formalized any stance on how AI should be used.xvi President Andrew Armacost even hosted Greg Brockman, co-founder of OpenAI (also, a former UND student and North Dakota native), for a conversation on AI’s future,xvii followed by a panel discussion on the “Promise and Peril of AI in Higher Education.”xviii
North Dakota State University (NDSU) and the other NDUS campuses have also been active in facing the AI winds and implementing strategies to harness their power, including by hosting AI discussions.xix
For our elementary and secondary schools, the North Dakota Department of Public Instruction published the “North Dakota K-12 AI Guidance Framework.”xx
Across the state, NDUS faculty have been encouraged to participate in workshops designed to help move their courses and teaching into the AI age. The results have produced not only helpful interdisciplinary discourse but also tangible results. At UND, faculty members created and published more than 40 AI course exercises across many disciplines at the summer 2023 AI faculty workshop.xxi The summer 2024 AI workshop cohort featured an even larger number of faculty members and has published its work in the same repository. These efforts will surely be beneficial to colleagues at other campuses and, most of all, to our students.
Law Schools
Law schools nationwide have started to take notice, and a growing number of law faculty have begun to integrate AI into their coursework and teaching. To date, 258 professors have joined the AI Law Prof group, which is “organized to allow [the group] to collaborate and share [faculty] insights, best practices and resources.”xxii Another law professor group, the Legal Writing and Generative AI Convo Group, which formed in May 2023 and has grown to more than 400 members, meets monthly to discuss all aspects of generative AI and how it impacts the teaching of legal writing and will be used in the legal field.xxiii Some professors have even argued that “all law professors have an inescapable AI mandate … [to] achieve competence in, and understand the challenges of, [generative AI].”xxiv
At most law faculty conferences I attend, AI dominates the presentation topics, both in its impact on teaching and in its impact on substantive law. The same holds true for research, scholarship and publication, which continue to grow and to reach a wider audience from different perspectives.
Adaptation and implementation are also evident in curricular changes. Until several years ago, AI was a meaningful part of the curriculum at few law schools. In the last year, however, more law school faculty have ventured into this brave new world to integrate AI, including generative AI, into their classes, not just in standalone “AI and the Law” courses but also in skills and doctrinal courses. Often, the professors who choose to expand and integrate AI into their teaching discover that, while they are ahead of many colleagues, they are behind many students who didn’t wait for an invitation, permission or direction. For those who have yet to become involved, the challenge will only increase.
In a recent informal study, the American Bar Association Task Force on Law and Artificial Intelligence reported that “AI is already having a significant impact on legal education and is likely to result in additional changes in the years ahead.”xxv Not surprisingly, the survey found that “law schools are increasingly incorporating AI into their curricula.”xxvi Specifically, more than half of the responding law schools (albeit a small percentage of law schools overall) indicated they offer courses primarily focused on teaching students about AIxxvii and “nearly all (93 percent) of the responding law schools are considering changes to their curriculum in light of the profession’s increasing use of AI.”xxviii A majority (62 percent) of responding schools offer students an opportunity to learn about AI in the first-year curriculum.xxix Given the low number of law schools that participated in the survey, some have questioned the underlying numbers.xxx However, the key takeaways from the study are on firm footing and supported elsewhere: Law schools are aggressively exploring AI’s impact in legal education, and the number of law schools with AI courses is growing quickly.xxxi
Overall, while the study found that “legal education is evolving to meet the demands of a profession” into “AI literacy,” it also cautioned that “law schools are at different stages of readiness and enthusiasm for adopting AI-related changes.”xxxii For those schools striving to harness AI winds and transform them into tailwinds for their teaching methodologies, the focus has been on “new concentrations and courses,” “integrating AI tools and concepts throughout the curriculum,” “encouraging the use of AI in experiential classes,” and “reevaluating their methods of assessment to adapt to AI’s capabilities.”xxxiii
My impression of the state of AI implementation from talking to colleagues at law schools around the country is that many schools have taken the initiative to create committees to evaluate the impact of AI on their programs, but that it is a challenge to reach a wider audience of professors at each school.
The UND School of Law, like its main campus, has eagerly embraced AI’s transformative power. In 2023, our law students began working with AI in their first-year studies to gain exposure to how AI is changing the profession. Numerous professors have implemented AI-based assessments and exercises into first-year and upper-level courses to introduce students to the capabilities and risks of AI, as well as its impact on the practice of law and different substantive legal areas.xxxiv
Additionally, the law school recently approved a newly designed “AI and the Law” course, which I will begin teaching in the spring semester. The course will focus on teaching students about the many complex issues and unresolved questions at the intersection of AI and the law, including tort liability, free speech, privacy, intellectual property, ethics and bias. During the course, students will gain the skills necessary to identify key legal issues and concerns regarding AI use in various factual situations and legal substantive areas. Through this “survey” AI law course, students will be able to build on their foundational knowledge of AI in the specific substantive areas that interest them (or their clients) most. In short, UND’s law graduates will have the opportunity to enter the profession with “AI literacy.”
Beyond the classroom, UND Law’s commitment of resources and support to foster a community of AI expertise in a diversity of legal areas has led to substantial successes. In the past two years, five of UND’s 18 full-time law professors have published numerous articlesxxxv and made dozens of presentations locally, nationally and internationallyxxxvi on the impacts of AI on the law. This broad AI expertise further benefits students directly. With 28 percent of our faculty having AI expertise, UND Law is ahead of many peer institutions still trying to build an AI footprint. The law school is now positioned to implement AI throughout the entirety of its curriculum. Aware of the future challenges the AI winds will bring and the major changes accompanying the NextGen bar exam, UND Law is in the process of intensely reviewing its curriculum to ensure it can best equip students to meet the emerging challenges for the practice of law in the AI age.
SAILING WITH THE WIND
Regardless of whether university faculty embrace or resist the powerful AI winds sweeping across higher education, it’s clear that a growing number of students have already joined the AI experiment. If educators want to continue utilizing pedagogical strategies with intentionality, impact and results, they must face (and embrace) the AI winds that continue to intensify by:
(1) paying attention to the changes trailing AI, (2) consciously reflecting on adapting to the AI revolution, and (3) implementing pedagogical approaches that take into account AI’s transformative power. Without adhering to these critical steps, the growing divide between student expectations and faculty expertise (or lack thereof) will only further strain the opportunity for meaningful student learning in every discipline. Many institutions, including UND and others statewide, as well as a growing number of law schools, have responded with determination not just to weather the AI storm but to turn the AI winds into a tailwind for pedagogical objectives. Yet, with so much uncertainty in how exactly AI will transform higher education (and society), plenty of work remains to be done. ◉
I am grateful to Doni Bloomfield, Aman Gebru, Michael Goodyear, Timothy Hsieh, Ari Lipsitz, Jacob Noti-Victor, Amy Semet, Xiyin Tang, and Carolyn Williams for insightful and helpful feedback on this article. The views shared in the article are my own and are not made on behalf of my institution.
REFERENCES
i Cole Stryker and Mark Scapicchio, “What is generative AI” (Mar. 22, 2024), https://www.ibm.com/topics/generative-ai.
ii Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note,” Reuters (Feb. 2, 2023, 9:33 AM), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.
iii See Amanda Hoover, “Students Are Likely Writing Millions of Papers With AI” (Apr. 9, 2024, 9:00 AM), https://www.wired.com/story/student-papers-generative-ai-turnitin/.
iv See Lauren Coffey, “Lost in Translation? AI Adds Hope and Concern to Language Learning” (June 6, 2024), https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/06/06/ai-adds-hope-and-concern-foreign-language.
v See Dr. Kip Glazer and Dr. Sonal Patel, “The Evolving Landscape of Computer Science Education in the Age of AI — Recommendations for Computer Science Educators” (Feb. 4, 2024), https://csteachers.org/the-evolving-landscape-of-computer-science-education-in-the-age-of-ai-recommendations-for-computer-science-educators/.
vi See Conrad Wolfram, The Math(s) Fix: An Education Blueprint for the AI Age (2020).
vii Debra Cassens Weiss, “Latest version of ChatGPT aces bar exam with score nearing 90th percentile,” ABA Journal, available at https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile; Jonathan H. Choi, Amy Monahan and Daniel Schwarcz, “Lawyering in the Age of Artificial Intelligence,” 109 Minnesota Law Review __ (2024), available at https://ssrn.com/abstract=4626276.
viii Not all professions allow for unlimited use of generative AI. In the law, many jurisdictions have adopted restrictions or banned reliance on AI-generated work-product. See, e.g., Victoria Fang, “Catalog of Court-Mandated AI Disclosures,” available at https://docs.google.com/spreadsheets/d/16evnwkR-gbt6zUf9QklyJLmcfuOnqk5tc7T7krnlYBw (last visited July 19, 2024).
ix Olympia Duhart, “‘It’s Not for A Grade:’ The Rewards and Risks of Low-Risk Assessment in the High-Stakes Law School Classroom,” 7 Elon L. Rev. 491, 504 (2015); L. Danielle Tully, “What Law Schools Should Leave Behind,” 2022 Utah L. Rev. 837, 857 (2022).
x Carolyn V. Williams, “Bracing for Impact: Revising Legal Writing Assessments Ahead of the Collision of Generative AI and the NextGen Bar Exam,” 28 Legal Writing 1, Part IV (2024) (describing various ways law professors can assess students’ legal research and legal communication skills in the wake of generative AI), available at https://ssrn.com/abstract=4509603.
xi See, e.g., José Antonio Bowen and C. Edward Watson, Teaching with AI: A Practical Guide to a New Era of Human Learning (2024); Joan Monahan Watson, José Antonio Bowen and C. Edward Watson, Learning with AI: The K-12 Teacher’s Guide to a New Era of Human Learning (2024); see also https://personifyai.app/ (a tool to “Create an AI Teaching Assistant Tailored to Your Course”).
xii Jason Samenow, “Blowing hard: The windiest time of year and other fun facts about wind,” The Washington Post (March 31, 2016), available at https://www.washingtonpost.com/news/capital-weather-gang/wp/2014/03/26/what-are-the-windiest-states-and-cities-what-is-d-c-s-windiest-month/ (ranking North Dakota as the fourth “windiest” state).
xiii Jay Dahl, “NDUS holds forum on emerging AI technologies,” Inforum, available at https://www.inforum.com/news/north-dakota/ndus-holds-forum-on-emerging-ai-technologies (last visited July 19, 2024).
xiv Envision 2035, North Dakota University System, https://ndus.edu/envision-2035/.
xv Joshua Wynne, “From the Dean: Envisioning 2035 in higher education,” For Your Health, available at https://blogs.und.edu/for-your-health/2024/05/30/from-the-dean-envisioning-2035-in-higher-education/; see also https://ndus.edu/envision-2035/.
xvi Eric Link, Joshua Wynne, John Mihelich and Madhavi Marasinghe, "Initial Guidelines for Using Generative AI Tools at the University of North Dakota," UND University Letter, available at https://blogs.und.edu/uletter/2023/08/initial-guidelines-for-using-generative-ai-tools-at-the-university-of-north-dakota/ (last visited July 19, 2024).
xvii Tom Dennis, "VIDEO: A conversation with Greg Brockman, co-creator of ChatGPT," UND Today, available at https://blogs.und.edu/und-today/2023/09/video-a-conversation-with-greg-brockman-co-creator-of-chatgpt/ (last visited July 19, 2024).
xviii Promise & Peril of AI in Higher Education, University of North Dakota, https://www.youtube.com/live/nse_gRx60uE.
xix See, e.g., "AI Anywhere and Everywhere," an all-day AI event hosted by the Challey Institute at NDSU, https://www.ndsu.edu/news/ai-event-scheduled.
xx Mary Steurer, "North Dakota department publishes AI guidance for public schools," North Dakota Monitor (July 9, 2024), available at https://northdakotamonitor.com/briefs/north-dakota-department-publishes-ai-guidance-for-public-schools/ (last visited July 19, 2024); see also "North Dakota K-12 AI Guidance Framework," North Dakota Department of Public Instruction, available at https://www.nd.gov/dpi/policyguidelines/north-dakota-k-12-ai-guidance-framework (last visited July 19, 2024).
xxi "UND AI Assignment Library," available at https://commons.und.edu/ai-assignment-library/.
xxii "AI & Law-Related Course Professor List," https://www.ailawprof.com/ (last visited July 20, 2024).
xxiii Anyone interested in joining or learning about the group can contact its founders: Professor Carolyn Williams (carolyn.williams.2@und.edu) or Professor Kirstin Davis (kkdavis@law.stetson.edu).
xxiv Rachelle Holmes Perkins, "AI Now," 97 Temple L. Rev. __, at *3 (forthcoming), available at https://ssrn.com/abstract=4840481.
xxv "AI and Legal Education Survey Results 2024," American Bar Association, available at https://www.americanbar.org/content/dam/aba/administrative/office_president/task-force-on-law-and-artificial-intelligence/2024-ai-legal-ed-survey.pdf (last visited July 19, 2024).
xxvi Id. at 1.
xxvii Id. at 4-5.
xxviii "AI and Legal Education Survey Results 2024," supra note xxv, at 12-13.
xxix Id. at 11.
xxx Bob Ambrogi, "Recent Reports of Law Schools' AI Adoption Have Been Greatly Exaggerated," available at https://www.lawnext.com/2024/07/recent-reports-of-law-schools-ai-adoption-have-been-greatly-exaggerated.html (last visited Aug. 24, 2024).
xxxi Bob Ambrogi, "Remember that ABA Survey of Law Schools with AI Classes? This May Be a More Accurate List," available at https://www.lawnext.com/2024/08/remember-that-aba-survey-of-law-schools-with-ai-classes-this-may-be-a-more-accurate-list.html (last visited Aug. 24, 2024).
xxxii Id. at 1-2, 14.
xxxiii Id. at 12.
xxxiv See, e.g., Nikola L. Datzov, "Using ChatGPT to Enhance Law School Assessments (2023)," UND AI Assignment Library, available at https://commons.und.edu/ai-assignment-library/39/ (last visited July 19, 2024); Carolyn Williams, "Checking ChatGPT for Accuracy (2023)," UND AI Assignment Library, available at https://commons.und.edu/ai-assignment-library/36/ (last visited July 19, 2024).
xxxv See, e.g., Nikola L. Datzov, "Artificial Intelligence is Transforming Our World—Are We Ready?," Dakota Digital Review (September 2022), available at https://ssrn.com/abstract=4378970; Nikola L. Datzov, "The Role of Patent (In)eligibility in Promoting Artificial Intelligence Innovation," 92 UMKC L. Rev. 1 (2023), available at https://ssrn.com/abstract=4380405; Nikola L. Datzov, "Toward Automated Justice" (work-in-progress); Denitsa Mavrova Heinrich and Jennifer Cook, "AI-Ready Attorneys: Ethical Obligations and Privacy Considerations in the Age of Artificial Intelligence," 72 U. Kan. L. Rev.
313 (2024), available at https://ssrn.com/abstract=4876134; Denitsa Mavrova Heinrich and Erika Pont, "Who You Gonna Call: The Role of Expert Witnesses in Authenticating AI-Generated Evidence" (work-in-progress), available at https://ssrn.com/abstract=487621; Carolyn V. Williams, "Bracing for Impact: Revising Legal Writing Assessments Ahead of the Collision of Generative AI and the NextGen Bar Exam," 28 Legal Writing 1 (2024), available at https://ssrn.com/abstract=4509603.
xxxvi See, e.g., Nikola L. Datzov, "Patents, Ethics and AI," Panel Chair at the 42nd ATRIP Congress, Rome, Italy, July 2024; Denitsa Mavrova Heinrich and Jennifer Cook, "Global Lawyers: A Comparative Approach to the Ethical Use of AI in the EU and the U.S.," Presenter at the 16th Global Legal Skills Conference, Bari, Italy; Blake A. Klinkner, "Generative AI: The Future is Here. Ethically Navigating This Brave New World," Presenter at the 2024 South Dakota State Bar Convention, Pierre, South Dakota, June 2024; Carolyn V. Williams, "Generative Artificial Intelligence: The Possibilities for Imagining a New Paradigm for Legal Writing," Panelist at the Legal Writing Institute 2024 Biennial Conference, Indianapolis, IN, July 2024; see also Joseph Vacek (Courtesy Appointment, School of Law), "How AI works," Presenter at the Aviation Lawyers Association 2023 Conference, Washington, D.C.
Nikola Datzov
Nikola Datzov is an Assistant Professor at the University of North Dakota School of Law, where he teaches courses on artificial intelligence, intellectual property, torts and remedies. He has received numerous awards for his innovative teaching methodologies, including his selection by the Association of American Law Schools as Teacher of the Year in 2022. His research and scholarship focus on patent law, artificial intelligence, innovation and the intersection of different areas of intellectual property law. Prior to joining academia, Prof. Datzov was a partner at a large law firm in the Midwest and worked as an attorney in the federal courts for three years, serving as a law clerk for judges at the U.S. Court of Appeals for the Eighth Circuit and the U.S. District Court for the District of Minnesota. Before going to law school, he worked as a computer programmer and technology consultant.