AI is all the rage. In the past two years, ChatGPT and other generative AI platforms have made it so. But AI was already here, and here to stay. In 2017, Stanford computer scientist Andrew Ng referred to AI as "the new electricity."[i] According to Daniela Rus, the first woman to lead MIT's Computer Science and Artificial Intelligence Laboratory, "dig into every industry, and you'll find AI changing the nature of work."[ii] Stephen Hawking said of AI in 2016, "success in creating AI could be the biggest event in the history of our civilization."[iii] At the same time, he also said "the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which."[iv]
What is AI, and what should informed members of the public know about it? More importantly, how should we regulate AI now, so that we might influence, indeed determine, which fork in Hawking's road we take?
This article starts with a policymaker’s definition and understanding of AI. It then offers five takeaways that every public citizen—defined here as an informed participant in the regulatory process—should know about AI. First, AI is here to stay. Second, AI regulatory choices will, or should, center on three issues: (1) when, where and how to exercise human control; (2) AI bias; and (3) data ownership, use and costs. Third, AI will present many forks in the road and at an increasing pace. Fourth, AI is the future of power. Finally, AI will shape and potentially transform democracy. The article concludes by urging public citizens, and in particular legislators, to make purposeful rather than default choices about AI. Those choices will determine whether the age of AI is one of promise or peril, or the degree to which it is a combination of both.
What is AI?
Let's start with a few definitions. AI has many definitions, and the one I prefer comes from the 2019 interim report of the National Security Commission on Artificial Intelligence: "AI is not a single piece of hardware or software, but rather a constellation of technologies that give computer systems the ability to solve problems and to perform tasks that would otherwise require human intelligence."[v] This definition captures at least two truths. First, AI is not a single thing, or even a "thing" at all; it is a collection of technologies and capabilities involving math, data and hardware, including computing systems, chips and sensors. This means there are no easy or singular solutions to regulating AI. Second, the definition distinguishes between human intelligence and machine capability. Generative AI may act as if it is sentient and appear to express human feelings, but it is not human, nor is it expressing human emotion. It is deriving meaning and making predictions from data created by humans. Thus, AI outputs often appear imbued with human characteristics, including emotion.
Two other definitions are useful. Generative AI refers to computational models that use machine learning to generate or create new images, text, voices, videos and other content. There are different methodologies for teaching computing machines to learn and perform tasks. One method, deep learning, relies on a series of weighted parameters organized in layers within an AI computational model, or neural network. The network breaks input data down into constituent parts to derive additional meaning from each of those parts, and does so in ways that humans cannot.
For example, a facial recognition algorithm might disaggregate an image into multiple parts and run those parts through an equation composed of weighted layers. Each layer might examine an aspect of the image, such as an ear lobe or a portion of a nose, internally comparing one collection of pixels against a database of images containing billions of pixels. If the input image, or a component of the image, satisfies a certain weighted comparative value, it is passed on to the next layer of the network for further assessment. If the facial recognition algorithm is designed to match unknown pictures against a known database, the model will likely produce a series of possible matches rather than a single likely match. The algorithm is predicting that one of the output pictures may match the input picture, much as the Google search engine predicts that one of the links it provides will respond to a search query.
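To make the layered-comparison idea concrete, here is a minimal Python sketch. It is an illustration only, not a real facial recognition system: the layer sizes, random weights and the small "gallery" of known images are invented assumptions. The point is the shape of the process: raw pixels are reduced through weighted layers to a compact feature vector, which is then scored against a database to produce a ranked list of possible matches rather than a single answer.

```python
# Illustrative sketch only: invented layer sizes, random weights, synthetic images.
import numpy as np

rng = np.random.default_rng(0)

def embed(pixels, weights):
    """Pass pixel data through successive weighted layers to produce a compact
    feature vector (the 'constituent parts' the network compares)."""
    x = pixels
    for w in weights:
        x = np.maximum(w @ x, 0.0)          # weighted layer plus a simple nonlinearity
    return x / (np.linalg.norm(x) + 1e-9)   # normalize so scores are comparable

# Hypothetical three-layer network reducing 1,024 pixel values to 32 features.
weights = [rng.normal(size=(256, 1024)),
           rng.normal(size=(64, 256)),
           rng.normal(size=(32, 64))]

# Hypothetical gallery of known faces and one unknown input image.
gallery = {f"person_{i}": rng.random(1024) for i in range(5)}
unknown = rng.random(1024)

# Score the unknown image against every gallery entry and rank the candidates.
unknown_vec = embed(unknown, weights)
scores = {name: float(embed(pixels, weights) @ unknown_vec)
          for name, pixels in gallery.items()}
top_matches = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top_matches)   # a ranked list of possible matches, not a single answer
```

The output is a ranked set of candidates with comparative scores, which mirrors the point in the text: the system predicts possible matches; it does not "know" the answer.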
One of the current challenges with neural networks and deep learning is that the consumer of the output cannot be sure on what basis the output was, or outputs were, selected within the cascade of weighted layers inside "the black box" of the neural network. Was it the similarity in ear lobes? Or did the network identify and weight similarities in photographic backgrounds, as has happened in at least one notable case? Unsurprisingly, research is being conducted into making this type of AI output "explainable," meaning understandable to the user, who can then decide whether to rely on the outcome.
The concept of AI has been around at least since the 1950s. However, in the past two decades, several factors have accelerated its development exponentially. Experts might debate the relative weight to give each factor, but certainly one of them is the quantity of data generated through and available on the internet. Data is needed to train, test and validate AI. Today's AI is generally thought to have certain strengths and weaknesses. Current models can aggregate data, structure data and derive meaning from data in ways humans cannot. AI can detect patterns and spot anomalies in data, including computer code, and, through pattern recognition and generative capacity, engage in natural language processing and content creation.
Current models, however, cannot exercise subjective judgment or act with or respond to emotional intelligence, although they might mimic emotional intelligence based on data scraped from the internet. We also know that AI is not always good at situational awareness, which involves adapting to circumstances for which it has not been trained. Generative AI also hallucinates, the term used to describe its propensity to make things up, often in a persuasive manner.
AI is moving fast. In 2021, the buzz was around neural networks; today public attention focuses on generative AI. Tomorrow, which may literally mean 24 hours from now, our attention may move to Artificial General Intelligence (AGI), the notional threshold at which computing machines driven by AI can perform multiple tasks, move from task to task, and do so autonomously. AGI is also thought to represent a threshold beyond which machines are "smarter" than humans, meaning machines can perform tasks and solve problems humans cannot, and do so without human direction. Some generative AI models and tools are starting to feel a bit like AGI in their capacity to respond to human queries, create their own code to perform tasks, and do so autonomously. Most commentators expect AGI to emerge within a matter of years, if not sooner.
AI philosophers such as Nick Bostrom, along with some computer scientists, contemplate something beyond AGI, called superintelligence, in which AI systems are capable not only of performing multiple tasks but of autonomously drawing on all the knowledge derivable from the internet, including knowledge of how to connect to the energy grid and divert resources to the AI's own functions, independent of human programming or prompting.
To people who have lost their jobs to AI-enabled robots, or who worry about AI-enabled drone swarms or the effects of an AI "arms race," debates about the advent of superintelligence may seem remote or esoteric. Such debates may seem equally inapt to the scientist using AI to find new medicines and potential cures for diseases, to better model weather patterns and predict storms and hurricanes, or to create fusion-generated energy.
In short, the reality is that AI comes with promise and peril. It is the role of public citizens, including policymakers and legislators, to help bend history toward promise and away from peril.
Takeaways for Public Citizens and Legislators
With this background in hand, what is it that public citizens should know about AI when it comes to law and policy?[vi] Here are five takeaways:
AI is Here & Here to Stay
This seems obvious to those in the field or who are tracking the growth of the AI market. In 2024, the Bureau of Labor Statistics projected that three of the 20 fastest-growing occupations are in the fields of data analytics and computer science.[vii] AI's centrality is also evident to those who follow national security. AI has played a pivotal role in the war in Ukraine, with both sides using AI applications to conduct drone and electronic warfare, to detect and evade electronic defenses, and to penetrate air defenses. AI use cases for national security include logistics, personnel, intelligence, targeting, planning and medicine. Nonetheless, there remain skeptics who believe that AI is overhyped, overrated and its impact exaggerated.[viii]
In my view, the question is how AI will change society, not whether it will do so. AI is already embedded in everyday life, from shopping online, to receiving AI-curated content in news and video feeds, to driving with the aid of navigational applications powered by AI tools. Students at every level now use generative AI to draft their papers, with or without the permission of their instructors. Judges and lawyers have long used AI-enabled tools to conduct research. Some judges are now starting to use generative AI tools to draft opinions. Query whether judging and the exercise of judgment are human skills and traits or something machines can learn. This is an interesting area to follow as we seek to distinguish between what are inherently human skills and tasks, and what might be better performed by machines, or by machines with human oversight and control.
As former Google CEO Eric Schmidt and others have noted, nowhere is AI likely to be more impactful than in the field of medicine. It already is. AI can be used to screen patients for medical treatment, such as for diabetic retinopathy, allowing ophthalmologists to focus limited time and resources on the high-risk patients whose retinal patterns the AI flags as most likely indicative of retinopathy. AI models can map proteins and run experiments to develop new drugs or test cures and treatments for diseases, using skills and performing tasks that would take humans years to complete, if they could perform them at all. And as Schmidt also points out, this generation of high school and college students will be the first "AI generation," having grown up with and around AI, in the way the prior generation was the first digital generation.[ix]
The question we should be asking is where AI is headed. No one is sure. If they are sure, ask them why they did not predict the advent of ChatGPT 10 years ago, or even five years ago, or before 2022. This we do know: we still do not know which fork in the road we are on, or, perhaps better put, we are headed in multiple directions all at once. AI comes with great promise in the fields of medicine, science and logistics. It also comes with peril. The same capacities that allow scientists to cure disease may help malicious actors create new biological agents, and democratize the capacity to do so by opening the door to do-it-yourself (DIY) tools once available only to governments. We have seen this already with deepfakes. Once, only governments and Hollywood studios could conjure up realistic simulations of faces and voices with special effects. Now there are platforms that make realistic images and voice facsimiles based on only a handful of words or images, and do so in seconds. This can be done to comedic effect or to fraudulent effect, as in the case of "family emergency scams" or "grandparent scams."
Views vary on what all of this means and where we are headed. The columnist David Brooks has written, "AI is an ally and not a rival."[x] Henry Kissinger said "it is simply a mad race for some catastrophe."[xi] Whether one believes superintelligence is the stuff of science fiction or a realistic part of Hawking's worst-case scenario, we should not lose sight of the reality that all AI starts with a human decision to code a machine to perform a task or solve a problem.
Having thought through where we are headed, the second question we should ask is how do we get there first with appropriate guardrails and regulations? It is in our control to answer the second question. Whether there are guardrails will depend on how policymakers and public citizens respond to the takeaways that follow.
The Big Three
Most legal, regulatory and ethics issues presented by AI will revolve around three issues: (a) the where, when and how of human control; (b) bias; and (c) data ownership, use and costs. It follows that law and policy should purposefully address each of these issues in the context of each use case, including perhaps a decision not to regulate or define policy boundaries. Let's briefly consider each:
- The where, when and how of human control: I call this issue "the centaur's dilemma." The phrase derives from the Department of Defense's "centaur model" for AI: part machine and part man. It is a dilemma for at least two reasons. On one hand, the more control humans assert over an AI system, the less advantage one gains from its capacity to act instantaneously and autonomously. On the other hand, the more autonomy an AI platform is given, the greater the risk that it will act in unintended or erroneous ways, or do so without the exercise of human decision and choice in the contextual moment of use.
It is also a dilemma because there is no easy, simple or singular solution. Each AI use case requires its own response to the dilemma. In the cyber field, for example, AI tools can effectively defend against malware and intrusion only if they are allowed to act instantaneously and autonomously. However, no one in their right mind would think it wise to delegate the decision to use nuclear weapons to an AI system, even though time may also be of the essence in most conceivable "use cases."
Policymakers, legislators, commanders, consumers and anyone else who uses AI-empowered tools should always ask four threshold questions:
- What is the AI use case?
- For each use case, what is AI better at?
- For each use case, what are humans better at?
- What are humans better at when augmented by machines?
So far, the answers to these questions invariably involve some form of human-machine teaming. This is true even for autonomous systems, such as the cybersecurity tool considered above, for a human decided to create, embed and enable the AI in the first instance.
AGI will change this. Therefore, a wise policymaker should consider now whether to impose legislative or policy redlines (prohibitions) and guardrails regarding human control over AI, including AGI, before a capacity is developed and interested parties are vested in avoiding regulatory limits.
For example, regulation might impose a requirement for affirmative human decision and accountability before issuing a medical diagnosis. Or, in the case of an autonomous system, regulation might require the designation of an official responsible for certifying the safety and security of the application. Likewise, legislators or policymakers might require a sandbox demonstration or red team evaluation and certification before an autonomous system is used in real-world circumstances.
- Bias: Like humans, AI is biased, sometimes in similar ways and sometimes in different ways. Public citizens need to appreciate at least four attributes of AI bias:
There are many different kinds of bias. Lawyers generally think about bias in the context of suspect classes and equal protection under law. When intelligence analysts think about bias, they are considering the suite of cognitive biases that might impact the accuracy of analysis, such as groupthink, anchoring and confirmation bias. AI may possess such biases because AI is designed, tested, deployed and used by humans.
However, AI can also embed other types of bias, derived from statistics, math and data, that lawyers and analysts do not often consider or look for. Statistical bias, for example, might occur when a data set is too small to support statistical meaning or accurate prediction, or when it does not account for hidden variables. This occurred, for example, at the outset of the Covid-19 pandemic, when health models could not account for asymptomatic infections.
Inappropriate deployment bias occurs when an AI system trained in one context is used in a different context. Imagine driverless car software trained in the U.S. on the right side of the road being used in the U.K. on the left side of the road.
However, bias can be more subtle, as in the case of an AI system being used to predict bail risk even though it was actually trained to predict substance-abuse risk. Inappropriate focus bias occurs when an algorithm reaches an output within a neural network based on factors inapt to the AI's intended use. These are three types of bias that might impact AI accuracy; there are others.
Technologists have developed, and are developing, methodologies to detect and mitigate AI bias. But such tools are useful only if they exist, they are wanted, and they are used.
AI bias can be subtle and hard to detect in at least three ways. First is moral bias. Within the black box of a neural network, for example, an algorithm might weight a factor that is inappropriate because it is inconsistent with societal values, or might apply a proxy that would be considered inappropriate if known or discovered.
Second is what we might call proxy bias. Residential addresses, for example, may serve as a proxy for economic status and race in a context where economic status and demography are not intended as variables. One analysis indicated that an AI employment screening system sorted job applicants based on the browser they used to submit their applications. This had the effect of de-selecting otherwise qualified candidates on the basis of age. Older applicants, it turns out, are more likely to use the browser preinstalled on their laptops, while younger users are more likely to install a preferred browser.
The same result occurred when the AI sorted applicants based on their use of social media, on the theory that the absence of social media use might indicate an introverted personality and that too much use might suggest a distracted employee. Younger people are more likely than older people to use social media on a regular basis. However, there are reasons other than age why an individual may choose to eschew social media. Moreover, in the case in question, the employer did not intend to screen out older candidates, but rather candidates who might not work well with others or who might spend all their time texting.
Third is hallucination bias. As noted earlier, generative AI can also hallucinate and do so convincingly. Clearly, an AI that responds with false or contrived information is engaged in a form of bias.
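To see how a proxy bias of the kind described above might be surfaced, consider the following minimal Python sketch. It is an illustration under stated assumptions, not a description of any actual screening system: the applicant data is synthetic, the browser-based screening rule and the assumed correlation between age and browser choice are invented, and the 0.8 threshold is borrowed from the familiar "four-fifths" rule of thumb used in U.S. employment-discrimination analysis. The point is that comparing selection rates across groups can reveal that a facially neutral feature is operating as a proxy for a protected attribute.

```python
# Illustrative sketch only: synthetic applicants, hypothetical screening rule.
import random

random.seed(1)

def make_applicant():
    age = random.randint(22, 64)
    # Assumption for illustration: older applicants more often keep the
    # preinstalled browser; younger applicants more often install another one.
    uses_preinstalled = random.random() < (0.7 if age >= 45 else 0.3)
    return {"age": age, "browser": "preinstalled" if uses_preinstalled else "installed"}

applicants = [make_applicant() for _ in range(10_000)]

# Hypothetical screener that (wrongly) treats an installed browser as a signal
# of "tech savvy" -- browser choice thereby becomes a proxy for age.
selected = [a for a in applicants if a["browser"] == "installed"]

def selection_rate(pool, group):
    members = [a for a in pool if group(a)]
    chosen = [a for a in selected if group(a)]
    return len(chosen) / len(members) if members else 0.0

older = lambda a: a["age"] >= 45
younger = lambda a: a["age"] < 45

rate_older = selection_rate(applicants, older)
rate_younger = selection_rate(applicants, younger)

print(f"selection rate, 45 and over: {rate_older:.2f}")
print(f"selection rate, under 45:   {rate_younger:.2f}")
print(f"ratio: {rate_older / rate_younger:.2f} "
      "(under 0.8 would flag a disparity under the four-fifths rule of thumb)")
```

With the assumed correlations, the older group's selection rate falls well below four-fifths of the younger group's, which is exactly the kind of disparity that routine bias testing is meant to catch before a system is deployed.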
AI bias is generally not fixed in time or reference. Because AI models are iterative—that is, continuously changing and learning from the data they are fed (think here about what ChatGPT is "fed" on a daily basis by its users)—AI consumers who care about the accuracy of what they are consuming must constantly and consistently test for bias. Users must also be wary of divulging confidential material to the AI or waiving privilege over the material they share.
These bias attributes lead to two takeaways. First, bias or not, the question is whether AI teaming with humans is more accurate than humans acting alone. Second, mitigating bias produces more accurate AI outputs. There is a tendency in academic literature to eschew or discount AI applications, generally or specifically, because they bear the risk of bias or demonstrate bias. Certain facial recognition applications, for example, have proven more accurate in identifying Caucasian faces than the faces of foreign nationals or persons with blended demographics. Generally, this has occurred not by design but because the data on which the algorithm was trained is primarily Caucasian in demography.
However, before discounting AI as a tool because it reflects bias, policymakers and legislators should ask threshold questions: Are we better off augmenting human judgment with an imperfect tool? Or are we better off relying exclusively on a human decisionmaker, with his or her own imperfections, to reach the same decision? In the context of driverless cars, for example, policymakers might ask parallel questions: What is the safety record of the driverless vehicle? And is that safety record better than that of human-driven cars?
Further, it should follow that the mitigation of AI bias will lead to a more accurate AI system, a safer AI system, or both. Therefore, industry and technologists should not instinctively eschew law, policy and ethics directed at mitigating bias. If wielded wisely and well, such law, policy and ethics should lead to a more accurate AI, as well as uphold our legal and policy values.
- Data ownership, use & costs: As Audrey Cronin, Director of the Carnegie Mellon Institute for Strategy & Technology, has written, AI systems "all depend on data sets."[xii] The better the data, and the more of it, the more accurate the AI. This may change with the advent of synthetic data, but that has not happened yet. Therefore, data that is not both legally and cyber protected will invariably get scraped and used to feed AI's voracious appetite for new data. AI companies will also look for additional sources of data by changing user agreements or lobbying for changes to laws that protect data.
AI users should determine whether the system they are using has been trained and tested on data suited to the AI's use. In the facial recognition scenario above, one might ask whether the AI was trained on data reflecting the demographics of the population for which the tool will be used, or, in the case of an employment screening application, whether the algorithm was trained on data reflecting contemporary educational demographics.
Relatedly, where an AI is used for confidential matters, such as legal proceedings or medical diagnosis, consumers should consider whether the data used to train, test and validate the AI comes from a closed or an open system. In other words, will the queries and inputs submitted to the AI become part of the AI's database and potentially be shared with others, or will they remain confidential?
Data ownership is an issue. We now see multiple lawsuits asserting copyright infringement involving intellectual work product that was scraped from the internet, used to train generative AI, and subsequently incorporated into generative-AI responses to queries. Legislatures have been slow to respond. However, courts do not have the legislative option of doing nothing; in the absence of framework statutory law addressing AI data, courts must address ownership and copyright issues on a case-by-case basis, which can result in a disparity of outcomes, especially since some parties prefer to negotiate data-use licenses rather than assume the cost and delay of litigation.
Data also drives AI's overt and hidden costs. The energy and environmental costs of running AI are now in focus; the law addressing these issues is not. Data storage and computation take a great deal of energy and water. One study indicates that a single query to ChatGPT resulting in a 100-word response consumes 519 milliliters of water, slightly more than the amount in a standard half-liter bottle.[xiii] According to Nature, in 2022, "datacenters consumed … about two percent of global energy demand," and AI-driven demand is expected to increase by 35 to 128 percent by 2026.[xiv] The energy is used in training AI, moving data from memory chips to processing chips, and storing the vast quantities of data needed to train and operate generative AI. Water is used to cool computational processing and data storage.
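To put the per-response figure in rough perspective, a back-of-the-envelope calculation helps. The sketch below uses the cited 519-milliliter figure; the daily query volume is a purely hypothetical number chosen for illustration, not a reported statistic.

```python
# Back-of-the-envelope scaling of the cited per-response water figure.
# 519 mL per 100-word response is the figure cited above; the query volume
# is a hypothetical assumption for illustration only.
ML_PER_RESPONSE = 519
HYPOTHETICAL_QUERIES_PER_DAY = 10_000_000

liters_per_day = ML_PER_RESPONSE * HYPOTHETICAL_QUERIES_PER_DAY / 1_000
print(f"{liters_per_day:,.0f} liters per day")         # about 5.2 million liters
print(f"{liters_per_day * 365:,.0f} liters per year")  # roughly 1.9 billion liters
```

Even under modest assumptions about volume, per-query costs that seem trivial in isolation aggregate quickly, which is why the energy and water questions are, at bottom, questions of scale.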
Finally, the quality of data sets, including their age, labeling and relevance, matters. Some commentators predict that the development of generative AI will slow as the availability of new data diminishes, the internet having already been scraped.
Forks in the Road
As noted, Stephen Hawking called AI "either the best or the worst thing to ever happen to humanity."[xv] However, most AI choices present a web of potential responses, more like a complex interstate interchange than a binary choice between a good and a bad fork in the road. Below are just a few forks in that web, offered by way of illustration. In some cases, such as the use of deepfakes for political advertising, we have already chosen a fork and moved forward without much debate or deliberative choice.
- A decision to delegate offensive capability to a cybertool, or to deploy a cybertool with uncertain collateral effects.
- A decision to cross new thresholds in the deployment and use of AI in outer space, in the creation of watch lists, or in the making of policy predictions.
- Use of AI to record and tabulate electoral voting.
- Use of AI for predictive policing, or predictive sentencing.
- Use of AI to adjudicate social security and welfare benefits.
- Use of AI to process and adjudicate insurance claims and benefits.
- Use of AI to identify internet extremism and threats.
- Use of AI to process college and university admissions.
- AI-enabled driverless truck convoys.
- Drone delivery services.
No doubt, readers will have other and more compelling examples. The goal here is not to create a list of every fork that lies ahead, or to argue that each of these points represents a "fork." It may be obvious to some policymakers that drone delivery is the wave of the future and should proceed apace and without debate, but apparently not if one lives in New Jersey.[xvi] The goal of this article is for policymakers and legislators to make thoughtful and purposeful choices when new AI thresholds are reached and before they are crossed.
Perhaps the biggest fork ahead has to do with regulation itself: To what extent will the U.S. seek to regulate AI at the federal or state level and, if so, how? Most commentators would agree that AI law is inchoate. Whether one views that as a good or a bad result largely depends on whether one is considering the question through a political, industry, consumer or rights-based lens. Industry actors are likely to say that regulation stifles innovation. Rights advocates, in contrast, are likely to express due process concerns about AI systems determining access to benefits or assessing bail and parole risks. In my view, and as noted above, appropriate and purposeful regulation is a good thing. Wielded well, regulation can result in more accurate and safer uses of AI, and bend society toward the promise of AI and away from its perils.
Four possible regulatory models are emerging. The European Union (E.U.) has adopted a comprehensive model for AI regulation in the form of the E.U. AI Act, an effort to centralize and regulate AI by legislative mandate. The AI Act divides AI systems into four categories: those with unacceptable risk, high risk, limited risk, and minimal or low risk. Notably, national security systems in any category are exempt. Social scoring and certain predictive policing systems are rated as unacceptable. Systems that address health, safety, fundamental rights, the administration of justice, democratic processes, access to services and law enforcement are rated as high risk. The AI Act devotes most of its attention to high-risk systems and general-purpose AI (the broad models that underlie tools such as generative AI), requiring of such systems numerous risk management, accountability and validation steps before the system is introduced into real-world scenarios (for example, connected to the internet) or the marketplace. These include required procedures for bias detection and correction, requirements for active and transparent human oversight, stop buttons, automatic logs to record AI actions, operator training and competence standards, fundamental rights impact assessments, regulatory sandboxes for testing AI, and conformity assessments by third parties to validate that the AI is working as intended. In addition, the AI Act establishes a European AI Office and supporting E.U. bureaucracy to oversee implementation.
In the U.S., at the national level and in the absence of legislation, the executive branch has taken a selective, ad hoc approach to AI regulation, seeking to regulate discrete areas of AI use and practice using existing presidential and statutory authorities. Executive Order 14110 ("On the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," October 30, 2023) focuses on government use of AI, the identification of generative AI presenting national security risks, and AI use cases implicating rights and safety. In addition, the order directs the designation of Chief AI Officers in government agencies and departments, thus creating an official responsible and accountable for AI development and use at each agency. The order also directed approximately 90 follow-on actions, many of which have come to fruition or near fruition, including a March 28, 2024, OMB memorandum to federal agencies and an October 24, 2024, National Security Memorandum. Notably, the order and its implementing directives not only establish guardrails and processes directed at potentially problematic or risky uses; they also seek to promote AI through mechanisms encouraging AI engineers to work for the government or in the U.S. (talent acquisition), and by promoting research and enhancing U.S. leadership internationally.
A third regulatory model is represented by the California bill known as SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act). California had previously adopted discrete elements of AI regulation in roughly 17 separate laws; SB 1047, however, sought to provide a comprehensive state framework. Among other things, the bill would have required AI developers to have a cybersecurity plan and a Safety and Security Protocol in place, and to possess the capability to promptly enact a full shutdown, before training a covered model. In addition, the bill would have required whistleblower protections for AI safety reporting, AI safety incident reporting and, eventually, third-party auditing of regulatory compliance. The bill was negotiated at length, with the negotiations focusing on the prospect of, and threshold for, the state attorney general seeking injunctive relief to prevent "potential critical harms," as well as on the creation of a government agency to implement the act. These provisions were ultimately dropped. The surviving portions of the bill passed both houses in August 2024 over opposition from Silicon Valley. A month later, however, the governor vetoed the bill, and there was no subsequent legislative effort to override the veto. Although not law, the bill serves as one regulatory model we might call E.U.-lite.
The fourth model of regulation is the default model: defer to industry best practices and principles, rather than take any legislative or regulatory steps, and see what happens. This might be a conscious choice, on the theory that the absence of regulation encourages innovation. Alternatively, it may represent a default choice because it is too hard to reach agreement on whether and how to regulate AI, even when there is consensus that something should be done.
Whether one is “for” or “against” regulation, we should assess the pros and cons of regulation cognizant of what happens when the law applicable to AI is not clear or is inchoate. Relevant actors will use the law they have, which may or may not be a good fit in context or address the full array of societal interests at stake. Thus, for example, complex issues about data scraping and use may be addressed exclusively through copyright infringement lawsuits.
The absence of specific law also heightens the importance of constitutional law, for whatever the statutory playing field, the Bill of Rights still pertains. With AI, the First, Fourth and Fifth Amendments are especially relevant. Litigation also becomes more important as a methodology for resolving larger policy issues, and not just disputes between parties. However, litigation is a poor mechanism for developing public policy, accenting as it does the specific interests and legal views of the parties. Key actors may also look to other bodies of law for rules to apply to AI by analogy.
The promulgation and voluntary application of ethical codes and best practices also become more important as guideposts and to fill regulatory gaps. All of which means, the fork—the choice—is not really between a comprehensive regulatory regime and no regime, but between purposeful AI law and an ad hoc application of the law and ethics we already have.
If I were a legislator, I would not spend time debating whether the E.U. comprehensive approach, the U.S. selective approach or the SB 1047/California comprehensive-lite approach is the better model. Rather, I would use the E.U. AI Act, Executive Order 14110 and SB 1047 as an à la carte menu and ask, with respect to each use case, whether it would be wise to apply a tool, a prohibition, or a process on a use-case or generalized basis. Having determined the answer, I would then consider the best methodology for implementing that result: national legislation; state legislation; executive order; indirect incentives, such as tax or insurance rates; voluntary best practices; or perhaps market mechanisms alone. From my vantage point, purposeful choice, rather than default choice, is essential because the stakes are so high.
The Future of Power
One reason the stakes are high is because as the National Security Commission on Artificial Intelligence (NSCAI) has said, “AI is the future of power.” Consider the effort that governments and industry are putting into AI. The national security use cases continue to grow as do the economic use cases. AI is the technology most likely to transform security practice and do so in profound ways. It is also a technology that has the potential to unlock and transform other technologies, such as synthetic biology, quantum information systems and fusion energy.
What is more, as the NSCAI identified, societal success with AI will not just mean national prominence or preeminence in the field. It will also likely reflect that the nation in question adopted the right regulatory model, which will likely be the one that not only sets reasonable boundaries and redlines but also gets the backroom stuff right: a regulatory system that encourages, but does not simply default to, innovation; attracts talent; funds research; and results in a resilient and reliable industrial base. Such an industrial base will provide a foundation of power not only in AI but across the defense and economic spectrum. AI might well be the engine that develops and sustains the U.S. industrial and technological base, which in turn serves as a foundation on which the future of power may rest.
The Future of Democracy
AI, and the regulation of AI, might also determine the future of democracy, both its existence and its shape. The NSCAI noted that, if allowed, law and ethics may distinguish democratic AI from authoritarian AI. At least three questions are relevant: Who should decide whether and how AI is regulated? How should we address AI's capacity to generate and distribute disinformation and misinformation? And should we care whether the benefits of AI, the promise of AI, accrue across society and not just to those who can afford it?
The Yale political scientist Robert Dahl asked of city governance in his 1961 study, "Who governs?"[xvii] The same question might be asked of AI today. If, as Daniela Rus has stated, "dig into every industry, and you'll find AI changing the nature of work,"[xviii] who should decide how AI is governed? This is a question about AI, but also about democracy. Should such decisions be made exclusively by first movers and key actors, such as the Department of Defense and industry leaders? Or should they be made by public citizens through the democratic process? One can agree or disagree with this proposition, but we ought to ask, and expect our legislators to ask, this question and purposefully make a choice in response.
Generative AI's capacity to create and tailor information, and to do so in the style and voice of others, makes it an excellent tool of disinformation, misinformation and conspiracy theory. Propaganda and disinformation are not new phenomena, but AI gives them the capacity to undermine democratic norms at scale. Addressing this phenomenon is especially challenging in the U.S., where the First Amendment limits the government's capacity to impose time, place and content limits on speech. Thus, limited efforts at regulation, such as a requirement to mark or acknowledge, or perhaps to prohibit, false AI-generated political ads about one's opponent, might yet be met with legal as well as political opposition. Moreover, where one political side or another perceives advantage in the information landscape, it will be loath to impose limitations on AI-generated speech in the interest of "democracy." As a result, discussions about AI's influence on democracy often devolve into political arguments about specific messaging.
I am not sure regulation is as important here as it might seem. Common sense might work as well. My advice, wherever one stands on a political issue, is to follow two simple rules. James Russell Lowell said Abraham Lincoln was a great lawyer because he could see both sides of every issue. The Lincoln Rule therefore posits that we should always ask what the other side of the argument is, even when we agree with the side we have heard, read or seen. Sometimes hearing both sides helps us find the ground truth; it is also more likely to lead both sides to common ground. The Moynihan Rule comes from Senator Daniel Patrick Moynihan, who famously quipped that "everyone is entitled to their own opinion, but not their own facts." Whether we are dealing with AI-generated content or human content, we should always ask: is the information at hand a fact, an inference, a judgment or an opinion? Knowing the answer, and reaching it objectively, allows policymakers and public citizens to engage in discussion on a substantive rather than polemic basis.
AI will impact democracy in other ways as well. One question, or concern, depending on perspective, is whether the benefits of AI will accrue across society or just to those who can afford its direct and hidden costs. Consider the use of AI in medical diagnosis and treatment. Here is another fork in the road: AI can lower the barrier to entry in areas such as law and medicine, for example by giving lower-income persons access to legal tools and medical testing they might not otherwise be able to afford, or it can become a costly tool limited to those of means. My goal is not to persuade, but to encourage public citizens to make purposeful, transparent and accountable decisions about each of these issues, rather than default decisions made because the issues are too hard or consensus is elusive.
The Centaur’s Choice:
Homework for Public Citizens and Legislators
AI is here and it is here to stay. It presents both promise and peril, for jobs, for judges, for society and potentially for humanity. Public citizens have a choice. We can defer to the creators of AI technology to get it right, or we can choose to purposefully regulate AI in whole or in part. The centaur's dilemma is a possessive dilemma (it is ours to resolve) because humans need not be passive actors. We have choices. One question is who is making these choices: industry actors, government officials, legislators, affected stakeholders or all of the above. With each use case, we also have a choice about how to exercise human control, address bias and manage data. We should remember that the effective, contextual application of law and ethics will result in more accurate, reliable, understandable and safer AI. Whether and how well we address these issues and perform these tasks may well determine whether powerful AI is the best or the worst thing to happen to humanity. Here we should remember what Stephen Hawking also said: "we need not fear change. We need to make it work to our advantage."[xix] Time now for legislators and public citizens to get to work.

REFERENCES
[i] Andrew Ng, "Why AI Is the New Electricity," Stanford Business (Mar. 11, 2017), https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity.
[ii] Eliza Strickland, "AI Experts Speak: Memorable Quotes from Spectrum's AI Coverage," Institute of Electrical and Electronics Engineers (Sept. 30, 2021), https://spectrum.ieee.org/artificial-intelligence-quotes.
[iii] "'The Best or Worst Thing to Happen to Humanity' – Stephen Hawking Launches Centre for the Future of Intelligence," University of Cambridge (Oct. 19, 2016), https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of.
[iv] Id.
[v] National Security Commission on Artificial Intelligence (NSCAI), Interim Report 8 (Nov. 2019), https://www.nscai.gov/wp-content/uploads/2021/01/NSCAI-Interim-Report-for-Congress_201911.pdf.
[vi] Here, "public citizen" describes an informed citizen who wishes to play a role in the governance of our society: a legislator, a teacher, a journalist or a member of the public interested in shaping public policy.
[vii] Bureau of Labor Statistics, "Occupational Outlook Handbook: Fastest Growing Occupations" (2024).
[viii] Greg Rosalsky, "10 Reasons Why AI May Be Overrated," NPR (Aug. 6, 2024, 6:30 AM), https://www.npr.org/sections/planet-money/2024/08/06/g-s1-15245/10-reasons-why-ai-may-be-overrated-artificial-intelligence.
[ix] Videotape: "'Genesis' Looks at the Future of Artificial Intelligence" (MSNBC 2024) (on file with author).
[x] David Brooks, "Many People Fear A.I. They Shouldn't," New York Times (July 31, 2024), https://www.nytimes.com/interactive/2024/07/31/opinion/ai-fears.html.
[xi] David Ignatius, "Why Artificial Intelligence Is Now a Primary Concern for Henry Kissinger," The Washington Post (Nov. 24, 2022), https://www.washingtonpost.com/opinions/2022/11/24/artificial-intelligence-risk-kissinger-warning-weapons/.
[xii] Patrick Cronin and Audrey Cronin, "Will Artificial Intelligence Lead to War?," The National Interest (Jan. 30, 2024), https://nationalinterest.org/feature/will-artificial-intelligence-lead-war-208958.
[xiii] Pranshu Verma and Shelly Tan, "A Bottle of Water per Email: The Hidden Environmental Costs of Using AI Chatbots," The Washington Post (Sept. 18, 2024), https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/.
[xiv] Katherine Bourzac, "Fixing AI's Energy Crisis," Nature (Oct. 17, 2024), https://www.nature.com/articles/d41586-024-03408-z.
[xv] "'The Best or Worst Thing to Happen to Humanity' – Stephen Hawking Launches Centre for the Future of Intelligence," University of Cambridge (Oct. 19, 2016).
[xvi] In November and December 2024 there were reported sightings of unknown aerial objects over New Jersey, leading to headlines like Jason Breslow, "Mystery Drones Flying Over New Jersey Have Residents and Officials Puzzled," NPR (Dec. 12, 2024, 3:50 PM), https://www.npr.org/2024/12/11/nx-s1-5226000/new-jersey-drones; Adam Harding, "FBI Joins Hunt for Answers Behind Nightly Drone Sightings in New Jersey," NBC New York (Dec. 6, 2024, 9:37 AM), https://www.nbcnewyork.com/new-jersey/morris-county-nightly-drone-noises/6033146/; Alys Davis, "Mystery Drones Over US East Coast Spark Concerns as FBI Investigates," BBC News (Dec. 13, 2024), https://www.bbc.com/news/live/cq8q5q53ypjt. Experts indicated that many, if not all, of the sightings were aircraft or drones.
[xvii] Robert Dahl, Who Governs? (Yale University Press 1961).
[xviii] See Strickland, supra note ii.
[xix] Stephen Hawking, Brief Answers to the Big Questions, Chapter 9 (Bantam Books 2018).
Hon. James Baker, JD
Hon. James Baker, JD, is the Director of the Syracuse University Institute for Security Policy and Law, as well as a Professor at the Syracuse College of Law and the Maxwell School of Citizenship and Public Affairs. He also serves as a judge on the Data Protection Review Court. He previously served as a Judge and Chief Judge on the U.S. Court of Appeals for the Armed Forces. Judge Baker also served as a presidentially appointed member (Obama) and Acting Chair of the Public Interest Declassification Board. As a career civil servant, he served as Legal Adviser and Deputy Legal Adviser to the National Security Council. Judge Baker has also served as Counsel to the President’s Foreign Intelligence Advisory Board and Intelligence Oversight Board, an attorney in the U.S. Department of State, an aide to Sen. Daniel Patrick Moynihan, and as a Marine Corps infantry officer. In 2017-18, he was the Robert E. Wilhelm Fellow at the Center for International Studies, MIT. Judge Baker has also taught at Yale, Iowa, Pittsburgh, Washington University (St. Louis) and Georgetown. He is the author of numerous articles and three books: The Centaur’s Dilemma: National Security Law for the Coming AI Revolution (Brookings 2021); In the Common Defense: National Security Law for Perilous Times (Cambridge 2007); and, with Michael Reisman, Regulating Covert Action (Yale 1992). Judge Baker earned his BA and JD from Yale University.