Artificial Intelligence is Transforming Our World–Are We Ready?

Our world is changing. Cars drive themselves. Automated grocery stores allow customers to shop without employees in the store. Drones manage and spray our farm fields. Software applications control access, temperature and lighting in our smart homes. Autonomous robots clean our houses. Voice-controlled virtual assistants help ease the burdens of many daily tasks. Facial recognition cameras help identify persons of interest in busy crowds. Imaging analysis software helps doctors make medical diagnoses more quickly and accurately than ever.

These are just some examples of how artificial intelligence (AI) is revolutionizing our society in unprecedented ways. In fact, the United States Patent and Trademark Office (USPTO)—the federal agency primarily responsible for overseeing innovation in the U.S.—expects AI to “revolutionize the world on the scale of … electricity.”[1] It is worth pausing to conceptualize the level of impact at issue. Imagine our world without electricity. Whether good or bad—whether we like it or not—this is the level of change at stake in the AI revolution. Driven by massive amounts of data, often collected from individuals, AI is able to emulate human intelligence and perform tasks historically performed by people. What was once science fiction will be tomorrow’s new normal. Although we have already moved past the question of whether we should adopt AI into our lives, we should not overlook the important question of whether we are ready for this quickly evolving technology.

Emerging AI applications will undoubtedly advance our technology and improve our lives. They will likely make our roads safer and our homes more comfortable, improve our food production and ease the burdens of many everyday tasks. There exists a dark side to such advancement, however, and the meteoric rise of AI technology will certainly raise many significant societal questions. There is perhaps no greater uncertainty than how AI will impact our economic growth and likely displace some of our workforce in coming decades. 

Much closer on the horizon, three pressing legal questions have already emerged and remain largely unanswered: First, who will be legally responsible when AI causes injury? Second, how will we protect the immense value of AI innovation? Third, how will we balance AI’s societal benefits against its societal costs, such as reduced individual privacy? Before considering these legal gray areas—AI liability, innovation and privacy—it is pivotal to first understand AI’s scope and the importance of specificity in addressing it.

What Do We Mean by “Artificial Intelligence”?

Although AI is nearly ubiquitous, it has no universal definition. It is neither an area of law nor a single industry. AI is a technological revolution that impacts virtually all facets of our lives. A common definition of AI refers to the capability of machines to emulate human behavior, particularly intelligence and decision making.[2] However, this definition is certainly underinclusive relative to how society uses the term. Sure, artificial intelligence includes Terminators, IBM’s Watson and other highly sophisticated, autonomous and (perhaps in the future) self-aware computer systems. But AI—perhaps in conflation with automation—is often used to describe much more, such as: (1) software that performs processing typically performed by humans; (2) software that uses data to provide reports; (3) fitness trackers; (4) software that uses data for predictive analytics; (5) smart thermostats; (6) software that predicts illness spread, weather or traffic; (7) hardware components for robotic systems; (8) software that understands and mimics human speech; (9) virtual assistants; (10) underlying computer algorithm designs; (11) content recommendations on streaming platforms; and (12) autonomous robotic systems. In categorizing patents, even the USPTO found no definition with adequate specificity and instead defined AI patents by identifying eight “component technologies.”[3] Given the varying definitions, AI’s scope for now is defined only by the label we ascribe to it.

To be sure, AI is different from the Internet of Things (IoT), which generally refers to devices with sensors capable of gathering data and communicating over the internet. But the line between AI, IoT and mere software can become blurry. Ultimately, what becomes clear about the definition of AI is that it lacks clarity because AI’s potential scope, as understood by the general public, often stretches far beyond the narrower scope ascribed by scientists and engineers.[4]

AI’s evolving and broadening nature presents challenges in measuring its impact and analyzing policy decisions. Potential liability from errors in weather forecasting software presents different considerations than errors from medical diagnostic software. Innovation in autonomous vehicle and drone technology impacts our economy differently than automated calendaring software. And the data associated with what temperature our thermostat is set to at night presents different privacy concerns than the data from devices in our living room listening to (and perhaps recording)[5] our conversations.

Accordingly, specificity is important. In making policy decisions, characterizing the issue merely as AI can be misleading. Yet, addressing AI policies and regulation at the micro level for each individual technology can be overwhelming and inefficient. Luckily, this is not necessary, since there exists a “Goldilocks level” of specificity when addressing AI. Although, as a category, AI is far too broad to be specific, commonalities pervade its continuum, and like technologies can be grouped together. The key to meaningful dialogue is recognizing AI’s breadth and deliberately tailoring the discussion to precisely the AI category at issue. When addressing AI, we must articulate its scope and meaning or group it only with contextually similar applications.[6]

Is Our Legal System Ready for AI? 

Our society’s readiness for AI, in many ways, will be measured by the readiness of our legal system. After all, our legal system is the system of rules for what conduct society is willing to accept, how we are willing to allocate risk, and who we believe is deserving of compensation. Laws govern everything we do. Although AI will raise many questions regarding legal policy—some of which have yet to be considered—three leading questions have emerged: First, how we will impose liability for injuries caused by AI; second, how we intend to protect and promote the innovation of AI technology; and third, how we will balance concerns for individual privacy from AI use with benefits to society as a whole. 

AI Liability

“Who is responsible for this?” That question has echoed in our minds since childhood. At its core, this simple question is rooted in the fundamental notion that those “responsible” for causing harm should be required to remedy it—which most often means “pay for it.” As AI continues to play an increasing role in our everyday lives, the potential for harm (and liability) seems inevitable. Self-driving cars crash, automated software applications malfunction, and AI predictions prove to be wrong. When harm results, we will once again ask, “Who is responsible for this?” Except now we will be asking that question in a new frontier, where decisions might have been made by a robot (or autonomous system) instead of a human.

In the absence of a contract that answers the question, liability for such wrongdoing is governed in the U.S. by tort law. The law of torts imposes liability for both intentional torts (when the wrongdoer’s conduct was intentional) and negligent torts (when an actor had no intention of causing harm but caused it in a way that society views as falling below a “reasonable” standard of care). Negligence (unintended harm) is the most common form of tort liability and will likely remain so in the context of AI, where the vast majority of AI is likely to be programmed to avoid causing harm. To demonstrate liability for negligence, a party must generally establish four elements: First, the defendant owed a duty to the plaintiff; second, the defendant breached that duty; third, the defendant’s actions were the cause (both “actual” and “proximate”) of the plaintiff’s injury; and fourth, the plaintiff sustained some harm or injury.

In a negligence lawsuit involving AI, the plaintiff will be obvious: the person who suffered the harm. But who will the defendant be? The AI system? The person who developed the AI system? The company that developed the AI system? The company that sold the AI system? The person who operated the AI system? The list is limited perhaps only by the creativity of the plaintiff’s attorney and whatever legal limits exist for liability under tort law. 

Given the analysis’s clear focus on the defendant’s actions in proving a negligence claim, it is imperative to name the right party as a defendant. As a matter of practice, a plaintiff lawyer’s creativity in seeking a meaningful recovery for the client is frequently guided by the opportunity to sue wealthy parties, often a company. Why? Because such defendants are the most likely to be able to pay. Obtaining a $10 million judgment can quickly become a Pyrrhic victory when the defendant found liable has no assets (insurance or funds) to satisfy the judgment. In such a case, the plaintiff wins the legal claim yet remains uncompensated. Worse, since many personal injury cases are litigated on a contingency basis (where the plaintiff’s lawyer is compensated only if the plaintiff recovers), a dim prospect of actual payment might result in difficulty even obtaining a lawyer. For some accidents, this might not pose a significant concern. In most fender-benders, for example, finding a party liable results in compensation either through the insurance company or the responsible party, who is likely to have assets to satisfy a small judgment. But raise the stakes to a single plaintiff with a very significant injury (for example, a child killed by a self-driving car or a plaintiff misdiagnosed by medical software) or thousands of plaintiffs with relatively minor harm (for example, a smart thermostat that turns off the heat to thousands of homes or a digital assistant that mistakenly orders items online[7]), and the potential for under-compensation becomes real, particularly against an individual or a small, underinsured company with few assets.[8]

Devising a general rule as to which party should be held liable anytime AI causes harm is difficult. Like any negligence case, context is critical, and the liability of the actor will depend on the particular circumstances of the case, as well as what led to the harm. The questions central to the liability inquiry are likely to include: Whose conduct fell below society’s expectations, and was the harm foreseeable from the conduct at issue? Yet, it doesn’t take a lawyer to appreciate the difficulties the elements of negligence raise when imposing liability for AI-caused harm.

Illustration by Jerry Anderson

For many obvious reasons, a suit against an AI system itself is implausible, at least until AI systems start gaining personhood[9] and owning property. When a dog bites someone, the plaintiff doesn’t sue the dog.

Negligence suits against AI developers are also far from guaranteed to provide recovery. Problems might arise in demonstrating that the developer owed a duty to the plaintiff if the software was used in a way the developer did not intend or harmed someone who was not anticipated to be impacted by it. Proving that the developer caused the harm presents an additional hurdle if the AI system caused harm in an unexpected and unforeseeable way.

AI systems can often be a “black box”: it may be impossible to know exactly how the system operates or reaches its decisions. The more complex the AI system’s “thinking,” the further removed the developer is from foreseeability, and the less likely there is to be liability. Similarly, a reseller of an AI system might not have done anything unreasonable simply by providing a product or service developed by someone else. As such, there may be a need to rethink the applicability—or at least the scope—of foreseeability in our traditional analysis of negligence law when it comes to AI liability.

An alternative avenue to liability for harm caused by AI systems may be strict liability—a tort claim that imposes liability regardless of whether the defendant’s actions were intentional or reasonable. But strict liability laws are limited to very specific contexts, such as animals, abnormally dangerous activities and products liability. Although the discussion has spanned decades, it is still far from clear whether software constitutes a “product” subject to strict products liability.[10] Since the AI provided by a party primarily—sometimes exclusively—comprises software, strict liability currently might not extend to AI.

However, even if an AI system does not fit within any of these categories, expanded strict liability laws (and accompanying insurance policies) may emerge as the leading way to govern compensation for harm caused by AI systems. Importantly, imposing liability without intent or negligence has drawbacks and requires careful consideration, especially regarding corporate willingness and ability to absorb such risk into business practices. A broader scope of insurance coverage leads to more expensive insurance and a higher cost of doing business, which might prohibit or discourage some AI uses and developers.

So, when the familiar question “Who is responsible for this?” arises in the context of the new AI frontier, we can take comfort in having a robust, time-tested legal framework to look to for answers. Yet, that comfort may be misplaced. Tort law is largely shaped by constantly changing policy decisions about how our society chooses to allocate risk and provide compensation. Moreover, tort law is primarily governed by state law, which creates the very real potential for inconsistent laws and policies across different states. On AI’s emerging issues, the existing legal framework remains largely uncharted as to where these policy lines should be drawn. Absent legislation on AI liability, the boundaries for responsibility in this new frontier will continue to develop through common law (litigating individual cases in the courts). Since the development of common law takes significant time and a willingness for parties to take on the increasing costs of litigation, the law on AI liability could lag far behind society’s fast-paced adoption of AI.

Protection of AI Innovation

The boom in AI development has seen an enormous amount of innovation in just the last decade. For example, “[t]here were 10 times as many AI patent applications published in 2019 as in 2013” and “[t]he same time period saw an almost four-fold increase in granted AI patents.”[11] Not surprisingly, AI’s immense value has created a significant legal battleground for exploiting and protecting AI innovation. When it comes to leveraging AI innovation, individuals and companies have two key questions to consider: Do I have the legal right to do what I would like to do, and do I have the legal right to exclude others from doing it? These legal questions fall squarely within the domain of intellectual property (IP) law.

IP refers to intangible property—“creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce.”[12] IP law comprises four core legal areas: patents, trade secrets, copyrights and trademarks. Patent law, in particular, is a critical way to protect innovation in the U.S. The scope of what can be patented is quite broad: “[A]ny new and useful process, machine, manufacture or composition of matter, or … improvement thereof” is eligible for a patent, provided that the applicant can satisfy all other requirements in the statute.[13] An owner of a patent enjoys a powerful monopoly to exclude others in the U.S. from making, using, offering to sell or selling the patented invention.[14]

An alternative for protecting innovation—particularly innovation that is kept secret—is trade secret law. Governed by both federal and state law, trade secret law protects “all forms and types of financial, business, scientific, technical, economic or engineering information,” but only if the owner has taken reasonable measures to keep such information secret and the information “derives independent economic value … from not being generally known.”[15] Although trade secret law does not protect against reverse engineering or independent discovery, it prohibits others from “misappropriating”—acquiring or disclosing—a trade secret through “improper means.”[16]

Copyright law protects original works of authorship fixed in tangible form—such as literary works, musical works, motion pictures, sound recordings and architectural works—from unlawful reproduction and distribution.[17] It does not protect, however, any “idea, procedure, process, system, method of operation, concept, principle, or discovery.”[18] Thus, for example, copyright law might protect against the reproduction of the particular way in which a cooking recipe is expressed, but it would not preclude others from using or sharing the underlying process described in the recipe.

Trademark law has a narrower scope in protecting AI innovation. A trademark is “any word, name, symbol or device, or any combination thereof” used to distinguish one’s goods and indicate their source.[19] Simply put, it is “how customers recognize you in the marketplace and distinguish you from your competitors.”[20]

The importance of IP and its value in today’s world cannot be overstated. Gone are the days when most companies’ value was tied to the buildings they owned and the widgets they made. According to recent reports, “intangible assets”—a very significant portion of which are IP rights—are “now responsible for 90 percent of all business value,” as opposed to just 32 percent in 1985.[21] With so much value now tied to IP rights, the competition for IP innovation and ownership has never been greater. Moreover, the demand for—even dependence on—IP ownership has amplified the importance of the delicate balance at the center of patent and copyright law.

A core principle underlying patent and copyright laws is that they provide strong incentives for individuals and companies to devote resources and time to innovation by granting them exclusive rights as a reward for their investment. However, the more IP rights we grant to individual inventors and authors, the more we limit the public’s use of and access to those creations. For example, granting a broad patent on drone technology leaves less for society in that same space because of the powerful monopoly to exclude others from using or selling the patented invention. As the U.S. Supreme Court has explained, “monopolization of [basic tools of science and technology] through the grant of a patent might tend to impede innovation more than it would tend to promote it.”[22] Additionally, the exertion of broad IP rights can provide significant (and sometimes improper) leverage against competitors in both the marketplace and litigation.

The debate on the appropriate scope of IP rights to promote, rather than stifle, innovation is far from new. And that debate is certain to carry through into the policy discussions surrounding the AI revolution. Recently, the National Security Commission on Artificial Intelligence asserted that “[t]he United States lacks the comprehensive IP policies it needs for the AI era and is hindered by legal uncertainties in current U.S. patent eligibility and patentability doctrine.”[23] Others believe that the exponential growth in AI patents,[24] AI publications[25] and AI investment[26] demonstrates tremendous promise for AI innovation under the current legal framework. Although many views exist on where to draw the legal boundaries for protecting AI IP, all sides seem to agree that the answers to these questions will have tremendous importance for AI innovation in the U.S.

Illustration by Jerry Anderson

As the historical debate on balancing IP rights takes center stage in the emerging AI space, a related, but perhaps even more complex, question has developed with it. AI IP has traditionally meant the inventions and artistic works developed and created by individuals and companies in the realm of AI. But as AI systems become more sophisticated, AI has moved from being intellectual property to generating intellectual property. For example, on July 29, 2019, the USPTO received a patent application listing a single non-human inventor for an “[i]nvention generated by artificial intelligence.”[27] This raises the novel question of who owns IP generated by AI.

Although not fully settled, current U.S. patent law appears not to allow AI to own a patent or to be listed as an inventor on a patent. In answering the question whether an “artificial intelligence machine [can] be an ‘inventor’ under the Patent Act,” a federal district court (in the companion litigation to the above patent application) recently held that “the clear answer is no.”[28] In the appeal of that case, the USPTO continued to maintain that under “[t]he plain language… [of] the Patent Act… – only a human being can be an ‘inventor.’”[29] Importantly, though, some other countries have taken a different approach and permitted AI to be listed as an inventor on a patent.[30]

Addressing a similar question, copyright law has been interpreted not to allow AI to be listed as an author of an artistic work. Although the Copyright Act does not define “author,” the Register of Copyrights has stated in its administrative manual that “[t]o qualify as a work of ‘authorship’ a work must be created by a human being.”[31] “Works that do not satisfy this requirement are not copyrightable.”[32] The Copyright Review Board recently reaffirmed this view of the law when it denied copyright registration for an AI-generated artwork.[33] Court decisions have reached similar holdings that non-humans are not authors for purposes of copyright law.[34]

These legal holdings have intensified the question of who owns AI-generated inventions and artistic works if the AI system does not meet the legal requirements to be an inventor or author. Is it the company that owns the AI at the time of invention or creation? The company that originally developed the AI? Or does the invention or artistic work fall into the public domain with no private owner? These questions are far from merely theoretical or academic. For individuals and companies who use and rely on AI technology in their business, the answer to ownership of AI-generated inventions and artistic works may have a tremendous impact on the value of their business.

Privacy of AI Data

Quality data is extremely important to AI innovation. In fact, AI depends on data to function. The Economist recently proclaimed, in an article title, that “The World’s Most Valuable Resource Is No Longer Oil, But Data.”[35] AI systems that provide reports or predictions utilize and analyze large amounts of data to achieve their desired function. More importantly, sophisticated AI systems, such as those making autonomous decisions, depend on data to learn how to differentiate and identify patterns and objects. For example, to train AI software to recognize a picture of a cat, the developer can use a large dataset of cat pictures to allow the AI system to learn what a cat picture looks like. Once the AI system has reviewed a sufficient number of pictures, it can rely on its trained algorithm to autonomously recognize a picture of a cat in a group of pictures. Without quality data, however, it would be virtually impossible for sophisticated AI systems to achieve their objectives.[36]
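
For readers curious what that training process looks like in practice, the following is a minimal sketch of the supervised-learning loop described above, written in Python with the PyTorch library. The tiny model and the random stand-in images are illustrative assumptions; a real classifier would train on thousands of labeled cat and not-cat photos.

```python
# A minimal sketch of supervised training for a cat/not-cat classifier.
# The "dataset" here is random stand-in images; a real system would load
# thousands of labeled photos instead.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # learn simple visual features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # pool to one value per channel
    nn.Flatten(),
    nn.Linear(8, 1),                            # one output score: cat vs. not-cat
)

# Stand-in data: 64 RGB images (3 x 32 x 32) with random cat/not-cat labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64, 1)).float()

loss_fn = nn.BCEWithLogitsLoss()  # standard loss for yes/no classification
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):            # each pass over the data adjusts the weights
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()               # compute gradients from the prediction errors
    optimizer.step()              # nudge the weights to reduce those errors
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# After training on enough real examples, the model can score a new picture;
# a value near 1.0 means "probably a cat."
with torch.no_grad():
    probability_cat = torch.sigmoid(model(torch.randn(1, 3, 32, 32)))
```

The sketch makes the article's point concrete: the trained model is nothing more than a distillation of its training data, so the quality of the data bounds the quality of the system.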

In some industries, it can be difficult to obtain useful data for AI development. For instance, in developing medical diagnostic software, access to medical imaging datasets can be very limited.[37] In many other areas, however, access to data is plentiful—at least for some (often larger) companies. For example, Amazon has access to an immense amount of data on the shopping habits and trends of most Americans. With technology’s pervasiveness in our lives, the reality—whether we like it or not—is that we create trails of data in almost everything we do. Your phone tracks where you go and how long you stay there. Your browser and social media applications track your internet footprints. Your fitness tracker records your health and sleep patterns. And cars not only monitor your driving habits but now check your level of attention to the road.

Abundant access to an increasing amount of user data provides opportunities for tremendous societal benefits. For instance, location data from phones helps find missing persons and solve crimes, internet activity provides convenience in quickly finding relevant information and products, social media posts help support societal movements, digital health devices improve our health and alert us to concerns, and driving data helps reduce accidents and create safer roads. As AI applications become more sophisticated, their impact and potential to improve our society will continue to expand. 

However, there is a dark side to constantly sharing data about ourselves. Unfortunately, not all data is used for the public good, much less for the benefit of individuals. In fact, much of it is collected for commercial gain. Left unchecked, data use in AI algorithms has the potential to hide biases and perpetuate biased decisions.[38]

In addition, expectations of privacy differ: not everyone is willing to share private data with the world, even for the greater good. Have you ever run an internet search for a product only to be unsuspectingly spammed with advertisements for the same product minutes later? Such targeted advertising occurs based on data left behind in your internet footprints. Surprising as it may be, the collection and sharing of data often happen behind the scenes, so people may not even recognize what data they are sharing. Although companies often provide disclosures about the data they collect and how they use it (usually explained in user agreements), not every user takes the time to read those lengthy documents. Those who do may not fully understand them and likely would be powerless to change them.

Illustration by Jerry Anderson

Further, some public or self-disclosed data simply requires no user permission at all. Add that data can often be shared and sold—not to mention hacked or stolen—and it becomes nearly impossible to understand how your private data is being used, much less predict where it will go. And because individuals can be reidentified even from “anonymized” data, removing identifying information from large datasets offers limited protection. Some companies now offer products that help keep data private,[39] but much of the control still lies with the companies that collect, store and use the data. Thus, the most meaningful protection for individual privacy will have to come from the laws regulating those companies.
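
To make the reidentification point concrete, here is a toy sketch of a so-called linkage attack in Python, using the pandas library. Every name, value and column label below is invented for illustration, but the technique mirrors well-documented research that matched “anonymized” medical records to public voter rolls using only a few shared attributes.

```python
# A toy "linkage attack": joining an "anonymized" dataset to a public one on
# shared quasi-identifiers (here, ZIP code and birth date) re-attaches names.
# All records below are fictional.
import pandas as pd

# "Anonymized" health records: names removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "zip": ["58102", "58103"],
    "birth_date": ["1980-04-02", "1975-11-20"],
    "diagnosis": ["diabetes", "asthma"],
})

# A separate public dataset (e.g., a voter roll) that includes names.
public = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["58102", "58103"],
    "birth_date": ["1980-04-02", "1975-11-20"],
})

# The merge reidentifies each "anonymous" patient.
reidentified = anonymized.merge(public, on=["zip", "birth_date"])
print(reidentified[["name", "diagnosis"]])
```

Because a handful of mundane attributes can single a person out, stripping the name column alone offers only the limited protection described above.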

In 2018, the European Union’s comprehensive data protection law—the General Data Protection Regulation (GDPR)—took effect.[40] This regulation applies not just to companies in Europe but to anyone—even those not in the EU—who “process the personal data of E.U. citizens or residents, or … offer goods or services to such people.”[41] So, if a hotel in Fargo hosts Europeans or a business in Bismarck sells products to Europeans, that venue or company might be subject to the regulation’s requirements. The GDPR provides a “compliance checklist” for U.S. companies.[42] The definition of “personal data” under the GDPR is very broad and includes “any information that relates to an individual who can be directly or indirectly identified,” such as “[n]ames and email addresses… [l]ocation information, ethnicity, gender, biometric data, religious beliefs, web cookies and political opinions.”[43] “Processing” data likewise carries a very broad definition and includes “[a]ny action performed on data, whether automated or manual,” such as “collecting, recording, organizing, structuring, storing, using, erasing.”[44] The penalties for violating the GDPR can be very significant—up to €20 million or 4 percent of global revenue (whichever is higher)—in addition to damages that individuals can seek as compensation for improper data use.[45]

By contrast, the U.S. does not currently have a similar federal law providing broad protection for individual data. While federal law protects certain types of data—health and financial information, for example—the large gaps in privacy laws are governed by a patchwork of state laws[46] that provide only scattered protection. In fact, only a few states currently offer broad data privacy laws for their citizens. As one example, the California Consumer Privacy Act—which protects only California residents—gives consumers: (1) the right to know what personal information businesses collect about them, (2) the right to delete certain personal information collected from them, and (3) the right to opt out of the sale of their personal information.[47] The scope of personal information includes “name, social security number, email address, records of products purchased, internet browsing history, geolocation data, fingerprints, and inferences from other personal information that could create a profile about your preferences and characteristics.”[48] The act’s requirements apply to “for-profit businesses that do business in California” and meet certain threshold conditions. Many other states have recently considered legislation to address data privacy concerns, but the scope of protection and the likelihood of such legislation becoming law vary significantly. For example, in North Dakota, a data privacy bill that would have prohibited the sale of “a user’s protected data to another person unless the user opts-in to allow the sale” was presented during the 2021 Legislative Assembly but failed to pass after a 12-1 committee vote recommended it be rejected.[49]

Undoubtedly, as the use of automation and AI systems increases, so will the need for meaningful access to data, which will in turn increase data’s value. Security, transparency and privacy are not incompatible with data sharing or with advancing AI innovation to improve society. But as AI implementations begin to further impact every facet of our lives, it will be imperative to consider appropriate measures to ensure a balance between access to information and respect for individual privacy. Some of the key issues to be addressed are likely to include requirements for safely storing private data, restrictions on the transfer of data, meaningful opportunities for users to choose which data they share, and the ability of users to seek a remedy when their data is misused. As highlighted above, the conversation on these issues has only just begun.

The advancement of AI in the coming decade will revolutionize our world in unprecedented ways. This will undoubtedly offer many benefits to our society: Travel has the potential to become cheaper and safer; healthcare is poised to become more advanced, more accessible and more accurate; and automation could significantly ease many burdens of everyday life. As these changes unfold, however, important legal issues surrounding AI liability, innovation and privacy will arise that impact our society in significant ways. Although there are no easy answers to these policy questions, it will be pivotal to consider how our existing legal framework applies to this new AI frontier before the unanswered legal issues impact our society on a larger scale. Without further research and discussion on these topics, our expansive adoption of AI could outpace our readiness to responsibly and appropriately integrate it into our society.

Nikola Datzov

Nikola Datzov is an Assistant Professor at the School of Law, University of North Dakota, where he teaches courses on intellectual property, torts, remedies and conflict of laws. His research and scholarship focus on patent law, artificial intelligence, innovation and the intersection of different areas of intellectual property law. Prof. Datzov earned a BS in Computer Science at the University of South Dakota and then a JD at Hamline University School of Law. He worked as an attorney in the federal courts for three years, serving as a law clerk for judges at the U.S. Court of Appeals for the Eighth Circuit and the U.S. District Court for the District of Minnesota. Prof. Datzov then became a partner at a large law firm in the Midwest, leveraging his law and computer science degrees in representing parties in high-stakes litigation in federal courts throughout the country before joining UND.

References

1 U.S. Patent and Trademark Office, “Inventing AI,” 2 (2020), https://www.uspto.gov/sites/default/files/documents/OCE-DH-AI.pdf
2 https://www.merriam-webster.com/dictionary/artificial%20intelligence
3 USPTO, supra note 1 at 3.
4 More specific terms, such as machine learning and deep neural network, which are subsets of AI, provide better clarity regarding the meaning of the technology at issue. However, they do not resolve the ambiguity surrounding the use of the broader concept of AI.
5 See, e.g., Matt Day, Giles Turner and Natalia Drozdiak, “Thousands of Amazon Workers Listen to Alexa Users’ Conversations” (Apr. 11, 2019), https://time.com/5568815/amazon-workers-listen-to-alexa/
6 It may seem contradictory to emphasize the importance of specificity in making policy decisions relating to AI and then to discuss AI generally in this article. But the focus of this article is not to offer recommendations on good AI policy for any specific issue or AI technology; instead, it is to highlight the important questions that will need to be addressed in each of those policy decisions with regard to specific AI technologies. Those questions transcend all types of AI.
7 See, e.g., Maham Abedi, “Amazon Echo mistakenly orders cat food after hearing TV commercial” (Feb. 14, 2018), https://globalnews.ca/news/4025172/amazon-echo-orders-cat-food-tv-commercial/
8 Many of the companies leading AI development are large companies that would not raise such concerns. However, if liability does not extend to such companies, companies with fewer assets that were involved in the development of the AI product (such as a joint development) could become the only viable party to hold liable.
9 In 2017, an AI system granted citizenship by Saudi Arabia became the first robot to be given personhood. See Emily Reynolds, “The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing” (Jan. 6, 2018), https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics
10 See Bryan H. Choi, “Crashworthy Code,” 94 Wash. L. Rev. 39, 53 (2019) (“[N]one of those arguments are new, and they have long failed to move any court to extend products liability law to software.”).
11 See Center for Security and Emerging Technology (CSET), “Patents and Artificial Intelligence: A Primer,” 2 and 13 (2020), https://cset.georgetown.edu/wp-content/uploads/CSET-Patents-and-Artificial-Intelligence.pdf
12 “What is Intellectual Property?”, World Intellectual Property Organization, https://www.wipo.int/about-ip/en/
13 35 U.S.C. § 101.
14 See 35 U.S.C. § 271(a).
15 See, e.g., 18 U.S.C. § 1839(3).
16 See 18 U.S.C. § 1839(5) and (6).
17 See 17 U.S.C. §§ 102, 106.
18 See 17 U.S.C. § 102(b).
19 15 U.S.C. § 1127.
20 “What is a trademark?”, USPTO, https://www.uspto.gov/trademarks/basics/what-trademark
21 https://www.oceantomo.com/intangible-asset-market-value-study/; see also https://www.aon.com/getmedia/60fbb49a-c7a5-4027-ba98-0553b29dc89f/Ponemon-Report-V24.aspx
22 Mayo Collaborative Servs. v. Prometheus Lab’ys, Inc., 566 U.S. 66, 71 (2012).
23 National Security Commission on Artificial Intelligence, “Final Report,” 12 (2021), https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
24 USPTO, supra note 1 at 5; see also CSET, supra note 11 at 13.
25 Human-Centered AI Institute, Stanford University, “Artificial Intelligence Index Report 2021,” 18 (2021), https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf
26 CSET, “Tracking AI Investment,” 8 (2020), https://cset.georgetown.edu/wp-content/uploads/CSET-Tracking-AI-Investment.pdf
27 The USPTO denied the application and refused to grant a patent. See Decision on Petition, https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf
28 Thaler v. Hirshfeld, 558 F. Supp. 3d 238, 240 (E.D. Va. Sept. 2, 2021), appeal pending, No. 21-2347 (Fed. Cir. Sept. 24, 2021).
29 Thaler, No. 21-2347, Dkt. No. 34 at 17.
30 South Africa was the first jurisdiction to grant Thaler’s patent application. Australia’s federal court initially held that inventors need not be human, but a later decision by the full federal court reversed the holding. The other jurisdictions that have examined the application, such as the European Patent Office, have denied Thaler’s application.
31 Compendium of U.S. Copyright Office Practices § 313.2, available at https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf
32 Id.
33 https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf
34 See, e.g., Naruto v. Slater, 2016 WL 362231, at *4 (N.D. Cal. Jan. 28, 2016), aff’d, 888 F.3d 418 (9th Cir. 2018) (holding that a six-year-old crested macaque “is not an ‘author’ within the meaning of the Copyright Act”).
35 https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data
36 Use of data to train AI does not itself provide ownership or protection for the data. As noted in the above section, whether a party can protect data it uses is a separate question governed by intellectual property law and contract law.
37 See Edmund L. Andrews, “The Open-Source Movement Comes to Medical Datasets” (Aug. 2, 2021), https://hai.stanford.edu/news/open-source-movement-comes-medical-datasets
38 See, e.g., Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women” (Oct. 10, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
39 See, e.g., https://duckduckgo.com/
40 See https://gdpr.eu/what-is-gdpr/
41 Id.
42 https://gdpr.eu/compliance-checklist-us-companies/
43 See supra note 40.
44 Id.
45 Id.
46 See https://iapp.org/resources/article/us-state-privacy-legislation-tracker/#
47 See https://oag.ca.gov/privacy/ccpa
48 Id.
49 See https://www.legis.nd.gov/assembly/67-2021/bill-actions/ba1330.html