
Would Regulation Prevent AI From Becoming an Evil Overlord?

Some people fear that heavily armed, artificially intelligent robots[1] might take over the world and enslave us, or even exterminate humanity.[2] Citing these and other concerns, tech-industry billionaire Elon Musk, the late physicist Stephen Hawking and a growing number of computer scientists have argued that artificial intelligence (AI) technology needs to be regulated[3] to manage the risks.[4] Amazon CEO Jeff Bezos and Microsoft founder Bill Gates, despite Gates’s earlier, more relaxed position on AI,[5] have both raised concerns[6] about AI weapons systems.

Because of these and related concerns, there have been a number of calls to regulate AI. Some policymakers have suggested regulating it as a “tool with applications,”[7] while others have proposed regulating its use in particular sectors of the economy on a use-by-use basis.[8] The Federal Trade Commission (FTC) has released guidelines reminding AI developers and users that AI must play by FTC and other federal rules, including those regarding accuracy, non-deception, truthfulness and non-discrimination.[9]

More extreme proposals have called on regulators and system operators to “starve” AI to prevent it from becoming “a social and cultural H-bomb” that could “deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves.”[10] Rob Toews, a venture capitalist and journalist, has even called for the creation of a new federal AI regulatory agency.[11]

What is Artificial Intelligence?
Artificial intelligence systems are software that uses computer algorithms to make decisions. Multiple types of artificial intelligence exist. Some model the emergent decision-making patterns of insects and other animals, in which powerful collective behavior emerges from the combination of many small-scale decisions. Others use machine learning, in which the system performs analysis or prediction and learns from the results, either by comparing them to known right answers during a training process or by observing what happens when its decisions are put into practice.
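
To make the training-process idea concrete, here is a minimal sketch of supervised machine learning in Python. It is an invented toy example, not drawn from any system discussed in this article: a tiny linear model repeatedly compares its predictions to the known right answers and nudges its parameters to reduce the error.

```python
# Minimal sketch of supervised machine learning: the system makes a
# prediction, compares it to the known right answer, and adjusts itself.
# Illustrative only; real AI systems use far larger models and datasets.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: inputs x and "right answers" y = 2x + 1, plus noise.
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=100)

w, b = 0.0, 0.0          # model parameters, initially naive
lr = 0.1                 # learning rate: how strongly errors adjust the model

for epoch in range(200):
    pred = w * x + b              # model's current predictions
    error = pred - y              # compare to the right answers
    w -= lr * (error * x).mean()  # adjust parameters to reduce the error
    b -= lr * error.mean()

print(f"learned w={w:.2f}, b={b:.2f} (true values: 2.00, 1.00)")
```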


Regulation of AI as Regulation of Speech
We don’t regulate human thought or speech in the United States, but groups concerned with preventing AIs from advancing beyond human control, and with ensuring equitable AI decision-making, have advocated regulating the decision-making processes and outputs of AI systems. While human speech is protected by the First Amendment to the U.S. Constitution, it is not clear that the speech of an AI system would be. One could argue that the broad language of the First Amendment (“Congress shall make no law abridging the freedom of speech”) applies to non-human speech. Alternatively, since an AI program’s source code is the developer’s speech, and the outputs the AI system generates are an extension of that code, one could argue that restricting those outputs would restrain the developer’s (human) free-speech rights. However, there is no clear precedent for either argument as yet.

AI, though, is already subject to numerous regulations (as the FTC aptly notes). Regulating its decision-making processes, because of what it might determine or recommend, is inherently problematic. While AI likely doesn’t have constitutional free-speech rights, the designers and developers of the systems clearly do. Regulating AI’s output is arguably an unconstitutional prior restraint on developers’ speech, irrespective of what the AI may recommend or how its recommendations may be used.

In fact, some proposed regulations come dangerously close to regulating human thought. This is particularly true when the AI is advising only its developers or operators of its conclusions, rather than speaking to a larger audience. Moreover, researchers and businesses are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems, or limiting its benefits to the large firms skilled at navigating governmental regulations, without necessarily solving any real problems.

How Is AI Regulated Now?

While the term “artificial intelligence” may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping,[12] offers movie and TV recommendations,[13] and helps us search for websites.[14] It can also grade student writing,[15] provide personalized tutoring[16] and even help the TSA detect objects carried through airport security scanners.[17]

In each case, AI makes tasks easier for humans. But even as AI frees people from this work, its actions still rest on human decisions and goals: what action to take, where to search and what to look for.

In areas like these and many others, AI has the potential to do far greater good than harm—if used properly—and additional regulations are not needed to make sure this happens. There are already laws on the books of nations, states and municipalities governing civil and criminal liabilities for harmful actions. Autonomous drones, for example, must obey FAA regulations, while a self-driving car’s AI must obey regular traffic laws to operate on public roadways.

Existing laws also cover what happens if a robot injures or kills a person. If the injury is accidental, the robot’s programmer or operator isn’t criminally responsible[18] but could face civil consequences. While lawmakers and regulators might need to refine how responsibility for the actions of AI systems is assigned as the technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.

Potential Risks from Artificial Intelligence

It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control.[19] A common thought experiment involves a self-driving car forced to make a decision[20] about whether to run over a child who just stepped into the road or to veer into a guardrail, injuring the car’s occupants and perhaps even people in another vehicle.

Musk and Hawking, among others, worry that hyper-capable AI systems, no longer limited to a single set of tasks such as controlling a self-driving car, might decide they don’t need humans anymore. Such a system might even look at human stewardship of the planet, interpersonal conflicts, theft, fraud and frequent wars, and decide that the world would be better without people.[21] Science fiction author Isaac Asimov tried to address this possibility by proposing three laws[22] limiting robot decision-making: Robots cannot injure humans or allow them “to come to harm.” They must also obey humans, unless this would harm humans, and protect themselves, as long as doing so doesn’t harm humans or ignore an order.
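
As a purely illustrative aside, the priority ordering of the three laws can be sketched as a filter over candidate actions. Everything in the following Python snippet is hypothetical, invented for this example; Asimov stated the laws in prose, and, as the next paragraph notes, no such simple encoding captures real human values.

```python
# Purely illustrative sketch: Asimov's three laws as a prioritized filter
# over candidate actions. All field names here are hypothetical; in reality,
# deciding what "harms a human" is itself the hard, unsolved problem.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law: never permitted
    obeys_order: bool      # Second Law: preferred unless it harms a human
    preserves_self: bool   # Third Law: lowest priority

def choose(actions: list[Action]) -> Action | None:
    # First Law overrides everything: discard any action that harms a human.
    candidates = [a for a in actions if not a.harms_human]
    # Prefer obeying orders (Second Law), then self-preservation (Third Law).
    candidates.sort(key=lambda a: (a.obeys_order, a.preserves_self),
                    reverse=True)
    return candidates[0] if candidates else None

options = [
    Action("push bystander aside", harms_human=True,
           obeys_order=True, preserves_self=True),
    Action("brake and absorb impact", harms_human=False,
           obeys_order=True, preserves_self=False),
]
print(choose(options).name)  # -> "brake and absorb impact"
```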

But Asimov himself knew the three laws were not enough.[23] And they don’t reflect the complexity of human values. What constitutes “harm” is an example: Should a robot protect humanity from suffering related to overpopulation? Or should it protect the individual freedom to make personal reproductive decisions? Or might it identify something humans wouldn’t even readily think of, and decide to protect us against it?

Humans have already wrestled with these questions with our own, non-artificial intelligence. There are numerous restrictions on freedom in this and every other society. Rather than regulating what AI systems can and can’t do, it would be better to instill them with, or teach them, human ethics and values,[24] as parents do with human children.

However, just as not every human society’s values are the same, we can expect that AI systems made by different groups might have different values. If AIs are developed in the shadows by zealots or criminal organizations that flout regulations, it will almost certainly be these organizations’ values that they embody.

Artificial Intelligence Benefits

People benefit from AI every day. They get product recommendations from Amazon, search results from Google or Microsoft’s Bing and even AI-targeted marketing from numerous companies. AI systems look for network attackers, detect fraudulent credit card and bank transactions, and even help keep airports and national borders safe. But this is just the beginning. AI-controlled robots could assist law enforcement in responding to gunmen. Current police efforts must focus on preventing officers from being injured, but robots could step into harm’s way,[25] potentially changing the outcomes of cases such as the shooting of an armed college student at Georgia Tech[26] or an unarmed high school student in Austin.[27]

Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, such as processing sensor data,[28] during which human boredom might cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as decontaminating a nuclear reactor[29] or working in areas humans can’t go. In general, AI robots can give humans more time to pursue what they define as happiness by freeing them from other work.

AI Is Going to Happen, Here or Elsewhere

Many discussions of U.S. regulation seem to presume that American laws can restrict or prevent AI development. This is demonstrably not the case. While the U.S. has led the world in the development of key computing technologies, and several of the world’s largest software companies[30] (Microsoft, Google, Oracle, IBM, Apple and Adobe) are American firms, the U.S. is not the only place where AI is being developed. Russian President Vladimir Putin has heralded AI as “the future, not only for Russia, but for all humankind.”[31] In September 2017, he went as far as to tell Russian students that the nation that “becomes the leader in this sphere will become the ruler of the world.”[32]

With Russia and other nations embracing AI,[33] countries that don’t innovate in AI technologies, or worse, actually restrict its development, risk falling behind and being unable to compete with the countries that promote it.[34] Advanced AI can create advantages for a nation’s businesses and its defense. Nations without AI, or with less mature AI systems, might be placed at a severe disadvantage, forced to buy systems with whatever capabilities the more advanced nations are willing to let their firms sell abroad. While the state of nations after the introduction of AI is inherently unclear, one thing is apparent: restricting AI development in the U.S. won’t stop it from being developed. It may, in fact, make it far more likely that the eventual winning AI systems won’t respect our societal values, because they were developed by a country or group that doesn’t share them.

AI Transparency

Despite the benefits that AI provides, and those it is poised to provide, AI is far from perfect. AI makes mistakes. Computing systems can incorrectly take a spurious correlation as causation. They have been shown to disadvantage members of minorities and other groups. Academic Virginia Eubanks charged that some systems are responsible for “automating inequality” in a book by that name,[35] while another academic, Safiya Umoja Noble, termed some systems “algorithms of oppression”[36] in another eponymous volume. Improving the underlying learning technologies and how these algorithms are trained is critical to preventing these and other issues.
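
To illustrate the spurious-correlation problem concretely, the toy Python sketch below (an invented example, not drawn from the article or the works cited) shows how a hidden common cause can make two unrelated quantities look strongly correlated, which a naive system might mistake for a causal signal.

```python
# Toy demonstration of spurious correlation: ice-cream sales and drowning
# incidents are both driven by a hidden confounder (temperature), so they
# correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.uniform(0, 35, size=500)           # hidden common cause

ice_cream = 2.0 * temperature + rng.normal(0, 5, 500)
drownings = 0.3 * temperature + rng.normal(0, 2, 500)

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation(ice cream, drownings) = {r:.2f}")  # strong, roughly 0.8

# Controlling for the confounder removes the apparent relationship:
# correlate the residuals left after regressing out temperature.
def residual(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residual(ice_cream, temperature),
                        residual(drownings, temperature))[0, 1]
print(f"after controlling for temperature = {r_partial:.2f}")  # near 0
```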

What is even more problematic, however, is that AI’s mistakes are very hard to detect. Some systems can’t explain or justify how they made a recommendation or decision. Others can explain decisions in general terms but not justify them specifically. Yet others can explain general rules and trends but not specific decisions. A sub-discipline called eXplainable AI (XAI) has arisen to develop new AI techniques that humans can better understand and to develop upgrades that better explain existing techniques.[37] Efforts in this area are critical to human trust in AI systems and to AI’s long-term viability. Many of these efforts are being driven by academia, not corporate AI developers. The development of XAI techniques is exactly why we need to encourage more innovation by a broader community, not less innovation by a small group that is well positioned to navigate extensive government bureaucracy.
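
As one concrete example of the kind of technique XAI researchers study, the sketch below uses permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. It is a minimal illustration assuming scikit-learn is installed, not a tool endorsed by the article or any developer mentioned in it.

```python
# Minimal sketch of one XAI technique: permutation feature importance,
# which estimates how much each input feature drives a model's decisions
# by measuring how accuracy drops when that feature is shuffled.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```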


Notably, the FTC discusses the need for transparency in its guidance.[38] It points out that an issue with a health care AI system was discovered by independent researchers who were able to source data from an academic hospital, not by the AI developer or its users. Transparency won’t just help identify issues so they can be fixed; it can also demonstrate that systems are making good decisions without bias, where that is the case, and provide a defense against spurious claims targeting the developers and operators of properly functioning systems.

Who Does Regulation Really Protect?

Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs, or that prevent certain uses, might delay or forestall those efforts. This is particularly true for small businesses and individuals, key drivers of new technologies, who are not as well equipped to handle regulatory compliance as larger companies.

In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment. Even ambiguity about what regulation covers and which aspects of AI are regulated may be problematic, as it may cause people to forgo innovation rather than risk being inadvertently ensnared by vague rules and potential penalties.

Humanity faced a similar set of issues in the early days of the internet, but the United States deliberately avoided regulating the internet so as not to stunt its early growth.[39] Elon Musk’s PayPal and numerous other businesses helped build the modern online world while subject only to regular, human-scale rules, like those preventing theft and fraud. Similarly, no special rules governed early software businesses, such as Microsoft, in their burgeoning years, and those businesses have gone on to become industry titans.

AI systems have the potential to change how humans do just about everything. Scientists, engineers, programmers and entrepreneurs need time to develop the technologies—and deliver their benefits. To achieve maximum benefit, their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations. The bigger risk is that if we don’t develop these technologies domestically, they may be developed abroad, and we will have little or no knowledge about how they work and little or no say over their operations.

On the other hand, regulations regarding business conduct, including the conduct of AIs employed by businesses, already exist. If there are gaps in these regulations that allow otherwise regulated conduct to slip through when performed by an AI, the obvious solution is to patch the holes. Knee-jerk regulations that ban AI until we know it’s safe, by contrast, will provide little long-term benefit and might have severe repercussions.

This is not to say that there isn’t a role for government. Agencies such as the FTC have a part to play in keeping the market for AI technologies honest, non-deceptive and fair, and in ensuring that users of AI systems are similarly honest, non-deceptive and fair in their practices.

There is also an important legislative role. In fact, perhaps the most important AI regulation would be for Congress to prevent states and municipalities from creating a hodgepodge of local laws that could produce a confusing marketplace, with increased compliance costs and a need for products modified from jurisdiction to jurisdiction. Because most AI products have the potential for widespread use across state lines, AI development is inherently interstate commerce and would fall under federal preemption doctrine (under the authority of the Constitution’s Commerce Clause) if a federal law prohibiting state and municipal regulation were passed.

Acknowledgement

This article is based, in part, on an article that originally appeared in The Conversation.[40]

Jeremy Straub, PhD, is an Assistant Professor in the North Dakota State University Department of Computer Science and an NDSU Challey Institute Faculty Fellow. His research spans a continuum from autonomous technology development to technology commercialization to questions of technology-use ethics and national and international policy. He has published more than 60 articles in academic journals and more than 100 peer-reviewed conference papers. Straub serves on multiple editorial boards and conference committees. He is also the lead inventor on two U.S. patents and a member of multiple technical societies.

References

1 https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262.
2 https://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.
3 https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-threat.
4 http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence.
5 https://www.cnbc.com/2017/09/25/bill-gates-disagrees-with-elon-musk-we-shouldnt-panic-about-a-i.html.
6 https://www.cnbc.com/2019/03/26/bill-gates-artificial-intelligence-both-promising-and-dangerous.html.
7 https://www.washingtonpost.com/outlook/2020/01/13/heres-how-regulate-artificial-intelligence-properly.
8 https://www.brookings.edu/research/ai-needs-more-regulation-not-less.
9, 38 https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
10 https://blogs.scientificamerican.com/observations/dont-regulate-artificial-intelligence-starve-it.
11 https://www.forbes.com/sites/robtoews/2020/06/28/here-is-how-the-united-states-should-regulate-artificial-intelligence/?sh=395615af7821.
12 https://doi.org/10.1155/2009/421425.
13 https://doi.org/10.1016/j.jvlc.2014.09.011.
14 https://doi.org/10.1609/aimag.v18i2.1290.
15 https://www.washingtonpost.com/news/answer-sheet/wp/2016/05/05/should-you-trust-a-computer-to-grade-your-childs-writing-on-common-core-tests.
16 http://www.telegraph.co.uk/education/2017/09/08/tutor-future-scientists-develop-algorithm-match-pupils-tutors.
17 https://doi.org/10.1109/ICCV.1999.790410 & https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2234476.
18 http://doi.org/10.5235/17579961.5.2.214.
19 http://heinonline.org/HOL/LandingPage?handle=hein.journals/akrintel4&div=11.
20 https://theconversation.com/helping-autonomous-vehicles-and-humans-share-the-road-68044.
21 https://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.
22 https://doi.org/10.1109/MIS.2009.69.
23 https://theconversation.com/beyond-asimov-how-to-plan-for-ethical-robots-59725.
24 https://futureoflife.org/ai-principles.
25 https://doi.org/10.1016/j.techsoc.2013.12.004.
26 https://www.nytimes.com/2017/09/18/us/georgia-tech-killing-student.html.
27 http://www.nydailynews.com/news/national/texas-teen-shot-police-unarmed-naked-article-1.2526287.
28 http://dx.doi.org/10.3390/s140304239.
29 https://www.cbsnews.com/news/bad-news-from-japans-wrecked-fukushima-nuclear-reactor.
30 https://companiesmarketcap.com/software/largest-software-companies-by-market-cap.
31, 33 http://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world.
32 https://www.rt.com/news/401731-ai-rule-world-putin.
34 https://doi.org/10.1016/j.clsr.2010.03.003.
35 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor, St. Martin’s Press, 2018.
36 Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, NYU Press, 2018.
37 Vilone G, Longo L. “Explainable Artificial Intelligence: a Systematic Review.” arXiv. Published online May 29, 2020. Accessed April 27, 2021. http://arxiv.org/abs/2006.00093; Gunning D, Stefik M, Choi J, Miller T, Stumpf S, Yang GZ. “XAI-Explainable Artificial Intelligence.” Sci Robot. 2019;4(37). doi:10.1126/scirobotics.aay7120.
39 https://www.forbes.com/sites/adamthierer/2012/02/12/15-years-on-president-clintons-5-principles-for-internet-policy-remain-the-perfect-paradigm.
40 https://theconversation.com/does-regulating-artificial-intelligence-save-humanity-or-just-stifle-innovation-85718.