{"id":115,"date":"2020-12-03T08:58:39","date_gmt":"2020-12-03T14:58:39","guid":{"rendered":"https:\/\/dda.ndus.edu\/ddreview\/?p=115"},"modified":"2023-04-18T16:22:50","modified_gmt":"2023-04-18T21:22:50","slug":"closer-to-the-robo-rubicon-robots-autonomy-and-the-future-or-maybe-not-of-you","status":"publish","type":"post","link":"https:\/\/dda.ndus.edu\/ddreview\/closer-to-the-robo-rubicon-robots-autonomy-and-the-future-or-maybe-not-of-you\/","title":{"rendered":"Closer to the Robo-Rubicon: Robots, Autonomy and the Future (or Maybe Not) of You"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/wp-content\/uploads\/sites\/17\/2020\/12\/Robo-Rubicon-2500px-scaled.jpg\" alt=\"\" width=\"425\" height=\"582\" \/><\/figure><\/div>\n\n\n\n<p class=\"has-drop-cap\">Seven years ago, Maj. Gen. Robert Latiff and I wrote an opinion article for the Wall Street Journal, titled \u201cWith Drone Warfare, America Approaches the Robo-Rubicon.\u201d The Week, which reviews newspaper and magazine stories in the U.K. and U.S., highlighted the article in its \u201cBest Columns-US\u201d section. \u201cIf you think drone warfare has created some tricky moral dilemmas, said [Latiff and McCloskey],\u201d The Week began its pr\u00e9cis, \u201cjust wait until we start sending robotic soldiers into battle.\u201d<\/p>\n\n\n\n<p>\u201cCrossing the Rubicon,\u201d of course, refers to Julius Caesar\u2019s irrevocable decision that led to the dissolution of the Roman Republic, a limited democracy, and ushered in the Roman Empire, which would be run by one or more dictators (aka, emperors). On January 10, 49 BC, General Caesar led a legion of soldiers across the Rubicon River, breaking Roman law and making civil war inevitable. 
The expression has survived as an idiom for passing the point of no return.<\/p>\n\n\n\n<p>Our contention in the article was that full lethal autonomy\u2014that is, empowering robotic weapons systems with the decision to kill humans on the battlefield\u2014crosses a critical moral Rubicon. If machines are given the legal power to make kill decisions, then it inescapably follows that humanity has been fundamentally devalued. Machines can be programmed to follow rules, but they are not persons capable of moral decisions. Surely taking human life is the most profound moral act, which, if relegated to robots, becomes trivial, along with all other moral questions.<\/p>\n\n\n\n<p>Not only does this change the nature of war; it also puts human nature and democracy at risk. This is not merely a theoretical issue but a fast-approaching reality in military deployment.<\/p>\n\n\n\n<p>Drones are unmanned aerial vehicles that\u2014along with unmanned ground, underwater and eventually space vehicles\u2014are crude predecessors of emerging robotic armies. In the coming decades, far more technologically sophisticated robots will be integrated into American fighting forces. Moreover, because of budget cuts, increasing personnel costs, high-tech advances, and international competition for air and technological superiority, the military is already pushing toward deploying large numbers of advanced robotic weapons systems.<\/p>\n\n\n\n<p>There are obvious benefits, such as greatly increased battle reach and efficiency, and, most importantly, the elimination of most risk to our human soldiers.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/wp-content\/uploads\/sites\/17\/2020\/12\/Atlas-2000px-scaled.jpg\" alt=\"\" width=\"350\" height=\"631\" \/><figcaption>Atlas (above right) is a \u201chigh mobility, humanoid robot,\u201d according to its developer, Boston Dynamics. 
\u201cOur long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them,\u201d said company founder Marc Raibert. \u201c[T]his robot is a step along the way.\u201d This robot (and many other types) could also be weaponized and given lethal autonomy.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Unmanned weapons systems are already becoming increasingly autonomous. For example, the Navy\u2019s X-47B, a prototype stealth strike drone, can now navigate highly difficult aircraft-carrier takeoffs and landings. At the same time, technology continues to push the kill decision further from human agency. Drones are operated by soldiers thousands of miles away. And any such system can be programmed to fire \u201cbased solely on its own sensors,\u201d as stated in a 2011 U.K. defense report. In fact, the U.S. military has been developing lethally autonomous drones, as the Washington Post reported in 2011.<\/p>\n\n\n\n<p>Lethal autonomy hasn\u2019t happened\u2014yet. The kill decision is still subject to many layers of officer command, and the U.S. military maintains that \u201cappropriate levels of human judgment\u201d will remain in place. However, although there has not been a change in official policy, it is fast becoming a fantasy to maintain that humans can make a meaningful contribution to kill decisions in the deployment of drones (or other automated weapons systems) and in robot-human teams.<\/p>\n\n\n\n<p>Throughout our military engagements in Kosovo, Iraq and Afghanistan, the U.S. enjoyed complete air superiority. This enabled complex oversight of drone attacks in which there was the luxury of sufficient time for layers of legal and military authority to confer before the decision to fire on a target was made. Such oversight would not exist in possible military engagements with Russia, China or even Iran. 
The choice would be lethally autonomous drones or human pilots\u2014and significant casualties.<\/p>\n\n\n\n<p>Aside from pilot risk, consider the cost differential. Each new F-35 Joint Strike Fighter jet will cost about $100 million, and training an Air Force pilot costs an additional $6 million per year. In contrast, each hunter-killer drone (MQ-9 Reaper) costs about $14 million.<\/p>\n\n\n\n<p>Military verbiage has shifted from humans remaining \u201cin the loop\u201d regarding kill decisions, to \u201con the loop.\u201d Soon technology will push soldiers \u201cout of the loop,\u201d since the human mind cannot function fast enough to process the data that computers digest instantaneously. Future warfare won\u2019t be restricted to single drones but to masses of robotic weapons systems communicating at the speed of light.<\/p>\n\n\n\n<p>Recently, the Defense Advanced Research Projects Agency (DARPA), which funds U.S. military research, began exploring how to design an aircraft carrier in the sky, from which waves of fighter drones would be deployed. These drone swarms will be networked and communicate with each other instantaneously. How will human operators coordinate kill decisions for several, if not dozens, of drones simultaneously?<\/p>\n\n\n\n<h1>Third Offset Strategy<\/h1>\n\n\n\n<p>U.S. defense secretary Ashton Carter terms the Pentagon\u2019s new approach to deterrence the \u201cthird offset strategy.\u201d The first offset in the post-WWII era, which asserted American technological superiority, was the huge investment in nuclear weapons in the 1950s to counter Soviet conventional forces.<\/p>\n\n\n\n<p>Twenty years later, after the Russians caught up in the nuke race, the U.S. 
is seeking advantage through robotic weapons systems and autonomous support systems, such as drone tankers for mid-air refueling.<\/p>\n\n\n\n<p>What\u2019s remarkable is how publicly the defense department is talking about robotic autonomy, including human-robot teams and human-machine enhancements, such as exoskeletons and sensors embedded in human warfighters to gather and relay battlefield information. Easily accessible online are \u201cThe Unmanned Systems Integrated Roadmap FY2011-2036\u201d and \u201cUSAF Unmanned Aircraft Systems Flight Plan 2009-2047,\u201d which articulate the integration of unmanned systems in every aspect of the U.S. military\u2019s future. Given the pace at which AI is developing, this integration will accelerate.<\/p>\n\n\n\n<p>How then are fewer soldiers supposed to maintain human veto power over faster and massively greater numbers of robotic weapons on land, underwater and in the skies?<\/p>\n\n\n\n<p>As we wrote, \u201cWhen robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster\u2014which at least can be used for spare parts.\u201d The rejoinder is that robots would do better than humans on the battlefield.<\/p>\n\n\n\n<p>For example, Ronald Arkin, PhD, the director of the Georgia Institute of Technology\u2019s mobile robot lab, is programming robots to comply with international humanitarian law. Perhaps someday, as a result, an autonomous weapon might be able to distinguish between a small combatant and a child, resolving one crucial challenge. Let\u2019s hope the enemy doesn\u2019t wear masks\u2014or put them on children\u2014to confuse the robot\u2019s facial recognition software.<\/p>\n\n\n\n<p>Other computer scientists are focusing on machine learning as the route to making robots, in their view, better ethical decision-makers than humans. At one lab, researchers read bedtime stories to robots to teach them right from wrong. Apparently Dr. 
Seuss was R2-D2\u2019s favorite author.<\/p>\n\n\n\n<p>These endeavors, however, are beside the point since a robot\u2019s actions are not moral, even if it passes the Turing test and behaves so intelligently it seems indistinguishable (except for appearance, for now) from humans. Robotic actions are a matter of programming, not moral agency. They will hunt solely by sensor and software calculation.<\/p>\n\n\n\n<p>In the end, \u201cdeath by algorithm is the ultimate indignity.\u201d<\/p>\n\n\n\n<h1>National Discussion &amp; DARPA<\/h1>\n\n\n\n<p>Over 35 years ago, a scholar noted the basic problem regarding new technologies in the Columbia Science and Technology Law Review. Before development, not enough is known about risk factors to regulate the technology sensibly. Yet after deployment, it\u2019s too late since the market penetration is too great to reverse usage.<\/p>\n\n\n\n<p>In this case, however, there is enough legitimate concern about lethally autonomous weapons systems to warrant serious consideration, and deployment has not yet occurred. A significant step towards consideration was taken in 2014 with the publication of a report by the National Research Council and National Academy of Engineering, at the request of DARPA, \u201cEmerging and Readily Available Technologies and National Security.\u201d The report studied the ethical, legal and societal issues relating to the research, development and use of technologies with potential military applications. Maj. Gen. Latiff served on the committee that focused on militarily significant technologies, including robotics and autonomous systems.<\/p>\n\n\n\n<p>The report cited fully autonomous weapons systems that have already been deployed without controversy. Israel\u2019s Iron Dome antimissile system automatically shoots down barrages of rockets fired by Hamas, a Palestinian terrorist organization. The Phalanx Close-In Weapons System protects U.S. 
ships and land bases by automatically downing incoming rockets and mortars. These weapons would respond autonomously to inbound manned fighter jets and make the kill decision without human intervention. However, these systems are defensive and must be autonomous since humans can\u2019t react fast enough. Such weapons don\u2019t pose the same moral dilemma as offensive weapons since we have a fundamental right to self-defense.<\/p>\n\n\n\n<p>Also mentioned were offensive weapons that could easily operate with complete lethal autonomy, such as the Mark 48 torpedo and an iRobot ground robot that can be equipped with a grenade launcher. The report sets out the framework for initiating a national discussion, such as whether such autonomous systems could comply with international law.<\/p>\n\n\n\n<p>However, if machines are deployed to seek out and kill people, there is no basis for humanitarian law in the first place. Every individual\u2019s intrinsic worth, which constitutes the basis of Western civilization, drowns in the Robo-Rubicon.<\/p>\n\n\n\n<p>How much intrinsic worth does a machine have? None. Its value is entirely instrumental. We don\u2019t hold memorial services for the broken lawnmower. At best we recycle. There is no Geneva Convention for the proper treatment of can openers or even iPhones. Once lethal autonomy is deployed, then people can have no more than instrumental value, which means that democracy and human rights are mere tools to be used or discarded as the ruling classes see fit.<\/p>\n\n\n\n<p>The answer to the dilemma lethal autonomy poses, to be clear, does not involve a retreat from technology but the securing of sufficient advantage that the U.S. can leverage international conventions on the military uses and proliferation of lethal autonomy and other worrisome emerging technologies.<\/p>\n\n\n\n<p>The wider importance of lethal autonomy becomes clear in considering the enormous social threat that automation poses. 
On the horizon is massive job displacement via automated taxis, trucks and increasingly sophisticated task automation affecting most employment arenas. Already in Japan there is a hotel staffed almost entirely by robots. In many states, truck driver is the most common job. What will hundreds of thousands of ex-drivers, averaging over 50 years of age, do once autonomous transportation corridors are created? True, there\u2019s a shortage of neurosurgeons\u2014at least for now.<\/p>\n\n\n\n<p>IBM, whose Watson artificial intelligence (AI) system famously beat two human \u201cJeopardy!\u201d champions in 2011 (it was Google DeepMind\u2019s AlphaGo that defeated the world\u2019s top Go player last March), has since released a Watson-based financial robo-adviser for institutional clients. Not only are human financial advisers getting nervous; so are professionals throughout finance due to the proliferation of robo-advice. And the scenario is similar to lethal autonomy in that these tools are marketed as assistive\u2014i.e., with human professionals in the loop gaining productivity. But how long will that last as AI evolves and faster computer chips are developed? IBM now offers anyone free cloud access to a quantum processor for open-source experimentation.<\/p>\n\n\n\n<p>As AI becomes increasingly advanced, more functions will be done better, faster and cheaper by machines. Already, autonomous robots are performing surgery on pigs. 
Researchers claim that such robots would outperform human surgeons on human patients, reducing errors and increasing efficiency.<\/p>\n\n\n\n<p>Some experts argue that the \u201cjobless future\u201d is a myth, that \u201cwhen machines replace one kind of human capability, as they did in the transitions from \u2026 freehold farmer, from factory worker, from clerical worker, from knowledge worker,\u201d wrote Steve Denning in his column at Forbes.com, \u201cnew human experiences and capabilities emerged.\u201d<\/p>\n\n\n\n<p>No doubt this will be true to some extent as technology facilitates fascinating and valuable new occupations, heretofore unimaginable. But the problem isn\u2019t that machines are replacing \u201cone kind of human capability,\u201d but that robots threaten to replace almost all of them within a short period of time.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/dda.ndus.edu\/wp-content\/uploads\/sites\/17\/2020\/12\/Watson-computer-2000px.jpg\" alt=\"\" \/><figcaption>Watson is a question-answering computer system that responds to queries posed in natural language. Watson was named after IBM\u2019s founder and first CEO, industrialist Thomas J. Watson.<\/figcaption><\/figure>\n\n\n\n<p>There are two questions: What will happen to our humanity in big automation\u2019s tsunami, and who (or what) does this technology serve?<\/p>\n\n\n\n<p>Regarding our humanity, recent trends are disturbing. In medicine, not only are jobs at risk in the long run, but robots will increasingly make ethical and medical decisions. Consider the APACHE (Acute Physiology and Chronic Health Evaluation) system, which helps determine the best treatment in the ICU. What happens when the doctor, who is supposed to be in charge, decides to veto the robo-advice? If the patient dies, will there not be multi-million dollar lawsuits\u2014and within seconds once the law profession is roboticized (thereby replacing rule of law with regulation by algorithm)? 
In short, in this arena and elsewhere, are we outsourcing our moral and decision-making capacity?<\/p>\n\n\n\n<p>\u201cNo one can serve two masters,\u201d said Jesus in an era when children were educated at home, learning carpentry (to choose a trade at random) from their father. Today, increasing numbers of children\u2014now a third, according to a survey in the U.K.\u2014start school without basic social skills, such as the ability to converse, because they suffer from a severe lack of attention and interaction with parents who are possessed by smartphones. Technology has become the god of worship, and kids are learning they are far less important than digital devices. How much will this generation value\u2014or even know\u2014their humanity and that of others? Is it not \u201cnatural\u201d in this inverted world to completely cede character and choice to the Matrix?<\/p>\n\n\n\n<p>\u201cHumans are amphibians\u2014half spirit and half animal,\u201d wrote C.S. Lewis. \u201cAs spirits they belong to the eternal world, but as animals they inhabit time.\u201d Machines can support both spheres\u2014if intelligently designed according to just principles with people maintaining control. This would seem common sense, but that is becoming the rarest element on the periodic table.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/wp-content\/uploads\/sites\/17\/2020\/12\/tally-sticks-2000px.jpg\" alt=\"\" width=\"380\" height=\"253\" \/><figcaption>Tally sticks were used from prehistoric times into the 19th century as a memory aid device to record financial and legal transactions, and even schedules and messages<\/figcaption><\/figure><\/div>\n\n\n\n<h1>Tally-Ho &amp; Tally Sticks<\/h1>\n\n\n\n<p>Millennials and succeeding generations will remake the world via digital technology. 
Big data and big automation might cure cancer, reverse aging, increase human intelligence and solve environmental issues. Imagine a war where few die or lose limbs. These wonders and more seem more than plausible in what many see as a dawning utopia. And it\u2019s not likely to be an \u201ceither\/or.\u201d A kidnapped child will be located in minutes and the same surveillance tools might greatly restrict personal freedoms.<\/p>\n\n\n\n<p>Certainly there will be huge economic and creative opportunities\u2014for some. Experts predict that robot applications will render trillions of dollars in labor-saving productivity gains by 2025. Meanwhile, an Oxford University study in 2013 predicted that about half of jobs in the U.S. are vulnerable to being automated in the near future. If, as seems likely, jobs destroyed greatly outnumber jobs created, what does society do with the replaced?<\/p>\n\n\n\n<p>Some can retrain or transfer skills, but most might become permanently jobless. It\u2019s unlikely that many former taxicab drivers or even surplus middle-aged lawyers, as examples, could be re-purposed for most digital-based jobs\u2014as those positions decline in number, too.<\/p>\n\n\n\n<p>Consider the fate of tally sticks, which are notched pieces of wood used from prehistoric times to keep accounts (ergo, \u201ctallies\u201d). In 1826, England\u2019s Court of Exchequer began transferring records from these sticks to ink and paper. By 1834, there were tens of thousands of unused tally sticks, which were disposed of in a stove in the House of Lords. 
There were so many of these suddenly useless carbon-based units that the fire spread to the wood paneling and ultimately burned down both the House of Lords and the House of Commons.<\/p>\n\n\n\n<p>If the current presidential election cycle has shown anything, it\u2019s that there is already growing dissatisfaction among the majority of Americans, which could spark a social conflagration.<\/p>\n\n\n\n<p>Nonsense, some might argue, Americans take care of their own. Perhaps, but our fundamental commitment to the common good might disintegrate.<\/p>\n\n\n\n<p>The United States Constitution was founded on the Judeo-Christian belief in the intrinsic worth of every individual, as articulated eloquently in the Declaration of Independence: \u201cWe hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.\u201d True, it took a century to outlaw slavery and another hundred years to eliminate the legal barriers to racial equality. But justice prevailed precisely because injustice contradicts the nation\u2019s founding principles.<\/p>\n\n\n\n<p>There was an inevitable logic to the civil rights movement.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/dda.ndus.edu\/wp-content\/uploads\/sites\/17\/2020\/12\/Ex-Machina-2000px.jpg\" alt=\"\" \/><figcaption>Alicia Vikander plays Ava, a humanoid robot with heightened artificial intelligence in the British sci-fi film \u201cEx Machina,\u201d released in 2015. The film hinges on Ava\u2019s ability to pass a sophisticated version of the Turing test by convincing Caleb (left), a computer programmer, that he can relate to her as if \u201cshe\u201d is human. As Ava\u2019s creator, Nathan (right), hopes, \u201cshe\u201d demonstrates true intelligence. But as Nathan warns Caleb, Ava\u2019s feelings and seductive qualities are manipulations. 
Ava is a machine, no matter how ingenious, that precipitates a calculatingly cold outcome.<\/figcaption><\/figure>\n\n\n\n<p>The question now is whether full lethal autonomy destroys that foundation. If technology matters more than people, then rights are completely \u201calienable.\u201d If software renders human life expendable, then it is a much smaller moral leap to indifference towards those replaced by automation. Without a career and the ability to earn a living and accumulate enough resources to start a business, there is neither liberty nor any pursuit of happiness. Ironically, a proposal that is gaining support in Silicon Valley\u2014where automation is being spearheaded\u2014is a basic guaranteed income. This might relieve some guilt, but it is neither affordable nor desirable. Work is essential to developing human potential. In fact, 78 percent of Swiss voters rejected a guaranteed-income proposal in a national referendum on June 5.<\/p>\n\n\n\n<p>We wrote the original article in The Wall Street Journal with urgency to provoke discussion of lethal autonomy (tally-ho! for robots) as a moral pitfall and gateway; otherwise it will soon become a fait accompli.<\/p>\n\n\n\n<p>The evening after crossing the Rubicon, Caesar dined with his officers and uttered the famous phrase, \u201cThe die is cast.\u201d Ominous words for our future\u2014if we fail to assert our humanity.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-style-default\"><p>Every individual&#8217;s intrinsic worth drowns in the Robo-Rubicon.<\/p><\/blockquote>\n","protected":false},"excerpt":{"rendered":"<p>Seven years ago, Maj. Gen. 
Robert Latiff and I wrote an opinion article for the Wall Street Journal, titled \u201cWith Drone Warfare, America Approaches the Robo-Rubicon.\u201d The Week, which reviews [&hellip;]<\/p>\n","protected":false},"author":71,"featured_media":117,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[28,29,25,156,22,218,23],"tags":[87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115],"_links":{"self":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/115"}],"collection":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/users\/71"}],"replies":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/comments?post=115"}],"version-history":[{"count":16,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/115\/revisions"}],"predecessor-version":[{"id":654,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/115\/revisions\/654"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/media\/117"}],"wp:attachment":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/media?parent=115"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/categories?post=115"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/tags?post=115"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}