{"id":1688,"date":"2023-03-13T21:06:27","date_gmt":"2023-03-14T02:06:27","guid":{"rendered":"https:\/\/dda.ndus.edu\/ddreview\/?p=1688"},"modified":"2023-04-18T16:22:16","modified_gmt":"2023-04-18T21:22:16","slug":"ai-security-the-national-security-commission-on-artificial-intelligence-and-adversarial-machine-learning","status":"publish","type":"post","link":"https:\/\/dda.ndus.edu\/ddreview\/ai-security-the-national-security-commission-on-artificial-intelligence-and-adversarial-machine-learning\/","title":{"rendered":"AI SECURITY: The National Security Commission on Artificial Intelligence and Adversarial Machine Learning"},"content":{"rendered":"\n<p class=\"has-drop-cap\">The National Security Commission on Artificial Intelligence (NSCAI) was established by Congress in 2018 to \u201cconsider the methods and means necessary to advance the development of artificial intelligence \u2026 to comprehensively address the national security and defense needs of the United States.\u201d<sup>i<\/sup>\u00a0And this it did. More than 19 of the recommendations from NSCAI\u2019s \u201cThe Final Report\u201d were included in the FY2021 National Defense Authorization Act, with dozens of other recommendations influencing Acts of Congress and executive orders related to national defense, intelligence, innovation and competition.<\/p>\n\n\n\n<p>Although the final report was released in March 2021\u2014and the commission sunset in October that year\u2014one element of the report is still widely underappreciated, especially outside of government circles. While the Department of Defense (DOD), as well as other government agencies ranging from the Department of Veteran Affairs<sup>ii<\/sup>\u00a0to the Internal Revenue Service,<sup>iii<\/sup>\u00a0have responded to the charge to <em>adopt <\/em>AI, very few government and commercial entities have invested in the calls to <em>secure <\/em>AI.<\/p>\n\n\n\n<p>Organizations that adopt AI also adopt AI\u2019s risks and vulnerabilities. 
Intentional attacks against AI are a nascent style of cyberattack by which an attacker can manipulate an AI system. So, while the U.S. does need more investment in AI, it also needs to decisively secure it from its adversaries.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI2.png\" alt=\"\" class=\"wp-image-1697\" width=\"198\" height=\"264\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI2.png 295w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI2-225x300.png 225w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI2-150x200.png 150w\" sizes=\"(max-width: 198px) 100vw, 198px\" \/><figcaption>Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them, by Ram Shankar Siva Kumar and Hyrum Anderson, PhD,<br>John Wiley &amp; Sons, Inc., 2023.<\/figcaption><\/figure><\/div>\n\n\n\n<p>In response, we wrote a book, <em>Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them<\/em>, which will be published by John Wiley &amp; Sons, Inc., on May 2, 2023. The book explains how AI systems are significantly at risk from attacks in both simple and sophisticated ways, which can jeopardize national security, as well as corporate and industry security. The purpose of the book is to provide decision-makers with context that will sharpen their critique as they embrace the power of AI in government and industry. We also offer recommendations on how the intersection of technology, policy and law can provide us with a secure future.<\/p>\n\n\n\n<p>Below is an excerpt from the book\u2019s first chapter, \u201cDo You Want to Be a Part of the Future?\u201d<\/p>\n\n\n\n<h1>NSCAI Genesis Story<\/h1>\n\n\n\n<p>Ylli Bajraktari is not a household name, but in national security circles, he has a reputation for getting things done. Andrew Exum, a former Deputy Secretary of Defense for the Middle East, described Bajraktari and his brother Ylber<sup>iv<\/sup>\u00a0as \u201ctwo of the most important and best people in the federal government you\u2019ve likely never heard of.\u201d<sup>v<\/sup>\u00a0Ylli Bajraktari escaped war-torn Kosovo and moved to the U.S. in his 20s. Burnished with <em>bona fide<\/em> credentials from Harvard\u2019s prestigious Kennedy School, he steadily rose through the ranks at the Pentagon, eventually becoming an advisor to the Deputy Secretary of Defense.<sup>vi<\/sup><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5.png\" alt=\"\" class=\"wp-image-1690\" width=\"235\" height=\"683\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5.png 341w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5-103x300.png 103w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5-150x435.png 150w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5-300x870.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI5-145x420.png 145w\" sizes=\"(max-width: 235px) 100vw, 235px\" \/><figcaption>Researchers asked an AI computer vision model, which typically performs well on image classification tasks, \u201cWhat is pictured in this image?\u201d To confuse the AI vision model, the researchers created images to maximize \u201cpenguin-ness,\u201d \u201csnakeness,\u201d and \u201cschoolbus-ness.\u201d As a result, the AI vision model was 99 percent confident that the top image represents a penguin, the middle image represents a green snake, and the bottom 
image represents a school bus. Courtesy of Ann Nguyen<\/figcaption><\/figure><\/div>\n\n\n\n<p>Whether it was fatigue that came from years of traveling the world to shape international policy or the pressure of working at the White House,<sup>vii<\/sup>\u00a0Bajraktari left the executive branch to join the National Defense University\u2019s (NDU) Institute for National Strategic Studies as a visiting research fellow. A year-long assignment from the Office of the Secretary of Defense, this break from the intensity of the White House gave him time to study and recalibrate at the NDU\u2019s libraries. He used some rare downtime for self-education on what he thought would be an instrumental cornerstone of U.S. competitiveness: artificial intelligence (AI). Bajraktari quickly absorbed the information he gleaned from poring over books and watching YouTube videos on machine learning (ML). While at NDU, he organized the university\u2019s first-ever AI symposium in November 2018. He expected a meager 10 people to attend, but the response was overwhelming: Bajraktari had to turn away hundreds of would-be attendees because of the room\u2019s fire code. While Bajraktari didn\u2019t know it yet, his time at DOD, the White House and now at NDU had been preparing him for a leadership role to shape the nation\u2019s strategy for investing in AI.<\/p>\n\n\n\n<p>That came in 2018, when the NSCAI was born out of the House Armed Services Committee. 
The leadership and guidance for the commission were to come from 15 appointed commissioners, a mix of tech glitterati that included current and former leadership at Google, Microsoft, Amazon, Oracle, and directors of laboratories at universities and research institutes that support national security.<sup>viii<\/sup>\u00a0It was an independent, temporary and bipartisan commission set up to study AI\u2019s national security implications.<\/p>\n\n\n\n<p>When NSCAI commissioner and former Google CEO Eric Schmidt called Bajraktari to ask him to lead the commission, Bajraktari didn\u2019t answer the phone. It was Christmas break. Plus, he had been primed by his White House days to ignore calls from unknown numbers. So eventually, the former CEO of Google resorted to the plebeian tactic: He sent Bajraktari an email. In it, Schmidt explained that the other commissioners had just voted him chair of NSCAI and that he needed somebody to run the commission\u2019s day-to-day operations. Bajraktari\u2019s email response was a simple one-liner: \u201cI\u2019ll do it.\u201d<sup>ix<\/sup><\/p>\n\n\n\n<p>With an ambitious goal, a tight deadline and a budget smaller than what it takes to air a 60-second Super Bowl commercial, Bajraktari assembled a team of over 130 staff members to deliver a series of reports that would culminate in NSCAI\u2019s final report.<\/p>\n\n\n\n<p>Even the initial findings packed a punch. Bajraktari and the two chairs of the commission headed to the White House to brief President Trump on the findings. Scheduled for 15 minutes, the meeting in the Oval Office lasted for nearly an hour. In December 2020, in the twilight of his presidency, President Trump signed an executive order entitled \u201cPromoting the Use of Trustworthy Artificial Intelligence in Government.\u201d<sup>x<\/sup><\/p>\n\n\n\n<p>Even with a change in the executive office, NSCAI\u2019s recommendations continued to make an impact. 
On July 13, 2021\u2014well into the Biden presidency\u2014the commission held a summit to discuss the final report at the Mayflower Hotel Ballroom in Washington. The Secretaries of Defense, Commerce and State, the National Security Advisor, and the Director of the Office of Science and Technology Policy all made in-person appearances and spoke to the masked and socially distanced audience members. It was a powerful signal from the American government to both its allies and adversaries that the U.S., from its highest levels of trade, diplomacy, defense, security, and science and technology, was ready to invest in AI and take the NSCAI\u2019s recommendations seriously.<\/p>\n\n\n\n<figure class=\"wp-block-image alignwide size-large\"><img loading=\"lazy\" width=\"1024\" height=\"586\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-1024x586.png\" alt=\"\" class=\"wp-image-1700\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-1024x586.png 1024w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-300x172.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-768x440.png 768w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-600x344.png 600w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-150x86.png 150w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-696x399.png 696w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-1068x612.png 1068w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11-734x420.png 734w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI11.png 1357w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption>AI can be fooled by carefully constructed patterns. 
What looks like a sweater designed around psychedelic, multicolor swirls is actually a carefully constructed pattern that fools object-recognition systems into failing to detect the wearer. However, the deceptive traits of this sweater work only in a controlled setting and are generally ineffective at avoiding surveillance in real-world settings. Courtesy of Zuxuan Wu and Tom Goldstein.<\/figcaption><\/figure>\n\n\n\n<h1>Adversarial ML: Attacks Against AI<\/h1>\n\n\n\n<p>Delivered in March 2021 to the president and Congress, the first page of NSCAI\u2019s final report includes a realistic assessment of where the country stands, which by the commission\u2019s own admission was uncomfortable to deliver. The NSCAI report was 756 pages long, but its opening lines summarize our unpreparedness. \u201cAmerica is not prepared to defend or compete in the AI era,\u201d it reads. \u201cThis is the tough reality we must face. And it is this reality that demands comprehensive, whole-of-nation action.\u201d To \u201cdefend \u2026 in the AI era\u201d certainly refers to holistic national security, but it also entails defending against vulnerabilities in AI itself, with defenses spelled out in numerous recommendations to ensure that \u201cmodels are resilient to \u2026 attempts to undermine AI-enabled systems.\u201d<sup>xi<\/sup><\/p>\n\n\n\n<p>But defend from <em>whom? <\/em>Defend from <em>what?<\/em><\/p>\n\n\n\n<p>Adversarial ML is sometimes called \u201ccounter AI\u201d in military circles. Distinct from the case where AI is used to empower an attacker (often called <em>offensive <\/em><em>AI<\/em>), in adversarial ML, the adversary is <em>targeting <\/em>an AI system as part of an attack. In this kind of cyberattack against AI, attackers actively subvert vulnerabilities in the ML system to accomplish their goal.<\/p>\n\n\n\n<p>What are the vulnerabilities? What kinds of attacks are possible? 
Consider a few scenarios:<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large\"><img loading=\"lazy\" width=\"389\" height=\"326\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI3.png\" alt=\"\" class=\"wp-image-1698\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI3.png 389w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI3-300x251.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI3-150x126.png 150w\" sizes=\"(max-width: 389px) 100vw, 389px\" \/><figcaption>To demonstrate the limitations of his custom-built autonomous vehicle vision system, artist James Bridle drew two concentric circles with table salt sprinkled on the pavement: The inner circle was a solid line, and the outer circle was a dashed line. The car quite readily drove into the circle but was unable to drive out, mistaking the implausible lane markings for valid traffic indicators. 
Source: Autonomous Trap 001 (James Bridle, 2017), courtesy of the artist.<\/figcaption><\/figure><\/div>\n\n\n\n<ul><li>Cyber threat actors insert out-of-place words into a malicious computer script, causing AI-based malware scanners to misidentify it as safe to run.<\/li><li>A fraudster issues payment using a personal check that appears to be written for $900, but the victim\u2019s automated bank teller recognizes and pays out only $100.<\/li><li>An eavesdropper issues a sequence of carefully crafted queries to an AI medical diagnosis assistant in order to reconstruct private information about a patient that was discarded after training\u2014but inadvertently memorized by\u2014the AI system.<\/li><li>An adversary corrupts public data on the internet used to train a facial recognition biometric authentication tool so that anyone wearing a panda sticker on their forehead is granted system access.<\/li><li>A corporation invests millions of dollars to develop proprietary AI technology, but a competitor replicates it for only $2,000 by recording the responses of the web service to carefully crafted queries.<\/li><\/ul>\n\n\n\n<p>All of these scenarios\u2014or scenarios very much like them\u2014have been demonstrated (in some cases, by the authors) against deployed, high-end commercial models. 
They are all a form of adversarial ML, which is not just subversive but also subterranean in our discourse.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" width=\"517\" height=\"703\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4.png\" alt=\"\" class=\"wp-image-1699\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4.png 517w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4-221x300.png 221w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4-150x204.png 150w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4-300x408.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI4-309x420.png 309w\" sizes=\"(max-width: 517px) 100vw, 517px\" \/><figcaption>AI systems can produce nonsensical results in novel or foreign contexts. In this image, the boxes show how the AI system recognizes objects. But when a polar bear is inserted into the image, the AI system becomes confused and falsely detects a car. Courtesy of Amir Rosenfeld.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Chances are you have heard more about deepfakes (a form of offensive AI) than adversarial ML. But adversarial ML attacks are an older, still pernicious threat that has become more serious as governments and businesses adopt AI.<\/p>\n\n\n\n<p>The NSCAI report was unequivocal on this point: \u201cThe threat is not hypothetical \u2026 adversarial attacks are happening and already impacting commercial ML systems.\u201d<\/p>\n\n\n\n<h1>ML Systems Don\u2019t Wobble, They Fold<\/h1>\n\n\n\n<p>To understand adversarial ML, we first need to understand how AI systems fail.<\/p>\n\n\n\n<p>An <em>unintentional failure <\/em>is the failure of an ML system with no deliberate provocation. 
This happens when a system produces a formally correct but often nonsensical outcome. Put differently, in unintentional failure modes, the system fails because of its inherent weirdness. In these cases, anomalous behavior often manifests itself as earnest but awkward \u201cAmelia Bedelia\u201d adherence to its designers\u2019 objectives. For instance, an algorithm that was trained to play Tetris learned how to pause the game indefinitely to avoid losing.<sup>xii<\/sup>\u00a0The learning algorithm penalized losing, so the AI did whatever was in its power to avoid that scenario. Scenarios like this are reminiscent of the Ig Nobel Prize: they first make you laugh and then make you think.<\/p>\n\n\n\n<p>But not all cases are humorous.<\/p>\n\n\n\n<p>The U.S. Air Force trained an experimental ML system to detect surface-to-surface missiles. At first, the system demonstrated an impressive 90 percent accuracy. But instead of getting a game-changing target recognition system, the Air Force learned a sobering lesson during field testing. \u201cThe algorithm did not perform well. It actually was accurate maybe about 25 percent of the time,\u201d an Air Force official remarked.<sup>xiii<\/sup>\u00a0It turns out that the ML system had been trained to detect only missiles flying at an oblique angle. The accuracy of the system plummeted when it was tested on vertically oriented missiles. Fortunately, this system was never deployed.<\/p>\n\n\n\n<p>Unintentional failure modes happen in ML systems without any provocation.<\/p>\n\n\n\n<p>Conversely, <em>intentional failure <\/em>modes feature an active adversary who deliberately causes the ML system to fail. It should come as no surprise that machines can be intentionally forced to make errors. 
Intentional failure modes are particularly relevant when one considers an adversary who stands to gain from a system\u2019s failure, whether by extracting the hidden secrets in training data memorized by the system or the intellectual property that enables the AI system to work. This branch of failures in AI systems is now generally called \u201cadversarial machine learning.\u201d Research in this area has roots in the 1990s, with work on maliciously tampered training sets, and in the 2000s, with early attempts to evade AI-powered email spam filters.<\/p>\n\n\n\n<p>But adversary capabilities exist on a spectrum. Many attacks require sophisticated knowledge of AI systems to pull off. But one need not always be a math whiz to attack an ML system. Nor does one need to wear the canonical hacker hoodie, sitting in a dark room in front of glowing screens. These systems can be intentionally duped by actors of varying levels of sophistication.<\/p>\n\n\n\n<p>The word \u201cadversary\u201d in adversarial machine learning instead refers to its original meaning in Latin, <em>adversus<\/em>, which literally means someone who \u201cturns against\u201d\u2014in this case, the assumptions and purposes of the AI system\u2019s original designers. When ML systems are built, designers make certain assumptions about the place and manner of the system\u2019s operation. Anyone who opposes these assumptions or challenges the norms upon which the ML model is built is, by definition, an adversary.<\/p>\n\n\n\n<p>Take the event held by the Algorithmic Justice League, a digital advocacy nonprofit founded by Joy Buolamwini, as an example. In 2021, the nonprofit held a workshop called \u201cDrag Vs AI\u201d in which participants painted their faces with makeup to fool a facial recognition system.<sup>xiv<\/sup>\u00a0When facial recognition systems are built, they are relatively insensitive to faces with \u201cregular\u201d amounts of makeup applied. 
But when one wears over-the-top, exaggerated makeup, it can cause the facial recognition system to misrecognize the individual. In this case, participants have upended the conditions and assumptions on which the model has been trained and have become its adversary.<\/p>\n\n\n\n<p>Text-based systems are equally fallible. It was not uncommon in the early days of AI-based resume screening for job seekers to pad their resumes with keywords relevant to the jobs they were seeking, colloquially called <em>keyword stuffing<\/em>. The rationale was that automated resume screeners were specifically looking for certain skills and keywords. The prevailing wisdom of keyword stuffers was to add the keywords to the resume in white font, invisible to human screeners but picked up by keyword scanners, to tilt the system in their favor. So, if an ML system is more likely to select an Ivy League grad, one may simply insert \u201cHarvard\u201d in white font\u2014invisible to the human reader but triggering to the system\u2014in the margin to coerce the system to promote the resume. Subverting the system\u2019s normal usage in this way would technically make one an adversary. Typos make a difference as well. \u201cRemove all buzzwords. 
Misspell them or put spaces between them\u201d\u2014that was the direction from a group using Facebook to promote ivermectin in a way that would escape Facebook\u2019s AI spam filters.<sup>xv<\/sup>\u00a0When the group found that the word ivermectin triggered Facebook\u2019s content moderation system, they resorted to simply \u201civm\u201d or used alternate words such as \u201cMoo juice\u201d and \u201chorse paste.\u201d<sup>xvi<\/sup><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-337x1024.png\" alt=\"\" class=\"wp-image-1701\" width=\"261\" height=\"794\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-337x1024.png 337w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-99x300.png 99w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-150x455.png 150w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-300x911.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/3-4-138x420.png 138w\" sizes=\"(max-width: 261px) 100vw, 261px\" \/><figcaption>According to AI vision systems, these patterns are recognized, respectively from top to bottom, as a jeep, a goldfish and a washing machine. These images were curated by researchers to demonstrate the limitations of the current state of computer vision. They assembled a selection of naturally confounding images to demonstrate that otherwise award-winning computer vision systems failed on some photographed images. Error rates skyrocketed to 98 percent for these naturally occurring adversarial examples. Courtesy of Dan Hendrycks.<\/figcaption><\/figure><\/div>\n\n\n\n<p>Sometimes, \u201cadversary\u201d can refer collectively to more than one person. 
In 2016, Microsoft released Tay, a Twitter bot that was supposed to emulate the personality of a teenager. Its purpose was to allow users to tweet at Tay to engage in a conversation with the bot as a playful publicity stunt. The ML system would parse the tweet as input and respond. Key to this system was that Microsoft Tay continually trained on new tweets \u201conline\u201d to improve its conversational ability. To prevent the bot from being misled or corrupted by conversations, Microsoft researchers taught it to ignore problematic conversations, but only within individual dialogues.<\/p>\n\n\n\n<p>And this is where things began to turn for the chatbot. In a matter of hours, Tay went from a sweet 16-year-old personality to an evidently Hitler-loving, misogynistic, bigoted bot. Pranksters from Reddit and 4chan had self-organized and descended on Twitter with the aim of corrupting Tay. Why? For fun, of course. They quickly discovered that Tay was referencing language from previous Twitter conversations, which could have a causative effect on Tay\u2019s statements. So, the trolls flooded Tay with racist tweets. Overwhelmed by the variety and volume of inappropriate conversations, Tay was automatically retrained to mirror the internet trolls, tweeting, \u201cHitler was right I hate the jews.\u201d With the company\u2019s image to consider, Microsoft decommissioned Tay within 16 hours of launching it.<sup>xvii<\/sup><\/p>\n\n\n\n<p>Microsoft had devised a plan to deal with corrupt conversations by a few individuals. But Microsoft was blindsided by this coordinated attack. This group of internet strangers became an adversary. 
This kind of coordinated poisoning attack\u2014corrupting an AI system by corrupting the training data it ingests\u2014is one of the attacks most feared by organizations, according to our survey in 2020.<sup>xviii<\/sup><\/p>\n\n\n\n<p>Then in 2022, Meta, Facebook\u2019s parent company, released an experimental chatbot called BlenderBot 3, which was \u201ccapable of searching the internet to chat about virtually any topic \u2026 through natural conversations and feedback from people.\u201d Before long, users found that the bot began parroting election conspiracies, claiming that Trump was still president after losing the election and \u201calways will be.\u201d<sup>xix<\/sup>\u00a0It became overtly antisemitic, saying that a Jewish cabal controlling the economy was \u201cnot implausible\u201d and that Jewish people were \u201coverrepresented among America\u2019s super-rich.\u201d<\/p>\n\n\n\n<p>\u201cAdversarial attacks are happening and already impacting commercial ML systems,\u201d warned the NSCAI report. As with traditional cyberattacks, the economic inevitability of that statement stems from two conditions: that the odds of discovering a vulnerability in an AI system are high and that there are motivated adversaries willing to exploit it.<\/p>\n\n\n\n<h1>Never Tell Me the Odds<\/h1>\n\n\n\n<p>When the NSCAI report was published, Jane Pinelis, PhD, was vindicated.<\/p>\n\n\n\n<p>Pinelis had been leading the DOD\u2019s Joint Artificial Intelligence Center team that was responsible for testing AI systems for failures. She knew intimately how brittle these systems are and had been trying to convince the Pentagon to take up the issue of defending AI systems more seriously.<\/p>\n\n\n\n<p>So, when NSCAI sounded the alarm about AI\u2019s dire straits and the implications for national security in a series of interim reports in 2019 and 2020, the issue took center stage. 
More importantly, the NSCAI report convinced Congress to allocate money so that experts such as Pinelis were better resourced to tackle this area. For 2021, Congress authorized $740.5 billion<sup>xx<\/sup>\u00a0for a vast number of national defense spending programs to modernize the U.S. military. One key element of that initiative focused on Trustworthy AI. Today, Pinelis is the Chief of AI Assurance at DOD,<sup>xxi\u00a0<\/sup>where her work revolves around justified confidence in AI systems to ensure that they work as intended, even in the presence of an adversary.<\/p>\n\n\n\n<p>Pinelis prefers \u201cjustified confidence\u201d over \u201ctrustworthiness\u201d because trust is something that is difficult to measure. Confidence, on the other hand, is a more mathematically tractable concept. Las Vegas bookmakers can establish reasonable odds of winning a boxing match. A meteorologist can estimate the odds of it raining tomorrow.<\/p>\n\n\n\n<p>So, roughly what are the odds of an attacker succeeding at an attack against an ML system?<\/p>\n\n\n\n<p>For this, we turned to University of Virginia Professor David Evans, who specializes in computer security. Evans first considered the possibility of hacking AI systems when one of his graduate students began experiments that systematically evaded ML models. 
When he began looking more into attacking AI systems, what struck him was the lax security relative to other computer systems.<\/p>\n\n\n\n<figure class=\"wp-block-image alignwide size-large\"><img loading=\"lazy\" width=\"1024\" height=\"326\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-1024x326.png\" alt=\"\" class=\"wp-image-1694\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-1024x326.png 1024w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-300x95.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-768x244.png 768w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-600x191.png 600w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-150x48.png 150w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-696x221.png 696w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-1068x340.png 1068w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9-1320x420.png 1320w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI9.png 1471w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image alignwide size-large\"><img loading=\"lazy\" width=\"641\" height=\"196\" src=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI10.png\" alt=\"\" class=\"wp-image-1695\" srcset=\"https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI10.png 641w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI10-300x92.png 300w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI10-600x183.png 600w, https:\/\/dda.ndus.edu\/ddreview\/wp-content\/uploads\/sites\/18\/2023\/03\/AI10-150x46.png 150w\" sizes=\"(max-width: 641px) 100vw, 
641px\" \/><figcaption>Researchers found that simply changing the hue (color) and saturation (color intensity) of images caused a record-holding AI vision system to misrecognize objects 94 percent of the time. In this example, the same image of a bird is recognized as an airplane, a dog and a frog when hue and saturation are adjusted. Courtesy of Hossein Hosseini.<\/figcaption><\/figure>\n\n\n\n<p>Encrypted forms of communication\u2014for example, as used in Facebook Messenger or online banking\u2014 are built upon methods designed to provide strong encryption. These encryption schemes would be considered totally broken if there were any way to guess the secret key more efficiently than by just trying every possible combination of keys. How hard is that? The odds of compromising modern day encryption by brute force is 1 in 10 followed by 39 zeros. If a more efficient method were discovered, the encryption scheme would be considered broken and unusable for any system.<\/p>\n\n\n\n<p>When designing operating systems that power everything from your laptop to your phone, Evans pointed out that for security protection to be considered acceptable, the odds of an attack succeeding against it should be less than 1 in 400 million.<\/p>\n\n\n\n<p>In both scenarios, \u201cjustified confidence\u201d in the security of these systems comes from a combination of analyses by experts, careful testing and the underlying fundamentals of mathematics. Although cybersecurity breaches are apparently becoming more prevalent, computing is more secure than it has ever been. We are the in Golden Age of secure computing.<\/p>\n\n\n\n<p>But when it comes to the security of AI, we are currently in the Stone Age. It is comically trivial to attack AI systems. We already saw how internet trolls can do it. 
But the more we dial the skill level up, the stealthier the attack gets.<\/p>\n\n\n\n<p>Modern ML systems are so fragile that even systems built using today\u2019s state-of-the-art techniques to make them robust can <em>still <\/em>be broken by an adversary with little effort, succeeding in roughly half of all attempted attacks. Is our tolerance for AI robustness really 200 million times less than our tolerance for operating system robustness? Indeed, today\u2019s ML systems are simply not built with the same security rigor as an operating system or the cryptography we expect for online chat apps such as WhatsApp or Facebook Messenger. Should an attacker choose to exploit them, most AI systems are sitting ducks.<\/p>\n\n\n\n<p>And, indeed, there are motivated adversaries who might wish to exploit AI\u2019s vulnerabilities.<\/p>\n\n\n\n<h1>AI\u2019s Achilles Heel<\/h1>\n\n\n\n<p>In his confirmation hearings for Secretary of Defense, U.S. Army General (Ret.) Lloyd J. Austin III called China a \u201cpacing threat,\u201d<sup>xxii<\/sup>\u00a0adding that China \u201cpresents the most significant threat going forward because China is ascending.\u201d<\/p>\n\n\n\n<p>The clear stance of the NSCAI report is that at present, there is no greater challenge to American AI dominance than China. That is what the NSCAI chairs reiterated to President Trump in their briefing. Bajraktari and the chairs repeated this message to Secretary of Defense Lloyd Austin and Deputy Secretary Kathleen Hicks in the Pentagon. Bajraktari and the group would again stress this point to the Office of the Director of National Intelligence. At every turn, they delivered a consistent and cogent message on the urgency of seizing the moment before China\u2019s AI ascension.<\/p>\n\n\n\n<p>For one thing, whatever the U.S. does, China is close at its heels. After the 2016 Cyber Grand Challenge by the U.S. 
government, China not only paid attention but held seven such competitions.<sup>xxiii<\/sup>\u00a0When the U.S. announced an AI system to help fighter pilots, China announced a similar system in less than a year.<sup>xxiv<\/sup>\u00a0When we (the authors) organized a competition to help defenders get experience attacking AI systems, the Chinese online marketplace company Alibaba took it to the next level. It held not only a similar competition but an entire series of challenges with much larger prizes.<\/p>\n\n\n\n<h4><strong><em><span class=\"has-inline-color has-vivid-red-color\">[T]he Chinese Army can cut off, manipulate or even overwhelm the \u201cnerves\u201d of American AI military systems with data deception, data manipulation and data exhaustion.<\/span><\/em><\/strong><\/h4>\n\n\n\n<p>China seems to be acutely aware of the possibility that AI systems can be attacked, including those used by the U.S. military. In a 2021 document<sup>xxv<\/sup>\u00a0used by the Chinese Army, American AI systems are specifically called out as susceptible to information manipulation and data poisoning. Ryan Fedasiuk, a research analyst at Georgetown University\u2019s Center for Security and Emerging Technology (CSET), noted that the Chinese document called the issue of data in AI systems the \u201cAchilles heel\u201d of the ML systems used by the U.S. Army.<sup>xxvi<\/sup>\u00a0The document notes that the Chinese Army can cut off, manipulate or even overwhelm the \u201cnerves\u201d of American AI military systems with data deception, data manipulation and data exhaustion. 
The Army Engineering University of the Chinese People\u2019s Liberation Army partnered with Alibaba and other Chinese universities and participated in the AI Security challenge to build skills in attacking ML systems.<sup>xxvii<\/sup><\/p>\n\n\n\n<p>The Chinese government routinely uses social media\u2014namely Facebook and Twitter\u2014to boost and bolster its authoritarian agenda by creating fake accounts to flood these platforms with counter-narratives, sometimes with the same message verbatim.<sup>xxviii<\/sup>\u00a0Unsurprisingly, social media giants have started to use AI to detect these spam accounts and shut them down. In 2021, reporting by the New York Times and ProPublica showed that more than 300 Chinese-backed bot accounts on Twitter posted a video attacking Secretary of State Mike Pompeo\u2019s stance supporting the Uyghurs.<sup>xxix<\/sup>\u00a0This is how three Twitter bots captioned the videos:<\/p>\n\n\n\n<p><strong>Twitter bot 1: the videos Pompeo most interested in (%<\/strong><\/p>\n\n\n\n<p><strong>Twitter bot 2: the videos Pompeo most interested in \u2018) (<\/strong><\/p>\n\n\n\n<p><strong>Twitter bot 3: the videos Pompeo most interested in ^ \u00a5 _<\/strong><\/p>\n\n\n\n<p>The random characters appended at the end of each tweet were sufficient to evade Twitter\u2019s AI-based spam filter that was tasked with detecting bot behavior. Such simple tricks work to confuse AI systems at even mature and well-provisioned companies.<\/p>\n\n\n\n<p>There is a deterrence corollary to China\u2019s framing of an AI Achilles heel. Andrew Lohn, Senior Fellow at Georgetown University\u2019s CSET, put it succinctly when he pointed out how the ability to hack AI systems \u201ccould provide another valuable arrow in the U.S. 
national security community\u2019s quiver.\u201d<sup>xxx<\/sup>\u00a0Such a capability could deter authoritarian regimes from developing or deploying AI systems\u2014an adversarial AI strategic deterrent. The U.S. has still not fully extended deterrence into the cyber domain<sup>xxxi<\/sup>\u00a0but could use adversarial machine learning as an important arrow in that quiver to nullify any potential gains from AI systems developed by authoritarian regimes. This seems to be unfolding already. One interesting hypothesis from Lohn is that the Russians did not field AI-based weapons in the war in Ukraine because they knew how susceptible they were to adversarial manipulation.<sup>xxxii<\/sup><\/p>\n\n\n\n<h1>Defense Can Lead the Way in AI Security<\/h1>\n\n\n\n<p>Government agencies are not alone in producing critical ML systems that may be vulnerable to attack. Companies\u2019 urgency to adopt AI often breeds lax security standards. \u201cTo create models quickly, researchers frequently have relaxed standards for developing safe, reliable and validated algorithms,\u201d a study of the people and organizations building AI tools for COVID diagnosis found.<sup>xxxiii<\/sup><\/p>\n\n\n\n<p>In its final report, NSCAI put forth a series of strongly worded recommendations. \u201cWith rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development.\u201d The report recommended \u201cthat at a minimum\u201d seven organizations pay attention, including the Department of Homeland Security, DOD, FBI and State Department.<\/p>\n\n\n\n<p>Many of the recommendations around AI Security involve testing and evaluation (T&amp;E), verification and validation (collectively, TEVV) practices and frameworks. 
\u201cAll government agencies,\u201d the report stated, \u201cwill need to develop and apply an adversarial ML threat framework to address how key AI systems could be attacked and should be defended.\u201d The NSCAI\u2019s recommendations include calls to make \u201cTEVV tools and capabilities readily available across the DOD.\u201d Also recommended are \u201cdedicated red teams for adversarial testing\u201d to make AI systems violate rules of appropriate behavior, exploring the boundaries of AI risk.<\/p>\n\n\n\n<p>Congress and DOD have started to respond. The FY2021 and FY2022 National Defense Authorization Acts implemented many of the recommendations to invest in and adopt AI, and the government is now at the very beginning of efforts to secure it. For example, in late November 2022, the Chief Digital and Artificial Intelligence Office issued an open \u201cCall to Industry\u201d for a comprehensive suite of AI T&amp;E tools that includes tools specifically designed to measure adversarial robustness.<sup>xxxiv<\/sup><\/p>\n\n\n\n<p>As has often been the case in cybersecurity and risk management, the government is leading the charge to secure AI systems. Recommendations from the NSCAI report have been an instrumental warning voice. It was as if NSCAI were awakening these high-stakes organizations to the plausible threat of attack on their AI systems. 
In an enigmatic voice reminiscent of the Oracle of Delphi, the NSCAI report directs critical agencies to \u201c[f]ollow and incorporate advances in intentional and unintentional ML failures.\u201d<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2>References<\/h2>\n\n\n\n<p><sup>i<\/sup> https:\/\/www.nscai.gov\/2021\/09\/23\/nscai-to-sunset-in-october\/<\/p>\n\n\n\n<p><sup>ii<\/sup> https:\/\/www.nextgov.com\/emerging-tech\/2022\/11\/adoption-ai-health-care-relies-building-trust-dod-va-officials-say\/379323\/<\/p>\n\n\n\n<p><sup>iii<\/sup> https:\/\/federalnewsnetwork.com\/artificial-intelligence\/2020\/03\/ai-as-ultimate-auditor-congress-praises-irss-adoption-of-emerging-tech\/<\/p>\n\n\n\n<p><sup>iv<\/sup> https:\/\/medium.com\/@RPublicService\/feds-at-work-right-hand-men-to-the-pentagons-top-officials-ca99b6c93fbf<\/p>\n\n\n\n<p><sup>v<\/sup> https:\/\/www.theatlantic.com\/international\/archive\/2018\/01\/trump-foreign-policy\/549671\/<\/p>\n\n\n\n<p><sup>vi<\/sup> https:\/\/medium.com\/@RPublicService\/feds-at-work-right-hand-men-to-the-pentagons-top-officials-ca99b6c93fbf<\/p>\n\n\n\n<p><sup>vii<\/sup> https:\/\/www.vanityfair.com\/news\/2018\/04\/inside-trumpworld-allies-fear-the-boss-could-go-postal-and-fire-mueller<\/p>\n\n\n\n<p><sup>viii<\/sup> https:\/\/www.nscai.gov\/commissioners\/<\/p>\n\n\n\n<p><sup>ix<\/sup> Interview with Bajraktari<\/p>\n\n\n\n<p><sup>x<\/sup> https:\/\/trumpwhitehouse.archives.gov\/articles\/promoting-use-trustworthy-artificial-intelligence-government\/<\/p>\n\n\n\n<p><sup>xi<\/sup> NSCAI\u2019s \u201cThe Final Report,\u201d https:\/\/www.nscai.gov\/2021-final-report\/<\/p>\n\n\n\n<p><sup>xii<\/sup> Murphy VII, Tom. \u201cThe first level of super mario bros. is easy with lexicographic.\u201d (2013).<\/p>\n\n\n\n<p><sup>xiii<\/sup> https:\/\/www.defenseone.com\/technology\/2021\/12\/air-force-targeting-ai-thought-it-had-90-success-rate-it-was-more-25\/187437\/<\/p>\n\n\n\n<p><sup>xiv<\/sup> https:\/\/www.ajl.org\/drag-vs-ai<\/p>\n\n\n\n<p><sup>xv<\/sup> https:\/\/www.nytimes.com\/2021\/09\/28\/technology\/facebook-ivermectin-coronavirus-misinformation.html<\/p>\n\n\n\n<p><sup>xvi<\/sup> https:\/\/www.nbcnews.com\/tech\/tech-news\/ivermectin-demand-drives-trump-telemedicine-website-rcna1791<\/p>\n\n\n\n<p><sup>xvii<\/sup> https:\/\/www.theguardian.com\/technology\/2016\/mar\/24\/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter<\/p>\n\n\n\n<p><sup>xviii<\/sup> R. S. S. Kumar, M. Nystrom, J. Lambert, A. Marshall, M. Goertzel, A. Comissoneru, M. Swann, and S. Xia, \u201cAdversarial machine learning&#8211;industry perspectives,\u201d in IEEE Security and Privacy Workshops, 2020.<\/p>\n\n\n\n<p><sup>xix<\/sup> https:\/\/www.bloomberg.com\/news\/articles\/2022-08-08\/meta-s-ai-chatbot-repeats-election-and-anti-semitic-conspiracies<\/p>\n\n\n\n<p><sup>xx<\/sup> https:\/\/www.armed-services.senate.gov\/press-releases\/inhofe-reed-praise-senate-passage-of-national-defense-authorization-act-of-fiscal-year-2021<\/p>\n\n\n\n<p><sup>xxi<\/sup> https:\/\/www.forbes.com\/sites\/markminevich\/2022\/03\/23\/ai-visionary-and-leader-dr-jane-pinelis-of-the-us-department-of-defense\/?sh=5b121a4b5aa5<\/p>\n\n\n\n<p><sup>xxii<\/sup> https:\/\/www.foxbusiness.com\/politics\/biden-defense-chief-china-pacing-amid-ascendancy<\/p>\n\n\n\n<p><sup>xxiii<\/sup> https:\/\/cset.georgetown.edu\/publication\/robot-hacking-games\/<\/p>\n\n\n\n<p><sup>xxiv<\/sup> https:\/\/breakingdefense.com\/2021\/11\/china-invests-in-artificial-intelligence-to-counter-us-joint-warfighting-concept-records\/<\/p>\n\n\n\n<p><sup>xxv<\/sup> https:\/\/perma.cc\/X9KQ-4B9L<\/p>\n\n\n\n<p><sup>xxvi<\/sup> https:\/\/breakingdefense.com\/2021\/11\/china-invests-in-artificial-intelligence-to-counter-us-joint-warfighting-concept-records\/<\/p>\n\n\n\n<p><sup>xxvii<\/sup> Chen, Yuefeng, et al. \u201cUnrestricted adversarial attacks on imagenet competition.\u201d arXiv preprint arXiv:2110.09903 (2021).<\/p>\n\n\n\n<p><sup>xxviii<\/sup> https:\/\/www.nytimes.com\/interactive\/2021\/12\/20\/technology\/china-facebook-twitter-influence-manipulation.html<\/p>\n\n\n\n<p><sup>xxix<\/sup> https:\/\/www.nytimes.com\/interactive\/2021\/06\/22\/technology\/xinjiang-uyghurs-china-propaganda.html<\/p>\n\n\n\n<p><sup>xxx<\/sup> https:\/\/cset.georgetown.edu\/publication\/hacking-ai\/<\/p>\n\n\n\n<p><sup>xxxi<\/sup> https:\/\/www-msnbc-com.cdn.ampproject.org\/c\/s\/www.msnbc.com\/msnbc\/amp\/shows\/reidout\/blog\/rcna48322<\/p>\n\n\n\n<p><sup>xxxii<\/sup> https:\/\/www.forbes.com\/sites\/erictegler\/2022\/03\/16\/the-vulnerability-of-artificial-intelligence-systems-may-explain-why-they-havent-been-used-extensively-in-ukraine\/?sh=1f685d7637d5<\/p>\n\n\n\n<p><sup>xxxiii<\/sup> https:\/\/pubs.rsna.org\/doi\/full\/10.1148\/ryai.2021210011<\/p>\n\n\n\n<p><sup>xxxiv<\/sup> 
https:\/\/go.ratio.exchange\/exchange\/opps\/challenge_detail. cfm?i=46C49763-B80B-4AF9-80AD-053F2B2095EF<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The National Security Commission on Artificial Intelligence (NSCAI) was established by Congress in 2018 to \u201cconsider the methods and means necessary to advance the development of artificial intelligence \u2026 to [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":1696,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[21,19,4,232,25,219,522,230,223,22,30,226,218,217],"tags":[456,304,448,455,464,472,451,462,460,461,468,449,450,470,467,454,463,473,466,452,445,459,453,458,469,471,311,447,465,446,457],"_links":{"self":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/1688"}],"collection":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/comments?post=1688"}],"version-history":[{"count":4,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/1688\/revisions"}],"predecessor-version":[{"id":1718,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/posts\/1688\/revisions\/1718"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/media\/1696"}],"wp:attachment":[{"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/media?parent=1688"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/categories?post=1688"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dda.ndus.edu\/ddreview\/wp-json\/wp\/v2\/tags?post=1688"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true
}]}}