Dakota Digital Review

Artificial Intelligence, Cyberattacks & the Next Cold War

Premier Issue

Jeremy Straub, PhD, MBA, Assistant Professor of Computer Science, North Dakota University System

It’s easy to confuse the current geopolitical situation with that of the 1980s, near the end of the Cold War, when tensions between the Soviet Union and the United States were at their highest. The Cold War began after World War II and ended with the dissolution of the Soviet Union in the early 1990s.i During this period, the Soviet Union and its allies and the U.S. and its allies built advanced weaponry in anticipation of military conflict and even World War III.ii The weapon of choice then was nuclear missiles; today it’s software, whether used for attacking computer systems or targets in the real world.ii

Russian rhetoric about the importance of artificial intelligence (AI) is picking up—and with good reason. As AI software develops, it will be able to make decisions based on more data, more quickly than humans.

The next major cyberattack could involve AI systems. At a 2017 cybersecurity conference, 62 industry professionals (out of 100 questioned) predicted that the first AI-enhanced cyberattack could come in 2018.iii The recent SolarWinds attack demonstrated the use of a commandeered automated delivery system, effectively attacking organizations’ systems from behind their defensive perimeter.

This doesn’t mean robots will be marching down Main Street. Rather, AI will make existing cyberattack efforts—such as identity theft, denial-of-service and password cracking—more powerful and efficient. Larger attacks could turn off power, shut down hospitals and disable weapons systems.iv

Interpreting human actions is still difficult for AIs, and humans don’t trust AI systemsiv to make major decisions. Unlike movie portrayals, AI offensive and defensive capabilities won’t soon enable computers to choose and attack targets on their own. People must still create AI systems and launch them at particular targets. Nevertheless, adding AI to today’s cybercrime and cybersecurity world will escalate what is already a rapidly changing arms race between attackers and defenders.iv

Modern Cold War

As in the Cold War, each side fears its opponent gaining a technological advantage. In a recent meeting at the Strategic Missile Academy near Moscow, Russian President Vladimir Putin suggested that AI may enable Russia to rebalance the power shiftv created by the U.S. outspending Russia nearly 10-to-1 yearly on defense. Russia’s state-sponsored RT media reportedv that AI is “key to Russia beating [the] U.S. in defense.”

Putin has said AI is “the future, not only for Russia, but for all humankind.”vi In September 2017, he told students that the nation that “becomes the leader in this sphere will become the ruler of the world.”vii Putin isn’t saying he’ll hand over the nuclear launch codes to a computer; he’s talking about many other uses for AI.

This sounds remarkably like Cold War rhetoric, when Americans and the Soviets built up enough nuclear weapons to kill everyone on Earth many times over.ii This arms race led to the concept of mutual assured destruction (MAD). Instead of attacking, both sides stockpiled weapons and dueled indirectly via smaller armed conflicts and political disputes.ii

Since then, both sides have decommissioned tens of thousands of nuclear weapons.ii However, tensions are growing again. Both countries have expelled the other’s diplomats on multiple occasions. Russia annexed Crimea in 2014. The Turkey-Syria border war has been called a “proxy war” between the U.S. and Russia.ii 

Hopefully, MAD will continue to prevent nuclear war. However, AI-enhanced conflicts are likely to emerge.

A World of Cyberconflict

Cyberweapons, including those powered by AI, are considered fair game by both sides.ii

Russia and Russian-supporting hackers have spied electronically, launched cyberattacks against power plants, banks, hospitals and transportation systems in the U.S., Ukraine and elsewhere—and even against American elections.ii Russian cyberattackers have also targeted U.S. allies such as Britain and Germany.ii

The U.S. is certainly capable of responding and might have already done so.ii

Use of AI for Weapons Control

Threats posed by surprise attacks from ship- and submarine-based nuclear weapons and conventional weapons placed near national borders might lead some countries to entrust self-defense tactics—including launching counterattacks—to the rapid decision-making capabilities of an AI system.ii

In case of an attack, AI can act and react more quickly and without the potential hesitation or dissent of a human operator.ii There is also an inherent economic aspect to AI operations: Once AIs are developed, they can be used over and over, replacing the need for numerous human hackers while delivering the same or superior effect. AI attackers can even reside on compromised systems, thus attacking from behind enemy lines.

A fast, automated response capability could alert potential adversaries that a nation is ready and willing to launch, which is key to MAD’s deterrent effectiveness.

AI can also be used to control non-nuclear weapons, including unmanned vehicles such as drones and cyberweapons. Unmanned vehicles must be able to operate while their communications are jammed, otherwise impaired or out of range. This requires onboard AI control, which also prevents a targeted group from stopping a drone attack by destroying its control facility, because control is distributed, both physically and electronically.ii As well, reacting to cyberweapons might require such rapid responses that they would be best launched and controlled by AI systems.ii

AI-coordinated attacks can launch cyber or real-world weapons almost instantly, making the decision to defend or counterattack necessary before a human operator would even be able to notice the incursion. AI systems can change targets and techniques faster than humans can comprehend, much less analyze. For instance, an AI system might launch a drone to attack a factory, observe drones responding to defend and launch a cyberattack on those drones, with no noticeable pause.

The Impact of Cyberattacks

So far, most of the well-known hacking incidents, even those with foreign government backing, have done little more than steal data.viii Unfortunately, there are signs that hackers have placed malicious software inside U.S. power and water systems, where it lies in wait, ready to be triggered.viii The U.S. military has also reportedly penetrated the computers that control the Russian electrical grid.viii

A cyberattack with widespread impact, an intrusion in one area that spreads to others, or a combination of many smaller attacks could cause significant damage, including mass injury and mortality rivaling the death toll of a nuclear weapon.viii

Unlike a nuclear weapon, which would vaporize people within 100 feet and kill almost everyone within a half-mile,ix the death toll from most cyberattacks would mount more slowly. People might die from a lack of food, power or gas for heat, or from car crashes resulting from a corrupted traffic light system.viii Spread over a wide area, such failures could still cause mass injury and death.

This might sound alarmist, but consider what has been happening in recent years in the U.S. and around the world. In early 2016, hackers took control of a U.S. treatment plant for drinking water and changed the chemical mixture used to purify water.viii If the changes had not been detected, there might have been poisonings and an unusable water supply.

In 2016 and 2017, hackers shut down major sections of Ukraine’s power grid.viii The attack was relatively mild; no equipment was destroyed, though the attackers had the ability to destroy it. Ukrainian officials think it was designed to send a message, possibly from the Russians.x In 2018, unknown cybercriminals gained access to the United Kingdom’s entire electricity system; in 2019, a similar incursion may have penetrated the U.S. grid.viii

In August 2017, a Saudi Arabian petrochemical plant was hit by hackers who tried to blow up equipment by taking control of the same types of electronics used in industrial facilities of all kinds throughout the world.viii Just a few months later, hackers shut down monitoring systems for oil and gas pipelines across the U.S.viii This only caused logistical problems, but it showed how an insecure contractor’s systems could potentially cause problems for primary systems.

The FBI has even warned that hackers are targeting nuclear power facilities.xi A compromised nuclear facility could result in the discharge of radioactive material or chemicals, or possibly even a reactor meltdown.viii A cyberattack could cause an event similar to the incident in Chernobyl.viii That explosion, caused by human error, resulted in 50 deaths, the evacuation of 120,000 people and elevated rates of birth defects for years afterwards.xii Parts of the region will remain uninhabitable for thousands of years.xii

Few Deterrents to Cyberattacks

The point here is not to downplay the devastating effects of a nuclear attack, but rather to highlight that the inhibitions against nuclear conflicts aren’t as strong for cyberattacks.xiii For instance, MAD deters a country from launching nuclear weapons at another nuclear-armed nation. The launch would likely be detected, and the targeted nation would launch its weapons in response. Both nations would be obliterated.

Cyberattackers have far fewer inhibitions. It’s much easier to disguise the origin of a digital incursion than conceal the source of a missile launch. Further, cyberwarfare can start small, targeting even a single phone or laptop.viii Larger attacks might target businesses, such as banks and hotels, or a government agency.viii But those incursions typically wouldn’t escalate a conflict to the nuclear level.

Nuclear-Grade Cyberattacks

There are three basic scenarios for how a nuclear-grade cyberattack might develop.viii It could start modestly, with one country’s intelligence service stealing, deleting or compromising another nation’s military data. Successive rounds of retaliation could expand the scope of attacks and the severity of damage to civilian life.

In another scenario, a nation or a terrorist organization could unleash a massively destructive cyberattack, targeting several electric utilities, water-treatment facilities or industrial plants, or a combination of them to compound the damage.

Perhaps the most concerning possibility is that this might happen by mistake. On several occasions during the Cold War, human and mechanical errors very nearly destroyed the world,xiv as illustrated in the movie “WarGames.”xv Something analogous could happen in the software and hardware of the digital realm.

The Importance of AI Development

Widespread use of AI-powered cyberattacksxvi may still be some time away, but a nation that thinks its adversaries have or will get AI weapons will want to get them too.

Countries might agree to a proposed Digital Geneva Convention to limit AI conflict.ii But that won’t stop AI attacks by independent nationalist groups, militias, criminal organizations, terrorists and others.ii As well, countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon, and that everyone else will then do so, if only to defend themselves.

Nations that don’t embrace AI or that restrict its development risk becoming unable to compete, economically or militarily, with countries wielding advanced AIs, such as Russia or the U.S.ii Such AIs create massive advantages for a nation’s industrial and business sectors, as well as its military. Perhaps most importantly, the development of sophisticated AIs in multiple countries could provide a deterrent against attacks,xvii similar to MAD’s success.

Faster Attacks

Computers need neither food nor sleep, limitations that constrain human hackers even when they work in teams. Beyond this, automation can make complex attacks much faster and more effective.

To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs the ability to self-replicate, spreading from computer to computer without specific human instructions.iv In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on several computers or devices to overwhelm servers. The attack that shut down large sections of the internet in October 2016xviii used this type of approach. In some cases, attacks are made available as a script that allows an unsophisticated user to choose a target and launch an attack.
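The distributed attacks described above can be illustrated with a toy simulation. This is a hypothetical sketch, not real attack code: each thread stands in for one compromised device, and the “server” is just a counter with a fixed capacity, showing how many small automated actors together overwhelm a single target.

```python
import threading

class MockServer:
    """A stand-in for a real server: it can serve only a fixed number of requests."""
    def __init__(self, capacity):
        self.capacity = capacity      # requests the server can handle
        self.handled = 0              # requests actually served
        self.dropped = 0              # requests turned away once overwhelmed
        self._lock = threading.Lock()

    def request(self):
        with self._lock:
            if self.handled < self.capacity:
                self.handled += 1
            else:
                self.dropped += 1     # server is overwhelmed

def simulate_distributed_attack(num_devices, requests_each, server):
    """Each 'device' fires its requests concurrently; together they exceed capacity."""
    def device():
        for _ in range(requests_each):
            server.request()
    threads = [threading.Thread(target=device) for _ in range(num_devices)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

server = MockServer(capacity=100)
simulate_distributed_attack(num_devices=50, requests_each=10, server=server)
print(server.handled, server.dropped)  # 100 400
```

No single device sends an unusual amount of traffic, which is exactly what makes distributed attacks hard to filter at the source.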

AI, however, could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require perpetrators to have personal information about prospective targets, such as where they bank or what medical insurance company they use.iv AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch many smaller attacks that go unnoticed for long periods of time—if detected at all—due to their more limited impact.
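The data-correlation step described above can be sketched in a few lines. All names, sources and records here are invented for illustration: records leaked from different sources are joined on a shared identifier (here, an email address) to build richer profiles than any single breach provides.

```python
# Invented sample data standing in for records from two separate breaches.
bank_breach = [
    {"email": "pat@example.com", "bank": "First Prairie Bank"},
    {"email": "sam@example.com", "bank": "Red River Credit Union"},
]
insurance_breach = [
    {"email": "pat@example.com", "insurer": "Plains Health"},
]

def correlate(*sources):
    """Merge records from many sources into per-person profiles, keyed by email."""
    profiles = {}
    for source in sources:
        for record in source:
            profiles.setdefault(record["email"], {}).update(record)
    return profiles

profiles = correlate(bank_breach, insurance_breach)
print(profiles["pat@example.com"])
# {'email': 'pat@example.com', 'bank': 'First Prairie Bank', 'insurer': 'Plains Health'}
```

Done at scale and automatically, this kind of joining is what turns scattered leaks into the targeted personal detail a spearphishing message needs.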

AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from a bank account until long after the cyberthief has gotten away.

Improved Adaptation

AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. AI may be able to exploit another vulnerability or start scanning for new ways into the system without waiting for human instructions.
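A toy model can make this adaptive behavior concrete; every exploit and system name below is invented. The automated attacker cycles through known entry points and, when defenders patch one, simply moves to the next candidate without waiting for human instructions.

```python
def try_exploit(system, exploit):
    """Succeeds only if the system has that weakness and it is still unpatched."""
    return exploit in system["vulnerabilities"] and exploit not in system["patched"]

def adaptive_attack(system, known_exploits):
    """Try each known route in turn; stop at the first that still works."""
    attempts = []
    for exploit in known_exploits:
        attempts.append(exploit)
        if try_exploit(system, exploit):
            return exploit, attempts  # entry achieved
    return None, attempts             # all known routes closed

system = {
    "vulnerabilities": {"weak-password", "unpatched-service"},
    "patched": {"weak-password"},     # defenders just fixed this one
}
entry, attempts = adaptive_attack(
    system, ["weak-password", "unpatched-service", "phishing"]
)
print(entry, attempts)  # unpatched-service ['weak-password', 'unpatched-service']
```

The loop runs in microseconds; a human defender patching one hole at a time cannot match that tempo.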

This could mean that human defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a programming and technological arms race,xvii with defenders developing AI assistants to identify and protect against attacks—and perhaps adopting AI with retaliatory attack capabilities.xix
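The defensive side of that arms race can be sketched just as simply. This is a minimal, hypothetical example of automated screening, with all traffic numbers and the threshold invented: a monitor flags any minute whose request count spikes far above the recent baseline, the kind of check an AI-assisted defender would run continuously rather than on human demand.

```python
import statistics

def flag_anomalies(request_counts, factor=3.0):
    """Flag indices whose request count exceeds factor times the median baseline."""
    baseline = statistics.median(request_counts)
    return [i for i, count in enumerate(request_counts)
            if count > factor * baseline]

# Requests per minute: steady traffic, then a sudden burst at index 5.
traffic = [100, 110, 95, 105, 98, 900, 102]
print(flag_anomalies(traffic))  # [5]
```

Real defensive systems use far richer models, but the principle is the same: the machine watches constantly and reacts at machine speed.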

Avoiding the Dangers

Operating autonomously could lead an AI system to attack a system it shouldn’t or to cause unexpected damage.iv For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for unmanned aerial vehicles to operate autonomouslyxx has raised similar questions about the need for humans to make decisions about targets.xxi

The consequences and implications are significant, but most people won’t notice a big change from a conventional cyberattack when the first AI attack is unleashed. For most of those affected, the outcome will be the same as human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grows.

This article is based on three articlesxxii that were originally published by The Conversation.

Jeremy Straub, PhD, MBA, is an Assistant Professor in the North Dakota State University Department of Computer Science and an NDSU Challey Institute Faculty Fellow. His research spans a continuum from autonomous technology development to technology commercialization to questions of technology-use ethics and national and international policy. He has published more than 60 articles in academic journals and more than 100 peer-reviewed conference papers. Straub serves on multiple editorial boards and conference committees. He is also the lead inventor on two U.S. patents and a member of multiple technical societies.

i House, J.M., A Military History of the Cold War, 1962–1991, University of Oklahoma Press: Norman, OK, 2020.

ii For a discussion of this topic and reference material, see: Straub, J. “Artificial Intelligence is the Weapon of the Next Cold War,” The Conversation, 2018.

iii “Cylance Team Black Hat Attendees See AI as Double- Edged Sword,” available online: https://blogs.blackberry. com/en/2017/08/black-hat-attendees-see-ai-asdouble- edged-sword (accessed on Jan 7, 2021).

iv For a discussion of this topic and reference material, see: Straub, J., “Artificial Intelligence Cyber Attacks are Coming—but What Does That Mean?” The Conversation, 2017.

v “Brains over bucks: Putin hints AI may be key to Russia beating US in defense despite budget gap,” RT World News 2017.

vi Meyer, D., “Vladimir Putin Says Whoever Leads in Artificial Intelligence Will Rule the World,” available online: https://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/ (accessed on Jan 7, 2021).

vii “‘Whoever leads in AI will rule the world:’ Putin to Russian children on Knowledge Day,” available online: https://www.rt.com/news/401731-ai-rule-world-putin/ (accessed on Jan 7, 2021).

viii For a discussion of this topic and reference material, see: Straub, J., “A Cyberattack Could Wreak Destruction Comparable to a Nuclear Weapon,” The Conversation, 2019.

ix Jabr, F., “What a Nuclear Attack in New York Would Look Like,” New York Magazine, 2018.

x Zetter, K., “Inside the Cunning, Unprecedented Hack of Ukraine’s Power Grid,” WIRED, 2016.

xi Perlroth, N., “Hackers Are Targeting Nuclear Facilities, Homeland Security Dept. and F.B.I. Say,” New York Times, 2017.

xii Taylor, A., “Still Cleaning Up: 30 Years After the Chernobyl Disaster,” The Atlantic, 2016.

xiii Straub, J., “Mutual assured destruction in information, influence and cyber warfare: Comparing, contrasting and combining relevant scenarios,” Technology in Society, 2019, 59, 101177.

xiv Clark, J., “5 Cold War Mistakes That Nearly Killed Us All,” Task & Purpose, 2016.

xv “WarGames” (1983), available online: https://www.imdb.com/title/tt0086567/ (accessed on Jan 7, 2021).

xvi Welsh, S., “AI researchers should not retreat from battlefield robots, they should engage them head-on,” The Conversation, 2015.

xvii Straub, J., “Consideration of the use of autonomous, non-recallable unmanned vehicles and programs as a deterrent or threat by state actors and others,” Technology in Society, 2016, 44.

xviii Cobb, S., “10 things to know about the October 21 IoT DDoS attacks,” available online: https://www.welivesecurity.com/2016/10/24/10-things-know-october-21-iot-ddos-attacks/ (accessed on Jan 7, 2021).

xix Denning, D., “Cybersecurity’s next phase: Cyber-deterrence,” The Conversation, 2016.

xx Prescott, J.M., “Autonomous decision-making processes and the responsible cyber commander,” in Proceedings of the 2013 5th International Conference on Cyber Conflict (CyCon), IEEE, 2013, pp. 1–18.

xxi Docherty, B., “Losing control: The dangers of killer robots,” The Conversation, 2016.

xxii Straub, J., “Artificial Intelligence is the Weapon of the Next Cold War,” The Conversation, 2018; Straub, J., “Artificial Intelligence Cyber Attacks are Coming—but What Does That Mean?” The Conversation, 2017; Straub, J., “A Cyberattack Could Wreak Destruction Comparable to a Nuclear Weapon,” The Conversation, 2019.
