[This story has been revised and updated as of February 2, 2017.]

Episode Summary: Cyber security is closely linked to advances in artificial intelligence. In this episode, we speak with Dr. Roman Yampolskiy about the cyber security factors and risks associated with AI. How does AI create security risks, and how can it be used to combat them? We also look to the future to discuss some of the potential ‘super’ AI risks to cyber security, and briefly consider what can be done now to hedge against known and unknown threats.

Guest: Dr. Roman V. Yampolskiy

Expertise: Computer Science and Engineering, Artificial Intelligence

Recognition in Brief: Roman Yampolskiy is a tenured associate professor and computer scientist at the Speed School of Engineering, University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and AI safety. He holds a PhD from the University at Buffalo and is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.

Yampolskiy is the author of some 100 publications, including numerous books, the most recent of which is Artificial Superintelligence: A Futuristic Approach. His work is frequently covered in popular media such as the BBC, MSNBC, Yahoo, and other news and radio outlets. He has received multiple teaching recognitions, including Distinguished Teaching Professor; Professor of the Year; Leader in Engineering Education; Top 4 Faculty; and the Outstanding Early Career in Education award.

Current Affiliations: University of Louisville; IEET; AGI; Kentucky Academy of Science; Research Advisor for MIRI; and Associate of GCRI

Intelligent Autonomous Systems

Intelligent autonomy over actions is a hallmark of human beings. We accept that other humans have the same type of control, which flavors our world with both diversity and unpredictability. But what happens when AI systems gain a similar level of autonomy over systems that we have put in place? Will their goals align or clash with ours? What are the potential risks and benefits?

This type of control is already playing out in the world of software and algorithmic systems, where there are high levels of security risk, both from the humans who control them and, potentially, from the automated systems themselves. Dr. Roman Yampolskiy’s research focuses on potential AI security risks, their ramifications for society, and ways to prevent such risks from being realized (at least at catastrophic levels). Yampolskiy’s stance was echoed in his response to TechEmergence’s 2015 Expert Poll on Machine Consciousness and AI Risk, in which he stated his belief that a malicious AI is society’s biggest threat within the next 20 years.

“You have intelligent systems trying to get access to resources, maybe through free email accounts, maybe participating in free online games…they take over the games and get the money out as quickly as possible. Some of the work I did is in profiling such systems, detecting them, preventing them,” says Yampolskiy.

In certain domains, such as finance and the military (though not limited to these sectors), the implications are insidious and potentially disastrous. There are systems, says Yampolskiy, that engage in stock trades and try to manipulate the market toward certain outcomes, which is illegal for market participants. “It is a huge problem to think about how much of our wealth is controlled by those systems,” he states.

In the newer frontier of military-developed AI, it’s obviously important to be able to detect whether our drones have been hacked. We’ve doubtless all heard the publicized threats of China-based hackers tapping into U.S. companies’ corporate data systems, and similar approaches pose threats to more overtly harmful technologies.

On the topic of hacking, Yampolskiy is referring to any type of intelligent system. People have figured out how to automate the process of finding targets, probing a system for weaknesses, predicting passwords, and so on. Almost anything can be automated today, says Yampolskiy; it’s not beyond our current technology, and hackers are always busy finding new ways to get into a system.
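To give a sense of how little sophistication basic attack automation requires, consider a toy sketch of automated password guessing. Everything in it is invented for illustration (the wordlist, the mutation rules, the target hash); real tools use far larger dictionaries and learned models of how people actually compose passwords:

```python
import hashlib

# Toy wordlist and mutation rules, invented for this example.
WORDLIST = ["password", "dragon", "louisville"]
SUFFIXES = ["", "1", "123", "!"]
LEET = str.maketrans("aeio", "4310")  # a->4, e->3, i->1, o->0

def candidates():
    """Yield guesses: each word plus a few mechanical mutations."""
    for word in WORDLIST:
        for base in (word, word.capitalize(), word.translate(LEET)):
            for suffix in SUFFIXES:
                yield base + suffix

def crack(target_hash: str) -> str | None:
    """Return the guess whose SHA-256 digest matches, if any."""
    for guess in candidates():
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

# Recover a deliberately weak password from its hash.
target = hashlib.sha256(b"Dragon123").hexdigest()
print(crack(target))  # -> Dragon123
```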

While governments and organizations tend to be concerned about attacks from the outside, Yampolskiy reminds us that such attacks can (and often do) originate internally. Online casinos are a hotspot for this type of activity: an employee with privileged access to other players’ cards suddenly starts winning every hand and earning thousands of dollars, he says. What’s surprising is that this type of foul play can go undetected for years.

Anticipating Automated Intelligence

To what extent is AI actually involved in the hacking process itself? While there are many areas in which AI may act as a line of defense where human beings fail (such as those outlined in this opinion piece by Rob Enderle), Yampolskiy is concerned with the ways in which AI may turn against our systems. “We’re starting to see very intelligent computer viruses, capable of modifying drone code, changing their behavior, penetrating targets,” he says.

This is in addition to the standard hacking scripts that are already available and becoming more sophisticated across a variety of intelligent systems. Yampolskiy is most concerned, though, about what will happen in a few years, when most “hackers” will be significantly automated.

What can be done today to combat these threats? Yampolskiy describes it as an “arms race” in developing intrusion detection systems that flag anomalies in system behavior. The most rudimentary detection systems have been around for years, and the field continues to advance at a rapid pace.
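As a concrete illustration of the statistical idea behind the most rudimentary of these detectors, here is a minimal sketch, assuming a hypothetical stream of hourly login-failure counts: learn a baseline of normal behavior from known-clean history, then flag anything that strays too far from it:

```python
from statistics import mean, stdev

def zscore_detector(baseline: list[int], threshold: float = 3.0):
    """Build a detector from known-normal history: flag counts more
    than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    def is_anomalous(count: int) -> bool:
        return sigma > 0 and abs(count - mu) / sigma > threshold
    return is_anomalous

# Hypothetical hourly login-failure counts from a quiet week.
normal_hours = [4, 6, 5, 7, 5, 6, 5, 6]
detect = zscore_detector(normal_hours)
print(detect(5), detect(94))  # -> False True
```

Fitting the baseline on known-clean history matters: if an attack contaminates the baseline, it inflates the variance and helps hide itself.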

Software that monitors a person’s credit activity and catches suspicious transactions, for example, is one system that can be quite successful at forming profiles of other intelligent systems and detecting oddities, explains Yampolskiy. It seems more than pertinent to be working on such systems now with an eye toward a future, perhaps super-intelligent, AI.
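In the same spirit, per-account transaction profiling might be sketched as follows. The account names, fields, and factor-of-five threshold here are all hypothetical; production fraud systems weigh many more signals:

```python
from collections import defaultdict

class AccountProfile:
    """Running behavioral profile of one account: where it spends
    and how large its purchases typically are."""
    def __init__(self):
        self.countries: set[str] = set()
        self.max_amount = 0.0

    def suspicious(self, amount: float, country: str) -> bool:
        # Flag a never-before-seen country, or an amount far above
        # the account's historical maximum (factor of 5 is arbitrary).
        return (bool(self.countries) and country not in self.countries) or \
               (self.max_amount > 0 and amount > 5 * self.max_amount)

    def update(self, amount: float, country: str) -> None:
        self.countries.add(country)
        self.max_amount = max(self.max_amount, amount)

profiles = defaultdict(AccountProfile)
transactions = [  # hypothetical (account, amount, country) stream
    ("alice", 40.0, "US"), ("alice", 65.0, "US"),
    ("alice", 2400.0, "RO"),  # new country and an outsized amount
]
for account, amount, country in transactions:
    profile = profiles[account]
    if profile.suspicious(amount, country):
        print(f"review: {account} ${amount:.2f} in {country}")
    profile.update(amount, country)
```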

When I ask Yampolskiy whether there’s anything that society can think about or build now in order to prepare for possible futures, he notes that he’s currently working on a project that looks at all possible ways AI can become dangerous and then tries to classify and group those AI systems and threats in a meaningful way. Only once those threats are better understood and organized can we strategize about solutions for addressing each.

Yampolskiy emphasizes that each AI system poses a completely different problem; military AI is just one system that opens up a can of potential security risks. Broad categories of failure include mistakes in code, problems with goals, failures to align systems with human values, and even irrational people developing dangerous AI for their own narrow ends. We’re dealing with all of these in a different manner, he says.
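One way to make such a classification exercise concrete is as a small taxonomy in code. The categories below simply paraphrase the failure modes listed above, and the structure is an illustrative sketch rather than Yampolskiy’s actual taxonomy:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FailureMode(Enum):
    """Broad classes of AI risk mentioned above (illustrative only)."""
    IMPLEMENTATION_ERROR = auto()   # mistakes in code
    GOAL_MISSPECIFICATION = auto()  # problems with goals
    VALUE_MISALIGNMENT = auto()     # system diverges from human values
    MALICIOUS_DESIGN = auto()       # built to be dangerous on purpose

@dataclass
class ThreatReport:
    system: str
    mode: FailureMode
    severity: int  # e.g. 1 (nuisance) to 5 (catastrophic)

# Grouping reports by failure mode lets each class of threat be
# handled "in a different manner," as Yampolskiy suggests.
reports = [
    ThreatReport("trading-bot", FailureMode.GOAL_MISSPECIFICATION, 4),
    ThreatReport("drone-firmware", FailureMode.MALICIOUS_DESIGN, 5),
]
by_mode: dict[FailureMode, list[ThreatReport]] = {}
for report in reports:
    by_mode.setdefault(report.mode, []).append(report)
for mode, items in by_mode.items():
    print(mode.name, [r.system for r in items])
```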

“If you think about it, human safety, human security, it’s exactly the same problem…any person could potentially be very dangerous, and there’s an infinite number of ways that can happen…yet somehow society functions even though it’s threatened…I hope that after we understand how many infinite ways there are for AI to fail, we’ll concentrate on those that are truly dangerous,” he states.

Just as with human cloning, explains Roman, we don’t really understand how these advanced AI systems would work. As a society, we’ve decided not to clone humans just yet; it’s illegal and unfunded in most places. While Yampolskiy believes it’s fine to develop narrow AI, he suggests we might take similar measures to freeze specific projects involving general AI until better safety mechanisms are developed.

Some months after our interview, Yampolskiy published a paper co-authored with Federico Pistono on a seemingly controversial topic: guidelines for the creation of a malevolent AI. The idea, of course, is not to further such an AI, but to provide academics and others with an interest in AI safety invaluable information on how such a thing could be attempted.

The broad and unknown nature of AI-related threats has driven Roman Yampolskiy to widen his field of view, so to speak: “I’m looking more at solutions which are universal enough to cover all cases…it will be useful to control (the system) while developing it, so you can test it, so that the system has limited access to resources while we’re learning about its behavior.”
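On Unix-like systems, the “limited access to resources” idea can be sketched with standard operating-system controls. The harness below is a toy illustration rather than Yampolskiy’s proposal (the agent filename is hypothetical): it runs an untrusted program as a child process with hard caps on CPU time and memory:

```python
import resource
import subprocess

def run_contained(cmd: list[str], cpu_seconds: int = 5,
                  memory_bytes: int = 256 * 1024 * 1024):
    """Run `cmd` as a child process under hard OS-level limits
    (Unix only); the kernel stops the child if it exceeds them."""
    def apply_limits():
        # Applied in the child process just before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits,
                          capture_output=True, timeout=60)

# Observe an untrusted agent while it cannot consume unbounded
# CPU or memory (the filename is hypothetical).
result = run_contained(["python3", "untrusted_agent.py"])
print(result.returncode)
```

Resource caps are only the most basic layer of containment; a real testing environment would also restrict network and filesystem access while the system’s behavior is studied.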