2015 might be remembered as the year that “artificial intelligence risk” or “artificial intelligence danger” went mainstream (or close to it).

With the founding of Elon Musk’s OpenAI and the Leverhulme Centre for the Future of Intelligence, the increased attention on the Future of Life Institute and Oxford’s Future of Humanity Institute, and a flurry of coverage of celebrity comments on AI dangers (including the now well-known statements of Bill Gates and Elon Musk), it’s safe to say that AI risk has embedded itself as a topic of pop-culture discourse, even if it isn’t treated very seriously at present.

Recently, we interviewed or otherwise reached out to a total of 33 artificial intelligence researchers (all but one hold a PhD) and asked them which AI risks they believe will be the most pressing in the next 20 years, as well as the next 100 years. Below you can see a list of all of our respondents; clicking on a respondent brings up their answer to the 20-year risk question.

(NOTE: If you’re interested in the full data set from our surveys, including 12 guest responses that didn’t make this graphic and expert predictions on the biggest AI risks within the next 100 [not just 20] years, you can download the complete data set from this interview series here via Google Spreadsheets; simply fill out this form and we’ll give you access.)

Interestingly enough, automation and economic impact topped the list, coinciding with the massive media attention paid to autonomous vehicles, improved robotic manufacturing, and automation in other industries. “General mismanagement” and “autonomous weapons” also ranked among the more popular responses.

While it’s important to bear in mind that the categorization was done after the fact (it could be argued that other categories could have been used to group these responses), and that 33 researchers is by no means an extensive consensus, the resulting trends in the thoughts of these PhDs, most of whom have spent their careers in various segments of AI, are interesting and worth considering.

This article was put together mainly to spur debate and consideration of reasonable AI risks. Hearing readers’ thoughts is always valuable, which is the motivation behind the poll below, where you can make your own predictions and compare them to those of other TechEmergence readers:


Fill out this form and receive access to the entire data set from this interview series, as well as notifications of future interview series and infographics.

Image credits: gereports.cdnist.com