Over the last two years there has been a general “up-tick” in media attention around the risks of artificial general intelligence, and it seems safe to say that though Bill Gates, Stephen Hawking, and many others have publicly articulated their fears, no one has moved the media needle more than Elon Musk.

When I set out to gather perspectives from businesspeople on AI risk, I aimed to sift through the “whiz-bang” re-blogged articles about Musk’s statements and figure out what the man actually said about the matter… and as it turns out, that was rather difficult. Given the sensational and novel nature of the claims (combined with Musk’s growing celebrity attention), most of the articles about “what Musk said” are in fact not about “what Musk said,” but about what one reporter said about what another reporter said about what Musk said.

So I set out to find every actual statement from Musk himself on the topic of AI risk, and lay them out for others to reference. My goal (as a writer) is to have this catalogue for myself, but I’m sure there are other folks who’d want direct quotes as well.

In addition, I’d like to use this article as a jump-off point to other useful sources (authors, research, existing academic discussion) of AI risk debate, including some resources that Musk himself is undoubtedly familiar with.

Bear in mind that I’ll likely continue to update this list, and it is by no means complete yet (if you’ve found other statements from Musk, please feel free to email me at dan [at] techemergence.com). Here’s the homework I’ve done thus far, chronologically, from the top:

April 12, 2014 – Inc. Magazine Chat with Steve Jurvetson

(Hop along to 4:15 in the video above to see the first mention of AI and the jesting about “Terminator”)

This video by Inc. Magazine is the first one I’ve been able to find of Musk actually articulating his fears about AI risk. Many people believe that the first such video came around 4 months later, during his talk at MIT (see below), but this video makes clear mention of some of the same notions, if not with the same seriousness or shock value.

Steve Jurvetson (of DFJ fame) actually mentions that the SpaceX data center has “Cyberdyne Systems” (the fictitious company that created “Skynet” in the Terminator movie series). As it turns out, this is true. Here’s a photo from Jurvetson’s Flickr account:

Whoa.

Apparently, that’s the real deal. Interesting. It seems obvious that this is a bit playful, but it’s curious, especially given Musk’s overtly stated concerns about AI running amok.

I would wager that most people are unaware of this video, or of Musk’s playful naming of his SpaceX server cluster, so I figured it would be noteworthy to mention it here. If you can find an earlier video of Elon stating his concerns about AI, please do let me know; this is about as early as I could find.

June 17, 2014 – CNBC’s “Disruptor Number 1” Interview

The video above was taken from this interview on CNBC (original was not embeddable), and it includes Musk articulating why he had invested in AI companies DeepMind and Vicarious – to keep an eye on what’s going on in AI.

When CNBC’s hosts express surprise at Musk’s statement about AI being “dangerous,” he casually mentions that there have been movies about dangerous AI, and references Terminator (in partial jest).

In the interview he responds with “I don’t know” to questions such as “What can we do about this?” and “What should AI really be used for?” Frankly, I don’t think I’d have a better response; those are big questions.

I don’t know… but there are some scary outcomes… and we should try to make sure that the outcomes are good and not bad.

These comments didn’t make a tremendous splash in the media, and admittedly, they were more of a glossing-over of some potential concerns.

August 2-3, 2014 – Twitter Mention of Bostrom’s “Superintelligence”, and Biological Boot Loaders

As far as I can tell, this is Musk’s first “from the horse’s mouth” statement about artificial intelligence and risk. He overtly mentions Prof. Nick Bostrom’s Superintelligence in his tweet, which seems likely to have been a force in helping the book sell as well as it has.

Nick Bostrom was interviewed here on TechEmergence about the same time as this tweet, interestingly enough, though his interview didn’t go live until September 24th of that year.

The tweet itself didn’t generate a tremendous amount of noise in the AI space, at least initially, but seemed to build steam with the comments made at MIT in October 2014 (see below).

Above is Musk’s second tweet. If you’re unaware of the term “bootloader,” you can find an adequate definition here.

CBS and other media sources picked up these tweets on August 4th (and in some of the weeks following).

October 6, 2014 – Vanity Fair’s “New Establishment Summit” Interview

The above video was posted October 8th, 2014, but this actual conversation was held on October 6th.

This video is fruitful because Musk makes a few short statements that I believe turn out to be useful jump-off points into the existing and ardent (albeit niche) conversation about AI risk and control. We’ll start with this quote:

Most people don’t understand just how quickly machine intelligence is advancing, it’s much faster than almost anyone realized, even within Silicon Valley… (When asked “Why is that dangerous?”): If there is a superintelligence… particularly if it is engaged in recursive self-improvement…

This brings us to the debate around what is playfully (or not so playfully?) referred to as the “AI foom” – that is, when AI can improve itself so rapidly that it gains capacity and intelligence at an exponential rate. This debate was made popular via a discussion between MIRI founder Eliezer Yudkowsky and Dr. Robin Hanson of George Mason University.

My interview with Dr. Hanson in early 2013 touched briefly on topics of AI risk, but the most robust record of the “AI foom” debate can be found on MIRI’s website here.
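For readers new to the “foom” idea, here’s a minimal toy sketch (my own illustration under made-up assumptions, not anything Musk, Yudkowsky, or Hanson have published) of why improvement that feeds back on itself produces exponential rather than merely linear growth:

```python
# Toy sketch of the "AI foom" intuition: if each improvement cycle adds
# capability in proportion to the system's current capability, growth is
# exponential. The starting capability and 50% gain per cycle are
# arbitrary assumptions chosen only for illustration.

def recursive_self_improvement(capability=1.0, gain_per_cycle=0.5, cycles=20):
    """Simulate a system that uses its current capability to improve itself."""
    history = [capability]
    for _ in range(cycles):
        capability += gain_per_cycle * capability  # improvement scales with current capability
        history.append(capability)
    return history

if __name__ == "__main__":
    trajectory = recursive_self_improvement()
    print(f"Start: {trajectory[0]:.1f}, after 20 cycles: {trajectory[-1]:.1f}")
    # With these made-up numbers, capability ends ~3,325x its starting level,
    # while a fixed +0.5 gain per cycle would only have reached 11.0.
```

That gap between the compounding and fixed-gain trajectories is the crux of the foom debate: whether self-improvement actually compounds this way is exactly what Yudkowsky and Hanson disagree about.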

Another quote from Vanity Fair:

If there is a superintelligence whose utility function is something that’s detrimental to humanity, then it will have a very bad effect… it could be something like getting rid of spam email… well the best way to get rid of spam is to get rid of… humans.

In 2003, Nick Bostrom articulated a hypothetical scenario where a super-powerful AI is programmed with the goal of making as many paperclips as possible.

It is supposed, in Bostrom’s example (which you can read on his personal website here), that such a superintelligence would not necessarily know any kind of rational limit to the number of paperclips to make, and so may desire to use all available resources on earth (and eventually the universe) to make paperclips.

At some point, the AI will have a physical method of obtaining its own resources, and will not need humans to bring it raw materials… and so it will be motivated (or so the example goes) to exterminate humans… if nothing else, just so that it can use the atoms in human beings to create more paperclips.

October 24, 2014 – MIT’s AeroAstro 1914-2014 Centennial Symposium

(Fast forward to about 1 hour into this video to see Musk’s comments about AI)

The first Musk-AI press-bomb came during his MIT AeroAstro talk on October 24th, 2014. The Washington Post (and other publishers) actually covered Musk’s AI comments before this video (embedded above) even went live. However, I wanted to link to the actual MIT video (accuracy is kind of our shtick around here – as much as we can handle it).

This is the interview that spawned all of the “Elon Musk is calling AI a demon” statements. Curiously enough, the actual source video from MIT (embedded above) has vastly fewer views than many of the articles written about the video. Not surprisingly, VICE is better at marketing than MIT AeroAstro, for which neither can be blamed.

Here’s the statement from Musk that received the most attention:

With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… (wink) yeah he’s sure he can control the demon… doesn’t work out.

Musk didn’t in fact frame the “demon” comparison as an explicit analogy, but rather referred to AI and the demon directly, making the statement easy to sensationalize, though it seems rather likely that a metaphor was intended.

Here’s the statement that I believe might be more insightful as to his general sentiment towards AI risk:

I’m inclined to think that there should be some regulatory oversight… at the national and international level… just to make sure that we don’t do something very foolish.

You can watch the video above and come to your own conclusions.

Interestingly enough, the “demon” analogy might have been spurred by Musk’s reading of Daniel Suarez’s “Daemon” novel. Directly under his August 2nd tweet, we see this brief mention of said book, including a comment from Suarez himself:

Musk and Suarez "Daemon"

January 15, 2015 – Interview with the Future of Life Institute, and the AI Open Letter

The Future of Life Institute (which bears a rather similar name to the Future of Humanity Institute at Oxford… which I imagine has been cause for some inter-academic grumbling) proudly announced Musk’s donation of $10M to fund projects related to AI safety.

One of the funded projects from Elon’s donation pool was awarded to Wendell Wallach, a past interviewee here on the TechEmergence podcast.

Those interested in more of Musk’s conversations with Max Tegmark (one of FLI’s founders) might enjoy this video as well.

Around this same time, FLI’s AI Open Letter was released, with Musk as one of its first signatories.

This event garnered oodles of press, some of it quite critical of the validity of AI fears.

February 2015 – Tweets Referring to WaitbutWhy’s AGI Article

Musk mentions WaitbutWhy’s AGI article two times (those lucky sons of…). The article (which you can read by clicking the link in the tweets) is a long one, but for people who haven’t read Bostrom and Kurzweil regularly, it’s a good overview of some of the major risk considerations of AI. Plus it’s awfully humorous.

March 29, 2015 – Boao Forum, Conversation with Robin Li and Bill Gates

(Zip ahead to about 17 minutes into the video above to hear the conversation turn to the topic of AI)

In his discussion with Robin Li, Musk again articulates his relatively firm supposition about a “hard take-off” for AI, with its capacities zipping vastly beyond human-level general intelligence right around the time that it attains human-level general intelligence. This fear is not mirrored by many AI researchers, but it is a fear that many respected thinkers have considered.

Ben Goertzel (AGI researcher and OpenCog founder) has written about the concept of “hard takeoff,” which might be of interest to anyone aiming to get a better understanding of the speed of AI’s increasing capacities in the future. You can listen to Ben’s entire interview about AI acceleration and risk in this article based on his TechEmergence podcast interview. I’d also encourage anyone interested in the subject to explore more of Ben’s blog.

Musk aims to articulate his concerns about controlling AI in the following statement to Baidu CEO Robin Li:

(Considering nuclear research / weaponry) …releasing energy is easy; containing that energy safely is very difficult. And so I think the right emphasis on AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place.

Musk reiterates that his stance is not against AI research in another statement to Li:

So I’m not against the advancement of AI – I want to be really clear about this. But I do think that we should be extremely careful.

Some folks (notably AGI researchers) consider Musk’s statements to be unnecessarily stifling for AI, and others see the statements as fear-hype that will be more dangerous to AI’s development than AI ever would be to humans in the near future.

With all the AI software developers he’s recruiting, it seems clear that he isn’t entirely against AI itself – though skeptics may believe that he wants to hog all the AI smarts to himself and slow everyone else down. I don’t know who’s right, but all of these opinions are out there.

July 27, 2015 – Tweet about Autonomous Weapons Open Letter

FLI’s Autonomous Weapons Open Letter made another reasonable media splash, and Musk bumped the link out to his followers.

September 10, 2015 – CNN Money Interview

In this brief clip, Musk sums up a few important notions about what kind of “AI” might prove itself to be a legitimate risk. This tends to debunk a lot of the “Elon fears the Terminator” hype that has stuck itself on blogs all over the net (the kind of click-happy material that appeals to the broad market of online readers, but skews the point being discussed).

Though it seems that many AI researchers disagree with his fears (some do agree), he isn’t articulating any kind of fear of terminators. Here’s a quote:

…the pace of (AI) progress is faster than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person, you’d be like ‘Whoa… that’s like… what’s that?’… that would be really obvious. What’s not obvious is a huge server bank in a dark vault somewhere with an intelligence that’s potentially vastly greater than what a human mind can do. Its eyes and ears would be everywhere, every camera, every microphone, and device that’s network accessible.

October 22, 2015 – Talk at Tsinghua University

The audio isn’t great, and I’m not sure whose upload is the ‘original’, but this mobile-recorded talk that Musk gave at Tsinghua University captures an audience member’s question (at about 1hr 24m into the video) that goes something like, “Why go to Mars if AI will kill humanity anyway?”

Musk answers the question (protection against an all-but-guaranteed natural disaster, the sheer excitement and adventure of space travel) but skirts the bit about AI killing humanity. He does touch on the utility of having another planet to which to escape in the face of a potential World War III triggered by AI technologies, but that’s about as far as he takes it.

Tsinghua University also published a write-up covering Musk’s talk; no mention of the student’s question in question.

February 4, 2016 – CNN Interview

At about 8:20, Musk brings up “deep AI” in response to the journalist’s comment on the Singularity, saying it’s “something that may or may not turn out well.” He expands on his reservations and fears:

“We just don’t know what’s going to happen once intelligence is substantially greater than a human brain…what do we do if AI is that much smarter?”

His idea of a benign scenario? That AI can do any job that a human can do – but better.

May 4, 2016 – Interview at World Energy Innovation Forum

 

Not the best video quality (another mobile recording – why they don’t formally record some of these interviews beats me), but a good interview. At about 24:30, the interviewer brings up the topic of AI risk and advancement. I’ve watched enough of these to notice that Musk always seems hesitant to answer (probably for good reason; it’s not easy to sum up his views in one sit-down interview).

Musk responds to the idea of AI advancing exponentially. While up close its progress may look linear, this may be a case of a “recursive y axis”; in other words, if each year’s predictions keep converging with each year’s advances, then it’s fair to say that AI is advancing at an exponential rate. He goes on to reiterate a past point:

“It’s (AI) going to go beyond self-driving cars, that’s the case of narrow AI, cars aren’t going to take over the world or do anything…eventually if digital intelligence becomes super human, which I said it looks on track to me, it will be able to do everything better than we can do.”

June 2, 2016 – Interview with Recode

 

Jump to 40:23 in the video, where the interviewer says she wants Musk to clarify his “issue” with AI. Musk comments that his “full position would require a long explanation” (a pretty common preface in several of his interviews), but says that he’s concerned about certain directions that AI could take, emphasizing that while he’s on the side of a positive future, not all outcomes are benign:

“If we create some super digital AI that exceeds us in every way by a lot, it’s very important that that (AI) be benign.”

He mentions OpenAI and its ‘purposeful’ governance structure – that while the organization is not generating a profit as AI is developed (it’s funded by other means), there is a great sense of urgency, with the intent to democratize AI. To further this point, Musk quotes the late Lord Acton: “Freedom consists of the distribution of power and despotism in its concentration.”

Musk really drives home this point (and does so again below in the Y Combinator interview) – that the bad outcome is AI power concentrated in the hands of a small group of people. There’s an interesting moment (44:30) when he says he won’t name names of big tech companies, but he does acknowledge “there’s only one” (there has obviously been much speculation about said company). Musk says it’s all about “increasing the probability that the future will be good”:

“If AI power is widely distributed, and there’s not one entity that has some super AI…to the degree that we can link AI power to each individual’s will…then if somebody did try to do something terrible, then the collective will of others could overcome that bad actor.”

June 4, 2016 – Tweet about ‘Neural Lace’ Concept (also brought up in Y Combinator Interview below)


Musk tweets again about his “neural lace” idea (connecting the cortex with the digital self), one that he sees as an imperative in the case for democratizing AI and creating individual links to each person’s will (as phrased in the Recode interview above), thereby reducing the risk of one super AI being wielded by one person or a small group of corrupt individuals.

August 17, 2016 – Fortune’s Exclusive Clip of Documentary

Back in August, Fortune released an exclusive teaser of the new documentary “Lo and Behold”, which was out in select theaters (released August 19) until recently and includes clips of Werner Herzog’s interview with Musk (I’ll be waiting for that one to come out on DVD).

In the Fortune clip (wasn’t able to embed in WordPress), Musk “clarifies” his concerns about AI:

“I think that the biggest risk is not that the AI will develop a will of its own…but rather that it will follow the will of people that establish its utility function.”

September 15, 2016 – Interview with Y Combinator’s Sam Altman

 

Y Combinator’s Sam Altman (one of OpenAI’s sponsors) opens the AI box at about 11 minutes into the interview. Musk gives a characteristic preface, “I do want to emphasize that this is not something I advocate or is prescriptive…this is hopefully predictive,” but again articulates the idea of achieving democratization of AI technology as the best of the available alternatives, where no one company or small set of individuals has control.

He makes the interesting leap into the “neural lace” concept when he says that part of the prevention method entails solving the high-bandwidth interface to the cortex:

“If we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself…then effectively you become an AI human symbiote, and with that then there’s widespread ‘anyone who wants it can have it’, and we solve the control problem as well.”

In other words, we become a collective AI society, where many wills beat one.

Wrapping Up

Musk’s claims about AI’s near-term risk may be overblown, but they may not be. Given my own assessment in a recent poll of 33 AI researchers (which may be biased, as many of our guests have at least a marginal interest in AGI to begin with), essentially all of what Musk has said seems reasonable when couched in the existing rational debates on the topic.

What I think is counter-productive is both the media hype around “Terminators” and the backlash from some of the AGI community likening Musk to an evil Luddite (I’ve written about this “politicking” previously). Though we could easily imagine that Musk has some potential agenda at play with respect to his articulated AI fears (possibly increasing regulation on Google and Facebook in order to gain greater prestige and wealth than other tech giants?… maybe?… somewhat unlikely), it seems that in the aggregate he’s up to speed with many pretty reasonable AGI thinkers.

The point of this post, however, wasn’t to give my perspective, but to show Musk’s actual statements and let you make your own call.

In addition to updating me with important quotes that I missed (which can be done via email at dan [at] techemergence.com), feel free to leave your own comments below this article.

All the best,

Dan