Episode Summary: Some organizations are leveraging artificial intelligence (AI) to help the world with research, some to help companies with marketing, and some are intent on ensuring that the future of AI doesn’t result in the end of humanity. There’s a good likelihood that if you’re reading this interview, you’re already familiar with OpenAI, an organization with the sole purpose of ensuring that the future of humans and machines is a friendly one, and that power and intelligence aren’t concentrated in a way that would make AI a dangerous tool. In this episode, we speak with Ilya Sutskever, research director of the nonprofit organization. This was a fun but frustrating interview; Sutskever held his cards close to his chest, but we gained some perspective on what he considers to be the areas of importance for the future of AI, and the considerations for safely advancing the field.




Expertise: Machine learning and neural networks

Brief Recognition: Ilya Sutskever is co-founder and research director of OpenAI. Prior to OpenAI, Sutskever spent time as a research scientist on the Google Brain team and was a co-founder of DNNresearch. Before that, he was a postdoc at Stanford in Andrew Ng’s group, and earlier a student in the Machine Learning group at the University of Toronto, working with Geoffrey Hinton. Sutskever has published numerous articles on topics in machine learning and deep learning. He was recognized by MIT Technology Review as one of its top 35 innovators under 35, and was awarded a Top Innovator award by the University of Toronto in 2014.

Current Affiliations: Co-founder and Research Director of OpenAI



Interview Highlights:

(6:16) When you say tension (about open-sourcing AI)…what goes beyond that line…what are the ‘of courses’ that we really want to keep out of this open-source ecosystem?

(7:31) You are aiming to do some things at present, the OpenAI Gym for example, a place for people to collaborate…is the gym part of what we might hope that the world trickles towards as we move toward AI?

(10:34) Are there other component parts of building that future – where the world is good when we have AI – are there any next five-year considerations…or is it mostly wait for the big game and be the top dogs when it hits the fan?

(12:20) Is it safer to say that we may require an intelligence beyond our own to discern those ethical scenarios, to consider and distill the good…and make big policy decisions?

(18:14) What are your thoughts on companies pulling together in AI…do you see this as a proliferation of the kind of good collaboration that we might want to see, are there skeptical elements from OpenAI’s perspective around the big guys potentially influencing policy?

(19:12) What are you most excited about now in OpenAI?

Big Ideas:

1 – AI will be a technology with unmatched impact; think along the lines of eliminating the need to perform undesired or unnecessary work, or the development of mind-boggling medicine and materials. As Sutskever phrases it, “the stuff of science fiction today.” While no one fully understands what is happening or what will happen, the magnitude of the coming change leads OpenAI to base its research and work on the question: ‘How can we help scaffold a safe and positive evolution of AI technology?’ (Read OpenAI sponsor Elon Musk’s quotes about AI risk for a related perspective).

2 – As a nonprofit organization, OpenAI is not beholden to investors or outside parties; it was created to benefit humanity in the face of any AI outcome. Its mission is to encourage public awareness of AI and to foster forward thinking about its future and possibilities. Many would argue that a collective “zeitgeist” around the potential and ramifications of AI is already under way in many parts of the world. According to Sutskever, one stream of evidence for this argument is the number of people entering the field, reflected in the record numbers of students enrolling in machine learning and deep learning classes, both online and at universities.