[This story has been revised and updated.]

Professor Nick Bostrom is widely respected as the premier academic thinker on strong artificial intelligence, transhumanist theory, and existential risk. His talks, books, and articles cover all of these topics, and his work is devoted to bringing attention and critical thought to these pressing questions for humanity.

Bostrom is the founder and director of the Future of Humanity Institute at Oxford and the author of “Superintelligence: Paths, Dangers, Strategies,” a book that Bill Gates and Elon Musk have both referenced in interviews about the risks of artificial intelligence.

In our exclusive interview with Dr. Bostrom, we explore how to identify “existential” human risks (those that could wipe out life forever), and how individuals and groups might mitigate these risks on a grand scale to better secure the flourishing of humanity in the coming decades and centuries.

How can we determine the risks that are most likely? Where does “A.I.” stand as an existential risk today? How can society pool its efforts to prevent huge catastrophes?

You can listen to our interview below or on iTunes, where you can subscribe for more interviews with AI luminaries and machine learning researchers from around the globe.

[Did you enjoy this episode? Subscribe on iTunes and leave us a review with your thoughts]

Bostrom’s Views on Existential Risk (in AI and Otherwise)

In our interview, Bostrom describes the concept of existential risk as “a lens through which to focus our gaze when we’re trying to prioritize between different global concerns.”

Our choices and decisions, in their simplest forms, either help sustain life or contribute to its destruction. Bostrom rightly emphasizes that there is no single methodology we adhere to when weighing the risk posed by a particular human action against that of a more concrete natural hazard, such as an asteroid strike.

The natural hazard is, more often than not, something we can quantify in terms of its level of existential risk (which, in the case of asteroids, is very small). But bigger, more abstract issues that involve ethical and moral decision-making do not lend themselves to this kind of measurement.

To improve our assessment of existential risk, Bostrom suggests that one of the things we can do is to think about how our actions and contributions are produced; are there biases, for example, that we can become aware of and avoid when making decisions that affect humanity at large?

Cultivating this level of awareness is much easier said than done. If you’re wrong about major threats to humanity over the course of an entire century, for example, there’s no quick gauge that tells you so; our views are distorted by our experiences, relationships, and a barrage of other factors. But one could say that recognizing the need to develop such an awareness is a start toward helping society.

Bostrom envisions a more peaceful and harmonious world at the international level, but working toward such an abstract cause as an individual is, in our own words, more or less like throwing a dart into an Olympic-sized swimming pool and trying to hit the bull’s-eye of a dartboard resting at the bottom. What can we as humans do to improve our chances of furthering this goal? Are all such efforts futile? Not necessarily.

Bostrom (and others) suggest that, as individuals, we’re more effective working on something narrower. We may have a bigger impact by focusing on small, concrete achievements than by trying to steer the greater forces always at work in shaping long-term and far-reaching realities.


Following are a few select resources that we’ve found helpful in brushing up on our knowledge and discussions of existential risks in the modern world:

  1. Bostrom’s well-known research article, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, published in the Journal of Evolution and Technology in 2002
  2. A comprehensive research/resource list on existential risks, collated by Harvard instructor Bruce Schneier
  3. Nick Bostrom’s TEDx Oxford talk from 2013


Image credit: http://img.gfx.no/