Episode Summary: In addition to covering industry applications of artificial intelligence and other emerging technologies, we also explore their ethical and societal impacts. In this episode, we return to ethics with Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics and author of “A Dangerous Master,” which addresses tech governance and other emerging technology issues. In this week’s episode, Wendell discusses the problems of governing technologies that are developing faster than we can possibly assess their risks, a topic he has considered in depth through his extensive consulting, speaking, and writing.
Expertise: Ethics and emerging technology
Recognition in Brief: Wendell Wallach is a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics. At Yale, Mr. Wallach has chaired the Center’s working research group on Technology and Ethics for the past 10 years and is a member of other research groups on Animal Ethics, End of Life Issues, and Neuroethics. His latest book, A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, was published by Basic Books in June 2015. He also co-authored (with Colin Allen) Moral Machines: Teaching Robots Right from Wrong. Wendell frequently speaks on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience; he has also been interviewed and quoted by media around the world, including The New York Times, The Wall Street Journal, and BBC News. In 2014, he received the award for ethics from the World Technology Network.
Current Affiliations: Scholar at Yale University’s Interdisciplinary Center for Bioethics; senior advisor to The Hastings Center; fellow at the Center for Law, Science & Innovation at the Sandra Day O’Connor College of Law (Arizona State University); fellow at the Institute for Ethics and Emerging Technologies
(1:21) I know an important distinction we should probably address earlier rather than later…is around hard and soft governance. What is the difference between the two?
(4:40) What are examples of soft governance now…where is it having a role and a place in enacting change and in monitoring technology that is already out in the world?
(6:56) In terms of the potential applications for that in the future, where are some of the areas that you might see soft governance required in the next 10 or 20 years for the governance of certain kinds of technologies so they too don’t get out of hand?
(10:15) I know in your more recent book…you’re fleshing out the nuances of a potential combination of hard and soft governance working together in a particular way and applying the concept of a steering committee. Go into a little bit of nuanced detail about how you’re forming that, what you’re calling it, and why it might work better than what you see going on today.
(19:51) So there’s maybe more of a need for a holistic, multidisciplinary perspective, but that’s not how things have worked…part of me is almost surprised that…the decisions around drones (and other emerging technologies)…would be made in an arbitrary way by one organization that doesn’t tackle other concerns…it seems odd that it’s so isolated…
(23:04) When we talk about international concerns and considerations…if a policy is set up in Canada that manages AI or GenTech, etc. exceptionally well…if we have some great governance procedures in one country for potentially dangerous technologies that aren’t adhered to by others…how do we deal with this consideration of some semblance of global policy?
[This interview has been updated as of December 2016.]