Episode Summary

Statements about AI and risk, like those made by Elon Musk and Bill Gates, aren't new, but they still resound with serious potential threats to the entirety of the human race. Some AI researchers have since come forward to challenge the substance of these claims. In this episode, I interview a self-proclaimed "old timer" in the field of AI who tells us we may be premature in our concerns about AI threatening our existence; instead, he suggests that our attention might be better spent thinking about how humans and AI can work together in the present and near future.

Guest: James Hendler

Expertise: Computer and Cognitive Sciences

Recognition in Brief

James Hendler is the Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute (RPI). Hendler has authored over 250 technical papers in the areas of the Semantic Web, artificial intelligence, agent-based computing, open data systems, and high-performance processing. One of the originators of the Semantic Web, Hendler was the recipient of a 1995 Fulbright Foundation Fellowship. He is also the former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA) and was awarded a US Air Force Exceptional Civilian Service Medal in 2002.

Current Affiliations: Rensselaer Polytechnic Institute, Director, Rensselaer Institute for Data Exploration and Applications; Chair of the Board of Trustees and Director of the UK’s charitable Web Science Trust; Fellow of the American Association for Artificial Intelligence, the British Computer Society, the IEEE and the AAAS

AI’s Growing Pains

Elon Musk and Stephen Hawking are arguably two of the greatest minds of the 21st century, so when one or the other, and especially both, express a strong opinion, people sit up and pay attention. Such was the case with the now famous (some might say infamous) statements about the threat of AI to the future of the human race. But some researchers closer to the inner workings of the AI field aren't so sure that AI poses such a threat, now, in the next two decades, or ever.

James Hendler has been in the field since 1977, and he's had a front-row seat to the evolution of AI, which in his words has shifted from critics saying, "this stuff isn't possible, it's never going to work," to the present-day "this stuff is starting to work, we're scared." Says James,

“It’s a really interesting time to be in AI because we’re really being forced to think about what’s causing that change and what’s causing this new set of fears, and a lot of it is we’re starting to see self-driving cars become a reality, speech recognition is becoming much more prevalent, a lot of things that were never going to happen are literally around the corner.”

Booming progress in the last decade is certainly part of the reason for people taking more notice and speculating as to what will happen next if AI continues to progress at a similar, or even quicker, pace.

Stephen Hawking took this a step further when he described machines that might have the ability to replicate or evolve on their own, asking if they might better fill social, cultural, and other niches in ways that humans can't even conceive. Hendler thinks this is an odd question to be asking at present, when the technology is nowhere near being conscious, let alone overtly threatening (a new CBS/Vanity Fair poll appears to show that most humans are not overly concerned about super-intelligent AI taking over the world either). One imminent AI threat that does concern James is autonomous weapons.

“That’s a real concern and it’s something we need to look at, but that’s not so much because we’re worried about AI becoming super intelligent, it’s rather because we’re worried about computers being put in places where human judgment is still really needed.”

As an illustration of this concern, he relayed a 1983 incident in which a Soviet officer on duty at a nuclear facility was prompted by a machine-intelligent system to launch a nuke in response to a reported American missile launch. The officer hesitated because he wasn't seeing other related warning signs, so he didn't act; in the end, his judgment saved thousands of American lives.

It's probable that the same sort of 'bug', or even an enemy hacking into life-or-death systems, could occur today and trigger a catastrophic mistake. "It's more about humans being good at some things, and computers at others; we don't know enough to make policy around these technologies, and that's where we get into trouble," says Hendler.

But how do we avoid throwing the baby out with the bathwater? Right now, a number of experiments and new companies are showing increased accuracy in medical diagnostics when a computer is paired with a doctor, a combination that seems to perform better than either a doctor or a computer alone.

For the next generation, suggests James, we should probably be thinking about how to better couple computers and humans in areas like medicine, manufacturing, and many other fields, rather than halting all advancement. While there will undoubtedly be policy issues to address, he suggests there is a lack of conversation about a middle ground where AI may be able to help us, especially by promoting partnership rather than sheer automation in many of these areas.

AI and Humans, Better Together Than Apart?

One of the places where we're starting to see a lot of interaction between humans and computers is in science. Hendler describes a project called Galaxy Zoo (Zooniverse), a citizen science project that allowed the public to identify and classify types of galaxies from pictures taken by the Hubble telescope. While computers are much better at picking galaxies out of the images, it turns out that people are really good at distinguishing types of galaxies. This one instance of crowdsourced science has been very successful in the past few years and continues to evolve. Hendler explains,

“The key idea is that humans are the ones who can do better pattern recognition; the computer is much better at figuring out things like, ‘is this person lying or doing a good job’.”

In other words, a computer shows the same picture to a number of people, and the same picture to the same person over a period of time to check for consistency, producing a 'weighting' of each person's accuracy that helps determine how successful people are at identifying galaxies. If we can get the public to approach science in this way, imagine using AI to connect scientists in different fields – crowd-sourced analysis – to promote greater progress and potentially make a greater impact in solving tough problems. Says James,
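The weighting idea Hendler describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration of consistency-based weighting for crowdsourced labels, not the actual Galaxy Zoo algorithm; all function and variable names here are invented for the example.

```python
# Illustrative sketch: weight each volunteer by how often their label
# agrees with the majority label for an image. Not the real Galaxy Zoo
# pipeline; names and the weighting rule are assumptions for this example.
from collections import Counter, defaultdict

def consensus_labels(votes):
    """votes: dict of image_id -> list of (volunteer_id, label) pairs.
    Returns the majority label for each image."""
    consensus = {}
    for image, pairs in votes.items():
        counts = Counter(label for _, label in pairs)
        consensus[image] = counts.most_common(1)[0][0]
    return consensus

def volunteer_weights(votes):
    """Score each volunteer by their rate of agreement with consensus."""
    consensus = consensus_labels(votes)
    agree = defaultdict(int)
    total = defaultdict(int)
    for image, pairs in votes.items():
        for volunteer, label in pairs:
            total[volunteer] += 1
            if label == consensus[image]:
                agree[volunteer] += 1
    return {v: agree[v] / total[v] for v in total}

votes = {
    "img1": [("alice", "spiral"), ("bob", "spiral"), ("carol", "elliptical")],
    "img2": [("alice", "elliptical"), ("bob", "elliptical"), ("carol", "elliptical")],
}
weights = volunteer_weights(votes)
print(weights)  # carol disagreed with consensus on img1, so she scores lower
```

A real system would be more sophisticated (for instance, iterating so that high-weight volunteers count more toward consensus), but the core feedback loop between agreement and trust is the same.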

“There’s a term starting to float around called ‘discovery informatics’…you have a lot of scientists who spend a lot of their career chasing down hypotheses that turn out to be wrong and in many cases there was a lot of evidence that they might be wrong earlier on that they didn’t see because it was published in a different literature.”

This is where discovery informatics comes in, with algorithms that can connect the scientific community at large and encourage the routine sharing of available information, so that researchers can hypothetically view all web-published papers that support or contradict their theories. "Unfortunately most of this technology is being used to promote better ads, but it's there," says Hendler.

“People used to say you need the best minds to solve problems, but those minds will be both human and computer to solve mega problems.”

Sounds like the next generation of creative partnership will bring humans and AI closer together before (and if) it drives us apart.

Image credit: Rensselaer Polytechnic Institute