Episode Summary: What is intelligence? For some researchers, it may be quite possible to create an intelligent machine ‘in a box’, something without physical embodiment but with a powerful mind. Others believe general intelligence requires interaction with the outside world, inferring information from gestures and other features of functioning in an environment. Dr. Vincent Müller is of the belief that intelligence may involve more than just mental algorithms and may need to include the capacity to sense rather than just run a program. Vincent focuses on cognitive systems as an approach to AI, and in this episode he talks about what this means and implies, how this approach is different from classical AI, and what this might permit in the future if the field is developed.
Guest: Vincent Müller
Expertise: Philosophy of Computing and Cognitive Science, Ethics of Technology
Recognition in Brief: Vincent C. Müller’s research focuses on the theory and ethics of technology, particularly artificial intelligence. He has generated €3.6 million in research income for his institution. Müller organizes a conference series on the Theory and Philosophy of AI and is principal investigator of an EC-funded research project on the ethics of “Digital Do-It-Yourself” (DIDIY). Müller has published to date 40 academic papers and 12 edited volumes in the philosophy of computing, the philosophy of AI and cognitive science, the philosophy of language, applied ethics, and related areas. He has organized to date 25 conferences and workshops and given 100 presentations. Müller studied philosophy with cognitive science, linguistics, and history at the universities of Marburg, Hamburg, London, and Oxford. He was Stanley J. Seeger Fellow at Princeton University and James Martin Research Fellow at the University of Oxford.
Current Affiliations: Professor of philosophy at Anatolia College/ACT, president of the European Association for Cognitive Systems, and chair of the euRobotics topics group on ‘ethical, legal and socio-economic issues’

Cognitive Systems: A Divergent Path from Classical AI

A cognitive system sounds like it could cover all manner of thinking entities, but it’s a term that evolved from the pursuit of AI. Starting in the 1990s, a division of researchers began to diverge from what can be termed “classical AI”, a field that Vincent Müller describes as largely pursuing the creation of an intelligence that is fundamentally different in function from how intelligence works in humans and animals. “Cognitive systems” researchers believe that an artificial intelligence may need to be more similar to natural thinking systems, which (as we know them) are embodied and interact with their environment through sensory systems. Müller describes the pursuit as a two-way interaction, in which scientists learn from natural systems to make technical systems more sophisticated and vice versa.
“One result of this is that we put a lot more emphasis on the role of the body in the intelligence than classical AI would do and we generally think of intelligence in broader terms than say intelligent thinking, so intelligence is also the ability that allows you to find your way home, to locate and grasp a cup, and to do all these things that we do in everyday life that make up human intelligent behavior,” says Vincent.
A common argument for more natural systems is their robustness and adaptability in unpredictable environments, as compared to a more classical AI that tries to predict exact parameters, an approach that over the years has proven to be a labor-intensive process with often clumsy or inaccurate results.

If cognitive systems are a potentially better way to develop an intelligent entity, then why don’t we hear more about this approach? “The short answer is we haven’t done much in the sense that the progress that we have made in AI in this time frame is to a very large extent progress due to more efficient algorithms and faster machinery,” says Vincent. In other words, the classical AI mindset still prevails in distributed resources and research, but with exponentially better tools than a decade before.

Machine learning is a buzz-worthy technique that has driven developments in related areas like neural networks and deep learning. Outside of robotics, which is a technically challenging field, any kind of embodied cognitive science is largely theoretical at this point. Müller expands on the view that cognitive systems researchers propose doing something much different from what has been done so far in AI.
“The opinions that I can see are divergent on this matter; some people say the trajectory that we are on is beautiful and we have made much progress in the past couple decades…some people, at least when the founders are not listening, are saying ‘oh, we know that’s not going to go anywhere’ so we either keep plowing ahead and we know it’s going to be okay, or we do something totally different.”
The Rise and Limitations of Google Cars and Smart Devices

A lot of people might think of a technology like the Google car as an embodied system, but Vincent argues that beyond the narrow purpose of driving, the Google car is not very intelligent. Arguably, self-driving cars are still a significant feat of progress. Researchers try to program into the system all the possible things that can happen when driving; one might argue that there are unlimited possibilities, but this is not quite true, says Vincent. “There is an end to the kinds of things that you would usually expect to happen on a road, and if you make a database that is large enough to contain these kinds of things…then it looks like it might well be that you can actually program the system efficiently so that it can handle an environment of a given complexity,” says Müller, which when paired with sensory systems might be enough for most driving tasks.

Driving on the highway, for example, is a relatively easy task compared to driving in a congested downtown area. A highway is a relatively well-structured environment, and researchers are on their way to more or less proving that self-driving vehicles can function efficiently in this environment as well as (or potentially better than) most humans. Google’s and other self-driving cars are a sophisticated realization of classical AI, which seems poised to master many of the tasks that humans master subconsciously, those that we turn into automatic habits – like driving and even sports – and which can be appropriately adjusted. While classical AI has been successful in more industrial or controlled areas (like the chess board), less predictable environments still pose a real challenge.

The potential of cognitive systems is two-fold.
To start, systems within an interactive and sensory body would theoretically be more flexible than the AI systems of today, adjusting to their environment if it proves different from the way it’s supposed to be. A second key ability is the integration of many different pieces of information working simultaneously. Take speech recognition, which has significantly improved over the last five years, yet Vincent sees its trajectory or success curve flattening. “It’s getting better but not significantly better, and this essentially means that speech recognition really isn’t so good in noisy environments, people with slightly funny accents, this kind of thing, not so good that is in comparison to human speech recognition,” says Müller.

Instead of having machines analyze more sound files, why not feed them different kinds of information – like video files – so that they can integrate different ‘skills’ or ways of processing information?
“In a foreign language for example that you don’t speak particularly well, talking on the phone is really a challenge whereas if you’re interacting with the person in front of you, you get a lot more information,” says Vincent.
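Müller’s integration point can be illustrated with a toy sketch. Everything below is hypothetical – the “models”, the three-word vocabulary, and the probability numbers are stand-ins, not any real recognizer – but it shows the basic idea: combining evidence from an ambiguous audio stream with a complementary visual stream (lip movements) can disambiguate a word that neither stream resolves well on its own.

```python
# Toy late-fusion sketch (all models and numbers are hypothetical).
# Each "model" returns a probability distribution over candidate words;
# multiplying the two streams' evidence and renormalizing sharpens the
# combined estimate (a naive product-of-experts).

def audio_model(noisy_clip):
    # Stand-in for an acoustic recognizer; in noise its output is ambiguous.
    return {"bat": 0.40, "pat": 0.35, "mat": 0.25}

def video_model(lip_frames):
    # Stand-in for a lip-reading model; visual cues rule some candidates
    # in or out (e.g. an 'm' closure looks different from 'b'/'p').
    return {"bat": 0.45, "pat": 0.45, "mat": 0.10}

def fuse(p_audio, p_video):
    # Multiply per-word evidence from both streams, then renormalize
    # so the result is again a probability distribution.
    joint = {w: p_audio[w] * p_video[w] for w in p_audio}
    total = sum(joint.values())
    return {w: p / total for w, p in joint.items()}

p = fuse(audio_model(None), video_model(None))
best = max(p, key=p.get)  # the word with the highest fused probability
```

Here the fused distribution favors “bat” more strongly than either stream alone, which is the kind of cross-modal integration Müller suggests could push recognition past its current plateau.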
He suggests that making a task more difficult and more reflective of the way that humans process information – in multiple variations and forms – may lead to more sophisticated capabilities over time.

Because of the lack of a real ‘stick’ in the field of cognitive systems, Müller hesitates to give any kind of projected timeline for such an embodied intelligence. Even if such an entity is never recreated, he believes it’s still a viable research field that has a lot to contribute to artificial intelligence, even if researchers only consider the argument for embodied intelligence as Devil’s advocate.
Image credit: SuzanMazur.com