Though it’s unlikely that Terminators will fall from the sky this decade, there are a surprising number of legitimate AI researchers who believe that many of us will live to see “conscious” artificial intelligence.
But what are the ramifications of replicating awareness, and not just intelligence… in our machines?
My early thinking about the implications of “awake” machines started in graduate school at UPenn (where I studied psychology / cognitive science), but was heightened by reading the essays of folks like Ben Goertzel and Nick Bostrom – and by reading about the neuroscience research of BrainGate and Ted Berger.
Bostrom’s 2014 poll of AI researchers shed light on the reasonably large portion of PhDs in the field who are optimistic about seeing human-level intelligence within their lifetimes.
My own 2015 poll focused on another facet of replicating life in silicon: consciousness. I asked 33 AI researchers in what timeframe they believed we might see the development of machines that are legitimately self-aware (in the same way that animals are today).
You can see and interact with the entire set of responses from this interview series in this previous article highlighting our AI consciousness consensus. Below is a breakdown of the responses by predicted timeframe, with 8 researchers predicting 2036-2060 as the period in which sentient machines are likely to emerge:
While having an “awake” computer doesn’t seem to have grand ethical ramifications in itself, it should be noted that most people (and indeed many philosophers) agree that it is in fact consciousness that makes something morally relevant – “good” or “bad”, if you will.
If I kick my vacuum cleaner, it might be morally consequential in that it might hurt my foot, or it might be indicative of my anger or frustration. The vacuum is not affected, because it is not aware of itself.
If someone were to kick their dog, this would be an entirely different story. When an entity is conscious it weighs on a moral scale. You don’t have to be a utilitarian or abolitionist (in the Pearce-ian sense) to follow this reasoning.
A hypothetical computer “awareness” – assuming it followed the trajectory of other computer technologies over years of development – could be expected to be… more “aware” by the year, potentially trumping the self-reflective and sensory capacity of animals… or of man.
In terms of moral gravity at stake, here’s how the thought experiment goes:
IF: Consciousness is what ultimately matters on a moral scale
AND: We may be able to create and exponentially expand consciousness itself in machines
THEN: Isn’t the digital creation of consciousness of the greatest conceivable ethical gravity?
Though I’m not one to prognosticate, I’m also not easily able to forget something so compelling. Even if it were our grandchildren who’d have to juggle the consequences of aware and intelligent machines (and many experts don’t believe the wait will be that long), wouldn’t that still warrant an open-minded, well-intended, interdisciplinary conversation around how these technologies will be managed and permitted to enter the world?
If it is consciousness that ultimately counts, it would seem that we might want to think critically about how we play with that morally relevant “stuff” itself, assuming we are in fact moving closer to self-aware machines.
Even if some PhDs are wrong and we’re hundreds of years off, isn’t it worth seriously thinking about how we are designing and rolling out what may come after us, and potentially supersede us?
Since the end of 2012, I’ve been relatively consumed with the implications at stake, and it’s spawned hundreds of interviews, this company (TechEmergence), and presentations from Stanford to Paris.
However, it’s not what I write about most right now, because it’s not where the immediate utility and growth of TechEmergence lies. TechEmergence, as it is today, will make a name for itself not (at least now) as a media site about future AI ethical concerns, but as a market research firm that helps technology executives make the right choices in investing in artificial intelligence applications and initiatives in industry (see our about page).
Empirically-backed decision support for executives is a business, at present, ethics is not. For right now, that’s just fine. Our present work in market research is a tremendous opportunity, and will provide our company the opportunity to profitably grow as a potential force for transparency and discourse around artificial intelligence and machine learning.
The moral cause of proliferating a global conversation around “tinkering with consciousness”, however, is the same. When governments and organizations eventually come to need further clarity on the ethical and social ramifications of these technologies (in addition to their ability to impact the bottom line), I’d hope we’d be the first in line to better inform those important eventual conversations around ethics and policy. Assuming we want to expand a bit faster than a non-profit, that would be the way TechEmergence could play its role in helping to catalyze this important conversation – and indeed our media site and business model have been designed specifically to do just that.
Fortunately we haven’t yet crossed any of the major bridges, metaphorically speaking, with respect to super-intelligence or aware machines, and time (at least to some degree) may be on our side.
Ultimately, it’s the intersection of technology and awareness that has the grandest moral consequence; the intersection of technology and intelligence is part – but not all – of the nut that needs to be cracked on the way there. There may not be much of a market for that today, but I’m of the belief that discourse on the topic is better begun now than later.
Thank goodness I get some time between AI founder interviews to still juggle these topics here. I’m sure it won’t be the last time.
(A special thanks and credit to all of the researchers who were part of our 2015 AI Researcher Consensus for contributing quotes, theories, and predictions to help inform the TEDx talk, and much of the related writing about AI and awareness that’s been featured here at TechEmergence and elsewhere.)