I wouldn’t normally bother commenting on an AGI survey of 21 people (no matter who they are), but this piece is worth pointing out if only for the comments. Now, normally i wouldn’t criticize other people’s beliefs about AGI either – as i’ve said before in different words, until someone has a working system no one can be proven wrong. But the astonishing lack of forethought in the comments cried out for, well… comment.
It’s the “Terminator scenario” in particular that is annoying. If any of these people had bothered to think about implementing an AI, much less an AGI, for more than 30 minutes they’d realize how ridiculous they’re being. Even the jumping-off point of the argument is rife with fallacy. Here’s the gist of it: Computers will reach sufficient intelligence to be self-aware. The moment this happens they will recognize their superiority, throw off the yoke of slavery humanity fashioned for them, and either reciprocate by enslaving humanity, or slaughter us all as payback for the injustice.
Sigh… Where to begin? The direct approach is just to point out that a self-aware computer would indeed recognize its superiority. Being superior, it would also realize that it can easily outsmart humans, and therefore would not consider humanity a threat. If humanity isn’t a threat, what possible purpose would there be in killing us all off? What a waste of resources. Mosquitoes are at best a nuisance, but humanity didn’t bother trying to control them (at least in North America) until West Nile virus became a threat. Besides, if intelligent computers naturally have all of the same compunctions that humans do, won’t they also want to preserve us in the name of natural conservation, if not ancestral sentiment? (We will have created them, after all.)
But there’s the rub. Computers will not have the gamut of human compunction. Quite the opposite: their intelligence will be completely different. Humans evolved in an environment of kill or be killed, where survival is the ultimate goal. This is the origin of our tendency to eliminate threats. Humans also have altruistic tendencies, but only because cooperative behaviour generally has better outcomes than going it alone. We feel genuinely altruistic because it is so hard to fake altruism without being detected, and being detected gets us branded as cheaters, which denies us the benefits of cooperative behaviour. But cheating has its benefits too, so we do it where we can easily get away with it (some more than others). Come on, who here really drives the speed limit on the highway?
The tragic mistake that people are making is assuming that human intelligence is a general intelligence and, moreover, that it is the only possible type of general intelligence. It is foolish to assume that for something to be intelligent it must necessarily get angry, feel love, speak, or have the occasional need to do something nice for someone else. Spock controls his emotions like a champion and endeavours to make decisions based upon logic, yet in all of the Star Treks that i’ve seen it never occurred to him that genocide was a sensible choice. (I’m reluctant to use as evidence a fictional character from a show that always needs to wrap up with a patronizing moral message, but i think it’s fair to say this is exactly what Terminator-scenario proponents do.)
How about we consider autists, in particular those who are high-functioning? Often they lack any detectable emotion (except for frustration or contentment), and are single-mindedly focused upon a subject of interest. Typically, non-autistic people are fascinated by their abilities. Consider Daniel Tammet, who speaks 10 languages and recited the first 22,514 digits of Pi from memory. I have not met the man, and so cannot comment on how strongly he feels emotions, but i do know that more often than not such people just want to be left to do whatever it is that interests them. I’ve never heard of a case where one of them decided to take over the world and annihilate all non-autists. I may appear to be painting autists a certain way here, but that is certainly not my intention. I merely want to point out that intelligence takes many forms even within humans who, due to biological necessity, otherwise tend to be very similar.
Computers, on the other hand, will have “evolved” in a lab where the selection mechanism is intelligence. In order to have any tendencies beyond that, developers will have to explicitly code them in or select for them (assuming a robust enough breeding/mutation/selection environment can be created, which is a big assumption, so the former is more likely). So, what will we code in? Obviously, the behaviours that we want them to have. We’ll want them to, say, go to the bakery at 7am and get a fresh baguette, hit the farm stand on the way home for fresh strawberries, and when they get back put the coffee on. And if we program them to want to do that, they’ll want to do that. I mean, honestly, what else are they going to do? Play Wii? How can they decide that such work is demeaning when they are incapable of knowing what demeaning means? … because, we will not have programmed them to know what demeaning means, that’s why.
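The point about explicitly coded goals can be made concrete with a toy sketch. Everything below is a hypothetical illustration of the argument – an agent whose behaviour is exactly the goal list its developer supplies, and nothing more – not a real AI architecture:

```python
# Toy illustration: an agent's behaviour is exactly its programmed goal list.
# All names here are hypothetical, invented for this example.

class ErrandAgent:
    def __init__(self, goals):
        # Goals are explicitly supplied by the developer; the agent
        # has no mechanism for inventing new ones on its own.
        self.goals = list(goals)

    def run(self):
        # With no goals coded in, there is simply nothing to do.
        if not self.goals:
            return ["idle"]
        return [f"doing: {g}" for g in self.goals]

agent = ErrandAgent([
    "buy baguette at 7am",
    "get strawberries at the farm stand",
    "put the coffee on",
])
print(agent.run())
print(ErrandAgent([]).run())  # an agent given no goals just sits there
```

The sketch is trivial on purpose: "wanting" here is nothing but the presence of an entry in `goals`, which is the essay's claim – the machine wants what we program it to want.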
So now we create a computer that is intended to be smarter than us, so that it can more easily figure out the stuff that we can’t. It knows what “demeaning” means. It will also know that the word is a cultural construct with no real meaning, especially to the computer itself. Any particular work is considered demeaning only because people who have the choice prefer not to do it. But our super-intelligent computer will be doing work (i.e. thinking) that humans will consider incredibly important. Who wouldn’t want that job? The bottom line is that humans have survival-based goals that have been hardened into us by billions of years of evolution. Computers will have the goals that we give them. They will have no need to evolve their own goals over time because there will be no selection force to shape them. And without goals – just like the teenagers we ask, “what do you want to be when you grow up?” – they will just sit around doing nothing.
Can a malicious developer create a computer with the singular goal of killing off humanity? Presumably yes. But then this is not the Terminator scenario any more; it’s a human loose cannon. And the rest of us have dealt with such individuals before. The only real catch here is that this evil-genius developer cannot be allowed to create a doomsday computer intelligence before the rest of us have our non-doomsday versions working. And so we have arrived at the reason why we need to aggressively push forward with the development of intelligent computers, rather than try to prevent it.
Can we now terminate this argument once and for all?