Consider, though, that over the thousands of years of man’s civilization, we have built a technology geared to the human shape. Products for human use are designed in size and form to accommodate the human body: how it bends and how long, wide, and heavy the various bending parts are. Machines are designed to fit the human reach and the width and position of human fingers.
We have only to consider the problems of human beings who happen to be a little taller or shorter than the norm (or even just left-handed) to see how important it is to have a good fit into our technology.
If, then, we want a directing device, one that can make use of human tools and machines and can fit into the technology, we would find it useful to make that device in the human shape, with all the bends and turns of which the human body is capable. Nor would we want it to be too heavy or too abnormally proportioned. Average in all respects would be best.
Then too, we relate to all nonhuman things by finding, or inventing, something human about them. We attribute human characteristics to our pets, and even to our automobiles. We personify nature and all the products of nature and, in earlier times, made human-shaped gods and goddesses out of them.
Surely, if we are to take on thinking partners (or, at the least, thinking servants) in the form of machines, we will be more comfortable with them, and we will relate to them more easily, if they are shaped like humans.
It will be easier to be friends with human-shaped robots than with specialized machines of unrecognizable shape. And I sometimes think that, in the desperate straits of humanity today, we would be grateful to have nonhuman friends, even if they are only friends we build ourselves.
Our Intelligent Tools
Robots don’t have to be very intelligent to be intelligent enough. If a robot can follow simple orders and do the housework, or run simple machines in a cut-and-dried, repetitive way, we would be perfectly satisfied.
Constructing a robot is hard because you must fit a very compact computer inside its skull, if it is to have a vaguely human shape. Making a sufficiently complex computer as compact as the human brain is also hard.
But robots aside, why bother making a computer that compact? The units that make up a computer have been getting smaller and smaller, to be sure: from vacuum tubes to transistors to tiny integrated circuits and silicon chips. Suppose that, in addition to making the units smaller, we also make the whole structure bigger.
A brain that gets too large would eventually begin to lose efficiency because nerve impulses don’t travel very quickly. Even the speediest nerve impulses travel at only about 3.75 miles a minute. A nerve impulse can flash from one end of the brain to the other in one four-hundred-fortieth of a second, but a brain 9 miles long, if we could imagine one, would require 2.4 minutes for a nerve impulse to travel its length. The added complexity made possible by the enormous size would fall apart simply because of the long wait for information to be moved and processed within it.
Computers, however, use electric impulses that travel at more than 11 million miles per minute. A computer 400 miles wide would still flash electric impulses from end to end in about one four-hundred-fortieth of a second. In that respect, at least, a computer of that asteroidal size could still process information as quickly as the human brain could.
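The arithmetic in the two paragraphs above is easy to check. A quick sketch in Python, using the essay’s own round figures (3.75 miles a minute for a nerve impulse, 11 million miles a minute for an electric impulse), confirms both travel times:

```python
# Signal travel time across a brain vs. a computer,
# using the essay's figures for signal speed.

NERVE_SPEED = 3.75            # miles per minute (speediest nerve impulse)
ELECTRIC_SPEED = 11_000_000   # miles per minute (electric impulse)

def travel_time_minutes(distance_miles, speed_miles_per_minute):
    """Time, in minutes, for a signal to cross the given distance."""
    return distance_miles / speed_miles_per_minute

# A hypothetical brain 9 miles long: how long for a nerve impulse to cross it?
brain_time = travel_time_minutes(9, NERVE_SPEED)
print(brain_time)  # 2.4 minutes, as the essay states

# A computer 400 miles wide: how long for an electric impulse to cross it?
computer_time_seconds = travel_time_minutes(400, ELECTRIC_SPEED) * 60
print(computer_time_seconds)  # roughly 1/440 of a second
```

The point the numbers make is that electric signals are some three million times faster than nerve impulses, so a computer can be enormously larger than a brain before distance becomes a bottleneck.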
If, therefore, we imagine computers being manufactured with finer and finer components, more and more intricately interrelated, and also imagine those same computers becoming larger and larger, might it not be that the computers would eventually become capable of doing all the things a human brain can do?
Is there a theoretical limit to how intelligent a computer can become?
I’ve never heard of any. It seems to me that each time we learn to pack more complexity into a given volume, the computer can do more. Each time we make a computer larger, while keeping each portion as densely complex as before, the computer can do more.
Eventually, if we learn how to make a computer sufficiently complex and sufficiently large, why should it not achieve a human intelligence?
Some people are sure to be disbelieving and say, “But how can a computer possibly produce a great symphony, a great work of art, a great new scientific theory?”
The retort I am usually tempted to make to this question is, “Can you?” But, of course, even if the questioner is ordinary, there are extraordinary people who are geniuses. They attain genius, however, only because atoms and molecules within their brains are arranged in some complex order. There’s nothing in their brains but atoms and molecules. If we arrange atoms and molecules in some complex order in a computer, the products of genius should be possible to it; and if the individual parts are not as tiny and delicate as those of the brain, we compensate by making the computer larger.
Some people may say, “But computers can only do what they’re programmed to do.”
The answer to that is, “True. But brains can do only what they’re programmed to do, by their genes. Part of the brain’s programming is the ability to learn, and that will be part of a complex computer’s programming.”
In fact, if a computer can be built to be as intelligent as a human being, why can’t it be made more intelligent as well?
Why not, indeed? Maybe that’s what evolution is all about. Over the space of three billion years, hit-and-miss development of atoms and molecules has finally produced, through glacially slow improvement, a species intelligent enough to take the next step in a matter of centuries, or even decades. Then things will really move.
But if computers become more intelligent than human beings, might they not replace us? Well, shouldn’t they? They may be as kind as they are intelligent and just let us dwindle by attrition. They might keep some of us as pets, or on reservations.
Then too, consider what we’re doing to ourselves right now-to all living things and to the very planet we live on. Maybe it is time we were replaced. Maybe the real danger is that computers won’t be developed to the point of replacing us fast enough.
Think about it!
I present this view only as something to think about. I consider a quite different view in “Intelligences Together” later in this collection.
The Laws of Robotics
It isn’t easy to think about computers without wondering if they will ever “take over.”
Will they replace us, make us obsolete, and get rid of us the way we got rid of spears and tinderboxes?
If we imagine computerlike brains inside the metal imitations of human beings that we call robots, the fear is even more direct. Robots look so much like human beings that their very appearance may give them rebellious ideas.
This problem faced the world of science fiction in the 1920s and 1930s, and many were the cautionary tales written of robots that were built and then turned on their creators and destroyed them.
When I was a young man I grew tired of that caution, for it seemed to me that a robot was a machine and that human beings were constantly building machines. Since all machines are dangerous, one way or another, human beings built safeguards into them.
In 1939, therefore, I began to write a series of stories in which robots were presented sympathetically, as machines that were carefully designed to perform given tasks, with ample safeguards built into them to make them benign.
In a story I wrote in October 1941, I finally presented the safeguards in the specific form of “The Three Laws of Robotics.” (I invented the word robotics, which had never been used before.)