
When Turing returned to Bletchley Park in April 1943, he became friends with a colleague named Donald Michie, and they spent many evenings playing chess in a nearby pub. As they discussed the possibility of creating a chess-playing computer, Turing approached the problem not by thinking of ways to use brute processing power to calculate every possible move; instead he focused on the possibility that a machine might learn how to play chess by repeated practice. In other words, it might be able to try new gambits and refine its strategy with every new win or loss. This approach, if successful, would represent a fundamental leap that would have dazzled Ada Lovelace: machines would be able to do more than merely follow the specific instructions given them by humans; they could learn from experience and refine their own instructions.

“It has been said that computing machines can only carry out the purposes that they are instructed to do,” he explained in a talk to the London Mathematical Society in February 1947. “But is it necessary that they should always be used in such a manner?” He then discussed the implications of the new stored-program computers that could modify their own instruction tables. “It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence.”90

When he finished his speech, his audience sat for a moment in silence, stunned by Turing’s claims. Likewise, his colleagues at the National Physical Laboratory were flummoxed by Turing’s obsession with making thinking machines. The director of the National Physical Laboratory, Sir Charles Darwin (grandson of the evolutionary biologist), wrote to his superiors in 1947 that Turing “wants to extend his work on the machine still further towards the biological side” and to address the question “Could a machine be made that could learn by experience?”91

Turing’s unsettling notion that machines might someday be able to think like humans provoked furious objections at the time—as it has ever since. There were the expected religious objections and also those that were emotional, both in content and in tone. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain,” declared a famous brain surgeon, Sir Geoffrey Jefferson, in the prestigious Lister Oration in 1949.92 Turing’s response to a reporter from the London Times seemed somewhat flippant, but also subtle: “The comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”93

The ground was thus laid for Turing’s second seminal work, “Computing Machinery and Intelligence,” published in the journal Mind in October 1950.94 In it he devised what became known as the Turing Test. He began with a clear declaration: “I propose to consider the question, ‘Can machines think?’ ” With a schoolboy’s sense of fun, he then invented a game—one that is still being played and debated—to give empirical meaning to that question. He proposed a purely operational definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine is not “thinking.”

Turing’s test, which he called “the imitation game,” is simple: An interrogator sends written questions to a human and a machine in another room and tries to determine from their answers which one is the human. A sample interrogation, he wrote, might be the following:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R–R8 mate.

In this sample dialogue, Turing did a few things. Careful scrutiny shows that the respondent, after thirty seconds, made a slight mistake in addition (the correct answer is 105,721). Is that evidence that the respondent was a human? Perhaps. But then again, maybe it was a machine cagily pretending to be human. Turing also flicked away Jefferson’s objection that a machine cannot write a sonnet; perhaps the answer above was given by a human who admitted to that inability. Later in the paper, Turing imagined the following interrogation to show the difficulty of using sonnet writing as a criterion of being human:
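The arithmetic slip is easy to verify. A few lines of Python confirm both the correct sum and the size of the respondent's error (the figures are the ones from Turing's sample interrogation above):

```python
# The addition from Turing's sample interrogation:
# Q: Add 34957 to 70764.  A: (after ~30 seconds) 105621.
correct = 34957 + 70764
given = 105621

print(correct)           # the true sum: 105721
print(correct - given)   # the respondent is off by exactly 100
```

Whether the thirty-second pause and the error of exactly one hundred point to a fallible human or to a machine feigning fallibility is, of course, precisely the ambiguity Turing wanted.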

Q: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

A: It wouldn’t scan.

Q: How about “a winter’s day.” That would scan all right.

A: Yes, but nobody wants to be compared to a winter’s day.

Q: Would you say Mr. Pickwick reminded you of Christmas?

A: In a way.

Q: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

A: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

Turing’s point was that it might not be possible to tell whether such a respondent was a human or a machine pretending to be a human.

Turing gave his own guess as to whether a computer might be able to win this imitation game: “I believe that in about fifty years’ time it will be possible to programme computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”

In his paper Turing tried to rebut the many possible challenges to his definition of thinking. He swatted away the theological objection that God has bestowed a soul and thinking capacity only upon humans, arguing that this “implies a serious restriction of the omnipotence of the Almighty.” He asked whether God “has freedom to confer a soul on an elephant if He sees fit.” Presumably so. By the same logic, which, coming from the nonbelieving Turing was somewhat sardonic, surely God could confer a soul upon a machine if He so desired.

The most interesting objection, especially for our narrative, is the one that Turing attributed to Ada Lovelace. “The Analytical Engine has no pretensions whatever to originate anything,” she wrote in 1843. “It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” In other words, unlike the human mind, a mechanical contrivance cannot have free will or come up with its own initiatives. It can merely perform as programmed. In his 1950 paper, Turing devoted a section to what he dubbed “Lady Lovelace’s Objection.”

His most ingenious parry to this objection was his argument that a machine might actually be able to learn, thereby growing into its own agent and able to originate new thoughts. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked. “If this were then subjected to an appropriate course of education, one would obtain the adult brain.” A machine’s learning process would be different from a child’s, he admitted. “It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. . . . One could not send the creature to school without the other children making excessive fun of it.” The baby machine would therefore have to be tutored some other way. Turing proposed a punishment and reward system, which would cause the machine to repeat certain activities and avoid others. Eventually such a machine could develop its own conceptions about how to figure things out.
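The punishment-and-reward scheme Turing sketches can be illustrated with a minimal toy program. This is not Turing's own design, merely a modern sketch of the idea: the "child machine" tries actions at random, a hypothetical tutor rewards some and punishes others, and the machine adjusts its preferences so that rewarded actions become more likely. The action names and numeric values here are invented for illustration.

```python
import random

def tutor(action):
    """Hypothetical tutor: rewards one action, punishes the other."""
    return 1.0 if action == "good" else -1.0

# The child machine starts with no preference between its two actions.
weights = {"good": 1.0, "bad": 1.0}

random.seed(0)
for _ in range(200):
    actions = list(weights)
    total = sum(weights.values())
    # Choose an action in proportion to its current weight.
    action = random.choices(actions, [weights[a] / total for a in actions])[0]
    # Reward strengthens the habit; punishment weakens it (floored so the
    # weight never drops to zero).
    weights[action] = max(0.05, weights[action] + 0.2 * tutor(action))

# After tutoring, the rewarded action dominates the machine's choices.
print(max(weights, key=weights.get))  # "good"
```

The machine was never told which action to take; it "developed its own conception" of what to do from the pattern of rewards and punishments, which is the kernel of Turing's answer to Lady Lovelace's objection.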