
Brian grabbed at his belt where he kept his phone — it was gone. “I must have put it down somewhere, didn’t hear it ring.”

She took out her own phone and hit the memory key to dial his number. There was a distant buzzing. She tracked it down beside the coffeemaker. Returned it to him in stony silence.

“Thanks.”

“It should be near you at all times. I had to go looking for your bodyguards — they told me you were still here.”

“Traitors,” he muttered.

“They’re as concerned as I am. Nothing is so important that you have to ruin your health for it.”

“Something is, Shelly, that’s just the point. You remember when you left last night, the trouble we were having with the new manager program? No matter what we did yesterday, the system would simply curl up and die. So I started it out with a very simple program of sorting colored blocks, then complicated it with blocks of different shapes as well as colors. The next time I looked, the manager program was still running — but all the other parts of the program seemed to have shut down. So I recorded what happened when I tried it again, and this time installed a natural language trace program to record all the manager’s commands to the other subunits. This slowed things down enough for me to discover what was going on. Let’s look at what happened.”

He turned on the recording he had made during the night. The screen showed the AI rapidly sorting colored blocks, then slowing — then barely moving until it finally stopped completely. The deep bass voice of Robin 3 poured rapidly from the speaker.

“…K-line 8997, response needed to input 10983 — you are too slow — respond immediately — inhibiting. Selecting subproblem 384. Response accepted from K-4093, inhibiting slower responses from K-3724 and K-2314. Selecting subproblem 385. Responses from K-2615 and K-1488 are in conflict — inhibiting both. Selecting…”

Brian switched it off. “Did you understand that?”

“Not really. Except that the program was busy inhibiting things—”

“Yes, and that was its problem. It was supposed to learn from experience, by rewarding successful subunits and inhibiting the ones that failed. But the manager’s threshold for success had been set so high that it would accept only perfect and instant compliance. So it was rewarding only the units that responded quickly, and disconnecting the slower ones — even if what they were trying to do might have been better in the end.”

“I see. And that started a domino effect, because as each subunit was inhibited, the units that depended on it were weakened?”

“Exactly. And then the responses of those other units grew slower, until they were inhibited in turn. Before long the manager program had killed them all off.”
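In code, the mechanism Brian has just described might look something like the minimal sketch below. All of the names, weights, and numbers here (Subunit, Manager, RESPONSE_DEADLINE) are invented for illustration; the point is only how a success bar set too high turns local inhibition into a global shutdown.

```python
# Hypothetical sketch of the failure Brian describes: a manager that
# rewards only the fastest subunits and disconnects the rest.
class Subunit:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed    # response time, in arbitrary ticks
        self.weight = 1.0     # connection strength to the manager

class Manager:
    RESPONSE_DEADLINE = 1.0   # success bar so high that only near-instant replies count

    def __init__(self, subunits):
        self.subunits = subunits

    def step(self):
        for u in [u for u in self.subunits if u.weight > 0]:
            if u.speed <= self.RESPONSE_DEADLINE:
                u.weight += 0.1            # reward the fast responder
            else:
                u.weight = 0.0             # inhibit (disconnect) the slow one
                for other in self.subunits:
                    if other is not u:     # domino effect: units that relied on
                        other.speed += 0.3 # the lost one now answer more slowly
        return [u.name for u in self.subunits if u.weight > 0]

units = [Subunit(f"K-{i}", s) for i, s in enumerate([0.6, 0.8, 0.9, 1.4, 1.9])]
manager = Manager(units)
for tick in range(4):
    print(f"tick {tick}: surviving subunits = {manager.step()}")
# Within a few ticks the survivor list shrinks to nothing: the manager
# has killed them all off, exactly the shutdown Brian saw on the screen.
```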

“What a horrible thought! You are saying, really, that it committed suicide.”

“Not at all.” His voice was hoarse; fatigue had abraded his temper. “When you say that, you are just being anthropomorphic. A machine is not a person. What on earth is horrible about one circuit disconnecting another circuit? Christ — there’s nothing here but a bunch of electronic components and software. Since there are no human beings involved, nothing horrible can possibly occur, that’s pretty obvious—”

“Don’t speak to me that way or use that tone of voice!”

Brian’s face reddened with anger, then he dropped his eyes. “I’m sorry, I take that back. I’m a little tired, I think.”

“You think — I know. Apology accepted. And I agree, I was being anthropomorphic. It wasn’t what you said to me — it was how you said it. Now let’s stop snapping at each other and get some fresh air. And get you to bed.”

“All right — but let me look at this first.”

Brian went straight to the terminal and proceeded to retrace the robot’s internal computations. Chart after chart appeared on the screen. Eventually he nodded gloomily. “Another bug, of course. It only showed up after I fixed the last one. You remember, I set things up to suppress excessive inhibition, so that the robot would not spontaneously shut itself down. But now it goes to the opposite extreme. It doesn’t know when it ought to stop.

“This AI seems to be pretty good at answering straightforward questions, but only when the answer can be found with a little shallow reasoning. But you saw what happened when it didn’t know the answer. It began random searching, lost its way, didn’t know when to stop. You might say that it didn’t know what it didn’t know.”

“It seemed to me that it simply went mad.”

“Yes, you could say that. We have lots of words for human-mind bugs — paranoias, catatonias, phobias, neuroses, irrationalities. I suppose we’ll need new sets of words for all the new bugs that our robots will have. And we have no reason to expect that any new version should work the first time it’s turned on. In this case, what happened was that it tried to use all of its Expert Systems together on the same problem. The manager wasn’t strong enough to suppress the inappropriate ones. All those jumbles of words showed that it was grasping at any and every association that might conceivably have guided it toward the problem it needed to solve — no matter how unlikely on the face of it. It also showed that when one approach failed, the thing didn’t know when to give up. Even if this AI worked, there is no rule that it had to be sane on our terms.”

Brian rubbed his bristly jaw and looked at the now silent machine. “Let’s look more closely here.” He pointed to the chart on the screen. “You can see right here what happened this time. In Rob-3.1 there was too much inhibition, so everything shut down. So I changed these parameters and now there’s not enough inhibition.”

“So what’s the solution?”

“The answer is that there is no answer. No, I don’t mean anything mystical. I mean that the manager here has to have more knowledge. Precisely because there’s no magic, no general answer. There’s no simple fix that will work in all cases — because all cases are different. And once you recognize that, everything is much clearer! This manager must be knowledge-based. And then it can learn what to do!”

“Then you’re saying that we must make a manager that learns which strategy to use in each situation, by remembering what worked in the past?”

“Exactly. Instead of trying to find a fixed formula that always works, let’s make it learn from experience, case by case. Because we want a machine that’s intelligent on its own, so that we don’t have to hang around forever, fixing it whenever anything goes wrong. Instead we must give it some ways to learn to fix new bugs as soon as they come up. By itself, without our help.”
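Sketched in code, the case-by-case learner Brian wants might look like the fragment below; the class name LearningManager, the strategy labels, and the scoring scheme are all assumptions made for illustration.

```python
import random
from collections import defaultdict

# Hedged sketch of a knowledge-based manager: rather than one fixed
# formula, it remembers which strategy succeeded in which situation.
class LearningManager:
    def __init__(self, strategies):
        self.strategies = strategies
        # scores[situation][strategy] -> net record of past successes
        self.scores = defaultdict(lambda: defaultdict(float))

    def choose(self, situation):
        known = self.scores[situation]
        if known:
            return max(known, key=known.get)   # best past performer here
        return random.choice(self.strategies)  # no experience yet: explore

    def record(self, situation, strategy, succeeded):
        self.scores[situation][strategy] += 1.0 if succeeded else -1.0

manager = LearningManager(["sort_by_color", "sort_by_shape"])
tried = manager.choose("colored blocks")       # first encounter: a guess
manager.record("colored blocks", tried, succeeded=True)
print(manager.choose("colored blocks"))        # now it repeats what worked
```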

“So now I know just what to do. Remember when it seemed stuck in a loop, repeating the same things about the color red? It was easy for us to see that it wasn’t making any progress. It couldn’t see that it was stuck, precisely because of being stuck. It couldn’t jump out of that loop to see what it was doing on a larger scale. We can fix that by adding a recorder to remember the history of what it has been doing recently. And also a clock that interrupts the program frequently, so that it can look at that recording to see if it has been repeating itself.”
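The recorder and clock could be as simple as the sketch below; the history length, the check interval, and the repetition test are invented parameters, not anything fixed by the story.

```python
from collections import deque

HISTORY = deque(maxlen=20)   # the recorder: recent actions only
CHECK_EVERY = 5              # the clock: how often to interrupt and self-inspect

def stuck_in_loop(history, period=2):
    # Stuck means the same short cycle of actions repeated three times over.
    h = list(history)
    return len(h) >= 3 * period and (
        h[-period:] == h[-2 * period:-period] == h[-3 * period:-2 * period]
    )

actions = ["red", "blue", "red", "blue", "red", "blue", "red", "blue"]
for step, action in enumerate(actions):
    HISTORY.append(action)
    if step % CHECK_EVERY == 0 and stuck_in_loop(HISTORY):
        print(f"step {step}: repeating myself about {action}, interrupting")
        break
```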

“Or even better, we could add a second processor that is always running at the same time, looking at the first one. A B-brain watching an A-brain.”

“And perhaps even a C-brain to see if the B-brain has got stuck. Damn! I just remembered that one of my old notes said, ‘Use the B-brain here to suppress looping.’ I certainly wish I had written clearer notes the first time around. I’d better get started on designing that B-brain.”
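A B-brain of the kind they are discussing can be sketched as a second thread that reads the first one's trace from outside and interrupts it when the trace repeats. Every name and timing below is an illustrative assumption, not the novel's actual design.

```python
import threading
import time
import queue

trace = queue.Queue()      # the A-brain's thoughts, visible from outside
halt = threading.Event()

def a_brain():
    # Stuck: keeps producing the same thought, unable to see that it is stuck.
    while not halt.is_set():
        trace.put("considering: red")
        time.sleep(0.01)

def b_brain():
    recent = []
    while not halt.is_set():
        recent = (recent + [trace.get()])[-10:]
        # Ten identical thoughts in a row: the A-brain is looping.
        if len(recent) == 10 and len(set(recent)) == 1:
            print("B-brain: A-brain is looping, suppressing it")
            halt.set()

a = threading.Thread(target=a_brain)
b = threading.Thread(target=b_brain)
a.start(); b.start()
a.join(); b.join()
```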

“But you’d better not do it now! In your present state, you’ll just make it worse.”

“You’re right. Bedtime. I’ll get there, don’t worry — but I want to get something to eat first.”

“I’ll go with you, have a coffee.”