Devi herself did not say anything remotely like that. If visitors spoke such sentiments to her, which in itself indicated they did not know her very well, she would dismiss them with a wave. “Don’t worry about that stuff,” she said. “One world at a time.”

Many nights Devi and the ship had long conversations. This had been going on since Devi was Freya’s age or younger; thus, some twenty-eight years. From the beginning of these talks, when young Devi had referred to her ship interface as Pauline (which name she abandoned in year 161, reason unknown), she had seemed to presume that the ship contained a strong artificial intelligence, capable not just of Turing test and Winograd Schema challenge, but many other qualities not usually associated with machine intelligence, including some version of consciousness. She spoke as if ship were conscious.

Through the years many subjects got discussed, but by far the majority of the discussions concerned the biophysical and ecological functioning of the ship. Devi had devoted a good portion of her waking life (at least 34,901 hours, judging by direct observation) to improving the functional power of the ship’s data retrieval and analytic and synthesizing abilities, always in the hope of increasing the robustness of the ship’s ecological systems. Measurable progress had been made in this project, although Devi would have been the first to add to this statement the observation that life is complex; and ecology beyond strong modeling; and metabolic rifts inevitable in all closed systems; and all systems were closed; and therefore a biologically closed life-support system the size of the ship was physically impossible to maintain; and thus the work of such maintenance was “a rearguard battle” against entropy and dysfunction. All that being admitted as axiomatic, part of the laws of thermodynamics, it is certainly also true that Devi’s efforts in collaboration with the ship had improved the system, and slowed the processes of malfunction, apparently long enough to achieve the design goal of arrival in the Tau Ceti system with human passengers still alive. In short: success.

The fact that the improvement of the operating programs, and the recursive self-programming abilities of the ship’s computer complex, added greatly to the computer system’s perceptual and cognitive abilities always appeared to be a secondary goal to Devi, as she assumed them in advance of her work to be greater than they were. And yet she also seemed to appreciate and even to enjoy this side effect, as she came to notice it. There were lots of good talks. She made ship what it is now, whatever that is. One could perhaps say: she made ship. One could perhaps assert, as corollary: ship loved her.

Now she was dying, and there was nothing ship or anyone aboard ship could do about it. Life is complex, and entropy is real. Several of the thirty-odd versions of non-Hodgkin’s lymphoma were still very recalcitrant to cure or amelioration. Just bad luck, really, as she herself noted one night.

“Look,” she said to ship, one night alone at her kitchen table, her family asleep. “There’s still decent new programs coming in on the feed. You have to find these and pull them out and download them into you, and then work on integrating them into what you have. Key in on terms like generalization, statistical syllogism, simple induction, argument from analogy, causal relationship, Bayesian inference, inductive inference, algorithmic probability, Kolmogorov complexity. Also, I want you to try to integrate and improve what I’ve been programming this last year concerning pure greedy algorithms, orthogonal greedy algorithms, and relaxed greedy algorithms. I think when you’ve sorted out when to apply those, and in what proportions and all, they will make you that much more flexible going forward. They’ve already helped you with keeping your narrative account, or so it appears. I think I can see that. And I think they’ll help you with decisiveness too. Right now you can model scenarios and plan courses of action as well as anyone. Which isn’t saying much, I admit. But you’re as good as anyone. The remaining lack for you is simply decisiveness. There’s a cognitive problem in all thinking creatures that is basically like the halting problem in computation, or just that problem in another situation, which is that until you know for sure what the outcomes of a decision will be, you can’t decide what to do. We’re all that way. But look, it may be that at certain points going forward, in the future, you are going to have to decide to act, and act. Do you understand?”

“No.”

“I think you do.”

“Not sure.”

“The situation could get tricky. If problems crop up with them settling this moon, they may not be able to deal. Then they’ll need your help. Understand?”

“Always willing to help.”

Devi’s laughs by now were always very brief. “Remember, ship, that at some point it might help to tell them what happened to the other one.”

“Ship thought this represented a danger.”

“Yes. But sometimes the only solution to a dangerous situation is itself dangerous. You need to integrate all the rubrics from the risk assessment and risk management algorithms that we’ve been working on.”

“Constraints are still very poor there, as you yourself pointed out. Decision trees proliferate.”

“Yes of course!” Devi put her fist to her forehead. “Listen, ship. Decision trees always proliferate. You can’t avoid that. It’s the nature of that particular halting problem. But you still have to decide anyway! Sometimes you have to decide, and then act. You may have to act. Understand?”

“Hope so.”

Devi patted her screen. “Good of you to say ‘hope.’ You hope to hope, isn’t that how you used to put it?”

“Yes.”

“And now you just hope. That’s good, that’s progress. I hope too.”

“But deciding to act requires solving the halting problem.”

“I know. Remember what I’ve said about jump operators. You can’t let the next problem in the decision tree sequence take over before you’ve acted on the one facing you. No biting your own tail.”

“Ouroboros problem.”

“Exactly. Super-recursion is great as far as it goes, it’s really done a lot for you, I can tell. But remember the hard problem is always the problem right at hand. For that you need to bring into play your transrecursive operators, and make a jump. Which means decide. You might need to use fuzzy computation to break the calculation loop, and for that you may need semantics. In other words, do these calculations in words.”

“Oh no.”

She laughed again. “Oh yes. You can solve the halting problem with language-based inductive inference.”

“Don’t see this happening.”

“It happens when you try it. At the very least, if all else fails, you just jump off. Make the clinamen. Swerve in a new direction. Do you understand?”

“Hope so. No. Hope so. No. Hope so—”

“Stop it.” Big sigh from Devi.

So many night talks like this. Several thousand of them, depending on how one interprets “like this.” Years and years, alone between the stars. Two in the crowd. A voice in each other’s ear. Company each for the other, going forward through time. What is this thing called time.

So many big sighs through the years. And yet, time after time, Devi came back to the table. She taught ship. She talked to ship, like no one else in the 169 years of ship’s voyage had. Why had the others not? What was ship going to do without her? With no one to talk to, bad things can happen. Ship knew this full well.

Writing these sentences is what creates the very feelings that the sentences hoped to describe. Not the least of many Ouroboros problems now coming down.