
The last mystery to be cleared up was why the swarms always returned to the laboratory. It never made any sense to me. I kept worrying about it because it was such an unreasonable goal. It didn't fit the PREDPREY formulations. Why would a predator keep returning to a particular location?

Of course, in retrospect there was only one possible answer. The swarms were intentionally programmed to return. The goal was explicitly defined by the programmers themselves. But why would anybody program in a goal like that?

I didn't know until a few hours ago.

The code that Ricky showed me wasn't the code they had actually used on the particles. He couldn't show me the real code, because I would have known immediately what had been done. Ricky never told me. Nobody ever told me.

What bothers me most is an email I found on Julia's hard drive earlier today. It was from her to Ricky Morse, with a CC to Larry Handler, the head of Xymos, outlining the procedure to follow to get the camera swarm to work in high wind. The plan was to intentionally release a swarm into the environment.

And that's exactly what they did.

They pretended it was an accidental release, caused by missing air filters. That's why Ricky gave me that long guided tour, and the song and dance about the contractor and ventilation system. But none of what he told me was true. The release was planned. It was intentional from the beginning.

When they couldn't make the swarm work in high wind, they tried to engineer a solution. They failed. The particles were just too small and light, and arguably too stupid as well. They had design flaws from the beginning, and now they couldn't solve them. Their whole multimillion-dollar defense project was going down the drain. So they decided to make the swarm solve the problem for them.

They reconfigured the nanoparticles to add solar power and memory. They rewrote the particle program to include a genetic algorithm. And they released the particles to reproduce and evolve, and see if the swarm could learn to survive on its own. And they succeeded.

It was so dumb, it was breathtaking. I didn't understand how they could have embarked on this plan without recognizing the consequences. Like everything else I'd seen at Xymos, it was jerry-built, half-baked, concocted in a hurry to solve present problems, with never a thought for the future. That might be typical corporate thinking when you were under the gun, but with technologies like these it was dangerous as hell.

But of course, the real truth was more complicated. The technology itself invited the behavior. Distributed agent systems ran by themselves. That was how they functioned. That was the whole point: you set them up and let them go. You got in the habit of doing that. You got in the habit of treating agent networks that way. Autonomy was the point of it all. But it was one thing to release a population of virtual agents inside a computer's memory to solve a problem. It was another thing to set real agents free in the real world. They just didn't see the difference. Or they didn't care to see it.

And they set the swarm free.

The technical term for this is "self-optimization." The swarm evolves on its own, the less successful agents die off, and the more successful agents reproduce the next generation. After ten or a hundred generations, the swarm evolves toward a best solution. An optimum solution. This kind of thing is done all the time inside the computer. It's even used to generate new computer algorithms. Danny Hillis did one of the first of those runs years back, to optimize a sorting algorithm. To see if the computer could figure out how to make itself work better. The program found a new method. Other people quickly followed his lead. But it hasn't been done with autonomous robots in the real world. As far as I know, this was the first time. Maybe it's already happened, and we just didn't hear about it. Anyway, I'm sure it'll happen again.
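To make the mechanism concrete, here is a minimal sketch of that kind of self-optimization loop, written in Python. Everything in it is invented for illustration: the bit-string genomes, the toy fitness function, the population size and mutation rate. None of it is the real Xymos code, which I was never shown. The point is only the structure: score the agents, let the less successful die off, let the more successful reproduce with variation, and repeat until an optimum emerges.

# Toy generational genetic algorithm: the "self-optimization" loop
# described above, run entirely inside a computer's memory.
# All names and parameters are illustrative stand-ins.
import random

GENOME_LEN = 20        # bits per agent
POP_SIZE = 100         # agents per generation
MUTATION_RATE = 0.01   # chance each bit flips
GENERATIONS = 100

def fitness(genome):
    # Toy objective: count of 1-bits. A real system would score
    # how well the agent performed its task in the environment.
    return sum(genome)

def select(population):
    # Tournament selection: the more successful of two random agents
    # gets to reproduce; the less successful tends to die off.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_a, parent_b):
    # Single-point crossover: the child inherits part of each parent.
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Occasionally flip a bit, introducing the variation evolution needs.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def run():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]
        best = max(population, key=fitness)
        if fitness(best) == GENOME_LEN:   # optimum reached
            return gen, best
    return GENERATIONS, max(population, key=fitness)

if __name__ == "__main__":
    generations_used, best = run()
    print(f"best fitness {fitness(best)} after {generations_used} generations")

Run it and the best score typically climbs to the maximum within a few dozen generations. That is the whole seduction of the method: the loop finds the answer, and nobody has to understand how.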

Probably soon.

It's two in the morning. The kids finally stopped vomiting. They've gone to sleep. They seem to be peaceful. The baby is asleep. Ellen is still pretty sick. I must have dozed off again. I don't know what woke me. I see Mae coming up the hill from behind my house. She's with the guy in the silver suit, and the rest of the SSVT team. She's walking toward me. I can see that she's smiling. I hope her news is good.

I could use some good news right now.

Julia's original email says, "We have nothing to lose." But in the end they lost everything: their company, their lives, everything. And the ironic thing is, the procedure worked. The swarm actually solved the problem they had set for it.

But then it kept going, kept evolving.

And they let it.

They didn't understand what they were doing.

I'm afraid that will be on the tombstone of the human race.

I hope it's not.

We might get lucky.

Bibliography

This novel is entirely fictitious, but the underlying research programs are real. The following references may assist the interested reader to learn more about the growing convergence of genetics, nanotechnology, and distributed intelligence.

Adami, Christoph. Introduction to Artificial Life. New York: Springer-Verlag, 1998.

Bedau, Mark A., John S. McCaskill, Norman H. Packard, and Steen Rasmussen. Artificial Life VII: Proceedings of the Seventh International Conference on Artificial Life. Cambridge, Mass.: MIT Press, 2000.

Bentley, Peter, ed. Evolutionary Design by Computers. San Francisco: Morgan Kaufmann, 1999.

Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. New York: Oxford Univ. Press, 1999.

Brams, Steven J. Theory of Moves. New York: Cambridge Univ. Press, 1994.

Brooks, Rodney A. Cambrian Intelligence. Cambridge, Mass.: MIT Press, 1999.

Camazine, Scott, Jean-Louis Deneubourg, Nigel R. Franks, James Sneyd, Guy Theraulaz, and Eric Bonabeau. Self-Organization in Biological Systems. Princeton, N.J.: Princeton Univ. Press, 2001. See especially chapter 19.

Caro, T. M., and Clare D. Fitzgibbon. "Large Carnivores and Their Prey." In Crawley, Natural Enemies, 1992.

Crandall, B. C. "Molecular Engineering," in B. C. Crandall, ed., Nanotechnology, Cambridge, Mass.: MIT Press, 1996.

Crawley, Michael J., ed. Natural Enemies: The Population Biology of Predators, Parasites, and Diseases. London: Blackwell, 1992.

Davenport, Guy, trans. 7 Greeks. New York: New Directions, 1995.

Dobson, Andrew P., Peter J. Hudson, and Annarie M. Lyles. "Macroparasites," in Crawley, Natural Enemies, 1992.

Drexler, K. Eric. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: John Wiley & Sons, 1992.

--. "Introduction to Nanotechnology," in Krummenacker and Lewis, Prospects in Nanotechnology.

Ewald, Paul W. Evolution of Infectious Disease. New York: Oxford Univ. Press, 1994.

Ferber, Jacques. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Reading, Mass.: Addison-Wesley, 1999.

Goldberg, David E. Genetic Algorithms in Search, Optimization and Machine Learning. Boston: Addison-Wesley, 1989.

Hassell, Michael P. The Dynamics of Competition and Predation. Institute of Biology, Studies in Biology No. 72, London: Edward Arnold, 1976.

Hassell, Michael P., and H. Charles J. Godfray. "The Population Biology of Insect Parasitoids," in Crawley, Natural Enemies, 1992.

Holland, John H. Hidden Order: How Adaptation Builds Complexity. Cambridge, Mass.: Perseus, 1996.

Kelly, Kevin. Out of Control. Cambridge, Mass.: Perseus, 1994.

Kennedy, James, and Russell C. Eberhart. Swarm Intelligence. San Diego: Academic Press, 2001.

Koza, John R. "Artificial Life: Spontaneous Emergence of Self-Replicating and Evolutionary Self-Improving Computer Programs," in Langton, ed., Artificial Life III.