Closures Simulate Subroutines

One of the problems with using Web pages as a UI is the inherent statelessness of Web sessions. We got around this by using lexical closures to simulate subroutine-like behavior. If you understand continuations, one way to explain what we did would be to say that we wrote our software in continuation-passing style. When most web-based software generates a link on a page, it tends to be thinking, if the user clicks on this link, I want to call this cgi script with these arguments. When our software generated a link, it could think, if the user clicks on this link, I want to run this piece of code. And the piece of code could be an arbitrary piece of code, possibly (in fact, usually) containing free variables whose values came from the surrounding context.

The way we did this was to write a macro that took an initial argument expected to be a closure, followed by a body of code. The closure would get stored in a global hash table under a unique id, and whatever output was generated by the code in the body would appear within a link whose url contained that hash key. If that link was the next one clicked on, our software would find and call the corresponding closure, and the chain would continue. Effectively we were writing cgi scripts on the fly, except that they were closures that could refer to the surrounding context.

So far this sounds very theoretical, so let me give you an example of where this technique made an obvious difference. One of the things you often want to do in Web-based applications is edit an object with various types of properties. Many of the properties of an object can be represented as form fields or menus. If you're editing an object representing a person, for example, you might get a field for their name, a menu choice for their title, and so on.
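In Common Lisp, the core of such a mechanism might look something like this. It's a minimal sketch, not Viaweb's actual code; the names (w/link, handler-url, handle-click) and the url scheme are invented for the example.

    ;; A minimal sketch of the idea.  *handlers* maps a generated id
    ;; to the closure to run when the corresponding link is clicked.
    (defvar *handlers* (make-hash-table :test #'equal))
    (defvar *counter* 0)

    (defun handler-url (fn)
      "Store closure FN under a fresh id; return a url embedding the id."
      (let ((id (format nil "k~d" (incf *counter*))))
        (setf (gethash id *handlers*) fn)
        (format nil "/x?id=~a" id)))

    (defmacro w/link (closure &body body)
      "Wrap the output of BODY in a link whose url, when followed,
    calls CLOSURE -- a cgi script written on the fly."
      `(format nil "<a href=\"~a\">~a</a>"
               (handler-url ,closure)
               (progn ,@body)))

    (defun handle-click (id)
      "Find and call the closure stored under ID to produce the next page."
      (let ((fn (gethash id *handlers*)))
        (when fn (funcall fn))))

So (w/link (lambda () (render-next-page)) "Change") would emit a link that, when clicked, runs the closure; and because the closure can capture free variables, the "script" behind the link knows everything the page that generated it knew. (render-next-page is likewise a made-up name.)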

Now what happens when some object has a property that is a color? If you use ordinary cgi scripts, where everything has to happen on one form with an Update button at the bottom, you are going to have a hard time. You could use a text field and make the user type an RGB value into it, but end-users don't like that. Or you could have a menu of possible colors, but then you have to limit the possible colors; even just to offer the standard Web colormap, you'd need 216 menu items with barely distinguishable names.

What we were able to do, in Viaweb, was display a color as a swatch representing the current value, followed by a button that said "Change." If the user clicked on the Change button they'd go to a page with an imagemap of colors to choose among. And after they chose a color, they'd be back on the page where they were editing the object's properties, with that color changed. This is what I mean about simulating subroutine-like behavior. The software could behave as if it were returning from having chosen a color. It wasn't, of course; it was making a new cgi call that looked like going back up a stack. But by using closures, we could make it look to the user, and to ourselves, as if we were just doing a subroutine call. We could write the code to say, if the user clicks on this link, go to the color selection page, and then come back here. This was just one of the places where we took advantage of this possibility. It made our software visibly more sophisticated than that of our competitors.
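Here is how the color example might be wired up, continuing the hypothetical sketch above (it reuses that sketch's w/link macro; the page functions and the plist representation of the object are likewise invented):

    ;; Builds on the sketch above.  Both pages are just functions that
    ;; return html; the closures capture OBJ, so picking a color can
    ;; update it and re-render the edit page -- the "subroutine return".
    (defun color-chooser (on-choose)
      "A page of color links; each one calls ON-CHOOSE with its color."
      (format nil "<p>Pick a color:~{ ~a~}</p>"
              (mapcar (lambda (c)
                        (w/link (lambda () (funcall on-choose c)) c))
                      '("red" "green" "blue"))))

    (defun edit-page (obj)
      "The property-editing page, with a Change link next to the color."
      (format nil "<p>Color: ~a ~a</p>"
              (getf obj :color)            ; stands in for the swatch
              (w/link (lambda ()
                        (color-chooser
                         (lambda (c)
                           (setf (getf obj :color) c)
                           (edit-page obj)))) ; the "return"
                      "Change")))

Clicking Change runs the outer closure and shows the chooser; clicking a color runs on-choose, which stores the new value and re-renders the edit page. No state lives in the url or the form; it all lives in the closures.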

Beating the Averages

(This article is derived from a talk given at the 2001 Franz Developer Symposium.)

In the summer of 1995, my friend Robert Morris and I started a startup called Viaweb. Our plan was to write software that would let end users build online stores. What was novel about this software, at the time, was that it ran on our server, using ordinary Web pages as the interface.

A lot of people could have been having this idea at the same time, of course, but as far as I know, Viaweb was the first Web-based application. It seemed such a novel idea to us that we named the company after it: Viaweb, because our software worked via the Web, instead of running on your desktop computer.

Another unusual thing about this software was that it was written primarily in a programming language called Lisp. It was one of the first big end-user applications to be written in Lisp, which up till then had been used mostly in universities and research labs. [1]

The Secret Weapon

Eric Raymond has written an essay called "How to Become a Hacker," and in it, among other things, he tells would-be hackers what languages they should learn. He suggests starting with Python and Java, because they are easy to learn. The serious hacker will also want to learn C, in order to hack Unix, and Perl for system administration and cgi scripts. Finally, the truly serious hacker should consider learning Lisp:

Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.

This is the same argument you tend to hear for learning Latin. It won't get you a job, except perhaps as a classics professor, but it will improve your mind, and make you a better writer in languages you do want to use, like English.

But wait a minute. This metaphor doesn't stretch that far. The reason Latin won't get you a job is that no one speaks it. If you write in Latin, no one can understand you. But Lisp is a computer language, and computers speak whatever language you, the programmer, tell them to.

So if Lisp makes you a better programmer, like he says, why wouldn't you want to use it? If a painter were offered a brush that would make him a better painter, it seems to me that he would want to use it in all his paintings, wouldn't he? I'm not trying to make fun of Eric Raymond here. On the whole, his advice is good. What he says about Lisp is pretty much the conventional wisdom. But there is a contradiction in the conventional wisdom: Lisp will make you a better programmer, and yet you won't use it.

Why not? Programming languages are just tools, after all. If Lisp really does yield better programs, you should use it. And if it doesn't, then who needs it?

This is not just a theoretical question. Software is a very competitive business, prone to natural monopolies. A company that gets software written faster and better will, all other things being equal, put its competitors out of business. And when you're starting a startup, you feel this very keenly. Startups tend to be an all-or-nothing proposition. You either get rich, or you get nothing. In a startup, if you bet on the wrong technology, your competitors will crush you.

Robert and I both knew Lisp well, and we couldn't see any reason not to trust our instincts and go with Lisp. We knew that everyone else was writing their software in C++ or Perl. But we also knew that that didn't mean anything. If you chose technology that way, you'd be running Windows. When you choose technology, you have to ignore what other people are doing, and consider only what will work the best.

This is especially true in a startup. In a big company, you can do what all the other big companies are doing. But a startup can't do what all the other startups do. I don't think a lot of people realize this, even in startups.

The average big company grows at about ten percent a year. So if you're running a big company and you do everything the way the average big company does it, you can expect to do as well as the average big company-- that is, to grow about ten percent a year.

The same thing will happen if you're running a startup, of course. If you do everything the way the average startup does it, you should expect average performance. The problem here is, average performance means that you'll go out of business. The survival rate for startups is way less than fifty percent. So if you're running a startup, you had better be doing something odd. If not, you're in trouble.