I admit this is an extreme case. ITA's hackers seem to be unusually smart, and C is a pretty low-level language. But in a competitive market, even a differential of two or three to one would be enough to guarantee that you'd always be behind.

13.6. A Recipe

This is the kind of possibility that the pointy-haired boss doesn't even want to think about. And so most of them don't. Because, you know, when it comes down to it, the pointy-haired boss doesn't mind if his company gets their ass kicked, so long as no one can prove it's his fault. The safest plan for him personally is to stick close to the center of the herd.

Within large organizations, the phrase used to describe this approach is "industry best practice." Its purpose is to shield the pointy-haired boss from responsibility: if he chooses something that is "industry best practice," and the company loses, he can't be blamed. He didn't choose, the industry did.

I believe this term was originally used to describe accounting methods and so on. What it means, roughly, is don't do anything weird. And in accounting that's probably a good idea. The terms "cutting-edge" and "accounting" do not sound good together. But when you import this criterion into decisions about technology, you start to get the wrong answers.

Technology often should be cutting-edge. In programming languages, as Erann Gat has pointed out, what "industry best practice" actually gets you is not the best, but merely the average. When a decision causes you to develop software at a fraction of the rate of more aggressive competitors, "best practice" does not really seem the right name for it.

So here we have two pieces of information that I think are very valuable. In fact, I know it from my own experience. Number 1, languages vary in power. Number 2, most managers deliberately ignore this. Between them, these two facts are literally a recipe for making money. ITA is an example of this recipe in action. If you want to win in a software business, just take on the hardest problem you can find, use the most powerful language you can get, and wait for your competitors' pointy-haired bosses to revert to the mean.

13.7. Appendix: Power

As an illustration of what I mean about the relative power of programming languages, consider the following problem. We want to write a function that generates accumulators—a function that takes a number n, and returns a function that takes another number i and returns n incremented by i. (That's incremented by, not plus. An accumulator has to accumulate.)
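
Concretely, the behavior we want looks like this (a usage sketch of my own, in Python syntax; foo stands for whichever of the implementations below you pick, and the name acc is mine):

acc = foo (10)   # an accumulator starting at 10
acc (3)          # => 13
acc (2)          # => 15 (the state persists: it accumulates)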

In Common Lisp this would be:

(defun foo (n)
  (lambda (i) (incf n i)))

In Ruby it's almost identical:

def foo (n)
  lambda {|i| n += i }
end

Whereas in Perl 5 it's

sub foo {
  my ($n) = @_;
  sub {$n += shift}
}

which has more elements than the Lisp/Ruby version because you have to extract parameters manually in Perl.

In Smalltalk the code is also slightly longer than in Lisp and Ruby:

foo: n
  |s|
  s := n.
  ^[:i| s := s+i. ]

because although in general lexical variables work, you can't do an assignment to a parameter, so you have to create a new variable s to hold the accumulated value.

In Javascript the example is, again, slightly longer, because Javascript retains the distinction between statements and expressions, so you need explicit return statements to return values:

function foo (n) {
  return function (i) {
    return n += i } }

(To be fair, Perl also retains this distinction, but deals with it in typical Perl fashion by letting you omit returns.)

If you try to translate the Lisp/Ruby/Perl/Smalltalk/Javascript code into Python you run into some limitations. Because Python doesn't fully support lexical variables, you have to create a data structure to hold the value of n. And although Python does have a function data type, there is no literal representation for one (unless the body is only a single expression), so you need to create a named function to return. This is what you end up with:

def foo (n):
  s = [n]
  def bar (i):
    s[0] += i
    return s[0]
  return bar
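
A quick check that the workaround behaves as specified (usage lines of my own, assuming the definition above):

acc = foo (10)
print (acc (3))   # 13
print (acc (2))   # 15; the one-element list s is what carries the state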

Python users might legitimately ask why they can't just write

def foo (n):
  return lambda i: return n += i

or even

def foo (n):
  lambda i: n += i

and my guess is that they probably will, one day. (But if they don't want to wait for Python to evolve the rest of the way into Lisp, they could always just...)

In OO languages, you can, to a limited extent, simulate a closure (a function that refers to variables defined in surrounding code) by defining a class with one method and a field to replace each variable from an enclosing scope. This makes the programmer do the kind of code analysis that would be done by the compiler in a language with full support for lexical scope, and it won't work if more than one function refers to the same variable, but it is enough in simple cases like this.

Python experts seem to agree that this is the preferred way to solve the problem in Python, writing either

def foo (n):
  class acc:
    def __init__ (self, s):
      self.s = s
    def inc (self, i):
      self.s += i
      return self.s
  return acc (n).inc

or

class foo:
  def __init__ (self, n):
    self.n = n
  def __call__ (self, i):
    self.n += i
    return self.n
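
Either class version is then used just like the closure-based one (again a usage sketch of mine, assuming the definitions above; with the __call__ version, the instance itself is callable):

acc = foo (10)
print (acc (3))   # 13
print (acc (2))   # 15; the state lives in the field self.n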

I include these because I wouldn't want Python advocates to say I was misrepresenting the language, but both seem to me more complex than the first version. You're doing the same thing, setting up a separate place to hold the accumulator; it's just a field in an object instead of the head of a list. And the use of these special, reserved field names, especially __call__, seems a bit of a hack.

In the rivalry between Perl and Python, the claim of the Python hackers seems to be that Python is a more elegant alternative to Perl, but what this case shows is that power is the ultimate elegance: the Perl program is simpler (has fewer elements), even if the syntax is a bit uglier.

How about other languages? In the other languages mentioned here—Fortran, C, C++, Java, and Visual Basic—it does not appear that you can solve this problem at all. Ken Anderson says this is about as close as you can get in Java:

public interface Inttoint {
  public int call (int i);
}

public static Inttoint foo (final int n) {
  return new Inttoint () {
    int s = n;
    public int call (int i) {
      s = s + i;
      return s;
    }};
}

which falls short of the spec because it only works for integers.

It's not literally true that you can't solve this problem in other languages, of course. The fact that all these languages are Turing-equivalent means that, strictly speaking, you can write any program in any of them. So how would you do it? In the limit case, by writing a Lisp interpreter in the less powerful language.

That sounds like a joke, but it happens so often to varying degrees in large programming projects that there is a name for the phenomenon, Greenspun's Tenth Rule:

Any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp.

If you try to solve a hard problem, the question is not whether you will use a powerful enough language, but whether you will (a) use a powerful language, (b) write a de facto interpreter for one, or (c) yourself become a human compiler for one. We see this already beginning to happen in the Python example, where we are in effect simulating the code that a compiler would generate to implement a lexical variable.
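
To make that last point concrete: in a language with full support for lexical scope, the compiler allocates a hidden cell for each captured variable, and the one-element list in the Python version above is exactly that cell, built by hand. The following sketch is my own addition, not part of the original text; it assumes a CPython recent enough to expose closure cells through the __closure__ attribute:

def foo (n):
  s = [n]          # the cell we build by hand
  def bar (i):
    s[0] += i
    return s[0]
  return bar

acc = foo (10)
# CPython also built a real closure cell, so that bar can see s at all;
# our hand-made cell (the list) sits inside the compiler's:
print (acc.__closure__[0].cell_contents)   # [10]
print (acc (3), acc (2))                   # 13 15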