Even though a googolplex is extraordinarily big, it is a precisely defined number. There is nothing vague about it. And it is definitely not infinite (just add 1). It is, however, big enough for most purposes, including most numbers that turn up in astronomy. Kasner and Newman observe that `as soon as people talk about large numbers, they run amuck. They seem to be under the impression that since zero equals nothing, they can add as many zeros to a number as they please with practically no serious consequences,' a sentence that Mustrum Ridcully himself might have uttered. As an example, they report that in the late 1940s a distinguished scientific publication announced that the number of snow crystals needed to start an ice age is a billion to the billionth power. `This,' they tell us, `is very startling and also very silly.' A billion to the billionth power is 1 followed by nine billion zeros. A sensible figure is around 1 followed by 30 zeros, which is fantastically smaller, though still bigger than Bill Gates's bank balance.
Whatever infinity may be, it's not a conventional `counting' number. If the biggest number possible were, say, umpty-ump gazillion, then by the same token umpty-ump gazillion and one would be bigger still. And even if it were more complicated, so that (say) the biggest number possible were umpty-ump gazillion, two million, nine hundred and sixty-four thousand, seven hundred and fifty-eight ... then what about umpty-ump gazillion, two million, nine hundred and sixty-four thousand, seven hundred and fifty-nine?
Given any number, you can always add one, and then you get a number that is (slightly, but distinguishably) bigger.
The counting process only stops if you run out of breath; it does not stop because you've run out of numbers. Though a near-immortal might perhaps run out of universe in which to write the numbers down, or time in which to utter them.
In short: there exist infinitely many numbers.
The wonderful thing about that statement is that it does not imply that there is some number called `infinity', which is bigger than any of the others. Quite the reverse: the whole point is that there isn't a number that is bigger than any of the others. So although the process of counting can in principle go on for ever, the number you have reached at any particular stage is finite. `Finite' means that you can count up to that number and then stop.
As the philosophers would say: counting is an instance of potential infinity. It is a process that can go on for ever (or at least, so it seems to our naive pattern-recognising brains) but never gets to `for ever'.
The development of new mathematical ideas tends to follow a pattern. If mathematicians were building a house, they would start with the downstairs walls, hovering unsupported a foot or so above the damp-proof course ... or where the damp-proof course ought to be. There would be no doors or windows, just holes of the right shape. By the time the second floor was added, the quality of the brickwork would have improved dramatically, the interior walls would be plastered, the doors and windows would all be in place, and the floor would be strong enough to walk on. The third floor would be vast, elaborate, fully carpeted, with pictures on the walls, huge quantities of furniture of impressive but inconsistent design, six types of wallpaper in every room ... The attic, in contrast, would be sparse but elegant - minimalist design, nothing out of place, everything there for a reason. Then, and only then, would they go back to ground level, dig the foundations, fill them with concrete, stick in a damp-proof course, and extend the walls downwards until they met the foundations.
At the end of it all you'd have a house that would stand up. Along the way, it would have spent a lot of its existence looking wildly improbable. But the builders, in their excitement to push the walls skywards and fill the rooms with interior decor, would have been too busy to notice until the building inspectors rubbed their noses in the structural faults.
When new mathematical ideas first arise, no one understands them terribly well, which is only natural because they're new. And no one is going to make a great deal of effort to sort out all the logical refinements and make sense of those ideas unless they're convinced it's all going to be worthwhile. So the main thrust of research goes into developing those ideas and seeing if they lead anywhere interesting. `Interesting', to a mathematician, mostly means `can I see ways to push this stuff further?', but the acid test is `what problems does it solve?' Only after getting a satisfactory answer to these questions do a few hardy and pedantic souls descend into the basement and sort out decent foundations.
So mathematicians were using infinity long before they had a clue what it was or how to handle it safely. In the third century BC Archimedes, the greatest of the Greek mathematicians and a serious contender for a place in the all-time top three, worked out the volume of a sphere by (conceptually) slicing it into infinitely many infinitely thin discs, like an ultra-thin sliced loaf, and hanging all the slices from a balance, to compare their total volume with that of a suitable shape whose volume he already knew. Once he'd worked out the answer by this astonishing method, he started again and found a logically acceptable way to prove he was right. But without all that faffing around with infinity, he wouldn't have known where to start and his logical proof wouldn't have got off the ground.
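Archimedes' slicing trick still works, and you can try a crude version of it yourself - not his balance argument, of course, just a sketch in Python that stacks a large but finite number of thin discs and adds up their volumes. The more slices, the closer the total creeps to the exact answer he proved:

```python
import math

def sphere_volume_by_slices(radius: float, slices: int) -> float:
    """Approximate a sphere's volume by stacking thin discs.

    A disc at height x has radius sqrt(r^2 - x^2), so its volume is
    pi * (r^2 - x^2) * thickness. Summing over all discs approximates
    the sphere.
    """
    thickness = 2 * radius / slices
    total = 0.0
    for i in range(slices):
        # measure each disc at the midpoint of its thin slab
        x = -radius + (i + 0.5) * thickness
        total += math.pi * (radius**2 - x**2) * thickness
    return total

approx = sphere_volume_by_slices(1.0, 10_000)
exact = 4 * math.pi / 3  # the formula Archimedes established
```

With ten thousand slices the approximation agrees with 4πr³/3 to better than one part in a million; infinitely many infinitely thin slices would give it exactly.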
By the time of Leonhard Euler, an author so prolific that we might consider him to be the Terry Pratchett of eighteenth-century mathematics, many of the leading mathematicians were dabbling in `infinite series' - the schoolchild's nightmare of a sum that never ends. Here's one:
1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... - where the `...' means `keep going'. Mathematicians have concluded that if this infinite sum adds up to anything sensible, then what it adds up to must be exactly two. [1] If you stop at any finite stage, though, what you reach is slightly less than two. But the amount by which it is less than two keeps shrinking. The sum sort of sneaks up on the correct answer, without actually getting there; but the amount by which it fails to get there can be made as small as you please, by adding up enough terms.
Remind you of anything? It looks suspiciously similar to one of Zeno/Xeno's paradoxes. This is how the arrow sneaks up on its victim, how Achilles sneaks up on the tortoise. It is how you can do infinitely many things in a finite time. Do the first thing; do the second thing one minute later; do the third thing half a minute after that; then the fourth thing a quarter of a minute after that ... and so on. After two minutes, you've done infinitely many things.
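The schedule for those infinitely many things can be sketched in Python too - exact fractions again, with each waiting time half the one before:

```python
from fractions import Fraction

def completion_times(tasks: int) -> list:
    """Times (in minutes) at which each task gets done.

    Gaps between tasks are 1, 1/2, 1/4, ... minutes, as in the text.
    """
    times = []
    t = Fraction(0)
    gap = Fraction(1)
    for _ in range(tasks):
        times.append(t)  # do a task now
        t += gap         # wait for the current gap ...
        gap /= 2         # ... then halve the next one
    return times

times = completion_times(50)
# All fifty tasks happen strictly before the two-minute mark.
```

However many tasks you schedule, every one of them is done before two minutes are up - yet no single task is ever the `last' one.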
The realisation that infinite sums can have a sensible meaning is only the start. It doesn't dispel all of the paradoxes. Mostly, it just sharpens them. Mathematicians worked out that some infinities are harmless, others are not.

[1] To see why, double it: the result now is 2 + 1 + 1/2 + 1/4 + 1/8 + 1/16 + ..., which is 2 more than the original sum. What number increases by 2 when you double it? There's only one such number, and it's 2.
The only problem left after that brilliant insight was: how do you tell? The answer is that if your concept of infinity does not lead to logical contradictions, then it's safe to use, but if it does, then it isn't. Your task is to give a sensible meaning to whatever `infinity' intrigues you. You can't just assume that it automatically makes sense.
Throughout the eighteenth and early nineteenth centuries, mathematics developed many notions of `infinity', all of them potential. In projective geometry, the `point at infinity' was where two parallel lines met: the trick was to draw them in perspective, like railway lines heading off towards the horizon, in which case they appear to meet on the horizon. But if the trains are running on a plane, the horizon is infinitely far away and it isn't actually part of the plane at all - it's an optical illusion. So the point `at' infinity is determined by the process of travelling along the train tracks indefinitely. The train never actually gets there. In algebraic geometry a circle ended up being defined as `a conic section that passes through the two imaginary circular points at infinity', which sure puts a pair of compasses in their place.