
Chaitin - Book review: The Unknowable

2002, Journal of Scientific Exploration

Book review: The Unknowable: works by G.J. Chaitin

The Limits of Mathematics: A Course on Information Theory and the Limits of Formal Reasoning, by Gregory J. Chaitin. London: Springer-Verlag. Hardcover, November 1997.
The Unknowable, by Gregory J. Chaitin. London: Springer-Verlag Series in Discrete Mathematics and Theoretical Computer Science. Hardcover, August 1999.
Exploring Randomness, by Gregory J. Chaitin. London: Springer-Verlag. Hardcover, February 2001.

Not only does God play dice with physics, contrary to Einstein's oft-quoted assertion, but He also plays dice with arithmetic, and even with that "hardest" part of mathematics known as number theory. So argues mathematician Gregory Chaitin, whose work has been supported for the last 30 years by the IBM research division at the Thomas J. Watson Research Center in New York State. Chaitin is the main architect of a new branch of mathematics called algorithmic information theory, or "AIT." A gifted pioneer (in 1965, while still in high school, he wrote a paper on automata that is still quoted today), he obviously enjoys shaking philosophers and scientists alike with his radical statements about the incompleteness of mathematics, the need to reframe it as an experimental science rather than an exact one, and, more generally, the folly of ever attempting to derive complete truth from a set of axioms. As he puts it in a piece called "Letter to a daring young reader": "I have demonstrated the existence of total randomness in the mental mindscape of pure mathematics."

Chaitin and Kolmogorov independently came up with the idea that something is random if it cannot be compressed into a shorter description: "If you think of a theory as a program that calculates the observations, the smaller the program is relative to the output, which is the observations, the better the theory is," writes Chaitin (a toy illustration in code appears at the end of this section).

Three overlapping books on the incompleteness of mathematics

Chaitin's three books are based on his popular lectures and must be taken together in order to assess his ideas. In The Unknowable he compares his work on incompleteness to that of Gödel and Turing, discussing the historical context of his research on program-size complexity; in The Limits of Mathematics he gives more detail on the metamathematical implications; and in Exploring Randomness he develops algorithmic theory, further revealing its technical core.

This is important work, with implications that go far beyond the arcane arguments of one branch of mathematics. At first sight, however, the reader may be justified in feeling confused or overwhelmed. The three books are fascinating in their blend of flamboyant ideas and long chapters written in LISP, a programming language that Chaitin favors: he even developed his own dialect of it! While this provides a ready tool for his colleagues and students, it makes it harder for the general reader to unravel the many threads of his ebullient arguments. Yet the sections in LISP are mandatory, because the common theme of all three books is the study of the size of the smallest program for calculating a given number, and "you cannot really understand an algorithm unless you can see it running on a computer." Another weakness is the overlap among the three volumes, which would have benefited from tighter editing and structure (perhaps with the LISP developments gathered in an appendix?). These are minor problems of presentation, however, which should not detract from the massive intellectual challenge the author is proposing.
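That compression criterion can be made concrete with a small experiment. The sketch below is mine, not Chaitin's: his own examples are written in LISP, and true program-size complexity is uncomputable, so a general-purpose compressor such as Python's zlib serves only as a crude everyday stand-in for "the shortest description"; the byte counts in the comments are approximate.

```python
import os
import zlib

# A highly patterned 10,000-byte string. It has a very short description
# ("repeat the four bytes '0101' 2,500 times"), and a general-purpose
# compressor duly shrinks it to a few dozen bytes.
structured = b"0101" * 2500

# 10,000 bytes drawn from the operating system's entropy source. With
# overwhelming probability no shorter description exists, and "compression"
# actually makes it slightly longer.
incompressible = os.urandom(10000)

print(len(zlib.compress(structured)))      # on the order of tens of bytes
print(len(zlib.compress(incompressible)))  # about 10,000 bytes, or a bit more
```

In Chaitin's vocabulary, the first string is anything but random, since a program far shorter than the string itself prints it; the second is random precisely because it cannot be compressed.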
As one gets into the substance of the books, it is difficult to resist Chaitin's enthusiastic style and obvious intelligence. Beyond the technicalities of the argument, the reader is quickly drawn into a fundamentally new landscape of ideas. What Chaitin is demanding, in effect, is nothing less than a bold reassessment of our notions about truth and logic.

The challenge to Hilbert

At the dawn of the 20th century it seemed that science was about to solve, once and for all, the totality of mathematical problems. David Hilbert believed that a consistent and complete set of axioms could be drawn up, from which you could derive all of mathematics. As Chaitin summarizes it, "if all mathematicians could agree whether a proof is correct and be consistent and complete, in principle that would give a procedure for automatically solving any mathematical problem. This was Hilbert's magnificent dream, and it was to be the culmination of Euclid and Leibniz, and Boole and Peano, and Russell and Whitehead." Hilbert's famous lecture in the year 1900 proposed a list of 23 difficult problems, a "call to arms" that inspired a generation of researchers, among them John von Neumann. In the fifties and sixties, when I studied math at the Sorbonne in the shadow of Bourbaki, this was still the dominant vision.

The first man to point out that Hilbert's axiomatic theory was flawed was Gödel. As early as 1931 he showed that mathematics could not be consistent and complete at the same time. More specifically, he proved that if an axiomatic system for arithmetic is consistent, there are true statements it cannot prove, and therefore it is incomplete; and if it is complete, it must prove some statements that are false, and therefore it is inconsistent. To put it in simplistic terms, consider the statement, "This statement is unprovable." If it turns out to be provable, then we are proving something that is false. And if it is indeed unprovable, then it is true: a true statement that escapes our system of axioms, which means the axioms are incomplete.

Gödel's proof is difficult (refreshingly, Chaitin himself confesses that he could follow it step by step but "somehow I couldn't ever really feel that I was grasping it"), but it was followed by a clearer, more devastating attack five years later, led by the father of computer theory, Alan Turing. Gödel had shown that a formal axiomatic system for arithmetic could not be complete if it was consistent, but this still left a door open for a "decision procedure" that would tell us whether a given assertion was true or not. Turing closed that door in 1936, and his proof is the springboard for Chaitin's work.

Turing posed the question in radically new terms by tackling the "halting problem": suppose there were a program (P) that could determine whether a given computer program (Q) will halt when it is run on a particular computer. This is where computer languages with recursion are important: in a language like LISP, which is interpreted rather than compiled, you can run (P) as a subprocedure of itself. Build a program that asks (P) about itself and then does the opposite: if (P) says the program will never halt, it halts at once; if (P) says it will halt, it goes into an infinite loop. Either way (P) gives the wrong answer, so no such decision procedure can exist, and no fixed set of axioms can settle every halting question (a minimal sketch of this construction in code appears below).

Chaitin refined this incompleteness result by defining a number, "Omega," as the "halting probability": the probability that a binary program generated by tossing a coin will ever stop running.
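Since Chaitin insists that "you cannot really understand an algorithm unless you can see it running on a computer," here is a minimal sketch of the self-referential construction just described. It is written in Python rather than in Chaitin's LISP, the names are mine, and the "decider" being refuted is of course hypothetical, which is the whole point.

```python
def make_contrarian(halts):
    """Given any claimed halting decider halts(program, argument) -> bool,
    build a program that the decider must get wrong when that program is
    asked about itself."""
    def contrarian(program):
        if halts(program, program):  # the decider predicts: halts on itself...
            while True:              # ...so do the opposite and loop forever.
                pass
        return "halted"              # the decider predicts looping, so halt at once.
    return contrarian

# Any concrete candidate decider falls into the same trap:
def naive_decider(program, argument):
    return True                      # claims that every program halts

contrarian = make_contrarian(naive_decider)
# Calling contrarian(contrarian) would loop forever, contradicting
# naive_decider's verdict; the same argument defeats any candidate decider.
```

Omega packages this undecidability into a single number. In the notation standard in Chaitin's books (the formula does not appear in the review), for a fixed self-delimiting universal computer $U$ the halting probability is

$$\Omega_U = \sum_{p\,:\,U(p)\ \text{halts}} 2^{-|p|},$$

where $p$ ranges over binary programs and $|p|$ is the length of $p$ in bits; the self-delimiting requirement ensures that the sum converges to a real number strictly between 0 and 1.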
Given a specific computer, Omega is a well-defined real number: the computer calls for a series of binary digits and tries executing this "program." Omega is "maximally unknowable," says Chaitin, because the sequence of 0s and 1s in this number has no mathematical structure. To calculate the first N bits of Omega demands an N-bit program, in other words N bits of axioms. This is irreducible mathematical information, a shocking idea for the Hilbertian view, which assumed that all mathematical truth (hence, all computable numbers) could be derived from a small set of axioms, in the same way that Pi, or the square root of 2, can be computed to arbitrary precision.

Implications beyond Mathematics

Leibniz claimed that if something is true, it is true for a reason; that reason is the "mathematical truth." But the bits of Chaitin's Omega number are not true for any reason: they are true by accident. We will never know what these bits are in the way we "know" that the first decimal of Pi is 1, the second one is 4, and so on. Summarizing the history, Chaitin writes: "it turned out that not only Hilbert was wrong, as Gödel and Turing showed… With Gödel it looks surprising that you have incompleteness, that no finite set of axioms can contain all mathematical truth. With Turing incompleteness seems much more natural. But with my approach, when you look at program size, I would say that it looks inevitable. Wherever you turn, you smash up against a stone wall and incompleteness hits you in the face!"

Chaitin has shown that some mathematical truths are true by accident, and that mathematics is therefore not an exact science but an empirical, even an experimental, science like physics. This is a nightmare for the logicians. At a time when physicists (who went through a similar revolution with the concept of randomness in the 1920s) are trying to get spacetime out of a random substratum, this work on the limits of mathematics is an inspiration. How far can we take the implications? Chaitin himself sees no direct connection between his work and the physical concept of "random reality," but he does claim that "AIT will lead to the major breakthroughs of 21st century mathematics, which will be information-theoretic and complexity-based characterizations of what is mind, what is intelligence, what is consciousness, of why life has to appear spontaneously and then to evolve."

This last statement suggests a link with many of the topics studied by the SSE. French writer Aimé Michel had reached the conclusion that certain problems (such as the topic of "alien contact") were in the realm of the unknowable, and would remain so until humans evolved a more complex brain. But mathematical unknowability is not necessarily a consequence of human frailty. Hilbert's First Problem (also known as "Cantor's Continuum Hypothesis") is an example of this. In transfinite arithmetic the Hebrew letter Aleph subscripted by zero ("Aleph-null") is the number of integers. It can be shown that 2 raised to the Aleph-null power is another, greater number. Hilbert asked whether there was a number between these two numbers. In 1963 a Stanford mathematician named Paul Cohen showed that you cannot decide, from the standard axioms, whether such a number exists. As a scientist friend from Los Alamos reminds me, "it's not that you are not smart enough, or lack the mathematical tools to find it. It is just undecidable." This finding challenges many philosophical positions.
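For reference, the question Cohen settled can be stated in the standard notation of set theory (the review itself stays in prose). Cantor proved that

$$2^{\aleph_0} > \aleph_0,$$

and the Continuum Hypothesis asserts that no cardinal lies strictly in between, that is, that there is no set $S$ with

$$\aleph_0 < |S| < 2^{\aleph_0}.$$

Cohen's 1963 theorem, together with an earlier result of Gödel, shows that this assertion can be neither proved nor refuted from the standard Zermelo-Fraenkel axioms.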
Materialist theoretician and Marx's co-author Friedrich Engels made the point that our subjective thought and the objective world follow the same laws and therefore cannot contradict each other in their results. That, argues Engels, is where mathematics comes from: abstraction from the world of nature. Eighteenth-century materialism had already posed the principle that nihil est in intellectu quod non fuerit in sensu (nothing exists in thought that does not exist in sensory experience). In a piece called "On Prototypes of Mathematical Infinity in the Real World," Engels further stressed that "our geometry starts from spatial relationships, our arithmetics and algebra begin with numerical quantities and thus correspond to our terrestrial conditions." In such a materialist view it would seem to follow that, if mathematics harbors the unknowable, the world itself must be unknowable as well.

Not all scientists will agree with this interpretation. After Gödel and Turing, one can indeed ask well-posed questions that have no answer. But we should not look for implications beyond logic: "I see no connection to the existence of UFOs or the existence of God," says my Los Alamos correspondent. "But because I fail to see the connections this doesn't mean there is no connection. In the early eighteenth century Pierre Louis Moreau de Maupertuis set out to prove the existence of God, and ended up formulating the principle of least action, which provides the underpinnings for much of modern physics."

If Chaitin is right about the impact of AIT as a new discipline, his work on the Unknowable could indeed prove fundamental for 21st century science. I find it ironic that information science, which was regarded as a minor branch of "applied mathematics" when I went to graduate school, may turn out to play such a major role in the future. But the best advice Chaitin gives us comes at the end of Exploring Randomness, when he writes: "Be prepared to have many false breakthroughs, which don't survive the glaring light of rational scrutiny the next morning. You have to dare to imagine many false beautiful theories before you hit on one that works; be daring, dare to dream, have faith in the power of new ideas and hard work. Get to work! Dream!"

Jacques F. Vallee
San Francisco
15 September 2001