Why creativity trumps IQ
Bottom line: The greatest equalizer between people is universality (hardware), but the greatest differentiator between people is creativity (software).
A lot has changed, to say the least, since I published my controversial essay, “Why universality trumps IQ,” last year. Perhaps the most common criticism, and certainly the dullest, was that the human mind is not computable. Regardless of whether you accept the physical Church-Turing thesis — essentially, that computers are physical systems that can simulate, or copy the behaviour of, any physical system, including themselves, to arbitrary detail — this criticism can be dismissed simply on the grounds that it is not even willing to entertain conclusions that follow from the working assumption. Even if the assumption is wrong, I do not believe that a computational theory of the human mind is a Bed of Procrustes that forcibly butchers people into being “mere machines.” That is such an uncharitable reading of people who take the physical C-T thesis seriously that there is simply no room left for productive discussion, and readers who share this criticism should stop reading right now.
With that aside, let us restate some findings that have not changed since then. If the physical C-T thesis is true, then there cannot exist computers more powerful than what is essentially the Turing machine. By more powerful, I mean that there cannot exist physical systems — including human minds — that can do what the humble Turing machine cannot. If this is true, then that is simply a logical limit of our Reality. No one is surprised that we cannot travel faster than the speed of light, but many are completely flummoxed that there should be logical limits on thought at all. The quantum complexity-theoretic C-T thesis poses the most serious threat to its classical counterpart, in that classical computers apparently cannot simulate all physical systems as efficiently as quantum computers can, but it does not change the meaning of the physical C-T thesis. It is not at all clear that universal quantum computers are physically possible (although this is almost certainly because our technology is immature), let alone that human minds are quantum computers. What all of this implies is that there is no fundamental difference between individuals, let alone races, in their potential range of thinking, because all universal computer hardware can and cannot do precisely the same things, however different the machines may look from one another. Furthermore, Nature gives you universality almost for free: it is very easy to find or make. Universality implies that we all have the same hardware in this sense.
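To make universality concrete, here is a minimal sketch (my illustration in Python, not from the essay) of the idea that one fixed piece of “hardware” — a Turing-machine simulator — can run any machine you feed it, because the machine itself is just data:

```python
# A minimal Turing-machine simulator. The simulator (the "hardware") is fixed;
# the transition table (the "software") is just data passed in as an argument.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine given as {(state, symbol): (new_state, write, move)}."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# One piece of "software": flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "10110"))  # -> 01001
```

The point of the sketch is that nothing about `run_tm` would change if we swapped `flip` for any other transition table: the same hardware runs them all, which is the sense in which universal machines are all equally capable.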
This, then, brings us to the second criticism that some racists are fond of raising against universality: “Yes, but perhaps geniuses are like computers that are more parallel, or have higher CPU speed or more RAM!” First, there is the unproven premise that some races, but not others, genetically produce these superior “hardware” differences. Even if that were so, it would not explain why different civilizations made up of different races have taken turns at the helm of history. Second, talk about a real Bed of Procrustes: using very simple, linear, and crude measures such as CPU speed and RAM size to describe the far more complicated and complex human brain, which still outperforms the fastest and most memorious supercomputers, no matter how cleverly designed. To dispel this criticism once and for all, my friend Ashlesh Sharma and I have written an essay on why no one is exponentially smarter than anyone else: at best, there may be a polynomial difference in computing “hardware” between individuals, not races, and we do not believe that this is what really matters in practice anyway.
I could go on refuting every other criticism, but the problem is that none of them was really interesting, and every one of them, especially from high-IQ racists, missed the largest computational piece of the puzzle. In fact, it was Gregory Chaitin who clued me in to it when I read Proving Darwin a few months ago.
It is a shame that Gödel’s two incompleteness theorems are typically misinterpreted as negative rather than positive results. In a nutshell: some mathematicians — such as the great David Hilbert, Bertrand Russell, and Alfred North Whitehead — thought that there could exist one system for endlessly deducing all true mathematical statements. To their surprise, though not to Gödel and friends, this turns out to be impossible. The theorems are roughly as follows: no closed, consistent system of axioms and rules can produce all true statements about the integers, and no such system can prove its own consistency either. This may seem quite unfortunate, but on the contrary: Turing turned Gödel’s lemons into lemonade by designing universal computers that obey these limitations and nevertheless compute a countably infinite number of things. And the way he designed these computers was by thinking about what one human must tell another, precisely, in order for the other to be said to compute anything at all. Another positive way to think about Gödel’s “unfortunate” results comes from Emil Post via Chaitin:
“According to Emil Post — who is not as well known as Gödel and Turing but was at their level (he came up with Turing machines too, and also with an incompleteness theorem that remained unpublished for years) — the axiomatic method, and especially Hilbert’s formal mathematics, was just a terrible mistake, a confused misunderstanding.”
“According to Post, math cannot provide certainty because it is not closed, mechanical, it is creative, plastic, open! Sound familiar? You bet, we have been talking about biological creativity all through the previous chapter, and now we find something like it in pure math too! So math is creative, not mechanical, math is biological, not a machine!”
Okay, so now we are ready for the crème brûlée. The typical misunderstanding of Gödel has been: “Aha, you see, therefore no machine could ever replicate humanlike intelligence!” But the same logical limits of epistemology may apply to human minds as well. If the physical C-T thesis is true — and, remember, people like George Boole and Turing were trying to capture something like the basic laws of thought themselves — then what Gödel, Turing, Post, Chaitin, and friends imply is that no single mathematician or school of mathematics can produce all of mathematics either. This doesn’t mean that there are no methods behind creativity, but no one method can do it all. Sometimes, someone has to try entirely new things and see where they lead. This is what Chaitin means by “against method,” à la Paul Feyerabend. So much for the Reason and Enlightenment project: the irony being, of course, that we can reason about some of the limits of reason itself.
Put another way, there is no end to creativity. Negative numbers, complex numbers, and non-Euclidean geometries were once considered unnatural; now they are described by standard set theory. This has historically been how mathematics was done anyway: someone is bold enough to conjure something completely new; it is considered a heresy or a curiosity at first; then it gets added to common knowledge; rinse and repeat. Even standard set theory is not complete, because you can choose to believe or disbelieve the Axiom of Choice or the Continuum Hypothesis. Most recently, our friend Joe “Detective” Shipman has pointed to entirely new mathematical results by Harvey Friedman that don’t follow from standard set theory. Creativity is like what the Red Queen told Alice in Through the Looking-Glass: “it takes all the running you can do, to keep in the same place.” There is no end to patching knowledge by trial and error. That is why it is critical to have an open culture of free speech and criticism.
So, one of the most “disturbing” things that follow from Gödel is that even in mathematics we must live with uncertainty: we generally have to live with not knowing whether something is provably true or false. Take P vs NP: if P = NP, then most cryptography, and thus most cryptocurrency, is broken, but nobody knows how to prove the question either way, despite a million-dollar bounty out on the problem. Maybe it will get settled someday; maybe not. That is why cryptographers hedge their bets by making new algorithms all the time. So, as Wolfram, Chaitin, and others have observed, even mathematics turns out to be an experimental science after all.
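The asymmetry at the heart of P vs NP can be pictured with a toy sketch (mine, not the essay’s): for a problem like subset sum, checking a proposed answer takes one quick pass, while the only known general way to find an answer is to try exponentially many candidates.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check: does the claimed subset really sum to target?"""
    return all(x in nums for x in certificate) and sum(certificate) == target

def search(nums, target):
    """Brute force: try all 2^n subsets -- exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)         # slow: may examine every subset
print(cert, verify(nums, 9, cert))  # fast: one pass over the certificate
```

If P = NP, then anything verifiable this quickly would also be findable quickly, which is exactly why so much cryptography, built on the hope that finding is hard while checking is easy, would collapse.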
Where does this all bring us? It means that individuals have to be endlessly creative and generally unpredictable — hence the uselessness of IQ scores. As David Deutsch remarked (leaving aside his indifference to risk management when it comes to science and technology), the production of new knowledge has to be generally unpredictable: otherwise, you would already know it. That is why Alan Kay said that “the best way to predict the future is to invent it,” and the best way to invent it is with unpredictable individuals with unpredictable ideas.
And now we come to the largest refinement between the last essay and this one: the difference between the actual and the potential range of thinking across individuals, not races. Software is the world’s greatest equalizer, but also the world’s greatest differentiator. I speculate that individuals are different software running on essentially the same hardware. Different individuals are like different formal axiomatic systems with different axioms and rules. Nature has built in our intrinsic differences as part of a distributed search to continually solve the problem of perpetuating life for as long as possible. We must each be different in entirely new ways in order to try to solve or manage the new problems that crop up all the time, such as the novel coronavirus.
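One way to picture “different software, same hardware” — my illustration, not the author’s formalism — is a single generic deduction engine that, fed two different axiom-and-rule packages, derives two different bodies of “theorems.” Here the axioms are strings and the rules are string rewrites, in the spirit of Post’s production systems:

```python
from collections import deque

def theorems(axiom, rules, max_len=6):
    """Generic deduction engine: exhaustively apply string-rewrite rules
    (find, replace) to an axiom, collecting everything derivable up to
    a length bound. The engine never changes; only the rules do."""
    seen = {axiom}
    queue = deque([axiom])
    while queue:
        s = queue.popleft()
        for find, repl in rules:
            i = s.find(find)
            while i != -1:
                t = s[:i] + repl + s[i + len(find):]
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    queue.append(t)
                i = s.find(find, i + 1)
    return seen

# Same "hardware" (the engine), two different "software" packages:
system_a = theorems("A", [("A", "AB")])  # derives A, AB, ABB, ...
system_b = theorems("A", [("A", "BA")])  # derives A, BA, BBA, ...
print(sorted(system_a))
print(sorted(system_b))
```

The two systems run on identical machinery yet reach disjoint sets of strings beyond the shared axiom, which is the sense in which individuals with different “axioms and rules” can have different actual ranges of thought on the same universal hardware.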
It doesn’t matter where the differences come from: single-nucleotide polymorphisms, random wiring during embryonic development, personality, path-dependent historical development, and so on. The point is that there are differences, and that is the robust argument. We know from complexity theory that very large outcomes can result from very simple differences. (Clearly, there are limits on the differences: the results must still be recognizably human.) Furthermore, it can be infeasible, if not outright impossible, to reverse-engineer the program by looking at the outcome. And this is all right. Nature wouldn’t have bothered with variance between individuals if it didn’t matter, so that is one line of evidence. We have now made the game harder for racists: they must explain the supposed superiority of some races over others not in terms of hardware, which we know is universal, but in terms of software, where they might now claim that the genetics of one race consistently produce “better” software. Good luck with that.
The differences we see between people can be explained by software rather than hardware, leaving software as the last refuge of racists. This explanation fits what we see in practice, especially given the limitations of time, which my previous essay ignored. It also offers a novel account of why even fellow experts in a technical field may consider someone “beyond reach.” Note that this does not mean that individuals cannot converge on the same ideas; it means only that individuals differ in their actual ranges of thought. I believe this is the most robust refinement of the subject thus far from a computational point of view. In a future essay, I plan to write about the relationship between intelligence and time.
Thanks to Sean McClure, Ashlesh Sharma, Joe Shipman, and Michael Straka for their reviews.