When should you think that you may be able to do something unusually well?
Whether you’re trying to outperform in science, or in business, or just in finding good deals shopping on eBay, it’s important that you have a sober understanding of your relative competencies. The story only ends there, however, if you’re fortunate enough to live in an adequate civilization.
Eliezer Yudkowsky’s Inadequate Equilibria is a sharp and lively guidebook for anyone questioning when and how they can know better, and do better, than the status quo. Freely mixing debates on the foundations of rational decision-making with tips for everyday life, Yudkowsky explores the central question of when we can (and can’t) expect to spot systemic inefficiencies, and exploit them.
Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California.
Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed as a full-time Research Fellow there.
Yudkowsky's research focuses on Artificial Intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI); and also on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular). Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem".
Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, Artificial Intelligence, and metaethics, make up the largest body of Yudkowsky's writing.
This book has noticeably changed my thinking when considering new projects. Really great epistemic advice, as well as damn fun and easy to read.
Some of the useful things inside:
- Microeconomic tools for judging the epistemic incentives of groups of experts
- Concise and useful explanations of how major institutions (academia, medicine, politics, venture capital) break
- Case studies in how to successfully use Aumann's agreement theorem in real life
The most valuable part of this book by far is chapter 3, which is an extended dialogue between a cynical conventional economist ("C.C.E" or 'Cecie') and a visitor from a different world, trying to explain why a particular institution is killing babies. The example is a real one - the FDA hasn't approved a simple set of fats for intravenous baby food, causing severe brain damage in something like dozens of babies per year. And the explanation is a whirlwind tour of how our institutions work and how they break, with many concepts that I've been able to use elsewhere to great benefit.
Above all, this book has changed the way I think about learning from experts. I'll end my review with an example of how I analyse things like this now.
Recently someone came to me proposing a project for building a math-education website - combining math explanations and machine learning algorithms, connecting you with the best explanation given your background knowledge. And of my first 3 thoughts, 2 were about how the site would function and how a user would interact with it - it's important to concretely visualise a product to figure out if it feels like it would work. My 3rd thought, however, was to do an adequacy analysis - I looked for evidence bearing on the question: if this is a good idea that can work, why hasn't someone already done it? Think about all the money and time people spend on education each year - all the tutors in universities, all the teachers in high schools, all the assistant lecturers and support staff and government subsidies. Surely, if this project were a good idea, someone would've built it already and it would be a common website we all use. Is the fact that it doesn't exist enough evidence that it won't work?
In general I don't see the educational marketplace of resources changing rapidly in response to new tech and research. There's been a bunch of research on spaced repetition and how memorisation works that nobody has rushed to incorporate into how universities teach. Many people can tell you the best physics/math textbooks (e.g. the Feynman Lectures), but most students will never read them. It *doesn't* look to me like a space where it would require a great deal of effort to out-do the best in the field. You might counter by pointing out the success of Khan Academy, but personally I'm not sure I'd consider the fact that a single dad can make lots of videos and do better than everyone else a sign of success for the educational market.
I'm not actually very confident in this analysis. However, the best part is that I can learn about the whole field by working on the one project. If the project fails, or I find out that someone else has tried this and failed, then I'll change my assessment of other projects in this space. On the other hand, if the project succeeds, I will update about how good the educational market is at incentivising people to create useful things like this.
I wouldn't have made these models and hypotheses before, and for that reason I'm really glad I read this book.
It had been a long time since I'd read a book this irritating, one that leaves a bitter smile on your lips! Choosing this book was part of a plan I had for reading the works of young writers, local and foreign, who have made a name for themselves and offer clues for understanding the contemporary intellectual climate. Here, too, we are dealing with an author who has turned his back on academic education, made a name for himself in artificial intelligence, and has a great interest in financial markets.
With such a background, the author gives examples from his own life of how he managed to predict or solve big problems better than the Bank of Japan or the American medical system, and now he tries to teach his approach to others! On the surface the book's idea is a beautiful one, and it grows out of that same background: an understanding of how financial markets work as efficient markets, combined with a mindset shaped by how artificial intelligence works (the gradual improvement of a system through abundant feedback), could be brought to the aid of other areas of human life. In practice, however, he deploys a thin and superficial knowledge of microeconomics to attack various domains of human life, such as the health system and the universities, and with a self-aggrandizing attitude he even claims in the book's title to have found the reason why human civilizations fail! His disregard for academic research and scientific disciplines goes so far that he not only ignores fields like adaptive markets and complexity theory, which bear directly on the book's subject matter, but does not even believe in established concepts.
This disbelief in concepts makes reading the text all the more painful: he writes however he pleases, the text oscillates between a conversational and a scientific register, and wherever he likes he invents new concepts and sticks new names on phenomena. Whatever he does not believe in, or has ignored in his own life, such as academic education or social standing, he mocks under a wrapping of theorizing and wordplay, and he respects only those who enjoy great credibility and success in his own specialties, namely artificial intelligence and financial markets. More amusing still, most of the evidence he cites consists of his own blog, his friends' blogs, and the issues raised there, which have now become the most important grounds for his analysis of our civilization's failure!
Given the mindset I have described, the second half of the book reads like the author's conversation with himself: he recounts various examples from his own experience and muses on how well he has managed to convince others that he thinks better than they do, or he sits and analyzes problems on his own until he arrives at a conclusion he finds astonishing and wants to pass on to the reader. The other parties in these examples and dialogues seem to be people at the height of stupidity and naivety who have had the good fortune to benefit from the author's abundant intelligence! In the end the book turns into a variant of the self-help or how-to-succeed genre, built on the life of a single person, someone whose slight understanding of one scientific field has led him to consider himself an authority on every other part of life, to put a version of neoliberal economics in place of the solution to all human problems, and to urge the reader to abandon misplaced humility in analyzing current affairs, rise above collective wisdom with this approach, and solve the world's problems.
It is good to read such a book, with all its pains and contradictory feelings, to understand that the more one knows and understands, the greater the danger of falling into the trap of know-it-all arrogance, and that success and universal admiration only make it harder to see the limits of one's own abilities.
A quick, useful read. The concept of "inadequacy" that Eliezer introduces here is new (to me, anyway) and potentially valuable. It made me think about certain economic problems in a way that I hadn't before. The second half of the book is mostly a defense of the inside view, but I'm a member of the part of his audience that's more prone to overconfidence than underconfidence, so for my own good I will take his advice and ignore most of what I read there.
Short book, yet too long. A lot of metatalk that fundamentally just tells people to try more weird things and not be so concerned about making mistakes.
In the conclusion of this book, Yudkowsky mentions that the intended audience of Inadequate Equilibria is people with a so-called ‘modest epistemology’. I happen to be one of these people, so this book was especially useful to me.
What Yudkowsky means by modest epistemology is something close to the view that, “You can’t expect to be able to do X that isn’t usually done, since you could just be deluding yourself into thinking you’re better than other people.”
There is a lot more that this book delves into to fully expand on what this epistemology is and why Yudkowsky doesn’t expect it to be useful, but I’ll stick to touching on a few things that stood out to me as important takeaways. First, I want to outline some examples and terminology, which will serve mostly as a summary of chapters 1-3.
First is that inefficiency does not imply exploitability. There are places like the stock market where inefficiency does imply exploitability -- if you know a stock is underpriced before the market does, then you can exploit that -- but often this is not the case. For example, in a housing boom where it is easy to predict that the price of houses will rise and then eventually fall, it is still difficult to enter the market and make a profit since it is overcrowded and difficult to short-sell. Inexploitability refers to this concept.
Second is that adequacy refers to whether the low-hanging fruit has been picked in a particular sector of the economy. For instance, Yudkowsky’s wife suffered from Seasonal Affective Disorder (SAD). They had tried several standard interventions (which usually involve sitting in front of a bunch of lights) and, when those didn’t work, decided to try...adding more lights. It’s a really obvious solution when you consider, like Yudkowsky did, that the sun is just a lot of light. And it worked! But he couldn’t find a single experiment which had tested this hypothesis. Why? Well, because of civilizational inadequacy, that’s why!
There is an entire chapter in this book that explains civilizational inadequacy in a very digestible and humorous way. I liked how he introduced concepts that sound complex on a first pass in an intuitive way, mostly through giving many examples. I think the biggest takeaway, and maybe the most incisive way to describe why our society is woefully inept and sometimes insane, is that many aspects of our civilization are caught in suboptimal Nash equilibria, hence the name of this book. What this means in more intuitive language is that we are caught in systems where everyone in the system can look at it and say, ‘wow this system seriously sucks and does not output what the ideal system would output’, and yet it is still in every individual's best interest to act according to the status quo. Typically there is an obviously better state of affairs (a better Nash equilibrium) that we could move to, but in order to do so we need a coordinated effort and buy-in from a lot of players, which is difficult.
For instance, sticky traditions that really shouldn’t be there can develop in a system. If startups need to go through many rounds of funding in order to succeed, with different people judging them at each step, and the convention is that funded founders have red hair, then even though this is an absurd thing to judge the value of a startup on, everyone at each stage has to choose only the red-haired founders, since they know no other funder will fund a non-red-haired founder. This is an example where you need a huge coordinated effort (i.e., each funder has to agree ahead of time not to judge on this) to get past inadequacy; the sketch below makes the equilibrium structure concrete.
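To see why this is a trap rather than mere stupidity, note that the story is a coordination game with two Nash equilibria, one of which is worse for everyone. Here is a minimal sketch (the payoff numbers are invented for illustration) that enumerates the strategy profiles of a three-stage funding pipeline and checks which ones are self-enforcing:

```python
from itertools import product

CONFORM, MERIT = 0, 1  # fund only red-haired founders vs. fund on merit

def payoff(profile, i):
    """Hypothetical return for funder i under a joint strategy profile."""
    if profile[i] == CONFORM:
        return 1.0                      # safe, conventional baseline return
    # A merit-based bet pays off only if every stage funds on merit;
    # otherwise the startup dies mid-pipeline and the investment is lost.
    return 3.0 if all(s == MERIT for s in profile) else -1.0

def is_nash(profile):
    """True if no funder can gain by unilaterally switching strategy."""
    return all(
        payoff(profile[:i] + (dev,) + profile[i + 1:], i) <= payoff(profile, i)
        for i in range(len(profile))
        for dev in (CONFORM, MERIT)
    )

for profile in product((CONFORM, MERIT), repeat=3):
    payoffs = [payoff(profile, i) for i in range(3)]
    print(profile, payoffs, "<- Nash" if is_nash(profile) else "")
```

Running this marks both all-CONFORM and all-MERIT as Nash equilibria, paying 1.0 and 3.0 per funder respectively: nobody can escape the worse equilibrium alone, which is exactly the coordination problem described above.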
Another place where we see an inadequate equilibrium is in healthcare. The relevant section of the book is titled ‘total market failure’, and rightly so. Many of the obvious desiderata for healthcare are routinely not met, such as, you know, saving people, or providing statistics or results or anything that could indicate to buyers which seller they should go with. There are two heartbreaking examples provided in the book where it’s obvious just how horrendously broken our society is, such that it allows people to die needlessly -- when there is ample evidence that extremely cheap interventions could save them.
“Central-line infections, in the US alone, killed 60,000 patients per year, and infected an additional 200,000 patients at an average treatment cost of $50,000/patient. Central-line infections were also known to decrease by 50% or more if you enforced a five item checklist which included items like ‘wash your hands before touching the line.’ [...] wider adoption of hand-washing and similar precautions are now finally beginning to occur, after many years -- with an associated 43% nationwide decrease in central-line infections. After partial adoption.”
Another example is that some babies born with short bowel syndrome need nutrition delivered intravenously, and this has historically used soybean oil as its source of fat. Switching from soybean oil to fish oil reduces the death rate of these babies from 37% to 9%. This was discovered in 2012, and for a variety of reasons, it wasn’t until 2015 that the FDA finally resolved this problem.
But how do we fix this? No hospital wants to be the only one that starts delivering non-FDA-approved fish oil. No hospital wants to be the first to start sharing statistics on how well it performs, which means that hospitals really don’t have a financial incentive to, well, actually save people. Which means things like central-line infections are allowed to keep happening for years.
It’s sobering to see the world this way, to look at society, a system that we’re told is supposed to support us and to see all the ways in which it fails us. But it’s also extremely important not to delude ourselves, and to understand the mechanics of what is happening here. Sometimes there will be a way to help, and we have to know how the system works in order to identify those places.
Important takeaways/things for me to remember (mostly from chapters 4-7):
People with a modest epistemology rely heavily on the outside view. Taking an outside view means that you first look for a reference class to which your situation is analogous (e.g., if you’re building a new theater, you would compare your project to other theater constructions) and then you base your predictions about your project on that reference class. Taking the outside view can correct for biases on things like predicting how long your Christmas shopping will take, and how many days it will take to finish a term paper, but, as Yudkowsky points out, it is not useful when the reference class is not obvious and/or when there are few instances of the reference class.
People with a modest epistemology think much more like foxes than hedgehogs. Fox thinking means relying more on data and observation than on theory; relying more on theory is hedgehog thinking.
It is considered presumptuous/arrogant to use the inside view from a modest perspective. To this end, Yudkowsky considers how taking the inside view can look like a status grab. To take the inside view is to say that you have knowledge that other people don’t, and that carries status. In particular, I found his point that having ambitious plans is analogous to saying ‘I’m going to be high status in the future!’ very insightful. I think this has been a huge problem in my life (I don’t want to violate status), and from his writing, it seems like the effective altruism community also suffers from this. From an outside view, having ambitious plans without a background of greatness is like saying, ‘I have no reason to believe that I’ll be great, but I’m arrogant enough to think I will anyway!’ At this point, Yudkowsky asks us to consider that an inside view with well-calibrated models and theories could actually carry an ambitious plan, and that we should look more at that, and less at status.
Anxious underconfidence. This is another area I have struggled with immensely, and Yudkowsky sums it up really well. He calls the situation wherein people won’t try ideas out of an extreme fear of failing ‘anxious underconfidence’. I want to quote this entire section, but I’ll lead with one particular quote: “If you only try the things that are allowed for your ‘reference class,’ you’re supposed to be safe -- in a certain social sense. You may fail, but you can justify the attempt to others by noting that many others have succeeded on similar tasks. On the other hand, if you try something more ambitious, you could fail and have everyone think you were stupid to try.” Obviously, not all ideas are good, but as Yudkowsky says, anxious underconfidence goes to an extreme which isn’t healthy -- people abandoning entire careers because the prospect of failing at one interview was too scary.
The conclusion of this book wraps up with some tips on how one can gradually come to have a view closer to Yudkowsky’s, and farther from modest epistemology. These include: Try things! Fail quickly. Do cheap tests. Bet on everything! Have some common sense.
Okay so I quite liked this book. There were a few things I didn’t like, which caused me to give this book a 4 instead of a 5. I’ll describe those here.
I thought this book varied from being a very straightforward, digestible read to being a bit convoluted and wander-y. I think this was a balance that Yudkowsky was trying to strike between offering, in his words, explicit principles and implicit mental habits. However, I found the wander-y parts hard to follow and I often didn’t feel like I had a good idea of where he was going with something or what I was supposed to be relating the current passage to. All of these confusions were eventually resolved, but the experience of reading them was a bit tiresome.
Since he was often trying to relay implicit mental habits, some things left lingering confusion which I had to spend extra mental effort trying to resolve. For instance, compare the two following passages:
“Startup founder 1: I want to get (primitive version of product) in front of users as fast as possible, to see whether they want to use it or not.
Eliezer: I predict users will not want to use this product.
Founder 1: Well, from the things I’ve read about startups, it’s important to test as early as possible whether users like your product, and not to over-engineer things.
Eliezer: The concept of ‘minimum viable product’ isn’t the minimum product that compiles. It’s the least product that is the best tool in the world for some particular task or work-flow. If you don’t have an MVP in that sense, of course the users won’t switch. So you don’t have a testable hypothesis. So you’re not really learning anything when the users don’t want to use your product.”
The second passage concerns whether or not Yudkowsky and Salamon should plan a lesson for a class or try to improvise the first time.
“The first lesson is to not carefully craft anything that it was possible to literally just improvise and test immediately in its improvised version, ever. Even if the minimum viable product won’t be representative of the real version. Even if you already expect the current version to fail. You don’t know what you’ll learn from trying the improvised version.”
What’s the difference between these two? I can see a few possible distinctions. The first is the amount of time it takes to create anything. I don’t know what the startup was, but it seems plausible that making anything for users to test in that environment is far more time-intensive than improvising a class. The second is that you have a better sense of what you’ll learn when doing user testing for a startup than you do when you run an improvised class. Without much experience in either, the distinction is not obvious to me, but it seems like what he is going for, given the last sentence of the second passage.
Regardless, I wish he had spent some more time highlighting his implicit models which concluded that these were distinct situations.
Overall, I really enjoyed this book. I don’t recommend it to everyone, since it is somewhat depressing and a bit technical. However, if you consider yourself anxiously underconfident and/or modest, I think it’s well worth reading.
Only Eliezer Yudkowsky could write an entire book about how great he is for buying some lightbulbs for his girlfriend. Just kidding...sort of.
Seriously, I think this book has a really interesting premise, but isn't executed well. Yudkowsky starts from a generalization of the efficient markets principle: to paraphrase, if something can be done that is valuable and cost-effective, someone else will have already figured out how to do it. Taken to the extreme, however, this idea would result in the conclusion that you shouldn't really bother doing anything. We know, based on unscientific observation, that people do achieve good things. So given these two points that are apparently in tension, how can we discern when there may be opportunities that are both worthwhile and feasible? This is a very important question for a person trying to plan their life.
A lot of this book is spent going over the Econ 201 concepts that often explain why a thing appears to work suboptimally: principal-agent problems, asymmetric information, and collective action problems. I don't think Yudkowsky adds much to the reams that have already been written on these topics, but it's a reasonable enough treatment. Basically in this section, Yudkowsky establishes that we can often easily identify "inadequate" situations that are nonetheless not exploitable--the inadequacy doesn't come from people being dumb or irrational, just from unfortunate information and/or incentive structures. In this part, Yudkowsky talks a lot about how he concluded that the Bank of Japan's monetary policy was flawed, which was later borne out by a policy change that improved things. I'm willing to believe he actually did reach a correct conclusion, but it's sort of odd to me that he spends so much time talking about it. Yudkowsky is a big proponent of "rationality-as-winning," and I can hardly think of something that matters less, in terms of winning, than a non-Japanese non-central-banker's view on Japanese monetary policy.
The other anecdote Yudkowsky talks a lot about, as I mentioned above, is his purchase of lightbulbs for his girlfriend. I'll spell it out a little more, since I was being a little cheeky before. As Yudkowsky recounts it, his girlfriend suffered severely from seasonal affective disorder, and the standard light-box treatments weren't working. He got the bright idea (ha) to just buy a ton more lightbulbs to increase the amount of light, and lo and behold, his girlfriend was cured. His conclusion is basically, I couldn't find any academic studies about this, but I'm not super surprised because academia is full of bad incentives--so when you get an idea like this that seems so obvious it must have been tried, don't be too quick to assume that.
I actually think this is good advice, although not necessarily reached for the right reasons. One thing Yudkowsky doesn't mention at all is the placebo effect, which could easily explain this "prescription" working in an individual case while not being provable in a scientific sense. (Heck, it could even be that his girlfriend was so touched by her boyfriend coming up with crazy schemes just to try to help her, that it helped to cure her.) Another is simply that sometimes things that don't work on average in statistically detectable ways can still work for individuals! Individual outcomes are determined by whole hosts of interacting factors that we can't understand fully. But these don't invalidate the point--they're just further evidence that, even if you don't think you're able to come up with generally applicable innovative ideas, it's still worth trying different stuff in individual cases.
A related and very topical discussion arose recently around face mask effectiveness, and whether public health bodies should recommend their use as protection from SARS-CoV-2. As you may recall, there were initially clear communications in the US that, in general, people should not wear masks; only later was this reversed to the current state where masks are generally recommended and indeed required in many public places. Scott Alexander had an extensive discussion of this issue on his blog (https://slatestarcodex.com/2020/03/23...). His conclusion is basically that the initial non-recommendation of masks was based on the evidence for their efficacy not reaching a sufficiently scientific burden of proof--"not proven effective beyond a reasonable doubt."
I think these discussions add up to some good advice that Yudkowsky sort of communicates in the book, but that I think could have been outlined more clearly. When it comes to individual judgments, a rational person should be weighing evidence in a Bayesian way, such that our determination of the "right" course of action always comes down to a degree of belief. This is a significantly different standard than is used in most of our society's information-generating institutions. Science and medicine may impose stricter rules that filter out more false positives but also some true positives; conversely, academic publishing may reward p-hacking, which results in a lot of non-replicable results being published. Journalism, for its part, works based on representativeness and availability. So it's not responsible for us to outsource our decision-making entirely to any of these information-generating institutions, although we should of course consider their outputs in our own Bayesian decision process. And the corollary is that there may be opportunities for achievement that have not been ratified by our society's information-generating structures. We can begin to identify these by thinking about the biases inherent in these structures.
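As a concrete illustration of that degree-of-belief standard, here is a one-step Bayes update with invented numbers: a single "it seemed to help" observation shifts belief in a cheap intervention without ever clearing a beyond-reasonable-doubt threshold.

```python
# One-step Bayesian update with made-up numbers: how much should a single
# "it seemed to help" observation move your belief in a cheap intervention?
prior = 0.10              # P(intervention works), before any evidence
p_obs_given_works = 0.60  # P(seeing improvement | it works)
p_obs_given_not = 0.30    # P(seeing improvement | it doesn't: placebo, chance)

posterior = (p_obs_given_works * prior) / (
    p_obs_given_works * prior + p_obs_given_not * (1 - prior))
print(f"{posterior:.2f}")  # 0.18 -- belief roughly doubles; certainty not required
```

Whether a 0.18 degree of belief justifies acting then depends on the cost filter discussed below: cheap, reversible interventions can be worth trying at low probabilities of success.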
We should also impose a heavy filter on "what to try" based on cost--it's obviously not a good idea to start taking unproven medications based on the above reasoning, because they could have huge and/or irreversible drawbacks. However, something like "exposing myself to more lightbulbs" seems pretty low-cost and easily reversible. Yudkowsky does communicate this conclusion well--just try stuff if it's low-cost, and don't worry too much about whether you are second-guessing the (generalized) market.
The back end of the book seems like a lot of inside baseball relating to the specific circles Yudkowsky interacts with. The discussion focuses on the responsible use of "the outside view"--for those not familiar with the concept, taking the outside view on a problem means trying to abstract from your individual circumstances and reason by analogy with a related class of instances, whereas taking the inside view means trying to reason through the specifics of your circumstances. The relevant literature identifies that using the outside view can reduce bias, for example helping us to avoid the "planning fallacy" where we assume a project we are working on will go smoothly because we imagine the best possible outcome, rather than thinking of things that could go wrong--which will more naturally come to mind if we think about similar things that happened in the past.
Basically Yudkowsky argues that people use the outside view excessively, resulting in excessive modesty and suboptimal achievement. He cites, for example, conversations with startup founders who use the outside view to estimate their potential market rather than thinking about how it could be a thousand times bigger, and thus end up not shooting for anything more ambitious than others have already achieved. I think this must be a very specific problem with people in the circles Yudkowsky runs in, because I rarely if ever hear people reasoning based on the outside view. (To be fair, only a very specific subset of people are going to read an Eliezer Yudkowsky book either, and he points this out.) It's true that the outside view has a serious drawback, in that it's impossible to definitively identify the correct reference class. But I've never understood the outside view to be a unique planning tool; rather, it's useful for getting a "second opinion" on whether you're being too optimistic about something. It also shouldn't prevent you from trying to achieve something greater than your reference class; it should just dissuade you from making plans that will have very bad outcomes if you don't (for example, overleveraging a company)--which I think remains good advice. I feel like the people Yudkowsky is talking about are using the outside view in an idiosyncratically bad way.
Ultimately, I would boil down the message of this book to: think for yourself, and be willing to try unorthodox stuff if the cost is pretty low. Both very sound pieces of advice, just reached in what felt like a roundabout way.
I was disappointed by this book. I had much higher expectations. It is somewhat lengthy, and too anecdotal for my taste. The best parts (which I had hoped to see more of) are about explaining possible mechanisms for bad equilibria to form and be sustained, and strategies for exploiting or fixing them. All in all, I recommend this book for anyone who does not have an intuitive understanding of Nash equilibria. PS: I think someone should take the time to write an abridged, less LessWrong-centric version which can be shared with normal people. The issues discussed are pretty much THE CAUSE of all societal failures. It's a shame that mainstream discourse doesn't understand them. Update: Slate Star Codex's review of the book might be that good summary. Do read it if you haven't read the book; it's definitely worth it.
“Inadequate Equilibria” by Eliezer Yudkowsky, an author at the Machine Intelligence Research Institute (MIRI), addresses the question of whether there exists any avenue or opportunity for individual thinking in a globalised and internet-fuelled world where the norm is to place blind and total trust in expert consensus. It dwells deeply on the notion of the efficient market hypothesis (EMH), the concept underpinning the logic that asset prices reflect all available information. The EMH literally eviscerates the notion of “low-hanging fruit” from any decision-making calculus, since if there were an opportunity to enjoy the benefits of such low-hanging fruit, that opportunity would already have been exploited.
Hence, if you happen to glimpse a $20 bill lying on the sidewalk, that day undoubtedly is your lucky day. However, if you happen to cast your eyes upon a $20 bill lying on the floor of Grand Central Station, and it happens to be the same bill that was lying there a week ago, then something is amiss. Grand Central is a beehive that witnesses hundreds of thousands of footfalls every day. It is ridiculous to assume that such a multitude would have passed up a $20 bill just lying there for the taking. Maybe it’s some kind of weird trick. Maybe you’re dreaming. But there’s no way that such a low-hanging piece of money-making fruit would go unpicked for that long.
Just when you are beginning to warm up to the notion of Nobel laureate Eugene Fama’s ubiquitous creation (the EMH), Eliezer expertly throws some sand in the gears. The author expends quite a bit of time and effort lashing out at the macroeconomic policies instituted by the Bank of Japan. He and a few of his “econblogger” acquaintances were of the informed opinion that not only were the strategies implemented by the bank daft, but that they also cost Japan trillions of dollars in lost economic growth. But wouldn’t the experts and PhD holders within the hallowed portals of the famed institution know more than Eliezer Yudkowsky? A few years after Eliezer’s criticisms, the Bank of Japan indeed switched strategies, resulting in an instant improvement to an otherwise tepid economy! Does this mean that one of the biggest economies in the world left a trillion-dollar bill on the sidewalk by ignoring what even a non-professional could detect?
The innate tendency to bow to expert opinion is a fallacy to which Eliezer attaches the term “epistemic modesty”. In layman’s terms, epistemic modesty represents an unwillingness to believe that one knows better than the average person -- the exact opposite of the famed (or infamous) Dunning-Kruger effect.
In attacking the syndrome of epistemic modesty, Eliezer provides some interesting illustrations from his personal life. His wife Brienne suffered from Seasonal Affective Disorder (SAD). The received wisdom treatment-wise for SAD involves “light boxes”, very bright lamps that mimic sunshine, thereby making winter seem more like summer. When the consensus therapy gave Brienne no relief, and her seasonal depression got so bad that she had to move to the Southern Hemisphere for three months of every year just to stay functional, Eliezer decided to depart from tradition. Stringing up the house with around 130 LED bulbs, Eliezer was able to cure Brienne’s disorder. The cure rested on the simple premise that ultimately nothing is brighter than the sun: the more the brightness, the more the potential for a complete recovery. This therapy was the equivalent of a vaunted medical establishment leaving a $20 bill on the floor of Grand Central Station.
The book, however, is for the most part arcane, abstruse and incredibly complex. Filled with passages expounding on Bayesian models of decision-making, the reader is left with the feeling that she has definitely bitten off much more than she can comfortably chew.
So is there any escaping the curse of “epistemic modesty”? Eliezer identifies three broad categories of broken systems, and recognising them is his route out of the modesty trap:
- Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;
- Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information;
- Systems that are broken in multiple places, so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

In writing “Inadequate Equilibria”, Eliezer Yudkowsky might well be catering to a niche and targeted audience. As he very clearly states in the book, people who read his book will mostly be smarter than average.
It would have been much more desirable if he had focused on articulating his views to readers irrespective of their smart quotient and intellect. That way everyone would have been adequately equipped to spot and make the best use of dollar bills that might be left on many a sidewalk. But then again if everyone was to be an expert in spotting and exploiting that opportunity, will there be such an opportunity in the first place?
VISITOR: Maybe it’s naive of me... but I can’t help but think... that surely there must be some breaking point in this system you describe, of voting for the less bad of two awful people, where the candidates just get worse and worse over time. At some point, shouldn’t this be trumped by the “voters” just getting completely fed up? A spontaneous equilibrium-breaking, where they just didn’t vote for either of the standard lizards no matter what?
CECIE: Perhaps so! But my own cynicism can’t help but suspect that this "trumping" phenomenon of which you speak would be even worse.
As much as I dislike much of Eliezer's writing, due to his endless ramblings on how he's so smart and everyone else is so dumb, and his general style of taking way more words than necessary to convey a simple idea, I thought this book was pretty well written. As a believer in the Efficient Market Hypothesis (one who has tried to be proven wrong, but has failed so far), I may have subconsciously extended the same concept to think that many other fields, e.g. academia, medicine, etc., are also fairly efficient, at least within what is realistically achievable given perverse incentives. But this book describes the ways in which systems may converge towards an equilibrium that is far from ideal, when 1) those who notice a mistake cannot benefit from correcting it, 2) expert knowledge cannot flow to beneficiaries effectively (potentially due to asymmetric information), and 3) the system is stuck in a bad Nash equilibrium, i.e. Moloch. Recommended read.
I first picked this up many months ago, and at the time I found Eliezer's writing a little difficult to parse, but in the intervening months I've spent a lot of time in close proximity to rationalist friends + reading thematically related books, so when I picked this up again I found that I actually had the vocabulary and mental framework for most of this to be coherent and useful. I have longer book notes somewhere else, but the essence of the book is: how do you know when you can do better than what exists (with some amount of effort, thought, etc.)? It applies to problem solving at various scales (e.g. personal medical issues, starting a startup, solving some global crisis, changing The System) -- if a solution exists, why hasn't the problem been solved yet? Two main reasons: 1) misalignment of incentives (the chapter on inadequacy analysis / Moloch's toolbox was interesting, if a little long, and reminded me a lot of the book "The Elephant in the Brain" -- the idea of looking beneath the surface / explicit purposes of a system and understanding what's really motivating the individual actors. It brings in concepts like the principal-agent problem, common knowledge, etc., and takes the frame that systems aren't suboptimal *just* because humans are stupid, but rather because we're responding to sometimes non-explicit incentives); 2) limited resources / time / some other factor X, opportunity cost <- this is where we may be able to find opportunities to do things differently, if we are particularly endowed in some relevant area or uniquely determined / passionate / knowledgeable about some issue, etc. This doesn't mean that all problems are solvable, that better states always exist, etc. But there are opportunities to do better sometimes, and it's about understanding the system thoroughly AND coming up with the right ideas AND putting in the work to actually do it (for example, so much of Uber's "contribution" as a company was to make things work with regulations).
The concept of an inadequate equilibrium seems like a useful model, and the ability to identify its sources (and figure out whether you can transcend it with effort) seems like something you can get better at with practice. As someone who tends towards modest epistemology, this is really relevant to me. ("If you've never wasted an effort, you're filtering on far too high a required probability of success.")
Aside: I am a fan of thermodynamic analogies. To take it further (hello, high school chemistry): a lower-enthalpy / more favorable state exists, but the energy barrier means that it takes a certain energy / catalyst / pizzazz to get there.
I'm just going to get it out there - Yudkowsky, along with Scott Alexander (and SSC, LessWrong-ers, rationalists, etc), irritates me on a personal level. Is my review biased based on this? Yeah, probably, so you can consider it with that in mind. That being said, there are at least snippets of wisdom in this book.
"...Usually, when things suck, it's because they suck in a way that's a Nash equilibrium."
"So far, every time I've asked you why someone is acting insane, you've claimed that it's secretly a sane response to someone else acting insane. Where does this process bottom out?"
The Gell-Mann Amnesia effect
"You will detect inadequacy every time you go looking for it, whether or not it's there. If you see the same vision wherever you look, that's the same as being blind."
"...You can say 'holy shit, everyone in the world is fucking insane. However, none of them seem to realize that they're insane. By extension, I am probably insane. I should take careful steps to minimize the damage I do.'"
"...When you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people's not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge."
"If you and a trusted peer don't converge on identical beliefs once you have a full understanding of one another's positions, at least one of you must be making some kind of mistake. If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact."
"Hey! Guys! I found out how to take over the world using only the power of my mind and a toothpick." "You can't do that. Nobody's done that before." "Of course they didn't, they were completely irrational." "But they thought they were rational too." "The difference is that I'm right." "They thought that too!"
"If just anyone could find some easy sentences to say that let them get higher status than God, then your system for allocating status would be too easy to game."
"Try to make sure you'd arrive at different beliefs in different worlds. You don't want to think in such a way that you wouldn't believe in a conclusion in a world where it were true, just because a fallacious argument could support it. Emotionally appealing mistakes are not invincible cognitive traps that nobody can ever escape from. Sometimes they're not even that hard to escape."
Excellent wisdom, writing not so much. In true Yudkowsky fashion, this book comes with some good mental handles---things like the distinction between "efficiency", "unexploitability", and "inadequacy." The gist is that most inadequate systems are that way because they are unexploitable---that is to say, that an outsider can't make money by coming in and putting things right.
He makes an analogy to hungry agents, running around eating up free energy. Thus, if you're in a domain where you'd expect a lot of agents eating up free energy, you shouldn't expect to do any better than average, since you will require a PARTICULAR, SPECIFIC ADVANTAGE in order to beat all the other smart agents who are eating up the free energy.
What I like most about the book is the stuff against modesty but I realize I shouldn't have started writing this review because it's late and I should go to bed because my thoughts here are jumbled. GOODNIGHT
In "Inadequate Equilibria," Eliezer Yudkowsky refers to "Moloch's Toolbox" as a
metaphorical set of tools or mechanisms that perpetuate suboptimal equilibria and systemic inefficiencies.
Moloch, in this context, symbolizes the forces or dynamics that trap societies, institutions, and individuals in inefficient and often harmful states, despite the existence of better alternatives.
Here are some key components of "Moloch's Toolbox" as discussed in the book:
1. **Coordination Problems**: Individuals or groups may recognize a better equilibrium but cannot coordinate their actions to achieve it due to lack of communication, trust, or shared incentives. Coordination problems prevent collective action that could lead to improvements.
2. **Misaligned Incentives**: Incentives within systems often do not align with overall social welfare. For example, individuals or institutions may pursue actions that benefit them personally but are detrimental to the group or society as a whole.
3. **Principal-Agent Problems**: Situations where agents (those who make decisions) do not perfectly align their interests with principals (those who are affected by the decisions). This misalignment leads to decisions that are not optimal for the larger group or system.
4. **Status Quo Bias**: A cognitive bias that favors existing conditions over change, even when change could lead to better outcomes. This bias contributes to the persistence of inefficient systems because people prefer the familiar over the uncertain.
5. **Coordination Failure due to Lack of Trust**: The absence of trust among individuals or entities prevents effective collaboration and joint efforts to move towards a better equilibrium. Without trust, even mutually beneficial actions are avoided.
6. **Regulatory Capture**: When regulatory agencies tasked with overseeing industries become dominated by the industries they are supposed to regulate. This results in regulations that favor the industry at the expense of public interest.
7. **Cultural and Institutional Inertia**: Deeply ingrained cultural norms and institutional practices that resist change, even in the face of evidence that change would be beneficial. This inertia keeps systems locked in suboptimal states.
8. **Information Asymmetry**: When one party has more or better information than another, leading to decisions that benefit the informed party at the expense of the less informed. This imbalance perpetuates inefficiencies and exploitation.
9. **Fear of Change and Risk Aversion**: Individuals and institutions may avoid making changes due to fear of potential risks and uncertainties, even when the expected benefits outweigh the risks. This risk aversion contributes to the maintenance of the status quo.
10. **Fragmented Authority**: When decision-making power is dispersed among multiple actors with conflicting interests or priorities, making it difficult to implement coherent and effective policies or changes.
By highlighting these mechanisms, Yudkowsky illustrates how various factors, both psychological and structural, contribute to the persistence of inadequate equilibria.
## Why Blame Moloch?
1. **Metaphor for Systemic Failures**: Moloch represents the collective, emergent forces that lead to systemic inefficiencies and coordination failures. By personifying these abstract forces, Yudkowsky and others (like Scott Alexander) highlight how these systemic issues persist even when individual participants recognize the problems and wish for improvement.
2. **Coordination Problems**: Many societal and institutional failures arise from coordination problems where individuals or groups are unable to align their actions for mutual benefit. Moloch symbolizes these failures, emphasizing that the root cause is not individual malice but rather the inability to effectively coordinate.
3. **Misaligned Incentives**: Systems often suffer from misaligned incentives, where the actions that benefit individuals do not lead to the best outcomes for the group. Moloch is blamed because these perverse incentives are entrenched in the system, making it difficult for individuals to act in ways that would lead to better collective outcomes.
4. **Tragedy of the Commons**: Moloch embodies the dynamics of the tragedy of the commons, where individually rational actions lead to collectively irrational outcomes. For example, overfishing by individual fishermen, each driven by the incentive to maximize personal gain, can deplete fish stocks, harming everyone in the long run (see the sketch after this list).
5. **Inertia and Resistance to Change**: Once a system is stuck in an inadequate equilibrium, it can be very resistant to change due to various forms of inertia—cultural, institutional, or economic. Moloch symbolizes the forces that maintain the status quo and resist efforts to move to a more optimal state.
6. **Metaphor for Non-Agency**: The metaphor of Moloch is powerful because it emphasizes the non-agency of these systemic problems. By personifying the issue, Yudkowsky underscores that these problems are not the result of a single malevolent entity but rather the emergent outcome of many individual actions and decisions.
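As a concrete rendering of the overfishing dynamic in item 4 (all numbers invented for illustration), the brute-force sketch below shows that mutual restraint pays each fisherman more than the self-enforcing catch level, yet restraint is not a best response to itself:

```python
# Toy tragedy of the commons: N fishermen each pick a catch level, and the
# more that is taken in total, the less the depleted stock yields per catch.
N = 10

def payoff(my_catch, others_catch):
    total = my_catch + others_catch * (N - 1)
    stock_health = max(0.0, 1.0 - total / 100.0)  # depletion hurts everyone
    return my_catch * stock_health

def best_response(others_catch):
    # Brute-force the catch that maximizes my yield, given what others take.
    return max(range(51), key=lambda c: payoff(c, others_catch))

print(best_response(5))  # 27  -- if everyone else shows restraint, greed pays
print(best_response(9))  # 9   -- "everyone takes 9" is self-enforcing (Nash)
print(payoff(9, 9))      # 0.9 -- each fisherman's yield at the bad equilibrium
print(payoff(5, 5))      # 2.5 -- each fisherman's yield under mutual restraint
```

No individual can reach the 2.5 outcome alone: against nine others each taking 9, cutting your own catch to 5 yields 5 * (1 - 86/100) = 0.7, worse than the 0.9 from staying at 9.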
The Power of the Metaphor:
- **Clarifying the Problem**: By blaming Moloch, Yudkowsky clarifies that the issues are deeply rooted in the structure and dynamics of the system itself, rather than in the intentions of individuals. This helps to focus on systemic solutions rather than attributing blame to specific actors.
- **Mobilizing Action**: The metaphor can serve to mobilize collective action. Recognizing that everyone is trapped by Moloch’s influence can unify efforts to address the underlying systemic issues.
Some very important lessons here that I want many people to read; however, it's not always easy to parse unless you already have a lot of experience with Eliezer's ways of thinking and use of vocabulary.
Like many others, I enjoyed the first four chapters quite a lot and found the analysis of "inadequacy" to be useful. After that the book gets much worse. I wouldn't really recommend the second half of the book.
It's no secret that I'm a fan of Yudkowsky's writing and singular style of critical thinking. This book gave me no reason to change that.
In the words of the author himself: "This is a book about two incompatible views on the age-old question: When should I think that I may be able to do something unusually well?" Those two views are epistemological modesty and inadequacy analysis, and the book opens by contrasting them through various examples.
Yudkowsky examines the topic of civilisational inadequacy using the terminology of contemporary economics. In particular, the concepts of 'efficiency' and 'exploitability' are used to provide a framework through which the shortcomings of incentive structures can be understood.
Yudkowsky's most poignant example of this is how, in the US, babies with digestive problems are intravenously fed formula with an imbalanced lipid profile, leading to liver damage and death. See here. Whilst superior formula exists and has been demonstrated to drastically reduce mortality, the only way for babies in the US to receive it is if (1) they already have liver damage, (2) the doctor/parents are aware of the problem and the superior formula, (3) the hospital is legally allowed to import the superior formula. Yudkowsky then challenges the reader to explain this to a visiting alien race and defend humanity's apparent disregard for the lives of babies.
The book goes on to examine the different elements of "Moloch's Toolbox," how inadequate systems can be identified, and the challenges faced by those attempting to remedy the shortcomings of society.
Whilst this work definitely appeals to a Rationalist worldview, I suspect that anyone with an interest in basing their decisions in reality will find it not only fascinating but also truly useful. It's also freely available online.
--------------------------------- Some of my favourite extracts follow: ---------------------------------
"Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.” Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.” Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”"
"If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. My view is that this is best done from a framework of incentives and the equilibria of those incentives—which is to say, from the standpoint of microeconomics. This is the main topic I’ll cover here."
"If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort."
"A market that knows everything you know is a market where prices are “efficient” in the conventional economic sense—one where you can’t predict the net direction in which the price will change."
"We can see the notion of an inexploitable market as generalizing the notion of an efficient market as follows: in both cases, there’s no free energy inside the system. In both markets, there’s a horde of hungry organisms moving around trying to eat up all the free energy. In the efficient market, every predictable price change corresponds to free energy (easy money) and so the equilibrium where hungry organisms have eaten all the free energy corresponds to an equilibrium of no predictable price changes. In a merely inexploitable market, there are predictable price changes that don’t correspond to free energy, like an overpriced house that will decline later, and so the no-free-energy equilibrium can still involve predictable price changes."
"I’ve seen a number of novice rationalists committing what I shall term the Free Energy Fallacy, which is something along the lines of, “This system’s purpose is supposed to be to cook omelettes, and yet it produces terrible omelettes. So why don’t I use my amazing skills to cook some better omelettes and take over?” And generally the answer is that maybe the system from your perspective is broken, but everyone within the system is intensely competing along other dimensions and you can’t keep up with that competition. They’re all chasing whatever things people in that system actually pursue—instead of the lost purposes they wistfully remember, but don’t have a chance to pursue because it would be career suicide. You won’t become competitive along those dimensions just by cooking better omelettes... What inadequate systems and efficient markets have in common is the lack of any free energy in the equilibrium. We can see the equilibrium in both cases as defined by an absence of free energy. In an efficient market, any predictable price change corresponds to free energy, so thousands of hungry organisms trying to eat the free energy produce a lack of predictable price changes. In a system like academia, the competition for free energy may not correspond to anything good from your own standpoint, and as a result you may label the outcome “inadequate”; but there is still no free energy. Trying to feed within the system, or do anything within the system that uses a resource the other competing organisms want—money, publication space, prestige, attention—will generally be as hard for you as it is for any other organism."
Inadequate Equilibria is a fantastic book about how to think about certain things that affect your own life (in other words, what model to use) and thus how to improve your decision-making (and your life with it).
The author points out, through concrete examples, how certain systems are badly broken, stuck in an inadequate equilibrium, sometimes with dire consequences (as in the case of feeding a bad source of fat to babies in the US, which still causes avoidable deaths), and explains in detail how this is possible. The way he explains the completely avoidable deaths of these babies through a conversation between a Visitor from a Better World, a Conventional Cynical Economist, and Simplicio, a student at a major university with no knowledge of economics, is simply brilliant and makes that long chapter not only extremely insightful but also fun to read.
Another great thought that Yudkowsky is making is that the "modest epistemology", thinking that I'm no smarter than the experts in a field and thus cannot know better, is very often the wrong way to approach a problem because of all the brokenness of systems (a good example is how he cured her wife's SAD with a simple method that wasn't published).
Having already read HPMOR and about half of The Sequences, I still hadn't made up my mind about whether EY had anything genuinely useful to say. Certainly his writing is interesting, and there are lots of interesting facts in it. But it's another question altogether whether the whole package of his writing is worthwhile. He is not at all shy about prescribing certain ways of thinking, and simply knowing lots of interesting trivia about evolutionary psychology and the history of science is no basis for dispensing life advice. His writing always bears a whiff of egotism-fodder. [Not an exact quote:] "Ah, my dear reader, because you have been initiated into the Bayesian Conspiracy, you too are far smarter than those so-called academics, with their frequentist statistics and their use of 'emergence' as a fake explanation."
This book convinced me that EY does actually have something genuinely enriching to say. Like Haidt's "The Righteous Mind", this book provides you with a shiny new tool for understanding the world. Whereas The Righteous Mind allowed you to understand why other people have such perverse political opinions, this book allows you to understand why society can act so stupid sometimes, and how you should respond as an individual to such bewildering incompetence. In particular: When is it OK to think you're right and everyone else is wrong, and when can you expect to be able to do better than everyone else? Intellectual modesty is an over-correction for Dunning-Kruger. You can expect to know better than even experts (gasp!) if you 1) pay attention to predictive track records, and 2) pay attention to the dynamics of a system: whether you would expect a genuine improvement to actually be adopted.
Like most of EY's writing, it could do with some trimming (I think the last couple of chapters could have been cut), but it's a huge improvement on the usual 1,600 pages.
This book changed my opinion of Yudkowsky from slightly negative to positive. It was more focused and better presented than some of his sequences that I’ve read, and his writing has improved since he wrote his Harry Potter fanfic. I have a better understanding of why people dislike him, and in the future I’ll give him more of the benefit of the doubt when he says something that I think is crazy.
I’m still not going to finish his Potter fanfic though
Very interesting ideas about when systems might be inadequate (basically, doing something in a more wasteful way than necessary), whether they are exploitable, and by whom.
- Short-selling capability is essential for (short-term?) efficiency, so that prices can adjust in both directions.
- Often there exist multiple Nash equilibria, but the selected one is not Pareto-optimal. A cause of this might be multi-factor markets with multiple categories of participants, where it's particularly likely that they'll end up stuck/stable in a bad state.
=> Theorize, but test theories, especially when it is quick and cheap to do so (reminds me of the "strong opinions, weakly held" doctrine). BET! -> "Skin in the game". This allows building better judgement (à la Buffett/Munger, Naval).
Essential categories of causes of failure:
- decisionmakers who are not beneficiaries
- asymmetric information
- bad Nash equilibria
Applying modesty (Tetlock's fox, using the outside view, "agile") vs. applying theorizing (hedgehog, building a causal model, inside view, Thiel) is quite central for navigating life. -> A danger of modesty is "blind empiricism". Example regarding startups:
"The concept of “minimum viable product” isn’t the minimum product that compiles. It’s the least product that is the best tool in the world for some particular task or workflow. If you don’t have an MVP in that sense, of course the users won’t switch. So you don’t have a testable hypothesis. So you’re not really learning anything when the users don’t want to use your product." -> Made me think of Thiel's view that failure is overrated: one learns very little from it, because startup failure is overdetermined.
Am skeptical about Yudkowsky's views on *macro*-econ (the actions of the ECB or the Bank of Japan); he might neglect long-term effects, and he opposes Austrian economics and Spitznagel, an expert whom I trust.
In general, I would have loved more thoughts regarding inadequacy/epistemology/exploitability, with a juxtaposition of long vs. short time horizons.
“[..] most of the time systems end up dumber than the people in them due to multiple layers of terrible incentives, and that this is normal and not at all a surprising state of affairs to suggest”
The founder of LessWrong delves deeply into the role sub-optimal Nash equilibria play in society and progress. He also frequently digresses into comments about decision theory, statistics, and cognitive science, which I appreciated as a fan of his blog and organization. It's a fantastic framework for identifying violations of the efficient market hypothesis, both in finance and in society at large.
A pretty interesting take on why healthcare, education, politics, and many other fields seem to grind to a halt. It describes how individuals, acting perfectly rationally in accordance with their own self-interests, end up creating results that harm everybody (including themselves).
It's a fascinating description of one of the worst problems in society. Not a lot in terms of solutions, unfortunately. But sometimes insightfully describing the problem is enough of a step forward.
"If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. My view is that this is best done from a framework of incentives and the equilibria of those incentives […]"
Eliezer Yudkowsky has a talent for teaching rationality, economics and decision theory concepts in an accessible manner and usually his writing helps transform unformed thoughts into proper labels and names, thus enabling new modes of thinking.
The concepts I've taken from this book [the descriptions below are probably hard to understand without reading the actual book]:
- Efficiency, inexploitability, inadequacy:
○ Efficiency [as in, efficient market]: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.”
○ Inexploitability [There is an inefficiency but we can't make money off of it]: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.”
○ Inadequacy [a gap where you can outperform the best current human results]: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”
- Modest epistemology [the point of view Eliezer is refuting in this book]: If there were really something to improve or discover here, someone would've already done it! I'm not smarter or better than others, so why assume that I know or can do something others don't?
- Moloch’s Toolbox: There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories:
  - Decisionmakers who are not beneficiaries;
  - Asymmetric information;
  - and above all, Nash equilibria that aren’t even the best Nash equilibrium, let alone Pareto-optimal.
  In other words:
  - Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;
  - Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and
  - Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.
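The third category (bad Nash equilibria) is the one a small worked example can make vivid. The payoffs below are hypothetical, my own toy numbers rather than anything from the book: a symmetric two-player game where "stay broken" is a stable equilibrium even though "both reform" would be strictly better for everyone.

```python
# Illustrative sketch with made-up payoffs: a bad Nash equilibrium.
# Moves: 0 = keep doing the broken thing, 1 = reform.
# payoffs[my_move][their_move] = my payoff (the game is symmetric).
payoffs = [
    [2, 2],  # I stay broken: I get 2 regardless of the other player
    [0, 5],  # I reform alone: 0 ("career suicide"); we both reform: 5
]

for move in (0, 1):
    stay = payoffs[move][move]          # payoff if both keep playing `move`
    deviate = payoffs[1 - move][move]   # payoff if I alone switch
    print(f"both play {move}: stay pays {stay}, unilateral deviation pays {deviate}")

# Output:
#   both play 0: stay pays 2, unilateral deviation pays 0
#   both play 1: stay pays 5, unilateral deviation pays 2
# (0, 0) is a Nash equilibrium that is not Pareto-optimal: (1, 1) beats it
# for both players, but no single actor gains by moving first, so escaping
# requires exactly the "magically coordinated action" described above.
```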
---
After introducing the basic concepts of inadequacy analysis, Eliezer goes into some case studies, sometimes in depth. He starts with the U.S. medical system ("the most broken system that still works") but then also dissects the failures and inadequacies of academia, venture capital, and politics.
As examples of personal contributions in instances where our civilization happens to be inadequate, he cites finding a solution for his wife's psychological problem, which involved installing 65 light bulbs in their house; creating his own ketogenic meal-replacement drink recipe; inventing a new decision theory; and *knowing* that the Bank of Japan's monetary policy was harming Japan's economy.
Overall, Eliezer's writing is as clear and entertaining as ever, and it's obvious he has learned from previous publications such as Rationality: From AI to Zombies. He gives many concrete examples for his abstract points, with practical suggestions on how to use, and not misuse, the techniques presented. The book is much shorter than previous ones but still too long, and could use further editing, especially in the last third. Overall, if you've ever enjoyed Yudkowsky, don't skip this one.
---
Finally, some quotes that I thought were worth highlighting:
- For our central example, we’ll be using the United States medical system, which is, so far as I know, the most broken system that still works ever recorded in human history. If you were reading about something in 19th-century France which was as broken as US healthcare, you wouldn’t expect to find that it went on working when overloaded with a sufficiently vast amount of money. You would expect it to just not work at all.
In previous years, I would use the case of central-line infections as my go-to example of medical inadequacy. Central-line infections, in the US alone, killed 60,000 patients per year, and infected an additional 200,000 patients at an average treatment cost of $50,000/patient.... So my new example is infants suffering liver damage, brain damage, and death in a way that’s even easier to solve, by changing the lipid distribution of parenteral nutrition to match the proportions in breast milk.
- To paraphrase a commenter on Slate Star Codex: suppose that there’s a magical tower that only people with IQs of at least 100 and some amount of conscientiousness can enter, and this magical tower slices four years off your lifespan. The natural next thing that happens is that employers start to prefer prospective employees who have proved they can enter the tower, and employers offer these employees higher salaries, or even make entering the tower a condition of being employed at all.
- VISITOR: Hold on, I think my cultural translator is broken. You used that word “doctor” and my translator spit out a long sequence of words for Examiner plus Diagnostician plus Treatment Planner plus Surgeon plus Outcome Evaluator plus Student Trainer plus Business Manager. Maybe it’s stuck and spitting out the names of all the professions associated with medicine.
  CECIE: Your translator wasn’t broken. In our world, “doctors” are supposed to examine patients for symptoms, diagnose especially complicated or obscure ailments using their encyclopedic knowledge and their keen grasp of Bayesian inference, plan the patient’s treatment by weighing the costs and benefits of the latest treatments, execute the treatments using their keen dexterity and reliable stamina, evaluate for themselves how well that went, train students to do it too, and in many cases, also oversee the small business that bills the patients and markets itself. So “doctors” have to be selected for all of those talents simultaneously, and then split their training, experience, and attention between them.
- VISITOR: I must still be missing something. I just don’t understand why all of the people with economics training on your planet can’t go off by themselves and establish their own hospitals. Do you literally have people occupying every square mile of land?... VISITOR: So there’s no way for your planet to try different ways of doing things, anywhere. You literally cannot run experiments about things like this.
- The observation stands: there must be, in fact, literally nobody on Earth who can read Wikipedia entries and understand that omega-6 and omega-3 fats are different micronutrients, who also cares and maximizes and can head up new projects, who thinks that saving a few hundred babies per year from death and permanent brain damage is the most important thing they could do with their lives.
- Living in an Inadequate World: Whether you’re trying to move past modesty or overcome the Free Energy Fallacy: Step one is to realize that here is a place to build an explicit domain theory—to want to understand the meta-principles of free energy, the principles of Moloch’s toolbox and the converse principles that imply real efficiency, and build up a model of how they apply to various parts of the world. Step two is to adjust your mind’s exploitability detectors until they’re not always answering, “You couldn’t possibly exploit this domain, foolish mortal,” or, “Why trust those hedge-fund managers to price stocks correctly when they have such poor incentives?” And then you can move on to step three: the fine-tuning against reality.
- So a realistic lifetime of trying to adapt yourself to a broken civilization looks like:
  - 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” A few people, but not many, will answer “Yes” to enough instances of this question to count on the fingers of both hands. Moving on to your toes indicates that you are a crackpot.
  - Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself (i.e., without trying to solve the further problems of widespread adoption), after a few weeks’ research and a bunch of testing and occasionally asking for help?” (See my experiments with ketogenic diets and SAD treatment; also what you would do to generate or judge a startup idea that wasn’t based on a hard science problem.)
  - Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.
  - The accumulation of many judgments of the latter kind is where you get the fuel for many small day-to-day decisions (e.g., about what to eat), and much of your ability to do larger things (like solving a medical problem after going through the medical system has proved fruitless, or executing well on a startup).
- Oh, and bet. Bet on everything. Bet real money. It helps a lot with learning.
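Since "bet on everything" is pitched as a learning tool, one standard way to turn bets into feedback (my suggestion, not a method from the book) is to record each bet as a probability and score the record with a Brier score; the numbers below are a made-up betting record.

```python
# Minimal sketch: scoring a betting record with the Brier score.
# Each entry is (stated probability that X happens, outcome: 1 = yes, 0 = no).
bets = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1), (0.3, 0)]  # made-up record

brier = sum((p - outcome) ** 2 for p, outcome in bets) / len(bets)
print(f"Brier score: {brier:.3f}")  # 0.118 here; 0.0 is perfect calibration
# Always answering 0.5 scores 0.25, so a stable record well below 0.25 is
# evidence that your judgment beats the "I can't know anything" default.
```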
The good: There are a lot of interesting gems in this book. The author goes into the details of how systems like the US medical system, academic research, the Bank of Japan, and startups have such incentive structures in place that changing the systems can be harder than it looks. He also provides context for understanding that in a lot of systems, "everyone acting insane is just a rational reaction to everyone else acting insane". He then puts this in a framework of Nash equilibria.
The bad: his writing style is not for me. I identified three patterns in his writing that I don't enjoy. First, the interesting content is put in the form of a long, semi-structured dialogue. I don't enjoy this, as the characters seem simplified, and you don't really understand where the discussion is going until you've read it (more or less). The second thing I didn't enjoy was that the second part of the book is comprised of long, barely structured, academic sentences. I understand that lots of people have this issue with Yudkowsky's writing style, and I'll confirm it: you need to focus really hard in order to parse his sentences and chapters. I also didn't find much value in the second part of the book; to me, it looked like an attempt to explain simple concepts using far too complicated sentence structures. At some point one has to decide whether one is writing an academic paper or a book for the general public. To put it in AI terms, it feels like the author is "overfitting" when it comes to expressing ideas in an academic way. When writing for the general public, I think a simpler style, with more common words, should be used. That being said, I did learn a lot of words and concepts. The third thing I dislike is that I feel his motivation for this book comes from arguing a lot with people and wanting to prove that he's actually right. I know, this is highly subjective. However, a lot of the topics he writes about in this book started from conversations he had, as stated in the book. I also feel like he's misrepresenting other people's points of view. I had a powerful feeling that he's being unfair to those he has talked with in the past, because of how he portrays their arguments... those people all seem suspiciously simplistic in their thinking, and the author doesn't give them the benefit of the doubt... at all.
The ugly: two things here. First, I feel like Yudkowsky is either unfair or dishonest in his thinking, which, again, is based on how he constructs the characters that are not himself. Second, and this is funny: I was eager to finish the book. As the last reference in the book (before the conclusion), however, he posts a link to an additional chapter. I started reading this chapter somewhat annoyed by the fact that the guy doesn't want to let me finish the book. The extra chapter feels simply like a rationalization of his motivations for writing the "Harry Potter and..." series (I read about 10% of that chapter and had to drop it).
Conclusion: Although there are a lot of interesting pieces of economic thinking in this book, the writing style, and the feeling that the author is not in control of or aware of his own feelings and motivations, is disappointing.
Who should read this: People interested in economic thinking, with some background in game theory.
I recommend that this book should be read with care and patience. You might get annoyed or irritated by it. If you're emotional or an extremist thinker, this book is not for you :p
I found the first half of the book interesting: a bunch of interesting case studies and facts. I learnt about economics and gained a better intuition about what a market is for. I liked the free-energy metaphor for efficient markets: we are following gradients of exploitability (this reminds me of Schmidhuber's theory of fun/creativity: following gradients of compression).
The second half of the book is closer to self-help: a rant about what Eliezer thinks it means to be rational and about the pitfalls of being modest (which is different from avoiding overconfidence, a.k.a. humility).
When can you reasonably think you are right and others are wrong? If 100 experts think X, but you think Y, who is likely to be correct? The modest view says that we should always defer to the 100 experts; how can we be so arrogant? But Eliezer explores pathological (or inadequate) cases, where society is doing something less than optimally relative to its actual goals, and where the 100 experts might be led astray by the systems around them and their incentives. So the question becomes: under what conditions is it possible that I could be right and others wrong?
A general point, whose exact wording I can't quite remember: smart people can do dumb things under the wrong incentives. Being right is not always a question of who is smarter, but of who is motivated correctly.
I think the modest view is captured by the joke: "[A] friend stops and says, "Look, there is a $20 bill on the ground!" The economist turns and coolly replies, "Can't be. If there was a $20 bill on the ground, somebody would have already picked it up.""
How can you be right? Modesty tells us to shut up and listen to the data, not to speculate and let our beliefs tell comforting lies. But theories are useful: they allow us to make predictions where data does not exist.
A couple of thoughts while reading:
- The dialogue between the Visitor, Cecie and Simplicio got me thinking about how these inadequate systems have evolved over time: a series of local changes that has led to this local minimum. The problem is that the activation energy necessary to apply a global perturbation (coordinating many actors) is too large; we are only allowed to make local perturbations controlling a couple of the variables (see the sketch below).
- A theory is the compression of evidence.
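As a sketch of that first thought (my own toy model, not from the book): a cost landscape where no actor can improve things by moving alone, but a coordinated move of both variables escapes the local minimum.

```python
# Toy landscape: unilateral (local) moves are penalized, coordinated moves win.
def cost(x, y):
    # Mismatch between the two actors is heavily penalized (unilateral reform
    # is "career suicide"); the (2 - x - y) term rewards jointly reaching (1, 1).
    return 10 * (x - y) ** 2 + (2 - x - y)

print(cost(0, 0))  # 2  -> the current stuck state: a local minimum
print(cost(1, 0))  # 11 -> actor x moves alone: strictly worse
print(cost(0, 1))  # 11 -> actor y moves alone: also worse
print(cost(1, 1))  # 0  -> the coordinated "global perturbation" is best
```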
Favourite quotes/aphorisms:
- You literally don’t have a healthcare market. Nobody knows what outcomes are being sold. Nobody knows what the prices are.
- Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it.
- If you’ve never wasted an effort, you’re filtering on far too high a required probability of success. (A more general point: you will be, and should be, failing more if you are really trying new things and pushing boundaries.)
Basically two different books. The first is about the typical failure modes of a society that struggles to create common knowledge and solve coordination problems. The second offers guidance about when to trust experts and when to set aside that "modest epistemology" in favour of your own intuitions.
I preferred the first book to the second: I enjoyed the author's righteous anger about civilizational inadequacy. One example stuck with me: hundreds of babies die each year simply from getting an incorrect (and outdated) dosing of fatty acids. This happens because hospitals don't aggregate their data (if it happens to only one baby every few years per hospital, the scale of the problem isn't clear) and parents have difficulty accessing information about which hospitals use up-to-date dosing.
Now, one of the reasons that example stuck with me is that too many other examples were from the author's personal life. For all that each was meant to present an instance where his immodest epistemology set him ahead, I didn't think all of them were convincing. (In particular, his dialogue of advice to a startup founder seemed misguided.) The book would have been more convincing if he had given more examples of other people practicing the same kind of thinking, or examples where he had been wrong. This contributed to the overall tone being rather off-putting and tirade-y for my taste (hence the three-star rating, despite interesting content).
This book is not long, but still much longer than needed to get its points across. Most of the first half was great, specifically the parts about economics. However, the second part about modest epistemology is in my honest opinion not well reasoned enough and kind of unclear.
Instead of reading this, my recommendation would be to read the Slate Star Codex review of the book, which is essentially a very thorough summary of all the points; with it, you won't miss anything.
Good definition of when it's good to take an intentionally modest approach: have you ever accomplished anything in the area you are talking about? No. OK. Modesty, please. When to not take such a modest approach: is it cheap to find out if you are actually better than you think? Then take the test. If you fail, then be modest.
Also, I liked the simple definition of a minimum viable product (MVP): the simplest thing that is the best in the world at doing at least one thing.
I think this is not a book so much as a collection of blog posts for people who like to get information from blog posts. Also, the Socratic dialog approach was tedious. This is not Gödel, Escher, Bach: An Eternal Golden Braid we're talking about here.
This book taught me some very interesting things about people and economics in the first three chapters, and was well worth the read overall.
The later chapters had harder-to-follow explanations and some weird examples with slightly problematic reasoning. Still, those problems don’t hurt the central argument much because those chapters are about side topics.
The book seems less like a standalone book than I hoped and more like a continuation of the author’s series of blog posts on Less Wrong. This is because of the opinionated phrasing, the matter-of-fact slightly-arrogant tone, and the anecdotes used as examples. The similarity isn’t such a bad thing, since I loved the Less Wrong “sequences” and found them very insightful when I first read them, but it does make it harder to recommend this book to friends.