Vimal & Sons

December 5, 2021

Cliff Notes Version of Thinking in Bets by Annie Duke

Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts 


Introduction

Winning and losing are loose signals of decision quality. We need to separate decision quality from outcome quality. Two things determine how our lives turn out: the quality of our decisions and luck. Thinking in bets helps us learn to tell the difference between the two.

Chapter 1 - Life is Poker, Not Chess

Almost all of us tend to equate the quality of a decision with the quality of its outcome. In poker this is called 'resulting'. Resulting is a routine thinking pattern that bedevils all of us. When we are asked about the best decisions we have made in our lives, our answers invariably veer towards ranking our results, not our decisions. What we miss is that a good decision may deliver a bad or undesirable outcome; the bad outcome, in and of itself, should not be used to judge the quality of the decision. In other words, we can't judge decisions as good or bad depending on how they turn out. Once we engage in resulting, we also tend to engage in hindsight bias (the tendency, after an outcome is known, to view that outcome as inevitable).

The question is: why do we behave this way? The reason is that humans are inherently irrational creatures. We recognise the existence of luck, but we do not want to accept the role luck has played in our lives. We do not want to accept randomness for what it is, and we try to create order even where there is none. That is the way we have evolved.

When our ancestors heard rustling on the savanna and a lion jumped out, they made a connection between 'rustling' and 'lions'. We survived by finding predictable correlations, in the process making many correlations that were doubtful or false. Incorrectly interpreting rustling from the wind as an oncoming lion is a 'Type 1' error, a false positive. The consequences of such an error are far less grave than those of a 'Type 2' error, a false negative: interpreting every rustle as the wind and missing the lion. We seek certainty, and that craving can wreak havoc on our decision-making in an uncertain world.

Since we seek certainty, we work backwards from the outcomes to figure out how things happened. In the process, we are susceptible to a variety of cognitive traps, like assuming causation when there is only correlation or cherry-picking data to confirm the narrative we prefer. We pound square pegs into round holes to maintain the illusion of a tight relationship between our outcomes and our decisions.

Our lives revolve around a lot of hidden information and a great deal of luck. Life, unlike chess, is a game of incomplete information and involves decision-making under uncertainty over time. Yet we treat life decisions as if they were chess decisions.

We have to redefine the concept of being wrong. If, of two options, we chose the one with the higher probability of success and it failed, that doesn't mean we were wrong. It just means that the lower-probability outcome occurred, and that can happen more often than one imagines. Decisions are bets on the future, and they aren't 'right' or 'wrong' based on whether they turn out well on any particular iteration. When we think in a probabilistic manner, we are less likely to see adverse results alone as proof that we made a decision error. Redefining wrong means calibrating our decisions along the continuum between the two extremes, in the shades of grey, if you will.
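To see how often a good decision can deliver a bad outcome, consider a quick simulation (the 70 per cent success rate is an assumed figure for illustration, not from the book):

```python
import random

# Sketch: a decision with an assumed 70% chance of success still "fails"
# roughly 3 times in 10 - a bad outcome is weak evidence of a bad decision.
random.seed(42)
P_SUCCESS = 0.7
TRIALS = 100_000

bad_outcomes = sum(random.random() >= P_SUCCESS for _ in range(TRIALS))
failure_rate = bad_outcomes / TRIALS   # close to 0.30
```

Even the better bet loses nearly a third of the time here, so judging the decision by any single result (resulting) is statistically unsound.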

Chapter 2 - Wanna Bet

A bet is defined as a 'choice made by thinking what will probably happen'. There are five key elements in any bet, these are choice, probability, risk, decision and belief. And the important thing is that betting doesn't have to take place in a casino or against somebody else - when we make a decision, we are placing a bet on an expected outcome. In that sense, all our decisions are bets. Not placing a bet on something is in and of itself a bet! 

Since our beliefs form the basis for our decision-making, we need to look closely at how we form them. The reality is that our beliefs are formed haphazardly: we believe all sorts of things based on what we have heard but haven't researched for ourselves. We think the process works like this: we hear something, we think about it and come to an opinion about whether it is true or false, and then we form a belief. In reality, we do something else: we hear something, we believe it to be true, and only sometimes, at a later date, if we have the time, the inclination and the need, do we think about it and vet it to determine whether it is true or false. We are credulous creatures who find it very easy to believe and very difficult to doubt. Our default setting is to believe that what we hear and read is true.

How we form our beliefs has been shaped by our evolution and our irrationality, and the process is driven by a push towards efficiency rather than accuracy. Humans form beliefs abstractly; 'seeing is believing' does not fully describe us. In other words, we form beliefs about things we have never directly experienced, and that is unique to humans.

Even if we kept our evolved belief-formation system largely unchanged but tweaked it in one small way, it would make a world of difference. The tweak is to update our beliefs as new information arrives. Unfortunately, most of us maintain our beliefs even as we receive evidence that they are false. Instead of altering our beliefs, we do the opposite: we change our interpretation of the new information to fit our beliefs. Very few of us process information with a view to 'truth-seeking'.

Our beliefs affect how we process information. As a result, we have what Scott Adams called the 'one movie, two screens' syndrome, whereby the same event is interpreted in completely different ways by different people. In other words, our pre-existing beliefs influence the way we experience the world, and since our beliefs aren't formed in an orderly manner, this leads to all sorts of mischief in our decision-making.

Once a belief is formed, it becomes difficult (impossible?) to dislodge. The belief takes on a life of its own, leading us to seek evidence that confirms it; we rarely, if ever, challenge the validity of the confirming evidence we gather to protect our beliefs. At the same time, we work hard at discrediting information that contradicts them. This irrational, circular information-processing pattern is called motivated reasoning, and it entrenches our beliefs even further. It doesn't take much for us to believe something, and once we believe it, protecting that belief guides how we treat further information relevant to it. When faced with information inconsistent with our beliefs, we act as if it were an assault on our self-narrative. Hence the rise of fake news and disinformation. The internet has led to the formation of filter bubbles, and none of us is immune to them.

The smarter you are, the more susceptible you are to fake news and disinformation. Being smart makes the bias worse, since the smarter you are the better you are at constructing a narrative to support your beliefs. The better you are with numbers, the better you are at spinning those numbers to support and confirm your beliefs. That is the way we have evolved - we are wired to protect our beliefs even though our goal is to seek the truth. 

When someone challenges us to bet on a belief, they tend to signal that our belief is inaccurate in some way. It triggers us to vet our belief and we revisit what we believe and how we formed the belief. That is the philosophy of Wanna Bet and it works. Wanna Bet triggers us into engaging in the third step of our belief formation which is vetting the belief and determining how true or untrue it is. Making a wager brings the risk out in the open - it makes explicit what is already implicit and overlooked. 

We can train ourselves to view the world through the lens of Wanna Bet. 

Instead of asking ourselves or others 'are you sure?', we need to flip the question to 'how sure are you?' Once we answer that question, we can form a probabilistic estimate of what we believe. By calibrating our beliefs in this way we build uncertainty into the decision-making process and recognise that there are shades of grey between black and white. It leads to more objective decision-making. Acknowledging uncertainty is the first step in measuring and narrowing it.
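Once beliefs are expressed as probabilities, they can actually be kept score of. One standard tool for this from the forecasting literature (not specific to the book) is the Brier score; the forecasts and outcomes below are made-up examples.

```python
# Sketch: scoring "how sure are you?" answers with the Brier score,
# a standard calibration measure from the forecasting literature.
# The forecasts and outcomes below are made-up examples.
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened.
    0.0 is a perfect score; confident wrongness is penalised most heavily."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three predictions, the last of which turned out false (outcome 0):
hedged = brier_score([0.6, 0.7, 0.8], [1, 1, 0])    # ~0.297
certain = brier_score([1.0, 1.0, 1.0], [1, 1, 0])   # ~0.333
```

When one of the three bets misses, the forecaster who acknowledged uncertainty scores better than the one who claimed certainty, which is the point of replacing 'are you sure?' with 'how sure are you?'.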

Chapter 3 - Bet to Learn: Fielding the Unfolding Future

Just as in any classroom only some students listen to the teacher, most of us don't learn from experience. The writer Aldous Huxley famously said: 'Experience is not what happens to a man; it is what a man does with what happens to him.' The point is that we have to learn to identify what the outcomes of our decisions teach us and whether there is a lesson to be learnt from them.

The way our lives turn out is the result of two things, skill and luck. Any outcome that is the direct result of our decision-making is in the skill category. If making the same decision again or if altering the decision would result in a different outcome, that is in the skill category. If an outcome occurs because of things that we can't control, the result would be due to luck. If the outcomes we obtained were not influenced in any way by the decisions we made, that is in the luck category. 

For every outcome we experience, we face this sorting decision: throwing the outcome into one of the two buckets. If this fielding of outcomes is done diligently, we can learn from our experiences; not otherwise.

Fielding outcomes in this manner isn't easy. It is difficult to stay objective when we work backwards from the quality of an outcome, which makes learning from outcomes a haphazard process. As a result, we make fielding errors: we take credit for favourable outcomes by putting them in the skill bucket, and we conveniently dump all negative outcomes in the luck bucket. This is known as self-serving bias, and it doesn't help us learn from our experiences in any way. Our capacity for self-deception has few parallels.

The reality is that most outcomes are a combination of the two. As such, learning from experience is tricky; it offers no orderliness, and uncertainty trips us up along the way. What we must learn to do is calibrate the bets we make. Black-and-white thinking, the all-or-nothing bias, means we think in terms of being 100 per cent right or 100 per cent wrong, with nothing 'somewhat less sure' in between. We behave this way because our self-esteem is involved, and all of us have an innate desire to create a positive self-narrative. When we judge another person, we flip the narrative: everyone other than us gets their good outcomes through luck, while their bad outcomes are their own doing (skill); that's just the way we are wired.

A lot of the way we think about ourselves comes from how we think we compare with others, and this habit impedes learning. To learn, we need to shift our mindset, and treating outcome fielding as a bet can accomplish that shift. When we ask the rhetorical question 'wanna bet', it makes us revisit our beliefs and take a closer look; in the process, we make explicit what is already implicit. Treating outcome fielding as a bet pushes us to field outcomes more objectively into the appropriate buckets, because that is how bets are won. In this manner, we can build a habit of thinking of fielding outcomes as bets. Thinking in bets triggers an open-minded exploration of alternative hypotheses, of reasons supporting conclusions opposite to the routine of self-serving bias.

Thinking in bets will not rid us of our self-serving bias or our motivated reasoning, but it will make things a wee bit better, and that wee bit matters in the final analysis. Being a wee bit better at decision-making makes a world of difference in the real world. And when we know that almost nobody else is doing what we are doing, it gives us an edge. Moreover, the wee bit extra we gain through this process compounds over time.

Chapter 4 - The Buddy System

Our brains have evolved to construct a comfortable version of the world: favourable outcomes are the result of our skill, unfavourable outcomes are beyond our control (luck), and we always compare favourably with our peers. We tend to deny or dilute the most painful parts of any message and process information to protect our self-image. This is the echo chamber most of us live in. By choosing to exit this chamber, we voluntarily opt to strive for a more objective assessment of the world, hoping to achieve long-term happiness instead of the short-term happiness we enjoy while ensconced in the echo chamber. The important thing to remember is that this trade-off isn't for everyone, and each person must voluntarily choose to exit his or her echo chamber; no one can be forced to do it.

The point is that we don't win bets by being in love with our ideas. We win bets by relentlessly striving to calibrate our beliefs and predictions so that they more accurately represent the world. Staying objective is the goal. Confirmation bias and motivated reasoning are so ingrained in us that it helps to have someone around who can guide us past the pitfalls they present. When we seek truth with such a buddy, we can differentiate between outcome quality and decision quality. Once we get used to this buddy approach, it becomes a habit.

Chapter 5 - Dissent to Win

Duke borrows the CUDOS norms of the sociologist Robert Merton as ground rules for truth-seeking groups: Communism (of data), Universalism, Disinterestedness and Organized Scepticism. The C in CUDOS, communism, doesn't refer to the political system but to the communal ownership of data within groups. Data must be shared; withholding it leads to bias.

The U in the CUDOS framework stands for Universalism, which means: don't shoot the messenger. If we don't like the message, we shouldn't blame the messenger. There is a companion idea: don't shoot the message just because you don't like the person who delivered it. When we hold a negative opinion of the person delivering a message, we close our minds to what they are saying and miss many learning opportunities; when we hold a positive opinion of the person, we accept what they say without vetting it. Both are bad. Information has merit irrespective of where it comes from; its accuracy, or lack thereof, should be assessed independently of its source. No one has ONLY good ideas or ONLY bad ideas; we can learn from almost anyone we meet.

That leaves an important question: how does one communicate with those who are not part of our group, those who don't want to Think in Bets? There are a few ways.

1. First, express uncertainty. Never say things with an air of finality, as if you're completely sure about something. Leave the door slightly open so that the other person can express his or her opinion. In this way, others share helpful information and dissenting opinions. If we express certainty, the fear of being wrong keeps others from expressing their opinions.

2. Second, lead with assent. Listen for things you agree with, state those, and be specific. Once we lead with assent, our listeners will be more open to any dissent that might follow, because the new information is presented as supplemental rather than negating. Instead of using the word 'but', use the word 'and'. In this way, we avoid the language of 'no'.

3. Third, ask for a temporary agreement to engage in truth-seeking. We can ask the other person whether he or she is venting or asking for advice. Once that is clear, we can define the rules of the game.

4. Finally, focus on the future. We can validate the person's experience of the past and then ask him or her to refocus on the future. In this way, we circle back to the question of how we can avoid repeating past mistakes. We need to focus on the things we can control. When we send people thinking into the future, they take a short trip beyond their present frustrations and towards improving the things they can control.

Chapter 6 - Adventures in Mental Time Travel

In real-life decision-making, when we bring our past or future selves into the equation, we tend to make better bets. If, while deciding, we isolate ourselves from similar situations we have been in before and do not think about the future, we fall into in-the-moment thinking, where our sense of time is distorted. As decision-makers, we want to collide with past and future versions of ourselves. Just as we can recruit other people to be our decision buddies, we can 'time-travel' and recruit past and future versions of ourselves to similar effect. It helps us stay on a rational path and reminds us that every decision has consequences. We are always trying to improve decision quality and the probability of obtaining a good outcome; we cannot guarantee that any outcome will be good. But once the probabilities improve, good results tend to compound.

When we make an in-the-moment decision we are more likely to be irrational and impulsive. This tendency to focus on our present self at the expense of our future self is called temporal discounting - we are willing to take an irrationally large discount now to get a reward instead of waiting for a bigger reward later. 
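Temporal discounting is commonly modelled in behavioural economics as hyperbolic discounting, V = A / (1 + kD). The model, the discount rate and the reward amounts below are illustrative assumptions, not figures from the book.

```python
# Sketch of hyperbolic discounting: V = A / (1 + k * D).
# The discount rate k and the reward amounts are assumed for illustration.
def perceived_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Subjective present value of a reward received delay_days from now."""
    return amount / (1 + k * delay_days)

small_now = perceived_value(100, 0)    # $100 today feels like 100.0
big_later = perceived_value(150, 30)   # $150 in a month feels like 37.5
```

Even though the later reward is 50 per cent larger, the present self perceives it as worth far less - the 'irrationally large discount' the chapter describes.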

We can ask ourselves about the consequences of a decision using the 10-10-10 framework: what will the consequences be in ten minutes, in ten months and in ten years? In this way, we trigger mental time travel. We can use it for the past as well: how would I feel now if I had taken this decision ten days ago, ten months ago or ten years ago? In this way, we move regret in front of a decision instead of behind it, and more often than not that forces us to make a better decision. If we forgo this framework and take in-the-moment decisions instead, we degrade the quality of the bets we make and increase the probability of a bad outcome.

The way we field outcomes is path-dependent. It doesn't so much matter how we end up as how we got there. What has happened in the recent past drives our emotional response much more than how we are doing overall.

When the emotional part of our brain starts pinging, the rational part shuts down, rather like what happens when a pinball machine is tilted. In poker this is called 'tilt'. When we are on tilt, we are not fit to make any decisions.

Backcasting is the process of working backwards from a positive future. When it comes to thinking ahead, standing at the imagined end and looking backwards is much more effective than looking forward from the beginning. When we try to peer into the future from the present, we end up with a distorted image, because from where we stand the present and the immediate future loom large and everything beyond them loses focus.

Pre-mortems are the opposite: working backwards from a negative future. This is the principle of inversion; by imagining failure in advance, you learn what not to do.

When we indulge in hindsight bias, we forget all the alternative outcomes that could have occurred; once something happens, we assume it was bound to happen. If we don't hold all possible futures in mind before one of them occurs, it becomes almost impossible to evaluate decisions or probabilities realistically afterwards. Once something happens, we no longer think of it as ever having been probabilistic, which produces 'I should have known' or 'I told you so' responses. By keeping an accurate representation of what could have happened (not a version edited by hindsight) and memorialising the scenario plans and decision trees we create through a good planning process, we can become better calibrators going forward. We are all outcome junkies, and the more we wean ourselves off that addiction, the happier we will be.