Sorry Han, I’m going to tell you the odds.
Early in The Empire Strikes Back, Han Solo is attempting to escape from Hoth when the Empire intervenes. To lose the pursuing Star Destroyers, Solo decides to wing it through an asteroid field. C-3PO famously objects, saying, “Sir, the possibility of successfully navigating an asteroid field is approximately three thousand seven hundred and twenty to one.” Han of course goes for it anyway, but he doesn’t want to know his chances of success. Why?
Analyzing risk is tricky. First you have to agree on how to measure it, who it applies to, over what time period it applies, and which statistics will be used to quantify it. In this case, the risk is pretty straightforward: either the Star Destroyers blow you up or you take your chances in the asteroid field. Let’s trust C-3PO: odds of 3,720 to 1 against mean that Han survives the asteroid field only about one time in 3,721 attempts. I wouldn’t want to know those odds either. That’s a 99.97% chance of death.
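If you want to check the droid’s arithmetic yourself, here’s a quick sketch (assuming C-3PO means odds *against* survival, the standard reading):

```python
# Converting C-3PO's "3,720 to 1" odds against survival into probabilities.
# Odds of N to 1 against an event mean it happens once per N + 1 trials.
odds_against = 3720

p_survive = 1 / (odds_against + 1)
p_death = 1 - p_survive

print(f"Chance of surviving the field: {p_survive:.4%}")  # prints 0.0269%
print(f"Chance of death:               {p_death:.2%}")    # prints 99.97%
```

So "never tell me the odds" is sparing Han a number with three nines in it.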
If taking on the Empire’s Star Destroyers means a 100% chance of death, then by definition any less risky option is the better call. But why wouldn’t Han want to know the odds (other than to maintain his fly-by-the-seat-of-his-pants persona)? In the study of risk communication, we’d say that Han is exhibiting a well-known bias in human psychology: engaging in risk-seeking behavior in the face of loss.
Suppose you are given the terrible choice of deciding the fate of 600 people. You can either save 200 of them outright, or gamble on saving all 600, with a one-in-three chance of saving everyone and a two-in-three chance of saving no one. Which option do you choose?
Now consider letting 400 of those people die outright, or gambling on saving all 600, with a one-in-three chance that no one dies and a two-in-three chance that everyone does. Which would you choose?
When psychologists Amos Tversky and Daniel Kahneman ran a version of this experiment in the early 1980s, they found that people overwhelmingly chose to save 200 people in the first version, and to try the Hail Mary pass in the second. Yet, as you may have noticed, the stakes are exactly the same in each: on average, 400 people die either way. Results like these demonstrate the power of “framing.” When the question is framed as a gain (saving 200 versus risking all 600), people tend to be loss-averse and go with the guaranteed gain. When both options are framed as losses, people tend to take the riskier gamble, presumably because something is going to be lost anyway. Framing biases like these can affect everything from deciding which stocks to buy to choosing when (if at all) to go in for cancer screening.
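You can see the equivalence in a few lines of arithmetic. Using the one-in-three gamble from the classic version of the experiment, both frames have identical expected outcomes:

```python
# Expected outcomes for both framings of the Tversky-Kahneman problem,
# using the probabilities from the classic version of the experiment.
total = 600

# Gain frame: save 200 for sure vs. a 1/3 chance of saving all 600.
sure_saved = 200
gamble_saved = (1 / 3) * total + (2 / 3) * 0  # expected number saved

# Loss frame: 400 die for sure vs. a 2/3 chance that all 600 die.
sure_lost = 400
gamble_lost = (2 / 3) * total + (1 / 3) * 0   # expected number lost

print(f"Gain frame: {sure_saved} saved vs. {gamble_saved:.0f} expected saved")
print(f"Loss frame: {sure_lost} lost vs. {gamble_lost:.0f} expected lost")
```

Every option, however phrased, amounts to 200 expected survivors. The only thing that changes between the two questions is the wording.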
If we were psychologists, we would consider Han Solo to be biased toward risk-seeking behavior. A life on the run, always escaping death by the skin of his teeth: Han’s whole existence could be framing his choices in terms of losses and nudging him into the big gambles. He’s probably going to get killed soon anyway, so why not approach every risk from the go-big-or-go-home perspective? Sure, he could go for that easy Kessel cargo drop, but because he could bite the space dust any minute, why not try to save a princess and get rich beyond what he can imagine (which is quite a bit)? Maybe this is how all our loveable bandits operate mentally, from Star-Lord to Malcolm Reynolds: renegades biased into risk-seeking behavior by lives framed in terms of losses.
Of course, if Han were flying through a real asteroid belt, the odds of him actually getting hit by a cosmic boulder might be one in a billion, and then C-3PO should definitely tell him those odds.
Kyle Hill is the Science Editor of Nerdist Industries. Follow on Twitter @Sci_Phile.