How to Not Make Such Terrible Decisions

The Solution Involves Vampires, Foxes, and Hedgehogs

Photo by Lubo Minar on Unsplash

“Choices are the hinges of destiny,” said Edwin Markham. Which is a nice sound bite and all, but how exactly are we supposed to go about making these choices that define our destiny?

Yes, some are obvious. Exercise. Eat healthy. Tell the truth.

Never buy the extended warranty. Don’t tell your boss what you really think of him. Ceviche from the gas station is unlikely to work out well.

Most of these decisions are fairly straightforward. And even if we don’t always do the right thing, we usually know what the right decision should be.

Other times we can base our decisions on previous experience. Whether it’s that investment that tanked, the project that failed, or that misguided attempt to save a couple bucks by cutting my own hair, we learn from previous events. And hopefully, we keep from repeating the same mistake too many times.

But what about those decisions that aren’t as obvious? What about the decisions where we have no prior experience to draw on? In a cruel twist, the decisions that will have the greatest impact on our lives are also the ones we’re least equipped to handle.

No black-and-white answer. No previous experience to leverage. Just a gut feeling and a large amount of anxiety.

Is it a good time to switch careers? Or start a family? Or move to a new part of the world?

Or, of course, the oft-considered quandary of whether you’d choose to become a vampire?

“The things we want are transformative, and we don’t know or only think we know what is on the other side of that transformation.” — Rebecca Solnit, A Field Guide to Getting Lost

So, would you become a vampire?

It’s this thought experiment that philosopher L.A. Paul puts forth in Transformative Experience. If you could become a vampire — painlessly and without inflicting pain on others, trading your human existence for nocturnal superpowers, with your friends having already made the switch and happy with it — would you do it?

Assuming you aren’t a Twilight fan (are there still Twilight fans?), you’d likely start weighing the benefits against the drawbacks. But it quickly becomes clear there’s little way to turn this into a rational decision. As Paul wrote,

“The trouble is, in this situation, how could you possibly make an informed choice? For, after all, you cannot know what it is like to be a vampire until you are one. And if you can’t know what it’s like to be a vampire without becoming one, you can’t compare the character of the lived experience of what it is like to be you, right now, a mere human, to the character of the lived experience of what it would be like to be a vampire. This means that, if you want to make this choice by considering what you want your lived experience to be like in the future, you can’t do it rationally. At least, you can’t do it by weighing the competing options concerning what it would be like and choosing on this basis. And it seems awfully suspect to rely solely on the testimony of your vampire friends to make your choice, because, after all, they aren’t human any more, so their preferences are the ones vampires have, not the ones humans have.”

Now, I agree that the likelihood you’ll be offered this particular option is quite slim.

But how many of our own decisions fit this same model? How many times do we find ourselves facing major life choices with little means of objectively evaluating our options?

Many of our biggest decisions are those where we know very little about the potential future. They bring substantial change — or at least the possibility for substantial change — and we have little means of knowing what our lives will be like after that transformation. As Paul described it,

“Many of these big decisions involve choices to have experiences that teach us things we cannot know about from any other source but the experience itself.”

Faced with a choice of continuing with life as we know it or chancing a life that we can’t imagine, how exactly do we make a rational decision? Or better yet, how do we keep from screwing up and making a choice for which our future selves will curse our name?

Because every choice is essentially a prediction about the future. Whether we’re deciding on a career or on lunch, our selection rests on our prediction of that future state, and on how satisfied we expect to be with it.

The problem is that most of us are very bad at predicting the future.

“How you think matters more than what you think.” — Phil Tetlock

In the 1980s, political science professor Phil Tetlock conducted a series of forecasting tournaments. He gathered 284 experts from a wide range of industries, backgrounds, and perspectives, and asked them to make geopolitical and economic predictions about future events.

The results were terrible. Most of the so-called experts performed no better than random guessing. Tetlock also found an inverse correlation between an expert’s media exposure and the accuracy of their predictions.

Yet Tetlock did uncover one group that performed better than a bunch of random dart throws. When he separated this group out, he found one key difference between success and failure. As he described the two groups in Expert Political Judgment, the book documenting the results,

“One group tended to organize their thinking around Big Ideas, although they didn’t agree on which Big Ideas were true or false…They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions…As a result, they were unusually confident and likelier to declare things ‘impossible’ or ‘certain.’”

“The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could….They talked about possibilities and probabilities, not certainties. And while no one likes to say ‘I was wrong,’ these experts more readily admitted it and changed their minds.”

Tetlock named these two types of forecasters hedgehogs and foxes, borrowing from the ancient Greek poet Archilochus who said, “The fox knows many things but the hedgehog knows one big thing.”

The foxes sought many perspectives, while the hedgehogs focused on a singular storyline. But the situations they were trying to assess expanded beyond any one field of expertise. So when the hedgehogs tried to apply their singular worldview to these complex, changing situations, they struggled to account for all of the variations. The foxes, on the other hand, were able to adapt and evolve their expertise to match the dynamics of the situations. As Tetlock wrote,

“The intellectually aggressive hedgehogs knew one big thing and sought, under the banner of parsimony, to expand the explanatory power of that big thing to ‘cover’ new cases; the more eclectic foxes knew many little things and were content to improvise ad-hoc solutions to keep pace with a rapidly changing world.”

We often laud the hedgehogs as our experts. They’re more likely to be the ones we see commenting on world affairs. And we often think that the answer to most of our difficult decisions lies in gaining more expertise on that one, specific topic.

Yet as the saying goes, if your only tool is a hammer, every problem looks like a nail. A singular perspective keeps us from recognizing the complexities of difficult decisions. When we focus too narrowly on one view, we can’t see the interconnections across different fields, and we fail to recognize how those connections will shape the consequences of our decision.

“What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact.” — Warren Buffett

Yes, just as few of us will ever be offered the opportunity to become a vampire — and painlessly, no less — few of us will be in the business of making global economic predictions. Yet the ability to make quality decisions — which is really just avoiding our tendency to make bad ones — carries the same lesson.

When we’re faced with a decision where we have no prior experience, we default to the same tendency humans have followed for millennia — we tell ourselves stories.

We project a future in our heads based on the decision to take that new job, move to New Zealand, or trade in our bathing suit for fangs and a cape. We tell ourselves a story based on how we believe things will go.

The problem is that we often only tell ourselves one story. And if we’re overly focused on one particular view or perspective, there’s a good chance that our story is going to mirror that unique view.

This fallacy of extrapolation — extending a handful of local observations into a sweeping conclusion — biases our expectations of the future toward that limited view. And because we all tend to overvalue what we understand well, the further that view diverges from the full picture, the more narrowband our decisions are likely to be.

Our minds are constantly looking to conserve energy. And as a result, we leverage different biases and heuristics to make life easier every day.

Shortcuts like confirmation bias, anchoring, and loss aversion all simplify our daily decisions. But when we’re faced with a complex situation, these shortcuts quickly turn into liabilities.

Our brains default to projecting outcomes that mimic our views of the world. It’s a process which, as Tetlock’s hedgehogs found out, often creates a story that differs significantly from reality.

So while one solution is to gain different perspectives and build knowledge across multiple fields, another is to make sure you’re creating more than one story.

“Our ability to cope with uncertainty is one of the most important requirements for success in life, yet also one of the most neglected. We may not appreciate just how often we’re required to exercise it, and how much impact our ability to do so can have on our lives, and even on the whole of society.” — Dylan Evans, Risk Intelligence

In the 1950s, Herman Kahn developed a decision-making technique of describing possible futures in stories, written as if by people living in them. The process became known as scenario planning, and after Kahn founded the Hudson Institute in 1961, it quickly expanded beyond military applications.

While Kahn may be credited with developing the practice, it seems odd to think Sun Tzu and Julius Caesar weren’t doing something similar millennia before.

Regardless, following Kahn’s lead, Pierre Wack and Ted Newland of Royal Dutch Shell used scenario planning to anticipate the oil crisis of the 1970s. As Wack described it,

“A sustained scenario practice can make leaders comfortable with the ambiguity of an open future. It can counter hubris, expose assumptions that would otherwise remain implicit, contribute to shared and systemic sense-making, and foster quick adaptation in times of crisis.”

In scenario planning, we still create stories about the future, but we create multiple stories challenging our different variables, interconnections, and assumptions.

There are many variations of the process, but the critical steps are identifying your key uncertainties and developing a series of plausible scenarios that explore them. The result is three to five scenarios in which sometimes the decision goes right, sometimes it goes wrong, and sometimes it gets weird.
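If it helps to see the mechanics, here’s a minimal sketch in Python (the uncertainties, outcomes, and the job-move framing are all invented for illustration, not part of any formal method). It simply enumerates every combination of a few key uncertainties; in practice you’d keep only the handful of plausible, distinct stories worth fleshing out.

```python
from itertools import product

# Hypothetical example: deciding whether to take a job in a new city.
# Each key uncertainty gets a small set of plausible outcomes.
uncertainties = {
    "new role": ["thrives", "stalls"],
    "cost of living": ["manageable", "painful"],
    "social life": ["rebuilds quickly", "stays thin"],
}

names = list(uncertainties)
combos = list(product(*uncertainties.values()))
print(f"{len(combos)} candidate scenarios from {len(names)} uncertainties\n")

# Every combination is a candidate story; you'd prune this list down
# to the three to five that feel plausible and usefully different.
for i, combo in enumerate(combos, start=1):
    story = ", ".join(f"{n} {o}" for n, o in zip(names, combo))
    print(f"Scenario {i}: {story}")
```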

You’re still telling yourself a story. Only now you have several of them, each covering a different path. Instead of just sticking with our default view like Tetlock’s hedgehogs, we force ourselves to consider alternative perspectives.

And once we’re aware of the different ways the future might veer from its expected path, we’re much more prepared to deal with them if they happen.

“The problem with the future is that it is different. If you are unable to think differently, the future will always arrive as a surprise.” — Gary Hamel

Scenarios by themselves rarely reveal a final decision. But just as the purpose of planning isn’t to develop plans, the purpose of scenario planning isn’t to come up with a perfect vision of the future. Instead, it’s to identify perspectives outside of our typical default mode. It’s to force us outside the fallacy of extrapolation.

Each scenario is essentially a different hand of cards you might be dealt. Knowing your potential options in advance helps you make decisions based on the cards you’ll eventually receive.

The whole point is to take in different perspectives, push against conventional thinking, and surface ideas you otherwise wouldn’t have considered.

Because while we often start with a binary choice, complex situations usually hold undiscovered options. And often the best decision — the one that best balances the risks and opportunities — wasn’t visible when you started.

Although to be honest, I’m still not sure whether I’d become a vampire.

Thanks, as always, for reading. If you enjoyed this or have any suggestions, please let me know your thoughts. I’d love to hear from you. And if you found this helpful, I’d appreciate it if you could help me share it with more people. Cheers!

Writing helps me realize just how little I know.