Resulting: Why Good Decisions Sometimes Look Bad
Annie Duke talks about a concept called “resulting” in her book Thinking in Bets: the tendency to judge decisions by their outcomes rather than by their quality at the time they were made. It’s a useful shorthand for what psychologists have studied as outcome bias since the 1980s. But the more I’ve thought about it, the more I think the problem isn’t just individual judgment error: outcome bias creates systemically perverse incentives inside organizations.
The Fundamental Mismatch
Outcomes are visible and measurable. Decision processes usually aren’t.
When you evaluate someone’s performance, you can see whether they hit their sales target, whether their product launched on time, whether their investment paid off. What you can’t easily see is whether they had a sound reasoning process, gathered appropriate information, or properly weighted risks.
This creates a natural drift toward results-orientation. Baron and Hershey demonstrated this in 1988 with a simple experiment: people rated identical medical decisions differently based on whether the patient lived or died, even when the outcome was explicitly stated to be random chance. The same decision, evaluated twice, with opposite verdicts.
But at least in their experiment, people knew they were being inconsistent. In real organizations, we don’t even notice we’re doing it.
How This Warps Incentives
A 2020 study in Theory and Decision found outcome bias in financial investment decisions even when evaluators had “exact knowledge of the investment strategy.” Principals judged the same investment decision as good or bad based on returns that were explicitly described as random. The researchers noted this explains why investors chase past performance. An advisor who got lucky looks skilled, so they get more money, which gives them more chances to get lucky again or blow up spectacularly.
The problem compounds because it creates a selection mechanism for risk-seeking behavior. If you’re evaluated on outcomes, and outcomes are noisy, the optimal strategy is to take big swings. Why? Because:
- If you take a conservative approach and get unlucky, you look incompetent
- If you take a risky approach and get lucky, you look brilliant
- The downside of failure is roughly the same either way
- But the upside of a lucky win is much higher with the risky strategy
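To make the asymmetry concrete, here’s a toy Monte Carlo sketch. The payoff numbers are mine, purely illustrative, and don’t come from any of the studies above; the point is only the shape of the incentive. The conservative strategy creates more expected value for the organization, but once the decision-maker is scored on outcomes alone, the risky strategy earns the better expected “career score”:

```python
import random

# Toy model with made-up payoffs (all numbers are illustrative).
# "business" is the value a decision actually creates for the organization;
# "career" is how an outcome-only evaluator scores the decision-maker:
# big credit for a visible win, a similar penalty for any failure.

STRATEGIES = {
    # name: (probability of success, business value if success, business value if failure)
    "conservative": (0.80, 1.0, -1.0),
    "risky":        (0.25, 3.0, -1.0),
}

def career_score(strategy, success):
    """Outcome-based evaluation: a spectacular win is rewarded far more than
    a modest one, while all failures are penalized about the same."""
    if not success:
        return -2.0                                  # failure looks the same either way
    return 10.0 if strategy == "risky" else 1.0      # the lucky big swing looks brilliant

def simulate(rounds=100_000, seed=42):
    rng = random.Random(seed)
    for name, (p, win, lose) in STRATEGIES.items():
        business = career = 0.0
        for _ in range(rounds):
            success = rng.random() < p
            business += win if success else lose
            career += career_score(name, success)
        print(f"{name:>12}: avg business value {business / rounds:+.2f}, "
              f"avg career score {career / rounds:+.2f}")

if __name__ == "__main__":
    simulate()
```

With these made-up numbers, the conservative strategy produces a clearly higher average business value (about +0.6 versus roughly zero), yet the risky strategy ends up with more than twice the average career score. That gap is the selection pressure.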
This explains so much broken behavior in business. The startup founder who makes increasingly desperate pivots because they need a win to keep fundraising. The product manager who greenlights a half-baked feature launch because shipping something feels safer than shipping nothing. The trader who doubles down after losses because they need to recoup before quarter-end.
They’re not necessarily making bad decisions in isolation. They’re responding rationally to incentives that reward outcomes over process.
The Hindsight Twist
Outcome bias gets worse when combined with hindsight bias: the tendency to believe, after something happens, that you “knew it all along.”
Research by Roese and Vohs shows these biases interact: not only do we judge decisions by their outcomes, we retroactively reconstruct the decision context to make the outcome seem more predictable than it was.
This is why postmortems are often useless. After a project fails, everyone can suddenly see exactly where it went wrong. Of course we should have done X instead of Y. It’s so obvious now. Except it wasn’t obvious then, and the reasons it seems obvious now are contaminated by knowing how it turned out.
What Actually Helps
The solution isn’t to ignore outcomes; results obviously matter.
But you need to evaluate them probabilistically. A 2025 meta-analysis in the Journal of Applied Psychology on mitigating cognitive bias suggests two concrete practices:
1. Prospective decision logs. Before making a decision, write down your reasoning, what you expect to happen, and what would need to be true for different outcomes. This creates a paper trail that prevents hindsight bias from rewriting history. Six months later, when things work out or don’t, you can go back and see whether your process was sound independent of the result. (I’ve written more about decision journals and how high-reliability organizations use them to separate process quality from outcome luck; a sketch of what a single log entry might capture follows this list.)
2. Batch evaluation. Research on hiring decisions shows that when you evaluate candidates one at a time, outcome bias dominates. You hire the person who happened to do well in their last role, regardless of why. But when you evaluate multiple candidates simultaneously, you naturally start comparing processes and inputs rather than just results. The same principle applies to evaluating projects or strategic bets.
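On the first practice, here’s one way a single log entry could be structured. This is a minimal sketch with field names of my own invention, not a standard template; the point is that reasoning and expectations get written down before the outcome exists, and that the later review grades process and outcome separately:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for a prospective decision log entry -- the field
# names are my own, not a standard. Everything above the review fields is
# filled in before the outcome is known.

@dataclass
class DecisionLogEntry:
    decided_on: date
    decision: str
    reasoning: str                       # why this option, given what was known then
    expected_outcome: str                # what you think will happen, with rough odds
    would_change_my_mind: list[str]      # evidence that would have pointed elsewhere
    review_on: date                      # when to come back and grade the process
    # filled in only at review time, never edited before then
    actual_outcome: str | None = None
    process_was_sound: bool | None = None    # judged against the reasoning above
    outcome_was_good: bool | None = None     # judged against the expected outcome

entry = DecisionLogEntry(
    decided_on=date(2025, 1, 15),
    decision="Ship the redesign to 10% of users instead of a full launch",
    reasoning="Retention data is ambiguous; a staged rollout caps the downside",
    expected_outcome="~70% chance retention is flat or better in the test cohort",
    would_change_my_mind=["clear retention drop in the pilot", "support ticket spike"],
    review_on=date(2025, 7, 15),
)
```

Keeping process_was_sound and outcome_was_good as separate fields is the whole trick: the six-month review has to answer two different questions instead of letting the result answer both.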
The Uncomfortable Part
Here’s what makes this hard: accepting that good process doesn’t guarantee good outcomes means you can do everything right and still fail. It also means you can’t take full credit when things work out.
That trader who made a killing last quarter? Maybe they’re brilliant. Maybe they got lucky. Probably some of both. The founder whose company went to zero? Maybe they screwed up. Maybe the market shifted. Probably some of both.
The corporate world hates this ambiguity. We want heroes and villains, not expected value calculations. We want to promote the people who got results and fire the ones who didn’t. It’s cleaner. It feels fair.
But it’s not actually fair, and it doesn’t produce better decisions. It just produces people who are good at getting lucky, or good at blaming bad luck.
If you want to actually get better at decisions, you have to separate signal from noise. You have to judge process independent of outcome. You have to accept uncertainty.
Or you can keep resulting.