Metrics · Featured Debate

The $100 Million Decision Nobody Believed In

Should product decisions be driven by data or intuition?

A small code change sat in Bing's backlog for months. It moved the second line of ad text to the first line. Nobody rated it as a priority. An engineer who kept seeing it in the backlog said, "My God, we're spending too much time discussing it. I could just implement it." He spent a couple of days on it and launched the experiment. An alarm fired: something was wrong with the revenue metric. But there was no bug. That trivial change increased revenue by about 12% -- worth $100 million annually. It was the biggest revenue impact in Bing's entire history.

That story should haunt every product leader who has ever said, "I just know this is the right call." It should also haunt every product leader who has ever said, "Let the data decide." Neither instinct nor data would have surfaced that change on its own. Instinct failed -- nobody believed in it. Data could not help until someone actually ran the test.

This is the real tension at the heart of product decision-making. Not "data vs. intuition," but something far more slippery: knowing when to trust your gut, when to trust the numbers, and when both are lying to you.

Nine conversations from the Lenny's Podcast archive reveal that the best product leaders in the world disagree sharply on this question. But when you lay their positions side by side, a surprising consensus emerges -- and it has nothing to do with which approach is "better."

Airbnb, Microsoft, Amazon

Airbnb search relevance: 250 experiments, only 8% improved target metric, but collectively drove 6% revenue improvement

Bing ad text change: moving second line to first line generated $100M/year in revenue -- languished in backlog for months because nobody's intuition flagged it

Shopify

30-40% of experiments showing short-term lift have no GMV lift a year later

Shopify's three product groups: core (100-year vision, no KPIs), merchant services (medium-term), growth (experiments with long-term holdouts)

37signals (Basecamp/HEY)

No OKRs, KPIs, revenue targets, or growth targets -- only "did we make more than we spent?"

37signals runs Basecamp and HEY with ~70 employees vs competitors with 1,500-3,000 employees, with similar customer counts and higher profitability

Linear

Profitable for 2+ years, $35K total marketing spend, net negative burn rate, only 2 departures ever

Linear has no A/B tests, one PM (Head of Product), and no metrics-based goals

Meta

Facebook growth team stopped all roadmap work in January 2009 to focus exclusively on data instrumentation

7 friends in 10 days (or 10 in 14) -- even the original team is not sure which was first, suggesting the specific number mattered less than the shared goal

Gmail, YouTube, Microsoft

Google+ consumed 1,000+ engineers over multiple years, built a division the size of Android, and was shut down in 2019 -- opinion-based development at its most expensive

The Synthesis

When you lay these nine perspectives on top of each other, the "data vs. intuition" framing collapses. The right decision-making approach is not a philosophical choice. It is a function of three variables: the cost of being wrong, the speed at which you can correct mistakes, and the time horizon over which value is created.

1. Three Decision Variables -- What actually determines whether to use data or intuition?
2. Data Misleading Risk -- Can being data-driven actually lead you astray?
3. Hypothesis Generator -- How do intuition and data actually work together?
4. Intuition as Learnable Skill -- Is product intuition a gift or a developed skill?

Pure intuition works when the cost of being wrong is low and iteration speed is high. Rigorous testing is essential when a bad change hits hundreds of millions of users and cannot be cheaply reversed.
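The three decision variables can be read as a rough decision rule. A minimal sketch, where the thresholds and mode labels are illustrative assumptions, not a formula any of the guests prescribed:

```python
# Illustrative sketch of the three-variable heuristic. The input labels
# and suggested modes are assumptions for demonstration only.

def decision_mode(cost_of_being_wrong: str,
                  iteration_speed: str,
                  value_horizon: str) -> str:
    """Map the three variables ('low'/'high', 'fast'/'slow',
    'short'/'long') to a suggested decision-making mode."""
    if cost_of_being_wrong == "low" and iteration_speed == "fast":
        # Cheap mistakes, fast correction: ship on judgment.
        return "intuition-led: ship, watch, revert cheaply"
    if cost_of_being_wrong == "high" and value_horizon == "short":
        # Expensive mistakes, value measurable soon: test first.
        return "experiment-led: A/B test before broad rollout"
    # High stakes plus long horizons: short experiments over-weight
    # early signals, so pair judgment with long-term holdouts.
    return "judgment plus long-term holdouts"

print(decision_mode("low", "fast", "short"))
# intuition-led: ship, watch, revert cheaply
```

The point of the sketch is that the output changes with the situation, not with a philosophy: the same leader should answer differently for a button color and for a pricing overhaul.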

If 30-40% of short-term experiment wins wash out over a year, the standard two-to-four-week experiment window is actively misleading. Being "data-driven" over the wrong time horizon means being data-misled. A change that looks like a loss at two weeks might be a massive win at six months.
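The washout check Abrams describes can be made concrete: compare an experiment's short-window lift against a long-horizon holdout and ask how much of the lift survived. A minimal sketch with hypothetical numbers and an arbitrary 50% retention threshold (not Shopify's actual tooling):

```python
# Hypothetical washout check: did a short-term experiment win hold up
# in a long-term holdout? All numbers and thresholds are illustrative.

def relative_lift(control: float, treatment: float) -> float:
    """Treatment's relative improvement over control."""
    return (treatment - control) / control

def lift_held(short_lift: float, long_lift: float,
              retention: float = 0.5) -> bool:
    """Count a win as 'held' only if the long-horizon holdout kept at
    least `retention` (here 50%, an arbitrary cutoff) of the
    short-term lift."""
    if short_lift <= 0:
        return False  # not a win to begin with
    return long_lift >= retention * short_lift

# Hypothetical experiment: +4% GMV at two weeks, +0.5% in the
# one-year holdout.
short = relative_lift(100.0, 104.0)   # 0.04
long_ = relative_lift(100.0, 100.5)   # 0.005
print(lift_held(short, long_))        # False: the win washed out
```

In practice the comparison also needs a significance test on the holdout, since one-year cohorts are noisier than two-week ones; the sketch only shows the bookkeeping.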

Intuition generates the questions, data answers them. Stop generating questions (pure data optimization) and you converge on local maxima. Stop answering questions (pure intuition) and you get Google+ -- massive resource commitments to untested beliefs. The pathology comes when you skip one side.

Product intuition is defined as empathy plus creativity, developed through observing users, deconstructing products, and curiosity about technology. This reframes the debate: the question is not whether to use data or intuition, but whether you have invested enough in developing the quality of both your measurements and your judgment.

Which Approach Fits You?

Three questions locate your situation:

1. What is the cost of being wrong with this decision?
2. How much data do you have to work with?
3. What is your product's time horizon for delivering value?

The Bottom Line

The best product leaders are not "data-driven" or "intuition-driven." They are *well-calibrated*. They know when they are in a domain where their intuition is reliable -- high reps, fast feedback, low cost of failure -- and when they are in a domain where it is not. They hold convictions loosely enough to let evidence reshape them, but tightly enough to avoid the paralysis of testing everything.

If you take one thing from this analysis, let it be Archie Abrams's challenge: go look at your biggest experiment wins from a year ago. Check the downstream metrics. If the lift held, your process is working. If it did not, you may be optimizing for an illusion -- and a well-calibrated leader with strong product taste would have served you better than a thousand A/B tests.

1. Ronny Kohavi, "The ultimate guide to A/B testing" — Lenny's Podcast, July 27, 2023
2. Itamar Gilad, "Becoming evidence-guided" — Lenny's Podcast, September 21, 2023
3. Karri Saarinen, "Inside Linear: Building with taste, craft, and focus" — Lenny's Podcast, October 8, 2023
4. Maggie Crowley, "Mastering product strategy and growing as a PM" — Lenny's Podcast, November 5, 2023
5. Jason Fried, "Jason Fried challenges your thinking on fundraising, goals, growth, and more" — Lenny's Podcast, December 17, 2023
6. Dylan Field, "Figma's CEO: Why AI makes design, craft, and quality the new moat for startups" — Lenny's Podcast, June 30, 2024
7. Naomi Gleit, "Meta's Head of Product on working with Mark Zuckerberg, early growth tactics, and more" — Lenny's Podcast, October 27, 2024
8. Archie Abrams, "Breaking the rules of growth: Why Shopify bans KPIs, optimizes for churn, and builds toward a 100-year vision" — Lenny's Podcast, November 7, 2024
9. Peter Deng, "From ChatGPT to Instagram to Uber: The quiet architect behind the world's most popular products" — Lenny's Podcast, June 22, 2025
10. Karri Saarinen, "How Linear builds product" — Lenny's Newsletter, September 26, 2023
11. Jules Walter, "How to develop product sense" — Lenny's Newsletter, March 15, 2022