
Testing Business Ideas

A Field Guide for Rapid Experimentation

David J. Bland & Alexander Osterwalder

Why Read This

Replace gut feeling with evidence — 44 experiments for testing whether your idea has legs before you bet on it.

Most entrepreneurs fall in love with their idea and skip straight to building. Osterwalder gives you a systematic way to stress-test assumptions early, so you spend real money only on ideas that have already earned it.

Pillar: Money · Theme: Run a Side-Hustle · Read: ~10 min
10 Insights Worth the Read

The Book in Bullets

Everything Osterwalder and Bland want you to walk away with

1. Most startups fail not from bad execution but from building something nobody wants.

Too many entrepreneurs execute ideas that look great in presentations and spreadsheets, only to learn their vision was a hallucination. Don't make the mistake of executing without evidence — test thoroughly first.

2. Break every big idea into three types of testable risk: desirability, feasibility, and viability.

Desirability asks if customers actually want it. Feasibility asks if you can build and deliver it. Viability asks if you can earn enough money from it. Each risk gets its own hypotheses and experiments.

3. The most dangerous assumptions are the ones you don't realize you're making.

Use an Assumptions Map to plot every hypothesis by importance and evidence. The top-right quadrant — critical to success but lacking evidence — is where you must focus your testing energy first.

4. Write hypotheses to disprove, not just to confirm — or you'll bake your biases right into the business.

If all hypotheses start with 'We believe that...' you fall into a confirmation trap. Create competing hypotheses that try to refute your assumptions and test them simultaneously, especially when the team disagrees.

5. Define what success looks like before you run the experiment — not after.

Every experiment needs four components: a hypothesis, a description, the metrics you'll measure, and pre-set success criteria. Without criteria defined in advance, you'll rationalize any result.

6. Start with cheap, fast experiments and increase fidelity as evidence grows.

Incrementally fund teams using a venture-capital style approach based on the learnings they share. Match experiment strength to what you're trying to learn — don't over-invest before you've earned the right to.

7. Small dedicated teams outperform large distracted ones — and they need autonomy, funding, and customer access.

Multitasking across projects silently kills progress. Teams need real budgets for experiments, direct access to customers, clear strategic direction, and a facilitative leadership style that leads with questions, not answers.

8. Use the Business Model Canvas and Value Proposition Canvas to make ideas tangible and testable.

Don't fall in love with your first idea. Ideate broadly, narrow down with business prototypes, then assess. You'll constantly improve these prototypes with insights from testing in future iterations.

9. A call-to-action experiment — where someone takes an observable action — generates stronger evidence than any survey or interview.

Asking people what they'd do is unreliable. Watching what they actually do when given the chance to sign up, pre-order, or pay tells you the truth. Real paying customers are always different from hypothetical ones.

10. Successful testing teams share six behaviors: data-influenced, experimental, customer-centric, entrepreneurial, iterative, and willing to question assumptions.

They move fast, create momentum, and aren't afraid to test disruptive models. They know the bottleneck is always at the top of the bottle — leadership support makes or breaks the testing culture.

These notes are inspired by direct excerpts and woven together into a readable guide you can follow from start to finish.

Testing Business Ideas: A Field Guide for Rapid Experimentation

By David J. Bland and Alexander Osterwalder


Introduction

Why Test Business Ideas?

Too many entrepreneurs and innovators execute ideas prematurely because they look great in presentations, make excellent sense in spreadsheets, and look irresistible in the business plan—only to learn later that their vision turned out to be a hallucination. Don’t make the mistake of executing business ideas without evidence: test your ideas thoroughly, regardless of how great they may seem in theory.

💡 Key Insight

This book outlines one of the most extensive testing libraries available to help you make your ideas bulletproof with evidence. Test extensively to avoid wasting time, energy, and resources on ideas that won’t work.

To test a big business idea, you break it down into smaller chunks of testable hypotheses. These hypotheses cover three types of risk. First, desirability—that customers aren’t interested in your idea. Second, feasibility—that you can’t build and deliver your idea. Third, viability—that you can’t earn enough money from your idea. You test your most important hypotheses with appropriate experiments. Each experiment generates evidence and insights that allow you to learn and decide. Based on the evidence and your insights, you either adapt your idea if you learn you were on the wrong path, or continue testing other aspects of your idea if the evidence supports your direction.

Types of Hypotheses

  • Feasible — "Can we do this?" Risk: can't manage, scale, or access key resources, activities, or partners.
  • Desirable — "Do they want this?" Risk: market too small, too few customers want the proposition, or we can't reach and retain them.
  • Viable — "Should we do this?" Risk: can't generate more revenue than costs.

Who This Guide Is For

This material serves three audiences. If you are a corporate innovator, you are challenging the status quo and building new business ventures within the constraints of a large organization. If you are a startup entrepreneur, you want to test the building blocks of your business model to avoid wasting the time, energy, and money of the team, cofounders, and investors. And if you are a solopreneur, you have a side hustle or an idea that is not quite yet a business.


Section 1 — Design

Design 1.1 — Design the Team

If you do not have all of the skills your team needs, evaluate technological tools to fill the void. There are new tools coming to market every day that allow you to create landing pages, design logos, and run online ads — but a strong cross-functional team is still the foundation.

Definition — Cross-Functional Team

A cross-functional team has all the core abilities needed to ship the product and learn from customers. A common basic example consists of design, product, and engineering. Depending on the business, you may also need skills in legal, data, sales, marketing, research, and finance.

Key Insight

A lack of diverse experiences and perspectives on a team will result in baking your biases right into the business. When forming your team, keep diversity top of mind — rather than as an afterthought.

What a Cross-Functional Team Needs

  • 🎯 Dedicated — small, focused teams outperform large distracted ones.
  • 💰 Funded — incrementally fund using a VC-style approach.
  • 🔓 Autonomous — space to own the work without micromanaging.
  • 🧭 Strategy — clear direction on where the team will play.
  • 👥 Customer Access — direct access to customers, not isolated.
  • 📊 KPIs — signposts to measure progress toward the goal.

Successful Teams Exhibit Six Behaviors

1. Data Influenced. You do not have to be data-driven, but you need to be data-influenced. Teams no longer have the luxury of simply burning down a product backlog of features. The insights generated from data shape the backlog and the strategy.

2. Experiment Driven. Teams are willing to be wrong and experiment. They are not only focused on the delivery of features, but craft experiments to learn about their riskiest assumptions. Match experiments to what you are trying to learn over time.

3. Customer Centric. To create new businesses today, teams have to know “the why” behind the work. This begins with being constantly connected to the customer — not limited to the new customer experience, but expanding to both inside and outside of the product.

4. Entrepreneurial. Move fast and validate things. Teams have a sense of urgency and create momentum toward a viable outcome. This includes creative problem-solving at speed.

5. Iterative Approach. Teams aim for a desired result by means of a repeated cycle of operations. The iterative approach assumes you may not know the solution, so you iterate through different tactics to achieve the outcome.

6. Question Assumptions. Teams have to be willing to challenge the status quo and business as usual. They are not afraid to test out a disruptive business model that will lead to big results, as compared to always playing it safe.

What the Team Needs

Dedicated. Teams need an environment in which they can be fully dedicated to the work. Multitasking across several projects will silently kill any progress. Small teams who are dedicated outperform large teams who are not.

Funded. It is unrealistic to expect these teams to function without a budget. Experiments cost money. Incrementally fund teams using a venture-capital-style approach, based on the learnings they share during stakeholder reviews.

Autonomous. Teams need to be given space to own the work. Do not micromanage them to the extent that it slows their progress. Instead, give them space to give an accounting of how they are making progress toward the goal.

What the Company Needs to Provide

Leadership. A facilitative leadership style is ideal — lead with questions, not answers. Be mindful that the bottleneck is always at the top of the bottle.

Coaching. Teams need coaching, especially if this is their first journey together. Coaches — internal or external — help guide teams when they are stuck, particularly in moving beyond interviews and surveys to a wider range of experiments.

Customer Access. The trend of isolating teams from the customer must be reversed. If teams keep getting pushback on customer access, they will eventually just guess and build anyway.

Resources. Teams need enough physical or digital resources to make progress and generate evidence. Constraints are good; starving a team will not yield results.

Strategy. Without a clear, coherent strategy, you will mistake being busy for making progress. Teams need direction on where they will play — an adjacent market or a new one — to unlock new revenue.

KPIs. Key performance indicators act as signposts so that everyone can tell whether the team is making progress toward the goal.

Team Alignment

Before diving in, get the team aligned using these nine steps:

Team Alignment Steps

  1. Define the mission.
  2. Define the time box for the agreement.
  3. Create joint team objectives. What do we intend to achieve together?
  4. Identify commitment levels for team members. Who does what?
  5. Document joint resources needed to succeed. What resources do we need?
  6. Write down the biggest risks that could arise. What can prevent us from succeeding?
  7. Describe how to address the biggest risks by creating new objectives and commitments.
  8. Describe how to address resource constraints.
  9. Set joint dates and validate as a group.

Design 1.2 — Shape the Idea

The Design Loop

Shaping an idea is a design loop with three repeating steps: Ideate, Business Prototype, and Assess.

The Design Loop (diagram): 1. Ideate → 2. Business Prototype → 3. Assess → iterate.

Step 1 — Ideate. Come up with as many alternative ways as possible to use your initial intuition — or insights from previous testing — to turn your idea into a strong business. Do not fall in love with your first ideas.

Step 2 — Business Prototype. Narrow down the alternatives from ideation using business prototypes. When starting out, rough prototypes like napkin sketches are fine. Use the Value Proposition Canvas and Business Model Canvas to make ideas clear and tangible. You will constantly improve these prototypes as insights flow in from testing.

Step 3 — Assess. Evaluate the design of your business prototypes. Ask: “Is this the best way to address our customers’ jobs, pains, and gains?” “Is this the best way to monetize our idea?” “Does this best reflect what we have learned from testing?” Once satisfied, start testing in the field.

The Business Model Canvas

You do not have to be a master of the Business Model Canvas to use this framework, but it is invaluable for defining, testing, and managing risk. It breaks a business idea into nine building blocks:

Definition — The Nine Building Blocks

  • Customer Segments: The different groups of people or organizations you aim to reach and serve.
  • Value Propositions: The bundle of products and services that create value for a specific customer segment.
  • Channels: How a company communicates with and reaches its customer segments to deliver a value proposition.
  • Customer Relationships: The types of relationships a company establishes with specific customer segments.
  • Revenue Streams: The cash a company generates from each customer segment.
  • Key Resources: The most important assets required to make a business model work.
  • Key Activities: The most important things a company must do to make its business model work.
  • Key Partners: The network of suppliers and partners that make the business model work.
  • Cost Structure: All costs incurred to operate a business model.

The canvas maps visually as a single page. On the right are customer-facing elements (Segments, Value Propositions, Channels, Relationships) — your desirability building blocks. In the center-left are Key Activities, Resources, and Partners — your feasibility building blocks. At the bottom, Revenue Streams and Cost Structure form your viability building blocks.

The Value Proposition Canvas

The Value Proposition Canvas zooms into two building blocks and examines them in finer detail. It has two sides: the Value Map describing what you offer, and the Customer Profile describing who you are serving.

Definition — Value Map (Your Offering)

  • Products and Services: List the products and services your value proposition is built around.
  • Gain Creators: Describe how your products and services create customer gains.
  • Pain Relievers: Describe how your products and services alleviate customer pains.

Definition — Customer Profile (Your Customer)

  • Customer Jobs: Describe what customers are trying to get done in their work and in their lives.
  • Gains: Describe the outcomes customers want to achieve or the concrete benefits they are seeking.
  • Pains: Describe the bad outcomes, risks, and obstacles related to customer jobs.

The goal is to achieve a strong fit between your Value Map and your Customer Profile — your gain creators and pain relievers should directly address the gains and pains that matter most to your customers.


Section 2 — Test

Test 2.1 — Hypothesize

To test a business idea you first have to make explicit all the risks that your idea will not work. You need to turn the assumptions underlying your idea into clear hypotheses you can test, then prioritize them ruthlessly.

Writing Good Hypotheses

When creating hypotheses, begin by writing the phrase “We believe that…” — for example: “We believe that millennial parents will subscribe to monthly educational science projects for their kids.”

Key Insight — Avoiding Confirmation Bias

If you create all of your hypotheses in the “We believe that…” format, you can fall into a confirmation bias trap — constantly trying to prove what you believe instead of trying to refute it. To prevent this, create competing hypotheses that try to disprove your assumptions: “We believe that millennial parents will not subscribe…” You can even test these competing hypotheses at the same time. This is especially helpful when team members cannot agree on which direction to pursue.

Principle — Characteristics of a Good Hypothesis

  • Testable: It can be shown true or false based on evidence.
  • Precise: You know what success looks like — ideally the specific what, who, and when of your assumptions.
  • Discrete: It describes only one distinct, testable thing you want to investigate. One hypothesis per sticky note — no bullet points.

Three Categories of Risk

Market Risk — Desirability Hypotheses

These hypotheses address whether customers actually want what you are offering. They map to the customer-facing building blocks of the Business Model Canvas: Customer Profile (Do the jobs, pains, and gains you identified really matter?), Value Map (Do your products actually solve high-value jobs and relieve pains?), Customer Segments (Are you targeting the right segments, and are they large enough?), Value Propositions (Is yours unique enough?), Channels (Can you reach customers?), and Customer Relationships (Can you retain them?).

Infrastructure Risk — Feasibility Hypotheses

These hypotheses test whether you can actually build and deliver what you promise — whether you can perform the necessary Key Activities at scale, secure the required Key Resources (technology, IP, talent, capital), and create the Key Partnerships your model depends on.

Financial Risk — Viability Hypotheses

These hypotheses examine whether your idea can make money — whether customers will pay a specific price, whether you can generate sufficient Revenue Streams, keep your Cost Structure under control, and ultimately earn a profit.

Assumptions Mapping

Once you have written your hypotheses, use the Assumptions Map to prioritize them. It is a two-axis, two-by-two grid:

Assumptions Map (diagram): the y-axis runs from Unimportant to Important; the x-axis from Have Evidence to No Evidence. Top left (important, with evidence): share these hypotheses with the team and challenge whether the evidence is good enough. Top right (important, no evidence): focus here to identify which hypotheses to test first — this defines your near-term experimentation.

Assumptions Map (How to Use the Quadrants)

  • Top Left — Important + Have Evidence: critical assumptions with observable evidence already available. Action: share with the team, challenge evidence quality, and keep monitoring.
  • Top Right — Important + No Evidence: highest-risk assumptions with little or no proof. Action: run experiments first. This is the primary focus area.
  • Bottom Left — Unimportant + Have Evidence: low-priority assumptions that already have support. Action: track lightly; avoid over-investing time.
  • Bottom Right — Unimportant + No Evidence: low-priority assumptions with limited support. Action: defer until higher-priority risks are resolved.

Definition — The Assumptions Map

Step 1 — Identify: Write each desirability, feasibility, and viability hypothesis on its own sticky note. Use different colors for each type. Keep hypotheses short and precise — no blah blah blah. Discuss and agree as a team.

Step 2 — Prioritize: Plot every hypothesis on two axes. The x-axis is evidence: place a hypothesis on the left if you have relevant, observable, and recent evidence; on the right if you do not. The y-axis is importance: place a hypothesis at the top if it is absolutely critical for your idea to succeed — meaning if proven wrong, the entire idea fails. Place it at the bottom if it is not something you would test first.

Step 3 — Focus on the top-right quadrant: High importance, little evidence. These are your riskiest assumptions — the ones that, if proven false, will cause your business to fail. This is where you start.

Principle

The major focus of this entire framework is on how to test the top-right quadrant of your Assumptions Map: important hypotheses with little evidence. These assumptions, if proven false, will cause your business to fail. Start here — always.
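The quadrant logic above can be sketched in code. This is a minimal, hypothetical illustration — the field names (`risk_type`, `important`, `has_evidence`) and quadrant labels are my own shorthand for the book's two axes, not terminology from the book itself:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    risk_type: str      # "desirability", "feasibility", or "viability"
    important: bool     # is it absolutely critical for the idea to succeed?
    has_evidence: bool  # relevant, observable, recent evidence exists?

def quadrant(h: Hypothesis) -> str:
    """Place a hypothesis in its Assumptions Map quadrant."""
    if h.important and not h.has_evidence:
        return "experiment"  # top right: riskiest, test first
    if h.important and h.has_evidence:
        return "share"       # top left: challenge the evidence
    if h.has_evidence:
        return "track"       # bottom left: low priority, supported
    return "defer"           # bottom right: low priority, unproven

def next_to_test(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    """The top-right quadrant: important hypotheses with no evidence."""
    return [h for h in hypotheses if quadrant(h) == "experiment"]
```

Sorting a backlog of hypotheses this way makes the "start top-right" rule mechanical: whatever `next_to_test` returns is your near-term experimentation plan.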

Test 2.2 — Experiment

With your most important hypotheses identified, turn them into experiments. Start with cheap, fast experiments to learn quickly. Every experiment you run reduces the risk that you will spend time, energy, and money on ideas that will not work.

Six Principles for Better Experiments

A good experiment is precise enough so that any team member can replicate it and generate usable, comparable data. It clearly defines:

  • The “who” — the test subject
  • The “where” — the context
  • The “what” — the elements being tested

Definition — Four Components of an Experiment

  • Hypothesis: The most critical hypothesis from the top-right quadrant of your Assumptions Map.
  • Experiment: The description of the experiment you will run to support or refute the hypothesis.
  • Metrics: The specific data you will measure.
  • Criteria: The success threshold for your experiment metrics.

Definition — Call-to-Action Experiment

A specific type of experiment that prompts a test subject to perform an observable action. Used to test one or more hypotheses by measuring what people do — not just what they say — making it a source of stronger evidence.

The Test Card

The Test Card captures your entire experiment on a single page before you begin running it:

Test Card

Step 1: Hypothesis — "We believe that…"
Step 2: Test — "To verify that, we will…"
Step 3: Metric — "And measure…"
Step 4: Criteria — "We are right if…"

Test Card Template

  1. Hypothesis — “We believe that…” Write the specific hypothesis you are testing. Rate how critical it is (low / medium / high).
  2. Test — “To verify that, we will…” Describe the experiment you will run. Rate the test cost (cheap / moderate / expensive) and data reliability (weak / moderate / strong).
  3. Metric — “And measure…” Define the specific data you will collect. Rate the time required (hours / days / weeks).
  4. Criteria — “We are right if…” State the success threshold. What specific result would confirm the hypothesis? What would refute it?

Also capture: Test Name, Deadline, Assigned To, and Duration.
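The point of pre-set criteria is that the pass/fail judgment is fixed before any data arrives. A minimal sketch, assuming a landing-page test with an invented 20% sign-up threshold (the numbers and field names are illustrative, not from the book):

```python
from dataclasses import dataclass

@dataclass
class TestCard:
    hypothesis: str           # Step 1: "We believe that…"
    test: str                 # Step 2: "To verify that, we will…"
    metric: str               # Step 3: "And measure…"
    success_threshold: float  # Step 4: "We are right if…" — set BEFORE running

    def evaluate(self, measured: float) -> str:
        """Compare the measured result against the pre-set criterion."""
        return "supported" if measured >= self.success_threshold else "refuted"

# Hypothetical example card
card = TestCard(
    hypothesis="Millennial parents will subscribe to monthly science kits",
    test="Landing page with a sign-up call to action",
    metric="Share of visitors who sign up",
    success_threshold=0.20,  # invented criterion: 20% conversion
)
```

Because `success_threshold` is baked into the card before the experiment runs, there is no room to rationalize a weak result afterward.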

Experiment Guidelines

Before running any experiment, document these parameters to ensure clarity and replicability:

Experiment Guidelines

  1. Our customer segment is [fill in]
  2. The total number of customers involved in our experiment is [fill in], estimated to be [fill in]
  3. Our experiment will run from [fill in]
  4. The information currency we are collecting is [fill in]
  5. The branding we will use for the experiment is [fill in]
  6. The financial exposure of the experiment is [fill in]
  7. We can turn off the experiment by using [fill in]

Anyone who isn’t embarrassed by who they were last year probably isn’t learning enough. — Alain de Botton

Test 2.3 — Learn

Seven Principles for Learning

Not all evidence is created equal. The strength of a piece of evidence determines how reliably it helps you support or refute a hypothesis.

Principle — The Evidence Spectrum

Weak → Strong: Opinions → Surveys → Lab Tests → Observed Behavior → Real Purchases

Weak evidence comes from opinions and beliefs (“I would…,” “I think… is important”), from what people say in interviews or surveys, from lab settings where people know they are being observed, and from small investments like email sign-ups.

Strong evidence comes from facts and events (“Last week I…,” “I spent $__ on…”), from what people actually do — observable behavior is a good predictor of future action — from real-world settings where people are not aware they are being tested, and from large investments like pre-purchasing a product or putting one’s professional reputation on the line.

In practice, you layer experiments to progressively strengthen evidence. You might start with interviews to gain initial insights into your customers’ jobs, pains, and gains. Then run a survey to test those insights at broader scale. Finally, conduct a simulated sale to generate the strongest type of evidence for customer interest. Your confidence level should rise with each additional experiment you conduct for the same hypothesis.

The Learning Card

After running an experiment, capture your learnings in a structured way. The Learning Card is the companion to the Test Card:

Learning Card

Step 1: Hypothesis — "We believed that…"
Step 2: Observation — "We observed…" (+ data reliability rating)
Step 3: Learnings & Insights — "From that we learned that…" (+ action required rating)
Step 4: Decisions & Actions — "Therefore, we will…"

Learning Card Template

  1. Hypothesis — “We believed that…” Restate the original hypothesis you tested.
  2. Observation — “We observed…” Describe what actually happened. What data did you collect? Rate the data reliability (weak / moderate / strong).
  3. Learnings and Insights — “From that we learned that…” What patterns emerged? What surprised you? Does this support or refute the hypothesis? Rate the action required (low / medium / high).
  4. Decisions and Actions — “Therefore, we will…” Based on what you learned, what will you do next? Pivot the idea? Run another experiment? Double down?

Also capture: Insight Name, Date of Learning, and Person Responsible.

Test 2.4 — Decide

Experiments only create value when their evidence leads to clear decisions. This chapter covers the rituals and ceremonies that turn raw evidence into action — and keep your team aligned, reflective, and moving forward.

Experiment Ceremonies

  • Daily Standup — align on daily goals, tasks, and blockers.
  • Weekly Learning — gather evidence, generate insights, revisit strategy.
  • Biweekly Retrospective — reflect on process: what's working, what to improve, what to try.
  • Stakeholder Review — present evidence to the investment committee for decisions.

Ceremonies

Daily Standups

Daily Standup Agenda

  1. What is the Daily Goal? Create a daily goal. Daily goals feed into your larger, more ambitious goals for the overall business.
  2. How Will You Achieve It? Identify the tasks needed to achieve the daily goal and plan your day.
  3. What Is in the Way? Identify any blockers that would prevent you from completing experiment tasks. Some can be resolved within the standup; others require a follow-up meeting.

Weekly Learning

Weekly Learning Agenda

  1. Gather Evidence: Collect all qualitative and quantitative evidence your experiments have generated.
  2. Generate Insights: Look for patterns. Even qualitative evidence can be quickly themed using techniques like affinity sorting. Keep an open mind — unexpected insights may reveal new paths to revenue.
  3. Revisit Your Strategy: Take new insights back to your Business Model Canvas, Value Proposition Canvas, and Assumptions Map. Update them to reflect your current state of learning. If the constant revision feels awkward, don't worry — that is a normal part of being an entrepreneur.

Biweekly Retrospective

This is arguably the most important ceremony. When you stop reflecting, you stop learning and improving.

Biweekly Retrospective Agenda

  1. What Is Going Well: Take five minutes to silently write down what is going well. This gets the retrospective off to a good start as people have space to speak positively.
  2. What Needs Improvement: Take five minutes to silently write down what needs improvement. Frame these as opportunities to improve, not personal attacks against team members.
  3. What to Try Next: Come up with three things you would like to try — something from the discussion or something completely new.

Principles of Experiment Flow

Principle 1 — Visualize Your Experiments

Make your work visible to yourself and others. Write down your experiments — one per sticky note — and draw a simple experiment board with four columns: Backlog → Setup → Run → Learn. Rank experiments in the Backlog from top to bottom, with the next one to execute at the top. Pull them across as you work on each. If you keep all this work in your head, you will never achieve flow, and your teammates cannot read your mind.

Experiment Board (example): Backlog (Online Ads, Customer Interviews, Landing Page, Survey) → Setup → Run → Learn.

Principle 2 — Limit Experiments in Progress

Multitasking too many experiments leads to trouble. Define work-in-progress limits — for example, a limit of one experiment per column. This prevents the team from pulling a second experiment until the first has moved forward. In practice, this means you run the customer interviews before the survey, instead of trying to do both at once. What you learn from one experiment informs the next.
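The pull rule can be sketched as code. This is a hypothetical illustration of a WIP-limited board — the column names follow the book's Backlog → Setup → Run → Learn layout, but the `pull` function and limit of one are my own framing of the example:

```python
COLUMNS = ["backlog", "setup", "run", "learn"]
WIP_LIMIT = 1  # at most one experiment per column (backlog excluded)

def pull(board: dict[str, list[str]], experiment: str, dest: str) -> bool:
    """Move an experiment one column forward, refusing the pull
    when the destination column is already at its WIP limit."""
    src = COLUMNS[COLUMNS.index(dest) - 1]
    if dest != "backlog" and len(board[dest]) >= WIP_LIMIT:
        return False  # destination full: finish what's in progress first
    board[src].remove(experiment)
    board[dest].append(experiment)
    return True
```

With a limit of one, trying to pull the survey into Setup while the interviews are still there simply fails — which is the point: what you learn from the interviews should inform the survey.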

Principle 3 — Continuous Experimentation

Continue to experiment over time. Identify and visualize blockers — such as an internal department refusing to allow customer contact — because these prevent flow and need to be communicated to stakeholders. As your process matures, you may need to split columns (separating “ready to run” from “still being set up”) to capture the nuances of your workflow.

Key Insight — The Goal of the Entire Process

The goal is not to test and learn for the sake of it. The goal is to decide, based on evidence and insights, to progress from idea to business. Your experiments, ceremonies, and flow principles all serve this one objective. Make sure every cycle through the process brings you closer to a clear pivot, persevere, or kill decision.


Section 3 — Experiments

Experiment 3.1 — Select an Experiment

With a library of experiments available, the challenge becomes picking the right one. Narrow your choice by asking three questions:

Experiment Strength vs. Cost

  • Interviews & Surveys — cheap, fast, weaker evidence.
  • Prototypes & Stubs — moderate cost, moderate evidence.
  • Landing Pages & Ads — call-to-action evidence.
  • Presales & MVPs — expensive, strong evidence.

1. What type of hypothesis are you testing? Some experiments produce better evidence for desirability, others for feasibility, and others for viability. Pick experiments that match your major learning objective.

2. How much evidence do you already have? The less you know, the less you should waste time, energy, and money. When uncertainty is high, quick and cheap experiments are most appropriate — they point you in the right direction despite producing generally weaker evidence. As your understanding grows, shift to experiments that produce stronger evidence.

3. How much time do you have? If a major meeting with decision-makers or investors is approaching, you may need fast experiments to generate evidence on multiple fronts quickly. If you are running low on funding, choose experiments that will produce evidence compelling enough to extend it.

Principle — Four Rules of Thumb for Selecting Experiments

  1. Go cheap and fast at the beginning. When you know little, stick to inexpensive, quick experiments to find the right direction. You can afford weaker evidence early on because you will test more later.
  2. Increase the strength of evidence over time. Run multiple experiments for the same hypothesis. Learn fast, then run additional experiments to produce stronger confirmation. Never make important decisions based on one experiment or weak evidence alone.
  3. Always pick the strongest experiment given your constraints. Even when going fast and cheap, design the strongest experiment you can within your budget and timeline.
  4. Reduce uncertainty before you build anything. Many people think they need to build something to start testing. Quite the contrary — the higher the cost to build, the more experiments you should run first to verify that customers actually have the jobs, pains, and gains you assume they have.

Experiment Sequences

Example B2C Software Sequence
Customer Interview → Online Ad → Landing Page → Email Campaign → Clickable Prototype → Mock Sale → Wizard of Oz

Great teams do not treat experiments as isolated events. They build momentum by running sequences that progressively strengthen evidence over time. Every experiment has logical predecessors and successors — experiments you can run before, during, and after. By chaining experiments together deliberately, you move from early, cheap signals to deep, high-confidence evidence faster than running experiments in isolation.

Definition — Common Experiment Sequences

  • B2B Hardware: Customer Interview → Paper Prototype → 3D Print → Data Sheet → Mash-Up MVP → Letter of Intent → Crowdfunding
  • B2B Software: Customer Interview → Discussion Forums → Boomerang → Clickable Prototype → Presale → Single Feature MVP
  • B2B Services: Expert Stakeholder Interviews → Customer Support Analysis → Brochure → Presale → Concierge
  • B2C Hardware: Customer Interview → Search Trend Analysis → Paper Prototype → 3D Print → Explainer Video → Crowdfunding → Pop-Up Store
  • B2C Software: Customer Interview → Online Ad → Simple Landing Page → Email Campaign → Clickable Prototype → Mock Sale → Wizard of Oz
  • B2C Services: Customer Interview → Search Trend Analysis → Online Ad → Simple Landing Page → Email Campaign → Presale → Concierge
  • B2B2C: Customer Interview → Online Ad → Simple Landing Page → Explainer Video → Presale / Concierge → Buy a Feature / Data Sheet → Partner & Supplier Interview → Letter of Intent → Pop-Up Store
  • Highly Regulated: A Day in the Life → Validation Survey → Customer Support Analysis → Sales Force Feedback → Storyboard / Explainer Video → Brochure → Partner & Supplier Interview → Data Sheet → Presale
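Sequences like these are easy to encode as ordered lists, with a small helper that tells a team which experiment comes next. A minimal sketch — the two sequences below are taken from the table, and the helper function itself is an illustration, not something the book prescribes:

```python
# Two of the book's recommended sequences, encoded as ordered lists.
SEQUENCES = {
    "B2C Software": [
        "Customer Interview", "Online Ad", "Simple Landing Page",
        "Email Campaign", "Clickable Prototype", "Mock Sale", "Wizard of Oz",
    ],
    "B2B Services": [
        "Expert Stakeholder Interviews", "Customer Support Analysis",
        "Brochure", "Presale", "Concierge",
    ],
}

def next_experiment(context, completed):
    """Return the next experiment in the sequence, given the ones already run."""
    remaining = [e for e in SEQUENCES[context] if e not in completed]
    return remaining[0] if remaining else None

up_next = next_experiment("B2C Software", completed={"Customer Interview"})
```

After finishing customer interviews, a B2C software team would move to the online ad — the cheap, fast signal that precedes the stronger landing-page and presale evidence.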

Experiment 3.2 — Discovery

From Discovery to Validation
  • Exploration — interviews, ethnography, surveys
  • Data Analysis — search trends, support data, forums
  • Interest Discovery — ads, email campaigns, feature stubs
  • Prototypes — paper, 3D, storyboard, explainer video
  • Validation — landing pages, presales, split tests

Discovery experiments help you explore the landscape — uncovering customer jobs, pains, and gains, and generating initial signals about your value proposition. They are generally cheaper, faster, and produce weaker evidence than validation experiments, making them ideal for early stages when uncertainty is highest.

Exploration Experiments

Customer Interview

An interview focused on exploring customer jobs, pains, gains, and willingness to pay. Ideal for gaining qualitative insight into the fit between your value proposition and customer segment — and a good starting point for price testing. Not ideal for predicting what people will actually do.

Partner & Supplier Interviews

Similar to Customer Interviews, but focused on whether you can feasibly run the business. You source and interview key partners who can supply the Key Activities and Key Resources that you cannot — or do not want to — handle in-house.

Expert Stakeholder Interviews

Similar to Customer Interviews, but focused on getting buy-in from key players inside your organization whose support is critical to moving forward.

A Day in the Life

A method of qualitative research that uses customer ethnography — observing or working alongside customers — to better understand their jobs, pains, and gains in context. Relatively cheap; you may need to compensate people for their time.

Discovery Survey

An open-ended questionnaire used to collect information from a sample of customers. Ideal for uncovering jobs, pains, and gains at a broader scale. Not ideal for determining what people will actually do — only what they say they will do.

Data Analysis Experiments

Search Trend Analysis

The use of search data to investigate patterns in what people are looking for online. Ideal for performing your own market research — especially on newer trends — instead of relying on third-party data.

Web Traffic Analysis

The use of website data collection, reporting, and analysis to look for customer behavior patterns in an existing product or site.

Discussion Forums

Mining online discussion forums to uncover unmet jobs, pains, and gains. Ideal for finding unmet needs in your existing product or a competitor’s.

Sales Force Feedback

Leveraging the frontline sales team to surface unmet jobs, pains, and gains. Ideal for businesses that use a group of people to conduct sales.

Customer Support Analysis

Using existing customer support data to uncover unmet jobs, pains, and gains. Ideal for businesses that already have a substantial number of existing customers.

Interest Discovery Experiments

Online Ad

An online advertisement that clearly articulates a value proposition for a targeted customer segment with a simple call to action. Ideal for quickly testing your value proposition at scale with customers online.

Link Tracking

A unique, trackable hyperlink used to gather quantitative data on customer actions. Ideal for testing what people do, not just what they say.

Feature Stub

A small test of an upcoming feature — usually in the form of a button — that includes only the very beginning of the experience. Ideal for rapidly testing the desirability of a new feature within an existing product. Not ideal for testing mission-critical functionality.

404 Test

A faster, riskier variation of the Feature Stub: nothing sits behind the button, generating 404 errors each time it is clicked. Count the errors to measure interest. Do not run a 404 test for more than a few hours.
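Counting those 404s is usually a quick pass over the web server's access log. A minimal sketch, assuming Common Log Format entries; the `/new-feature` path is a hypothetical stand-in for whatever URL sits behind your stubbed button:

```python
import re

# Matches the request path and status code in a Common Log Format line.
LOG_LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_404_clicks(log_lines, feature_path="/new-feature"):
    """Count 404 responses on the stubbed feature path — each one is a click."""
    hits = 0
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("path") == feature_path and m.group("status") == "404":
            hits += 1
    return hits

sample = [
    '1.2.3.4 - - [01/Jan/2024:10:00:00] "GET /new-feature HTTP/1.1" 404 162',
    '1.2.3.5 - - [01/Jan/2024:10:00:01] "GET /home HTTP/1.1" 200 512',
]
clicks = count_404_clicks(sample)
```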

Email Campaign

Email messages deployed across a specific period of time to customers. Ideal for quickly testing your value proposition with a customer segment. Not ideal as a replacement for face-to-face customer interaction.

Social Media Campaign

Social media messages deployed across a specific period of time. Ideal for acquiring new customers, increasing brand loyalty, and driving sales.

Referral Program

A method of promoting products or services to new customers through referrals, word of mouth, or digital codes. Ideal for testing how to organically scale your business.

Discussion Prototypes

Paper Prototype

A sketched interface on paper, manipulated by a person to simulate the software’s reactions to customer interaction. Ideal for rapidly testing the concept of your product with customers. Not ideal as a replacement for proper usability testing.

3D Print

Rapidly prototyping a physical object from a three-dimensional digital model. Ideal for rapidly testing iterations of a physical solution with customers.

Storyboard

Illustrations displayed in sequence to visualize an interactive experience. Ideal for brainstorming scenarios of different value propositions and solutions with customers.

Data Sheet

A one-page physical or digital sheet with specifications of your value proposition. Ideal for distilling your offering down to a single testable page for customers and key partners.

Brochure

A mocked-up physical brochure of your imagined value proposition. Ideal for testing in person with customers who are difficult to find online.

Explainer Video

A short video that explains a business idea in a simple, engaging, and compelling way. Ideal for quickly communicating your value proposition at scale.

Boomerang

Performing a customer test on an existing competitor’s product to gather insights on the value proposition. Ideal for finding unmet needs in an existing market without building anything.

Pretend to Own

A nonfunctioning, low-fidelity prototype placed into a customer’s daily life to test whether the solution fits naturally. Sometimes called a Pinocchio experiment. Ideal for generating your own evidence on the potential usefulness of an idea.

Preference & Prioritization Discovery

Product Box

A facilitation technique used with customers to visualize value propositions, main features, and key benefits in the physical form of a box. Ideal for refining your value proposition and narrowing in on key features.

Speed Boat

A visual game technique used with customers to identify what is inhibiting their progress — and how it impacts your feasibility. Ideal for going beyond conversations to get a visual representation of what is slowing customers down.

Card Sorting

A technique in which customers sort cards to generate insights into jobs, pains, gains, and value propositions.

Buy a Feature

A technique where people use pretend currency to “buy” the features they most want in a product. Ideal for prioritizing features and refining your understanding of customer jobs, pains, and gains.

Experiment 3.3 — Validation

Invention is not disruptive. Only customer adoption is disruptive. — Jeff Bezos

Validation experiments go further than discovery. They produce stronger evidence by testing whether customers will actually commit — with their time, money, or reputation — to your value proposition. They tend to require more investment to set up but yield evidence you can make real decisions on.

Interaction Prototypes

Clickable Prototype

A digital interface representation with clickable zones that simulate the software’s reactions to customer interaction. Ideal for rapidly testing the concept of your product at higher fidelity than paper. Not ideal as a replacement for proper usability testing with customers.

Single Feature MVP

A functioning minimum viable product with the single feature needed to test your core assumption. Ideal for learning whether the central promise of your solution resonates with customers.

Mash-Up

A functioning MVP that combines multiple existing services to deliver value without building custom technology. Ideal for learning whether the overall solution resonates with customers before investing in custom development.

Concierge

Creating a customer experience and delivering value entirely through manual human effort — and the customer knows it. Ideal for learning firsthand about every step needed to create, capture, and deliver value. Not ideal for scaling a product or business.

Life-Sized Prototype

Real-world replicas of service experiences at full scale. Ideal for testing higher-fidelity solutions with customers at a small sample size before deciding to scale.

Call to Action

Simple Landing Page

A simple web page that clearly illustrates your value proposition with a call to action. Ideal for determining whether your value proposition resonates with your customer segment.

Crowdfunding

Funding a project or venture by raising many small amounts of money from a large number of people, typically via the internet. Ideal for validating demand with customers who believe in your value proposition. Not ideal for determining whether your new business venture is feasible.

Split Test

A method of comparing two versions — control A against variant B — to determine which performs better. Ideal for testing different versions of value propositions, prices, and features to see what resonates best with customers.
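To know whether variant B genuinely outperforms control A — rather than winning by chance — split-test results are typically checked with a two-proportion z-test. A self-contained sketch with hypothetical conversion numbers (the test itself is standard statistics, not a method from the book):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B split: returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical result: variant B converted 60/1000 visitors vs. control A's 40/1000.
z, p = two_proportion_z(40, 1000, 60, 1000)
significant = p < 0.05
```

Here the 6% vs. 4% difference clears the conventional 5% significance threshold — but per the rules of thumb above, even a significant split test is one piece of evidence, not a final verdict.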

Presale

A sale held before an item is made available for purchase. Unlike a mock sale, a presale collects real payment, typically processed when the product ships. Ideal for gauging market demand at a smaller scale before launching to the public.

Validation Survey

A closed-ended questionnaire used to learn whether customers would be disappointed if your product went away or whether they would refer others to it. Captures sentiment at scale, though it reflects what people say rather than what they do.

Simulation Experiments

Wizard of Oz

Creating a customer experience and delivering value manually with people instead of technology — but the customer does not know a person is behind the curtain. Ideal for learning firsthand about every step needed to create, capture, and deliver value. Not ideal for scaling a product or business.

Mock Sale

Presenting a sale for your product without processing any payment information. Ideal for testing different price points and measuring purchase intent without a real financial transaction.

Letter of Intent

A short, written, non-legally-binding contract. Ideal for evaluating key partners and B2B customer segments. Not ideal for B2C customer segments.

Pop-Up Store

A retail store opened temporarily to sell goods and test face-to-face purchasing behavior. Ideal for B2C businesses testing whether customers will really make a purchase. For B2B, consider a conference booth instead.

Extreme Programming Spike

A simple program built to explore whether a technical or design solution is feasible. The term comes from rock climbing and railroads — a necessary pause to determine whether you can continue making progress. Ideal for quickly evaluating software feasibility. Not ideal for scaling: it is typically thrown away and rebuilt properly afterward.


Section 4 — Mindset

Mindset 4.1 — Avoid Experiment Pitfalls

The more success you’ve had in the past, the less critically you examine your own assumptions. — Vinod Khosla, Venture Capitalist

Even disciplined teams fall into common traps when testing business ideas. Here are the eight most dangerous pitfalls — and how to fix each one.

Experiment Pitfalls — Self-Assessment
  • We dedicate enough weekly time to testing
  • We act on analysis rather than overthinking it
  • Our data is comparable across experiments
  • We measure behavior, not just opinions
  • We challenge our own confirmation biases
  • We run multiple experiments per hypothesis
  • We synthesize learnings and adapt our idea
  • We keep testing in-house, not outsourced

Pitfall 1 — Time Trap: Not dedicating enough time.

You get out what you put in. Teams that do not put in enough time to test business ideas will not get great results. Too often, teams underestimate what it takes to conduct multiple experiments well.

The Fix: Carve out dedicated time every week to test, learn, and adapt. Set weekly goals for what you would like to learn about your hypotheses. Visualize your work so that stalled or blocked tasks become obvious.

Pitfall 2 — Analysis Paralysis: Overthinking things you should just test and adapt.

Good ideas and concepts are important, but too many teams overthink and waste time rather than getting out of the building to test and adapt.

The Fix: Time-box your analysis work. Differentiate between reversible and irreversible decisions — act fast on the former, take more time for the latter. Avoid debates of opinion; conduct evidence-driven debates followed by decisions.

Pitfall 3 — Incomparable Data: Messy data that cannot be compared.

Too many teams are sloppy in defining their exact hypothesis, experiment, and metrics. That leads to data that cannot be compared — for example, testing with different customer segments or in wildly different contexts.

The Fix: Use the Test Card. Make the test subject, experiment context, and precise metrics explicit. Make sure everybody involved in running the experiment is part of the design.
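A Test Card can be modeled as a small record whose success criterion is fixed before the experiment runs. This is an illustrative sketch — the field names mirror the card's four components, and the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCard:
    hypothesis: str            # "We believe that..."
    experiment: str            # "To verify that, we will..."
    metric: str                # "And measure..."
    success_criterion: float   # "We are right if..." — set BEFORE running

    def evaluate(self, observed: float) -> bool:
        """Compare the observed metric against the pre-set criterion."""
        return observed >= self.success_criterion

card = TestCard(
    hypothesis="Freelancers will sign up for invoicing alerts",
    experiment="Run a simple landing page with an email sign-up for one week",
    metric="visitor-to-signup conversion rate",
    success_criterion=0.10,
)
validated = card.evaluate(observed=0.14)
```

Because the criterion is written down before the data arrives, a 14% conversion is unambiguously a pass — there is no room to rationalize the result after the fact.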

Pitfall 4 — Weak Evidence: Only measuring what people say, not what they do.

Teams are often satisfied with surveys and interviews and fail to go deeper into how people actually behave in real-life situations.

The Fix: Do not just believe what people say. Run call-to-action experiments. Generate evidence that gets as close as possible to the real-world situation you are trying to test.

Pitfall 5 — Confirmation Bias: Only believing evidence that agrees with your hypothesis.

Sometimes teams discard or underplay evidence that conflicts with their hypothesis, preferring the illusion of being correct in their prediction.

The Fix: Involve others in the data synthesis process to bring in different perspectives. Create competing hypotheses to challenge your beliefs. Conduct multiple experiments for each hypothesis.

Pitfall 6 — Too Few Experiments: Conducting only one experiment for your most important hypothesis.

Few teams realize how many experiments they should conduct to support or refute a hypothesis. They make important decisions based on one experiment with weak evidence.

The Fix: Conduct multiple experiments for important hypotheses. Differentiate between weak and strong evidence. Increase the strength of evidence as uncertainty decreases.

Pitfall 7 — Failure to Learn and Adapt: Not taking time to analyze evidence and generate insights.

Some teams get so deep into testing that they forget the real goal: to decide, based on evidence and insights, to progress from idea to business.

The Fix: Set aside time to synthesize results, generate insights, and adapt your idea. Always navigate between the detailed testing process and the big picture. Create rituals to ask: Are we making progress from idea to business?

Pitfall 8 — Outsource Testing: Outsourcing what you should be doing and learning yourself.

Testing is about rapid iterations between testing, learning, and adapting an idea. An agency cannot make those rapid decisions for you, and you risk wasting time and energy by outsourcing.

The Fix: Shift resources reserved for an agency toward building an internal team of professional testers.

Mindset 4.2 — Lead Through Experimentation

Leadership in an experimentation culture looks fundamentally different from traditional leadership. Whether you are improving an existing business model or inventing a new one, the principles remain the same: use language that empowers, hold teams accountable for outcomes rather than features, and develop your facilitation skills.

Improving Business Models

Be mindful of how your words land. Overuse of the leader’s perspective — “I think,” “my experience says” — can unintentionally strip teams of their decision-making authority. They will start waiting for you to assign experiments rather than designing their own. Focus on business outcomes, not features and dates, and create opportunities for teams to give an account of how they are making progress.

Principle — Leadership Language (Existing Models)

Do say: “We, Us, Our” · “How would you achieve this business outcome?” · “Can you think of 2–3 additional experiments?”

Don’t say: “I, Me, Mine” · “Deliver this feature by release date” · “This is the only experiment we should run”

Inventing Business Models

Creating something new demands a “strong opinions, weakly held” mindset. Start with a hypothesis, but remain genuinely open to being proven wrong. If you are merely trying to prove you are right, your cognitive biases will take over.

Principle — Leadership Language (New Models)

Do say: “What is your learning goal?” · “What obstacles can I remove to help you make progress?” · “How else might we approach this problem?” · “What learning has surprised you so far?”

Don’t say: “I don’t trust the data.” · “I still think we should build it anyway.” · “You need 1,000 customers for it to mean anything.” · “This has to be a $15M business by next year.”

Steps Leaders Can Take

Leadership Moves

  • Create an enabling environment: Abolish business plans as the sole decision-making tool. Establish testing processes and metrics that differ from execution processes. Give teams the autonomy to make decisions and move fast — then get out of the way.
  • Remove obstacles and open doors: Give teams access to customers, brand assets, intellectual property, and specialized resources. When internal roadblocks appear, clear them.
  • Make evidence trump opinion: Past experience may actually prevent you from seeing the future. Push teams to build a compelling case based on evidence — not based on anyone’s preferences, including your own.
  • Ask questions, not answers: Relentlessly inquire about experiments, evidence, insights, and patterns. Your job is to help teams grow and adapt their ideas, not to hand down solutions.

Create More Leaders

Meet your teams one half-step ahead. Think of where you eventually want team members to be, then look backward to find the next small nudge. Whether in one-on-ones, retrospectives, or hallway conversations — find opportunities to guide that first step.

Understand context before giving advice. Actively listen until team members are finished speaking. Ask clarifying questions to make sure you understand the full context before offering your perspective.

Key Insight — Say “I Don’t Know”

Practice saying these three words when you do not have the answer. It helps your teams understand that you do not have all the answers — nor should you. Follow it up with “How would you approach this?” or “What do you think we should do?” Saying “I don’t know” models the behavior you want the leaders you are developing to embrace.

Mindset 4.3 — Organize for Experiments

Silos vs. Cross-Functional Teams

There has been a broad shift from traditional, functionally siloed organization models to more agile, cross-functional team approaches. When testing new business ideas, speed and agility are imperative. Cross-functional teams can adapt more quickly than functionally siloed teams. In many organizations, small, dedicated, cross-functional teams outperform large, siloed project teams.

The Innovation Portfolio

Innovation Portfolio Stages
  • Funding — Seed: less than $50,000 · Launch: $50,000–$500,000 · Growth: $500,000+
  • Team Size — Seed: 1–3 · Launch: 2–5 · Growth: 5+
  • Time per Team Member — Seed: 20–40% · Launch: 40–80% · Growth: 100%
  • Number of Projects — Seed: high · Launch: medium · Growth: low
  • Objectives — Seed: customer understanding, context, and willingness to pay · Launch: proven interest and indications of profitability · Growth: proven model at limited scale
  • KPIs — Seed: market size, customer evidence, problem/solution fit, opportunity size · Launch: value proposition evidence, financial evidence, feasibility evidence · Growth: product/market fit, acquisition and retention evidence, business model fit
  • Experiment Themes — Seed: Desirability 50–80%, Feasibility 0–10%, Viability 10–30% · Launch: Desirability 30–50%, Feasibility 10–40%, Viability 20–50% · Growth: Desirability 10–30%, Feasibility 40–50%, Viability 20–50%

An innovation portfolio helps you manage multiple bets simultaneously, distributing resources across ideas at different stages of maturity. Just as a venture capitalist does not invest everything in one startup, your organization should spread innovation investments across a portfolio and manage them with the discipline each stage demands.

Definition — The Three Portfolio Stages

Seed (under $50K, teams of 1–3, 20–40% time commitment): Focus on customer understanding, context, and willingness to pay. KPIs center on market size, customer evidence, and problem/solution fit. Experiment mix: 50–80% Desirability, 0–10% Feasibility, 10–30% Viability.

Launch ($50K–$500K, teams of 2–5, 40–80% time commitment): Focus on proven interest and early indications of profitability. KPIs center on Value Proposition evidence, financial evidence, and feasibility evidence. Experiment mix: 30–50% Desirability, 10–40% Feasibility, 20–50% Viability.

Growth ($500K+, teams of 5+, 100% time commitment): Focus on a proven model at limited scale. KPIs center on product/market fit, acquisition and retention evidence, and business model fit. Experiment mix: 10–30% Desirability, 40–50% Feasibility, 20–50% Viability.
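The stage guidelines above lend themselves to a simple sanity check: given a team's planned experiment mix, does each risk type's share fall inside its stage's range? A minimal sketch — the ranges come from the book's table, while the checking function and example mix are illustrative:

```python
# Guideline ranges per stage, as (low, high) shares of the experiment mix.
STAGE_MIX = {
    "Seed":   {"desirability": (0.50, 0.80), "feasibility": (0.00, 0.10), "viability": (0.10, 0.30)},
    "Launch": {"desirability": (0.30, 0.50), "feasibility": (0.10, 0.40), "viability": (0.20, 0.50)},
    "Growth": {"desirability": (0.10, 0.30), "feasibility": (0.40, 0.50), "viability": (0.20, 0.50)},
}

def mix_fits_stage(stage, mix):
    """Check each risk type's share against the stage's guideline range."""
    return all(lo <= mix[risk] <= hi for risk, (lo, hi) in STAGE_MIX[stage].items())

seed_plan = {"desirability": 0.65, "feasibility": 0.05, "viability": 0.30}
ok = mix_fits_stage("Seed", seed_plan)
```

A seed-stage team spending 65% of its effort on desirability passes; the same mix at growth stage would fail, since by then most of the desirability risk should already be resolved.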

Key Insight — Uncertainty and Funding

Seed Stage at a Glance
  • Funding: <$50K — small bets, fast learning
  • Team: 1–3 — small and dedicated
  • Time: 20–40% per team member
  • Focus: ~65% desirability experiments

The relationship between uncertainty and funding moves in opposite directions. Early on, uncertainty and risk are very high while funding is low. As you generate evidence through experimentation, uncertainty drops — and funding should rise accordingly. This is why a venture-capital-style approach to incremental funding works: teams earn the right to more investment by reducing risk with evidence.

Investment Committees

Teams need a governing body that funds experiments, removes obstacles, and makes decisions.

Designing the Committee

  • Keep it small: 3–5 members who can make decisions and move quickly.
  • Add an external voice: Consider an external member or entrepreneur-in-residence to bring a fresh perspective.
  • Include decision-makers: Members must have authority over approvals and budget.
  • Prioritize entrepreneurial mindset: Members do not need a history of entrepreneurship, but they must be willing to challenge the status quo. Too many conservative members will prematurely stunt new innovations.

Committee Working Agreement

  • Be on time: If members do not prioritize review ceremonies, teams will wonder whether their work matters.
  • Make decisions in the meeting: Teams should never leave wondering whether they can move forward. Decide with the teams present before adjourning.
  • Leave ego at the door: Have an opinion, but be willing to be swayed by evidence. The teams will present what they have experimented on and how they propose to move forward. Your job is to listen, not talk over them.

Six Areas to Monitor as a Committee

  • Time: Are teams getting enough dedicated time?
  • Multitasking: Are they spread across too many projects?
  • Funding: Do they have the budget to run experiments?
  • Support: Do they have leadership and coaching?
  • Access: Can they reach customers and necessary resources?
  • Direction: Do they have a clear strategy and KPIs to guide them?

Principle — The Complete Framework

Design your team and shape your idea using the Business Model Canvas and Value Proposition Canvas. Test by hypothesizing, experimenting, learning, and deciding — using Test Cards, Learning Cards, and the Assumptions Map. Choose from an extensive experiment library of discovery and validation experiments, sequencing them for maximum learning. And adopt the mindset of avoiding pitfalls, leading through experimentation, and organizing your teams and portfolio for continuous testing. The goal, always, is to reduce the risk and uncertainty of new ideas with evidence — so you can progress from idea to business with confidence.