The Customer Discovery Handbook

If you’re at all familiar with innovation practices like design thinking and Lean Startup, you’re probably familiar with the idea that you sometimes have to be on a learning mission and other times you have to be on a more traditional scaling mission. This guide is about doing that learning.

Riddle me this: What’s the difference between revenue and research? The answer is that you can’t have too much revenue. But, yes, you can have too much research. Worse still, you can do the right kind of research at the wrong time and have it be worthless.

This guide is about helping you identify the questions that need answering and pairing them with just enough of the right kind of research. I’ve organized the guide around five key areas:
After reviewing this guide, you will be able to:

  1. Identify the right discovery questions to focus on as your project evolves
  2. Answer those focal questions with effective, actionable discovery work
  3. Present the above to your team and stakeholders for buy-in

Persona & Problem Hypotheses


What is it?

A persona is a humanized view of your customer which allows you to create testable ideas about how they’ll behave (that’s the hypothesis part). There isn’t a fixed format, though you can see what I consider a good example here: Example Personas. Here’s the key thing, though: personas are more of an approach to answering questions than they are a discrete item. You could improvise one in a hallway conversation about a customer-related topic, but there’s no single, permanent ‘done’ point and no perfect persona. All that said, there is a pretty reliable process for focusing your questions and getting effective answers. That’s what we’re going to cover.

Closely tied to the concept and methodology of testing personas is the idea of problem scenarios. These are statements of need or desire that are not solution-specific. They’re not all things someone would refer to as a ‘problem’- problem scenario is just a working term. For example, you could have a problem scenario of ‘hanging a picture’ or ‘an afternoon pick-me-up’. From there, you would look at the alternative ways a persona might solve that problem. If you’re familiar with the ‘jobs to be done’ concept, this is basically the same thing.

A problem scenario hypothesis generally has to do with learning what items are top of mind for a given persona in a given area of interest and what alternatives they’re using to solve that problem. For example, you might have this problem hypothesis: ‘Finding documentation is an important job for HVAC technicians’. You might also hypothesize about the alternatives they’re using: ‘Currently they Google for the documentation they need. This requires substantial time because it takes several tries to find the right document.’ Both of these statements are testable with the methods below in the How section.

Together, personas and problem scenarios help teams formulate innovation-friendly ideas and tests about who their customer is, how they’ll behave, and which jobs/desires are worth addressing.

When should I use it?

I would say that you should always be operating from a solid foundation of personas and problem scenarios. Without them, even something like Lean Startup-style concept testing will lack focus and testability.

Let’s say you’ve skipped over personas and you just want to test demand for dog washes in Centerville- that’s the next hypothesis area we’ll cover (Value Hypothesis). You create an ad on Google AdWords, something like ‘The fastest dog wash ever! Right here in Centerville’ and you show it to users who search for things like ‘dog wash Centerville’. You get a super low click-through rate. What then? You could try some new variations, but personally I’d want to make sure I’d done some interviews with my target persona to confirm the proposition solves a problem/desire they have and that I’m using the same kind of language they use to talk about dog washes. I’d also want to make sure they search for things like dog washes on Google.

Without a working set of personas and problem scenarios, the issues you’ll run into actually building and promoting product are likely to be even more serious. Additionally, I find that personas and problem scenarios are invaluable for collaborating across functional areas. For example, they’re a great tool for product managers to discuss and test what they’ve learned about product/market fit with marketers who are working to amplify that product/market fit.

How do I execute?

Basically, you draft, discover (via subject interviews), and apply your personas in a continuous cycle of learning about your customer. The process below starts with a ‘persona question’, which is basically a question or questions about how your customer will behave in a certain situation. These might pertain to adding functionality to a product or initiating a new marketing campaign. You execute and close with a tested answer.

Work you’ve done previously will need periodic refreshing, but you’ll find over time that your understanding of the customer and ability to execute on the basis of that understanding becomes more functional, improving your innovation capabilities. In terms of organizing to do this work, design sprints are a great tool- those are one-week iterations.

There’s a list of resources below with more on that and a few other items. Good luck and I think you’ll find these really help your work.


Where do I go next?

Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:

  1. Template for Persona Development and Subject Interviews
    This is a Google Doc template: Interview Guide. The prior section is a template for the personas themselves.
  2. Tutorial on Personas & Problem Scenarios
    For more depth on the use of personas (including examples) and the process above, check out: Personas Tutorial.
  3. Plan for a Design Sprint on Personas & Problem Scenarios
    If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one week format. Here’s a guide to doing that for persona and problem discovery: The Problem Discovery Sprint.
  4. Online Course on Using Personas with Agile
    For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Agile Meets Design Thinking.

Value Hypothesis


What is it?

There’s a BIG (yes, that big) difference between this area and the last. You can ask questions in the right way and test your persona and problem hypotheses. However, you can’t ask a customer whether they’d like a product you’re thinking of building. They’ll always say ‘yes’. For a famous design story/legend about this, see Story of the Yellow Walkman.

Given that you can’t ask whether someone’s going to buy your product (or use your feature) directly, what do you do? That’s what the Value Hypothesis is about: testing whether a customer is going to prefer your proposition over the alternatives they have for a given problem scenario (without actually building and marketing a full offering).

In the last section, we talked about offering dog washes in Centerville. A simplified starter version of a related Value Hypothesis might be: ‘If we offer Pedro the Professional a 10 minute dog wash, then he will buy it’. To actually test this, you’d probably want to decompose and detail it, but that’s the basic idea. For most Value Hypotheses, I recommend this format: If we [do something] for [certain persona or segment], then they will [respond in a certain way].

Once we have a Value Hypothesis (or hypotheses), we design an experiment to test it. Since we can’t just ask and we don’t want to invest the time and money to build a full product before we validate our hypothesis, we use a ‘minimum viable product’ or MVP.

You may be familiar with this concept from Lean Startup. The basic idea is that innovation is inherently risky- a well-run program has something like a 1 in 10 success rate with new products. Given this, how might you test whether a product is going to be successful without building it all the way out? This is what successful innovators do- and by dramatically cutting the cost of testing a new concept, they improve the economics of their innovation program.

When should I use it?

I highly recommend having a working view of your Persona Hypothesis & Problem Hypothesis before you approach this area (see above).

How do I execute?

The easiest way to think through testing your Value Hypothesis is tried and true: it’s the scientific method! Obviously, it’s a waste of time to test a bad idea (step 01). How do you get a good idea? If you guessed customer discovery and testing your persona and problem hypotheses, you hit the jackpot.

Once you’ve got a validated persona and a validated problem scenario that’s important to them, draft your Value Hypothesis (step 02). A great place to start is to design a value proposition relative to your persona and problem hypothesis:

A strong value proposition that’s worth testing is one that understands what it’s up against: the alternatives your customer (persona) is currently using to deliver on your target problem scenario. For example, with our dog wash, our persona is Pedro the Professional and the problem scenario is keeping his dog clean. From interviewing subjects, we know his top alternative is just washing his dog at home, but it’s messy and he never gets to it as often as he’d like.
Our value proposition is that we’re going to offer him a dog wash that’s convenient and affordable. What we’re hoping is that real people who fit that persona prefer our dog wash over washing at home. From there, we get to the Value Hypothesis you saw above: ‘If we offer Pedro the Professional a 10 minute dog wash, then he will buy it’.

Following this, we need to design an experiment (step 03) and run it (step 04). The basic idea is that there needs to be some exchange of value to test the customer’s actual interest in your proposition. That could be as simple as having them sign up for an email newsletter if they come to your website from out of the blue. However, if you’re standing over their shoulder, signing up for an email doesn’t count. There are a number of established patterns for doing this testing (MVP types) that you can learn more about through the materials in the next section.

Finally, you’re driving to what Eric Ries (author of The Lean Startup) calls the ‘pivot or persevere’ moment (step 05). Key to this is having definite thresholds for your experiments so you can specifically conclude whether you got a negative or a positive result. For example, let’s say you run some Google AdWords to test your Value Hypothesis- the basic hypothesis being that if a customer clicks through, then they have some amount of interest in your value proposition. When you design the experiment, you’d want to set a click-through rate that constitutes a fail- say <3%. Pro tip: As part of your experiment design, write up the slides or email you plan to use when you present your results- begin the whole thing with that end in mind.
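To make the pivot-or-persevere decision mechanical, the threshold check can be written down before the experiment runs. Here’s a minimal sketch in Python (not part of the original guide- the counts and the 3% threshold are illustrative):

```python
# Hypothetical pivot-or-persevere check for an AdWords-style experiment.
# Set the fail threshold *before* running the test, then apply it mechanically.

def click_through_rate(clicks: int, impressions: int) -> float:
    """Return CTR as a fraction (clicks / impressions)."""
    if impressions == 0:
        raise ValueError("no impressions recorded")
    return clicks / impressions

def pivot_or_persevere(clicks: int, impressions: int, fail_below: float = 0.03) -> str:
    """Compare observed CTR to the pre-committed threshold."""
    ctr = click_through_rate(clicks, impressions)
    return "persevere" if ctr >= fail_below else "pivot"

print(pivot_or_persevere(clicks=12, impressions=1000))  # 1.2% CTR -> pivot
print(pivot_or_persevere(clicks=45, impressions=1000))  # 4.5% CTR -> persevere
```

The point of committing the threshold in advance (in code, a spreadsheet, or the results email you drafted) is that it removes the temptation to rationalize a weak result after the fact.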

The resources below provide more details on how to start testing your Value Hypothesis.

Where do I go next?

Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:

  1. Template for Designing Experiments to Test Your Value Hypothesis
    This is a Google Doc template: Testing Your Hypothesis. The prior section is a template for laying out your hypotheses.
  2. Tutorial on Lean Startup
    For more depth on creating testable value propositions and designing experiments: Lean Startup Tutorial.
  3. Plan for a Design Sprint on Your Value Hypothesis
    If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one week format. Here’s a guide to doing that for testing your value hypothesis: The Motivation Sprint.
  4. Online Course on Testing with Agile
    For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Testing with Agile.

Usability Hypothesis


What is it?

Great news- this one is relatively easy to understand. You’re testing to see how well your customer can use a given interface element (or elements) to complete a given objective. While it does require some kind of working prototype (or working software), this is actually one of the easier types of testing to master.

Based on where you are in your project, you’ll design a set of appropriate tests to determine how well your customer is able to use a given interface to accomplish a given objective. In the early phases, the interface is ideally some kind of quick, low-fidelity prototype. In fact, many teams require that multiple divergent prototypes be tested during development of a given interface element. This is commonplace for teams at Google, as an example.

How about the objective? If agile user stories have been central to your development, you already have your objective: it’s the final clause of your various stories, ‘…so that I can [realize some reward/objective]’. If you haven’t been using agile user stories, I highly recommend them. Aside from keeping your ideas testable, they’re a great way to explore and detail the experience you want to provide to the user.

There’s a simple test design step to make sure you can actually sit down with a subject and test, but all of that really revolves around your user stories and prototypes.

When should I use it?

ABT: ‘Always Be Testing’ is a little slogan I like to use in class. Really, the key thing with this type of testing is to focus on the right thing at the right phase of development. The diagram below breaks this down into three generally-accepted (though not universal) phases:

In the Exploratory phase, parallel prototyping and testing in small batches is important. It’s much better to batch up a few users, test, revise your interface and plan based on what you learned, and then re-test than to just see the same thing over and over again. Most teams will use interactive prototypes of some sort (vs. working software)- this allows anyone to mock up an idea. The idea is to push yourself (and your team) to consider a few approaches before you start investing in one. You can see the team at HVAC in a Hurry doing this in Example A of the Prototyping Tutorial. Use comparables to make sure you’re reusing existing interface models that users already understand well.

In the Assessment phase, you pick an approach (or two) and articulate it into more of a fully scoped user experience. And you see how it goes. Here, testing in small batches is still important and you’re likely still using prototypes, though they may be somewhat higher fidelity/more detailed.

The Validation phase is probably what most people think about when they think about usability testing. Here you’re testing working software and basically making sure that what you think is usable really is. Here you might actually do stuff like time a user to see how long a task takes. Those benchmarks are useful for comparing against your analytics once you actually release.

You may go through multiple rounds of each of these before you release something to the public/your base. The idea is to test in each phase until you get a positive result that suggests you should move forward.
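In the Validation phase, those timing benchmarks can be reduced to a couple of numbers per task. Here’s a minimal sketch in Python (not from the original guide- the task, times, and 60-second benchmark are hypothetical):

```python
# Hypothetical summary of task-completion data from a validation-phase
# usability test: median time and completion rate vs. a pre-agreed benchmark.
from statistics import median

def summarize_task(times_seconds, completed, benchmark_seconds):
    """Return median time, completion rate, and whether the benchmark was met."""
    med = median(times_seconds)
    rate = sum(completed) / len(completed)
    return {
        "median_seconds": med,
        "completion_rate": rate,
        "meets_benchmark": med <= benchmark_seconds,
    }

# Five subjects attempt a task (e.g. 'create a quiz'); benchmark is 60 seconds.
result = summarize_task([42, 55, 48, 90, 61], [True, True, True, False, True], 60)
print(result)  # median 55s, 80% completion, benchmark met
```

These per-task numbers are exactly the benchmarks you’d later compare against your analytics once the feature is released.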

How do I execute?

First and foremost, just go try it out! I recommend starting with strong user stories. Even if you’ve already developed something and released it, they’re a good way to go back and be explicit about what you’re trying to achieve for the user- and that will naturally help good testing happen.

Then draft your test plan- see item #1 in the section below. Finally, draft a prototype to test with (or use your working software). Many individuals like to start with prototyping because it feels more tangible and we all like that- but really, the intent you establish with the stories is what should drive the design, not the other way around.

Where do I go next?

Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:

  1. Template for a Usability Test Plan
    This is a Google Doc template: Usability Test Plan Template.
  2. Tutorial on Running Usability Testing
    For more depth on design and running usability testing: Your Usability Test Plan.
  3. Plan for a Usability Sprint 
    If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one week format. Here’s a guide to doing that for usability testing: The Usability Sprint.
  4. Online Course on Testing with Agile
    For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Testing with Agile.

Growth Hypothesis

These hypotheses only make sense after you’ve got fundamental validation on your core Value Hypothesis (see above). You can’t market/growth-hack your way around a lack of product/market fit.

Once you’ve got that basic product/market fit, it’s time to start observing, hypothesizing, and experimenting against some kind of customer acquisition funnel. I like the AIDAOR model (attention-interest-desire-action-onboarding-retention):

Why is that funnel so important? Because if you don’t break down the question/problem, you’ll almost certainly end up mired in confusion. Your funnel is your anchor point for both qualitative and quantitative data. I recommend starting with qualitative ideas since that will help you with the ‘why?’ and drive better hypotheses. One technique I like for that is storyboarding. You might have multiple takes on this, which is fine/great. Here’s an example from Enable Quiz, a fictional company that makes online quizzes that HR managers can use to screen engineering candidates:


For more on doing this, see the corresponding section of the storyboarding tutorial here- Storyboarding for Growth.
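Once you’re instrumenting against the funnel, it boils down to stage counts and stage-to-stage conversion rates. Here’s a minimal sketch in Python (not from the original guide- the stage counts are made up):

```python
# Hypothetical AIDAOR-style funnel: stage counts and conversion from the
# previous stage. Stage names follow attention-interest-desire-action-
# onboarding-retention; all numbers are illustrative.

FUNNEL = [
    ("attention", 10_000),   # e.g. ad impressions
    ("interest", 400),       # clicked through
    ("desire", 120),         # started sign-up
    ("action", 60),          # paid / activated
    ("onboarding", 45),      # completed setup
    ("retention", 30),       # still active after 30 days
]

def stage_conversions(funnel):
    """Yield (stage, count, conversion-from-previous-stage) triples."""
    prev = None
    for stage, count in funnel:
        rate = None if prev is None else count / prev
        yield stage, count, rate
        prev = count

for stage, count, rate in stage_conversions(FUNNEL):
    shown = "-" if rate is None else f"{rate:.1%}"
    print(f"{stage:>10}: {count:>6}  ({shown})")
```

Each conversion rate is a natural place to hang a hypothesis and an experiment threshold, which is exactly where the next step picks up.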

Now it’s time to form hypotheses. An example for attention might be something like:

‘If we deliver [a certain Google AdWords ad] against [a certain set of keywords], we’ll see a click-through rate of [x%].’ You might have a similar hypothesis later in the funnel around pricing or certain offers. You might have others for content marketing. One really important thing is to take an integrated view of your marketing mix/brand experience. My favorite tool for that is the Growth Hacking Canvas:


It’s a lot of stuff- I know! But we’re moving toward a pretty firm realization that just as siloed product development doesn’t work (hence agile), siloed marketing doesn’t work either. For more on using the Canvas, check out this tutorial: Growth Hacking Canvas.

Growth is hard and in my experience, everyone thinks they’re the only ones that don’t get it and everyone else is killing it. It’s not true. Just take a disciplined approach, keep experimenting, keep observing, and you’ll get there.

I also recommend a quick think on your ‘engines of growth’ (a term coined by Eric Ries). The proposition here is that there are three principal engines of growth, and a new venture should know which one is most important:

Viral– customers/users tell each other about the offer. Crucial here is some measurement of the ‘viral coefficient’, the propensity of one customer/user to share the offer with others in some fashion. In this case, your ability to drive sharing/word-of-mouth is crucial.

Paid– you have a certain cost of customer acquisition based on the use of marketing and/or sales resources. Here, ascertaining the cost of acquisition and the value of a customer are key to understanding the validity of your unit economics.

Sticky– the lifetime value of customers is very high because the relationship will deepen over time. Here, testing your ability to retain and maximize the lifetime value of a customer relationship is key.
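Each engine has a characteristic back-of-envelope metric. As a minimal sketch (not from the original guide- the formulas are standard but the inputs and the 0.8 retention threshold are hypothetical):

```python
# Back-of-envelope checks for the three engines of growth.

def viral_coefficient(invites_per_user: float, conversion_rate: float) -> float:
    """Viral engine: a coefficient above 1.0 means each user brings in
    more than one new user, i.e. self-sustaining growth."""
    return invites_per_user * conversion_rate

def ltv_to_cac(lifetime_value: float, acquisition_cost: float) -> float:
    """Paid engine: customer lifetime value should comfortably exceed
    the cost of acquiring that customer."""
    return lifetime_value / acquisition_cost

def retention_ok(retained: int, cohort_size: int, threshold: float = 0.8) -> bool:
    """Sticky engine: does a cohort retain above a pre-set threshold?"""
    return (retained / cohort_size) >= threshold

print(viral_coefficient(invites_per_user=3.0, conversion_rate=0.25))  # 0.75, sub-viral
print(ltv_to_cac(lifetime_value=900.0, acquisition_cost=300.0))       # 3.0
print(retention_ok(retained=85, cohort_size=100))                     # True
```

Knowing which of these three numbers is your primary engine tells you where experimentation effort should concentrate.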

Appendix: Creating Effective Screeners

It’s not hard to spend 45 minutes with a subject only to realize they’re really not the subject you’re looking for. Particularly if you dive into detail early, they may be informative enough to keep you going even if you’re on a road to nowhere. This applies to all the hypothesis types and research techniques above.

For this, we create ‘screeners’. Basically, the screener is a simple, factual question or set of questions you can ask a potential subject to be sure they’re relevant. It shouldn’t take much.

For example, let’s say we’re interested in problem scenarios around some aspect of network management, with the idea of possibly building an application for network engineers to manage transport elements like routers and switches. We have a persona(s) for the end user that we want to develop and validate. A good screener would be: ‘How many times last week did you log into a switch or router?’. Let’s say we’re building software for plumbers. A good screener would be: ‘How many plumbing jobs were you out on last week?’.
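Applied to a recruiting list, a screener is just a factual threshold. Here’s a minimal sketch in Python (not from the original guide- the minimum of 3 logins and the candidate names are hypothetical):

```python
# Hypothetical screener for the network-engineer persona: keep only subjects
# whose factual answer shows they actually do the work we're studying.

def passes_screener(router_logins_last_week: int, min_logins: int = 3) -> bool:
    """Screener question: 'How many times last week did you log into a
    switch or router?' The minimum is an illustrative threshold."""
    return router_logins_last_week >= min_logins

# Candidate subjects and their answers to the screener question.
candidates = {"Ana": 7, "Bo": 0, "Cy": 3}
qualified = [name for name, logins in candidates.items()
             if passes_screener(logins)]
print(qualified)  # Ana and Cy qualify; Bo does not
```

The design choice worth noting: the question asks for a count of recent, concrete behavior rather than a self-assessment, which is much harder for a well-meaning but irrelevant subject to inflate.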

The screener is more important than you might guess at first. We have a natural bias to go with subjects that are convenient & comfortable, which can dramatically limit actionable learning. Don’t blame yourself, but do screen yourself!

You’ll find both the Enable Quiz example usability test plan as well as another that tests automation platforms for social media (Hootsuite, Buffer, etc.) in the References at the end of this page.


  • Sherman

    Great article…though I admittedly only read the first few parts of it and skimmed the rest (it only took me a few paragraphs to know you know what you’re talking about; Pocket for the rest).

    Regarding this piece:
    “3. Find Subjects

    Are you finding that getting to the right interview subjects is difficult, messy, and time-consuming? Good- you’re probably doing it right.
    If I knew an easier way, dear reader, I would tell you straight away. The reality is that soliciting online and even using high-priced agencies tends to deliver atypical subjects, semi-professional participants that are not a reliable or useful source of actionable learning.”

    Dear author, we do have an easier (but still not perfect) way –
    Yes, the interviewees are paid, introducing some bias, but it’s been a useful tool for scaling customer discovery efforts. We’ve had over 8000 recorded minutes of interviews as of writing.
    I know the tool is useful because my co-founder and I built it out of necessity and use it frequently.

    I just filled out your contact form; hopefully we can talk more.

  • Jaffy

    This IS what I was searching for. I saved it in Evernote, because this needs to exist in the future. Thanks from a random Brazilian person.

    • Hi Jaffy- Thanks for writing, and I’m so glad you liked it!

  • j_mes

    I’d love to know when you’re planning to release the Customer Creation Hypothesis, as it’s what I’m struggling with most. Do you have any other resources you could recommend in the meantime?

    Thanks again for your content Alex, I can’t understand why this isn’t more popular — it’s incredible!

    • Hi J_Mes- Thanks! I’m so glad you like it. Customer Creation Hypothesis: I made a small update and more in Feb., but a lot of the action is here:

      Why isn’t this more popular? What a great question! Part of it is by choice- there’s design reality and design theater. These tutorials are detailed because they’re for serious practitioners. That’s not appealing to a huge audience- a lot of people prefer simpler answers. I’m kind of reminded of this Tweet by Kent Beck about TDD:

      Anyway, thanks for writing and I’ll have more material up soon.

      • j_mes

        Thanks for the response Alex. Looking forward to what more you have to share!

  • Alex

    Hi Alex,
    Thank you for your materials.
    Please check the first sentence in Preparation subsection of Value Hypothesis section:
    “I highly recommend having a working view of your Persona Hypothesis & Persona Hypothesis before you approach this area (see above)”
    It seems the second word ‘Persona’ should be replaced with ‘Problem’.