Your Lean Startup

What did you do today to guide your venture (or project) to the outcome you want?

How will you know if it worked?

Does your team have a visible plan that easily allows them to prioritize completing task A vs. task B, C, or D?

Do they understand why?

Lean Startup is about delivering quality answers to these questions, questions we should all be asking ourselves. After reading this tutorial and engaging in the recommended practice (see section How Do I Get Started?) you will be able to:

  1. Explain when & where Lean Startup is applicable and why it’s useful
  2. Take general ideas and structure them into testable hypotheses
  3. Design and run optimal (as quick and as inexpensive as possible) Lean Startup experiments to test ideas with a minimum of waste
  4. Instrument observation into your project’s key outcomes so you and your team can objectively decide what’s working and what isn’t

What’s ‘Lean Startup’?

The Lean Startup book and movement grew out of Eric Ries’ work on applying the principles of the lean manufacturing movement to the creation of startups. The goal of lean is to eliminate waste, ‘muda’ in Japanese. In this context, a startup is any venture that hasn’t yet validated a ‘product/market fit’, meaning a proposition it can reliably sell to a particular type of customer.

Lean emphasizes use of the scientific method (hypothesis testing & observational learning) and the efficiency of ‘small batches’: doing things incrementally, on a success basis.

The unique importance of all this to any innovative company (which is pretty much any growth company) is non-obvious and also a breakthrough. Pretty much everything you currently learn about ‘business’ was created for operating a factory that produces commodity widgets. For such a business, 5-year plans are great, the assumption of perfect information is relatively valid, and your conventional MBA will serve you well. For any innovation-based business, these techniques are grossly inadequate and will generate massive waste.

Example Lean Startup Assumptions
Template Lean Startup Assumptions

Lean in Action


Two core practices underlie lean: 1) use of the scientific method and 2) use of small batches. Science has brought us many wonderful things. You can see its process to the right. Particularly when dealing with the unknown (aka innovation), it’s good to be explicit, hands-on, and data driven about whether your innovative new idea is a money maker or an irrelevant novelty.

The use of small batches gives you more shots at a successful outcome, which is particularly valuable when you’re in a high-risk, high-uncertainty environment. A great example from Eric Ries’ book is the envelope-folding experiment: if you had to stuff 100 envelopes with letters, how would you do it? Would you fold all the sheets of paper and then stuff the envelopes? Or would you fold one sheet of paper, stuff one envelope, and repeat? It turns out that doing them one by one is vastly more efficient, and that’s just on an operational basis. If you don’t actually know whether the letters will fit or whether anyone wants them (more analogous to a startup), you’re obviously much better off with the one-by-one approach.
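To make the small-batch intuition concrete, here’s a back-of-envelope sketch. The 20% flaw probability is a made-up assumption for illustration, not a number from the book:

```python
# Illustrative arithmetic only; the 20% flaw probability is an invented
# assumption, not a figure from The Lean Startup.

def expected_waste(batch_size: int, p_flaw: float) -> float:
    """Expected wasted units if a flaw (e.g. letters that don't fit the
    envelopes) is only discovered after finishing the current batch."""
    return p_flaw * batch_size

# Fold all 100 sheets before stuffing the first envelope:
print(expected_waste(100, 0.2))  # -> 20.0 sheets wasted, in expectation

# Fold one, stuff one, learn after the very first envelope:
print(expected_waste(1, 0.2))    # -> 0.2 sheets wasted, in expectation
```

The point of the sketch: the smaller the batch, the less work you have in flight when you discover something is wrong.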

So, how do you do it? In six simple (in principle) steps!

  1. Start with a strong idea, one where you’ve gone out and done strong discovery, packaged into testable personas and problem scenarios.
  2. Structure your idea(s) in a testable format (as hypotheses).
  3. Figure out how you’ll prove or disprove these hypotheses with a minimum of time and effort. Much of the time, you can do this without building any actual product.
  4. Get focused on testing your hypotheses. Don’t worry about anything else; none of it is likely to matter unless you have an idea that proves valuable to your customer (or user).
  5. Conclude and decide: did you prove out this idea, and is it time to throw more resources at it? Or do you need to reformulate and re-test? There’s no shame in the second outcome; the core virtue of Lean Startup is recognizing that startups are high risk, so it makes sense to avoid waste and give yourself multiple shots at a winner.
  6. Revise or scale/persevere. If you’re pivoting and revising, the key is to make sure you have a strong foundation in customer discovery (see #1) so you can pivot in a smart way, based on your understanding of the customer/user.

The six sections below describe these steps in more detail. While the focus is on lean and Eric Ries’ work on Lean Startup, a few other techniques (like design thinking and customer personification) are important complements and I’ll reference those as well.

01 Developing High Quality, Testable Ideas

This is the creation story that sells about the founding of new ventures: young founders dream up a brilliant idea, code it, and the next morning are acquired for 1 billion dollars! There’s nothing wrong with a little harmless fantasy, but the reality is that few startups (even successful ones) actually take this course. For most, it’s a marathon of trying things and seeing what works. Did you know Rovio was on the verge of bankruptcy when they released Angry Birds? And that they paid the bills by making games for other companies? I’m not saying doing a startup isn’t fun. It is. I can’t imagine anything better than working with a great team on learning how to build something that matters. But following a fake, media-generated script will probably lead to stress and disappointment, and that’s not fun. Let’s talk about how to create strong, actionable ideas.

Your fundamental job is to build empathy for your customer (users and buyers). If you could correlate ‘customer empathy acquired’ against venture success I bet you’d see a very tight correlation. Applying empathy to directed creativity is what the popular rubric of ‘design thinking’ is about. You can learn how to do all this in the customer discovery and personas tutorials.

A good way to make sure you have your bases covered with personas is the Think-See-Feel-Do checklist: what does your persona think, see, feel, and do in your area of interest? (Again, more on that in the tutorial above.) Following that, you’ll want to frame the value propositions you plan to deliver to the customer in terms of problem scenarios and alternatives. Most of us start with the spark of an idea, ‘Hey, wouldn’t it be cool if…’, and that’s fine. But your understanding of the customer will be much more relevant, accurate, actionable, and testable if you can relate the propositions to customer problem scenarios and current alternatives.

02 Focusing Testable Hypotheses

If you organize your customer discovery and resulting ideas as we reviewed above, it summarizes naturally into what I call a ‘product hypothesis’.


A certain [Persona(s)] exists…

…and they have certain [Problem Scenario(s)]…

…where they’re currently using certain [Alternatives]…

…and I have a [Value Proposition(s)] that’s better enough than the alternatives that the persona will buy/use my product.

Most students and advisees I work with find this a useful jumping-off point for formulating hypotheses, since it helps keep their work on personas and problem scenarios linked to their ideas about value and their experiments on buyer motivation. The table below organizes key hypothesis areas into five chunks. (They’re in the form of questions, not hypotheses, since I think that’s an easier starting point.)

Persona Hypothesis: Does this persona exist? Can you name or find 5-10 examples? Can you identify them out in the real world?

Do you understand them really well?

Do you understand how they relate to your area of interest? Think? See? Feel? Do?

Problem Hypothesis: Do the problems you’re solving really exist? Is it more of a ‘job to be done’ or a need or desire? How important is the problem or problems?

How is the customer solving them now? With what alternatives?

Value (Motivation) Hypothesis: How much better than the best alternative is your product at delivering on the problem? How obvious is that to the customer? How will you test that without just asking ‘Do you want this?’ (because that doesn’t work)?

The areas above are progressive, meaning that you should generally move through them sequentially and that if you’re stuck on one you’ll probably have trouble downstream.

Once you have a working view of where you are and what’s important in those areas, you’ll want to start breaking them down into individual, testable hypotheses. Let’s pause a minute: this may feel like a lot of stuff, a lot of work, but remember, this is a systematic way to work through basically everything about your venture. Back to those hypotheses, I like the following format:


If we [do something] for [persona], they will [respond in a certain way]. 

That format anchors to the key elements of your work and establishes causality. You’ll also want to consider which hypotheses, in fact, need proving and how you’ll do that. The only exception is that sometimes the Persona and Problem hypotheses may be formulated a little differently: as statements about who the customer is and how well you understand them.

The following table has some examples for a fictional company called ‘Enable Quiz’ that provides lightweight quizzes to help managers screen the core skill sets of candidates for technical jobs (for more on them see the Venture Concepts page).

Priority 0 - PROBLEM: If we ask non-leading questions about what’s problematic for HR managers in the hiring/recruiting process, we’ll consistently hear that screening technical hires for skill sets is difficult and they wish they could do it more effectively.
Needs proving? Yes. Sure, the problem probably exists to some degree, but how important is it? To whom? What are they doing now?
Experimentation: Discovery interviews with HR managers at companies that hire a lot of engineers/technical talent.

Priority 0 - VALUE: If Enable Quiz offers an easy, affordable, lightweight technical quizzing solution, we can acquire, retain, and monetize customers.
Needs proving? Yes, definitely.
Experimentation: 1. Execute a manual ‘concierge’ experiment on the quiz process. 2. Make some early pre-release sales. 3. Test pre-release promotion and sign-ups for beta programs by topic area (Linux, Ruby, .NET, etc.).


Next you should determine whether or not each hypothesis needs to be proven and why. For example, with Enable Quiz, one Persona Hypothesis is that there are HR managers and functional (hiring) managers who hire technical candidates and that the team can identify those individuals. This is important but doesn’t need proving.

For example hypotheses, see Example 1 below.
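If you like keeping this backlog somewhere more structured than a document, here’s a minimal sketch of a hypothesis backlog in code. The Enable Quiz wording comes from the table above; the class and field names are my own invention, not part of any standard Lean Startup tooling:

```python
# A hypothetical hypothesis-backlog structure; names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Hypothesis:
    priority: int                   # 0 = test first
    area: str                       # PERSONA, PROBLEM, VALUE, ...
    statement: str                  # "If we [do X] for [persona], they will [Y]"
    needs_proving: bool
    experiments: List[str] = field(default_factory=list)
    result: Optional[bool] = None   # None = untested; True/False once concluded

backlog = [
    Hypothesis(0, "PROBLEM",
               "If we ask non-leading questions about hiring, HR managers will "
               "consistently say screening technical hires is difficult.",
               needs_proving=True,
               experiments=["Discovery interviews with HR managers"]),
    Hypothesis(0, "VALUE",
               "If Enable Quiz offers a lightweight technical quizzing solution, "
               "we can acquire, retain, and monetize customers.",
               needs_proving=True,
               experiments=["Concierge experiment", "Early pre-release sales"]),
]

# Work the highest-priority hypotheses that still need proving, in order.
to_test = sorted((h for h in backlog if h.needs_proving and h.result is None),
                 key=lambda h: h.priority)
```

The `result` field is the whole point: every hypothesis should eventually graduate from `None` to an explicit True or False.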

03 Designing Effective Experiments

Experimentation is where the rubber hits the road in the practice of Lean Startup. The keys to success are 1) keeping the experiments focused on obtaining a true/false result for one or more of your key hypotheses and 2) being creative about designing the fastest, cheapest experiment that delivers on #1. Not all experiments have a quantitative output; this isn’t physics, and it’s valid to review a set of qualitative outputs (like customer discovery interviews) and make a judgement call on how/whether the results prove or disprove a hypothesis. Also, not all product tests should require building product. We’ll go through some quick examples below and more in the MVP Case Studies section, but many of the best Lean Startup success stories didn’t require a single line of code.

Persona Hypotheses

If you don’t know your customer well, it’s time to get to know them (not to mention validate that they exist). When I’m considering an idea, I brainstorm a list of the key personas and then make sure I can think of 5-10 real-world examples (who are obviously then good candidates for your first set of discovery interviews). Then you want to make sure you understand them well: what kind of shoes would they wear? Your primary vehicle for validating these hypotheses is customer discovery interviews.

You also want to make sure you understand how they think about your area of interest (using think-see-feel-do). You’re passionate about your idea, but be sure that doesn’t blind or bias you from seeing these people as they really are. Particularly at this early stage, it’s important to be focused on discovering what’s really important to this customer.

For example, Enable Quiz wants to sell lightweight technical quizzes to companies that hire engineers. They think the parties concerned are the HR manager and the functional manager (the manager in charge of the hiring). They’re going to go learn about these personas and what they think about the general area of technical recruiting and skills management.

When I teach Venture Design classes, I usually have students try to write up their personas as a first step; I always find it’s a good way for them to find out how little they really understand. For your persona hypothesis, this write-up takes the place of the type of hypothesis table you saw above. You’re looking to create characters for your narrative at this point, not collect statistically significant data, so I recommend against a questionnaire. That said, I do recommend prepping an interview guide for focus, consistency, and as a kind of checklist. Quality learning in this area will not only help you stay on the mark; if you later pivot, it will help you make vastly smarter pivots, steered by the empathy and understanding you have for your customer.


Problem Hypotheses

Here you’re looking for opportunities: problems you can solve, needs and habits you can fulfill better than current alternatives. There are no new tasks in the enterprise; there are no new consumer behaviors. Do we do things differently than we did 20 or 50 years ago? Yes. That said, everything ultimately ties back to an existing need or behavior. Your job is to identify the problems where you have an opportunity to over-deliver against current alternatives. This one’s also executed primarily by way of customer discovery interviews.

For example, Enable Quiz generally wants to learn about technical hiring and skills management. They’ve hypothesized two problem scenarios-

1: Screening job candidates for technical skills is hard and the lack of quality in that screening leads to bad outcomes

2: Getting a view of staff skill sets is difficult, and that difficulty impairs team development and operation
But they’re not presupposing they’re right about this; otherwise, what’s the point of doing these interviews? Their job is to get the HR managers and functional managers to speak freely about what their job is like in this area and what, if anything, they wish was better. A good practice during interviews is to start the questions around problem scenarios with questions like:

“Tell me about the last time you [filled an engineering position].”

“What’s hard about [filling an engineering position]?”

“What are the top 3 things you wish were better about [filling an engineering position]?”

Don’t lead the witness. These questions are sequenced very purposefully: they progress toward an increasing degree of prompting. They would not want to ask a question like ‘Is it hard to screen for technical skill sets?’, at least not until the very end, because even a ‘yes’ provides very little validation. And in many cases you may actually hear ‘no’ for one reason or another, where if the subject really thought about the whole problem, you might get a more useful or actionable answer. What provides a lot of validation is if they ask the first question above and consistently hear things like:

– interviewing takes up lots of time

– you never really know what a candidate knows until they’re on the job

– I try to quiz them some in the interview but I don’t want to be a jerk and I don’t have all day

– I wish I could do more to screen candidates for the functional manager (from the HR manager)

If they consistently hear responses like that, they can conclude that, yes, they’re on to something.

Once you have a good sense of a few focal problem scenarios, work to understand the customer’s alternatives: how they’re solving the problem today. This will be an important counterweight to our next area, the value hypothesis. For the folks at Enable Quiz, that will be things like the HR manager’s process for checking references, interview guides for job candidates, application forms they use, and even job descriptions. Let’s say we were building an app for parents to distribute allowances. If we find parents keep a list of completed chores up on the fridge to figure out how much allowance to pay, we’d want to snap a photo of that.

Strong presentation of a problem scenario is blood in the water for you; if the current alternatives were really good, there wouldn’t be a strong presentation. But some problems are just really hard, and you need to make sure you understand what a superior (enough) delivery looks like. Also, you need to make sure the problem/need happens often enough. I met a serial entrepreneur who had written a beautiful, functional app for finding movie times. Users loved it. But it didn’t add up commercially because pretty much no one watches enough movies to care about the app enough to make a good business.


Key outputs from this step:

  • Interview guide prep and interviews, per above
  • Validated problem scenarios, including an explanation of current alternatives

Value Hypotheses

Armed with real personas and validated problems, now you have to validate whether what you’d create is better enough than the alternatives to make a sale. Sadly, the way to do this is definitely not asking the customer whether they would hypothetically buy your product in the future.

That information is less than worthless because it creates a false validation that we desperately crave for our idea. When it comes time to actually go out and sell the thing, you may well encounter an echoing silence. I say ‘may well’ because just about everyone will say ‘yes’ to a hypothetical sale; they don’t want you to feel bad, and even more so they don’t want to argue with you. See the Yellow Walkman Story for a great example.


If you can’t ask customers, then what are you supposed to do? Just build something and hope for the best?

No! There is a better way: the Minimum Viable Product (MVP). The naming is kind of unfortunate because the point of an MVP is to avoid building an actual product. The idea with the MVP is that it’s the minimum ‘thing’ you can use to test your value hypothesis. Ideally, it’s a fake product of some sort or ‘product proxy’, as I like to call it.

Why? Well, as I mentioned in the introduction, the whole point of lean is to avoid waste. Building something that no one wants (which happens all the time), is exactly the kind of waste Lean Startup seeks to avoid.

An MVP is not the same thing as the 1.0/first version of your product. In fact, this is one question I pose at the beginning of workshops: What’s the difference between an MVP and a 1.0? The basic difference is that the MVP is a vehicle to test your value hypothesis and the 1.0 is a vehicle to execute on and scale a validated (positively tested) value hypothesis.

Here are three popular MVP types/patterns:

1. The Concierge MVP

The idea here is to hand-create the experience you think your customer wants and observe both the process and the results. For example, Enable Quiz might find a few HR managers that have open positions and hand-create quizzes they can use to screen the skills of job candidates, grading the quizzes for these HR managers by hand (vs. building software to do this).

I like this MVP a lot because it delivers the most observation. In the early stages of a new project, that’s likely to be what you need. With the Sales MVP (see below), teams often exit the experiment saying “No one wanted it, but why not?”.  With the Concierge MVP, you’re more likely to exit with a sentiment like “This is more nuanced than we thought, but here’s clearly what we need to learn now.”

For anything that’s business-to-business (B2B), consulting is often a terrific concierge vehicle: solve the customer’s problem by hand, then identify what can be standardized and automated, and build software for it. I love this pattern so much I gave a talk for the Lean Startup Circle about it: ‘B2B Hacks- Getting From Consulting to Scalable Products‘.

2. The Wizard of Oz MVP

In the WoO, you hand-create the product interaction(s) you think the customer wants and observe the results. For example, a demo video (based on software that isn’t really working) is a classic WoO. Having a customer interact with a product front end while someone manually executes whatever the back end would be doing is another classic WoO.

Robotics is one space where the fake back end WoO pattern is popular. I was just interviewing a product manager from iRobot for The Interdisciplinarian. They make the (awesome) robot vacuum ‘Roomba’ and often test new feature ideas by having users interact with their robots while a human operator executes the interaction they’d implement in code if the feature looks like a winner.

However, it’s worth pointing out that the test above is valid more for the Usability Hypothesis (can users use it?) than the Value Hypothesis (will they buy it?). Wizard of Oz is a useful vehicle for testing your Value Hypothesis, but only when it’s paired with a Sales MVP to test for willingness to buy. For example, Dropbox famously created a WoO demo they put online, but they paired it with a call to sign up for updates, which I would call a Sales MVP. Even though they’re not asking for money, there’s an exchange of value (the visitor is giving up their privacy/paying you with some of their attention).

Sorry for all this explanation, but I felt like we needed to have this talk now. I see so, so many teams over-investing in excellent usability for a product that it later turns out no one is motivated to buy or use.

Usability matters, but I would say motivation matters more at the early stages of a project. My favorite tool for thinking about the difference, while considering the relationship between the two, is the Fogg Curve- see below. On the y-axis is motivation: how much does the user/buyer want this? On the x-axis is ability/usability: how easy is it for them to buy it/use it?


If you consider a point on the upper-left of this curve, that’s a product that the user is very motivated to use and will use even if their ability is low (the usability is low). If you consider a point on the bottom-right of the curve, that’s a product that the user is not very motivated to use but would use if it’s super-duper easy.

You need to execute well on both, but with a new project, nailing motivation is key. Even though it doesn’t give you a lot of observation, the Sales MVP is the simplest way to test that.

3. The Sales MVP

In this MVP, you try to sell something. This might not involve paying money: getting sign-ups on a landing page is a classic Sales MVP. For example, I mentioned above that Dropbox famously paired a Wizard of Oz demo with a call for sign-ups to hear about the launch of the product.

The Sales MVP does need to involve some voluntary exchange of value. For example, if you’ve just spent an hour with someone interviewing them and then ask them if they want to sign up for your product updates while you watch over their shoulder, that’s not a valid test.

Taking pre-orders or pledges through a platform like Kickstarter or IndieGogo is another classic Sales MVP pattern.

Let’s loop back to our example company Enable Quiz and consider how they might use these three patterns to test their Core Value Hypothesis:

If Enable Quiz offers an easy, affordable, lightweight technical quizzing solution, we can acquire, retain, and monetize customers.

Below are some ideas. Note: none of these requires writing a single line of software, and that’s good because it will reduce the time and money required to decide whether they’re on to something or whether they should reformulate and pursue something else.

  1. Concierge MVP
    They could manually create technical quizzes for open positions that potential customers are trying to fill. They’d give the quiz to the HR manager and observe whether they actually use it, whether they get a better outcome with the hiring process, and whether they’re dying to know when the product comes out for real. Should Enable Quiz charge for this? That has pros and cons, but mostly pros: 1) getting even a token payment validates real demand (remember the spiel about ‘tangible actions’?) and 2) customers are notoriously flaky about following through with free trials, since they feel they have little at stake. Generally, I recommend erring on the side of trying to get paid if you’re providing something substantial. If it doesn’t work or it’s slowing things down, in lean fashion you can pivot and go with an unpaid approach.
  2. Wizard of Oz MVP
    They could fake up a demo for the service and put it on a site/landing page. If they want to test motivation relative to their Value Hypothesis, they’re probably best off pairing this video with a call to action for sign-ups to an email list about product updates.
  3. Sales MVP
    They could simply pre-sell the actual service (at a discount for ordering early). Even if the order/contract the customer signs isn’t strictly binding, that still passes the tangible-action test. Companies don’t like to sign things if they don’t have to; it creates the potential for a later nuisance, kind of like when an individual signs up for your email newsletter. Enable Quiz would need to create some kind of incentive for signing up early, which could be deep discounts or (better) free personal support. Note: this doesn’t need to be exclusive with the first option. Enable Quiz could experiment with both approaches across different representative companies. Another Sales MVP would be to run an experiment where they post various versions of the proposition via Google AdWords, linking to landing pages where the call to action is to sign up for a newsletter on product updates. Side reminder: this is why it’s so important to take note of the specific language customers use to talk about their problem; using natural language to connect with relevant desires is the best route to economical success on Google. This test has the advantage of also testing a relevant customer creation technique, which brings us to our next area, the Customer Creation Hypothesis.
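When a Sales MVP like the AdWords experiment produces numbers, it helps to be honest about whether the sample is big enough to call a winner. Here’s a hedged sketch; the visitor and sign-up counts (and the proposition wordings) are invented for illustration, and the margin is the standard ~95% normal approximation for a proportion:

```python
# Hypothetical Sales MVP scoring; all counts and variant names invented.
import math

def signup_rate(signups: int, visitors: int) -> float:
    return signups / visitors

def rough_margin(signups: int, visitors: int) -> float:
    """Approximate 95% margin of error on the observed sign-up rate."""
    p = signups / visitors
    return 1.96 * math.sqrt(p * (1 - p) / visitors)

# Two hypothetical AdWords proposition variants, 400 visitors each:
a_rate = signup_rate(12, 400)   # "Screen technical hires faster" -> 3%
b_rate = signup_rate(40, 400)   # "Stop bad engineering hires"    -> 10%

# Only call a winner when the gap exceeds the combined uncertainty;
# otherwise, keep the experiment running.
decidable = abs(b_rate - a_rate) > rough_margin(12, 400) + rough_margin(40, 400)
```

With a gap this large the result is decidable; with a 4.5% vs. 8.5% split on the same traffic it wouldn’t be, which is exactly the signal to keep collecting data rather than declare victory.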

Here’s a summary of the MVP types for your viewing convenience:

The Wizard of Oz MVP: Create a realistic demo recording/rendering of your product, post it online, drive some traffic to it, and see how you do on sign-ups and sharing. This is a pattern DropBox initially used.

The Google AdWords MVP: A popular pattern for both new products and features: see if you can generate click-throughs and sign-ups from a Google AdWords campaign. This is especially useful if your engine of growth has certain assumptions about paid CPA (cost per acquisition).

The Pre-Sales MVP: There’s really nothing as definitive as getting customers to pre-pay (or even sign a letter of intent; it’s still an exchange of value) for your product, assuming the price point is viable. Make sure your pitch and the transaction are clear.

The Concierge MVP (Strong Form): Have the customer submit inputs on the front end while you manually execute whatever your product would do on the back end. We did this at a photo-social startup I was advising: rather than creating an app to do something (hypothetically) exciting with a set of photos, I challenged the team to execute the hypothetically exciting action by hand. If the output was popular on social media, that’s a validation signal. If not, it isn’t. See also the Howe & Associates example in the case studies.

The Concierge MVP (Consulting): Using consulting as a prelude to (hypothetically) building a piece of software is a great way to discover and validate all your hypothesis areas. Leonid Systems used consulting as a vehicle to solve customer problems while evaluating problem areas that looked ripe for automation and standardization with software.
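Since the AdWords pattern hinges on CPA assumptions, here’s the back-of-envelope math. The spend and the assumed customer value are invented figures, not benchmarks:

```python
# Hypothetical CPA check for a Google AdWords MVP; all figures invented.

def cost_per_acquisition(ad_spend: float, signups: int) -> float:
    return ad_spend / signups

cpa = cost_per_acquisition(250.0, 20)   # $250 of clicks, 20 sign-ups -> $12.50

# Your engine-of-growth hypothesis might assume a customer is worth $40;
# the experiment supports it only if acquisition cost leaves room for margin.
assumed_customer_value = 40.0
hypothesis_holds = cpa < assumed_customer_value
```

The discipline is the point: write down the CPA you need before you run the campaign, so the result is a true/false answer rather than a rationalization.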



Customer Creation Hypotheses

Once you’ve got an identifiable customer that reliably/repeatably buys into your proposition, you have the nucleus of the pivotal ‘product/market fit’. The last frontier is to make sure you can multiply that product/market fit on an economical, repeatable basis. There are certainly dozens of products out there that I’d buy if I were fully informed about them. But that’s never going to happen. We live in a world of imperfect information, and rising above the noise floor is a real challenge. Your primary hypothesis might be something like ‘We can acquire customers in an economical fashion.’

You’ll want to break that hypothesis down into more manageable pieces; otherwise you risk freaking out and just asking, ‘This product rules. Where the #@$# are all my customers!?’ For this, I like the ‘AIDA(OR)’ framework: Attention, Interest, Desire, Action. Since this framework is over 100 years old, I added Onboarding and Retention, important additional steps for the kind of products we sell today.

I really like storyboarding as a way to walk through the AIDAOR breakdown and keep it connected to your work on customer discovery (step 01). Here’s an example from Enable Quiz (without background notes but see below):


Here’s the post it came from if you’re interested in the detail: Storyboarding Customer Acquisition.

The table below provides focal notes across the AIDAOR breakdown. Note: some of these hypotheses and items will bleed across AIDAOR somewhat. It’s not a hard science. The important thing is to make sure you have a clear, testable view of the customer journey that dovetails with (and/or updates) your work on customer discovery and personas.

Attention
Sample hypotheses: On AdWords we can achieve click-through rates of [x] at a cost of [y]. We can achieve a viral coefficient of [x] on emails. Our salespeople can call on [x] qualified customers/day.
Notes: This is where you start, obviously. How does the customer find out you even exist? How do you get them to click through to your site? To sit down with one of your representatives? I hear ‘someone will tell them about it’ and that’s OK, but you should have a clear, measurable view of how that word of mouth happens. Generally speaking, your job is to communicate with the customer persona in a way that’s relevant to them and present their problem scenario in a compelling way, connecting with their understanding of it.

Interest
Sample hypothesis: If we can achieve a bounce rate of [x] on our landing page, then we can achieve a success rate on scheduled meetings from cold calls of [y].
Notes: They took a look at what you showed them- are you connecting with a relevant problem scenario? Credibly? Do you have a value proposition that they think will deliver on that problem scenario? There are a few tricks for getting attention, but this is where your work on customer discovery will really pay dividends- the best messaging is crafted around an intimate understanding of the customer. Many great items are also arrived at by iteration, so just like everything else here, this is very much a place to A/B/n test different versions of your proposition, different channels, etc. and learn what works.

Desire
Sample hypotheses: Customers will share our messages at a rate of [x]. We’ll see comments and feedback showing we’re really connecting with what the customer wanted to solve their problem.
Notes: Remember the ‘Feel’ part of think-see-feel-do in the persona from step 01? This is where that becomes important. Most of us do a lot of what we do for reasons that are ultimately emotional. And you’re competing with a lot of other demands and distractions on your customer’s time. How are you connecting with them?

Action
Sample hypothesis: If we get HR managers to a landing page with a demo, [x]% will sign up for [our email product announcements, a free trial].
Notes: This is whatever the customer has to do to buy your product. Make sure to keep it as simple as possible. Many companies make the mistake of creating great product and promotion and then having a crummy sign-up process or onerous contracts. How will you know if you’re doing well here?

Onboarding
Sample hypothesis: If HR managers sign up for a free trial, at least [x]% will create a quiz.
Notes: This is whatever’s required for the customer to a) start really using your product and b) make it a habit (consumer) or integral to their processes (business). How do you review and ensure customer success?

Retention
Sample hypothesis: [x] portion of customers will renew/re-purchase.
Notes: How well are you doing on renewals? Up-sells? Word of mouth from existing customers? It’s much easier to work with the customers you already have.
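Once you fill in the [x]/[y] placeholders, hypotheses like these can be checked mechanically against observed funnel data. Here’s a minimal Python sketch of that idea; every stage count and threshold below is invented for illustration, not taken from a real funnel:

```python
# Hypothetical AIDAOR funnel data: (stage, entered, converted, threshold),
# where threshold is the minimum conversion rate your hypothesis predicts.
# All numbers are invented for illustration.
FUNNEL = [
    ("Attention",  10000, 300, 0.02),  # ad impressions -> clicks
    ("Interest",     300,  60, 0.30),  # visits -> stayed past the landing page
    ("Action",        60,   9, 0.10),  # engaged visits -> trial sign-ups
    ("Onboarding",     9,   6, 0.50),  # sign-ups -> created their first quiz
    ("Retention",      6,   4, 0.60),  # active accounts -> renewed
]

def evaluate_funnel(funnel):
    """Map each stage to (observed rate, whether it met its threshold)."""
    results = {}
    for stage, entered, converted, threshold in funnel:
        rate = converted / entered
        results[stage] = (rate, rate >= threshold)
    return results

for stage, (rate, ok) in evaluate_funnel(FUNNEL).items():
    print(f"{stage:12s} {rate:6.1%}  {'validated' if ok else 'invalidated'}")
```

In this made-up run, Interest misses its threshold while the other stages pass- which is exactly the kind of signal that tells you where to focus your next iteration.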

Usability Hypotheses


This area has to do with testing interface patterns for maximum usability. Referencing the Fogg Curve above, this is the Ability/Usability dimension on the x-axis whereas the Value Hypothesis dealt with motivation on the y-axis.

In practice, success here has a lot to do with bringing your validated learning forward for development in the form of user stories. If you’re not familiar, these are basically a way to be specific and detail-oriented about the experience you want the user to have without prescribing the implementation.

This is important for the practice of lean/Lean UX because it frees you (conceptually) to prototype and test several different approaches. For more on how to do this, see the Customer Discovery Handbook- the section on usability. Key to doing this successfully is getting your crucial assumptions on the board early and making them a focal point for your subsequent exploration and testing.

For example, let’s say you’re considering a web-based drag and drop interface, and some on the team think users won’t get it while others are certain it’s the best option. Get an assumption on the board like ‘If we present a drag and drop interface to [deliver on whatever user story you’re working on], 95% of users will engage with this affordance to accomplish their goal’. Then test. With Lean Startup, it’s all about testing!
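To interpret a usability test like that with a small prototype sample, it helps to look at a lower confidence bound rather than the raw rate alone. A rough sketch, assuming a one-sided normal-approximation bound; the session counts and the 95% target are illustrative, not a prescribed method:

```python
import math

def engagement_lower_bound(engaged, shown, z=1.64):
    """Rough one-sided lower confidence bound (~95%, normal approximation)
    on the true engagement rate, given `engaged` users out of `shown`."""
    p = engaged / shown
    se = math.sqrt(p * (1 - p) / shown)
    return p - z * se

THRESHOLD = 0.95  # the rate named in the hypothesis on the board
# Hypothetical prototype session: 40 users saw the drag-and-drop
# affordance and 34 used it to accomplish their goal.
lower = engagement_lower_bound(34, 40)
verdict = "supports" if lower >= THRESHOLD else "does not yet support"
print(f"observed {34/40:.0%}, lower bound {lower:.0%} -> {verdict} the hypothesis")
```

The normal approximation is crude near 100%, but for a quick pivot-or-persevere read on a prototype it’s usually enough.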

04 Experimentation

If everything’s in order up to this point, this should be easy! Stay focused on the experiments, get them done, and then move on to a decision about whether to revise or move forward.

That said, nothing ever goes perfectly and distractions arise. The top determinant of successful experimentation (in new ventures) is focus. Creating output makes us feel good- do the work, cross it off the list, call it done. But that’s not how to assure your best possible outcome under uncertainty. With Lean Startup, you have to be ready to cross something off the list and then likely put it back on the list several times before you get it right. Emotionally, it’s daunting.


Here are 3 tips to stay on track:

  1. For everything you’re doing on step 03 (designing the experiments), make sure you’ve visualized the moment where you interpret the results and make a decision. If you can’t visualize that moment, you probably need to tighten up your experimentation/discovery plan.
  2. If you’re thinking that an experiment isn’t going to deliver a definitive result, odds are you’re right. Stop it, fix it, repeat it.
  3. Every day, ask ‘What did I accomplish yesterday? What will I do today? How did those things tie to the outcome I’m pursuing?’ Here’s a post on a related technique: The Daily Do

The first bit of experimentation deals with customer discovery and validating your idea. I’ve found that’s somewhat unfamiliar territory for a lot of folks, so I put together the checklists below to help you step through that process. These checklists describe a few key items you should verify within the persona and problem hypotheses.

Checklist: Persona Hypothesis

Hypothesis: This persona exists (in non-trivial numbers) and you can identify them.
Experiment: Can you think of 5-10 examples? Can you set up discovery interviews with them? Can you connect with them in the market at large?

Hypothesis: You understand this persona well.
Experiment: What kind of shoes do they wear? Are you hearing and seeing the same things across your discovery interviews?

Hypothesis: Do you understand what they Think in your area of interest?
Experiment: What do they mention as important? Difficult? Rewarding? Do they see the work (or habit) as you do? What would they like to do better? To be better?

Hypothesis: Do you understand what they See in your area of interest?
Experiment: Where do they get their information? Peers? Publications? How do they decide what’s OK? What’s aspirational?

Hypothesis: How do they Feel about your area of interest?
Experiment: What are their triggers for this area? Motivations? What rewards do they seek? How do they view past actions?

Hypothesis: Do you understand what they Do in your area of interest?
Experiment: What do you actually observe them doing? How can you directly or indirectly validate that’s what they do?

Checklist: Problem Hypothesis

Hypothesis: You’ve identified at least one discrete problem (habit/need).
Experiment: Can you describe it in a sentence? Do others get it? Can you identify current alternatives?

Hypothesis: The problem (habit/need) is important.
Experiment: Do subjects mention it unprompted in discovery interviews? Do they respond to solicitation (see also value and customer creation hypotheses)?

Hypothesis: You understand current alternatives.
Experiment: Have you seen them in action? Do you have ‘artifacts’ (spreadsheets, photos, posts, notes, whiteboard scribbles, screen shots)?

Hopefully those help you focus your thinking and progress on validating those early hypotheses.

Checklist: Value Hypothesis

Every experiment for a Value Hypothesis should have the basics you see in the checklist below. For some examples, see Example 1.

Explicit Hypothesis: Which hypothesis are you testing?

Test Design: How is the test going to work? Try to really think through the details and (more importantly) test your experiment as soon as you possibly can, leaving yourself time for revision between iterations. It’s not important to collect statistically significant data for most of these, so feel free to tweak the experiment if you need to.

Pivotal Metrics: What specific metrics will you collect as the experiment runs? Do you have the right observation instrumented into the test design to get these?

Invalidation Threshold: At what value will you consider the result a positive (validation) vs. a negative (invalidation)? Even if you’re not certain, in practice it turns out that putting a line in the sand here is super important. Otherwise, you may never get to a place where you really are metrics driven (or you’ll get there too late). It sounds weird, but even if you think you might revise it later, I’d start with an explicit threshold in mind.

Next Steps: What are you going to do if it’s a positive vs. a negative result? This is really important because the point of all this is to drive toward a good outcome for the venture. If the results of your experiment aren’t moving you forward, the experiment may be a waste.

Time & Money: Beyond the obvious purposes of budgeting, it’s good to estimate this early since in many situations you’ll have to make a choice on which experiments you actually run.

Iteration Time: How long will it take to get results? This is important for the same reasons as Time & Money.
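One lightweight way to enforce a checklist like this is to capture each experiment as a structured record, so that every field has to be written down before the experiment runs. A hypothetical sketch; the field names and the Enable Quiz-style example values are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One value-hypothesis experiment, with every checklist item explicit.
    All example values below are illustrative."""
    hypothesis: str
    test_design: str
    pivotal_metric: str
    invalidation_threshold: float  # below this, the result is a negative
    next_if_positive: str
    next_if_negative: str
    budget_usd: float
    iteration_days: int

    def conclude(self, observed: float) -> str:
        """Turn an observed metric into a pivot-or-persevere call."""
        return "persevere" if observed >= self.invalidation_threshold else "pivot"

exp = Experiment(
    hypothesis="If HR managers sign up for a free trial, at least 30% will create a quiz.",
    test_design="Concierge trial with hand-built quizzes for 20 sign-ups.",
    pivotal_metric="trial-to-first-quiz rate",
    invalidation_threshold=0.30,
    next_if_positive="Build a self-serve quiz builder MVP.",
    next_if_negative="Interview non-activating trials; revisit onboarding.",
    budget_usd=500.0,
    iteration_days=14,
)
print(exp.conclude(0.45))  # 45% of trials created a quiz -> "persevere"
```

The point isn’t the code- it’s that a record like this refuses to exist without an explicit threshold and explicit next steps, which is exactly the discipline the checklist asks for.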

05 Pivot or Persevere?

I recommend setting goals for your experiments and time-boxing them in agile-type sprints (iterations) of 2-6 weeks. This will help keep everyone on track. If the experiments are running well, you should arrive at a ‘pivot or persevere moment’ where you have the learning to decide whether to proceed or revise and re-test. Or you may find you need to tighten up your experiments and repeat them- that happens.

The hypothesis areas above were organized roughly in sequence, and the tables below describe common results from these experiments along with ideas on how to interpret them and make decisions about what to do next.

Concluding on Your Persona Hypotheses

It’s OK to enter this step without set hypotheses. Let’s say you’re generally interested in problem scenarios around 3D printing in the consumer durables vertical. That’s a perfectly OK starting point for some customer discovery, driving to explicit written hypotheses as you learn more.

That said, you’re looking to drive to a relatively conclusive understanding of your key personas before you move too far ahead- otherwise you’ll likely be operating on a weak foundation. Here are a few notes on sample conclusions in this area:

Conclusion: ‘Everyone is my customer!’
Notes: Ultimately, this may be true, but it’s important to identify an early market where you’ll focus and establish a beachhead.

Conclusion: ‘There are a few customers to focus on- I’m not sure which one.’
Notes: Take your best guess and choose, but run your experiments against a focal early market. Pick the one that has the most compelling problem scenario.

Conclusion: ‘I can’t find anyone to interview.’
Notes: Then I would step back. This almost certainly means you’ll have trouble with the next steps as well.

Conclusion: ‘I think I get this persona, but I’m not sure about the whole think-see-feel-do thing.’
Notes: Getting down a solid think-see-feel-do for each key persona will not solve all your problems. But not having a solid understanding of your customer is likely to generate waste downstream, decrease your chances of success, and make pivots less well informed and purposeful. I’d check out this tutorial and increase your comfort level with your personas: Tutorial- Personas & Problem Scenarios.

Concluding on Your Problem Hypotheses

While it’s often practical to combine the field work on customer discovery in this area with your persona hypothesis, it is important to have a strong footing with your personas before you finalize your problem hypothesis. This is important because ultimately you’re going to need to sell something to these people and you’ll need to be able to identify them. Also, some problems are so spread out among customer segments/personas, and so occasional, that they’re not a strong fit for a new venture. Here are a few notes on sample conclusions in this area:

Conclusion: ‘During customer discovery interviews, the subjects consistently mentioned our problem scenario.’
Notes: Excellent! That’s a good preliminary validation that you’re on the right track.

Conclusion: ‘We did a questionnaire and >80% of subjects said they wish [our problem area] was better.’
Notes: I’d be very cautious about that result- it sounds like you’re leading the subjects. I’d like a lot of things to be better, but there’s only a small fraction of those I’d actually dedicate my time and money to improving. I’d try face-to-face or at least phone interviews.

Conclusion: ‘I am in this business/I am one of these personas and I know I have this problem- and I’m sure it exists for most others like me.’
Notes: While there are many fabled successes where founders built products for themselves, it’s not the most reliable way to succeed with a new venture. Your expertise/experience may blind you to doing good customer discovery with others like you- which is, of course, your actual market. By all means, play to your strengths and use your expertise, but be sure to approach the customer discovery work with a fresh and unbiased perspective.

Conclusion: ‘Our product doesn’t really address a problem, exactly, so this isn’t relevant for us.’
Notes: First, words are faulty instruments- on a business-to-consumer product, this is just as likely to be a ‘need’ or ‘habit’. And fundamentally, there are no new habits and there are no new jobs in the workplace. Be very sure you understand the problem(s) or need(s) you’re connecting with before progressing.

Conclusion: ‘Our product is so fundamentally novel that there are no current alternatives.’
Notes: See above- there’s a lot less novelty in the world than we think, particularly for those of us who come from the technology world. Make sure you have a clear view of how your customer is fulfilling their needs today, or you won’t have a good counterweight to determine if and how your value proposition is relevant.

Conclusion: ‘We’ve mapped out the alternatives and observed our key personas in action with them.’
Notes: Excellent! You’re ready to synthesize, tune, and test your value proposition!

Concluding on Your Value Hypotheses

This is where it all starts to come together (or possibly apart!) for a new venture- Is your value proposition better enough than the persona’s alternatives to generate revenue? Here are a few notes on sample conclusions in this area:

Conclusion: ‘Over 80% of the people we asked said they’d buy our product!’
Notes: They’re probably not being entirely truthful, or, let’s say, ‘accurately predicting their future behavior’. I’d disregard that result. See The Yellow Walkman Story for more explanation of why.

Conclusion: ‘We did a concierge test and [got paid, got asked by the customer when they could buy our product].’
Notes: Excellent! You’re on the fast track of iterating to a successful outcome. Time to look at the contours of an actual MVP.

Conclusion: ‘We finished our concierge test. They liked it, but the result was a long way from conclusive.’
Notes: Now that you understand the problem area and concierge execution better, do you think you could get paid for the next one? That’s a good follow-on test. You can also try some of the options below. If you continue to see a lukewarm response, consider a pivot.

Conclusion: ‘We made a bunch of pre-release sales, but they’re non-binding.’
Notes: It’s OK that they’re non-binding. As long as you made the agreement with a real decision maker (someone who could buy it for real in the future), you’ve got a reasonably good validation of your value hypothesis.

Conclusion: ‘We couldn’t make any pre-release sales.’
Notes: Why not? Were they not that interested? Or did they want to see real product first? If so, how real? If they’re not interested, try some other experiments, but that’s a sign that maybe you should pivot. If they wanted to see real product, did you push them to something that was too binding? Were they ready to sign up for any kind of follow-up? If so, good sign; if not, they may not be interested and were just using ‘no real product’ as an excuse. That’s a call you’ll have to make based on your experience with the individuals.

Conclusion: ‘We found a few AdWords/landing page combinations that had better than expected click-through and conversion rates to email sign-ups.’
Notes: Excellent! That’s a good validation of your value hypothesis, and you’ve gotten a jump start on your Customer Creation Hypotheses.

Conclusion: ‘We tried a few things with AdWords and landing pages, but the results weren’t great.’
Notes: What happens when you try the same thing out in the real world? You may just need to learn more about your personas, problem scenarios, and how to pitch your value proposition. These tests are good for connecting with existing demand but not for fundamentally understanding it. Try spending some time with real prospective customers.
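When you do want a quick read on whether one AdWords/landing page combination genuinely outperformed another (keeping in mind the earlier point that statistical significance isn’t essential for most of these experiments), a two-proportion z statistic is a simple check. The traffic and conversion numbers below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using a pooled standard error; |z| > 1.96 ~ significant at the 5% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: variant A converted 60/1000 visits to email sign-ups,
# variant B converted 35/1000.
z = two_proportion_z(60, 1000, 35, 1000)
print(f"z = {z:.2f}")  # comfortably above 1.96, so A's edge looks real
```

For small samples or quick iterations, a gut-level read plus the invalidation threshold you set in advance is often sufficient; save the statistics for decisions where the cost of being wrong is high.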

Concluding on Your Customer Creation Hypotheses

These results are generally easy to interpret- you convert your prospect through the funnel, or you don’t. And then you try something else. If the preceding items are in good shape, this should just be a matter of finding the right channel and tuning your approach. If you’re struggling here, make sure you’ve kept your work on personas, problem scenarios and the nature of your value proposition tightly integrated with your work here (messaging, etc.) and don’t be afraid to loop back to those if you suspect a more fundamental flaw is dampening your conversions.

06.a Pivot!

Pivots vary widely in size and number. Pivots in the area of customer creation and business model are just about inevitable. The section above described a few common conclusions about experimental results and the possible implication of a pivot. The worst thing you can do is limp along- organizing your experiments into iterations where you set a goal about reaching a conclusion will help you avoid that. Strong customer discovery and encapsulation of those outputs in personas and problem scenarios, which we discussed in step 01, is critical as a rudder for your pivots. A strong understanding of the customer will help you pivot much smarter.

06.b Persevere!

You have the core spark of a successful startup! Congratulations. Now it’s time to scale up and steadily improve the recipe you’ve found. I recommend the material here on business model generation and using agile to tie the items we reviewed here to your actual product implementation.

With creativity and focus, it’s not hard to achieve substantial validation and with that the confidence to persevere. The next section, ‘MVP Case Studies’, summarizes a few of my favorite examples.

07 Minimum Viable Product (MVP) Case Studies

Validating your idea doesn’t necessarily require a lot of money or even a lot of time. It does require focus and the design of substantial, relevant contact with prospective customers. The examples that follow range from household names to little known and run the gamut of product categories. My intention with this section is that you’ll be able to find at least one pattern that’s relevant to your situation, sparking ideas on a creative MVP.

Case Study #1: Sprig

I first heard Sprig’s story from the founders at a Lean Startup Circle event. From Sprig you can order a healthy $12 (USD) meal delivered with a few taps on their mobile app. It’s kind of like the Whole Foods deli meets Uber, or, in their words, ‘dinner on demand … prep time is 3 taps … delectable prices’.

It’s run by an experienced Silicon Valley team, and wanting to approach VCs with more than a great team and a great idea, they ran a successful validation experiment within a week of pulling together their founding team.


Item Notes
Persona* Paula the Professional- health conscious, short on time, moderate to high income, already uses similar services like Uber.
Problem Scenario I want to have a nice, healthy dinner with no hassle and at a price I can afford (like $12).
Alternatives Going to the store, expensive take-out, or a slow delivery service (>20 minutes).
Value Proposition Get a healthy meal like you would order a cab (on Uber): “Dinner on Demand … Prep Time is 3 Taps … Delectable Prices” (Sprig Home Page)

* This is me interpolating/guessing on an item; not part of the Sprig team’s explanation.


Item Notes
Key Hypothesis People like Paula exist and rather than prepping their own meals, ordering takeout, or eating out, they’d prefer to easily order a healthy $12 meal that’s delivered in 20 minutes.

If Sprig offers Paula a healthy meal like you would order a cab (on Uber), then she would use and reuse the service.

Experiment Prep such a meal and deliver it ad hoc for one night; post the offer for delivery on Eventbrite; email friends and acquaintances
Validation Criteria Does a workable portion of the emailed population respond? Do they like the experience?
Result Strong preliminary validation- good uptake and good customer experiences

Like a lot of the examples that follow, Sprig’s first MVP required no (new/custom) software, little time and little money.

Case Study #2: Dropbox

I’ll assume you know about Dropbox. But you may not have heard the terrific story about how Drew Houston validated the concept in the face of a crowded, confused market and a difficult technical execution.

When Dropbox was in its infancy, many file sharing services existed- they just weren’t all that good, and so few people used them. The Dropbox proposition was that a well executed product would achieve large scale market success. Here’s the tricky part: to do this well across even just the major platforms like OSX, Windows, iOS, etc. was a big job, and they needed to raise money. VCs were reluctant to place such a bet on a space where existing competitors were struggling.

So the Dropbox team did something very creative to validate their proposition- see below.


Item Notes
Persona Tom the Techie- early adopter who works on projects that require swapping a lot of files between a shifting network of collaborators.
Problem Scenario It’s difficult to share files between a fluid network of collaborators, particularly if they’re: big or numerous or change a lot.
Alternatives Many existing products, but none of them super compelling or widely adopted. Also, custom setups, which work but are cumbersome to set up and maintain.
Value Proposition A file sharing service that truly feels transparent to the user across all major platforms- OSX, iOS, Windows, etc.


Item Notes
Key Hypothesis People like Tom (and others in the later market) exist and if there was a really nice, easy file sharing service, they’d adopt it.

If Dropbox created a file sharing service that truly felt transparent to the user across all major platforms- OSX, iOS, Windows, etc., then a mass market of users would prefer it over the alternatives, subscribe to it and use it over time.

Experiment Hand craft a demo (without actual working, releasable software); post it; orient the messaging to the early market; promote it and see what happens
Validation Criteria Substantial traffic on the video and sign-ups for product information
Result Strong preliminary validation

One additional thing that’s notable about Dropbox is that the persona I (questionably) described, ‘Tom the Techie’, was what they identified as their early market- the first few folks who felt the problem scenario most acutely and would be most reachable with the value proposition. While their video demo wasn’t exclusively tailored for that market, they added inside references for it.

Case Study #3: Photo-Social Startup

I advised this company through a program at Stanford. They are still in ‘stealth mode’ so rather than going into the details about their product, let’s take a look at the general pattern for photo-social products, products like Instagram that somehow make the photos we take more interesting on social media.

The user has or takes photos. Rather than just posting them to social media (Facebook, Twitter, etc.) they want to do something with them to make them more interesting- tell a story, enhance them visually, something like that. Then they post them and the whole point is the reward of social acclaim, your social network registering their approval with likes, shares, and comments:

When I first started working with this team, they had an idea of this type and were just starting software development. We put that on pause and used Lean Startup techniques (as well as design thinking and personification) to spend less time and money and still validate (or invalidate) their concept:


Item Notes
Persona Existing poster of photos. Personas: Martha the Mom, Pat the Party Planner, Teresa the Teen Social Butterfly
Problem Scenario [I want to do something interesting with my photos so that my social graph rewards me with interest and acclaim]
Alternatives Manually enhance photos, use alternative enhancers/amplifiers like Instagram
Value Proposition [This is something users can do with photos that will generate engaging content for their social graph]


Item Notes
Key Hypothesis People like the personas above would like to enhance their photos using our process and if they do this they’ll be rewarded with approval and interest from their social network.

If we offer Facebook users a way to [do something novel] with their photos, they will try the service and convert to a paid version of the app.

Experiment Manually create output of the type the hypothetical app would produce
Validation Criteria Posts created in this way generate strong interest, demonstrated by likes, comments, and shares
Result An echoing silence- nobody cared. Time to pivot!

The result was a big, echoing silence- no interest. But the team was much better off for having found that out sooner vs. later and now they’re working on a much more promising iteration of their idea.

Case Study #4: Leonid Systems

I started Leonid Systems in 2007 to explore new ideas for back office IT in the hosted communications space. Leonid’s customers are mostly large infrastructure providers, companies like Verizon and Comcast. But Leonid needed to start small, and do it on a bootstrapped basis. So we started out doing consulting, and we used that as a ‘concierge’ vehicle to isolate, learn about, and validate important problem scenarios for our customers.

The specific problem scenarios require industry-specific explanations, so I’ll skip that for now and instead reference this talk I did for the Lean Startup Circle in San Francisco:

Essentially, Leonid went through a series of MVPs, starting with consulting, to make sure that we were doing things that were relevant for our customer base.

Case Study #5: Rapid MVP Testing with Paul Howe & Associates

I heard Paul Howe’s story at the Lean Startup Circle (SF). He and a couple of other veterans had a funded startup to explore business-to-consumer (B2C) concepts in search of a winner.

Their approach was very heavy on Lean Startup- get in, test, and then scale it or get out (vs. doing more customer discovery in a given area). While personally I tend to pick a problem area and spend more time learning about it, I think their approach is probably great if you have a lot of different ideas you want to try and you’re good (or make yourself good) at this type of experimentation.

The concept I specifically remember was a service to tell you how much all your ‘stuff’ is worth by looking at your emails and bank/credit card statements. Instead of diving into this fascinating ‘big data’ problem, they did a concierge MVP where they did the searches by hand for a few test customers. Paul Howe sat down and just manually searched their email and bank records to compile a statement of what they had and how much it was worth. The result? An echoing silence, and they moved on to their next idea (with relatively little time & money spent).


Item Notes
Persona (not sure; their emphasis was heavily weighted toward testing vs. customer discovery)
Problem Scenario I have a lot of stuff around that I might want to sell and/or I’m just generally curious about how much it’s worth, how much I’ve spent.*
Alternatives Manually going through credit card statements or receipts.
Value Proposition It’s interesting and possibly useful to know how much stuff you have.*

* This is me interpolating/guessing on an item; not part of the team’s explanation.


Item Notes
Key Hypothesis There are certain personas who would like to know how much their stuff is worth.

If Paul Howe & Co. offered a service where you could quickly, automatically know how much your stuff is worth, users would engage with such a service in large numbers.

Experiment Manually create such a ‘statement of your stuff’ and see if the user cares
Validation Criteria Users demonstrate an interest in the service (not sure how they specifically structured the validation)
Result An echoing silence- nobody cared. Time to pivot!

They encountered an echoing silence but were immediately ready to move on to their next concept.

Case Study #6: Zappos

Since they got started in 1999, you could say Zappos was a pioneer of the current era of lean startups. Their story is wonderful and simple.

Nick Swinmurn had the idea that choosy shoppers wanted better price and selection than they were getting at their local mall. What he did next was pure Lean Startup: he photographed a whole bunch of shoes and put them up for sale online to see if anyone would buy them. They did, and the rest is history.


Item Notes
Persona Sam the shoe-hound- knows what he wants but not where to get it.
Problem Scenario Sam is unable to find the shoe he wants at local retailers, wasting time and getting frustrated.
Alternatives Possibly mail order or wait until he’s in a bigger market to go to the store.
Value Proposition Make the shoe Sam wants accessible online and make sure he has a great experience so he’ll come back and not have to think about where to find the shoe he wants anymore.


Item Notes
Key Hypothesis Sam the shoehound exists and rather than shopping locally or compromising on what he wants he’ll find and want to buy the shoe he really wants online. If we offer him a wide selection of shoes online, he’ll buy them from us.
Experiment Photograph a bunch of shoes and put them on a simple website. Promote a little and see what happens.
Validation Criteria Do they come and buy?
Result Yes, they did.

Case Study #7: Enable Quiz

Mentioned earlier, Enable Quiz is a synthetic company I use for example purposes. They’re (hypothetically) creating a lightweight quiz app for screening engineering candidates for new positions so the hiring manager has a clear picture of their skill sets and can focus on fit, etc.

Enable Quiz lends itself to a concierge MVP approach where the founders hand-create position-specific quizzes for HR managers. They can then gauge whether the quiz in fact helped the company arrive at a better process and outcome with their hiring, and whether that generated residual interest in the future product.


Item Notes
Persona Helen the HR Manager and Frank the Functional Manager (Helen’s in charge of the administrative side of hiring, and Frank’s the person the new hire would work for)
Problem Scenario We spend a lot of time evaluating technical skill sets and a) we don’t do that well, often ending up with hires that aren’t a good mutual fit and b) we’d like to spend less time interviewing overall and more time on cultural fit with the top candidates
Alternatives Calling references, asking a few probing questions
Value Proposition Spend less time interviewing and get better outcomes


Item Notes
Key Hypothesis Companies that hire engineers would prefer to use a lightweight quizzing app to evaluate candidates’ fit with a given position’s required skill set instead of spending time checking that ad hoc.

If Enable Quiz offers companies that hire engineers lightweight technical quizzes that screen job candidates for engineering positions, then these companies would trial, use, adopt, and pay for such a service.

Experiment Manually create position-specific quizzes for individual companies to use in screening candidates
Validation Criteria Do the hiring and HR managers feel they had appreciably better outcomes? Do they enthusiastically ask about the finished app?
Result n/a (hypothetical company)

Is Lean Startup just for startups?

No, not in the sense you probably mean. Eric Ries defines a ‘startup’ as any business (or line of business) that hasn’t yet found a ‘product/market fit’, meaning that it can reliably sell a known proposition to a known customer. If you have a new line of business or product within an established company, Lean Startup’s probably a great fit for you.

Is Lean Startup the answer?

Not to be coy, but it does depend on the question. If your question is ‘How do I manage this venture systematically to a good outcome in the face of uncertainty?’, then yes, Lean Startup will help you get there. As a planning technique for innovation, I don’t know of anything better.

That said, most innovative ventures have other questions as well, like:

Who is my customer really, and how do I make sure I’m relevant to them? For this I recommend the work around design thinking.

How do I take a holistic look at the business without toiling over a business plan that no one will read? For this, the business model canvas is handy.

How do I think about a new venture start to finish and understand where we are? For this, I like Steve Blank’s work around customer development.

How do I develop great products quickly, and bridge the gap between ‘business’ and ‘engineering’? For this, agile is tried and true.

Lean Startup’s Top 6 Failure Modes and How to Avoid Them

Lean isn’t a passing fad: it’s fundamentally better suited to innovation than most of the prevailing classical/traditional techniques. That said, it’s widespread use in the innovation/startup context is relatively recent and best practices are emerging and sometimes the hype diverges from the reality of what’s practical. I compiled the list below based on my experience advising startup’s and individuals on the use of lean/Lean Startup:

1. No Pivotal Hypotheses

Subscribing to the general idea isn't enough to make Lean Startup perform for your venture. You have to actually articulate your hypotheses, prioritize the few that are truly pivotal, write them down, and use them as your focal point. The sections above, starting with '01 Developing High Quality, Testable Ideas', lay out a systematic approach to doing this.

2. No Focal Point

Once you’ve identified and prioritized your pivotal hypotheses, it’s important that you use that as your focal point and litmus test for everything you do. Output is not the same as driving outcomes in a startup. Crossing things off our list makes us feel good, but is it really driving to that ‘pivot or persevere’ moment? Subject all your activities to that litmus test.

Make sure your hypotheses stay up to date and are highly visible. A Google Doc isn't a bad solution. Here's a Hypothesis Template you can use as a starting point.

3. Remaining Inside the Building

This is a riff on Steve Blank’s famous directive ‘get outside the building!’. Validated learning is the one and only propulsion for driving to decisions and outcomes with Lean Startup. Without meaningful learning and experimentation with real prospective customers, your Lean Startup will be running in place.

For more on how to do this, see section 03 on designing experiments and section 04 on experimenting.

4. Aimless Pivots

Lean Startup helps you make sure you're not wasting time on an idea that's not ready for success. It doesn't deal directly with how you determine which ideas are of the highest quality. For this, I highly recommend design thinking techniques, specifically personas, problem scenarios, and value propositions. This material has the added benefit of making sure that if you do have to pivot, you're doing it with an increasingly better understanding of the target customer. That increases the odds you'll arrive at a pivot that hits.

The practice of design thinking is tightly integrated into this tutorial. For more on personas, etc. see: Tutorial on Personas, Problem Scenarios.

5. Lack of Purpose & Goals

The world’s a noisy place. Distractions will walk in the door every day. Many teams with good intent and an understanding of Lean Startup fail to make steady, reliable progress towards a pivot or persevere moment.

It’s important to work in time-boxed (time-constrained) iterations, each of which have discrete goals. That’s what the material on Startup Sprints is about, though there are many ways to implement the concept. The Daily Do is another technique you can use to make sure you and your team are on track day to day.

6. Too Big an MVP

We love to build things; it's in our nature. Subordinating your love for the product you're building to the learning mission at the core of a Lean Startup is difficult (at least, I've never found it easy). Doing so requires discipline, focus, and clear checkpoints to make sure you're on track.

The MVP case studies here are a useful test point for you to step through whether or not you're building too much product.

Please Note: This list presupposes you want to and should use lean to solve your problem at hand. For a view on where lean’s a good fit, see the section above ‘Is Lean Startup the answer?’.

Criticism & Context

In practice, lean isn’t always the right method- a statement you could make about pretty much any method. That said, as a pure idea, it’s pretty durable and coherent. Every method should be subjected to scrutiny and, of course, rigorous validation is itself part of the method. Below are a few summarized criticisms of lean and Lean Startup along with notes.

You can’t skimp your way to greatness.

While this observation may often be true, its application to Lean Startup is mostly the result of misunderstanding. Pair the words 'lean' and 'startup' with the idea of avoiding waste, and it's not shocking that on a quick look you come away thinking it's about making sure your startup/venture doesn't spend much money. Keeping your spend down may be an outcome you get with Lean Startup, but the method itself is about waste avoidance, not cost avoidance.

Earlier, we looked at the Minimum Viable Product (MVP) concept. That fits into a process where you create a tightly defined value proposition, then conceive the quickest, least expensive way to test it (the balance between cost and speed being mostly a function of your particular priorities). You may reach a point where a relatively long, expensive build is the best way to do that.

Lean Startup wouldn’t say that’s wrong. It would just say that you should exhaust the quicker, cheaper alternatives to testing the proposition so you don’t go through a long, expensive creation cycle and then encounter an echoing silence where customers aren’t interested in what you created.

It doesn’t work in [medical, industrial equipment, other areas with long design cycles].

Sure, there may not be any shortcuts to getting a regulatory approval for a new drug or device. Yes, it may take a long time to get a new model of bulldozer functional. These aren’t good reasons to discard the method.

First, you may be able to test the demand for your proposition without a product. Let's say you hold a webinar or conference about a particular problem area for medical clinicians- do more of them show up for problem A or problem B?

Additionally, there are a lot of elements to a successful customer experience that surround a core product. How do clinicians identify when and how they should use this new product? Buy it? Store it? Take it out of the box and administer it? These are areas where small batch experimentation may be perfectly viable.

For a set of actual examples of how this works, here’s an article about the application of Lean Startup at GE: HBR Article on Lean Startup at GE.

It hasn’t been statistically validated that Lean Startup actually makes companies more successful.  

This is true, but not necessarily that relevant, for two reasons. First, it's difficult and rare for social science to reliably draw these kinds of conclusions. Success factors for products and ventures vary across a lot of dimensions, which change over time with their operating environment. Second, Lean Startup has only been around since 2011- there just isn't a lot of data available.

I don’t want to oversell lean or Lean Startup, but I do think the criticisms above are mostly the result of misunderstanding or inappropriate context. In practice, I think the biggest issues with it mostly have to do with a) not actually grinding through the details of its rigorous implementation and b) wanting it to be the one silver bullet for every problem and situation (which is natural- who doesn’t want that), when in practice it’s a portfolio of methods that lean to successful innovation.

How Do I Get Started?

In my teaching and advising, I find that these seven steps are an effective way to initiate your practice of Lean Startup:

  1. Draft a positioning statement so you have a clear working definition of the project
  2. Charter the project with a core value hypothesis
  3. Draft three ideas on MVP’s to test your core value hypothesis
  4. Storyboard the customer journey with AIDAOR
  5. Unpack your hypotheses in more detail
  6. Choose what experiment you want to run and complete a working version 
  7. Start experimenting!

The first six steps will probably take you 60-90 minutes, and they will get you rolling.

1. Drafting a Positioning Statement

We all think our idea is perfectly clear, but we're often wrong. I really like the standard positioning statement as a way to make sure I've identified the basics of what the project is about. Here's the format I use:

For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy). Unlike (primary competitive alternative), our product (statement of primary differentiation).

You can find an example in the sections below (Example 1, etc.) and if you’re using the Venture Design template (Google Doc), you can add it here: Positioning Statement in the Venture Design Template.

2. Chartering the Project with a Core Value Hypothesis

Like the positioning statement, this addresses the 'dumb question' of whether there's a clear, actionable view of what the project should do. It has the same form as a 'regular' hypothesis:

If we [do something] for [persona], they will [respond in a certain way].

The only difference is that it’s a kind of summary of what the venture hopes will happen. For example, for the startup Enable Quiz that wants to sell a SaaS quizzing solution to companies, it might be something like this:

‘If Enable Quiz offers an easy, affordable, lightweight technical quizzing solution, we can acquire, retain, and monetize customers.’

Simple, right? Even though it sounds simple, I recommend doing this to anchor and communicate your core objective.

3. Drafting Three MVP Ideas

I wouldn’t say Lean Startup has no value if you don’t run experiments (I think it’s still good for general focus), but it is mostly about running experiments. Given that, I recommended looping back to experimentation a lot. After you have a core value hypothesis, I’d sketch out what the three MVP patterns might look like to test it:

1. Concierge MVP. How could you create the experience & outcome you want the customer to have with a minimum of up-front work? Remember, practicality/scalability does not matter! You’re only trying to do this a few times.

2. Wizard of Oz MVP. How can you fake the product experience itself? What are you looking for (in terms of motivation)? Very frequently these are paired with a Sales MVP in a 'show them the Wizard of Oz MVP and have a call to action with the Sales MVP' arrangement. If so, it's OK to reference the Sales MVP for most of your outcome observations and metrics.

3. Sales MVP. How would you find your target customer and pitch them? What’s the pitch?

4. Storyboarding the Customer Journey

You’ve got the big stuff in order- what about the specifics of what might make this experience a go vs. a no-go for the user? For this I really like to storyboard a take on the customer journey: attention, interest, desire, action, onboarding and retention. For more on this see the tutorial on storyboarding (the part on user acquisition) and the example below.

Take your core/summary value hypothesis and decompose it into a more detailed set of at least five 'child' hypotheses to test. Move through the AIDAOR process and write down all the assumptions that present themselves to you, using the same formula (but for your more detailed/specific hypotheses)- If we [do something] for [persona], they will [respond a certain way]- and using the template (attached).

5. Unpacking Your Hypotheses

The Core Value Hypothesis is great for overall focus, but for actual testing and execution you’ll probably want more detail. The good news is that the AIDAOR storyboard you did will probably help a lot for driving to specifics. See below for an example.

Example 1: Initiating Lean Startup at a … Startup

This page presents a set of example hypotheses based on a fictional company, 'Enable Quiz'. Enable Quiz is rolling out a lightweight technical quizzing solution; for companies that hire engineers, it allows them to better screen job candidates and assess their internal talent for skills development. For more on Enable Quiz, see the example Venture Concepts page.

Drafting a Positioning Statement

Here’s the positioning statement for Enable Quiz:

For [HR and hiring managers] who [need to evaluate technical talent], [Enable Quiz] is a [talent assessment system] that [allows for quick and easy assessment of topical understanding in key engineering topics]. Unlike [formal certifications or ad hoc questions], our product [allows for lightweight but consistent assessments of technical talent].

For more on what this is and why it’s important, see the preceding section ‘How Do I Get Started?’.

Chartering a Core Value Hypothesis

Based on what they know right now, they have this formulation of their root hypothesis around the problem scenario of hiring engineers:

‘HR and functional managers are in charge of technical hires and they struggle to effectively screen for technical skill sets, making the hiring process slower and more labor intensive and producing worse outcomes than they should reasonably expect. Currently they implement a patchwork of calling references and asking a few probing questions.

If Enable Quiz offers an easy, affordable, lightweight technical quizzing solution, we can acquire, retain, and monetize customers.’

Drafting MVP Ideas (for Core Value Hypothesis)

1. Concierge MVP

Enable Quiz could find a few HR managers who want to participate and develop a quiz for them by hand to screen candidates for a specific position they have open for hiring. These quizzes might be on paper or on Google Forms, etc. In either case, Enable Quiz should probably grade them itself; even though that's more work, it's closer to the real experience they're considering (an online quiz that would grade itself).

Outcome Metrics: First, are the HR managers able to use the quizzes, and what is that like? How hard is it to take a real job description and develop a suitable quiz? Second, do the HR managers actually use it? (Note: the team will need a way to get some log or indication of how many candidates get screened so they can know that.) Third, after they develop quizzes for 1-2 positions, if they make a low-pressure offer to have Enable Quiz do it again (but where the customer pays), do the customers still want it? Note: this should be low-pressure, since the customers might feel somewhat obliged for the free quizzes, and if they buy on that basis it wouldn't be valid to conclude anything about the market at large.

2. Wizard of Oz MVP

I see two possibilities here. First, they could go with a product demo video and test sign-ups against that (see Sales MVP below). Second, they could build a very basic site where users sign up and configure the quizzes they want. They'd let users know it will take a few hours, and they'd build the quizzes by hand in the background. A variation on that would be to just have them submit the job description, maybe with optional tags for skills, and build against that.

Outcome Metrics: For the first, see Sales MVP. For the second, they might look at total sign-ups, but they'd also have an opportunity to look at the choice of topics and submitted job descriptions. If they introduced topic tagging, they'd also be able to test the ability of users (who we're assuming are HR managers) to identify the specific tech topics relevant to a given job description.
They could fake up a demo for the service and put it on a site/landing page. If they want to test motivation relative to their value hypothesis, they're probably best off pairing this video with a call to action for sign-ups to an email list about product updates.

3. Sales MVP
Let's say their current business model assumes they're selling online. In that case, they'd probably focus on testing click-throughs on Google AdWords and, from there, conversions to trial and then to sale on their site. If they have enough testable ideas on propositions, they should just go for it; but if they encounter click-through rates below 3% after tuning their AdWords, hand-selling to companies might be a good fallback, since that will give them more perspective on what's working.

Outcome Metrics: Basically, they'd be looking at the customer journey that concludes with regular usage (and paying!):
- click-through rate from Google AdWords
- conversion from landing page to trial sign-up (or plan purchase)
- creation of a quiz
- use of a quiz
- repeated use of a quiz
- conversion to paid plan after trial
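If you instrument these steps, the funnel math is simple division between adjacent stages. Here's a minimal sketch in Python; the stage names and counts are hypothetical placeholders, not real Enable Quiz figures:

```python
# Sketch: step-to-step conversion through a hypothetical funnel.
# All stage names and counts below are illustrative, not real data.
funnel = [
    ("ad impressions", 50000),
    ("ad clicks", 1500),
    ("trial sign-ups", 90),
    ("created a quiz", 40),
    ("used quiz 5+ times", 15),
    ("converted to paid", 5),
]

def conversion_rates(steps):
    """Return (from_step, to_step, rate) for each adjacent pair of stages."""
    rates = []
    for (name_a, count_a), (name_b, count_b) in zip(steps, steps[1:]):
        rates.append((name_a, name_b, count_b / count_a))
    return rates

for frm, to, rate in conversion_rates(funnel):
    print(f"{frm} -> {to}: {rate:.1%}")
```

Even a table this small makes the 'pivot or persevere' conversation concrete: you can see exactly which stage of the journey is leaking.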

Storyboarding the Customer Journey

Below is an example of what this might look like for Enable Quiz’s customer.


A few additional notes:
1: There isn’t just one storyboard/customer journey. Obviously, to a degree, every customer has a different journey, so don’t try to fit everyone into one of these. Just pick a journey you think is plausible and one you’d like to test and focus on making it vivid and testable.
2: This isn’t the only angle on your assumptions. Depending on the project, there may be any number of other areas that are worth exploring. This is just one angle on the assumptions that most projects have in common.

Unpacking Your Hypotheses

The table below describes Enable Quiz's working hypotheses and plans for experimentation and validation (or invalidation). I've organized these decomposed hypotheses into the categories we reviewed above: Persona Hypothesis, Problem Hypothesis, Value Hypothesis, Customer Creation Hypothesis, and Usability Hypothesis.

Here are a few notes on the column headings–

Priority: This is a measure of the importance and context of the hypothesis. While it's important to write out all your hypotheses and keep an eye on them, focus is also important. You can use any scale you want, but I like this rating scheme:
0: This is a core/summary hypothesis.
1: This is a pivotal hypothesis- if it turns out to be untrue, we need a pivot. It probably needs to be decomposed further for manageability.
2: This is a child of the type of hypothesis above. Individually, its disproving doesn't mean a pivot, but if all its 'siblings' (all the other related hypotheses that tie to something of priority '1') prove untrue, then it does.
3: This may be pivotal- we're not sure yet. It's definitely important.
4-5: Lighter dispositions of '3'.
6-10: This is a tactical detail related to product development or growth/promotion. Take note if it's convenient, but we need to validate (or invalidate) the related items above it before we worry about it- if those related items are invalidated, then pursuing it could be 100% waste.
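If you keep the backlog in a structured form (a spreadsheet export or a short script), this scheme is easy to operationalize: pull the unproven items of priority 5 or below to the top of your focus list and defer the tactical details. A minimal sketch, with made-up example hypotheses:

```python
# Sketch: triage a hypothesis backlog by the priority scheme above.
# The example hypotheses are illustrative placeholders.
hypotheses = [
    {"priority": 2, "text": "The HR manager perceives this as important", "needs_proving": True},
    {"priority": 0, "text": "Core value hypothesis", "needs_proving": True},
    {"priority": 7, "text": "Tactical detail about quiz layout", "needs_proving": True},
    {"priority": 1, "text": "Screening hires is an A-list problem", "needs_proving": True},
]

def focus_list(items):
    """Unproven, potentially pivotal hypotheses first; tactical details (6-10) excluded."""
    pivotal = [h for h in items if h["needs_proving"] and h["priority"] <= 5]
    return sorted(pivotal, key=lambda h: h["priority"])

for h in focus_list(hypotheses):
    print(h["priority"], h["text"])
```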

Needs Proving? Not all notable hypotheses need proving, or you may have already proven them. This is a place to log that, along with notes on why and where it needs proving.

Experimentation: These are quick notes and possibly external references to how the hypothesis gets proven or disproven.


Example Persona Hypotheses

Enable Quiz's general area of interest is improving team performance by making it ultra-simple to measure technical skill sets. This could apply both to screening new hires and to doing a more systematic review of existing staff to see who knows what (so peers and managers can be more effective in divvying up tasks and focusing professional development).

In both cases, they believe there are two personas that are the primary customers: 'Helen the HR Manager' and 'Frank the Functional Manager'. 'Chris the Candidate' (being interviewed) and 'Steve the Staff Member' are users, but probably fairly passive ones, so it's really Helen and Frank who primarily interest Enable Quiz (see Personas Tutorial- Examples for more on these).

These personas are a starting point, but since they're (we think) the pivotal personas, the team at Enable Quiz will almost certainly need to re-segment them. We know every individual is vastly different, so, of course, there's always an avenue to split personas into sub-types. But why and how in this case? First and foremost, re-segmenting the personas will allow them to identify an actionable early market where they can focus and acquire their first few 'beachhead' customers.

The table below describes focal hypotheses around Enable Quiz’s persona hypotheses:

Priority | Hypotheses | Needs Proving? | Experimentation
1 | The Hiring Manager and Functional Manager personas exist in roughly the form we've described, and they are collectively responsible for technical recruiting and hiring. | Not really. There may be minor variations on this, but the arrangement is pretty standard; across dozens of companies they haven't seen anything much different. | ditto
1 | We understand these personas well and what they think about technical recruiting, hiring, and skills management (in the form of think-see-feel-do). | Yes, definitely. | Observing consistent results on discovery interviews
4 | Helen is likely to be our primary buyer, Frank a possible influencer and approver. | Yes, definitely. | Observing consistent results on discovery interviews; trying to generate pre-sales; asking who has purchased (roughly) comparable services like online recruiting, and how

The output of this section is a set of validated personas, including think-see-feel-do.

Example Problem Hypotheses

Enable Quiz is generally interested in the problem space of assessing technical talent, with an emphasis on lightweight tests for tactical decision making. Their goal with the problem hypothesis is to flesh out all the material problem scenarios and then validate (or invalidate) which ones substantially exist. They also want to clearly understand their personas' interaction with current alternatives, so they can

a) validate that they really understand the problem scenarios (if you can't identify an alternative, you likely don't)
b) better connect with customers’ current perception of the problem when they start customer creation (aka selling)
c) provide a backdrop/baseline for their value hypothesis- how much better is their solution than the alternative?

Note: It’s useful to divide up these hypothesis areas for organizational and analytical clarity, but the different areas will and should comingle. For  example, the team may (hopefully) refine their personas on the basis of which sub-types of their personas have the most acute problem scenarios. And this is a big win, allowing them to more reliably get a win in their early market.

The table below describes focal hypotheses around Enable Quiz’s problem hypotheses:

Priority | Hypotheses | Needs Proving? | Experimentation
1 | Screening technical hires for skill sets is difficult, and most companies wish they could do it more effectively vs. their current alternatives. This wish is on their 'A list' of problems. | Yes. Sure, the problem probably exists to some degree- but how important is it? To whom? What are they doing now? | Discovery interviews: does it come up as a problem, unprompted?; test pre-release promotion and sign-ups for more info
2 | The HR manager perceives this as important. | Yes, definitely. | (ditto)
2 | The hiring manager perceives this as important. | Yes, definitely. | (ditto)
2 | This is frequently relevant in the area of {IP networking, Linux sysadmin, Microsoft sysadmin, Java, PHP, Ruby, .NET, QA, devops, development management}. | Yes, definitely- it's important to prioritize which areas are hottest and which (if any) are most frequently relevant to the early market. Some topics may also lend themselves to this type of test better than others. | Discovery interviews: check topics; Google Trends & Keyword Planner (search trends, keyword value, monthly searches on 'hire ruby developer', etc.)
1 | Quizzing existing staff to understand who knows what at the current time would be useful and actionable, and managers wish they could do it more effectively vs. alternatives. This is on their A-list. | Yes, definitely. | Discovery interviews: does it come up as a problem, unprompted?; test pre-release promotion and sign-ups for more info
2 | The above is useful because it would help match team members and tasks more effectively. | Yes, definitely. | (ditto)
2 | The above is useful because it would help with intra-team learning and professional development. | Yes, definitely. | (ditto)
2 | Current staff would find such a quiz at worst a benign admin task and at best a fun, friendly competitive diversion; they won't find it too judge-y or top-down. | Yes, definitely. | Ditto, but with technical staff ('Steve the Staff Member') more so than managers

The output of this section is a set of validated problem scenarios, including a careful description of the current alternatives.

Example Value Hypotheses

So, does anybody want some?

Since Enable Quiz is a synthetic company, let's take the liberty of supposing that in their customer discovery they found the problem scenario around screening potential new hires to be the most acute. The current alternatives were calling cagey references and asking a few probing questions while trying not to be a jerk and take up the whole interview.

Now the question is whether a lightweight quizzing solution would deliver on that problem scenario and exceed the alternative enough to fuel sales and reliable customer creation.

The tables below describe some of Enable Quiz's more detailed hypotheses.

Attention, Interest, Desire

Priority | Hypotheses | Needs Proving? | Experimentation
3 | If we run a program of Google AdWords ads, we'll find ads that have a click-through rate (CTR) >2%. | Yes. | Here is one example ad- Headline 1: 'Ruby', 'Devops', 'Golang'...HUH? Headline 2: Are you a technical recruiter? Description: We make screening engineering candidates easier through a simple online tool. See [doc xyz or your Google AdWords account] for other ads. [No need to include this for your assignment- it's just an example of how this template might relate to the rest of your project.]
3 | If we get HR managers to a landing page with a demo, 10% will click through to another page for more information. | Yes. | We'll probably have a few landing page variants for the different types of AdWords ads.
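One practical note on validating a CTR threshold like the >2% above: with small samples, an observed rate above the threshold can easily be noise. Here's a rough sketch of a sanity check using a normal-approximation confidence interval; the click and impression counts are made up for illustration:

```python
import math

def ctr_lower_bound(clicks, impressions, z=1.96):
    """Lower bound of a ~95% normal-approximation confidence interval for CTR."""
    p = clicks / impressions
    se = math.sqrt(p * (1 - p) / impressions)  # standard error of the proportion
    return p - z * se

# Illustrative numbers, not real campaign data: observed CTR = 45/1500 = 3.0%
lb = ctr_lower_bound(clicks=45, impressions=1500)
print(f"CTR lower bound: {lb:.2%}")
print("Hypothesis (CTR > 2%) supported" if lb > 0.02 else "Keep testing")
```

With very small click counts, the normal approximation gets shaky; an exact binomial test is safer, but the structure of the check is the same.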

Action, Onboarding, Retention

Priority | Hypotheses | Needs Proving? | Experimentation
3 | If visitors click through from the landing page to another page, then at least 2% of them will convert to a free trial. | Yes. |
3 | If users are on a free trial, 40% of their accounts will create at least one quiz. | Yes. |
3 | If we successfully onboard the HR manager with a relevant quiz for an open position, they will use the quizzes for all the candidates they interview. | Yes. | Execute a manual 'concierge' experiment on the quiz process
3 | If users create a new quiz, at least 30% of them will use their quiz at least five times. | Yes. |
3 | If the HR manager uses the screening quiz with all candidates for a given position, [x]% fewer unqualified candidates will make their way to the functional manager. | Yes. | Execute a manual 'concierge' experiment on the quiz process
3 | If users use their quiz at least five times, 20% of them will convert from a free trial to a paid plan. | Yes. |
3 | If we offer the service at [x] price with [y] supplemental assistance, companies that hire a lot of engineers will pay [z]. | | Try pre-release sales; pair with conclusion of concierge test

Example Customer Creation Hypotheses

Once Enable Quiz knows who they're selling to, what problem they're (really) solving, and how to deliver value against that problem, they need a way to economically and repeatably connect with that demand.

And this may involve a couple of different recipes, particularly as they transition from an early market of enthusiasts to wider distribution in the larger population of more pragmatic buyers (followers).

The table below describes focal hypotheses around Enable Quiz’s customer creation hypotheses. Notice that they’re looking at two channels: 1) direct sales and 2) online advertising and they’ve organized the hypotheses around those.

Each new channel begins with a priority 1 hypothesis with supplemental priority 2 items breaking it down across the AIDA framework.

Priority | Hypothesis | Needs Proving? | Experimentation
1 | Channel: Direct Sales. Enable Quiz can connect with demand economically through direct sales. | Yes. | See below
2 | Attention: If we give them a contact list, our salespeople can call on [x] qualified customers/day. | Yes. | Test and measure a limited set of sales activity (probably starting with a founder/senior person)
2 | Interest: If a salesperson calls on [x] qualified leads, they can schedule [y] meetings. | Yes. | ditto
2 | Desire: If a salesperson gets a meeting, they will see follow-up from the customer [x]% of the time. | Yes. | ditto
2 | Action: If we approach [x] qualified leads, [y] will close for a paid offer over $[z]. | Yes. | ditto
1 | Channel: Online Advertising. If we market to HR managers through an AdWords campaign, we'll achieve a cost per acquisition (of a free trial) of less than $[x]. | Yes. | Run an initial AdWords test
2 | Attention: If we run relevant AdWords ads, we'll get the attention of HR managers. | Yes. | Click-through rates of at least [x]%
2 | Interest: If we get the customer to a landing page with a demo, we can capture their interest. | Yes. | Achieve a bounce rate of less than [x]% on a winning variation of the landing page
2 | Desire: If we are able to show the demo page to an HR manager, we'll see desire. | Yes. | (This one is tough to observe through this channel. You could use the adjacent items- landing page bounce rate and sign-up conversion rate- as a proxy. Other channels and interactions, like direct sales and social, are much better for this.)
2 | Action: If we get HR managers to a landing page with a demo, [x]% will sign up for [our email product announcements, a free trial]. | Yes. | Achieve an [x]% conversion rate for the objective in question
2 | Onboarding: If HR managers sign up for a free trial, at least [x]% will create a quiz. | Yes. | Observe this in in-app analytics
2 | Retention: Only [y] portion of customers will require a support call; the rest will use the online help to onboard. | Yes. | ditto
1 | Retention: [x] portion of customers will renew/re-purchase. | Yes. | Observe this in in-app or external analytics
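Whichever channels they test, the comparison ultimately reduces to cost per acquisition: total channel spend divided by acquisitions from that channel. A minimal sketch with hypothetical spend and acquisition figures:

```python
# Sketch: compare cost per acquisition (CPA) across channels.
# All spend and acquisition figures are hypothetical placeholders.
channels = {
    "direct sales": {"spend": 8000.0, "acquisitions": 10},
    "online advertising": {"spend": 3000.0, "acquisitions": 25},
}

def cpa(channel):
    """Cost per acquisition: total spend divided by acquisitions."""
    return channel["spend"] / channel["acquisitions"]

# Rank channels from cheapest to most expensive acquisition
for name, data in sorted(channels.items(), key=lambda kv: cpa(kv[1])):
    print(f"{name}: ${cpa(data):.2f} per acquisition")
```

In practice you'd also fold in customer lifetime value per channel, since a cheaper acquisition isn't a better one if those customers churn faster.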

The output of this section is a repeatable, economical recipe for customer creation.

Example Usability Hypotheses

Let's assume Enable Quiz validates their key hypotheses and decides to 'persevere', building a working minimum viable product. Just to overcommunicate on this critical point: it's not a foregone conclusion that they get to this point with the idea as it is, and they should definitely not move to this point before they

  1. validate their persona, problem, and value hypotheses
  2. in many cases, validate their customer creation hypothesis (especially if their business model involves selling over the web)
  3. exhaust productive opportunities to field test the whole proposition with non-software/non-product MVPs

One other prefatory note for clarity: this material is more 'lean' in general than specifically Lean Startup. The practice of Lean Startup is about fundamental, strategic validation of new ideas. This material is more tactical- while none of these items individually is likely to make the venture sink or float, it is highly productive to list out key hypotheses about how customers will interact with the product and then figure out how to validate (or invalidate) them as quickly, cheaply, and above all as early as possible.

The “MTP”

When, and only when, I'm building an actual product, I like to start with a 'Minimum Testable Product'. The idea is to take everything you think is questionable about the user interface and artificially motivate users to test it before you put it out to your real users. While you don't want to paralyze yourself with analysis, done right this is relatively quick and easy, and it lets you unbundle the risk that users won't understand how to use your product/system from the risk that they aren't fundamentally motivated to use it to solve their problem.

The ‘minimal’ part means that the application doesn’t actually have to be ‘working’; the responses can be static/hard-coded. That’s workable because you’re artificially motivating/directing the user to attempt something specific so you can see whether they get how to do it in your product. In UI (user interface) talk, this is also known as a ‘Wizard of Oz’ prototype.

A specific example: I recently built brandlattice with some collaborators. It’s a web app that involves a lot of drag and drop, something most users still aren’t expecting from a web app, so we knew we had a substantial risk that users just wouldn’t get that part and would get stuck. So we tested it, and lo and behold, they got stuck. We had a toolbox of fixes to help them along, which we then tested in ascending order of disruptiveness to the overall experience. We tested a few and validated one that had a high success rate (and, fortunately, was minimally disruptive).

Without going into the details of Enable Quiz’s possible user interface, let’s say that they see the HR Manager’s progression from the available technical topics to creating an actual quiz as a big possible hurdle. They’ve discussed (argued about?) a few approaches, and naturally they’d like to go with the simplest. So they build a simple front-end prototype and formulate the hypothesis & experiment something like this:

| Priority | Hypothesis | Needs Proving? | Experimentation |
|---|---|---|---|
| 1 | If we provide them a self-service interface, the HR manager will be able to create the quizzes based on the available job descriptions. | Yes. | Usability test with interactive prototype |
| 2 | The HR manager persona will understand the quiz creation process as presented and be able to complete it at least 90% of the time. | Yes. | (see above) |

Again, only undertake this step if you’re good on 1, 2, and 3 above, and if you feel like you’re mired in user testing, just move on: the main goal is to validate or invalidate your fundamental proposition.

Concierge Experiment: Enable Quiz


Example (Enable Quiz Concierge Test)

**What hypothesis will this test?**

This MVP will test our high-level Value Hypothesis:

If Enable Quiz offers companies that hire engineers lightweight technical quizzes that screen job candidates for engineering positions, then these companies would trial, use, adopt, and pay for such a service.

**How will we test it?**

We’ll start with custom-built quizzes on Google Forms to assess the basic value of the product to the HR manager. We have recruited five HR managers from our customer discovery work who have agreed to participate. Each has 1-2 open positions for which we have custom-designed screening quizzes based on the specifics of the open position.

The quizzes have been made available to the HR managers, and we’ve finished day-0/usability testing to validate that they know how to administer the quizzes and find the scores (which we post to a Google Doc for them after grading the quizzes by hand).

**What is/are the pivotal metric(s)? What is the threshold for true (validated) vs. false (invalidated)?**

Unpacking our high-level hypothesis, we’d like to test:

1. If we create position-specific quizzes for HR managers, they’ll use them ~100% of the time and, after two positions, be willing to pay. For this, our metric is [quizzes administered]/[candidates interviewed]. We’ll measure [quizzes administered] based on the number of position-specific quiz forms we receive. We’ve added a checkbox for ‘this is a test’ to make it easier to discard junk forms. Also, there’s a name or initials field which we use to correlate back to the interviews. We screened the HR managers to make sure they keep systematic calendars of the interviews they do, so that even if they don’t track the count of candidates, we can work with them after the fact to check it. Our target threshold on this is 90%. Given the hand-held setup we’re providing, if the quiz isn’t compelling enough that the HR managers use it for most job candidates, then we’ll likely need a substantial pivot.

2. If the HR managers use the quiz, they’ll send through fewer than half as many candidates. For this, our metric is a comparison of the portion of candidates screened out by the functional manager: baseline vs. with the quiz. This test will be an approximation. Based on interviews with both HR & functional managers, around ⅔ of candidates are screened out by the functional manager based on some material deficit in skill set. We’ve provided a working Google Doc for HR managers to use in post-mortems for cases where they don’t already track this. We’ll check in with them weekly to (gently) keep this form up to date, but we expect only moderate upkeep. We’d like to see the ratio of candidates screened out drop to roughly ⅓. This may be aggressive, particularly since we’ve erred on the side of ‘easier’ quizzes to avoid false positives (incorrectly screening out candidates with a possibly adequate skill set).

3. If we offer the service at [x] price with [y] supplemental assistance, companies that hire a lot of engineers will pay [z]. We will measure this by our ability to sell a package where we charge $100 for a subsequent custom-created quiz. We believe this is a better test than a pre-pay for the service, since we think a transaction for a few hundred dollars would be difficult/not adequately compelling for an HR manager to sell internally. We’d like to see at least 50% of the subjects opt for a subsequent quiz, assuming success on the above two tests.
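As a sanity check, the three pass/fail tests reduce to simple ratio comparisons. Here is a minimal sketch in Python; the thresholds (90%, ~⅓, 50%) come from the experiment design above, while all the counts are hypothetical numbers invented for illustration:

```python
# Sketch of the three concierge-test checks. Thresholds are from the
# experiment design; every count below is a made-up example value.

def usage_rate(quizzes_administered, candidates_interviewed):
    """Test 1: portion of interviews where the quiz was used (target >= 90%)."""
    return quizzes_administered / candidates_interviewed

def screen_out_rate(screened_out, total_candidates):
    """Test 2: portion screened out by the functional manager (target ~1/3, down from ~2/3)."""
    return screened_out / total_candidates

def upsell_rate(paid_followups, subjects):
    """Test 3: portion of subjects buying the $100 follow-up quiz (target >= 50%)."""
    return paid_followups / subjects

# Hypothetical results across the 5 subjects:
test1 = usage_rate(quizzes_administered=93, candidates_interviewed=100) >= 0.90
test2 = screen_out_rate(screened_out=32, total_candidates=100) <= 1 / 3
test3 = upsell_rate(paid_followups=3, subjects=5) >= 0.50

print(test1, test2, test3)  # all three must hold to proceed to the 1.0 build
```

Writing the checks down this way forces the team to agree, before the data arrives, on exactly what counts as validated.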
**What will you do next if the result is true? False?**

If all three tests validate, we will proceed with a 1.0 of the Enable Quiz software, limited to just a few specific topics (see the experiment below for decision-making on that).

If only tests 1 & 2 pass, we will consider the circumstances and reasons for that and review price point, purchaser, and, likely, the actual value proposition itself.

If no tests pass, we will step back and consider a) whether a different take on the value proposition might be relevant and b) whether the problem is truly important.

**How much time and money will it take to set up?**

Based on the current 5 technical topics, we estimate that total setup for all 5 subjects will involve:

- 20 hours of work by our product lead to set up, user test, and document (for users) the quiz infrastructure on Google Forms
- 40 hours of work by our technical lead to formulate and validate (with subjects) the quiz questions across the 5 subjects

**Roughly, what will it take for each individual test?**

For each subject (5), we think it will take our product lead:

- 3 hours for initial Q&A and onboarding (including travel, etc.)
- 3 hours across the quizzing to answer misc. questions
- 3.5 hours to grade the quizzes (assuming 20 quizzes/position)

plus 1 hour of misc. follow-up by our technical lead.

**Roughly, how long will it take for each test to run and produce definitive, actionable results?**

The interview cycle runs 3-5 weeks, after which we expect to have a full set of results on hand.
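To see the total commitment, the setup and per-subject estimates roll up as below. The hour figures are taken from the worksheet; the roll-up arithmetic itself is our own sketch:

```python
# Rough effort roll-up for the concierge test, using the estimates above.
SUBJECTS = 5

setup_hours = 20 + 40                    # product-lead setup + technical-lead quiz questions

per_subject_product_lead = 3 + 3 + 3.5   # onboarding + misc. Q&A + grading (20 quizzes/position)
per_subject_technical_lead = 1           # misc. follow-up
run_hours = SUBJECTS * (per_subject_product_lead + per_subject_technical_lead)

total_hours = setup_hours + run_hours
print(setup_hours, run_hours, total_hours)  # 60 52.5 112.5
```

Roughly 113 hours end to end: cheap relative to building the software first and finding out the value hypothesis was wrong.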

AdWords Experiment: Enable Quiz

Example (Enable Quiz AdWords Test)

**What hypothesis will this test?**

This MVP will test our hypothesis about which technical topics are most promising for our hypothetical 1.0. There are many to choose from, and our intuition is that the right topics will a) be popular/in demand with employers, b) overlap with the market we can reach, and c) be affordable with regard to keyword phrases.

**How will we test it?**

We have assembled a list of popular topics and workable keyword phrases (‘hire [Ruby] developer’, etc.) and plan to run comparative Google AdWords campaigns to determine the top 10 most promising topics.
**What is/are the pivotal metric(s)? What is the threshold for true (validated) vs. false (invalidated)?**

The pivotal metrics here are:

1. Absolute click-through rate (CTR). After a few iterations, we’d like to see a CTR of 2% on any topic we consider. Below this, we’re not sure our current Customer Creation Hypothesis holds together. We’d like to see at least ~100 impressions on each iteration, with an estimate of 2 iterations/topic (this is a blend, since we’re planning to use similar patterns across topics).

2. Comparative CTR. Beyond this, we’ll initially rank topics by CTR.
**What will you do next if the result is true? False?**

If true, we will pursue a 1.0 of the product with the top 10 topics.

If false, in that none of the CTRs are >2% after we feel we’ve tested a reasonable set of alternative keywords and ad + landing page combos, then we’ll a) revise our Customer Creation Hypothesis and consider alternative Channels and b) pursue an alternative assessment strategy (example: looking at job postings from target customers).

**How much time and money will it take to set up?**

Setting up and tuning the campaign (including AdWords & landing page creation and iteration) will take:

- 20 hours by our product lead
- 20 hours by our ‘growth hacking’/marketing contractor, costing $1,600

**Roughly, what will it take for each individual test?**

The above includes both setup and our estimate on tuning. After that, we should have a usable set of results.

**Roughly, how long will it take for each test to run and produce definitive, actionable results?**

Based on the search frequency of our preliminary keywords and the need to iterate, we think we’ll need 10 days for each test to run.
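The topic screen itself is just a threshold-and-rank over CTR. A minimal sketch, assuming hypothetical impression/click counts (the 2% threshold and top-10 cut are from the design above; note that at ~100 impressions a 2% CTR is only about 2 clicks, which is why multiple iterations per topic are planned):

```python
# Sketch of the AdWords topic screen: keep topics clearing the 2% CTR
# threshold, ranked best-first. All impression/click counts are made up.

THRESHOLD = 0.02   # minimum CTR from the experiment design
TOP_N = 10         # topics to carry into the 1.0 build

results = {        # topic -> (impressions, clicks); hypothetical numbers
    "Ruby": (120, 6),
    "Java": (110, 2),
    "Python": (105, 4),
}

ctr = {topic: clicks / impressions for topic, (impressions, clicks) in results.items()}
passing = sorted((t for t in ctr if ctr[t] >= THRESHOLD), key=ctr.get, reverse=True)[:TOP_N]
print(passing)  # topics above threshold, best CTR first
```

With these example numbers, Java (CTR ≈ 1.8%) falls below the threshold and drops out of the ranking.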

Example 2: Initiating Lean Startup in an Enterprise Project