Your Lean Startup

What did you do today to guide your venture (or project) to the outcome you want?

How will you know if it worked?

Does your team have a visible plan that easily allows them to prioritize completing task A vs. task B, C, or D?

Do they understand why?

Lean Startup is about delivering quality answers to these questions, questions we should all be asking ourselves.

What’s ‘Lean Startup’?

The Lean Startup book and movement grew out of Eric Ries’ work applying principles from the lean manufacturing movement to the creation of startups. The goal of lean is to eliminate waste, ‘muda’ in Japanese. In this context, a startup is any venture that hasn’t yet validated a ‘product/market fit’, meaning a proposition it can reliably sell to a particular type of customer.

Lean emphasizes use of the scientific method (hypothesis testing and observational learning) and the efficiency of ‘small batches’: doing things incrementally, on a success basis.

The unique importance of all this to any innovative company (which is pretty much any growth company) is both non-obvious and a breakthrough. Pretty much everything you currently learn about ‘business’ was created for operating a factory that produces commodity widgets. For such a business, 5-year plans are great, the assumption of perfect information is relatively valid, and your conventional MBA will serve you well. For any innovation-based business, these techniques are grossly inadequate and will generate massive waste.

Example Lean Startup Assumptions
Template Lean Startup Assumptions

Lean in Action

[Figure: the scientific method applied to Lean Startup]

Two core practices underlie lean: 1) use of the scientific method and 2) use of small batches. Science has brought us many wonderful things. Particularly when dealing with the unknown (aka innovation), it’s good to be explicit, hands-on, and data-driven about whether your innovative new idea is a money maker or an irrelevant novelty (I’ve had some of both, but more of the latter).

The use of small batches gives you more shots at a successful outcome, which is particularly valuable when you’re in a high-risk, high-uncertainty environment. A great example from Eric Ries’ book is the envelope-folding experiment: If you had to stuff 100 envelopes with letters, how would you do it? Would you fold all the sheets of paper and then stuff the envelopes? Or would you fold one sheet of paper, then stuff one envelope? It turns out that doing them one by one is vastly more efficient, and that’s just on an operational basis. If you don’t actually know whether the letters will fit or whether anyone wants them (more analogous to a startup), you’re obviously much better off with the one-by-one approach.
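As a back-of-the-envelope illustration (my numbers, not from the book), here’s a tiny sketch of why small batches shorten the feedback loop: if the letters turn out not to fit, the batch size determines how much folding work is wasted before you find out.

```python
# Illustrative sketch: how much work is wasted before the first piece of
# feedback (a stuffing attempt) reveals that folded letters don't fit?

def wasted_folds(batch_size: int, total: int = 100) -> int:
    """Sheets folded before the first envelope-stuffing attempt.

    With a full batch, every sheet is folded before the first stuffing
    attempt exposes the misfit; with one-piece flow only one fold is at risk.
    """
    return min(batch_size, total)

print(wasted_folds(batch_size=100))  # fold-them-all-first: 100 wasted folds
print(wasted_folds(batch_size=1))    # one-by-one: 1 wasted fold
```

The same logic applies to whole ventures: the smaller the batch between you and real feedback, the less work you stand to lose when an assumption turns out wrong.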

So, how do you do it? In six simple (in principle) steps!

  1. Start with a strong idea, one where you’ve gone out and done strong discovery that is packaged into testable personas and problem scenarios.
  2. Lay out all the things that have to be true for your idea to succeed (you’ll find a few standard starters in the section below).
  3. Figure out how you’ll prove or disprove these using ‘small batches’ with a minimum of time and effort. Much of the time, you can do this without building any actual product.
  4. Get focused on your tests. Don’t worry about anything else- it probably won’t matter unless you prove out your idea.
  5. Decide: did you prove out this idea, and is it time to throw more resources at it? Or do you need to reformulate and re-test? There’s no shame in the second outcome- the core virtue of Lean Startup is the recognition that startups are high risk, so it makes sense to avoid waste and give yourself multiple shots at a winner.
  6. Revise or scale/persevere. If you’re pivoting and revising, the key is to make sure you have a strong foundation in customer discovery (see #1) so you can pivot in a smart way based on your understanding of the customer/user.

The six sections below describe these steps in more detail. While the focus is on lean and Eric Ries’ work on Lean Startup, a few other techniques (like design thinking and customer personification) are important complements and I’ll reference those as well.

01 Developing High Quality, Testable Ideas

This is the story that sells for sites, publications, TV news: young founders dream up a brilliant idea, code it, and the next morning are acquired for a billion dollars! There’s nothing wrong with a little harmless fantasy, but the reality is that few startups (even successful ones) actually take this course. For most, it’s a marathon of trying things and seeing what works. Did you know Rovio was on the verge of bankruptcy when they released Angry Birds? And that they paid the bills by making games for other companies? I’m not saying doing a startup isn’t fun. It is. I can’t imagine anything better than working with a great team on learning how to build something that matters. But following a fake, media-generated script will probably lead to stress and disappointment, and that’s not fun. Let’s talk about how to create strong, actionable ideas.

Your fundamental job is to build empathy for your customer (users and buyers). If you could correlate ‘customer empathy acquired’ against venture success, I bet you’d see a very tight correlation. Applying empathy to directed creativity is what the popular rubric of ‘design thinking’ is about. You can learn how to do all this in the customer discovery and personas tutorials.

A good way to make sure you have your bases covered with personas is the Think-See-Feel-Do checklist: what does your persona think, see, feel, and do in your area of interest? (Again, more on that in the tutorial above.) Following that, you’ll want to frame the value propositions you plan to deliver to the customer in terms of problem scenarios and alternatives. Most of us start with the spark of an idea, ‘Hey, wouldn’t it be cool if …’, and that’s fine. But your understanding of the customer will be much more relevant, accurate, actionable, and testable if you can relate the propositions to customer problem scenarios and current alternatives.

02 Focusing Testable Assumptions (Hypotheses)

If you organize your customer discovery as we reviewed above, it summarizes naturally into what I call a ‘product hypothesis’.

PRODUCT HYPOTHESIS

A certain [Persona(s)] exists…

…and they have certain [Problem Scenario(s)]…

…where they’re currently using certain [Alternatives]…

…and I have a [Value Proposition(s)] that’s better enough than the alternatives that the persona will buy/use my product.
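As a sketch (the class and field names here are mine, not any standard tooling), the template above can be captured as a small structure so that each element stays explicit and testable:

```python
from dataclasses import dataclass

# Hypothetical structure mirroring the product hypothesis template above;
# the class name and fields are illustrative, not from any library.

@dataclass
class ProductHypothesis:
    persona: str                  # who we believe exists
    problem_scenarios: list[str]  # what we believe they struggle with
    alternatives: list[str]       # how we believe they solve it today
    value_proposition: str        # why we believe ours is better enough

    def summary(self) -> str:
        return (f"A certain [{self.persona}] exists, has problems "
                f"{self.problem_scenarios}, currently uses "
                f"{self.alternatives}, and our [{self.value_proposition}] "
                f"is better enough that they will buy/use it.")

eq = ProductHypothesis(
    persona="HR manager at a company that hires lots of engineers",
    problem_scenarios=["screening technical candidates for skill sets"],
    alternatives=["reference checks", "ad hoc interview quizzing"],
    value_proposition="lightweight, position-specific screening quizzes",
)
print(eq.summary())
```

The point of writing it down this explicitly is that every field is a claim you can go out and test, rather than a vague feeling about the market.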

Most students and advisees I work with find this focal point helpful for keeping their work on personas and problem scenarios linked to their ideas about value and their experiments on buyer motivation. The table below organizes key assumption areas into five chunks. (They’re in the form of questions, not assumptions, since I think that’s an easier starting point.)

Persona Hypothesis
  • Does this persona exist? Can you name or find 5-10 examples? Can you identify them out in the real world?
  • Do you understand them really well?
  • Do you understand how they relate to your area of interest? Think? See? Feel? Do?

Problem Hypothesis
  • Do the problems you’re solving really exist? Is it more of a ‘job to be done’ or a need or desire?
  • How important is the problem or problems?
  • How is the customer solving them now? With what alternatives?

Value (Motivation) Hypothesis
  • How much better than the best alternative is your product at delivering on the problem?
  • How obvious is that to the customer?
  • How will you test that without just asking ‘do you want this?’ (because that doesn’t work)?

Usability Hypothesis
  • Assuming the user is motivated by a perception of value and/or experiences they find valuable, are you presenting them a usable interface to that experience?

Customer Creation Hypothesis
  • Can you get this customer’s attention? Capture their interest? Connect with a strong fundamental desire?
  • Is the action they have to take easy enough that they purchase?
  • Do they onboard with usage? How are retention and word of mouth?

The areas above are progressive, meaning that you should generally move through them sequentially and that if you’re stuck on one you’ll probably have trouble downstream.

Once you have a working view of where you are and what’s important in those areas, you’ll want to start breaking them down into individual, testable assumptions. Let’s pause a minute: this may feel like a lot of stuff, a lot of work, but remember, this is a systematic way to work through basically everything about your venture. Back to those assumptions- I like the following format:

FORM OF AN ASSUMPTION

If we [do something] for [persona], they will [respond in a certain way]. 

That format anchors to the key elements of your work and establishes causality. You’ll also want to consider which assumptions, in fact, need proving and how you’ll do that.
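A minimal sketch of that format as a fill-in-the-blanks template (the helper function is hypothetical, purely for illustration):

```python
# Illustrative helper: render an assumption in the
# "If we [do something] for [persona], they will [respond]" format.

def assumption(do_something: str, persona: str, response: str) -> str:
    return f"If we {do_something} for {persona}, they will {response}."

print(assumption(
    "offer a lightweight quizzing app",
    "HR managers at companies that hire a lot of engineers",
    "convert to paid subscriptions after an unpaid trial",
))
```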

The following table has some examples for a fictional company called ‘Enable Quiz’ that provides lightweight quizzes to help managers screen the core skill sets of candidates for technical jobs (for more on them see the Venture Concepts page).

Assumption 1 (PROBLEM)
  • Key assumption: If we ask non-leading questions about what’s problematic for HR managers in the hiring/recruiting process, we’ll consistently hear that screening technical hires for skill sets is difficult and they wish they could do it more effectively.
  • Priority: 1
  • Needs proving? Yes. Sure, the problem probably exists to some degree- but how important is it? To whom? What are they doing now?
  • Experimentation: Discovery interviews with HR managers at companies that hire a lot of engineers/technical talent.

Assumption 2 (VALUE)
  • Key assumption: If we offer HR managers at companies that hire a lot of engineers a lightweight quizzing app, they will convert to paid subscriptions after an unpaid trial.
  • Priority: 1
  • Needs proving? Yes, definitely.
  • Experimentation: 1) Execute a manual ‘concierge’ experiment on the quiz process; 2) Make some early pre-release sales; 3) Test pre-release promotion and sign-ups for beta programs by topic area (Linux, Ruby, .NET, etc.).

Regarding the Priority column, I like to keep the assumptions carefully prioritized and layered with the following scale:

1: Pivotal assumption. If this is disproven, the venture needs to be canned or go through a fundamental pivot.
2: Child of a pivotal assumption (same assumption but more detail, specificity)
3: Child of above.
(end truly pivotal assumptions)
4: Extremely important assumption. This assumption substantially affects key profit drivers.
5: Important assumption. This assumption affects key profit drivers.
6-10: Tactical assumptions for incremental improvements in various areas.
X: Not sure of the priority of this assumption. Not being sure of the priority is much better than skipping it!
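A sketch of how you might keep a backlog ordered on this scale; surfacing ‘X’ items first for triage is my assumption, following the point that an unknown priority is better than a skipped assumption:

```python
# Hypothetical assumption backlog kept ordered by the 1-10 priority scale.
# 'X' (priority unknown) is sorted to the top so it gets triaged, not lost.

assumptions = [
    ("VALUE: trial users convert to paid subscriptions", 1),
    ("Quiz topics beyond Linux/Ruby/.NET are in demand", 6),
    ("HR managers will share quizzes with functional managers", "X"),
    ("PROBLEM: screening technical hires is painfully hard", 1),
]

def sort_key(item):
    text, priority = item
    # Unknown priorities sort ahead of everything else for triage.
    return -1 if priority == "X" else priority

for text, priority in sorted(assumptions, key=sort_key):
    print(f"[{priority}] {text}")
```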

CLICK HERE FOR ASSUMPTIONS SECTION OF VENTURE DESIGN TEMPLATE

Having too many assumptions is just about as useless as not writing them down at all.

Next you should determine whether or not each assumption needs to be proven and why. For example, with Enable Quiz, the key Persona Hypothesis is that there are HR managers and functional (hiring) managers who hire technical candidates and that Enable Quiz can identify those individuals. This is important but doesn’t need proving.

Finally, I recommend making some initial notes on the experiment or experiments you think you might use to test the assumption. In fact, more ideas are better at this point- push yourself to consider how you might use a few of the experiment types you’ll read about in the next section before you narrow down the experiments you want to design and execute. For example assumptions, see Reference A below.

03 Designing Effective Experiments

Experimentation is where the rubber hits the road in the practice of Lean Startup. The keys to success are 1) keeping the experiments focused on obtaining a true/false result for one or more of your key assumptions and 2) being creative about designing the fastest, cheapest experiment that delivers on #1. Not all experiments have a quantitative output- this isn’t physics, and it’s valid to review a set of qualitative outputs (like customer discovery interviews) and make a judgement call on how/whether the results prove or disprove an assumption. Also, not all product tests require building product. We’ll go through some quick examples below and more in the MVP Case Studies section, but many of the best Lean Startup success stories didn’t require a single line of code.

Persona Hypotheses

If you don’t know your customer well, it’s time to get to know them (not to mention validate that they exist). When I’m considering an idea, I brainstorm a list of the key personas and then make sure I can think of 5-10 real-world examples (who are obviously then good candidates for your first set of discovery interviews). Then you want to make sure you understand them well- what kind of shoes would they wear? Your primary vehicle for validating these hypotheses is customer discovery interviews.

You also want to make sure you understand how they think about your area of interest (using think-see-feel-do). You’re passionate about your idea, but be sure that doesn’t blind or bias you from seeing these people as they really are. Particularly at this early stage, it’s important to be focused on discovering what’s really important to this customer.

For example, Enable Quiz wants to sell lightweight technical quizzes to companies that hire engineers. They think the parties concerned are the HR manager and the functional manager (the manager in charge of the hiring). They’re going to go learn about these personas and what they think about the general area of technical recruiting and skills management.

When I teach Venture Design classes, I usually have students try to write up their personas as a first step- I always find it’s a good way to discover how little one really understands. For your persona hypothesis, this takes the place of the type of assumptions table you saw above. You’re looking to create characters for your narrative at this point, not collect statistically significant data, so I recommend against a questionnaire. That said, I do recommend prepping an interview guide for focus, consistency, and as a kind of checklist. Quality learning in this area will not only help you stay on the mark; if you later pivot, it will help you make vastly smarter pivots, steered by the empathy and understanding you have for your customer.

OUTPUT

Problem Hypotheses

Here you’re looking for opportunities- problems you can solve, needs and habits you can fulfill better than current alternatives. There are no new tasks in the enterprise; there are no new consumer behaviors. Do we do things differently than we did 20 or 50 years ago? Yes. Still, everything ultimately ties back to an existing need or behavior. Your job is to identify the problems where you have an opportunity to over-deliver against current alternatives. This one’s also executed primarily by way of customer discovery interviews.

For example, Enable Quiz generally wants to learn about technical hiring and skills management. They’ve hypothesized two problem scenarios:

1: Screening job candidates for technical skills is hard, and the lack of quality in that screening leads to bad outcomes.

2: Getting a view of staff skill sets is difficult, and that difficulty impairs team development and operation.

But they’re not presupposing they’re right about this- otherwise, what’s the point of doing these interviews? Their job is to get the HR managers and functional managers to speak freely about what their job is like in this area and what, if anything, they wish was better. A good practice during interviews is to start the questions around problem scenarios with questions like:

“Tell me about the last time you [filled an engineering position].”

“What’s hard about [filling an engineering position]?”

“What are the top 3 things you wish were better about [filling an engineering position]?”

Don’t lead the witness. These questions are sequenced very purposefully: they progress towards an increasing degree of prompting. They would not want to ask a question like ‘Is it hard to screen for technical skill sets?’, at least not until the very end, because even a ‘yes’ provides very little validation. And in many cases you may actually hear ‘no’ for one reason or another, whereas if the interviewee really thought about the whole problem, you might get a more useful or actionable answer. What provides a lot of validation is if they ask the first question above and consistently hear things like:

– interviewing takes up lots of time

– you never really know what a candidate knows until they’re on the job

– I try to quiz them some in the interview but I don’t want to be a jerk and I don’t have all day

– I wish I could do more to screen candidates for the functional manager (from the HR manager)

If they consistently hear responses like that, they can conclude that, yes, they’re on to something.

Once you have a good sense of a few focal problem scenarios, work to understand the customer’s alternatives- how they’re solving the problem today. This will be an important counterweight to our next area, the value hypothesis. For the folks at Enable Quiz, that will be things like the HR manager’s process for checking references, interview guides for job candidates, the application forms they use, and even job descriptions. Let’s say we were building an app for parents to distribute allowances: if we find parents keep a list of completed chores up on the fridge to figure out how much allowance to pay, we’d want to snap a photo of that.

OUTPUT

  • (interview guide prep. and interviews per above)
  • Validated problem scenarios, including explanation of current alternatives

Value Hypotheses

Armed with real personas and validated problems, you now have to validate whether or not what you’d create is better enough than the alternatives to make a sale. The best way is to sell. This doesn’t necessarily mean collecting money, but it does mean having the customer take a tangible action, giving up something in exchange for what you have. That could be just a few minutes of their time to sign up on your landing page to receive emails from you.

What it definitely is not is asking the customer whether or not they would hypothetically buy your product in the future. This information is less than worthless because it creates a false validation that we desperately crave for our idea. When it comes time to actually go out and sell, you’ll likely find you’re standing on a plume of smoke. I say ‘likely’ because just about everyone will say ‘yes’ to a hypothetical sale- they don’t want you to feel bad, and even more so they don’t want to argue with you. See the Yellow Walkman Story for a great example.


Strong presentation of a problem scenario is blood in the water for you- if the current alternatives were really good, there wouldn’t be a strong presentation. But some problems are just really hard, and you need to make sure you understand what a superior (enough) delivery looks like. Also, you need to make sure the problem/need happens often enough. I met a serial entrepreneur who had written a beautiful, functional app for finding movie times. Users loved it. But it didn’t add up commercially because pretty much no one watches enough movies to use or care about the app enough for it to add up to a good business.

Returning to Enable Quiz as an example, there’s a value hypothesis in the assumption set we reviewed earlier:

Assumption 1 (PROBLEM)
  • Key assumption: If we ask non-leading questions about what’s problematic for HR managers in the hiring/recruiting process, we’ll consistently hear that screening technical hires for skill sets is difficult and they wish they could do it more effectively.
  • Priority: 1
  • Needs proving? Yes. Sure, the problem probably exists to some degree- but how important is it? To whom? What are they doing now?
  • Experimentation: Discovery interviews with HR managers at companies that hire a lot of engineers/technical talent.

Assumption 2 (VALUE)
  • Key assumption: If we offer HR managers at companies that hire a lot of engineers a lightweight quizzing app, they will convert to paid subscriptions after an unpaid trial.
  • Priority: 1
  • Needs proving? Yes, definitely.
  • Experimentation: 1) Execute a manual ‘concierge’ experiment on the quiz process; 2) Make some early pre-release sales; 3) Test pre-release promotion and sign-ups for beta programs by topic area (Linux, Ruby, .NET, etc.).

The folks at Enable Quiz have a few options on experiments they can run, and they should probably try all of them. Note: None of these requires writing a single line of software and that’s good because it will reduce the time and money required for them to decide whether they’re on to something or whether they should reformulate and pursue something else.

Option 1: Execute a manual ‘concierge’ experiment on the quiz process
First, they can run a ‘concierge’ experiment, which means ‘faking’ the customer experience to see if anyone cares about and wants what you’re thinking of building. In product design/UX, a similar technique is called ‘Wizard of Oz’ testing, meaning that you present the user with a front-end interface that’s talking to a ‘fake’ back end that doesn’t really do anything. In the case of Enable Quiz, they could manually create technical quizzes for open positions that potential customers are trying to fill. They’d give the quiz (and grading guide) to the HR manager and observe whether they actually use it, whether they get a better outcome in the hiring process, and whether they’re dying to know when the product comes out for real.

Should Enable Quiz charge for this? That has pros and cons, but mostly pros: 1) getting even a token payment validates real demand (remember the spiel about ‘tangible actions’?), and 2) customers are notoriously flaky about following through with free trials, since they feel they have little at stake. Generally, I recommend erring on the side of trying to get paid if you’re providing something substantial. If it doesn’t work or it’s slowing things down, in lean fashion you can pivot and go with an unpaid approach.

Option 2: Make some early pre-release sales
This brings us to our second option: simply pre-selling the actual service. Even if the order/contract the customer signs isn’t strictly binding, it still passes the tangible action test. Companies don’t like to sign things if they don’t have to- it creates the potential for a later nuisance, kind of like when an individual signs up for your email newsletter. Enable Quiz would need to create some kind of incentive for signing up early, which could be deep discounts or (better) free personal support. Note: this doesn’t need to be exclusive with the first option. Enable Quiz could experiment with both approaches across different representative companies.

Option 3: Test pre-release promotion and sign-ups for beta programs
Finally, they could run an experiment where they post various versions of the propositions via Google AdWords that link to landing pages where the call to action is to sign up for a newsletter on product updates. Side reminder: this is why it’s so important to take note of the specific language customers use to talk about their problem; using natural language to connect with relevant desires is the best route to economical success on Google. This test has the advantage of also testing a relevant customer creation technique, which brings us to our next area, the Customer Creation Hypothesis.
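To illustrate how the results of option 3 might be read, here’s a sketch with invented traffic and sign-up numbers comparing per-topic landing pages:

```python
# Hypothetical landing-page results per topic area (all numbers invented).
visits = {"Linux": 420, "Ruby": 380, ".NET": 510}
signups = {"Linux": 34, "Ruby": 9, ".NET": 51}

# Compare sign-up rates to see which beta topics show real pull.
for topic in visits:
    rate = signups[topic] / visits[topic]
    print(f"{topic}: {rate:.1%} sign-up rate")
```

Relative differences like these would steer which topics to build first, though a real decision should also weigh sample size and how well the ads were targeted.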

Following this, you want to step through your experiment design. I highly recommend working through all the sections below to arrive at a useful, investable experiment.

ENABLE QUIZ CONCIERGE TEST

What assumption will this test?
This MVP will test our high-level Value Hypothesis: if Enable Quiz offers companies that hire engineers lightweight technical quizzes that screen job candidates for engineering positions, then these companies will trial, use, adopt, and pay for such a service.

How will we test it?
We’ll start with custom-built quizzes on Google Forms to assess the basic value of the product to the HR manager. We have recruited five HR managers from our customer discovery work who have agreed to participate. Each has 1-2 open positions for which we have custom-designed screening quizzes based on the specifics of the position.

The quizzes have been made available to the HR managers, and we’ve finished day-0/usability testing to validate that they know how to administer the quizzes and find the scores (which we post to a Google Doc for them after grading by hand).

What is/are the pivotal metric(s)? What is the threshold for true (validated) vs. false (invalidated)?
Unpacking our high-level assumption, we’d like to test:

1: If we create position-specific quizzes for HR managers, they’ll use them ~100% of the time and, after two positions, be willing to pay. For this, our metric is [quizzes administered]/[candidates interviewed]. We’ll measure [quizzes administered] based on the number of position-specific quiz forms we receive. We’ve added a checkbox for ‘this is a test’ to make it easier to discard junk forms. Also, there’s a name or initials field, which we use to correlate back to the interviews. We screened the HR managers to make sure they have systematic calendaring of the interviews they do, so that even if they don’t keep track of the count of candidates, we can work with them after the fact to check the count. Our target threshold is 90%. Given the hand-held setup we’re providing, if the quiz isn’t compelling enough that the HR managers use it for most job candidates, then we’ll likely need a substantial pivot.

2: If the HR managers use the quiz, they’ll send through fewer than half as many candidates. For this, our metric is a comparison of the portion of candidates screened out by the functional manager- baseline vs. with the quiz. This test will be an approximation. Based on interviews with both HR and functional managers, around ⅔ of candidates are screened out by the functional manager based on some material deficit in skill set. We’ve provided a working Google Doc for HR managers to use in post-mortems for cases where they don’t already track this. We’ll check in with them weekly to (gently) keep this form up to date, but we expect only moderate upkeep. We’d like to see the ratio of candidates screened out drop to roughly ⅓. This may be aggressive, particularly since we’ve erred on the side of ‘easier’ quizzes to avoid false positives (incorrectly screening out candidates with a possibly adequate skill set).

3: If we offer the service at [x] price with [y] supplemental assistance, companies that hire a lot of engineers will pay [z]. We will measure this by our ability to sell a package where we charge $100 for a subsequent custom-created quiz. We believe this is a better test than pre-payment for the full service, since we think such a transaction for a few hundred dollars would be difficult/not adequately compelling for an HR manager to sell internally. We’d like to see at least 50% of the subjects opt for a subsequent quiz, assuming success on the above two tests.

What will we do next if the result is true? False?
If all three tests validate, we will proceed with a 1.0 of the Enable Quiz software, limited to just a few specific topics (see the experiment below for decision-making on that).

If only tests 1 & 2 pass, we will consider the circumstances and reasons for that and review price point, purchaser, and, likely, the actual value proposition itself.

If no tests pass, we will step back and consider a) whether a different take on the value proposition might be relevant and b) whether the problem is truly important.

How much time and money will it take to set up?
Based on the current 5 technical topics, we estimate that total setup for all 5 subjects will involve:
  • 20 hours of work by our product lead to set up, user test, and document (for the user) the quiz infrastructure on Google Forms
  • 40 hours of work by our technical lead to formulate and validate (with subjects) the quiz questions across the 5 topics

Roughly, what will it take for each individual test?
For each subject (5), we think it will take:
  • 3 hours by our product lead for initial Q&A and onboarding (including travel, etc.)
  • 3 hours across the quizzing to answer misc. questions
  • 3.5 hours to grade the quizzes (assuming 20 quizzes/position)
  • 1 hour of misc. follow-up by our technical lead

Roughly, how long will it take for each test to run and produce definitive, actionable results?
The interview cycle runs for 3-5 weeks, after which we expect to have a full set of results on hand.
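The threshold checks above reduce to simple arithmetic. Here’s a sketch with made-up trial numbers (none of these figures are real results) showing how each test would be scored:

```python
# Invented trial data for the three concierge-test thresholds above.

quizzes_administered = 46
candidates_interviewed = 50

# Test 1: usage rate vs. the 90% target threshold.
usage_rate = quizzes_administered / candidates_interviewed
test1_validated = usage_rate >= 0.90
print(f"Test 1 usage rate: {usage_rate:.0%}, validated: {test1_validated}")

# Test 2: screen-out ratio should drop from ~2/3 baseline to ~1/3.
with_quiz_screen_out = 0.30        # observed in the trial (invented)
test2_validated = with_quiz_screen_out <= 1 / 3
print(f"Test 2 validated: {test2_validated}")

# Test 3: at least 50% of subjects buy a subsequent $100 quiz.
paid_quiz_buyers, subjects = 3, 5
test3_validated = paid_quiz_buyers / subjects >= 0.50
print(f"Test 3 validated: {test3_validated}")

print("Proceed to 1.0:",
      all([test1_validated, test2_validated, test3_validated]))
```

Writing the thresholds down as executable checks before the trial starts is one way to keep yourself honest about what counts as validated.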

CLICK HERE FOR EXPERIMENTS SECTION OF VENTURE DESIGN TEMPLATE

Customer Creation Hypotheses

Once you’ve got an identifiable customer that reliably/repeatably buys into your proposition, you have the nucleus of the pivotal ‘product/market fit’. The last frontier is to make sure you can multiply that product/market fit on an economical, repeatable basis. There are certainly dozens of products out there that I’d buy if I were fully informed about them. But that’s never going to happen. We live in a world of imperfect information, and rising above the noise floor is a real challenge. Your primary assumption might be something like ‘We can acquire customers in an economical fashion.’

You’ll want to break that assumption down into more manageable pieces; otherwise you risk freaking out and just asking, ‘This product rules. Where the #@$# are all my customers!?’ For this, I like the ‘AIDA(OR)’ framework: Attention, Interest, Desire, Action. Since this framework is over 100 years old, I added Onboarding and Retention, important additional steps for the kind of products we sell today.

I really like storyboarding as a way to walk through the AIDAOR breakdown and keep it connected to your work on customer discovery (step 01). Here’s an example from Enable Quiz (without background notes but see below):

[Storyboard: Enable Quiz customer acquisition across AIDA(OR)]

Here’s the post it came from if you’re interested in the detail: Storyboarding Customer Acquisition.

The table below provides focal notes across the AIDAOR breakdown. Note: some of these assumptions and items will bleed across the AIDAOR stages somewhat. It’s not a hard science. The important thing is to make sure you have a clear, testable view of the customer journey that dovetails with (and/or updates) your work on customer discovery and personas.

Step Sample Assumption Notes
Attention On AdWords we can achieve click through rates of [x] at a cost of [y].We can achieve a viral coefficient of [x] on emails.Our salespeople can call on [x] qualified customers/day. This is where you start, obviously. How does the customer find out you even exist? How do you get them to click through to your site? To sit down with one of your representatives?I hear ‘someone will tell them about it’ and that’s OK but you should have a clear, measureable view  of how that word of mouth happens.Generally speaking, your job is to communicate with the customer persona in a way that’s relevant to them and present them their problem scenario in a compelling way, connecting with their understanding of it.
Interest We can achieve a bounce rate of [x] on our landing page.We can achieve a success rate on scheduled meetings from cold calls of [y]. They took a look what what you showed them- are you connecting with a relevant problem scenario? Credibly?Do you have a value proposition that they think will deliver on that problem scenario?There are a few tricks for getting attention, but this is where your work on customer discovery will really bear dividends- the best messaging is crafted around an intimate understanding of the customer. Many great items are also arrived at by iteration so just like everything else here, this is very much a place to A/B/..N test different versions of your proposition, different channels, etc. and learn what works. 
Desire Customers will share our messages at a rate of [x]. We’ll see comments and feedback showing that we’re really connecting with the problem the customer wants to solve. Remember the ‘Feel’ part of think-see-feel-do in the persona from step 01? This is where that becomes important. Most of us do a lot of what we do for reasons that are ultimately emotional. And you’re competing with a lot of other demands and distractions on your customer’s time. How are you connecting with them?
Action We’ll get a conversion rate of [x] from site visitors to paying subscribers. Our close rate on meetings will be [y]. This is whatever the customer has to do to buy your product. Make sure to keep it as simple as possible. Many companies make the mistake of creating a great product and promotion and then having a crummy sign-up process or onerous contracts. How will you know if you’re doing well here?
Onboarding [x] portion of customers will become active users. This is whatever’s required for the customer to a) start really using your product and b) make it a habit (consumer) or integral to their processes (business). How do you review and ensure customer success? This is the last place you want to be losing customers.
Retention [x] portion of customers will renew/re-purchase. How well are you doing on renewals? Up-sells? Word of mouth from existing customers? It’s much easier to work with the customers you already have.
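The sample assumptions above are mostly conversion rates that chain together: what survives one AIDAOR stage feeds the next. Here’s a minimal sketch (in Python, with purely illustrative made-up rates, not benchmarks) of how those stage assumptions compound into an end-to-end estimate you can test against reality:

```python
# Hypothetical AIDAOR stage rates -- illustrative numbers only, not
# benchmarks; substitute your own tested assumptions for [x] and [y].
funnel = [
    ("attention",  0.02),   # ad impressions -> clicks (CTR)
    ("interest",   0.60),   # clicks -> stayed on landing page (1 - bounce)
    ("desire",     0.25),   # engaged visitors -> sign-up intent
    ("action",     0.40),   # intent -> paying subscribers
    ("onboarding", 0.70),   # subscribers -> active users
    ("retention",  0.80),   # active users -> renewals
]

def walk_funnel(start_count, stages):
    """Compound the stage conversion rates, printing survivors per stage."""
    count = start_count
    for name, rate in stages:
        count *= rate
        print(f"{name:<10} {rate:>5.0%} -> {count:,.0f}")
    return count

renewing = walk_funnel(100_000, funnel)  # e.g. start from 100k impressions
```

Even with generous per-stage rates, the compounded number is small, which is why each row of the table deserves its own explicit, tested assumption.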

Product Development Hypotheses

product-development-hypothesis

Strictly speaking, this has more to do with design thinking and product development than with Lean Startup. Lean Startup primarily deals with a process for validating new ideas and ventures. This area has more to do with designing a good product, assuming someone is going to want it.

I mention it here mostly so that you can keep it in mind as you work through the process and to highlight the importance of validating your pivotal assumptions before looking at the detail of your implementation.

The primary input for most modern application development is agile user stories. The great thing about these is that they have a syntax where each atomic unit carries its own mini-assumption: ‘As a [persona], I want to [do something] so that I can [derive a benefit = your assumption about why they want to do this]’. If you reach successful validation, or you need to validate pivotal assumptions that absolutely require working product, be sure to keep that work connected with your customer discovery, and be sure to attach lean validation criteria to the ‘derive a benefit’ clauses of your stories. For more on how to do this, see: Your Best Agile User Story.
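To make the idea of attaching a validation criterion to the ‘derive a benefit’ clause concrete, here’s a small sketch. The story, persona, and threshold are hypothetical examples (borrowed loosely from the Enable Quiz example elsewhere in this tutorial), not a prescribed format:

```python
# Sketch: a user story whose benefit clause carries an explicit,
# testable validation criterion. All field values are hypothetical.
story = {
    "persona": "Helen the HR Manager",
    "action": "create a skills quiz from a template",
    "benefit": "screen candidates faster",   # the embedded assumption
    "validation": "most pilot users report screening took less time",
}

def render(s):
    """Format the story in standard agile user-story syntax."""
    return (f"As {s['persona']}, I want to {s['action']} "
            f"so that I can {s['benefit']}.")

print(render(story))
print("Validate by:", story["validation"])
```

The point of the structure is simply that the benefit clause never ships without a stated way to check it.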

04 Experimentation

Scientific-Method-Lean-Startup-Experimentation

If everything’s in order up to this point, this should be easy! Stay focused on the experiments, get them done, and then move on to a decision about whether to revise or move forward.

That said, nothing ever goes perfectly and distractions arise. The top determinant of successful experimentation (in new ventures) is focus. Creating output makes us feel good: do the work, cross it off the list, call it done. But that’s not how to assure your best possible outcome under uncertainty. With Lean Startup, you have to be ready to cross something off the list and then likely put it back on the list several times before you get it right. Emotionally, it’s daunting.

output-vs-outcome

Here are 3 tips to stay on track:

  1. For everything you’re doing on step 03 (designing the experiments), make sure you’ve visualized the moment where you interpret the results and make a decision. If you can’t visualize that moment, you probably need to tighten up your experimentation/discovery plan.
  2. If you’re thinking that an experiment isn’t going to deliver a definitive result, odds are you’re right. Stop it, fix it, repeat it.
  3. Every day, ask ‘What did I accomplish yesterday? What will I do today? How do those things tie to the outcome I’m pursuing?’ Here’s a post on a related technique: The Daily Do

The first bit of experimentation deals with customer discovery and validating your idea. I’ve found that’s somewhat unfamiliar territory for a lot of folks, so I put together the checklists below to help you step through that process. These checklists describe a few key items you should verify within the persona and problem hypotheses.

Checklist: Persona Hypothesis

  Hypothesis Experiment
✔︎ This persona exists (in non-trivial numbers) and you can identify them. Can you think of 5-10 examples?
Can you set up discovery interviews with them?
Can you connect with them in the market at large?
✔︎ You understand this persona well. What kind of shoes do they wear?
Are you hearing, seeing the same things across your discovery interviews?
✔︎ Do you understand what they Think in your area of interest? What do they mention as important? Difficult? Rewarding?
Do they see the work (or habit) as you do?
What would they like to do better? To be better?
✔︎ Do you understand what they See in your area of interest? Where do they get their information? Peers? Publications?
How do they decide what’s OK? What’s aspirational?
✔︎ How do they Feel about your area of interest? What are their triggers for this area? Motivations?
What rewards do they seek? How do they view past actions?
✔︎ Do you understand what they Do in your area of interest? What do you actually observe them doing?
How can you directly or indirectly validate that’s what they do?

Checklist: Problem Hypothesis

  Hypothesis Experiment
✔︎ You’ve identified at least one discrete problem (habit/need) Can you describe it in a sentence? Do others get it?
Can you identify current alternatives?
✔︎ The problem (habit/need) is important Do subjects mention it unprompted in discovery interviews?
Do they respond to solicitation (see also value and customer creation hypotheses)?
✔︎ You understand current alternatives Have you seen them in action?
Do you have ‘artifacts’ (spreadsheets, photos, posts, notes, whiteboard scribbles, screen shots)?

Hopefully those help you focus your thinking and progress on validating those early hypotheses. The Value and Customer Creation hypotheses lend themselves more readily to direct experimentation, as described above in step 04.

05 Pivot or Persevere?

Scientific-Method-Lean-Startup-Pivot-or-Persevere

I recommend setting goals for your experiments and time-boxing them in agile-type sprints (iterations) of 2-6 weeks. This will help keep everyone on track. If the experiments are running well, you should arrive at a ‘pivot or persevere moment’ where you have the learning to decide whether to proceed or revise and re-test. Or you may find you need to tighten up your experiments and repeat them; that happens.
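One way to make the ‘pivot or persevere moment’ concrete is to write each experiment down with its decision threshold before the sprint starts. A minimal sketch of what that record might look like (the fields, hypothesis text, and numbers are hypothetical illustrations, not a standard):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One time-boxed experiment with a pre-committed decision rule."""
    hypothesis: str
    metric: str
    threshold: float                  # decided BEFORE the experiment runs
    observed: Optional[float] = None  # filled in at the end of the sprint

    def decide(self) -> str:
        if self.observed is None:
            return "repeat"           # inconclusive: tighten up and re-run
        return "persevere" if self.observed >= self.threshold else "pivot"

exp = Experiment(
    hypothesis="HR managers will pay for hand-made screening quizzes",
    metric="paid conversions / concierge pilots run",
    threshold=0.30,
)
exp.observed = 0.10   # result observed during the sprint
print(exp.decide())
```

Committing to the threshold up front is what keeps the end-of-sprint conversation honest: you’re interpreting a result, not rationalizing one.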

The hypothesis areas above were organized roughly in sequence, and the tables below describe common results from these experiments along with ideas on how to interpret them and make decisions about what to do next.

Concluding on Your Persona Hypotheses

persona-hypothesis-concluding-v2

It’s OK to enter customer discovery without set hypotheses. Let’s say you’re generally interested in problem scenarios around 3D printing in the consumer durables vertical. That’s a perfectly OK starting point for some customer discovery, driving to explicit written hypotheses as you learn more.

That said, you’re looking to drive to a relatively conclusive understanding of your key personas before you move too far ahead- otherwise you’ll likely be operating on a weak foundation. Here are a few notes on sample conclusions in this area:

Conclusion Notes
‘Everyone is my customer!’ Ultimately, this may be true but it’s important to identify an early market where you’ll focus and establish a beachhead.
‘There are a few customers to focus on- I’m not sure which one’. Take your best guess and choose, but run your experiments against a focal early market. Pick the one that has the most compelling problem scenario.
‘I can’t find anyone to interview’ Then I would step back. This almost certainly means you’ll have trouble with the next steps as well.
‘I think I get this persona, but I’m not sure about the whole think-see-feel-do thing.’ Getting down a solid think-see-feel-do for each key persona will not solve all your problems. But not having a solid understanding of your customer is likely to generate waste downstream, decrease your chances at success, and make pivots less well informed and purposeful. I’d check out this tutorial and increase your comfort level with your personas: Tutorial- Personas & Problem Scenarios.

Concluding on Your Problem Hypotheses

problem-hypothesis-concluding

While it’s often practical to combine the field work on customer discovery in this area with your persona hypothesis, it is important to have a strong footing with your personas before you finalize your problem hypothesis. This is important because ultimately you’re going to need to sell something to these people and you’ll need to be able to identify them. Also, some problems are so spread out among customer segments/personas, and so occasional, that they’re not a strong fit for a new venture. Here are a few notes on sample conclusions in this area:

Conclusion Notes
‘During customer discovery interviews, the subjects consistently mentioned our problem scenario’ Excellent! That’s a good preliminary validation you’re on the right track.
‘We did a questionnaire and >80% of subjects said they wish [our problem area] was better.’ I’d be very cautious about that result- it sounds like you’re leading the subjects. I’d like a lot of things to be better but there are only a small fraction of those that I’d actually dedicate my time and money to improving. I’d try face-to-face or at least phone interviews.
‘I am in this business/I am one of these personas and I know I have this problem- and I’m sure it exists for most others like me.’ While there are many fabled successes where founders build products for themselves, it’s not the most reliable way to succeed with a new venture. Your expertise/experience may blind you to doing good customer discovery with others like you- which is, of course, your actual market. By all means, play to your strengths and use your expertise but be sure to approach the customer discovery work with a fresh and unbiased perspective.
‘Our product doesn’t really address a problem, exactly, so this isn’t relevant for us.’  First, words are faulty instruments- on a business-to-consumer product, this is just as likely to be a ‘need’ or ‘habit’. And fundamentally there are no new habits and there are no new jobs in the workplace. Be very sure you understand the problem(s) or need(s) you’re connecting with before progressing.
‘Our product is so fundamentally novel that there are no current alternatives.’ See above: there’s a lot less novelty in the world than we think, particularly for those of us who come from the technology world. Make sure you have a clear view of how your customers are fulfilling their needs today or you won’t have a good counterweight to determine if and how your value proposition is relevant.
‘We’ve mapped out the alternatives and observed our key personas in action with them.’ Excellent! You’re ready to synthesize, tune, and test your value proposition!

Concluding on Your Value Hypotheses

value-hypothesis-concluding

This is where it all starts to come together (or possibly apart!) for a new venture: is your value proposition better enough than the persona’s alternatives to generate revenue? Here are a few notes on sample conclusions in this area:

Conclusion Notes
‘Over 80% of the people we asked said they’d buy our product!’ They’re probably not being entirely truthful, or, let’s say ‘accurately predicting their future behavior’. I’d disregard that result. See- The Yellow Walkman Story for more explanation on why.
 ‘We did a concierge test and [got paid, got asked by the customer when they could buy our product].’ Excellent! You’re on the fast track of iterating to a successful outcome. Time to look at the contours of an actual MVP.
‘We finished our concierge test. They liked it, but the result was a long way from conclusive.’ Now that you understand the problem area and concierge execution better, do you think you could get paid for the next one? That’s a good follow-on test. You can also try some of the options below. If you continue to see a lukewarm response, go ahead and pivot.
‘We made a bunch of pre-release sales, but they’re non-binding.’  It’s OK that they’re non-binding. As long as you made the agreement with a real decision maker (someone who could buy it for real in the future), you’ve got a reasonably good validation of value hypothesis.
‘We couldn’t make any pre-release sales.’ Why not? Were they not that interested? Or did they want to see real product first? If so, how real? If they’re not interested, try some other experiments, but that’s a sign that maybe you should pivot. If they wanted to see real product, did you push them to something that was too binding? Were they ready to sign up for any kind of follow-up? If so, good sign; if not, they may not be interested and were just using ‘no real product’ as an excuse. That’s a call you’ll have to make based on your experience with the individuals.
‘We found a few AdWords/landing page combinations that had better than expected click-through and conversion rates to email sign-ups.’ Excellent! That’s a good validation of your value hypothesis and you’ve gotten a jump start on your Customer Creation Hypotheses.
‘We tried a few things with AdWords and landing pages, but the results weren’t great.’ What happens when you try the same thing out in the real world? You may just need to learn more about your personas, problem scenarios, and how to pitch your value proposition. These tests are good for connecting with existing demand but not for fundamentally understanding it. Try spending some time with real prospective customers.
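When you’re comparing AdWords or landing page variants like the rows above, it’s worth checking that a difference in conversion rates isn’t just noise before declaring a winner. A minimal sketch of a two-proportion z-test (the visitor counts are hypothetical examples):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: variant B converted 60/1000 visitors vs. A's 40/1000.
z = two_proportion_z(40, 1000, 60, 1000)
print(f"z = {z:.2f}")   # |z| > 1.96 is roughly significant at the 5% level
```

With small traffic volumes the test will rarely reach significance, which is itself useful information: it tells you the experiment needs more visitors or a bolder variation before it can deliver a definitive result.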

Concluding on Your Customer Creation Hypotheses

customer-creation-hypothesis-concluding

These results are generally easy to interpret: you convert your prospect through the funnel, or you don’t. And then you try something else. If the preceding items are in good shape, this should just be a matter of finding the right channel and tuning your approach. If you’re struggling here, make sure you’ve kept your work on personas, problem scenarios, and the nature of your value proposition tightly integrated with your work here (messaging, etc.), and don’t be afraid to loop back to those if you suspect a more fundamental flaw is dampening your conversions.

06.a Pivot!

Scientific-Method-Lean-Startup-Pivot

Pivots vary widely in size and number. Pivots in the area of customer creation and business model are just about inevitable. The section above described a few common conclusions about experimental results and the possible implication of a pivot. The worst thing you can do is limp along; organizing your experiments into iterations where you set a goal for concluding will help you avoid that. Strong customer discovery and encapsulation of those outputs in personas and problem scenarios, which we discussed in step 01, is critical as a rudder for your pivots. A strong understanding of the customer will help you pivot much smarter.

06.b Persevere!

Scientific-Method-Lean-Startup-Persevere

You have the core spark of a successful startup! Congratulations. Now it’s time to scale up and steadily improve the recipe you’ve found. I recommend the material here on business model generation and using agile to tie the items we reviewed here to your actual product implementation.

With creativity and focus, it’s not hard to achieve substantial validation and with that the confidence to persevere. The next section, ‘MVP Case Studies’, summarizes a few of my favorite examples.

7 Minimum Viable Product (MVP) Case Studies

Validating your idea doesn’t necessarily require a lot of money or even a lot of time. It does require focus and the design of substantial, relevant contact with prospective customers. The examples that follow range from household names to little known and run the gamut of product categories. My intention with this section is that you’ll be able to find at least one pattern that’s relevant to your situation, sparking ideas on a creative MVP.

Case Study #1: Sprig

sprig-case-study

I first heard Sprig’s story from the founders at a Lean Startup Circle event. From Sprig you can order a healthy $12 (USD) meal, delivered with a few taps on their mobile app. It’s kind of like the Whole Foods deli meets Uber, or, in their words, ‘dinner on demand … prep time is 3 taps … delectable prices’.

It’s run by an experienced Silicon Valley team and, wanting to approach VCs with more than a great team and a great idea, they ran a successful validation experiment within a week of pulling together their founding team.

SPRIG SUMMARY

Item Notes
Persona* Paula the Professional- health conscious, short on time, moderate to high income, already uses similar services like Uber.
Problem Scenario I want to have a nice, healthy dinner with no hassle and at a price I can afford (like $12).
Alternatives Going to the store, expensive take-out, or a slow delivery service (>20 minutes).
Value Proposition Get a healthy meal like you would order a cab (on Uber): “Dinner on Demand … Prep Time is 3 Taps … Delectable Prices” (Sprig Home Page)

* This is me interpolating/guessing on an item; not part of the Sprig team’s explanation.

SPRIG MVP & EXPERIMENTATION

Item Notes
Key Assumption People like Paula exist and, rather than prepping their own meals, ordering takeout, or eating out, they’d prefer to easily order a healthy $12 meal that’s delivered in 20 minutes.
Experiment Prep such a meal and deliver it ad hoc for one night; post the offer for delivery on Eventbrite; email friends and acquaintances
Validation Criteria Does a workable portion of the emailed population respond? Do they like the experience?
Result Strong preliminary validation- good uptake and good customer experiences

Like a lot of the examples that follow, Sprig’s first MVP required no (new/custom) software, little time and little money.

Case Study #2: Dropbox

dropbox-case-study

I’ll assume you know about Dropbox. But you may not have heard the terrific story about how Drew Houston validated the concept in the face of a crowded, confused market and a difficult technical execution.

When Dropbox was in its infancy, many file sharing services existed; they just weren’t all that good, and so few people used them. The Dropbox proposition was that a well executed product would achieve large scale market success. Here’s the tricky part: to do this well across even just the very big platforms like OSX, Windows, iOS, etc. was a big job, and they needed to raise money. VCs were reluctant to place such a bet on a space with existing competitors that were struggling.

So the Dropbox team did something very creative to validate their proposition- see below.

DROPBOX SUMMARY

Item Notes
Persona Tom the Techie- early adopter who works on projects that require swapping a lot of files between a shifting network of collaborators.
Problem Scenario It’s difficult to share files between a fluid network of collaborators, particularly if they’re: big or numerous or change a lot.
Alternatives Many existing products, but none of them super compelling and widely adopted. Also, custom setups, which work but are cumbersome to set up and maintain.
Value Proposition A file sharing service that truly feels transparent to the user across all major platforms- OSX, iOS, Windows, etc.

DROPBOX MVP & EXPERIMENTATION

Item Notes
Key Assumption People like Tom (and others in the later market) exist and if there was a really nice, easy file sharing service, they’d adopt it.
Experiment Hand craft a demo (without actual working, releasable software); post it; orient the messaging to the early market; promote it and see what happens
Validation Criteria Substantial traffic on the video and sign-ups for product information
Result Strong preliminary validation

One additional thing that’s notable about Dropbox is that the persona I (questionably) described as ‘Tom the Techie’ was what they identified as their early market: the first few folks who felt the problem scenario most acutely and would be most reachable with the value proposition. While their video demo wasn’t exclusively tailored for that market, they added inside references for that market.

Case Study #3: Photo-Social Startup

photo-social-startup-case-study

I advised this company through a program at Stanford. They are still in ‘stealth mode’, so rather than going into the details about their product, let’s take a look at the general pattern for photo-social products: products like Instagram that somehow make the photos we take more interesting on social media.

The user has or takes photos. Rather than just posting them to social media (Facebook, Twitter, etc.) they want to do something with them to make them more interesting- tell a story, enhance them visually, something like that. Then they post them and the whole point is the reward of social acclaim, your social network registering their approval with likes, shares, and comments:

photo-social-case-study

When I first started working with this team, they had an idea of this type and were starting software development. We put that on pause and used Lean Startup techniques (as well as design thinking and personas) to spend less time and money and still validate (or invalidate) their concept:

STEALTH PHOTO-SOCIAL STARTUP SUMMARY

Item Notes
Persona Existing poster of photos. Personas: Martha the Mom, Pat the Party Planner, Teresa the Teen Social Butterfly
Problem Scenario [I want to do something interesting with my photos so that my social graph rewards me with interest and acclaim]
Alternatives Manually enhance photos, use alternative enhancers/amplifiers like Instagram
Value Proposition [This is something users can do with photos that will generate engaging content for their social graph]

STEALTH PHOTO-SOCIAL STARTUP MVP & EXPERIMENTATION

Item Notes
Key Assumption People like the personas above would like to enhance their photos using our process and if they do this they’ll be rewarded with approval and interest from their social network
Experiment Manually create output of the type the hypothetical app would produce
Validation Criteria Posts created in this way create strong interest, demonstrated by likes, comments, and shares
Result An echoing silence- nobody cared. Time to pivot!

The result was a big, echoing silence- no interest. But the team was much better off for having found that out sooner vs. later and now they’re working on a much more promising iteration of their idea.

Case Study #4: Leonid Systems

Leonid-Systems-Case-Study-Lean

I started Leonid Systems in 2007 to explore new ideas for back office IT in the hosted communications space. Leonid’s customers are mostly large infrastructure providers, companies like Verizon and Comcast. But Leonid needed to start small, and do it on a bootstrapped basis. So we started out doing consulting, and we used that as a ‘concierge’ vehicle to isolate, learn about, and validate important problem scenarios for our customers.

The specific problem scenarios require industry-specific explanations, so I’ll skip that for now and instead reference this talk I did for the Lean Startup Circle in San Francisco:

Essentially, Leonid went through a series of MVPs, starting with consulting, to make sure that we were doing things that were relevant for our customer base.

Case Study #5: Rapid MVP Testing with Paul Howe & Associates

paule-howe-lean-case-study

I heard Paul Howe’s story at the Lean Startup Circle (SF). He and a couple of other veterans had a funded startup to explore business-to-consumer (B2C) concepts in search of a winner.

Their approach was very heavy on Lean Startup- get in, test, and then scale it or get out (vs. doing more customer discovery in a given area). While personally I tend to pick a problem area and spend more time learning about it, I think their approach is probably great if you have a lot of different ideas you want to try and you’re good (or make yourself good) at this type of experimentation.

The concept I specifically remember was a service to tell you how much all your ‘stuff’ is worth by looking at your emails and bank/credit card statements. Instead of diving into this fascinating ‘big data’ problem, they did a concierge MVP where they did the searches by hand for a few test customers. Paul Howe sat down and just manually searched their email and bank records to compile a statement of what they had and how much it was worth. The result? An echoing silence, and they moved on to their next idea (with relatively little time and money spent).

PAUL HOWE & CO STARTUP SUMMARY

Item Notes
Persona (not sure; their emphasis was heavily weighted toward testing vs. customer discovery)
Problem Scenario I have a lot of stuff around that I might want to sell and/or I’m just generally curious about how much it’s worth, how much I’ve spent.*
Alternatives Manually going through credit card statements or receipts.
Value Proposition It’s interesting and possibly useful to know how much stuff you have.*

* This is me interpolating/guessing on an item; not part of the team’s explanation.

PAUL HOWE & CO MVP & EXPERIMENTATION

Item Notes
Key Assumption There are certain personas who would like to know how much their stuff is worth
Experiment Manually create such a ‘statement of your stuff’ and see if the user cares
Validation Criteria Users demonstrate an interest in the service (not sure how they specifically structured the validation)
Result An echoing silence- nobody cared. Time to pivot!

They encountered an echoing silence but were immediately ready to move on to their next concept.

Case Study #6: Zappos

Zappos-Lean-Case-Study

Since they got started in 1999, you could say Zappos was a pioneer of the current era of lean startups. Their story is wonderful and simple.

Nick Swinmurn had the idea that choosy shoppers wanted better price and selection than they were getting at their local mall. What he did next was pure Lean Startup: he photographed a whole bunch of shoes and put them up for sale online to see if anyone would buy them. They did, and the rest is history.

ZAPPOS STARTUP SUMMARY

Item Notes
Persona Sam the shoe-hound- knows what he wants but not where to get it.
Problem Scenario Sam is unable to find the shoe he wants at local retailers, wasting time and getting frustrated.
Alternatives Possibly mail order or wait until he’s in a bigger market to go to the store.
Value Proposition Make the shoe Sam wants accessible online and make sure he has a great experience so he’ll come back and not have to think about where to find the shoe he wants anymore.

ZAPPOS MVP & EXPERIMENTATION

Item Notes
Key Assumption Sam the shoehound exists and rather than shopping locally or compromising on what he wants he’ll find and want to buy the shoe he really wants online.
Experiment Photograph a bunch of shoes and put them on a simple website. Promote a little and see what happens.
Validation Criteria Do they come and buy?
Result Yes, they did.

Case Study #7: Enable Quiz

enable-quiz-case-study

Mentioned earlier, Enable Quiz is a synthetic company I use for example purposes. They’re (hypothetically) creating a lightweight quiz app for screening engineering candidates for new positions, so the hiring manager has a clear picture of their skill sets and can focus on fit, etc.

Enable Quiz lends itself to a concierge MVP approach where the founders hand-create position-specific quizzes for HR managers. They can then gauge whether the quiz in fact helped the company arrive at a better process and outcome with their hiring, and whether that generated residual interest in the future product.

ENABLE QUIZ SUMMARY

Item Notes
Persona Helen the HR Manager and Frank the Functional Manager (Helen’s in charge of the administrative side of hiring, and Frank’s the person the new hire would work for)
Problem Scenario We spend a lot of time evaluating technical skill sets and a) we don’t do that well, often ending up with hires that aren’t a good mutual fit and b) we’d like to spend less time interviewing overall and more time on cultural fit with the top candidates
Alternatives Calling references, asking a few probing questions
Value Proposition Spend less time interviewing and get better outcomes

ENABLE QUIZ MVP & EXPERIMENTATION

Item Notes
Key Assumption Companies that hire engineers would prefer to use a lightweight quizzing app to evaluate candidates’ fit with a given position’s required skill set instead of spending time checking that ad hoc.
Experiment Manually create position-specific quizzes for individual companies to use in screening candidates
Validation Criteria Do the hiring and HR managers feel they had appreciably better outcomes? Do they enthusiastically ask about the finished app product?
Result n/a (hypothetical company)

Is Lean Startup just for startups?

No, not in the sense you probably mean. Eric Ries defines a ‘startup’ as any business (or line of business) that hasn’t yet found a ‘product/market fit’, meaning that it can reliably sell a known proposition to a known customer. If you have a new line of business or product within an established company, Lean Startup’s probably a great fit for you.

Is Lean Startup the answer?

Not to be coy, but it does depend on the question. If your question is ‘How do I manage this venture systematically to a good outcome in the face of uncertainty?’, then yes, Lean Startup will help you get there. As a planning technique for innovation, I don’t know of anything better.

That said, most innovative ventures have other questions as well, like:

Who is my customer really, and how do I make sure I’m relevant to them? For this I recommend the work around design thinking.

How do I take a holistic look at the business without toiling over a business plan that no one will read? For this, the business model canvas is handy.

How do I think about a new venture start to finish and understand where we are? For this, I like Steve Blank’s work around customer development.

How do I develop great products quickly, and bridge the gap between ‘business’ and ‘engineering’? For this, agile is tried and true.

Lean Startup’s Top 6 Failure Modes and How to Avoid Them

Lean isn’t a passing fad: it’s fundamentally better suited to innovation than most of the prevailing classical/traditional techniques. That said, its widespread use in the innovation/startup context is relatively recent, best practices are still emerging, and sometimes the hype diverges from the reality of what’s practical. I compiled the list below based on my experience advising startups and individuals on the use of lean/Lean Startup:

1. No Pivotal Assumptions

Subscribing to the general idea isn’t enough to make Lean Startup perform for your venture. You have to actually articulate your assumptions, prioritize the few that are truly pivotal, write them down, and use them as your focal point. The sections above, starting with ‘01 Developing High Quality, Testable Ideas‘, lay out a systematic approach to doing this.

2. No Focal Point

Once you’ve identified and prioritized your pivotal assumptions, it’s important that you use that as your focal point and litmus test for everything you do. Output is not the same as driving outcomes in a startup. Crossing things off our list makes us feel good, but is it really driving to that ‘pivot or persevere’ moment? Subject all your activities to that litmus test.

Make sure your assumption set stays up to date and is highly visible. Google Docs isn’t a bad solution. Here’s a Lean Startup Assumptions Template you can use as a starting point.

3. Remaining Inside the Building

This is a riff on Steve Blank’s famous directive ‘get outside the building!’. Validated learning is the one and only propulsion for driving to decisions and outcomes with Lean Startup. Without meaningful learning and experimentation with real prospective customers, your Lean Startup will be running in place.

For more on how to do this, see section 03 on designing experiments and section 04 on experimenting.

4. Aimless Pivots

Lean Startup helps you make sure you’re not wasting time on an idea that’s not ready for success. It doesn’t deal directly with how you determine which ideas are highest quality. For this, I highly recommend the use of design thinking techniques, specifically personas, problem scenarios, and value propositions. This material has the added benefit of making sure that if you do have to pivot, you’re doing it with an increasingly better understanding of the target customer. This increases the odds you’ll arrive at a pivot that hits.

The practice of design thinking is tightly integrated into this tutorial. For more on personas, etc. see: Tutorial on Personas, Problem Scenarios.

5. Lack of Purpose & Goals

The world’s a noisy place. Distractions will walk in the door every day. Many teams with good intent and an understanding of Lean Startup fail to make steady, reliable progress towards a pivot or persevere moment.

It’s important to work in time-boxed (time-constrained) iterations, each of which have discrete goals. That’s what the material on Startup Sprints is about, though there are many ways to implement the concept. The Daily Do is another technique you can use to make sure you and your team are on track day to day.

6. Too Big an MVP

We love to build things; it’s in our nature. Subordinating your love for the product you’re building to the learning mission at the core of a Lean Startup is difficult (at least, I’ve never found it easy). Doing so requires discipline, focus, and clear checkpoints to make sure you’re on track.

The MVP case studies here are a useful test point for you to step through whether or not you’re building too much product.

Please Note: This list presupposes you want to and should use lean to solve your problem at hand. For a view on where lean’s a good fit, see the section above ‘Is Lean Startup the answer?’.

Criticism & Context

In practice, lean isn’t always the right method, a statement you could make about pretty much any method. That said, as a pure idea, it’s pretty durable and coherent. Every method should be subjected to scrutiny and, of course, rigorous validation is itself part of the method. Below are a few summarized criticisms of lean and Lean Startup, along with notes.

You can’t skimp your way to greatness.

While this observation may often be true, its application to Lean Startup is mostly the result of misunderstanding. Pair the words ‘lean’ and ‘startup’ with the idea of avoiding waste, and it’s not shocking that on a quick look you come away with the idea that it’s about making sure your startup/venture doesn’t spend much money. Keeping your spend down may be an outcome you get with Lean Startup, but the method itself is about waste avoidance, not cost avoidance.

Earlier, we looked at the Minimum Viable Product (MVP) concept. That fits into a process where you create a tightly defined value proposition, then conceive the quickest, least expensive way to test it (the balance between cost and speed being mostly a function of your particular priorities). You may reach a point where a relatively long, expensive build is the best way to do that.

Lean Startup wouldn’t say that’s wrong. It would just say that you should exhaust the quicker, cheaper alternatives to testing the proposition so you don’t go through a long, expensive creation cycle and then encounter an echoing silence where customers aren’t interested in what you created.

It doesn’t work in [medical, industrial equipment, other areas with long design cycles].

Sure, there may not be any shortcuts to getting a regulatory approval for a new drug or device. Yes, it may take a long time to get a new model of bulldozer functional. These aren’t good reasons to discard the method.

First, you may be able to test the demand for your proposition without a product. Let’s say you hold a webinar or conference about a particular problem area for medical clinicians: do a lot of them show up for problem A vs. problem B?

Additionally, there are a lot of elements to a successful customer experience that surround a core product. How do clinicians identify when and how they should use this new product? Buy it? Store it? Take it out of the box and administer it? These are areas where small batch experimentation may be perfectly viable.

For a set of actual examples of how this works, here’s an article about the application of Lean Startup at GE: HBR Article on Lean Startup at GE.

It hasn’t been statistically validated that Lean Startup actually makes companies more successful.  

This is true but not necessarily that relevant, for two reasons. First, it’s difficult and rare for social science to reliably draw these kinds of conclusions. Success factors for products and ventures vary across a lot of dimensions, which change over time with their operating environment. Second, Lean Startup has only been around since 2011; there just isn’t a lot of data available.

I don’t want to oversell lean or Lean Startup, but I do think the criticisms above are mostly the result of misunderstanding or inappropriate context. In practice, I think the biggest issues with it mostly have to do with a) not actually grinding through the details of its rigorous implementation and b) wanting it to be the one silver bullet for every problem and situation (which is natural; who doesn’t want that?), when in practice it’s a portfolio of methods that lead to successful innovation.

Reference A: Example Lean Startup-Style Assumptions

This page presents a set of example assumptions based on a fictional company, ‘Enable Quiz’. Enable Quiz is rolling out a lightweight technical quizzing solution; for companies that hire engineers, it will allow them to better screen job candidates and assess their internal talent for skills development. For more on Enable Quiz, see the example Venture Concepts page.

For a template you can use to create your own (a Google Doc which you can download as MS Word or copy to your domain), try this: LEAN STARTUP STYLE ASSUMPTIONS TEMPLATE.

The table below describes Enable Quiz’s working assumptions and plans for experimentation and validation (or invalidation). Based on what they know right now, they have this formulation of their root hypothesis around the problem scenario of hiring engineers:

‘HR and functional managers are in charge of technical hires and they struggle to effectively screen for technical skill sets, making the hiring process slower and more labor intensive and producing worse outcomes than they should reasonably expect. Currently they implement a patchwork of calling references and asking a few probing questions. If Enable Quiz offers an easy, affordable, lightweight technical quizzing solution, we can acquire, retain, and monetize customers.’

Armed with their general understanding and avid curiosity, the team’s job is to decompose their root assumption into pieces where they can validate their understanding to determine whether the root hypothesis (or some revision of it) points to a business they should scale up and build. I’ve organized these decomposed assumptions into the categories we reviewed above: Persona Hypothesis, Problem Hypothesis, Value Hypothesis, and Customer Creation Hypothesis. I’ve also included some material on their tactical Product Development Hypotheses.

You’ll see the focal assumptions organized into tables with the following columns:

Column Heading Notes
Priority This is a measure of the importance and context of the assumption. While it’s important to state all your assumptions and keep an eye on them, focus is also important. You can use any scale you want, but I like to use this rating scheme:
1: This is a strategic, pivotal assumption. If this turns out to be untrue, we need a pivot. It probably needs to be decomposed further for manageability.
2: This is a child of the type of assumption above. Individually, its disproving doesn’t mean a pivot, but if all its ‘siblings’ (all the other related assumptions that tie to something of priority ‘1’) prove untrue, then it does.
3: This may be pivotal; we’re not sure yet. It’s definitely important.
4-5: Lighter dispositions of ‘3’.
6-10: This is a tactical detail related to product development or growth/promotion. Take note if it’s convenient, but we need to validate (or invalidate) the related items above it before we worry about it; if the related items are invalidated, then pursuing it could be 100% waste.
Assumption This is the assumption itself, always phrased so that you could say it is true or false.
Needs Proving? Not all notable assumptions need proving, or you may have already proven them. This is a place to log that, along with notes on why and where it needs proving.
Experimentation These are quick notes and possibly external references to how the assumption gets proven or disproven.

Example Persona Hypotheses

Enable Quiz’s general area of interest is improving team performance by making it ultra simple to measure technical skill sets. This could apply both to screening new hires and to doing a more systematic review of existing staff to see who knows what (so peers and managers can be more effective in divvying up tasks and focusing professional development).

In both cases, they believe that there are two personas that are the primary customers: ‘Helen the HR Manager’ and ‘Frank the Functional Manager’. ‘Chris the Candidate’ (being interviewed) and ‘Steve the Staff Member’ are users, but probably fairly passive users, so it’s really Helen and Frank who primarily interest Enable Quiz (see Personas Tutorial- Examples for more on these).

These personas are a starting point, but since they’re (we think) the pivotal personas, the team at Enable Quiz will almost certainly need to re-segment them. We know every individual is vastly different, so, of course, there’s always an avenue to split up personas into sub-types. But why and how in this case? First and foremost, re-segmenting the personas will allow them to identify an actionable early market where they can focus and acquire their first few ‘beachhead’ customers.

The table below describes focal assumptions around Enable Quiz’s persona hypotheses:

Priority Key Assumption Needs Proving? Experimentation
1 The HR Manager and Functional Manager personas exist in roughly the form we’ve described and they are collectively responsible for technical recruiting and hiring. Not really. There may be minor variations on this, but the arrangement is pretty standard and across dozens of companies they haven’t seen anything much different. ditto
1 We understand these personas well and what they think about technical recruiting, hiring and skills management (in the form of think-see-feel-do). Yes, definitely. – Observing consistent results on discovery interviews
4 Helen is likely to be our primary buyer, Frank a possible influencer and approver Yes, definitely. – Observing consistent results on discovery interviews – Trying to generate pre-sales – Asking who and how they’ve purchased (roughly) comparable services like online recruiting services

The output of this section is a set of validated personas, including think-see-feel-do.

Example Problem Hypotheses

Enable Quiz is generally interested in the problem space of assessing technical talent, with an emphasis on lightweight tests for tactical decision making. Their goal with the problem hypothesis is to flesh out all the material problem scenarios and then validate (or invalidate) which ones substantially exist. They also want to clearly understand their personas’ interaction with current alternatives, so they can

a) validate that they really understand the problem scenarios (if you can’t identify a current alternative, you likely don’t)
b) better connect with customers’ current perception of the problem when they start customer creation (aka selling)
c) provide a backdrop/baseline for their value hypothesis- how much better is their solution than the alternative?

Note: It’s useful to divide up these hypothesis areas for organizational and analytical clarity, but the different areas will and should commingle. For example, the team may (hopefully) refine their personas on the basis of which sub-types have the most acute problem scenarios. And this is a big win, allowing them to more reliably get a win in their early market.

The table below describes focal assumptions around Enable Quiz’s problem hypotheses:

Priority Key Assumption Needs Proving? Experimentation
1 Screening technical hires for skill sets is difficult and most companies wish they could do it more effectively vs. their current alternatives. This wish is on their ‘A list’ of problems. Yes. Sure, the problem probably exists to some degree, but how important is it? To whom? What are they doing now? – Discovery interviews: does it come up as a problem, unprompted? – Test pre-release promotion and sign-ups for more info.
2 The HR manager perceives this as important Yes, definitely. (ditto)
2 The hiring manager perceives this as important Yes, definitely. (ditto)
2 This is frequently relevant in the area of {IP networking, Linux sysadmin, Microsoft sysadmin, Java, PHP, Ruby, .NET, QA, devops, development management} Yes, definitely; it’s important to prioritize which areas are hottest and which (if any) have the most frequent relevance to the early market. Some topics may also lend themselves to this type of test better than others. – Discovery interviews: check topics – Google Trends & Keyword Planner (search trends, keyword value, monthly searches on ‘hire ruby developer’, etc.)
1 Quizzing existing staff to understand who knows what at the current time would be useful and actionable and managers wish they could do it more effectively vs. alternatives. This is on their A-list. Yes, definitely. – Discovery interviews: does it come up as a problem, unprompted? – Test pre-release promotion and sign-ups for more info.
2 The above is useful because it would help match team members and tasks more effectively Yes, definitely. (ditto)
2 The above is useful because it would help with intra-team learning and professional development Yes, definitely. (ditto)
2 The current staff would find such a quiz at worst a benign admin task and at best a fun, friendly competitive diversion; they won’t find it too judge-y or top down-ish Yes, definitely. – Ditto but with technical staff more so than managers- ‘Steve the Staff Member’

The output of this section is a set of validated problem scenarios, including a careful description of the current alternatives.

Example Value Hypotheses

So, does anybody want some?

Since Enable Quiz is a synthetic company, let’s take the liberty of supposing that in their customer discovery they found that the problem scenario around screening potential new hires was the most acute. The current alternatives were calling cagey references and trying not to be a jerk and take up the whole interview by asking probing questions.

Now the question is whether a lightweight quizzing solution would deliver on that problem scenario and exceed the alternative enough to fuel sales and reliable customer creation.

The table below describes focal assumptions around Enable Quiz’s value hypotheses:

Priority Key Assumption Needs Proving? Experimentation
1 If Enable Quiz offers companies that hire engineers lightweight technical quizzes that screen job candidates for engineering positions, then these companies would trial, use, adopt, and pay for such a service. Yes. – Execute a manual ‘concierge’ experiment on the quiz process
– Make some early pre-release sales
– Test pre-release promotion and sign-up’s for beta programs
2 If we successfully onboard the HR manager with a relevant quiz for an open position, they will use the quizzes for all the candidates they interview. Yes. – Execute a manual ‘concierge’ experiment on the quiz process
2 If the HR manager uses the screening quiz with all candidates for a given position, [x]% fewer unqualified candidates will make their way to the functional manager Yes. – Execute a manual ‘concierge’ experiment on the quiz process
2 If the HR manager uses the screening quiz with all open tech. positions, the rate of avoidable bad hires (hires that are not a good fit skills-wise) will go down by at least [x]%. Yes. – Execute a manual ‘concierge’ experiment on the quiz process
2 If an HR manager fills one position successfully with the solution, they will continue using the service, creating new quizzes for new open positions. Yes. – Execute a manual ‘concierge’ experiment on the quiz process with some inputs from the HR manager for the quiz
2 If we offer the service at [x] price with [y] supplemental assistance, companies that hire a lot of engineers will pay [z]. Yes. – Try pre-release sales; pair with conclusion of concierge test
…. ….

Example Customer Creation Hypotheses

Once Enable Quiz knows who they’re selling to, what problem they’re (really) solving, and how to deliver value against that problem, they need a way to economically and repeatably connect with that demand.

And this may involve a couple of different recipes, particularly as they transition from an early market of enthusiasts to wider distribution in the larger population of more pragmatic buyers (followers).

The table below describes focal assumptions around Enable Quiz’s customer creation hypotheses. Notice that they’re looking at two channels: 1) direct sales and 2) online advertising and they’ve organized the assumptions around those.

Each new channel begins with a priority 1 assumption with supplemental priority 2 items breaking it down across the AIDA framework.

Priority Key Assumption Needs Proving? Experimentation
1 Channel: Direct Sales.
Enable Quiz can connect with demand economically through direct sales.
Yes.  – see below
2 Attention: If we give them a contact list, our salespeople can call on [x] qualified customers/day. Yes. – Test and measure a limited set of sales activity (probably starting with a founder/senior person)
2 Interest: If a salesperson calls on [x] qualified leads, they can schedule [y] meetings. Yes. ditto
2 Desire: If a salesperson gets a meeting, they will see follow-up from the customer [x]% of the time. Yes. ditto
2 Action: If we approach [x] qualified leads, [y] will close for a paid offer over $[z]. Yes. ditto
1 Channel: Online Advertising.
If we market to HR managers through an AdWords campaign, we’ll achieve a cost per acquisition (of a free trial) of less than $[x].
Yes. – Run an initial AdWords test
2 Attention: If we run relevant AdWords ads, we’ll get the attention of HR managers. Yes. – Click-through rates of at least [x]%
2 Interest: If we get the customer to a landing page with a demo, we can capture their interest. Yes. – Achieve a bounce rate of less than [x]% on a winning variation of the landing page.
2 Desire: If we are able to show the demo page to an HR manager, we’ll see desire. Yes.  – (This one is tough to observe through this channel. You could use the adjacent items (landing page bounce rate and sign-up conversion rate) as a proxy. Other channels and interactions like direct sales and social are much better channels for this.)
2 Action: If we get HR managers to a landing page with a demo, [x]% will sign up for [our email product announcements, a free trial]. Yes. – Achieve [x]% conversion rate for the objective in question
2 Onboarding: If HR managers sign up for a free trial, at least [x]% will create a quiz. Yes. – Observe this on in-app analytics
2 Onboarding: only [y] portion of customers will require a support call; the rest will use the online help to onboard Yes. – ditto
1  Retention: [x] portion of customers will renew/re-purchase. Yes. – Observe this on in-app or external analytics.

The output of this section is a repeatable, economical recipe for customer creation.
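One way to sanity-check a channel before committing to it is to turn the AIDA-style assumptions above into simple funnel arithmetic. Here’s a minimal sketch in Python; every rate and dollar figure below is a hypothetical placeholder, not a number from Enable Quiz’s actual experiments:

```python
# Hypothetical funnel math for an online advertising channel like the
# AdWords example above: multiply the stage-by-stage conversion rates
# through to estimate the cost per acquired (free-trial) customer.

def cost_per_acquisition(impressions, ctr, landing_conversion,
                         signup_rate, total_spend):
    """Estimate cost per free-trial sign-up from funnel assumptions.

    ctr                -- click-through rate on the ad (Attention)
    landing_conversion -- share of visitors who don't bounce (Interest)
    signup_rate        -- share of engaged visitors who sign up (Action)
    """
    clicks = impressions * ctr
    engaged = clicks * landing_conversion
    signups = engaged * signup_rate
    if signups == 0:
        return float("inf")  # no acquisitions: CPA is unbounded
    return total_spend / signups

# Hypothetical inputs: 50,000 impressions, 2% CTR, 60% of visitors
# stay on the landing page, 10% of those sign up, $1,600 of spend.
cpa = cost_per_acquisition(50_000, 0.02, 0.60, 0.10, 1_600)
print(f"Estimated cost per acquisition: ${cpa:.2f}")
```

If the resulting CPA is well above the $[x] ceiling in the channel hypothesis, the team knows which stage rates they’d need to move, and by how much, before scaling spend.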

Example Product Development Hypotheses

Let’s assume Enable Quiz validates their key assumptions and decides to ‘persevere’, building a working minimum viable product. Just to over-communicate on this critical point, it’s not a foregone conclusion that they get to this point with the idea as it is, and they should definitely not move to this point before they

  1. validate their persona, problem, and value hypothesis
  2. in many cases their customer creation hypothesis (especially if their business model involves selling over the web)
  3. exhaust productive opportunities to field test the whole proposition with non-software/non-product MVPs

One other prefacing note for clarity: this material is more ‘lean’ in general than specifically Lean Startup. The practice of Lean Startup is about fundamental, strategic validation of new ideas. The material here is more tactical: while none of these items individually is likely to make the venture sink or float, it is highly productive to list out key assumptions about how customers will interact with the product and then figure out how to validate (or invalidate) those as quickly, cheaply, and above all as early as possible.

The “MTP”

If, and only if, I’m building an actual product, I like to start with a ‘Minimum Testable Product’. The idea is to take everything you think is questionable about the user interface and artificially motivate users to test it before you put it out to your real users. While you don’t want to paralyze yourself with analysis, done right this is relatively quick and easy, and it allows you to unbundle and debug the risk that users won’t understand how to use your product/system from the risk that they aren’t fundamentally motivated to use it to solve their problem.

The ‘minimal’ part means that the application doesn’t actually have to be ‘working’, the responses can be static/hard-coded. That’s workable because you’re artificially motivating/directing the user to attempt something specific so you can see if they get how to do it on your product. In UI (user interface) talk, this is also known as a ‘Wizard of Oz’ prototype.

A specific example: I recently built brandlattice with some collaborators. It’s a web app that involves a lot of drag and drop, something most users still aren’t expecting from a web app, so we knew we had a substantial risk that users just wouldn’t get that part and would get stuck. So we tested it and, lo and behold, they got stuck. We had a toolbox of fixes to help them along, which we then tested in ascending order of disruptiveness to the overall experience. We tested a few and validated one that had a high success rate (and fortunately was minimally disruptive).

Without going into the details of Enable Quiz’s possible user interface, let’s say that they see the HR Manager’s progression to creating an actual quiz from the available technical topics as a big possible hurdle. They’ve discussed (argued about?) a few approaches, and naturally they’d like to go with the simplest. So they build a simple front-end prototype and formulate the assumption & experiment something like this:

Priority Key Assumption Needs Proving? Experimentation
1 If we provide them a self-service interface, the HR manager will be able to create the quizzes based on the available job descriptions. Yes. – Usability test with interactive prototype
2 The HR manager persona will understand the quiz creation process as presented and be able to complete it at least 90% of the time Yes. (see above)

Again, only undertake this step if you’re good on items 1, 2, and 3 above, and if you feel like you’re mired in user testing, just move on; the main goal is to validate or invalidate your fundamental proposition.

Reference B: AdWords Experiment at Enable Quiz

Please see also section 03 above for the Enable Quiz concierge experiment.

Item Example (Enable Quiz Adwords Test)
What assumption will this test? This MVP will test our assumption about which technical topics are most promising for our hypothetical 1.0. There are many to choose from, and our intuition is that the right topics will a) be popular/in demand with employers, b) overlap with the market we can reach, and c) be affordable with regard to keyword phrases.
How will we test it? We have assembled a list of popular topics and workable keyword phrases (‘hire [Ruby] developer’, etc.) and plan to run comparative Google AdWords campaigns to determine the top 10 most promising topics.
What is/are the pivotal metric(s)?
What is the threshold for true (validated) vs. false (invalidated)?
The pivotal metrics here are:
1: Absolute click-through rate (CTR)
After a few iterations, we’d like to see a CTR of 2% on any topic we consider. Below this, we’re not sure our current assumptions on our Customer Creation Hypothesis hold together. We’d like to see at least ~100 impressions on each iteration, with an estimate of 2 iterations/topic (this is a blend since we’re planning to use similar patterns across topics).
2: Comparative CTR
Beyond this, we’ll initially rank topics by CTR.
What will you do next if the result is true? False? If true, we will pursue a 1.0 of the product with the top 10 topics.

If false, in that none of the CTRs are >2% after we feel we’ve tested a reasonable set of alternative keywords and ad + landing page combos, then we’ll a) revise our Customer Creation Hypothesis and consider alternative channels and b) pursue an alternative assessment strategy (example: looking at job postings for target customers).

How much time, money will it take to set up? Setting up and tuning the campaign (including AdWords & landing page creation and iteration) will take:
– 20 hours by our product lead
– 20 hours by our ‘growth hacking’/marketing contractor, costing $1,600
Roughly, what will it take for each individual test? The above includes both set up and our estimate on tuning. After that, we should have a usable set of results.
Roughly, how long will it take for each test to run and produce definitive, actionable results? Based on the search frequency of our preliminary keywords and the need to iterate, we think we’ll need 10 days for each test to run.
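The pass/fail logic of this experiment is simple enough to express directly. A sketch of that evaluation, where the 2% CTR threshold and ~100-impression floor come from the test design above and the sample data is entirely invented:

```python
# Evaluate the AdWords test described above: a topic 'validates' if it
# reaches the target CTR with at least the minimum number of impressions.

MIN_IMPRESSIONS = 100   # per-topic floor from the test design above
CTR_THRESHOLD = 0.02    # 2% CTR required to consider a topic validated

def evaluate_topics(results):
    """results: dict mapping topic -> (impressions, clicks).

    Returns the topics that validate, ranked by CTR (best first),
    matching the 'rank topics by comparative CTR' step.
    """
    validated = []
    for topic, (impressions, clicks) in results.items():
        if impressions < MIN_IMPRESSIONS:
            continue  # not enough data to judge this topic yet
        ctr = clicks / impressions
        if ctr >= CTR_THRESHOLD:
            validated.append((topic, ctr))
    return sorted(validated, key=lambda pair: pair[1], reverse=True)

# Invented sample data: (impressions, clicks) per technical topic.
sample = {
    "ruby":   (450, 14),  # CTR ~3.1%: above threshold
    "php":    (500, 8),   # CTR 1.6%: below threshold
    "devops": (80, 5),    # too few impressions to judge
}
for topic, ctr in evaluate_topics(sample):
    print(f"{topic}: CTR {ctr:.1%}")
```

The same structure extends naturally to the ‘false’ branch: if `evaluate_topics` comes back empty after a reasonable set of keyword and landing-page variations, that’s the signal to revisit the Customer Creation Hypothesis.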
