The Enterprise Software Playbook


Thanks to Bryan Boroughf, David Schach, Andrew Yang, and Joan Tyner for contributing their expertise as reviewers and editors. For more on the playbook’s editors, please see the Editors Page.

Depending on whose figures you believe (summary figures), failure rates for CRM implementations run 50-70%- as bad as or worse than a coin toss. Yikes. And there’s no particular reason to believe that success rates in other enterprise software categories are any better (though please DM/email me if you’ve found otherwise).

Just implementing a system might have been good enough 20-30 years ago, but today, in the age of AI and fast software, we need to do better than this to stay competitive and relevant.

Why is the failure rate so high? I don’t know about you, but pretty much all the enterprise software/B2B implementations I’ve seen had the benefit of smart, well-intentioned participants. And they’re not trying to split the atom in these projects- most of the underlying applications were at least viable, if not basically fine.

We’ll have to dig in a little to figure this out. For starters, I subscribe to the idea that it’s important to define your problems explicitly before you explore solutions. The playbook that follows aims to address five key problem areas in enterprise (B2B) software, which I’ve collected and prioritized based on my own experience and consultation with other practitioners.

The idea is that these particular problem areas are root causes. So, for example, ‘low user uptake’ would be a symptom of these root causes. For each problem area, I’ve offered alternative processes, processes that I’ve seen deliver better outcomes.

1. Order Taking vs. Consulting
You talk to the head of sales, the head of support, the head of operations, the head of finance, note their specific requirements, and using those requirements you construct your masterpiece. To your dismay, instead of the beautiful thing you thought you were releasing into the wild, it’s Frankensteinian. After wreaking some havoc, it’s pursued by angry townspeople with torches.

While doing exactly what we’re asked by the user/customer is a good way to avoid blame and conflict, the amalgamated requirements rarely make for a good system or a good outcome. Many engagements can be improved through more constructive consultation on delivering what users need.
2. Building vs. Designing
Salesforce makes it so easy to build something — check a few boxes, add a few fields, and you have yourself a fancy new CRM, right? Well, not if it’s implemented ‘wrong’.

This is my favorite definition of ‘design’: to assign in thought or intention. Many enterprise software implementations fail due to a lack of thoughtful design. They could be improved through better discovery and definition around users and processes, and regular integration of design thinking into the deployment process.
3. Papering Over Problems with Software vs. Solving Them
Software allows us to operate with the regularity and efficiency of a machine, and so it should bring order and regularity to an unruly mob of humans working together. This proposition sounds silly when you speak it out loud, yet I think it drives a lot of what’s wrong with enterprise software.

Too often, users and processes that need more thoughtful attention end up with an arbitrary layer of Salesforce around them, raising the failure rate on implementations. Design is neglected and we skip forward to implementation of the software because it’s there, it’s what everyone else is doing, and it’s (relatively) easy to turn up and just start using.

It’s always a combination of a) thoughtful design around users and processes and b) appropriate enterprise software that delivers good outcomes. Enterprise software can only automate and standardize processes that humans design, so you have to know in advance what you want it to do. The software itself is unlikely to make those processes better and very likely to make them worse if it’s implemented in an arbitrary way- it can only enforce processes, not improve them. It’s important to engage on issues of substance and sketch a working solution around people and processes before layering software on top of the issue.
4. Project vs. Product Orientation
We burden ourselves with projects and we gratify ourselves by closing them. When we pack up for the day, we look at what we’ve checked off our to-do list.

Yet for something as complex as automating and standardizing the way individuals work together, output and outcomes are very different things. Founded on the belief that the perfect system will ‘fix everything’, we just want to get the whole system online. Assumption is compounded on top of assumption (implicitly or explicitly) ahead of actual observations about user behavior, and the result is a (big) Frankensteinian mess that no one likes: ‘The System’. Instead, contextualize the work you’re doing in your users’ jobs-to-be-done and how those align with the performance drivers for the company as a whole.
5. Lack of a Data & AI Orientation
While the other problems are more long-standing, this one is more recent. But, right now, it might be the most important problem most teams need to solve. Here, the issue is that off-the-shelf chat and language model interfaces deliver amazing results to everyone. That is as it should be and it’s great, but for individual companies and lines of business to stay competitive in this new operating environment, they need an explicit point of view on how they’ll use existing and new data to improve their performance.

The Playbook

Facilitating change while redesigning complex systems is hard and multifaceted and there are plenty of things you’re doing right. Given that, I’ve organized the playbook around three key focal points that I’ve found are the most useful starting points for solving the issues above:

[Figure: the playbook’s three focal points- 01 Facilitate Alignment & an Economic Definition of ‘Done’, 02 Prototype with User Stories & Process Designs, 03 Link Testing for Outcomes with Output]

These three focal points have a few things in common:

  • They take less than 90 minutes to try out
  • They build on leading practices from the worlds of innovation and design (why reinvent the wheel?)
  • They pair readily with a modern practice of agile and hypothesis-driven development

After reviewing this tutorial and doing a little practice, you will be able to:

  1. Facilitate a strategic charter for your IT projects (whether from scratch or relative to an existing strategy) using the Business Model Canvas
  2. Use modern process design as a prototyping tool for workflow automation and enterprise software in general
  3. Instrument testable success criteria into your implementations with a minimum of overhead and a maximum of actionability
  4. Switch your IT projects from being output-based to outcome-based so they drive real value

The balance of this section provides a short overview of the three key focal points, and the rest of the page then steps through each of them in detail.

01 Facilitate Alignment & an Economic Definition of ‘Done’

The basic idea here is to describe a strategy that a) has buy-in from your stakeholders and is consistent with the corporate strategy while b) being consumable and actionable by IT teams. For this, we’ll be using the Business Model Canvas. It’s a one-page business model design tool that’s great for getting to the point and driving action. It’s also fast and will help you avoid getting bogged down in an overly elaborate strategy project.

Deliverable: A Business Model Canvas and a High-Level Process Inventory

02 Prototype with User Stories & Process Designs

I know- you’re hearing “process design” and it’s bringing back terrible memories and making you feel like you just want to leave this page and check Facebook. I’ve been there, too. However, I think you’ll find this particular approach to process design helpful and workable in small success-based batches. We’re going to link these processes back to a part of the Business Model Canvas called ‘Key Activities’ so we can make sure these processes are really strategically aligned. Also, we’re going to design them individually- not as part of some giant, static, unusable thing that wallpapers a COO’s office.

Deliverable: Individual Atomic Processes & User Stories for Detail

03 Link Testing for Outcomes with Output

Remember that rant earlier about how outcomes are more important than output (even though output shows better in the short term)? Well, this is how to consistently manage outcomes in a relatively simple way. 0 Day success criteria have to do with usability testing. 30 Day success criteria have to do with testing for user engagement and ongoing consistency with your earlier results on usability. 90 Day success criteria have to do with whether what you’ve done is driving the underlying outcome it was supposed to achieve (less overhead, more throughput, lower errors, etc.).

Deliverable: Test Designs & Decision Criteria

01 Facilitate Alignment & an Economic Definition of ‘Done’

If you’re reading this, odds are you’re a pretty hard worker. Your favorite plan of action is …action!

I salute you, I resemble you, and, more often than not, getting to work and trying things is the right idea. That said, positive outcomes in this case are driven by good design, and good design is driven by focus. Here, the right focus requires a quick description of company strategy that you can use on an ongoing basis to anchor your work and avoid distractions.

I know what you’re thinking- no one’s paying me to think about strategy. I’m just supposed to get the enterprise software right. Your client and/or managers feel they know what they want and that it’s the software’s job to sort everything out. You know that’s not true- that software can only standardize and automate work that you already know how to do.

I think you’ll find that a lightweight strategic framing doesn’t take up too much time or goodwill, and it will help encourage the change and focus any substantial enterprise software implementation requires. With it, you’ll get:

  1. Structure.
  2. Linkage to success criteria.
  3. A drive to explicit, discussable designs.
  4. Linkage to company business model & strategy.

With a little practice, you’ll be able to talk through this strategic framing in 20-30 minutes- not bad for a project you may be on for months!

In the next three subsections, we’ll be answering three key questions:

  1. Who are the buyers, users and why do they buy?
  2. What is the end-to-end customer experience?
  3. What activities are strategically important?

We’ll be using a tool called the Business Model Canvas. Its chief virtue is that it’s pretty transparent and self-evident, and I think you’ll grasp it just fine in the steps that follow. That said, here’s a Business Model Canvas Tutorial and a Business Model Canvas Workshop if you find yourself wanting more background.

The first thing you’ll want is a copy of the Canvas. I recommend making notes on the printable version in your early drafts, but here is a Google Docs template you can use and which I’ll reference in the material that follows:
LINK TO THE GOOGLE DOCS TEMPLATE

Before we dive into the Canvas, let’s make sure we know the business you’re working on. For your company or product, draft a standard positioning statement to make sure you have all the basics on hand:

For (target customer) who (statement of the need or opportunity), the (product name) is a (product category) that (statement of key benefit – that is, compelling reason to buy). Unlike (primary competitive alternative), our product (statement of primary differentiation).

Here’s an example statement I created to describe ‘Children’s Theater’, the example we’ll be using here:

For children (K-12) seeking an expressive experience through the arts, the Children’s Theater is a performing arts institute that offers affordable programming to low-income schools and children. Unlike private institutions, our product offers national-quality programming with a long track record of success.

The positioning statement’s a handy way to make sure you understand a business- with a little practice you’ll be able to improvise it during calls, meetings, etc. for clarification.

If you’re part of a large company with multiple lines of business that have their own discrete P&Ls, then you should probably draft a Business Model Canvas for whichever line of business you’re working with and then also a ‘summary’ Canvas that describes the enterprise as a whole.

Who are the buyers, users and why do they buy?

Back to the Canvas- the first thing you’ll want to do is think through the key customer segments and personas to answer our first framing question “Who are the buyers, users and why do they buy?“.

What are the different types of customers and are there major differences among them? What key value propositions does the business deliver to them? Sketch these out and then draw lines for any key linkages. Below is an example for Children’s Theater. (For more on creating personas & value propositions, see the Personas Tutorial.)

[Canvas excerpt: Children’s Theater Customer Segments linked to Value Propositions]

What is the end-to-end customer experience?

Now our second key question: “What is the end-to-end customer experience?”. For this, I like a framework from the 19th century: AIDA, which stands for attention, interest, desire, action. They weren’t that big on services in the 1890s, so I also like to add onboarding and retention- AIDAOR. This is a good way to make sure you understand the customer journey, at least at a high level.

One thing I like to do to make sure I’ve really thought through a customer journey with AIDAOR is to storyboard it. Here’s an example for Children’s Theater:

[Storyboard: the Children’s Theater customer journey across AIDAOR]

If you’re interested in some tools for the storyboard, see ‘Storyboarding Customer Acquisition‘.

Once you’ve thought through the customer journey, ask yourself what kind of Customer Relationship you need over the course of the AIDAOR journey. The typical spectrum runs from dedicated personal service (if you have a question about your stocks, you call Bob, your financial advisor) to self-help online (residential/consumer services like Gmail, for instance). In between you have intermediate options like personal service (a call center or retail floor) and web-based support (filing requests online).

Following from the Customer Relationship, you have Channels, the means you’ll use to deliver that relationship to your various segments. Both of these items may differ over the course of the AIDAOR journey and/or between Customer Segments.

Here’s how they look for Children’s Theater:
[Canvas excerpt: Children’s Theater Customer Relationships and Channels]

The key point with this part of the Canvas is to have a nice, clear summary of the customer experience. This is important, first, because much of enterprise software is about customer experience in some way, and also because the items we’ve covered so far should be driving the left (infrastructure/delivery) side of the Canvas- the part which directly drives our next section on process design. Our last question speaks to this left part of the Canvas.

What activities are strategically important?

Our third framing question is “What activities are strategically important?”. These activities should be both important and unique to the company’s strategy. So, for example, it might be important to keep the books straight, but that’s probably true for every company.

A good way to zero in on this is to think about the company’s business type. The basic proposition here is that successful businesses have a focus that coheres to one of these three fundamental types:

[Figure: the three fundamental business types- infrastructure-driven, scope-driven, and product-driven]

Infrastructure-driven businesses are scale-oriented: being able to sell a lot of whatever they have in a relatively consistent fashion is their key profit driver. The examples above may not be traditionally thought of as ‘sexy’ businesses, but that’s not generally true of the type- educational institutions, for instance, are infrastructure businesses.

How does this matter for enterprise software again? Reasonable question! Consider this simple example: Would you implement a CRM the same way to support residential DSL at Verizon vs. accounts at a private Swiss bank? If you did, you’d almost certainly have a bad outcome at one of those clients.

Verizon is an infrastructure business, whereas the private Swiss bank is a scope-driven business: it provides a set of services to wealthy individuals. Customization, flexibility, and adaptation to individual customer needs are critical there, whereas that’s not true for Verizon DSL (we all get the same thing and that’s fine).

The last category, product-driven, offers something unique and proprietary to the market as its core activity.

For more on this idea of business types, see the original HBR article ‘Unbundling the Corporation‘.

After arriving at a core business type, think about the Key Activities that drive the business. What are the 5-8 activities that deliver against the preceding items having to do with customers? The closely related item is Key Resources- what are the assets that the business has or will develop that enable the same?

Here is how it looks for Children’s Theater (along with the other last few items in the Canvas):

[Canvas: the completed Business Model Canvas for Children’s Theater]

Our next step is to develop an inventory of process designs organized against these strategically significant Key Activities. We’ll then use those processes to draft a process-driven blueprint for our enterprise software deployment.

02 Prototype with User Stories & Process Designs

Few of us would object to the wisdom of ‘beginning with the end in mind’, yet how many times have we conveniently told someone, “We get the idea/we’ve got it,” when we’re not really positive we do understand?

It’s natural and it’s common, but it’s not productive: mismatches in understanding cascade and grow over the course of a project, turning it into a much larger mess at the end.

Conveniently for us, there’s a handy tool we can use to frame just about anything in this area: the atomic process. It has this general form:

[Diagram: an atomic process- a discrete input, transformative steps, and a discrete output]

With a little (really, just a little) practice, you’ll be able to use the tool in casual conversation with stakeholders or to define a major project.

An atomic process always has:

1. A discrete input (see circle on the far left)

2. A series of transformative steps (see the rectangles, triangles and flows)

3. A discrete output (see the circle on the far right)

4. Three metrics: process, output, and outcome (more on these shortly)

The first three items are probably pretty self-evident, but about the fourth you may be thinking: ‘Everyone gets crazy on metrics; I can skip that one.’ Well, yes and no, is my answer.

‘Yes’ in the sense that of course you don’t need to instrument metrics for every little process.

‘No’, however, in the sense that if you can’t identify the three metrics for a process, it’s likely you’re on a shaky foundation that will crumble under the weight of everything you subsequently do.

Even if you’re not going to create or even pay attention to reporting for a particular metric, being able to at least identify these metrics is an important part of making sure you have a discrete, manageable process.

Let’s step through those metrics.

1. The Process Metric
This is a measure of raw throughput. If our subject was production in a doorknob factory, the process metric would be something like ‘doorknobs produced/hour’. In a sales force automation (SFA) context dealing with leads, it might be ‘new leads created/day’.

2. The Output Metric
This is a measure of ‘adequate’ throughput, meaning that it conforms to our definition of what a doorknob should look like. In doorknob production, it might be ‘flawed doorknobs/total doorknobs’. In our SFA example, it might be the number of leads that don’t meet minimum basic criteria- they’re duplicates, the email bounces, the contact phone number is incomplete, etc.

3. The Outcome Metric
For most of us, this metric is (relatively speaking) the hardest to formulate, yet at the design stage it may be the most important. This metric deals with the question of ‘Did the outcomes from this process move the dial on our core objective? Did anyone care?’.

In doorknob production, the metric might be validated customer uptake (volume of new sales) and satisfaction with the doorknob (Net Promoter Score). In SFA, it might be the underlying effectiveness of the sales channel for connecting with and servicing the target customer segments (for example observed in any or several of: customer count, churn, dollar volume of sales, customer satisfaction).
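
To make this concrete, here’s a minimal sketch in Python of computing the three metrics for the SFA lead example. The field names and data are hypothetical, not a real CRM schema- the point is only that each metric is a simple, answerable question about the process:

```python
from datetime import date

# Hypothetical lead records exported from the CRM- field names are
# illustrative, not a real Salesforce schema.
leads = [
    {"created": date(2024, 5, 1), "is_duplicate": False, "email_bounced": False, "converted": True},
    {"created": date(2024, 5, 1), "is_duplicate": True, "email_bounced": False, "converted": False},
    {"created": date(2024, 5, 2), "is_duplicate": False, "email_bounced": True, "converted": False},
]
days_observed = 2

# 1. Process metric: raw throughput- new leads created per day.
process_metric = len(leads) / days_observed

# 2. Output metric: share of leads meeting minimum criteria
#    (not a duplicate, email didn't bounce).
adequate = [l for l in leads if not l["is_duplicate"] and not l["email_bounced"]]
output_metric = len(adequate) / len(leads)

# 3. Outcome metric: did anyone care? Here, a crude proxy-
#    conversion rate among the adequate leads.
outcome_metric = sum(l["converted"] for l in adequate) / len(adequate)

print(f"process: {process_metric:.1f} leads/day, "
      f"output: {output_metric:.0%} adequate, "
      f"outcome: {outcome_metric:.0%} converted")
```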

That’s it. If you can make yourself fluent in formulating processes this way, you’ll be laying a much stronger foundation for you and your teams.

If this is feeling a little classroom-y, don’t fear. We’re about to apply this tool in a simple hallway conversation.

Example

Let’s say our enterprise software consultant who’s helping United Children’s Theater has understood from the executive director that they look at the inputs and outputs to their lead qualification like this:

[Diagram: the lead qualification process as framed by the executive director]

She also knows that the input should be a Lead record and the output should be an Account, Contact, and Opportunity, and only if all the qualification criteria pass. She needs to diagnose what that really means in her next conversation, with Matilda in sales. More on that conversation shortly.

With more investigation and iteration on her prototype, she arrives at the following:

[Diagram: the refined lead qualification process design]

(I created the diagram above in an application called LucidChart, which integrates with Google Apps. You can download the template above and are free to use it with attribution back to this page: PROCESS DESIGN TEMPLATE.)

Designing User Experience with Stories

Driving to a process design that actually resembles how things work is a powerful first step to arriving at the solution. I think you’ll find it helps keep things at about the right level of abstraction as you investigate how things work.

Once you’re ready to start developing/configuring, I recommend the tried and true agile user story. If you’re new to agile user stories, they have this specific syntax:

“As a [persona],
I want to [do something],
so that I can [derive a benefit/realize a reward].”

Epic stories describe a more general action. Stories detail the epic and test cases are written against those stories for supplemental detail.

The basic idea with stories is that they help the designer (or whoever is acting in that role) be detail-oriented about the target user experience without prescribing the implementation for the developer. Design and developer then work together (validating as they go- the subject of our next and last section) to make sure they’re doing something valuable for the user.

Let’s detail the process design above, starting with an epic story that summarizes the process:

Epic Story: ‘As a donor manager, I want to record the prospect’s qualifications so I understand if and how I should progress with them.’

We’d then go through and detail that story with child stories and test cases. The test cases are also a place I like to make notes and record questions I want to discuss with my team- you can see those below with the question marks.

Pro tip: use numbers, etc. to map your stories back to various points in the process design.

Story: As a donor manager, I want to record the Lead qualifications so that I or someone else can readily follow up with them on relevant next steps.

Test cases:
• Make sure it’s possible to qualify and record their charter.
  ?: Should this be a simple yes/no on arts & K-12? If so, in aggregate or separately?
  ?: Notion- would it be useful to record the URL if it’s online?
  ?: Place to make notes? If so, just one for general, or some kind of prompt or relationship to other items?
  ?: What’s in the DM’s notes for a typical qualification?
• Make sure it’s possible to qualify current year funds.
  ?: What else is relevant here? Qualify when their new fund year/fiscal year starts? Size of typical donation?
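
Following the pro tip above, here’s a minimal sketch (in Python, purely illustrative- use whatever structure your tracker supports) of keeping a story, its test cases, and open questions mapped back to a numbered point in the process design:

```python
# Purely illustrative structure- a story with test cases and open
# questions, keyed to a numbered step in the process design.
story = {
    "process_step": 2,  # maps back to point 2 in the process diagram
    "story": ("As a donor manager, I want to record the Lead "
              "qualifications so that someone can readily follow up "
              "on relevant next steps."),
    "test_cases": [
        {
            "case": "It's possible to qualify and record their charter.",
            "open_questions": [
                "Simple yes/no on arts & K-12? Aggregate or separate?",
                "Record the URL if the charter is online?",
            ],
        },
        {
            "case": "It's possible to qualify current year funds.",
            "open_questions": ["When does their fund/fiscal year start?"],
        },
    ],
}

# A quick review aid: list everything still undecided before the build.
for tc in story["test_cases"]:
    for q in tc["open_questions"]:
        print(f"step {story['process_step']}: {q}")
```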

This level of definition is often skipped in facile environments like Salesforce- why write a user story for something when anyone can just go in and modify a picklist or create a text field? The reason is that having an explicit, sharable, discussable, and above all testable view of what you want to accomplish is important. It’s not modifying the picklist that’s hard; it’s figuring out whether it makes any sense at all.

Go Deeper

If you want to write agile user stories, see the tutorial ‘Your Best Agile User Story’ and the template ‘Creating Agile User Stories’.

03 Link Testing for Outcomes with Output

The triangle between proposing, agreeing, and then making an acceptable delivery has bedeviled software development since its earliest days. If you struggle here, don’t feel bad- most of us do.

Start by Proposing Jobs-to-be-Done

If you find yourself answering questions about a solution where not everyone’s in agreement or clear about your users’ underlying job-to-be-done (JTBD), you’re already in trouble. Lead with the problems you want to solve/the jobs you want to do for the user and get agreement on their definition and priority and then use those to anchor your progress.

If you’re on a larger project, you may want to nest JTBD at various levels of detail, for example:

Adding Profitable Customer Relationships
  • Encouraging the Use of Learned Best Practices in the Sales Process
  • Consistently Applying Qualification Criteria to Increase Close Rate
  • Supplying High Quality Inputs Downstream to Solutions Teams
  • Providing Visibility to Management on Sales Progress

This nesting will also give you a handy tool for tuning your presentation for different audiences whose preference for detail may vary.

Propose a Testable Solution (Using Science!)

I won’t belabor this since you’ve probably heard all about it, but evidence-based work patterns are at the core of best practices like Lean Startup, lean enterprise, and Test-Driven Development. The diagram below summarizes my take on how it should work in enterprise software deployments in terms of the ‘framework of all frameworks’, the scientific method:

[Diagram: hypothesis-driven development mapped onto the scientific method]

This is what hypothesis-driven development (HDD) is about- breaking big ideas into small, sequential, testable pieces and working relative to an economic definition of done.

01 IDEA
The idea(s) you’ll take through the process should come from your diagnosis of what problems/jobs are important to the user and how you might deliver something better than their current alternatives.

02 HYPOTHESIS
In our case here, this will always be some formulation along the lines of “If we do [x] for [certain user], then they will [respond in a certain way].” I recommend breaking your testing into 0 Day, 30 Day, and 90 Day test cycles.
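
As a sketch of how you might capture this (the structure and names here are my own, not a standard), a hypothesis and its three test cycles could look like:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    # "If we do [action] for [user], then they will [expected_response]."
    action: str
    user: str
    expected_response: str
    day0_criteria: list = field(default_factory=list)   # usability
    day30_criteria: list = field(default_factory=list)  # engagement
    day90_criteria: list = field(default_factory=list)  # outcomes

lead_qualification = Hypothesis(
    action="add structured lead qualification to the CRM",
    user="donor development managers",
    expected_response="qualify leads consistently instead of reverting to spreadsheets",
    day0_criteria=["user enters their last 5 prospects into the right fields unaided"],
    day30_criteria=["daily logins from donor development staff",
                    "lead throughput at or above the spreadsheet baseline"],
    day90_criteria=["new accounts land in target segments, properly qualified"],
)
print(lead_qualification.day0_criteria)
```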

03 EXPERIMENTAL DESIGN FOR 0 DAY CRITERIA

Not that doing things well is ever easy, but out of the three cycles, this is the one where you have the most control and an immediate feedback loop, which is nice. This testing is basically about usability: if you ask the user to show you how they would [ex: enter a lead], are they able to do it, and do they do it as you expect? If not, revise your prototype (or working software if you went that far) and repeat.

A good starter test is to have a single set of applicable users (one from each function) work a real business item through the system as they would IRL (in real life). The users should not be management (assuming management doesn’t normally do the tasks in question) but they should be top performers. These top-performing users are harder to get since they’re busy, but it’s important because at this point your job is to generate a single set of careful observations and to include as many ideas as you can from the company’s best current practices.

Have the users execute the process from start to finish with a real item (Lead, Opportunity, Call Log, etc). Keep it as realistic as possible. For instance, if it’s a salesperson working with a solution engineer (sales engineer, etc.), don’t have them sit next to each other and talk through the item if that’s not what they’d normally do. Have them go back and forth through the system.

(You might want to have them sit with you and walk through the process but that should have been in 02 Diagnose since at this point you’re validating a solution you’ve already formulated.)

You want to make sure you get this right because after this you have to wait a while to see what actually happens in the wild- and that’s hard to predict consistently even if you do a great job on this 0 Day stuff. For more on usability testing, see Usability Testing.

03 EXPERIMENTAL DESIGN FOR 30 DAY CRITERIA

You’ve validated usability with your 0 Day testing, and now you’ve released your work to more users to actually use. What happens? After 30 days, are they still using it, or have they gotten frustrated and reverted to bootleg spreadsheets, etc.? Are they using it in the way you expect- is data going into the system in a consistent way? If not, now’s the time to find out why and revise.

03 EXPERIMENTAL DESIGN FOR 90 DAY CRITERIA

Finally, assuming you’re OK on the 0 and 30 Day criteria, are you actually getting the business better outcomes? This should be validated with something relatively objective like error rates or throughput.

Just because we’re using the scientific method doesn’t mean the results need to look like a chemistry experiment. Starting off with deeper testing on fewer users will almost always lead to better deliverables more quickly, and also to more intuition about success criteria for the system at large. It will also help you avoid the slippery slope of going into ‘pseudo-production’ and all the operational complexity that likely requires. Small-scale testing with just a few users will generally give you a pretty free license to change the system at will.

Pro tip: Create (with placeholders) the presentation you plan to deliver based on your validation testing, even if the presentation is a simple email. Testing delivers one of three results: validated, invalidated, or inconclusive. Envisioning your presentation will help you make sure you’ve put together everything you need to get an actionable result.

Example Validation Criteria

For the lead qualification process we’ve been reviewing, we might frame the underlying problem or job as follows: Implementing learned best practices on account development tasks & keeping those aligned with corporate objectives.

The current alternative is an informal spreadsheet for account tracking on a shared drive.

What’s the win? It might be something like: the CRM implementation will help with best-practice sales and time management through structure and automation around tasks like-
* lead scoring to prioritize calls (sketched below)
* simple creation of follow-ups and related notices to help prioritize work
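
As an aside, that first item- lead scoring- can start very simply. Here’s a hedged sketch; the fields and weights are invented for illustration and should be tuned against what actually predicts closes:

```python
prospects = [
    {"name": "Westside K-8", "serves_k12": True, "arts_in_charter": True,
     "current_year_funds": False, "prior_donor": False},
    {"name": "Arts Fund NW", "serves_k12": False, "arts_in_charter": True,
     "current_year_funds": True, "prior_donor": True},
]

def lead_score(lead: dict) -> int:
    """Score 0-100. Weights are illustrative guesses, not a standard."""
    score = 0
    if lead.get("serves_k12"):
        score += 30
    if lead.get("arts_in_charter"):
        score += 30
    if lead.get("current_year_funds"):
        score += 25
    if lead.get("prior_donor"):
        score += 15
    return score

# Call the highest-scoring prospects first.
for p in sorted(prospects, key=lead_score, reverse=True):
    print(p["name"], lead_score(p))
```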

Given all that, how might we test the 0 Day criteria? We might build a prototype (either with a prototyping tool or, if simple, the enterprise software configuration itself) and see whether, when the user inputs their last 5 prospects, the data goes into the fields as designed, without additional support or questions.

How about the 30 Day criteria? In our scenario, we would want to see daily logins to the new system from the employees we know work daily on donor development (sales). We’d also want to see lead processing at roughly the rate we were able to observe in the spreadsheet or understand from the users. If not, we need to ask ourselves the hard questions about why this isn’t happening (vs. just sending the users an email and asking them to use the system!).
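
Here’s a hedged sketch of automating that 30 Day check, assuming you can export login and lead-creation events to a CSV (the file name, column layout, users, and baseline are all hypothetical):

```python
import csv
from collections import defaultdict

# Hypothetical export, one event per row:
# date,user,event
# 2024-06-03,matilda,login
# 2024-06-03,matilda,lead_created
DONOR_DEV_USERS = {"matilda", "sam"}   # who should be logging in daily
SPREADSHEET_BASELINE = 4.0             # leads/day observed in the old spreadsheet

logins_by_day = defaultdict(set)
leads_created = 0
days = set()

with open("activity_export.csv") as f:
    for row in csv.DictReader(f):
        days.add(row["date"])
        if row["user"] not in DONOR_DEV_USERS:
            continue
        if row["event"] == "login":
            logins_by_day[row["date"]].add(row["user"])
        elif row["event"] == "lead_created":
            leads_created += 1

# Superset check: did every donor-dev user log in that day?
full_login_days = sum(1 for d in days if logins_by_day[d] >= DONOR_DEV_USERS)
print(f"days with all donor-dev logins: {full_login_days}/{len(days)}")
print(f"lead throughput: {leads_created / len(days):.1f}/day "
      f"(spreadsheet baseline: {SPREADSHEET_BASELINE}/day)")
```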

Finally, what about 90 days? The system itself isn’t going to generate more sales- at best, it’s a helping hand. Part of the job of this system is to help the executive director manage a new salesperson/development manager. Is the system helping with that? We might look specifically at observations on whether new accounts are in the target segments and whether they were properly qualified in the system. If sales are supposed to have a meaningful post-mortem, we might also look at whether a manager can review those and properly understand them.

For more on this, see the template for Customer Experience Mapping.

Getting Wins with AI

There’s a reason this section comes last: all the tools and technology you need to succeed with AI are readily available and fairly affordable. The good news is that the hard part of getting enterprise IT wins (digital transformation, etc.) mostly has to do with the items you’ve already learned about in this playbook. From there, getting wins with AI is relatively straightforward, if still a fair amount of actual work.

For durable wins with AI, companies need three main ingredients:

  1. data strategy and governance
  2. valuable interventions
  3. context

Data Strategy and Governance does not have to be a big undertaking. If you’re new or small, use the private/enterprise version of services like ChatGPT and off you go. You may learn later that you need more substantial infrastructure to enhance what you’re doing, and that’s fine. What won’t work is spending a long time on a ‘robust, scalable’ data strategy and then assuming the actual wins will follow. Win first, then scale.

The next item, ‘valuable interventions’, is really the fulcrum of getting durable AI wins. Where are the places that AI can actually, productively improve a given UX? Just like an enterprise software deployment or ‘digital transformation’, this is where you should start, and it’s what the playbook is about. With tools like ChatGPT, it is really easy to create a culture of experimentation on this.

Finally, there’s context. You can accrue and leverage your own proprietary data to make an AI do better work, and that’s #1. You can also understand the context of your UXs better and feed that to the AI to get better results. A terrific example of this is edtech leader Khan Academy’s work on their ‘Khanmigo’ virtual assistant. While they use essentially off-the-shelf GPT AI, they created their own markup language to add additional context to the prompts they send it, based on who the user is and what they’re doing. Here too, specifics are a good place to start, and once you have a few user stories and process designs under your belt, I think you’ll find creating contextual wins for your AIs pretty intuitive.
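
As a rough sketch of the context idea (using the OpenAI Python client; the context fields, prompt, and model choice are my own assumptions, not Khan Academy’s actual markup):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def draft_followup(lead: dict, user_role: str) -> str:
    # Prepend proprietary CRM context so the output reflects who the
    # user is and what they're doing- the "context" ingredient.
    context = (
        f"You are assisting a {user_role} at a children's theater.\n"
        f"Prospect: {lead['name']} (segment: {lead['segment']}, "
        f"last contact: {lead['last_contact']})."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption- use whatever model you've licensed
        messages=[
            {"role": "system", "content": context},
            {"role": "user", "content": "Draft a short follow-up email."},
        ],
    )
    return resp.choices[0].message.content

print(draft_followup(
    {"name": "Westside K-8", "segment": "low-income schools",
     "last_contact": "2024-06-03"},
    user_role="donor development manager",
))
```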

A glass half empty version of all this is that it’s easy to execute well on two out of three of these items and still fail. Have a great data strategy and a great (general) idea on where to find valuable interventions, but neglect context, and you can end up with a winning strategy that fails in its execution. For example, Eric Siegel published a case study of how UPS created what, on paper, was a terrific system for optimizing package deliveries, but didn’t get the solution off the ground until they did some hard work on context, looking at the UX to AI interactions on the ground.

Individual users with access to tools like ChatGPT can find quick, early wins; that should be encouraged and can be a great concierge vehicle for finding more scalable wins. But a few clever users making the most of ChatGPT will, in and of itself, lead only to short-term, parity-level wins.

The Importance of Working in Small Batches

Project plans create the perception of certainty, and we all like certainty. And with the money, time, and people an IT project occupies, naturally we crave an airtight plan. Unfortunately, even the most thoughtful implementer is unlikely to predict the interaction of all these elements accurately enough to create a meaningful plan. IT projects notoriously miss their dates and budgets. Trying to create a plan ahead of substantial visibility and then conform to it drives destructive priorities and approaches, and distracts from identifying and delivering on important jobs-to-be-done.

Why do we keep creating plans at a level of detail that exceeds our visibility? One classic reason for needing a plan is accountability, which is legitimate. But accountability for what decision, exactly? Billing? Advising changes in plan? Changing staffing? Canceling the project altogether?

For almost all of these, small batches (1 week agile iterations) against priority areas are a better solution. Each of these should have a prioritized set of objectives and deliver some working items- personas, jobs-to-be-done, process designs, or working software. For those of you familiar with agile, this is probably old hat.

For billing clients as a third-party firm, this means smaller financial commitments with more success milestones- pay-as-you-go. If management wants the prerogative to advise changes in the plan based on their observations of how things are going, they should make themselves available at the end of each iteration to review the output and provide advice. If management wants the prerogative to change staffing or cancel the project, smaller batches leave them a freer hand and more visibility.

The following is an example of a weekly email for week 1 of the project we’ve been reviewing:

What did we accomplish this week?
– Defined and prioritized first batch of jobs-to-be-done (3)
– Working with sales and solution engineers, drafted preliminary process design for lead qualification
– Presented above to management; validated next week’s content
– Set up staging environment

What will we accomplish next week?
– Complete alpha version of lead qualification module
– Complete Phase 1 validation tests for above
– If validation successful, prepare move to production and sales training module
– If not, iterate

If more time is available:
– Review proposal generation practices, standards and customer interactions

What obstacles are impeding our progress?
– None at this time

Managing Outcome-Driven Projects

Validation will have one of three results: 1) validated, 2) invalidated, or 3) inconclusive. #1 and #2 are part of the process. #3 means the process needs work. That’s fine and to be expected as you get comfortable with these tools. The main thing is to be honest with yourself about which of the three results you have.

It’s also important to set realistic expectations. When’s the last time you heard someone raving about the software they use at work? Aspire to build something great- that’s the purpose of this playbook. Just don’t set yourself up with unrealistic expectations. Enterprise software is like the opposite of pizza- even when it’s really well done, it’s usually just tolerated (pizza is enjoyable even when it’s not that well done; thanks to David Rosenthal at Leonid Systems for this turn of phrase).

Part of this is due to the fact that many of the benefits of enterprise software come from improving interpersonal and interdepartmental collaboration. Take the example at hand: even if there’s some benefit in helping them qualify leads, most of the salespeople will regard the new requirements on lead qualification as nuisances. The solution engineers will have an easier time (if the implementation validates) since they’ll get cleaner inputs that create less work on proposals. But most likely the system will clamp down on certain corners the salespeople used to cut, or flexibility they enjoyed, that on the whole generated worse outcomes downstream.

Emotionally, we measure our sense of accomplishment locally. Rationally, every salesperson knows that the company is better off if proposals are easier to generate and better aligned with the final operating deal. But what gives them a sense of reward? Nothing even comes close to the joy of closing a deal. This goes for everyone and not just salespeople. This is just human nature and I take account of it here to help you avoid validation measures that are unlikely to succeed.

Taking a poll of how much individuals like the new system is likely to deliver tepid results for the reasons above, even if the system as a whole is making the company much more effective. Looking at the performance of individual jobs-to-be-done and their related processes will deliver results that are a more reliable indicator of whether you’re delivering value with the system.

That’s it! Good luck with your projects and please get in touch via the Contact form here, LinkedIn, Twitter, or the comments below.

IMAGE CREDITS
Frankenstein: John Tenniel [CC0], via Wikimedia Commons

Rube Goldberg: By Rube Goldberg (an old comic book) [Public domain], via Wikimedia Commons

Pizza: By Jakob Dettner (de:User:Jdettner), Rainer Zenz (de:User:Rainer Zenz), SoothingR (en:User:SoothingR) (Own work) [CC-BY-SA-2.0-de (http://creativecommons.org/licenses/by-sa/2.0/de/deed.en)], via Wikimedia Commons