Part 2: Steer

How Vision Leads to Steering

At its heart, a startup is a catalyst that transforms ideas into products. As customers interact with those products, they generate feedback and data. The feedback is both qualitative (such as what they like and don’t like) and quantitative (such as how many people use it and find it valuable). As we saw in Part One, the products a startup builds are really experiments; the learning about how to build a sustainable business is the outcome of those experiments. For startups, that information is much more important than dollars, awards, or mentions in the press, because it can influence and reshape the next set of ideas.
We can visualize this three-step process as a simple loop: Ideas → Build → Product → Measure → Data → Learn, and back again to Ideas.
To apply the scientific method to a startup, we need to identify which hypotheses to test. I call the riskiest elements of a startup’s plan, the parts on which everything depends, leap-of-faith assumptions. The two most important assumptions are the value hypothesis and the growth hypothesis. These give rise to tuning variables that control a startup’s engine of growth. Each iteration of a startup is an attempt to rev this engine to see if it will turn. Once it is running, the process repeats, shifting into higher and higher gears. Once clear on these leap-of-faith assumptions, the first step is to enter the Build phase as quickly as possible with a minimum viable product (MVP). The MVP is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time.
When we enter the Measure phase, the biggest challenge will be determining whether the product development efforts are leading to real progress. Remember, if we’re building something that nobody wants, it doesn’t much matter if we’re doing it on time and on budget. The method I recommend is called innovation accounting, a quantitative approach that allows us to see whether our engine-tuning efforts are bearing fruit. It also allows us to create learning milestones, which are an alternative to traditional business and product milestones. Learning milestones are useful for entrepreneurs as a way of assessing their progress accurately and objectively; they are also invaluable to managers and investors who must hold entrepreneurs accountable. However, not all metrics are created equal, and in Chapter 7 I’ll clarify the danger of vanity metrics in contrast to the nuts-and-bolts usefulness of actionable metrics, which help to analyze customer behavior in ways that support innovation accounting. Finally, and most important, there’s the pivot. Upon completing the Build-Measure-Learn loop, we confront the most difficult question any entrepreneur faces: whether to pivot the original strategy or persevere. If we’ve discovered that one of our hypotheses is false, it is time to make a major change to a new strategic hypothesis.
The Lean Startup method builds capital-efficient companies because it allows startups to recognize that it’s time to pivot sooner, creating less waste of time and money. Although we write the feedback loop as Build-Measure-Learn because the activities happen in that order, our planning really works in the reverse order: we figure out what we need to learn, use innovation accounting to figure out what we need to measure to know if we are gaining validated learning, and then figure out what product we need to build to run that experiment and get that measurement. All of the techniques in Part Two are designed to minimize the total time through the Build-Measure-Learn feedback loop.
Chapter 5: Leap!

For startups, the role of strategy is to help figure out the right questions to ask.

STRATEGY IS BASED ON ASSUMPTIONS
The first challenge for an entrepreneur is to build an organization that can test these assumptions systematically. The second challenge, as in all entrepreneurial situations, is to perform that rigorous testing without losing sight of the company’s overall vision.
Acting as if these assumptions are true is a classic entrepreneur superpower. They are called leaps of faith precisely because the success of the entire venture rests on them. If they are true, tremendous opportunity awaits. If they are false, the startup risks total failure.
Most leaps of faith take the form of an argument by analogy: Previous technology X was used to win market Y because of attribute Z. We have a new technology X2 that will enable us to win market Y2 because we too have attribute Z.
A well-stated approach should make clear that what is needed is empirical testing first: let’s make sure that there really are hungry customers out there eager to embrace our new technology.
Randy Komisar, whose book Getting to Plan B discussed the concept of leaps of faith in great detail, uses a framework of “analogs” and “antilogs” to plot strategy.
He explains the analog-antilog concept by using the iPod as an example. “If you were looking for analogs, you would have to look at the Walkman,” he says. “It solved a critical question that Steve Jobs never had to ask himself: Will people listen to music in a public place using earphones? We think of that as a nonsense question today, but it is fundamental. When Sony asked the question, they did not have the answer. Steve Jobs had [the answer] in the analog [version].” Sony’s Walkman was the analog. Jobs then had to face the fact that although people were willing to download music, they were not willing to pay for it. “Napster was an antilog. That antilog had to lead him to address his business in a particular way,” Komisar says. “Out of these analogs and antilogs come a series of unique, unanswered questions. Those are leaps of faith that I, as an entrepreneur, am taking if I go through with this business venture. They are going to make or break my business. In the iPod business, one of those leaps of faith was that people would pay for music.” Of course that leap of faith turned out to be correct.
There are many organizations that are wildly profitable in the short term but ultimately value-destroying, such as the organizers of Ponzi schemes, and fraudulent or misguided companies (e.g., Enron and Lehman Brothers). A similar thing is true for growth. As with value, it’s essential that entrepreneurs understand the reasons behind a startup’s growth. There are many value-destroying kinds of growth that should be avoided. An example would be a business that grows through continuous fund-raising from investors and lots of paid advertising but does not develop a value-creating product. Such businesses are engaged in what I call success theater, using the appearance of growth to make it seem that they are successful. One of the goals of innovation accounting, which is discussed in depth in Chapter 7, is to help differentiate these false startups from true innovators.
In Lean UX, practitioners recognize that the customer archetype (such as a persona) is a hypothesis, not a fact. The customer profile should be considered provisional until the strategy has shown via validated learning that we can serve this type of customer in a sustainable way.
If too much analysis is dangerous but none can lead to failure, how do entrepreneurs know when to stop analyzing and start building? The answer is a concept called the minimum viable product.
Chapter 6: Test

A minimum viable product (MVP) helps entrepreneurs start the process of learning as quickly as possible. It is not necessarily the smallest product imaginable, though; it is simply the fastest way to get through the Build-Measure-Learn feedback loop with the minimum amount of effort.
Contrary to traditional product development, which usually involves a long, thoughtful incubation period and strives for product perfection, the goal of the MVP is to begin the process of learning, not end it. Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.
Early adopters are suspicious of something that is too polished: if it’s ready for everyone to adopt, how much advantage can one get by being early? As a result, additional features or polish beyond what early adopters demand is a form of wasted resources and time.
Minimum viable products range in complexity from extremely simple smoke tests (little more than an advertisement) to actual early prototypes complete with problems and missing features. Deciding exactly how complex an MVP needs to be cannot be done formulaically. It requires judgment.
The lesson of the MVP is that any additional work beyond what was required to start learning is waste, no matter how important it might have seemed at the time.
Dropbox example: The video is banal, a simple three-minute demonstration of the technology as it is meant to work, but it was targeted at a community of technology early adopters. Drew narrates the video personally, and as he’s narrating, the viewer is watching his screen. As he describes the kinds of files he’d like to synchronize, the viewer can watch his mouse manipulate his computer. Of course, if you’re paying attention, you start to notice that the files he’s moving around are full of in-jokes and humorous references that were appreciated by this community of early adopters. Drew recounted, “It drove hundreds of thousands of people to the website. Our beta waiting list went from 5,000 people to 75,000 people literally overnight. It totally blew us away.”
In this case, the video was the minimum viable product. The MVP validated Drew’s leap-of-faith assumption that customers wanted the product he was developing not because they said so in a focus group or because of a hopeful analogy to another business, but because they actually signed up.
In a concierge MVP, this personalized service is not the product but a learning activity designed to test the leap-of-faith assumptions in the company’s growth model. In fact, a common outcome of a concierge MVP is to invalidate the company’s proposed growth model, making it clear that a different approach is needed.
Only at the point where the founders were too busy to bring on additional customers did Manuel and his team start to invest in automation in the form of product development. Each iteration of their minimum viable product allowed them to save a little more time and serve a few more customers: delivering the recipes and shopping list via e-mail instead of via an in-home visit, starting to parse lists of what was on sale automatically via software instead of by hand, even eventually taking credit card payments online instead of a handwritten check.
As you consider building your own minimum viable product, let this simple rule suffice: remove any feature, process, or effort that does not contribute directly to the learning you seek.
Visionaries are especially afraid of a false negative: that customers will reject a flawed MVP that is too small or too limited. It is precisely this attitude that one sees when companies launch fully formed products without prior testing.
We all need a disciplined, systematic approach to figuring out if we’re making progress and discovering if we’re actually achieving validated learning. I call this system innovation accounting.
Chapter 7: Measure

A startup’s job is to (1) rigorously measure where it is right now, confronting the hard truths that assessment reveals, and then (2) devise experiments to learn how to move the real numbers closer to the ideal reflected in the business plan.
Innovation accounting works in three steps: first, use a minimum viable product to establish real data on where the company is right now. Without a clear-eyed picture of your current status— no matter how far from the goal you may be— you cannot begin to track your progress. Second, startups must attempt to tune the engine from the baseline toward the ideal. This may take many attempts. After the startup has made all the micro changes and product optimizations it can to move its baseline toward the ideal, the company reaches a decision point. That is the third step: pivot or persevere.
If the company is making good progress toward the ideal, that means it’s learning appropriately and using that learning effectively, in which case it makes sense to continue. If not, the management team eventually must conclude that its current product strategy is flawed and needs a serious change. When a company pivots, it starts the process all over again, reestablishing a new baseline and then tuning the engine from there. The sign of a successful pivot is that these engine-tuning activities are more productive after the pivot than before.
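The three-step loop lends itself to a toy decision rule. The sketch below is my illustration, not a formula from the text: the 10 percent threshold and the linear "fraction of the gap closed" measure are assumptions chosen only to make the baseline-to-ideal comparison concrete.

```python
def pivot_or_persevere(baseline, current, ideal, min_progress=0.1):
    """Toy pivot-or-persevere rule (illustrative assumption):
    persevere if engine tuning has closed at least `min_progress`
    of the gap between the MVP baseline and the business-plan ideal."""
    gap = ideal - baseline
    fraction_closed = (current - baseline) / gap if gap else 1.0
    return "persevere" if fraction_closed >= min_progress else "pivot"

# Baseline conversion 1%, plan calls for 11%; tuning has reached 5%.
print(pivot_or_persevere(0.01, 0.05, 0.11))  # -> persevere

# No movement off the baseline after all optimizations are exhausted.
print(pivot_or_persevere(0.01, 0.01, 0.11))  # -> pivot
```

The point of the sketch is that the decision is anchored to a pre-committed ideal from the business plan, not to whether the team feels busy.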
Establish the Baseline

These MVPs provide the first example of a learning milestone. An MVP allows a startup to fill in real baseline data in its growth model— conversion rates, sign-up and trial rates, customer lifetime value, and so on— and this is valuable as the foundation for learning about customers and their reactions to a product even if that foundation begins with extremely bad news.
Tuning the Engine

Once the baseline has been established, the startup can work toward the second learning milestone: tuning the engine. Every product development, marketing, or other initiative that a startup undertakes should be targeted at improving one of the drivers of its growth model.
Pivot or Persevere

Over time, a team that is learning its way toward a sustainable business will see the numbers in its model rise from the horrible baseline established by the MVP and converge to something like the ideal one established in the business plan. A startup that fails to do so will see that ideal recede ever farther into the distance. When this is done right, even the most powerful reality distortion field won’t be able to cover up this simple fact: if we’re not moving the drivers of our business model, we’re not making progress. That becomes a sure sign that it’s time to pivot.
Cohort Analysis Example
To read the graph, you need to understand something called cohort analysis. This is one of the most important tools of startup analytics. Although it sounds complex, it is based on a simple premise. Instead of looking at cumulative totals or gross numbers such as total revenue and total number of customers, one looks at the performance of each group of customers that comes into contact with the product independently. Each group is called a cohort.
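The premise fits in a few lines of Python. The event log and field layout below are hypothetical, but the mechanic is the one described: group customers by when they first arrived and score each group independently instead of tracking one cumulative total.

```python
from collections import defaultdict

# Hypothetical event log: (customer_id, signup_month, ever_paid)
events = [
    ("a", "2009-01", False), ("b", "2009-01", True),
    ("c", "2009-02", False), ("d", "2009-02", False),
    ("e", "2009-02", True),  ("f", "2009-03", True),
]

def cohort_conversion(events):
    """Per-cohort conversion: among the customers who arrived in each
    period, what fraction went on to pay?"""
    totals, payers = defaultdict(int), defaultdict(int)
    for _customer, cohort, paid in events:
        totals[cohort] += 1
        payers[cohort] += int(paid)
    return {c: payers[c] / totals[c] for c in sorted(totals)}

print(cohort_conversion(events))
# Each cohort is an independent report card, immune to being
# averaged away inside a growing cumulative total.
```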
If you look closely, you’ll see that the graph shows some clear trends. Some product improvements are helping— a little. The percentage of new customers who go on to use the product at least five times has grown from less than 5 percent to almost 20 percent. Yet despite this fourfold increase, the percentage of new customers who pay money for IMVU is stuck at around 1 percent. Think about that for a moment. After months and months of work, thousands of individual improvements, focus groups, design sessions, and usability tests, the percentage of new customers who subsequently pay money is exactly the same as it was at the onset even though many more customers are getting a chance to try the product.
Thanks to the power of cohort analysis, we could not blame this failure on the legacy of previous customers who were resistant to change, external market conditions, or any other excuse. Each cohort represented an independent report card, and try as we might, we were getting straight C’s. This helped us realize we had a problem. Once I had data in hand, my interactions with customers changed. Suddenly I had urgent questions that needed answering: Why aren’t customers responding to our product “improvements”? Why isn’t our hard work paying off?
Once our efforts were aligned with what customers really wanted, our experiments were much more likely to change their behavior for the better. This pattern would repeat time and again, from the days when we were making less than a thousand dollars in revenue per month all the way up to the time we were making millions. In fact, this is the sign of a successful pivot: the new experiments you run are overall more productive than the experiments you were running before. This is the pattern: poor quantitative results force us to declare failure and create the motivation, context, and space for more qualitative research. These investigations produce new ideas— new hypotheses— to be tested, leading to a possible pivot.
VANITY METRICS: A WORD OF CAUTION

To see the danger of vanity metrics clearly, let’s return once more to the early days of IMVU. Take a look at the following graph, which is from the same era in IMVU’s history as that shown earlier in this chapter. It covers the same time period as the cohort-style graph on this page; in fact, it is from the same board presentation.
But think back to the same data presented in a cohort style. IMVU is adding new customers, but it is not improving the yield on each new group. The engine is turning, but the efforts to tune the engine are not bearing much fruit. From the traditional graph alone, you cannot tell whether IMVU is on pace to build a sustainable business; you certainly can’t tell anything about the efficacy of the entrepreneurial team behind it.
ACTIONABLE METRICS VERSUS VANITY METRICS
To get a better sense of the importance of good metrics, let’s look at a company called Grockit.
Cohorts and Split-tests

Grockit changed the metrics they used to evaluate success in two ways. Instead of looking at gross metrics, Grockit switched to cohort-based metrics, and instead of looking for cause-and-effect relationships after the fact, Grockit would launch each new feature as a true split-test experiment. A split-test experiment is one in which different versions of a product are offered to customers at the same time. By observing the changes in behavior between the two groups, one can make inferences about the impact of the different variations.
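A minimal sketch of the mechanics, under stated assumptions: the experiment name and the outcome data are hypothetical, and the deterministic seeded bucketing is one common way to assign variants, not Grockit's documented implementation.

```python
import random

def assign_variant(user_id, experiment="new-lesson-flow"):
    """Deterministically bucket a user so the same person always sees
    the same version of the feature (experiment name is hypothetical)."""
    rng = random.Random(f"{experiment}:{user_id}")
    return "treatment" if rng.random() < 0.5 else "control"

def conversion_rate(outcomes):
    """Fraction of a group that exhibited the behavior we care about."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Hypothetical results: did each user register after their first visit?
control = [True, False, False, True, False]
treatment = [True, True, False, True, False]
lift = conversion_rate(treatment) - conversion_rate(control)
print(f"lift: {lift:+.0%}")
```

Because both versions run at the same time, any difference between the groups can be attributed to the feature rather than to seasonality or a shifting customer mix.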
Kanban

Following the lean manufacturing principle of kanban, or capacity constraint, Grockit changed the product prioritization process. Under the new system, user stories were not considered complete until they led to validated learning. Thus, stories could be cataloged as being in one of four states of development: in the product backlog, actively being built, done (feature complete from a technical point of view), or in the process of being validated. Validated was defined as “knowing whether the story was a good idea to have been done in the first place.” This validation usually would come in the form of a split test showing a change in customer behavior but also might include customer interviews or surveys.
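The four-state flow with a capacity constraint can be sketched as a small state machine. The per-bucket limit of three is an illustrative assumption; the important behavior is that new work stops flowing when finished features pile up unvalidated.

```python
STATES = ("backlog", "in_progress", "built", "validated")
LIMIT = 3  # hypothetical per-bucket capacity constraint

class KanbanBoard:
    def __init__(self):
        self.buckets = {state: [] for state in STATES}

    def _find(self, story):
        for state in STATES:
            if story in self.buckets[state]:
                return state
        raise KeyError(story)

    def add(self, story):
        if len(self.buckets["backlog"]) >= LIMIT:
            raise RuntimeError("backlog full: validate finished work first")
        self.buckets["backlog"].append(story)

    def advance(self, story):
        """Move a story one state forward, enforcing the limit so that
        'done' features must be validated before new work can start."""
        state = self._find(story)
        if state == "validated":
            return  # nothing beyond validated
        nxt = STATES[STATES.index(state) + 1]
        if nxt != "validated" and len(self.buckets[nxt]) >= LIMIT:
            raise RuntimeError(f"{nxt} bucket full: validate finished work first")
        self.buckets[state].remove(story)
        self.buckets[nxt].append(story)
```

The design choice worth noticing: "validated" is a state a story must earn, so shipping a feature without learning from it leaves the board clogged.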
A solid process lays the foundation for a healthy culture, one where ideas are evaluated by merit and not by job title. Most important, teams working in this system begin to measure their productivity according to validated learning, not in terms of the production of new features.
Metrics: The Three A's

These examples from Grockit demonstrate each of the three A’s of metrics: actionable, accessible, and auditable.
Actionable

For a report to be considered actionable, it must demonstrate clear cause and effect. Otherwise, it is a vanity metric. The reports that Grockit’s team began to use to judge their learning milestones made it extremely clear what actions would be necessary to replicate the results.
Accessible

Remember the saying “Metrics are people, too.” The easiest way to make reports comprehensible is to use tangible, concrete units.
This is why cohort-based reports are the gold standard of learning metrics: they turn complex actions into people-based reports. Each cohort analysis says: among the people who used our product in this period, here’s how many of them exhibited each of the behaviors we care about. Accessibility also refers to widespread access to the reports.
Auditable

That’s why the third A of good metrics, “auditable,” is so essential. We must ensure that the data is credible to employees.
This is the only way to check whether the reports contain true facts. Managers need the ability to spot-check the data with real customers. This also has a second benefit: systems that provide this level of auditability give managers and entrepreneurs the opportunity to gain insight into why customers are behaving the way the data indicate. In addition, those building reports must make sure the mechanisms that generate them are not too complex. Whenever possible, reports should be drawn directly from the master data rather than from an intermediate system, which reduces opportunities for error.
Chapter 8: Pivot or Persevere

Startup productivity is not about cranking out more widgets or features. It is about aligning our efforts with a business and product that are working to create value and drive growth. In other words, successful pivots put us on a path toward growing a sustainable business.
INNOVATION ACCOUNTING LEADS TO FASTER PIVOTS
A STARTUP’S RUNWAY IS THE NUMBER OF PIVOTS IT CAN STILL MAKE
Seasoned entrepreneurs often speak of the runway that their startup has left: the amount of time remaining in which a startup must either achieve lift-off or fail. This usually is defined as the remaining cash in the bank divided by the monthly burn rate, or net drain on that account balance.
The true measure of runway is how many pivots a startup has left: the number of opportunities it has to make a fundamental change to its business strategy. Measuring runway through the lens of pivots rather than that of time suggests another way to extend that runway: get to each pivot faster. In other words, the startup has to find ways to achieve the same amount of validated learning at lower cost or in a shorter time. All the techniques in the Lean Startup model that have been discussed so far have this as their overarching goal.
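The arithmetic is simple, and measuring runway in pivots rather than months falls out of one extra division. The dollar figures and the four-month pivot cycle below are hypothetical:

```python
def runway_months(cash_on_hand, monthly_burn):
    """Conventional runway: remaining cash divided by net monthly burn."""
    return cash_on_hand / monthly_burn

def runway_pivots(cash_on_hand, monthly_burn, months_per_pivot):
    """Runway measured in pivots: halving the time each Build-Measure-Learn
    loop takes doubles the number of strategic changes still affordable."""
    return runway_months(cash_on_hand, monthly_burn) / months_per_pivot

# Hypothetical figures: $600,000 in the bank, burning $50,000 a month,
# with each full pivot cycle taking about four months.
print(runway_months(600_000, 50_000))     # -> 12.0 months
print(runway_pivots(600_000, 50_000, 4))  # -> 3.0 pivots left
```

Note that cutting `months_per_pivot` extends the runway that matters without raising a dollar, which is exactly the lever the Lean Startup techniques pull.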
Most entrepreneurs’ biggest fear is not that their vision will prove to be wrong. More terrifying is the thought that the vision might be deemed wrong without having been given a real chance to prove itself. This fear drives much of the resistance to the minimum viable product, split testing, and other techniques to test hypotheses. Ironically, this fear drives up the risk because testing doesn’t occur until the vision is fully represented. However, by that time it is often too late to pivot because funding is running out. To avoid this fate, entrepreneurs need to face their fears and be willing to fail, often in a public way.
Remember that the rationale for building low-quality MVPs is that developing any features beyond what early adopters require is a form of waste. However, the logic of this takes you only so far. Once you have found success with early adopters, you want to sell to mainstream customers. Mainstream customers have different requirements and are much more demanding. The kind of pivot IMVU needed is called a customer segment pivot. In this pivot, the company realizes that the product it’s building solves a real problem for real customers but that they are not the customers it originally planned to serve. In other words, the product hypothesis is confirmed only partially.
Under the surface, it should have been clear that our efforts at tuning the engine were reaching diminishing returns, the classic sign of the need to pivot.
It took us far too long to make the changes necessary to fix this situation. As with all pivots, we had to get back to basics and start the innovation accounting cycle over. It felt like the company’s second founding. We had gotten really good at optimizing, tuning, and iterating, but in the process we had lost sight of the purpose of those activities: testing a clear hypothesis in the service of the company’s vision. Instead, we were chasing growth, revenue, and profits wherever we could find them.
However, investing in quality, design, and larger projects did not require that we abandon our experimental roots. On the contrary, once we realized our mistake and executed the pivot, those skills served us well. We created a sandbox for experimentation like the one described in Chapter 12 and had a cross-functional team work exclusively on this major redesign. As they built, they continuously tested their new design head to head against the old one. Initially, the new design performed worse than the old one, as is usually the case. It lacked the features and functionality of the old design and had many new mistakes as well. But the team relentlessly improved the design until, months later, it performed better. This new design laid the foundation for our future growth.
A CATALOG OF PIVOTS
Zoom-in Pivot

In this case, what previously was considered a single feature in a product becomes the whole product.
Zoom-out Pivot

In the reverse situation, sometimes a single feature is insufficient to support a whole product. In this type of pivot, what was considered the whole product becomes a single feature of a much larger product.
Customer Segment Pivot

In this pivot, the company realizes that the product it is building solves a real problem for real customers but that they are not the type of customers it originally planned to serve. In other words, the product hypothesis is partially confirmed: the product solves the right problem, but for a different customer than originally anticipated.
Customer Need Pivot

As a result of getting to know customers extremely well, it sometimes becomes clear that the problem we’re trying to solve for them is not very important. However, because of this customer intimacy, we often discover other related problems that are important and can be solved by our team.
Platform Pivot

A platform pivot refers to a change from an application to a platform or vice versa.
Business Architecture Pivot

This pivot borrows a concept from Geoffrey Moore: companies generally follow one of two business architectures, high margin/low volume or low margin/high volume, and a startup can switch between them. Some companies change from high margin, low volume by going mass market (e.g., Google’s search “appliance”); others, originally designed for the mass market, turn out to require long and expensive sales cycles and shift toward B2B rather than B2C.
Value Capture Pivot

Capturing value is an intrinsic part of the product hypothesis. Often, changes to the way a company captures value can have far-reaching consequences for the rest of the business, product, and marketing strategies.
Engine of Growth Pivot

As we’ll see in Chapter 10, there are three primary engines of growth that power startups: the viral, sticky, and paid growth models. In this type of pivot, a company changes its growth strategy to seek faster or more profitable growth. Commonly but not always, a change in the engine of growth also requires a change in the way value is captured.
Channel Pivot

A channel pivot is a recognition that the same basic solution could be delivered through a different channel with greater effectiveness. Whenever a company abandons a previously complex sales process to “sell direct” to its end users, a channel pivot is in progress.
Technology Pivot

Occasionally, a company discovers a way to achieve the same solution by using a completely different technology. Technology pivots are much more common in established businesses.
A pivot is better understood as a new strategic hypothesis that will require a new minimum viable product to test.
Those familiar with the technology life cycle ideas of theorists such as Geoffrey Moore know certain later-stage pivots by the names he has given them: the Chasm, the Tornado, the Bowling Alley. Readers of the disruptive innovation literature spearheaded by Harvard’s Clayton Christensen will be familiar with established companies that fail to pivot when they should. The critical skill for managers today is to match those theories to their present situation so that they apply the right advice at the right time.