Lean Product Management

By Mangalam Nandakumar

Overview of this book

Lean Product Management is about finding the smartest way to build an Impact Driven Product that can deliver value to customers and meet business outcomes while operating under internal and external constraints. The author, Mangalam Nandakumar, is a product management expert with over 17 years of experience in the field. Businesses today are competing to innovate. Cost is no longer the constraint; execution is. It is essential for any business to harness whatever competitive advantage it can, and it is absolutely vital to deliver the best customer experience possible. The opportunities for creating impact are there, but product managers have to improvise on their strategy every day in order to capitalize on them. This is the Agile battleground, where you need to stay Lean and be able to respond to abstract feedback from an ever-shifting market. This is where Lean Product Management will help you thrive.

Lean Product Management is an essential guide for product managers and for anyone embarking on new product development. Mangalam Nandakumar will help you align your product strategy with business outcomes and customer impact. She introduces the concept of investing in Key Business Outcomes as part of product strategy, in order to provide an objective metric for deciding which product idea and strategy to pursue. You will learn how to create impactful end-to-end product experiences by engaging stakeholders and reacting to external feedback.

Chapter 4. Plan for Success

Until now, in the Impact Driven Product development cycle, we have arrived at a shared understanding of Key Business Outcomes and feature ideas that can help us to deliver value to our customers and maximize the impact on the KBOs. The next step is for us to identify what success means to us. For the kind of impact that we predict our feature idea to have on the KBOs, how do we ensure that every aspect of our business is aligned to enable that success? We may also need to make technical trade-offs to ensure that all effort on building the product is geared toward creating a satisfying end-to-end product experience.

When individual business functions make trade-off decisions in silos, we can end up with a broken product experience, or with effort spent polishing parts of the experience where no improvement is required. For the business to align on trade-offs that may need to be made on technology, it is important to communicate not just what is possible within business constraints, but also what is not achievable. It is not necessary for the business to know or understand the specific best practices, coding practices, design patterns, and so on that product engineering may apply. However, the business does need to know the value realized, or not realized, from any investment made in terms of cost, effort, resources, and so on.

This chapter addresses the following topics:

  • The need to have a shared view of what success means for a feature idea
  • Defining the right kind of success criteria
  • Creating a shared understanding of technical success criteria

"If you want to go quickly, go alone. If you want to go far, go together. We have to go far — quickly."Al Gore

Planning for success doesn't come naturally to many of us. Come to think of it, our heroes are always the people who averted failure or pulled us out of a crisis. We perceive success as 'not failing,' but when we set clear goals, failures don't seem that important. We can learn a thing or two about planning for success by observing how babies learn to walk. The trigger for walking starts with babies getting attracted to, say, some object or person that catches their fancy. They decide to act on the trigger, focusing their full attention on the goal of reaching what caught their fancy. They stumble, fall, and hurt themselves, but they will keep going after the goal. Their goal is not about walking. Walking is a means to reaching the shiny object or the person calling to them. So, they don't really see walking without falling as a measure of success. Of course, the really smart babies know to wail their way to getting the said shiny thing without lifting a toe.

Somewhere along the way, software development seems to have forgotten about shiny objects, and instead focused on how to walk without falling. In a way, this has led to an obsession with following processes without applying them to context, and with writing perfect code, while disdaining and undervaluing supporting business practices. Although technology is a great enabler, it is not an end in itself. When applied in the context of running a business or creating social impact, technology cannot afford to operate as an isolated function. This is not to say that technologists don't care about impact. Of course, we do.

Technologists show a real passion for solving customer problems. They want their code to change lives, create impact, and add value. However, many technologists underestimate the importance of supporting business functions in delivering value. I have come across many developers who don't appreciate the value of marketing, sales, or support. In many cases, like the developer who spent a year perfecting his code without acquiring a single customer (refer to Chapter 2, Invest in Key Business Outcomes), they believe that beautiful code that solves the right problem is enough to make a business succeed. Nothing could be further from the truth.

Most of this thinking is the result of treating technology as an isolated function. A significant gap exists between nontechnical folks and software engineers. On the one hand, nontechnical folks don't understand the possibilities, costs, and limitations of software technology. On the other hand, technologists don't value the need for supporting functions and communicate very little about the possibilities and limitations of technology. This expectation mismatch often leads to unrealistic goals and a widening gap between technology teams and the supporting functions. The result is often cracks in the end-to-end product experience for the customer, and with them a loss of business. Bridging this gap requires that technical teams and business functions communicate in the same language, but first they must communicate.

What does success mean to us?

In order to set the right expectations for outcomes, we need the collective wisdom of the entire team. We need to define and agree upon what success means for each feature and to each business function. This will enable teams to set up the entire product experience for success. Setting specific, measurable, achievable, realistic, and time-bound (SMART) metrics can resolve this.


We cannot decouple our success criteria from the impact scores we arrived at earlier. So, let's refer back to the following table that we derived in Chapter 3, Identify the Solution and its Impact on Key Business Outcomes, for the ArtGalore digital art gallery:

[Table: feature ideas and their estimated impact ratings on the Key Business Outcomes for the ArtGalore digital art gallery, from Chapter 3]

The estimated impact rating was an indication of how much impact the business expected a feature idea to have on the Key Business Outcomes. If you recall, we rated this on a scale of 0 to 10. When a feature idea's estimated impact on a Key Business Outcome is less than five, the success criteria for that outcome are likely to be less ambitious. For example, the estimated impact of "existing buyers can enter a lucky draw to meet an artist of the month" toward generating revenue is zero. This means that we don't expect this feature idea to bring in any revenue for us; put another way, revenue is not a measure of success for this feature idea. If a success criterion about generating revenue does come up for this feature idea, then there is a clear mismatch in how we have prioritized the feature itself.

For any feature idea with an estimated impact of five or above, we need to get very specific about how to define and measure success. For instance, the feature idea "existing buyers can enter a lucky draw to meet an artist of the month" has an estimated impact rating of six toward engagement. This means that we expect an increase in engagement as a measure of success for this feature idea. Then, we need to define what "increase in engagement" means. My idea of "increase in engagement" can be very different from your idea of "increase in engagement." This is where being SMART about our definition of success can be useful.
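To make this rule concrete, here is a minimal Python sketch of the decision, using the lucky-draw feature's ratings from the table above. The data structures and the cutoff constant are illustrative assumptions, not a tool prescribed by this book.

    # Estimated impact (0-10) of "existing buyers can enter a lucky draw
    # to meet an artist of the month" on each Key Business Outcome.
    impact_ratings = {"revenue": 0, "engagement": 6}

    CUTOFF = 5  # below this, the KBO is not a measure of success for the idea

    for kbo, rating in impact_ratings.items():
        if rating >= CUTOFF:
            print(f"Define a SMART success metric for {kbo}.")
        else:
            print(f"{kbo} is not a measure of success for this feature idea.")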

Success metrics are akin to user story acceptance criteria. Acceptance criteria define what conditions must be fulfilled by the software in order for us to sign off on the success of the user story. Acceptance criteria usually revolve around use cases and acceptable functional flows. Similarly, success criteria for feature ideas must define what indicators can tell us that the feature is delivering the expected impact on the KBO. Acceptance criteria also sometimes deal with NFRs (nonfunctional requirements). NFRs include performance, security, and reliability.

In many instances, nonfunctional requirements are treated as independent user stories. I have also seen many teams struggle to express the need for nonfunctional requirements from a customer's perspective. In the early days of writing user stories, most of my colleagues and I tended to write NFRs from a system/application point of view. We would say, "this report must load in 20 seconds," or "in the event of a network failure, partial data must not be saved." These system-centric specifications didn't tell us how or why they were important for an end user. Writing user stories forces us to think about the user's perspective. For example, in my team we used to have interesting conversations about why a report needed to load within 20 seconds. This compelled us to think about how the user interacted with our software.

Some years ago, a friend of mine, who was working as part of a remote delivery team with little access to the real end users, narrated an interesting finding. Her team had been given a mandate to optimize report performance. One of her team members got an opportunity to travel to the location of some customers and observe how they used the software. The prime functionality of the software was to generate reports. The users would walk in at the beginning of the day, switch on their computers, launch the software, initiate a report, and step out for their morning coffee. The software took a good 15 minutes to generate a report! By the time the users had finished their coffee and come back, the reports were ready. The customers had changed their habits to suit the software! The real question was how to react to this finding. Should we fix the reports to run faster? Should we leave them as is and focus on building other valuable functionality for the customers? Should this be a decision that technology makes in isolation?

It is not uncommon for visionary founders to put out very ambitious goals for success. Ambitious goals can motivate teams to outperform. However, throwing lofty targets around without a plan for success can be counterproductive. For instance, it's rather ambitious to say, "Our newsletter must be the first to publish artworks by all the popular artists in the country," or that "Our newsletter must become the benchmark for art curation." These are really inspiring words, but they mean nothing if we don't have a plan to get there.

I've heard many eager founders tell product engineers that their product should work like Facebook or Twitter, or be as intuitive as Google. This expectation is there from the first version of the product! What do we do when the first release of a product is benchmarked against a product that was 10 years in the making and a million iterations in? This is what I meant earlier by expectation mismatch. It is important to get nontechnical stakeholders on board to meet the ambitious goals they prescribe to their teams. For instance, one of the things I do in ideation workshops is to not discount an ambitious (and impossible) goal such as the one stated earlier. I write it up as an outcome to achieve, and press stakeholders to lay out their plans for how they intend to support making it happen. For example, at the level of responsiveness, performance, intuitiveness, and feature richness they expect from the product in its first release, we would need a sufficiently large user base and source of revenue to justify the costs and effort that go into building it. What is the business team's plan to source revenue in time for the first release? How do they plan to build the user base, again in time for the first release?

Even when a product's technology is its main differentiator, other supporting business functions need to also come together in order to amplify the effectiveness of the technology. Successful products are a result of a lot of small things that come together to create impact.

The general rule of thumb for this part of product experience planning is that when we aim for an ambitious goal, we also sign up to making it happen. Defining success must be a collaborative exercise carried out by all stakeholders. This is the playing field for deciding where we can stretch our goals, and for everyone to agree on what we're signing up to, in order to set the product experience up for success.

Defining success metrics

For every feature idea we came up with in Chapter 3, Identify the Solution and its Impact on Key Business Outcomes, we can create feature cards that look like the following sample. The card captures three aspects of what success means for this feature, by asking: what are we validating? When do we validate it? Which Key Business Outcomes does it help us to validate?

[Figure: a sample feature card showing what we are validating, when we validate it, and the Key Business Outcomes it validates]
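As a minimal sketch, such a card could also be captured as a small data structure. The field names and sample values below are hypothetical, standing in for the card shown above.

    from dataclasses import dataclass

    @dataclass
    class FeatureCard:
        what_are_we_validating: str   # the success criteria
        when_do_we_validate: str      # the checkpoint for validation
        kbos_validated: list          # Key Business Outcomes it validates

    # Hypothetical values for the monthly catalog feature.
    card = FeatureCard(
        what_are_we_validating="80% of catalog sign-ups enquire about an artwork",
        when_do_we_validate="three months after the catalog launches",
        kbos_validated=["engagement"],
    )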

The success criteria state what the business anticipates as a tangible outcome from a feature, and which business functions will support, own, and drive its execution. That's it! We've nailed it, right? Wrong.

Success metrics must be SMART, but how specific is specific? The preceding success metric indicates that 80% of those who sign up for the monthly art catalog will enquire about at least one artwork. Now, 80% could mean 80 people, 800 people, or 8000 people, depending on whether we get 100 sign-ups, 1000, or 10,000, respectively!

We have defined what external (customer/market) metrics to look for, but we have not defined whether we can realistically achieve this goal, given our resources and capabilities. The questions we need to ask are: are we (as a business) equipped to handle 8000 enquiries? Do we have the expertise, resources, and people to manage this? Unless we clarify this explicitly, plan in advance, and assign ownership, each business function will make its own assumptions, and our goals can open a gap in the product experience.

When we say 80% of folks will enquire about one artwork, the sales team is thinking that around 50 people will enquire, because that is what the sales team at ArtGalore is probably equipped to handle. However, marketing is aiming for 750 people, and the developers are planning for 1000. So, even if we can attract 1000 enquiries, sales can handle only 50 enquiries a month! If this is what we're equipped for today, then building anything more could be wasteful. We need to think about how we can ramp up the sales team to handle more requests. The point of drilling into success metrics is to gauge whether we're equipped to handle our own success. So, maybe our success metric should be that we expect about 100 sign-ups in the first three months and between 40 and 70 folks enquiring about artworks after they sign up. Alternatively, we can find a smart way to enable sales to handle higher volumes.
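A quick back-of-the-envelope check, using the numbers assumed above, shows how the same 80% metric plays out against today's sales capacity.

    sign_up_scenarios = [100, 1000, 10000]
    enquiry_rate = 0.8             # the stated metric: 80% of sign-ups enquire
    sales_capacity_per_month = 50  # what ArtGalore's sales team handles today

    for sign_ups in sign_up_scenarios:
        enquiries = int(sign_ups * enquiry_rate)
        verdict = "within" if enquiries <= sales_capacity_per_month else "beyond"
        print(f"{sign_ups} sign-ups -> {enquiries} enquiries ({verdict} capacity)")

Even the smallest scenario exceeds what sales can absorb today, which is exactly the conversation this metric should trigger.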

In Chapter 3, Identify the Solution and its Impact on Key Business Outcomes, we created user story maps that addressed how internal business functions tie in to the feature idea. We don't take an outside-in view alone. We also need to define metrics for our inside-out view. This means that to chart a product experience strategy for this feature idea, we need more than just the software product specs. Before we write up success metrics, we should be asking a whole truckload of questions that determine the before-and-after of the feature.

We need to ask the following questions:

  • What will the monthly catalog showcase?
  • How many curated art items will be showcased each month?
  • What is the nature of the content that we should showcase? Just good high-quality images and text, or is there something more?
  • Who will put together the catalog?
  • How long must this person/team(s) spend to create this catalog?
  • Where will we source the art for curation?
  • Is there a specific date each month when the newsletter needs to go out?
  • Why do we think 80% of those who sign up will enquire? Is it because of the exclusive nature of art? Is it because of the quality of presentation? Is it because of the timing? What's so special about our catalog?
  • Who handles the incoming enquiries? Is there a number to call or is it via email?
  • How long would we take to respond to enquiries?
  • If we get 10,000 sign-ups and receive 8000 enquiries, are we equipped to handle these? Are these numbers too high? Can we still meet our response time if we hit those numbers?
  • Would we still be happy if we got only 50% of folks who sign up enquiring? What if it's 30%? When would we throw away the idea of the catalog?

This is where the meat of feature success starts taking shape. We need a plan to uncover underlying assumptions and set ourselves up for success. It's very easy for folks to put out ambitious metrics without understanding the before-and-after of the work involved in meeting that metric. The intent of a strategy should be to set teams up for success, not for failure.

Often, ambitious goals are set without considering whether they are realistic and achievable. This is detrimental: teams eventually resort to manipulating or misrepresenting the metrics, playing the blame game, or hiding information. Sometimes teams try to meet these metrics by deprioritizing other work. Eventually, team morale, productivity, and delivery take a hit. Ambitious goals, without the required capacity, capability, and resources to deliver, are useless.

The following is a sample success metric for the same feature, now revised to include internal operational metrics, and who owns each metric:

[Figure: a sample success metric for the monthly catalog, revised to include internal operational metrics and the owner of each metric]

In the preceding sample, there is one success metric (grayed out) that we cannot link to a desired business outcome, so it is automatically deprioritized. While this goal may be desirable to achieve, it is not something we have invested in for the current plan. For instance, the goal of putting together content in less than two days is an operational metric that has not been invested in as a Key Business Outcome. So, we can discard it from our list of metrics to validate. We can further refine this to indicate success metrics as follows:

[Figure: the refined success metrics, each linked to an invested Key Business Outcome]
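The pruning described above amounts to a simple filter: keep only the metrics that link to an invested Key Business Outcome. In this sketch, the metric wording, KBO names, and owners are illustrative assumptions.

    invested_kbos = {"engagement", "revenue"}  # outcomes the business invested in

    success_metrics = [
        {"metric": "80% of sign-ups enquire about at least one artwork",
         "kbo": "engagement", "owner": "sales and marketing"},
        {"metric": "catalog content is put together in under two days",
         "kbo": "operational efficiency", "owner": "curation team"},
    ]

    validated = [m for m in success_metrics if m["kbo"] in invested_kbos]
    # The two-day content metric drops out, as discussed above: it is not
    # linked to a Key Business Outcome we have invested in for this plan.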

Now, we have yet to decide whether this feature idea will, fully or partially, be part of a digital solution. This will be decided based on cost (time, effort, money, capabilities, and so on).

Mismatch in expectations from technology

Every business function needs to align toward the Key Business Outcomes and conform to the constraints under which the business operates. In our example here, the deadline is for the business to launch this feature idea before the Big Art show. So, meeting timelines is already a necessary measure of success.

The other indicators of product technology success could be quality, usability, response times, latency, reliability, data privacy, security, and so on. These are traditionally clubbed under NFRs (nonfunctional requirements). They are indicators of how the system has been designed or how it operates, not of user behavior. Yet there is no aspect of a product that is nonfunctional or without a bearing on business outcomes; in that sense, "nonfunctional requirements" is a misnomer. NFRs are really technical success criteria. They are also a business stakeholder's decision, based on what outcomes the business wants to pursue.

In many time and budget-bound software projects, technical success criteria trade-offs happen without understanding the business context or thinking about the end-to-end product experience.

Let's take a couple of examples: our app's performance may be okay when handling 100 users, but it could take a hit when we get to 10,000 users. By then, the business has moved on to other priorities and the product isn't ready to make the leap.

We can also think about cases where a product was always meant to be launched in many languages, but the Minimum Viable Product was designed to target users of one language only. When we want to expand to other countries, there will be significant effort involved in enabling both the product and operations to scale and adapt. Also, the effort required to scale software to one new location is not the same as the effort required to scale it to 10 new locations. This is true of operations as well, but that effort is more relatable, since it has more to do with people and process. So, the business is ready to accept the effort needed to set up scalable processes and to hire, train, and retain people. The problem is that expectations of technology are so misplaced that the business assumes the technology can scale with minimal investment and effort. The limitations of technology can then be perceived as a lack of skill or capability in the technology team.

Avoiding this perception depends on how well each team communicates the impact of doing, or not doing, something today in terms of cost tomorrow. Engineering may be able to create software that scales to 5000 users with minimal effort, but scaling to 500,000 users requires effort of a different order of magnitude. The frame of reference can be vastly skewed here. In the following figure, the increase in the number of users correlates with an increase in costs:

[Figure: cost rising as the number of users increases]

Let's consider a technology that is still in the realm of research, such as artificial intelligence, image recognition, or face recognition. With market-ready technology (where technology viability has been proven and can be applied to business use cases) in these domains, it may be possible to get to 50% accuracy in image matching with some effort. Going from 50% to 80% would require about as much effort again as it took to get to 50%. However, going from 80% to 90% accuracy would be far more complicated, with a significant increase in costs and effort. Every 1% increase after 90% would be herculean, or near impossible, given where the technology in that field currently stands. For instance, the number of variations in image quality that need to be considered could be a factor: the amount of blur, image compression quality, brightness, missing pixels, and so on can all impact the accuracy of results (https://arxiv.org/pdf/1710.01494.pdf). Face recognition from video footage brings in even more dimensions of complexity. The following figure is only for illustrative purposes and is not based on actual data:

[Figure: illustrative curve of effort and cost rising steeply as accuracy increases]
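Purely to illustrate the shape of that curve (like the figure, these numbers are invented, not actual data), the incremental effort might look something like this:

    # Hypothetical cumulative effort units to reach each accuracy level.
    # Only the steepening trend matters, not the absolute values.
    effort_to_reach = {50: 1, 80: 2, 90: 6, 95: 20, 99: 100}

    previous = 0
    for accuracy, effort in effort_to_reach.items():
        print(f"reaching {accuracy}% accuracy costs {effort - previous} more units")
        previous = effort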

Now, getting our heads around something like this is going to be hard. We tend to reason by analogy with a simple application, but it's hard to make an apples-to-apples comparison of the effort involved in creating software. The potential of software is in the possibilities it can create, but that's also a bane: with the bar set so high, anything that lowers expectations can be interpreted as, "Maybe you're not working hard enough at this!"

Sometimes the technology isn't ready for the business case, or it is the wrong technology for the use case, or we shouldn't even be building this when there are products that can do it for us. Facial recognition with 50% accuracy may suit a noncritical use case, but when applied to identifying criminals or missing people, the accuracy needs are higher. In an online ads start-up that was built to show ads based on the images in the website content a user was browsing, the context of the images was also important. The algorithm to show ads based on celebrity images worked with an accuracy that was acceptable to the business. The problem was that in some cases the news item was related to a tragedy involving a celebrity, or an event where a celebrity was caught up in a scandal. Showing ads without that context could damage the brand's image, a real threat for an ads start-up looking to win new business. With limited resources and a highly skewed technology cost-to-viability ratio, whether an investment in technology is worth the value of the business outcomes remains a business decision. This is why I'm making the case that outcomes need to justify technical success criteria.

Building solutions for short-term benefits calls for a different approach than building systems for long-term benefits. It is not possible to generalize and claim that just because we build an application quickly, it will be full of defects or insecure. Conversely, just because we build a lot of robustness into an application does not mean the product will sell better. There is a cost to building something, a cost to not building something, and a cost to rework. The cost is justified by the benefits we can reap, but it is important for product technology and business stakeholders to align on what the end-to-end product experience loses or gains because of the technical approach we take today.

In order to arrive at these decisions, the business does not really need to understand design patterns, coding practices, or nuanced technology details. It needs to know whether the desired business outcomes are viable. This viability is based on technology possibilities, constraints, effort, skills needed, resources (hardware and software), time, and other prerequisites. What we can expect and what we cannot expect must both be agreed upon. In every scope-related discussion, I have seen better insights and conversations when we highlight what the business or customer does not get from a product release. When we only highlight the value they will get, the discussions tend to drift toward improving on that value. When the business realizes what it doesn't get, the discussions lean toward improving the end-to-end product experience.

Should a business care that we wrote unit tests? Does the business care what design patterns we used or what language or software we used? We can have general guidelines for healthy and effective ways to follow best practices within our lines of work, but best practices don't define us, outcomes do.

Cost of technical trade-offs

In the nonprofit where I was leading the product team, we launched a self-service kiosk that approved loans for people from rural India after they cleared an assessment on basic financial concepts, which was also offered through the kiosk. The solution involved many facets of complexity. It had to be multilingual (India has 22 official languages, and an even greater number of dialects) and work on low internet bandwidth (while delivering literacy education videos and assessments). Many of the target users were illiterate or semiliterate and had not actively used touchscreens.

In addition, we had to ensure that we could remotely monitor, maintain, and support our kiosk software, since we had no people or budget for travel. We also had to worry about security and our devices being tampered with, and the devices had to be installed in buildings without climate control. We used Aadhaar biometric authentication for our users, and there were fingerprint scanners, thermal printers, and iris scanners, along with an Android tablet that served as the kiosk. On top of this, we employed machine learning to approve the loans.

With so many moving parts, we had to prioritize our product launch. If we had taken a call on this from an isolated technology perspective, we would have called out a minimum viable product with support for one language, manual loan approvals, targeting Tier II cities with better internet, and so on. However, the business context was that the nonprofit was trying to change the ecosystem of micro and peer-to-peer financing in a market that was being grossly neglected or abused by mainstream players (https://www.rangde.org/swabhimaan). The success of the solution lay in how the rural folks adopted the self-service model, and in how the nonprofit could get a foothold in areas where mainstream players weren't willing to venture. Our Impact Driven Product included all of the technical success criteria stated earlier.

We mercilessly cut down on functional flows, simplified our designs without remorse, and put a lot of effort into understanding and learning about our target users. The product had support for multiple languages, remote monitoring and maintenance, hardware that could secure our devices, software that worked on low internet bandwidth, a user interface that included audio prompts in multiple languages, and a machine learning algorithm that focused on finding reasons to approve a loan rather than reasons to reject it. We built all this in four months and launched it in three rural villages in different parts of India.

This would not have been possible if we had not looked at the end-to-end experience, including operations, recording audio messages, finding hardware and device partners and advisors, and ensuring every moving part moved toward the same Key Business Outcomes: adoption and sustainable operations.

Success metrics discussions are the best forums for adding value, not just by communicating what's possible but also by bringing out the opportunity cost of not building something. Product engineering needs to own the 'how' of the product. In some cases, this means taking a longer-term view on core technology foundations. There isn't a real choice between building fast and building right; sometimes, we need to do both simultaneously.

We should stop viewing engineering as an isolated function that does what it's told to do. Today, implementation decisions are either being forced down by a business that doesn't understand tech possibilities or those decisions are being made in isolation by technology without understanding business context. We should also stop getting fixated on coding practices and processes or lamenting about being unable to refactor code. If quick-and-dirty code can amply meet business outcomes, then there is no reason for us to fix it. Similarly, if the core of a technology-driven product needs a lot more attention, then technologists should find the best way to meet business outcomes with the least wasteful effort. At the same time, they should be able to own, steer, and set the direction for the business outcomes through the most valuable interventions.

Defining technical success criteria

So, in our art marketplace example, we can think of a couple of metrics that can be owned by product technology: for instance, ease of sign-up, or a mobile-first experience.

[Figure: sample technical success criteria owned by product technology]
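A sketch of how such technology-owned criteria might be written down follows; the specific thresholds are invented purely for illustration.

    # Hypothetical technical success criteria, each tied to a KBO and an owner.
    technical_success_criteria = [
        {"criterion": "a new user completes sign-up in under two minutes",
         "owner": "product engineering", "kbo": "engagement"},
        {"criterion": "catalog pages are usable on a mid-range phone over 3G",
         "owner": "product engineering", "kbo": "engagement"},
    ]

    for c in technical_success_criteria:
        print(f"{c['criterion']} (owner: {c['owner']}, KBO: {c['kbo']})")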

Summary

In this chapter, we learned that before commencing development of any feature idea, there must be a consensus on what outcomes we are seeking to achieve. The success metrics should be our guideline for finding the smartest way to implement a feature. The conversations at the stage of defining success metrics should build a shared understanding of what success means, how we see all the parts coming together to meet the same Key Business Outcomes, and what our limitations and possibilities are. This is true not just of technical success criteria, but of every business function.

In the next chapter, we will figure out the smartest way to meet success metrics.