Why Standups are Useless and How to Run Great Product Team Meetings

The majority of meetings are a waste of time. And in my opinion, one flavor of meeting that tops the charts in uselessness is the “status update” meeting. You know this meeting— the one where everyone gets together to share what they’ve been doing. It’s ironic that meetings like this exist, because they get in the way of people actually doing something productive. A cross-functional group of people (product, design, engineering, marketing, and so on) working on a new product doesn’t greatly benefit from status updates.

Source: https://twitter.com/bradbarrow_/status/466068637267148800?lang=en

“Standups” Won’t Save You

I don’t claim to know the perfect recipe for a cross-functional product team meeting, yet I can say with a stiffened spine that the worst breed of status update meeting is the “standup”, which has been popularized in agile methodology. I’m not the first to say this— others have made similar observations and offered comical opinions.

Most standups occur daily and some teams reduce the frequency to a few times a week. The majority of standups are short, yet often meander into casual conversation where people talk about things unrelated to work, like what they did over the weekend. None of this helps you build quality products at a fast pace. We’ve become so enamored with standups that popular tools like Slack even have a standup feature built into them.

But my stance on standups is the polar opposite of the trend. The only time a standup is truly beneficial is in the run-up to a product launch. That’s when your troops are about to storm the beach and you want to make sure they are prepared. Are we ready to flip the switch on the A/B test? Is marketing ready to publish the blog post and push out the PR pieces? Does customer service have the scripts they need to answer customer questions that will come in right after launch? Do a standup when nearing launch to make sure the troops are ready to sack Normandy. That’s when it’s useful.

At any point prior to a launch, standups are mostly a waste of time. That’s because the bulk of the product development process centers around (1) building and (2) making decisions. Point #1 is obvious (of course the team is busy creating stuff), but point #2 deserves a discussion, since that’s the meat and potatoes of the subject.

Making Decisions

Hundreds of decisions need to be made during the development of a new product, like:

  • What’s making the cut for the MVP?
  • Who is the lead designer going to be?
  • How will we promote/merchandise this new feature?
  • Will we be A/B testing it? 
  • What are our goals and key metrics?

Those are the obvious big decisions that nearly every product team needs to make. But there is also a long list of mid-sized and bite-sized decisions that need to be made along the way, like:

  • Do we need to create a new UI component or can we use the one from our existing component library?
  • Is feature ‘X’ worth building since it balloons backend scope by 3 weeks?
  • Will this be a 50/50 A/B test or should we do a simple holdout group of 95/5?
  • Are we shipping on all three platforms in parallel across web, iOS, and Android?
  • Can we cut scope on Android since it’s only 5% of our users?

For anyone who’s built a product of reasonable scope, we all know that the set of decisions to make balloons as the project kicks off. It’s only then that you start to peel back the onion and realize the complexity of the task at hand, in addition to the set of decisions that must be made in order to unblock the project and keep it moving.

The universe of decisions to make begins quite large. As the product team progresses through those decisions, the hard (i.e. big) decisions tend to be made earlier in the project (e.g. “What’s making it into the MVP?”) and the mid-sized and smaller decisions tend to come later in the game (e.g. “Can we simplify UI component ‘X’ to reduce the backend scope by a few days?”). The total number of decisions to make also approaches zero as a team gets closer to launch.

(Lots of big decisions to make early on with fewer and smaller decisions to make later)

But let’s recall the point I made a few seconds ago— the team is unblocked when decisions are made. “Unblock” is the magic word here.

Unanswered questions act as headwinds or speed bumps when building products. Using the classic example of MVP cutoff, a team must decide on which features will be included or excluded from an MVP. That’s a hard decision to make so product teams typically move slowly through this phase of development. This is one of the driving forces behind the creation of the Google Design Sprint methodology. As the methodology explains, you can use their design sprint method to “shortcut the endless debate cycle” to arrive at a few key decisions regarding what the first product prototype will be. Each unmade decision (i.e. each unanswered question) pumps the brakes on the development process as a whole. Protracted decision-making leads to protracted product development cycles, often by weeks or months (indicated by the red bars below). The Google Design Sprint method is one great example of how a team can make important decisions more quickly, which ushers them through the process of arriving at their first testable prototype.

(Unanswered questions lead to delays in development, indicated by the red bars)

More on Standups

Before I share with you the meeting format that I’ve learned is most effective for product teams, I’ll harp a bit more on the ubiquitously popular “standup” meeting. 

Let’s take a closer look at the prescribed format of a standup per agile methodology. Members of a standup (i.e. everyone on the team) are asked to share the following:

  1. What did I work on yesterday?
  2. What am I working on today?
  3. What issues are blocking me?

There are several big issues with this format. 

First, why should I care about what everyone worked on yesterday? Most of the time, you don’t care, and you shouldn’t care what others work on day-to-day. The practical reality is that you only care about what another team member worked on yesterday if it enables you to do your job (i.e. it unblocks you). Anything shared that does not explicitly unblock you is meaningless noise.

For example, a product manager may share something like, “Yesterday, I interviewed a design candidate, worked on some specs, and interviewed one customer. Today, I need to interview one more design candidate and then I’ll update some of the user stories in the spec based on the questions I received from design. I’m not currently blocked on anything.” This example of a typical standup update is useless in terms of what it does to enable a product team to build a higher quality product at a faster pace.

Similarly, why should anyone care about what I’m working on today? Again, it only matters if what one person is doing today unblocks another person on the team. But if the blocker is really that important, why wait until the following daily standup to make it known? Why not email the relevant person or swing by their desk as soon as you run up against the roadblock, and seek to remediate the issue as it arises?

Now, let’s dissect the third bullet point, which is to share what you’re blocked on. A team member may be blocked for one of two reasons. The first is that the task they are blocked on simply takes a lot of time. An example would be backend engineering waiting for more clarity on frontend specs before they can finalize the engineering design docs for what they must implement. The second reason is that a decision hasn’t yet been made. It’s less useful to say “I’m blocked by X” than it is to say “Hey, why don’t we get together and make a decision on X, so that we can move forward?”

Standups don’t carve out time for decision-making, which is the ultimate blocker. Rather, standups are designed to simply make the blocker known, yet not resolve the blocker. Resolution matters more than awareness.

A Simple Agenda to Keep Things Moving

I’d like to propose a simpler alternative that leads to more productive product meetings and improved product development cycles. It’s something I learned through trial and error in my career. I’ve found it to be the simplest, most effective meeting format for any product team (and for most meetings in general). Here it is:

  1. Action items: who is handling what key action item and by what date?
  2. What decisions need to be made?

I would run this meeting once or twice a week depending on the volume of decisions to be made, and sometimes as many as three times a week. Early in the development of a product, when a team needs to make many decisions, I increase the frequency of these meetings. As development progresses, the volume of decisions to make decreases and the proportion of time spent on building increases. Similarly, the magnitude of the decisions (i.e. how hard or important they are) tends to decrease as development progresses.

If the team is effective at surfacing key decisions and in driving towards decisions expeditiously, then you can cut out weeks or months of unnecessary delays in the product development lifecycle. So, how do you make sure this is done?

Collecting Open Questions

To prepare for running effective product team meetings (i.e. decision-making meetings), I would go to each function lead on the product team (design, engineering, marketing, etc.) and ask them what decisions they needed made. I would collect the open questions and pull them into the agenda. It’s a simple enough task that you can ask people as you swing by their desks or ping them over email or Slack. In parallel, I would maintain a set of action items that came up during the decision-making meetings. From the set of open questions and action items I would compose an agenda that looked like the following:

Action Items

  1. (Josh) Share latest iOS designs with Tammy by 9/13
  2. (Deanna) Send marketing language for review to compliance by 9/15
  3. (Andy) Sync with data science on how to configure the A/B test buckets by 9/15
  4. Etc.

Decisions to Make

  1. Should we include the 3-5 days’ worth of UI design polish as part of the MVP launch or not?
  2. Should we ask executive staff for support to get one more backend engineer or is there no parallel processing that can be done with extra resources to bring in the launch date?
  3. Who is going to take the lead on starting customer development for the next milestone on our roadmap? Should we even start that yet or punt it by a week or two?
  4. Etc.

If I were leading the meeting, I would run through the set of action items to make sure we continue to execute on the critical tasks each of us signed up for and hold ourselves accountable to hitting our deadlines. Any new action items that came up during that portion of the discussion would then be added to the list in real time. We would normally move through the action items portion of the agenda in less than 10 minutes.

The remainder of the meeting (which we normally reserved 60 minutes for) would then be spent on making decisions as a group. In many cases, a side conversation had already happened between a few relevant members of the team (e.g. design chatting with engineering about a particular aspect of the designs) and they could walk the team through the context of the open question and their recommended answer. In those cases, decisions were typically made in a matter of seconds or a few minutes. In a minority of cases (I would estimate about 30% of the time), the decision required ample discussion and might be too complex for a group setting and/or the time allotted.

In those cases, I would ask 2-3 of the most capable and relevant members of the team to form a quick working group either later that day or the following day to discuss the item and come up with a recommended decision. Their recommendation was then shared with the rest of the product team either over email, Slack, or in person at the next product team meeting. We effectively had multiple concurrent decision-making meetings going on in parallel, constantly driving towards reducing uncertainty and maintaining momentum. 

(Use team meetings and small breakout discussions to quickly eliminate unanswered questions)

The more we had these meetings, the more effective we became at making decisions as a team, or in forming the breakout group to drive towards a decision and then close the loop with the rest of the team. 

Decision Log

Something else to consider adopting is a decision log as part of this meeting format. I suggest using a single Google doc for maintaining a full record of all prior meeting agendas, as well as prior decisions made. That comes in handy when a post-mortem is run after the product has been launched to assess what the team did well or could have improved upon. Often, the full context of prior decisions made is lost, especially if those decisions are made in isolation by a few people and/or made several weeks or months in the past. 

Maintaining a record of all prior decisions makes it very easy to reflect on the project during a blameless post-mortem and to identify the root cause of issues that eventually come up. Or, better yet, the log of all prior decisions may help the team identify the root cause behind a failed product launch. To make things convenient, I’ve created a copy of the agenda format and decision log that I used with my product teams. It’s publicly available for you to copy and use yourself. It’s simple, but I thought I’d share it nonetheless.

Wrap Up

The format for standups popularized in agile methodology, unfortunately, isn’t well calibrated for driving efficient product development within a team. The root cause of long development cycles is that decisions aren’t made quickly and frequently. By replacing daily standups with less frequent decision-making meetings, product teams can save themselves lots of wasted time and build products much more quickly.

Part 1: A Single-Minded Perspective on Growth

“Our industry does not respect tradition— it only respects innovation.”

That’s what Satya Nadella wrote in his opening email to the company shortly after becoming Microsoft’s new CEO. It was a clear call to arms that Microsoft needed to reignite innovation in order to scale the company after roughly 15 years of stagnation. The price of Microsoft’s stock has increased ~3x since he took over because the market seems pleased with Microsoft’s sharpened focus, progress made in the cloud business, and willingness to change how it used to do things in order to compete in the future. Some of this could be window dressing or marketing speak, but the changes happening at Microsoft seem genuine.

Satya said nothing about doubling down on what’s already working in order to squeeze more juice out of the orange. Rather, he ended the email by emphasizing the need for clarity of focus on new innovations and on changing a culture which, for the most part, had been focused on preserving the status quo for over a decade. It’s not unheard of for a large company to forget how to innovate.

I haven’t spent enough time at companies with 1,000+ employees to speak deeply about the dynamics of large company stagnation, but I can speak to it happening at early-stage startups. In particular, I find it interesting that the same two problems Satya outlined for Microsoft often appear within early stage startups as well: i.e. the culture becomes comfortable with the status quo and the company loses its ability to innovate.

How does it happen? When a startup becomes obsessed with and designed around data and optimization. Today, every 50 – 100+ person startup has multiple business intelligence tools, off-the-shelf A/B testing tools, a data science team, and product managers who know much more about writing SQL than they do about interviewing customers.

In fact, I kept score while interviewing PM candidates in 2017. I spoke with 67 product managers. About 50 of them were reasonably proficient in SQL and could write a few queries on the spot. Guess how many knew how to conduct customer development? Three. That’s it. Only three product managers could proficiently describe the purpose, process, and outcomes from customer development. 75% could write SQL, but only 4% knew how to properly interview a customer. It’s a small sample size, but the gap is large.

Here’s why that’s bad: Most startups, just like large companies, need to go through continuous phases of innovation in order to create 2x+ step changes in the potential for their business. The process of going from 0 to 1 with their first product is an innovation. It’s what allows the company to get off the ground. Sometimes, that original innovation is enough to carry them from seed to IPO. But that is incredibly rare. What’s more common is that startups need to innovate several times over in order to create step changes that help them scale from early stage to growth stage and from growth stage to a publicly traded company.

Over the last 10 years, the broad availability of data has driven a massive overcorrection in the direction of optimization. As a result, I find that most PMs are incapable of effectively deriving insights from customer conversations and most startups are incapable of producing new product innovations beyond the initial product they take to market. They’re great at A/B testing, but not great at creating new features based on customer insights and a leap of faith.

To put it plainly, growing through data analysis and A/B testing isn’t the only path to future growth. While it seems obvious, I see very few startups designed for innovation, which may be the biggest driver of new growth for your business. Do you think Facebook would be at its current scale without innovations like News Feed? Community-driven translations to expand globally? Or the developer platform? The answer is obviously “no”. Take a look at MAU acceleration beginning in 2007/2008. That coincides with the launch of the international translations app, which allowed Facebook users to crowdsource the translation of the product. It took several months to build and a few years of ongoing maintenance and development to mature the product. That innovation led to a boom in active user growth.

The point I’m making is that today’s startups very quickly fall into the optimization trap where they think future growth will largely come from optimizing their existing product. The better approach is finding the right balance between optimization and innovation since both methods can produce future growth.

By the time you’re done with this series of blog posts, you’ll have the knowledge and tools you need to do the following:

  1. Design a company-wide org chart that creates an explicit balance between optimization efforts and innovation efforts
  2. Wisely select the “right” types of experiments to run to increase your chances of improving growth through optimization
  3. Implement a repeatable product development process for creating new, innovative features

Optimization Versus Innovation

We should first start with a more detailed explanation of the difference between optimization and innovation. Optimization is when a startup iterates on its existing products or services to squeeze more juice out of the orange. Typically, the results of optimization are incremental in nature.

If they are incremental in nature, then why do them? Well, because many small optimizations can accrue into large long-term results when you allow those optimizations to compound.

Here’s a simple example. In the below graph, I compare the 12 month growth in monthly active users (MAUs) in 4 hypothetical cases. The blue line is the base case where the monthly growth rate is slowly declining, leading to flattening growth. The red line is for sustained 10% month-over-month growth (MoM), yellow is sustained 12% MoM, and green is sustained 14% MoM. If a startup can optimize its way towards a slightly higher and sustained rate of growth, the compounded outcome is very different relative to the base case. In fact, this is what we did in 2009 at Facebook. Our growth team focused on optimizing our way towards a sustained 2% week-over-week growth rate because we knew that we would grow from ~100 million MAUs to ~300 million MAUs in 12 months if we did so. This happened to be the company-wide goal for that year.
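If you want to see the compounding math for yourself, below is a minimal Python sketch (my own illustration with arbitrary starting numbers, not something from the original analysis) that projects MAUs under each sustained growth rate, including the 2% week-over-week goal from the Facebook example:

```python
# A minimal sketch comparing 12 months of MAU growth under the
# hypothetical sustained month-over-month (MoM) rates discussed above.
def project_maus(start_maus: float, monthly_rate: float, months: int = 12) -> float:
    """Compound a constant MoM growth rate over the given number of months."""
    return start_maus * (1 + monthly_rate) ** months

start = 100  # starting MAUs, in arbitrary units
for rate in (0.10, 0.12, 0.14):
    print(f"{rate:.0%} MoM -> {project_maus(start, rate):.0f} MAUs after 12 months")

# The Facebook example: a sustained 2% week-over-week growth rate compounds
# to roughly 2.8x over 52 weeks, i.e. ~100M MAUs growing to roughly 280-300M.
print(f"2% WoW -> {100e6 * 1.02 ** 52 / 1e6:.0f}M MAUs after 52 weeks")
```

A few extra points of sustained monthly growth are the difference between a ~3x year and a ~5x year, which is why compounding makes small, durable optimizations worthwhile.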

Innovation is when a company embarks on building entirely new products or services for existing customers or for a new segment of customers. Innovation can also involve expanding into an entirely new business line. However, this happens so rarely (hello, Amazon!) that I won’t focus on this definition for the time being. Additionally, innovation can create step-change improvements in the trajectory of the company, although such innovations are much more difficult to discover and successfully execute on.

I’ve taken the same scenario above, but added a 5th option, which is labeled as “with innovation” in the below graph. It takes the base growth rate scenario and applies a 2x multiplier to growth midway through the year (e.g. you build a new feature, such as Facebook’s News Feed, and it leads to a step change in monthly active usage). This assumes no optimizations along the way.

The point isn’t that you should pick one approach to growth over the other. Rather, the ideal outcome (and most realistic) is a healthy combination of both optimization and innovation. In the below scenario, I assumed that a segment of the company is working on optimizing the existing products and services to sustain 10% MoM growth and another segment is working on new product innovation that leads to a 50% bump in MAUs midway through the year. This scenario is plotted as a black dashed line on the graph.
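Here’s a small extension of the earlier sketch (again with hypothetical numbers of my own) that models the blended scenario: steady 10% MoM growth from optimization plus a one-time 50% MAU bump from an innovation landing mid-year:

```python
# A sketch of the blended scenario: compounding optimization gains plus a
# one-time step change from a successful product innovation.
def blended_maus(start: float, monthly_rate: float, bump: float,
                 bump_month: int, months: int = 12) -> list[float]:
    maus, trajectory = start, []
    for month in range(1, months + 1):
        maus *= 1 + monthly_rate   # optimization: steady compounding growth
        if month == bump_month:
            maus *= 1 + bump       # innovation: one-time step change
        trajectory.append(maus)
    return trajectory

# 100 starting MAUs, 10% MoM growth, 50% innovation bump in month 6
print([round(m) for m in blended_maus(100, 0.10, 0.50, bump_month=6)])
```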

Picking a Path

The appropriate question to ask is, “For my company, should I be innovating or optimizing?”

For Seed and Series A startups, the practical reality is that you are headcount-constrained into picking one over the other because you’ll have fewer than 20 employees. Prior to establishing product market fit, you’ll be entirely focused on innovation because you’ve yet to figure out the new technology that delivers something better, faster, cheaper, and more convenient relative to the alternatives in the market. Consequently, you’ll have very little growth and few customers to optimize on top of, so don’t waste your time optimizing if you don’t already have exponential organic growth.

As a company matures to the point of Series B and beyond (sometimes with a large Series A), it can hire enough people to contemplate doing more than one thing at a time. In my experience, that’s the point at which a consumer software company has 30 or more employees. On average, about half of the employees will be engineers, so that means you’ll have 15 people who can do the building. With 15 people doing the building you can divide them amongst 3-4 teams— e.g. 2 product teams, an infrastructure team, and a floating pool of engineers needed for miscellaneous tasks and on-call work.

When a company reaches 100 employees it can certainly multi-task. Its 50 engineers can be subdivided amongst 2-3 well-staffed product teams, 2-3 infrastructure teams, and still be able to manage on-call support and miscellaneous tasks.

Stocks and Bonds

Assuming a company is able to reach the scale of 30+ employees and is now capable of walking and chewing gum at the same time, the question becomes, “How do you allocate those people in terms of optimization versus innovation?” I like to use investing analogies when thinking through this decision.

Source: https://moneyinc.com/differences-between-stocks-and-bonds/

Most investors should have an investment portfolio that maximizes their returns given the amount of risk that is appropriate for them to take (this concept is known as Modern Portfolio Theory). Put in simple terms, it stipulates that you’ll want a diversified portfolio composed of a mix of higher risk, higher return investments (e.g. stocks) and lower risk, lower return investments (e.g. bonds). Depending on the level of risk you can afford to take, you’ll want to shift the allocation towards certain investments and away from others. For example, if I’m 70 and ready to retire, I should be taking very little risk and will want a portfolio weighted heavily towards low risk, low return investments (bonds). If I’m 30 and putting money into a retirement account that I’ll use 30 to 40 years from now, then I should be taking on more risk to generate more returns during that long time horizon (i.e. more stocks).

I hope you are starting to see how this investing analogy applies to your startup thinking. Innovation is your stocks and optimization is your bonds. The question to ask is, “What proportion of my company’s focus should be on optimization versus innovation?”

If you’re building a seed-stage startup, then you’ll be solely focused on innovation (all stocks and no bonds) because you’re trying to build something new and innovative that finds product market fit. If you’re working on a Series A or Series B startup with clear indicators of product market fit (i.e. exponential organic growth), then you should be considering the trade-off between optimization and innovation.

Facebook is a good example of optimization and innovation at play. While I was at the company (2008-2010), we did a bit of both. The Growth Team was focused predominantly on optimization by improving sign up conversion rates, new user onboarding, reactivated user onboarding, getting people to add more friends, and a vast library of miscellaneous A/B tests for the sake of getting more users. Meanwhile, several of the core product teams were pushing out big innovations like the first smartphone app, various News Feed innovations, large enhancements to photos, and the developer platform.

In the next part in this series I’ll discuss how you can design an org chart and product teams that create an explicit balance between optimization and innovation. If you’re ready for that, go ahead and jump right in. And for broader context, here’s a list of all four parts in this series.

Part 2: Designing an Organization that Can Optimize and Innovate

There’s a powerful concept known as “shipping the org chart”. It was brilliantly outlined by Steven Sinofsky in his piece on Functional vs Unit Organizations. The TL;DR is that the design of your org makes its way into your product. In other words, your product is significantly influenced by the nature of the organization you’ve designed within your company.

Here’s an example from an org chart I recently reviewed with a Series A (soon to be Series B) startup currently scaling from 15 employees to about 45.

It’s a fairly straightforward org design. The ops team is focused on optimizing the field operations folks to scale their service at lower cost. The eng team is building out and scaling underlying services and products to support 10x growth in the number of customers. There are two product teams. The first is the LTV team, which is focused on increasing revenue per user. The second— the growth team— is focused on improving all of the important conversion rates, such as sign up rate, new user onboarding, and so on. Lastly, the marketing team is focused on acquiring more customers.

That all seems reasonable— but there’s one catch.

I asked the founders of this Series A company, “Who is focused on delivering more value to the customer?” To which I received a blank stare, followed by a bit of head scratching, and then a final, “Uhhhh … well…good question!”

The problem with an org chart like the one above is that it’s almost exclusively aligned with producing value for the business— so much so that very little attention is being given to satisfying the needs of the customer. Here’s where things get really tricky—it also pushes the company deep into optimization territory. To be specific, it’s the design of the product teams (those highlighted in green) that is most worrisome. I’ll elaborate more on this in the next section.

Optimizers Purgatory

Imagine you have 1 junior/mid experience product manager, 1 junior/mid experience designer, and 2-3 engineers— each with a few years of experience. That’s a fairly common atomic unit of a product team within a startup. This small team now refers to themselves as the “LTV team” with an understanding that their primary metric is to improve revenue per customer. The next step for them is creating a roadmap, which they begin to do through the lens of increasing revenue per user to maximize LTV for the business.

The very first project that the team puts on their roadmap is to A/B test the pricing tiers for their subscription business. Another item on their roadmap is to A/B test variations of the subscription cancellation flow with alternative messaging and discount offers in an attempt to convince customers to not cancel their subscription. Following that, the team has fleshed out a portion of their roadmap for testing new email, in-product, and push notifications to encourage freemium users to upgrade to one of the paid tiers. Again, these are all reasonable projects to work on. The issue is that they are all focused on incremental optimizations for the benefit of the business and don’t add any additional value to the user. This is the slippery slope I alluded to a few paragraphs ago.

Fast forward 12 months and the LTV team is still busy running A/B tests, looking at funnel data, and squeezing out 5% – 10% wins via the occasional successful experiment. Meanwhile, they haven’t shipped any new, innovative products or features that deliver substantial value to the customer (which, in turn, can also increase LTV for the business!). While exercising their data analysis and A/B testing muscles, their customer development and new product development skills have atrophied.

Jump ahead another 6-12 months and this team of highly skilled optimizers is scratching their head because the company is lagging its growth goals. They’ve continued to hire PMs whose strength is in running SQL queries and designing experiments. They’re finding the occasional 5% – 10% win, but they’re starting to get the sense that they’ve scraped the bottom of the barrel because it’s becoming increasingly hard to find a positive experiment. Meanwhile, one of their competitors is scaling more quickly, compelling them to want to run even more experiments because they’re questioning if they just haven’t run the right A/B tests yet. Anecdotally, many employees at the company notice that the stream of customer love they receive on social media has slowed down. They observe a noticeable decline in feature requests and praise from their existing customers in Zendesk as well.

Meanwhile, the Growth team has been busy doing much of the same. They’ve been running experiments, building innumerable data dashboards, and commiserating with the startup’s lone data scientist as to why growth is below plan and becoming increasingly dire, despite having run dozens or hundreds of A/B tests over the last two years. Several of the tests were successful, but what gives? Why does growth suck relative to their expectations?

The product teams and company have entered what I like to call “optimizers purgatory.” They’re in a strange middle ground between succeeding with plenty of data and A/B testing abilities, minus a single meaningful innovation to the user experience in the last year or two. This sounds like an extreme hypothetical, but it’s incredibly common. I’ve personally been there and worked with dozens of other startups that have encountered optimizers purgatory as well.

What can be done? The company could have considered an alternative to the org chart that struck a better balance between having some focus on optimizing for business value and some on innovation for customer satisfaction. This may in turn create business value far greater than the value that comes from solely optimizing for business metrics. Below is an example alternative that swaps the LTV Team for a Client Value Team. This new team’s primary metric is customer satisfaction score— e.g. the percent of customers “very satisfied” with their experience.

Take the same atomic unit of a team (1 PM, 1 designer, a few engineers) and you’ll find their roadmap is wildly different from the LTV Team’s roadmap. This difference is simply because their team name implies creating new value for the customer and their primary metric requires that they increase customer satisfaction. Recall that the LTV Team had a roadmap full of A/B tests focused on optimizing business metrics. The Client Value Team’s roadmap is more likely to contain a list of new, high-value features that customers have been asking for and new, innovative value that customers weren’t expecting to receive, but will be delighted with.

In contrast to the LTV Team, the Client Value Team will develop their customer development and product development muscles. They’ll have well-defined customer research and design research methods. They’ll likely also develop a closer relationship with the customer service employees within the company, leading to regular meetings with the head of customer service where they review the latest Zendesk customer requests. They’ll have fewer data dashboards and won’t be able to speak as eloquently about the parts of the product that are well optimized, but they will be able to speak about which customer complaints have tapered off and which new customer requests have bubbled to the surface.

The LTV Team and the Client Value Team have become two very distinct organisms, simply because of the name of the team and the type of metric chosen— i.e. a customer success metric versus a business success metric. This is the notion of “shipping the org chart” at play and it’s an essential concept to understand when thinking about designing an organization with the intent to grow the business.

Creating a Balanced Organization

When working with founders on creating an org chart that adequately balances growth from optimization and innovation, I give them the following exercise:

Step 1: Concisely describe your mission and vision for the next 2-3 years

Step 2: List the 2-3 things that must be true for your customer to realize that vision

Step 3: Design an org with product teams that map to the 2-3 truths for your customer

Step 4: Revise and edit until satisfied with the results

Here’s a practical example from Wealthfront, where I was most recently the President:

Step 1: Wealthfront’s mission is to provide everyone access to sophisticated financial services with the vision that our customers would use Wealthfront to exclusively manage all of their finances.

Step 2: In order for that mission and vision to be true, our clients would need to (1) create a free financial plan that captures their needs and wants; (2) have a superior set of banking products relative to what they could get at large banks; (3) have world-class investment management that’s typically only available to the ultra wealthy.

Step 3: We set out to design the primitives of a product organization that reflected Steps 1 and 2 above. It looked something like this:

We came up with an Onboarding Team that would digitize many of the financial processes traditionally handled over the phone or via paperwork. By digitizing these experiences, we could ensure “everyone gets access”, per our mission statement. The Onboarding Team’s primary metric was customer satisfaction. For this metric, they measured the percent of users that were very satisfied with various parts of the onboarding experience. We made the leap of faith that if the customer was more satisfied with the experience, they would trust us with more of their money (a hypothesis our data science team later proved to be true). That ensured we took a very customer-centric approach to innovating on the onboarding experience.

Secondly, we created a Financial Planning team to build out a whole new suite of products, so that our clients could get more value out of Wealthfront beyond just investment management (the company began with this offering). Finally, we had a Financial Services Team that would build the next generation of investing and banking products, so that our clients could get access to financial products typically reserved for the rich.

Step 4: Once we had those teams in place with a clear charter for creating new innovative products (as opposed to simply optimizing the products we already provided), we put the rest of the company org in place.

And within the product organization, we could then provide guidance on the proportion of each roadmap (i.e. time and effort) spent on creating new feature innovations versus optimizing for growth with the existing feature set. For example, one might ask each product team to construct roadmaps that are 70% focused on building new value for the customer and 30% focused on testing and optimizing key business metrics related to their product line. With this approach to org design, a startup can be very explicit with its allocation towards growth through both optimization and innovation.

Another version of striking a balance between optimization and innovation is as follows: In this case there are 3 innovation-focused product teams (in blue) and 1 product team (the growth team in green) that is focused exclusively on optimizing the existing features and experiences in order to improve the business metrics. This would lend itself to a split of 75% innovation and 25% optimization.

Being Nimble

As noted earlier, companies need to pick their balance of “stocks and bonds”— i.e. their mix of optimization and innovation. However, they shouldn’t pick their mix once and set it for perpetuity. The mix should change over time depending on the circumstances of the business.

For example, if your company launched a new product line a few months ago and is experiencing exponential organic adoption, then the product clearly has product-market fit within your customer base. It may make sense for that product team to then spend 3-6 months optimizing the existing features within that product line to maximize for adoption via some low hanging fruit experiments. This is especially true for network effects businesses since optimizing the drivers of the network effects can produce massive results. That was the case at Facebook where we spent a lot of time optimizing for sign up rate, new user onboarding, and getting people to add friends. By doing so we meaningfully accelerated the growth of the company due to it being a network effects business.

Conversely — and this is the more common scenario I’ve seen at early stage startups — topline growth has stagnated as a result of not having shipped anything new and innovative in the last 1-2 years. That’s often the case since most businesses do not have a network effect and must therefore grow through new product innovation. The following example comes from my time at Wealthfront. At one point, three out of four product teams were set up to focus mostly on new product innovation (Onboarding, Financial Planning, Financial Services) and one team was set up exclusively for optimization (Growth). Within the Onboarding, Financial Planning, and Financial Services roadmaps, the teams then had an explicit balance of how much of their effort was dedicated to building new innovative features versus optimizing the existing products.

In subsequent quarters, the mix would change based on new insights or overall changes to the business. The key point is to remain flexible and use this simple mental model of “stocks and bonds” to regularly communicate and decide the appropriate mix of optimization and innovation across the company and within each product team’s roadmap.

If you want to take a stab at designing your own org chart using a similar process, go ahead and copy this free template that I made available and create a version of your own. It provides guidelines for laying out your org chart and listing what you must accomplish for your customers in order to realize your mission and vision. It’s also a place for you to balance optimization and innovation within each roadmap, as well as list the customer success metrics for each innovation team.

In the next part in this series I’ll discuss product development for optimization. The most important aspect of it is choosing the “right” experiments to run. I’ll do a deep dive on what that means and provide guidelines for you to use when it comes to making your own experiment selections. If you’re ready for it, go ahead and give it a read right now. And here’s a list of all four parts in this series in case you want to jump around.

Part 3: Product Development for Optimization

Assuming you’ve determined the right balance of optimization and innovation from the above sections, we can now take a closer look at how to manage an optimization roadmap and pick the “right” experiments to run.

Creating a Roadmap

Like any good product team, you should begin with a roadmap. The roadmap should be organized in priority order with the priority determined by estimated impact and level of effort. For example, if you estimate that a certain set of tests can produce a large increase (double digit gain) in the metrics for a relatively small amount of effort (a few weeks or less of engineering and design support), then it’s likely a high priority experiment. I’ve also created a template for creating your own experimentation roadmap, which you’re welcome to copy and run with.

The roadmap has two segments to it: The first segment allows for estimating the impact of various experiments so that you can rank them in priority order. The second segment is intended to capture the results from the experiment. It’s essential to maintain a history of all experiment results so the team can conduct post-mortems in order to refine their experiment selection and design.

Generally speaking, I recommend that optimization teams— such as a growth team—operate in 6-8 week sprints focused on improving one metric at a time. A common mistake I see is a small growth team trying to optimize multiple metrics in parallel. This lack of focus normally leads to subpar results. In contrast, significant results can be produced when the full weight of a growth team is poured into a single metric for at least a few months. The team will find that they improve their pattern recognition through focused effort, leading to better test results as time goes on. As an example, during my time at Quora, our growth team spent 16 months optimizing solely for sign up rate. During that time frame we increased the sign up rate from SEO traffic from 0.1% to north of 4%. Once we reached the bottom of the barrel on that particular metric, we moved onto the next metric and repeated the process. To encourage this type of focus, I broke the experimentation roadmap template into multiple tabs where each tab maps to a roadmap for a specific growth metric — e.g. churn vs. reactivation vs. signups and so on.

Picking the “Right” Experiment

Picking the right experiment to run is part art, part science. By art, I mean using judgement to craft a user experience worth testing. By science, I’m referring to the practical constraints of testing new experiments on a relatively small population (i.e. sample size, in statistics speak) when you’re still an early stage startup.

I often see startups try to run A/B tests in the same way that large companies like Google and Facebook do. They create a list of A/B test ideas that require a fairly limited level of effort and then start shipping dozens of small tests fairly quickly. A classic example would be changing the call-to-action on a landing page, such as the homepage, and perhaps testing the location of the call-to-action as well. The problem with this sort of test is that a startup often has a much smaller sample size (because it has less traffic or fewer users of the product), so running and resolving that A/B test at high statistical confidence takes much, much longer than running a similar test at a high-traffic product like Facebook. The relationship between experiment thoughtfulness and sample size is captured in the below diagram.

Here’s how to interpret it: Companies with a large sample size (a lot of traffic) don’t have to be as thoughtful with experiment selection and design. The reason is that the large company can make relatively small changes to the product, set up an A/B test to measure the effect, and then resolve the experiment in a matter of days at high statistical confidence because they have a wealth of data to lean on. On the other hand, a small startup with very little traffic (small sample size) needs to be much more thoughtful about experiment selection and design because an A/B test on a small sample size that produces a small change relative to the control will take weeks or months to harvest enough data to reach a statistically significant conclusion. I’ll demonstrate this effect in the below table.

Let’s imagine we have three different startups (A, B, and C — below). Each is going to run an A/B test on their homepage where the base conversion rate is 10%, the relative increase in conversion rate they are aiming for is 5%, leading to a new conversion rate of 10.5%. However, each startup has a different volume of daily traffic. Startup A receives 100 visits per day to the homepage, B receives 1,000 visits per day, and C receives 10,000 visits per day. Using the A/B testing calculator from AB Tasty to calculate the necessary test duration, we get the following results.
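If you’d like to sanity-check durations like these yourself, here’s a rough Python sketch of the underlying math. It uses a standard two-proportion sample-size formula at 95% confidence and 80% power, which is my own stand-in rather than AB Tasty’s exact implementation, so treat the outputs as approximations:

```python
# Approximate A/B test duration using a standard two-proportion z-test
# sample-size formula (95% confidence, 80% power, 50/50 traffic split).
from math import ceil, sqrt
from statistics import NormalDist

def days_to_significance(base_rate: float, relative_lift: float,
                         daily_visits: int, alpha: float = 0.05,
                         power: float = 0.80) -> int:
    p1 = base_rate                        # control conversion rate
    p2 = base_rate * (1 + relative_lift)  # variant conversion rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 95% -> 1.96
    z_beta = NormalDist().inv_cdf(power)           # 80% power -> 0.84
    p_bar = (p1 + p2) / 2
    # Required sample size per variant for a two-proportion z-test.
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1)) ** 2
    return ceil(n / (daily_visits / 2))  # each variant gets half the traffic

for name, traffic in (("A", 100), ("B", 1_000), ("C", 10_000)):
    days = days_to_significance(0.10, 0.05, traffic)
    print(f"Startup {name}: {traffic:>6} visits/day -> ~{days} days")
```

Under these assumptions, Startup A would need roughly three years to resolve the test, Startup B around four months, and Startup C under two weeks.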

You can see from the data that the test duration declines significantly as a result of having more samples (i.e. traffic) in the test funnel. Now, let’s take a look at what happens when you tweak the magnitude of the relative experiment effect. In other words, when you run a test that produces a small, medium, or large change to the baseline conversion rate.

By increasing the magnitude of the relative experiment effect, the test duration declines precipitously. The key takeaway here is to aim for large changes. That seems like an obvious observation, yet I see many startups testing relatively minor changes to their product in the hopes it will produce a double digit increase in the target metric.

Finally, let’s look at what happens if we manipulate the base conversion rate. By base conversion rate I’m referring to the starting conversion rate. For example, if you have 100 visitors/day to your homepage and 1 user signs up, and you’re running an A/B test on the homepage, then you have a base conversion rate of 1%. If instead you run an A/B test midway through the sign up flow where there are 10 visitors per day, and 1 visitor manages to sign up at the end of the flow, then you have a 10% base conversion rate. What you’ll notice in the below scenario is that test duration decreases as a result of having a higher base conversion rate. Practically speaking, that means you’re more likely to reach statistical significance quicker if you A/B test in the bottom half of a funnel versus the top half since the bottom half has a higher base conversion rate.
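Both of these effects are easy to see with the same formula. Here’s a compact, self-contained version of the earlier sketch (same caveats apply) that sweeps the relative lift and then the base conversion rate:

```python
# Sweep the relative lift and the base conversion rate to see how each
# drives test duration (same approximate formula as the earlier sketch).
from math import ceil, sqrt
from statistics import NormalDist

def days_to_significance(base_rate: float, relative_lift: float,
                         daily_visits: int = 1_000) -> int:
    p1, p2 = base_rate, base_rate * (1 + relative_lift)
    z_a, z_b = NormalDist().inv_cdf(0.975), NormalDist().inv_cdf(0.80)
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1)) ** 2
    return ceil(n / (daily_visits / 2))

# Bigger relative changes resolve dramatically faster...
for lift in (0.05, 0.15, 0.30):
    print(f"10% base rate, {lift:.0%} lift -> ~{days_to_significance(0.10, lift)} days")

# ...and so do tests that start from a higher base conversion rate
# (i.e. the bottom half of the funnel).
for base in (0.01, 0.10, 0.30):
    print(f"{base:.0%} base rate, 5% lift -> ~{days_to_significance(base, 0.05)} days")
```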

To recap, there are a few key lessons to take away from the above scenarios:

  1. Smaller startups can’t test like big companies because of sample size limitations. They simply don’t have as much traffic. If they try to test small changes to the product, which produce a small relative change in conversion rate on an already small sample size, then the test will take months or years to conclude. Startups don’t have the luxury of waiting around for insignificant results like that. On the contrary, startups need to produce step change increases in their rate of growth in order to achieve liftoff and set themselves up for another funding round.
  2. Startups must test big changes to their product in order to manage sample size limitations. If a startup runs an A/B test for a significant product change that leads to a 30% worse conversion rate, they’ll find out in a matter of days and can quickly kill the experiment and limit the downside. If it turns out that the test produces a 30% increase in conversion rate, the company will also find out in a matter of days and can turn it live to 100% of users and experience a large increase in its rate of growth. When you think of it that way, the startup really has nothing to lose!
  3. The bottom half of a funnel is often a better place to test than the top half of a funnel because obtaining statistical significance on a high baseline conversion rate is more likely than on a low baseline conversion rate.

It’s essential that anyone working on an experimentation team or roadmap understands the above statistical concepts. If they do, they are less likely to stack their roadmap with poorly chosen A/B tests that take too long to run and produce results too small to change the trajectory of the company.

In the last part of the series I’ll do a deep dive into how to implement a repeatable product development process that improves the chances that you can ship new innovative products at a fast pace. Below is the full list of posts in the series in case you’d like to hop around.

Part 4: Product Development for Innovation

Modern software companies follow a variety of common conventions to scale quickly and efficiently. For example, most software companies have a defined and documented approach for engineers when it comes to writing, reviewing, editing, and deploying new code. It’s important to settle on some standards and procedures for software development because it means a company can write code quicker, reduce mistakes that are inherent in writing code, and provide a better working environment for software developers. The end result is more and better products delivered to the customer, which in turn is good for the business.

However, standardization of a product development process is uncommon within startups. Most companies lack a clear procedure for taking an idea and turning it into a high quality, shippable product. What typically happens is product teams form and are left on their own to figure out how they want to drive new product development. For example, who is responsible for conducting customer research, when, and how should it be conducted? How does a team come up with an initial prototype for a new product? How do you iterate on it over time? In what ways can you maintain clear internal communication with key stakeholders as the product is being built? When and how do you come up with the go-to-market plan for the product? A well-designed product development process will have an answer for each of these questions and will help you ship more and better products to your customers. Without such standards, each product team will build products through different methods, leading to inconsistent product delivery timelines and inconsistent product quality. The last thing a startup needs is more unpredictability.

I created the following content to prevent unnecessary churn when trying to create new innovative products. It describes a product development process I’ve refined over the years and use on a day-to-day basis when building compelling products customers love. The process is described in a way that will make it clear and easy to implement within your company. It is specifically designed for building large customer-facing features where “large” is defined as a product that requires 1 month or more of engineering time to complete.

Common Product Development Issues

First, it’s useful to point out the ways in which product development is typically broken or inefficient at young technology companies. Here are the common issues that I tend to see at startups:

  1. The value you want to create for your customer has not been clearly articulated upfront.
  2. Projects get “blown up” late in development due to large communication gaps during development.
  3. Creating the first product prototype takes far too long, leading to a lull in the pace of development.
  4. Customers aren’t being talked to enough, leading to products that don’t adequately reflect customer wants and needs.
  5. The project team building the product doesn’t have a clear escalation path to get unblocked.

The below process has been designed to explicitly solve or greatly mitigate each of the above issues when developing new products.

Guiding Principles

In addition to solving common product development pitfalls, this method of developing products is rooted in a set of guiding principles that further prevent the above issues and give product teams a common language to use when describing how they build product:

  1. “Work backwards” from the customer: Start with intense focus and clarity on the value the company wants to create for customers as opposed to thinking about the value the company wants to create for itself. The belief is that if a startup makes the customer very satisfied, customers will engage more deeply with the product, which leads to an increase in the key business metrics. Amazon is the best example of a company that begins product development with an intense focus on value to the customer.
  2. Collaborative: All key functions (e.g. product, design, engineering, and customer support) are present from beginning to end since each function provides a unique and valuable perspective. That means everyone must own the outcome of the product— e.g. engineering should care just as much about the quality of the user experience as a designer should. I don’t believe in the “PM as the CEO of the product” idea because most PMs don’t have CEO quality judgement. Software development is best conducted as a team sport.
  3. Interactive prototypes: A product development process should aim towards creating interactive prototypes worthy of being tested on actual customers, as quickly as possible. The reason is that startups learn the most when testing an interactive prototype on customers. Interactive can mean working code or a high fidelity visual prototype using something like Framer, which strings together visual designs through clickable hotspots.
  4. Measure and learn: Once a product is shipped, you’ll want to measure the outcome to see if it created the expected impact. If not, you can investigate why that is the case and use those insights to either deprecate the product, improve it, or carry forward those learnings into future products that are built. Shipping products without understanding the impact is unacceptable.

A Repeatable Process for Innovation

First, I’ll describe the process. Following the description is a visual concept. The product development process follows these steps:

  1. Begin with conducting Customer Research as part of “working backwards from the customer”. It’s through this research that you will refine the product hypotheses— i.e. what the product should do and why it should do it, what specific problems you’ll be solving for the customer, and what forms of delight you can provide. Customer Research can be conducted by either a PM or a designer, if your company doesn’t have a full-time research lead. Each conversation is 30 – 60 minutes and follows an open-ended format that allows for spontaneous discovery of rich customer insights. These insights should eventually make their way into product requirements.
  2. In parallel, the lead Product Manager begins drafting product requirements (which also includes an Amazon-style press release). A draft of the product requirements and press release must be finished before starting the design sprint, which is how a product team develops its first testable prototype. The initial draft should be reviewed by the design and engineering leads, so they are familiar with it and can provide useful feedback. You want all key team members to be versed in what value you intend to create for the customer.
  3. Once Customer Research is complete, and a first draft of the product requirements and press release has been written, the team will then run a design sprint to quickly design the first testable prototype of the product. I selected the Google Design Sprint method since it was created with the time constraints of a technology company in mind. The issue with most traditional design processes is that they can take weeks to months to get to a testable prototype. That timeframe simply doesn’t work within a startup. The Google Design Sprint method is the most effective that I’ve seen for going from 0 to 1 within a software company. The design sprint takes 1 week, at most.
  4. Once the design sprint is complete, the team can finalize the product requirements and Amazon-style press release so that the requirements and customer value are crystal clear before full development begins.
  5. The results from customer research and the design sprint are brought into a kickoff meeting to get everyone on the same page before the development process ramps up to 100%. A kickoff meeting should be no longer than 45 minutes and should be held shortly after the design sprint is completed (e.g. within 1 week). You’ll want all primary decision-makers involved so that there are no surprises that could derail the project later in development. Feedback from primary stakeholders should then be incorporated into the product plans.
  6. Once development begins, the project team will present the latest prototype(s) (across all platforms— e.g. web, iOS, Android) and the overall status of the project during weekly or bi-weekly product reviews until the product is finished and launched to the public. Product reviews are capped at 45 minutes and, if run efficiently, should take significantly less time (e.g. 20 – 30 minutes). The purpose of product reviews is to maintain coordination throughout the project, give the project team a regular interface with leadership so they can ask for help or support when needed, and to incorporate feedback on the prototypes iteratively.

This is a conceptual diagram of the product development process from start to finish. It’s very useful for project leads (especially the product manager) to have this process memorized so that they always know what should come next. If run well, it should only take 2-3 weeks to finish customer research, run the design sprint, and hold the kickoff meeting. Keep in mind that this is for new, innovative products/features, so reaching alignment on a medium-fidelity prototype in such a short timeframe is impressive. From there, development moves quickly until the product is ready to launch.

Templates

Here’s the full list of templates that you can use in conjunction with the process laid out above. This will allow you to incorporate some or all aspects of this process into your own team or company.

Wrap Up

Thanks to an abundance of data storage, analysis, and visualization tools, startups today have the ability to make rapid improvements to nearly every aspect of their business. However, this abundance has created a significant bias: startups now lean too heavily on structured data. So much so, in fact, that some of the fundamentals of building innovative products, such as rigorous customer development, have fallen by the wayside. One byproduct of this data obsession is that many startups try to optimize their way to success through relentless A/B testing. This typically pulls them further away from the essential insights and truths they might discover if they spent less time analyzing structured data from a database and more time collecting the unstructured data that surfaces when talking to customers.

The good news is that data over-reliance can be corrected with a shift in mindset and some of the tools and guides I provided in this four-part series. In terms of next steps, I hope you do a few things from here. First, design a company-wide org chart that creates an explicit balance between optimization efforts and innovation efforts. Second, be deliberate about the types of experiments you run and avoid tests that will never meaningfully improve your business. And finally, adopt some version of the repeatable product development process I shared, so that you can innovate much more effectively for the betterment of your customers and your business.

For reference, here are all the posts in the series in case you’d like to read them again:

A League of Its Own: Why I’m Going Long on BallerTV

Each year, hundreds of millions of kids from around the world are introduced to sports. I was one of them starting back in 1988. My life was designed around sports starting at the age of six and continued, without interruption, until I took off to college when I was 18. This level of involvement meant that sports was more than a game I played on nights and weekends. It was my social circle for much of my life and it’s where I spent some of the best time with family too – playing alongside my brothers, having them cheer me on from the sidelines, or being coached by my dad. Sports also served as a purveyor of valuable life lessons, such as the value of teamwork, hard work, determination, and so on.

Despite the significant role that sports played in my life, I have almost nothing left from those years other than memories and a box of miscellaneous home run balls, trophies, and newspaper clippings tucked deep into a corner of my closet, only to be revisited in a random bout of nostalgia once or twice a decade. No video footage. No letters from college scouts (because I was never “discovered”). And no way of sharing those moments with family members who didn’t have the chance to be there while I played.

That’s how things were when I played in the 80’s and 90’s – but with today’s technology, that doesn’t have to be the case anymore. Which leads me to the team at BallerTV and why I am keen to invest in this remarkable company.

BallerTV makes it easy for anyone to watch live streams of high school sports from a growing list of games across the country, whether it’s a family member who is unable to attend an away game or a college scout looking for talent in parts of the country they typically can’t cover. Users can not only watch games broadcast live, but also access unlimited replays, download games to keep as digital memories, and view player profiles with highlight reels that help today’s up-and-coming players get discovered by coaches and scouts.

I first met the founders Rob Angarita and Aaron Hawkey over coffee in the summer of 2018. They shared their passion for sports and their prior entrepreneurial history as co-founders of a product called Cramster, which was sold to Chegg and is now known as Chegg Study. They unpacked their vision for BallerTV and helped me understand the potential of the business and the underlying drivers they see shaping the industry they are helping to create.

  • Professionalization of amateur sports: high school sports is becoming increasingly professionalized, much like college sports did beginning in the 1980’s. ESPN televised LeBron James’ high school games in the early 2000s, and Sports Illustrated has placed at least 13 high school athletes on its covers dating back to the 1980’s. The All-American Games started in 2000 and have been broadcasting the best high school athletes from every major sport ever since. And recently, the rate of professionalization has increased via the dominance of AAU basketball (and several other AAU sports), which has largely replaced high school teams in the discovery and recruitment of top amateur talent.
  • A new distribution method: whereas college sports benefited from mainstream cable television adoption, high school sports benefits from online services such as YouTube and now BallerTV. The first college sports game was broadcast on TV in 1939, but it wasn’t until the mainstream adoption of cable television that college sports took off in both viewership and revenue. As of 2017, the NCAA basketball tournament (better known as March Madness) generated $1B in television advertising revenue by itself. A similar ascent in viewership and revenue is starting on platforms like BallerTV. For example, players like Zion Williamson have amassed tens of millions of views on their high school highlight reels. He is just one of thousands of amateur players to gain notoriety online before making the transition to college.
  • Lower cost game production: ESPN, CBS, and other major networks broadcast only marquee amateur events (such as the McDonald’s All-American games) because high production costs make the economics work only at that scale. It turns out it is expensive to haul large television production equipment across the country and staff an event with highly paid on-air talent. In comparison, BallerTV is able to shoot amateur games at a fraction of the cost of traditional broadcast television because of the proprietary technology they’ve created and ongoing advances in hardware, software, and communications technology.
  • Scalable organic growth: BallerTV has built an incredible brand within the amateur sports community. So much so that high school programs, tournaments, and leagues are banging down its door, asking BallerTV to record their games because parents and players find so much value in BallerTV’s products. For each new game they record, more players, coaches, and scouts sign up to get access to BallerTV’s fast-growing lineup of games. The brand is so strong that future Hall of Famer Dwyane Wade recently joined as a brand ambassador.

When an investment opportunity gets to the finish line, I always do a gut check and ask myself, “Would I want to work here as a full-time employee?” In this case the answer was a clear “yes!” due to a constellation of factors. They have a small yet deeply committed team led by two co-founders with strengths in both execution and vision, a fast-growing customer base that finds emotional value in the product, and an increasingly large market that serves the greater good of helping kids grow up to be healthy, well-rounded adults. A company like this needs to exist and I’m very happy to be a part of it along the way.

To those of you looking for a great company to join down in LA, BallerTV is hiring!