Why Your Growth Team Shouldn't Be Focused on Winning (Willie Tran, Calendly)

This is a podcast episode titled, Why Your Growth Team Shouldn't Be Focused on Winning (Willie Tran, Calendly). The summary for this episode is:

Willie Tran hates winning.

Ok, maybe that's an exaggeration; but Willie does hate when growth teams are only focused on winning.

On this episode of Growth, Willie explains why he's led his growth teams away from a culture of winning, what he's focused on instead, and how all growth teams can (and should) change their narrative.

Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️⭐️ review and share the pod with your friends! You can connect with Matt on Twitter @MattBilotti, @DriftPodcasts, and Willie on LinkedIn.
How to shift a "winning" narrative into one that really wins
02:44 MIN
How to build a team around Willie's method of learning
02:49 MIN
Experiment design
04:15 MIN

Matt Bilotti: Hello, and welcome to another episode of the Growth podcast. I'm your host, Matt Bilotti. And today I am really excited to have Willie Tran who is a principal growth PM at Calendly here with us today. Willie, thanks so much for joining.

Willie Tran: Hi, thank you.

Matt Bilotti: Absolutely. So I caught up with Willie a few weeks back and realized he's got some really solid, hot takes about growth stuff that maybe aren't your classic, standard viewpoints on things. So we're going to talk through culture on growth teams, talk through experimentation, and some other stuff along the way. Willie's got a background doing a ton of growth stuff, scaling up some teams. So Willie, why don't you give a little bit more of a quick background on yourself and then we'll go ahead and jump on in?

Willie Tran: Yeah. Hey, hi there. My name's Willie. I'm leading product growth at Calendly. Before this, I helped scale up and help out product growth teams at MailChimp and Dropbox.

Matt Bilotti: All right. Let's go ahead and dive right in. So let's start with some culture stuff around growth teams. One thing that you had mentioned to me was you're against a culture of quote unquote winning. What does that mean to you in the context of growth teams?

Willie Tran: Well, I hate winning. No, I'm just joking. I think a lot of times with growth teams, we see that they focus a lot on "let's just pump out winning experiments." And they only talk about experiments that are stats sig positive, and that's fair, right? I think most growth teams are like, "Yeah, what's wrong with that? We should just talk about how we're moving the number." And it makes total sense. However, I find that there are a lot of longer term issues with that that people don't realize or don't understand if they haven't been a part of a more tenured growth team. So to talk about it, let's walk through how product growth teams are usually started at most companies. And listener, if this isn't you, congratulations. But from my time talking with a lot of people who are starting product growth teams, this is how it usually starts. So you have a person. They're usually maybe a PM, or maybe they're a higher level, but not at the very top. They're excited. They hear about this experimentation thing, or they hear about PLG, and they're like, "Oh man, we've got to have one of these. They're hot right now." Essentially, imagine moving the number without having to put a lot of money into it, even though product growth teams are incredibly expensive. So this person usually pitches some experimentation program, some PLG program, to leadership. Leadership will usually have an executive sponsor, and they'll eventually get buy-in. They'll be like, "Okay, cool. Let's give this a shot. Here's a couple engineers, literally two, and half a designer, go figure this out." And usually the platform this person pitches it on is, "We're going to move this number." And usually that number is activation or acquisition. And so once you get that approval, you're expected to deliver some results to justify this.
And from the timeframe that I've seen, it's usually six months that you're expected to produce some result to justify this investment on a company level. Within six months, you have to get something. And understandably, this can be nerve-racking for the person who pitched this. So what usually ends up happening is they'll come up with a strategy where they just focus on delivering a ton of experiments. They're just pumping stuff out at super high velocity, which can be okay if your objective is to build the muscle of running experiments, from designing them, building them, and analyzing them, to extrapolating results and learnings. That can be okay to build that muscle. But, and this sounds crazy, the problem is when you have a lot of wins, which surprises a lot of people, right? Because that's what they're going for.

Matt Bilotti: Isn't that the good thing?

Willie Tran: That's the good thing, right? It is a good thing, but only in the way in which you manage how you deliver these wins and what you focus on. So if you have a lot of these wins, and you will, you almost always will, you're essentially putting a signal out to the rest of the company that, "Hey, look how valuable we are because we're winning. Look how much we move the metric by." And that's all you're focusing on, that's the delivery, because you're trying to justify the company's investment. And then people are like, "Wow, this team is worth it. That's so great." And then you just keep rinsing and repeating this until eventually you have this narrative across the company that the product growth team is the shit, right? Look how much they're moving the number by. Anyways, I'd imagine the listener right now is probably thinking, "Yeah, this is great. What's the issue, Willie?" The problem is that you're probably not factoring in that you will hit diminishing returns. So what are diminishing returns? Essentially, your rate of winning is going to decrease over time, because when product growth teams are created, their first task is usually to pick up all the low hanging fruit. And I've never seen a company that didn't hit diminishing returns. I've never seen it. And usually, depending on how much traffic and opportunity you have, you'll hit it within the year. So at some point, once you hit diminishing returns, you'll notice that your experiments are not winning as much anymore. Now this is a problem, right? Because if you have pitched your team as the team that wins and then you're not winning anymore, all of a sudden your team's perceived value is no longer as high as it used to be. So this is really bad, right? And this is why I often see growth teams actually get disbanded after a year, which is terrible.

Matt Bilotti: So what do we do?

Willie Tran: Just throw your hands up and wave the white flag.

Matt Bilotti: Every growth team just has a one-year lifespan and that's it?

Willie Tran: That's right. This has been a great time talking to you all. No, so the strategy, or the meta strategy or narrative, that I've found to work really well is: instead of focusing on winning, focus on learnings. And it sounds somewhat similar, but it's actually quite different in a lot of details, right? Because if you focus on learning, what will happen is you will get more wins over time, because you're learning about the user. You're learning about their problem. You're learning what the real experiments are that you should be running to keep winning, right? Something I believe is: the more you learn, the more you earn. I think that's true as an individual. I think it's true as a growth team. And not only that, if you purely position yourself as the growth team that's the only one running experiments, and look how much you're moving the number, what'll also end up happening, from my experience at least, is this paints a target on your back from other core product teams. Because it'll start to hype up this idea of experimentation across your company, which is a great signal. And then usually leaders of the core product teams, again, this is just my experience, will be like, "Hey, core PM. Why aren't you running experiments?" And this makes the core PM go, "Shit. I'm being measured on something else." Listener, I am Asian, and I went through school, and my father was like, "Hey, your friend was on the stage, graduated with honors. Why didn't you graduate with honors?" It's kind of a [inaudible] thing. And I'm like, "God damn it, Michael." So anyways, it creates that effect, right? Where the core product teams can see the growth team as a threat, which maybe speaks more to the culture of your company. But you should avoid that. Instead, when you create learnings, you create evergreen value.
And this is really important because if your growth team is only known for winning and moving the number, again, when you stop moving the number, you won't be as valuable anymore. But if you leverage the learnings to create evergreen value to lift other product teams, to help them build better experiences that also move the number... Because it should be everyone's job to move the number. Then your product growth team will be more valuable to the rest of the company beyond just the winnings, right? Because you're not just creating winnings localized to product growth, you're creating winnings across the entire company. And that's so incredibly valuable, so incredibly valuable that everyone wants to work with you. Everyone wants to run experiments and everyone just sees how valuable you are as a team.

Matt Bilotti: Love this. I think that the focus on learnings makes a ton of sense for the longevity of the team, and learning so that others can take action too is amazing. You know, I'm listening to this and thinking of other people listening who are like, "Willie, are you saying I shouldn't do the low hanging fruit wins? Are these mutually exclusive things? Can I do them together? Can I still get the wins?"

Willie Tran: I think you should totally do the low hanging fruit. At the end of the day, you still have that pressure, and so you should absolutely do it. But while you're doing that, figure out a strategy that helps you learn about the user. Learn about the user's mentality. This is really important because the strategy that you create as a product growth team will be based off of the problems that the user has. Again, this is not terribly dissimilar from a core product team. So concurrently, while you're doing these low hanging fruit experiments, do a lot of user research. Ask a lot of questions around who your user is and what behaviors they're exhibiting. And this is a common problem that I've seen when talking to other product growth teams: they usually come in with tactics. I see this a lot, where they come in with, "Hey, wouldn't it be cool to run this experiment?" And it's like, "Yeah, yeah, yeah." And they go design it and they run it. And that's fine for low hanging fruit, again, building your muscle. But the first thing you should actually be doing is coming in with questions. Questions are so much more important than tactics. Tactics are super easy to figure out once you have a good idea of what the problems are, but the only way you know what the problems are is by first asking questions around the user's behavior, around their mental state, around what they're looking for. What's their intent? I believe conversion equals desire minus friction, right? And friction... Okay, you can look at the experience and figure that out. At the same time, friction is subjective too, based on what the desire is. Because people think, "If I just remove all the steps, then it'll be better." Well, not exactly, because sometimes people want more confidence that what they're doing is correct.
So maybe adding more friction could actually be better to some extent, depending on what the desire is.

Matt Bilotti: So tell me, what are some examples of these types of questions? You were talking about intent, what is their intent and all that. Is that the type of question? How is the question set up? How do I build my team to do this method of learning that you're talking about?

Willie Tran: So what I always do whenever I'm starting a new team or a new area is the first doc I always create is a question backlog doc. And it's essentially a super collaborative process where me, my designer, my engineers, UX researchers, literally anyone who wants to be involved, we just brainstorm together and just ask questions. And then we collect and categorize them. So an example question would be, "Do users not use this feature because they don't know it's there?" Because essentially you're asking, is it an awareness issue? Is it a usability issue? And this is something you can probably figure out via quantitative analysis. Another one was, "What do users need to see or experience to know that Calendly is worth paying for?" This was a big one when I was leading paid conversion at Calendly. That one you're not really going to figure out via analytics, this is probably user research, and then you test solutions via experimentation. But it's about figuring out how they are defining value. And we often make that up ourselves based off of what we think the product is. When I was at Dropbox, I learned that, say, team admins, all they really cared about was consolidated billing. That really blew my mind. They didn't really care about things like storage and stuff like that. They just wanted to make it easier to pay for all of their team's licenses. That was a huge reason. I would never have guessed that, but that's really important because that dictates your strategy. Because imagine if instead I hadn't asked that question and we hadn't figured that out. We would've just done a bunch of experiments around trying to optimize the license invitation flow, which by the way we did do, to minimal success. So things like that are incredibly valuable.
And then another question, this is a quantitative one, is: what features are most correlated with paid conversion? If you know that, you can come up with a strategy around what features you should be pushing on users, or experimenting on, to get users to adopt. Whereas if you just come in and throw spaghetti at the wall, you're not going to have a good strategy. You're not going to know what problems... What's stopping people from adopting these features? And "how do we get users to adopt feature X" is a lot easier question to answer than "how do we get users to become a paid user?" Because that's actually a really hard problem. It's very broad because there are lots of reasons. But when you can narrow it down to a couple things, it creates a sense of inherent prioritization and focus, which is incredibly powerful for a product growth team, when it feels like you can and should be doing everything.
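That quantitative question can be sketched with a very simple analysis. This is a minimal illustration, not Calendly's actual method; the feature names and user records are made up, and the result is a correlation signal, not proof of causation:

```python
# Hypothetical user records: which features each user adopted, and whether they paid.
users = [
    {"features": {"integrations", "reminders"}, "paid": True},
    {"features": {"integrations"}, "paid": True},
    {"features": {"reminders"}, "paid": False},
    {"features": set(), "paid": False},
]

def conversion_lift_by_feature(users):
    """For each feature, compare the paid-conversion rate of adopters
    vs non-adopters. A big gap flags a feature worth experimenting on
    (correlation, not causation -- you still validate via experiments)."""
    def rate(group):
        return sum(u["paid"] for u in group) / len(group) if group else 0.0

    all_features = set().union(*(u["features"] for u in users))
    lifts = {}
    for f in all_features:
        adopters = [u for u in users if f in u["features"]]
        others = [u for u in users if f not in u["features"]]
        lifts[f] = rate(adopters) - rate(others)
    # Highest-lift features first: these narrow your problem space.
    return sorted(lifts.items(), key=lambda kv: kv[1], reverse=True)
```

In this toy data, "integrations" surfaces first (its adopters all pay, non-adopters don't), which turns the broad "how do we get paid users?" into the narrower "how do we get users to adopt integrations?"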

Matt Bilotti: I love this too, because it changes how I think about designing of experiments. Generally the PM and the designer are just working together to design two to three different versions of this thing, because we think we can make an impact there. Whereas what you're saying is more like the experiment design really just needs to be anchored in research design. It's like you should be framing... These are the three options because we believe that what we can learn from putting it in front of people will answer this question for us.

Willie Tran: 100%. Running an experiment is not figuring out "is this version better than this version?" And I think that's what a lot of people think it is, because honestly, that's kind of how it's been pitched. It's the simplest way of understanding the value of an experiment. And it does answer that, but that should not be the main thing. Instead, like you just mentioned, it's about research. It's another way of doing research, and when you run an experiment, you need to have questions you're trying to answer. Right? For example, a super simple one, and this is probably most experiments, is, "Do users not use feature X because they're not aware it exists?" So then you'll design an experiment that just puts it right in their face, so you know they're aware. And I think what a lot of product growth teams might do is over design it. They might change a lot of variables. They'll put it over here, whatever. When realistically, you should just be testing, and this is not saying create a bad experience, but you should just be testing that initial question: is awareness the issue? So make them very aware that it exists. And then if we find out that the down funnel metric of, say, paid conversion doesn't move, or they don't adopt the feature, which is a closer metric, well, then you can definitively say that awareness is not the issue. And that sounds very obvious. But what I've seen a lot of teams do is they'll just run a lot of experiments around awareness, and then they'll just keep running them until there's a win, quote unquote. And they're like, "Yeah, we got a win." But that's not right. Instead, you have your overarching strategy around your problem statements, but then each experiment will dictate the next step that you decide to take in this sequential flow of experiments, or concurrent if you're advanced.
But at the end of the day, you're trying to answer that question. You're trying to figure out... Okay, again, I'm going to go back to this example, but is awareness the issue? And if you can answer that and definitively rule out that awareness is not the issue, I can't overstate how big of a finding that actually is. Because then you go, "Okay, well maybe it's usability." And then you start running experiments there, and then you narrow down on that problem area. You narrow down until eventually... Because there is a problem, right? Otherwise, you'd have a great conversion rate. So the problem is there, and you just ruled out that awareness wasn't the issue. And that's based off of inconclusive results, right? Not even a stats sig positive result.

Matt Bilotti: You've talked about experimentation a lot. We've just walked around it. Why don't we dig into it a little bit more? I think when we caught up a few weeks ago, one of the things that you said that really stuck out to me was moving a number down. And this ties back to a lot of what we talked about so far, but in an experiment moving a number down is just as good if not better than moving the number up. Tell me more about that.

Willie Tran: Yeah, I would say just as good. I don't know if I'd go as far as to say better. At the end of the day, what you're trying to figure out is: where is the problem, and why are people not converting? And if you figure this problem out, the number will move as a byproduct of that. So what I've seen at a lot of companies, and this is understandable, is you see something move a metric negatively, stats sig negative by, let's say, 5%, whatever. And then you're like, "Oh, let's not tell anyone about that." You just sweep those results under the rug and you say, "Ah, okay. Well, let's just keep going." Instead, and I know this is hard, you should really celebrate that. You should celebrate that you found a stats sig result, positive or negative equally. And the reason being is because if you moved a number in a statistically significant way in any direction, it means that you found an element or component that the user actually cares about. Because the thing you're really battling against is apathy, right? The user doesn't really give a shit about this area. And your job is to find the areas users actually care about. Areas that affect the flow that is getting people to, say, convert, or whatever the desired action is. Now this is what's known as a growth lever, right? I think we've all heard that phrase, but really it's just finding the areas that have an effect on users completing this thing. Now if it moves negatively, it still means that you found an area that affects whether the user completes that task. The only thing that was quote unquote wrong was that you moved the lever in the wrong direction. But what we often see is it's stats sig negative, and people are just like, "Oh, well, I'll move on to the next one." You know what I mean? And they ditch it.
Whoa, dude, you just found some gold and you're choosing not to harvest it. Instead, you should definitely move the lever in the other direction, do the opposite, rethink the execution. Because your hypothesis is made up of three parts: by [execution], we'll see an increase or decrease in [metric], because [assumption]. And with a good hypothesis, at the end of the day, based on the results, you go back and you're able to question those three things. Was our execution wrong? Did we choose the wrong metric, which is surprisingly common? And is our assumption untrue? And so when you celebrate these stats sig negative results, or any stats sig result, and you go back and look at that, you can see: our assumption is probably true, that this area affects the user. We were probably choosing the right metric, because it moved. But maybe our execution was wrong, so go back and [inaudible] your execution. Keep testing there. You found something that makes people move, just do it a little bit differently. That's all. That's so valuable.
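The "celebrate stats sig in either direction" idea can be made concrete with a standard two-proportion z-test. This is a minimal sketch under assumed settings (two-sided test at alpha = 0.05, hardcoded critical value), not anything prescribed in the episode:

```python
import math

def read_experiment(conversions_a, n_a, conversions_b, n_b):
    """Classify an A/B result as stats sig positive, stats sig negative,
    or inconclusive, from the variant's point of view (two-proportion z-test)."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    z_crit = 1.96  # two-sided, alpha = 0.05
    if z > z_crit:
        return "stats sig positive"
    if z < -z_crit:
        return "stats sig negative"  # also a lever found -- try the opposite direction
    return "inconclusive"

# Control converts 500/10,000, variant 430/10,000: a statistically significant
# drop, i.e. you touched something users actually care about.
print(read_experiment(500, 10_000, 430, 10_000))
```

The point of the third branch is Willie's: an inconclusive result rules a lever out, and either stats sig branch rules one in, regardless of sign.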

Matt Bilotti: It feels like the easy thing to do is look at it and say, "Wow, we moved the number pretty far down. The current state must be really good. Let's leave it alone." Cool. What else around experimentation do you want to educate our listeners on?

Willie Tran: Yeah. So something I'm super passionate about is experiment design. I've harped on this a little bit already, but there are some common things that I'm seeing done wrong. So one is that experiments are not ideas thought of on the fly. They're questions and hypotheses that need to be really thought through. If your experiment is starting off as, "Hey, wouldn't it be cool if we ran the copy experiment that changed this," and blah, blah, blah, you're starting at the wrong place. You're putting the solution before the problem. And a lot of people do this from what I've found, and maybe the tides have changed over time, but from my experience, this is pretty prevalent. Instead, like I said, experiments should be questions that you've really pondered, and then you should really think about whether this experiment appropriately answers that question. Because you can run a bad experiment even if it has positive results. Actually, let me clarify: what does a bad experiment mean? What it really means is it's an experiment that you don't learn anything from. So if you change a bunch of elements and it leads to a positive result, it doesn't mean it's a good experiment, because you have no idea what the element was that actually caused that result. But if you knew that, then you would be able to harvest more there and try to pull out more wins. Now I'm sure there are a lot of people here right now like, "Well Willie, isn't it good though that you found something in that area that led to that, and that there's something else in there?" To which my answer is: yes, but were you deliberate about that at the very beginning? Were you aware that you were changing the whole experience? There's a difference between changing the experience and changing components.
And if your hypothesis is based around the experience, that's different. But if your hypothesis is based around changing a component, like, let's say, removing this copy or removing this step of the flow, whatever, that's pretty different, because essentially how you can test that is different. Because there are times, if you're smaller, if you don't have as much traffic, you have to take bigger swings. You have to make more experience changes. Your experiments are made to answer your questions, not test your idea.

Matt Bilotti: Ah, that's good.

Willie Tran: I'm going to say this again: your experiments are made to answer your questions, not test your idea. And I'm sure some people are going to be like, "Whoa, that's the same thing." No, it's not. If you are running an experiment to test your idea, it's already flawed, because you're just trying to see if your idea is a good idea or whatever. And if you run it and your idea is not a good idea, there's a good chance you won't have any learnings behind it. And if you don't produce learnings, you're not doing a good job of actually creating that, let's say, non-quantitative value across the company. But if you're answering questions, again, you can bring those insights up to the rest of the company, even if the result is inconclusive. Again, going back to the awareness thing. If you run an experiment that tries to increase awareness of a feature and it doesn't increase adoption of said feature, then you can go to the rest of the company and say, "Hey, for anyone else who's thinking about trying to move adoption of this feature, just FYI, you should not focus on awareness. We ran an experiment which tried to increase awareness. It didn't lead to anything." Even though it was inconclusive, that's incredibly valuable. That could save another team so much time. But if you're just testing your idea, there are no real learnings there. So that's really big. And I know it's really difficult to avoid this, and it sounds like I'm splitting hairs, but if you focus on this stuff and really think about the question, it leads to so much increased value for the rest of the company. I can't stress that enough.

Matt Bilotti: Because otherwise you could wind up in a space where your takeaways from the last three months of experiments are that the ideas from the sales reps are bad and Joey on the team generally has the best ideas out of all of us, which to your point, is not the culture that you want. The culture is, we learned that these things matter and these things don't.

Willie Tran: Right. Could you imagine having a culture where it's like, "Joey is awesome. Sales, get out of here. Leave the ideation to Joey." That is a terrible culture. I hope you don't have that culture. If you do have that culture, you should really consider leaving. I couldn't imagine that. So that's what I do when I work with other PMs and when I'm teaching people experimentation. And this is true, a lot of core PMs will come to me like, "Hey, Willie, I have this experiment." I'm like, "Well, hold on. Before you go into the idea, let's really talk about it. Let's go up a few levels." And I always prevent people from just giving me their ideas, and then it turns out, when we dive a bit deeper into the questions and where the idea came from, we may find out that the idea is actually not the best idea, or rather, it's not the best way to answer the question that matters the most for them.

Matt Bilotti: That's awesome. Anything else on experiment design that you want to cover before we move on to some of our last things here?

Willie Tran: Oh, okay. This one's for the people who are building out product growth strategy, which is surprisingly challenging, right? Because if your strategy is "throw stuff at the wall," that's not really a strategy. And furthermore, if you think about the theory of constraints, for the other supply chain nerds out there, you need to identify what is probably your biggest bottleneck. And this will define your strategy. So most people's bottleneck, the vast majority, unless you work at Facebook or Netflix or whatever, is traffic: figuring out how much traffic you have and what your current baseline conversion rates are. I'm also assuming that most people here are operating under a frequentist statistical framework for how they run experiments, where they identify ahead of time essentially how much traffic they need in order to conclude that their experiment result is actually valid. Okay, so now that I've stated that. If you're coming up with a strategy, you need to figure out what types of experiments you need to be able to run. And if your limiting factor is traffic, you can't really control that. You can't just be like, "Okay, well let me just get more traffic to come in." That's not going to happen. Another thing you can't control, not immediately, is the baseline conversion rate. "Let me just change the baseline conversion rate." That's not going to happen either. The baseline conversion rate is what it is, and that's what you're trying to move via experiments. So one thing you do have control over is the minimum detectable effect. And this is essentially the degree of sensitivity that your test is able to detect. It's not exactly how big of a change it is, but that's the easiest way for me to explain it. So a quote unquote bigger change. Really, it means the delta between your current experience versus your newest one. So it doesn't exactly mean the most features added or anything like that.
It just means the more zero to one it is, as opposed to one to N. And the bigger that is, that will affect how long you need to run the experiment for, which will affect how many experiments you can run within a certain timeframe. So what I've seen a lot is people who say they're working at an enterprise company, and they'll be like, "Oh, I want to run a bunch of experiments. We have a bunch of these copy experiments we want to run." And they have 1,000 people coming to their site a month and a really low conversion rate. And I'm like, "Well, you can't run any..." Not that you can't run any, but more that you cannot do this initial high velocity strategy that you want to do, unless you're okay with these experiments running concurrently for six months at a time. And no one's okay with that. Most people will just say, "I'm just going to let this experiment run for a couple weeks." Where'd you do the math? How did you come up with that? I don't know, man. Instead, when you figure out how long your, let's say, average experiment needs to run for, that should dictate the resourcing you need in order to support your experimentation strategy. Because if you do have low traffic and a relatively low conversion rate, it means that you need to make a bigger change. It means you need to have a higher minimum detectable effect. And when you change that to be higher, because you're testing something that is a bigger change, it means that you need to staff differently on the engineering side, because you're not just going to do copy changes and optimizations. You're probably going to do maybe even a net new feature add, or redesign this entire core experience to be completely different. Those are the types of experiments you need to run to tell you how much better this new experience is. So that's really big. And if you're listening to this and you're new to figuring out what your strategy is, the first thing you should be doing is figure out a couple things.
Figure out how much traffic do you expect to have. Figure out what's your baseline conversion rate. And then also in your head figure out what is your tolerance for how long you want an experiment to run for. Most people will say it's three weeks. That's from my experience. Most people think it's three weeks." I want to run it for three weeks." Whatever, sure. So then go to the Evan Miller sample size calculator, and then type in a baseline conversion number that gets you to that number. Now just because you type in that number doesn't mean that's accurate. So now you got to figure out exactly what is the right experiment you need to run in order to realistically measure that minimum detectable effect, that is actually... It's very not easy to figure out. But the idea is that do you really think this experiment is going to move the baseline conversion rate by a relative, let's just say, 15%? That's pretty big. Realistically, and this is usually based off of past experiments of similar weight, but that's a whole other thing and it's incredibly complicated.
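The back-of-the-envelope math Willie is describing can be sketched in a few lines of Python. This is a rough stand-in for the Evan Miller calculator he mentions, using the standard two-sided z-test sample size formula for comparing two conversion rates. The specific baseline (2%) and the exact formula are illustrative assumptions; the 1,000 visitors a month and the 15% relative lift come from his example.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Approximate users needed per variant for a two-sided test on
    conversion rates (same spirit as Evan Miller's calculator)."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)  # treatment rate at the MDE
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up to whole users

def weeks_to_run(monthly_traffic, n_per_variant, variants=2):
    """How long the experiment must run given your actual traffic."""
    weekly_traffic = monthly_traffic / 4.345  # avg weeks per month
    return n_per_variant * variants / weekly_traffic

# Willie's enterprise example: ~1,000 visitors/month, a low baseline
# (assume 2%), hoping to detect a 15% relative lift.
n = sample_size_per_variant(baseline=0.02, mde_relative=0.15)
print(n)                      # tens of thousands of users per variant
print(weeks_to_run(1000, n))  # vastly longer than the three weeks people expect
```

Running the numbers makes his point concrete: at that traffic level the experiment would take years, not weeks, so the only lever left is raising the minimum detectable effect, i.e., testing much bigger changes.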

Matt Bilotti: And in that frame, it starts to become a little bit easier to look at the experiment that you wanted to run and say, there's no way we could ever possibly see the number move that much from this thing. We have to instead go bigger, like you were saying. Experiments need to be bigger. It needs to be way more-

Willie Tran: That's right, exactly. Don't fall in love with the idea of an experiment. You've got to do this stuff first, because then that'll tell you what experiments you have the affordances to design and run. Because if you have a big backlog of copy change and image change experiments, which I hope you don't... Because if you do, and then you find out that these experiments will need to run for six months, guess what? Your entire strategy is out the window over something you could have easily figured out in the first week. So do that first. And then that will dictate how you need to staff, how you need to operate, the types of experiments you're going to run, whether you should have a growth team at all, stuff like that.

Matt Bilotti: Cool. Well, we are coming up towards time here. One other question before we go ahead and wrap. So you talked a lot about creating a culture of learning and writing these questions as the framework for your experiments and all that. I feel like the answer to this could be a whole other podcast, but maybe give me a condensed version of it: how do you share those learnings? What are some of the more effective ways that you can say, we are a growth team focused on learning, here is how we empower other teams, here is how we share and document those? Are there a couple quick pointers you can give before we wrap?

Willie Tran: Yeah. Part of this is really on... It's a couple things. It depends on what your company currently has: what's the cadence of these times where your product managers have the opportunity to present to the entire company, what they want to talk about, and such. But generally speaking, with every presentation I do, literally in front of anyone, I always try to make sure there's a learning involved and that the product growth team has its association with said learning, which usually reaches the other PMs. So it's really just: present it frequently. And then, and I should have mentioned this earlier, these learnings also feed this framework that I embedded, which is: you ask questions. You answer the questions via, say, user research, which leads to defined problem statements around the user. So, as a user, let's just say, I want to understand the value of Calendly, and there are features that I want to use, but I'm not aware that they exist. Whatever. And then for every one problem you generally have a bunch of experiments, so it's a one-to-many relationship. And for each experiment you will then get learnings, which will lead to more questions. So essentially, each time you go through that cycle, you should have a learning, and then you should present said learning. But by also situating it in this framework, it helps the other product managers figure out how they can create action on this learning. Because you can pitch a learning, but if it doesn't lead to action, that's an issue, and I find that to be often the case. So this framework helps create that connection between "Oh wow, this is really cool" and "What next?" And when you present it that way to the rest of the company, it helps them connect the dots too, which is really valuable.

Matt Bilotti: Cool. Well, Willie, thank you so much for joining here. This has been a ton of fun. I love the way you think about stuff. It reframes my mind on how I approach some of this. I'm sure many people that just listened through thought the same as well. So thank you again.

Willie Tran: Thank you so much for having me. It was fun to talk about it.

Matt Bilotti: Absolutely. All right. Well, for those of you listening, thank you so much for spending your time here. We have 80-plus other episodes that you can go ahead and check out with amazing growth experts, interviews, talking about tactics, strategies, all that stuff. If you were a fan of this, hit subscribe so you can catch all the episodes in the future. I know there are so many things you could spend your time on, working on, doing, listening to, whatever. You're spending it here, listening to this, and I really appreciate it. If you are a fan, write a review, give it five stars. I'm also always open for feedback in whatever way it might be. My email's matt@drift.com. Go ahead and drop me a note. And with that, I will catch you on the next episode. Thanks.



The Notes:

  • (1:13) Why Willie is against a culture of winning
  • (6:29) How to shift a "winning" narrative into one that really wins
  • (10:20) Common problems Willie sees with growth teams today
  • (11:35) How to build a team around Willie's method of learning
  • (21:57) Experiment design
  • (31:37) The first thing you should do when figuring out your strategy
  • (34:02) How to share growth team learning with the larger organization

Like this episode? Leave a review!