Level Up Your Growth Teams with Design Research (Laura Oxenfeld, UX Research Manager at Drift)
Matt Bilotti: Hello, and welcome to another episode of The Growth Podcast. I am your host, Matt Bilotti and today I am joined by my colleague, Laura Oxenfeld. Laura, welcome to the podcast.
Laura Oxenfeld: Hey, nice to be here.
Matt Bilotti: I'm very excited to have you here. Today, we are going to cover a topic that we haven't really touched on in the podcast before, which is all about how UX research can up-level your growth team. Laura joined us how long ago? It was like a year ago.
Laura Oxenfeld: Almost one year.
Matt Bilotti: Almost one year, almost one year. Laura comes from a research background and makes some of the best research I've ever seen, spends like hundreds of hours going deep into specific personas. It made me think: even if Laura's work isn't necessarily driving growth team specific stuff, there is a ton of opportunity for us to think about and talk through how UX research and design research can inform growth team stuff. Laura's coming from a classic product research background, and I'll help contextualize it for the growth type stuff. One of the reasons I'm really excited to cover this topic today is because when I was working at HubSpot, I was working with Brian Balfour, who these days is the founder and CEO of Reforge in the world of growth. He joined as the VP of growth, and for the first 30 days, he did no experimentation. He did no building of new products or whatever it might be. His entire first 30 days with the team, and I think he had a team of like two or three full-time people, was just research. He had the team go super deep on research to understand the personas and all that. At the time, people looked at it internally and they were like, "He's researching? Isn't he a growth guy? Shouldn't he be running experiments?" But the value that came out of that was so outsized, and I see the value that comes out of it here with Laura and the product team. That's enough blabbing from me on that. I'm excited to hop on in. Laura, maybe you can give a quick background on yourself and then we'll go ahead and jump into the topic.
Laura Oxenfeld: Yeah. That's really interesting to hear at HubSpot and it just validates like my entire career, how important it is to understand the users. People that end up in UX research, we usually have different backgrounds. I am actually trained in sociology, anthropology, so qualitative research methods. I never got formal UX training. I'm a researcher at my core and then I learned about UX and translated my skills into it. And then I have worked in like agencies, enterprises, startups. All different sizes and structures and I've seen UX and user research happen many different ways with lots of different outcomes. Now, I'm here at Drift.
Matt Bilotti: Yeah. It's been amazing, the impact. I've told leadership on the team that I think Laura is one of the best hires that we made as an entire company over the past year. The first type of research that you told me that we wanted to cover here was around like generative or discovery research. Tell us about that. Let's maybe start with like, what is it and how could somebody think about it?
Laura Oxenfeld: Yeah, definitely. I think the easiest way to explain user research is that you do it in the beginning, in the middle and the end. In the beginning, you have to understand the big picture: what's the landscape? The way I like to describe it is, first, you need to understand the forest. After you understand the forest, then you can look at the individual trees. Generative or discovery research, from a user perspective, is just zooming out and understanding workflows, a day in the life, pain points, what their motivators are, their goals. I've also worked in B2B research for so many years now that all my answers might be slanted to B2B. So it's very focused on what people are doing in their jobs and what tools they're using. I feel like the industry is shifting more toward valuing that type of research, but when I started my career, everyone just wanted usability testing. By that point, all the decisions have already been made. You can design whatever, but if you don't understand who these people are and what world they're operating in, it's really hard to make informed decisions. I know that user research isn't the only thing to make a decision on. The PM balances it with lots of other data inputs. But at the end of the day, if you don't understand the big picture about the users, you're not going to understand them enough to anticipate their needs.
Matt Bilotti: Yeah. This sort of generative research, I think, is exactly what I saw Brian Balfour do: let's generate who the personas are, what they're doing, the things they care about, the tools that they're using, so that you can frame all of your experimentation and approach to trying things out in that, rather than learning by A/B testing. You learn the foundation first and then you build from there.
Laura Oxenfeld: Just to give some examples of the types of outputs and how they're used: you mentioned personas, journey maps, service design blueprints, empathy maps. Basically, the primary outputs of generative research are these UX artifacts, things that everyone in the company should have access to, because it's not just product that's going to benefit from them. Sales, marketing, customer service: everyone is going to benefit from understanding the user better. These should be the type of learnings that aren't going to change month to month. Maybe the journey or the persona will change over a couple of years, but it's worth the investment to create them because you're going to use them for, hopefully, a couple of years to come.
Matt Bilotti: Let's say we made these, all these artifacts, whether it's like a centralized research team or if it's the product manager, the designer on the team leading it for now. We'll get back to resourcing later in the podcast. But let's say those outputs are made. How can and should product teams or growth teams like leverage these learnings over time? Is it like look at it before they make any major decisions? Is it kind of like bake it in and like review it as a team? Talk to me through some of that.
Laura Oxenfeld: Yeah. That's actually a problem that I'm trying to figure out right now, which is: someone like me, who's not on the product team, goes and figures this stuff out and puts artifacts together. I can show it to people, but how do I get them to use it? It's not really just about how I get them to use it, but how I get them to understand what's inside these artifacts. I'm thinking of maybe putting together research integration workshops after sharing out these types of deliverables and insights, and really getting people to spend time with them. Like you said earlier, I might spend a hundred hours doing a multi-persona research project. Sharing it in 30 minutes or one hour, there's no way people are going to absorb that. So I have to think of ways to get people hands-on with it. Once they understand what the content is and they know where it's physically or digitally located, then it'll be easier for them to use it. But a concrete example of how I've been using a journey map recently: over the summer, I put together a journey map for one of our personas. Now we're trying to think through future-thinking storyboards, end-to-end workflows, where can we add value? I'm like, "All right, let's pull up the journey map." And I just put all my sticky notes below the slices in the journey map. And I remind people, "We have a journey map. It might need to change, but let's ground what we're thinking in the journey map." I'm just trying to always remind people that the stuff is there, but a better researcher would know how to get people to take ownership over those documents.
Matt Bilotti: Yeah. It's super, super hard. I think what's so valuable about this generative stage is that it's super evergreen content. Maybe you revisit it every year or so when market dynamics or user habits are starting to change, but I've seen some of the outputs you've made, and there are things that are going to hold true until there are major behavioral shifts in the industry. One thing that I have seen growth teams do that is worth noting here is to bake it into their process. So in their experimentation process, they have their hypothesis, they're going to try to move this number, they think they can move it this much, and they bake a section into that experimentation framework: what research is this rooted in? Have it reference out to that sort of evergreen generative research. Like you were saying, Laura, the more you can make it a part of the process, the better, and growth teams especially generally have a very, very rigorous process. If you can bake something into that process that asks, what is the research that backs up this hypothesis? Hypothesis, and then link to the research, link to the slide, link to the artifact that justifies that there is validity behind this hypothesis.
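For teams that track experiments in code or structured docs, the "hypothesis plus research link" idea above could be sketched as a simple record type. This is a hypothetical template, not anything Drift or Matt's team actually uses; all field names are illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentBrief:
    # All field names here are hypothetical; adapt to your team's template.
    hypothesis: str       # e.g. "We believe X will move Y because Z"
    target_metric: str    # the number the experiment is trying to move
    expected_lift: str    # the sized guess, e.g. "+5% activation"
    # Links to the persona, journey map, or study that backs the hypothesis.
    research_links: List[str] = field(default_factory=list)

    def is_grounded(self) -> bool:
        """A simple gate: a hypothesis with no research behind it gets flagged."""
        return len(self.research_links) > 0
```

The point of the sketch is the gate: before an experiment is prioritized, `is_grounded()` forces the question "what research is this rooted in?" into the process itself rather than leaving it to memory.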
Laura Oxenfeld: Yeah. I haven't worked with growth teams before, but it sounds really like perfect to have a researcher work with a growth team because after a big study, to just sit down with the growth team, be like, all right, what experiments are going to come out of this? That would be, how can we do that here at Drift? Just go straight from insight to experiment, that would be great.
Matt Bilotti: Yeah. Yeah. A lot of opportunity there. Okay. The next type of research that we talked about before we hopped on here was concept validation. Tell us what that is, what that means and how somebody can think about it.
Laura Oxenfeld: Concept validation is... You understand the forest level, or hopefully you do, or you have guesses about it. And then concept validation is, "Okay, I think we should do this, this and this." It's more like when your hypothesis, or your multiple hypotheses, are being formed. How that manifests in digital design is that this is where you start to get some pixels on paper, or you can have a concept that's just in the idea stage, just sticky notes. But basically, before jumping into really detailed design work, it's better to have a point in time where you stop and say, "Okay, here's where we think we should go, but let's do some user research to see..." It connects back to how, in discovery research, you're wanting to learn the workflows, the problems. In concept validation, you want to see: are these realistic scenarios that we're building these concepts around? And then have them, again, re-articulate where in their workflow this fits. Would it replace anything in their workflow? What would actually add more value? So you're really trying to zoom in on the value add of the concept before you're really investing in detailed design.
Matt Bilotti: Yeah. It's the setup. I think about how growth teams can use this. There's a tendency to jump right to the experimentation: design it, throw it out there, and like I said before, just do the A/B test. For small changes, I think that's fine, like if you're A/B testing a button or whatever. But for any big workflow type experiments, this is something that is super valuable to bake into the process. Tell us about what some of those outputs look like from that sort of work.
Laura Oxenfeld: Yeah. If you want to be nice and quick and lightweight, it can be as simple as: say you have three concepts, like three different design directions. You basically have your research plan around them, and then instead of doing deep analysis where you're producing journey maps and all that, you're just asking, "Does this concept pass or fail?" If it passes, just keep it going, and you can generally get a sense of that as you're doing the sessions. But if the concept is failing and there are a lot of eggs in that basket, then maybe it's worth it to go in and do your deep analysis of the data. If you're just throwing ideas out there and it's okay that some concepts fail, just do a simple pass/fail and move on.
Matt Bilotti: Talk to me a little bit about how... How have you seen this bake in well with product teams and kind of drive their outcomes and their work?
Laura Oxenfeld: Yeah. Especially if there hasn't been time to do generative research first, concept validation is a really good way to have some of the research act as a little discovery, mixed with the concept validation itself. Basically, I think the biggest value it can add is that it prevents people from spending too much time on something that doesn't have legs, that's not rooted in reality. Things can feel a little waterfall-y in product, where it's like, "We have an idea, and now we have to build it and put it live to see if it was a good idea or not." You actually don't have to do all that. You can figure that out in the design phase.
Matt Bilotti: Yeah. I'm just going to frame this again in the world of growth. I think the generative research is something that should always come first. If you're spinning up a growth team or you're working on a new product, the generative research is something that will guide your team and your decisions moving forward. A lot of growth teams just start with, "We have this low hanging fruit. Let's optimize this stage in the flow," or whatever. But there is a better opportunity to step back and ask, "What is our product solving, and for whom? And what are all the other problems around that?" That way, all those decisions can be guided moving forward. And then the concept validation comes in when, let's say, maybe you didn't do the generative research, and it's a more complicated thing than a one or two day experiment. Concept validation is a really great way to see: is committing to this experiment worth the next three and a half weeks of our product and engineering team's time? You can get that sort of first pass, a bit of a gate, to say, "This will take us a couple of days. Let's do some validation before we really commit to it." And then once you're committing to it, this is where iterating on the growth experiment comes in, because you can commit to the concept, build it out, spend the time, and then, because you had done concept validation, you know that you should keep experimenting on top of it, because the core concept was validated, rather than throwing it in the trash right away and moving on to something else. It gives you a better sense of what to do as follow-up.
Laura Oxenfeld: Yeah. No, I think you said it perfectly. Also, all this qualitative research can only go so far, and what people say they're going to do and what they actually do are two different things. In concept validation, you should never ask, "Would you X, or would you Y?" It's really about understanding the workflow and the problem, and their response to the proposed solution. But even though in concept validation it might seem like it's all going to be gravy, once it's out in production and behaviorally being used, it might be a different story. That's where it's important: okay, behaviorally it's not being used the right way, or it's not doing what we expected, so go back to any of your user research from before, whether it's concept or generative. If you don't have those breadcrumbs, like you said, you just kind of have to throw it away, and you're not learning which part of this we got right and which part we got wrong.
Matt Bilotti: Yeah. I love that. It's about which parts are right and which parts are wrong. And knowing that you got the core piece right can help guide all the product and experimentation decisions after that.
Laura Oxenfeld: Yeah.
Matt Bilotti: All right. The third bucket that we talked about before we hopped on the recording here is around usability testing, which you had mentioned earlier. Tell us about usability testing. What is it? How should someone think about it? When can and when should it be used?
Laura Oxenfeld: Yeah. Usability testing can be done on designs or on stuff that's out in the wild. Basically, it's just: how easy or difficult is it for people to complete tasks? Of course, we're talking about the digital space, so on this website or in this app, how easy or difficult is it to do X, to do Y, to do Z? You think through the scenarios that get the person from A to B in the design, give them the scenario, and just let them go through it and think out loud. Let them know, "If you struggle, that's how we're going to learn." We're not testing them, we're testing our design. Give them the space to feel like it's okay to "fail," and through that you learn how to tweak the design to make it easy to get through. Like I mentioned before, the industry used to over-index on usability testing. The reason that's bad is because you can make a really easy to use design that's not solving a problem anyone actually experiences, not a workflow that adds value. So yeah, usability testing is super important, but you have to make sure you're solving the right problem first. The other thing is, it's important to usability test whether you have one day of experience or 50 years of experience, because you really don't know until you usability test it with at least five users. You do five users, you get roughly 80% of the usability issues out. It's not like, "I'm experienced. I don't need to do usability testing." Everyone does.
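The five-user figure Laura cites traces to Nielsen and Landauer's problem-discovery model: the share of usability problems found with n users is 1 - (1 - L)^n, where L is the chance a single user surfaces a given problem (their published estimate is roughly 0.31, though it varies by product and task). A quick sketch, assuming that estimate:

```python
# Nielsen & Landauer's usability-problem discovery model:
#   share of problems found = 1 - (1 - L)^n
# L ~= 0.31 is their classic per-user estimate, not a universal constant.
def problems_found(n_users: int, l: float = 0.31) -> float:
    """Expected fraction of usability problems surfaced by n_users testers."""
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> {problems_found(n):.0%}")  # 5 users lands near 84%
```

The curve flattens quickly, which is why Laura's "five users, 80% of the issues" heuristic favors several small rounds of testing over one big one.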
Matt Bilotti: Yeah. I think the usability testing thing is critical here for the world of growth, because a lot of teams and a lot of growth practitioners, and I've gotten myself in this spot before, get so caught up in: all right, we released this thing, let's look at the charts, let's look at the numbers, let's see how many people are making it through to this step. Maybe the chart tells you that most people are getting through, but the usability test tells you that most people are having a hard time getting through, or they're getting through with like 40% of the context and mindset you assumed they'd have. The number says 70% of people convert from this stage to this stage, but when you do usability testing, you can learn that while 70% of people convert, only 20% of them understood what they were converting to do.
Laura Oxenfeld: Yeah. Yeah. Also, I think this is where brand comes in as well, because quantitatively, sure, 70% got through, but what chunk of those now think poorly of your brand or your product? That's something you just can't get through quantitative data alone. The other thing usability testing can measure is time on task. Yeah, they got through it, but if it took someone 10 minutes to do something and we expected it to take two minutes, that's not the best. You just need to get humans to go through it, and depending on what your growth experiments are about, like in B2B, we need our actual customers, or people in that space, to be doing the usability testing. But there are also types of usability testing where you just need a human with a pulse: someone who sits next to you in the office, or someone you can hop on a Zoom call with. That works as well.
Matt Bilotti: I'll toss out one other tip for folks listening. If you feel like you're too time-strapped or you don't have the resources to do usability testing, putting in something like a FullStory or a LogRocket that lets you watch user sessions back is so critical. It can fill in those gaps. You can watch someone click around. One thing that we've done for years on my teams is we sit down once a week as a team, the engineers, the designer, the product manager, and we watch an hour of sessions where people go through something that we recently iterated on or experimented with. We watch them click on this thing four times, or they click on something and another modal pops up that some other team built that only shows in this scenario. You catch so many things that you're not going to catch just by looking at the charts. Laura, we talked about some of this. I don't know if there are other ways to frame or talk about the outputs of what usability testing looks like.
Laura Oxenfeld: Yeah. It depends on how scrappy or robust you want to be. It can be as scrappy as debriefing with the designers after sessions and asking, "Okay, what stood out to you as a glaring usability error?" Actually, there's a method called the RITE method, rapid iterative testing and evaluation. With that, you do about three sessions and you actually update the designs before you do the next couple of sessions, so you're iterating the design as you go. You're not waiting for the full five or six people to go through it. That's more of a lightweight, conversational way of working between the researcher and the designer. But yeah, you can do as much as a full-blown report where you're taking clips of videos and quotes and everything and putting it in a slide deck. I'm guessing growth teams would index more toward the "let's just talk it through and figure out where to go from there" side.
Matt Bilotti: Yeah, absolutely. Any other notes on how you've seen product teams leverage that type of usability testing output before?
Laura Oxenfeld: It maybe doesn't apply to growth as much, unless the growth team's embedded in a place with a design system. But I'm sure growth teams, if you're doing experiments and the organization has a design system, you're veering away from it. That's a good rule of thumb: if you're just using design system components, you probably don't need to usability test it. But if you're doing some new patterns, that's when it's worth it to take the time. Also, you mentioned before, "if you feel like you don't have the time," and that always just boggles my mind, because I always say it's easier to fix ideas than it is to fix designs, and it's easier to fix designs than it is to fix code. Everyone has this perception of research slowing us down and taking time, but it's less expensive to fix stuff before code.
Matt Bilotti: Yes, absolutely. You save so much time and energy. Cool. So let's maybe wrap on the topic of resourcing and structuring research teams, or the research function. I think there are two general ways to do it, and the growth world has already thought a lot about this for data scientists and data analysts. You either have a centralized data science team where your growth teams kind of consult with them, it's like an agency model, and say, "Hey, we want to dig into this part of the data. Can you help us?" Versus having a data scientist on the team who goes to the weekly syncs and is part of the experimentation and prioritization and figuring everything out day to day. Tell me a little bit about how you think about doing research in those two different models.
Laura Oxenfeld: Yeah. There are pros and cons to both sides of being a centralized service versus embedded. The pro of being a centralized service is that we can focus on the most impactful things, which also require such a deep dive that folks with 20 things on their plate aren't going to have the mental bandwidth to do them. Central research can get you those really thorough journey maps, personas, all that generative discovery level stuff. We can do it all. But it goes back to the disconnect I was mentioning before. The more disconnected the person doing the research is from those daily conversations, daily Slack channels, all that, the less they're able to be the advocate for the user's perspective. So it works best with either a researcher that's embedded in the growth team, so they're part of all those conversations, or leveling up the people who are. I'm not territorial about research, you know this. I don't care who does the research. Anyone with the interest and the time should be the one doing it. I think it works best when the people who are in those daily conversations are running the research, and it can definitely be collaborative. But it's hard for an organization to have an embedded researcher for a bunch of teams, because there's a growth team and there's a bunch of product teams, so I would guess not just the growth team gets a researcher. Usually only mega enterprises have the "everyone gets a researcher" model. That wasn't the most eloquent way to answer the question, but it all comes down to: how might we get the people that need the information to make decisions to really embody it, to understand it to the point where they can re-articulate the insights?
Matt Bilotti: Yeah. Talk to me a little bit about how you think about training an org to value and actively resource research, because, right, you and I were talking a little bit before about teams saying, "We don't have the time for this." I'd imagine that some orgs are like, "We don't have the time for research. It's this thing that's going to take us a whole bunch of time." Talk to me about the pitch you would make to growth teams or product teams that are listening to this: why they should spend the time and effort and energy, and how they might think about that, whether it's hiring dedicated research people or building research into the design function or something like that.
Laura Oxenfeld: Yeah. My pitch for research always goes back to what I said before: it's easier to fix ideas than it is design, and easier to fix design than it is code. I'm always down to do the extra work to pull up numbers and show examples. There's a Harvard Business Review chart that I have copied and pasted so many times, that design-led organizations outperform the market by 200% or more. When you put those types of numbers down, people might slow down and listen. But in reality, how do you motivate people to do something outside of what they consider to be their normal job description? One way that's super effective: research and design are often in the same org, so if you just make it an expectation in the career ladder, like, if you want to keep getting promoted, you have to be able to perform research to help understand the user. It's easier to pull those levers on the design side. On the PM side, that's hard, because they have so much on their plate and it's a different organization. In my experience, people self-select. The people that are interested in it are interested in it, and they're going to take the time to learn. For the ones that are not, I try to really empower their designer to do it, and at least partner with their PM on, "Here's where I think we need to do some research. Let's get some time allocated for it." It's a bit of a negotiation, and just seeing where the interest lies.
Matt Bilotti: Yeah. At the end of the day, it very much seems like teams will have better outcomes and outputs if they value and resource research as a core function of how they operate and why they operate. I think a lot of growth teams would avoid bad experiments if they had better generative research. They'd be able to better inform which experiments to spend more time on through concept validation, and usability testing can tell them way more than a chart or a graph could. Well, Laura, this has been a ton of fun. Anything else you want to toss out there to close with that you haven't been able to mention, or do you feel like you got it all in?
Laura Oxenfeld: Yeah. Just in general, the sentiment in the UX research profession over the last few years has been shifting. When I got my start, it was very ivory tower. It was, "I'm the researcher. You're not a researcher. Don't try and do this, keep your hands off and just let me do it." Which is [inaudible] of an attitude. But I think reality has clicked in for the broader UX research profession, which is: some research is better than nothing. So how might we empower people to learn the basics? The thing is, research can be very dangerous if you have leading questions, or not a good sample size, or you're not analyzing the data properly. Knowing the basics is important to make sure you actually have reliable insights. Self-learning, or just learning it on the side, is good. But I think that's why it's important to pair things with behavioral analytics, because if you're not a research expert and you're doing it, you might get it wrong, because you don't know what all the right steps are. It's important to have both the analytics and the up-front research.
Matt Bilotti: Yeah, couldn't agree more. Well, Laura, thank you again so much for joining on the podcast.
Laura Oxenfeld: Thanks. This was so fun and can't wait to keep experimenting.
Matt Bilotti: I love it. I love it. Well, really appreciate it. For all of you listening, I really appreciate you spending your time here. If you liked this episode, make sure to hit the subscribe button and check out, I think it's almost 90 other episodes with amazing experts about all sorts of topics: channels for growth, strategies for growth, tactics for growth, all that. Each episode is a deep dive into a different topic. If you're a fan, definitely leave a review. Written reviews go a really long way as well. I think that's all I got for today. If you have any feedback, any thoughts, anything like that, feel free to reach out. With that, I will catch you on the next episode. Thanks.
On growth teams, we often think of testing as our main way to learn about what users care about and are most interested in. In this episode, we talk about a major untapped angle for growth teams — design research.