
How to do User Research as a Product Manager?

As a product manager, you’re constantly making decisions on what to build and how to prioritize. But without clear input from your users, even the most confident decisions can miss the mark. That’s where user research comes in. Without real user insight, you risk pouring time and resources into features no one actually wants. When you ground your product decisions in actual user needs, you make smarter bets. You reduce the risk of feature bloat. You build what matters. Research connects the dots between what users say, what they actually do, and where the biggest opportunities lie. Instead of debating based on gut feelings or opinions, you get to walk into a room and say, “Here’s what we heard directly from our customers.” That kind of clarity builds confidence across product, design, marketing, and leadership.  

User research helps you understand what your customers need, not what they say they want, not what stakeholders think they need, and not what competitors are doing. Whether you’re working on a brand-new feature or iterating onboarding, user research gives you the insight to focus on solving the right problems. It’s how you uncover friction, motivation, and opportunities that aren’t obvious from dashboards alone. User research is also a powerful way to improve product-market fit by validating whether your solution solves a meaningful problem for your target audience.  

In this blog, I’ll walk you through exactly how to do user research, from defining your goals and choosing the right method to recruiting participants, analyzing feedback, and integrating insights into your roadmap.

Step 1: Define the Right Problem to Investigate  

Before you jump into surveys or schedule your first user interview, pause for a second to understand what exactly you are trying to learn.  

The biggest mistake I see is diving into user research without a clear focus. So, start by narrowing your scope: Are you exploring a new opportunity? Or evaluating something you’ve already built?   

This is where understanding the difference between generative and evaluative research becomes crucial.  

  • Generative research is about discovery. Use it when you’re trying to uncover unmet needs, understand workflows, or explore new markets.  
  • Evaluative research helps you test ideas, mockups, features, or flows. Use it to validate what you’ve built before rolling it out.  

Let’s say you’re noticing a lot of drop-offs during onboarding. Instead of assuming it’s a UI problem, you might conduct generative interviews to explore users’ goals and motivations and what’s stopping them from completing setup. Or maybe you’re choosing between two potential features to build next. In that case, evaluative testing, like a fake door test or concept feedback session, can help you prioritize.

If you’re not sure what kind of research to run, start by framing a clear question. Ask yourself things like:  

  • What do I need to know to move forward?  
  • What decision am I trying to de-risk?  
  • What’s the worst assumption I might be making?  

Once you’ve defined your focus, it’s a lot easier to decide which methods to use, who to talk to, and what questions to ask.  

Step 2: Select the Right Research Method  

Once you’ve nailed down the problem you want to explore, the next step is choosing how you’ll get the answers. Let’s break down the most common methods and when you should use them:  

User Interviews

Use this when you want to dig deep into why users behave in a certain way. Interviews are perfect for exploring motivations, unmet needs, and decision-making processes. They work best early in the discovery phase or when you’re trying to shape feature direction.  

Tip: Keep your questions open-ended and avoid asking things like “Would you use this feature?” Focus on real behaviors and problems instead.  

Surveys  

Use surveys when you need quick input from a broad user base; think feature validation, customer satisfaction, or prioritizing pain points. They’re efficient, but less rich in nuance.  

If you’re wondering when to choose between surveys, interviews, or usability testing, this comparison guide is worth a read.  

Usability Testing  

Want to know whether your UI actually works for real humans? Usability tests give you clarity. Usability testing is especially useful before launching a new feature or redesign. You’ll see exactly where users get stuck, what they overlook, or how they misinterpret your flows.  

Product Analytics + A/B Testing  

Use behavioral data when you want to quantify patterns or validate hypotheses. Tools like Mixpanel or Amplitude tell you what users are doing.  

Sometimes, the right approach is combining several of these. For example, you might start with 5–7 interviews to spot themes, then launch a survey to validate those themes at scale, and finally A/B test a solution based on what you learned.  
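The A/B test at the end of that funnel usually comes down to a simple significance check on two conversion rates. Here’s a minimal sketch of a two-proportion z-test using only Python’s standard library; the conversion counts are made-up numbers for illustration, not data from any real experiment:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))               # 2 * upper normal tail
    return z, p_value

# Illustrative counts: variant B converts at 15% vs. A's 12%, 1,000 users each
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In practice you’d likely reach for a stats library, but the logic is the same: a p-value near your threshold tells you whether the observed lift is distinguishable from noise at your sample size.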

Step 3: Recruit the Right Users  

You could design the most brilliant interview script or run a beautifully crafted survey, but if you’re not speaking to the right people, your insights will lead you in the wrong direction.  

That’s why recruiting the right users starts with one thing: clarity around your Ideal Customer Profile (ICP). You need to know exactly who you’re building for, what industry they’re in, their role, goals, pain points, and behavioral patterns. This clarity ensures your insights reflect real users who match your product’s growth path.  

If you’re unsure how to define or validate your ICP, use frameworks like the Customer Journey Pain‑Points Framework to map out friction, motivations, and unmet needs across user segments. This helps you avoid the trap of generic personas and ensures your research stays actionable.  

Where do you find great participants?  

Once you’re clear on your ICP, it’s time to find users that match. Depending on your product stage and resources, here are a few battle-tested channels:  

  • Your CRM or product database: Reach out to users who are actively engaging, dropped off recently, or fit specific usage patterns.   
  • Customer success and sales teams: Tap into internal teams who already know your power users and churn risks.  
  • Communities: Search niche industry forums, Reddit threads like r/ProductManagement, or product-specific Discords. You’ll often find engaged users there.  

How to screen participants without bias?  

You don’t want “anyone”; you want people who represent the right stage, behavior, and mindset. Here’s how you can approach it:  

  • Ask qualifying questions based on recent behavior, not opinions (e.g., “When was the last time you used a competitor’s tool?” instead of “Would you be interested in this?”).  
  • Mix in disqualifiers. For instance, if you only want non-technical users, screen out engineers early.  
  • Use a mix of open-ended and multiple-choice responses in your screener so you can assess clarity and authenticity.  
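If you collect screener responses in a spreadsheet or form tool, the qualify/disqualify rules above are easy to sketch as code. Everything here is a hypothetical example: the field names, the 30-day behavior window, and the rough word-count check on the open-ended answer are illustrative assumptions, not a standard screener schema:

```python
# Hypothetical screener: qualify on recent behavior, disqualify roles early.
DISQUALIFIED_ROLES = {"engineer", "developer"}  # e.g. you want non-technical users

def screen(response: dict) -> bool:
    """Return True if the respondent qualifies for a session."""
    if response.get("role", "").lower() in DISQUALIFIED_ROLES:
        return False  # disqualifier fires before anything else
    # Behavior-based qualifier: used a competitor's tool in the last 30 days
    if response.get("days_since_competitor_use", 999) > 30:
        return False
    # Open-ended answer should show some substance (rough authenticity check)
    return len(response.get("describe_workflow", "").split()) >= 10

candidates = [
    {"role": "designer", "days_since_competitor_use": 12,
     "describe_workflow": ("I plan sprints, review mockups with clients, and "
                           "track feedback in a shared board every week.")},
    {"role": "engineer", "days_since_competitor_use": 3,
     "describe_workflow": "I write code."},
]
qualified = [c for c in candidates if screen(c)]
```

The point isn’t the specific rules; it’s that disqualifiers run first and qualifiers lean on recent behavior rather than stated interest.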

Step 4: Conduct the Sessions  

Once you’ve nailed down your plan and recruited the right participants, it’s time to run the actual sessions. First, let’s talk about formats.  

Moderated research: Moderated research involves you (or someone from your team) directly interacting with participants; think user interviews or live usability testing. This format gives you the flexibility to go off script, ask for follow-ups, and read between the lines. You can explore body language, emotions, and hesitation.  

Unmoderated research: Unmoderated research is more hands-off. Participants complete tasks or answer questions in their own time, using tools like Maze or Lookback. It’s faster, more scalable, and less resource-intensive, but you trade off depth for speed.   

AI-Moderated Interviews: Think of AI-moderated interviews as the middle ground between fully moderated and unmoderated research. You still get the richness of open-ended, conversational responses, but instead of a human running the interview live, AI steps in to ask questions, probe deeper, and adapt based on what the participant says. Tools like Qualz.ai offer AI-moderated interviews that adapt in real time, making it easier for product teams to gather voice-of-customer data without burning cycles on scheduling or note-taking. If you’d like to know more about human vs AI as a moderator for your user research, here’s a breakdown.

Step 5: Analyze the Data Effectively  

Now comes the part where everything clicks, or at least, it should. After you’ve gathered all that rich user input, it’s time to make sense of it. This is where good research turns into great product decisions. Whether you’re dealing with interview transcripts, usability notes, or survey responses, your job is to extract patterns, insights, and meaning without drowning in the details.

Let’s break down a few solid techniques that will help you move from raw data to real clarity.  

Thematic Analysis: Thematic analysis helps you understand the reasons behind user behaviors. You’re basically grouping feedback into themes based on patterns in what people said or how they reacted. Start with open coding: read through responses and tag meaningful chunks, then cluster those tags into themes. This method is especially helpful when you’re looking for emotional or motivational drivers, not just “did it work or not.”

If you’re short on time or need help with coding hundreds of interviews, AI analysis can jumpstart your analysis. For example, AI-powered open coding lets you process large volumes of qualitative data fast while preserving nuance.  
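To make the open-coding step concrete, here’s a toy sketch of the mechanics: each snippet gets one or more codes, and counting code frequency surfaces the dominant themes. The snippets and code names are invented for illustration; real coding is a judgment-heavy, iterative process that this deliberately oversimplifies:

```python
from collections import Counter

# Step 1: open coding — tag each meaningful chunk with one or more codes
coded = [
    ("Setup took forever, I gave up halfway.", ["onboarding_friction"]),
    ("I couldn't find where to invite my team.", ["onboarding_friction",
                                                  "discoverability"]),
    ("The dashboard finally showed me what I needed.", ["value_moment"]),
    ("Too many steps before I saw anything useful.", ["onboarding_friction"]),
]

# Step 2: clustering — count code frequency to see which themes dominate
theme_counts = Counter(code for _, codes in coded for code in codes)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} mention(s)")
```

Even at this toy scale, the tally makes the priority obvious: onboarding friction shows up three times as often as anything else.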

Layer in Strategic Frameworks: Once you’ve got the raw themes, run your findings through structured frameworks to get strategic insights and not just anecdotes.  

Use the Jobs-to-Be-Done framework to connect your insights to real user goals. It helps frame what users are trying to accomplish, rather than what they say they want. This is clutch when prioritizing features or rethinking positioning.  

Then use the Sentiment and Emotion Spectrum as a powerful way to gauge not just what users are saying, but how strongly they feel about it. Trust me, the difference between “this is annoying” and “I hate this so much I stopped using the product” is everything.  
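One lightweight way to operationalize that spectrum is a severity ladder: score each quote by the strongest signal it contains, so churn-level language outranks mild complaints. The keywords and weights below are assumptions made up for this sketch, not a standard scale:

```python
# Illustrative severity ladder for negative feedback
SEVERITY = {
    "annoying": 1,        # mild friction
    "frustrating": 2,     # repeated friction
    "hate": 3,            # strong negative emotion
    "stopped using": 4,   # churn-level signal
}

def severity_score(quote: str) -> int:
    """Score a quote by the strongest severity keyword it contains."""
    q = quote.lower()
    return max((score for kw, score in SEVERITY.items() if kw in q), default=0)

print(severity_score("this is annoying"))                                # → 1
print(severity_score("I hate this so much I stopped using the product")) # → 4
```

Sorting quotes by a score like this is a quick way to make sure the loudest pain, not just the most frequent, bubbles up in your analysis.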

If you’re looking for a deeper dive into frameworks that help interpret your data across different lenses (emotional, behavioral, thematic), Formbricks’ guide to user research methods gives a great overview of both qualitative and quantitative techniques.  

Step 6: Share Insights with Stakeholders

Once your research sessions wrap up, your real job starts: making sure the research insights drive action.

Start by pulling together your findings into a clear, digestible format. I like to include real user quotes alongside short narrative summaries and data visualizations. These humanize the data and make it stick.  

Next, connect every insight back to business goals. For example:  

  • Is this feedback tied to churn risk (retention)?  
  • Is it blocking onboarding (activation)?  
  • Does it support a monetization change (conversion)?

The more you anchor your findings to measurable outcomes, the faster you’ll get buy-in from product, design, and execs alike.  If you’re looking to enrich how you synthesize your user research data, try using lens-based analysis to examine it from different perspectives, like emotion, narrative, or motivation.    

Step 7: Integrate Research into Workflow  

To make user insights truly matter, you’ve got to embed them into your daily workflow. That means feeding relevant findings into your:  

  • Backlog grooming sessions so you prioritize features that solve real pain  
  • Roadmap planning so new initiatives are driven by evidence  
  • OKRs so customer needs show up in your team’s metrics  

But don’t stop there: build a running repository of research learnings. Think of it like a second brain for your product team. That way, when someone new joins, or when you revisit a problem space six months from now, you’re not starting from scratch.

Continuous discovery doesn’t mean constant interviews. It means you’re regularly learning something, whether from support tickets, churn analysis, or a quick feedback poll. Make it a habit, not a project.  This mindset is especially critical in fast-moving SaaS teams, where decisions happen quickly and the cost of being wrong is high.   

Conclusion  

If there’s one takeaway here, it’s this: user research isn’t a phase. It’s not something you “slot in” when you have time or budget. It’s a product muscle, and the more consistently you train it, the stronger your decisions get.

Whether you’re building a V1, iterating on onboarding, or deciding which feature to kill, your edge doesn’t come from guessing better. It comes from listening better. From asking the right questions, talking to the right users, and actually acting on what you learn. If this guide helped you rethink how you approach research or if you’ve got frameworks, tips, or lessons of your own, I’d genuinely love to hear them.