Part V — Execution · Chapter 10

Research Operations & Planning

How to plan, resource, recruit for, and operationalize product research — from method selection and screener design to stakeholder management, agile integration, ethics, and budgeting.

You will learn

  • Recruiting participants without bias (and without begging).
  • Choosing the right method for the right question.
  • Building screeners that filter without leaking.
  • Integrating research into agile workflows without making it a tax.


Good research is not just good questions. It is good logistics. The most insightful interview guide in the world is worthless if you recruit the wrong participants, alienate your stakeholders, or run out of budget in week three. Research operations — the systems, processes, and habits that make research repeatable — are what separate teams that do research once from teams that learn continuously.

Choosing the Right Method

The Research Method Selection Matrix

Not every question deserves the same method. The most common operational mistake is reaching for the tool you know (usually surveys or usability tests) regardless of what you actually need to learn.

The method selection matrix maps two dimensions: stage of product and type of insight needed.

The matrix, with insight types defined as Generative (what problems exist?), Evaluative (does this work?), Descriptive (what's happening?), and Causal (why is it happening?):

  • Pre-product · Generative: customer interviews, contextual inquiry, diary studies · Evaluative: concept tests, painted-door tests · Descriptive: market analysis, competitor analysis · Causal: jobs-to-be-done interviews
  • Early product · Generative: problem interviews, ethnography · Evaluative: prototype usability tests, Sprint testing · Descriptive: analytics baseline, surveys · Causal: A/B tests, funnel analysis
  • Growth stage · Generative: continuous interviews, unmet-need mining · Evaluative: live usability tests, tree tests · Descriptive: behavioral analytics, NPS/CSAT · Causal: cohort analysis, regression
  • Mature product · Generative: expansion interviews, churn interviews · Evaluative: preference tests, benchmarking · Descriptive: large-scale surveys, segmentation · Causal: multivariate tests, causal inference

The key principle: generative methods come before evaluative methods. You cannot test a solution you have not yet designed, and you cannot design well for a problem you do not yet understand.
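To make the matrix queryable during planning, it can be encoded as a small lookup table. A minimal Python sketch; the cell contents are copied from the matrix above, and only a few cells are shown:

```python
# Hypothetical encoding of the method selection matrix.
# Keys: (product stage, insight type) -> candidate methods.
MATRIX = {
    ("pre-product", "generative"): ["customer interviews", "contextual inquiry", "diary studies"],
    ("pre-product", "evaluative"): ["concept tests", "painted-door tests"],
    ("early product", "generative"): ["problem interviews", "ethnography"],
    ("early product", "causal"): ["A/B tests", "funnel analysis"],
    ("growth stage", "descriptive"): ["behavioral analytics", "NPS/CSAT"],
    ("mature product", "evaluative"): ["preference tests", "benchmarking"],
    # ...remaining cells follow the same pattern
}

def candidate_methods(stage: str, insight: str) -> list[str]:
    """Return the matrix cell, or an empty list for an unknown combination."""
    return MATRIX.get((stage.lower(), insight.lower()), [])
```

A lookup like this keeps the team honest during step 2 of planning: you name the stage and the insight type first, and only then see which methods are on the menu.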

When to Use Which Method

A quick decision guide for the most common methods:

  • Interviews — When you need to understand motivations, mental models, pain points, or context. Best for "why" and "how" questions. Start here when you are unsure what to build. See Customer Interviews.
  • Usability tests — When you have a prototype or live product and need to know whether people can accomplish tasks. Best for "can they" questions. See Usability Testing.
  • Surveys — When you need to quantify something you already understand qualitatively. Never use surveys to discover problems; use them to measure known ones.
  • Analytics — When you need to know what people actually do (as opposed to what they say they do). Complements every other method.
  • Contextual inquiry — When the environment matters as much as the task. Best for understanding workflows, workarounds, and implicit knowledge that people cannot articulate in an interview room.

The cardinal sin is using a quantitative method to answer a qualitative question, or vice versa. Surveys do not tell you why. Interviews do not tell you how many.

The Six-Step Research Process

Erika Hall, in Just Enough Research, outlines a clean, repeatable process that applies whether your study takes two days or two months:

  1. Define — Articulate the question you are trying to answer. Not "learn about our users" but "understand why trial users abandon onboarding at step 3." A vague question guarantees vague findings.
  2. Select — Choose the method that fits the question (see matrix above). Resist the temptation to default to what is comfortable.
  3. Plan — Write a research plan: goals, participants, screener, guide, timeline, logistics. Even a one-page plan forces clarity.
  4. Collect — Execute the study. Conduct the interviews, run the tests, deploy the survey. This is the step everyone thinks is "research," but it is only one-sixth of the work.
  5. Analyze — Synthesize raw data into patterns, themes, and insights. This is where the value is created. See Affinity Mapping.
  6. Report — Communicate findings to the people who need to act on them. The format matters: a 40-page PDF that nobody reads is worse than a 5-minute walkthrough at standup.

The most neglected steps are 1 and 6. Teams jump into data collection without defining the question and then fail to communicate what they learned. Both failures waste the effort in between.
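Step 3's one-page plan can be as lightweight as a structured record the team fills in before any session is scheduled. A sketch with illustrative field names; the vague-question guard is a crude heuristic of my own, not part of the process:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    """One-page plan from step 3: goal, method, participants, timeline, logistics."""
    question: str              # step 1: the specific question, not "learn about our users"
    method: str                # step 2: the method matched to the question
    participant_criteria: list[str] = field(default_factory=list)
    timeline_days: int = 10
    logistics: str = ""

    def is_specific(self) -> bool:
        # Crude guard against the vague questions the chapter warns about.
        vague = {"learn about our users", "understand users", "get feedback"}
        return self.question.lower().strip() not in vague

plan = ResearchPlan(
    question="Why do trial users abandon onboarding at step 3?",
    method="customer interviews",
    participant_criteria=["started trial in last 30 days", "abandoned at step 3"],
)
```

Even this much structure forces the team to write down the question before collecting data, which is exactly the discipline step 1 demands.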

Recruiting Participants

Recruiting is the unglamorous foundation of research quality. Talk to the wrong people and your findings are noise.

Criteria-First Screener Design

Start with the characteristics that define your target participant, not with demographics. A screener for a B2B invoicing tool should filter on "manages accounts payable for a company with 50+ vendors" — not "female, 25-40, college-educated."

Build your screener in layers:

  1. Must-have criteria — Behaviors or attributes that define the target (e.g., "has switched project management tools in the past 12 months").
  2. Nice-to-have criteria — Variation you want in the sample (e.g., "mix of small and mid-size companies").
  3. Disqualifiers — People who would contaminate the data (e.g., employees of competitors, professional survey-takers, people in UX/design roles who will critique your interface instead of using it).
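The three layers translate directly into a filter. A sketch with hypothetical answer-field names; the example criteria mirror the ones above:

```python
def screen(answers: dict) -> str:
    """Apply the screener layers in order; field names are illustrative.

    Returns 'disqualified', 'rejected', or 'qualified'.
    """
    # Disqualifiers first: they contaminate the data regardless of other answers.
    if answers.get("works_for_competitor") or answers.get("role") in {"UX", "design"}:
        return "disqualified"
    # Must-have behavioral criteria define the target.
    if not answers.get("switched_pm_tool_last_12_months"):
        return "rejected"
    # Nice-to-have criteria (e.g. company-size mix) shape the sample as a
    # whole, so they are handled at sampling time, not per participant.
    return "qualified"
```

Note the ordering: a participant who trips a disqualifier is out even if they meet every must-have.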

Behavior-Based Screening

Screen on what people do, not what they are. Demographics are a weak proxy for behavior. Two 35-year-old product managers may have radically different workflows. Ask about frequency of the relevant behavior, recency, and specifics. "How many times in the last month did you..." is a better screener question than "Are you interested in...".

The "Oh No" Test

Jake Knapp describes a simple gut-check for screener quality in Sprint: after you have written your screener, imagine the worst possible participant who could pass it. If your reaction is "Oh no, this person would completely waste our session," your screener has holes. Tighten the criteria until the worst-case participant is still someone you can learn from.

Blind Screeners

When recruiting through existing networks or customer lists, consider blind screening: do not reveal what the study is about in the screener itself. If you say "We're looking for people who struggle with invoicing," you will attract people who want to complain about invoicing — a self-selected, biased sample. Instead, screen on the behavior without telegraphing your hypothesis.

Building Panels and Automating Recruiting

Research Panels for Ongoing Access

If you plan to do research continuously (and you should), investing in a participant panel pays for itself quickly. A panel is a pre-screened pool of people who have opted in to participate in future studies.

To build one:

  1. Add a recruitment question to your product's onboarding flow or settings page: "Would you be willing to participate in occasional research sessions?"
  2. Capture key behavioral and firmographic data at opt-in.
  3. Store and manage the panel in a simple database or spreadsheet — nothing fancy is needed to start.
  4. Rotate participants to avoid "panel conditioning" (people who do too many studies start behaving like professional testers, not real users).
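Step 4's rotation rule can be automated with a couple of thresholds. A sketch; the quarterly cap and cooldown values are illustrative defaults, not prescriptions, and should be tuned to your panel size:

```python
from datetime import date
from typing import Optional

def eligible_panelists(panel: list[dict], max_per_quarter: int = 2,
                       cooldown_days: int = 30,
                       today: Optional[date] = None) -> list[dict]:
    """Rotate the panel: skip anyone over the quarterly cap or inside the cooldown.

    Each panelist dict carries 'sessions_this_quarter' and
    'last_session' (a date, or None if never interviewed).
    """
    today = today or date.today()
    out = []
    for p in panel:
        if p.get("sessions_this_quarter", 0) >= max_per_quarter:
            continue  # panel-conditioning guard
        last = p.get("last_session")
        if last and (today - last).days < cooldown_days:
            continue  # still cooling down
        out.append(p)
    # Never-interviewed panelists sort first, then least recently used.
    return sorted(out, key=lambda p: p.get("last_session") or date.min)
```

Running a query like this before each study spreads sessions across the pool, so no one drifts into behaving like a professional tester.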

Automated Recruiting for Continuous Interviewing

Teresa Torres, in Continuous Discovery Habits, emphasizes that the biggest barrier to continuous interviewing is the logistics of recruiting. Remove that barrier by automating:

  • In-product intercepts — Trigger a short prompt ("Got 15 minutes for a quick chat? We'd love your feedback") at meaningful moments: after completing a key task, after hitting an error, after being active for N sessions.
  • Scheduling links — Use tools like Calendly or Reclaim to let participants self-schedule. Embed the link in emails, in-product prompts, and support interactions.
  • Customer-facing team referrals — Train support, sales, and customer success to refer interesting cases. Give them a one-click form.

The goal is a steady stream of 1-2 interviews per week without manual recruiting effort for each one. When the pipeline is automated, the team's only job is to show up and listen.
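The intercept logic above amounts to a simple predicate evaluated at meaningful moments. A sketch with assumed field names and thresholds; the re-ask cooldown is my addition to keep the prompt from becoming a nag:

```python
def should_intercept(user: dict, min_sessions: int = 5,
                     reprompt_after_days: int = 90) -> bool:
    """Decide whether to show the in-product research prompt.

    Trigger moments follow the list above: a completed key task, a recent
    error, or N active sessions. Thresholds are illustrative.
    """
    # Never re-prompt someone we asked recently, whatever the trigger.
    if user.get("days_since_last_prompt", 10**6) < reprompt_after_days:
        return False
    return (
        user.get("completed_key_task", False)
        or user.get("hit_error", False)
        or user.get("session_count", 0) >= min_sessions
    )
```

Wire a predicate like this to your in-product prompt and a self-scheduling link, and the pipeline runs without per-study recruiting effort.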

Managing Stakeholders

Stakeholder Interviews

Before you research customers, research your stakeholders. Stakeholder interviews at the start of a project serve three purposes:

  1. Surface assumptions — Every executive, PM, and engineer carries implicit beliefs about the customer. Making those beliefs explicit gives your study a clear target: which assumptions carry the most risk?
  2. Align on goals — Different stakeholders want different things from research. The VP of Sales wants competitive intelligence; the CPO wants usability data. Understanding these expectations up front prevents the "that's not what I asked for" reaction at the end.
  3. Build buy-in — People support what they help create. A stakeholder who contributed to the research plan is far more likely to act on the findings.

Understanding the Decider

In the Sprint methodology, the Decider is the person with the authority and accountability to make the final call. Every research effort needs clarity on who the Decider is. It is not always the most senior person in the room — it is the person who owns the outcome.

Know your Decider before you start. Present findings in terms that matter to them. If the Decider cares about revenue, frame insights around willingness to pay and churn risk, not usability heuristics.

Working with Agile Teams

Integrating Research into Sprints

Research and Agile development are not natural allies. Sprints demand predictability; research demands flexibility. But they can coexist with the right structure:

  • Decouple research from the sprint cadence. Research is continuous, not sprint-bound. Do not try to start and finish a study within a single sprint.
  • Feed insights into sprint planning. The output of this week's research informs next sprint's backlog. Maintain a "research insights" channel (Slack, Notion, or whatever your team uses) that the PM reviews before planning.
  • Use "research spikes." When the team encounters a question that blocks a decision, allocate a time-boxed spike: 2-3 days of focused research to answer that specific question.
  • Demo research findings at sprint reviews. Treat insight delivery the same way you treat feature demos. Show a 2-minute clip from an interview. Read a quote. Make the customer's voice present in the room.

The Product Trio Model

Torres argues that discovery is not the PM's job alone — it belongs to the product trio: one product manager, one designer, and one engineer, working together as a unit. The trio interviews customers together, maps the opportunity space together, and makes prioritization decisions together.

Why the trio matters operationally:

  • Shared context eliminates handoffs. When the engineer hears the customer's frustration firsthand, you do not need a 10-page requirements document to explain the "why."
  • Diverse lenses catch more. The PM hears business risk, the designer hears usability pain, the engineer hears technical constraint. Same conversation, three complementary interpretations.
  • Faster decisions. When all three roles share the same evidence, alignment happens in the room, not in a three-week review cycle.

Research Ethics

Ethics are not a bureaucratic checkbox. They are a practical necessity: mistreat a participant once and you lose that person (and their network) forever.

Core Principles

  1. Informed consent — Tell participants what the session involves, how long it will take, what data you will collect, and how it will be used. Get explicit agreement. For recordings, get separate consent.
  2. Confidentiality — Anonymize data by default. Use participant IDs instead of names in your notes. If you share video clips internally, get permission first.
  3. Right to withdraw — Participants can stop at any time, for any reason, without penalty. Make this clear up front and mean it.
  4. No deception — Do not mislead participants about the purpose of the study. You can withhold your hypothesis (to avoid bias), but do not lie.
  5. Fair compensation — Pay participants for their time. The amount should reflect the time commitment and their professional level. Underpaying signals that you do not value their contribution.

Ethical Checklist

Before any study, verify:

  • Consent form is written in plain language
  • Data storage plan is defined (where, how long, who has access)
  • Incentive is appropriate and clearly communicated
  • Vulnerable populations have additional protections if applicable
  • Team knows the protocol for a participant who becomes distressed
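If you run studies often, the checklist can live in code so a study cannot launch with an item unchecked. A minimal sketch; the item names simply mirror the checklist above:

```python
# Items correspond one-to-one with the pre-study checklist.
PRE_STUDY_CHECKLIST = [
    "consent_form_plain_language",
    "data_storage_plan_defined",
    "incentive_communicated",
    "vulnerable_population_protections",
    "distress_protocol_known",
]

def ready_to_run(checks: dict) -> tuple[bool, list[str]]:
    """Return (ready, missing items); only truthy entries count as done."""
    missing = [item for item in PRE_STUDY_CHECKLIST if not checks.get(item)]
    return (not missing, missing)
```

Gating session scheduling on `ready_to_run` turns the ethics checklist from a document nobody opens into a step nobody can skip.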

Budgeting for Research

Research does not require a large budget. It requires a nonzero budget.

The "$50 Facebook Ad" Experiment

Bill Aulet, in Disciplined Entrepreneurship, describes using small, targeted ad spends to test demand before building anything. Run a Facebook or Google ad for your value proposition, point it to a landing page, and measure click-through and sign-up rates. Total cost: $50-200. Total time: 48 hours. You now have quantitative signal on whether your messaging resonates with your target market. This is not a replacement for interviews, but it is a powerful complement.
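The two rates the ad test yields are straightforward to compute: click-through rate (clicks per impression) and sign-up rate (sign-ups per click). A sketch; the numbers in the test are hypothetical:

```python
def ad_test_summary(impressions: int, clicks: int, signups: int) -> dict:
    """Summarize a small paid-ad demand test.

    ctr = clicks / impressions; signup_rate = signups / clicks.
    Guards against division by zero for very small spends.
    """
    ctr = clicks / impressions if impressions else 0.0
    signup_rate = signups / clicks if clicks else 0.0
    return {"ctr": round(ctr, 4), "signup_rate": round(signup_rate, 4)}
```

Compare the resulting rates across two or three message variants rather than judging any single number in isolation; at a $50-200 spend, the sample is small and only large differences are meaningful.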

Incentive Structures

Participant incentives are not optional for most studies. Guidelines:

  • Consumer studies: $50-100 for a 60-minute session. Gift cards (Amazon, Visa) are universal.
  • B2B / professional studies: $100-300+ depending on seniority. For executives, sometimes a donation to charity in their name works better.
  • Unmoderated tests: $10-25 for a 15-minute task.
  • Panel members: Consider a points system or quarterly drawings for ongoing engagement.

Pay promptly. Nothing kills your panel faster than slow incentive delivery.
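If you want consistent payouts across studies, the guidelines can be encoded as a lookup. A sketch; the base amounts are midpoints of the ranges above, and the pro-rating by session length is my own simplification (rates assume a 60-minute baseline, 15 minutes for unmoderated):

```python
def suggested_incentive(study_type: str, minutes: int, seniority: str = "ic") -> int:
    """Map study type, session length, and seniority to a dollar amount.

    study_type is one of 'consumer', 'b2b', or 'unmoderated'.
    """
    if study_type == "unmoderated":
        # $10-25 band for short unmoderated tasks.
        return max(10, min(25, round(25 * minutes / 15)))
    base = {"consumer": 75, "b2b": 150}[study_type]  # midpoints of the ranges
    if study_type == "b2b" and seniority == "executive":
        base = 300  # or a charity donation in their name, per the note above
    return round(base * minutes / 60)
```

A fixed table like this also makes incentives auditable: finance sees one rule, not a per-study negotiation.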

Remote Research Logistics

Remote research is now the default for most teams. Operational considerations:

  • Platform: Use whatever video tool your participants already have (Zoom, Google Meet, Teams). Do not make them install something new.
  • Recording: Always get consent. Use built-in recording plus a backup (screen capture software). Transcription services save hours of analysis time.
  • Time zones: Offer sessions across at least two time-zone windows. International research requires more.
  • Tech check: Send participants a brief "what to expect" email with a link to test their setup 24 hours before the session.
  • Backup plan: Have a phone number ready in case video fails. It always fails for someone.

Common Excuses and How to Counter Them

Teams that do not do research always have reasons. Here are the most common and their rebuttals:

  • "We don't have time." — A single 30-minute interview takes less time than the meeting where you debate what to build based on opinions.
  • "We already know our users." — Then prove it. Write down your top three assumptions and test them. If you are right, you have lost nothing. If you are wrong, you have saved months.
  • "Our market is too niche to find participants." — If you can sell to them, you can talk to them. Use LinkedIn, industry forums, trade shows, your own customer list.
  • "Research will slow us down." — Building the wrong thing slows you down. Research accelerates the right thing.
  • "We'll just launch and iterate." — Iteration requires signal. Without research, you are iterating on noise.
  • "Executives won't listen to research." — Frame findings as risk reduction and revenue impact, not "user said." Start small, show a win, and expand.
  • "We can't afford it." — You can afford five customer conversations this week. That is research.

The real answer to every excuse is the same: start smaller than you think you need to. One conversation is infinitely better than zero.

Putting It All Together

Research operations is not about building a research department. It is about building systems that make learning from customers a natural, low-friction part of how your team works. The components reinforce each other: automated recruiting feeds continuous interviewing, stakeholder alignment ensures findings get used, agile integration ensures insights reach the people writing code, and ethical practice ensures participants keep showing up.

Start with the bottleneck. For most teams, that bottleneck is recruiting. Fix that first, and the rest follows.


What Qualz.ai does here

Qualz.ai handles recruiting, scheduling, incentives, screening, and consent in one place — so research ops becomes a toggle, not a second job.
