Purpose
The quality of your research depends on talking to the right people. A screener ensures that every participant matches the profile you need while preventing savvy respondents from guessing the "right" answers to get selected (and paid). Good screener design is the single highest-leverage activity in research recruiting because a flawed screener contaminates every interview that follows.
When to Use
- Before any round of user interviews, usability tests, or design sprint testing.
- When recruiting through a panel, social media, or any channel where you cannot pre-verify participant fit.
- When your target participant has specific behavioral or experiential criteria that are not obvious from demographic data alone.
- When refreshing an existing screener that has been letting through poor-fit participants.
Steps
- Define your participant profile. Write out the specific characteristics of your ideal participant. Go beyond demographics: specify behaviors, experience levels, recency of relevant activity, and tools used. Draw on your customer slicing work if available.
- List your must-have criteria. Separate absolute requirements (disqualifying if not met) from nice-to-haves. Typical must-haves include: relevant role or responsibility, recent experience with the behavior you are studying, not employed by a competitor, and not in the UX/market-research industry (to avoid professional participants).
- Write screening questions. For each criterion, write a question that tests for it without revealing what you want. Use multiple-choice with plausible distractors. Example: instead of "Do you manage a team?", ask "Which of the following best describes your role?" with options including individual contributor, team lead, manager, director, and executive.
- Blind the qualifying answers. Ensure the correct answer is not obvious. Place qualifying options in the middle of the list rather than first or last. Include at least three to five answer choices for each question so that guessing is impractical. Never use yes/no questions for critical criteria, since they have a 50% guess rate.
- Add a red herring question. Include one or two questions that look important but are not actually part of your selection criteria. This makes it harder for respondents to reverse-engineer which answers qualify them.
- Include a free-text question. Add one open-ended question (e.g., "Briefly describe the last time you [relevant activity]"). This serves two purposes: it filters out low-effort respondents who write one word, and it gives you a preview of whether the person can articulate their experience in an interview.
- Keep it short. A screener should take three to five minutes to complete. More than ten questions causes drop-off and signals that you are trying to do too much in the screener instead of in the interview itself.
- Test the screener internally. Have two colleagues complete the screener: one who matches your profile and one who does not. Verify that the right person qualifies and the wrong person does not, and check that the qualifying answers are not guessable.
- Set quotas. If you need diversity across sub-segments (e.g., three enterprise users and three SMB users), define quotas upfront so you do not accidentally fill all slots with one type.
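The selection logic the steps above describe can be sketched as a small qualification function. Everything here is a hypothetical illustration: the field names, the qualifying role options, the recency windows, and the quota keys are invented for the example, not a ready-made tool. Note that the red herring question deliberately never appears in the logic.

```python
# Sketch of screener qualification logic. All criteria, option lists,
# and quota keys below are hypothetical examples.

quotas = {"enterprise": 3, "smb": 3}   # target slots per sub-segment
filled = {"enterprise": 0, "smb": 0}   # slots filled so far

def qualifies(resp: dict) -> bool:
    """Apply the must-have criteria; the red herring answer is ignored."""
    # Blinded role question: only some options qualify, and in the survey
    # itself those options sit mid-list, not first or last.
    if resp["role"] not in {"team lead", "manager"}:
        return False
    # Recency of the behavior under study.
    if resp["last_used_tool"] not in {"past week", "past month"}:
        return False
    # Professional-participant and competitor exclusions.
    if resp["industry"] in {"ux research", "market research"}:
        return False
    if resp["employer_is_competitor"]:
        return False
    # Free-text effort filter: one-word answers are a red flag.
    if len(resp["describe_last_time"].split()) < 10:
        return False
    return True

def admit(resp: dict) -> bool:
    """Admit a qualifying respondent only if their segment quota is open."""
    if not qualifies(resp):
        return False
    segment = resp["segment"]
    if filled[segment] >= quotas[segment]:
        return False
    filled[segment] += 1
    return True
```

This also makes the blinding arithmetic concrete: a yes/no question can be guessed half the time, while three critical questions with five plausible options each drop the blind-guess rate to 0.2 × 0.2 × 0.2, under one percent.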
Tips
- Screen for behavior, not identity. "I am someone who cares about productivity" is an identity claim anyone will agree with. "I used a task management tool in the past week" is a verifiable behavior. Always prefer behavioral criteria.
- Disqualify generously. It is better to reject a borderline participant and keep recruiting than to include someone who does not quite fit. One off-target interview wastes more time and budget than the delay of finding a replacement.
- Reuse and iterate. Maintain a library of proven screening questions for your common participant profiles. After each study, note which questions successfully predicted good participants and which let poor fits through, then refine.
Source
- Knapp, J., Zeratsky, J., & Kowitz, B. Sprint (Friday recruiting and screener design for sprint tests).
- Hall, E. Just Enough Research (recruiting best practices and screener methodology).
- Portigal, S. Interviewing Users / PRR (blind screener design and participant quality).