Not long ago, I was giving a demo to a potential client. As part of the session, I showed them a mock-up survey that featured several multiple-choice questions.
During the demo, the client paused at one particular question and asked, “What if none of the options provided apply to me?”
I froze for a second, caught off guard. It wasn’t just a moment of critique; it was a moment of clarity. I froze not because I hadn’t heard the question before, but because I had felt it myself, many times. I often think the same thing when I fill out poorly designed surveys. Instead of feeling like my opinion matters, I feel cornered into picking the “least wrong” or “closest” option. It’s a frustrating experience, one that leaves me feeling dismissed rather than seen. Instead of being empowered to express myself, I feel pressured to submit to someone else’s framing of my reality. The interaction becomes less about expression and more about compliance.
My instinctive response was, “Well, maybe choose the next closest option.” But inside, I cringed. This was not how I would have designed the survey, and it reminded me, sometimes painfully, of the very problem I’ve been vocal about for years.
That moment encapsulates a much larger issue with traditional survey design: the assumption that we, as survey designers, know what participants might say—or worse, what they should say. Those predefined options reflect our thinking, not the respondent’s lived reality. The choices we provide are shaped by our perspectives, biases, and expectations. But surveys, by their very nature, should center around participants, not researchers. They should be listening tools, not controlling mechanisms. And yet, the traditional model reverses this logic.
I’ll give you another example. I once worked with a client who ran feedback surveys every week; they were in the event business and constantly evaluating their sessions. As expected, their surveys were filled with multiple-choice questions. But at the end of each one, they included a few open-ended questions.
Curious, I asked, “How do you analyze the open-ended responses?” Without blinking, the client replied, “Oh, we just read through the spreadsheet and take mental notes.”
This response was illuminating for two reasons. First, it exposed the unreliability of informal data handling. If you’re dealing with fewer than ten responses, maybe mental notes are fine. But when the response volume is high and the data is rich, how do you retain and recall all that insight without a systematic process? How do you make sense of recurring themes or patterns across weeks or months?
Second, and more importantly, it highlighted a fundamental misunderstanding between reading and analyzing. Reading is passive. Summarizing is reactive. But analysis is active, systematic, and interpretive. To analyze open-ended responses means engaging deeply with the text, identifying recurring patterns, contradictions, underlying sentiments, and themes that aren’t explicitly stated but emerge in subtext. That’s the real gold, the “between-the-lines” meaning that structured surveys miss entirely. But that requires more investment in terms of time and resources.
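To make that distinction concrete, here is a minimal sketch, in Python, of what even a rudimentary systematic pass over open-ended responses might look like. The responses and theme keywords below are invented for illustration; a real analysis would go far deeper than keyword matching, but even this beats mental notes over a spreadsheet:

```python
from collections import Counter

# Toy data: invented open-ended event-feedback responses, for illustration only.
responses = [
    "The pacing felt rushed, but the speaker was engaging.",
    "Loved the speaker. The venue was too cold, though.",
    "Content was great; the session felt rushed near the end.",
    "Hard to hear in the back of the venue.",
]

# Hypothetical theme keywords a researcher might define after a first read.
themes = {
    "pacing": ["rushed", "pacing", "too fast"],
    "speaker": ["speaker", "presenter"],
    "venue": ["venue", "cold", "hear"],
}

def tally_themes(texts, theme_keywords):
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for theme, keywords in theme_keywords.items():
            if any(kw in lowered for kw in keywords):
                counts[theme] += 1
    return counts

print(tally_themes(responses, themes))
```

Even this crude tally makes week-over-week patterns visible and repeatable, which is exactly what reading-and-remembering cannot do.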
I’ve always believed the most powerful question in any survey is the one that invites participants to speak in their voice. And yet, this is the question that receives the least attention and the weakest analysis. It’s ironic and quite frustrating at times. Especially now, in an era defined by artificial intelligence, large language models, and conversational interfaces, we have the tools to go beyond the checkbox. We can finally empower people to answer in their terms. So why do we still default to the old methods?
It’s time to rethink what we’re doing, and more importantly, who we’re doing it for. Let’s dive into this paradox.
The Problem with Predefined Choices
Imagine being asked: “Why do you enjoy your job?” and being given these options:
- A. The salary
- B. The people
- C. The flexibility
- D. The learning opportunities
What if your real answer is “I feel like I’m making a difference” or “It’s challenging in a good way”? Your authentic perspective is lost, not through any lack of willingness to share, but through the survey’s inability to hear. A question like this would clearly yield richer insights if it were open-ended.
Predefined choices box the respondent in.
They silence the unexpected, the subtle, and the emotionally charged. They prioritize data standardization over depth of understanding. And in doing so, they favor the surveyor’s assumptions over the respondent’s reality.
Isn’t it obvious, then, that we’d get richer insights if people were free to express themselves in their own words? It should be. And yet, multiple choice reigns.
Why Do We Still Use Traditional Surveys?
If open-ended responses offer richer, more authentic insights, and make people feel heard rather than processed, then why are we still clinging to survey designs that limit expression? It’s a question I keep returning to, especially after moments like that demo, where a client’s candid pushback revealed more truth than a thousand checkbox surveys ever could. Despite everything we know about the power of participant voice, traditional surveys haven’t just survived. They’ve become the norm. It’s not because they’re ideal. It’s because they’re familiar. Easy to administer. Easy to summarize. Easy to check off a list and say, “We did the research.”
But this comfort comes at a cost: oversimplified data, muted insights, and participants who feel more like subjects than stakeholders. So what’s behind this ongoing loyalty to outdated formats? Here are some reasons traditional surveys persist, despite their limitations:
1. Speed and Scale
Multiple-choice questions are quick to answer and easy to analyze. In a world driven by deadlines and dashboards, researchers often prioritize efficiency over richness. Open-ended answers take longer to process, especially at scale.
2. Quantifiability
Decision-makers crave numbers. Bar graphs, heat maps, and KPIs all stem from clean, quantifiable data. A 78% satisfaction rate derived from checkboxes is easier to present than 3,000 nuanced, open-ended responses that don’t fit neatly into a spreadsheet.
3. Cognitive Load (For Respondents)
It’s easier for participants to click a box than to articulate their thoughts in a few sentences. Especially in long surveys or busy contexts, people gravitate toward low-effort responses.
4. Comparability
Surveys often aim to compare responses across periods, demographics, or segments. Standardized questions with fixed options make this easier, but at the cost of depth and evolving understanding.
5. Technological Limitations (Until Recently)
For decades, there was no practical way to process large volumes of open-ended text. Natural language processing was either too rudimentary or too expensive. That’s changing, but the inertia remains.
The Hidden Cost of Simplicity
Every checkbox carries a hidden assumption: we already know the range of possible answers. This is a dangerous premise in any research context, but especially in exploratory research, product development, or social inquiry, where the unknown unknowns matter most. By presenting a fixed menu of choices, we are not just organizing information; we’re narrowing it. We’re filtering human experience through a lens that may or may not reflect the reality of the people we’re trying to understand.
Let’s go back to the demo I mentioned earlier. The moment the client asked, “What if none of the options apply to me?” they inadvertently pointed out this very flaw. That question wasn’t just about a specific survey. It was a critique of the entire design philosophy underpinning most surveys: the assumption that we’ve already anticipated the full range of human response. That assumption is not only limiting. It’s deeply flawed.
What do we lose in this process? We lose the voice of the outlier, particularly the person whose experience doesn’t fit neatly into categories. We lose cultural nuance, emerging trends, and context-specific insights that weren’t on our radar. We lose emotion, contradiction, and surprise. We lose the opportunity to have our assumptions challenged.
We also lose trust. When respondents can’t find themselves in the options presented, they feel reduced to abstractions. They may disengage or choose the “closest” option out of obligation rather than truth. And when that happens, the data we collect is not just shallow. It is distorted.
Most critically, we risk reducing complex human truths into oversimplified metrics. In doing so, we undermine the very purpose of research: to illuminate the human condition, not to flatten it. Simplicity, when weaponized against complexity, becomes a form of intellectual laziness. It’s tempting because it offers clean dashboards and quick summaries. But it also blinds us to what truly matters: what people think, feel, and experience. When simplicity becomes the goal rather than the tool, we sacrifice authenticity for convenience. And that is a cost too high for any researcher to ignore.
Rethinking the Future of Surveys
What if we didn’t have to subscribe to the traditional way of doing surveys and could still reap all the “assumed” benefits of the old way? What I mean is: what if we made surveys more conversational, gave more autonomy to respondents, and still kept them simple, efficient, and quantifiable?
By leveraging advanced generative AI, we can make that happen. In fact, at Qualz.ai, we are already doing it. The idea is to make surveys engaging rather than boring and shallow, and, most importantly, to preserve the participant’s autonomy to respond, allowing them to express themselves in their own words rather than being limited to the researcher’s. In other words, not just to make participants feel seen and heard, but to genuinely listen to and value their input. That is when we get richer insights and increased participant engagement, something traditional surveys have always struggled with. With today’s advancements in AI, we’re finally at a turning point: tools can now interpret open-ended responses at scale, identify themes, and even detect sentiment, without every human researcher having to sift through mountains of text.
This opens the door to more human-centered surveys: ones that allow people to speak in their own words, bring in their perspectives, and offer ideas researchers never thought to ask about.
Imagine replacing “What features do you want?” with “What would make your life easier?” and being able to meaningfully analyze thousands of answers. That’s not just better data. That’s better listening.
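One way to operationalize that kind of listening is to package a batch of open-ended answers into a single analysis prompt for a large language model. The sketch below is a hypothetical illustration under assumed details, not Qualz.ai’s actual implementation; the `build_theme_prompt` helper and the sample answers are invented, and the model call itself is deliberately omitted:

```python
def build_theme_prompt(responses):
    """Assemble one prompt asking a language model to surface recurring
    themes and per-theme sentiment across open-ended survey answers."""
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(responses, start=1))
    return (
        "You are analyzing open-ended survey responses.\n"
        "List the recurring themes, note contradictions, and describe the\n"
        "overall sentiment for each theme, quoting responses where helpful.\n\n"
        "Responses:\n" + numbered
    )

# Invented answers to "What would make your life easier?", for illustration.
answers = [
    "Fewer meetings so I can focus.",
    "Better documentation for our internal tools.",
    "Honestly, just clearer priorities from leadership.",
]

prompt = build_theme_prompt(answers)
print(prompt)
```

The point of the design is that the researcher’s question stays open while the structure, themes, contradictions, and sentiment, is recovered after the fact, instead of being imposed up front as checkboxes.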
Conclusion
Traditional surveys are still around, not because they’re better but because they’re familiar, efficient, and easy to analyze. We cling to them because they are the devil we know, not because they are the tools we need. That moment during the demo when the client asked, “What if none of the options apply to me?” perfectly captured the quiet frustration many participants feel: the realization that their truth doesn’t quite fit the template.
In the rush to check boxes, we’ve neglected to ask ourselves a basic question: Are we listening to people? Or are we just organizing their thoughts into buckets we created in advance?
Giving participants autonomy to reflect, to explain, and to surprise us requires more effort, of course. But it’s also how we uncover truth, cultivate trust, and elevate the participant from a data point to a human being. And the tools now exist to make this approach not only possible but scalable.
With platforms like Qualz.ai, we no longer have to choose between simplicity and depth, between resources and rich insights, or between quality and quantity. We can design experiences that are intuitive for participants and insightful for researchers. The traditional trade-offs are dissolving. What we once saw as competing priorities (speed vs. nuance, scale vs. empathy) are finally converging.
In an era where everyone talks about “customer-centricity” and “human insight,” maybe the most radical thing we can do is… let people speak. And perhaps, most importantly, we must remind ourselves: Simplicity does not have to come with hidden costs. Not anymore.