Every piece of guidance on AI-moderated interviews -- including my own -- assumes you know what you want to learn. You have research questions. You have a discussion guide. You have probe strategies mapped to objectives.
But what happens when you genuinely do not know what to ask?
I am not talking about laziness or poor preparation. I am talking about the research situations where prescribing questions would actively harm the quality of your data. Early-stage discovery in unfamiliar domains. Entering markets where your assumptions are untested. Exploring emergent behaviors that nobody has mapped yet. Investigating why something unexpected is happening and you cannot even articulate the right hypothesis.
These are the moments where unstructured AI interviews become not just useful, but essential. And they require a fundamentally different approach from the structured and semi-structured modes most teams default to.
When Unstructured Beats Structured
Most qualitative research benefits from structure. A well-designed semi-structured guide with clear probing instructions will outperform a free-flowing conversation in the majority of studies. I have written extensively about why and how to do this well.
But there are specific research contexts where structure becomes a liability:
You are entering a domain you do not understand. When a B2B SaaS company decides to explore healthcare as a vertical, they do not know the workflows, the pain points, the regulatory constraints, or the decision-making dynamics. Writing a discussion guide based on assumptions about this market would bake those assumptions into every interview. Unstructured interviews let practitioners describe their reality in their own terms, revealing the landscape as it actually exists rather than as you imagined it.
You are investigating emergent or anomalous behavior. A product team notices a cohort of users adopting the product in ways nobody designed for. A nonprofit sees unexpected outcomes in a program. A researcher finds contradictory patterns in preliminary data. In these situations, you need to let participants tell you what is happening before you can formulate what to ask about. Structured questions would impose a frame that might completely miss the phenomenon you are trying to understand.
The topic is sensitive or stigmatized. Rigid question sequences can feel interrogative, especially when discussing personal experiences, failures, or controversial opinions. An unstructured conversation that follows the participant's lead creates space for disclosures that a structured format might suppress. This is well-established in qualitative research design -- sometimes the best probe is silence and patience.
You are doing genuine discovery, not validation. There is a meaningful difference between continuous discovery, where you have ongoing hypotheses to test, and true greenfield exploration, where hypotheses do not exist yet. Unstructured interviews serve the latter. If you already have a theory and want to pressure-test it, use semi-structured. If you need to generate theories, go unstructured.
The common thread is uncertainty. When your confidence in what matters is low, your interview structure should be loose. When your confidence is high, tighten it up.
Setting Objectives Without Scripting Questions
Unstructured does not mean aimless. This is the most common misconception, and it is what makes teams afraid to try this approach. The distinction is between controlling the destination and controlling the route.
In an unstructured AI interview, you still define:
The research domain. "We are exploring how mid-market procurement teams evaluate and adopt new software tools." This is broad, but it gives the AI a territory to operate within.
What good data looks like. "We need specific stories and examples from the participant's actual experience. Abstract opinions are less valuable than concrete narratives about things that happened." This tells the AI to push for specificity without dictating what topics to push on.
Conversational principles. "Follow the participant's lead. When they mention something with emotional weight -- frustration, surprise, excitement -- stay on that topic longer. Do not redirect to a new area until the current thread feels naturally complete."
Boundaries. "Do not discuss our product or any specific vendor by name. Do not ask leading questions that suggest a particular answer. If the conversation drifts entirely outside the research domain for more than two exchanges, gently redirect."
What you do not define is the sequence of questions, the specific topics within the domain, or the probing strategy for each topic. You trust the AI to make those decisions in real time based on what the participant says. This is fundamentally different from the interview guide template approach, and that is the point.
I think of it as giving the AI a compass instead of a map. The compass always points toward your research objectives. The path it takes to get there depends entirely on what the participant brings.
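To make the compass concrete, here is a minimal sketch of how the four elements above -- domain, data quality, conversational principles, and boundaries -- could be assembled into a single system prompt. The field names and the `build_system_prompt` helper are illustrative assumptions, not any platform's actual API; the point is what you specify and what you deliberately leave out.

```python
# Hypothetical compass-style configuration for an unstructured AI interviewer.
# Note what is absent: no question sequence, no topic list, no per-topic probes.
COMPASS = {
    "domain": (
        "We are exploring how mid-market procurement teams "
        "evaluate and adopt new software tools."
    ),
    "good_data": (
        "Push for specific stories and concrete examples from the "
        "participant's actual experience; abstract opinions are less valuable."
    ),
    "principles": (
        "Follow the participant's lead. When they mention something with "
        "emotional weight, stay on that topic until it feels naturally complete."
    ),
    "boundaries": (
        "Do not mention our product or any vendor by name. Do not ask leading "
        "questions. If the conversation drifts outside the domain for more "
        "than two exchanges, gently redirect."
    ),
}

def build_system_prompt(compass: dict) -> str:
    """Assemble the compass into one prompt: objectives, not scripted questions."""
    return "\n\n".join(f"{key.upper()}: {value}" for key, value in compass.items())

prompt = build_system_prompt(COMPASS)
```

Everything the AI needs to navigate is here; every routing decision is left to the conversation itself.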
The Role of Minimal Prompting
The most effective unstructured AI interviews use what I call minimal prompting -- the fewest possible words to open a conversation and keep it moving.
The opening question matters enormously. In a structured interview, your first question is a warm-up designed to ease the participant in. In an unstructured interview, your first question sets the entire trajectory. I have found that broader openings produce richer data:
"Tell me about your work and what has been on your mind lately." This is almost absurdly open. And it works. Participants naturally gravitate toward whatever is most salient to them, which is exactly what you want when you do not know what matters.
"Walk me through a typical week." Narrative prompts that ask participants to describe their reality produce natural entry points for deeper exploration. The AI can then follow whatever thread seems most promising.
"What has changed in the last six months?" Change prompts surface tensions, adaptations, and emerging needs that participants might not think to mention in response to direct questions.
After the opening, the AI's job is to follow, not lead. The best follow-up prompts in unstructured interviews are variations on:
- "Tell me more about that."
- "What happened next?"
- "Why do you think that is?"
- "Can you give me a specific example?"
These are the same probes a skilled ethnographer or grounded theory researcher would use -- content-free questions that invite the participant to go deeper into whatever they just said. The AI is not introducing new topics. It is excavating the topics the participant introduced.
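As a toy illustration of this follow-not-lead policy, the sketch below encodes the content-free probes and a crude heuristic for when to demand specificity. The keyword check is a stand-in for the judgment a real interviewer model applies in context -- an assumption for clarity, not a production rule.

```python
# Illustrative content-free follow-up policy. The concrete_markers heuristic
# is a deliberate simplification of how an interviewer model detects
# whether a participant has grounded their answer in a real event.
PROBES = [
    "Tell me more about that.",
    "What happened next?",
    "Why do you think that is?",
    "Can you give me a specific example?",
]

def choose_probe(last_answer: str) -> str:
    """Ask for specifics when the answer stays abstract; otherwise go deeper."""
    concrete_markers = ("last week", "yesterday", "for example", "one time")
    if not any(m in last_answer.lower() for m in concrete_markers):
        return "Can you give me a specific example?"
    return "What happened next?"
```

Notice that neither branch introduces a topic. Both probes excavate whatever the participant just said.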
How AI Handles Unexpected Conversation Directions
This is where unstructured AI interviews have a genuine advantage that surprises most researchers. You might expect that an AI would struggle with truly unexpected directions -- that it would try to pull the conversation back to familiar ground. In practice, the opposite happens.
AI interviewers do not have topic preferences. They do not get bored. They do not unconsciously steer toward their own hypotheses. When a participant starts talking about something completely unexpected, the AI follows with the same attentiveness it brings to any topic. This lack of moderator bias is a feature, not a bug, in unstructured research.
I have seen this play out repeatedly. In a study exploring how small business owners think about growth, one participant started describing an elaborate system she built for tracking employee morale using a physical whiteboard. A human moderator might have noted it and moved on -- it was not "on topic." The AI kept probing. It asked how the system evolved, what triggered her to build it, what she learned from it. Twenty minutes later, we had a rich case study of informal knowledge management practices that became the central finding of the research.
The AI is also tireless in its willingness to sit with ambiguity. When participants contradict themselves, trail off, or express uncertainty, the AI does not rush to resolve it. It asks clarifying questions without pushing toward coherence. This patience is critical in unstructured interviews where the participant is often working through their own understanding in real time.
That said, there are limitations. AI interviewers can miss non-verbal cues that a human moderator would catch -- hesitation, discomfort, emotional shifts that are not expressed in words. For particularly sensitive topics, human moderators still have advantages. But for exploratory research where the primary goal is breadth of discovery, the AI's lack of preconceptions is a genuine methodological strength.
Discoveries That Only Emerge Without Constraints
Let me share three examples from actual research where unstructured AI interviews produced findings that a structured approach would have missed entirely.
The invisible workflow. A product team wanted to understand how their users managed projects. Their semi-structured guide focused on project planning, task management, and collaboration. When they ran unstructured interviews as a supplement, participants kept talking about something nobody had asked about: the 15 minutes every morning they spent "getting oriented" -- re-reading yesterday's notes, checking what changed overnight, mentally reconstructing context. This orientation ritual was invisible in structured interviews because nobody thought to ask about it. It became the basis for a new product feature that outperformed everything on the existing roadmap.
The trust threshold. A financial services company exploring a new market segment ran unstructured interviews with potential customers. Structured interviews had focused on feature preferences, pricing sensitivity, and competitive alternatives. The unstructured conversations revealed something different: participants kept circling back to stories about being burned by previous providers. The core insight was not about features at all -- it was about a trust deficit that no amount of feature superiority would overcome. The go-to-market strategy shifted entirely from feature-led to trust-led, with case studies and guarantees replacing product demos as the primary sales tool.
The workaround ecosystem. A healthcare technology company used unstructured AI interviews to explore how clinicians actually used their electronic health records. The structured research had been optimized around workflow efficiency. Unstructured interviews revealed an elaborate ecosystem of workarounds -- sticky notes, personal spreadsheets, text message threads with colleagues -- that clinicians used to compensate for what the system could not do. These workarounds were not complaints. Participants did not frame them as problems. They only surfaced because the unstructured format gave participants space to describe their complete workflow, including the parts they had normalized.
In each case, the discovery depended on not knowing what to ask. A discussion guide designed around the team's hypotheses would have produced data that confirmed or denied those hypotheses. The unstructured approach produced data about things the team had not imagined.
Making Unstructured Interviews Practical at Scale
The traditional objection to unstructured interviews is that they do not scale. With human moderators, this is true -- training moderators to conduct genuinely unstructured interviews is difficult, and the variability between moderators introduces noise. AI-moderated unstructured interviews solve both problems. The AI applies the same conversational principles to every participant, and you can run dozens of interviews simultaneously.
Here is my recommended workflow for unstructured AI research:
Start with 5-10 unstructured interviews. Use minimal prompting and broad objectives. Let the AI follow participants wherever they go.
Analyze for emergent themes. Use thematic analysis or grounded theory approaches to identify patterns across the unstructured transcripts. What topics keep surfacing? What stories recur? Where is the energy in these conversations?
Build a semi-structured guide from the findings. Now that you know what matters, design a focused guide that probes the themes your unstructured interviews revealed. This guide will be better than anything you could have written without the exploratory phase.
Run the focused study at scale. Use the semi-structured guide for your main sample. You now have both the breadth of unstructured discovery and the depth of focused investigation.
This two-phase approach -- unstructured exploration followed by structured depth -- is the most reliable way to avoid the trap of asking the wrong questions confidently.
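The bridge between the two phases -- turning unstructured transcripts into a focused guide -- can be sketched in a few lines. Assume each exploratory session has been coded with theme labels during thematic analysis; the recurrence threshold and the question template below are illustrative assumptions, not a prescribed method.

```python
# Sketch of phase 1 -> phase 2: tally themes that recur across coded
# unstructured transcripts, then seed a semi-structured guide from them.
from collections import Counter

def emergent_themes(coded_sessions: list[list[str]], min_sessions: int = 3) -> list[str]:
    """Return theme codes appearing in at least min_sessions interviews."""
    counts = Counter()
    for codes in coded_sessions:
        counts.update(set(codes))  # count each theme at most once per session
    return [theme for theme, n in counts.most_common() if n >= min_sessions]

def draft_guide(themes: list[str]) -> list[str]:
    """Turn recurring themes into open probe questions for the focused phase."""
    return [f"Tell me about a recent experience involving {t}." for t in themes]

sessions = [
    ["trust", "pricing"],
    ["trust", "workarounds"],
    ["trust", "workarounds", "pricing"],
    ["workarounds"],
]
guide = draft_guide(emergent_themes(sessions))
```

The resulting guide probes only what participants actually surfaced, which is the entire point of running exploration first.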
Getting Started
If you have never run unstructured AI interviews, start small. Take your next research project and run 5 unstructured sessions alongside your planned study. Give the AI a research domain, describe what good data looks like, and let it go. Compare what surfaces in the unstructured interviews to what your structured guide captured.
I am consistently surprised by what participants choose to talk about when nobody tells them what to talk about. The gap between what we think matters and what actually matters is the entire reason unstructured interviews exist.
If you want to explore how unstructured AI interviews could work for your research, book a session with our team. We can walk through your research context and help you design an approach that matches the level of structure your study actually needs -- whether that is a detailed discussion guide or a compass and an open question.