# Survey & interview Q&A
Asking questions of an attached study
Attaching a study turns Research Guide into a live Q&A surface over your data. Answers cite participant IDs and quoted evidence — every time.
## What this unlocks
- Skip the export-to-spreadsheet loop. Ask counts, slices, and quoted answers in plain language; the guide returns cited evidence directly in chat.
- Interview triage without the weekend. 40 transcripts that would take a week of coding become answerable in minutes: "which participants mentioned pricing?", "what did they say?", "show me the contradictions in how they talked about value."
- Confident refusals, not inventive answers. The guide tells you when the data can't support a slice — so you don't ship a finding built on a tag that was never collected.
- Quote-grade outputs, not summaries. Every claim comes back with participant IDs and direct quotes, so your deliverables stand up to stakeholder scrutiny.

## Surveys
Once a survey is attached, you can ask:
- Counts. "How many respondents picked 'Pricing' as their top concern?"
- Tag slices. "What did plan=starter users say about onboarding?" (The guide checks that the tag exists before answering — see grounded chat.)
- Open-ended answers. "Show me the verbatim answers to Q4." The guide returns quoted responses with participant IDs, not just counts.
- Themes. "What are the top 3 themes in open-ended responses?" The guide pulls themes from any existing analysis runs where available.
The guide will never invent a slice. Ask about a demographic the survey didn't collect, and it refuses — with a short note explaining which tags are available.
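The "never invent a slice" rule can be pictured as a guard that runs before any filtering. This is an illustrative sketch, not the product's real API; the function name, data shapes, and refusal payload are all assumptions.

```python
# Hypothetical sketch: refuse a tag slice unless the tag was actually
# collected, and say which tags are available instead of guessing.
def slice_by_tag(responses, tag, value, collected_tags):
    """Filter survey responses by tag=value, refusing unknown tags.

    `responses` is a list of {"id": ..., "tags": {...}} dicts;
    `collected_tags` is the set of tags the survey actually gathered.
    """
    if tag not in collected_tags:
        return {"refused": True,
                "note": f"tag '{tag}' was never collected; "
                        f"available tags: {sorted(collected_tags)}"}
    matches = [r["id"] for r in responses
               if r.get("tags", {}).get(tag) == value]
    return {"refused": False, "participants": matches}
```

The key design point is that the refusal is data-driven: the guard lists the tags that do exist, so the follow-up question can be rephrased against real data.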
## Interviews
For interviews, you can ask:
- Quotes. "Show me what p_129 said about pricing objections."
- Cross-transcript patterns. "What themes come up across these interviews? Cite quotes."
- Keyword searches. "Did anyone mention 'per-seat pricing'?" — runs as a keyword scan, not a fuzzy search, for exact-phrase asks.
- Existing analysis. "Summarize the Empathy Map lens on this study." — the guide reads pre-computed lens or grounded-emergence output where it exists.
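The exact-phrase behavior above amounts to a case-insensitive substring scan with no fuzzy matching. A minimal sketch, assuming a simple `{participant_id: transcript_text}` shape (the function name and shape are illustrative, not the real implementation):

```python
# Hypothetical exact-phrase scan: substring match only, no stemming,
# no fuzzy matching, case-insensitive.
def keyword_scan(transcripts, phrase):
    """Return participant IDs whose transcript contains the exact phrase."""
    needle = phrase.lower()
    return [pid for pid, text in transcripts.items()
            if needle in text.lower()]
```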
### Primary vs. derived transcripts
Interview studies often carry multiple transcript variants per participant: the raw recording transcript (primary), a cleaned enhanced version, and a translate-en copy for non-English studies. Research Guide resolves these automatically, preferring enhanced over translate-en over primary. You can audit the resolution by asking "list every transcript variant for p_42"; the guide returns the full map and marks which variant was chosen by default.
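The precedence rule (enhanced wins over translate-en, which wins over primary) is simple enough to sketch directly. The variant names come from the text; the function itself is hypothetical:

```python
# Hedged sketch of default-variant resolution. Given a mapping of
# {variant_kind: transcript}, pick the highest-precedence kind present.
PRECEDENCE = ("enhanced", "translate-en", "primary")

def resolve_default_variant(variants):
    for kind in PRECEDENCE:
        if kind in variants:
            return kind
    raise ValueError("participant has no transcript variants")
```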
## Every answer is cited
Counts link to the response rows they summarize. Quotes carry participant IDs (or their opaque identifiers) and, for interviews, link into the transcript viewer at the exact timestamp. If the guide can't cite an answer, it won't answer.
## The "no-match" discipline
If a search returns nothing, the guide won't say "no one mentioned X" unless it's certain. Instead, it may broaden the search or reach into the sandbox for a computed check (see in-chat computation). Researchers have been burned by over-confident "not found" answers; this is the safeguard.
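The safeguard can be sketched as a fallback chain: an empty exact scan triggers a broadened search, and a still-empty result is reported as uncertain rather than as "no one mentioned it". All names and data shapes here are assumptions for illustration:

```python
# Hypothetical no-match discipline: never assert absence from a single
# empty scan; broaden first, and report uncertainty if still empty.
def cautious_search(transcripts, phrase, broader_terms):
    def scan(needle):
        n = needle.lower()
        return [pid for pid, text in transcripts.items() if n in text.lower()]

    hits = scan(phrase)
    if hits:
        return {"status": "found", "participants": hits}
    broadened = sorted({pid for term in broader_terms for pid in scan(term)})
    if broadened:
        return {"status": "found-via-broader-terms", "participants": broadened}
    return {"status": "uncertain",
            "note": "no exact or broadened match; not asserting absence"}
```

The distinguishing choice is the third branch: returning "uncertain" keeps the burden of proof on the search, which is exactly the discipline the text describes.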