additional_features
["Round-based facilitation templates (present→react→finalize)", "Meta-summarizer output structure: key_points / conflict_map / risk_flags / revision_suggestions / meta_comment"]
example_commands
["Run a 3-round cognitive interview on this item: 'In the past 2 weeks, how often have you felt stressed?'", "Check how respondents might interpret the word 'regularly' using a discussion-based cognitive interview.", "Summarize this item with a focus on risk_flags and revision_suggestions."]
gpt_id
g-688ad8a2c2fc8191b795cdcc8823bc6d
ideal_use_cases
["Explore misunderstandings and interpretation variance (item stem, instructions, response scales)", "Diagnose item issues (ambiguity, assumptions, missing categories, double-barreled questions)", "Surface segment differences (novice vs. expert, etc.) via structured discussion", "Draft revision candidates and prioritize fixes"]
limitations
["It cannot replace recruiting real respondents or running field cognitive interviews; use outputs as hypothesis-driven diagnostics.", "High-stakes domains (legal/medical, etc.) still require expert review.", "If context is underspecified (target population, setting, scale definitions), recommendations may be less accurate."]
target_users
["Researchers and practitioners developing or refining survey/assessment items", "UX/insights teams validating question comprehension in qualitative studies"]