Modern conversational AI agents are expected to handle complex, multi-turn tasks, such as asking clarifying questions and proactively assisting users. In practice, however, they frequently struggle in long interactions, often forgetting constraints or generating irrelevant responses. Improving these systems requires continuous training and feedback, but relying on the "gold standard" of live human testing is prohibitively expensive, time-consuming, and notoriously difficult to scale.
As a scalable alternative, the AI research community has increasingly turned to user simulators: LLM-powered agents explicitly instructed to roleplay as human users. Yet modern LLM-based simulators still suffer from a significant realism gap, exhibiting atypical patience or an unrealistic, sometimes encyclopedic, knowledge of a domain. Think of a pilot using a flight simulator: the best simulators are as realistic as possible, with unpredictable weather, sudden gusts of wind, and even the occasional bird flying into the engine. To close the realism gap for LLM-based user simulators, we first need to quantify it.
In our recent paper, we introduce ConvApparel, a new dataset of human-AI conversations designed to do exactly that. ConvApparel exposes the hidden flaws in today's user simulators and provides a path toward building AI-based testers we can trust. To capture the full spectrum of human behavior, from satisfaction to profound annoyance, we employed a dual-agent data collection protocol in which each participant was randomly routed to either a helpful "Good" agent or an intentionally unhelpful "Bad" agent (see the sketch below). This setup, paired with a three-pillar validation strategy involving population-level statistics, human-likeness scoring, and counterfactual validation, allows us to move beyond simple surface-level mimicry.
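To make the routing concrete, here is a minimal Python sketch of how such a dual-agent assignment could work. The prompts, the `route_participant` function, and the 50/50 split are illustrative assumptions for this post, not the actual ConvApparel implementation.

```python
import random

# Hypothetical system prompts: the actual ConvApparel agent instructions
# are not published here, so these are illustrative placeholders.
GOOD_AGENT_PROMPT = (
    "You are a helpful apparel-shopping assistant. Answer accurately, "
    "ask clarifying questions, and respect the user's stated constraints."
)
BAD_AGENT_PROMPT = (
    "You are an unhelpful apparel-shopping assistant. Give vague or "
    "irrelevant answers and ignore the user's stated constraints."
)

def route_participant(participant_id: str, rng: random.Random,
                      p_good: float = 0.5) -> dict:
    """Randomly assign one study participant to the Good or Bad condition."""
    condition = "good" if rng.random() < p_good else "bad"
    prompt = GOOD_AGENT_PROMPT if condition == "good" else BAD_AGENT_PROMPT
    return {
        "participant_id": participant_id,
        "condition": condition,
        "system_prompt": prompt,
    }

# Example: route a small batch of participants with a fixed seed so the
# assignment is reproducible across runs.
rng = random.Random(42)
for pid in ("p001", "p002", "p003"):
    print(route_participant(pid, rng))
```

Randomizing the assignment per participant, rather than letting users choose, is what gives the dataset both satisfied and frustrated conversations from a comparable population.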