Experience your study from a participant’s perspective before launching. This step lets you validate that the conversation flows naturally and covers the topics you need.

Overview

Testing your conversation allows you to experience exactly what your participants will experience. You’ll have a real conversation with your configured AI interviewer, using the research plan you created in previous steps.
Note: This step is optional but highly recommended. Testing helps you catch issues before participants encounter them.

Why Test Your Conversation

Validate the Flow

Experience how the conversation progresses from topic to topic. You’ll quickly notice if:
  • Questions feel awkward or unclear
  • Transitions between topics are jarring
  • The pacing feels too fast or too slow
  • Important topics are missing

Check Question Clarity

Hear the questions as participants will hear them. This helps you identify:
  • Confusing wording that needs simplification
  • Jargon that participants might not understand
  • Questions that are too long or complex
  • Ambiguous questions that could be interpreted multiple ways

Experience the AI Interviewer

Get a feel for how the AI interviewer:
  • Responds to different types of answers
  • Probes for deeper insights
  • Handles unexpected responses
  • Maintains a professional, empathetic tone

Build Confidence

Testing gives you confidence that your study is ready for real participants.

How to Test Your Conversation

Starting the Test

  1. On the Test Conversation screen, you’ll see the prompt: “Ready to test your study?”
  2. Click Start Test Conversation to begin

During the Test

Once the test begins, you’ll experience the full participant flow:
  1. The interviewer will greet you and introduce the study
  2. You’ll be asked questions from your research plan
  3. The interviewer will probe deeper based on your responses
  4. The conversation will conclude with a wrap-up
Tips for effective testing:
  • Respond naturally: Don’t just give short answers. Respond as a real participant would.
  • Try different response types: Give some detailed answers and some brief answers.
  • Test edge cases: What happens if you give an unexpected answer?
  • Complete the full conversation: Don’t skip ahead—experience the entire flow.

Ending the Test

When the conversation concludes (or if you end it early):
  1. You’ll see a feedback screen: “How was the conversation?”
  2. Select Good or Needs Work to provide quick feedback
  3. Optionally, add written comments in the text field
  4. Click Submit to record your feedback

What to Look For

Conversation Quality

Check | What to Look For
Opening | Does the greeting feel warm and professional?
Questions | Are questions clear and easy to understand?
Probing | Does the AI dig deeper on interesting responses?
Transitions | Do topic changes feel natural?
Closing | Does the wrap-up feel complete and appreciative?

Research Coverage

Check | What to Look For
Topics | Are all important topics covered?
Depth | Does the AI probe enough to get useful insights?
Balance | Is time distributed appropriately across topics?
Missing areas | Is there anything important that wasn’t asked?

Participant Experience

Check | What to Look For
Comfort | Would participants feel comfortable being honest?
Engagement | Is the conversation interesting or boring?
Length | Does the interview feel too long, too short, or just right?
Friction | Are there any confusing or frustrating moments?

Making Adjustments

If you identify issues during testing, you have several options:

Option 1: Minor Adjustments

For small changes to question wording or flow:
  1. Click Back to return to Step 3 (Review Goals)
  2. Edit the research plan directly
  3. Proceed through the steps again to re-test

Option 2: Major Changes

For significant restructuring of the research plan:
  1. Click Back repeatedly to return to Step 2 (Customize Plan)
  2. Chat with Charles about the changes you need
  3. Let Charles regenerate the research plan
  4. Review and test again

Option 3: Proceed Anyway

If the issues are minor and you’re comfortable proceeding:
  1. Note the issues for future improvement
  2. Click Save & Continue to proceed to launch

Usage Limits

Test conversations do not count against your usage limits. You can test as many times as you need to get the conversation right. This encourages thorough testing without worrying about consuming your allotted interviews.

Skipping the Test

While testing is optional, we strongly recommend completing at least one test conversation before launch. However, if you’re confident in your research plan:
  1. Click Save & Continue without starting a test
  2. You’ll proceed to Step 6 (Ready to Launch)
Warning: Skipping testing means your first real participant will be your first test. We recommend always testing before inviting participants.

Proceeding to the Next Step

After testing (or choosing to skip):
  1. Review any feedback or issues you noted
  2. Make adjustments if needed by clicking Back
  3. Click Save & Continue to proceed to Step 6: Launch Your Study

Frequently Asked Questions

Can I test multiple times?
Yes! Test as many times as you need. Each test helps you refine the experience.

Do test conversations appear in my results?
No. Test conversations are kept separate from actual participant data and won’t affect your research results.

Can other team members test the conversation?
Yes. Anyone with access to the study can run a test conversation.

How long does a test conversation take?
Test conversations take as long as a real interview—typically 10-20 minutes depending on your research plan. You can end early if you’ve seen enough.

The AI responded differently than I expected. Is that a problem?
Not necessarily. The AI adapts to each conversation, so responses will vary. If the AI consistently misses important probes or goes in wrong directions, adjust your research plan.