Let your AI agent ask real people what they think — directly from a chat conversation. The User Intuition MCP server connects any MCP-compatible client (ChatGPT, Claude, Cursor, etc.) to your User Intuition account so you can create studies, collect feedback, and get results without leaving your AI workflow.

How It Works

The MCP server acts as a bridge between your AI agent and the User Intuition research platform.
AI Agent (ChatGPT, Claude, etc.)
    ↓  MCP Protocol
User Intuition MCP Server
    ↓  REST API
User Intuition Research Platform

Real people respond via voice or text
Your agent gets access to five tools that cover the full study lifecycle — from creating a study to retrieving results.

Prerequisites

Before you begin, make sure you have:
  • A User Intuition account with credits in your wallet
  • An MCP-compatible AI client (ChatGPT, Claude Desktop, Cursor, etc.)

Connecting via ChatGPT

ChatGPT supports MCP servers natively. Connect User Intuition in a few clicks.
1. Open ChatGPT Settings

In ChatGPT, go to Settings → Connected Apps (or Tools).
2. Add MCP Server

Click Add MCP Server and enter the server URL:
https://mcp.userintuition.ai
3. Authenticate

ChatGPT will redirect you to sign in with your User Intuition account. Log in and grant access.
4. Start Using

Return to ChatGPT and ask it to run a study. For example:
“Ask 25 people which tagline they prefer: ‘Save time, save money’ vs ‘Less stress, more life’”

Connecting via Claude Desktop

Add the MCP server to your Claude Desktop configuration.
1. Open the Config File

Open your Claude Desktop MCP configuration file:
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
2. Add the Server

Add the User Intuition MCP server to mcpServers:
{
  "mcpServers": {
    "userintuition": {
      "url": "https://mcp.userintuition.ai/mcp"
    }
  }
}
3. Restart Claude

Restart Claude Desktop. You’ll be prompted to authenticate with your User Intuition account on first use.

Available Tools

Once connected, your AI agent has access to five tools:
  • ask_humans: Create a new research study and collect feedback from real people
  • get_results: Check study status and retrieve results
  • list_studies: List your studies with optional filters
  • edit_study: Modify a running study
  • cancel_study: Stop a running study

ask_humans

Create a new study to collect feedback from real people. This is the primary tool for launching research. Parameters:
  • mode (required): "preference", "claim", or "message"
  • text (required): The main content to test with respondents
  • options (required for preference mode): 2–5 options to compare
  • context (optional): Background info shown to respondents
  • audience (optional, default "general"): Target audience segment
  • emails (optional): Invite specific respondents by email
  • screeners (optional): Screening questions to filter respondents (max 3)
  • n (optional, default 25): Number of respondents (1–100)
  • voice (optional, default false): Collect audio responses
  • time_budget_hours (optional, default 3): Max hours to keep the study open
  • locale (optional): Language/region (e.g. "en-US", "es-MX")
  • include_transcripts (optional, default false): Include full response text in results
  • redact_pii (optional, default true): Auto-redact personally identifiable information
  • dry_run (optional, default false): Get a cost estimate without creating the study
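As a rough sketch, the arguments for a preference study might look like the following. The field names come from the parameter list above; the values are purely illustrative:
{
  "mode": "preference",
  "text": "Which tagline works best for a fitness app?",
  "options": ["Move more, stress less", "Your body will thank you", "Fitness that fits your life"],
  "audience": "general",
  "n": 25,
  "time_budget_hours": 3,
  "redact_pii": true
}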

Study Modes

Preference mode compares 2–5 options to see which one people prefer. Great for testing taglines, names, designs, or any A/B decision. (Claim mode tests agreement with a statement, and message mode tests how a piece of copy is understood; see the example conversations below.)
"Ask 25 people which name they prefer for our app:
'Meadow', 'Bloom', or 'Canopy'"
Requires the options parameter with 2–5 items.

Screening Respondents

Use screeners to filter who responds to your study. Each screener is a multiple-choice question with pass logic that defines which answers qualify.
[
  {
    "question": "What is your current role?",
    "choices": [
      "Software Developer/Engineer",
      "Engineering Manager/Team Lead",
      "Non-technical role"
    ],
    "pass_logic": {
      "include": ["Software Developer/Engineer"]
    }
  }
]
Screeners are only available when audience is set to "general". You can add up to 3 screening questions per study.
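Putting it together, a study restricted to technical respondents might pass screeners alongside the other ask_humans arguments, along these lines (values illustrative):
{
  "mode": "claim",
  "text": "I would adopt a tool that automatically drafts my code reviews",
  "audience": "general",
  "n": 30,
  "screeners": [
    {
      "question": "What is your current role?",
      "choices": [
        "Software Developer/Engineer",
        "Engineering Manager/Team Lead",
        "Non-technical role"
      ],
      "pass_logic": {
        "include": ["Software Developer/Engineer", "Engineering Manager/Team Lead"]
      }
    }
  ]
}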

Cost Estimates

Set dry_run: true to get a cost estimate without creating the study. The response looks like:
{
  "dry_run": true,
  "estimated_cost": 37.50,
  "currency": "USD",
  "eta_hours_range": [1.0, 3.0]
}
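The request behind an estimate like this is just the usual ask_humans arguments with dry_run enabled, for example (values illustrative):
{
  "mode": "message",
  "text": "Your subscription changes take effect tomorrow",
  "n": 50,
  "dry_run": true
}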

get_results

Check on a study and retrieve results when available. Parameters:
  • study_id (required): The study identifier
  • include_transcripts (optional, default false): Include full response transcripts
Studies move through the statuses queued → fielding → complete. Other possible statuses: partial, failed, cancelled. When a study is complete, results include preference splits, top reasons, minority objections, and verbatim quotes.
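A minimal get_results call only needs the study identifier; the study_id value below is hypothetical:
{
  "study_id": "st_abc123",
  "include_transcripts": true
}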

list_studies

List your studies with optional filters. Parameters:
  • status (optional): Filter by status
  • limit (optional, default 10): Max number of studies to return
  • created_after (optional): ISO 8601 datetime filter
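For example, to list the five most recent studies still collecting responses (values illustrative):
{
  "status": "fielding",
  "limit": 5,
  "created_after": "2025-01-01T00:00:00Z"
}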

edit_study

Modify a study that is still in creating, queued, or fielding status. Only provided fields are updated.
Content changes (text, options, context, mode) will trigger moderation guide regeneration, which may affect in-progress interviews.
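As a sketch, assuming the study is identified by a study_id argument, an edit that adds context and tweaks the wording while leaving other fields untouched might look like (values hypothetical):
{
  "study_id": "st_abc123",
  "context": "Assume the app is free to download with a paid premium tier.",
  "text": "Which tagline works best for a fitness app?"
}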

cancel_study

Stop a running study. Returns whether cancellation succeeded and the cost incurred for responses already collected.
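A minimal sketch, assuming cancel_study also takes the study identifier (the value below is hypothetical):
{
  "study_id": "st_abc123"
}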

Example Conversations

Here are some ways to use the MCP server through your AI agent:
You: “Ask 25 people which tagline they prefer for our fitness app: ‘Move more, stress less’ vs ‘Your body will thank you’ vs ‘Fitness that fits your life’”
Agent: Creates a preference study, waits for results, and reports back:
52% preferred “Fitness that fits your life.” Top reason: it acknowledges that people have busy schedules. 24% chose “Move more, stress less” for its simplicity.

You: “Test this claim with 30 parents: ‘I would switch grocery stores if one offered a 15-minute shopping guarantee’”
Agent: Creates a claim study and returns:
71% agreed with the claim. Strongest agreement among parents with children under 5. Main objection: skepticism about freshness if shopping is rushed.

You: “Test this email subject line with 20 people: ‘Your subscription changes take effect tomorrow’”
Agent: Creates a message study and returns:
85% correctly understood this means their plan is changing. 15% were unsure whether they needed to take action. Suggestion: clarify whether action is required.

You: “How much would it cost to survey 50 people about our pricing page?”
Agent: Runs a dry run and reports:
A message study with 50 respondents would cost approximately $75.00 and complete in 1–3 hours.

Troubleshooting

If you see an authentication error:
  • Verify you’re logged into the correct User Intuition account
  • Try disconnecting and reconnecting the MCP server in your AI client
  • Ensure your account is active at app.userintuition.ai
If a study fails with an insufficient credits error:
  • Use dry_run: true to check costs before creating studies
  • Add credits at app.userintuition.ai/wallet
  • The error message includes the exact amount needed
If a study stays in fielding status longer than expected:
  • Check progress with get_results — it shows completed vs. target responses
  • Studies with niche audiences or strict screeners may take longer
  • The time_budget_hours parameter controls the maximum duration
If the User Intuition tools don’t appear:
  • Confirm the MCP server URL is correct: https://mcp.userintuition.ai
  • Restart your AI client after adding the server
  • Check that your client supports MCP (ChatGPT, Claude Desktop, Cursor)

Need Help?