Human Signal is a survey-style mode for asking real people a single, focused question and getting a quantified answer back in hours. It’s designed for moments when you don’t need a full Study with long Interviews — you just need to know whether a tagline lands, which option people prefer, or how a claim is read. Each Human Signal run creates a lightweight Study under the hood, recruits Participants, collects short Responses, and returns a structured result with a headline metric, supporting reasons, and minority objections.
When to use Human Signal vs. a full Study
The two modes solve different problems. Pick based on the question you’re trying to answer.

| | Human Signal | Full Study |
|---|---|---|
| Best for | Quick gut-check on one artifact | Deep exploration of a topic |
| Format | Short survey-style Responses | Multi-question Interviews (voice or chat) |
| Sample size | 3-200 Participants | Typically 10-50 Participants |
| Turnaround | 1-24 hours (you set the budget) | Days |
| Output | Headline metric + reasons + objections | Transcripts, themes, full report |
| How you launch | API / MCP tool | Studio UI |
Human Signal is exposed primarily through the API and the User Intuition MCP server. There is no separate Human Signal tab in the dashboard — completed runs appear alongside your other Studies and link to a dashboard URL returned in the response.
The three modes
Every Human Signal run has a mode that shapes how the question is asked and how Responses are aggregated.
preference
Compare 2-5 options and find out which one people prefer. Requires an options list. Best for taglines, naming, layouts, value props.
claim
Test whether people believe or agree with a single statement. Best for marketing claims, pricing assertions, or positioning lines.
message
Check whether a message is clear and what people think it means. Best for onboarding copy, error messages, or feature descriptions.
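To make the three modes concrete, here is a sketch of what each request payload might look like, using the field names from the request table on this page (mode, text, options). The validator is purely illustrative: it mirrors the documented constraints client-side and is not the API's actual validation.

```python
def validate_payload(p: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks valid.

    Rules mirror the documented limits: three valid modes, text of
    1-5000 characters, and 2-5 options when mode is "preference".
    """
    errors = []
    if p.get("mode") not in ("preference", "claim", "message"):
        errors.append("mode must be preference, claim, or message")
    if not 1 <= len(p.get("text", "")) <= 5000:
        errors.append("text must be 1-5000 characters")
    if p.get("mode") == "preference" and not 2 <= len(p.get("options", [])) <= 5:
        errors.append("preference mode requires 2-5 options")
    return errors

# One example payload per mode (contents are hypothetical).
preference_run = {
    "mode": "preference",
    "text": "Which tagline fits our homepage best?",
    "options": ["Ship faster", "Build with confidence", "Research on tap"],
}
claim_run = {"mode": "claim", "text": "Most teams finish setup in under 5 minutes."}
message_run = {"mode": "message", "text": "Your trial ends in 3 days."}

for run in (preference_run, claim_run, message_run):
    assert validate_payload(run) == [], validate_payload(run)
```

Note that options is simply ignored outside preference mode here; whether the real API rejects or ignores it is not stated on this page.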
Asking a question
Human Signal is invoked by creating a Study via the API or by calling the ask_humans tool on the User Intuition MCP server. The MCP tool is the most convenient path from inside an AI assistant; the REST API is the right path for scripts and product integrations.
- MCP (ask_humans)
- REST API
- Dry run
Call ask_humans from any MCP-aware client connected to the User Intuition MCP server. The tool returns a JSON payload with study_id, estimated_cost, eta_hours_range, and a dashboard_url. Poll get_results with the study_id to retrieve the answer.
Key request fields
| Field | Description |
|---|---|
| mode | preference, claim, or message |
| text | The artifact under test (1-5000 characters) |
| options | 2-5 strings, required when mode = "preference" |
| context | Background shown to the AI moderator (not to Participants) |
| audience | general (recruit from the panel) or custom (you supply emails) |
| emails | Up to 200 addresses for custom audience |
| screeners | Up to 3 screening questions for general audience |
| n | Target sample size, 3-200 |
| voice | true for voice Interviews, false for chat (default) |
| time_budget_hours | 1-24; how long fielding stays open before partial results are returned |
| locale | Language/region tag, e.g. en-US |
| include_transcripts | Include raw Response transcripts in results |
| redact_pii | Strip PII from results (default true) |
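As an illustration of how these fields fit together, the sketch below assembles a request with the defaults stated in this page (chat rather than voice, a 3-hour time budget, PII redaction on) and applies the documented limits as client-side checks. The helper name and the screeners-with-custom-audience error are my own; the API's actual error behavior may differ.

```python
def build_request(mode: str, text: str, n: int, **extra) -> dict:
    """Assemble a Human Signal request dict with documented defaults.

    voice=False, time_budget_hours=3, and redact_pii=True come from the
    request-field table and expiry section; anything in **extra overrides
    or extends them.
    """
    req = {
        "mode": mode,
        "text": text,
        "n": n,
        "voice": False,
        "time_budget_hours": 3,
        "redact_pii": True,
    }
    req.update(extra)

    # Client-side checks mirroring the documented limits.
    assert 3 <= req["n"] <= 200, "n must be 3-200"
    assert 1 <= req["time_budget_hours"] <= 24, "time_budget_hours must be 1-24"
    assert len(req.get("emails", [])) <= 200, "at most 200 emails"
    assert len(req.get("screeners", [])) <= 3, "at most 3 screeners"
    if req.get("screeners") and req.get("audience") == "custom":
        raise ValueError("screeners are for the general audience only")
    return req

req = build_request(
    "claim",
    "Our onboarding takes under 5 minutes.",
    n=30,
    audience="general",
    screeners=["Do you manage a team?"],
)
```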
Getting results back
GET /api/human-signal/{study_id}/results returns the current status and, once available, the structured result.
queued / fielding
Recruitment and Participant Responses are in flight.
progress.completed ticks up as Responses arrive.
complete or partial
Fielding finished.
complete means the target n was hit; partial means the time_budget_hours expired before reaching n. Either way, results contains the analyzed output. The results object contains a headline metric appropriate to the mode (e.g. the percentage preferring each option), the top reasons Participants gave, and any minority objections. Transcripts are included only when include_transcripts: true was set.
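A simple polling loop over the results endpoint might look like the sketch below. The fetch callable is injected so the example stays transport-agnostic (swap in urllib, httpx, or your HTTP client of choice); the study ID and headline metric in the usage example are made up.

```python
import time

def poll_results(fetch, study_id: str, interval_s: float = 30.0, max_polls: int = 20):
    """Poll GET /api/human-signal/{study_id}/results until fielding ends.

    `fetch` maps a path to the parsed JSON body. Returns the final body
    once status is complete or partial, or None if the run is still
    fielding after max_polls attempts.
    """
    for attempt in range(max_polls):
        body = fetch(f"/api/human-signal/{study_id}/results")
        if body.get("status") in ("complete", "partial"):
            return body
        if attempt < max_polls - 1:
            time.sleep(interval_s)
    return None

# Usage with a canned fetcher standing in for real HTTP:
responses = iter([
    {"status": "fielding", "progress": {"completed": 12}},
    {"status": "complete", "results": {"headline_metric": "68% prefer option A"}},
])
final = poll_results(lambda path: next(responses), "hs_123", interval_s=0)
```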
Listing your runs
GET /api/human-signal/ returns a paginated list of your Human Signal Studies, each with a short summary and (for completed runs) a headline_metric. Filter by status and created_after as needed.
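Building the list endpoint's query string is straightforward; a minimal sketch follows. The status and created_after parameter names come from this page, while page is a hypothetical pagination parameter (the actual pagination scheme is not documented here).

```python
from urllib.parse import urlencode

def list_runs_path(status=None, created_after=None, page=None) -> str:
    """Build the path + query string for GET /api/human-signal/.

    Only parameters that were actually supplied end up in the query.
    """
    params = {k: v for k, v in {
        "status": status,
        "created_after": created_after,
        "page": page,  # hypothetical: pagination scheme isn't documented
    }.items() if v is not None}
    query = urlencode(params)
    return "/api/human-signal/" + (f"?{query}" if query else "")
```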
Editing and cancelling
Human Signal Studies are editable while they are still creating, queued, or fielding.
PATCH /api/human-signal/{study_id} accepts any subset of the original request fields. Note these behaviors:
- Changing content fields (text, mode, options, context, audience) regenerates the moderation guide in the background. The response sets regenerating: true.
- If the Study is already fielding and Responses have been collected, you’ll get a warning that earlier Responses were collected under the previous guide and may not be directly comparable.
- You cannot reduce n below the number of already-completed Interviews.
- Changing n or voice recalculates estimated_cost.
POST /api/human-signal/{study_id}/cancel stops a running Study. Cost incurred so far (based on completed Interviews) is captured and returned as cost_incurred; nothing further is charged.
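These PATCH behaviors can be mirrored as a client-side pre-flight check before sending the request. This is only an illustrative sketch of the documented rules; the note strings and function name are my own, not API output.

```python
def preflight_patch(patch: dict, status: str, completed_interviews: int) -> list[str]:
    """Predict the documented PATCH behaviors for a proposed edit.

    Returns human-readable notes: hard errors (not editable, n too low)
    and side effects (guide regeneration, cost recalculation).
    """
    notes = []
    if status not in ("creating", "queued", "fielding"):
        notes.append("error: Study is no longer editable")
    content_fields = {"text", "mode", "options", "context", "audience"}
    if content_fields & patch.keys():
        notes.append("moderation guide will regenerate (regenerating: true)")
        if status == "fielding" and completed_interviews > 0:
            notes.append("warning: earlier Responses used the previous guide")
    if "n" in patch and patch["n"] < completed_interviews:
        notes.append("error: cannot reduce n below completed Interviews")
    if {"n", "voice"} & patch.keys():
        notes.append("estimated_cost will be recalculated")
    return notes

# Editing text and shrinking n on a Study that already has 12 Interviews:
notes = preflight_patch({"text": "New tagline", "n": 10}, "fielding", 12)
```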
Expiry and partial results
Every Human Signal run has a time_budget_hours (1-24, default 3). A scheduled job sweeps for Studies whose budget has expired while still fielding. When that happens:
- The Study transitions to partial.
- Whatever Responses have been collected are analyzed and returned in results.
- You are charged only for Interviews that actually completed.

This makes even a short time_budget_hours safe: you’ll always get something back, even if the panel didn’t fill in time.
Pricing
Human Signal uses the same wallet and credit system as the rest of User Intuition. Cost is computed per completed Participant and depends on:
- Whether the run is chat or voice (voice is more expensive)
- Your plan’s effective per-Participant rate
If your wallet cannot cover the estimated cost, you get an insufficient_credits response with the deficit and a top-up URL. No Study is created and no funds are held. Use dry_run: true to preview the cost first.
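A dry-run cost preview might be wrapped like the sketch below. The post callable is injected to keep the example transport-agnostic, and the response field names (error, deficit, top_up_url, estimated_cost) are assumptions inferred from the prose above, not a documented schema.

```python
def preview_cost(post, payload: dict) -> dict:
    """Preview a run's cost with dry_run: true; no Study is created.

    `post` is a callable (path, json_body) -> parsed JSON response.
    Assumed response shapes:
      success: {"estimated_cost": ...}
      failure: {"error": "insufficient_credits", "deficit": ..., "top_up_url": ...}
    """
    resp = post("/api/human-signal/", {**payload, "dry_run": True})
    if resp.get("error") == "insufficient_credits":
        return {"ok": False, "deficit": resp["deficit"], "top_up_url": resp["top_up_url"]}
    return {"ok": True, "estimated_cost": resp["estimated_cost"]}

# Usage against a canned responder standing in for real HTTP:
ok = preview_cost(
    lambda path, body: {"estimated_cost": 42.0},
    {"mode": "claim", "text": "Setup takes 5 minutes.", "n": 20},
)
```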
For exact rates, see your plan details in the billing settings.
Limits
| Limit | Value |
|---|---|
| text length | 1-5,000 characters |
| options count (preference mode) | 2-5 |
| n (sample size) | 3-200 |
| time_budget_hours | 1-24 |
| screeners | Max 3, general audience only |
| emails (custom audience) | Max 200 |
Next steps
MCP server
Connect Claude or another MCP client and call ask_humans directly.
Full Studies
For deeper, open-ended research with full Interviews.

