Documentation Index

Fetch the complete documentation index at: https://docs.userintuition.ai/llms.txt

Use this file to discover all available pages before exploring further.

Human Signal is a survey-style mode for asking real people a single, focused question and getting a quantified answer back in hours. It’s designed for moments when you don’t need a full Study with long Interviews — you just need to know whether a tagline lands, which option people prefer, or how a claim is read. Each Human Signal run creates a lightweight Study under the hood, recruits Participants, collects short Responses, and returns a structured result with a headline metric, supporting reasons, and minority objections.

When to use Human Signal vs. a full Study

The two modes solve different problems. Pick based on the question you’re trying to answer.
|  | Human Signal | Full Study |
| --- | --- | --- |
| Best for | Quick gut-check on one artifact | Deep exploration of a topic |
| Format | Short survey-style Responses | Multi-question Interviews (voice or chat) |
| Sample size | 3-200 Participants | Typically 10-50 Participants |
| Turnaround | 1-24 hours (you set the budget) | Days |
| Output | Headline metric + reasons + objections | Transcripts, themes, full report |
| How you launch | API / MCP tool | Studio UI |
Use Human Signal when you have a specific artifact (a tagline, a claim, a button label, two competing options) and you want a number plus a few sentences of “why.” Use a full Study when you want open-ended discovery, long-form conversation, or multiple linked questions per Participant.
Human Signal is exposed primarily through the API and the User Intuition MCP server. There is no separate Human Signal tab in the dashboard — completed runs appear alongside your other Studies and link to a dashboard URL returned in the response.

The three modes

Every Human Signal run has a mode that shapes how the question is asked and how Responses are aggregated.

preference

Compare 2-5 options and find out which one people prefer. Requires an options list. Best for taglines, naming, layouts, value props.

claim

Test whether people believe or agree with a single statement. Best for marketing claims, pricing assertions, or positioning lines.

message

Check whether a message is clear and what people think it means. Best for onboarding copy, error messages, or feature descriptions.

Asking a question

Human Signal is invoked by creating a Study via the API or by calling the ask_humans tool on the User Intuition MCP server. The MCP tool is the most convenient path from inside an AI assistant; the REST API is the right path for scripts and product integrations.
From any MCP-aware client connected to the User Intuition MCP server:
```json
{
  "tool": "ask_humans",
  "arguments": {
    "mode": "preference",
    "text": "Which tagline resonates more with busy parents?",
    "options": ["Save time, save money", "Less stress, more life"],
    "context": "We are a meal-kit delivery startup targeting families.",
    "n": 25,
    "time_budget_hours": 3
  }
}
```
The tool returns a JSON payload with study_id, estimated_cost, eta_hours_range, and a dashboard_url. Poll get_results with the study_id to retrieve the answer.

Key request fields

| Field | Description |
| --- | --- |
| mode | preference, claim, or message |
| text | The artifact under test (1-5,000 characters) |
| options | 2-5 strings; required when mode = "preference" |
| context | Background shown to the AI moderator (not to Participants) |
| audience | general (recruit from the panel) or custom (you supply emails) |
| emails | Up to 200 addresses for a custom audience |
| screeners | Up to 3 screening questions for a general audience |
| n | Target sample size, 3-200 |
| voice | true for voice Interviews; false for chat (default) |
| time_budget_hours | 1-24; how long fielding stays open before partial results are returned |
| locale | Language/region tag, e.g. en-US |
| include_transcripts | Include raw Response transcripts in results |
| redact_pii | Strip PII from results (default true) |
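As a sketch of how these fields fit together, the helper below assembles a request body in Python and fills in the documented defaults (chat rather than voice, PII redaction on, a 3-hour time budget). The function name and client-side shape are illustrative, not part of the API.

```python
from typing import Optional

def build_request(mode: str, text: str, *,
                  options: Optional[list] = None,
                  n: int = 25,
                  time_budget_hours: int = 3,
                  voice: bool = False,
                  redact_pii: bool = True,
                  **extra) -> dict:
    """Assemble an ask_humans request body with the documented defaults."""
    if mode not in {"preference", "claim", "message"}:
        raise ValueError(f"unknown mode: {mode}")
    body = {
        "mode": mode,
        "text": text,
        "n": n,
        "time_budget_hours": time_budget_hours,
        "voice": voice,
        "redact_pii": redact_pii,
    }
    if mode == "preference":
        # options is only meaningful (and required) in preference mode
        if not options or not 2 <= len(options) <= 5:
            raise ValueError("preference mode requires 2-5 options")
        body["options"] = options
    body.update(extra)  # e.g. context, audience, locale
    return body

body = build_request(
    "preference",
    "Which tagline resonates more with busy parents?",
    options=["Save time, save money", "Less stress, more life"],
)
```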

Getting results back

GET /api/human-signal/{study_id}/results returns the current status and, once available, the structured result.
1. creating: The Study record exists, and the AI moderator and Interview script are being generated.
2. queued / fielding: Recruitment and Participant Responses are in flight. progress.completed ticks up as Responses arrive.
3. complete or partial: Fielding has finished. complete means the target n was hit; partial means the time_budget_hours expired before reaching n. Either way, results contains the analyzed output.
4. failed or cancelled: The run stopped before producing a usable result. failure_reason and cost_incurred are populated.
The results object contains a headline metric appropriate to the mode (e.g. percentage preferring each option), the top reasons Participants gave, and any minority objections. Transcripts are included only when include_transcripts: true was set.
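A minimal polling loop against the results endpoint might look like the sketch below. The endpoint path and status names come from this page; the API base URL and bearer-token auth scheme are assumptions.

```python
import json
import time
import urllib.request

API_BASE = "https://api.userintuition.ai"  # assumed; substitute your actual base URL
TERMINAL = {"complete", "partial", "failed", "cancelled"}

def results_url(study_id: str) -> str:
    return f"{API_BASE}/api/human-signal/{study_id}/results"

def is_terminal(status: str) -> bool:
    """Has the run reached a state where results (or a failure) are final?"""
    return status in TERMINAL

def poll_results(study_id: str, token: str, interval_s: float = 60.0) -> dict:
    """Poll the results endpoint until the Study reaches a terminal status."""
    req = urllib.request.Request(
        results_url(study_id),
        headers={"Authorization": f"Bearer {token}"},  # auth scheme assumed
    )
    while True:
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        if is_terminal(payload["status"]):
            return payload
        time.sleep(interval_s)
```

A long interval is fine here: with a time budget measured in hours, polling every minute or two loses nothing.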

Listing your runs

GET /api/human-signal/ returns a paginated list of your Human Signal Studies, each with a short summary and (for completed runs) a headline_metric. Filter by status and created_after as needed.
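For example, a small helper can build the listing URL with the documented optional filters; the base URL here is an assumed placeholder.

```python
from urllib.parse import urlencode

API_BASE = "https://api.userintuition.ai"  # assumed placeholder

def list_runs_url(status=None, created_after=None) -> str:
    """Build the listing URL, attaching status / created_after filters if given."""
    params = {}
    if status is not None:
        params["status"] = status
    if created_after is not None:
        params["created_after"] = created_after
    query = f"?{urlencode(params)}" if params else ""
    return f"{API_BASE}/api/human-signal/{query}"
```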

Editing and cancelling

Human Signal Studies are editable while they are still creating, queued, or fielding. PATCH /api/human-signal/{study_id} accepts any subset of the original request fields. Note these behaviors:
  • Changing content fields (text, mode, options, context, audience) regenerates the moderation guide in the background. The response sets regenerating: true.
  • If the Study is already fielding and Responses have been collected, you’ll get a warning that earlier Responses were collected under the previous guide and may not be directly comparable.
  • You cannot reduce n below the number of already-completed Interviews.
  • Changing n or voice recalculates estimated_cost.
POST /api/human-signal/{study_id}/cancel stops a running Study. Cost incurred so far (based on completed Interviews) is captured and returned as cost_incurred; nothing further is charged.
Studies in complete, partial, or cancelled status cannot be cancelled or edited.
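A client-side guard that mirrors these rules before issuing a PATCH might look like this sketch; the status and completed_interviews keys on the study record are assumptions about the response shape.

```python
EDITABLE = {"creating", "queued", "fielding"}

def edit_payload(study: dict, **changes) -> dict:
    """Validate a PATCH body against the documented editing rules.

    study is the record returned by the API; the status and
    completed_interviews keys are assumed names, not guaranteed by the docs.
    """
    if study.get("status") not in EDITABLE:
        raise ValueError("only creating, queued, or fielding Studies are editable")
    done = study.get("completed_interviews", 0)
    if "n" in changes and changes["n"] < done:
        raise ValueError(f"n cannot drop below the {done} completed Interviews")
    return changes
```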

Expiry and partial results

Every Human Signal run has a time_budget_hours (1-24, default 3). A scheduled job sweeps for Studies whose budget has expired while still fielding. When that happens:
  1. The Study transitions to partial.
  2. Whatever Responses have been collected are analyzed and returned in results.
  3. You are charged only for Interviews that actually completed.
This means setting a tight time_budget_hours is safe — you’ll always get something back, even if the panel didn’t fill in time.

Pricing

Human Signal uses the same wallet and credit system as the rest of User Intuition. Cost is computed per completed Participant and depends on:
  • Whether the run is chat or voice (voice is more expensive)
  • Your plan’s effective per-Participant rate
Before a Study is created, your wallet balance is checked against the estimated cost. If it’s short, the API returns an insufficient_credits response with the deficit and a top-up URL — no Study is created and no funds are held. Use dry_run: true to preview the cost first. For exact rates, see your plan details in the billing settings.
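One way to wire this up is to preview with dry_run before creating for real. In this sketch, post stands in for whatever HTTP client you use to send a JSON body and parse the response; the deficit and top_up_url keys on the error payload are assumptions beyond what this page guarantees.

```python
def preview_then_create(body: dict, post) -> dict:
    """POST with dry_run: true first; only create for real if credits suffice.

    post: any callable that sends a JSON body and returns the parsed response.
    """
    preview = post({**body, "dry_run": True})
    if preview.get("error") == "insufficient_credits":
        raise RuntimeError(
            f"wallet short by {preview.get('deficit')}; "
            f"top up at {preview.get('top_up_url')}"  # assumed payload keys
        )
    return post(body)  # the real create, without dry_run
```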

Limits

| Limit | Value |
| --- | --- |
| text length | 1-5,000 characters |
| options count (preference mode) | 2-5 |
| n (sample size) | 3-200 |
| time_budget_hours | 1-24 |
| screeners | Max 3; general audience only |
| emails (custom audience) | Max 200 |
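These limits can be checked client-side before spending a request. A hypothetical validator, returning every violation at once rather than failing on the first:

```python
def check_limits(req: dict) -> list:
    """Return a list of limit violations (empty means the request is in range)."""
    problems = []
    text = req.get("text", "")
    if not 1 <= len(text) <= 5000:
        problems.append("text must be 1-5,000 characters")
    if req.get("mode") == "preference" and not 2 <= len(req.get("options", [])) <= 5:
        problems.append("preference mode needs 2-5 options")
    if "n" in req and not 3 <= req["n"] <= 200:
        problems.append("n must be 3-200")
    if "time_budget_hours" in req and not 1 <= req["time_budget_hours"] <= 24:
        problems.append("time_budget_hours must be 1-24")
    if len(req.get("screeners", [])) > 3:
        problems.append("at most 3 screeners")
    if req.get("screeners") and req.get("audience") == "custom":
        problems.append("screeners apply to the general audience only")
    if len(req.get("emails", [])) > 200:
        problems.append("at most 200 emails for a custom audience")
    return problems
```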

Next steps

MCP server

Connect Claude or another MCP client and call ask_humans directly

Full Studies

For deeper, open-ended research with full Interviews