Validate, simulate, and publish experiment configs.

Basic usage

pairit config lint your_experiment.yaml # Validate YAML and run lints
pairit config compile your_experiment.yaml # Parse and compile to canonical JSON

Publish / manage configs (stored in MongoDB via the Manager Server API)

pairit config upload your_experiment.yaml
pairit config upload your_experiment.yaml --config-id my-exp --openai-api-key sk-...
pairit config upload your_experiment.yaml --config-id my-exp --anthropic-api-key sk-ant-...
pairit config list
pairit config get <configId> --out compiled.json # TODO
pairit config delete <configId>

Per-experiment LLM credentials

If a config includes AI agents, upload the required provider key with that config.

  • --openai-api-key is used for OpenAI models like gpt-4o
  • --anthropic-api-key is used for Anthropic models like claude-sonnet-*
  • Keys are encrypted at rest and stored per configId
  • Re-uploading the same config without a new key keeps the existing key for that config
  • Agent runs do not fall back to a shared platform API key
  • If the required provider key is missing, the agent run fails

Coming soon

pairit simulate --seed 42 your_experiment.yaml

Compilation output

  • Normalizes helper shorthands (text, buttons, componentType) into canonical component entries.
  • Expands survey questions so each answer has a declared type (and required choices for multiple_choice).
  • Resolves custom component references and records the version used at publish time for auditing.

Compiled JSON can be written with --out <file> to inspect what the runtime will consume.
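To illustrate the kind of shorthand normalization compile performs, here is a minimal Python sketch that expands a hypothetical `text` shorthand into a canonical component entry. The function name and the canonical output shape are assumptions for illustration, not the runtime's actual schema:

```python
def normalize_component(entry):
    """Sketch: expand a `text` shorthand into a canonical component entry.
    Entries that already declare a componentType pass through unchanged."""
    if "text" in entry and "componentType" not in entry:
        return {"componentType": "text", "props": {"content": entry["text"]}}
    return entry

normalize_component({"text": "Welcome!"})
# -> {"componentType": "text", "props": {"content": "Welcome!"}}
```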

Validation & lints

pairit config lint runs structural checks before you publish:

  • Validate props against JSON Schemas declared in components.
  • Enforce unique ids across pages, matchmaking, and agents.
  • Require unique button ids per page and ensure every go_to/branch target exists.
  • Verify assign statements only touch session_state.* and that RHS types match the schema.
  • Reject unknown action.type values and undeclared component events.
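The unique-id check, for example, can be sketched in Python roughly as follows. The function name `lint_unique_ids` and the config shape (top-level lists of dicts carrying an `id` field) are illustrative assumptions, not the CLI's internals:

```python
def lint_unique_ids(config):
    """Sketch of a unique-id lint: collect duplicate ids across the
    pages, matchmaking, and agents sections of a parsed config.
    Assumes each section is a list of dicts with an "id" field."""
    seen, errors = set(), []
    for section in ("pages", "matchmaking", "agents"):
        for entry in config.get(section, []):
            entry_id = entry.get("id")
            if entry_id in seen:
                errors.append(f"duplicate id: {entry_id}")
            seen.add(entry_id)
    return errors

# A page and an agent sharing an id would be flagged:
errors = lint_unique_ids({"pages": [{"id": "intro"}], "agents": [{"id": "intro"}]})
# errors == ["duplicate id: intro"]
```

The real linter also cross-references button ids and go_to/branch targets per page, which follows the same pattern: build an index of declared ids first, then check every reference against it.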

Media

pairit media upload hero.png
pairit media list --prefix onboarding/
pairit media delete onboarding/hero.png

Data Export

Export experiment data for analysis.

pairit data export <configId>                    # Export as CSV (default)
pairit data export <configId> --format json      # Export as JSON
pairit data export <configId> --format jsonl     # Export as JSONL
pairit data export <configId> --out ./exports    # Custom output directory

Creates six files per export:

  File                                 Contents
  {configId}-sessions.csv              Session records: sessionId, configId, status, session_state.*, prolific.*, timestamps
  {configId}-events.csv                Component events: sessionId, type, pageId, componentType, data.*, timestamp
  {configId}-chat-messages.csv         Chat history: messageId, groupId, senderId, senderType, content, createdAt
  {configId}-groups.csv                Group records: groupId, members, createdAt
  {configId}-survey-responses.csv      Survey response data: sessionId, itemId, answer, pageId, timestamp
  {configId}-workspace-documents.csv   Workspace documents: groupId, content, fields, updatedAt

Formats: CSV flattens nested objects with dot notation (session_state.treatment). JSON/JSONL preserve full nesting.
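The dot-notation flattening used for CSV can be sketched as a small recursive helper. This illustrates the behavior described above, not the exporter's actual code:

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-notation keys, e.g.
    {"session_state": {"treatment": "A"}} -> {"session_state.treatment": "A"}."""
    flat = {}
    for key, value in obj.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{dotted}."))
        else:
            flat[dotted] = value
    return flat

row = flatten({"sessionId": "s1", "session_state": {"treatment": "A", "round": 2}})
# row == {"sessionId": "s1", "session_state.treatment": "A", "session_state.round": 2}
```

Each flattened dict then maps directly onto one CSV row, with the dotted keys as column headers.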

Database layout (MongoDB)

Published configs live in the configs collection (keyed by configId) with metadata and a checksum. Runs create:

  • sessions documents (keyed by id) → { configId, currentPageId, session_state, endedAt?, createdAt, updatedAt, userId? }
  • events documents → { sessionId, configId, pageId, componentType, componentId, type, timestamp, data, createdAt }

Use pairit config get <configId> to download a compiled config snapshot for auditing or debugging (where supported).

Media objects live in the configured storage backend (local filesystem for dev, Google Cloud Storage in prod); pairit media * commands proxy uploads and deletes through the manager service, so the CLI never needs direct GCP credentials. Public media uploads should use the stable asset URL returned by the manager service, not a temporary signed read URL.
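As a rough analysis-side sketch of how the two collections relate, events can be joined back to their session's state via sessionId. The documents below follow the field shapes listed above; the join itself is an illustration, not part of the CLI:

```python
# Illustrative documents shaped like the sessions and events collections above.
sessions = [
    {"id": "s1", "configId": "my-exp", "session_state": {"treatment": "A"}},
]
events = [
    {"sessionId": "s1", "configId": "my-exp", "pageId": "intro", "type": "click"},
]

# Index sessions by id, then attach each event's treatment for analysis.
by_id = {s["id"]: s for s in sessions}
joined = [
    {**e, "treatment": by_id[e["sessionId"]]["session_state"]["treatment"]}
    for e in events
]
```

The same join falls out of the CSV export for free: both sessions and events files carry sessionId columns, so any dataframe library can merge them on that key.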