Single search endpoint with a mode discriminator.
API key must be provided in the Authorization header.
List of chat messages.
List of repositories to query. Can be strings (slug or display name) or dicts with a 'repository' field.
List of data sources to query. Can be strings (display_name, URL, or source_id) or dicts with 'source_id' or 'identifier' fields.
List of local folders to query. Can be strings (display_name or local_folder_id) or dicts with 'local_folder_id' or 'identifier' fields. Local folders are private and user-scoped.
List of Slack installation IDs to include in search.
Filters for Slack message results (channels, users, date range).
Filter local folder results by classification category (e.g., 'Work', 'Personal').
Filters for local/personal sources (messages, contacts, etc.).
Optional trust-aware filtering for curated source results.
Search mode: 'repositories', 'sources', or 'unified'.
Whether to stream the response.
Whether to include source texts in the response.
Skip LLM processing for faster results (100-500ms vs 2-8s). Set to false for deeper analysis.
Return raw search results without any LLM processing. Returns only sources with scores.
Retrieval strategy: 'vector' (default similarity search), 'tree' (LLM-guided tree navigation for PDFs), or 'hybrid' (both combined).
Maximum tokens in the response (100 <= x <= 100000). Results are truncated when the budget is reached.
Synthesis model override. Allowed: claude-sonnet-4-5, gpt-5.2-2025-12-11, claude-haiku-4-5-20251001.
Minimum similarity for a semantic cache hit (0.8 <= x <= 1; non-streaming only).
Skip the semantic cache (L2) lookup.
Generate follow-up questions (adds ~500-1000ms LLM latency). Disabled by default for speed.
Active E2E decrypt session ID. When provided, encrypted local folder results are decrypted through the desktop bridge before synthesis.
Search mode discriminator (default: "query").
Successful Response