POST /search
Unified search
curl --request POST \
  --url https://apigcp.trynia.ai/v2/search \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {}
  ],
  "repositories": [
    "<string>"
  ],
  "data_sources": [
    "<string>"
  ],
  "local_folders": [
    "<string>"
  ],
  "category": "<string>",
  "local_source_filters": {
    "source_subtype": "<string>",
    "db_type": "<string>",
    "connector_type": "<string>",
    "conversation_id": "<string>",
    "contact_id": "<string>",
    "sender_role": "<string>",
    "time_after": "<string>",
    "time_before": "<string>"
  },
  "search_mode": "unified",
  "stream": false,
  "include_sources": true,
  "fast_mode": true,
  "skip_llm": false,
  "reasoning_strategy": "vector",
  "max_tokens": 50050,
  "model": "<string>",
  "semantic_cache_threshold": 0.92,
  "bypass_semantic_cache": false,
  "include_follow_ups": false,
  "mode": "query"
}
'
Validation error response:
{
  "detail": [
    {
      "loc": [
        "<string>"
      ],
      "msg": "<string>",
      "type": "<string>"
    }
  ]
}
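The JSON above is the validation-error shape (a `detail` list whose entries carry `loc`, `msg`, and `type`). A minimal sketch of flattening it into readable messages client-side; the helper name is ours, not part of the API:

```python
def format_validation_errors(payload):
    """Flatten a {'detail': [...]} validation error into readable strings."""
    lines = []
    for err in payload.get("detail", []):
        # 'loc' is the path to the offending field, e.g. ["body", "messages"]
        path = ".".join(str(part) for part in err.get("loc", []))
        lines.append(f"{path}: {err.get('msg', 'invalid')}")
    return lines

example = {
    "detail": [
        {"loc": ["body", "messages"], "msg": "field required", "type": "value_error.missing"}
    ]
}
print(format_validation_errors(example))  # ['body.messages: field required']
```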

Body

application/json
messages
Messages · object[]
required

List of chat messages

Minimum array length: 1
repositories
(string | object)[]

List of repositories to query. Can be strings (slug, display name) or dicts with a 'repository' field.

data_sources
(string | object)[]

List of data sources to query. Can be strings (display_name, URL, or source_id) or dicts with 'source_id' or 'identifier' fields.

local_folders
(string | object)[]

List of local folders to query. Can be strings (display_name or local_folder_id) or dicts with 'local_folder_id' or 'identifier' fields. Local folders are private and user-scoped.

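`repositories`, `data_sources`, and `local_folders` each accept a mix of plain strings and dicts. A sketch of normalizing such a mixed list into the dict form before building a request; the helper is ours, not part of the API:

```python
def normalize_targets(items, key):
    """Coerce string-or-dict entries into dicts keyed by `key`.

    Strings become {key: value}; dicts pass through unchanged, matching the
    documented "strings or dicts with a '<key>' field" contract.
    """
    out = []
    for item in items:
        if isinstance(item, str):
            out.append({key: item})
        elif isinstance(item, dict):
            out.append(dict(item))
        else:
            raise TypeError(f"expected str or dict, got {type(item).__name__}")
    return out

# Mixed forms, as the schema allows (example values are illustrative):
repos = normalize_targets(["owner/repo", {"repository": "My Repo"}], "repository")
print(repos)  # [{'repository': 'owner/repo'}, {'repository': 'My Repo'}]
```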
category
string | null

Filter local folder results by classification category (e.g., 'Work', 'Personal')

local_source_filters
LocalSourceFilters · object

Filters for local/personal sources (messages, contacts, etc.)

search_mode
string
default:unified

Search mode: 'repositories', 'sources', or 'unified'

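`search_mode` selects which target lists are consulted. A sketch of assembling a minimal body that rejects undocumented modes client-side; the builder function is ours, and the message content is illustrative:

```python
# The three documented values for search_mode.
ALLOWED_SEARCH_MODES = {"repositories", "sources", "unified"}

def build_payload(messages, search_mode="unified"):
    """Assemble a minimal request body, rejecting undocumented modes."""
    if search_mode not in ALLOWED_SEARCH_MODES:
        raise ValueError(f"search_mode must be one of {sorted(ALLOWED_SEARCH_MODES)}")
    return {"messages": messages, "search_mode": search_mode}

body = build_payload([{"role": "user", "content": "find auth code"}])
print(body["search_mode"])  # unified
```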
stream
boolean
default:false

Whether to stream the response

include_sources
boolean
default:true

Whether to include source texts in the response

fast_mode
boolean
default:true

Skip LLM processing for faster results (100-500ms vs 2-8s). Set to false for deeper analysis.

skip_llm
boolean
default:false

Return raw search results without any LLM processing. Returns only sources with scores.

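`fast_mode` and `skip_llm` trade answer depth for latency. A sketch of three request presets layered over the documented defaults (`fast_mode=True`, `skip_llm=False`); the preset names are ours, not part of the API:

```python
PRESETS = {
    # Raw ranked sources only, no LLM processing at all
    "raw": {"skip_llm": True, "include_sources": True},
    # Documented default: quick pass (100-500ms per the docs)
    "fast": {"fast_mode": True, "skip_llm": False},
    # Deeper analysis (2-8s per the docs)
    "deep": {"fast_mode": False, "skip_llm": False},
}

def with_preset(messages, preset="fast"):
    """Build a request body with one of the latency presets applied."""
    body = {"messages": messages, "search_mode": "unified"}
    body.update(PRESETS[preset])
    return body

print(with_preset([{"role": "user", "content": "summarize"}], "deep")["fast_mode"])  # False
```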
reasoning_strategy
string
default:vector

Retrieval strategy: 'vector' (default similarity search), 'tree' (LLM-guided tree navigation for PDFs), or 'hybrid' (both combined)

max_tokens
integer | null

Maximum tokens in response. Results truncated when budget reached.

Required range: 100 <= x <= 100000
model
string | null

Synthesis model override. Allowed: claude-sonnet-4-5, gpt-5.2-2025-12-11, claude-haiku-4-5-20251001

semantic_cache_threshold
number
default:0.92

Minimum similarity for semantic cache hit (non-streaming only)

Required range: 0.8 <= x <= 1
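The two numeric fields above carry documented ranges (`max_tokens` in [100, 100000], `semantic_cache_threshold` in [0.8, 1]). A sketch of checking them client-side before sending, to fail fast instead of round-tripping a validation error; the function is ours, not part of the API:

```python
def check_ranges(body):
    """Client-side check of the documented numeric ranges before sending."""
    mt = body.get("max_tokens")
    if mt is not None and not (100 <= mt <= 100000):
        raise ValueError("max_tokens must be in [100, 100000]")
    thr = body.get("semantic_cache_threshold", 0.92)  # documented default
    if not (0.8 <= thr <= 1):
        raise ValueError("semantic_cache_threshold must be in [0.8, 1]")
    return body

check_ranges({"max_tokens": 50050})  # within range, passes
```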
bypass_semantic_cache
boolean
default:false

Skip semantic cache (L2) lookup

include_follow_ups
boolean
default:false

Generate follow-up questions (adds ~500-1000ms LLM latency). Disabled by default for speed.

mode
string
default:query

Search mode discriminator

Allowed value: "query"

Response

Successful Response