The OpenAI caller uses the OpenAI Go SDK. Credentials are read from the vault. Streaming uses Server-Sent Events with per-token onStream callbacks. Provider directory: api/integration-api/internal/caller/openai/

Vault Credential

| Key | Description |
|-----|-------------|
| key | OpenAI API key from platform.openai.com/api-keys |

Setup

1. Get an OpenAI API key

   Sign in at platform.openai.com, then go to API Keys → Create new secret key.

2. Add to Rapida vault

   In the Rapida dashboard → Credentials → Create Credential, select provider OpenAI, and enter key = your API key.

3. Configure the assistant LLM

   In the assistant settings → LLM Provider, select OpenAI and set model parameters:
```json
{
  "model.name": "gpt-4o",
  "model.temperature": 0.7,
  "model.max_tokens": 200,
  "model.tool_choice": "auto"
}
```

Supported Models

| Model | Context | Notes |
|-------|---------|-------|
| gpt-4o | 128k | Best balance of speed and capability |
| gpt-4o-mini | 128k | Fastest, lowest cost |
| gpt-4-turbo | 128k | High capability |
| gpt-4 | 8k | Legacy |
| gpt-3.5-turbo | 16k | Low cost |

Model Parameters

| Key | Supported | Notes |
|-----|-----------|-------|
| model.name | | Required |
| model.temperature | 0.0–2.0 | |
| model.max_tokens | | |
| model.top_p | | |
| model.stop | | Array of stop sequences |
| model.tool_choice | auto, none, required | |
| model.frequency_penalty | -2.0–2.0 | |
| model.presence_penalty | -2.0–2.0 | |
| model.seed | | Deterministic output |
| model.response_format | json_object, text | |
| model.reasoning_effort | low, medium, high | o-series models |
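The ranges above can be enforced before a request is dispatched. A hedged sketch of such a validator; the helper name and signature are assumptions, not part of the actual caller:

```go
package main

import "fmt"

// validate checks a few of the parameter ranges from the table:
// temperature 0.0–2.0, penalties -2.0–2.0, and the tool_choice enum.
// This helper is illustrative only.
func validate(temperature, freqPenalty, presPenalty float64, toolChoice string) error {
	if temperature < 0.0 || temperature > 2.0 {
		return fmt.Errorf("model.temperature must be in 0.0–2.0, got %v", temperature)
	}
	if freqPenalty < -2.0 || freqPenalty > 2.0 {
		return fmt.Errorf("model.frequency_penalty must be in -2.0–2.0, got %v", freqPenalty)
	}
	if presPenalty < -2.0 || presPenalty > 2.0 {
		return fmt.Errorf("model.presence_penalty must be in -2.0–2.0, got %v", presPenalty)
	}
	switch toolChoice {
	case "auto", "none", "required":
	default:
		return fmt.Errorf("model.tool_choice must be auto, none, or required, got %q", toolChoice)
	}
	return nil
}

func main() {
	fmt.Println(validate(0.7, 0, 0, "auto")) // nil error: all in range
	fmt.Println(validate(3.0, 0, 0, "auto")) // error: temperature out of range
}
```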