The Gemini caller uses the Google Generative AI Go SDK with streaming support. Provider directory: api/integration-api/internal/caller/gemini/

Vault Credential

| Key | Description |
| --- | --- |
| `key` | Google AI Studio API key from aistudio.google.com |

Setup

1. Get a Gemini API key

   Sign in at aistudio.google.com and click Get API key.

2. Add to Rapida vault

   In the Rapida dashboard → Credentials → Create Credential, select provider Gemini, and enter key = your API key.

3. Configure the assistant LLM

   In the assistant settings → LLM Provider, select Gemini and set model parameters:

   ```json
   {
     "model.name": "gemini-2.0-flash",
     "model.temperature": 0.7,
     "model.max_tokens": 512
   }
   ```
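Parameters arrive as a flat object with dotted `model.*` keys, as shown above. A minimal sketch of decoding that shape into typed values (the helper name is hypothetical, not Rapida's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseParams decodes the flat "model.*" parameter object into typed
// values. Illustrative only; the real caller has its own config types.
func parseParams(raw string) (name string, temp float64, maxTokens int, err error) {
	var params map[string]any
	if err = json.Unmarshal([]byte(raw), &params); err != nil {
		return
	}
	name, _ = params["model.name"].(string)
	temp, _ = params["model.temperature"].(float64)
	// encoding/json decodes all JSON numbers as float64,
	// so integer fields need an explicit conversion.
	if n, ok := params["model.max_tokens"].(float64); ok {
		maxTokens = int(n)
	}
	return
}

func main() {
	name, temp, maxTokens, err := parseParams(`{
	  "model.name": "gemini-2.0-flash",
	  "model.temperature": 0.7,
	  "model.max_tokens": 512
	}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(name, temp, maxTokens) // gemini-2.0-flash 0.7 512
}
```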

Supported Models

| Model | Context | Notes |
| --- | --- | --- |
| `gemini-2.0-flash` | 1M | Fastest, recommended for voice |
| `gemini-1.5-pro` | 2M | Largest context window |
| `gemini-1.5-flash` | 1M | Fast, low cost |
| `gemini-pro` | 32k | Legacy |

Model Parameters

| Key | Supported | Notes |
| --- | --- | --- |
| `model.name` | ✓ | Required |
| `model.temperature` | ✓ | 0.0–2.0 |
| `model.max_tokens` | ✓ | |
| `model.top_p` | ✓ | |
| `model.top_k` | ✓ | Gemini-specific |
| `model.stop` | ✓ | |
| `model.frequency_penalty` | ✓ | |
| `model.presence_penalty` | ✓ | |
| `model.seed` | ✓ | |
| `model.response_format` | ✓ | MIME type: `application/json`, `text/plain` |
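Some of these keys have constrained values (temperature range, allowed response MIME types). A stdlib-only validation sketch under the ranges in the table above; the struct and function names are hypothetical, not the caller's API:

```go
package main

import "fmt"

// ModelParams holds a subset of the model.* keys for validation.
// Names are illustrative only.
type ModelParams struct {
	Name           string
	Temperature    float64
	ResponseFormat string
}

// validateParams enforces the constraints listed in the table:
// model.name is required, model.temperature must be in 0.0–2.0,
// and model.response_format must be a supported MIME type.
func validateParams(p ModelParams) error {
	if p.Name == "" {
		return fmt.Errorf("model.name is required")
	}
	if p.Temperature < 0.0 || p.Temperature > 2.0 {
		return fmt.Errorf("model.temperature must be in 0.0–2.0, got %v", p.Temperature)
	}
	switch p.ResponseFormat {
	case "", "application/json", "text/plain":
		// empty means "use the provider default"
	default:
		return fmt.Errorf("unsupported model.response_format %q", p.ResponseFormat)
	}
	return nil
}

func main() {
	ok := ModelParams{Name: "gemini-2.0-flash", Temperature: 0.7}
	fmt.Println(validateParams(ok)) // <nil>
	bad := ModelParams{Name: "gemini-2.0-flash", Temperature: 3.0}
	fmt.Println(validateParams(bad) != nil) // true
}
```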