The Azure caller uses the same OpenAI API surface but targets your Azure OpenAI resource endpoint. This is useful when data residency or enterprise compliance requirements prevent using OpenAI directly. Provider directory: api/integration-api/internal/caller/azure/
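The practical difference is in the URL and the auth header, not the payload. A minimal sketch of the two request shapes (illustrative only, not Rapida's actual caller code; the endpoint, deployment, and api-version values are placeholders):

```python
# Illustrative comparison of the OpenAI vs. Azure OpenAI request shape.
# Values like the api-version are assumptions for the sketch.

def openai_request(model: str) -> dict:
    # OpenAI: model goes in the JSON body; auth is a Bearer token.
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": "Bearer <OPENAI_API_KEY>"},
        "body": {"model": model},
    }

def azure_request(endpoint: str, deployment: str, subscription_key: str) -> dict:
    # Azure: the deployment name is a URL path segment; auth is the
    # `api-key` header carrying the vault's subscription_key.
    return {
        "url": (f"{endpoint}/openai/deployments/{deployment}"
                "/chat/completions?api-version=2024-02-01"),
        "headers": {"api-key": subscription_key},
        "body": {},  # no "model" field; the deployment pins the model
    }
```

The body you send is otherwise the same, which is why the Azure caller can reuse the OpenAI API surface.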

Vault Credential

Key               Description
subscription_key  Azure OpenAI resource key (Key 1 or Key 2)
endpoint          Azure OpenAI resource endpoint, e.g. https://my-resource.openai.azure.com

Setup

1. Create an Azure OpenAI resource

In the Azure Portal, go to Create a resource → Azure OpenAI. After creation, open Keys and Endpoint and copy the key and endpoint.
2. Deploy a model

In Azure OpenAI Studio, go to Deployments → Create new deployment. Select a model (e.g. gpt-4o) and give it a deployment name.
3. Add to Rapida vault

In the Rapida dashboard → Credentials → Create Credential, select provider Azure OpenAI and enter:
  • subscription_key = Key 1 from Azure
  • endpoint = resource endpoint URL
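Before saving, it helps to sanity-check the two fields. A hypothetical pre-save check (the dashboard does its own validation; this just documents the expected shape of each field):

```python
# Hypothetical check for the two vault fields above. Not a Rapida API,
# just a sketch of what a well-formed credential looks like.
from urllib.parse import urlparse

def validate_azure_credential(cred: dict) -> list[str]:
    errors = []
    if not cred.get("subscription_key"):
        errors.append("subscription_key is required (Key 1 or Key 2)")
    parsed = urlparse(cred.get("endpoint", ""))
    if parsed.scheme != "https" or not parsed.netloc.endswith(".openai.azure.com"):
        errors.append("endpoint should look like https://<resource>.openai.azure.com")
    return errors
```

An empty list means both fields match the expected format.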
4. Configure the assistant LLM

In the assistant settings → LLM Provider, select Azure OpenAI and set the deployment name as the model:
{
  "model.name": "my-gpt4o-deployment",
  "model.temperature": 0.7,
  "model.max_tokens": 200
}
model.name must match your Azure deployment name exactly.
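To see why the names must match: on Azure, model.name ends up as a path segment of the request URL rather than a body field. A sketch of how the dot-keyed config above could map onto an Azure chat-completions call (the api-version value is an assumption for illustration):

```python
# Sketch: translating the assistant's dot-keyed model.* config into an
# Azure chat-completions request. model.name becomes the deployment path
# segment; the sampling parameters go into the request body.

def to_azure_call(config: dict, endpoint: str) -> dict:
    deployment = config["model.name"]  # must equal the Azure deployment name
    return {
        "url": (f"{endpoint}/openai/deployments/{deployment}"
                "/chat/completions?api-version=2024-02-01"),
        "body": {
            "temperature": config.get("model.temperature", 1.0),
            "max_tokens": config.get("model.max_tokens"),
        },
    }

call = to_azure_call(
    {"model.name": "my-gpt4o-deployment",
     "model.temperature": 0.7,
     "model.max_tokens": 200},
    "https://my-resource.openai.azure.com",
)
```

If the deployment name is wrong, Azure rejects the request at the URL level, regardless of the body contents.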

Model Parameters

Same as OpenAI — all model.* parameters are supported. See the OpenAI page for the full table.