Getting Started

To integrate OpenAI with your Rapida application, follow these steps:

Supported Models

OpenAI offers a variety of models that can be used with this integration. Here’s a table of the supported models:

| Model Name | Series | Description |
| --- | --- | --- |
| gpt-4 | GPT-4 | Latest GPT-4 model with improved performance |
| gpt-4-0613 | GPT-4 | June 2023 version of GPT-4 |
| gpt-4o | GPT-4 | GPT-4 Omni, a multimodal version of GPT-4 |
| gpt-4-turbo-preview | GPT-4 | Preview of the turbo version of GPT-4 |
| gpt-4-turbo | GPT-4 | Turbo version of GPT-4 for faster processing |
| gpt-4.1-mini | GPT-4 | Smaller, faster version of GPT-4.1 |
| gpt-4.1-nano | GPT-4 | Nano-sized version of GPT-4.1 for lightweight applications |
| gpt-3.5-turbo | GPT-3.5 | Optimized version of GPT-3.5 for chat-based applications |
| gpt-3.5-turbo-16k | GPT-3.5 | GPT-3.5 Turbo with expanded 16k-token context |
| gpt-3.5-turbo-16k-0613 | GPT-3.5 | June 2023 version of GPT-3.5 Turbo with 16k-token context |
| o3-mini | O Series | Mini version of the o3 model |
| o3 | O Series | Standard o3 model |
| o3-pro | O Series | Professional version of the o3 model |
| o4-mini | O Series | Mini version of the o4 model |
| gpt-4o-mini | GPT-4 | Smaller, faster version of GPT-4o |
| o1 | O Series | Standard o1 model |
| o1-pro | O Series | Professional version of the o1 model |

These models offer various capabilities and performance levels. Choose the appropriate model based on your specific use case and requirements.
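
The model names in the table are the same identifiers you would pass when calling OpenAI directly. The sketch below uses the official `openai` Python package (v1 interface); the key placeholder, model choice, and prompt are illustrative, and in practice Rapida issues these calls for you once the provider credential described later is configured.

```python
# Illustrative sketch: calling a chat model from the table above directly
# with the official `openai` Python package. Rapida handles this for you
# once the provider credential is configured; this only shows how the
# model names map onto API calls.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder; use your own key

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model name from the table
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a one-line summary of embeddings."},
    ],
)

print(response.choices[0].message.content)
```
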
In addition to the language models, OpenAI also provides embedding models. Here’s a table of the supported embedding models:

| Model Name | Description |
| --- | --- |
| text-embedding-3-large | Large version of the text embedding model |
| text-embedding-3-small | Small version of the text embedding model |
| text-embedding-ada-002 | Ada-based text embedding model |

These embedding models are designed to convert text into numerical vectors, which can be used for various natural language processing tasks such as semantic search, clustering, and similarity comparisons.
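
To make this concrete, here is a minimal sketch of generating an embedding with the official `openai` Python package; the input string is an arbitrary example, and the printed dimension applies to the default output size of `text-embedding-3-small`.

```python
# Illustrative sketch: turning text into a numerical vector with one of the
# embedding models listed above, via the official `openai` Python package.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder; use your own key

result = client.embeddings.create(
    model="text-embedding-3-small",  # or text-embedding-3-large / text-embedding-ada-002
    input="Rapida integrates with multiple model providers.",
)

vector = result.data[0].embedding  # a plain list of floats
print(len(vector))  # 1536 for text-embedding-3-small's default output size
```

Vectors produced this way can then be compared (for example with cosine similarity) for semantic search, clustering, or deduplication.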

Prerequisites

  • Go to the OpenAI platform at https://platform.openai.com.
  • Sign up or log in to your OpenAI account.
  • Navigate to the API section.
  • Click on “Create new secret key” to generate your API key.
  • Copy the API key (make sure to save it securely, as it won’t be shown again); an optional way to verify the key works is sketched after this list.
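
Before entering the key in Rapida, you can optionally confirm it is valid. A minimal check, assuming the official `openai` Python package is installed (`pip install openai`), is to list the models the key can access:

```python
# Optional sanity check (not part of the Rapida setup flow): a valid key
# should be able to list the models it has access to.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # paste the key you just created

for model in client.models.list().data[:5]:
    print(model.id)  # prints a few model IDs; an invalid key raises an AuthenticationError
```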

Setting Up Provider Credentials

Step 1: Access the Integrations Page

Navigate to the “Integration > Models” page. Here you’ll see a grid of various AI model providers, including AWS Bedrock, Azure OpenAI, Anthropic, Cohere, OpenAI, and more.
Step 2: Select a Provider

On the Integrations page, find the provider you want to set up credentials for. Each provider card shows a brief description and a “Connected” or “Setup Credential” button. Click the “Setup Credential” button for your chosen provider.
Step 3: Create Provider Credential

A modal window will appear titled “Create provider credential”. Follow these steps:
  1. Select Your Provider from the dropdown (if not already selected)
  2. Enter a Key Name: Assign a unique name to this provider key for easy identification
  3. Enter the Key: Input the actual API key or credential for the provider
  4. Click “Configure” to save the credential
Step 4: Verify Credential Setup

After setting up the credential, you can verify it’s been added:
  1. The provider card should now show “Connected”
  2. If you click on the provider, you’ll see a “View provider credential” modal
  3. This modal displays the credential name, when it was last updated, and options to delete or close
Your provider credential is now set up and ready to use with the integration system.