# rapida.ai documentation

## Docs

- [Assistant Conversation Logs](https://doc.rapida.ai/activity/conversation-logs.md): Monitor and improve your AI assistant interactions in Rapida
- [LLM Logs](https://doc.rapida.ai/activity/llm-logs.md): Monitor and analyze Language Model (LLM) interactions in the Rapida platform
- [Webhook Logs](https://doc.rapida.ai/activity/webhook-logs.md): Monitor and analyze webhook interactions in the Rapida platform
- [Get All Assistants](https://doc.rapida.ai/api-reference/assistant/get-all-assistant.md)
- [Get All Assistant Analysis](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-analysis.md)
- [Get All Assistant Conversations](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-conversation.md)
- [Get All Assistant Knowledge](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-knowledge.md)
- [Get All Assistant Tools](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-tool.md)
- [Get All Assistant Webhooks](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-webhook.md)
- [Get All Assistant Webhook Logs](https://doc.rapida.ai/api-reference/assistant/get-all-assistant-webhook-log.md)
- [Get Assistant](https://doc.rapida.ai/api-reference/assistant/get-assistant.md)
- [Get Assistant Analysis](https://doc.rapida.ai/api-reference/assistant/get-assistant-analysis.md)
- [Get Assistant Conversation](https://doc.rapida.ai/api-reference/assistant/get-assistant-conversation.md)
- [Get Assistant Knowledge](https://doc.rapida.ai/api-reference/assistant/get-assistant-knowledge.md)
- [Get Assistant Tool](https://doc.rapida.ai/api-reference/assistant/get-assistant-tool.md)
- [Get Assistant Webhook](https://doc.rapida.ai/api-reference/assistant/get-assistant-webhook.md)
- [Get Assistant Webhook Log](https://doc.rapida.ai/api-reference/assistant/get-assistant-webhook-log.md)
- [Authentication](https://doc.rapida.ai/api-reference/authentication.md): How to authenticate and connect when using the SDK
- [Create Bulk Phone Calls](https://doc.rapida.ai/api-reference/call/create-bulk-call.md)
- [Create Phone Call](https://doc.rapida.ai/api-reference/call/create-call.md)
- [Get All Endpoints](https://doc.rapida.ai/api-reference/endpoint/get-all-endpoint.md)
- [Get All Endpoint Logs](https://doc.rapida.ai/api-reference/endpoint/get-all-endpoint-log.md)
- [Get Endpoint](https://doc.rapida.ai/api-reference/endpoint/get-endpoint.md)
- [Get Endpoint Log](https://doc.rapida.ai/api-reference/endpoint/get-endpoint-log.md)
- [Invoke](https://doc.rapida.ai/api-reference/endpoint/invoke.md)
- [Installation](https://doc.rapida.ai/api-reference/installation.md): Build natural-sounding AI voice assistants with ease
- [AgentKit — Custom LLM Backend](https://doc.rapida.ai/assistants/agentkit.md): Build a custom server-side LLM backend that Rapida calls over gRPC, giving you full control over reasoning, tool use, and context management.
- [Create Analysis for Assistant](https://doc.rapida.ai/assistants/analysis/create-analysis.md): Step-by-step guide to setting up analysis for your AI assistant
- [Overview](https://doc.rapida.ai/assistants/analysis/overview.md): Step-by-step guide to setting up analysis for your AI assistant
- [Update Analysis for Assistant](https://doc.rapida.ai/assistants/analysis/update-analysis.md): Step-by-step guide to updating analysis for your AI assistant
- [Create and Configure an Assistant](https://doc.rapida.ai/assistants/create-assistant.md): Build a production-ready voice AI assistant — from prompt definition to voice pipeline, tools, knowledge, and deployment.
- [Creating a New Version](https://doc.rapida.ai/assistants/create-new-version.md): Step-by-step guide to creating a new version of an existing AI assistant
- [End of Speech Detection (EOS)](https://doc.rapida.ai/assistants/end-of-speech.md): Understand End of Speech providers, their parameters, and how to choose the right one for your voice AI assistant.
- [Tools](https://doc.rapida.ai/assistants/introduction-to-tools.md): Enhance your assistant's capabilities with built-in and custom tools
- [Adding Knowledge Sources](https://doc.rapida.ai/assistants/knowledge/add-knowledge.md): Configure knowledge sources for your assistant to enhance information retrieval
- [Assistants](https://doc.rapida.ai/assistants/overview.md): Understand what an assistant is, how it is structured, how a live conversation flows through it, and how to iterate safely in production.
- [Assistant Prompt Templating](https://doc.rapida.ai/assistants/prompt-templating.md): How to use runtime prompt variables correctly, with variable meaning tables and production-ready examples.
- [Assistant Telemetry](https://doc.rapida.ai/assistants/telemetry.md): Configure telemetry providers, understand required fields, and use telemetry records for debugging and optimization.
- [API Call Tool](https://doc.rapida.ai/assistants/tools/add-api-tool.md): Configure and use the API Call Tool to make external API requests
- [End of Conversation Tool](https://doc.rapida.ai/assistants/tools/add-end-of-conversation-tool.md): Configure and use the End of Conversation Tool to signal the end of a conversation
- [Endpoint (LLM Call) Tool](https://doc.rapida.ai/assistants/tools/add-endpoint-tool.md): Configure and use the Endpoint (LLM Call) Tool to make API calls to language models
- [Knowledge Retrieval Tool](https://doc.rapida.ai/assistants/tools/add-knowledge-tool.md): Configure and use the Knowledge Retrieval Tool to access information from your knowledge base
- [Put On Hold Tool](https://doc.rapida.ai/assistants/tools/add-put-on-hold-tool.md): Configure and use the Put On Hold Tool to pause phone calls for a specified duration
- [Obtaining Twilio Credentials](https://doc.rapida.ai/assistants/twilio-credentials.md): How to find and use your Twilio account credentials for integration
- [Twilio Handler Configuration](https://doc.rapida.ai/assistants/twilio-handler.md): Set up the primary handler URL to connect Twilio with your assistant
- [Twilio Phone Integration](https://doc.rapida.ai/assistants/twilio-integration.md): Complete guide to deploying your assistant with Twilio phone capabilities
- [Voice Activity Detection (VAD)](https://doc.rapida.ai/assistants/voice-activity-detection.md): Understand VAD providers, their parameters, and how to choose the right one for your voice AI assistant.
- [Setting Up Webhooks](https://doc.rapida.ai/assistants/webhook/create-webhook.md): Configure webhooks to receive assistant data and conversation analysis
- [Overview](https://doc.rapida.ai/assistants/webhook/overview.md): Configure webhooks to receive assistant data and conversation analysis
- [Update Webhook](https://doc.rapida.ai/assistants/webhook/update-webhook.md): Configure webhooks to receive assistant data and conversation analysis
- [Rapida Credentials](https://doc.rapida.ai/credential/rapida-credentials.md): Rapida offers two types of credentials, Project Credentials and Personal Tokens, each serving a different purpose for authentication and access to Rapida platform services.
- [Deployment Options](https://doc.rapida.ai/deployments/deployment-options.md): Understanding how to deploy the Rapida Voice AI Platform
- [Rapida Fully Managed Deployment](https://doc.rapida.ai/deployments/fully-managed.md): Deploy the Rapida Voice AI Platform as a fully managed service
- [Self-hosted Deployment](https://doc.rapida.ai/deployments/self-hosted.md): Deploy the Rapida Voice AI Platform on your own infrastructure using Docker Compose
- [Create endpoint](https://doc.rapida.ai/endpoint/create-endpoint.md): Endpoints allow you to integrate Large Language Models (LLMs) into your application, providing a powerful interface for AI-driven functionalities.
- [Create new version](https://doc.rapida.ai/endpoint/create-new-version.md): Quick guide to updating your existing endpoint with a new version.
- [Monitoring & debugging](https://doc.rapida.ai/endpoint/monitor-endpoint-performance.md): The endpoint dashboard provides a detailed view of your deployment's performance and activity. Learn how to use it effectively for monitoring and debugging.
- [Overview](https://doc.rapida.ai/endpoint/overview.md): Seamlessly integrate LLMs into your application with powerful, flexible endpoints
- [Organizations](https://doc.rapida.ai/governances/organization.md): Create and manage your organization — the top-level entity that owns all projects, billing, and members in Rapida.
- [Governance Overview](https://doc.rapida.ai/governances/overview.md): Understand how organizations, projects, users, and roles are structured in the Rapida platform.
- [Projects](https://doc.rapida.ai/governances/project.md): Projects are isolated workspaces within your organization — each with its own assistants, endpoints, knowledge bases, credentials, and team members.
- [Roles & Permissions](https://doc.rapida.ai/governances/roles-and-permission.md): Team members access your organization and its projects with individual user accounts, which are what you use to sign in to RapidaAI. An account must be part of an organization but does not need to be part of every project in the organization. Each account has a single organization role pe…
- [Anthropic](https://doc.rapida.ai/integrations/llm/anthropic.md): Anthropic is an AI research company known for its advanced language models, including the Claude series.
- [Azure OpenAI](https://doc.rapida.ai/integrations/llm/azure-openai.md): Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the GPT-4 series.
- [AWS Bedrock](https://doc.rapida.ai/integrations/llm/bedrock.md): AWS Bedrock provides a fully managed service that offers a choice of high-performing foundation models from leading AI companies through a single API.
- [Cohere](https://doc.rapida.ai/integrations/llm/cohere.md): Cohere is an AI company that provides large language models and NLP tools for developers and businesses.
- [Gemini / Google AI Studio](https://doc.rapida.ai/integrations/llm/gemini.md): Gemini is Google's latest family of large language models that can understand and generate text, images, and more.
- [Google AI](https://doc.rapida.ai/integrations/llm/google-ai.md): Google AI provides advanced language models and AI capabilities through a unified API.
- [Google Vertex AI](https://doc.rapida.ai/integrations/llm/google-vertex-ai.md): Google Vertex AI is a fully managed, unified AI development platform that provides access to Google's advanced models with enterprise features.
- [Hugging Face](https://doc.rapida.ai/integrations/llm/huggingface.md): Hugging Face provides a wide range of open-source models and AI capabilities through their Inference API.
- [Mistral](https://doc.rapida.ai/integrations/llm/mistral.md): Mistral specializes in creating fast, secure, open-source large language models with an excellent performance-to-cost ratio.
- [OpenAI](https://doc.rapida.ai/integrations/llm/openai.md): OpenAI is a leading AI research and deployment company, known for its advanced language models like GPT-3 and GPT-4.
- [VoyageAI](https://doc.rapida.ai/integrations/llm/voyageai.md): VoyageAI is an AI company that provides advanced language models and AI solutions for various applications.
- [AssemblyAI](https://doc.rapida.ai/integrations/stt/assemblyai.md): AssemblyAI provides advanced speech-to-text with AI-powered audio intelligence features.
- [AWS Transcribe](https://doc.rapida.ai/integrations/stt/aws-transcribe.md): Amazon Transcribe is a fully managed automatic speech recognition (ASR) service that makes it easy to add speech-to-text capabilities to applications.
- [Azure Cognitive Services](https://doc.rapida.ai/integrations/stt/azure-speech-service.md): Azure Cognitive Services Speech-to-Text provides real-time and batch transcription in 100+ languages.
- [Cartesia](https://doc.rapida.ai/integrations/stt/cartesia.md): Cartesia provides advanced voice AI solutions with both speech-to-text and text-to-speech capabilities.
- [Deepgram](https://doc.rapida.ai/integrations/stt/deepgram.md): Deepgram provides advanced speech-to-text and text-to-speech capabilities powered by AI.
- [Google Speech Service](https://doc.rapida.ai/integrations/stt/google-speech-service.md): Google Speech Service provides advanced speech-to-text and text-to-speech capabilities powered by Google Cloud.
- [Groq STT](https://doc.rapida.ai/integrations/stt/groq.md): Groq provides ultra-fast speech-to-text via an OpenAI Whisper-compatible API powered by its LPU inference engine.
- [NVIDIA STT](https://doc.rapida.ai/integrations/stt/nvidia.md): NVIDIA provides enterprise-grade automatic speech recognition via the NVCF API.
- [OpenAI Whisper](https://doc.rapida.ai/integrations/stt/openai-whisper.md): OpenAI Whisper is a state-of-the-art speech recognition model with broad language support and high accuracy.
- [Rev.ai](https://doc.rapida.ai/integrations/stt/revai.md): Rev.ai provides highly accurate speech recognition powered by deep learning, with real-time and asynchronous transcription options.
- [Sarvam AI](https://doc.rapida.ai/integrations/stt/sarvam.md): Sarvam AI provides speech-to-text and text-to-speech solutions with strong support for Indian languages.
- [Speechmatics](https://doc.rapida.ai/integrations/stt/speechmatics.md): Speechmatics delivers highly accurate, language-inclusive speech recognition with real-time and batch processing capabilities.
- [Asterisk](https://doc.rapida.ai/integrations/telephony/asterisk.md): Connect your Asterisk PBX to Rapida AI for real-time voice conversations using AudioSocket (native TCP) or WebSocket transport.
- [Exotel](https://doc.rapida.ai/integrations/telephony/exotel.md): Connect your Exotel phone numbers to Rapida AI for inbound and outbound voice AI conversations.
- [SIP Trunk](https://doc.rapida.ai/integrations/telephony/sip.md): Connect any SIP-compatible PBX, carrier, or VoIP provider to Rapida AI for voice conversations.
- [Twilio](https://doc.rapida.ai/integrations/telephony/twilio.md): Connect your Twilio phone numbers to Rapida AI for inbound and outbound voice AI conversations.
- [Vonage](https://doc.rapida.ai/integrations/telephony/vonage.md): Connect your Vonage phone numbers to Rapida AI for inbound and outbound voice AI conversations.
- [AWS Polly](https://doc.rapida.ai/integrations/tts/aws-polly.md): Amazon Polly is a cloud-based text-to-speech service that synthesizes natural-sounding speech in dozens of languages and voices.
- [Azure Cognitive Services](https://doc.rapida.ai/integrations/tts/azure-speech-service.md): Azure Cognitive Services Text-to-Speech provides natural-sounding voice synthesis with multiple languages and voices.
- [Cartesia Text-to-Speech](https://doc.rapida.ai/integrations/tts/cartesia.md): Cartesia delivers advanced text-to-speech capabilities with ultra-realistic voice synthesis.
- [Deepgram Text-to-Speech](https://doc.rapida.ai/integrations/tts/deepgram.md): Deepgram's text-to-speech service provides ultra-realistic voice synthesis with natural-sounding output.
- [ElevenLabs](https://doc.rapida.ai/integrations/tts/elevenlabs.md): ElevenLabs provides state-of-the-art text-to-speech with natural-sounding AI voices and advanced voice customization.
- [Google Text-to-Speech](https://doc.rapida.ai/integrations/tts/google-speech-service.md): Google Cloud Text-to-Speech provides natural-sounding voice synthesis powered by advanced neural networks.
- [Groq TTS](https://doc.rapida.ai/integrations/tts/groq.md): Groq provides ultra-fast text-to-speech via an OpenAI-compatible API powered by its LPU inference engine.
- [MiniMax](https://doc.rapida.ai/integrations/tts/minimax.md): MiniMax provides high-quality speech synthesis via HTTP streaming SSE with multiple model tiers.
- [Neuphonic](https://doc.rapida.ai/integrations/tts/neuphonic.md): Neuphonic provides multilingual streaming text-to-speech via WebSocket with adjustable speech speed.
- [NVIDIA TTS](https://doc.rapida.ai/integrations/tts/nvidia.md): NVIDIA provides enterprise-grade neural text-to-speech via the NVCF API.
- [OpenAI TTS](https://doc.rapida.ai/integrations/tts/openai-tts.md): OpenAI Text-to-Speech delivers natural-sounding speech synthesis with multiple high-quality voices.
- [Resemble AI](https://doc.rapida.ai/integrations/tts/resemble.md): Resemble AI provides high-quality, real-time voice synthesis with support for custom voice cloning.
- [Rime](https://doc.rapida.ai/integrations/tts/rime.md): Rime offers ultra-low-latency text-to-speech with natural-sounding voices via WebSocket streaming.
- [Sarvam AI Text-to-Speech](https://doc.rapida.ai/integrations/tts/sarvam.md): Sarvam AI provides text-to-speech capabilities with strong support for Indian languages.
- [Speechmatics TTS](https://doc.rapida.ai/integrations/tts/speechmatics.md): Speechmatics provides multilingual HTTP streaming text-to-speech backed by its speech platform.
- [Rapida Voice AI Platform](https://doc.rapida.ai/introduction/overview.md): The open-source platform for building, deploying, and operating production voice AI systems at scale.
- [Add new document](https://doc.rapida.ai/knowledge/add-new-document.md): A simple guide to adding documents to your knowledge base
- [Create knowledge](https://doc.rapida.ai/knowledge/create-knowledge.md): Learn how to create a new knowledge base in the system
- [Manage Document Chunks](https://doc.rapida.ai/knowledge/manage-document-chunk.md): Learn how to manage and enhance document chunks for improved knowledge base performance
- [Overview](https://doc.rapida.ai/knowledge/overview.md): Enhance your AI applications with powerful knowledge integration
- [Architecture Overview](https://doc.rapida.ai/opensource/architecture.md): System architecture, service topology, and data flows for the Rapida Voice AI Platform
- [Configuration Reference](https://doc.rapida.ai/opensource/configuration.md): Complete configuration options for Rapida services
- [Installation](https://doc.rapida.ai/opensource/installation.md): Get Rapida up and running: choose your setup method
- [Self-Hosting Rapida](https://doc.rapida.ai/opensource/overview.md): Run the full Rapida Voice AI platform on your own infrastructure. Docker Compose gets you from zero to a working deployment in under ten minutes.
- [Assistant API — Configuration](https://doc.rapida.ai/opensource/services/assistant-api/configuration.md): Complete environment variable reference for the assistant-api service.
- [LiveKit Turn Detector EOS](https://doc.rapida.ai/opensource/services/assistant-api/eos/livekit.md): Configure LiveKit Turn Detector end-of-speech detection in assistant-api.
- [End of Speech Detection — Overview](https://doc.rapida.ai/opensource/services/assistant-api/eos/overview.md): EOS interface, factory function, providers, and model setup in assistant-api.
- [Pipecat Smart Turn EOS](https://doc.rapida.ai/opensource/services/assistant-api/eos/pipecat.md): Configure Pipecat Smart Turn end-of-speech detection in assistant-api.
- [Silence-Based EOS](https://doc.rapida.ai/opensource/services/assistant-api/eos/silence-based.md): Configure silence-based end-of-speech detection in assistant-api.
- [Local Telephony with ngrok](https://doc.rapida.ai/opensource/services/assistant-api/ngrok.md): Use ngrok to expose assistant-api to Twilio, Vonage, and Exotel webhooks during local development.
- [Assistant API](https://doc.rapida.ai/opensource/services/assistant-api/overview.md): Voice orchestration hub. Manages the full real-time pipeline — STT, LLM, TTS — across WebSocket, telephony, and SIP channels.
- [Assistant API — Prompt Templating](https://doc.rapida.ai/opensource/services/assistant-api/prompt-templating.md): Prompt argument pipeline, variable namespaces, and rendering behavior in the model executor.
- [AssemblyAI STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/assemblyai.md): Configure AssemblyAI real-time speech-to-text with speaker diarization in assistant-api.
- [AWS Transcribe STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/aws.md): Configure AWS Transcribe speech-to-text via HTTP in assistant-api.
- [Azure STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/azure.md): Configure Azure Cognitive Services Speech-to-Text in assistant-api.
- [Configure Your Own STT Provider](https://doc.rapida.ai/opensource/services/assistant-api/stt/custom.md): Implement the SpeechToTextTransformer interface to add a new STT provider to assistant-api.
- [Deepgram STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/deepgram.md): Configure Deepgram Nova-2/Nova-3 speech-to-text in assistant-api.
- [Google Cloud STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/google.md): Configure Google Cloud Speech-to-Text in assistant-api.
- [Groq STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/groq.md): Configure Groq speech-to-text with ultra-fast Whisper inference in assistant-api.
- [NVIDIA STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/nvidia.md): Configure NVIDIA speech-to-text via the NVCF API in assistant-api.
- [Speech-to-Text — Overview](https://doc.rapida.ai/opensource/services/assistant-api/stt/overview.md): STT transformer interface, factory functions, and supported providers in assistant-api.
- [Rev.ai STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/revai.md): Configure Rev.ai real-time speech-to-text in assistant-api.
- [Sarvam AI STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/sarvamai.md): Configure Sarvam AI speech-to-text for Indian languages in assistant-api.
- [Speechmatics STT](https://doc.rapida.ai/opensource/services/assistant-api/stt/speechmatics.md): Configure Speechmatics real-time speech-to-text with WebSocket streaming in assistant-api.
- [Assistant API — Telemetry](https://doc.rapida.ai/opensource/services/assistant-api/telemetry.md): UI setup, runtime wiring, provider options, and telemetry query behavior for assistant-api.
- [Asterisk](https://doc.rapida.ai/opensource/services/assistant-api/telephony/asterisk.md): Run assistant-api with Asterisk PBX using AudioSocket (native TCP) or WebSocket transport.
- [Configure Your Own Telephony Provider](https://doc.rapida.ai/opensource/services/assistant-api/telephony/custom.md): Implement the Telephony and Streamer interfaces to add a new telephony provider to assistant-api.
- [Exotel](https://doc.rapida.ai/opensource/services/assistant-api/telephony/exotel.md): Run assistant-api with Exotel for PSTN voice calls in India and South-East Asia.
- [Telephony — Overview](https://doc.rapida.ai/opensource/services/assistant-api/telephony/overview.md): Provider comparison, URL routing pattern, and inbound call flow for assistant-api telephony.
- [SIP](https://doc.rapida.ai/opensource/services/assistant-api/telephony/sip.md): Run assistant-api with the built-in SIP server for direct SIP connectivity.
- [Twilio](https://doc.rapida.ai/opensource/services/assistant-api/telephony/twilio.md): Run assistant-api with Twilio for global PSTN voice calls using WebSocket Media Streams.
- [Vonage](https://doc.rapida.ai/opensource/services/assistant-api/telephony/vonage.md): Run assistant-api with Vonage (Nexmo) for global PSTN voice calls using WebSocket and NCCO.
- [AWS Polly TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/aws.md): Configure AWS Polly text-to-speech via HTTP in assistant-api.
- [Azure TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/azure.md): Configure Azure Cognitive Services Neural text-to-speech in assistant-api.
- [Cartesia TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/cartesia.md): Configure Cartesia ultra-low latency text-to-speech in assistant-api.
- [Configure Your Own TTS Provider](https://doc.rapida.ai/opensource/services/assistant-api/tts/custom.md): Implement the TextToSpeechTransformer interface to add a new TTS provider to assistant-api.
- [Deepgram Aura TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/deepgram.md): Configure Deepgram Aura text-to-speech in assistant-api.
- [ElevenLabs TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/elevenlabs.md): Configure ElevenLabs text-to-speech with voice cloning in assistant-api.
- [Google Cloud TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/google.md): Configure Google Cloud Text-to-Speech in assistant-api.
- [Groq TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/groq.md): Configure Groq text-to-speech with ultra-fast HTTP inference in assistant-api.
- [MiniMax TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/minimax.md): Configure MiniMax text-to-speech with HTTP streaming SSE in assistant-api.
- [Neuphonic TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/neuphonic.md): Configure Neuphonic text-to-speech with WebSocket streaming in assistant-api.
- [NVIDIA TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/nvidia.md): Configure NVIDIA text-to-speech via the NVCF API in assistant-api.
- [Text-to-Speech — Overview](https://doc.rapida.ai/opensource/services/assistant-api/tts/overview.md): TTS transformer interface, factory function, and supported providers in assistant-api.
- [Resemble AI TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/resembleai.md): Configure Resemble AI text-to-speech with real-time WebSocket streaming in assistant-api.
- [Rime TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/rime.md): Configure Rime text-to-speech with ultra-low-latency WebSocket streaming in assistant-api.
- [Sarvam AI TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/sarvamai.md): Configure Sarvam AI text-to-speech for Indian languages in assistant-api.
- [Speechmatics TTS](https://doc.rapida.ai/opensource/services/assistant-api/tts/speechmatics.md): Configure Speechmatics text-to-speech with HTTP streaming in assistant-api.
- [FireRed VAD](https://doc.rapida.ai/opensource/services/assistant-api/vad/firered.md): Configure FireRed Voice Activity Detection in assistant-api.
- [Voice Activity Detection — Overview](https://doc.rapida.ai/opensource/services/assistant-api/vad/overview.md): VAD interface, factory function, providers, and model setup in assistant-api.
- [Silero VAD](https://doc.rapida.ai/opensource/services/assistant-api/vad/silero.md): Configure Silero Voice Activity Detection in assistant-api.
- [TEN VAD](https://doc.rapida.ai/opensource/services/assistant-api/vad/ten.md): Configure TEN Framework Voice Activity Detection in assistant-api.
- [Document API — Configuration](https://doc.rapida.ai/opensource/services/document-api/configuration.md): config.yaml and environment variable reference for the document-api service.
- [Document API](https://doc.rapida.ai/opensource/services/document-api/overview.md): Knowledge base and RAG pipeline. Handles document ingestion, chunking, embedding generation, and semantic search for assistant context retrieval.
- [Endpoint API — Configuration](https://doc.rapida.ai/opensource/services/endpoint-api/configuration.md): Complete environment variable reference for the endpoint-api service.
- [Endpoint API](https://doc.rapida.ai/opensource/services/endpoint-api/overview.md): Webhook and callback management. Routes post-call events to external systems with configurable retry, signature verification, and delivery history.
- [Integration API — Configuration](https://doc.rapida.ai/opensource/services/integration-api/configuration.md): Complete environment variable reference for the integration-api service.
- [Anthropic](https://doc.rapida.ai/opensource/services/integration-api/llm/anthropic.md): Configure Anthropic Claude models in integration-api.
- [Azure OpenAI](https://doc.rapida.ai/opensource/services/integration-api/llm/azure.md): Configure Azure-hosted OpenAI deployments in integration-api.
- [Configure Your Own LLM Provider](https://doc.rapida.ai/opensource/services/integration-api/llm/custom.md): Implement the LargeLanguageCaller interface to add a new LLM provider to integration-api.
- [Google Gemini](https://doc.rapida.ai/opensource/services/integration-api/llm/gemini.md): Configure Google Gemini models in integration-api.
- [OpenAI](https://doc.rapida.ai/opensource/services/integration-api/llm/openai.md): Configure OpenAI GPT models in integration-api.
- [LLM Providers — Overview](https://doc.rapida.ai/opensource/services/integration-api/llm/overview.md): LargeLanguageCaller interface, ChatCompletionOptions, and supported LLM providers in integration-api.
- [Integration API](https://doc.rapida.ai/opensource/services/integration-api/overview.md): Provider execution layer. Manages encrypted credential storage and executes all external AI provider calls on behalf of assistant-api.
- [Web API — Configuration](https://doc.rapida.ai/opensource/services/web-api/configuration.md): Complete environment variable reference for the web-api service.
- [Web API](https://doc.rapida.ai/opensource/services/web-api/overview.md): Core platform backend. Handles authentication, organization management, and the credential vault, and acts as the gRPC proxy for all dashboard operations.
- [Web Console & UI](https://doc.rapida.ai/opensource/services/web-console.md): Web dashboard and user interface for Rapida
- [Troubleshooting Guide](https://doc.rapida.ai/opensource/troubleshooting.md): Solutions for common Rapida setup and runtime issues
- [Phone Call Deployment](https://doc.rapida.ai/voice-deployment-options/phone.md): Deploy your AI assistant on inbound and outbound phone calls with full telephony, STT, and TTS configuration.
- [Web App / SDK Deployment](https://doc.rapida.ai/voice-deployment-options/web-app.md): Integrate your AI assistant into any React application via the Rapida SDK or REST API.
- [Web Widget Deployment](https://doc.rapida.ai/voice-deployment-options/web-widget.md): Embed an AI voice and chat widget on any website with a single script tag.
- [WhatsApp Deployment](https://doc.rapida.ai/voice-deployment-options/whatsapp.md): Connect your AI assistant to WhatsApp Business for automated conversational interactions at scale.
- [Create a project](https://doc.rapida.ai/workspace/create-new-project.md): Learn how to create a new project in the Rapida platform
- [Invite a User](https://doc.rapida.ai/workspace/invite-a-user.md): Learn how to invite new users to your Rapida workspace
- [Workspace Overview](https://doc.rapida.ai/workspace/overview.md): A quick guide to navigating and managing your Rapida workspace
- [Project Management](https://doc.rapida.ai/workspace/project-management.md): Understanding project structure and permissions in the Rapida platform
- [User Management](https://doc.rapida.ai/workspace/user-management.md): Understanding user roles and management in the Rapida platform