- Which LLM you call and how you call it
- Custom tool definitions and execution
- Multi-step reasoning chains (LangChain, CrewAI, AutoGen, etc.)
- State management and memory
- Any external API or database your logic needs
## How It Works

## Python Quick Start

### 1. Install the SDK

### 2. Implement your agent
Subclass `AgentKitAgent` and override the `Talk` method:
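A minimal sketch of this step. The stand-in `TalkOutput` and `AgentKitAgent` definitions below exist only so the sketch runs on its own — in real code both come from the AgentKit SDK, and the exact `Talk` signature shown here is an assumption:

```python
from dataclasses import dataclass
from typing import Iterator

# Stand-ins for the SDK types so this sketch is self-contained; the real
# classes come from the AgentKit SDK and their exact shapes are assumed.
@dataclass
class TalkOutput:
    msg_id: str
    text: str
    completed: bool

class AgentKitAgent:
    def assistant_response(self, msg_id: str, text: str, completed: bool) -> TalkOutput:
        return TalkOutput(msg_id, text, completed)

class EchoAgent(AgentKitAgent):
    """Your logic lives in Talk: call any LLM, tools, or framework you like."""

    def talk(self, msg_id: str, user_text: str) -> Iterator[TalkOutput]:
        # Stream the reply in chunks, then mark the message completed.
        yield self.assistant_response(msg_id, f"You said: {user_text}", completed=False)
        yield self.assistant_response(msg_id, "", completed=True)

chunks = list(EchoAgent().talk("m1", "hello"))
```

Because `Talk` is a generator, each yielded `TalkOutput` is streamed to the caller as soon as it is produced.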
### 3. Point your assistant to your server
In the Rapida dashboard, when configuring your assistant's LLM provider, select AgentKit and enter the address of your server (e.g. `my-server.example.com:50051`).
## Response Types
Your `Talk` method yields `TalkOutput` objects. Use the convenience methods on `AgentKitAgent`:
| Method | Use when |
|---|---|
| `assistant_response(msg_id, text, completed)` | Streaming text response chunks |
| `configuration_response(config)` | Acknowledging the initial handshake |
| `tool_call(msg_id, tool_id, name, args)` | Requesting Rapida to execute a tool |
| `tool_call_result(msg_id, tool_id, name, result, success)` | Returning a tool result |
| `transfer_call(msg_id, args)` | Transferring the call to another number |
| `terminate_call(msg_id, args)` | Ending the call programmatically |
| `error_response(code, message)` | Signalling an error |
## Handling Tool Calls
Rapida can execute tools on your behalf (knowledge retrieval, endpoint invocation) and send the results back to your server. Here's how to request a tool call and receive its result:
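A sketch of the round trip, assuming (as an illustration, not the SDK's actual types) that a `tool_call` output asks Rapida to run a tool and that the result arrives on a subsequent `Talk` turn; the `knowledge_search` tool name and all payload shapes are hypothetical:

```python
import uuid
from dataclasses import dataclass, field

# Stand-in for the SDK's TalkOutput; the real type's shape is assumed.
@dataclass
class TalkOutput:
    kind: str
    payload: dict = field(default_factory=dict)

def tool_call(msg_id: str, tool_id: str, name: str, args: dict) -> TalkOutput:
    return TalkOutput("tool_call", {"msg_id": msg_id, "tool_id": tool_id,
                                    "name": name, "args": args})

def handle_turn(msg_id: str, user_text: str, pending_result=None):
    """One Talk turn: request a lookup tool, or answer once its result arrives."""
    if pending_result is None:
        # Ask Rapida to run a (hypothetical) knowledge-retrieval tool.
        yield tool_call(msg_id, str(uuid.uuid4()), "knowledge_search",
                        {"query": user_text})
    else:
        yield TalkOutput("assistant_response",
                         {"msg_id": msg_id, "text": f"Found: {pending_result}"})

# First turn: the agent emits the tool request...
outs = list(handle_turn("m1", "store hours"))
# ...Rapida executes it and invokes Talk again carrying the result.
outs2 = list(handle_turn("m1", "store hours", pending_result="Open 9-5"))
```

The key point is that the tool request and its result span two `Talk` invocations, so any state you need between them must live in your own agent.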
## SSL / TLS Configuration

For production deployments, enable TLS:
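The server speaks gRPC, so standard gRPC TLS applies. A sketch using `grpcio`'s server credentials, assuming your agent service is hosted on a `grpc.Server` object (how the SDK exposes that server is an assumption, and the key/cert paths are placeholders):

```python
def serve_secure(server, key_path: str, cert_path: str, port: int = 50051) -> None:
    """Attach a TLS-secured port to the grpc.Server hosting your agent service."""
    import grpc  # pip install grpcio

    # Load the PEM-encoded private key and certificate chain from disk.
    with open(key_path, "rb") as f:
        private_key = f.read()
    with open(cert_path, "rb") as f:
        certificate_chain = f.read()

    credentials = grpc.ssl_server_credentials([(private_key, certificate_chain)])
    server.add_secure_port(f"[::]:{port}", credentials)

# serve_secure(server, "server.key", "server.crt")  # paths are placeholders
```

Clients then connect with matching channel credentials instead of a plaintext channel.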
## Authentication

Protect your server with a bearer token:
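A sketch of the token check itself, assuming the bearer token arrives in an `authorization` metadata entry on each incoming RPC (where you hook this in — e.g. a gRPC server interceptor — depends on the SDK and is an assumption; the `AGENT_TOKEN` variable name is hypothetical):

```python
import os
import secrets

# Hypothetical env var; load your real token from your secret store.
EXPECTED_TOKEN = os.environ.get("AGENT_TOKEN", "change-me")

def is_authorized(metadata: dict) -> bool:
    """Constant-time check of a `Bearer <token>` authorization header."""
    supplied = metadata.get("authorization", "").removeprefix("Bearer ")
    return secrets.compare_digest(supplied, EXPECTED_TOKEN)
```

Reject requests that fail the check with `error_response` (or, in a gRPC interceptor, an `UNAUTHENTICATED` status) before any agent logic runs.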
## Framework Examples

AgentKit works with any Python code. Here are some common patterns:
### LangChain ReAct agent
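The bridging pattern: build the agent executor once, then stream its steps from `Talk`. The LangChain calls are shown only in comments because their signatures vary across versions; a stub executor stands in so the sketch runs, and the tuple-based output shape is an assumption:

```python
from typing import Iterator

# Illustrative LangChain wiring (version-dependent, shown for shape only):
#
#   from langchain.agents import AgentExecutor, create_react_agent
#   executor = AgentExecutor(agent=create_react_agent(llm, tools, prompt),
#                            tools=tools)
#
# Stub standing in for AgentExecutor.stream() so this sketch is runnable:
class StubExecutor:
    def stream(self, inputs: dict) -> Iterator[dict]:
        yield {"output": f"(ReAct answer to: {inputs['input']})"}

class LangChainAgent:
    def __init__(self, executor=None):
        self.executor = executor or StubExecutor()

    def talk(self, msg_id: str, user_text: str) -> Iterator[tuple]:
        # Forward each streamed step as an assistant_response chunk.
        for step in self.executor.stream({"input": user_text}):
            if "output" in step:
                yield ("assistant_response", msg_id, step["output"], False)
        yield ("assistant_response", msg_id, "", True)

chunks = list(LangChainAgent().talk("m1", "hi"))
```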
### Anthropic Claude
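A sketch of streaming Claude from `Talk` with the `anthropic` SDK. The model name is an example, the tuple-based output shape stands in for the SDK's `TalkOutput`, and an `ANTHROPIC_API_KEY` must be set for the streaming call to work:

```python
def build_messages(user_text: str) -> list:
    """Shape a single-turn Messages API payload."""
    return [{"role": "user", "content": user_text}]

def claude_talk(msg_id: str, user_text: str):
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    with client.messages.stream(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=1024,
        messages=build_messages(user_text),
    ) as stream:
        for text in stream.text_stream:
            # Forward each text delta as a streaming assistant_response chunk.
            yield ("assistant_response", msg_id, text, False)
    yield ("assistant_response", msg_id, "", True)
```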