# Gemini

Google's Gemini models combine large context windows with fast inference, making them well suited for conversations that involve long histories or complex instructions. Gemini 3.1 Pro in particular offers an excellent balance of speed and capability, and is a solid option if you value cost efficiency with strong multilingual performance.
## Usage
To use Gemini as the LLM engine, pass the following JSON in the `LLMConfig` field of the `StartAIConversation` API. Gemini uses the OpenAI-compatible protocol:

```json
{
  "LLMType": "openai",
  "Model": "gemini-3.1-pro-preview",
  "APIKey": "<your_gemini_api_key>",
  "APIUrl": "https://generativelanguage.googleapis.com/v1beta/chat/completions",
  "Streaming": true,
  "SystemPrompt": "",
  "Timeout": 3.0,
  "History": 5,
  "MetaInfo": {}
}
```
For full Conversational AI configuration (STT, TTS, interruption handling, VAD, etc.), see the TRTC Conversational AI API Reference.
## Parameter reference
| Field | Type | Required | Description |
|---|---|---|---|
| LLMType | String | Yes | Fixed value: `"openai"` (Gemini uses the OpenAI-compatible protocol). |
| Model | String | Yes | Model name, e.g. `"gemini-3.1-pro-preview"`. |
| APIKey | String | Yes | Your Gemini API key. |
| APIUrl | String | Yes | Gemini's OpenAI-compatible chat completions endpoint. |
| Streaming | Boolean | No | Enable streaming responses. Default: `true`. |
| SystemPrompt | String | No | System instruction that guides model behavior. |
| Timeout | Float | No | Request timeout in seconds. Default: 3. |
| History | Integer | No | Number of conversation turns kept as context. Default: 0. Max: 50. |
| MetaInfo | Object | No | Custom parameters passed through in the request body. |
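Since the service rejects malformed configs at request time, it can help to check the constraints from the table above client-side. The sketch below validates the required fields and the `History` range (0–50); `validate_llm_config` is a hypothetical helper, not part of the TRTC SDK.

```python
import json

# Constraints taken from the parameter reference table.
REQUIRED_FIELDS = {"LLMType", "Model", "APIKey", "APIUrl"}
DEFAULTS = {"Streaming": True, "Timeout": 3.0, "History": 0, "MetaInfo": {}}

def validate_llm_config(raw: str) -> dict:
    """Parse an LLMConfig JSON string, apply defaults, and check constraints."""
    cfg = json.loads(raw)
    missing = REQUIRED_FIELDS - cfg.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    merged = {**DEFAULTS, **cfg}
    if not 0 <= merged["History"] <= 50:
        raise ValueError("History must be between 0 and 50")
    return merged

# Usage: optional fields fall back to their documented defaults.
checked = validate_llm_config(json.dumps({
    "LLMType": "openai",
    "Model": "gemini-3.1-pro-preview",
    "APIKey": "<your_gemini_api_key>",
    "APIUrl": "https://generativelanguage.googleapis.com/v1beta/chat/completions",
}))
```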