
Gemini

Google's Gemini models combine large context windows with fast inference, making them well-suited for conversations that require processing long histories or complex instructions. Gemini 3.1 Pro in particular offers an excellent balance of speed and capability, making it a solid option if you value cost-efficiency with strong multilingual performance.

Usage

To use Gemini as the LLM engine, pass the following JSON in the LLMConfig field of the StartAIConversation API. Gemini uses the OpenAI-compatible protocol:
```json
{
  "LLMType": "openai",
  "Model": "gemini-3.1-pro-preview",
  "APIKey": "<your_gemini_api_key>",
  "APIUrl": "https://generativelanguage.googleapis.com/v1beta/chat/completions",
  "Streaming": true,
  "SystemPrompt": "",
  "Timeout": 3.0,
  "History": 5,
  "MetaInfo": {}
}
```
For full Conversational AI configuration (STT, TTS, interruption handling, VAD, etc.), see the TRTC Conversational AI API Reference.
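As a sketch, the LLMConfig JSON above can be assembled programmatically and serialized before being passed to StartAIConversation. The helper below is illustrative only: the StartAIConversation call itself is SDK-specific and not shown, and the default values mirror the example configuration above.

```python
import json

# Build the LLMConfig payload shown above. This only prepares the JSON
# string that would be passed in the LLMConfig field of the (SDK-specific)
# StartAIConversation call; it does not perform the call itself.
def build_llm_config(api_key: str,
                     model: str = "gemini-3.1-pro-preview",
                     system_prompt: str = "",
                     history: int = 5) -> str:
    config = {
        "LLMType": "openai",  # fixed value: Gemini uses the OpenAI-compatible protocol
        "Model": model,
        "APIKey": api_key,
        "APIUrl": "https://generativelanguage.googleapis.com/v1beta/chat/completions",
        "Streaming": True,
        "SystemPrompt": system_prompt,
        "Timeout": 3.0,
        "History": history,
        "MetaInfo": {},
    }
    return json.dumps(config)

llm_config = build_llm_config(api_key="<your_gemini_api_key>")
```

Serializing the config once, rather than hand-editing a JSON string per call, keeps the fixed fields (LLMType, APIUrl) consistent while letting the model, prompt, and history vary per conversation.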

Parameter reference

| Field | Type | Required | Description |
|---|---|---|---|
| LLMType | String | Yes | Fixed value: "openai" (Gemini uses the OpenAI-compatible protocol). |
| Model | String | Yes | Model name, e.g. gemini-3.1-pro-preview or gemini-3-flash-preview. See Gemini Models. |
| APIKey | String | Yes | Your Gemini API key from Google AI Studio. |
| APIUrl | String | Yes | Gemini OpenAI-compatible chat completions endpoint. |
| Streaming | Boolean | No | Enable streaming responses. Default: true. |
| SystemPrompt | String | No | System instruction to guide model behavior. |
| Timeout | Float | No | Request timeout in seconds. Default: 3. |
| History | Integer | No | Number of conversation turns kept for context. Default: 0. Max: 50. |
| MetaInfo | Object | No | Custom parameters passed through in the request body. |
For more details on Gemini models and API, see the Google Gemini documentation.