Tencent Hunyuan

To use Tencent Hunyuan as the LLM engine, pass the following JSON in the LLMConfig field of the StartAIConversation API:
```json
{
  "LLMType": "openai",
  "Model": "hunyuan-2.0-thinking-20251109",
  "APIKey": "<your_hunyuan_api_key>",
  "APIUrl": "https://hunyuan.cloud.tencent.com/openai/v1/chat/completions",
  "Streaming": true,
  "SystemPrompt": "You are a helpful assistant.",
  "Timeout": 3.0,
  "History": 10,
  "MetaInfo": {}
}
```
For full Conversational AI configuration (STT, TTS, interruption handling, VAD, etc.), see the TRTC Conversational AI API Reference.
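As a minimal sketch, the configuration above can be assembled in code and serialized before being passed to StartAIConversation. This assumes the LLMConfig field accepts a JSON string, as the example on this page suggests; the field names come from this page, and the API key is a placeholder.

```python
import json

# Assemble the LLMConfig payload described above. Field names match
# this page; the API key value is a placeholder you must replace.
llm_config = {
    "LLMType": "openai",
    "Model": "hunyuan-2.0-thinking-20251109",
    "APIKey": "<your_hunyuan_api_key>",
    "APIUrl": "https://hunyuan.cloud.tencent.com/openai/v1/chat/completions",
    "Streaming": True,
    "SystemPrompt": "You are a helpful assistant.",
    "Timeout": 3.0,
    "History": 10,
    "MetaInfo": {},
}

# Serialize to the JSON string passed in the LLMConfig field.
llm_config_json = json.dumps(llm_config)
print(llm_config_json)
```

Keeping the config as a plain dict until the final serialization step makes it easy to swap models or override fields per conversation.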

Parameter reference

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| LLMType | String | Yes | Fixed value: "openai". |
| Model | String | Yes | Model name, e.g. hunyuan-2.0-thinking-20251109 or hunyuan-2.0-instruct-20251111. See Hunyuan Models. |
| APIKey | String | Yes | Your Hunyuan API key, obtained from the Tencent Cloud Hunyuan Console. |
| APIUrl | String | Yes | Hunyuan OpenAI-compatible endpoint. |
| Streaming | Boolean | No | Enable streaming responses. Default: true. |
| SystemPrompt | String | No | System instruction that guides model behavior. |
| Timeout | Float | No | Request timeout in seconds. Default: 3. |
| History | Integer | No | Number of conversation turns kept as context. Default: 0. Max: 50. |
| MetaInfo | Object | No | Custom parameters passed through in the request body. |
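Before wiring the key into StartAIConversation, it can help to smoke-test it directly against the OpenAI-compatible endpoint from the table above. The sketch below builds a standard OpenAI-style chat completion request with the Python standard library; the request schema (model/messages/stream) is the usual OpenAI chat format, which this page says the endpoint is compatible with, and the model name and key are placeholders.

```python
import json
import urllib.request

# Endpoint from the APIUrl field above.
API_URL = "https://hunyuan.cloud.tencent.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, model: str, user_text: str,
                       system_prompt: str = "You are a helpful assistant.") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the Hunyuan endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "stream": False,  # set True to mirror the Streaming flag above
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("<your_hunyuan_api_key>", "hunyuan-2.0-instruct-20251111", "Hello")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

If the key and model name are valid, the endpoint should return a normal chat completion response; an authentication error here usually means the same config will fail inside StartAIConversation as well.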
For more details on Tencent Hunyuan, see the Hunyuan documentation.