Vocal Synchronization

Introduction to vocal and song synchronization

Because of the jitter buffer used for local vocal capture, the jitter buffer used when mixing song playback, and the inherent gap between the moment the accompaniment reaches the singer's ear and the moment the singer actually sings, remote audiences perceive a delay between the BGM, the vocals, and the lyrics when the singer performs along with them. The chorus scheme addresses this by using low-latency AAudio capture inside the TRTC SDK. Specifically, you only need to enable chorus mode and low-latency mode after entering the room.

Specific code implementation

Enable chorus mode

// The main instance (vocal instance) enables chorus mode (reducing buffer interval and audio redundancy protection)
mTRTCCloud.callExperimentalAPI("{\"api\":\"enableChorus\",\"params\":{\"enable\":1,\"audioSource\":0}}");
// The sub-instance (accompaniment instance) enables chorus mode (reducing buffer interval and audio redundancy protection)
subCloud.callExperimentalAPI("{\"api\":\"enableChorus\",\"params\":{\"enable\":1,\"audioSource\":1}}");
Note:
The audioSource parameter of the experimental API enableChorus specifies which audio source the instance carries:
audioSource: 0 (vocals).
audioSource: 1 (accompaniment).
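Because callExperimentalAPI takes its parameters as an escaped JSON string, the payload is easy to get wrong by hand. The following is a minimal sketch of a helper that builds the enableChorus payload; the class and method names are illustrative and not part of the TRTC SDK:

```java
// Illustrative helper (not part of the TRTC SDK): builds the JSON payload
// expected by callExperimentalAPI for the "enableChorus" experimental API.
public class ChorusParams {
    // enable: whether to turn chorus mode on
    // audioSource: 0 = vocals (main instance), 1 = accompaniment (sub-instance)
    public static String buildEnableChorus(boolean enable, int audioSource) {
        return String.format(
            "{\"api\":\"enableChorus\",\"params\":{\"enable\":%d,\"audioSource\":%d}}",
            enable ? 1 : 0, audioSource);
    }
}
```

With such a helper, the main-instance call above could be written as mTRTCCloud.callExperimentalAPI(ChorusParams.buildEnableChorus(true, 0)).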

Enable low-latency mode (high-performance audio AAudio)

// The main instance (vocal instance) enables high-performance audio AAudio
mTRTCCloud.callExperimentalAPI("{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":1}}");
// The sub-instance (accompaniment instance) enables high-performance audio AAudio
subCloud.callExperimentalAPI("{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":1}}");
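The same payload-building approach can be applied to the low-latency switch. A minimal sketch, with an illustrative helper name that is not part of the TRTC SDK:

```java
// Illustrative helper (not part of the TRTC SDK): builds the JSON payload
// for the "setLowLatencyModeEnabled" experimental API, which switches the
// instance to high-performance AAudio capture/playback.
public class LowLatencyParams {
    public static String buildSetLowLatencyMode(boolean enable) {
        return String.format(
            "{\"api\":\"setLowLatencyModeEnabled\",\"params\":{\"enable\":%d}}",
            enable ? 1 : 0);
    }
}
```

Both the main instance and the sub-instance would then pass the same string, e.g. mTRTCCloud.callExperimentalAPI(LowLatencyParams.buildSetLowLatencyMode(true)). Note that AAudio is Android's native high-performance audio API and is available on Android 8.0 (API level 26) and later.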