Custom Audio Capturing and Playback

Flutter

This document describes how to use the TRTC Flutter SDK to acquire raw audio data (custom audio capturing).

Acquiring raw audio data

The TRTC Flutter SDK provides two ways to acquire raw audio data:
1. Native access.
2. Direct use of the Flutter Dart interface.
Because passing high-frequency, large raw audio buffers from the native layer to the Dart layer is relatively expensive, we recommend using native access to acquire raw audio data.

1. Native access

You can walk through the full integration process and see the result in the demo.
1.1 Register an audio frame listener at the native layer to acquire raw audio data.
Java
// Add these to MainActivity (see Step 2 below).
void enableTRTCAudioFrameDelegate(MethodChannel.Result result) {
    TRTCCloud.sharedInstance(getApplicationContext()).setAudioFrameListener(new AudioFrameListener());
    result.success("");
}

void disableTRTCAudioFrameDelegate(MethodChannel.Result result) {
    TRTCCloud.sharedInstance(getApplicationContext()).setAudioFrameListener(null);
    result.success("");
}

class AudioFrameListener implements TRTCCloudListener.TRTCAudioFrameListener {
    @Override
    public void onCapturedAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame) {
        // TODO: process the locally captured raw audio (PCM) frame here.
    }

    @Override
    public void onLocalProcessedAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame) {
        // TODO: process the locally captured and pre-processed audio frame here.
    }

    @Override
    public void onRemoteUserAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame, String userId) {
        // TODO: process the raw audio frame of the remote user identified by userId.
    }

    @Override
    public void onMixedPlayAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame) {
        // TODO: process the mixed audio frame about to be played back.
    }

    @Override
    public void onMixedAllAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame) {
        // TODO: process the mixed frame of all captured and playback audio.
    }

    @Override
    public void onVoiceEarMonitorAudioFrame(TRTCCloudDef.TRTCAudioFrame trtcAudioFrame) {
        // TODO: process the in-ear monitoring audio frame.
    }
}
Swift
// Add these to AppDelegate (see Step 2 below).
let listener = AudioFrameProcessListener()

func enableTRTCAudioFrameDelegate(result: @escaping FlutterResult) {
    TRTCCloud.sharedInstance().setAudioFrameDelegate(listener)
    result(nil)
}

func disableTRTCAudioFrameDelegate(result: @escaping FlutterResult) {
    TRTCCloud.sharedInstance().setAudioFrameDelegate(nil)
    result(nil)
}

class AudioFrameProcessListener: NSObject, TRTCAudioFrameDelegate {
    func onCapturedAudioFrame(_ frame: TRTCAudioFrame) {
        // TODO: process the locally captured raw audio (PCM) frame here.
    }

    func onLocalProcessedAudioFrame(_ frame: TRTCAudioFrame) {
        // TODO: process the locally captured and pre-processed audio frame here.
    }

    func onRemoteUserAudioFrame(_ frame: TRTCAudioFrame, userId: String) {
        // TODO: process the raw audio frame of the remote user identified by userId.
    }

    func onMixedAllAudioFrame(_ frame: TRTCAudioFrame) {
        // TODO: process the mixed frame of all captured and playback audio.
    }

    func onMixedPlayAudioFrame(_ frame: TRTCAudioFrame) {
        // TODO: process the mixed audio frame about to be played back.
    }

    func onVoiceEarMonitorAudioFrame(_ frame: TRTCAudioFrame) {
        // TODO: process the in-ear monitoring audio frame.
    }
}
1.2 Use a MethodChannel to start and stop the acquisition of raw audio data.
Step 1: At the Dart layer, implement the interface that starts/stops acquiring raw audio data.
import 'package:flutter/services.dart';

final channel = MethodChannel('TRCT_FLUTTER_EXAMPLE');

void enableAudioFrame() async {
  await channel.invokeMethod('enableTRTCAudioFrameDelegate');
}

void disableAudioFrame() async {
  await channel.invokeMethod('disableTRTCAudioFrameDelegate');
}
Step 2: At the native layer, implement the handlers that start/stop acquiring raw audio data.
Java
import androidx.annotation.NonNull;
import io.flutter.embedding.android.FlutterActivity;
import io.flutter.embedding.engine.FlutterEngine;
import io.flutter.plugin.common.MethodChannel;

public class MainActivity extends FlutterActivity {
    private static final String channelName = "TRCT_FLUTTER_EXAMPLE";
    private MethodChannel channel;

    @Override
    public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) {
        super.configureFlutterEngine(flutterEngine);
        channel = new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), channelName);
        channel.setMethodCallHandler((call, result) -> {
            switch (call.method) {
                case "enableTRTCAudioFrameDelegate":
                    enableTRTCAudioFrameDelegate(result);
                    break;
                case "disableTRTCAudioFrameDelegate":
                    disableTRTCAudioFrameDelegate(result);
                    break;
                default:
                    result.notImplemented();
                    break;
            }
        });
    }
}
Swift
import UIKit
import Flutter

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
    var channel: FlutterMethodChannel?

    override func application(_ application: UIApplication,
                              didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        GeneratedPluginRegistrant.register(with: self)
        guard let controller = window?.rootViewController as? FlutterViewController else {
            fatalError("Invalid root view controller")
        }
        channel = FlutterMethodChannel(name: "TRCT_FLUTTER_EXAMPLE", binaryMessenger: controller.binaryMessenger)
        channel?.setMethodCallHandler({ [weak self] call, result in
            guard let self = self else { return }
            switch call.method {
            case "enableTRTCAudioFrameDelegate":
                self.enableTRTCAudioFrameDelegate(result: result)
            case "disableTRTCAudioFrameDelegate":
                self.disableTRTCAudioFrameDelegate(result: result)
            default:
                result(FlutterMethodNotImplemented)
            }
        })
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }
}
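
With the native handlers in place, the Dart side from Step 1 can be tied to a page's lifecycle. The sketch below is illustrative only: AudioCapturePage is a hypothetical widget (not part of the SDK) that invokes the same channel methods as the enableAudioFrame/disableAudioFrame helpers defined in Step 1.

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

/// Illustrative sketch: start native raw-audio acquisition when the page
/// appears and stop it when the page is disposed, via the MethodChannel
/// registered in Step 2.
class AudioCapturePage extends StatefulWidget {
  const AudioCapturePage({super.key});

  @override
  State<AudioCapturePage> createState() => _AudioCapturePageState();
}

class _AudioCapturePageState extends State<AudioCapturePage> {
  static const MethodChannel _channel = MethodChannel('TRCT_FLUTTER_EXAMPLE');

  @override
  void initState() {
    super.initState();
    // Ask the native layer to register the TRTC audio frame listener.
    _channel.invokeMethod('enableTRTCAudioFrameDelegate');
  }

  @override
  void dispose() {
    // Unregister the listener so raw audio frames stop arriving.
    _channel.invokeMethod('disableTRTCAudioFrameDelegate');
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return const Scaffold(
      body: Center(child: Text('Acquiring raw audio at the native layer')),
    );
  }
}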

2. Access through the Flutter Dart interface

Currently, the Flutter Dart interface supports only the onCapturedAudioFrame callback. Use it as follows:
TRTCCloud trtcCloud = (await TRTCCloud.sharedInstance())!;

// Start acquiring raw audio data
final audioFrameListener = TRTCAudioFrameListener(
  onCapturedAudioFrame: (audioFrame) {
    // TODO: process the locally captured raw audio frame here.
  },
);
trtcCloud.setAudioFrameListener(audioFrameListener);

// Stop acquiring raw audio data
trtcCloud.setAudioFrameListener(null);
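
For reference, here is a minimal sketch of the Dart-only approach, assuming the TRTCCloud, TRTCAudioFrameListener, and setAudioFrameListener APIs shown above. The import paths, and the fields exposed on the frame object, may vary by SDK version, so treat this as a starting point rather than a drop-in implementation; startDartAudioCapture/stopDartAudioCapture are hypothetical helper names.

import 'package:flutter/foundation.dart';
// Adjust these imports to match your version of the TRTC Flutter SDK.
import 'package:tencent_trtc_cloud/trtc_cloud.dart';
import 'package:tencent_trtc_cloud/trtc_cloud_listener.dart';

int _capturedFrameCount = 0;

/// Registers a Dart-layer listener for locally captured raw audio frames.
Future<void> startDartAudioCapture() async {
  final TRTCCloud trtcCloud = (await TRTCCloud.sharedInstance())!;
  trtcCloud.setAudioFrameListener(
    TRTCAudioFrameListener(
      onCapturedAudioFrame: (audioFrame) {
        // Keep this callback lightweight: it fires for every captured frame,
        // and heavy work here can degrade audio performance.
        _capturedFrameCount++;
        if (_capturedFrameCount % 100 == 0) {
          debugPrint('Received $_capturedFrameCount captured audio frames');
        }
      },
    ),
  );
}

/// Stops receiving raw audio frames at the Dart layer.
Future<void> stopDartAudioCapture() async {
  final TRTCCloud trtcCloud = (await TRTCCloud.sharedInstance())!;
  trtcCloud.setAudioFrameListener(null);
}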