iOS
Overview
This capability processes audio data and outputs blendshape data that conforms to Apple's ARKit standard (for details, see ARFaceAnchor). You can pass the data to Unity to drive your model, or use it to implement other features.
Integration
Method 1: Integrate the Tencent Effect SDK
1. Audio-to-expression conversion is built into the Tencent Effect SDK, so you can get this capability simply by integrating that SDK.
2. Download the complete edition of the Tencent Effect SDK.
3. Follow the directions in Integrating Tencent Effect SDK to integrate the SDK.
4. Import Audio2Exp.framework in the SDK into your project. Select your target, and under the General tab, find Frameworks, Libraries, and Embedded Content and set Audio2Exp.framework to Embed & Sign.
Method 2: Integrate the Audio-to-Expression SDK
If you only need audio-to-expression conversion, you can integrate the standalone Audio-to-Expression SDK (Audio2Exp.framework is about 7 MB). Import the two dynamic frameworks Audio2Exp.framework and YTCommonXMagic.framework into your project. Select your target, and under the General tab, find Frameworks, Libraries, and Embedded Content and set both Audio2Exp.framework and YTCommonXMagic.framework to Embed & Sign.
Directions
1. Set the license. For detailed directions, see Integrating Tencent Effect SDK - Step 1. Authenticate.
2. Configure the model file. Copy the model file audio2exp.bundle to your project directory. When calling initWithModelPath: of Audio2ExpApi, pass in the path of the model file.
APIs
API | Description |
+ (int)initWithModelPath:(NSString *)modelPath; | Initializes the SDK. Pass in the path of the model file. A return value of 0 indicates successful initialization. |
+ (NSArray *)parseAudio:(NSArray *)inputData; | Converts audio to blendshape data. The input audio must be mono with a sample rate of 16 kHz, passed as an array of 267 elements (267 sampling points). The output is a float array with 52 elements, which correspond to 52 blendshapes. The value range of each element is 0-1, and their order is specified by Apple: {"eyeBlinkLeft","eyeLookDownLeft","eyeLookInLeft","eyeLookOutLeft","eyeLookUpLeft","eyeSquintLeft","eyeWideLeft","eyeBlinkRight","eyeLookDownRight","eyeLookInRight","eyeLookOutRight","eyeLookUpRight","eyeSquintRight","eyeWideRight","jawForward","jawLeft","jawRight","jawOpen","mouthClose","mouthFunnel","mouthPucker","mouthRight","mouthLeft","mouthSmileLeft","mouthSmileRight","mouthFrownRight","mouthFrownLeft","mouthDimpleLeft","mouthDimpleRight","mouthStretchLeft","mouthStretchRight","mouthRollLower","mouthRollUpper","mouthShrugLower","mouthShrugUpper","mouthPressLeft","mouthPressRight","mouthLowerDownLeft","mouthLowerDownRight","mouthUpperUpLeft","mouthUpperUpRight","browDownLeft","browDownRight","browInnerUp","browOuterUpLeft","browOuterUpRight","cheekPuff","cheekSquintLeft","cheekSquintRight","noseSneerLeft","noseSneerRight","tongueOut"} |
+ (int)releaseSdk; | Releases resources. Call this API when you no longer need the capability. |
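The table above implies a fixed frame size: each call to parseAudio: consumes exactly 267 mono samples at 16 kHz (roughly 16.7 ms of audio). Below is a minimal sketch of slicing a longer recording into such frames; the method name, the sample buffer, and the handling of the trailing partial frame are illustrative assumptions, not part of the SDK.

```objectivec
// Hypothetical helper: split a mono 16 kHz recording into 267-sample
// frames and run each frame through the Audio-to-Expression SDK.
static const NSUInteger kFrameLength = 267; // samples per parseAudio: call

- (void)processSamples:(const float *)samples count:(NSUInteger)count {
    // Drop the trailing partial frame here; a real implementation would
    // buffer it until the next batch of samples arrives.
    for (NSUInteger offset = 0; offset + kFrameLength <= count; offset += kFrameLength) {
        NSMutableArray<NSNumber *> *frame = [NSMutableArray arrayWithCapacity:kFrameLength];
        for (NSUInteger i = 0; i < kFrameLength; i++) {
            [frame addObject:@(samples[offset + i])];
        }
        // 52 floats in ARKit blendshape order, each in the range 0-1
        NSArray *emotionArray = [Audio2ExpApi parseAudio:frame];
        [self.beautyKit updateAvatarByExpression:emotionArray];
    }
}
```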
Integration Code Sample
// Initialize the Audio-to-Expression SDK
NSString *path = [[NSBundle mainBundle] pathForResource:@"audio2exp" ofType:@"bundle"];
int ret = [Audio2ExpApi initWithModelPath:path];
// Convert audio to blendshape data
NSArray *emotionArray = [Audio2ExpApi parseAudio:floatArr];
// Release the SDK
[Audio2ExpApi releaseSdk];

// Use with the Tencent Effect SDK
// Initialize the SDK
self.beautyKit = [[XMagic alloc] initWithRenderSize:previewSize assetsDict:assetsDict];
// Load the avatar materials
[self.beautyKit loadAvatar:bundlePath exportedAvatar:nil completion:nil];
// Pass the blendshape data to the SDK, and the effects will be applied
[self.beautyKit updateAvatarByExpression:emotionArray];
Note:
For audio recording, see TXCAudioRecorder. For more information on using the APIs, see VoiceViewController and the related classes.
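Recorders such as the one referenced above typically deliver 16-bit integer PCM, while parseAudio: takes an NSArray of floats. The sketch below shows one way to bridge the two; whether the SDK expects samples normalized to [-1, 1] is an assumption, so verify it against VoiceViewController in the demo project.

```objectivec
// Hypothetical conversion from 16-bit mono PCM (as delivered by a typical
// recorder callback) to the float array that parseAudio: consumes.
- (NSArray<NSNumber *> *)floatArrayFromPCM:(NSData *)pcmData {
    const int16_t *pcm = (const int16_t *)pcmData.bytes;
    NSUInteger sampleCount = pcmData.length / sizeof(int16_t);
    NSMutableArray<NSNumber *> *floatArr = [NSMutableArray arrayWithCapacity:sampleCount];
    for (NSUInteger i = 0; i < sampleCount; i++) {
        // Scale int16 [-32768, 32767] to [-1, 1] (assumed input range)
        [floatArr addObject:@(pcm[i] / 32768.0f)];
    }
    return floatArr;
}
```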