On Version 2.6.0 and earlier versions, after the callback is configured successfully, the SDK sends a facial data callback for each video frame.
- (void)onYTDataEvent:(id _Nonnull)event;
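For reference, a minimal implementation of this delegate method could simply inspect and log the incoming event. The payload layout on these older versions is not documented in this section, so the NSDictionary cast below is an assumption.
// Minimal sketch for Version 2.6.0 and earlier; the exact payload keys are not
// documented here, so treat the NSDictionary cast as an assumption.
- (void)onYTDataEvent:(id _Nonnull)event {
    if ([event isKindOfClass:[NSDictionary class]]) {
        NSLog(@"onYTDataEvent: %@", (NSDictionary *)event);
    }
}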
On Version 3.0.0, after the callback is configured successfully, the SDK sends a facial data callback for each video frame.
- (void)onAIEvent:(id _Nonnull)event;
// The onAIEvent callback function receives the facial data.
NSDictionary *eventDict = (NSDictionary *)event;
if (eventDict[@"ai_info"] != nil) {
    NSLog(@"ai_info %@", eventDict[@"ai_info"]);
}
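To go beyond logging, the ai_info payload can be deserialized into a dictionary. The sketch below assumes ai_info arrives as a JSON string; if your SDK version already returns a dictionary, the NSJSONSerialization step can be skipped. The field names used here (trace_id, out_of_screen, face_256_point) come from the table below.
// Sketch: parse the ai_info payload (assumption: it is delivered as a JSON string).
NSString *aiInfo = (NSString *)eventDict[@"ai_info"];
NSData *jsonData = [aiInfo dataUsingEncoding:NSUTF8StringEncoding];
NSError *error = nil;
NSDictionary *faceData = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
if (error == nil && [faceData isKindOfClass:[NSDictionary class]]) {
    NSNumber *traceId = faceData[@"trace_id"];
    NSNumber *outOfScreen = faceData[@"out_of_screen"];
    NSArray *points = faceData[@"face_256_point"];   // 512 values: two coordinates per keypoint
    NSLog(@"trace_id=%@ out_of_screen=%@ point count=%lu", traceId, outOfScreen, (unsigned long)points.count);
}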
The data returned is in JSON format and includes the following fields (for details about the 256 facial keypoints, see the illustration above):
Field descriptions:
| Field | Type | Value Range | Description |
| :---- | :---- |:---- | :---- |
| trace_id | int | [1,INF) | The face ID. Faces obtained consecutively from a video stream with the same face ID belong to the same person. |
| face_256_point | float | [0,screenWidth] or [0,screenHeight] | 512 values in total for the 256 facial keypoints (an x and a y coordinate per point). (0,0) is the top-left corner of the screen. |
| face_256_visible | float | [0,1] | Visibility of the 256 facial keypoints. |
| out_of_screen | bool | true/false | Whether the face is outside of the screen view. |
| left_eye_high_vis_ratio | float | [0,1] | The percentage of keypoints with high visibility for the left eye. |
| right_eye_high_vis_ratio | float | [0,1] | The percentage of keypoints with high visibility for the right eye. |
| left_eyebrow_high_vis_ratio | float | [0,1] | The percentage of keypoints with high visibility for the left eyebrow. |
| right_eyebrow_high_vis_ratio | float | [0,1] | The percentage of keypoints with high visibility for the right eyebrow. |
| mouth_high_vis_ratio | float | [0,1] | The percentage of keypoints with high visibility for the mouth. |
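As an illustration of consuming these fields, the sketch below converts the keypoints into CGPoint values for an on-screen overlay, dropping points whose face_256_visible score falls below an arbitrary threshold. It assumes UIKit is imported, faceData is the dictionary parsed from ai_info above, and face_256_point is interleaved as x, y pairs in screen coordinates.
// Sketch: convert face_256_point into CGPoint values for an overlay view.
// Assumptions: the array is interleaved as x0, y0, x1, y1, ... in screen coordinates,
// and faceData is the dictionary parsed from ai_info in the earlier snippet.
NSArray<NSNumber *> *points = faceData[@"face_256_point"];
NSArray<NSNumber *> *visibility = faceData[@"face_256_visible"];
NSMutableArray<NSValue *> *keypoints = [NSMutableArray arrayWithCapacity:256];
for (NSUInteger i = 0; i + 1 < points.count; i += 2) {
    NSUInteger pointIndex = i / 2;
    float vis = (pointIndex < visibility.count) ? visibility[pointIndex].floatValue : 0.0f;
    if (vis < 0.5f) { continue; }  // illustrative threshold, not an SDK constant
    CGPoint p = CGPointMake(points[i].floatValue, points[i + 1].floatValue);
    [keypoints addObject:[NSValue valueWithCGPoint:p]];
}
NSLog(@"visible keypoints: %lu", (unsigned long)keypoints.count);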