Using Beauty AR Web with Mini Program
Preparations
For how to get started with Mini Program development, see the Weixin Mini Program documentation.
Read Overview to learn how to use the Beauty AR Web SDK.
Directions
Step 1. Configure a domain allowlist on the Mini Program backend
Because the SDK sends requests to the backend for authentication and resource loading, you need to configure a domain allowlist in the Mini Program backend after creating your Mini Program.
1. Open the Mini Program backend and go to Development > Development Management > Development Settings > Server Domain Name.
2. Click Modify, configure the following domain names, and save.
Request domain names:
https://webar.qcloud.com
https://webar-static.tencent-cloud.com
https://aegis.qq.com
The URL of your authentication signature API (`get-ar-sign`); a server-side sketch of this API follows this list
downloadFile domain name:
https://webar-static.tencent-cloud.com
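The signature API is one you host yourself. Below is a minimal sketch of what a `get-ar-sign` endpoint might look like, assuming a Node.js server with Express; the route path, port, and response shape are illustrative, and only the signature formula (SHA-256 of `timestamp + token + APPID + timestamp`, uppercased) comes from the `getSignature` sample in Step 4.

// Minimal sketch of a server-side signature endpoint (assumption: Node.js + Express).
// Only the signature formula comes from the Step 4 sample; everything else is illustrative.
const crypto = require('crypto');
const express = require('express');

const APPID = '';  // Your Tencent Cloud APPID
const TOKEN = '';  // The token from the RT-Cube console; keep it on the server only

const app = express();

app.get('/get-ar-sign', (req, res) => {
  const timestamp = Math.round(Date.now() / 1000);
  const signature = crypto
    .createHash('sha256')
    .update(`${timestamp}${TOKEN}${APPID}${timestamp}`)
    .digest('hex')
    .toUpperCase();
  res.json({ signature, timestamp });
});

app.listen(3000);

With such an endpoint in place, the Mini Program's `authFunc` can fetch the signature via `wx.request` instead of computing it locally with the token, as recommended for production in Step 4.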
Step 2. Install and build the npm package
1. Install:
npm install tencentcloud-webar
2. Build:
Open Weixin DevTools and select Tools > Build npm in the top bar.
3. Configure the path of `workers` in app.json:
"workers": "miniprogram_npm/tencentcloud-webar/worker"
Step 3. Import the files
// The import method for versions earlier than 0.3.0 (one file)
// import "../../miniprogram_npm/tencentcloud-webar/lib.js";
// The import method for v0.3.0 or later (two files, plus the optional 3D module)
import '../../miniprogram_npm/tencentcloud-webar/lib.js';
import '../../miniprogram_npm/tencentcloud-webar/core.js';
// Initialize the 3D plugin (optional)
import '../../miniprogram_npm/tencentcloud-webar/lib-3d.js';
import { plugin3d } from '../../miniprogram_npm/tencentcloud-webar/plugin-3d';
// Import `ArSdk`
import { ArSdk } from '../../miniprogram_npm/tencentcloud-webar/index.js';
Note
Because Mini Program limits a single file to 500 KB, the SDK is provided as two JS files.
Starting from v0.3.0, the SDK is further split to support 3D, and the 3D module can be loaded as needed. Before importing, check your SDK version and use the corresponding import method.
Step 4. Initialize the SDK
Note
Before initializing the SDK, you need to configure the Mini Program `APPID` in the RT-Cube console as instructed in Getting Started.
You need to insert the `camera` tag to open the camera and then set the camera parameters as detailed in Overview.
Mini Program does not support `getOutput`, so you need to pass in an onscreen WebGL canvas. The SDK will output images onto this canvas.
Sample code:
<!-- wxml -->
<!-- Open the camera and hide it by setting `position` -->
<camera
  device-position="{{'front'}}"
  frame-size="large"
  flash="off"
  resolution="medium"
  style="width: 750rpx; height: 134rpx; position: absolute; top: -9999px;"
/>
<!-- The SDK outputs the processed image to the canvas in real time. -->
<canvas
  type="webgl"
  canvas-id="main-canvas"
  id="main-canvas"
  style="width: 750rpx; height: 1334rpx;"
></canvas>
<!-- Take a photo and draw the `ImageData` object onto the canvas -->
<canvas
  type="2d"
  canvas-id="photo-canvas"
  id="photo-canvas"
  style="position: absolute; width: 720px; height: 1280px; top: -9999px; left: -9999px;"
></canvas>

// js
// `sha256` must be provided by a SHA-256 library bundled with your project (for example, js-sha256).
/** ----- Authentication configuration ----- */
/**
 * Your Tencent Cloud account's APPID
 *
 * You can view your APPID in the [Account Center](https://console.cloud.tencent.com/developer).
 */
const APPID = ''; // Enter your APPID
/**
 * Web LicenseKey
 *
 * On the [Web Licenses](https://console.cloud.tencent.com/vcube/web) page of the RT-Cube console, create a project, and a `LicenseKey` will be automatically generated.
 */
const LICENSE_KEY = ''; // Enter the license key of your project
/**
 * The token used to calculate the signature
 *
 * Note: The sample code is for demo debugging only. In the production environment, save the token and calculate the signature on the server. Provide the signature when the frontend calls an API to request it. For more information, see
 * [Signature algorithm](https://cloud.tencent.com/document/product/616/71370#.E7.AD.BE.E5.90.8D.E6.96.B9.E6.B3.95)
 */
const token = ''; // Enter your token

Component({
  data: {
    makeupList: [],
    stickerList: [],
    filterList: [],
    recording: false
  },
  methods: {
    async getCanvasNode(id) {
      return new Promise((resolve) => {
        this.createSelectorQuery()
          .select(`#${id}`)
          .node()
          .exec((res) => {
            const canvasNode = res[0].node;
            resolve(canvasNode);
          });
      });
    },
    getSignature() {
      const timestamp = Math.round(new Date().getTime() / 1000);
      const signature = sha256(timestamp + token + APPID + timestamp).toUpperCase();
      return { signature, timestamp };
    },
    // Initialize the camera type
    async initSdkCamera() {
      // Get the onscreen canvas. The SDK will output the processed image to the canvas in real time.
      const outputCanvas = await this.getCanvasNode("main-canvas");
      // Get the authentication information
      const auth = {
        licenseKey: LICENSE_KEY,
        appId: APPID,
        authFunc: this.getSignature
      };
      // Construct SDK initialization parameters
      const config = {
        auth,
        camera: {
          width: 720,
          height: 1280,
        },
        output: outputCanvas,
        // Initial beauty effects (optional)
        beautify: {
          whiten: 0.1, // The brightening effect. Value range: 0–1.
          dermabrasion: 0.3, // The smooth skin effect. Value range: 0–1.
          lift: 0, // The slim face effect. Value range: 0–1.
          shave: 0, // The V shape effect. Value range: 0–1.
          eye: 0.2, // The big eyes effect. Value range: 0–1.
          chin: 0, // The chin effect. Value range: 0–1.
        }
      };
      const ar = new ArSdk(config);
      // The list of built-in effects and filters can be obtained in the `created` callback.
      ar.on('created', () => {
        // Get the list of built-in makeup effects and stickers
        ar.getEffectList({
          Type: 'Preset'
        }).then((res) => {
          const list = res.map(item => ({
            name: item.Name,
            id: item.EffectId,
            cover: item.CoverUrl,
            url: item.Url,
            label: item.Label,
            type: item.PresetType,
          }));
          const makeupList = list.filter(item => item.label.indexOf('makeup') >= 0);
          const stickerList = list.filter(item => item.label.indexOf('sticker') >= 0);
          // Render the list of effects
          this.setData({
            makeupList,
            stickerList
          });
        }).catch((e) => {
          console.log(e);
        });
        // Built-in filters
        ar.getCommonFilter().then((res) => {
          const list = res.map(item => ({
            name: item.Name,
            id: item.EffectId,
            cover: item.CoverUrl,
            url: item.Url,
            label: item.Label,
            type: item.PresetType,
          }));
          // Render the list of filters
          this.setData({
            filterList: list
          });
        }).catch((e) => {
          console.log(e);
        });
      });
      // You can set beauty filters and effects in the `ready` callback.
      ar.on('ready', (e) => {
        this._sdkReady = true;
      });
      ar.on('error', (e) => {
        console.log(e);
      });
      this.ar = ar;
    },
    // Change the beauty effects. Make sure the SDK is ready.
    onChangeBeauty(val) {
      if (!this._sdkReady) return;
      // You can set beauty effects through `setBeautify`. Six attributes are supported. For more information, see the SDK integration guide.
      this.ar.setBeautify({
        dermabrasion: val.dermabrasion, // The smooth skin effect. Value range: 0–1.
      });
    },
    // Change the makeup style. Make sure the SDK is ready.
    onChangeMakeup(id, intensity) {
      if (!this._sdkReady) return;
      // Use `setEffect` to configure the effect. Its input parameters can be in three formats as described in the SDK integration guide.
      this.ar.setEffect([{ id, intensity }]);
    },
    // Change the sticker. Make sure the SDK is ready.
    onChangeSticker(id, intensity) {
      if (!this._sdkReady) return;
      // Use `setEffect` to configure the effect. Its input parameters can be in three formats as described in the SDK integration guide.
      this.ar.setEffect([{ id, intensity }]);
    },
    // Change the filter. Make sure the SDK is ready.
    onChangeFilter(id, intensity) {
      if (!this._sdkReady) return;
      // Use `setFilter` to configure the filter. The second parameter indicates the filter strength. Value range: 0–1.
      this.ar.setFilter(id, intensity);
    }
  }
})
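The lists stored in `data` above can be rendered in wxml and wired to the handlers on tap. Below is a minimal sketch for the filter list; the handler name `onFilterTap`, the class name, and the fixed strength value are illustrative additions, not part of the SDK.

<!-- wxml (sketch): render the filter covers and pass the effect id via dataset -->
<view class="filter-bar">
  <image
    wx:for="{{filterList}}"
    wx:key="id"
    src="{{item.cover}}"
    data-id="{{item.id}}"
    bindtap="onFilterTap"
  />
</view>

// js (sketch): read the tapped id and forward it to `onChangeFilter` defined above
onFilterTap(e) {
  const { id } = e.currentTarget.dataset;
  this.onChangeFilter(id, 0.5); // filter strength, value range 0–1
}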
Step 5. Implement the photo capture and video recording features
Sample code:
The SDK returns an object containing the width, height, and buffer data. You can draw the data onto the preset 2D canvas on your page (in the above code, the canvas whose `id` is `photo-canvas`) and export it as an image file.

async takePhoto() {
  const { uint8ArrayData, width, height } = this.ar.takePhoto(); // The `takePhoto` method returns the buffer data of the current image.
  const photoCanvasNode = await this.getCanvasNode('photo-canvas');
  photoCanvasNode.width = parseInt(width);
  photoCanvasNode.height = parseInt(height);
  const ctx = photoCanvasNode.getContext('2d');
  // Create an `ImageData` object with the data returned by the SDK
  const imageData = photoCanvasNode.createImageData(uint8ArrayData, width, height);
  // Draw the `ImageData` object onto the canvas
  ctx.putImageData(imageData, 0, 0, 0, 0, width, height);
  // Save the canvas as a local image
  wx.canvasToTempFilePath({
    canvas: photoCanvasNode,
    x: 0,
    y: 0,
    width: width,
    height: height,
    destWidth: width,
    destHeight: height,
    success: (res) => {
      // Save the photo
      wx.saveImageToPhotosAlbum({
        filePath: res.tempFilePath
      });
    }
  });
}
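To trigger the capture from the page, you can bind the method to a button. A minimal sketch (the button text is illustrative):

<!-- wxml (sketch) -->
<button bindtap="takePhoto">Take photo</button>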
Video recording: call `startRecord` to start recording and `stopRecord` to stop; `stopRecord` resolves with the temporary file path of the recorded video.

Component({
  methods: {
    // Start recording
    startRecord() {
      this.setData({
        recording: true
      });
      this.ar.startRecord();
    },
    // Stop recording
    async stopRecord() {
      const res = await this.ar.stopRecord();
      // Save the video
      wx.saveVideoToPhotosAlbum({
        filePath: res.tempFilePath
      });
      this.setData({
        recording: false
      });
    }
  }
})
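A minimal sketch of driving these methods from the page, toggling on the `recording` flag in `data` (button texts are illustrative):

<!-- wxml (sketch) -->
<button wx:if="{{!recording}}" bindtap="startRecord">Start recording</button>
<button wx:else bindtap="stopRecord">Stop recording</button>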
When the Mini Program is switched to the background or the screen is locked, call `stopRecord` to stop recording. When the page is shown again, start the SDK again.

onShow() {
  this.ar && this.ar.start();
},
onHide() {
  this.ar && this.ar.stop();
},
async onUnload() {
  try {
    this.ar && this.ar.stop();
    if (this.data.recording) {
      await this.ar.stopRecord({
        destroy: true,
      });
    }
  } catch (e) {
  }
  this.ar && this.ar.destroy();
}