API documentation
There are several ways developers can interact with the DeepAR iOS SDK:
DeepAR class - Exposes the raw DeepAR features. Like ARView, it can be used to create a view where the rendering will occur in the UI, but more importantly, it gives access to DeepAR vision-only features, such as face and feature point detection on provided frames without rendering the results, as well as off-screen rendering.
ARView class - A UIView that wraps DeepAR features and can be positioned in the view hierarchy directly.
DeepARDelegate - A delegate used to notify the consumer app of events from DeepAR, such as when the DeepAR SDK has initialized, a screenshot has finished, a face has been detected, etc.
CameraController class - A helper class that wraps AVFoundation to handle camera-related logic such as starting the camera preview, choosing the resolution, selecting the front or back camera, and setting the video orientation.
Once you add DeepAR.framework to your project, you can explore the header files of each of the above-mentioned classes to see a list of all available methods.
Depending on your use case, the SDK can be used in several different modes:
By rendering type: on-screen and off-screen. On-screen rendering is used when we want to display the result of the DeepAR SDK to the user in real time, e.g. rendering the camera stream and displaying it somewhere in the app UI. Off-screen rendering is used when we want to process image data without necessarily showing the result right away, and real-time processing is less important, e.g. processing a pre-recorded video.
Live mode on/off - if we need to process frames and display the results in real time, we can turn live mode on, which optimizes the inner workings of the engine for performance. Live mode off is used when we do not need continuous real-time image processing, e.g. when processing a single image. With live mode off, the engine is optimized to preserve processing and memory resources.
Computer vision only mode - with this mode on, the engine does no rendering whatsoever; it only outputs computer vision data such as detected face positions, rotations, emotion estimation, etc.
Using the appropriate API methods, the user can transition to any mode they need without reinitializing the engine.
To see the DeepAR SDK in action, we suggest exploring the provided example app or the quickstart iOS apps on our GitHub pages - ObjC and/or Swift variants.
DeepAR class
Main class for interacting with the DeepAR engine. You need to create an instance of this class to interact with DeepAR. DeepAR can work in vision-only or rendering mode. Vision-only means that only the computer vision functionalities (like the FaceData of detected faces) are available and there is no rendering. Rendering mode means that the result of DeepAR processing will be rendered live in the UI. Different initialization methods are used for each mode.
Methods
(void)initialize
Starts the engine initialization in vision-only mode, meaning DeepAR will process frames to detect faces and their properties, available through the FaceData object. No rendering is available in this mode.
...
- (void)viewDidLoad {
[super viewDidLoad];
self.deepar = [[DeepAR alloc] init];
// Initialize in DeepAR Vision only mode
[self.deepar initialize];
}
(void)initializeWithWidth:(NSInteger)width height:(NSInteger)height window:(CAEAGLLayer*)window
Starts the engine initialization in rendering mode. This means users can use the rendering functionality of DeepAR, in addition to the computer vision features, to load effects in the scene, render the frames in the UI, etc. width and height define the rendering resolution, and the window parameter is a CAEAGLLayer of an existing view into which DeepAR will render the processed frames.
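As a sketch, a rendering-mode initialization might look like the following (self.renderView is a hypothetical view in your hierarchy whose backing layer class is CAEAGLLayer):

```objectivec
- (void)viewDidLoad {
    [super viewDidLoad];
    self.deepar = [[DeepAR alloc] init];
    self.deepar.delegate = self;
    // self.renderView is a hypothetical view backed by a CAEAGLLayer
    CGSize size = self.renderView.frame.size;
    [self.deepar initializeWithWidth:(NSInteger)size.width
                              height:(NSInteger)size.height
                              window:(CAEAGLLayer*)self.renderView.layer];
}
```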
(void)initializeOffscreenWithWidth:(NSInteger)width height:(NSInteger)height
Starts the engine initialization for rendering in off-screen mode. width and height define the size of the off-screen rendering buffer.
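For example, off-screen initialization at 1280x720 could look like this sketch (processed frames are then delivered via the frameAvailable delegate method once capture is started):

```objectivec
self.deepar = [[DeepAR alloc] init];
self.deepar.delegate = self;
// Render into a 1280x720 off-screen buffer instead of a UI view
[self.deepar initializeOffscreenWithWidth:1280 height:720];
```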
(void)switchToRenderingOffscreenWithWidth:(NSInteger)width height:(NSInteger)height
Starts rendering in an off-screen buffer of size width x height. Does nothing if already rendering in off-screen mode. Internally calls the startCapture method, meaning the frames will be available in the frameAvailable method as soon as they are ready.
(UIView*)createARViewWithFrame:(CGRect)frame
Starts the engine initialization in rendering mode. This means users can use the rendering functionality of DeepAR, in addition to the computer vision features, to load effects in the scene, render the frames in the UI, etc. This method returns a UIView on whose surface the frames will be rendered. Internally it uses initializeWithWidth:(NSInteger)width height:(NSInteger)height window:(CALayer*)window, which means the rendering resolution and the size of the view will match the size of the provided frame.
(UIView*)switchToRenderingToViewWithFrame:(CGRect)frame
Returns a new UIView in which DeepAR will render, with the given frame size. The user can position the returned UIView in the view hierarchy to display the results in the app UI.
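A minimal sketch of switching to rendering in the UI and placing the returned view in the hierarchy:

```objectivec
// Ask DeepAR for a view the size of the screen and show it behind other views
UIView* deeparView = [self.deepar switchToRenderingToViewWithFrame:[UIScreen mainScreen].bounds];
[self.view insertSubview:deeparView atIndex:0];
```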
(void)changeLiveMode:(BOOL)liveMode
This is an optimization method that lets the user indicate in which mode DeepAR should operate. If called with a true value, DeepAR will expect a continuous flow of new frames and will optimize its inner processes for such a workload. A typical example is processing the frames from the camera stream.
If called with false, it will optimize for preserving resources and memory by pausing the rendering after each processed frame. A typical use case is when the user needs to process just one image. In that case, the user feeds the image to DeepAR by calling processFrame or a similar method, and DeepAR processes it and stops rendering until a new frame is received. If we did this while DeepAR was in live mode, it would process the same frame over and over without ever stopping the rendering process, wasting processing time.
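For instance, processing a single still image might look like the following sketch (pixelBuffer is assumed to already hold the image as a CVPixelBufferRef):

```objectivec
// One-shot processing: optimize for memory instead of throughput
[self.deepar changeLiveMode:NO];
[self.deepar processFrame:pixelBuffer mirror:NO];
// The result arrives in the frameAvailable delegate method
```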
(void)setLicenseKey:(NSString*)key
Sets the license key for your app. The license key is generated on the DeepAR developer portal. Here are the steps to generate a license key:
Log in/sign up at developer.deepar.ai
Create a new project and, in that project, create an iOS app
In the create app dialog, enter your app name and the bundle id your app is using. The bundle id must match the one used in your app, otherwise the license check will fail. Read more about the iOS bundle id here.
Pass the newly generated license key as the parameter of this method
You must call this method before you call initialize.
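A sketch of the required call order (the key string is a placeholder for your own license key):

```objectivec
self.deepar = [[DeepAR alloc] init];
// The license key must be set before initialize is called
[self.deepar setLicenseKey:@"your_license_key_here"];
[self.deepar initialize];
```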
(void)shutdown
Shuts down the DeepAR engine. Reinitializing a new DeepAR instance that has not been properly shut down can cause crashes and memory leaks. This is usually done in the ViewController's dealloc method.
(void)pause
Pauses the rendering. This method will not release any resources and should be used only for a temporary pause (e.g. the user goes to the next screen). Use the shutdown method to stop the engine and release its resources.
(void)resume
Resumes the rendering if it was previously paused, otherwise doesn't do anything.
(BOOL)isVisionOnly
Indicates if DeepAR has been initialized in the vision-only mode or not.
(void)setRenderingResolutionWithWidth:(NSInteger)width height:(NSInteger)height
Changes the output resolution of the processed frames. Can be called any time.
(void)processFrame:(CVPixelBufferRef)imageBuffer mirror:(BOOL)mirror
Feeds a frame to DeepAR for processing. The result can be received in the frameAvailable delegate method. imageBuffer is the input image data that needs processing. mirror indicates whether the image should be flipped vertically before processing (front/back camera).
(void)processFrameAndReturn:(CVPixelBufferRef)imageBuffer outputBuffer:(CVPixelBufferRef)outputBuffer mirror:(BOOL)mirror
Feeds a frame to DeepAR for processing and outputs the result in the outputBuffer parameter. Requires frame capturing to be started (the user must call startCapture beforehand).
(void)enqueueCameraFrame:(CMSampleBufferRef)sampleBuffer mirror:(BOOL)mirror
Same functionality as processFrame, with CMSampleBufferRef as the input type for the frame data, which is more suitable when using camera frames via AVFoundation. It is advised to use this method instead of processFrame when using camera frames as input, because it uses native textures to fetch frames from the iPhone camera more efficiently. mirror indicates whether the image should be flipped vertically before processing (front/back camera).
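If you manage the camera yourself via AVFoundation rather than through CameraController, the AVCaptureVideoDataOutputSampleBufferDelegate callback is the natural place to feed frames. A sketch (self.usingFrontCamera is a hypothetical flag you would maintain):

```objectivec
- (void)captureOutput:(AVCaptureOutput*)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection*)connection {
    // Mirror frames from the front camera so the preview behaves like a mirror
    [self.deepar enqueueCameraFrame:sampleBuffer mirror:self.usingFrontCamera];
}
```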
(void)enqueueAudioSample:(CMSampleBufferRef)sampleBuffer
Passes an audio sample to the DeepAR engine. Used during video recording when the user wants to record audio too. Audio samples will be processed only if the startVideoRecording method has been called with the recordAudio parameter set to true.
(void)takeScreenshot
Produces a snapshot of the current screen preview. The resolution is equal to the dimensions with which DeepAR has been initialized. The DeepARDelegate method didTakeScreenshot will be called when the screenshot capture finishes, with a path where the image has been temporarily stored.
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight
Starts video recording of the ARView with the given outputWidth x outputHeight resolution.
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe
Starts video recording of the ARView with the given outputWidth x outputHeight resolution. The subframe parameter defines the sub-rectangle of the ARView that you want to record, in normalized coordinates (0.0 - 1.0).
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe videoCompressionProperties:(NSDictionary*)videoCompressionProperties
Starts video recording of the ARView with the given outputWidth x outputHeight resolution. The subframe parameter defines the sub-rectangle of the ARView that you want to record, in normalized coordinates (0.0 - 1.0). videoCompressionProperties is an NSDictionary used as the value for the key AVVideoCompressionPropertiesKey. Read more about video compression options in the official docs here.
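For example, a recording capped at roughly 2 Mbit/s average bitrate could be started like this sketch (AVVideoAverageBitRateKey is a standard AVFoundation compression key):

```objectivec
NSDictionary* compression = @{ AVVideoAverageBitRateKey : @(2000000) };
// Record the full view (normalized subframe (0,0)-(1,1)) at 720x1280
[self.deepar startVideoRecordingWithOutputWidth:720
                                   outputHeight:1280
                                       subframe:CGRectMake(0.0, 0.0, 1.0, 1.0)
                     videoCompressionProperties:compression];
```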
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe videoCompressionProperties:(NSDictionary*)videoCompressionProperties recordAudio:(BOOL)recordAudio
Same as the previous method, but additionally indicates that you want to record audio too. If the recordAudio parameter is set to true, the recording will wait until you call enqueueAudioSample on the ARView. When DeepAR is ready to receive audio samples, it will publish an NSNotification with the key deepar_start_audio. You can subscribe to this notification and start feeding audio samples once you receive it. If you use the provided CameraController, this is handled for you by default.
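If you feed audio samples yourself instead of using CameraController, you can wait for the deepar_start_audio notification before feeding them. A sketch (self.canFeedAudio is a hypothetical flag checked in your audio capture callback):

```objectivec
[[NSNotificationCenter defaultCenter] addObserverForName:@"deepar_start_audio"
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification* note) {
    // From this point on it is safe to call enqueueAudioSample
    self.canFeedAudio = YES;
}];
```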
(void)finishVideoRecording
Finishes the video recording. The delegate method didFinishVideoRecording will be called when the recording is done, with the temporary path of the recorded video.
(void)pauseVideoRecording
Pauses video recording if it has been started beforehand.
(void)resumeVideoRecording
Resumes video recording after it has been paused with pauseVideoRecording.
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path
Loads a DeepAR Studio file as an effect/filter in the scene. path is a string path to a file located in the app bundle or anywhere in the filesystem the app has access to. For example, one can download filters from online locations and save them in the Documents directory. A nil value for the path parameter removes the effect from the scene.
The slot specifies a namespace for the effect in the scene. In each slot, there can be only one effect. If you load another effect into the same slot, the previous one will be removed and replaced with the new effect. Example of loading 2 effects in the same scene:
[self.deepar switchEffectWithSlot:@"mask" path:@"flowers"];
[self.deepar switchEffectWithSlot:@"filter" path:@"tv80"];
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path face:(uint32_t)face
Same as the previous method, with an added face parameter indicating on which face to apply the effect. DeepAR supports tracking up to 4 faces, so valid values for this parameter are 0, 1, 2, and 3. For example, if you call this method with a face value of 2, the effect will be applied only to the third detected face in the scene. If you want to set an effect on a different face, make sure to also use a different value for the slot parameter to avoid removing the previously added effect. Example:
// apply flowers effect to the first face
[self.deepar switchEffectWithSlot:@"mask_f0" path:@"flowers" face:0];
// apply beard effect to the second face
[self.deepar switchEffectWithSlot:@"mask_f1" path:@"beard" face:1];
// replace the effect on the first face with the lion
[self.deepar switchEffectWithSlot:@"mask_f0" path:@"lion" face:0];
// remove the beard effect from the second face
[self.deepar switchEffectWithSlot:@"mask_f1" path:nil face:1];
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path face:(uint32_t)face targetGameObject:(NSString*)targetGameObject
Same as the override with the face parameter, but with an added targetGameObject, which indicates a node in the currently loaded scene/effect into which the new effect will be loaded. By default, effects are loaded in the root node object.
(void)startCaptureWithOutputWidth:(NSInteger)outputWidth outputHeight:(NSInteger)outputHeight subframe:(CGRect)subframe
By default, DeepARDelegate will not call the frameAvailable method on each newly processed frame, to save processing time and resources. If we want the processed frames to be available in the frameAvailable method of DeepARDelegate, we need to call this method first. outputWidth and outputHeight define the size of the processed frames, and subframe defines a sub-rectangle of the DeepAR rendering which will be output. This means that the output frame in frameAvailable does not need to have the same size and/or position as the rendered one.
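For example, to receive 640x480 frames covering the whole rendered area in frameAvailable, a call might look like:

```objectivec
// subframe is normalized: (0,0)-(1,1) captures the full rendering
[self.deepar startCaptureWithOutputWidth:640
                            outputHeight:480
                                subframe:CGRectMake(0.0, 0.0, 1.0, 1.0)];
```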
- (void)startCaptureWithOutputWidthAndFormat:(NSInteger)outputWidth outputHeight:(NSInteger)outputHeight subframe:(CGRect)subframe outputImageFormat:(OutputFormat)outputFormat;
Same as the previous method, but with an added OutputFormat parameter that lets the user control the pixel output format of the frameAvailable method. See the description of OutputFormat below in this document.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter floatValue:(float)value
Changes a float parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a scalar uniform on a shader or a blendshape. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter vectorValue:(Vector4)value
Changes a 4-element vector parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a uniform on a shader. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter vector3Value:(Vector3)value
Changes a 3-element vector parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a uniform on a shader. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter boolValue:(bool)value;
Changes a boolean parameter on a GameObject to the given value. parameter is the name of the parameter you want to change. The most common use case for this override is to set the enabled property of a game object. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter image:(UIImage*)image
Changes an image parameter on a game object. parameter is the name of the parameter you want to change. The most common use case for this override is to change the texture of a shader on a given game object. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString *)gameObject component:(NSString *)component parameter:(NSString *)parameter stringValue:(NSString *)value
Changes a string parameter on a game object. parameter is the name of the parameter you want to change. The most common use for this override is to change the blend mode and culling mode properties of a game object. Read this article for more details about the available parameter values.
Blend modes:
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"off" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"add" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"alpha" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"darken" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"lighten" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"multiply" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"normal" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"screen" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"blend_mode" stringValue:@"linear_burn"];
Culling modes:
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"culling_mode" stringValue:@"off"];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"culling_mode" stringValue:@"cw" ];
[self.deepar changeParameter:@"Quad" component:@"MeshRenderer" parameter:@"culling_mode" stringValue:@"ccw"];
(void)moveGameObject:(NSString*)selectedGameObjectName targetGameObjectname:(NSString*)targetGameObjectName
Moves the selected game object from its current position in the tree and sets it as a direct child of the target game object. This is equivalent to moving a node around in the node hierarchy in DeepAR Studio.
(void)stopCapture
Stops outputting frames to frameAvailable.
(void)fireTrigger:(NSString*)trigger
Fires the named trigger of an FBX animation set on the currently loaded effect. To learn more about FBX and image sequence animations in DeepAR, please read our article here.
(void)setFaceDetectionSensitivity:(int)sensitivity
This method allows the user to change the face detection sensitivity. The sensitivity parameter can range from 0 to 3, where 0 is the fastest but might not recognize smaller (further away) faces, and 3 is the slowest but will find smaller faces. By default, this parameter is set to 1.
(void)enableAudioProcessing:(BOOL)enabled
Enables or disables audio pitch processing for video recording.
(void)setAudioProcessingSemitone:(float)sts
Sets the pitch change amount. Negative values make the recorded audio lower in pitch and positive values make it higher in pitch. You must call enableAudioProcessing to enable pitch processing beforehand.
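For example, to record audio one octave lower (an octave is 12 semitones):

```objectivec
[self.deepar enableAudioProcessing:YES];
// Negative values lower the pitch; -12 semitones is one octave down
[self.deepar setAudioProcessingSemitone:-12.0f];
```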
(void)showStats:(bool)enabled
Display debugging stats on screen.
Properties
BOOL visionInitialized
Indicates if computer vision components have been initialized during the initialization process.
BOOL renderingInitialized
Indicates if DeepAR rendering components have been initialized during the initialization process.
BOOL faceVisible
Indicates if at least one face is detected in the current frame.
CGSize renderingResolution
The rendering resolution DeepAR has been initialized with.
DeepARDelegate delegate
The object which implements the DeepARDelegate protocol to listen for async events coming from DeepAR.
BOOL videoRecordingWarmupEnabled
If set to true, changes how the startVideoRecording and resumeVideoRecording methods work, allowing video recording to start immediately once warmed up. Example of video recording warmup:
- (void)viewDidLoad {
[super viewDidLoad];
self.deepAR = [[DeepAR alloc] init];
self.deepAR.videoRecordingWarmupEnabled = YES;
...
}
...
- (void)didInitialize {
// After initialization is finished call startVideoRecording with desired parameters. Only needs to be called once
//Video recording will start paused so when you call resume, it will start immediately
[self.deepAR startVideoRecordingWithOutputWidth:outputWidth outputHeight:outputHeight subframe:screenRect];
}
-(void)didFinishPreparingForVideoRecording {
// Once this is called it is safe to start a video recording with resumeVideoRecording
}
- (IBAction)startVideoRecordingPressed:(id)sender {
// Start prepared recording
[self.deepAR resumeVideoRecording];
}
- (IBAction)stopVideoRecordingPressed:(id)sender {
//Finish recording. This will automatically start preparing for the next recording session with the same parameters
[self.deepAR finishVideoRecording];
}
ARView class
ARView is a class that extends UIView with all the DeepAR features and can be positioned in the UI hierarchy. DeepAR renders the result of the processed camera (or static image) frames within this view. ARView will be deprecated in future releases because the DeepAR class has an API to create a custom view where the results are rendered. We keep it for backward compatibility.
Methods
(void)initialize
Starts the DeepAR engine initialization. ARView is initialized like any other iOS view. After instantiating the view, the user needs to call initialize. Other DeepAR methods can be safely called only once the initialization has properly finished. Successful initialization is signaled via the DeepARDelegate didInitialize method. Example of a proper ARView initialization:
- (void)viewDidLoad {
[super viewDidLoad];
// Instantiate ARView and add it to view hierarchy.
self.arview = [[ARView alloc] initWithFrame:[UIScreen mainScreen].bounds];
[self.view insertSubview:self.arview atIndex:0];
// Set delegate handler
self.arview.delegate = self;
[self.arview initialize];
}
...
- (void)didInitialize {
// Other ARView methods are safe to be invoked after this method has been called
}
(void)shutdown
Shuts down the DeepAR engine. This method should be called when the ARView's parent ViewController has been disposed of. Reinitializing a new DeepAR instance that has not been properly shut down can cause crashes and memory leaks. This is usually done in the ViewController's dealloc method, for example:
-(void)dealloc {
[self.arview shutdown];
[self.arview removeFromSuperview];
}
(void)setLicenseKey:(NSString*)key
Sets the license key for your app. The license key is generated on the DeepAR developer portal. Here are the steps to generate a license key:
Log in/sign up at developer.deepar.ai
Create a new project and, in that project, create an iOS app
In the create app dialog, enter your app name and the bundle id your app is using. The bundle id must match the one used in your app, otherwise the license check will fail. Read more about the iOS bundle id here.
Pass the newly generated license key as the parameter of this method
You must call this method before you call initialize.
(void)pause
Pauses the rendering. This method will not release any resources and should be used only for a temporary pause (e.g. the user goes to the next screen). Use the shutdown method to stop the engine and release its resources.
(void)resume
Resumes the rendering if it was previously paused, otherwise doesn't do anything.
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path
Loads a DeepAR Studio file as an effect/filter in the scene. path is a string path to a file located in the app bundle or anywhere in the filesystem the app has access to. For example, one can download filters from online locations and save them in the Documents directory. A nil value for the path parameter removes the effect from the scene.
The slot specifies a namespace for the effect in the scene. In each slot, there can be only one effect. If you load another effect into the same slot, the previous one will be removed and replaced with the new effect. Example of loading 2 effects in the same scene:
[self.arview switchEffectWithSlot:@"mask" path:@"flowers"];
[self.arview switchEffectWithSlot:@"filter" path:@"tv80"];
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path face:(uint32_t)face
Same as the previous method, with an added face parameter indicating on which face to apply the effect. DeepAR supports tracking up to 4 faces, so valid values for this parameter are 0, 1, 2, and 3. For example, if you call this method with a face value of 2, the effect will be applied only to the third detected face in the scene. If you want to set an effect on a different face, make sure to also use a different value for the slot parameter to avoid removing the previously added effect. Example:
// apply flowers effect to the first face
[self.arview switchEffectWithSlot:@"mask_f0" path:@"flowers" face:0];
// apply beard effect to the second face
[self.arview switchEffectWithSlot:@"mask_f1" path:@"beard" face:1];
// replace the effect on the first face with the lion
[self.arview switchEffectWithSlot:@"mask_f0" path:@"lion" face:0];
// remove the beard effect from the second face
[self.arview switchEffectWithSlot:@"mask_f1" path:nil face:1];
(void)switchEffectWithSlot:(NSString*)slot path:(NSString*)path face:(uint32_t)face targetGameObject:(NSString*)targetGameObject
Same as the override with the face parameter, but with an added targetGameObject, which indicates a node in the currently loaded scene/effect into which the new effect will be loaded. By default, effects are loaded in the root node object.
(void)takeScreenshot
Produces a snapshot of the current screen preview. The resolution is equal to the dimensions with which the ARView has been initialized. The DeepARDelegate method didTakeScreenshot will be called when the screenshot capture finishes, with a path where the image has been temporarily stored.
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight
Starts video recording of the ARView with the given outputWidth x outputHeight resolution.
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe
Starts video recording of the ARView with the given outputWidth x outputHeight resolution. The subframe parameter defines the sub-rectangle of the ARView that you want to record, in normalized coordinates (0.0 - 1.0).
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe videoCompressionProperties:(NSDictionary*)videoCompressionProperties
Starts video recording of the ARView with the given outputWidth x outputHeight resolution. The subframe parameter defines the sub-rectangle of the ARView that you want to record, in normalized coordinates (0.0 - 1.0). videoCompressionProperties is an NSDictionary used as the value for the key AVVideoCompressionPropertiesKey. Read more about video compression options in the official docs here.
(void)startVideoRecordingWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe videoCompressionProperties:(NSDictionary*)videoCompressionProperties recordAudio:(BOOL)recordAudio
Same as the previous method, but additionally indicates that you want to record audio too. If the recordAudio parameter is set to true, the recording will wait until you call enqueueAudioSample on the ARView. When DeepAR is ready to receive audio samples, it will publish an NSNotification with the key deepar_start_audio. You can subscribe to this notification and start feeding audio samples once you receive it. If you use the provided CameraController, this is handled for you by default.
(void)finishVideoRecording
Finishes the video recording. The delegate method didFinishVideoRecording will be called when the recording is done, with the temporary path of the recorded video.
(void)pauseVideoRecording
Pauses video recording.
(void)resumeVideoRecording
Resumes video recording after it has been paused with pauseVideoRecording.
(void)enqueueCameraFrame:(CMSampleBufferRef)sampleBuffer mirror:(BOOL)mirror
Enqueues an image frame for processing by DeepAR. If mirror is set to true, the image frame will be flipped vertically before processing (e.g. depending on whether you use the back or the front camera). The processed frame will be rendered in the ARView. Additionally, if a DeepARDelegate is set, the same frame will be available in the frameAvailable delegate method when ready (and startFrameOutputWithOutputWidth has been called).
(void)enqueueAudioSample:(CMSampleBufferRef)sampleBuffer
Passes an audio sample to the DeepAR engine. Used during video recording when the user wants to record audio too. Audio samples will be processed only if the startVideoRecording method has been called with the recordAudio parameter set to true.
(void)startFrameOutputWithOutputWidth:(int)outputWidth outputHeight:(int)outputHeight subframe:(CGRect)subframe
By default, DeepARDelegate will not call the frameAvailable method on each newly processed frame, to save processing time and resources. If we want the processed frames to be available in the frameAvailable method of DeepARDelegate, we need to call this method first on the ARView. outputWidth and outputHeight define the size of the processed frames, and subframe defines a sub-rectangle of the ARView which will be output. This means that the output frame in frameAvailable does not need to have the same size and/or position as the one rendered in the ARView.
(void)stopFrameOutput
Stops outputting frames to frameAvailable.
(void)enableAudioProcessing:(BOOL)enabled
Enables or disables audio pitch processing for video recording.
(void)setAudioProcessingSemitone:(float)sts
Sets the pitch change amount. Negative values make the recorded audio lower in pitch and positive values make it higher in pitch. You must call enableAudioProcessing to enable pitch processing beforehand.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter floatValue:(float)value
Changes a float parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a scalar uniform on a shader or a blendshape. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter vectorValue:(Vector4)value
Changes a 4-element vector parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a uniform on a shader. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter vector3Value:(Vector3)value
Changes a 3-element vector parameter on a GameObject to the given value. parameter is the name of the parameter you want to change, e.g. a uniform on a shader. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter boolValue:(bool)value;
Changes a boolean parameter on a GameObject to the given value. parameter is the name of the parameter you want to change. The most common use case for this override is to set the enabled property of a game object. For more details about the changeParameter API, read our article here.
(void)changeParameter:(NSString*)gameObject component:(NSString*)component parameter:(NSString*)parameter image:(UIImage*)image
Changes an image parameter on a game object. The parameter argument is the name of the parameter you want to change. The most common use case for this override is changing the texture of a shader on a given game object. For more details about the changeParameter API read our article here.
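The overrides above can be combined to tweak a loaded effect at runtime. A sketch, assuming self.deepAR holds an initialized DeepAR instance; the game object, component, and parameter names used here are hypothetical and must match the contents of your effect file:

```objectivec
// Assumptions: the loaded effect has a game object "Head" with a
// MeshRenderer component, a "Smile" blendshape, and shader parameters
// "u_tintColor" / "s_texColor". All of these names are hypothetical.

// Set a blendshape weight (float override).
[self.deepAR changeParameter:@"Head" component:@"MeshRenderer" parameter:@"Smile" floatValue:0.8f];

// Tint a shader color (4-element vector override); the Vector4
// field names x/y/z/w are an assumption here.
Vector4 tint;
tint.x = 1.0f; tint.y = 0.5f; tint.z = 0.5f; tint.w = 1.0f;
[self.deepAR changeParameter:@"Head" component:@"MeshRenderer" parameter:@"u_tintColor" vectorValue:tint];

// Hide a game object (bool override on the enabled property).
[self.deepAR changeParameter:@"Glasses" component:@"" parameter:@"enabled" boolValue:false];

// Swap a shader texture (image override).
[self.deepAR changeParameter:@"Head" component:@"MeshRenderer" parameter:@"s_texColor" image:[UIImage imageNamed:@"sticker"]];
```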
(void)fireTrigger:(NSString*)trigger
Fires the named trigger of an FBX animation set on the currently loaded effect. To learn more about FBX and image-sequence animations in DeepAR, please read our article here.
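For example, firing a trigger from a button handler might look like this (the trigger name "wave" is an assumption; use the name defined in your effect):

```objectivec
// Fires the "wave" trigger of the FBX animation in the loaded effect.
[self.deepAR fireTrigger:@"wave"];
```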
(void)touchStart
Notifies DeepAR that a touch has started, for the Hide on touch component of effects.
(void)touchEnd
Notifies DeepAR that a touch has ended, for the Hide on touch component of effects.
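These two calls are typically forwarded from the hosting view's touch handling; a sketch using UIResponder overrides, assuming self.deepAR holds an initialized DeepAR instance:

```objectivec
// Forward touch events to DeepAR so effects using the
// "Hide on touch" component can react.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    [self.deepAR touchStart];
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    [self.deepAR touchEnd];
}
```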
(void)setFaceDetectionSensitivity:(int)sensitivity
Changes face detection sensitivity. The sensitivity parameter ranges from 0 to 3, where 0 is the fastest but might not recognize smaller (further away) faces, and 3 is the slowest but will find smaller faces. The default is 1.
(void)showStats:(bool)enabled
Displays debugging stats on screen.
Properties
BOOL initialized
Indicates if ARView and the underlying DeepAR engine are successfully initialized. No method should be called on ARView until the initialization is fully finished.
BOOL faceVisible
Indicates if at least one face is detected in the current frame.
DeepARDelegate delegate
Set to the object which implements DeepARDelegate
protocol to listen for async events coming from DeepAR.
DeepARDelegate
DeepARDelegate
is a delegate that is used to notify events from DeepAR to the consumer of the DeepAR SDK. It is set on DeepAR
or ARView
.
Methods
(void)didInitialize
Called when the DeepAR engine initialization is complete.
(void)didTakeScreenshot:(UIImage*)screenshot
Called when DeepAR has finished taking a screenshot. The result is given as a UIImage object in the screenshot parameter.
(void)didStartVideoRecording
Called when DeepAR has started video recording (after calling startVideoRecording
method).
(void)didFinishVideoRecording:(NSString*)videoFilePath
Called when the video recording is finished and the video file is saved at videoFilePath
path.
(void)recordingFailedWithError:(NSError*)error
Called when an error has occurred during video recording. Details are provided in the error parameter.
(void)faceVisiblityDidChange:(BOOL)faceVisible
Called when DeepAR detects a new face or loses a face that has been tracked.
(void)faceTracked:(MultiFaceData)faceData
Called on each frame where at least one face data is detected.
(void)numberOfFacesVisibleChanged:(NSInteger)facesVisible
Whenever a face is detected or lost from the scene this method is called. facesVisible
represents the number of currently detected faces in the frame.
(void)didFinishShutdown
Called when DeepAR has successfully shut down after a call to the shutdown method.
(void)frameAvailable:(CMSampleBufferRef)sampleBuffer
A new processed frame is available. Make sure to call startCaptureWithOutputWidth
on DeepAR (or startFrameOutputWithOutputWidth
if you use ARView
) if you want this method to be called whenever a new frame is ready.
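A sketch of a frameAvailable: implementation that extracts the pixel buffer from the sample buffer, e.g. to pass it on to a custom video pipeline (the frame consumer is hypothetical):

```objectivec
- (void)frameAvailable:(CMSampleBufferRef)sampleBuffer {
    // Get the pixel buffer backing the processed frame.
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    // Forward the frame to e.g. a custom encoder or streaming source.
    [self.frameConsumer processPixelBuffer:pixelBuffer]; // hypothetical consumer
}
```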
(void)imageVisibilityChanged:(NSString*)gameObjectName imageVisible:(BOOL)imageVisible
DeepAR has the ability to track arbitrary images in the scene; read more about it here. This method notifies when tracked image visibility changes. gameObjectName
is the name of the game object/node in the filter file to which the image is associated.
(void)didSwitchEffect:(NSString*)slot
Called when the switchEffect
method has successfully switched given effect on a given slot
.
(void)animationTransitionedToState:(NSString*)state
Called when the conditions have been met for the animation to transition to the next state (e.g. mouth open, emotion detected, etc.).
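A minimal sketch of a view controller adopting DeepARDelegate and handling a few of the events above:

```objectivec
@interface ViewController () <DeepARDelegate>
@end

@implementation ViewController

- (void)didInitialize {
    // Safe to load effects and start processing from this point on.
    NSLog(@"DeepAR initialized");
}

- (void)didTakeScreenshot:(UIImage*)screenshot {
    // Persist the screenshot, share it, etc.
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, NULL, NULL);
}

- (void)didFinishVideoRecording:(NSString*)videoFilePath {
    NSLog(@"Video saved at %@", videoFilePath);
}

@end
```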
OutputFormat enum
Used to control the pixel output format of the frames provided by frameAvailable
delegate in the offscreen processing workflow. Default is RGBA
.
typedef enum {
Undefined, // 0
RGBA, // 1 (default)
BGRA, // 2
ARGB, // 3
ABGR, // 4
COUNT
} OutputFormat;
CameraController class
Helper class that wraps AVFoundation to handle camera-related logic like starting camera preview, choosing resolution, front or back camera, and video orientation. CameraController works with both DeepAR
and ARView
implementations, just make sure to set one or the other as a property on CameraController
instance.
Check the GitHub example for detailed usage.
Initialization example
...
self.cameraController = [[CameraController alloc] init];
self.cameraController.deepAR = self.deepAR;
// or if using ARView
// self.cameraController.arview = self.arview;
[self.cameraController startCamera];
...
Methods
(void)startCamera
Starts camera preview using AVFoundation. Checks camera permissions and asks for them if none have been given. If DeepAR is started in rendering mode, camera frames will be rendered to the ARView.
(void)stopCamera
Stops camera preview.
(void)startAudio
Starts capturing audio samples using AVFoundation. Checks microphone permissions and asks for them if none have been given. Must be called if startRecording has been called with the recordAudio parameter set to true.
(void)stopAudio
Stops capturing audio samples.
(void)checkCameraPermission
Checks camera permissions.
(void)checkMicrophonePermission
Checks microphone permissions.
Properties
DeepAR* deepAR
DeepAR instance, must be set if using DeepAR
API interface.
ARView* arview
ARView instance, must be set if using ARView
API interface.
AVCaptureDevicePosition position
Currently selected camera. Options:
AVCaptureDevicePositionBack
AVCaptureDevicePositionFront
Changing this parameter in real-time causes the preview to switch to the given camera device.
AVCaptureSessionPreset preset
Represents camera resolution currently used. Can be changed in real-time.
AVCaptureVideoOrientation videoOrientation
Represents the currently used video orientation. Should be set to the correct orientation when the device rotates.
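A sketch of adjusting these properties at runtime, assuming self.cameraController holds a started CameraController instance:

```objectivec
// Switch to the back camera; the preview updates immediately.
self.cameraController.position = AVCaptureDevicePositionBack;

// Use a 1280x720 capture resolution.
self.cameraController.preset = AVCaptureSessionPreset1280x720;

// Keep video orientation in sync with device rotation, e.g. from
// viewWillTransitionToSize:withTransitionCoordinator:.
self.cameraController.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
```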
FaceData struct
FaceData
represents data structure containing all the information available about the detected face.
float translation[3]
x,y,z translation values of the face in the scene.
float rotation[3]
x,y,z rotation values in angles of the face in the scene.
float poseMatrix[16]
Translation and rotation in matrix form.
float landmarks[68*3]
Detected face feature points in 3D space. Read more here: https://help.deepar.ai/en/articles/4351347-deepar-reference-tracking-models
float landmarks2d[68*3]
Detected face feature points in 2D screen space coordinates. Usually more precise than 3D points but no estimation for z translation. Read more here about feature points:
https://help.deepar.ai/en/articles/4351347-deepar-reference-tracking-models
float faceRect[4]
A rectangle containing the face in screen coordinates.
float emotions[5]
Estimated emotions for the face. Each emotion has a value from 0.0 to 1.0, where 1.0 means a 100% detected emotion. We differentiate 5 emotions: index 0 is neutral, index 1 is happiness, index 2 is surprise, index 3 is sadness, and index 4 is anger.
MultiFaceData
Struct containing face data for up to 4 detected faces.
FaceData faceData[4]
Array of faceData for up to 4 detected faces.
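As an illustration, a sketch of reading the strongest emotion of the first detected face inside the faceTracked: delegate callback (assumes at least one face is currently visible):

```objectivec
- (void)faceTracked:(MultiFaceData)faceData {
    FaceData face = faceData.faceData[0];
    // Emotion indices: 0 neutral, 1 happiness, 2 surprise, 3 sadness, 4 anger.
    static NSString* const kEmotions[5] = { @"neutral", @"happiness", @"surprise", @"sadness", @"anger" };
    int best = 0;
    for (int i = 1; i < 5; i++) {
        if (face.emotions[i] > face.emotions[best]) {
            best = i;
        }
    }
    NSLog(@"Dominant emotion: %@ (%.2f)", kEmotions[best], face.emotions[best]);
}
```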