Model animations, sounds and image sequence animations

DeepAR supports FBX model animations. The supported animation types are:

  • Transformation animations - change of position, rotation, and scale of model nodes over time

  • Blendshape animations - change of blendshape weight over time

Animations are loaded during the FBX import process. In DeepAR Studio, the user configures how and when the loaded animations run. If multiple FBX files have been loaded using the Add Fbx feature, multiple animation controllers will be present, and the user must select the animation controller they wish to edit. The first loaded animation controller is always named RootNode; every other animation controller has the same name as the FBX file it was loaded from.

DeepAR Studio models animation sequences as a finite state machine consisting of animation states and triggers. Studio can be in only one animation state at a time; when certain conditions are met, it transitions to the next state.
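The finite state machine described above can be sketched in code. This is an illustrative model only and assumes nothing about DeepAR's internals; the names `AnimationStateMachine`, `AnimationState`, and `fire` are invented for the example.

```typescript
// Illustrative sketch of a state machine with named states and triggers.
// Not DeepAR code: all identifiers here are hypothetical.

type Trigger = string;

interface AnimationState {
  name: string;                     // unique state identifier
  loop: boolean;                    // play once vs. loop
  transitions: Map<Trigger, string>; // trigger name -> next state name
}

class AnimationStateMachine {
  private states = new Map<string, AnimationState>();
  private current: string;

  constructor(initial: AnimationState, ...rest: AnimationState[]) {
    [initial, ...rest].forEach((s) => this.states.set(s.name, s));
    this.current = initial.name; // machine starts in the first state
  }

  get currentState(): string {
    return this.current;
  }

  // Advance to the next state if the current state has a transition
  // registered for this trigger; otherwise stay in the current state.
  fire(trigger: Trigger): string {
    const next = this.states.get(this.current)?.transitions.get(trigger);
    if (next !== undefined && this.states.has(next)) {
      this.current = next;
    }
    return this.current;
  }
}

// Two states wired together: an idle loop that jumps to a one-shot
// "open" state on mouth_open, and back on mouth_closed.
const idle: AnimationState = {
  name: "idle",
  loop: true,
  transitions: new Map([["mouth_open", "open"]]),
};
const open: AnimationState = {
  name: "open",
  loop: false,
  transitions: new Map([["mouth_closed", "idle"]]),
};
const fsm = new AnimationStateMachine(idle, open);
fsm.fire("mouth_open"); // idle -> open
```

Triggers that a state has no transition for are simply ignored, which mirrors how a state only reacts to the triggers listed on it.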

Other playable items can also be loaded in Studio, attached to animation states, and controlled from the Animation state editor. The supported playable items are:

  • Sounds - loaded in the Sounds editor

  • Image sequence animations - loaded as an Animation in the Textures tab and attached to a material in the Material properties.

An animation state has the following parameters:

  • Name - the animation state identifier. It must be unique; no two states should share the same name.

  • Animation - which FBX animation plays in this state. The user can choose from all loaded animations.

  • Sound - which sound plays in this state. The user can choose from all loaded sounds.

  • Material - a material with an animated texture attached. The attached texture plays in this state.

  • Loop - if set to false, the animation plays only once; otherwise it plays in a loop.

  • Start offset - sets the animation's start time, enabling the user to play any part of the animation.

  • Duration - the duration of one animation loop. By default it equals the animation loop duration, but it can be shortened or prolonged. If it is shorter than the default loop time, the animation is cut off before it finishes; if it is longer, the next loop does not start until the duration has passed.

  • Triggers - a list of triggers that cause the animation state machine to advance to the next state.
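To see how these parameters fit together, here is one state's configuration written out as a plain object. This is purely illustrative: the field names mirror the Studio UI labels, but this is not an actual DeepAR file format or API.

```typescript
// Hypothetical shape for one animation state's parameters.
// Field names follow the Studio UI; this is not a DeepAR format.
interface AnimationStateConfig {
  name: string;            // unique state identifier
  animation?: string;      // FBX animation to play in this state
  sound?: string;          // sound to play in this state
  material?: string;       // material whose animated texture plays
  loop: boolean;           // false = play once, true = loop
  startOffsetSec: number;  // start time within the animation
  durationSec?: number;    // one loop's duration; defaults to animation length
  triggers: { trigger: string; nextState: string }[];
}

// Example: a one-shot state entered on mouth open, returning to idle
// when the mouth closes. All values are invented for illustration.
const openMouthState: AnimationStateConfig = {
  name: "mouthOpen",
  animation: "jawDrop",
  sound: "pop",
  loop: false,
  startOffsetSec: 0,
  triggers: [{ trigger: "mouth_closed", nextState: "idle" }],
};
```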

Animation triggers specify transitions between animation states. Each state can have multiple triggers. Triggers have three parameters:

  • Trigger name - the action that causes the animation state machine to advance to the next state. The following trigger types are currently supported:

      ◦ uncond - unconditional transition to the next state.

      ◦ on_end - transition to the next state when the current animation ends.

      ◦ mouth_open - transition to the next state when the mouth open gesture is detected.

      ◦ mouth_closed - transition to the next state when closing of the mouth is detected.

      ◦ mouth_open_sensitive - same as mouth_open but more sensitive; should be paired with mouth_closed_sensitive.

      ◦ mouth_closed_sensitive - same as mouth_closed but more sensitive; should be paired with mouth_open_sensitive.

      ◦ blink - transition to the next state when an eye blink is detected.

      ◦ eyebrow_raise - transition to the next state when the eyebrows raised gesture is detected.

      ◦ neutral - transition to the next state when the neutral emotion is detected.

      ◦ happiness - transition to the next state when the happiness emotion is detected.

      ◦ surprise - transition to the next state when the surprise emotion is detected.

      ◦ sadness - transition to the next state when the sadness emotion is detected.

      ◦ anger - transition to the next state when the anger emotion is detected.

      ◦ neutral_end - transition to the next state when the neutral emotion is no longer detected.

      ◦ happiness_end - transition to the next state when the happiness emotion is no longer detected.

      ◦ surprise_end - transition to the next state when the surprise emotion is no longer detected.

      ◦ sadness_end - transition to the next state when the sadness emotion is no longer detected.

      ◦ anger_end - transition to the next state when the anger emotion is no longer detected.

      ◦ custom_trigger - the user can create custom triggers for state transitions by selecting this option and entering a trigger name in the text field below. These triggers are activated through DeepAR SDK API methods in the user's host application.

  • Next animation state - defines the state to transition to when the trigger fires.

  • Face - when tracking multiple faces, face-related triggers automatically use the same face the parent object is tracking. If the trigger needs to respond to another face (e.g. the node tracks face 0 but should trigger when face 1 opens their mouth), or the parent object is positioned in screen space, the face can be set manually in this field.
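A custom trigger defined in Studio is fired from the host application through the SDK. The sketch below shows the general shape of that call; the `fireTrigger` method name and the `TriggerTarget` stub are assumptions for illustration - consult your SDK version's reference for the exact API.

```typescript
// Hedged sketch: firing a Studio-defined custom trigger from the host
// app. TriggerTarget is a stand-in interface; in a real app this would
// be the DeepAR SDK instance, and fireTrigger is an assumed method name.
interface TriggerTarget {
  fireTrigger(name: string): void;
}

function onUserTappedScreen(deepAR: TriggerTarget): void {
  // "tap_trigger" must match the custom trigger name entered in the
  // Studio text field exactly (the name itself is invented here).
  deepAR.fireTrigger("tap_trigger");
}

// Stub standing in for the SDK instance, used for illustration only:
const fired: string[] = [];
const stub: TriggerTarget = { fireTrigger: (n) => { fired.push(n); } };
onUserTappedScreen(stub);
```

The key point is the contract: the string passed from the host app and the custom trigger name configured in Studio must match for the state machine to advance.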

Animation states and triggers are defined in the Animation state editor, which can be accessed through the menu: Assets → Animation state editor.

Animation state editor
