An assistant consists of a character and a conversation. The character defines how the assistant looks, and the conversation defines how users interact with it.
A simple character creator is available to make it easy to customize one of our existing characters. You can also build and upload your own character with software like Blender or Maya.
| Setting | Description | Default |
|---|---|---|
| Source | Source of the character files. | Character Creator |
| File(s) | Required if Source is set to Uploaded. Must be an FBX, GLTF, or GLB file. Include any supporting texture or bin files. | |
| Voice | Voice of the character. | English (US), Joanna |
| Scale | Default size of the character. | 1 |
| Talk Animation Enabled | Set to true if viseme blendshapes are defined for the character. | true |
| Viseme Blendshapes | The blendshapes corresponding to each viseme. | |
| Blink Morph Animation Enabled | Set to true if a blink blendshape is defined for the character. | true |
| Blink Blendshape | The blendshape corresponding to an eye blink. | Blink |
| Blink Duration (ms) | The amount of time, in milliseconds, for the eye blink animation to complete. | |
| Blink Animation Random Min Timeout | The minimum amount of time, in milliseconds, to wait before running the animation. | 1000 |
| Blink Animation Random Max Timeout | The maximum amount of time, in milliseconds, to wait before running the animation. | 5000 |
| ‘TBD’ Animation Enabled | Each animation defined in the character files can be enabled. | false |
| ‘TBD’ Animation Repeat | Set to Continuous to play the animation on a continuous loop; it stops only when an on-request animation is played. Set to Random to play the animation at random intervals. Set to On Request to trigger the animation from a conversation message. | On Request |
| ‘TBD’ Animation Speed | Multiplies the animation timescale by this value to make the animation play faster or slower. | 1 |
| ‘TBD’ Animation Random Min Timeout | If Repeat is set to Random, the minimum amount of time, in milliseconds, to wait before running the animation. | 1000 |
| ‘TBD’ Animation Random Max Timeout | If Repeat is set to Random, the maximum amount of time, in milliseconds, to wait before running the animation. | 5000 |
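The Random repeat timing and speed settings above can be sketched as follows. This is an illustrative sketch of the described behavior, not the Hootsy SDK's actual implementation; all function names here are invented.

```typescript
// Sketch of Random repeat timing: a delay drawn uniformly between the
// min and max timeouts, plus the animation speed multiplier. Names are
// illustrative, not the actual Hootsy SDK API.

/** Pick a delay between minMs and maxMs, given a [0, 1) random source. */
function randomDelay(minMs: number, maxMs: number, rand: () => number = Math.random): number {
  return minMs + Math.floor(rand() * (maxMs - minMs));
}

/** Effective clip duration after applying the animation speed multiplier. */
function playbackDuration(baseMs: number, speed: number): number {
  return baseMs / speed; // speed 2 plays twice as fast, 0.5 at half speed
}

/** Play an animation repeatedly, waiting a random delay between runs.
 *  Returns a function that stops the loop. */
function scheduleRandomRepeat(play: () => void, minMs: number, maxMs: number): () => void {
  let timer: ReturnType<typeof setTimeout>;
  const tick = () => {
    play();
    timer = setTimeout(tick, randomDelay(minMs, maxMs));
  };
  timer = setTimeout(tick, randomDelay(minMs, maxMs));
  return () => clearTimeout(timer);
}
```

With the defaults above, each wait falls between 1000 ms and 5000 ms.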
See the Conversations Overview for details on creating conversations.
| Setting | Description | Default |
|---|---|---|
| Source | Source of the conversation. If set to Custom or Dialogflow, refer to the Conversations Overview for more details. | Prebuilt |
| Prebuilt Conversation | Uses a predefined, uneditable conversation. | |
| Dialogflow Access Token | Required if Source is set to Dialogflow. The Dialogflow V1 API client access token defined for your agent on Dialogflow.com. | |
| Webhook | Required for custom integrations. | |
| Verify Token | Required for custom integrations. Token used in your conversation code to authorize communication. | |
| Access Token | Your auto-generated API access token, used for custom integrations. | |
| Get Started Message | If defined, this message is sent the first time the assistant starts listening, so the assistant initiates the conversation. See the Get Started Message section in the Conversations API. | |
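For custom integrations, checking the verify token in your conversation code might look like the sketch below. The request shape and header name are assumptions made for illustration; they are not the documented Hootsy payload.

```typescript
// Sketch of verify-token checking for a custom conversation webhook.
// The "x-verify-token" header name and the request shape are
// assumptions, not the documented Hootsy contract.

interface WebhookRequest {
  headers: Record<string, string>;
  body: unknown;
}

const VERIFY_TOKEN = "my-verify-token"; // the Verify Token configured in the studio

/** Accept only requests carrying the configured verify token. */
function isAuthorized(req: WebhookRequest): boolean {
  return req.headers["x-verify-token"] === VERIFY_TOKEN;
}
```

Requests that fail this check should be rejected before any conversation logic runs.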
The link to your demo can be shared with anyone. Just copy the URL in the address bar.
In addition, we want to make it even easier to share your creation and for the community to build on each other's creativity, so we provide the additional gallery settings detailed below.
| Setting | Description | Default |
|---|---|---|
| Allow Others to View in Gallery | Displays your character in the Hootsy Gallery. | false |
| Allow Others to Embed | (coming soon) Allows others to embed your character into their site or app. | false |
| Allow Others to Remix | (coming soon) Allows others to create a copy of your assistant and adjust its settings. They will not be able to view your webhook and token, but they can use them or override the settings. | false |
Please only set these to true when you have an assistant that you think would create a unique experience for others.
The Hootsy web experience demonstrates the core functionality; however, a number of additional features may be useful for your own experience. These features, along with the appearance of the assistant, can be customized in the web and Unity SDKs. If you have any questions about these features, please contact us at email@example.com.
Multiple assistants can be added to a scene. Our code defines the logic that determines which assistant the user is talking to and handles the interaction appropriately.
By default, the assistant automatically starts listening after it finishes talking. However, you can set the assistant to listen only when the speak control icon is tapped. This gives you tighter control over when the assistant is listening and may be a better option in noisy environments.
You can control the interaction range, which is the maximum distance at which a user can interact with an assistant. You can also control whether a range indicator is displayed around the assistant.
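The multi-assistant and interaction-range behavior described above could be sketched like this. The types and the selection rule (nearest assistant whose range covers the user) are illustrative assumptions, not the SDK's actual logic.

```typescript
// Sketch: the user interacts with the nearest assistant whose
// interaction range covers the user's position. Types and names are
// illustrative, not the Hootsy SDK API.

interface Vec3 { x: number; y: number; z: number; }
interface Assistant { id: string; position: Vec3; interactionRange: number; }

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

/** Nearest assistant whose interaction range covers the user, or null. */
function activeAssistant(user: Vec3, assistants: Assistant[]): Assistant | null {
  let best: Assistant | null = null;
  let bestDist = Infinity;
  for (const a of assistants) {
    const d = distance(user, a.position);
    if (d <= a.interactionRange && d < bestDist) {
      best = a;
      bestDist = d;
    }
  }
  return best;
}
```

When no assistant is in range, no interaction starts; a range indicator would simply visualize each `interactionRange`.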
(Unity only) Allows you to preload and cache all the spoken audio for a conversation the first time a user interacts with the assistant. This allows for even quicker response times and is useful if the same app will be used at a later time in a place with slower network speeds. Contact us at firstname.lastname@example.org for guidance on using this feature.
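The preload-and-cache idea can be sketched as below. A real implementation would fetch clips asynchronously over the network; a synchronous loader stand-in is used here for brevity, and all names are invented rather than taken from the SDK.

```typescript
// Sketch of preloading and caching spoken audio clips so later
// playback is instant. The loader is a synchronous stand-in for a
// network fetch; names are illustrative, not the Hootsy SDK API.

type LoadAudio = (url: string) => ArrayBuffer;

class AudioCache {
  private cache = new Map<string, ArrayBuffer>();
  constructor(private load: LoadAudio) {}

  /** Load every clip once up front; already-cached clips are skipped. */
  preload(clips: { id: string; url: string }[]): void {
    for (const clip of clips) {
      if (!this.cache.has(clip.id)) {
        this.cache.set(clip.id, this.load(clip.url));
      }
    }
  }

  /** Cached audio for a clip id, or undefined if it was never preloaded. */
  get(id: string): ArrayBuffer | undefined {
    return this.cache.get(id);
  }
}
```

Preloading on first interaction trades a one-time upfront cost for fast responses later, which is why it helps when the app is reused on a slower network.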
Allows you to define what actions to take based on the action property defined in the conversation message. For example, you can shrink, grow, or remove an object in the scene. Defined by the Action Control script in the SDK.
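Dispatching on a message's action property might look like the following sketch. The action names, message shape, and handler map are illustrative; they are not the Action Control script's actual contract.

```typescript
// Sketch of dispatching on the action property of a conversation
// message, e.g. to shrink, grow, or remove an object in the scene.
// The shapes and names here are assumptions, not the SDK contract.

type ActionHandler = (target: string) => string;

const handlers: Record<string, ActionHandler> = {
  shrink: (target) => `${target} scaled down`,
  grow: (target) => `${target} scaled up`,
  remove: (target) => `${target} removed from scene`,
};

/** Apply the handler for message.action, or report an unknown action. */
function handleAction(message: { action: string; target: string }): string {
  const handler = handlers[message.action];
  return handler ? handler(message.target) : `unknown action: ${message.action}`;
}
```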
Allows you to attach an assistant to the HUD/screen so the assistant can continue to listen after you walk away. You can also set an assistant to attach at the start of the scene, and allow users to switch between the attached assistant and a 3D assistant placed in the scene. Defined by the Attached Assistant Control script in the SDK.
Allows you to determine which 3D object in the scene provides context for a given message. For example, if a user looks at an object such as a car and says ‘what is this’, the assistant knows the question refers to the car, because the car is the context for the message. As a result, the assistant can respond with ‘this is a car’.
You can also define subcontexts for a 3D object, which are parts of the 3D model. For example, the user could look at the tire and say ‘what is this’. The sent message will have a context of ‘car’ and a subcontext of ‘tire’. As a result, the assistant can respond with ‘this is the tire for the car’. Defined by the Context Control script in the SDK.
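Attaching context and subcontext to an outgoing message, following the car/tire example above, could look like this sketch. The message shape is an assumption about what a Context Control integration might send, not the SDK's actual format.

```typescript
// Sketch of building a message carrying context/subcontext, per the
// car/tire example. The message shape is an assumption, not the
// Hootsy SDK's actual wire format.

interface ContextMessage {
  text: string;
  context?: string;    // the object the user is looking at, e.g. "car"
  subcontext?: string; // a named part of that object's model, e.g. "tire"
}

function buildMessage(text: string, context?: string, subcontext?: string): ContextMessage {
  const msg: ContextMessage = { text };
  if (context) msg.context = context;
  if (context && subcontext) msg.subcontext = subcontext; // a subcontext only makes sense with a context
  return msg;
}
```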
(Unity only) Allows users to define the initial placement of models added as part of the conversation. Defined by the Placement Control script in the SDK.
Provides control of voice interaction with the assistant, allowing the user to pause recording or stop the assistant from talking. Defined by the Speak Control script in the SDK.
Allows you to display tips that guide new users on how to interact with the assistant. Tips are currently predefined and can only be toggled on/off. Defined by the Tip Control script in the SDK.
Allows you to move the assistant around the scene and move/rotate models added as part of the conversation. Defined by the Touch Control script in the SDK.
Once you have created and tested the assistant to your liking, select the ‘Code’ button on the studio page, under the description. This provides the id and token needed to add the assistant to your app, site, or game.
The Hootsy-Unity SDK can be found on GitHub.
The assistant can be added to a Three.js scene. The SDK is currently in private beta. If you want access, please contact us at email@example.com.