API

Introduction

UNITH makes it possible to create engaging user experiences through digital humans. Our digital humans are designed to act swiftly, respond promptly, and, above all, assist, guide, support, and engage your users effectively. We aim to amplify your concepts with our technology, fostering a next-generation experience.

This documentation covers fundamental aspects such as access, avatar creation, and response management. Our API empowers you to programmatically generate digital humans, seamlessly integrating UNITH's cutting-edge AI capabilities into your product. UNITH AI helps you think outside the bot.

High-Level Overview

The UNITH API is composed of the following components:

- Authentication
- User
- Organisation
- Head
- Video
- Head Visuals
- Voices
- Document

Which components you use, and the order in which you use them, largely depends on the type of user you are and your specific use case.

Authentication

All users need to call the authentication endpoint to authenticate and generate a bearer auth token, as described in user docid\ a6pxi9wvcn3uxnytwj7ja.

Head Creation

The first thing you will do is create your first digital human (referred to interchangeably as a "head"), as described in create a digital human docid\ hoboropyem9tuenmqcf s. You will already have all the dependencies associated with your organisation (most notably, faces and voices to choose from).

Head Modification

All heads can be modified, and often are, as described in update a digital human docid\ gjy qc6alw5tk6sscxrpp. Note that it is recommended to modify existing heads rather than create new ones, unless you need a new face (currently, changing the head visual requires a new head).

Head Administration

Typically, your own application will present to a user the digital humans (heads) that have already been created, along with some populated attributes. To do this, see the list digital humans docid\ al6bpw 28gspvm3byy7oc documentation.

Voices

Voices are used primarily to provide
visibility into the voices you have available when creating or modifying a digital human. See list voices docid\ ujeqesnyxa j9j2 z6jtl to see the voices available to your user.

Head Visuals

Head visuals are effectively the faces of your digital humans. A single head visual can be used for more than one digital human. Head visuals can be either public or private: private head visuals are available only to your organisation, while public head visuals are available to others. See list faces docid\ vpzbfsclk26xbcillnvdg to see the faces available to your user.

Document

The document endpoint is used to load specific knowledge into the digital human's training corpus. This is only required for Doc QA use cases, as defined in create a doc qa digital human docid 0d qdu8wz5ugua deosea.

Bring Your Own Faces

In the short term, these endpoints and their functionality are reserved for UNITH. UNITH will ensure appropriate re-training of AI models upon each new face.

Video

The video endpoint is used to upload a short video of the real human you wish to import into the platform. This video forms the basis of the pre- and post-processing that occurs when creating your head visual.

A Typical Workflow

As an API user interested in creating your first digital human, you will typically:

1. List the faces (head visuals) available to you.
2. Select the most appropriate face for your use case.
3. Create a digital human.
4. Access the digital human via chat.unith.ai, as defined in create a digital human docid\ hoboropyem9tuenmqcf s.
5. Review and modify the digital human after your initial tests (optional).
6. Embed the digital human in your app, as defined in embed digital humans in your application docid\ tdwdj0bo8et9f0k9o5kvb or embed using iframe docid\ fcs2alc yoseu88sfknll.
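As a rough illustration of the authentication step, the sketch below constructs a login request and the bearer-token headers that subsequent calls would carry. The base URL, the `/auth/login` path, and the `email`/`password` field names are illustrative assumptions, not confirmed by this page; the user endpoint documentation defines the real contract.

```python
# Hypothetical sketch of authenticating against the API and reusing the
# bearer auth token on later requests. Host, path, and payload fields are
# assumed placeholders for illustration only.
import json
from urllib import request

BASE_URL = "https://api.example-host.com"  # assumed placeholder host


def build_auth_request(email: str, password: str) -> request.Request:
    """Construct (but do not send) a login request for the auth endpoint."""
    body = json.dumps({"email": email, "password": password}).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/auth/login",  # assumed path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def auth_headers(token: str) -> dict:
    """Headers every authenticated call carries once you hold a bearer token."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }


req = build_auth_request("me@example.com", "secret")
print(req.get_method(), req.full_url)
print(auth_headers("abc123")["Authorization"])
```

The request object is only built here, never sent, so the shape can be inspected and tested offline before wiring it to the live endpoint.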
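The typical workflow above (list faces, pick one, create a head) can be sketched as the sequence of HTTP calls an integration would plan to make. The endpoint paths (`/heads/visuals`, `/voices`, `/heads`) and payload fields (`faceId`, `voiceId`) are assumptions for illustration; the linked endpoint docs define the real shapes.

```python
# Minimal sketch of planning the first-head workflow as a list of HTTP calls.
# Paths and body fields are hypothetical placeholders, not the documented API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PlannedCall:
    method: str
    path: str
    body: Optional[dict] = None


def plan_first_head(face_id: str, voice_id: str, name: str) -> list:
    """Plan the requests for creating a first digital human."""
    return [
        PlannedCall("GET", "/heads/visuals"),  # 1. list available faces
        PlannedCall("GET", "/voices"),         # 2. list available voices
        PlannedCall("POST", "/heads", {        # 3. create the head
            "name": name,
            "faceId": face_id,
            "voiceId": voice_id,
        }),
    ]


plan = plan_first_head("face-123", "voice-456", "Support Agent")
for call in plan:
    print(call.method, call.path)
```

Separating the plan from its execution keeps the sequence easy to review and test; a thin HTTP layer (carrying the bearer token from authentication) would then execute each `PlannedCall` in order.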