# API Overview
[UNITH](https://unith.ai) makes it possible to create engaging user experiences through digital humans. Our digital humans are designed to act swiftly, respond promptly, and above all, assist, guide, support, and engage your users effectively. We aim to amplify your concepts with our technology, fostering a next-generation experience.

This documentation covers fundamental aspects such as access, avatar creation, and response management. Our API empowers you to programmatically generate digital humans, seamlessly integrating UNITH's cutting-edge AI capabilities into your product. UNITH helps you think outside the bot.

## High-level overview

The UNITH API is composed of the following components:

- Authentication
- User organisation
- Head
- Video
- Head visuals
- Voices
- Document

The components you use, and the order in which you use them, largely depend on the type of user you are and your specific use case.

## Authentication

All users will need to use the authentication endpoint to authenticate and generate their bearer token, as described in docid\ a6pxi9wvcn3uxnytwj7ja.

## Head creation

The first thing you will do is create your first digital human (referred to interchangeably as a "head"). You do this as described in docid\ hoboropyem9tuenmqcf s. You will already have all the dependencies associated with your organisation, most notably faces (head visuals) and voices to choose from.

## Head modification

All digital humans can be modified, and often are. This is described in docid\ gjy qc6alw5tk6sscxrpp.

## Voices

Voices are essential to digital human configuration. The voices endpoints give you visibility into the voices available to you when creating or modifying a digital human. See docid\ ujeqesnyxa j9j2 z6jtl for the voices available to your user, and check our guidelines at https://docs.unith.ai/voice-selection-guide-tts.

## Head visuals

Head visuals are effectively the faces of your digital humans. You can use a single head visual for more than one digital human. Head visuals can be either public or private: private head visuals are available only to your organisation, while public head visuals are available to others. See docid\ vpzbfsclk26xbcillnvdg for the faces available to your user, and learn more at https://docs.unith.ai/creating-head-visuals.

## Document

When using a "document"-based digital human, the document endpoint is required to load specific knowledge into the digital human. This is only required for Doc QA use cases, as defined in docid\ 0d qdu8wz5ugua deosea.

## Bring your own faces

UNITH allows you to easily create your own head visuals that you can use within your organisation.

### Video

The video endpoint is used to upload a short video of the person you wish to import into the platform. This video forms the basis of the pre- and post-processing involved in creating your head visual.

## A typical workflow

As an API user interested in creating your first digital human, you will start by:

1. Listing the faces (head visuals) available to you.
2. Selecting the most appropriate face for your use case.
3. Creating a digital human.
4. Accessing the digital human via chat.unith.ai, as defined in docid\ hoboropyem9tuenmqcf s.
5. Reviewing and modifying the digital human after your initial tests (optional).
6. Embedding the digital human in your app, as defined in docid\ tdwdj0bo8et9f0k9o5kvb or docid\ fcs2alc yoseu88sfknll.
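The authentication step might look like the following minimal sketch. The base URL, the `/auth/login` path, and the `email`/`password`/`token` field names are assumptions for illustration only; consult the authentication reference for the real contract.

```python
# Hypothetical sketch of obtaining a bearer token from the UNITH API.
# The endpoint path and payload/response field names are ASSUMED, not
# taken from the official reference.
import json
import urllib.request

BASE_URL = "https://api.unith.ai"  # assumed base URL


def build_login_request(email: str, password: str) -> urllib.request.Request:
    """Assemble a JSON POST request against the assumed login endpoint."""
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        BASE_URL + "/auth/login",  # assumed path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def extract_token(response_json: dict) -> str:
    """Pull the bearer token out of an assumed {"token": "..."} response."""
    return response_json["token"]


if __name__ == "__main__":
    req = build_login_request("you@example.com", "your-password")
    # resp = urllib.request.urlopen(req)  # network call omitted in this sketch
    # token = extract_token(json.load(resp))
```

Once obtained, the token is sent as `Authorization: Bearer <token>` on subsequent calls.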
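The first three steps of the typical workflow can be sketched as below. The endpoint paths (`/head-visuals`, `/heads`), payload field names, and base URL are assumptions introduced for illustration; the real request shapes are defined in the endpoint references linked above.

```python
# Hypothetical sketch of the typical workflow: list available faces,
# pick one, and create a digital human. Paths and field names are ASSUMED.
import json
import urllib.request
from typing import Optional

BASE_URL = "https://api.unith.ai"  # assumed base URL


def auth_header(token: str) -> dict:
    """Build the Authorization header from a bearer token."""
    return {"Authorization": f"Bearer {token}"}


def build_request(path: str, token: str,
                  payload: Optional[dict] = None) -> urllib.request.Request:
    """Assemble an authenticated JSON request: GET without a payload, POST with one."""
    headers = {"Content-Type": "application/json", **auth_header(token)}
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path, data=data, headers=headers,
        method="POST" if payload is not None else "GET",
    )


def create_head_payload(name: str, head_visual_id: str, voice_id: str) -> dict:
    """Minimal (assumed) payload for creating a head from a face and a voice."""
    return {"name": name, "headVisualId": head_visual_id, "voiceId": voice_id}


if __name__ == "__main__":
    token = "YOUR_BEARER_TOKEN"  # from the authentication endpoint
    list_faces = build_request("/head-visuals", token)          # 1. list faces
    create_head = build_request(                                # 3. create head
        "/heads", token,
        create_head_payload("Greeter", "face-123", "voice-456"),
    )
    # urllib.request.urlopen(list_faces) would send the call; omitted here.
```

Steps 4–6 (chatting with, modifying, and embedding the head) follow the same pattern against their respective endpoints.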