Introduction
UNITH makes it possible to create engaging user experiences through the provision of Digital Humans. Our digital humans are designed to act swiftly, respond promptly, and above all, assist, guide, support, and engage your users effectively.
We aim to amplify your concepts with our technology, fostering a next-generation experience.
This documentation encompasses fundamental aspects such as access, digital human creation, and response management. Our API empowers you to programmatically generate digital humans, seamlessly integrating UNITH's cutting-edge AI capabilities into your product. UNITH AI helps you think outside the bot.
The UNITH API is composed of the following components:
- Authentication
- User
- Organisation
- Head
- Video
- Head Visuals
- Voices
- Document
The components you use, and the order in which you use them, depend largely on the type of user you are and your specific use case.
All users must call the Authentication endpoint to authenticate and generate a Bearer auth token, as described in User.
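As a sketch of that flow, authenticating and attaching the resulting token might look like the following. The base URL, path, and field names here are placeholders, not the real API schema; consult the Authentication docs for the actual values.

```python
import json
import urllib.request

# Placeholder base URL -- substitute the real one from the Authentication docs.
BASE_URL = "https://api.unith.example/v1"

def build_auth_request(username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a token request against the Authentication endpoint."""
    payload = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/auth/token",  # hypothetical path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def bearer_headers(token: str) -> dict:
    """Headers to attach to every subsequent API call."""
    return {"Authorization": f"Bearer {token}"}
```

Once the token is returned, the result of `bearer_headers(token)` is merged into the headers of every later request.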
The first thing you will do is create your first digital human (referred to interchangeably as a "head"), as described in Create a Digital Human. You will already have all the dependencies associated with your organisation (most notably, faces and voices to choose from).
All heads can be modified, and often are; this is described in Update a Digital Human. Note that it is recommended to modify existing heads rather than create new ones, unless you need a new face (currently, changing the head visual requires a new head).
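To make the create/update distinction concrete, here is a minimal sketch of the two payloads. Every field name below is an illustrative assumption, not the documented schema — see Create a Digital Human and Update a Digital Human for the real fields.

```python
# Field names below are illustrative assumptions, not the documented schema.
def build_head_payload(name: str, face_id: str, voice_id: str, mode: str = "chat") -> dict:
    """Payload for creating a new head."""
    return {
        "name": name,             # display name of the head
        "headVisualId": face_id,  # a face available to your organisation
        "voiceId": voice_id,      # a voice available to your organisation
        "mode": mode,             # e.g. "doc_qa" for document-grounded heads
    }

def build_head_update(**changes) -> dict:
    """For an update, send only the fields that change -- e.g. a new voice --
    rather than creating a new head (a new face is the exception)."""
    return dict(changes)
```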
Typically, in your own application, you will want to present to a user the digital humans (heads) that have already been created, along with some populated attributes. To do this, see the List Digital Humans documentation.
The Voices endpoint is used primarily to provide visibility into the voices available to you when creating or modifying a Digital Human. See List Voices for the voices available to your user.
Head visuals are effectively the faces of your Digital Humans. You can use a single head visual for more than one Digital Human. See List Faces to see faces available to your user.
The document endpoint is required to load specific knowledge into the Digital Human training corpus. This is only required for doc_qa use cases, as defined in Create a doc_qa Digital Human.
In the short term, these endpoints and this functionality are reserved for UNITH. UNITH will ensure appropriate re-training of AI models for each new face.
The video endpoint is used to upload a short video of the real human you wish to import into the platform. This video forms the basis of the pre- and post-processing that occurs in creating your Head Visual.
Head visuals are effectively the face of your Digital Human, and a single head visual can serve more than one digital human. Head visuals can be either public or private: private head visuals are available only to your organisation, while public head visuals are available to others.
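For illustration, filtering a listed set of head visuals into private and public groups might look like this. The record shape and the `public` flag are assumptions, not taken from the API reference.

```python
# The "public" flag and record shape here are assumptions for illustration.
faces = [
    {"id": "face-1", "public": True},   # visible to other organisations too
    {"id": "face-2", "public": False},  # private: only your organisation sees it
]

private_faces = [f for f in faces if not f["public"]]
public_faces = [f for f in faces if f["public"]]
```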
As an API user interested in creating your first digital human, you will start by:
- Listing the faces (head visuals) available to you
- Selecting the most appropriate face for your use case
- Creating a digital human
- Reviewing and modifying the digital human after your initial tests
- (Optional) Embedding the digital human in your app, as defined in Embed Digital Humans in your Application or Embed using iframe
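The steps above can be sketched end to end. The helper functions and the embed URL below are stand-ins so the sketch runs, not real API calls; real implementations would wrap the List Faces, Create a Digital Human, and Update a Digital Human endpoints.

```python
# Stand-in stubs so the sketch runs; real implementations would call the
# List Faces, Create a Digital Human, and Update a Digital Human endpoints.
def list_faces(token):
    return [{"id": "face-1"}]

def create_head(token, face_id):
    return {"id": "head-1", "url": "https://chat.unith.example/head-1"}  # placeholder URL

def update_head(token, head_id, **changes):
    return {"id": head_id, "url": "https://chat.unith.example/head-1", **changes}

def first_head_workflow(token: str) -> str:
    faces = list_faces(token)                      # 1. list available head visuals
    face = faces[0]                                # 2. pick one for your use case
    head = create_head(token, face_id=face["id"])  # 3. create the digital human
    head = update_head(token, head["id"], name="Support Assistant")  # 4. iterate
    return f'<iframe src="{head["url"]}"></iframe>'  # 5. (optional) embed via iframe
```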