Voice Connectors
This documentation provides an overview of our text-to-speech (TTS) service and explains how to manage TTS voices for your digital humans using our voice connectors.

Text-to-Speech (TTS) Overview

Text-to-speech (TTS) is a service that converts written text into synthesized, human-like speech. A digital human must have a TTS voice configured to function correctly.

Our platform's voice connectors are designed to integrate with a variety of external TTS providers, offering you a wide selection of voice options. We are continuously adding support for new providers. If you are interested in creating your own custom voice connector, please reach out to us for more information.

Supported TTS Providers

The platform relies on a number of third-party TTS providers. You can retrieve the list of currently supported providers from the following endpoint:

https://platform.api.unith.ai/providers

Currently supported providers include:

- ElevenLabs
- Microsoft Azure
- AudioStack

Each provider has its own unique set of usable voices, and voices cannot be cross-matched between providers. When creating or updating a digital human head, you need to configure both the ttsProvider and ttsVoice parameters. These values must correspond to a valid provider name and a specific voice ID from that provider.

Regional Voice Compatibility

Voices available in one region may not be present in others. As a result, a digital human created in one region (e.g., the US) may not be fully functional in another (e.g., the EU or Australia).

To ensure that the same digital human is fully operational in any region, you can use our common voices file. This JSON file contains a list of voices that are guaranteed to be available and fully supported across all regions. We recommend using these voices for maximum compatibility and reliability.
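As a minimal sketch, the providers endpoint above can be queried with Python's standard library. The bearer-token Authorization header and the response shape (a JSON list of provider names) are assumptions for illustration, not documented behavior; consult your API credentials and the actual response for the exact format.

```python
import json
import urllib.request

# Endpoint from the documentation above.
PROVIDERS_URL = "https://platform.api.unith.ai/providers"


def fetch_providers(token: str) -> list:
    """Fetch the list of supported TTS providers.

    Assumes bearer-token auth and a JSON array response;
    adjust to match your account's actual auth scheme.
    """
    req = urllib.request.Request(
        PROVIDERS_URL,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `fetch_providers("<your-token>")` would return the current provider list, which you can use to populate a voice-selection UI or validate configuration before creating a head.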
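Because voices cannot be cross-matched between providers, it can help to validate the ttsProvider/ttsVoice pair locally before sending a create or update request. The sketch below uses a hypothetical excerpt of the common voices file; the real provider names, voice IDs, and file layout come from the downloadable JSON described above.

```python
# Hypothetical excerpt of the common voices file; replace with the
# contents of the actual JSON file provided by the platform.
COMMON_VOICES = {
    "microsoft azure": ["en-US-JennyNeural"],
    "elevenlabs": ["rachel"],
}


def head_tts_config(provider: str, voice: str) -> dict:
    """Build the TTS fragment of a digital human head payload,
    rejecting provider/voice pairs that do not match."""
    voices = COMMON_VOICES.get(provider.lower())
    if voices is None or voice not in voices:
        raise ValueError(f"{voice!r} is not a known voice for {provider!r}")
    return {"ttsProvider": provider, "ttsVoice": voice}
```

For example, `head_tts_config("microsoft azure", "en-US-JennyNeural")` returns a valid fragment, while pairing an ElevenLabs voice ID with Microsoft Azure raises an error, mirroring the cross-matching restriction described above.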