# UNITH Embed Integration Guide
## Overview

The UNITH Embed system provides the simplest and fastest way to integrate digital humans into your website or application. We offer a custom web component (`<unith-widget>`) that can be embedded on any website using standard HTML, or integrated into modern frameworks like React and Next.js.

This guide covers three integration methods:

1. **Standard HTML integration** — using the custom `<unith-widget>` element
2. **React/Next.js integration** — framework-specific implementation
3. **Iframe integration** — alternative embedding approach

All integration methods require three core credentials: `head-id`, `org-id`, and `api-key`. These can be found in your UNITH interface. For more information, please visit https://docs.unith.ai/user.

## Standard HTML Integration

### Step 1: Add the embedding script

Include the UNITH embed script in your HTML. The `defer` attribute ensures the script loads after the page content:

```html
<script src="https://embedded.stream.unith.ai/index.js" defer></script>
```

### Step 2: Add the widget element

Choose between fullscreen mode or widget mode based on your integration needs.

#### Fullscreen mode (default)

For fullscreen integration, wrap the `<unith-widget>` element in a container with defined dimensions:

```html
<div style="height: 400px; width: 100%;">
  <unith-widget
    head-id="yourHeadId"
    org-id="yourOrgId"
    api-key="yourApiKey"
  ></unith-widget>
</div>
```

In fullscreen mode (the default), you must wrap `<unith-widget>` in a container `div` with a defined height. Without this, the digital human interface may not display correctly.

#### Widget mode (floating button)

For a floating widget that appears as a chat button on your page, add `variant="widget"` (and optionally `placement`) to the same element.

## Configuration Attributes

### Core attributes (required)

These three attributes are required for all UNITH Embed integrations:

| Attribute | Description | Type | Example |
| --- | --- | --- | --- |
| `head-id` | Unique identifier for your digital human | String | `"lola-1234"` |
| `org-id` | Organization identifier from your UNITH profile | String | `"unith-123"` |
| `api-key` | API key associated with your account | String | `"uk-abc123xyz..."` |

You can take the `org-id`, `head-id`, and `api-key` directly from the URL of your digital human, e.g. `www.chat.unith.ai/orgid-123/headid-123?api-key=1234`. You can also obtain your digital human's head ID by navigating to your interface > Digital Human > Edit > Basic Details.

### Secondary attributes (optional)

#### Display & localization

**`language`** — sets the UI display language for interface elements (buttons, labels, system messages).

- Type: String (language code)
- Default: user's browser or device language
- Example: `language="en-US"`

Supported languages:

| Language code | Language |
| --- | --- |
| `ar-AE` | Arabic (United Arab Emirates) |
| `bg-BG` | Bulgarian (Bulgaria) |
| `bn-BD` | Bengali (Bangladesh) |
| `bs-BA` | Bosnian (Bosnia and Herzegovina) |
| `cs-CZ` | Czech (Czechia) |
| `de-DE` | German (Germany) |
| `en-US` | English (United States) |
| `es-ES` | Spanish (Spain) |
| `fr-FR` | French (France) |
| `hu-HU` | Hungarian (Hungary) |
| `id-ID` | Indonesian (Indonesia) |
| `it-IT` | Italian (Italy) |
| `ka-GE` | Georgian (Georgia) |
| `kk-KZ` | Kazakh (Kazakhstan) |
| `lt-LT` | Lithuanian (Lithuania) |
| `lv-LV` | Latvian (Latvia) |
| `nl-NL` | Dutch (Netherlands) |
| `pl-PL` | Polish (Poland) |
| `pt-PT` | Portuguese (Portugal) |
| `ro-RO` | Romanian (Romania) |
| `ru-RU` | Russian (Russia) |
| `sk-SK` | Slovak (Slovakia) |
| `sl-SI` | Slovenian (Slovenia) |
| `sr-RS` | Serbian (Cyrillic, Serbia) |
| `th-TH` | Thai (Thailand) |
| `uk-UA` | Ukrainian (Ukraine) |

#### UI variant & placement

**`variant`** — controls the UI display mode.

- Type: String
- Default: `"fullscreen"`
- Supported values: `"fullscreen"`, `"compact"`, `"widget"`
- Example: `variant="widget"`

**`placement`** — controls the widget position when `variant="widget"`. Only applies to widget mode.

- Type: String
- Default: `"bottom-right"`
- Supported values:

| Value | Description |
| --- | --- |
| `"top-left"` | Top-left corner |
| `"top"` | Top center |
| `"top-right"` | Top-right corner |
| `"bottom-left"` | Bottom-left corner |
| `"bottom"` | Bottom center |
| `"bottom-right"` | Bottom-right corner |

For a widget with custom placement, combine both attributes, e.g. `variant="widget" placement="bottom-left"`.

#### User identification

**`username`** — associates the current user session with a specific username for tracking and analytics purposes.

- Type: String
- Example: `username="Alice Johnson"`

The `username` attribute enables user-specific conversation tracking and appears in API logs with the session ID format `{sessionId}-{username}`.

#### Speech-to-text configuration

**`stt-provider`** — selects the speech recognition provider for voice input.

- Type: String
- Default: `"azure"`
- Supported values: `"azure"`, `"eleven-labs"`
- Example: `stt-provider="eleven-labs"`

#### Advanced ElevenLabs STT configuration

The following attributes are only applicable when `stt-provider="eleven-labs"`:

- **`noise-suppression`** — enables background noise filtering for clearer voice recognition. Type: Boolean. Default: `true`. Example: `noise-suppression="true"`.
- **`vad-silence-threshold-secs`** — duration of silence (in seconds) required to detect the end of speech. Type: Number. Default: `1.5`. Range: `0.5`–`3.0`. Example: `vad-silence-threshold-secs="1.5"`.
- **`vad-threshold`** — voice activity detection sensitivity threshold; lower values are more sensitive to speech. Type: Number. Default: `0.4`. Range: `0.0`–`1.0`. Example: `vad-threshold="0.4"`.
- **`min-speech-duration-ms`** — minimum duration (in milliseconds) of audio to be considered speech. Type: Number. Default: `100`. Example: `min-speech-duration-ms="100"`.
- **`min-silence-duration-ms`** — minimum duration (in milliseconds) of silence between speech segments. Type: Number. Default: `100`. Example: `min-silence-duration-ms="100"`.

A complete ElevenLabs STT configuration combines `stt-provider="eleven-labs"` with any of the VAD attributes above on a single `<unith-widget>` element.

## React / Next.js Integration

For React and Next.js applications, follow these steps to properly integrate the UNITH widget.

### Step 1: Load the embed script

Use React's `useEffect` hook to dynamically load the UNITH embed script when your component mounts:

```jsx
import { useEffect } from 'react';

useEffect(() => {
  // Load the script
  const script = document.createElement('script');
  script.src = 'https://embedded.stream.unith.ai/index.js';
  script.defer = true;
  document.body.appendChild(script);

  return () => {
    // Cleanup: remove the script when the component unmounts
    document.body.removeChild(script);
  };
}, []);
```

### Step 2: Add the widget element

#### Fullscreen mode

```jsx
<div style={{ height: '400px', width: '100%' }}>
  <unith-widget
    head-id="yourHeadId"
    org-id="yourOrgId"
    api-key="yourApiKey"
  ></unith-widget>
</div>
```
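In JSX, every `<unith-widget>` attribute is a plain hyphenated string, in both fullscreen and widget mode. If you keep your options in a camelCase TypeScript object, a small helper can map them to the attribute names; a sketch (the `toWidgetAttributes` helper is my own, not part of the UNITH API):

```typescript
// Map camelCase option keys (headId, sttProvider, vadThreshold, ...) to the
// hyphenated attribute names <unith-widget> expects (head-id, stt-provider,
// vad-threshold, ...). Values are stringified, since element attributes are
// always strings.
function toWidgetAttributes(
  options: Record<string, string | number | boolean>
): Record<string, string> {
  const attrs: Record<string, string> = {};
  for (const [key, value] of Object.entries(options)) {
    const name = key.replace(/[A-Z]/g, (c) => `-${c.toLowerCase()}`);
    attrs[name] = String(value);
  }
  return attrs;
}

// Spread the result onto the element, e.g. <unith-widget {...attrs} />
const attrs = toWidgetAttributes({
  headId: 'yourHeadId',
  orgId: 'yourOrgId',
  apiKey: 'yourApiKey',
  vadThreshold: 0.4,
});
```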
#### Widget mode

```jsx
<unith-widget
  head-id="yourHeadId"
  org-id="yourOrgId"
  api-key="yourApiKey"
  variant="widget"
  placement="bottom-right"
></unith-widget>
```

### Step 3: TypeScript configuration (TypeScript projects only)

If you're using TypeScript, add type definitions to prevent TypeScript errors.

Create or update `global.d.ts`:

```ts
declare namespace JSX {
  interface IntrinsicElements {
    'unith-widget': {
      'head-id': string;
      'org-id': string;
      'api-key': string;
      variant?: 'widget' | 'fullscreen' | 'compact';
      placement?: 'bottom-right' | 'bottom-left' | 'top-right' | 'top-left' | 'top' | 'bottom';
      language?: string;
      username?: string;
      'stt-provider'?: 'azure' | 'eleven-labs';
      'noise-suppression'?: boolean;
      'vad-silence-threshold-secs'?: number;
      'vad-threshold'?: number;
      'min-speech-duration-ms'?: number;
      'min-silence-duration-ms'?: number;
    };
  }
}
```

Update `tsconfig.json` to include the `global.d.ts` file in your TypeScript configuration:

```json
{
  "include": ["global.d.ts", "src/**/*"]
}
```

### Complete React component example

```jsx
import { useEffect } from 'react';

export default function DigitalHumanEmbed() {
  useEffect(() => {
    const script = document.createElement('script');
    script.src = 'https://embedded.stream.unith.ai/index.js';
    script.defer = true;
    document.body.appendChild(script);

    return () => {
      document.body.removeChild(script);
    };
  }, []);

  return (
    <div style={{ height: '500px', width: '100%' }}>
      <unith-widget
        head-id="yourHeadId"
        org-id="yourOrgId"
        api-key="yourApiKey"
        language="en-US"
        username="current-user"
      ></unith-widget>
    </div>
  );
}
```

## Iframe Integration

As an alternative to the custom web component, you can embed the digital human using a standard HTML `iframe`.

### Basic iframe implementation

Use your digital human's chat URL (shown under Core attributes) as the iframe `src`. The `allow="microphone"` attribute is required to enable voice input functionality in the iframe.

### Customization with query parameters

Pass secondary attributes as URL query parameters to customize the iframe embed.

Query parameter format:

| Parameter | Example value | Required |
| --- | --- | --- |
| `api-key` | `yourApiKey` | Yes |
| `language` | `en-US` | No |
| `stt-provider` | `eleven-labs` | No |
| `username` | `alice` | No |
| `noise-suppression` | `true` | No |
| `vad-silence-threshold-secs` | `1.0` | No |
| `placement` | `center` | No |

## Integration Examples by Use Case

- **Customer support widget** — a floating widget in the bottom-right corner with user tracking.
- **Fullscreen product demo** — an embedded fullscreen experience for product demonstrations.
- **Multilingual educational platform** — a compact widget with optimized voice recognition for education.

## Important Notes

- **Browser compatibility** — currently, we only support the Google Chrome browser.
- **Microphone permissions** — users will be prompted to grant microphone access when they first interact with the digital human. Ensure your website uses HTTPS, as modern browsers require secure contexts for microphone access.
- **Script loading** — the embed script (`index.js`) should be loaded with the `defer` attribute to ensure it executes after the DOM is fully parsed.
- **Container dimensions** — in fullscreen and compact modes, always define an explicit height and width for the container `div`; the widget will inherit these dimensions.
- **TypeScript support** — for TypeScript projects, always add the `global.d.ts` type definitions to prevent compilation errors with the custom `<unith-widget>` element.
- **Query parameter encoding** — when using iframe integration with query parameters, ensure special characters in API keys are properly URL-encoded.
- **Username tracking** — the `username` attribute is automatically included in session IDs and conversation logs, enabling user-specific analytics and tracking.
- **ElevenLabs STT parameters** — advanced voice activity detection (VAD) parameters are only applicable when using `stt-provider="eleven-labs"`; these settings are ignored for Azure STT.

## Troubleshooting

### Widget not displaying

**Issue:** The `<unith-widget>` element appears but doesn't render the digital human interface.

**Solution:** Ensure the parent container has defined height and width dimensions. In fullscreen mode, the widget requires explicit dimensions to render correctly.

### Microphone access denied

**Issue:** The digital human cannot access the microphone for voice input.

**Solution:**

- Verify your site uses HTTPS (required for microphone access).
- For iframe integration, ensure the `allow="microphone"` attribute is present.
- Check browser permissions and ensure microphone access is granted.

### TypeScript errors

**Issue:** TypeScript reports errors about the unknown element `<unith-widget>`.

**Solution:** Add the `global.d.ts` type definitions file as described in the React/Next.js Integration section, and ensure it's included in your `tsconfig.json`.
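If you build the iframe `src` programmatically, the standard `URL` and `URLSearchParams` APIs take care of the query-parameter encoding noted above. A sketch, assuming the `chat.unith.ai/{orgId}/{headId}` URL shape shown under Core attributes (the `buildEmbedUrl` helper is my own):

```typescript
// Build an iframe src for the digital human, URL-encoding every query
// parameter. API keys in particular may contain characters that must be
// percent-encoded.
function buildEmbedUrl(
  orgId: string,
  headId: string,
  params: Record<string, string | number | boolean>
): string {
  const url = new URL(`https://www.chat.unith.ai/${orgId}/${headId}`);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}

const src = buildEmbedUrl('orgid-123', 'headid-123', {
  'api-key': 'yourApiKey',
  language: 'en-US',
  'stt-provider': 'eleven-labs',
});
// -> https://www.chat.unith.ai/orgid-123/headid-123?api-key=yourApiKey&language=en-US&stt-provider=eleven-labs
```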