Proven tools for proven results
Create intelligent IVR, chatbot and messaging experiences with intuitive tools built on Nuance speech and AI technologies, APIs and micro‑services.
Design, develop and test multi‑language, omni‑channel conversational AI experiences in a single project across voice and digital channels—all with an easy‑to‑use graphical UI.
Today’s consumers want choice and flexibility: choice of channels, and flexibility in the way they interact. Mix.dialog lets designers and developers build on a set of predefined dialogue nodes that incorporate conversational AI best practices and cover use cases from comprehensive FAQ‑type VAs to highly personalised, transactional bots and IVRs—without having to write a single line of code.
Build conversational AI experiences for voice‑enabled and digital channels in a single project. Optimise the logic for each channel and modality while ensuring consistency and reuse within a single project. Utilise rich text, media and action buttons in digital channels, and Nuance Vocalizer TTS and pre-recorded prompts in voice channels.
Conversational logic built in Mix.dialog builds on context and session awareness. User messages such as 'Am I due for a new phone yet?' will be interpreted along with information from prior messages in the same dialogue to provide the right answer. Take the dialogue understanding further by including external data such as location, account details and customer preferences to create more personalised experiences.
In Mix.dialog, call flow designers build on core components that can orchestrate mixed‑initiative dialogues. First‑time users are guided by the system through conversations step by step, while more experienced users can take the fast track and steer the conversation themselves.
Bots built with Mix know how to multi‑task. Users can start a conversation and go down a path with one intention; when they add a second intention, they can branch out of the current flow to resolve it—and then resume the original task seamlessly.
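The branch-and-resume behaviour described above can be sketched as a simple task stack: the active task is suspended when a new intent arrives and resumed once the interrupting task completes. This is an illustrative model only, with hypothetical names—not Nuance Mix's actual runtime.

```python
# Illustrative sketch of branch-and-resume dialogue handling.
# The top of the stack is the active task; tasks beneath it are
# suspended, not lost.

class DialogueEngine:
    def __init__(self):
        self.task_stack = []

    def start_task(self, intent):
        """Push a new task; the previous one is suspended."""
        self.task_stack.append(intent)
        return f"Starting: {intent}"

    def complete_task(self):
        """Finish the active task and resume the one beneath it."""
        finished = self.task_stack.pop()
        if self.task_stack:
            return f"Finished {finished}; resuming {self.task_stack[-1]}"
        return f"Finished {finished}"

engine = DialogueEngine()
engine.start_task("check_upgrade_eligibility")
engine.start_task("report_lost_phone")   # user branches mid-flow
msg = engine.complete_task()             # second task done, first resumes
```

After the interrupting task completes, the engine automatically returns to the suspended one, which is the behaviour the paragraph above describes.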
Mix.dialog lets developers integrate with existing back‑end systems for authentication, data access and transaction fulfilment.
Mix.dialog runtime APIs combine Nuance's speech technologies, NLU and dialogue orchestration in a single API—allowing for easier integration with clients including IVR, VAs or bots, smart speakers, mobile apps and social media engagement channels.
Integrate external systems (such as CRM, customer profiles) into your call flow logic to personalise the user experience and better fulfil users' needs.
Mix.dialog supports direct integration with RESTful web services, as well as client‑side data integration, which lets custom client integrations for mobile and web applications make use of client‑side data and logic.
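A RESTful data-access step of this kind might look like the following sketch. The endpoint, field names and mapping are hypothetical, and the HTTP call is injected so the logic can run (and be tested) without a live service.

```python
import json
from typing import Callable

# Hypothetical data-access step: fetch a customer profile from a
# RESTful back end and map it into the variables the dialogue
# logic expects. `http_get` is injected so a canned response can
# stand in for the real service.

def fetch_customer_profile(customer_id: str,
                           http_get: Callable[[str], str]) -> dict:
    url = f"https://api.example.com/customers/{customer_id}"  # hypothetical
    payload = json.loads(http_get(url))
    return {
        "first_name": payload["firstName"],
        "upgrade_eligible": payload["upgradeEligible"],
    }

# Canned response standing in for the back end.
canned = lambda url: json.dumps({"firstName": "Ana",
                                 "upgradeEligible": True})
profile = fetch_customer_profile("12345", canned)
```

Injecting the transport function is also what makes the same step usable both against a production back end and inside a simulated test run.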
Define personalised call flow logic and business rules without having to write a single line of code. Build complex nested conditions via an easy‑to‑manage form‑filling approach. Leverage data objects with schemas mapping to your back‑end methods to minimise data manipulation requirements.
Use built‑in validation and the Try mode to test your dialogue logic before pushing it into production. Drive the application logic through all possible paths by simulating different user messages and back‑end response data.
The Try mode in Mix.dialog allows developers to test-drive the application logic without having to deploy to the target environment. Simulate text and voice modalities and verify system responses, while getting visual feedback on the call flow execution in a protected environment.
While test‑driving application logic through all possible paths in Try mode, simulate back-end responses by entering different response data directly in the test window.
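Driving the same dialogue logic down different paths with simulated back-end data can be pictured like this. The function and field names are illustrative only, not Mix.dialog's Try mode itself.

```python
# Sketch of Try-mode-style testing: exercise both branches of a
# dialogue step by substituting different simulated back-end
# response data. Field names are illustrative.

def upgrade_dialogue(backend_response: dict) -> str:
    """Choose a system response based on (simulated) back-end data."""
    if backend_response["upgradeEligible"]:
        return "Good news - you're due for an upgrade."
    days = backend_response["daysUntilEligible"]
    return f"You'll be eligible in {days} days."

# Simulate both branches without deploying anywhere.
happy = upgrade_dialogue({"upgradeEligible": True,
                          "daysUntilEligible": 0})
waiting = upgrade_dialogue({"upgradeEligible": False,
                            "daysUntilEligible": 42})
```

Because the response data is supplied by hand, every path through the logic can be verified before anything reaches a target environment.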
When configuring call flow logic details, Mix.dialog supports developers by validating the configured dialogue nodes and highlighting missing content or flawed logic.
Build natural language processing domains and continuously refine and evolve your NLU model based on real‑world usage data. Define user intents ('book a flight') and entities ('from JFK to LAX next Wednesday') and provide sample sentences to train the DNN‑based NLU engine.
Train your NLU model with sample phrases to learn to distinguish between dozens or hundreds of different user intents. For each intent, define the entities required to fulfil the customer request. Create custom entities based on word lists and everyday expressions or use ready‑made entities for numbers, currency and date/time that understand the variety of ways that customers can express that information.
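Training data of the kind described above—intents, annotated sample sentences and their entities—can be pictured with the structure below. This is a sketch for explanation only, not the actual Mix.nlu project or export format.

```python
# Illustrative NLU training data: intents mapped to annotated
# sample sentences. Intent and entity names are hypothetical.

training_data = {
    "BOOK_FLIGHT": [
        {"text": "book a flight from JFK to LAX next Wednesday",
         "entities": {"origin": "JFK", "destination": "LAX",
                      "date": "next Wednesday"}},
        {"text": "I need to fly to Boston tomorrow",
         "entities": {"destination": "Boston", "date": "tomorrow"}},
    ],
    "CHECK_UPGRADE": [
        {"text": "am I due for a new phone yet", "entities": {}},
    ],
}

# Per-intent sample counts help spot underrepresented intents
# before training.
sample_counts = {intent: len(samples)
                 for intent, samples in training_data.items()}
```

Counting samples per intent is one simple way to find the coverage gaps that the training and testing workflow described next is designed to surface.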
Train the NLU model at any time and test it against practice sentences. Identify problem areas where intents overlap too closely, confidence levels need to be boosted or additional entities need to be defined.
Deploy the trained NLU model to the NLU engine and, at the same time, as a domain language model to the speech‑to‑text transcription engine. This provides the highest accuracy in speech recognition results, semantic parsing and understanding of user utterances based on your application’s specific language domain.
Use one central view for managing users, access rights, and project versions and deployments. The Mix Dashboard also allows for promotion flows from a sandbox environment to staging and production environments while letting you control multi‑datacentre, multi-regional and hybrid deployment models.
Manage conversational AI projects across your enterprise. Define the supported languages, channels and modalities on a per‑project basis.
Control project versions by integrating with common version management systems. Define deployment packages, and promote them from a sandbox environment to staging and production environments across multiple regions or data centres. Roll back immediately in case of problems with a new version.
Add languages, channels and modalities to existing projects at any time.