Welcome to Nuance Innovation Quarterly (also known as Nuance IQ), the new home of AI innovations from Nuance and beyond. Each quarter, we’ll be bringing you expert takes on the hottest AI topics, tales from the innovation frontlines, and inspirational stories from bold organizations taking conversational AI to the next level.
In this edition, we look at human-AI interaction, asking how humans can teach AI—and learn from it—so that both can support each other to perform better. We hope you enjoy it, and if there are any topics you think we should cover in future editions, let us know at email@example.com.
One more thing before you get started: Don’t forget to subscribe to our magazine, Nuance Innovation Quarterly, to get even more insights into the future of conversational AI.
Forrester states we have the strongest current offering in driving “mission-critical, enterprise-grade Conversational AI.” Our technology was evaluated against 13 companies, and the ranking was based on a diverse set of criteria, including AI, omnichannel, voice and speech, agent augmentation, human and AI blending, vertical specialization, security and authentication, vision, road map, and market approach.
Read the press release.
Tell people you work in artificial intelligence, and it’s likely they’ll look at you like you’ve just said you work for Skynet and you’re busy arranging the downfall of humankind. People are afraid of AI. Perhaps they’re not afraid of being wiped out by murderous robots, but they’re certainly afraid of being replaced by machines.
And on the other side of the coin, there are the big tech firms who’ll tell anyone who’ll listen that it’s all true—machines really can do anything we can do. (They’re wrong, by the way.)
Both these attitudes are based on a common misconception. While AI tech is good, and getting better all the time, there’s plenty it’s just not that good at and won’t be any time soon. The fact is, AI has a lot to learn from humans, but it can also teach us things we could never have learned on our own.
The reality of AI is that the domains where machines excel are relatively narrow and well defined. Task-specific systems are excellent at things like playing chess, converting phrases into actions the machine should take, and even driving cars. They can answer simple customer service inquiries, but even the most advanced deep learning neural networks are incapable of achieving common customer service goals like building brand loyalty, for example.
If even we humans haven’t figured language out yet (most linguists agree we’re still just scratching the surface), we can’t expect AI to figure it out on its own. AI can easily recognize natural language. The best AI can even recognize the intent behind the words. But no machine can recognize something like sarcasm—that takes human empathy and a native understanding of the nuances of conversation.
In a business context, humans and AI must work together to make both more effective. A recent survey found that companies that encourage and enable collaboration between humans and AI see their AI initiatives deliver significantly better business results across a range of financial and operational measures.¹
One part of this collaboration is the role humans play in the training and supervision of AI systems. AI models are far superior to humans at using vast quantities of data to quickly identify patterns and anomalies, and at recommending the best actions to achieve a defined outcome. What they’re not so good at are things like empathy, compassion, and emotional intelligence—areas where humans beat machines every time.
In the world of conversational AI, humans must train machines how to interact with humans, not just how to recognize unusual idioms and regional accents. AI algorithms must also be taught how to perform the tasks we need them to do, and to recognize when a task is beyond their capabilities, so it can be handed over to a human (with the AI learning from the actions taken).
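In practice, that handover often comes down to a confidence threshold: if the system isn’t sure enough about what the customer wants, it escalates to a person, and the human’s resolution can later be fed back into training. The sketch below is a minimal, hypothetical illustration of that pattern—the intents, threshold value, and `classify` stub are illustrative assumptions, not part of any specific Nuance product.

```python
# Hypothetical sketch: confidence-based handoff from a conversational AI
# system to a human agent. The intents, the 0.75 threshold, and the toy
# classify() function are assumptions made for illustration.

CONFIDENCE_THRESHOLD = 0.75  # below this, the AI defers to a human


def classify(utterance: str) -> tuple[str, float]:
    """Toy intent classifier returning (intent, confidence).

    A real system would use a trained natural-language-understanding model;
    here a lookup table stands in for one.
    """
    known = {
        "reset my password": ("password_reset", 0.92),
        "where is my order": ("order_status", 0.88),
    }
    return known.get(utterance.lower(), ("unknown", 0.30))


def handle(utterance: str) -> str:
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Hand the conversation to a human agent. The transcript and the
        # agent's resolution can later be labeled and used as training
        # data, so the model learns from the actions the human took.
        return f"escalate_to_human(utterance={utterance!r})"
    return f"run_automation(intent={intent!r})"
```

The key design point is that the threshold makes the system’s limits explicit: rather than guessing on low-confidence requests, it routes them to the place where empathy and judgment live.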
There are also emerging roles for humans in supervising AI, analyzing the conclusions AI models reach and approving the actions recommended by machines. As pioneers in AI technology, we all have a responsibility to ensure our models learn and behave in an ethical and sustainable way.
All this talk of training and supervising AI might make it seem as if the lessons only flow one way. But of course, AI models can teach us a huge amount and show us things we might never have seen without them.
With lots of data and a distinct outcome, a machine will produce astounding results. DeepMind’s AlphaGo Zero didn’t learn the ancient game of Go by playing against humans (it was simply taught the rules of Go and left to play against itself). By the time it got around to playing—and beating—the world’s best human players, it had invented strategies unseen in the game’s 2,500-year history.
It’s this ability of machines to find previously unseen ways to deliver a known outcome that’s led biopharmaceutical companies, for example, to invest in AI programs to accelerate drug discovery and unlock the secrets of treating previously untreatable diseases.
One thing unites all of history’s technological advancements—from the wheel to Alexa. They all provide tools to augment human capabilities, but none of them replace human ingenuity, creativity, and empathy.
Today, our AI tools can learn from us, and become better tools as a result. And we can learn from these tools, finding ways to do things differently: better, faster, simpler. Many of us will have to work differently and learn new skills to make the most of the AI opportunity, but our old, human skills will be invaluable to help machines support us, rather than replace us.
1. H. James Wilson and Paul R. Daugherty, “Collaborative Intelligence: Humans and AI Are Joining Forces,” Harvard Business Review, July–August 2018. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces