Defeating deepfakes with voice biometrics

AI‑generated voice deepfakes are the latest in a long list of emerging threats to the contact center. Protect your customers and your brand by fighting back with AI‑powered fraud prevention.

What is a voice deepfake?

A voice deepfake is a convincing reproduction of a real person’s voice, created using generative AI. Until recently, voice deepfakes were hard for fraudsters to execute—requiring extensive audio samples and specialized tools. But thanks to the rapid growth of generative AI, now nearly anyone can “clone” another person’s voice in a few minutes with a small amount of recorded audio.

All a fraudster has to do is type what they want the voice to say, and they can generate believable synthetic speech in real time. For contact centers in particular, this makes it harder to trust that the voice on the other end of the phone belongs to a real customer.

How does synthetic speech detection work?

Every human voice is unique and full of variation, shaped by vocal tract structure, age, emotion, and speech patterns: a whole live orchestra of sound. A digitally manufactured voice, by contrast, is like an instrument that plays one note.

To the human ear, it may be difficult to distinguish between a convincing voice deepfake and a live human. But the technology in voice biometrics analyzes millions of hidden characteristics in an audio signal to spot the small, telltale differences between a real person and a synthesized voice.
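To make the idea concrete, here is a deliberately simplified sketch of one signal-level cue a detector might use. It is not Nuance's algorithm; production engines analyze far richer feature sets. The toy detector below rests on the "one note" intuition from above: live speech keeps changing from moment to moment, while a naively synthesized signal stays spectrally uniform. The function names, threshold, and test signals are all illustrative assumptions.

```python
# Illustrative sketch only, not a production detector: it measures how much
# the short-time spectrum changes between frames, on the assumption that a
# live voice varies constantly while a naive synthetic signal does not.
import numpy as np

def spectral_variation(signal: np.ndarray, frame_len: int = 256) -> float:
    """Mean frame-to-frame change in the normalized magnitude spectrum."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    # Normalize each frame so loudness differences don't dominate the score.
    spectra /= spectra.sum(axis=1, keepdims=True) + 1e-12
    return float(np.mean(np.abs(np.diff(spectra, axis=0))))

def looks_synthetic(signal: np.ndarray, threshold: float = 1e-4) -> bool:
    """Flag audio whose spectrum barely changes over time (threshold is arbitrary)."""
    return spectral_variation(signal) < threshold

# Toy stand-ins for audio, sampled at a hypothetical 8192 Hz.
fs = 8192
t = np.arange(fs) / fs
# "Synthetic" stand-in: a steady tone whose spectrum is identical every frame.
steady = np.sin(2 * np.pi * 448 * t)
# "Live" stand-in: the pitch drifts and noise varies, so the spectrum keeps moving.
rng = np.random.default_rng(0)
lively = np.sin(2 * np.pi * (448 + 80 * np.sin(2 * np.pi * 3 * t)) * t)
lively = lively + 0.05 * rng.standard_normal(len(t))

print(looks_synthetic(steady))   # the unchanging tone trips the detector
print(looks_synthetic(lively))   # the varying signal passes
```

Real systems weigh many such cues at once (micro-variations in pitch, timing, breath, and channel artifacts), which is why a single measurement like this would be easy to fool on its own.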

Nuance Gatekeeper: Your best defense against AI voice deepfakes

As the threat of AI voice deepfakes rises, organizations need cutting-edge solutions that use their own advanced AI to detect these and other new forms of deception, rather than reverting to legacy methods like knowledge-based and two-factor authentication, which have proven easy to exploit.

For more than a decade, Nuance has been pioneering fraud and deepfake detection capabilities for the contact center, and we’re constantly enhancing our core algorithms to stay ahead of the latest threat vectors, including generative AI. In addition, our solution provides multiple layers of defense against synthetic speech attacks, going far beyond basic voice matching. To learn more, read our white paper, Defeating AI deepfakes in the contact center.

Use cases for synthetic speech detection

Learn how you can protect your contact center from emerging fraud threats, including AI voice deepfakes.