Voice-first user interfaces are now mainstream in smartphones and smart speakers, as Alexa, Baidu’s DuerOS, Bixby, Cortana, Google Assistant, and Siri become indispensable helpers to millions. Now that people are accustomed to conversational assistants, demand is surging for the same responsiveness in cars, appliances, wearables, and more. All of these devices must function in challenging acoustic environments, understanding the user’s voice commands despite noise, loud music, or other voices in the background. The voice-activation frontend’s task is to deliver the user’s voice to the backend clearly and intelligibly, so that it can be processed and understood. Here’s a look at how it works.
Read the full article on EEWeb.