Google fixes 2 annoying quirks in its voice assistant


“Today when people want to talk to any digital assistant, they’re thinking about two things: what do I want to get done, and how should I phrase my command to get it done,” says Subramanya. “I think that’s very unnatural. There is a huge cognitive burden when people talk to digital assistants; natural conversation is one way to relieve that burden.”

Making conversations with the Assistant more natural means improving its reference resolution, that is, its ability to link a phrase to a specific entity. For example, if you say “Set a timer for 10 minutes” and then say “Change it to 12 minutes,” the voice assistant needs to understand and resolve what you are referring to when you say “it.”
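As a rough illustration of the idea (this is a toy sketch, not Google's implementation, and the names `Timer`, `DialogueState`, and `handle_utterance` are hypothetical), a dialogue manager could resolve a follow-up like “Change it to 12 minutes” by keeping track of the most recently mentioned entity:

```python
# Toy sketch of reference resolution in a timer dialogue.
# Hypothetical names; this is not Google's production code.
import re
from dataclasses import dataclass


@dataclass
class Timer:
    minutes: int


class DialogueState:
    """Tracks the most recently mentioned entity so a pronoun can refer back to it."""

    def __init__(self):
        self.last_entity = None

    def handle_utterance(self, text: str) -> str:
        match = re.search(r"set a timer for (\d+) minutes", text, re.IGNORECASE)
        if match:
            self.last_entity = Timer(minutes=int(match.group(1)))
            return f"Timer set for {self.last_entity.minutes} minutes."

        match = re.search(r"change (it|that) to (\d+) minutes", text, re.IGNORECASE)
        if match and isinstance(self.last_entity, Timer):
            # "it" / "that" resolves to the timer created in the previous turn.
            self.last_entity.minutes = int(match.group(2))
            return f"Timer changed to {self.last_entity.minutes} minutes."

        return "Sorry, I didn't understand."


state = DialogueState()
print(state.handle_utterance("Set a timer for 10 minutes"))  # Timer set for 10 minutes.
print(state.handle_utterance("Change it to 12 minutes"))     # Timer changed to 12 minutes.
```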

The new NLU models are powered by machine learning technology, specifically Bidirectional Encoder Representations from Transformers, or BERT. Google unveiled the technique in 2018 and first applied it to Google Search. Earlier language-understanding technology processed each word in a sentence on its own, but BERT processes the relationships between all of the words in a sentence, dramatically improving its ability to identify context.
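To see what “contextual” means in practice, here is a minimal sketch using the open-source Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint (not the production models Google describes). A BERT encoder produces one vector per token, and each vector is conditioned on every other word in the sentence, so the same word gets a different representation in a different context:

```python
# Minimal sketch: contextual token embeddings from a pretrained BERT model.
# Uses the open-source Hugging Face "transformers" library, not Google's NLU stack.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "Set a timer for 10 minutes.",
    "The timer on the bomb was cut with ten minutes left.",
]

with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)
        # One vector per token, each conditioned on the whole sentence, so the
        # word "timer" gets a different representation in each context.
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
        idx = tokens.index("timer")
        print(text, outputs.last_hidden_state[0, idx, :4])
```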

An example of how BERT improved Search (as referenced here) is the query “parking on a hill with no curb.” Before, the results still featured hills with curbs. After BERT was enabled, Google Search surfaced a website that advised drivers to point the wheels toward the side of the road.

With BERT models now used for timers and alarms, Subramanya claims the Assistant can respond to related queries, like the adjustments mentioned above, with nearly 100 percent accuracy. But that superior contextual understanding isn’t available everywhere just yet; Google says it’s slowly working to bring the updated models to more tasks, like reminders and controlling smart home devices.

William Wang, director of UC Santa Barbara’s Natural Language Processing group, says Google’s improvements are drastic, especially since applying the BERT model to spoken-language understanding is “not a very easy thing to do.”

“In the whole field of natural language processing, after 2018, with Google introducing this BERT model, everything changed,” says Wang. “BERT actually understands what follows naturally from one sentence to another and what the relationship is between sentences. You’re learning a contextual representation of words, phrases, and also sentences, so compared with the work before 2018, it’s much more powerful.”

Most of these upgrades may be relegated to timers and alarms for now, but you will see a general improvement in the voice assistant’s ability to understand context. For example, if you ask it for the weather in New York and then follow up with questions like “What is the tallest building?” and “Who built it?”, the Assistant will keep providing answers, knowing which city you are referring to. This isn’t entirely new, but the update makes the Assistant even more adept at solving these contextual puzzles.

Teaching the Assistant names

Video: Google

The Assistant is now also better at understanding unique names. If you’ve tried calling or texting someone with an unusual name, there’s a good chance it took several tries or didn’t work at all, because Google Assistant didn’t know the correct pronunciation.
