Google has announced innovations for many of its products, and of course the Google Assistant is among them. One of them concerns the new Wear OS app that Google announced some time ago: the launch of the next-gen Assistant is now finally official and dated. This summer, the faster Google Assistant for Wear OS smartwatches launches first on the Samsung Galaxy Watch 4.
Google wants to hear “Hey Google” less often
The new Look and Talk feature, also announced earlier, is not as widely available. For now it is limited to the Nest Hub Max, which unfortunately is not sold here. Look and Talk lets the Assistant recognize you via the front camera and actively wait for your voice commands, eliminating the need to preface each command with “Hey Google.”
“There’s a lot going on behind the scenes to detect whether you’re actually making eye contact with your device, rather than just glancing at it. In fact, six machine learning models are required to process more than 100 signals from the camera and microphone—such as proximity, head orientation, gaze direction, lip movement, context awareness, and intent classification—in real time.”
Quick phrases are also new to the Nest Hub Max; “Hey Google” is no longer necessary for these either. In Germany, the first few quick phrases are already available, for example on the Pixel 6 smartphones.
“We’re also extending quick phrases to Nest Hub Max, so you can skip saying ‘Hey Google’ on some of your most common daily tasks. So as soon as you walk through the door, you can just say, ‘Turn on the hallway lights’ or ‘Set a timer for 10 minutes.’”
Google Assistant better understands “um,” pauses, and corrections
There are further improvements to speech recognition in general. Google is working on making the Assistant better understand when a user pauses, for example, without having finished their voice command. Corrections and the perfectly natural “um” should also be filtered out more effectively.
“To do that, we’re building new, more powerful speech and language models that can understand the nuances of human speech — like when someone pauses but hasn’t finished speaking. And with the Tensor chip, specifically designed for super-fast on-device processing of machine learning tasks, we’re getting closer to the fluidity of real-time conversations.”