If you’re a (Google) Nest Secure user, you may remember Google’s apology for neglecting to list the Guard’s embedded microphone on its spec sheet. The built-in Assistant is technically an opt-in feature, but you’ll need to enable it if you want a customized experience. And this wasn’t the first, or the last, time Google’s handling of users’ information prompted a policy change or public statement. More recently, Belgian broadcaster VRT NWS revealed that Google pays contractors to transcribe audio queries made to Google Assistant.
This instance might prompt a little déjà vu, especially if you heard about Amazon partaking in similar annotation activities with its Alexa devices. Knowing there’s a possibility that human ears can glean information from your home — or more importantly, private spaces like bedrooms and bathrooms — might cause concern. It also raises important questions: If you use the Google Nest Guard to arm your security system as you walk out the door, will Google Assistant annotators hear? To what extent can these transcribers intervene if they sense distress — or medical emergencies — through these recordings? As far as we can tell, Google only mentions “Assistant” queries without further specification, so it’s fair to assume the Google Nest Guard hub falls under this category.
Google says these snippets technically aren’t linked to your account, but is that really cause for comfort? Jeremy Gillula, tech projects director at the Electronic Frontier Foundation, told us he was “dismayed” by the language in Google’s blog post regarding account association with user audio recordings. Google’s statement that “audio snippets are not associated with user accounts as part of the review process,” according to Gillula, is not the same thing as saying “once an audio snippet is sent to the review process, it can never be re-associated with that user’s account.”
Gillula says: “Google could design their system so that once a snippet gets selected for review, it’s impossible to re-associate with a user account, but they haven’t said that they’ve done that. So I think it’s entirely possible that in the future, Google could decide that their annotators should watch for signs of distress that can be reported to authorities, regardless of any legal obligations.”
Voice Assistants Walk a Thin Line Between Security and Privacy
By inviting a voice assistant into your home and allowing it to serve as the central command center for your home security system, you’re probably aware that you’re relinquishing personal data. But that might not stop you from buying big tech’s cameras, sensors, and other security devices. In fact, a survey by Consumers International and the Internet Society shows that 75% of respondents don’t trust the way these companies share their information, despite the fact that nearly 70% also own said devices.
This might be a classic case of the privacy paradox. But it might also be a mere byproduct of a growing aversion to other options, ones that may not provide as much user freedom. For example, Google Nest offers more user autonomy in home security: it’s free of contracts, installation fees, and murky pricing and product displays. Trends in DIY home security suggest that consumers enjoy those freedoms: Nearly 60% of U.S. households with intentions to purchase a security system want to monitor it themselves through their mobile phones, and 52% plan to install systems themselves, according to Parks Associates.
Big Data, Weak Disclosures
By opting for a home security company with a business model that fuels its fire from your personal data, you might be forgoing lengthy, expensive contracts in the process. Transparency regarding your data, according to some of the experts we consulted, might be the cost. “Certainly, the incentives for disclosures are pretty weak,” Gillula told us.
Google says the Assistant “stores data about your interactions with the Google Assistant on its servers, which reside in its data centers.” This information could include when you wake up and go to bed, or when you turn off the lights and lock the doors at night, all tracked through voice assistant queries. What’s not as well known, however, are the specific instances that trigger human transcription on Google’s end, how that information is transcribed, and how long Google keeps it.
“In terms of transparency, Google (and Amazon and any other home assistant which makes use of human annotators) should make it explicitly clear in their advertising and in the product packaging that as part of this service, Google employees or contractors may review audio recorded by the device. Anything less than that is being dishonest with their users,” Gillula said.
“OK Google” may not be the only trigger.
VRT NWS said that of the 1,000 leaked excerpts it listened to, 153 were of “conversations that should never have been recorded and during which the command ‘Okay Google’ was clearly not given.” Google did not respond to our inquiries for this particular story, but David Monsees, project manager of search at Google, posted a statement on The Keyword regarding the topic. Monsees called the leak a violation of Google’s “data security policies” and said the company is “conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”
Monsees said the transcription efforts were intended to better develop speech technology for more languages and to understand the nuances and accents of different languages. Only 0.2% of recordings are annotated by these employees, according to Google, and the employees are also instructed “not to transcribe background conversations or other noises, and only transcribe snippets that are directed to Google.” Unintended recordings, known as “false accepts,” occur when the device is triggered by sounds similar to “OK Google.” But if you’re concerned, you can always delete your voice recordings or opt out of using the Assistant completely.
Yet Google remains adamant that “hotword detection” is really the only way to know what you’re saying will be sent to Google’s servers (and potentially transcribed by a human). “When these devices were introduced, we were led to believe that what is now happening was impossible and even if it were to happen, due to a technical error, Google would recognise it as a problem and immediately rectify it,” said Jake Cowton, co-founder of Fair Custodian, in a statement to Reviews.com. “To now learn that not only was the device able to record without the ‘trigger’ phrase being said, but also that those recordings were intentionally done to be manually annotated by contractors is a gross breach of trust.”
The Bottom Line
While Google admits it shares some Assistant interactions with advertisers, it also says it’ll keep voice recordings and video footage from home security (Google Nest) devices separate from advertising. But it has changed similar declarations in the past. How, if at all, the 0.2% (of recordings that are annotated) plays into this equation is still uncertain, as is the intersection between human transcribers and Google Nest Secure inquiries. We plan to keep you updated as the story develops.
- While we may never know exactly what Google’s doing behind its closed doors, we’ve outlined key steps you can take to limit the collection of your personal data.
- Wondering what you need to know before getting a voice assistant? We’ve got some recommendations.
- You can also check out what you should know before agreeing to terms.