Though artificial intelligence has advanced rapidly in the past few years, its capabilities remain limited. To compensate, companies hire people to improve the AI’s performance by manually reviewing the data it receives. However, this can get tricky because the data has to be collected from consumers who in many cases didn’t suspect anyone would be listening.
Consumers were outraged when these stories broke in the news. Some contractors admitted they often listened to people’s intimate or private conversations. But now that we know this is happening, is it really so bad? As we rely more on the services technology provides, such as smart assistants and audio transcriptions, is giving up privacy just a sacrifice we have to make?
Hey, Stop Listening to Me
For our Smart Home Resources guide back in April, we conducted a survey and found that 44% of respondents used their smart home devices multiple times a day and more than half were motivated to automate their homes for convenience and better organization.
To provide this convenience, many smart home devices, like Amazon Alexa and Google Home, use “wake words” to operate. Simply say the wake word (like “Alexa” or “OK Google”) and the device will listen to (record) your commands and send them to a server to be processed. Then, artificial intelligence steps in to respond to those commands. Other smart assistants, such as Apple’s Siri and Microsoft’s Cortana, work in a similar way.
But this functionality has a downside: the device is constantly listening for its wake word and can occasionally record if it mistakenly hears the wake word. To fix this, many companies hire human contractors to moderate false positives and help improve the AI. However, these accidental recordings reportedly include drug deals, medical phone calls, and people having sex.
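For readers curious about the mechanics, the wake-word flow described above can be sketched in a few lines of code. This is a simplified illustration, not any vendor's actual implementation: the function names are hypothetical, the "audio" is plain text, and the detector is a substring check standing in for the small on-device model a real assistant would use.

```python
WAKE_WORD = "alexa"

def detect_wake_word(frame: str) -> bool:
    """Stand-in for an on-device keyword detector. A real device runs a
    small local model here, which is exactly where mishearings occur."""
    return WAKE_WORD in frame.lower()

def send_to_server(command: str) -> str:
    """Stand-in for uploading the recorded command for cloud processing --
    the step where audio leaves the home and humans may later review it."""
    return f"processed: {command}"

def device_loop(audio_frames):
    """The device constantly 'listens' for the wake word; only after a
    detection (correct or mistaken) does it record and upload what follows."""
    responses = []
    frames = iter(audio_frames)
    for frame in frames:
        if detect_wake_word(frame):         # always listening for the trigger
            command = next(frames, "")      # record the audio that follows
            responses.append(send_to_server(command))
    return responses

print(device_loop(["chatter", "Alexa", "play some music"]))
# → ['processed: play some music']
```

In a real assistant, a phrase that merely sounds like the wake word can trip the detector, and the same upload path runs on whatever audio follows — which is how the accidental recordings described above end up on company servers.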
Companies insist the data is randomized before the contractors listen to it and can’t be traced back to any individuals. However, according to Briana Brownell, founder and CEO of Pure Strategy Inc. and a Canadian member of the International Organization for Standardization’s ISO/IEC JTC 1 standards committee on artificial intelligence, “since there are no standards in place for what ‘anonymized’ means, it makes sense to be skeptical at this point.”
Companies have been reluctant to own up to listening in. A few weeks before the news broke about Apple’s human contractors, Reviews.com published a how-to guide for opting out of Siri’s tracking features. At the time, we asked Apple if Siri might listen and record without prompting. The response was a simple “It doesn’t.” Apple declined to comment about recent headlines that seemingly contradict the company’s response to us in July.
Hey, Stop Tracking Me
This isn’t the first time there’s been a sweep of privacy violation accusations directed at tech companies. In 2011, location services came under scrutiny when researchers released a report revealing Apple was recording users’ location data in a log stored locally on iOS devices.
A few days later, two customers sued Apple over this in a class action lawsuit that ended with a small settlement. Soon after, Congress questioned representatives from Apple and Google about their practice of sharing location data with third-party companies. The Federal Communications Commission and the Federal Trade Commission conducted a public forum on location services. The Pew Research Center even released a study indicating location services weren’t going anywhere.
Looking back at these stories (and the public reaction to them) eight years later, it feels like we demonized a service we’re now fully accepting of and dependent on. Location services are an omnipresent and convenient part of our digital lives. They help us get to where we need to go, learn about where we are, and organize our lives. Our use of location services hasn’t declined since Pew’s 2011 study. Instead, our adoption and tolerance of these services have only grown.
Of course, hindsight is 20/20 and this comparison isn’t perfect. In the case of location services, virtually all consumers were affected. In contrast, tech companies have insisted that only a small portion of data has been subjected to human moderators. But to consumers, the possibility of being in that portion, however small it is, can be a cause for concern.
Hey, Just Let Me Know First, OK?
Now that we know tech companies are hiring humans to listen to our data, what’s next? Some have vowed the practice will stop, but what does that mean for the future of AI?
Beyond human moderators, the more serious transgression may be the lack of disclosure from tech companies. According to Ray Walsh, digital privacy expert at ProPrivacy.com, “while perfecting AI assistants may be considered a positive for consumers in the long run, the problem is that consumers have not been directly informed that this is happening.”
When we let a piece of technology into our homes and around our families, there’s an unspoken trust between us and the company we bought it from. We trust the product to entertain us and make our lives easier. We trust that every time we say the wake word, the product will hear us and respond. And we trust that our privacy is protected, especially when we’re at home.
However, it can be argued that Amazon, Google, Apple, Facebook, and Microsoft violated this trust when they didn’t make it clear other humans might be listening to us. Though their intentions were to improve the services they provide, the public response indicates they might’ve lost sight of their consumers’ values, or at least diminished them past the point of comfort for their users.
Brownell hopes expanding regulations coupled with consumers’ increased awareness will help change the conversation around data privacy. “I think that there will definitely be some companies that will not want to join the new age of being more careful with people’s data. But I think that in a lot of cases, [companies] are realizing that it’s an important thing for their customers, and if it’s an important thing for their customers, then it makes sense for the companies to be on board with the changing market.”
Artificial intelligence cannot learn if we do not give it datasets to learn from. And the technology has yet to reach the point where human intervention isn’t necessary to catch and correct mistakes, such as accidental recording.
“Until consumers are willing to ditch services that require them to provide consent for data to be processed, it will be hard for consumers to avoid some level of data harvesting,” Walsh said.
At least temporarily, these privacy revelations strain the trust many users have in their smart assistants and the companies behind them. If the collective response to location tracking in 2011 is any indicator, this could serve as a lesson in privacy for consumers and companies alike. “There’s always going to be a need for having a human in the loop,” Brownell said. “I think that we’re starting to get smarter about what the risks are and what the harms can be.”
Apple and Facebook declined to comment for this story. Amazon, Google, and Microsoft did not respond to requests for comment.