Voice assistants have a dark side, and we may be paying for their limited benefits with our privacy, says the technology writer Roisin Kiberd

A news story in the Guardian last week confirmed what many Apple users likely already suspected: Siri, Apple's voice assistant, has the power to record private conversations, and these audio clips aren't always just stored on a server; a number of samples are passed along to third-party human contractors who are paid to listen to them.

This isn't as simple as a voice assistant spying on its users: the report revealed that Apple's contractors listen to the clips as part of the company's quality control measures, working out whether Siri was triggered accidentally or on purpose, and whether its response was correct. This practice is not made explicit in Apple's customer-facing privacy documentation, and because Siri is so easily triggered in error (the sound of a zip, the whistleblower said, can often set it off), contractors end up overhearing private conversations, including drug deals, business meetings, sex and private medical appointments.

In one way this news is far from shocking: while Apple trades on the assertion that high-level security comes included with its products' high prices, it has always been clear that by using Siri, or any voice assistant, the user must allow their phone to record and analyse their voice. It's also worth comparing Apple's approach with that of similar products. With Google Assistant, the software powering Google Home, audio is recorded and stored, but you can access your history and delete past recordings, and there's an option to automatically delete your data every couple of months. Amazon's Alexa stores queries until the user manually deletes them, and both Amazon and Google employ contractors to review a small number of their recordings (Google has said in interviews that it generally provides third-party contractors with a text transcript rather than the original voice recording). Microsoft's Cortana collects voice data in order to improve its service, while Samsung's Bixby does the same, using a third-party service for speech-to-text conversion.

Q&A

What is AI?

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

Voice assistants are recording and listening to their users; what's new? But there's a subtler truth here worth considering: AI-powered intelligent assistants, lauded as efficient and effortless to use, are failing to answer even basic questions, and often activate accidentally at inappropriate times (well-known incidents include Siri interrupting the UK defence secretary during a speech on Syria in the House of Commons, gatecrashing a White House press briefing and contributing to a TV news broadcast). These products aren't even 100% automated: behind the gleaming, smooth-voiced interfaces are underpaid, overworked and resolutely human contractors. These are people who are precariously employed, often denied full employment rights and with little allegiance to the companies they work for, but hired to fill in the gaps in artificial intelligence. This is by far the most dystopian element of the story: in exchange for giving away our privacy to tech multinationals, what we get is the labour of stressed-out humans working behind the machines. Technology is no different from how fast fashion or fast food is produced: much of the heavy lifting is done out of sight, in sweatshops staffed by people.

Voice is hailed as the future of computing, including voice assistants, voice-recognition technology, ambient computing and the widespread use of smart speakers in the home. But voice is also the future of surveillance: earlier this year the Intercept revealed a nationwide database of voice samples collected in US prisons, while another story detailed the National Security Agency's voice-recognition systems, including a project called Voice RT (Voice in Real Time) that aimed to identify the voiceprint of any living person. Human rights activists have criticised the establishment of a voice biometric database in China, while the invention of deep voice software, a deepfake for voices, augurs ill for the future of voice-based privacy services.

We live in a time of constant technological change and it's likely that soon these services really will improve, and be fully automated. We can also take some solace in the fact that the Siri voice clips are at least anonymised, and generally last no more than several seconds. But this leak reveals that the qualities Apple uses to differentiate itself from its competitors are little more than hollow marketing, and that, as Apple's software is proprietary, we have no choice but to either engage with it on its own terms or avoid using its platform entirely.

We're told that with AI, the more we allow it to watch us, the more sophisticated the service will become, but it's worth remembering that the first duty of the companies developing it is to their shareholders. At the moment, we tolerate limitless surveillance in exchange for an extremely limited service. While there's still time (if there's still time), we need to consider what we gain and what we lose when we live with machines that mine us for information.

Roisin Kiberd writes about technology, culture and the intersection between the two

Source: http://www.theguardian.com/us