Microsoft hired human contractors to review conversations recorded on Xbox, according to US media reports, in yet another case that puts the lack of transparency and control over how large technology companies use personal data back on the front pages.
The scandal of humans listening to private conversations continues, and Microsoft is repeating the scenario already uncovered on Skype, where human reviewers recorded and listened to users' personal conversations captured through the communication app's translation service and through voice commands issued to the company's voice assistant, Cortana.
On Xbox, the human review of audio recordings began in 2014 through the Kinect peripheral and then, starting in 2016, through the Cortana virtual assistant. The case is more serious than the Skype one because it can involve millions of minors who use the console and to whom more stringent privacy guidelines are supposed to apply.
Although in principle the human reviewers should only have heard commands activated with «Xbox» and «Hey Cortana», they also heard private conversations that had nothing to do with the stated objective: improving the responses of the algorithms behind these voice-activated assistants and, more generally, of artificial intelligence systems based on machine learning. According to some contractors, the private conversations were recorded “accidentally”, something that seems to recur in all of these cases.
“We’ve long been clear that we collect voice data to improve voice-enabled services and that this data is sometimes reviewed by vendors,” a Microsoft spokesperson told Motherboard in a written statement. The problem (leaving aside the experts who believe personal recordings should never be reviewed at all) is that Microsoft had not made clear that the reviews were performed by human employees. The company has since done so, but only after the Skype case came to light.
The impression the case leaves is worrisome, because the review of Xbox conversations involves millions of minors. As with digital home assistants, we are placing devices in our own homes that behave like Trojan horses: they record your audio conversations (some also capture images and video), your arguments, your relationships, and everything that happens inside your own house. Even if only a small fraction of the recordings is reviewed, and even if in theory they are anonymized, these recorded conversations could include information that makes it easy to identify the user.
Even granting the technology companies “good faith”, all of this information is confidential and could be misused or sold without your permission. These recordings should simply remain totally private, with no human having access to them. Despite the companies' explanations, user distrust keeps growing, fueled by the lack of transparency and control in the processing of personal data and by the absence of measures that guarantee the right to privacy. On the user's side, the usual advice applies: do not give away your personal data; read all privacy policies carefully; manage permissions; monitor what these services do; and if they do not offer guarantees, simply do not use them. Today, such guarantees do not exist.
Apple, Google, and Facebook have suspended human review of recordings. It would not hurt for Microsoft to take the same route, in an exercise of self-restraint the industry badly needs.