If you have checked your email, visited a website or opened an app in the past few weeks, you've probably seen an updated privacy policy or terms of service agreement. What's going on?
Arizona State University's Nadya Bliss, director of the university's Global Security Initiative, is available to discuss what's behind the onslaught of updates.
Question: Inboxes around the world are full of notices about updated privacy policies. What’s going on?
Bliss: On May 25, the General Data Protection Regulation (GDPR) went into effect. The GDPR is Europe's new privacy regulation governing how businesses handle personal data. It requires that users provide consent before a company can use their personal data. One key requirement is that privacy policies, whether for Facebook, Twitter, Google or any other business that handles data, be simplified and made easier for everyday users to understand.
The GDPR also requires that users be able to access all of the data collected about them and to request that it be deleted. Since many applications operate across geographic boundaries, some U.S. companies have worked to address the GDPR's requirements.
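To make those two rights concrete, here is a minimal, hypothetical sketch of what a service's data-subject request handlers might look like. The function names and in-memory storage are invented for the example and are not prescribed by the GDPR.

```python
# Hypothetical sketch: handlers for the two rights described above,
# data access and erasure. The in-memory "database" and all names
# are invented for illustration.

user_data = {
    "alice": {"emails": ["..."], "browsing": ["..."], "purchases": ["..."]},
}

def export_all_data(user_id: str) -> dict:
    # Right of access: return everything held about the user.
    return user_data.get(user_id, {})

def erase_all_data(user_id: str) -> bool:
    # Right to erasure: remove the user's records on request.
    return user_data.pop(user_id, None) is not None

print(export_all_data("alice"))  # every stored category for alice
print(erase_all_data("alice"))   # True: records removed
print(export_all_data("alice"))  # {} afterward
```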
Q: Does this have anything to do with Cambridge Analytica?
A: Not directly. The GDPR had been in development by the European Union for a number of years to replace the previous data protection rules, and it was approved roughly two years before taking effect. However, both the massive collection of personal data by online platforms and the lack of transparency in how personal data is used, as occurred with Cambridge Analytica, have certainly contributed to the recognition, by policy communities and the general public alike, of the need for stronger privacy protections and guarantees.
Q: Something we learned from the Cambridge Analytica fiasco was that companies that aggregate data, like Facebook, aren’t transparent about how that data is used. Do these new policy updates fix that?
A: No. Or at least not as far as I can tell.
The GDPR does require that an application provide the collected data to the user upon request, and the informed-consent requirement implies that whoever else is processing the data must come out in the open. For example, if you are using Facebook in Europe and an ad-targeting company needs your data, that use will require your consent.
However, the data aggregation issue does not appear to be addressed, so the question becomes whether combinations of data streams will also require explicit consent. For example, you may have given consent for the collection and use of your browsing history, and separately for your recent online purchases, but you may not want that information processed together. It is not clear how that scenario will be handled.
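One way to make that open question concrete is a minimal, hypothetical sketch in which consent is tracked per processing purpose, and combining two already-consented streams counts as a distinct purpose needing its own grant. Nothing here is prescribed by the GDPR's text; the purpose names are invented.

```python
# Hypothetical sketch: consent tracked per processing purpose. The
# purpose names ("browsing_history", "purchases", "combined_profile")
# are illustrative, not categories defined by the GDPR.

class ConsentLedger:
    def __init__(self):
        self._granted = set()

    def grant(self, purpose: str) -> None:
        self._granted.add(purpose)

    def allows(self, purpose: str) -> bool:
        return purpose in self._granted

ledger = ConsentLedger()
ledger.grant("browsing_history")  # user consented to this stream
ledger.grant("purchases")         # and to this one

def build_combined_profile(ledger: ConsentLedger) -> None:
    # Combining streams is its own purpose: consent to each stream
    # alone does not imply consent to process them together.
    if not ledger.allows("combined_profile"):
        raise PermissionError("no consent for combined processing")
    print("building combined profile...")

try:
    build_combined_profile(ledger)  # raises: no combined-profile grant
except PermissionError as err:
    print(err)
```

The point of the sketch is only that consent to stream A plus consent to stream B need not add up to consent to process A joined with B.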
Additionally, the GDPR is a European Union policy, so there is no enforcement guarantee here in the U.S. Furthermore, situations are already emerging in which companies force users to consent in order to continue using their applications; that is certainly not the spirit in which the policy was developed. And of course, the business incentive structure has not changed: if a company is providing a service for free, your personal data is likely being monetized.
Q: What would you like to see happen next?
A: A few things. First, data collection and analytics on that data should be off by default. That means that basically any time I download an app, it should not collect or share any of my personal information unless I explicitly give it permission. Around the time of Mark Zuckerberg's congressional testimony, the CONSENT Act was introduced to address this. These are positive signs, but data collection and processing are still poorly regulated.
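As an illustration of what "nothing collected unless I explicitly give permission" could look like inside an app, here is a minimal, hypothetical sketch of an analytics wrapper that defaults to collecting nothing. It reflects the general idea, not the actual text of the CONSENT Act.

```python
# Hypothetical sketch: an analytics wrapper whose default is to
# collect nothing. Class and method names are invented.

class Analytics:
    def __init__(self, user_opted_in: bool = False):
        # The default matters: a fresh install collects nothing.
        self.user_opted_in = user_opted_in
        self._events = []

    def record(self, event: str) -> None:
        if not self.user_opted_in:
            return  # silently drop: nothing collected, nothing shared
        self._events.append(event)

analytics = Analytics()            # default state: opted out
analytics.record("app_opened")     # dropped
analytics.user_opted_in = True     # user explicitly grants permission
analytics.record("app_opened")     # now recorded
print(analytics._events)           # ['app_opened']
```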
Second, forced consent is not OK. There is a spectrum between privacy and convenience. It is somewhat inconvenient for me to check out as a guest on an online shopping site, since I have to enter my information multiple times, but it preserves a bit more of my privacy. Everyone's level of comfort with sharing information is different, and there should be policies that respect that and are transparent about it.
Finally, given the variety and complexity of the machine learning algorithms applied to collected data today, we desperately need more research and policy around algorithmic transparency. Many of the companies that collect our data use AI and machine learning algorithms that often provide a recommendation without any explanation of how that recommendation was generated. That introduces all sorts of vulnerabilities into these systems. This is an open research area, and it gives us the chance to develop policy at the same time as we develop the research. That is a tremendous opportunity.
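One deliberately simplified version of "a recommendation with an explanation" is to surface how much each input moved the score. The sketch below uses a toy linear scorer; real recommender systems are far more complex, and every feature name and weight here is invented.

```python
# Hypothetical sketch: a toy linear recommender that reports which
# inputs drove its score instead of returning a bare recommendation.
# Weights and feature names are invented for illustration.

WEIGHTS = {"watched_similar": 2.0, "friend_liked": 1.5, "trending": 0.3}

def recommend_with_explanation(features: dict) -> tuple:
    # Each feature's contribution to the score is weight * value.
    contributions = {
        name: WEIGHTS.get(name, 0.0) * value
        for name, value in features.items()
    }
    score = sum(contributions.values())
    # The "explanation": features ranked by how much they moved the score.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, explanation

score, why = recommend_with_explanation(
    {"watched_similar": 1.0, "friend_liked": 1.0, "trending": 0.5}
)
print(round(score, 2))  # 3.65
print(why)  # [('watched_similar', 2.0), ('friend_liked', 1.5), ('trending', 0.15)]
```

Even an explanation this crude changes the user's position: a ranked list of contributing factors can be audited and contested, while a bare score cannot.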