By now, most people are vaguely if not acutely aware of the prevalence of algorithmic profiling in our daily lives. But say we wanted to quantify that prevalence: how many third-party automated decisions might the average individual be subject to in a single day?

Just sticking to the most popular activities on the web, an algorithm:

  • Filters whose posts we see on Facebook and with what frequency.
  • Customizes our Google search results.
  • Optimizes our travel experience on an individual GPS or external ridesharing app.
  • Recommends products on Amazon and movies on Netflix.
  • Tailors advertisements on the majority of free-access sites.
  • Determines our credit scores.

At the very minimum, we can identify seven unique algorithms that shape our online (and offline) experiences. On the surface, these algorithms seem relatively innocuous, if not beneficial, aids that maximize relevance and accuracy. Yet the pendulum of popular discourse has recently swung in the opposite direction: algorithms have been accused of helping Trump win the American presidency[1] and of increasing economic inequality.[2] They’ve been indicted for creating a techno-dystopia in which department stores know you are pregnant before you do,[3] where your browser ads think you are a criminal because you have an ‘African American’ name,[4] and where AI picks up on and perpetuates racist or sexist stereotypes and prejudices.[5]

Where the GDPR (Purportedly) Comes to the Rescue

The GDPR is the first piece of legislation to explicitly define profiling and situate algorithmic decision-making in the context of fundamental rights. Article 4(4) defines profiling as “any form of automated processing” that uses personal data “to evaluate certain personal aspects relating to a natural person”, such as personal preferences, behaviour, work performance, and so forth. It will be important for later discussion to notice a curious scope limitation in this terminology: profiling under the GDPR covers only automated processing, so decisions made by humans fall outside the definition.

The first obvious solution to a great number of issues in algorithmic discrimination is to get rid of categories that “shouldn’t” be used to judge us. Princeton professor Ed Felten divides these grounds (among others) into unjust parameters (sex, race, ethnicity, sexual or political orientation) and unreasonable parameters (correlations that simply don’t make sense, such as using sock color to predict recidivism).[vi]

The GDPR arguably addresses both grounds. To tackle unjust parameters, Article 9(1) prima facie prohibits the processing of “special categories” of personal data, which range from racial or ethnic origin to political opinions to health data and beyond. Some scholars even contend that since the GDPR requires data controllers to use appropriate “technical and organizational measures” to prevent “discriminatory effects” (Recital 71), this requirement de facto necessitates the prohibition of any proxy variables for these “special categories”.[vii] While there are ten general exceptions to processing special category data, Article 22(4) whittles the number down to two in the case of profiling: explicit consent (Art. 9(2)(a)) and “substantial public interest” (Art. 9(2)(g)). Restrictions on using these metrics to make decisions have existed in most countries’ laws for decades, but the GDPR is unique in that it raises the monetary repercussions well beyond those of earlier regimes and explicitly addresses algorithmic profiling.
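To make the proxy-variable worry concrete, below is a minimal sketch, in Python, of the kind of internal audit a data controller might run: it flags “neutral” features that correlate strongly with a special-category attribute. The dataset, column names, and the 0.5 threshold are all hypothetical and chosen purely for illustration; nothing here is prescribed by the GDPR or by the cited papers.

import pandas as pd

def proxy_report(df: pd.DataFrame, sensitive: str, threshold: float = 0.5) -> dict:
    """Return numeric features whose correlation with any level of the sensitive attribute exceeds the threshold."""
    # One-hot encode the sensitive attribute so each category gets a numeric column.
    sensitive_dummies = pd.get_dummies(df[sensitive], prefix=sensitive).astype(float)
    candidates = df.drop(columns=[sensitive]).select_dtypes("number")
    flagged = {}
    for col in candidates.columns:
        # Strongest absolute correlation between this feature and any category level.
        corr = sensitive_dummies.corrwith(candidates[col]).abs().max()
        if corr >= threshold:
            flagged[col] = round(float(corr), 2)
    return flagged

# Toy, fabricated data purely for illustration.
df = pd.DataFrame({
    "ethnicity": ["a", "a", "b", "b", "a", "b"],
    "postcode_score": [1.0, 0.9, 0.1, 0.2, 0.95, 0.15],  # behaves like a strong proxy
    "income": [40, 55, 42, 58, 47, 51],                   # behaves like a weak proxy
})
print(proxy_report(df, "ethnicity"))  # expected to flag "postcode_score" only

A real audit would use more robust association measures (and handle categorical features); simple correlation is used here only to keep the sketch short.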

The existence of a GDPR solution to the problem of unreasonable parameters is more debated. Article 13(2)(f) requires that when you use automated decision-making, you give the data subject “meaningful information about the logic involved”, as well as the significance and envisaged consequences of that processing for the data subject. The rationale is that if, for example, a person discovered their credit score was actually influenced by the food brands they bought at a supermarket, the person could potentially sue to have that “unreasonable” metric removed. Some scholars[viii] believe that the text raises the bar beyond current transparency requirements and gives people a right to know exactly how these black boxes work. Other scholars[ix] argue that the ambiguity in phrasing offers little more than a limited “right to be informed” about the general logic involved in the decision-making process. We won’t know for sure until a court case arises or the Article 29 Working Party sheds light on this ambiguity, but I highly doubt that we’ll be finding out much more about Google’s PageRank algorithm or FICO’s credit scoring method any time soon.
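To illustrate what a stronger reading of “meaningful information about the logic involved” could look like in practice, here is a toy sketch of a linear scoring model that reports each feature’s contribution and the envisaged consequence of the decision. The weights, feature names, and threshold are entirely invented; no real credit scorer necessarily works, or would disclose its workings, this way.

# Illustrative only: a toy linear "credit" score whose per-feature contributions
# could be surfaced to a data subject as "meaningful information about the logic involved".
# Weights, feature names, and the threshold are entirely made up.
WEIGHTS = {"payment_history": 0.55, "utilisation": -0.30, "account_age_years": 0.15}
THRESHOLD = 0.5  # hypothetical approval cut-off

def explain_decision(applicant: dict) -> dict:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "score": round(score, 3),
        "decision": "approve" if score >= THRESHOLD else "deny",  # envisaged consequence
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

print(explain_decision({"payment_history": 0.9, "utilisation": 0.4, "account_age_years": 0.5}))
# -> score 0.45, decision "deny", with utilisation the only negative contribution

Whether controllers must disclose this level of detail, or merely the general logic, is exactly the scholarly dispute described above.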

Potential Corporate Compliance Issues

If you are a company that engages in automated decision-making and wants as little legal headache or operational change as possible, what might you do upon reading the GDPR’s profiling restrictions? Here are three ways to nominally comply with the GDPR while effectively evading it:

  1. Option 1: Consent Spam.
  2. Option 2: Anonymize.
  3. Option 3: Add a human-in-the-loop.

Why these could be problems:

  1. Consent is a fixture in privacy regulation and often fails to protect privacy.[x] The reason is that people are often too biased towards short-term rewards (i.e. the services offered by websites) and too oblivious to the cumulative impact of thousands of tiny revelatory data points.[xi] Even worse, these biases only kick in for the 0.07% of people who actually read the T&Cs before clicking consent.[xii] One of the great things about the GDPR is that it aims to overhaul the obfuscation in T&Cs, but people will still likely fall prey to their own biases and predilections no matter how clear the Privacy Notice is.
  2. Anonymization sounds like the ideal, most privacy-protective solution, but it too can raise issues. Most companies will anonymize their variables before processing so that they need not remove them later or amass large datasets of sensitive information. Yet studies warn that removing these variables ex ante hampers one’s ability to identify and remove proxy variables for these special category metrics.[xiii] Not only that, but the accuracy of the algorithm itself will often be severely reduced, raising an unpleasant utility-fairness trade-off.[xiv]
  3. Adding a human-in-the-loop would allow a company to claim that it doesn’t engage in profiling and thus need not implement the additional safeguards (see above). Data Protection Authorities will be hard-pressed to determine whether the human is actually helping make decisions or simply rubber-stamping the algorithm’s outputs to evade compliance with any profiling or “right to explanation” requirements; a simple check for this kind of rubber-stamping is sketched below. Moreover, if humans do meaningfully insert themselves into the decision process, there is the further question of whether human judgment might increase discrimination rather than reduce it.
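As a rough illustration of the rubber-stamp concern in Option 3, a regulator (or the controller itself) might look at how often the human reviewer actually departs from the algorithm’s recommendation. The decision log, threshold, and numbers below are fabricated for the sketch; a near-zero override rate would not prove anything on its own, but it would at least invite questions about whether the human involvement is meaningful.

from typing import Iterable, Tuple

def override_rate(decisions: Iterable[Tuple[str, str]]) -> float:
    """decisions: pairs of (algorithm_recommendation, human_final_decision)."""
    decisions = list(decisions)
    if not decisions:
        return 0.0
    overrides = sum(1 for algo, human in decisions if algo != human)
    return overrides / len(decisions)

log = [("deny", "deny")] * 498 + [("deny", "approve")] * 2  # fabricated decision log
rate = override_rate(log)
print(f"override rate: {rate:.1%}")  # 0.4% -- plausibly a rubber stamp
if rate < 0.05:  # arbitrary illustrative threshold
    print("flag for review: human involvement may not be meaningful")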

All in all, it’s too soon to tell whether the GDPR’s influence on algorithmic profiling will be more bark than bite. In part, that will depend on whether the coming months bring top-down clarification of certain ambiguities. The other part depends on how companies react. Transparent, fair, and accurate algorithms are a nice aspiration, but only time will tell whether the GDPR makes significant progress at serving – and balancing – these ideals.

ENDNOTES

[1] Parmy Olson, “How Facebook Helped Donald Trump Become President.” Forbes (Nov. 9, 2016). Available at: https://www.forbes.com/sites/parmyolson/2016/11/09/how-facebook-helped-donald-trump-become-president/#62574ed59c52

[2] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers (2016).

[3] Jordan Ellenberg, “What’s Even Creepier Than Target Guessing That You’re Pregnant?” Slate (June 9, 2014).

[4] Latanya Sweeney, “Discrimination in Online Ad Delivery”. Communications of the ACM, Vol. 56:5 (2013).

[5] Hannah Devlin, “AI Programs Exhibit Racist and Sexist Biases, Research Reveals.” The Guardian (April 13, 2017). Available at: https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals.

[vi] Ed Felten, “What does it mean to ask for an ‘explainable’ algorithm?” Freedom to Tinker (May 31, 2017). Available at: http://freedom-to-tinker.com/2017/05/31/what-does-it-mean-to-ask-for-an-explainable-algorithm/.

[vii] Bryce Goodman & Seth Flaxman, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’” (2016). Available at: http://arxiv.org/abs/1606.08813.

[viii] E.g. Goodman & Flaxman (2016).

[ix] E.g. Sandra Wachter, Brent Mittelstadt, Luciano Floridi, “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation”. International Data Privacy Law, forthcoming (2017). Available at: https://ssrn.com/abstract=2903469.

[x] Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma”. Harvard Law Review, Vol. 126 (2013), 2-4.

[xi] Ibid., 6.

[xii] Andy Greenberg, “Who Reads The Fine Print Online? Less Than One Person In 1000”. Forbes (April 8, 2010).

[xiii] Joshua Kroll et al., “Accountable Algorithms.” Univ. Penn. L. Rev., Vol. 165 (2017).

[xiv] Solon Barocas & Andrew Selbst, “Big Data’s Disparate Impact”. California Law Review, Vol. 104 (2016), 21-22.
