Tuning EU equality law to algorithmic discrimination: Three pathways to resilience

Algorithmic discrimination poses an increased risk to the legal principle of equality. Scholarly accounts of this challenge are emerging in the context of EU equality law, but the question of the resilience of the legal framework has not yet been addressed in depth. Exploring three central incompatibilities between the conceptual map of EU equality law and algorithmic discrimination, this article investigates how purposively revisiting selected conceptual and doctrinal tenets of EU non-discrimination law offers pathways towards enhancing its effectiveness and resilience. First, I argue that predictive analytics are likely to give rise to intersectional forms of discrimination, which challenge the unidimensional understanding of discrimination prevalent in EU law. Second, I show how proxy discrimination in the context of machine learning questions the grammar of EU non-discrimination law. Finally, I address the risk that new patterns of systemic discrimination emerge in the algorithmic society. Throughout the article, I show that looking at the margins of the conceptual and doctrinal map of EU equality law offers several pathways to tackling algorithmic discrimination. This exercise is particularly important with a view to securing a technology-neutral legal framework robust enough to provide an effective remedy to algorithmic threats to fundamental rights.


Introduction
In 2019, researchers based in the US conducted a revealing experiment on optimization in online advertising: they created job ads and asked Facebook to distribute them among users, targeting the same audience. The results? In extreme cases, cashier positions in supermarkets ended up being distributed to an 85% female audience, taxi driver positions to a 75% black audience and lumberjack positions to an audience that was 90% male and 72% white. 1 The discriminatory potential of such distribution of information on professional opportunities is obvious. At the structural level, such targeting not only perpetuates gender and racial segregation within the labour market by ascribing stereotypical affinities to certain protected groups, but it also directly affects individuals by shaping their own professional horizons through exposure to, or exclusion from, information.
Behind this and other types of online profiling are various types of Artificial Intelligence (AI), among which machine-learning algorithms feature most prominently. While AI doubtlessly offers increased opportunities in many areas, it is now well established that it also poses enhanced risks of discrimination. 2 'Algorithmic bias' has been the subject of a growing strand of literature in various disciplines such as computer science, ethics, the social sciences and law. In legal research, commentators have pondered the ability of various non-discrimination law frameworks to address data-driven discrimination. 3 Although a majority of those scholarly contributions focus on the US, gaps and weaknesses have also been described as problematic in the realm of EU non-discrimination law. 4 A consensus is emerging on the fact that algorithmically induced discrimination poses challenges to non-discrimination law. Yet, the extent to which the legal framework in place can adequately address these challenges and effectively redress ensuing discriminatory harms is less clear. In view of the rapid evolution of technology, there is value in exploring the technology neutrality of the legal framework in place, that is, whether it is robust and flexible enough to tackle multifaceted and evolving challenges. 5 Hence, the question of the resilience of the EU non-discrimination law framework must be posed.
This article explores how the problem of algorithmic discrimination destabilizes some of the core conceptual paradigms of EU non-discrimination law. It argues that the forms of discrimination produced by algorithmic applications differ in several ways from the human types of discrimination to which the conceptual map of EU non-discrimination law is tailored. The forms and realities of algorithmic discrimination only overlap with the legal yardsticks of equality and non-discrimination to a certain extent. In particular, this article examines three central incompatibilities between the conceptual map of EU equality law and algorithmically induced forms of discrimination. These relate, first, to the conceptualisation of non-discrimination law around discrete protected grounds in contrast to the composite and complex nature of algorithmic classifications. A further incompatibility pertains to the causal nature of the link between discrimination and given protected categories compared to the reliance of machine learning algorithms on statistical inferences and correlations. Finally, the exhaustive list of protected grounds featured in EU non-discrimination law poses a further compatibility issue in light of the dynamic nature of algorithmic classifications and the risk that new patterns of discrimination emerge. The grammar of EU non-discrimination law thus provides an uneasy fit for algorithmic types of discrimination. As a result, the problem of algorithmic discrimination sharpens existing tensions in the conceptual corpus of EU equality law.
Important divergences therefore exist, which question the ability of the legal framework in place to capture data-driven discrimination. This article examines how EU equality law can respond to these challenges. Postulating that the problem of algorithmic discrimination decreases the relevance of certain 'traditional' legal categories, it argues that effective solutions can be found in purposively revisiting and re-centring selected conceptual and doctrinal elements that are currently peripheral in EU non-discrimination law. The next sections investigate three key legal pathways to resilience that would contribute to EU equality law's capacity to effectively redress algorithmic discrimination. First, I argue that algorithmically induced discrimination challenges the unidimensional understanding of discrimination prevalent in EU law. While EU law has largely overlooked the problem of intersectional discrimination so far, the concept of 'multiple discrimination' recognized in the recitals of the equality directives could offer a valuable remedy. Second, I show how proxy-based forms of discrimination, which are likely to arise in the context of machine learning, question the boundaries of protected grounds. Yet, pathways to resilience can be found in alternative - structural rather than essentialist - readings of protected grounds understood as vehicles capturing the production of social disadvantage. Finally, I address the question of the performativity of algorithmic discrimination, that is, the risk that new patterns of discrimination emerge based on predictive analytics, algorithmic profiling and decision-making. There I turn to Article 21 of the Charter to find potential responses. Throughout the article, I show that looking at the margins of the current conceptual and doctrinal map of EU equality law offers several alternative pathways to tackling the distinctive forms of algorithmic discrimination. This exercise is particularly important in view of securing a technology-neutral legal framework robust enough to provide an effective remedy to algorithmic threats to fundamental rights.

Algorithms, intersectionality and multiple discrimination: A route to redress in EU law
A first fundamental incompatibility between algorithmic discrimination and the grammar of non-discrimination law relates to the problem of intersectionality. Inequalities are complex and multidimensional, and so are phenomena of discrimination, a reality which is reflected in the data that feeds algorithms. This social complexity clashes with the organization of non-discrimination law around single, distinct protected grounds, for instance gender, race, age, disability, sexual orientation or religious beliefs. Critical legal theory scholars have widely recognized the mismatch between the prevailing 'single-axis' model of non-discrimination law and the complex and compounded nature of inequalities and discrimination. 6 The foundational work of Crenshaw and Hill Collins has for instance demonstrated how black women, who are the victims of 'intersecting' or 'interlocking' disadvantage based on race and gender, escape the protective grasp of US non-discrimination law because of its inability to capture complex experiences of discrimination. 7 In the context of EU law, it has long been recognized that the non-discrimination legal regime - or at least the interpretation made thereof by the Court of Justice - proves insufficient with regard to the problem of intersectional discrimination. 8

A. Algorithmic profiling, predictive analytics and intersectional discrimination
Intersectional discrimination, already pervasive in the analogue world, 9 might become even more pervasive in the context of machine learning algorithms and AI given the scale and speed of algorithmic decision-making. 10 The complex and synergetic inequalities that structure the social fabric inevitably shape the data that drives predictive analytics and machine learning. Since data is the product of society, it logically mirrors existing social hierarchies. 11 These power relations are not unidimensional; rather, the social matrix is made of complex and interlocking networks of privilege and oppression attached to social groups and identities. 12 Machine learning algorithms risk reproducing these discriminatory patterns through assimilating training data that is structured along intersecting axes of inequality. The algorithmic processing of this data for predictive purposes would thus inevitably lead to the reproduction, and even amplification, of intersectional forms of machine bias and hence to intersectional data-driven discrimination. Concrete examples show how 'feedback loops' or 'redundant encoding' perpetuate and reinforce intersectional discrimination. 13 Buolamwini and Gebru have for instance shown how commercial face recognition algorithms underperform in recognising black women's faces. 14 Their study reveals that error rates concerning gender prediction range between 20.8% and 34.7% for this group, whereas gender 'classification is 8.1%-20.6% worse on female than male subjects and 11.8%-19.2% worse on darker than lighter subjects'. 15 These statistics illustrate the existence of specific intersectional forms of algorithmic discrimination which can be traced to the underrepresentation of intersectional minorities in datasets used for training purposes. In turn, this representation problem can be linked back to more general participation and visibility issues at the societal level. In the same vein, Noble exposes how algorithmic misrepresentation particularly impacts intersectionally situated groups. 16 Performing a Google search for images of 'black girls', she shows how Google's search algorithms reinforce intersectional gender- and race-based harmful stereotypes and objectification. 17 In addition to intersectional feedback effects, risks of intersectional discrimination become particularly acute with the ability of data mining and profiling technologies to combine granular data for the purpose of refined targeting. As Hoffmann points out, intersectional discrimination may arise from 'interactions between labels in a system'. 18 Profiling, for instance in advertising, might be based on very precise identity data, classifying users into distinct subgroups directly related to, or correlated with, protected categories such as age, gender or ethnic origin. For example, advertising could in principle target women in a certain age category residing in a specific geographical area, which could in turn be a proxy for a given religious or ethnic background. 19 Algorithmic profiling, when embedded into decisions about the allocation of social goods such as labour, education, housing or healthcare, thus risks dramatically reinforcing intersectional disadvantage and inequalities. At the same time, intersectional minorities are often the most marginalized and simultaneously the least visible groups, which creates a substantial risk that intersectional discrimination remains under the radar of regulators.
Thus, algorithmic intersectional discrimination might largely escape audit, control and de-biasing mechanisms if not addressed explicitly. 20 To counter this, disaggregated data about intersectional patterns of discrimination should be made available: the accuracy of a predictive system should not only be assessed in the aggregate, but also broken down in relation to, for example, gender, race and the intersection of gender and race, a demand that has been made repeatedly with regard to tracing intra-group disadvantage.
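The logic of such a disaggregated audit can be made concrete in a short sketch. The following Python snippet is purely illustrative: the counts are invented and merely echo the pattern of disparities reported by Buolamwini and Gebru. It shows how an aggregate error rate can look acceptable while the intersectional breakdown exposes a starkly disadvantaged subgroup.

```python
from itertools import product

# Synthetic audit records: (gender, skin_type, n_total, n_errors).
# All counts are hypothetical and chosen only to illustrate the point.
audit = [
    ("male",   "lighter", 1000,  30),
    ("male",   "darker",  1000,  90),
    ("female", "lighter", 1000,  70),
    ("female", "darker",  1000, 320),
]

def rate(rows):
    return sum(e for *_, e in rows) / sum(n for _, _, n, _ in rows)

print(f"aggregate error rate: {rate(audit):.1%}")  # 12.8%: looks tolerable

# Single-axis breakdowns already reveal disparities...
for g in ("male", "female"):
    print(g, f"{rate([r for r in audit if r[0] == g]):.1%}")
for s in ("lighter", "darker"):
    print(s, f"{rate([r for r in audit if r[1] == s]):.1%}")

# ...but only the intersectional breakdown isolates the worst-served
# subgroup (here, darker-skinned women at 32%).
for g, s in product(("male", "female"), ("lighter", "darker")):
    print(g, s, f"{rate([r for r in audit if r[:2] == (g, s)]):.1%}")
```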

B. Intersectional discrimination: A blind spot in EU law
Yet, EU equality law does not establish a clear obligation to redress intersectional discrimination. In the absence of a binding provision defining intersectional discrimination as falling within the scope of EU non-discrimination law, it is likely to fall through the cracks. The Court of Justice has in fact repeatedly failed to address the problem of discrimination arising from the interaction of multiple systems of disadvantage. In particular, two decisions of the Court of Justice show a lack of awareness and understanding of intersectionality in the context of EU law. In Z. (2014) for example, the applicant alleged gender- and disability-based discrimination in relation to her employer's refusal to grant her maternity leave. 21 Born without a uterus but with an otherwise functioning reproductive system, the applicant resorted to a surrogacy arrangement to give birth to her biological child. She subsequently applied for maternity leave, which was refused because she had not been pregnant. Instead of examining how the inherently gendered form of disability at stake in this case resulted in discriminatory effects - depriving a mother of social protection - the Court resorted to a formalistic comparison test, separating the question of discrimination on grounds of sex from that of discrimination on grounds of disability. This reasoning obfuscated the disadvantage produced by the interaction between ableism and a strictly biological understanding of motherhood as ensuing from pregnancy. 22 As a result of this failure to consider intersectional discrimination as 'greater than the sum of' its parts, 23 the Court found no discrimination. 24 The most explicit occurrence of such failure can however be traced back to Parris (2016), a case in which the applicant claimed intersectional discrimination on grounds of age and sexual orientation. 25 He and his male partner had been excluded from a survivor's pension scheme on the grounds that they had not married before the age of 60 - a possibility that was not open to them until the legalization of same-sex partnerships in Ireland, which occurred after the applicant's 60th birthday. In its reasoning, the Court indicated that 'no new category of discrimination resulting from the combination of more than one [ . . . ] groun[d] [ . . . ] may be found to exist where discrimination on the basis of those grounds taken in isolation has not been established'. 26 This conclusion defeats the crux of the argument put forward by intersectionality scholarship, according to which intersectional discrimination is 'synergistic' in nature, and not the addition of two separate instances of discrimination based on different protected grounds. 27 In other words, intersectional discrimination can arise even where no distinct discrimination exists on the basis of one or the other protected ground alone.
In Parris, the disadvantage - which the Court overlooked - stemmed from the exclusion from social benefits of a particular sub-group of same-sex couples, namely those above 60, even though neither the entire group of same-sex couples nor the entire group of people above 60 faced a similar disadvantage, meaning that no discrimination on grounds of sexual orientation or age alone existed. 28 These two cases demonstrate how the Court has so far largely failed to explicitly recognise and address intersectional discrimination. By analogy, such an approach casts doubt on the adequacy of EU non-discrimination law when it comes to redressing intersectional manifestations of algorithmic discrimination.
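A worked numerical sketch helps clarify the 'synergistic' point. In the hypothetical selection data below (all counts invented), both single-axis comparisons come out perfectly equal, yet one intersectional subgroup faces a drastically lower success rate - precisely the configuration that the Parris formula, requiring discrimination on each ground taken in isolation, would leave without redress.

```python
# Hypothetical hiring outcomes per (gender, race) subgroup: (applicants, hired).
# The counts are constructed so that both marginal comparisons show parity.
outcomes = {
    ("men",   "white"): (100, 50),
    ("women", "white"): (100, 70),
    ("men",   "black"): ( 80, 58),
    ("women", "black"): ( 20,  2),
}

def success_rate(pred):
    rows = [v for k, v in outcomes.items() if pred(k)]
    return sum(h for _, h in rows) / sum(n for n, _ in rows)

# Single-axis audits: each ground 'taken in isolation' shows parity (60%).
print(f"men:   {success_rate(lambda k: k[0] == 'men'):.0%}")
print(f"women: {success_rate(lambda k: k[0] == 'women'):.0%}")
print(f"white: {success_rate(lambda k: k[1] == 'white'):.0%}")
print(f"black: {success_rate(lambda k: k[1] == 'black'):.0%}")

# The intersectional audit reveals the synergistic disadvantage: black
# women succeed at 10% while every marginal comparison looks equal.
for (g, r), (n, h) in outcomes.items():
    print(f"{g} / {r}: {h / n:.0%}")
```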

C. Breathing life into the concept of 'multiple discrimination': Enhancing EU law's resilience to algorithmic discrimination
Several elements nevertheless indicate that EU non-discrimination law could be purposively interpreted in a way that strengthens its resilience to intersectional, and thus algorithmic, discrimination. The mandate for such an interpretative turn could be found in the concept of 'multiple discrimination', which is anchored in recitals (14) and (3) of the Racial Equality and Employment Equality Directives, both of which recognize that women are often the victims of 'multiple discrimination'. 29 Although currently limited to gendered forms of discrimination, the concept could, through a transversal interpretation, be expanded to other protected grounds, thereby offering a legal avenue to redressing intersectional forms of algorithmic discrimination. 30 Several legislative, policy and doctrinal developments point in the direction of such an interpretation. At the legislative level, discussions in this sense are taking place in the context of the negotiations of the so-called 'Horizontal Directive'. 31 For example, negotiations held in June 2017 within the Council focused on a draft which included a new recital (12)(ab) stating that '[d]iscrimination on the basis of religion or belief, disability, age or sexual orientation may be compounded by or intersect with discrimination on grounds of sex or gender identity, racial or ethnic origin, and nationality'. 32 The draft also contained a substantive prohibition of multiple discrimination in Articles 2(2)(a) and (b) with regard to direct discrimination 'on one or more [ . . . ] grounds' and 'indirect discrimination on one or multiple grounds'. 33 In addition, recent developments in EU equality policy indicate growing awareness of the issue of intersectional discrimination. For example, the European Commission's new Gender Equality Strategy 2020-2025 indicates that '[t]he strategy will be implemented using intersectionality - the combination of gender with other personal characteristics or identities, and how these intersections contribute to unique experiences of discrimination - as a cross-cutting principle'. 34 At the doctrinal level, some encouraging signs can be read in the Court's jurisprudence despite the failures highlighted above. In Parris, the Court at least signalled that the litigation of intersectional discrimination is not precluded and that claims invoking several grounds of discrimination simultaneously can be reviewed. 35 This had already been accepted in Meister (2012), a decision concerning alleged discrimination on grounds of sex, age and ethnic origin. 36 In addition, even though not followed by the Court, the opinion rendered by AG Kokott in Parris shows awareness of, and willingness to address, intersectional discrimination. The AG opinion warns that '[t]he Court's judgment will reflect real life only if it duly analyses the combination of those two factors, rather than considering each of the factors of age and sexual orientation in isolation'. 37 Further, it acknowledges that '[t]he combination of two or more different grounds [ . . . ] is a feature which lends a new dimension to a case'. 38 It also confirms that an appropriate assessment of such a case of discrimination should take the synergy of these different axes of discrimination into account, as opposed to splitting the analysis along each ground in isolation. 39 More implicitly, the Court's jurisprudence occasionally displays some sensitivity to the problem of intersectional discrimination. In Odar, for instance, the Court adopted what I termed elsewhere 'an intra-categorical approach' to intersectional discrimination. 42
By acknowledging 'the risks faced by severely disabled people, who generally face greater difficulties in finding new employment, as well as the fact that those risks tend to become exacerbated as they approach retirement age', the Court accepted that the interaction of age and disability produces specific disadvantage. 43 The Léger decision, which in part concerned the question of whether a permanent ban on blood donation for men having same-sex sexual relations amounted to discrimination on the basis of sexual orientation, is another example of such sensitivity. The opinion by AG Mengozzi acknowledges the existence of 'clear indirect discrimination consisting of a combination of different treatment on grounds of sex - since the criterion in question relates only to men - and sexual orientation - since the criterion in question relates almost exclusively to homosexual and bisexual men'. 44 These traces of doctrinal sensitivity to the problem of intersectional discrimination thus open potential legal pathways for a better handling of algorithmic discrimination in its intersectional manifestations. 45 Hence, the concept of multiple discrimination could, in light of the doctrinal developments highlighted above, offer a valuable pathway towards greater resilience of the current EU equality law framework in relation to the problems of algorithmic discrimination and data-driven disadvantage at the intersections of protected categories. Recognising intersectional discrimination as a fully-fledged concept of EU anti-discrimination law would facilitate the redress of forms of discrimination that involve various protected categories of personal data (or proxies thereof) and highly granular profiling information. Furthermore, demarginalizing the concept of multiple discrimination could also contribute to promoting intersectionality, understood as an analytical framework, in the Court's equality reasoning. Because it shifts the focus to the composite nature of grounds of discrimination and to the systems of exclusion and disadvantage which they capture, intersectionality would contribute to unearthing the social conditions of the production of algorithmic biases in socio-technical systems. Recognising intersectional discrimination as a doctrinal category would also lighten the evidentiary burden weighing on applicants' shoulders, a problem that has been repeatedly emphasized in relation to data-driven discrimination. 47 It would lower the evidentiary threshold for prima facie cases: even where no discrimination can be shown with regard to protected grounds taken separately, applicants would be able to establish discrimination prima facie based on both categories taken in combination. The discriminatory impact of an algorithm could then be reviewed in court more easily, not only with regard to discrete protected categories, but also in relation to their combinations and the specific subgroups affected by any given disadvantage.

Algorithms, proxies and protected grounds: Capturing correlation-based discrimination in EU law

A. Grounds, groups and proxies: An unclear relationship
A second dimension of the discrepancy between algorithmically induced discriminatory harms and the current legal framework relates to the notions of statistical and proxy discrimination. It has been argued that protected grounds will often not be used as input variables in algorithmic decision-making procedures, except where they may fulfil legitimate purposes (for example in personalized advertising). 48 Yet, machine learning algorithms are trained to detect patterns, which means that they rely on statistical inferences that might reflect discriminatory correlations. Thus, blinding a machine learning algorithm to sensitive social categories like racial or ethnic origin does not suffice to prevent discrimination. 49 Because the data they process is rich in correlations, such algorithms can still discriminate based on available variables that correlate with racial or ethnic origin. 50 In fact, as Williams et al. point out, '[d]atabases about people are full of correlations, only some of which meaningfully reflect the individual's actual capacity or needs or merit, and even fewer of which reflect relationships that are causal in nature'. 51 They provide concrete examples of such correlations: names and patronyms as well as personal social interactions can reflect membership of both a given ethnic group and a specific socio-economic category. 52 The embedding of such correlations in algorithmically assisted decision-making is problematic because it reifies and amplifies, by inference, historical disadvantages linked to protected social categories. A well-known example of the way an algorithm is able to infer membership of a particular ethnic group is that of discrimination arising from algorithmic prediction models that process residency data. 53 An algorithm that used the distance between workers' homes and their workplace as a predictor of job tenure was for example found discriminatory because it disproportionately disadvantaged ethnic minority workers. 54 Discrimination can also arise in cases of misprofiling, which take place when an algorithm makes wrong inferences about a user's identity based on given correlations and, for instance, excludes her from given social goods on this basis. Discrimination based on correlations with protected grounds has been described, already in analogue contexts, as 'proxy discrimination', a term since widely adopted in the literature on algorithmic discrimination. 55 Algorithmic proxy discrimination poses important questions in relation to the foundational legal categories on which non-discrimination law is based, namely the so-called 'protected grounds'. EU equality law prohibits discrimination on grounds of sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, but without offering definitions of their scope and limits. 56 Historically, some proxies have been accepted by the Court of Justice as falling within the scope of given protected grounds. For instance, discrimination on grounds of pregnancy has been considered a form of direct sex discrimination. 57 However, other examples stemming from the Court's jurisprudence on discrimination based on ethnic origin and disability show uncertainties as to which characteristics, or combinations thereof, could be understood as falling within the scope of protected categories. The degree of overlap required between a proxy and a given protected group to give rise to discrimination is unclear. 58
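A minimal simulation can illustrate why such 'blinding' fails. In the hypothetical scenario below (all parameters invented, loosely modelled on the residency example above), a screening rule never sees the protected attribute, yet its only input correlates with group membership and the rejection rates diverge sharply.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical residential segregation: the minority group lives, on
# average, further from the employment centre, so commute distance
# 'redundantly encodes' group membership.
minority = rng.random(n) < 0.2
distance_km = rng.normal(loc=np.where(minority, 18.0, 9.0), scale=4.0)

# Facially neutral, 'blinded' screening rule: reject long predicted
# commutes, a criterion said to predict job tenure.
rejected = distance_km > 15.0

for group, mask in [("minority", minority), ("majority", ~minority)]:
    print(f"{group}: rejection rate {rejected[mask].mean():.1%}")
# The protected attribute was never an input, yet rejection rates come
# out at roughly 77% versus 7% under these assumptions.
```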
In Jyske Finans, for instance, the Court found that an applicant's country of birth did not suffice 'in itself, [to] justify a general presumption that that person is a member of a given ethnic group'. 59 The Court argued that '[e]thnic origin cannot be determined on the basis of a single criterion but, on the contrary, is based on a whole number of factors, some objective and others subjective'. 60 It however disregarded the relevance of the applicant's patronym and nationality at birth as additional vectors of discrimination based on ethnic origin. 61 By analogy, this reasoning is problematic in the context of predictive profiling and inferential analytics because data points used in an algorithm might correlate with a protected category, resulting in a discriminatory inference, yet it is unclear when the link between such data points and any protected category will be considered strong enough for the Court to recognise a case of proxy discrimination. In Chacón Navas and Kaltoft, national courts for example tested the scope of disability protection, asking whether a chronic illness and obesity could be understood as either falling within the definition of disability or as being covered by extension. 62 While the Court of Justice adopted a restrictive rather than expansive approach to the definition of disability in relation to sickness, it conceded that obesity, albeit not per se a disability, could entail one if 'hinder[ing] [a worker's] full and effective participation in professional life'. 63 However, in both cases, it rejected arguments that the scope of EU non-discrimination law 'should be extended by analogy beyond the discrimination based on the grounds listed exhaustively', including by reference to the general principle of non-discrimination and to the non-exhaustive list of protected grounds laid out in Article 21 of the EU Charter of Fundamental Rights (hereinafter 'the Charter'), despite its primary law status since the Lisbon reform. 64 This rather restrictive interpretation, coupled with the lack of clarity on, or of a reasoned approach to, the scope and boundaries of protected grounds, might represent a further hurdle to preventing algorithmic discrimination, which is so tightly linked to the problem of proxy discrimination.
These uncertainties make it difficult for algorithmic proxy discrimination to be qualified as direct discrimination, defined in EU law as a situation in which 'one person is treated less favourably on grounds of [sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation] than another is, has been or would be treated in a comparable situation': this definition involves a causal link between a given treatment and a protected ground, while inferential analytics rely on correlations. 65 Although algorithmic proxy discrimination could be captured through the doctrine of indirect discrimination, as an apparently neutral practice which gives rise to a 'particular disadvantage' for a given protected group, this route raises procedural difficulties and opens the door to a larger pool of justifications than the doctrine of direct discrimination. 66 Importantly, the definition of the 'particular disadvantage' to be experienced by a protected group is unclear, and the required degree of overlap between the group wronged and the group protected under EU law has not received a consistent interpretation by the Court of Justice. Although the Court has not answered these particular questions, it has proposed two jurisprudential routes that can help bypass these difficulties and address the harms linked to algorithmic proxy discrimination: a non-essentialist and a structural approach to the scope of protected grounds. Such readings can be understood against the background of scholarly accounts that highlight the dual function of 'protected grounds' as both an instrument of recognition of particular social identities that deserve protection and a tool to capture the social systems and hierarchies that produce disadvantage for given social groups. 68 This normative duality produces two different understandings of protected grounds: as categories of social identification on the one hand, and as vehicles capturing external value-based ascriptions on the other. On this second reading, protected grounds function as analytical 'shortcuts' for inequality-producing social systems and for society's treatment of given groups. 69 This latter reading of protected grounds could, by analogy, promote EU law's resilience to proxy and statistical discrimination in the context of predictive analytics by capturing the harms linked to algorithmic profiling and ascriptions.

B. Addressing algorithmically inferred classifications: The Court's non-essentialist approach to 'protected grounds'
In its case law, the Court has progressively severed the link between a victim's identity and the 'grounds' protected under EU law, a jurisprudential evolution which could help address discriminatory classifications arising from algorithmic inferences. First, the concept of 'discrimination by ascription' or 'discrimination by assumption' could extend the grasp of the doctrine of direct discrimination to cases of algorithmic profiling, including 'misprofiling'. It would help capture cases in which algorithmic inferences (right or wrong) lead to disadvantageous classifications linked to protected grounds. For example, if an algorithmic model used to predict which job seekers are at risk of long-term unemployment systematically assigns a higher risk score to users whom it classifies as 'living with a disability', disadvantaged users profiled as 'disabled' would fall under the scope of non-discrimination law even if they do not identify as such. The proxy used as a basis for such a classification and its relationship to the actual protected ground would not matter to the qualification of direct discrimination. The mere ascription by an algorithm of an individual or group to a protected category, combined with the disadvantageous treatment attached to that classification, would amount to direct discrimination.
This interpretation finds its roots in the fact that a victim of discrimination does not herself need to be, or identify as, a member of the protected group in order to receive protection under EU equality law. It suffices that an individual or group is perceived or treated as possessing a given protected characteristic in order to trigger the protection of EU equality law. For instance, in Accept, a football player who was discriminated against on grounds of his alleged sexual orientation did not need to disclose his own sexual orientation in order for the Court to find discrimination. 70 In CHEZ, the victim of discriminatory practices by an electricity provider targeting a Roma community of residents could rely on EU non-discrimination law on grounds of racial or ethnic origin despite not identifying as Roma herself. 71 This approach resonates with Recital 16 and Article 3(1) of the Race Discrimination Directive, which state that 'the protection against discrimination on grounds of racial or ethnic origin which the directive is designed to guarantee is to benefit 'all' persons'. 72 Second, the Court of Justice has developed this approach in a slightly different way through the concept of 'discrimination by association'. 73 In Coleman, the Court for example found that the mother of a child living with disabilities had been discriminated against by her employer on grounds of her child's disability because of her care responsibilities. 74 This jurisprudential innovation de facto extended the personal scope of EU equality law to classifications based on an individual's core social relationships and interactions. This counterpart to the Court's approach to discrimination by ascription would, similarly, protect victims of discrimination who are directly associated with a protected group.
The concept of 'discrimination by association' would cover situations where an algorithm using behavioural data for profiling purposes discriminates based on inferences made in relation to a user's proximity or 'affinity' with a protected group. 75 For example, such a case could arise if a profiling algorithm used behavioural data to determine the price of goods and services and proposed higher prices for standard goods and services to the partner of a pregnant woman. If it can be shown that the price of a given good or service systematically increases in relation to groups 'associated with' a pregnancy (e.g. interested in pregnancy-related goods), this situation could fall under the scope of EU gender equality law via the concept of 'discrimination by association' even though the person incurring higher prices is not pregnant him/herself. 76 Such discriminatory algorithmic classification could then be understood from the perspective of direct discrimination even where the victim does not herself belong to a protected group.
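How such a systematic mark-up might be evidenced can be sketched briefly. The snippet below is a hypothetical audit (all prices synthetic, the affinity label invented): it keys the comparison on the algorithm's ascribed affinity label rather than on any characteristic of the users themselves, which is precisely the move that the association doctrine would support.

```python
import statistics

# Hypothetical price audit: (inferred affinity label, quoted price).
quotes = [
    ("pregnancy_affinity", 119.0), ("pregnancy_affinity", 124.5),
    ("pregnancy_affinity", 121.0), ("no_affinity", 99.0),
    ("no_affinity", 101.5), ("no_affinity", 98.0),
]

by_label: dict[str, list[float]] = {}
for label, price in quotes:
    by_label.setdefault(label, []).append(price)

for label, prices in by_label.items():
    print(f"{label}: mean quoted price {statistics.mean(prices):.2f}")
# A systematic mark-up tied to the ascribed affinity label - not to the
# users' own characteristics - is what the association doctrine captures.
```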

C. Discriminatory algorithmic correlations and systemic inequalities: A structural reading of protected grounds
A second jurisprudential route to capturing algorithmic proxy discrimination can be found in the Court's structural approach to protected grounds. In cases such as Feryn and Rete Lenford, public statements deterring certain protected minority groups from applying to a given job fell within the scope of EU non-discrimination law even in the absence of identified victims. 77 In these two cases, racist and homophobic statements made by an employer were deemed to amount to direct discrimination. The Court did not need proof that these statements had put particular victims at a disadvantage. Rather, the systematic deterrence of candidates from protected groups was enough to constitute direct discrimination.
By analogy, this doctrinal approach, which captures the 'public' harm created by the mass diffusion of discriminatory attitudes and stereotypes, could provide an interesting pathway to addressing the mass effects of algorithmic profiling and predictive analytics. From this perspective, algorithmic proxy discrimination could fall within the scope of EU equality law even in the absence of identified victims, on account of the scale of its exclusionary effects on protected groups. On the one hand, the approach adopted by the CJEU in Feryn and Rete Lenford extends the grasp of EU non-discrimination law to the dissemination through algorithmic profiling of harmful stereotypes against given protected groups. On the other hand, this doctrinal approach also captures the collective harms that arise from biased algorithmic decision-making in terms of preventing access to social goods such as employment, credit, healthcare, etc. For instance, this approach could potentially provide effective legal redress against the stereotypical distribution of job ads across gender- and ethnicity-specific groups arising from platforms' optimisation of ad delivery. 78 Such skewed distribution not only reinforces structural and harmful sexist and racist stereotypes, but it also entails severe deterrence and exclusion effects on protected groups through systematic non-exposure to job opportunities. Applying the Feryn doctrine to algorithmic feedback loops would thus effectively map algorithmic proxy-based inferences onto systemic discrimination and structural inequalities.
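The audit logic underlying such a claim is straightforward and can be sketched in a few lines. In the snippet below, the delivered shares echo the figures from the experiment cited in the introduction, while the balanced targeting share is an assumption; the point is simply to quantify the skew between the audience targeted and the audience reached.

```python
# (share of women in the targeted audience, share of women actually reached)
ads = {
    "cashier":    (0.50, 0.85),  # delivered share from the 2019 experiment
    "lumberjack": (0.50, 0.10),  # 90% male delivery, as cited above
}

for ad, (targeted, delivered) in ads.items():
    skew = delivered - targeted
    print(f"{ad:<10} targeted {targeted:.0%} women, "
          f"delivered to {delivered:.0%} women (skew {skew:+.0%})")
# Systematic skews of this kind, produced at scale, are the sort of
# victimless, group-level exclusion the Feryn doctrine could capture.
```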
Taking this proposal a step further would, however, allow the structural dimension of algorithmic discrimination to be addressed more fully. The conceptual resources discussed so far do not entirely solve the problem of proxy discrimination: uncertainty persists as to how close algorithmic ascriptions or associations must be to protected grounds or groups in order to qualify as discrimination. A viable legal pathway to tackling this issue lies in the expansive interpretation of grounds of discrimination. Since grounds are not defined under EU law, judges are responsible for contextually drawing their boundaries, an exercise which should be conducted purposively in the context of algorithmic discrimination in order to ensure the effectiveness of EU law. Several commentators have argued for an 'expansive', 'inclusive', 'contextual', 'capacious', 'large' or 'complicated' reading of grounds of discrimination. 79 In essence, these arguments hold that courts should recognize the considerable inner complexity, composite nature and heterogeneity of protected grounds of discrimination.
In the case of algorithmic discrimination, such an expansive and purposive approach to grounds would bolster the ability of the current exhaustive list of protected grounds to address proxy-related disadvantage. An additional source of inspiration for tackling this issue can be found in the concept of 'nodes of discrimination fields' developed by Schiek, who offers a re-conceptualization of EU non-discrimination law around the nodes of 'race', 'disability' and 'gender', arguing that a comprehensive reading of these nodes would lead to recognising that discrimination can happen in their 'centre' or in their 'orbit'. 80 This concept, she argues, 'provides an opportunity to organize a multiplicity of grounds around the nodes, thus enabling the law to respond adequately to different degrees of discrimination and exclusion'. 81 A nodal interpretation of grounds, combined with the structural approach already adopted by the Court in Feryn and Rete Lenford, would greatly enhance the resilience of EU equality law to proxy discrimination resulting from profiling and predictive algorithms, whether such discrimination relates to the core of protected grounds or to their periphery.

Algorithms, performativity and new patterns of discrimination: Equality law in the algorithmic society

A. Algorithmic decision-making: Towards systemic forms of behavioural discrimination?
A third incompatibility between the phenomenon of algorithmic discrimination and the legal framework in place relates to the performativity of predictive analytics. While it is well established that algorithmic profiling and predictive techniques reproduce existing patterns of discrimination via 'redundant encoding', biased datasets and prejudiced designs, 82 a crucial but more puzzling question is whether and how machine learning based on big data can facilitate the emergence of new forms of discrimination, that is, biases that would be socially pervasive and harmful enough to deserve the attention of non-discrimination law. 83 Asking the question of the performativity of algorithmic discrimination in fact prompts the question of the nature of wrongful discrimination. In other words, 'what makes wrongful discrimination wrong' 84 and deserving of legal attention? The answer of course 'depends on the political, social and historical context in which any legislator or judge intervenes'. 85 Research shows the relevance of both symbolic representation and material access in the emergence and perpetuation of discrimination and inequality. 86 At the same time, recent scholarship has demonstrated how algorithmic decision-making is potentially liable to do both, that is, to reinforce distributive inequalities through reproducing discriminatory access to social goods such as labour, health services, social benefits or credit, and symbolic inequalities through the discriminatory representation or 'invisibilization' of certain social groups. 87 For example, Eubanks has shown the pauperising effects of the use of predictive analytics in public authorities' decisions concerning the allocation of social benefits. 88 Recently, wide media coverage has drawn attention to Apple's discriminatory credit-granting algorithms. 89 In terms of symbolic injustices, algorithmic stereotyping is also an issue: as already noted, search engine algorithms reinforce intersectional racist and sexist prejudices. 90 In light of the above, it is worth asking how algorithmic decision-making, by relying on granular types of behavioural data, could lead to new types of bias that would be systematic and pervasive enough to deserve legal attention and potentially a qualification as unlawful discrimination. By systematically relying on currently permissible distinctions to decide on the allocation of resources, price levels, and the inclusion in, or exclusion from, given social goods in a pervasive manner, algorithmic decision-making could stabilize new forms of social classification with far-reaching socio-economic consequences. 91 In broad outline, these new patterns of social sorting could enforce, by aggregation, new types of socio-economic stratification and social hierarchies, which could contribute to the consolidation of new forms of discrimination. 92 For example, risk scoring based on increasingly available health, sports, nutrition, sleep, lifestyle and other habit-related data could lead to increased insurance premiums and reduced access opportunities for groups considered 'riskier'. The economic impoverishment of these groups 93 could in the long run lead to the stabilization of new social stratification patterns. 94 These would feed into a vicious circle of self-enforcement and reification whereby algorithmic scores would be the basis for future decisions and would feed back into these 'new' social hierarchies. 95 In this perspective, algorithmic data-driven decision-making could create, over time, new unfair forms of pervasive and systematic distinction and exclusion which could require legal attention.
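A toy simulation can make this feedback dynamic visible. In the sketch below, all parameters are arbitrary: a risk score inversely tied to wealth raises premiums, premiums erode wealth, and eroded wealth feeds the next round's score, so that two profiles starting modestly apart end up on diverging trajectories.

```python
def simulate(wealth: float, rounds: int = 10) -> float:
    """Iterate the score -> premium -> wealth loop for a single profile."""
    for _ in range(rounds):
        risk_score = 10_000.0 / max(wealth, 1_000.0)  # poorer -> scored 'riskier'
        premium = 400.0 * risk_score                  # riskier -> pays more
        wealth = wealth * 1.03 - premium              # modest growth minus premium
    return wealth

# The wealthier profile compounds upward while the poorer one spirals down:
# roughly 24,800 versus 8,600 after ten rounds under these assumptions.
print(f"starting at 20,000: {simulate(20_000):,.0f}")
print(f"starting at 10,000: {simulate(10_000):,.0f}")
```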

B. Algorithms and classification: The possibility of new patterns of discrimination
Beyond behavioural data, algorithms also risk contributing to creating new types of discrimination by treating spurious, that is fortuitous, correlations as causal under suboptimal knowledge conditions. 96 Such spuriously correlated discriminatory outputs can be illustrated by the following example: insurers find it hard and costly to measure aggressiveness, a decisive actuarial factor in estimating drivers' risk of being involved in accidents, so they rely on a proxy that happens to correlate with aggressiveness - the red colour of drivers' cars - in order to estimate accident risks. The unintended consequence of this risk assessment model, however, is that the chosen proxy - red cars - happens to correlate, in turn, with a third category, namely drivers belonging to a minority ethnic group, who are thus routinely discriminated against through higher insurance prices. 97 If such spurious correlations are routinely enforced by predictive algorithmic models, but this time without corresponding to a protected ground in non-discrimination law, groups possessing these characteristics might end up being systematically disadvantaged in given fields. In the long run, algorithmic operations repeatedly enforcing such group distinctions might have serious consequences in terms of socio-economic sorting and might over time enact new forms of discrimination against unprotected groups. An additional challenge lies in the invisibility of such systematic exclusion: these new patterns of discrimination might escape public attention if they impact groups that are not considered socially salient and therefore not monitored in relation to their social disempowerment, disadvantage and status.
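The red-car example can be simulated directly. In the sketch below (all correlations invented), the insurer prices exclusively on the proxy and actual risk is identically distributed across groups, yet a systematic ethnic premium gap emerges.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

minority = rng.random(n) < 0.15
# Colour preference correlates with group membership, not with driving.
red_car = rng.random(n) < np.where(minority, 0.60, 0.20)
# Aggressiveness, the real risk factor, is identically distributed.
aggressive = rng.random(n) < 0.10

# Pricing rule built on the proxy alone.
premium = np.where(red_car, 1_300.0, 900.0)

for group, mask in [("minority", minority), ("majority", ~minority)]:
    print(f"{group}: mean premium {premium[mask].mean():,.0f}, "
          f"share aggressive {aggressive[mask].mean():.1%}")
# Mean premiums diverge (about 1,140 versus 980) even though the actual
# risk factor is equally distributed in both groups.
```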
An even bigger problem in terms of the 'diminishing effectiveness of the anti-discrimination toolbox' could stem from what Leese describes as the 'deep-seated epistemological conflict between an anti-discrimination framework that conceives of knowledge as the establishment of causality and data-driven analytics that build fluid hypotheses on the basis of correlation patterns in dynamic databases'. 98 Beyond the causality/correlation mismatch exposed above, the dynamic classifications performed and enforced by machine learning algorithms challenge the static and exhaustive categorical structure of equality law. Data-driven profiling indeed relies on classifiers and generates distinctions that might be entirely 'artificial and nonrepresentational' from a societal point of view, in the sense that they might not correspond to any salient or even real social groups. 99 As Mann and Matzner put it, '[e]mergent forms of algorithmic discrimination stem from features and indirect proxies that themselves, on face value, seem harmless' but which in combination 'might lead to emergent forms of discrimination' based on 'patterns that have little or no intuitive meaning to human practice' and which are therefore socially unrecognisable. 100 Thus, the norms underlying algorithmic distinctions arising from existing statistical distributions might wholly escape moral, social or legal definitions and might also change as statistical distributions themselves evolve. 101 The consequences of such contingent and shifting algorithmic classifications are problematic because they escape democratic review and decisions about what is socially acceptable or desirable, even when they are the foundations for crucial societal decisions of both public and private nature. In this sense, the risk of an 'invisible production of invisibilities' 102 arises, meaning that these emergent forms of discrimination could remain unfathomable to the very language and structure of non-discrimination law, both in relation to their mechanics of production and their societal effects. In this context, predictive analytics is liable to perform discrimination in two ways. On the one hand, existing statistical disparities, including but not limited to current patterns of discrimination, become reified through their own performance as norms for future decision-making. On the other hand, predictive algorithms also generate self-fulfilling prophecies of discrimination because users' behaviours and expectations shift to adapt to their logic, so that in creating knowledge about the future they also inevitably partly shape it.
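What such a 'socially non-salient' algorithmic grouping might look like can also be sketched. In the toy example below (synthetic behavioural features, an arbitrary learned boundary), the systematically surcharged 'group' is merely a region of feature space: it has no name, no social salience and no protected-ground anchor.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
# Synthetic behavioural features, e.g. scrolling speed, log-in hour, typing cadence.
features = rng.random((n, 3))

# An opaque learned boundary: an arbitrary conjunction of feature thresholds.
segment = (features[:, 0] > 0.7) & (features[:, 1] > 0.8) & (features[:, 2] < 0.3)

price = np.where(segment, 140.0, 100.0)  # systematic mark-up for the segment

print(f"segment size: {segment.mean():.1%} of users")
print(f"mean price inside segment:  {price[segment].mean():.0f}")
print(f"mean price outside segment: {price[~segment].mean():.0f}")
# The disadvantaged 'group' corresponds to no recognisable social category,
# so the disadvantage escapes both monitoring and the list of protected grounds.
```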
This poses the fundamental question of when algorithmic discrimination - a statistical operation - becomes unlawful discrimination - a legal matter. This problem runs into the tautology of non-discrimination law: from a legal point of view, discrimination is what the law defines as discrimination. In order to escape this circularity and answer the question of what should legally be considered discrimination, resorting to an external normative benchmark is unavoidable. 103 The question thus becomes: which algorithmic classifications should be considered unlawful and discriminatory beyond the existing list of protected grounds, and based on which normative standards? In other words, when and following what rationale should EU non-discrimination law intervene? These questions generate other, more profound, interrogations about what unlawful discrimination is, what social harms non-discrimination law seeks to prevent or redress, and ultimately, what moral wrongs society should legally prohibit. While many have pondered these questions, 104 the normative underpinnings and moral purpose of EU non-discrimination law have never been made fully explicit or clarified.

C. Finding resilience in the EU Charter: The open list of protected grounds in Article 21
From a doctrinal point of view, EU non-discrimination law offers several pathways that could be explored with a view to adapting it to potentially emerging new forms of discrimination. In contrast to the non-discrimination directives, Article 21 of the Charter offers a non-exhaustive list of protected grounds, which makes space for new criteria of protection. It prohibits discrimination 'on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation'. 105 On the one hand, the expanded list of explicit characteristics which it lays out could better address certain forms of behavioural or proxy discrimination generated by profiling and predictive algorithms, for instance on grounds of genetic features (e.g. through face recognition technologies and healthcare algorithmic applications), language, membership of a national minority or birth.
On the other hand, the formula 'any ground such as' is open-ended and could be used as a basis for the contextual recognition of new protected grounds, replicating interpretive patterns stemming from the jurisprudence of the European Court of Human Rights (ECtHR) in relation to Article 14 of the European Convention on Human Rights (ECHR), on which Article 21 of the Charter is modelled. 106 The ECtHR has for example granted protection against discrimination to groups not explicitly protected under Article 14 ECHR but which it has considered 'particularly vulnerable' in particular contexts, such as prisoners, 107 asylum seekers 108 or people living with HIV. 109, 110 Such an approach based on Article 21 of the Charter at the CJEU could help protect socially 'vulnerable' categories of the population that are currently considered non-salient and therefore fall outside the scope of equality law although they run a risk of widespread discrimination. 111 It would also provide a basis for contextually acknowledging the minority status of a given group in a particular setting and granting protection accordingly. 112 In addition, Article 21 of the Charter could help solve existing difficulties in relation to the hierarchy of protection across grounds in EU law. In particular, Article 21 of the Charter could fill the gaps in situations where algorithmic discrimination intervenes in a field where a ground is not protected under EU legislation (e.g. age, sexual orientation, religion and disability are not protected beyond employment). In this sense, Kilpatrick has for example raised the question of the subsidiary applicability of Article 21(1) beyond the equality directives as a basis to level up the material scope across the different grounds, albeit underlining the difficulties of such an interpretation both at the doctrinal and at the political level. 113 Bribosia, Rorive and Hislaire have also argued that Article 21 of the Charter could provide an autonomous ground for judicial review of discrimination cases, building on the Court's indication in Léger that Article 21 was applicable to the field of health, beyond the scope of application of the equality directives, where Member States implement EU law. 114 Such a legal pathway could enhance the flexibility of the EU non-discrimination law framework with regard to its protected grounds. In terms of the purely legal value of Article 21, such a solution does not seem out of reach since, as mentioned above, the Charter has had the status of primary law since the entry into force of the Lisbon Treaty in 2009. In addition, the CJEU has recognized the horizontal direct effect of Article 21 of the Charter in the recent Egenberger and IR cases, which opens new questions regarding the scope of EU non-discrimination law beyond the directives on gender, racial or ethnic origin, disability, sexual orientation, religion or belief and age. 115 That said, such an interpretive innovation would require overcoming two obstacles. On the one hand, Article 51 of the Charter limits its scope of application exclusively to situations in which 'Member States [ . . . ] are implementing Union law', a provision which has been interpreted rather restrictively so far. 116 On the other hand, the Court's previous interpretations of Article 21 of the Charter, such as in Kaltoft, have been restrictive, rejecting an extension of the list of protected grounds beyond those protected under EU secondary law and Article 19 TFEU. 117
A second legal pathway that could be explored in order to better address the problem of emerging forms of discrimination is the constitutional guarantee offered by the open-ended general principle of non-discrimination and equality in EU law. In a resounding series of decisions, the CJEU has established that the general principle of non-discrimination applies directly not only vertically, to disputes between the state and its citizens, but also horizontally, that is, to entirely private disputes. 118 Most importantly, in Mangold, these effects manifested even in the absence of enforceable secondary EU law. 119 Several authors have argued that open-ended constitutional equality guarantees could in principle enhance the effectiveness and flexibility of judicial review in non-discrimination cases, as they enable judges to perform contextual assessments in the spirit of theories of substantive equality. 120 Such contextual and substantive equality reviews should assess 'whether [algorithmic systems] enforce, facilitate, or legitimize [ . . . ] exclusionary invisibilities'. 121 In this context, the general principle of non-discrimination could, in theory, offer an appropriate legal terrain for such assessments with a view to tackling emerging forms of algorithmic discrimination that escape the grasp of the Equality Directives, although that would mean overturning the restrictive approach adopted in cases such as Chacón Navas. 122

Conclusion
Although appearing neutral and objective, algorithms embed certain voices and values and erase others. The distribution of algorithmic visibility mirrors that of the power of definition: historical injustices crystallised in structural inequalities combine with skewed participation and representation opportunities to further marginalise systematically disadvantaged social groups. 123 This socio-technological embedding of structural inequalities risks further reifying and essentializing discrimination. Thus, examining the robustness of equality law, identifying gaps and frictions and devising pathways to legal resilience prove essential tasks.
This paper has argued that algorithmic discrimination disrupts the established conceptual map of EU non-discrimination law and poses substantial difficulties regarding the application of the EU anti-discrimination law corpus. Nevertheless, this article has also demonstrated that purposively revisiting and centring some peripheral concepts, doctrinal devices and provisions of EU equality law could enhance the resilience of the current framework of protection by providing legal pathways to redress. In that regard, the concept of multiple discrimination together with the analytical framework offered by intersectionality theory, the non-essentialist and structural readings of non-discrimination grounds, and the full activation of the open-ended Article 21 of the Charter appear to be promising pathways. These concepts and instruments should be invoked in the name of the principle of effectiveness of EU non-discrimination law in the context of current technological disruptions.
The approach proposed above -a purposive interpretation and instrumental application of EU non-discrimination law -is justified by the necessity to ensure that technological evolutions do not jeopardize fundamental rights. As algorithmic output mirrors society, a neutral approach would only endorse the perpetuation of social inequalities. Thus, the challenge of algorithmic discrimination calls for a substantive or even a 'transformative' approach to equality. 124 Although discussions around the regulation of AI have centred on 'ethical' and 'human-centred' AI, a 'social justice centred' AI would seem more appropriate to avoid the spread of both old and new forms of discrimination recast as technological output.