Predictive models that analyze vast amounts of personal data are assuming a central role in the decision-making processes of private and public entities. The use of algorithmic processes has raised concerns that they generate unacceptable, and at times even novel, forms of discrimination. This Essay seeks to enrich the growing body of work on predictive analytics and the specter of discrimination. It does so by illuminating the central theoretical justifications for anti-discrimination theory and policy, and the ways in which these emerging practices challenge them. On the basis of this analysis, the Essay sets out the regulatory steps that should follow.
In Section II, this Essay lays the foundations for the discussion by defining discrimination and its key components, such as the “salient social groups” that may be targeted by discriminatory practices and receive the protection of anti-discrimination laws. It then sets out the basic deontological and consequentialist justifications for anti-discrimination policy: the arguments that discrimination is immoral and demeaning, undermines freedom, reinforces stereotypes, and leads to subordination and exclusion.
Section III applies these definitions and justifications to the age of predictive analytics. The analysis explores when and how rules prohibiting discriminatory intent and outcomes can be applied and justified in this novel setting, and examines the extent to which “intent” might be expanded in this realm to capture new forms of culpable conduct. It also offers general recommendations. Finally, the analysis asks whether, given new social and technological trends, the definition of the social groups that must be protected from discriminatory acts should be amended and expanded to cover other instances.