Predictions about the future have been made since the earliest days of humankind, but we are now living in a brave new world of prediction. Today's predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Such algorithms are commonly referred to as artificial intelligence ("AI"). Increasingly, important decisions about people are being made based on these algorithmic predictions.
Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even those that do lump all inferences together. But as we argue in this Article, predictions are different from other inferences and raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem: they reinforce patterns in past data and can further entrench the bias and inequality of the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions assert something about future events; until those events occur, the predictions remain unverifiable, leaving individuals unable to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem, where they actively shape the future they aim to forecast.
More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society in which individuals' ability to author their own future is diminished, while the organizations developing, deploying, and using predictive systems gain greater power to shape the future.
Privacy and data protection law do not adequately address algorithmic predictions. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present, even though predictions involve considerations that other types of inferences do not implicate. The correction rights and duties of accuracy that many laws provide are insufficient to address the problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law are also unable to address the unique problems of algorithmic predictions.
We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference, and we examine what the law must consider when addressing the problems that these predictions create.
* Hideyuki ("Yuki") Matsumi. PhD candidate/researcher at the Research Group on Law, Science, Technology and Society (LSTS) of the Vrije Universiteit Brussel (VUB). Member of the New York Bar. I would like to thank everyone who patiently listened and waited for me.
** Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law, George Washington University Law School. Thanks to my research assistant, Travis Yuille, for excellent research. We both want to thank Dan Bouk, Dan Burk, Jessica Eaglin, Oscar Gandy, Talia Gillis, Woodrow Hartzog, Mireille Hildebrandt, Alicia Solow-Niederman, and the participants at the Privacy Law Scholars Conference 2023 for very helpful comments. We also thank everybody at LSTS.