Vol. 3 No. 1 (2023): Journal of AI-Assisted Scientific Discovery
Articles

A Critique of Algorithmic Fairness: Deconstructing Bias in Machine Learning Models

Dr. Thomas Meyer
Associate Professor of Computer Science, University of Applied Sciences Upper Austria

Published 30-06-2023

How to Cite

[1]
Dr. Thomas Meyer, “A Critique of Algorithmic Fairness: Deconstructing Bias in Machine Learning Models”, Journal of AI-Assisted Scientific Discovery, vol. 3, no. 1, pp. 216–235, Jun. 2023, Accessed: Sep. 17, 2024. [Online]. Available: https://scienceacadpress.com/index.php/jaasd/article/view/84

Abstract

This paper extends existing critiques of formal predicates that enforce narrow definitions of fairness into a critique of the fairness framework itself. To do so, I examine the metaphors commonly used to describe the tradeoff between predictive disparity and error, together with the foundational assumptions those metaphors rest on. The paper makes two contributions. First, I identify inherent problems with three metaphors frequently used to capture predictive disparity and its tradeoff with error. Through discussion and illustrative examples, I show that the relationship between disparity and error is far from obvious once the actual disparity rates observed across two distinct proxy groups are compared systematically, with particular attention to disparities in the sequence of decisions. Second, I present a set of context-specific recommendations, which I call "first-and-then-step" recommendations, for the use of predictive models. These recommendations follow from the contextual insights developed in the paper and offer a nuanced, situation-dependent approach to applying predictive models in light of the disparity/error relationship.
I hope this expanded work contributes to the current understanding of fairness in predictive systems and gives researchers, policymakers, and stakeholders a resource for navigating fairness in machine learning applications.
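The comparison of disparity and error rates across two proxy groups that the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the author's method: the toy data, the group labels, and the choice of selection-rate gap as the disparity measure are all assumptions made for the example.

```python
# Hypothetical sketch: per-group selection and error rates for two proxy
# groups, showing that the disparity gap and the error gap are computed
# from the same predictions but measure different things.

def group_rates(y_true, y_pred, group, g):
    """Selection rate and error rate for members of proxy group g."""
    idx = [i for i, x in enumerate(group) if x == g]
    n = len(idx)
    selected = sum(y_pred[i] for i in idx)          # positive decisions
    errors = sum(y_pred[i] != y_true[i] for i in idx)  # misclassifications
    return selected / n, errors / n

# Toy data: binary outcomes, binary predictions, binary proxy-group label.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

sel_a, err_a = group_rates(y_true, y_pred, group, "A")
sel_b, err_b = group_rates(y_true, y_pred, group, "B")

# "Disparity" here is the gap in selection rates (a demographic-parity-style
# measure); the error gap need not shrink when the disparity gap does.
disparity = abs(sel_a - sel_b)
error_gap = abs(err_a - err_b)
print(disparity, error_gap)
```

Even this toy example shows why the tradeoff is not self-evident: which group bears the larger error rate, and in which direction the selection rates diverge, depends on the joint distribution of outcomes and predictions, not on a single scalar knob.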

