Socially acceptable AI and fairness trade-offs in predictive analytics
Description
Fairness and non-discrimination are basic requirements for socially acceptable implementations of AI, as they are basic values of our society. However, the relationship between statistical fairness concepts, the fairness perceptions of human stakeholders, and principles discussed in philosophical ethics is not well understood. The objective of our project is to develop a methodology that facilitates fairness-by-design approaches for AI-based decision-making systems. The core of this methodology is the “Fairness Lab”, an IT environment for understanding, explaining and visualizing the fairness implications of an ML-based decision system. It will help companies build socially accepted and ethically justifiable AI applications, teach fairness concepts to students and developers, and support informed political decisions on regulating AI-based decision making. Conceptually, we integrate statistical approaches from computer science and philosophical theories of justice and discrimination into interdisciplinary theories of predictive fairness. Through empirical research, we study the fairness perceptions of different stakeholders in order to align the theoretical approach with them. The utility of the Fairness Lab as a tool for helping to create “fairer” applications will be assessed in the context of participatory design. With respect to application areas, we focus on employment and education. Our project makes a significant contribution to the understanding of fairness in the digital transformation and promotes improved conditions for the deployment of fair and socially accepted AI.
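To give a concrete flavour of the statistical fairness concepts the Fairness Lab is meant to visualize, the sketch below computes two widely used group fairness metrics (demographic parity difference and equal opportunity difference) for a binary decision system. This is a hypothetical illustration with made-up toy data, not project code; function names and the hiring example are our own assumptions.

```python
# Illustrative sketch (not project code): two common group fairness
# metrics for a binary decision system, e.g. a hiring decision.

def demographic_parity_diff(decisions_a, decisions_b):
    """Difference in positive-decision rates between groups A and B."""
    rate_a = sum(decisions_a) / len(decisions_a)
    rate_b = sum(decisions_b) / len(decisions_b)
    return rate_a - rate_b

def equal_opportunity_diff(decisions_a, labels_a, decisions_b, labels_b):
    """Difference in true-positive rates, i.e. positive-decision rates
    among the truly qualified members of each group."""
    tpr_a = sum(d for d, y in zip(decisions_a, labels_a) if y) / sum(labels_a)
    tpr_b = sum(d for d, y in zip(decisions_b, labels_b) if y) / sum(labels_b)
    return tpr_a - tpr_b

# Toy hiring data: 1 = hired (decisions) / qualified (labels), 0 = not.
group_a = [1, 1, 0, 1, 0, 1]   # decisions for group A
group_b = [1, 0, 0, 1, 0, 0]   # decisions for group B
qual_a  = [1, 1, 0, 1, 1, 1]   # true qualification, group A
qual_b  = [1, 1, 0, 1, 0, 1]   # true qualification, group B

print(demographic_parity_diff(group_a, group_b))                   # hiring-rate gap
print(equal_opportunity_diff(group_a, qual_a, group_b, qual_b))    # TPR gap
```

A gap of zero on a metric means the two groups are treated equally by that criterion; the project's point is precisely that different metrics can disagree and that their choice needs ethical justification.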
Key data
Project lead
Co-project lead
Dr. Ulrich Leicht-Deobald, Dr. Michele Loi
Project team
Corinna Hertweck, Dr. Markus Christen
Project partners
Universität Zürich / Digital Society Initiative; Universität St. Gallen
Project status
ongoing, started 06/2020
Institute/Centre
Institut für Datenanalyse und Prozessdesign (IDP)
Funding body
NFP 77 «Digitale Transformation» / project no. 187473
Project budget
CHF 619'808
Publications
- Thouvenin, Florent; Volz, Stephanie; Weiner, Soraya; Heitz, Christoph (2024). Diskriminierung beim Einsatz von Künstlicher Intelligenz (KI): technische Grundlagen für Rechtsanwendung und Rechtsentwicklung.
- Scantamburlo, Teresa; Baumann, Joachim; Heitz, Christoph (2024). On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model.
- Loi, Michele; Heitz, Christoph (2024). Is calibration a fairness requirement? An argument from the point of view of moral philosophy and decision theory.
- Baumann, Joachim; Sapiezynski, Piotr; Heitz, Christoph; Hannák, Anikó (2024). Fairness in online ad delivery.
- Baumann, Joachim; Loi, Michele (2023). Fairness and risk: an ethical argument for a group fairness definition insurers can use.
- Baumann, Joachim; Castelnovo, Alessandro; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele (2023). Bias on demand: a modelling framework that generates synthetic data with bias.
- Baumann, Joachim; Castelnovo, Alessandro; Cosentini, Andrea; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele (2023). Bias on demand: investigating bias with a synthetic data generator.
- Hertweck, Corinna; Räz, Tim (2022). Gradual (in)compatibility of fairness criteria.
- Hertweck, Corinna; Heitz, Christoph (2021). A systematic approach to group fairness in automated decision making.
- Heitz, Christoph (2021). Digitale Transformation: wie fair sind Algorithmen?