Source code/webpage/demos for the What-If Tool
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
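To illustrate the kind of group-fairness metric that toolkits such as AI Fairness 360 compute, here is a minimal sketch in plain Python of statistical parity difference — the gap in positive-prediction rates between an unprivileged and a privileged group. The function name and the toy data are illustrative, not part of any toolkit's API.

```python
def statistical_parity_difference(y_pred, groups, privileged=1):
    """P(y_pred = 1 | unprivileged) - P(y_pred = 1 | privileged).

    A value of 0 indicates parity; negative values mean the
    unprivileged group receives fewer positive predictions.
    """
    priv = [y for y, g in zip(y_pred, groups) if g == privileged]
    unpriv = [y for y, g in zip(y_pred, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy example: group 1 (privileged) gets 3/4 positives,
# group 0 (unprivileged) gets 1/4.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, groups))  # -0.5
```

In practice AI Fairness 360 wraps metrics like this (and many others) around dataset and classifier objects, and pairs them with mitigation algorithms applied before, during, or after training.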
Deep-learning approach for generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.
Tools to assess fairness and mitigate unfairness in sociolinguistic auto-coding