doc: Update documentation
kozodoi committed Nov 19, 2020
1 parent 9973be5 commit f46203f
Showing 16 changed files with 77 additions and 32 deletions.
4 changes: 2 additions & 2 deletions DESCRIPTION
@@ -1,10 +1,10 @@
Package: fairness
Title: Algorithmic Fairness Metrics
- Version: 1.1.1
+ Version: 1.2.0
Authors@R: c(person('Nikita', 'Kozodoi', email = 'n.kozodoi@icloud.com', role = c('aut', 'cre')),
person('Tibor', 'V. Varga', email = 'tirgit@hotmail.com', role = c('aut'), comment = c(ORCID = '0000-0002-2383-699X')))
Maintainer: Nikita Kozodoi <n.kozodoi@icloud.com>
- Description: Offers various metrics of algorithmic fairness. Fairness in machine learning is an emerging topic with the overarching aim to critically assess algorithms (predictive and classification models) whether their results reinforce existing social biases. While unfair algorithms can propagate such biases and offer prediction or classification results with a disparate impact on various sensitive subgroups of populations (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities), fair algorithms possess the underlying foundation that these groups should be treated similarly / should have similar outcomes. The fairness R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. These methods are described by Calders and Verwer (2010) <doi:10.1007/s10618-010-0190-x>, Chouldechova (2017) <doi:10.1089/big.2016.0047>, Feldman et al. (2015) <doi:10.1145/2783258.2783311> , Friedler et al. (2018) <doi:10.1145/3287560.3287589> and Zafar et al. (2017) <doi:10.1145/3038912.3052660>. The package also offers convenient visualizations to help understand fairness metrics.
+ Description: Offers calculation, visualization and comparison of algorithmic fairness metrics. Fair machine learning is an emerging topic with the overarching aim to critically assess whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on various sensitive groups of individuals (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities). Fair algorithms possess the underlying foundation that these groups should be treated similarly or have similar prediction outcomes. The fairness R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. These methods are described by Calders and Verwer (2010) <doi:10.1007/s10618-010-0190-x>, Chouldechova (2017) <doi:10.1089/big.2016.0047>, Feldman et al. (2015) <doi:10.1145/2783258.2783311>, Friedler et al. (2018) <doi:10.1145/3287560.3287589> and Zafar et al. (2017) <doi:10.1145/3038912.3052660>. The package also offers convenient visualizations to help understand fairness metrics.
License: MIT + file LICENSE
Language: en-US
Encoding: UTF-8
5 changes: 5 additions & 0 deletions NEWS.md
@@ -1,3 +1,8 @@
+ # fairness 1.2.0
+ - added support for continuous group feature with binning options (see the sketch after this file's diff)
+ - added group size to metric outputs
+ - small updates in documentation

# fairness 1.1.1
- added check and conversion of data to `data.frame` in metric functions
- quiet AUC computation in `roc_parity()`
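To picture the new binning support for continuous group features, here is a minimal sketch of the idea in base R. It bins the feature manually with `cut()` rather than through the package's own binning options, and all column names (`age`, `label`, `prob`) are invented for illustration; consult the package documentation for the actual arguments.

```r
# Sketch: discretizing a continuous group feature (e.g. age) so that
# group-wise fairness metrics can be computed per bin. Column names
# are hypothetical; fairness 1.2.0 exposes its own binning options.
set.seed(42)
df <- data.frame(
  age   = round(runif(500, 18, 80)),  # continuous group feature
  label = rbinom(500, 1, 0.4),        # observed binary outcome
  prob  = runif(500)                  # model-predicted probability
)

# Bin the continuous feature into discrete groups
df$age_group <- cut(df$age, breaks = c(18, 30, 45, 60, 80),
                    include.lowest = TRUE)

# Group sizes, which metric outputs now report as of 1.2.0
table(df$age_group)

# Share of positive predictions per bin at a 0.5 cutoff
tapply(df$prob >= 0.5, df$age_group, mean)
```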
4 changes: 2 additions & 2 deletions R/fairness-package.R
@@ -1,14 +1,14 @@
#' fairness: Algorithmic Fairness Metrics
#'
- #' The \strong{fairness} package offers various metrics of algorithmic fairness. Fairness in machine learning is an emerging topic with the overarching aim to critically assess algorithms (predictive and classification models) whether their results reinforce existing social biases. While unfair algorithms can propagate such biases and offer prediction or classification results with a disparate impact on various sensitive subgroups of populations (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities), fair algorithms possess the underlying foundation that these groups should be treated similarly / should have similar outcomes. The fairness R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. The package also offers convenient visualizations to help understand fairness metrics.
+ #' The \strong{fairness} package offers calculation, visualization and comparison of algorithmic fairness metrics. Fair machine learning is an emerging topic with the overarching aim to critically assess whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on various sensitive groups of individuals (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities). Fair algorithms possess the underlying foundation that these groups should be treated similarly or have similar prediction outcomes. The \strong{fairness} R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. The package also offers convenient visualizations to help understand fairness metrics.
#'
#' @details
#' \tabular{ll}{
#' Package: \tab fairness\cr
#' Depends: \tab R (>= 3.5.0)\cr
#' Type: \tab Package\cr
#' Version: \tab 1.1.1\cr
- #' Date: \tab 2020-07-24\cr
+ #' Date: \tab 2020-11-18\cr
#' License: \tab MIT\cr
#' LazyLoad: \tab Yes
#' }
8 changes: 4 additions & 4 deletions README.md
@@ -18,16 +18,16 @@

## Package overview

- The fairness R package provides tools to easily calculate algorithmic fairness metrics across different sensitive groups based on model predictions in a binary classification task. It also provides opportunities to visualize and compare other prediction metrics between the groups.
+ The fairness R package provides tools to calculate algorithmic fairness metrics across different sensitive groups based on model predictions in a binary classification task. It also provides opportunities to visualize and compare other prediction metrics between the sensitive groups.

- The package contains functions to compute the most commonly used metrics of algorithmic fairness such as:
+ The package contains functions to compute commonly used fair machine learning metrics, such as:

- Demographic parity
- Proportional parity
- Equalized odds
- Predictive rate parity

- In addition, the following comparisons are also implemented:
+ In addition, the following metrics are implemented (a worked example follows the list):

- False positive rate parity
- False negative rate parity
@@ -37,7 +37,7 @@ In addition, the following comparisons are also implemented:
- ROC AUC comparison
- MCC comparison
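
To make these parity notions concrete, here is a hand-computed example in plain R, independent of the package's API; the predictions are invented for illustration.

```r
# Toy predictions for two groups; values invented for illustration.
preds <- c(1, 1, 1, 1, 0, 1, 0, 0, 1, 0)
group <- c('A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B')

# Demographic parity compares positive classifications per group;
# proportional parity compares the rate of positives per group.
pos_count <- tapply(preds, group, sum)   # A: 4, B: 2
pos_rate  <- tapply(preds, group, mean)  # A: 0.8, B: 0.4

# Express each group relative to a base group (here 'A');
# values far from 1 signal a disparity between the groups.
pos_count / pos_count['A']
pos_rate  / pos_rate['A']
```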

- The comprehensive tutorial on using the package is provided in [this blogpost](https://kozodoi.me/r/fairness/packages/2020/05/01/fairness-tutorial.html). We recommend that you go through the blogpost, as it contains a more in-depth description of the fairness package compared to this README. You will also find a brief tutorial in the fairness [vignette](https://github.com/kozodoi/fairness/blob/master/vignettes/fairness.Rmd):
+ A comprehensive tutorial on using the package is provided in [this blogpost](https://kozodoi.me/r/fairness/packages/2020/05/01/fairness-tutorial.html). We recommend that you go through the post, as it contains a more in-depth description of the fairness package than this README. You will also find a brief tutorial in the fairness [vignette](https://github.com/kozodoi/fairness/blob/master/vignettes/fairness.Rmd):

```r
vignette('fairness')
```
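
As a taste of what the tutorial and vignette cover, a typical call to one of the parity functions looks roughly like the following. The sketch follows the blogpost's example with the `compas` data that ships with the package, but the argument and output names are quoted from memory of the tutorial; check `?dem_parity` for the authoritative signature.

```r
library(fairness)

# Sketch of a typical parity call, following the package tutorial;
# argument and column names should be verified against ?dem_parity
# and the shipped compas data.
data('compas')
res <- dem_parity(data    = compas,
                  outcome = 'Two_yr_Recidivism',
                  group   = 'ethnicity',
                  probs   = 'probability',
                  cutoff  = 0.5,
                  base    = 'Caucasian')
res$Metric        # metric values per group, relative to the base group
res$Metric_plot   # accompanying bar chart
```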
7 changes: 5 additions & 2 deletions man/acc_parity.Rd
7 changes: 5 additions & 2 deletions man/dem_parity.Rd
7 changes: 5 additions & 2 deletions man/equal_odds.Rd
4 changes: 2 additions & 2 deletions man/fairness.Rd
7 changes: 5 additions & 2 deletions man/fnr_parity.Rd
7 changes: 5 additions & 2 deletions man/fpr_parity.Rd
7 changes: 5 additions & 2 deletions man/mcc_parity.Rd
7 changes: 5 additions & 2 deletions man/npv_parity.Rd
7 changes: 5 additions & 2 deletions man/pred_rate_parity.Rd
7 changes: 5 additions & 2 deletions man/prop_parity.Rd
14 changes: 12 additions & 2 deletions man/roc_parity.Rd
7 changes: 5 additions & 2 deletions man/spec_parity.Rd

(Generated .Rd files are not rendered in the diff.)
