eval_stats_parity {fairmetrics}    R Documentation

Examine Statistical Parity of a Model

Description

This function assesses statistical parity (also known as demographic parity) in the predictions of a binary classifier across two groups defined by a sensitive attribute. Statistical parity compares the rate at which the groups receive a positive prediction, irrespective of the true outcome. The function reports the Positive Prediction Rate (PPR) for each group, the difference and ratio between them, and bootstrap-based confidence intervals.
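
For intuition, here is a minimal sketch of the quantities involved, using a toy data frame with hypothetical columns grp (the sensitive attribute) and pred_class (predictions already dichotomized at a cutoff). It illustrates the metric itself, not the package's internal code.

# Toy illustration of statistical parity (hypothetical data, not package internals)
toy <- data.frame(
  grp        = rep(c("A", "B"), each = 4),
  pred_class = c(1, 0, 1, 1, 0, 0, 1, 0)
)
ppr <- tapply(toy$pred_class, toy$grp, mean)  # Positive Prediction Rate per group
ppr["A"] - ppr["B"]                           # difference in PPR between groups
ppr["A"] / ppr["B"]                           # ratio of PPRs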

Usage

eval_stats_parity(
  data,
  outcome,
  group,
  probs,
  cutoff = 0.5,
  confint = TRUE,
  bootstraps = 2500,
  alpha = 0.05,
  digits = 2,
  message = TRUE
)

Arguments

data

Data frame containing the outcome, predicted outcome, and sensitive attribute

outcome

Name of the outcome variable; it must be binary

group

Name of the sensitive attribute

probs

Name of the predicted outcome variable

cutoff

Threshold applied to the predicted probabilities (probs) to obtain a binary prediction, default is 0.5

confint

Whether to compute bootstrap confidence intervals (at level 1 - alpha), default is TRUE

bootstraps

Number of bootstrap samples, default is 2500

alpha

The significance level for the bootstrap confidence interval; the reported interval has confidence level 1 - alpha. Default is 0.05 (a 95% interval); see the bootstrap sketch after this argument list.

digits

Number of digits to round the results to, default is 2

message

Logical; if TRUE (default), prints a textual summary of the fairness evaluation. Only works if confint is TRUE.
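
As a rough illustration of how bootstraps and alpha interact, the following is a sketch of a percentile bootstrap confidence interval for the PPR difference, built on hypothetical data; it is not the package's internal implementation.

# Sketch of a percentile bootstrap CI for the PPR difference
# (hypothetical data; not the package's internal implementation)
set.seed(1)
toy <- data.frame(
  grp        = rep(c("A", "B"), each = 50),
  pred_class = rbinom(100, 1, rep(c(0.40, 0.25), each = 50))
)
boot_diff <- replicate(2500, {                      # 'bootstraps' resamples
  b   <- toy[sample(nrow(toy), replace = TRUE), ]   # resample rows with replacement
  ppr <- tapply(b$pred_class, b$grp, mean)          # PPR per group in the resample
  ppr["A"] - ppr["B"]
})
alpha <- 0.05
quantile(boot_diff, c(alpha / 2, 1 - alpha / 2))    # (1 - alpha) percentile interval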

Value

A list containing the elements described above: the Positive Prediction Rate (PPR) for each group, their difference and ratio, and, when confint = TRUE, the corresponding bootstrap confidence intervals.

See Also

eval_cond_stats_parity

Examples


library(fairmetrics)
library(dplyr)
library(magrittr)
library(randomForest)
data("mimic_preprocessed")
set.seed(123)
train_data <- mimic_preprocessed %>%
  dplyr::filter(dplyr::row_number() <= 700)
# Fit a random forest model
rf_model <- randomForest::randomForest(factor(day_28_flg) ~ ., data = train_data, ntree = 1000)
# Test the model on the remaining data
test_data <- mimic_preprocessed %>%
  dplyr::mutate(gender = ifelse(gender_num == 1, "Male", "Female")) %>%
  dplyr::filter(dplyr::row_number() > 700)

# Predicted probability of the positive class (day_28_flg = 1)
test_data$pred <- predict(rf_model, newdata = test_data, type = "prob")[, 2]

# Fairness evaluation
# We will use gender as the sensitive attribute and day_28_flg as the outcome.
# We choose threshold = 0.41 so that the overall FPR is around 5%.

# Evaluate Statistical Parity
eval_stats_parity(
  data = test_data,
  outcome = "day_28_flg",
  group = "gender",
  probs = "pred",
  cutoff = 0.41
)
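
# The interval level and number of bootstrap replicates can be adjusted via
# `alpha` and `bootstraps`; the values below are illustrative.
eval_stats_parity(
  data = test_data,
  outcome = "day_28_flg",
  group = "gender",
  probs = "pred",
  cutoff = 0.41,
  alpha = 0.1,       # 90% confidence intervals
  bootstraps = 1000  # fewer bootstrap replicates
)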


[Package fairmetrics version 1.0.4]