skfair.metrics
from skfair.metrics import <function_name>
Fairness metrics
skfair.metrics.disparate_impact(y_true, y_pred, sensitive_attr)
Ratio of positive prediction rates (unprivileged / privileged).
.. math:: DI = \frac{P(\hat{Y}=1 \mid S=0)}{P(\hat{Y}=1 \mid S=1)}
A value of 1.0 indicates perfect fairness. Under the four-fifths (80%) rule, values below 0.8 indicate adverse impact.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the disparate impact ratio. |
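To make the ratio concrete, here is a hand computation on toy 0/1 lists. It mirrors the formula above only; it is not the library's implementation.

```python
# Toy predictions; sensitive_attr: 0 = unprivileged, 1 = privileged
y_pred = [1, 0, 1, 1, 1, 1, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]

def positive_rate(group):
    # P(pred = 1 | S = group)
    preds = [p for p, g in zip(y_pred, s) if g == group]
    return sum(preds) / len(preds)

di = positive_rate(0) / positive_rate(1)  # 0.75 / 1.0 = 0.75 -> fails the 80% rule
```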
skfair.metrics.statistical_parity_difference(y_true, y_pred, sensitive_attr)
Difference in positive prediction rates (unprivileged - privileged).
.. math:: SPD = P(\hat{Y}=1 \mid S=0) - P(\hat{Y}=1 \mid S=1)
A value of 0 indicates perfect fairness. Negative values indicate the unprivileged group is disadvantaged.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the statistical parity difference. |
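A hand computation of the difference form on toy 0/1 lists (an illustration of the definition, not the library's internals):

```python
# Toy predictions; sensitive_attr: 0 = unprivileged, 1 = privileged
y_pred = [1, 0, 1, 1, 1, 1, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]

def positive_rate(group):
    # P(pred = 1 | S = group)
    preds = [p for p, g in zip(y_pred, s) if g == group]
    return sum(preds) / len(preds)

spd = positive_rate(0) - positive_rate(1)  # 0.75 - 1.0 = -0.25
```

The negative value signals that the unprivileged group receives positive predictions less often.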
skfair.metrics.equal_opportunity_difference(y_true, y_pred, sensitive_attr)
Difference in true positive rates (unprivileged - privileged).
.. math:: EOD = TPR_{\text{unpriv}} - TPR_{\text{priv}}
A value of 0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the equal opportunity difference. |
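A worked example that computes the per-group true positive rates by hand (a sketch of the formula, not the library source):

```python
y_true = [1, 1, 0, 1, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 1, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def tpr(group):
    # P(pred = 1 | true = 1, S = group)
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == 1]
    return sum(preds) / len(preds)

eod = tpr(0) - tpr(1)  # 2/3 - 1.0 = -1/3
```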
skfair.metrics.equal_opportunity_ratio(y_true, y_pred, sensitive_attr)
Ratio of true positive rates (unprivileged / privileged).
.. math:: EOR = \frac{TPR_{\text{unpriv}}}{TPR_{\text{priv}}}
A value of 1.0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the equal opportunity ratio. |
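The ratio form, computed by hand on toy data (illustrative only; not the library's implementation):

```python
y_true = [1, 1, 0, 1, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 1, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def tpr(group):
    # P(pred = 1 | true = 1, S = group)
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == 1]
    return sum(preds) / len(preds)

eor = tpr(0) / tpr(1)  # (2/3) / 1.0
```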
skfair.metrics.average_odds_difference(y_true, y_pred, sensitive_attr)
Average of FPR difference and TPR difference across groups.
.. math:: AOD = 0.5 \times \left[(FPR_{\text{unpriv}} - FPR_{\text{priv}}) + (TPR_{\text{unpriv}} - TPR_{\text{priv}})\right]
A value of 0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the average odds difference. |
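Since AOD averages two per-group gaps, a sketch that computes both by hand may help (this follows the formula, not the library's internals):

```python
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 0, 1, 0, 1, 1, 0, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def rate(group, label):
    # P(pred = 1 | true = label, S = group): TPR when label=1, FPR when label=0
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == label]
    return sum(preds) / len(preds)

fpr_gap = rate(0, 0) - rate(1, 0)  # 0.5 - 0.0
tpr_gap = rate(0, 1) - rate(1, 1)  # 0.0 - 1.0
aod = 0.5 * (fpr_gap + tpr_gap)    # -0.25
```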
skfair.metrics.true_negative_rate_difference(y_true, y_pred, sensitive_attr)
Difference in true negative rates (unprivileged - privileged).
.. math:: TNRD = TNR_{\text{unpriv}} - TNR_{\text{priv}}
A value of 0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the true negative rate difference. |
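A hand computation of the per-group true negative rates on toy data (a sketch of the definition only):

```python
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 1, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def tnr(group):
    # P(pred = 0 | true = 0, S = group)
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == 0]
    return sum(1 - p for p in preds) / len(preds)

tnrd = tnr(0) - tnr(1)  # 1/3 - 2/3 = -1/3
```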
skfair.metrics.false_negative_rate_difference(y_true, y_pred, sensitive_attr)
Difference in false negative rates (unprivileged - privileged).
.. math:: FNRD = FNR_{\text{unpriv}} - FNR_{\text{priv}}
A value of 0 indicates perfect fairness. Positive values indicate the unprivileged group has higher false negative rates.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the false negative rate difference. |
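To see the sign convention in action, a hand computation on toy data (illustrative, not the library source):

```python
y_true = [1, 1, 1, 0, 1, 1, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def fnr(group):
    # P(pred = 0 | true = 1, S = group)
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == 1]
    return sum(1 - p for p in preds) / len(preds)

fnrd = fnr(0) - fnr(1)  # 2/3 - 1/3 = 1/3 -> unprivileged positives are missed more often
```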
skfair.metrics.predictive_equality(y_true, y_pred, sensitive_attr)
Ratio of false positive rates (unprivileged / privileged).
.. math:: PE = \frac{FPR_{\text{unpriv}}}{FPR_{\text{priv}}}
A value of 1.0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the false positive rate ratio. |
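A hand computation of the false positive rate ratio on toy data (a sketch of the formula, not the library's implementation):

```python
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 1]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def fpr(group):
    # P(pred = 1 | true = 0, S = group)
    preds = [p for t, p, g in zip(y_true, y_pred, s) if g == group and t == 0]
    return sum(preds) / len(preds)

pe = fpr(0) / fpr(1)  # (2/3) / (1/3) = 2.0 -> unprivileged group is falsely flagged twice as often
```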
skfair.metrics.accuracy_parity(y_true, y_pred, sensitive_attr)
Ratio of accuracies (unprivileged / privileged).
.. math:: AP = \frac{Acc_{\text{unpriv}}}{Acc_{\text{priv}}}
A value of 1.0 indicates perfect fairness.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. `sensitive_attr` (array-like) – group indicator, 0 for the unprivileged group and 1 for the privileged group. |
|---|---|
| Returns: | `float` – the accuracy ratio. |
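A hand computation of the per-group accuracies on toy data (illustrating the definition only):

```python
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = unprivileged, 1 = privileged

def acc(group):
    pairs = [(t, p) for t, p, g in zip(y_true, y_pred, s) if g == group]
    return sum(t == p for t, p in pairs) / len(pairs)

ap = acc(0) / acc(1)  # 0.5 / 1.0 = 0.5
```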
Performance metrics
skfair.metrics.accuracy(y_true, y_pred)
Compute accuracy.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the fraction of correct predictions. |
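For reference, accuracy reduces to a one-liner on 0/1 lists (illustrative only):

```python
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]
acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 3/4
```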
skfair.metrics.true_positive_rate(y_true, y_pred)
Compute true positive rate (recall / sensitivity).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the true positive rate. |
skfair.metrics.false_positive_rate(y_true, y_pred)
Compute false positive rate (fall-out).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the false positive rate. |
skfair.metrics.true_negative_rate(y_true, y_pred)
Compute true negative rate (specificity).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the true negative rate. |
skfair.metrics.false_negative_rate(y_true, y_pred)
Compute false negative rate (miss rate).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the false negative rate. |
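The four rates above all derive from the confusion matrix. A single hand computation shows all of them, along with the identities TPR + FNR = 1 and TNR + FPR = 1 (illustrative only, not the library source):

```python
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

# Confusion matrix counts
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # 2

tpr = tp / (tp + fn)  # 2/3, recall / sensitivity
fpr = fp / (fp + tn)  # 1/3, fall-out
tnr = tn / (tn + fp)  # 2/3, specificity
fnr = fn / (fn + tp)  # 1/3, miss rate
```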
skfair.metrics.balanced_accuracy(y_true, y_pred)
Compute balanced accuracy.
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the balanced accuracy (mean of TPR and TNR). |
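Balanced accuracy averages the true positive and true negative rates, which makes it robust to class imbalance. A hand computation on a skewed toy set shows how it diverges from plain accuracy (a sketch, not the library's implementation):

```python
# Imbalanced toy data: 5 positives, 1 negative; the classifier predicts all 1s
y_true = [1, 1, 1, 1, 1, 0]
y_pred = [1, 1, 1, 1, 1, 1]

acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 5/6

pos = [p for t, p in zip(y_true, y_pred) if t == 1]
neg = [p for t, p in zip(y_true, y_pred) if t == 0]
tpr = sum(pos) / len(pos)          # 1.0: every positive is recovered
tnr = sum(1 - p for p in neg) / len(neg)  # 0.0: the single negative is misclassified

bal_acc = 0.5 * (tpr + tnr)  # 0.5, despite plain accuracy of ~0.83
```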
skfair.metrics.precision(y_true, y_pred)
Compute precision (positive predictive value).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the precision. |
skfair.metrics.recall = true_positive_rate
module-attribute
Alias of true_positive_rate.
skfair.metrics.f1_score(y_true, y_pred)
Compute F1 score (harmonic mean of precision and recall).
| Parameters: | `y_true` (array-like) – ground-truth binary labels. `y_pred` (array-like) – predicted binary labels. |
|---|---|
| Returns: | `float` – the F1 score. |
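Precision, recall, and F1 fit together in one hand computation (illustrating the definitions, not the library source):

```python
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

precision = tp / (tp + fp)  # 2/3
recall    = tp / (tp + fn)  # 2/3
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean: 2/3
```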