skfair.metrics

from skfair.metrics import <function_name>

Fairness metrics

skfair.metrics.disparate_impact(y_true, y_pred, sensitive_attr)

Ratio of positive prediction rates (unprivileged / privileged).

.. math:: DI = \frac{P(\hat{Y}=1 \mid S=0)}{P(\hat{Y}=1 \mid S=1)}

A value of 1.0 indicates perfect fairness. Under the 80% rule (the four-fifths rule), values below 0.8 are commonly flagged as evidence of adverse impact.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
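
The ratio above depends only on `y_pred` and `sensitive_attr` (`y_true` appears in the documented signature, presumably for API uniformity). A minimal NumPy sketch of the computation — the helper name is mine, not skfair's:

```python
import numpy as np

def disparate_impact_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative DI = P(Yhat=1 | S=0) / P(Yhat=1 | S=1)."""
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive_attr)
    rate_unpriv = y_pred[s == 0].mean()  # positive prediction rate, unprivileged
    rate_priv = y_pred[s == 1].mean()    # positive prediction rate, privileged
    return rate_unpriv / rate_priv       # y_true is unused by the formula
```

For example, if the unprivileged group receives positive predictions half as often as the privileged group, the ratio is 0.5, below the 0.8 threshold.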

skfair.metrics.statistical_parity_difference(y_true, y_pred, sensitive_attr)

Difference in positive prediction rates (unprivileged - privileged).

.. math:: SPD = P(\hat{Y}=1 \mid S=0) - P(\hat{Y}=1 \mid S=1)

A value of 0 indicates perfect fairness. Negative values indicate the unprivileged group receives positive predictions at a lower rate than the privileged group.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
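
SPD is the additive counterpart of disparate impact: the same two group rates, subtracted rather than divided. A sketch of the computation (the function name is mine, not skfair's):

```python
import numpy as np

def statistical_parity_difference_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative SPD = P(Yhat=1 | S=0) - P(Yhat=1 | S=1)."""
    y_pred = np.asarray(y_pred)
    s = np.asarray(sensitive_attr)
    # Difference of group-wise positive prediction rates
    return y_pred[s == 0].mean() - y_pred[s == 1].mean()
```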

skfair.metrics.equal_opportunity_difference(y_true, y_pred, sensitive_attr)

Difference in true positive rates (unprivileged - privileged).

.. math:: EOD = TPR_{\text{unpriv}} - TPR_{\text{priv}}

A value of 0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
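
Unlike SPD, this metric conditions on the true label: only examples with `y_true == 1` contribute. A NumPy sketch (helper names are mine; the zero-positive fallback of 0.0 mirrors the convention documented for `true_positive_rate` below):

```python
import numpy as np

def _tpr(y_true, y_pred):
    """TP / (TP + FN); 0.0 when the group has no actual positives."""
    pos = y_true == 1
    return y_pred[pos].mean() if pos.any() else 0.0

def equal_opportunity_difference_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative EOD = TPR_unpriv - TPR_priv."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive_attr))
    tpr_unpriv = _tpr(y_true[s == 0], y_pred[s == 0])
    tpr_priv = _tpr(y_true[s == 1], y_pred[s == 1])
    return tpr_unpriv - tpr_priv
```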

skfair.metrics.equal_opportunity_ratio(y_true, y_pred, sensitive_attr)

Ratio of true positive rates (unprivileged / privileged).

.. math:: EOR = \frac{TPR_{\text{unpriv}}}{TPR_{\text{priv}}}

A value of 1.0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float

skfair.metrics.average_odds_difference(y_true, y_pred, sensitive_attr)

Average of FPR difference and TPR difference across groups.

.. math:: AOD = \frac{1}{2}\left[(FPR_{\text{unpriv}} - FPR_{\text{priv}}) + (TPR_{\text{unpriv}} - TPR_{\text{priv}})\right]

A value of 0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
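
AOD combines two conditional rate gaps: the FPR gap (computed on actual negatives) and the TPR gap (computed on actual positives). A sketch that makes this symmetry explicit (function names are mine, not skfair's):

```python
import numpy as np

def _rate(preds):
    """Mean prediction over a subset; 0.0 for an empty subset."""
    return preds.mean() if preds.size else 0.0

def average_odds_difference_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative AOD = 0.5 * [(FPR_u - FPR_p) + (TPR_u - TPR_p)]."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive_attr))
    diffs = []
    for actual in (0, 1):  # actual=0 gives the FPR term, actual=1 the TPR term
        unpriv = y_pred[(s == 0) & (y_true == actual)]
        priv = y_pred[(s == 1) & (y_true == actual)]
        diffs.append(_rate(unpriv) - _rate(priv))
    return 0.5 * sum(diffs)
```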

skfair.metrics.true_negative_rate_difference(y_true, y_pred, sensitive_attr)

Difference in true negative rates (unprivileged - privileged).

.. math:: TNRD = TNR_{\text{unpriv}} - TNR_{\text{priv}}

A value of 0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float

skfair.metrics.false_negative_rate_difference(y_true, y_pred, sensitive_attr)

Difference in false negative rates (unprivileged - privileged).

.. math:: FNRD = FNR_{\text{unpriv}} - FNR_{\text{priv}}

A value of 0 indicates perfect fairness. Positive values indicate the unprivileged group has a higher false negative rate, i.e. more of its actual positives are missed.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float

skfair.metrics.predictive_equality(y_true, y_pred, sensitive_attr)

Ratio of false positive rates (unprivileged / privileged).

.. math:: PE = \frac{FPR_{\text{unpriv}}}{FPR_{\text{priv}}}

A value of 1.0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
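
A sketch of the FPR ratio (helper names are mine). How skfair handles a zero privileged-group FPR is not stated in this page, so the division-by-zero case is flagged rather than resolved here:

```python
import numpy as np

def _fpr(y_true, y_pred):
    """FP / (FP + TN); 0.0 when there are no actual negatives."""
    neg = y_true == 0
    return y_pred[neg].mean() if neg.any() else 0.0

def predictive_equality_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative PE = FPR_unpriv / FPR_priv."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive_attr))
    fpr_unpriv = _fpr(y_true[s == 0], y_pred[s == 0])
    fpr_priv = _fpr(y_true[s == 1], y_pred[s == 1])
    return fpr_unpriv / fpr_priv  # undefined when the privileged FPR is 0
```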

skfair.metrics.accuracy_parity(y_true, y_pred, sensitive_attr)

Ratio of accuracies (unprivileged / privileged).

.. math:: AP = \frac{Acc_{\text{unpriv}}}{Acc_{\text{priv}}}

A value of 1.0 indicates perfect fairness.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

  • sensitive_attr (ndarray) –

    Binary group indicator (1 = privileged, 0 = unprivileged).

Returns:
  • float
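
Accuracy parity compares overall correctness per group rather than any single confusion-matrix cell. A sketch (the function name is mine, not skfair's):

```python
import numpy as np

def accuracy_parity_sketch(y_true, y_pred, sensitive_attr):
    """Illustrative AP = Acc_unpriv / Acc_priv."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive_attr))
    acc_unpriv = (y_true[s == 0] == y_pred[s == 0]).mean()  # group-wise accuracy
    acc_priv = (y_true[s == 1] == y_pred[s == 1]).mean()
    return acc_unpriv / acc_priv
```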

Performance metrics

skfair.metrics.accuracy(y_true, y_pred)

Compute accuracy.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    (TP + TN) / (TP + TN + FP + FN)


skfair.metrics.true_positive_rate(y_true, y_pred)

Compute true positive rate (recall / sensitivity).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    TP / (TP + FN), or 0.0 if no actual positives.


skfair.metrics.false_positive_rate(y_true, y_pred)

Compute false positive rate (fall-out).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    FP / (FP + TN), or 0.0 if no actual negatives.


skfair.metrics.true_negative_rate(y_true, y_pred)

Compute true negative rate (specificity).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    TN / (TN + FP), or 0.0 if no actual negatives.


skfair.metrics.false_negative_rate(y_true, y_pred)

Compute false negative rate (miss rate).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    FN / (TP + FN), or 0.0 if no actual positives.


skfair.metrics.balanced_accuracy(y_true, y_pred)

Compute balanced accuracy.

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    (TPR + TNR) / 2
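
The four rates above (TPR, FPR, TNR, FNR) and balanced accuracy all derive from the same four confusion-matrix counts, with 0.0 fallbacks when a class is absent, as documented. A combined NumPy sketch (the function name is mine, not skfair's):

```python
import numpy as np

def confusion_rates(y_true, y_pred):
    """All four rates plus balanced accuracy from confusion-matrix counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    tpr = tp / (tp + fn) if tp + fn else 0.0  # 0.0 if no actual positives
    fnr = fn / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0  # 0.0 if no actual negatives
    tnr = tn / (fp + tn) if fp + tn else 0.0
    return {"tpr": tpr, "fpr": fpr, "tnr": tnr, "fnr": fnr,
            "balanced_accuracy": (tpr + tnr) / 2}
```

Note that TPR + FNR = 1 and FPR + TNR = 1 whenever the respective class is non-empty.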


skfair.metrics.precision(y_true, y_pred)

Compute precision (positive predictive value).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    TP / (TP + FP), or 0.0 if no predicted positives.


skfair.metrics.recall

Module-level alias of true_positive_rate.


skfair.metrics.f1_score(y_true, y_pred)

Compute F1 score (harmonic mean of precision and recall).

Parameters:
  • y_true (ndarray) –

    Ground-truth binary labels (0/1).

  • y_pred (ndarray) –

    Predicted binary labels (0/1).

Returns:
  • float

    2 * precision * recall / (precision + recall), or 0.0 if both are 0.
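
The 0.0 fallback avoids a division by zero when a classifier makes no true positive predictions at all. A sketch of the full precision/recall/F1 chain (the function name is mine, not skfair's):

```python
import numpy as np

def f1_sketch(y_true, y_pred):
    """Illustrative F1 = 2PR / (P + R), with the documented 0.0 fallbacks."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0  # 0.0 if no predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0     # 0.0 if no actual positives
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```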