skfair.audit

from skfair.audit import BiasAuditor, FairnessAuditor

BiasAuditor

skfair.audit.BiasAuditor

Analyse data-level disparity before modelling.

Parameters:
  • X (DataFrame) –

    Feature matrix.

  • y (array-like) –

    Binary target labels (0/1).

  • sens_attr (str) –

    Column name of the sensitive attribute in X.

  • priv_group (int or str, default: 1) –

    Value in sens_attr that represents the privileged group.

  • pos_label (int or str, default: 1) –

    Value in y that represents the favourable outcome.

group_proportions()

Population share of each group value.

Returns:
  • DataFrame

    Columns: count, proportion.
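
The computation behind group_proportions() can be sketched in pandas; the toy DataFrame and column names below are illustrative, not part of skfair:

```python
import pandas as pd

# Toy feature matrix with a sensitive attribute column (illustrative only)
X = pd.DataFrame({
    "age": [25, 32, 47, 51, 38, 29],
    "gender": [1, 0, 1, 1, 0, 0],  # 1 = privileged group
})

# Share of each sensitive-attribute value: count and proportion per group
counts = X["gender"].value_counts().sort_index()
props = pd.DataFrame({"count": counts, "proportion": counts / counts.sum()})
print(props)
```

This mirrors the documented return shape: one row per group value, with count and proportion columns.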

plot_feature_distribution(feature, **kwargs)

Histogram of feature split by the sensitive attribute.

Parameters:
  • feature (str) –

    Column name in X to visualise.

Returns:
  • (fig, ax)

plot_group_proportions(**kwargs)

Bar chart of group proportions.

Returns:
  • (fig, ax)

plot_summary(features=None)

Display all bias plots at once.

Parameters:
  • features (list of str, default: None) –

    Features to plot distributions for. Defaults to all numeric columns in X excluding the sensitive attribute.

Returns:
  • list of (fig, ax)
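
The default feature selection described above (all numeric columns in X except the sensitive attribute) can be sketched in pandas; the DataFrame here is illustrative:

```python
import pandas as pd

X = pd.DataFrame({
    "age": [25, 32, 47],
    "income": [30_000, 52_000, 61_000],
    "gender": [1, 0, 1],      # sensitive attribute (numeric, so excluded explicitly)
    "city": ["A", "B", "A"],  # non-numeric, dropped by the dtype filter
})
sens_attr = "gender"

# Numeric columns minus the sensitive attribute
features = [c for c in X.select_dtypes(include="number").columns if c != sens_attr]
print(features)  # -> ['age', 'income']
```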

plot_target_rates(**kwargs)

Bar chart of positive-outcome rate per group.

Returns:
  • (fig, ax)

target_rate_by_group()

Positive-outcome rate for each group value.

Returns:
  • DataFrame

    Columns: count, positive_rate.
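
A pandas sketch of what target_rate_by_group() computes, using illustrative data (the column names are mine, not part of skfair):

```python
import pandas as pd

# Illustrative binary target and sensitive attribute
df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0],
    "y":      [1, 1, 0, 1, 0, 0],
})
pos_label = 1

# Count and positive-outcome rate per group value
out = df.groupby("gender")["y"].agg(
    count="size",
    positive_rate=lambda s: (s == pos_label).mean(),
)
print(out)
```

Comparing the two rows of positive_rate is the data-level disparity this auditor surfaces before any model is trained.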


FairnessAuditor

skfair.audit.FairnessAuditor

Audit fairness of model predictions across groups.

Parameters:
  • y_true (array-like) –

    Ground-truth binary labels (0/1).

  • y_pred (array-like) –

    Predicted binary labels (0/1).

  • sens_attr (array-like) –

    Binary group indicator aligned with y_true / y_pred. The privileged group is identified by priv_group.

  • priv_group (int or str, default: 1) –

    Value in sens_attr that represents the privileged group.

  • pos_label (int or str, default: 1) –

    Value that represents the favourable outcome.

fairness_metrics()

Compute all fairness metrics.

Returns:
  • DataFrame

    Single-column DataFrame (value) indexed by metric name.
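
The exact metric set is not enumerated here, so as an illustration only, two widely used fairness metrics can be computed from y_pred and sens_attr as follows (all names and data are mine, not part of skfair):

```python
import numpy as np

# Illustrative predictions and group membership (1 = privileged)
y_pred    = np.array([1, 1, 0, 1, 0, 0, 1, 0])
sens_attr = np.array([1, 1, 1, 1, 0, 0, 0, 0])
priv_group, pos_label = 1, 1

priv = sens_attr == priv_group
rate_priv   = (y_pred[priv]  == pos_label).mean()
rate_unpriv = (y_pred[~priv] == pos_label).mean()

# Two common fairness metrics (an illustrative subset; the auditor may compute more)
stat_parity_diff = rate_unpriv - rate_priv  # difference-based, ideal = 0
disparate_impact = rate_unpriv / rate_priv  # ratio-based, ideal = 1
print(stat_parity_diff, disparate_impact)
```

Difference-based metrics have an ideal of 0 and ratio-based metrics an ideal of 1, which is the distinction the radar plot's mode parameter relies on.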

performance_by_group()

Per-group performance metrics.

Returns:
  • DataFrame

    Rows = metric names, columns = unprivileged, privileged.
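
The documentation does not list which performance metrics are reported, so the sketch below assumes a typical set (accuracy, recall, precision); the helper name and data are illustrative, not part of skfair:

```python
import numpy as np

y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred    = np.array([1, 0, 0, 1, 0, 1, 1, 0])
sens_attr = np.array([1, 1, 1, 1, 0, 0, 0, 0])

def perf(mask):
    """Basic binary-classification metrics for one group (assumed metric set)."""
    yt, yp = y_true[mask], y_pred[mask]
    tp = ((yt == 1) & (yp == 1)).sum()
    return {
        "accuracy":  (yt == yp).mean(),
        "recall":    tp / max((yt == 1).sum(), 1),
        "precision": tp / max((yp == 1).sum(), 1),
    }

by_group = {"privileged": perf(sens_attr == 1), "unprivileged": perf(sens_attr == 0)}
print(by_group)
```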

plot_fairness_metrics(**kwargs)

Horizontal bar chart with colour-coded fairness metrics.

Parameters:
  • fair_threshold (float, default: 0.1) –

    Maximum distance from the ideal value for a metric to be coloured green.

  • warning_threshold (float, default: 0.2) –

    Maximum distance from the ideal value for a metric to be coloured orange.

  • **kwargs –

    Forwarded to _plot_metric_bars.

Returns:
  • (fig, ax)

plot_fairness_radar(mode='ratio', **kwargs)

Radar (spider) chart of fairness metrics.

Parameters:
  • mode ({"ratio", "difference", "all"}, default: "ratio") –

    Which metric subset to plot: "ratio" for ratio-based (ideal = 1), "difference" for difference-based (ideal = 0), "all" for every metric.

    The default is "ratio" because ratio metrics share a common ideal of 1, making the radar shape directly interpretable. Difference metrics (ideal = 0) collapse toward the centre and can go negative, which distorts the polar plot.

    Values are normalised for the radar: ratio metrics use min(v, 1/v) so over- and under-representation are symmetric; difference metrics use absolute values.

Returns:
  • (fig, ax)
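
The normalisation rule described above can be written out directly; the function names are mine, not part of skfair:

```python
# Radar normalisation as documented:
# ratio metrics -> min(v, 1/v), so over- and under-representation plot symmetrically;
# difference metrics -> |v|, so negative differences do not leave the polar plot.
def normalise_ratio(v):
    return min(v, 1 / v)

def normalise_difference(v):
    return abs(v)

print(normalise_ratio(2.0), normalise_ratio(0.5))  # both 0.5: symmetric
print(normalise_difference(-0.3))                  # 0.3
```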

plot_performance_by_group(**kwargs)

Grouped bar chart of per-group performance metrics.

Returns:
  • (fig, ax)

plot_summary()

Display all fairness plots at once.

Returns:
  • list of (fig, ax)