Visualization report for fairness method comparison results.
**Parameters:**

- `results_df` (`DataFrame`) – DataFrame with columns `dataset`, `method`, `classifier`, plus one column per metric (e.g. `accuracy`, `spd`). Optional `{metric}_std` columns are preserved but not plotted.
- `metrics` (`list of str`, default: `None`) – Explicit list of metric column names to use. When `None`, metrics are auto-detected from the DataFrame columns.
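For illustration, a minimal `results_df` with this shape could be built as follows (the dataset, method, and classifier names are hypothetical; the metric values are made up):

```python
import pandas as pd

# One row per (dataset, method, classifier), one column per metric.
results_df = pd.DataFrame({
    "dataset":    ["adult", "adult", "compas", "compas"],
    "method":     ["baseline", "reweighing", "baseline", "reweighing"],
    "classifier": ["logreg"] * 4,
    "accuracy":   [0.85, 0.84, 0.68, 0.67],
    "spd":        [0.18, 0.05, 0.22, 0.09],
})
```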
Examples:
>>> report = ComparisonReport(results_df)
>>> report.plot_metric_bar(metric="accuracy")
>>> report.plot_metric_bar(metric="spd")
>>> tables = report.summary_tables()
__init__(results_df, metrics=None, datasets=None, methods=None, classifiers=None)
**Parameters:**

- `results_df` (`DataFrame`) – DataFrame with columns `dataset`, `method`, `classifier`, plus one column per metric.
- `metrics` (`list of str`, default: `None`) – Explicit list of metric column names to use. Auto-detected when `None`.
- `datasets` (`list of str`, default: `None`) – Keep only these datasets. `None` keeps all.
- `methods` (`list of str`, default: `None`) – Keep only these methods. `None` keeps all.
- `classifiers` (`list of str`, default: `None`) – Keep only these classifiers. `None` keeps all.
plot_all(datasets=None, fairness_metric='spd', methods=None, classifiers=None)
Run all plot methods and return a list of (fig, axes).
**Parameters:**

- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `fairness_metric` (`str`, default: `'spd'`) – Fairness metric for the tradeoff plot.
- `methods` (`list of str`, default: `None`) – Methods to include. `None` uses all.
- `classifiers` (`list of str`, default: `None`) – Classifiers to include. `None` uses all.

**Returns:**

- list of `(fig, axes)` tuples
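A minimal sketch of consuming a list of `(fig, axes)` tuples such as `plot_all` returns, using stand-in figures since only matplotlib is assumed here:

```python
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend; safe for scripts and CI
import matplotlib.pyplot as plt

# Stand-ins for the (fig, axes) tuples that plot_all returns.
figures = [plt.subplots() for _ in range(2)]

# Render each figure to an in-memory PNG before, e.g., writing it to disk.
buffers = []
for fig, axes in figures:
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    buffers.append(buf)
    plt.close(fig)  # release the figure once rendered
```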
plot_metric_bar(metric=None, datasets=None, methods=None, classifiers=None, reference_line='auto', **kw)
Grouped bar chart for a single metric across datasets.
**Parameters:**

- `metric` (`str`, default: `None`) – Metric to plot. Defaults to the first performance metric.
- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `methods` (`list of str`, default: `None`) – Methods to include. `None` uses all.
- `classifiers` (`list of str`, default: `None`) – Classifiers to include. `None` uses all.
- `reference_line` (`float`, `"auto"`, or `None`, default: `'auto'`) – `"auto"` adds reference lines for fairness metrics.
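The data behind such a grouped bar chart is a dataset-by-method pivot of one metric. A sketch with illustrative values (not this method's internals):

```python
import pandas as pd

results_df = pd.DataFrame({
    "dataset":    ["adult", "adult", "compas", "compas"],
    "method":     ["baseline", "reweighing"] * 2,
    "classifier": ["logreg"] * 4,
    "accuracy":   [0.85, 0.84, 0.68, 0.67],
})

# Datasets on the x-axis, one bar per method.
bars = results_df.pivot_table(index="dataset", columns="method", values="accuracy")
```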
plot_ranking(metrics=None, datasets=None, higher_is_better=None, classifier=None, methods=None, classifiers=None, **kw)
Heatmap of method rankings per dataset.
**Parameters:**

- `metrics` (`list of str`, default: `None`) – Metrics to rank on. Defaults to all detected metrics.
- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `higher_is_better` (`dict`, default: `None`) – `{metric: bool}` overrides for the ranking direction.
- `classifier` (`None`, `"average"`, `"best"`, or a classifier name, default: `None`) – How to aggregate across classifiers before ranking.
- `methods` (`list of str`, default: `None`) – Filter to these methods only.
- `classifiers` (`list of str`, default: `None`) – Filter to these classifiers only.
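A rough sketch of the ranking idea behind the heatmap (illustrative values; the library's actual aggregation and tie-breaking may differ): rank each method per metric, flipping the direction for metrics where lower is better.

```python
import pandas as pd

# Hypothetical per-method scores on one dataset (spd values nonnegative here).
scores = pd.DataFrame(
    {"accuracy": [0.85, 0.84], "spd": [0.18, 0.05]},
    index=["baseline", "reweighing"],
)
higher_is_better = {"accuracy": True, "spd": False}  # lower spd is fairer

# Rank 1 = best: descending for higher-is-better metrics, ascending otherwise.
ranks = pd.DataFrame({
    metric: scores[metric].rank(ascending=not better)
    for metric, better in higher_is_better.items()
})
```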
plot_tradeoff(fairness_metric='spd', performance_metric='accuracy', datasets=None, methods=None, classifiers=None, **kw)
Scatter plot: |fairness| vs performance.
**Parameters:**

- `fairness_metric` (`str`, default: `'spd'`) – Fairness metric column name.
- `performance_metric` (`str`, default: `'accuracy'`) – Performance metric column name.
- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `methods` (`list of str`, default: `None`) – Methods to include. `None` uses all.
- `classifiers` (`list of str`, default: `None`) – Classifiers to include. `None` uses all.
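The plot uses the absolute value of the fairness metric, so 0 is best on that axis. As background (not this library's code), statistical parity difference, the usual reading of `"spd"`, is the gap in positive-prediction rates between two groups; a minimal sketch with hypothetical predictions:

```python
import pandas as pd

# Hypothetical binary predictions and a binary protected attribute.
df = pd.DataFrame({
    "y_pred": [1, 0, 1, 1, 0, 0],
    "group":  [0, 0, 0, 1, 1, 1],
})

# Positive-prediction rate per group; spd is their difference.
rates = df.groupby("group")["y_pred"].mean()
spd = rates[0] - rates[1]
```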
summary_tables(metrics=None, datasets=None, classifier=None, methods=None, classifiers=None)
Pivot tables of metric values per method.
**Parameters:**

- `metrics` (`list of str`, default: `None`) – Metrics to include. Defaults to all detected metrics.
- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `classifier` (`None`, `"average"`, `"best"`, or a classifier name, default: `None`) – How to aggregate across classifiers.
- `methods` (`list of str`, default: `None`) – Filter to these methods only.
- `classifiers` (`list of str`, default: `None`) – Filter to these classifiers only.

**Returns:**

- Pivot tables of metric values per method.
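Conceptually, each summary table is a method-by-dataset pivot of one metric. A sketch with illustrative values (not the method's actual implementation):

```python
import pandas as pd

results_df = pd.DataFrame({
    "dataset":    ["adult", "adult", "compas", "compas"],
    "method":     ["baseline", "reweighing"] * 2,
    "classifier": ["logreg"] * 4,
    "accuracy":   [0.85, 0.84, 0.68, 0.67],
})

# One table per metric: methods as rows, datasets as columns.
table = results_df.pivot_table(index="method", columns="dataset", values="accuracy")
```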
to_html(path, datasets=None, methods=None, classifiers=None, metrics=None, fairness_metric='spd', performance_metric='accuracy', classifier=None)
Export an HTML report with embedded matplotlib charts.
**Parameters:**

- `path` (`str`) – Output file path (e.g. `"report.html"`).
- `datasets` (`list of str`, default: `None`) – Datasets to include. `None` uses all.
- `methods` (`list of str`, default: `None`) – Methods to include. `None` uses all.
- `classifiers` (`list of str`, default: `None`) – Classifiers to include. `None` uses all.
- `metrics` (`list of str`, default: `None`) – Metrics to include. Defaults to all detected.
- `fairness_metric` (`str`, default: `'spd'`) – Fairness metric for the averaged, detailed, and tradeoff charts.
- `performance_metric` (`str`, default: `'accuracy'`) – Performance metric for the tradeoff chart.
- `classifier` (`None`, `"average"`, `"best"`, or a classifier name, default: `None`) – How to aggregate for rankings and tables.
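"Embedded" here means the HTML file is self-contained rather than linking to image files. One common way to achieve that (a sketch, not necessarily this method's internals) is to inline each figure as a base64 data URI:

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Render a figure to PNG in memory (values illustrative).
fig, ax = plt.subplots()
ax.bar(["baseline", "reweighing"], [0.85, 0.84])
buf = io.BytesIO()
fig.savefig(buf, format="png")
plt.close(fig)

# Inline the PNG as a data URI so the HTML needs no external files.
uri = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()
html = f'<img src="{uri}" alt="accuracy by method"/>'
```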