Official source code for Woergaard and Selvan (2026).
Codebase for fairness-aware mixed-precision quantization of image classifiers.
The main script (train.py) supports:
- Static mixed-precision assignment driven by group or class importance scores
- QAT fine-tuning after assignment
- Iterative QAT (progressively freezes more of the model each iteration)
- BAQ learnable quantization (learns per-layer or per-channel bit-widths)
The code reports standard accuracy metrics plus group metrics and parity gaps.
Quantization modes (`--quant_mode`):
- `none`: Full-precision baseline.
- `uniform`: Uniform fake-quantization for all quantizable layers (Conv2d and Linear).
- `fair_static`: One-shot, importance-guided mixed-precision assignment.
- `fair_static_qat`: Same assignment as `fair_static`, followed by fine-tuning (QAT).
- `baq_learnable`: Wraps Conv2d and Linear layers in a BAQ-style module with a trainable `b_logit` controlling the bit-width (mapped to `[--baq_bit_min, --baq_bit_max]` with STE rounding).
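For intuition, the logit-to-bit-width mapping used by `baq_learnable` can be sketched as below. This is an illustrative stand-in, not the repository's implementation: `bit_from_logit` is a hypothetical name, and the real module applies the STE inside autograd.

```python
import math

def bit_from_logit(b_logit, bit_min=2, bit_max=8):
    """Map an unconstrained logit to an integer bit-width in [bit_min, bit_max].

    A sigmoid squashes the logit to (0, 1), an affine map rescales it to
    [bit_min, bit_max], and rounding yields an integer bit-width. With a
    straight-through estimator (STE), round() is treated as identity in the
    backward pass, so gradients still reach b_logit through the soft value.
    """
    soft = bit_min + (bit_max - bit_min) / (1.0 + math.exp(-b_logit))
    return round(soft), soft  # hard bits (forward), soft bits (gradient path)
```

A very negative logit saturates near `bit_min`, a very positive one near `bit_max`, and a regularizer such as `--baq_lambda_b` can then push the soft bits downward during training.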
Importance metrics (`--importance_metric`):
- `gradient`: Accumulates `|dL/dW|` per group.
- `grape`: Accumulates `(dL/dW * W)^2` per group.
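A minimal numpy sketch of the two accumulators, assuming per-group gradients have already been collected; `accumulate_importance` is a hypothetical helper, not the repository's API.

```python
import numpy as np

def accumulate_importance(grads_by_group, weights, metric="gradient"):
    """Accumulate a per-parameter importance map for each sensitive group.

    grads_by_group: dict mapping group name -> list of dL/dW arrays (one per batch).
    weights:        the parameter array W, same shape as each gradient.
    metric:         "gradient" sums |dL/dW|; "grape" sums (dL/dW * W)^2.
    """
    importance = {}
    for group, grads in grads_by_group.items():
        acc = np.zeros_like(weights)
        for g in grads:
            if metric == "gradient":
                acc += np.abs(g)
            elif metric == "grape":
                acc += (g * weights) ** 2
            else:
                raise ValueError(f"unknown metric: {metric}")
        importance[group] = acc
    return importance
```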
Reducers (`--reducer`), used to combine importance maps across groups:
- `max`: Takes the maximum importance across groups.
- `mean`: Takes the mean across groups.
- `cvar`: CVaR-style reducer over groups, controlled by `--cvar_alpha`.
- `balanced`: Reweights group maps by their share of total importance, then takes a max.
- `subtractive`: Binary-only strategy used in some FairQuantize-style experiments; requires exactly two groups and uses `--beta`.
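The reducers can be sketched in a few lines of numpy. This is illustrative only: the `cvar` variant here averages the worst `ceil(alpha * n_groups)` group values per parameter, and the `balanced` branch encodes one plausible reading of "reweight by share of total importance" (dividing dominant groups down); the repository's exact formulas may differ.

```python
import numpy as np

def reduce_groups(maps, reducer="max", cvar_alpha=0.5):
    """Combine per-group importance maps (dict of same-shape arrays) into one map."""
    stack = np.stack(list(maps.values()))  # shape: (n_groups, ...)
    if reducer == "max":
        return stack.max(axis=0)
    if reducer == "mean":
        return stack.mean(axis=0)
    if reducer == "cvar":
        # per parameter, average the k largest group values (tail expectation)
        k = max(1, int(np.ceil(cvar_alpha * stack.shape[0])))
        return np.sort(stack, axis=0)[-k:].mean(axis=0)
    if reducer == "balanced":
        # divide each group map by its share of total importance mass,
        # so groups with little mass are not drowned out, then take the max
        totals = stack.reshape(stack.shape[0], -1).sum(axis=1)
        w = totals / totals.sum()
        shape = (-1,) + (1,) * (stack.ndim - 1)
        return (stack / w.reshape(shape)).max(axis=0)
    raise ValueError(f"unknown reducer: {reducer}")
```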
Granularity options (`--granularity`):
- `per_tensor`: One bit-width per layer.
- `per_channel`: One bit-width per output channel for Conv2d and Linear.
- `per_param`: One bit-width per parameter.
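The granularity choice mainly determines how many quantization scales are fitted. A minimal sketch of symmetric uniform fake-quantization (quantize, then dequantize) at per-tensor vs. per-channel granularity; `fake_quantize` is an illustrative helper, not the repository's module.

```python
import numpy as np

def fake_quantize(w, bits, axis=None):
    """Symmetric uniform fake-quantization: quantize then dequantize.

    axis=None -> per-tensor: one scale for the whole array.
    axis=0    -> per-channel: one scale per output channel
                 (dim 0 of Conv2d/Linear weight tensors).
    """
    qmax = 2 ** (bits - 1) - 1
    if axis is None:
        scale = np.abs(w).max() / qmax
    else:
        reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
        scale = np.abs(w).max(axis=reduce_axes, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
```

Per-channel scales typically lose less accuracy at low bit-widths because one outlier channel no longer inflates the scale for the whole tensor.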
All datasets are loaded via fairquant/datasets.py.
- Fitzpatrick17k (`--dataset fitzpatrick17k`): Auto-downloads a prepared archive into `./data/Fitzpatrick17k/` if missing. `--fitzpatrick_binary_grouping` maps Fitzpatrick skin types to two groups: 1–3 vs 4–6.
- ISIC 2019 (`--dataset isic2019`): Auto-downloads a prepared archive into `./data/ISIC2019_train/` if missing. Groups are `UNK`, `female`, `male` (from metadata).
Model creation is handled in fairquant/models.py:
- `resnet18`, `resnet34`, `resnet50` (torchvision)
- `tiny_vit_5m_224`, `deit_tiny_patch16_224` (via `timm`, if installed)
This repository is intended to be run from the repo root so Python can import the fairquant package. Run the following commands before training:
```
# from the repo root
python -m venv .venv
source .venv/bin/activate   # Windows: .\.venv\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements.txt
```
`pretrain.py` saves checkpoints to `./checkpoints/`:

```
python pretrain.py --dataset fitzpatrick17k --model resnet18 --epochs 5
```

All experiment outputs go to `./results/<timestamp>_<dataset>_<model>_<quant_mode>/` unless `--run_name` is set.

Static mixed precision with QAT fine-tuning (`fair_static_qat`):

```
python train.py --dataset fitzpatrick17k --model resnet18 \
    --checkpoint_path ./checkpoints/resnet18_fitzpatrick17k_pretrained.pt \
    --quant_mode fair_static_qat --granularity per_channel \
    --importance_on_sensitive_groups --importance_metric gradient --reducer max \
    --quant_bits 2 4 8 --quant_levels 0.2 0.4 0.4 --ft_epochs 5
```

Iterative QAT, which progressively freezes more units each iteration until the final mix defined by `--quant_bits`/`--quant_levels` is reached:

```
python train.py --dataset fitzpatrick17k --model resnet18 \
    --checkpoint_path ./checkpoints/resnet18_fitzpatrick17k_pretrained.pt \
    --quant_mode fair_static_qat --iterative_qat --iterations 5 --ft_epochs 2 \
    --importance_on_sensitive_groups --importance_metric grape --reducer balanced \
    --quant_bits 2 4 8 --quant_levels 0.2 0.4 0.4
```

BAQ learnable quantization, which starts from an importance-based initialization and then learns bits during fine-tuning (BAQ logits use a higher learning rate than the base weights):

```
python train.py --dataset fitzpatrick17k --model resnet18 \
    --checkpoint_path ./checkpoints/resnet18_fitzpatrick17k_pretrained.pt \
    --quant_mode baq_learnable --granularity per_channel \
    --importance_on_sensitive_groups --importance_metric gradient --reducer max \
    --quant_bits 2 4 8 16 --quant_levels 0.25 0.25 0.25 0.25 \
    --baq_bit_min 4 --baq_bit_max 16 --baq_lambda_b 1e-5 \
    --fairness_loss_lambda 0.5 --ft_epochs 5
```

Data and run control:
- `--dataset` {fitzpatrick17k, isic2019}
- `--data_root` (default `./data`)
- `--model`
- `--checkpoint_path` (optional)
- `--run_name` (optional)
- `--train_subset`, `--test_subset` (float fraction or integer count)
Fairness evaluation:
- `--positive_class <int>`: Enables DP rate, TPR, FPR, TNR, and gap metrics for one chosen class.
- `--no_parity_gaps`: Skips DP/EOpp/EOdds gaps.
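The DP and EOpp gaps can be illustrated with a small helper; `parity_gaps` is hypothetical and the repository's exact definitions (e.g. how empty groups are handled) may differ.

```python
import numpy as np

def parity_gaps(y_true, y_pred, groups, positive_class=1):
    """Max-minus-min gaps across sensitive groups for one positive class.

    DP gap:   spread of P(pred = positive | group).
    EOpp gap: spread of TPR = P(pred = positive | y = positive, group).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    dp_rates, tprs = [], []
    for g in np.unique(groups):
        m = groups == g
        dp_rates.append(np.mean(y_pred[m] == positive_class))
        pos = m & (y_true == positive_class)
        if pos.any():  # skip groups with no positive examples
            tprs.append(np.mean(y_pred[pos] == positive_class))
    return max(dp_rates) - min(dp_rates), max(tprs) - min(tprs)
```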
Static assignment and QAT:
- `--granularity` {per_tensor, per_channel, per_param}
- `--importance_metric` {gradient, grape}
- `--importance_on_sensitive_groups`
- `--reducer` {max, mean, cvar, balanced, subtractive}
- `--cvar_alpha`
- `--beta` (used by `subtractive`)
- `--quant_bits <int ...>`
- `--quant_levels <float ...>` (should sum to 1)
- `--ft_epochs`, `--ft_lr`
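How `--quant_bits`/`--quant_levels` translate into an assignment can be sketched as follows, assuming the least important units receive the fewest bits; `assign_bits` is an illustrative helper, not the repository's code.

```python
import numpy as np

def assign_bits(importance, quant_bits=(2, 4, 8), quant_levels=(0.2, 0.4, 0.4)):
    """Assign bit-widths to units by importance rank.

    Units are sorted ascending by importance; the lowest quant_levels[0]
    fraction gets quant_bits[0] bits, the next fraction the next width, etc.
    quant_levels should sum to 1.
    """
    imp = np.asarray(importance, dtype=float).ravel()
    order = np.argsort(imp)               # ascending: least important first
    bits = np.empty(imp.size, dtype=int)
    start = 0
    for b, frac in zip(quant_bits, quant_levels):
        end = start + int(round(frac * imp.size))
        bits[order[start:end]] = b
        start = end
    bits[order[start:]] = quant_bits[-1]  # rounding remainder gets the widest bits
    return bits
```

With `--granularity per_channel` the "units" are output channels; with `per_param` they are individual weights.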
Iterative QAT:
- `--iterative_qat`
- `--iterations`
BAQ learnable:
- `--baq_bit_min`, `--baq_bit_max`
- `--baq_lambda_b`
- `--fairness_loss_lambda`
- `--grad_clip_norm`
Each run directory includes:
- `training.log`: Console log with overall metrics and per-group breakdown.
- `final_model.pt`: Final weights.
- `fairquant_report.txt`: All CLI args for the run.
- `bit_distribution.csv`: Per-layer bit histogram, average bits, parameter counts, and estimated reductions.
- `size_report.txt`: Human-readable summary, plus GOP and effective GOP estimates.
- `bitwidth_percentages.txt`: Bit-width distribution, channel-weighted and parameter-weighted.
Kindly cite our publication if you use any part of this code:

```
@article{woergaard2026fairquant,
  title={FairQuant: Fairness-Aware Mixed-Precision Quantization for Medical Image Classification},
  author={Thomas Woergaard and Raghavendra Selvan},
  journal={arXiv preprint arXiv:2602.23192},
  year={2026}
}
```