Machine unlearning, a tool for implementing the Right to be Forgotten (RTBF), can affect AI fairness. This study compares two common unlearning methods, SISA and AmnesiacML, against retraining from scratch (ORTR) across fairness datasets and deletion strategies. The results show that non-uniform deletion combined with SISA yields better fairness outcomes, while the other methods have mixed effects. These findings inform responsible RTBF implementation by highlighting a potential trade-off between fairness and privacy in machine unlearning.
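The key idea behind SISA is to shard the training data, train one constituent model per shard, and aggregate their predictions; deleting a record then only requires retraining the shard that contained it, rather than retraining on the full remaining dataset as ORTR does. A minimal toy sketch of that mechanism (the function names are illustrative, and the per-shard "model" is reduced to a majority-class predictor for brevity; real SISA trains a full learner per shard):

```python
# Toy sketch of SISA-style unlearning. Assumption: each shard's "model"
# is just its majority label; real SISA trains a proper model per shard.

def train_shard(shard):
    # Constituent "model": the majority label of the shard (ties -> 1).
    ones = sum(label for _, label in shard)
    return 1 if ones * 2 >= len(shard) else 0

def train_sisa(data, n_shards):
    # Partition the data into disjoint shards; train one model each.
    shards = [data[i::n_shards] for i in range(n_shards)]
    models = [train_shard(s) for s in shards]
    return shards, models

def unlearn(shards, models, record):
    # Retrain only the shard(s) holding the deleted record, not the
    # whole dataset (the cost saving over ORTR-style retraining).
    retrained = 0
    for i, shard in enumerate(shards):
        if record in shard:
            shard.remove(record)
            models[i] = train_shard(shard)
            retrained += 1
    return retrained

def predict(models):
    # Aggregate constituent models by majority vote.
    return 1 if sum(models) * 2 >= len(models) else 0

data = [(x, x % 2) for x in range(10)]   # (feature, label) pairs
shards, models = train_sisa(data, n_shards=5)
retrained = unlearn(shards, models, (3, 1))
```

Fairness effects arise because a deletion request touches only one shard: non-uniform deletions (e.g. concentrated in one demographic group) can reshape individual shards, and hence the ensemble, differently than full retraining would.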
To be forgotten or to be fair: unveiling fairness implications of machine unlearning methods