Towards a Unified Framework for Fair and Stable Graph Representation Learning

As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual fairness and stability and leverage it to propose a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations. We introduce a novel objective function that simultaneously accounts for fairness and stability, and develop a layer-wise weight normalization based on the Lipschitz constant to enhance neural message passing in GNNs. In doing so, we enforce fairness and stability both in the objective function and in the GNN architecture. Further, we show theoretically that our layer-wise weight normalization promotes counterfactual fairness and stability in the resulting representations. We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.

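To make the architectural component of the abstract concrete, below is a minimal sketch of a GCN-style layer whose weights are rescaled by their spectral norm (an upper bound on the layer's Lipschitz constant) before message passing. This is an illustrative reconstruction, not NIFTY's actual implementation; the class name `LipschitzNormalizedGCNLayer` and its arguments are hypothetical, and the adjacency matrix is assumed to be pre-normalized.

```python
import torch
import torch.nn as nn


class LipschitzNormalizedGCNLayer(nn.Module):
    """GCN-style layer whose weight matrix is rescaled by its spectral norm
    (an upper bound on the layer's Lipschitz constant) before message passing.
    Illustrative sketch only; names and API are assumptions, not NIFTY's code."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Spectral norm of W (largest singular value); dividing by it caps the
        # layer's Lipschitz constant at roughly 1 w.r.t. the node features.
        spectral_norm = torch.linalg.matrix_norm(self.weight, ord=2)
        w_normalized = self.weight / spectral_norm.clamp(min=1e-12)

        # Standard neural message passing: aggregate neighbor features with the
        # (assumed symmetric-normalized) adjacency, then apply the linear map.
        return adj_norm @ x @ w_normalized


if __name__ == "__main__":
    # Toy usage: 4 nodes with 3 features each; identity stands in for a
    # normalized adjacency matrix purely for demonstration.
    x = torch.randn(4, 3)
    adj_norm = torch.eye(4)
    layer = LipschitzNormalizedGCNLayer(3, 2)
    print(layer(x, adj_norm).shape)  # torch.Size([4, 2])
```

In this sketch, the normalization is applied at every forward pass, so the effective weights used for propagation always have bounded spectral norm; this is one simple way to realize the "layer-wise weight normalization using the Lipschitz constant" that the abstract attributes to the framework.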