Interpretable deep-learning models to help achieve the Sustainable Development Goals

We discuss our insights into interpretable artificial-intelligence (AI) models and why they are essential for developing ethical AI systems, as well as data-driven solutions aligned with the Sustainable Development Goals (SDGs). We highlight the potential of extracting truly interpretable models from deep-learning methods, for instance via symbolic models obtained through inductive biases, to ensure a sustainable development of AI.
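The idea of distilling a symbolic model from a trained deep-learning method can be illustrated with a minimal sketch. Here a toy function stands in for a trained network (a hypothetical placeholder, not the authors' actual model), and a sparse regression over a library of candidate terms recovers a compact, human-readable expression; real symbolic-regression tooling, such as genetic programming, is considerably more elaborate.

```python
import numpy as np

def black_box(x):
    """Stand-in for a trained deep-learning model (hypothetical)."""
    return 2.5 * x**2 - 1.0 * x

# Query the black box on sample inputs
x = np.linspace(-2.0, 2.0, 200)
y = black_box(x)

# Library of interpretable candidate terms
library = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x)])
names = ["1", "x", "x^2", "x^3", "sin(x)"]

# Sparse regression: least squares, then prune small coefficients and refit
coeffs, *_ = np.linalg.lstsq(library, y, rcond=None)
for _ in range(5):
    mask = np.abs(coeffs) > 0.1
    refit, *_ = np.linalg.lstsq(library[:, mask], y, rcond=None)
    coeffs = np.zeros_like(coeffs)
    coeffs[mask] = refit

# Assemble the recovered symbolic expression
expr = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coeffs, names)
                  if abs(c) > 0.1)
print(expr)  # -1.00*x + 2.50*x^2
```

The recovered expression can then be inspected, checked against domain knowledge, and audited, which is precisely what an opaque network does not allow.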
