Peter J. Bickel | Kristofer E. Bouchard | Prabhat | Gos Micklem | Claire J. Tomlin | Kenneth Kreutz-Delgado | Michael W. Mahoney | Sean Peisert | Georgios V. Gkoutos | Benjamin Nachman | Sharlee Climer | Alejandro Wolf-Yadlin | Luca Pion-Tonachini | Anil Aswani | Jean-Baptiste Cazier | Babetta L. Marrone | Nicola Falco | Bobbie-Jo Webb-Robertson | Haruko Wainwright | Héctor García Martín | W. Bradley Holtz | Dipankar Dwivedi | Ghanshyam Pilania | Daniel B. Arnold | Sarah Powers | Quinn Jackson | Ty Carlson | Michael Sohn | Petrus Zwart | Neeraj Kumar | Amy Justice | Daniel A. Jacobson | Juliane Müller | Rick L. Stevens | Mark Anderson | James B. Brown
[1] A. Aspuru-Guzik, et al. Self-driving laboratory for accelerated discovery of thin-film materials, 2019, Science Advances.
[2] Michael W. Mahoney, et al. RandNLA, 2016, Commun. ACM.
[3] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[4] Michael W. Mahoney, et al. Characterizing possible failure modes in physics-informed neural networks, 2021, NeurIPS.
[5] Salah Sukkarieh, et al. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review, 2018, Comput. Electron. Agric.
[6] Clifford H. Wagner. Simpson's Paradox in Real Life, 1982.
[7] Kjell A. Doksum, et al. Mathematical Statistics: Basic Ideas and Selected Topics, Volume I, Second Edition, 2015.
[8] Michael W. Mahoney, et al. Improved Guarantees and a Multiple-descent Curve for Column Subset Selection and the Nystrom Method (Extended Abstract), 2021, IJCAI.
[9] Nidhi Kalra, et al. Measuring Automated Vehicle Safety: Forging a Framework, 2018.
[10] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[11] Zhenyu Liao, et al. A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent, 2020, NeurIPS.
[12] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[13] Pieter Abbeel, et al. Decoupling Representation Learning from Reinforcement Learning, 2020, ICML.
[14] Maziar Raissi, et al. Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations, 2018, J. Mach. Learn. Res.
[15] Nisheeth K. Vishnoi, et al. A local spectral method for graphs: with applications to improving graph partitions and exploring data graphs locally, 2009, J. Mach. Learn. Res.
[16] Wulfram Gerstner, et al. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size, 2016, PLoS Comput. Biol.
[17] Michael W. Mahoney, et al. Adversarially-Trained Deep Nets Transfer Better, 2020, ArXiv.
[18] J. Ioannidis, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies, 2020, BMJ.
[19] Leo Breiman. Random Forests, 2001, Machine Learning.
[20] John A. Pople. Nobel Lecture: Quantum chemical models, 1999.
[21] Michael W. Mahoney, et al. Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning, 2018, J. Mach. Learn. Res.
[22] N. Linial, et al. Expander Graphs and their Applications, 2006.
[23] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[24] Jure Leskovec, et al. Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters, 2008, Internet Math.
[25] Erwan Scornet, et al. Neural Random Forests, 2016, Sankhya A.
[26] S. Kak. Information, physics, and computation, 1996.
[27] Sonja Grün, et al. The Scientific Case for Brain Simulations, 2019, Neuron.
[28] Chandan Singh, et al. Definitions, methods, and applications in interpretable machine learning, 2019, Proceedings of the National Academy of Sciences.
[29] Dewei Li, et al. Survey and experimental study on metric learning methods, 2018, Neural Networks.
[30] G. A. Young. Review of High-dimensional Statistics: A Non-asymptotic Viewpoint, by Martin J. Wainwright (Cambridge University Press, 2019), 2020, International Statistical Review.
[31] Bin Yu, et al. Learning epistatic polygenic phenotypes with Boolean interactions, 2020.
[32] Michael I. Jordan, et al. Distribution-Free, Risk-Controlling Prediction Sets, 2021, J. ACM.
[33] Charu C. Aggarwal, et al. On the Surprising Behavior of Distance Metrics in High Dimensional Spaces, 2001, ICDT.
[34] Bin Yu, et al. Three principles of data science: predictability, computability, and stability (PCS), 2018, 2018 IEEE International Conference on Big Data (Big Data).
[35] S. Brunton, et al. Discovering governing equations from data by sparse identification of nonlinear dynamical systems, 2015, Proceedings of the National Academy of Sciences.
[36] Michela Paganini, et al. The Scientific Method in the Science of Machine Learning, 2019, ArXiv.
[37] Ewen Callaway. ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures, 2020, Nature.
[38] T. Hubbard, et al. A census of human cancer genes, 2004, Nature Reviews Cancer.
[39] Masato Okada. AI for Science and data-driven science, 2016.
[40] Andrew M. Watkins, et al. Geometric deep learning of RNA structure, 2021, Science.
[41] P. Bickel, et al. Sex Bias in Graduate Admissions: Data from Berkeley, 1975, Science.
[42] Mikhail Belkin, et al. Reconciling modern machine-learning practice and the classical bias–variance trade-off, 2018, Proceedings of the National Academy of Sciences.
[43] Petros Drineas, et al. CUR matrix decompositions for improved data analysis, 2009, Proceedings of the National Academy of Sciences.
[44] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[45] Joachim Denzler, et al. Deep learning and process understanding for data-driven Earth system science, 2019, Nature.
[46] Pierre Gentine, et al. Could Machine Learning Break the Convection Parameterization Deadlock?, 2018, Geophysical Research Letters.
[47] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[48] Peter J. Bickel, et al. Maximum Likelihood Estimation of Intrinsic Dimension, 2004, NIPS.
[49] Carlos A. Silva, et al. On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities, 2020, Radiology: Artificial Intelligence.
[50] Oliver Rübel, et al. International Neuroscience Initiatives through the Lens of High-Performance Computing, 2018, Computer.
[51] Michael W. Mahoney, et al. Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data, 2020, Nature Communications.
[52] David Kainer, et al. Can exascale computing and explainable artificial intelligence applied to plant biology deliver on the United Nations sustainable development goals?, 2020, Current Opinion in Biotechnology.
[53] Alexandre M. Bayen, et al. Computational techniques for the verification of hybrid systems, 2003, Proc. IEEE.
[54] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective, 2012, Adaptive Computation and Machine Learning series.
[55] Oriol Vinyals, et al. Highly accurate protein structure prediction with AlphaFold, 2021, Nature.
[56] Jay D. Keasling, et al. Combining mechanistic and machine learning models for predictive engineering and optimization of tryptophan metabolism, 2020, Nature Communications.
[57] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[58] Bin Yu, et al. Refining interaction search through signed iterative Random Forests, 2018, bioRxiv.
[59] Sara van de Geer, et al. Statistics for High-Dimensional Data: Methods, Theory and Applications, 2011.
[60] A. Choudhary, et al. Perspective: Materials informatics and big data: Realization of the “fourth paradigm” of science in materials science, 2016.
[61] Francisco M. De La Vega, et al. Genomics for the world, 2011, Nature.
[62] Gisbert Schneider, et al. Automating drug discovery, 2017, Nature Reviews Drug Discovery.
[63] Daniel Walton, et al. The Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Quantifying Uncertainties in Atmospheric River Climatology, 2019, Journal of Geophysical Research: Atmospheres.
[64] Pablo Carbonell, et al. Opportunities at the Intersection of Synthetic Biology, Machine Learning, and Automation, 2019, ACS Synthetic Biology.
[65] Dmitriy Morozov, et al. Persistent homology advances interpretable machine learning for nanoporous materials, 2020, ArXiv.
[66] Gisbert Schneider, et al. Deep Learning in Drug Discovery, 2016, Molecular Informatics.
[67] Leland McInnes, et al. UMAP: Uniform Manifold Approximation and Projection, 2018, J. Open Source Softw.
[68] Jorge Gonçalves, et al. Crowdsourcing Perceptions of Fair Predictors for Machine Learning, 2019, Proc. ACM Hum. Comput. Interact.
[69] Joseph D. Janizek, et al. Explaining Explanations: Axiomatic Feature Interactions for Deep Networks, 2020, J. Mach. Learn. Res.
[70] Hector Garcia Martin, et al. A machine learning Automated Recommendation Tool for synthetic biology, 2019, Nature Communications.
[71] Max Tegmark, et al. AI Feynman: A physics-inspired method for symbolic regression, 2019, Science Advances.
[72] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics), 2006.
[73] P. Bickel, et al. Local polynomial regression on unknown manifolds, 2007, arXiv:0708.0983.
[74] G. Box. Science and Statistics, 1976.
[75] Colin B. Clement, et al. Visualizing probabilistic models and data with Intensive Principal Component Analysis, 2018, Proceedings of the National Academy of Sciences.
[76] Peter Norvig, et al. The Unreasonable Effectiveness of Data, 2009, IEEE Intelligent Systems.
[77] Rui Xu, et al. Discovering Symbolic Models from Deep Learning with Inductive Biases, 2020, NeurIPS.
[78] Dumitru Erhan, et al. The (Un)reliability of saliency methods, 2017, Explainable AI.
[79] Bin Yu, et al. Interpreting Convolutional Neural Networks Through Compression, 2017, ArXiv.
[80] Terri L. Moore. Regression Analysis by Example, 2001, Technometrics.
[81] Claire J. Tomlin, et al. Statistics for sparse, high-dimensional, and nonparametric system identification, 2009, 2009 IEEE International Conference on Robotics and Automation.
[82] Michael W. Mahoney, et al. Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics, 2021, ArXiv.
[83] Michael W. Mahoney, et al. Mapping the similarities of spectra: global and locally-biased approaches to SDSS galaxies, 2016, ArXiv.
[84] Daniel A. Jacobson, et al. Accelerating Climate Resilient Plant Breeding by Applying Next-Generation Artificial Intelligence, 2019, Trends in Biotechnology.
[85] Kristofer E. Bouchard, et al. Union of Intersections (UoI) for interpretable data driven discovery and prediction in neuroscience, 2018.
[86] Eric R. Ziegel, et al. The Elements of Statistical Learning, 2003, Technometrics.
[87] Max Tegmark, et al. AI Poincaré: Machine Learning Conservation Laws from Trajectories, 2021, Physical Review Letters.
[88] Giles Hooker, et al. Please Stop Permuting Features: An Explanation and Alternatives, 2019, ArXiv.
[89] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[90] J. H. Friedman. Multivariate adaptive regression splines, 1991.
[91] H. Birx. The Mismeasure of Man, 1981.
[92] Emanuele Neri, et al. Artificial intelligence: Who is responsible for the diagnosis?, 2020, La Radiologia Medica.
[93] Olga Kononova, et al. Unsupervised word embeddings capture latent knowledge from materials science literature, 2019, Nature.
[94] David Filliat, et al. Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics, 2018, ArXiv.
[95] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[96] Paris Perdikaris, et al. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, 2019, J. Comput. Phys.
[97] François Laviolette, et al. Domain-Adversarial Training of Neural Networks, 2015, J. Mach. Learn. Res.
[98] Wei Chen, et al. Learning to predict the cosmological structure formation, 2018, Proceedings of the National Academy of Sciences.
[99] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[100] Radford M. Neal. Pattern Recognition and Machine Learning, 2007, Technometrics.
[101] Wojciech Samek, et al. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019, Explainable AI.
[102] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[103] Judea Pearl. The seven tools of causal inference, with reflections on machine learning, 2019, Commun. ACM.
[104] Mason A. Porter, et al. Think Locally, Act Locally: The Detection of Small, Medium-Sized, and Large Communities in Large Networks, 2014, Physical Review E: Statistical, Nonlinear, and Soft Matter Physics.
[105] Philipp J. Keller, et al. Light-sheet functional imaging in fictively behaving zebrafish, 2014, Nature Methods.
[106] C. Cole, et al. The COSMIC Cancer Gene Census: describing genetic dysfunction across all human cancers, 2018, Nature Reviews Cancer.
[107] Yehuda Koren, et al. Lessons from the Netflix prize challenge, 2007, SIGKDD Explorations.
[108] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[109] L. Hood, et al. The P4 Health Spectrum - A Predictive, Preventive, Personalized and Participatory Continuum for Promoting Healthspan, 2017, Progress in Cardiovascular Diseases.
[110] R. Gibbs, et al. Epistasis dominates the genetic architecture of Drosophila quantitative traits, 2012, Proceedings of the National Academy of Sciences.
[111] Michael W. Mahoney, et al. PCA-Correlated SNPs for Structure Identification in Worldwide Human Populations, 2007, PLoS Genetics.