Unsolved Problems in ML Safety
[1] Yuri Burda,et al. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets , 2022, ArXiv.
[2] D. Song,et al. PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] P. Haas,et al. Augmenting Decision Making via Interactive What-If Analysis , 2021, CIDR.
[4] Owain Evans,et al. TruthfulQA: Measuring How Models Mimic Human Falsehoods , 2021, ACL.
[5] William Agnew,et al. The Values Encoded in Machine Learning Research , 2021, FAccT.
[6] Nicholas Carlini,et al. Poisoning and Backdooring Contrastive Learning , 2021, ICLR.
[7] Jinwoo Shin,et al. Consistency Regularization for Adversarial Robustness , 2021, AAAI.
[8] D. Song,et al. What Would Jiminy Cricket Do? Towards Agents That Behave Morally , 2021, NeurIPS Datasets and Benchmarks.
[9] Omid Poursaeed,et al. Robustness and Generalization via Generative Adversarial Training , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[10] Rongxin Jiang,et al. Towards Understanding the Generative Capability of Adversarially Robust Classifiers , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[11] Charles Sutton,et al. Program Synthesis with Large Language Models , 2021, ArXiv.
[12] Michael S. Bernstein,et al. On the Opportunities and Risks of Foundation Models , 2021, ArXiv.
[13] David Picard,et al. Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[14] Dylan Hadfield-Menell,et al. What are you optimizing for? Aligning Recommender Systems with Human Values , 2021, ArXiv.
[15] Sergey Levine,et al. Conservative Objective Models for Effective Offline Model-Based Optimization , 2021, ICML.
[16] Mohamad H. Danesh,et al. Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results , 2021, ArXiv.
[17] Wojciech Zaremba,et al. Evaluating Large Language Models Trained on Code , 2021, ArXiv.
[18] Thomas Brox,et al. Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation , 2021, ArXiv.
[19] Nicholas Carlini,et al. Handcrafted Backdoors in Deep Neural Networks , 2021, ArXiv.
[20] Matthieu Geist,et al. There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning , 2021, NeurIPS.
[21] Shi Feng,et al. Concealed Data Poisoning Attacks on NLP Models , 2021, NAACL.
[22] Trevor Darrell,et al. Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks , 2021, ArXiv.
[23] Julien Mairal,et al. Emerging Properties in Self-Supervised Vision Transformers , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[24] Nuno Vasconcelos,et al. IMAGINE: Image Synthesis by Image-Guided Model Inversion , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Tom Everitt,et al. Alignment of Language Agents , 2021, ArXiv.
[26] Emily M. Bender,et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 , 2021, FAccT.
[27] Timothy A. Mann,et al. Fixing Data Augmentation to Improve Adversarial Robustness , 2021, ArXiv.
[28] Ilya Sutskever,et al. Learning Transferable Visual Models From Natural Language Supervision , 2021, ICML.
[29] E. Hovy,et al. Measuring and Improving Consistency in Pretrained Language Models , 2021, Transactions of the Association for Computational Linguistics.
[30] Pang Wei Koh,et al. WILDS: A Benchmark of in-the-Wild Distribution Shifts , 2020, ICML.
[31] Nicolas Flammarion,et al. RobustBench: a standardized adversarial robustness benchmark , 2020, NeurIPS Datasets and Benchmarks.
[32] Adnan Shahid Khan,et al. Network intrusion detection system: A systematic study of machine learning and deep learning approaches , 2020, Trans. Emerg. Telecommun. Technol..
[33] Shafiq R. Joty,et al. GeDi: Generative Discriminator Guided Sequence Generation , 2020, EMNLP.
[34] Dawn Song,et al. Measuring Massive Multitask Language Understanding , 2020, ICLR.
[35] Dawn Song,et al. Aligning AI With Shared Human Values , 2020, ICLR.
[36] Zheng Zhang,et al. Trojaning Language Models for Fun and Profit , 2020, 2021 IEEE European Symposium on Security and Privacy (EuroS&P).
[37] Vitaly Shmatikov,et al. You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion , 2020, USENIX Security Symposium.
[38] D. Song,et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization , 2020, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[39] Sahil Singla,et al. Perceptual Adversarial Robustness: Defense Against Unseen Threat Models , 2020, ICLR.
[40] Chris C. Holmes,et al. Neural Ensemble Search for Uncertainty Estimation and Dataset Shift , 2020, NeurIPS.
[41] Joel Lehman,et al. Reinforcement Learning Under Moral Uncertainty , 2020, ICML.
[42] Vitaly Shmatikov,et al. Blind Backdoors in Deep Learning Models , 2020, USENIX Security Symposium.
[43] Rahul Khanna,et al. ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data , 2020, ACL.
[44] Michail Maniatakos,et al. Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-Based Traffic Congestion Control Systems , 2020, IEEE Transactions on Information Forensics and Security.
[45] Rohin Shah,et al. Optimal Policies Tend to Seek Power , 2019, ArXiv.
[46] Dawn Song,et al. Natural Adversarial Examples , 2019, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[47] Ramesh Karri,et al. An Empirical Cybersecurity Evaluation of GitHub Copilot's Code Contributions , 2021, ArXiv.
[48] Toby Ord,et al. The Parliamentary Approach to Moral Uncertainty , 2021 .
[49] J. Pfau,et al. Objective Robustness in Deep Reinforcement Learning , 2021, ArXiv.
[50] Trevor Darrell,et al. Tent: Fully Test-Time Adaptation by Entropy Minimization , 2021, ICLR.
[51] Silvio Savarese,et al. Localized Calibration: Metrics and Recalibration , 2021, ArXiv.
[52] Joel Z. Leibo,et al. Open Problems in Cooperative AI , 2020, ArXiv.
[53] Richard E. Harang,et al. SOREL-20M: A Large Scale Benchmark Dataset for Malicious PE Detection , 2020, ArXiv.
[54] Jonathan Stray,et al. Aligning AI Optimization to Community Well-Being , 2020, International Journal of Community Well-Being.
[55] Dakota Cary,et al. Destructive Cyber Operations and Machine Learning , 2020 .
[56] Ben Buchanan,et al. Automating Cyber Attacks , 2020 .
[57] Filip Maric,et al. Formalizing IMO Problems and Solutions in Isabelle/HOL , 2020, ThEdu@IJCAR.
[58] Mark Chen,et al. Scaling Laws for Autoregressive Generative Modeling , 2020, ArXiv.
[59] Laurent Orseau,et al. Avoiding Side Effects By Considering Future Tasks , 2020, NeurIPS.
[60] Tegan Maharaj,et al. Hidden Incentives for Auto-Induced Distributional Shift , 2020, ArXiv.
[61] Jessica Taylor,et al. Alignment for Advanced Machine Learning Systems , 2020, Ethics of Artificial Intelligence.
[62] Matthias Hein,et al. Certifiably Adversarially Robust Detection of Out-of-Distribution Data , 2020, NeurIPS.
[63] Jinwoo Shin,et al. CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances , 2020, NeurIPS.
[64] Cristian Danescu-Niculescu-Mizil,et al. It Takes Two to Lie: One to Lie, and One to Listen , 2020, ACL.
[65] Quoc V. Le,et al. Smooth Adversarial Training , 2020, ArXiv.
[66] Guillaume Lample,et al. Unsupervised Translation of Programming Languages , 2020, NeurIPS.
[67] Prasad Tadepalli,et al. Avoiding Side Effects in Complex Environments , 2020, NeurIPS.
[68] Andrew Critch,et al. AI Research Considerations for Human Existential Safety (ARCHES) , 2020, ArXiv.
[69] Mark Chen,et al. Language Models are Few-Shot Learners , 2020, NeurIPS.
[70] Aleksander Madry,et al. Identifying Statistical Bias in Dataset Replication , 2020, ICML.
[71] Peter Henderson,et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims , 2020, ArXiv.
[72] Yisen Wang,et al. Adversarial Weight Perturbation Helps Robust Generalization , 2020, NeurIPS.
[73] Kiran Karra,et al. The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models , 2020, ArXiv.
[74] Florian Tramèr,et al. On Adaptive Attacks to Adversarial Example Defenses , 2020, NeurIPS.
[75] N. Taleb. Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications , 2020, ArXiv.
[76] Iason Gabriel,et al. Artificial Intelligence, Values, and Alignment , 2020, Minds and Machines.
[77] Derek Hoiem,et al. Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[78] J. Gilmer,et al. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty , 2019, ICLR.
[79] Peter Eckersley,et al. SafeLife 1.0: Exploring Side Effects in Complex Environments , 2019, SafeAI@AAAI.
[80] Feargus Pendlebury,et al. Intriguing Properties of Adversarial ML Attacks in the Problem Space , 2019, 2020 IEEE Symposium on Security and Privacy (SP).
[81] Bernt Schiele,et al. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks , 2019, ICML.
[82] Yizheng Chen,et al. Neutaint: Efficient Dynamic Taint Analysis with Neural Networks , 2019, 2020 IEEE Symposium on Security and Privacy (SP).
[83] Sergey Levine,et al. Adversarial Policies: Attacking Deep Reinforcement Learning , 2019, ICLR.
[84] Stefan Savage,et al. Detecting and Characterizing Lateral Phishing at Scale , 2019, USENIX Security Symposium.
[85] Percy Liang,et al. Verified Uncertainty Calibration , 2019, NeurIPS.
[86] Nick Bostrom,et al. The Vulnerable World Hypothesis , 2019, Global Policy.
[87] Peter A. Flach,et al. Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration , 2019, NeurIPS.
[88] Yi Sun,et al. Testing Robustness Against Unforeseen Adversaries , 2019, ArXiv.
[89] Yolanda Gil,et al. A 20-Year Community Roadmap for Artificial Intelligence Research in the US , 2019, ArXiv.
[90] Nicholas Carlini,et al. Stateful Detection of Black-Box Adversarial Attacks , 2019, Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence.
[91] Dawn Song,et al. Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty , 2019, NeurIPS.
[92] Sebastian Nowozin,et al. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift , 2019, NeurIPS.
[93] Scott Garrabrant,et al. Risks from Learned Optimization in Advanced Machine Learning Systems , 2019, ArXiv.
[94] Ludwig Schmidt,et al. Unlabeled Data Improves Adversarial Robustness , 2019, NeurIPS.
[95] Qiang Wei,et al. NeuFuzz: Efficient Fuzzing With Deep Neural Network , 2019, IEEE Access.
[96] J. Zico Kolter,et al. Certified Adversarial Robustness via Randomized Smoothing , 2019, ICML.
[97] Kimin Lee,et al. Using Pre-Training Can Improve Model Robustness and Uncertainty , 2019, ICML.
[98] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[99] Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , 2019 .
[100] Leon van der Torre,et al. Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders , 2018, AIES.
[101] Taehoon Kim,et al. Quantifying Generalization in Reinforcement Learning , 2018, ICML.
[102] Scott E. Coull,et al. Exploring Adversarial Examples in Malware Detection , 2018, 2019 IEEE Security and Privacy Workshops (SPW).
[103] P. Abbeel,et al. Preferences Implicit in the State of the World , 2018, ICLR.
[104] Thomas G. Dietterich,et al. Deep Anomaly Detection with Outlier Exposure , 2018, ICLR.
[105] Junfeng Yang,et al. NEUZZ: Efficient Fuzzing with Neural Program Smoothing , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[106] Thomas G. Dietterich,et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations , 2018, ICLR.
[107] Suman Jana,et al. Certified Robustness to Adversarial Examples with Differential Privacy , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[108] Thomas G. Dietterich,et al. Sequential Feature Explanations for Anomaly Detection , 2019, ACM Trans. Knowl. Discov. Data.
[109] Dario Amodei,et al. Benchmarking Safe Exploration in Deep Reinforcement Learning , 2019 .
[110] Thomas G. Dietterich. Robust artificial intelligence and robust human organizations , 2018, Frontiers of Computer Science.
[111] Shane Legg,et al. Scalable agent alignment via reward modeling: a research direction , 2018, ArXiv.
[112] Ashish Agarwal,et al. Hallucinations in Neural Machine Translation , 2018 .
[113] Mingyan Liu,et al. Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation , 2018, ECCV.
[114] Ryan P. Adams,et al. Motivating the Rules of the Game for Adversarial Example Research , 2018, ArXiv.
[115] Stefano Ermon,et al. Accurate Uncertainties for Deep Learning Using Calibrated Regression , 2018, ICML.
[116] Lalana Kagal,et al. Explaining Explanations: An Overview of Interpretability of Machine Learning , 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[117] Dario Amodei,et al. AI safety via debate , 2018, ArXiv.
[118] Tudor Dumitras,et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks , 2018, NeurIPS.
[119] Scott Garrabrant,et al. Categorizing Variants of Goodhart's Law , 2018, ArXiv.
[120] Risto Miikkulainen,et al. The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities , 2018, Artificial Life.
[121] Hyrum S. Anderson,et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation , 2018, ArXiv.
[122] David A. Wagner,et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples , 2018, ICML.
[123] Aditi Raghunathan,et al. Certified Defenses against Adversarial Examples , 2018, ICLR.
[124] Matthias Bethge,et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models , 2017, ICLR.
[125] Owain Evans,et al. Trial without Error: Towards Safe Reinforcement Learning via Human Intervention , 2017, AAMAS.
[126] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[127] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[128] Aleksander Madry,et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations , 2017, ArXiv.
[129] Yang Yang,et al. Deep Learning Scaling is Predictable, Empirically , 2017, ArXiv.
[130] Andrew Y. Ng,et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning , 2017, ArXiv.
[131] Brendan Dolan-Gavitt,et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain , 2017, ArXiv.
[132] Kilian Q. Weinberger,et al. On Calibration of Modern Neural Networks , 2017, ICML.
[133] Charles Blundell,et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles , 2016, NIPS.
[134] Anca D. Dragan,et al. The Off-Switch Game , 2016, IJCAI.
[135] Vitaly Shmatikov,et al. Membership Inference Attacks Against Machine Learning Models , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[136] Kevin Gimpel,et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks , 2016, ICLR.
[137] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[138] Lujo Bauer,et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition , 2016, CCS.
[139] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[140] Keith E. Stanovich,et al. The Rationality Quotient: Toward a Test of Rational Thinking , 2016 .
[141] Ian Goodfellow,et al. Deep Learning with Differential Privacy , 2016, CCS.
[142] John Schulman,et al. Concrete Problems in AI Safety , 2016, ArXiv.
[143] Anca D. Dragan,et al. Cooperative Inverse Reinforcement Learning , 2016, NIPS.
[144] Terrance E. Boult,et al. Towards Open Set Deep Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[145] D. Sculley,et al. Hidden Technical Debt in Machine Learning Systems , 2015, NIPS.
[146] Stuart J. Russell,et al. Research Priorities for Robust and Beneficial Artificial Intelligence , 2015, AI Mag..
[147] Kathleen M. Sutcliffe,et al. Principle 1: Preoccupation with Failure , 2015 .
[148] Brendan T. O'Connor,et al. Posterior calibration and exploratory analysis for natural language processing models , 2015, EMNLP.
[149] Dawn Xiaodong Song,et al. Recognizing Functions in Binaries with Neural Networks , 2015, USENIX Security Symposium.
[150] Evan G. Williams,et al. The Possibility of an Ongoing Moral Catastrophe , 2015 .
[151] Philip E. Tetlock,et al. Superforecasting: The Art and Science of Prediction , 2015 .
[152] Peter A. Singer,et al. The Point of View of the Universe: Sidgwick and Contemporary Ethics , 2014 .
[153] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[154] Philip E. Tetlock,et al. On the Difference between Binary Prediction and True Exposure with Implications for Forecasting Tournaments and Decision Making Research , 2013 .
[155] Chris Arney. Antifragile: Things That Gain from Disorder , 2013 .
[156] Fabio Roli,et al. Evasion Attacks against Machine Learning at Test Time , 2013, ECML/PKDD.
[157] J. Jonides,et al. Facebook Use Predicts Declines in Subjective Well-Being in Young Adults , 2013, PloS one.
[158] M. Cvach. Monitor alarm fatigue: an integrative review. , 2012, Biomedical instrumentation & technology.
[159] Nancy G. Leveson,et al. Engineering a Safer World: Systems Thinking Applied to Safety , 2012 .
[160] Ralph Langner,et al. Stuxnet: Dissecting a Cyberwarfare Weapon , 2011, IEEE Security & Privacy.
[161] A. Kyle,et al. The Flash Crash: The Impact of High Frequency Trading on an Electronic Market , 2011 .
[162] D. Kahneman,et al. High income improves evaluation of life but not emotional well-being , 2010, Proceedings of the National Academy of Sciences.
[163] Vern Paxson,et al. Outside the Closed World: On Using Machine Learning for Network Intrusion Detection , 2010, 2010 IEEE Symposium on Security and Privacy.
[164] Erik Brynjolfsson,et al. What the GDP Gets Wrong (Why Managers Should Care) , 2009 .
[165] Rain Ottis,et al. Analysis of the 2007 Cyber Attacks Against Estonia from the Information Warfare Perspective , 2008 .
[166] Nassim Nicholas Taleb,et al. The Black Swan: The Impact of the Highly Improbable , 2007 .
[167] Laura DeNardis,et al. A history of internet security , 2007 .
[168] Timothy D. Wilson,et al. Affective Forecasting , 2005 .
[169] Richard L. Hudson,et al. The Misbehavior of Markets: A Fractal View of Risk, Ruin, and Reward , 2004 .
[170] Michael Mitzenmacher,et al. A Brief History of Generative Models for Power Law and Lognormal Distributions , 2004, Internet Math..
[171] M. Nussbaum. CAPABILITIES AS FUNDAMENTAL ENTITLEMENTS: SEN AND SOCIAL JUSTICE , 2003 .
[172] Pablo Fajnzylber,et al. Inequality and Violent Crime* , 2002, The Journal of Law and Economics.
[173] Eric F. Lambin,et al. What drives tropical deforestation?: a meta-analysis of proximate and underlying causes of deforestation based on subnational case study evidence , 2001 .
[174] J Hedlund,et al. Risky business: safety regulations, risk compensation, and individual behavior , 2000, Injury prevention : journal of the International Society for Child and Adolescent Injury Prevention.
[175] Terran Lane,et al. An Application of Machine Learning to Anomaly Detection , 1999 .
[176] M. Strathern. ‘Improving ratings’: audit in the British University system , 1997, European Review.
[177] Jeremy Greenwood,et al. The Third Industrial Revolution:: Technology, Productivity, and Income Inequality , 1997 .
[178] H. Schneider. Failure mode and effect analysis : FMEA from theory to execution , 1996 .
[179] Ross J. Anderson,et al. Programming Satan's Computer , 1995, Computer Science Today.
[180] G. Botvin,et al. Smoking behavior of adolescents exposed to cigarette advertising. , 1993, Public health reports.
[181] C. Goodhart. Problems of Monetary Management: The UK Experience , 1984 .
[182] F. R. Frola,et al. System Safety in Aircraft Acquisition , 1984 .
[183] G. J. Dalcourt,et al. The Methods of Ethics , 1983 .
[184] Theodore D. Raphael. Integrative Complexity Theory and Forecasting International Crises , 1982 .
[185] Ernest J. Weinrib,et al. Utilitarianism, Economics, and Legal Theory , 1980 .
[186] John Gall,et al. Systemantics: How Systems Work and Especially How They Fail , 1977 .
[187] M. C. Jensen,et al. Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure , 1976, Journal of Financial Economics.
[188] J. Rawls,et al. A Theory of Justice , 1971, Princeton Readings in Political Thought.
[189] D. Campbell,et al. Hedonic relativism and planning the good society , 1971 .
[190] David Hume. A Treatise of Human Nature: Being an Attempt to introduce the experimental Method of Reasoning into Moral Subjects , 1972 .
[191] V. Ridgway. Dysfunctional Consequences of Performance Measurements , 1956 .
[192] P. J. Heawood. Map-Colour Theorem , 1949 .