Epistemic Defenses against Scientific and Empirical Adversarial AI Attacks