Modeling and Interpreting Expert Disagreement About Artificial Superintelligence
[1] Roman V. Yampolskiy, et al. Leakproofing the Singularity: Artificial Intelligence Confinement Problem, 2012.
[2] Ben Goertzel, et al. Superintelligence: Fears, Promises and Potentials — Reflections on Bostrom's Superintelligence, Yudkowsky's From AI to Zombies, and Weaver and Veitas's "Open-Ended Intelligence", 2015.
[3] N. Oreskes. The Scientific Consensus on Climate Change, 2004, Science.
[4] Ben Goertzel, et al. Nine Ways to Bias Open-Source AGI Toward Friendliness, 2012.
[5] Seth D. Baum, et al. Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process, 2015.
[6] Nick Bostrom, et al. Superintelligence: Paths, Dangers, Strategies, 2014.
[7] Stuart Armstrong, et al. How We're Predicting AI — or Failing to, 2015.
[8] Nick Bostrom, et al. Future Progress in Artificial Intelligence: A Survey of Expert Opinion, 2013, PT-AI.
[9] R. Penrose, et al. How Long Until Human-Level AI? Results from an Expert Assessment, 2011.
[10] Stuart Armstrong, et al. The errors, insights and lessons of famous AI predictions — and what they mean for the future, 2014, J. Exp. Theor. Artif. Intell.
[11] Ben Goertzel, et al. Infusing Advanced AGIs with Human-Like Value Systems, 2016, Journal of Ethics and Emerging Technologies.
[12] Anthony Michael Barrett, et al. A model of pathways to artificial superintelligence catastrophe for risk and decision analysis, 2016, J. Exp. Theor. Artif. Intell.