Editorial
Scientific success stories make good journal articles: a theoretical innovation backed up by a proof or a running system, the integration of diverse ideas into a more general framework, empirical data testifying to the value of an improved learning technique, and so on. Failed attempts, on the other hand, receive little publicity. Researchers are eager to move on to new ideas, journals and conferences do not seek negative results, and it can be downright embarrassing to admit in print that a promising new method does not work, or is outperformed by simpler, well-documented methods in the literature. Research, by its very nature, produces unpredictable outcomes, many of which we classify as failures and relegate to the proverbial dustbin. The question I wish to raise here is whether some failures can prove instructive, whether they should indeed be published so that the field is the wiser for them, or whether they should never grace the printed page. Machine learning, and artificial intelligence as a whole, is a sufficiently new science that attention has focused almost exclusively on successes, rather than sharing the limelight with some of the less sexy but equally instructive failures. When we speak of instructive failures, we can draw some clear distinctions as to the cause of the failure, ranging from theoretical limitations and practical impasses to successful implementations that are outperformed by simpler existing methods.