Learning without limits
One of the most pervasive criticisms of rule-based models of reasoning is that they are "pre-programmed". Cognitive models based on production systems, including most ACT-R models, already have the necessary rules in memory to perform the specific task they model. This type of modeling has several problems. The first is that rules for a new task have to be learned, and it is not always reasonable to suppose that all of them have been acquired during instruction. The second is that in complex problem-solving situations, a model based on a fixed set of production rules cannot reason "outside the system", a problem often described by the term "brittleness." Many discussions have been devoted to this issue as part of a broader criticism of artificial intelligence, often invoking Gödel's theorem as definitive proof that a rule-based approach cannot work.