Some Learnability Results for Analogical Generalization
Abstract: Progress has been made in formally characterizing the capabilities and performance of inductive learning algorithms. Similar characterizations are needed for recently proposed methods that produce generalizations from small numbers of analyzed examples. The author considers one class of such methods, based on the analogical generalization technique in Anderson and Thompson's PUPS system. Intuitively, it might appear that some to-be-learned structures can be learned by analogy, while others are too chaotic or inconsistent to be. This intuition is shown to be correct for a simple form of analogical generalization: there are learnable and unlearnable structures for that method. In contrast, the author shows that for PUPS-style generalization, analogical structure can be imposed on an arbitrary system (within a broad class he calls command systems). It follows that the constraints on the PUPS-style method lie not in any structural condition on the to-be-learned system, but rather in obtaining the knowledge needed to impose analogical structure.
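To give a concrete, if much simplified, sense of what "generalizing from a single analyzed example by analogy" can look like, the sketch below shows a toy substitution-based scheme in Python. The representation (descriptions as role/filler dictionaries, forms as token lists) and all function names are invented for illustration; they are not taken from the paper and are not the actual PUPS mechanism or its formal treatment.

```python
# Toy analogical generalization from one analyzed example (illustrative only).
# The analyzed example pairs a description (roles -> fillers) with a form
# (a token sequence). Generalizing replaces each filler in the form with the
# role it plays; applying the result to a new description substitutes the
# new fillers back in.

def generalize(example_description, example_form):
    """Turn one analyzed example into a rule by abstracting fillers to roles."""
    filler_to_role = {filler: role for role, filler in example_description.items()}
    return [filler_to_role.get(tok, tok) for tok in example_form]

def apply_rule(rule, new_description):
    """Instantiate the generalized form with the fillers of a new description."""
    return [new_description.get(tok, tok) for tok in rule]

# Analyzed example: "print the value of x" is achieved by the form (print x).
example_desc = {"action": "print", "object": "x"}
example_form = ["(", "print", "x", ")"]

rule = generalize(example_desc, example_form)
print(rule)  # ['(', 'action', 'object', ')']

# A new, analogously described problem: "draw the value of y".
print(apply_rule(rule, {"action": "draw", "object": "y"}))  # ['(', 'draw', 'y', ')']
```

In these toy terms, the learnability question the abstract raises is whether a to-be-learned system admits consistent role mappings of this kind; the abstract's contrast is that a simple scheme like the one above can fail on some structures, whereas PUPS-style generalization can impose the needed analogical structure on any command system, given enough background knowledge.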