Strongly Typed Inductive Concept Learning
In this paper we argue that a language with a type system and higher-order functions provides a suitable basis for knowledge representation in inductive concept learning and, in particular, illuminates the relationship between attribute-value learning and inductive logic programming (ILP). Individuals are represented by closed terms: tuples of constants in the case of attribute-value learning; arbitrarily complex terms in the case of ILP. To illustrate the point, we take some learning tasks from the machine learning and ILP literature and represent them in Escher, a typed, higher-order, functional logic programming language being developed at the University of Bristol. We argue that the type system provides better ways to discard meaningless hypotheses on syntactic grounds and subsumes many ad hoc approaches to declarative bias.
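To make the contrast between the two representations concrete, the following is a minimal sketch in Haskell (standing in for Escher, which is not shown here); the toy trains-style domain and all type and function names are illustrative assumptions, not taken from the paper.

```haskell
-- Two styles of individual representation discussed in the abstract,
-- sketched in Haskell. Domain, types, and names are hypothetical.

-- Attribute-value learning: an individual is a tuple of constants.
data Shape  = Circle | Square | Triangle deriving (Eq, Show)
data Colour = Red | Green | Blue         deriving (Eq, Show)

type AVIndividual = (Shape, Colour, Int)   -- (shape, colour, size)

avExample :: AVIndividual
avExample = (Square, Red, 3)

-- ILP-style learning: an individual is an arbitrarily complex closed term,
-- here a train given as a list of cars, each carrying a list of loads.
data Load  = Load Shape Int          deriving (Eq, Show)
data Car   = Car Shape Colour [Load] deriving (Eq, Show)
type Train = [Car]

ilpExample :: Train
ilpExample = [Car Square Red [Load Circle 2], Car Triangle Blue []]

-- A hypothesis is a boolean-valued function on individuals; the type
-- system rejects ill-typed hypotheses (e.g. comparing a Shape with a
-- Colour) on syntactic grounds, before any search is performed.
eastbound :: Train -> Bool
eastbound = any (\(Car _ _ loads) -> not (null loads))
```

In this reading, discarding meaningless hypotheses amounts to type checking: only well-typed terms of type `Individual -> Bool` are admitted into the hypothesis space, which is one way a type system can absorb forms of declarative bias.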