Selective Functional Transfer: Inductive Bias from Related Tasks

The selective transfer of task knowledge within the context of artificial neural networks is studied in the ηMTL learning framework, a modified version of the multiple task learning (MTL) method of functional transfer. ηMTL is a knowledge-based inductive learning system that uses prior task knowledge to adjust its inductive bias. ηMTL employs a separate learning rate for each task output node; the learning rate for each secondary task varies as a function of a measure of relatedness between that task and the primary task. A definition of task relatedness is given, and eight task relatedness measures are presented and compared empirically. Experiments demonstrate that, from impoverished training sets, ηMTL develops predictive models with superior generalization ability compared with models produced by single task learning or standard multiple task learning.

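The core mechanism described above, a shared hidden representation with a per-task output learning rate scaled by each secondary task's relatedness to the primary task, can be sketched as follows. This is a minimal illustration and not the authors' implementation: the network shape, the sigmoid activations, and the precomputed relatedness vector (supplied as an argument rather than estimated by one of the eight measures studied in the paper) are assumptions made for the sketch.

```python
# Minimal sketch (assumed, not the paper's code) of per-task learning rates:
# one shared hidden layer, one sigmoid output node per task, and a learning
# rate eta_k = eta * R_k for task k, where R_k is a relatedness value in [0, 1]
# and R_0 = 1 for the primary task.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_eta_mtl(X, Y, relatedness, hidden=10, eta=0.1, epochs=1000, seed=0):
    """X: (n, d) inputs; Y: (n, T) targets with column 0 = primary task.
    relatedness: length-T array of per-task relatedness, relatedness[0] == 1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = Y.shape[1]
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # shared input-to-hidden weights
    W2 = rng.normal(scale=0.1, size=(hidden, T))   # hidden-to-output, one column per task
    eta_k = eta * np.asarray(relatedness)          # per-task output learning rates

    for _ in range(epochs):
        H = sigmoid(X @ W1)                        # shared hidden representation
        O = sigmoid(H @ W2)                        # one prediction per task
        delta_o = (O - Y) * O * (1 - O)            # output-layer error terms
        # Each task's output weights are updated with its own learning rate,
        # so weakly related secondary tasks make smaller weight changes.
        W2 -= (H.T @ delta_o) * eta_k
        # Error back-propagated into the shared layer is scaled the same way,
        # damping the influence of unrelated tasks on the shared representation.
        delta_h = (delta_o * eta_k) @ W2.T * H * (1 - H)
        W1 -= eta * (X.T @ delta_h)
    return W1, W2
```

Setting every relatedness value to 1 recovers ordinary MTL backpropagation, and setting all secondary values to 0 reduces the sketch to single task learning on the primary output, which is the spectrum of behaviour the framework above is designed to interpolate.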