Multi-task learning is a method for learning domain-specific bias. It consists of simultaneously training models on different tasks drawn from the same domain and forcing them to exchange domain information. This transfer of knowledge is performed by imposing constraints on the parameters that define the models, and it can lead to improved generalization on each task, as well as rapid generalization (from few examples) on a new task from the same domain. In this paper, we explore a particular multi-task learning method that forces the parameters of the models to lie on an affine manifold, defined in parameter space, that embeds domain information. We apply this method to the prediction of the prices of call options on the S&P 500 index over the period 1987 to 1993. An analysis of variance of the results shows significant improvements in generalization performance.
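The affine-manifold constraint described above can be illustrated with a minimal sketch. This is not the paper's actual model (which is applied to option pricing); it is a toy NumPy example on synthetic linear-regression tasks, where each task's weight vector is restricted to w_t = U c_t + b, with the manifold (U, b) shared across tasks and only the low-dimensional coordinates c_t being task-specific. All names (U, b, C, lr, etc.) are illustrative assumptions, not symbols from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: T related linear-regression tasks in d dimensions.
T, d, k, n = 4, 5, 2, 200  # tasks, input dim, manifold dim, examples per task

# Ground truth: each task's weights lie on a shared affine manifold w_t = U c_t + b.
U_true = rng.normal(size=(d, k))
b_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(T)]
Y = [x @ (U_true @ rng.normal(size=k) + b_true) + 0.1 * rng.normal(size=n)
     for x in X]

def mse(U, b, C):
    # Average squared error over all tasks, with w_t reconstructed from the manifold.
    return np.mean([np.mean((X[t] @ (U @ C[t] + b) - Y[t]) ** 2) for t in range(T)])

# Model: shared manifold (U, b) and per-task coordinates c_t; the constraint
# w_t = U c_t + b holds by construction rather than by penalty.
U = 0.1 * rng.normal(size=(d, k))
b = np.zeros(d)
C = 0.1 * rng.normal(size=(T, k))
lr = 0.02

mse_before = mse(U, b, C)
for _ in range(3000):
    gU = np.zeros_like(U)
    gb = np.zeros_like(b)
    for t in range(T):
        w_t = U @ C[t] + b                        # parameters confined to the manifold
        gw = X[t].T @ (X[t] @ w_t - Y[t]) / n     # grad of task-t squared error w.r.t. w_t
        gU += np.outer(gw, C[t])                  # chain rule through w_t = U c_t + b
        gb += gw
        C[t] -= lr * (U.T @ gw)                   # task-specific coordinate update
    U -= lr * gU / T                              # shared updates pool all tasks:
    b -= lr * gb / T                              # this is where domain info is exchanged
mse_after = mse(U, b, C)
```

The knowledge transfer happens in the updates to U and b, which accumulate gradients from every task, while each c_t is fit to its own task alone.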