Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning
In settings with related prediction tasks, integrated multi-task learning models can often improve performance relative to independent single-task models. However, even when average task performance improves, individual tasks may experience negative transfer, in which the multi-task model's predictions are worse than the single-task model's. We show the prevalence of negative transfer in a computational chemistry case study with 128 tasks and introduce a framework that provides a foundation for reducing negative transfer in multi-task models. Our Loss-Balanced Task Weighting approach dynamically updates task weights during model training to control the influence of individual tasks.
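As a concrete illustration of dynamic task weighting, the sketch below scales each task's weight by the ratio of its current loss to its loss at the start of the epoch, damped by an exponent alpha, so that tasks whose loss has already dropped contribute less to the combined objective. The function name, the exact ratio form, and the alpha value are illustrative assumptions for this sketch, not necessarily the paper's exact update rule.

```python
import torch

def loss_balanced_weights(task_losses, initial_losses, alpha=0.5):
    """Illustrative sketch of loss-ratio-based task weighting.

    task_losses    : list of scalar loss tensors for the current batch
    initial_losses : list of detached losses recorded at the start of the epoch
    alpha          : damping exponent (assumed hyperparameter for this sketch)
    """
    weights = []
    for loss, init in zip(task_losses, initial_losses):
        # Tasks whose loss has fallen faster receive smaller weights,
        # reducing their influence relative to slower-learning tasks.
        weights.append((loss.detach() / init).pow(alpha))
    return weights

# Hypothetical use inside a multi-task training loop:
# task_losses = [criterion(outputs[t], targets[t]) for t in range(num_tasks)]
# if first_batch_of_epoch:
#     initial_losses = [l.detach() for l in task_losses]
# weights = loss_balanced_weights(task_losses, initial_losses, alpha=0.5)
# total_loss = sum(w * l for w, l in zip(weights, task_losses))
# total_loss.backward()
```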