Learning Pipelined Tasks for Single-Objective Optimization

Most tasks depend on the outputs of other subtasks. The main problem with this pipelined model is that subtasks trained on their own data are not guaranteed to be optimal for the final target task, since they are never optimized with respect to the target task's objective. This paper proposes consolidating the subtasks into the target task as a solution to this problem. In the proposed method, all parameters of both the target task and its subtasks are optimized to fulfill the objective of the target task; this is achieved by updating the subtask parameters through backpropagation of the target-task loss. In experiments on a well-known NLP problem, the proposed method outperforms traditional pipelined models.
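To make the idea concrete, here is a minimal sketch, not the paper's actual model: a two-stage pipeline in which a subtask network produces an intermediate representation consumed by a target-task network. A single target-task loss is backpropagated through both stages, so the subtask parameters (`W1` below, a hypothetical name) are updated jointly with the target-task parameters (`W2`), rather than being frozen after separate pretraining.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): inputs x, target labels y.
# No separate subtask labels are needed -- only the target objective drives learning.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))

# Subtask parameters W1 and target-task parameters W2.
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(3, 1))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)   # subtask output (intermediate representation)
    y_hat = h @ W2        # target-task prediction
    return h, y_hat

def loss(y_hat, y):
    return float(np.mean((y_hat - y) ** 2))

lr = 0.1
_, y_hat = forward(x, W1, W2)
initial = loss(y_hat, y)

for _ in range(100):
    h, y_hat = forward(x, W1, W2)
    # Backpropagation: the single target-task loss yields gradients for
    # the target task (W2) AND the subtask (W1) via the chain rule.
    g_yhat = 2.0 * (y_hat - y) / len(y)
    g_W2 = h.T @ g_yhat
    g_h = g_yhat @ W2.T
    g_W1 = x.T @ (g_h * (1.0 - h ** 2))  # tanh'(z) = 1 - tanh(z)^2
    W1 -= lr * g_W1
    W2 -= lr * g_W2

_, y_hat = forward(x, W1, W2)
final = loss(y_hat, y)
```

After training, `final` is lower than `initial`: optimizing the subtask parameters directly against the target objective, instead of against a separate subtask objective, is exactly the consolidation the abstract describes.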