Practical industrial assembly scenarios often require robotic agents to adapt their skills to unseen tasks quickly. While transfer reinforcement learning (RL) could enable such quick adaptation, much prior work must collect large numbers of samples from source environments to learn target tasks in a model-free fashion, which remains insufficiently sample-efficient for practical use. In this work, we develop a novel transfer RL method named TRANSfer learning by Aggregating dynamics Models (TRANS-AM). TRANS-AM builds on model-based RL (MBRL) for its high sample efficiency, and requires only dynamics models to be collected from source environments. Specifically, it learns to adaptively aggregate the source dynamics models within an MBRL loop so that the aggregated model better fits the state-transition dynamics of the target environment, enabling optimal actions to be executed there. As a case study demonstrating the effectiveness of the proposed approach, we address a challenging contact-rich peg-in-hole task with variable hole orientations using a soft robot. Evaluations in both simulation and real-robot experiments demonstrate that TRANS-AM enables the soft robot to accomplish target tasks in fewer episodes than when learning the tasks from scratch.
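
To make the aggregation idea concrete, below is a minimal sketch of how pretrained source dynamics models could be combined through learnable softmax weights fitted to observed target transitions. This is an illustrative assumption, not the paper's actual implementation: the class and function names (`AggregatedDynamics`, `fit_weights`) and the use of PyTorch are hypothetical.

```python
# Hypothetical sketch of adaptive dynamics-model aggregation (not the
# authors' implementation). Assumes K pretrained source dynamics models,
# each mapping (state, action) -> predicted next state. The aggregated
# model is a convex combination sum_k w_k * f_k(s, a), with the weights
# fitted to transitions observed in the target environment.
import torch
import torch.nn as nn


class AggregatedDynamics(nn.Module):
    def __init__(self, source_models):
        super().__init__()
        self.source_models = nn.ModuleList(source_models)
        # Learnable logits; softmax keeps the mixture weights on a simplex.
        self.logits = nn.Parameter(torch.zeros(len(source_models)))

    def forward(self, state, action):
        # Stack each source model's prediction: shape (K, batch, state_dim).
        preds = torch.stack([f(state, action) for f in self.source_models])
        # Detach so gradients update only the mixture weights, keeping the
        # pretrained source models frozen.
        w = torch.softmax(self.logits, dim=0)                  # (K,)
        return torch.einsum("k,kbd->bd", w, preds.detach())    # weighted sum


def fit_weights(model, transitions, lr=1e-2, steps=200):
    """Fit only the aggregation weights to target transitions (s, a, s')."""
    opt = torch.optim.Adam([model.logits], lr=lr)
    for _ in range(steps):
        s, a, s_next = transitions
        loss = nn.functional.mse_loss(model(s, a), s_next)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

A convex combination keeps the aggregated prediction within the span of the source models' outputs, which is one simple way an adaptively aggregated model could be refit inside an MBRL loop as new target-environment transitions arrive.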