Animation of humanoid figures is a significant component of many current applications (e.g., video games, movie-making). However, the process of creating viable animations can be tedious, time-consuming, and expensive due to the complexity of controlling a character through a large number of degrees of freedom. Such difficulties are further compounded when the character is subject to external forces, as is often the case in video games. Recent dynamic response methods by Zordan et al. [Zordan et al.] and Mandel [Mandel] go beyond limp, passive “ragdoll” animation by transitioning between motion capture playback and controlled physical simulation. When an unanticipated external force is applied to the character, a search is performed over the motion capture database to find the pose closest to the character’s current configuration. The found pose serves as a desired configuration for the character to servo towards before transitioning back into motion capture playback. Current dynamic response methods use a monolithic motion database that 1) imposes a significant computational burden for search (70% of computation time for Zordan et al.) and 2) does not readily incorporate user input. We address both of these limitations with a modular collection of motion databases, each representing a single action (e.g., run, punch, kick). Through modularity, we invoke smaller, independent search procedures on each database, where the choice of desired pose is informed by the action that each database represents. Eventually, we envision dense motion databases constructed from learned parameterized models [Jenkins and Mataric; Kovar and Gleicher].
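To make the modular search concrete, the sketch below illustrates one plausible realization: each per-action database is treated as an array of joint-angle poses, each database is searched independently for the pose nearest the character's current configuration, and the overall best match (and its action label) is returned. The flat joint-angle representation, the Euclidean distance metric, and all function and variable names are illustrative assumptions, not the implementation described in the cited work.

```python
import numpy as np

def nearest_pose(database, query):
    """Return (index, distance) of the pose in `database` closest to `query`.

    Assumption: poses are flat joint-angle vectors compared with a simple
    Euclidean metric; a real system might also weight joints or match velocities.
    """
    dists = np.linalg.norm(database - query, axis=1)
    i = int(np.argmin(dists))
    return i, float(dists[i])

def modular_search(action_databases, query):
    """Run an independent nearest-pose search on each per-action database
    and return (action, pose index, distance) for the best overall match."""
    best = None
    for action, db in action_databases.items():
        i, d = nearest_pose(db, query)
        if best is None or d < best[2]:
            best = (action, i, d)
    return best

if __name__ == "__main__":
    # Illustrative synthetic databases; real databases would hold motion capture poses.
    rng = np.random.default_rng(0)
    n_dof = 30  # hypothetical number of joint degrees of freedom
    databases = {
        "run":   rng.uniform(-np.pi, np.pi, size=(500, n_dof)),
        "punch": rng.uniform(-np.pi, np.pi, size=(200, n_dof)),
        "kick":  rng.uniform(-np.pi, np.pi, size=(200, n_dof)),
    }
    current_pose = rng.uniform(-np.pi, np.pi, size=n_dof)
    action, idx, dist = modular_search(databases, current_pose)
    print(f"closest pose: {action}[{idx}], distance {dist:.3f}")
```

Because each database is searched on its own, the per-query cost scales with the size of the individual action database rather than the full collection, and the action label of the winning database can be used to bias the choice of desired pose toward user-selected actions.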