The drastic changes in flight parameters during aerobatic maneuvers and the high instability of the system make the control of autonomous aerobatics unusually difficult. In this paper, we propose a deep feature representation based imitation learning method for autonomous aerobatics, which leverages expert demonstrations to efficiently learn a mapping from high-dimensional flight observations to continuous actions (pitch, tail, and thrust). Unlike existing methods, the proposed method requires neither trajectory specification and alignment nor any assumptions about, or explicit handling of, system uncertainties, which greatly simplifies controller design and reduces the computational burden. In particular, our method uses the proposed deep feature representation network (DFR-network) to directly map expert demonstration trajectories into a deep representation space spanned by a set of learned subspaces, each of which captures motion patterns that share the same statistical properties across demonstration trajectories. Various aerobatic maneuvers can be encoded in this representation space through simple combinations of embedding features, so the proposed method can perform arbitrary aerobatic maneuvers after observing only a limited set of expert demonstrations. The effectiveness of the deep feature representation based imitation learning method is verified on real-world flight data. Experiments show that, compared with existing methods, the proposed method achieves higher control accuracy, greater robustness, and stronger anti-interference ability.
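As a rough illustration of the pipeline sketched above (not the paper's actual DFR-network), the following Python/PyTorch code shows one way such a mapping could be structured: an encoder produces mixing weights over a set of learned subspace embeddings, their weighted combination forms the deep representation, and a policy head maps it to continuous actions trained by behaviour cloning on expert (observation, action) pairs. All layer sizes, the number of subspaces, the three-dimensional action vector, and the mean-squared-error imitation loss are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, not the authors' DFR-network: observations are projected onto
# learned subspace embeddings whose combination conditions a continuous-action policy.
import torch
import torch.nn as nn


class DFRPolicySketch(nn.Module):
    def __init__(self, obs_dim: int, num_subspaces: int = 8,
                 embed_dim: int = 32, action_dim: int = 3):
        super().__init__()
        # Encoder: high-dimensional flight observation -> mixing weights over subspaces.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, num_subspaces),
        )
        # Learned subspace embeddings; maneuvers correspond to combinations of these.
        self.subspaces = nn.Parameter(torch.randn(num_subspaces, embed_dim))
        # Policy head: combined embedding -> continuous actions (assumed pitch, tail, thrust).
        self.policy = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.encoder(obs), dim=-1)   # (batch, num_subspaces)
        embedding = weights @ self.subspaces                  # (batch, embed_dim)
        return self.policy(embedding)                         # (batch, action_dim)


def train_step(model, optimizer, obs_batch, action_batch):
    """One behaviour-cloning update on a batch of expert demonstrations."""
    pred = model(obs_batch)
    loss = nn.functional.mse_loss(pred, action_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, encoding a new maneuver amounts to producing a different set of mixing weights over the shared subspace embeddings, which mirrors the abstract's claim that maneuvers are composed from a fixed set of learned motion patterns rather than re-learned from scratch.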