Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are evolving rapidly. To apply neural networks to a wider range of applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. FPGAs can be an ideal platform for neural network acceleration because they are programmable and can achieve much higher energy efficiency than general-purpose processors. However, the long development period and insufficient performance of traditional FPGA acceleration solutions have prevented their wide adoption. In this work, we propose a complete design flow that achieves both fast deployment and high energy efficiency for accelerating neural networks on FPGAs. Deep compression and data quantization are employed to exploit the redundancy in the algorithms and to reduce both computational and memory complexity. Two architecture designs, one for CNNs and one for DNNs/RNNs, are introduced together with the compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, our designs achieve up to 10 times higher energy efficiency than mobile and desktop GPUs.
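
As a concrete illustration of the data quantization step mentioned above, the minimal sketch below converts floating-point weights to dynamic fixed-point, choosing the number of fractional bits from the tensor's dynamic range so the largest weight still fits in the integer range. The 8-bit width, the per-tensor scale, and all function names are illustrative assumptions for this sketch, not the exact scheme used in the design flow.

```python
# A minimal sketch of linear (dynamic) fixed-point quantization.
# Assumptions: signed bit_width-bit codes, one shared scale per tensor,
# and fractional bits capped at bit_width - 1. The paper's actual
# quantization scheme may differ in these details.
import numpy as np

def quantize_fixed_point(weights: np.ndarray, bit_width: int = 8):
    """Quantize a float tensor to signed fixed-point with `bit_width` bits.

    Picks the number of fractional bits so that the largest-magnitude
    weight still fits in the integer range, then rounds every weight
    to that grid. Returns the integer codes and the fractional-bit count.
    """
    max_abs = float(np.max(np.abs(weights)))
    # Integer bits needed to cover the dynamic range (sign bit excluded).
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = bit_width - 1 - int_bits
    scale = 2.0 ** frac_bits
    q_min, q_max = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    codes = np.clip(np.round(weights * scale), q_min, q_max).astype(np.int32)
    return codes, frac_bits

def dequantize(codes: np.ndarray, frac_bits: int) -> np.ndarray:
    """Recover approximate float values from fixed-point codes."""
    return codes.astype(np.float32) / (2.0 ** frac_bits)

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    codes, frac_bits = quantize_fixed_point(w, bit_width=8)
    w_hat = dequantize(codes, frac_bits)
    # Quantization error is bounded by half a step, i.e. 2**-(frac_bits + 1).
    print("fractional bits:", frac_bits)
    print("max abs error:", np.max(np.abs(w - w_hat)))
```

Storing only the integer codes plus one fractional-bit count per tensor is what lets such a scheme cut memory traffic relative to 32-bit floats while keeping arithmetic in cheap integer units on the FPGA.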