Aiming at the tendency of multi-robot task allocation methods in soccer systems to fall into local optima and to suffer from poor real-time performance, a new multi-robot task allocation method is proposed. First, in order to improve the speed and efficiency of finding optimal actions and to overcome the limitation that traditional Q-learning often cannot propagate negative values, we propose a new way to propagate negative values, namely a Q-learning method based on negative rewards. Next, in order to adapt to a dynamic external environment, an adaptive ε-greedy method is proposed, in which the mode of operation is determined by the value of ε. This method is based on the classical ε-greedy strategy: during problem solving, ε adapts as needed to achieve a better balance between exploration and exploitation in reinforcement learning. Finally, we apply this method to a robot soccer game system. Experiments show that dangerous actions can be avoided effectively by the Q-learning method that propagates negative rewards, and that the adaptive ε-greedy strategy adapts to the external environment better and faster, thereby improving the speed of convergence.
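The two ideas described above can be sketched as a tabular Q-learning agent. Since the abstract does not give the exact update formulas, the propagation rule (pushing a discounted penalty back along the episode's trajectory) and the ε-adaptation rule (exploring more when performance drops) below are illustrative assumptions, not the paper's published method; all class and method names are hypothetical.

```python
import random

class AdaptiveEpsilonQLearner:
    """Illustrative sketch: tabular Q-learning with backward propagation
    of negative rewards and an adaptive epsilon-greedy policy. The update
    rules are assumptions made for this example."""

    def __init__(self, n_states, n_actions, alpha=0.5, gamma=0.9,
                 eps=0.3, eps_min=0.05, eps_max=0.5):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma = alpha, gamma
        self.eps, self.eps_min, self.eps_max = eps, eps_min, eps_max
        self.trajectory = []  # (state, action) pairs visited this episode

    def act(self, state):
        # Classical epsilon-greedy choice, with the current adaptive eps.
        if random.random() < self.eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, r, s_next):
        self.trajectory.append((s, a))
        # Standard Q-learning temporal-difference update.
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])
        # Assumed negative-reward propagation: a penalty is pushed back,
        # discounted, along the trajectory so that earlier actions which
        # led toward the dangerous state are also discouraged.
        if r < 0:
            penalty = r
            for ps, pa in reversed(self.trajectory[:-1]):
                penalty *= self.gamma
                self.q[ps][pa] += self.alpha * penalty

    def adapt_epsilon(self, episode_return, baseline):
        # Assumed adaptation rule: when an episode underperforms the
        # running baseline, raise eps to explore more; otherwise lower
        # it (within bounds) to exploit the current policy.
        if episode_return < baseline:
            self.eps = min(self.eps_max, self.eps * 1.1)
        else:
            self.eps = max(self.eps_min, self.eps * 0.9)
```

Under this sketch, a collision penalty received at one state lowers not only that state-action value but also the values of the actions that led there, which is what allows dangerous action sequences to be avoided rather than only their final step.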