With the development of the Internet of Things (IoT) and 5G, ubiquitous smart devices and network functions deliver emerging network services efficiently by building many network connections over WiFi, LTE/5G, Ethernet, etc. The Multipath TCP (MPTCP) protocol, which enables these devices to establish multiple paths for simultaneous data transmission, has become a widely used extension of standard TCP in smart devices and network functions. At the same time, MPTCP networks carry heavier and more time-varying traffic loads, so an efficient congestion control mechanism that schedules traffic across multiple subflows and avoids congestion is highly desirable. In this paper, we propose a decentralized learning approach, DeepCC, that adapts to volatile environments and realizes efficient congestion control. Multi-Agent Deep Reinforcement Learning (MADRL) is used to learn a congestion control policy for each subflow according to real-time network states. To address the problems of a fixed state space and slow convergence, we adopt two self-attention mechanisms to encode the states and to train the policy, respectively. Owing to the asynchronous design of DeepCC, the learning process introduces no extra delay or overhead into decision making. Experimental results show that DeepCC consistently outperforms a well-known heuristic method and a DRL-based MPTCP congestion control method in terms of goodput and jitter. Moreover, with the attention mechanism, DeepCC reduces convergence time by about 50% and increases goodput by about 80% compared with commonly used neural network structures.