A Deep Reinforcement Learning Framework for Energy Management of Extended Range Electric Delivery Vehicles

Rule-based (RB) energy management strategies are widely used in hybrid electric vehicles because they are easy to implement and require no prior knowledge of future trips. In the literature, the parameters of RB methods are tuned and designed on known driving cycles. Although promising results have been demonstrated, such cycle-specific methods are difficult to apply to real trips of last-mile delivery vehicles, which differ significantly from trip to trip in distance and energy intensity. In this paper, a reinforcement learning method is combined with an RB strategy to improve the fuel economy of an in-use extended range electric vehicle (EREV) operating in a last-mile package delivery application. An intelligent agent is trained on historical trips of a single delivery vehicle to tune a parameter of the engine-generator control logic during the trip using real-time information. The method is demonstrated on actual historical delivery trips in a simulation environment. An average fuel efficiency improvement of 19.5%, measured in miles per gallon gasoline equivalent, is achieved on 44 test trips ranging from 31 to 54 miles that were not used for training, demonstrating the method's promise to generalize. The presented framework is extendable to other RB methods and to other EREV applications, such as transit buses and commuter vehicles, where similar trips are repeated day after day. A minimal illustrative sketch of the control idea is given below.
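The sketch below illustrates the general idea of an agent that periodically retunes one parameter of the engine-generator logic from real-time trip information. It is not the authors' implementation: the toy trip model, the state features (battery SOC, trip progress, current threshold), the reward, and the adjustable "engine-on" power threshold are all assumptions introduced for this example, and a generic DQN-style learner with a replay buffer and target network stands in for the paper's specific deep reinforcement learning design.

```python
# Minimal sketch (not the authors' code): a DQN-style agent that adjusts an
# assumed engine-on power threshold of an EREV range extender during a trip.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

ACTIONS = np.array([-2.0, 0.0, 2.0])  # kW adjustment applied to the threshold


class QNet(nn.Module):
    """Small MLP mapping trip state to Q-values for the three adjustments."""
    def __init__(self, n_state=3, n_action=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_action),
        )

    def forward(self, x):
        return self.net(x)


class ToyEREVTrip:
    """Crude surrogate for one delivery trip: random power demand, with the
    generator covering demand above the adjustable threshold."""
    def __init__(self, length_mi=40.0):
        self.length = length_mi
        self.reset()

    def reset(self):
        self.soc, self.dist, self.threshold_kw = 0.9, 0.0, 10.0
        return self._state()

    def _state(self):
        return np.array([self.soc, self.dist / self.length,
                         self.threshold_kw / 20.0], dtype=np.float32)

    def step(self, action_idx):
        self.threshold_kw = float(np.clip(self.threshold_kw + ACTIONS[action_idx], 0.0, 20.0))
        demand_kw = max(0.0, np.random.normal(8.0, 4.0))
        engine_kw = max(0.0, demand_kw - self.threshold_kw)   # generator share
        battery_kw = demand_kw - engine_kw                    # battery share
        self.soc -= 0.002 * battery_kw
        self.dist += 0.5                                      # half a mile per step
        fuel = 0.05 * engine_kw
        # Penalize fuel use and depleting the battery before the trip ends.
        reward = -fuel - (5.0 if self.soc <= 0.05 else 0.0)
        done = self.dist >= self.length or self.soc <= 0.05
        return self._state(), reward, done


def train(episodes=200, gamma=0.99, eps=0.1, batch=64):
    env, qnet, target = ToyEREVTrip(), QNet(), QNet()
    target.load_state_dict(qnet.state_dict())
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    buf = deque(maxlen=10_000)
    for ep in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:                         # epsilon-greedy action
                a = random.randrange(len(ACTIONS))
            else:
                with torch.no_grad():
                    a = int(qnet(torch.from_numpy(s)).argmax())
            s2, r, done = env.step(a)
            buf.append((s, a, r, s2, done))
            s = s2
            if len(buf) >= batch:                             # one replay update
                sb, ab, rb, s2b, db = map(np.array, zip(*random.sample(buf, batch)))
                states = torch.from_numpy(sb.astype(np.float32))
                actions = torch.from_numpy(ab.astype(np.int64))
                q = qnet(states).gather(1, actions.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    q_next = target(torch.from_numpy(s2b.astype(np.float32))).max(1).values
                    y = (torch.from_numpy(rb.astype(np.float32))
                         + gamma * q_next * torch.from_numpy(1.0 - db.astype(np.float32)))
                loss = nn.functional.mse_loss(q, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        if ep % 20 == 0:                                      # periodic target sync
            target.load_state_dict(qnet.state_dict())
    return qnet


if __name__ == "__main__":
    train(episodes=20)  # short demo run
```

In this toy setup the learned policy simply trades off fuel use against charge depletion on a randomized trip; the paper's framework instead trains on a fleet vehicle's historical delivery trips and evaluates the tuned parameter in a vehicle simulation.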
