Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling

Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and varying nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive amounts of information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for autonomous driving. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. PPGeo proceeds in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, taking two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting future ego-motion, optimized with the photometric error, based on the current visual observation only. The pre-trained visual encoder is thus equipped with rich representations relevant to the driving policy and is competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our approach, with improvements ranging from 2% to over 100% given very limited data.
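
To make the two-stage objective concrete, below is a minimal PyTorch-style sketch of the two training steps. It is an illustration under stated assumptions, not the paper's implementation: the network names (depth_net, pose_net, encoder), the 4x4-matrix pose output, the joint prediction of intrinsics by the pose network (plausible since the videos are uncalibrated), and the plain-L1 photometric error are all simplifications; self-supervised depth methods typically combine L1 with an SSIM term and multi-scale losses.

```python
# Minimal sketch of a PPGeo-style two-stage self-supervised objective.
# Illustrative only: architectures, intrinsics handling, and loss weights
# are assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F


def inverse_warp(src, depth, pose, K):
    """Synthesize the target view by back-projecting target pixels with
    `depth` (B,1,H,W), moving them by the relative `pose` (B,4,4), and
    re-projecting with intrinsics `K` (B,3,3) to sample `src` (B,C,H,W)."""
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=src.dtype, device=src.device),
        torch.arange(W, dtype=src.dtype, device=src.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).view(1, 3, -1).expand(B, -1, -1)
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)   # back-project to 3D
    cam = pose[:, :3, :3] @ cam + pose[:, :3, 3:4]           # rigid transform
    proj = K @ cam
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)          # perspective divide
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,          # normalize to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], -1).view(B, H, W, 2)
    return F.grid_sample(src, grid, padding_mode="border", align_corners=True)


def photometric_loss(pred, target):
    # Plain L1 for brevity; methods of this kind usually mix L1 with SSIM.
    return (pred - target).abs().mean()


def stage1_step(frame_t, frame_s, depth_net, pose_net):
    """Stage 1: depth and ego-motion (plus intrinsics, since the videos are
    uncalibrated) are predicted jointly from two consecutive frames."""
    depth = depth_net(frame_t)
    pose, K = pose_net(frame_t, frame_s)          # relative pose + intrinsics
    recon = inverse_warp(frame_s, depth, pose, K)
    return photometric_loss(recon, frame_t)


def stage2_step(frame_t, frame_s, encoder, depth_net, pose_net):
    """Stage 2: the policy encoder predicts future ego-motion from the current
    frame ALONE; the frozen stage-1 networks supply depth and intrinsics so
    the same photometric error can supervise the single-frame prediction."""
    pose = encoder(frame_t)                       # ego-motion from one frame
    with torch.no_grad():
        depth = depth_net(frame_t)
        _, K = pose_net(frame_t, frame_s)
    recon = inverse_warp(frame_s, depth, pose, K)
    return photometric_loss(recon, frame_t)
```

In this reading, the second stage forces the encoder to infer from a single image everything needed to explain where the vehicle moves next, which is precisely the policy-relevant abstraction the pre-training aims to capture.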
