State estimation with limited sensors - A deep learning based approach

The importance of state estimation in fluid mechanics is well established; it is required for several tasks, including design and optimization, active control, and future state prediction. A common approach is to rely on reduced order models. Such approaches generally use measurement data from a single time instance, yet sensor data is often sequential, and ignoring this temporal information results in information loss. In this paper, we propose a deep learning based state estimation framework that learns from sequential data. The proposed model uses a recurrent cell to pass information across time steps, enabling this information to be exploited for recovering the full state. We show that utilizing sequential data allows the state to be recovered from only one or two sensors. For efficient recovery, the proposed approach is coupled with an autoencoder-based reduced order model. We demonstrate the performance of the proposed approach on two examples, where it outperforms existing alternatives in the literature.
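To make the described architecture concrete, the following is a minimal PyTorch sketch of the kind of model the abstract outlines: an autoencoder reduced order model over the full state, and a recurrent (LSTM) estimator that maps a short window of sparse sensor readings to the autoencoder's latent code, which the decoder then maps back to the full state. All layer sizes, the sequence length, the state dimension, and the sensor count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' exact model):
# an LSTM ingests a sequence of sparse sensor readings and predicts the
# latent code of a pre-trained autoencoder ROM; the ROM decoder then
# reconstructs the full flow state from that code.
import torch
import torch.nn as nn

class AutoencoderROM(nn.Module):
    """Autoencoder-based reduced order model: full state <-> latent code."""
    def __init__(self, state_dim=4096, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, state_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class SequentialStateEstimator(nn.Module):
    """LSTM that processes a sequence of sparse sensor measurements and
    predicts the ROM latent code at the final time step."""
    def __init__(self, n_sensors=2, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, sensor_seq):           # (batch, time, n_sensors)
        _, (h_n, _) = self.lstm(sensor_seq)  # final hidden state
        return self.head(h_n[-1])            # (batch, latent_dim)

# Usage: decode the estimated latent code into the full state.
rom = AutoencoderROM()
estimator = SequentialStateEstimator()
sensors = torch.randn(8, 10, 2)               # 8 samples, 10 steps, 2 sensors
full_state = rom.decoder(estimator(sensors))  # (8, 4096)
```

In such a setup the autoencoder would typically be trained first on full-state snapshots, after which the recurrent estimator is trained to regress the frozen encoder's latent codes from the sensor sequences.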
