Deep Generative Models for Spectroscopic Analysis on Mars

Hyperspectral instruments measure the electromagnetic energy emitted or reflected by materials at high spectral resolution (hundreds to thousands of channels), enabling material identification through spectroscopic analysis. On the Martian surface, the ChemCam instrument on the Curiosity rover uses laser-induced breakdown spectroscopy (LIBS) to measure the emission spectra of surface materials; from orbit, the CRISM hyperspectral imager on the Mars Reconnaissance Orbiter (MRO) performs analogous measurements. The data received are noisy, high-dimensional, and largely unlabeled. The ability to accurately predict the elemental and material compositions of surface samples, as well as to simulate spectra from hypothetical compositions, collectively known as hyperspectral unmixing, is invaluable to the exploration process. The nature of the problem allows us to construct deep (semi-supervised) generative models that accomplish both tasks while exploiting a large unlabeled dataset. Our main technical contribution is an invertibility trick whereby we train our model in reverse.
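To make the setup concrete, below is a minimal sketch of a semi-supervised deep generative model for spectral unmixing in the spirit of Kingma and Welling's "Semi-supervised Learning with Deep Generative Models": a latent composition vector y and a nuisance latent z jointly generate a spectrum, y is supervised when labels exist, and unlabeled spectra still contribute through the reconstruction and KL terms. This is not the paper's implementation: the original work used Theano, and PyTorch, the layer sizes, the channel/composition dimensions, and the continuous softmax treatment of y are all assumptions here. The paper's "invertibility trick" (training the model in reverse) is not reproduced, as the abstract gives no details.

```python
# Sketch only: a semi-supervised VAE for spectral unmixing.
# All dimensions and architecture choices below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS = 2048   # spectral channels (assumed)
N_COMP = 8          # composition dimensions, e.g. major-oxide fractions (assumed)
N_Z = 16            # nuisance latent: noise, instrument state, ... (assumed)

class SemiSupervisedVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Inference networks: spectrum x -> composition y and latent z.
        self.enc_y = nn.Sequential(
            nn.Linear(N_CHANNELS, 256), nn.ReLU(), nn.Linear(256, N_COMP))
        self.enc_z = nn.Sequential(
            nn.Linear(N_CHANNELS + N_COMP, 256), nn.ReLU())
        self.z_mu = nn.Linear(256, N_Z)
        self.z_logvar = nn.Linear(256, N_Z)
        # Generative network: (y, z) -> reconstructed spectrum.
        self.dec = nn.Sequential(
            nn.Linear(N_COMP + N_Z, 256), nn.ReLU(),
            nn.Linear(256, N_CHANNELS))

    def forward(self, x, y=None):
        # Predict the composition; used in place of y on unlabeled batches.
        y_hat = torch.softmax(self.enc_y(x), dim=-1)  # fractions sum to 1
        y_use = y if y is not None else y_hat
        h = self.enc_z(torch.cat([x, y_use], dim=-1))
        mu, logvar = self.z_mu(h), self.z_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        x_rec = self.dec(torch.cat([y_use, z], dim=-1))
        return x_rec, y_hat, mu, logvar

def loss_fn(model, x, y=None, alpha=10.0):
    x_rec, y_hat, mu, logvar = model(x, y)
    rec = F.mse_loss(x_rec, x, reduction="sum")                    # -log p(x|y,z)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(q(z|x,y)||p(z))
    loss = rec + kl
    if y is not None:  # labeled batch: supervise the composition head
        loss = loss + alpha * F.mse_loss(y_hat, y, reduction="sum")
    return loss
```

In use, one would alternate minibatches of labeled spectra (calling `loss_fn(model, x, y)`) and unlabeled spectra (calling `loss_fn(model, x)`), so the large unlabeled dataset shapes the generative model while the small labeled set anchors the composition head; the trained decoder then simulates spectra from hypothetical compositions, and the encoder predicts compositions from observed spectra.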
