We present 14 autoencoders, 15 kernels and 14 multilayer perceptrons for electron micrograph restoration and compression. These have been trained for transmission electron microscopy (TEM), scanning transmission electron microscopy (STEM), and both modalities combined (TEM+STEM). TEM autoencoders have been trained for 1$\times$, 4$\times$, 16$\times$ and 64$\times$ compression; STEM autoencoders for 1$\times$, 4$\times$ and 16$\times$ compression; and TEM+STEM autoencoders for 1$\times$, 2$\times$, 4$\times$, 8$\times$, 16$\times$, 32$\times$ and 64$\times$ compression. The kernels and multilayer perceptrons have been trained to approximate the denoising effect of the 4$\times$ compression autoencoders. Kernels with input sizes of 3, 5, 7, 11 and 15 have been fitted for TEM, STEM and TEM+STEM. TEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5 and 7, and with 2 hidden layers for input sizes of 5 and 7. STEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5 and 7. TEM+STEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5, 7 and 11, and with 2 hidden layers for input sizes of 3 and 7. Our code, example usage and pre-trained models are available at this https URL
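To illustrate how a fitted kernel of the kind described above would be applied, here is a minimal sketch of patch-wise denoising by convolving a micrograph with a small learned kernel. The actual trained kernel weights ship with the repository; the 3$\times$3 kernel below is a hypothetical stand-in, and `apply_kernel` is an illustrative helper name, not part of the released code.

```python
import numpy as np

def apply_kernel(img, kernel):
    """Denoise a 2D micrograph by convolving it with a small learned kernel.

    The paper fits kernels for input sizes 3, 5, 7, 11 and 15; here we
    substitute a hypothetical normalized 3x3 kernel for demonstration.
    """
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")  # reflect-pad so output matches input size
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Weighted sum of the k x k neighbourhood around pixel (i, j)
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

# Hypothetical stand-in kernel: normalized Gaussian-like 3x3 weights
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64) / 16.0

# Synthetic noisy micrograph: constant signal plus Gaussian noise
noisy = 0.5 + np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
denoised = apply_kernel(noisy, kernel)
```

Because the kernel performs a local weighted average, the pixel-wise variance of `denoised` is lower than that of `noisy`; a multilayer perceptron denoiser replaces this fixed weighted sum with a learned nonlinear mapping of the same input patch.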