Deep learning improves mobile-phone microscopy (Conference Presentation)

Mobile-phone-based microscopy often relies on 3D-printed opto-mechanical designs and inexpensive optical components that are not optimized for microscopic imaging of specimens. For example, the illumination source is often a battery-powered LED, which can create spectral distortions in the acquired image. Mechanical misalignments of the optical components and the sample holder, as well as inexpensive lenses, lead to spatial distortions at the microscale. Furthermore, mobile phones are equipped with CMOS image sensors with a pixel size of ~1-2 µm, which results in an inferior signal-to-noise ratio compared to benchtop microscopes, which are typically equipped with much larger pixels, e.g., ~5-10 µm. Here, we demonstrate a supervised learning framework, based on a deep convolutional neural network, that substantially enhances smartphone microscope images by eliminating spectral aberrations, increasing the signal-to-noise ratio, and improving the spatial resolution of the acquired images. Once trained, the deep neural network is fixed and rapidly outputs an image matching the quality of a benchtop microscope image in a feed-forward, non-iterative manner, without the need for any modeling of the aberrations in the mobile imaging system. This framework is demonstrated using pathology slides of thin tissue sections and blood smears, validating its superior performance even on highly compressed input images, which is especially suitable for telemedicine applications with restricted bandwidth and storage requirements. This deep learning-powered approach is broadly applicable to various mobile microscopy systems that can be used for point-of-care medicine and global health related applications.
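
To make the supervised, feed-forward enhancement idea concrete, the sketch below shows a minimal image-to-image convolutional network trained on co-registered (smartphone, benchtop) patch pairs. This is an illustrative assumption, not the presented architecture: the layer count, channel width, residual connection, L1 loss, and the names EnhancementCNN and train_step are all hypothetical, and PyTorch is used only as an example framework.

```python
# Minimal sketch (assumed, not the authors' network): a feed-forward CNN that
# maps a smartphone-microscope RGB patch to an enhanced RGB patch, supervised
# by a co-registered benchtop-microscope patch.
import torch
import torch.nn as nn

class EnhancementCNN(nn.Module):
    """Small image-to-image CNN with a global residual connection."""
    def __init__(self, channels: int = 64, num_layers: int = 8):
        super().__init__()
        layers = [nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a correction to the input; the residual path preserves gross structure.
        return x + self.body(x)

def train_step(model, optimizer, phone_patch, benchtop_patch):
    """One supervised update on a registered (phone, benchtop) patch pair."""
    optimizer.zero_grad()
    output = model(phone_patch)
    loss = nn.functional.l1_loss(output, benchtop_patch)  # assumed pixel-wise loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = EnhancementCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy 64x64 RGB patches standing in for registered image pairs.
    phone = torch.rand(4, 3, 64, 64)
    benchtop = torch.rand(4, 3, 64, 64)
    print("L1 loss:", train_step(model, optimizer, phone, benchtop))
    # Inference: once trained, the network is fixed and applied in a single
    # feed-forward pass, with no iterative aberration modeling.
    with torch.no_grad():
        enhanced = model(phone)
```

In this sketch the network learns the mapping from distorted smartphone images to benchtop-quality targets directly from data, which is why no explicit model of the spectral or spatial aberrations of the mobile imaging system is required at inference time.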