Rapid and Scalable COVID-19 Screening using Speech, Breath, and Cough Recordings
Over the course of the COVID-19 pandemic, efforts have been made to rapidly scale diagnostic tests to increase access and throughput. Though the primary mechanism for testing has been wet laboratory tests, several recent studies have shown that acoustic signatures of COVID-19 can be used to accurately discriminate between positive and negative subjects. These methods promise wide-scale access and more regular and rapid testing, but face open questions about their robustness and about the hygiene of recording forced coughs. Here we propose an alternative method for triaging patients using acoustic signatures in speech and breathing sounds. Using a crowd-sourced database of sound recordings from self-identified COVID-19-positive and -negative subjects, we develop a simple sound-analysis method that can be deployed in a system to detect COVID-19 unobtrusively. Mel-frequency cepstral coefficients (MFCCs) and RelAtive SpecTrAl perceptual linear prediction (RASTA-PLP) features are evaluated independently and conjointly with two classification techniques: random forests (RF) and deep neural networks (DNN). The best results are achieved for speech and breathing sounds using a combination of MFCC and RASTA-PLP features, with an area under the curve (AUC) of 0.7938 for detecting COVID-19 from speech and 0.7575 from breathing sounds, compared to an AUC of 0.6836 for cough sounds using MFCCs alone. These results show promise for the future deployment of a rapid screening tool based on speech recordings as the world moves to contain future outbreaks and accelerate vaccination efforts.
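The paper does not include code; the following is a minimal Python sketch of the kind of pipeline the abstract describes: frame-level MFCCs summarized into one fixed-length vector per recording, classified with a random forest, and scored by AUC. The use of librosa and scikit-learn is an assumption (the authors do not name their tools), the file list and labels are hypothetical placeholders, and RASTA-PLP extraction is omitted because it has no standard implementation in librosa.

```python
# Sketch of an MFCC + random-forest screening pipeline, assuming librosa
# and scikit-learn. Paths, labels, and hyperparameters are illustrative,
# not taken from the paper.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def mfcc_features(path, n_mfcc=13):
    """Load one recording and summarize its MFCCs as per-coefficient means and stds."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical crowd-sourced dataset: (path, label) pairs,
# label 1 = self-identified COVID-19 positive, 0 = negative.
dataset = [("speech_0001.wav", 1), ("speech_0002.wav", 0)]  # ...many more pairs

X = np.stack([mfcc_features(path) for path, _ in dataset])
y = np.array([label for _, label in dataset])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# AUC on held-out recordings, using the positive-class probability as the score.
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

In a setup mirroring the paper's MFCC + RASTA-PLP combination, RASTA-PLP summary statistics would simply be concatenated onto each MFCC feature vector before training; the same feature matrix could also be fed to a DNN classifier in place of the random forest.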