Deep learning supported breast cancer classification with multi-modal image fusion
Deep learning can support the early diagnosis of breast cancer, but relying on a single image modality risks missing tumors or producing false diagnoses. Combining two image modalities (mammography and ultrasound) and fusing the information they carry can significantly improve classification accuracy. Dense connections have attracted great interest in computer vision because they improve gradient flow and provide deep supervision throughout training. In particular, in DenseNet every layer is connected to every other layer in a feed-forward fashion, which has yielded strong performance on natural image classification tasks. The proposed DenseNet-201, with connections both within the same path and across different paths, has the freedom to learn more complex combinations of the modalities. Multi-modal images gather features from diverse views and provide complementary information. The experimental results of the proposed method for accuracy, recall, precision, area under the curve, and F1 score were 93.83%, 93.83%, 93.83%, 95.61%, and 93.8%, respectively. In diagnosing breast cancer from ultrasound and mammogram images, these results outperform previous methods in assisting specialists.
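A minimal sketch of the two-branch fusion idea described above, assuming PyTorch and torchvision. The branch layout, the late fusion by feature concatenation, and all names and sizes here are illustrative assumptions, not the authors' exact architecture:

```python
# Hypothetical two-branch DenseNet-201 fusion classifier: one backbone per
# modality (mammography, ultrasound), fused by concatenating pooled features.
import torch
import torch.nn as nn
from torchvision import models


class MultiModalDenseNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One DenseNet-201 feature extractor per modality.
        self.mammo_branch = models.densenet201(weights=None).features
        self.us_branch = models.densenet201(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # DenseNet-201 outputs 1920-channel feature maps; concatenating the
        # two branches gives 3840 fused features for the classifier head.
        self.classifier = nn.Linear(1920 * 2, num_classes)

    def forward(self, mammo: torch.Tensor, us: torch.Tensor) -> torch.Tensor:
        f_m = self.pool(torch.relu(self.mammo_branch(mammo))).flatten(1)
        f_u = self.pool(torch.relu(self.us_branch(us))).flatten(1)
        fused = torch.cat([f_m, f_u], dim=1)  # late fusion by concatenation
        return self.classifier(fused)


model = MultiModalDenseNet(num_classes=2)
mammo = torch.randn(1, 3, 224, 224)  # dummy mammogram batch
us = torch.randn(1, 3, 224, 224)     # dummy ultrasound batch
logits = model(mammo, us)            # shape: (1, 2)
```

Late fusion of this kind lets each backbone keep its within-path dense connections while the shared head learns combinations across paths; how the paper wires cross-path connections beyond concatenation is not specified in the abstract.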