PQA-Net: Deep No Reference Point Cloud Quality Assessment via Multi-View Projection

Recently, 3D point clouds have become popular owing to their ability to represent the real world as an advanced content modality in modern communication systems. In view of their wide range of applications, especially in immersive communication oriented toward human perception, quality metrics for point clouds are essential. Existing point cloud quality evaluations rely on the full original point cloud or a certain portion of it, which severely limits their applicability. To overcome this problem, we propose a novel deep-learning-based no-reference point cloud quality assessment method, namely PQA-Net. Specifically, PQA-Net consists of a multi-view-based joint feature extraction and fusion (MVFEF) module, a distortion type identification (DTI) module, and a quality vector prediction (QVP) module. The DTI and QVP modules share the features generated by the MVFEF module. Using the distortion type labels, the DTI and MVFEF modules are first pre-trained to initialize the network parameters, after which the whole network is jointly trained to evaluate the final point cloud quality. Experimental results on the Waterloo Point Cloud dataset show that PQA-Net achieves better or equivalent performance compared with state-of-the-art quality assessment methods. The code of the proposed model will be made publicly available at https://github.com/qdushl/PQA-Net to facilitate reproducible research.
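
To make the three-module design concrete, the PyTorch sketch below illustrates one plausible reading of the architecture described above: a shared 2D backbone extracts features from each projected view (MVFEF), and two heads on the fused feature predict the distortion type (DTI) and a per-type quality vector (QVP). The module names follow the abstract, but the ResNet-18 backbone, the view count, the max-pool fusion, and the probability-weighted final score are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of the PQA-Net layout, assuming a shared ResNet-18
# backbone, six projected views, and max-pool fusion across views.
import torch
import torch.nn as nn
import torchvision.models as models


class PQANetSketch(nn.Module):
    def __init__(self, num_views=6, num_distortion_types=5, feat_dim=512):
        super().__init__()
        # MVFEF: one shared 2D backbone applied to every projected view.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()  # keep the 512-d pooled feature
        self.mvfef = backbone
        self.num_views = num_views
        # DTI head: classifies the distortion type from the fused feature.
        self.dti = nn.Linear(feat_dim, num_distortion_types)
        # QVP head: predicts one quality score per distortion type.
        self.qvp = nn.Linear(feat_dim, num_distortion_types)

    def forward(self, views):
        # views: (batch, num_views, 3, H, W) rendered 2D projections.
        b, v, c, h, w = views.shape
        feats = self.mvfef(views.view(b * v, c, h, w))  # (b*v, feat_dim)
        fused = feats.view(b, v, -1).max(dim=1).values  # fuse across views
        dist_logits = self.dti(fused)                   # distortion type
        quality_vec = self.qvp(fused)                   # per-type quality
        # One plausible way to combine the heads (assumption): weight the
        # quality vector by the predicted distortion-type probabilities.
        score = (quality_vec * dist_logits.softmax(dim=-1)).sum(dim=-1)
        return dist_logits, quality_vec, score


if __name__ == "__main__":
    net = PQANetSketch()
    dummy = torch.randn(2, 6, 3, 224, 224)
    logits, qvec, score = net(dummy)
    print(logits.shape, qvec.shape, score.shape)  # (2, 5) (2, 5) (2,)
```

Under the two-stage scheme the abstract describes, the DTI head and the shared MVFEF backbone would first be trained with a classification loss on distortion type labels, and the full network, including the QVP head, would then be fine-tuned end to end on quality labels.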