SSRNAS: Search Space Reduced One-shot NAS by a Recursive Attention-based Predictor with Cell Tensor-flow Diagram

One-shot neural architecture search (NAS) has attracted increasing attention because it can provide a reasonable performance ranking of all paths in the super-net. The ranking ability of a one-shot NAS model depends heavily on how the super-net is trained. However, competition among architectures for shared weights during training, together with the Matthew effect induced by reward-based sampling methods, makes it difficult to train the super-net fairly and effectively. In this paper, we propose Search Space Reduced One-shot NAS (SSRNAS), a method that continuously shrinks the search space while training fairly to improve performance. Specifically, a predictor based on graph recursive attention is trained to evaluate architecture performance during super-net training. In addition, a dynamically determined threshold serves as the criterion for discarding architectures, so that poorly performing architectures are continuously removed from the search space. The remaining architectures are sampled uniformly for fair training, which avoids the Matthew effect. Experimental results show that the proposed method performs well on the three datasets of the NAS-Bench-201 search space and on the CIFAR-10 dataset in the DARTS search space.
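The abstract describes a loop of fair (uniform) sampling, predictor-based scoring, and threshold-based discarding. The following is a minimal sketch of that loop under stated assumptions; the function and parameter names (`reduce_search_space`, `train_supernet_on`, `drop_ratio`) are hypothetical and do not correspond to the paper's actual implementation.

```python
import random


def train_supernet_on(architectures):
    """Placeholder for one fair super-net training step (hypothetical)."""
    pass


def reduce_search_space(search_space, predictor, num_rounds=10, drop_ratio=0.1):
    """Sketch of search-space reduction: sample uniformly, score with a
    predictor, and discard architectures below a dynamic threshold."""
    remaining = list(search_space)
    for _ in range(num_rounds):
        # Uniform sampling over the *remaining* architectures keeps training
        # fair and avoids the Matthew effect of reward-based sampling.
        batch = random.sample(remaining, k=min(64, len(remaining)))
        train_supernet_on(batch)

        # Score every remaining candidate with the performance predictor.
        scores = {arch: predictor(arch) for arch in remaining}

        # Dynamic threshold: discard the lowest drop_ratio fraction this round.
        cutoff = sorted(scores.values())[int(drop_ratio * len(scores))]
        remaining = [a for a in remaining if scores[a] >= cutoff]
    return remaining


# Toy usage: 100 dummy architectures scored by a random stand-in "predictor".
if __name__ == "__main__":
    space = [f"arch_{i}" for i in range(100)]
    final = reduce_search_space(space, predictor=lambda a: random.random())
    print(len(final), "architectures remain")
```

In the paper, the predictor would be the recursive attention-based model operating on the cell's tensor-flow diagram rather than the random stand-in used above.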