The memristor-based neuromorphic computing system (NCS), with its emerging compute-in-memory architecture, has drawn extensive attention. Owing to its nonvolatility and programmability, the memristor is an ideal nano-device for realizing neural synapses in VLSI circuit implementations of neural networks. In hardware implementation, however, the performance of a memristive neural network is degraded by quantization error, writing error, and conductance drift, which seriously hinders its practical application. In this paper, a novel weight optimization scheme combining quantization and Bayesian inference is proposed to alleviate this problem. Specifically, the weight deviation in the memristive neural network is modeled as weight uncertainty in a Bayesian neural network, which makes the network insensitive to unexpected weight changes. A quantization regularization term is designed and applied during training of the Bayesian neural network, reducing the quantization error and improving the robustness of the network. Furthermore, a partial training method is proposed to extend the applicability of the scheme to large-scale neural networks. Finally, experiments on a Multilayer Perceptron and LeNet demonstrate that the proposed weight optimization scheme significantly enhances the robustness of memristive neural networks.
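The idea of a quantization regularization term can be illustrated with a minimal sketch. The abstract does not give the exact form of the paper's regularizer, so the squared-distance-to-nearest-level penalty below is an assumption chosen for illustration: it pushes each variational weight mean toward the closest representable conductance level, so that quantizing the trained weights onto those levels causes little change.

```python
def quantization_regularizer(weight_means, levels):
    """Hypothetical quantization penalty: the sum of squared distances
    from each weight mean to its nearest quantization level. Weights
    that already sit on a representable conductance level contribute 0,
    so minimizing this term clusters weights at the levels."""
    return sum(min((m - q) ** 2 for q in levels) for m in weight_means)

# Example: 4 evenly spaced conductance levels in [-1, 1],
# as a memristor with 4 programmable states might provide.
levels = [-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0]
weight_means = [0.9, -0.2, 0.35]
penalty = quantization_regularizer(weight_means, levels)

# In training, this term would be added to the Bayesian loss
# (e.g., the variational free energy) with a tunable coefficient:
#   total_loss = variational_loss + lam * penalty
```

In practice the penalty would be applied to the means of the variational weight posteriors, so the network is trained both to tolerate weight perturbations (via the Bayesian treatment) and to land near quantizable values.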