For scalable feature matching in large-scale web image search, bag-of-visual-words (BOW) approaches typically encode local features as visual words and organize them in an inverted index file so that features can be matched efficiently. Both popular feature coding techniques, i.e., K-means-based vector quantization and scalar quantization, quantize features directly to generate visual words. K-means-based vector quantization requires expensive visual codebook training, whereas scalar quantization misses many matches because the individual components of feature vectors are unstable. To address these issues, we demonstrate that the corresponding sub-vectors of similar features generally have similar distances to multiple reference points in the feature subspace, and we propose a multiple distance-based feature coding scheme for scalable feature matching. Specifically, based on the distances between the sub-vectors and multiple distinct reference points, we transform each feature into a set of feature codes: one code serves as the visual word used to construct the inverted index file, while the others are embedded into the index file to further verify the matches obtained via the visual words. Experimental results demonstrate the superiority of the proposed approach over approaches based on recent feature quantization methods for large-scale web image search.
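To make the coding scheme concrete, the following Python sketch illustrates the general idea under stated assumptions: a feature is split into sub-vectors, each code packs the quantized distances of those sub-vectors to one reference point, the first code acts as the visual word keying the inverted index, and the remaining codes are stored in the posting list for match verification. All concrete choices here (the number of sub-vectors, the uniform distance quantizer, the random reference points, the mismatch threshold) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import defaultdict

def encode(feature, refs, n_sub=4, n_bits=2, d_max=16.0):
    """Transform one feature into one code per reference point: each code
    packs the quantized distances of the feature's sub-vectors to that
    reference point (a hypothetical uniform quantizer, assumed here)."""
    subs = np.split(np.asarray(feature), n_sub)        # split into sub-vectors
    codes = []
    for r in refs:                                     # one reference point per code
        dists = [np.linalg.norm(s - r) for s in subs]  # distances in the subspace
        levels = [min(int(d / d_max * (1 << n_bits)), (1 << n_bits) - 1)
                  for d in dists]                      # quantize each distance
        code = 0
        for lv in levels:                              # pack levels into one integer
            code = (code << n_bits) | lv
        codes.append(code)
    return codes

index = defaultdict(list)  # inverted index: visual word -> posting list

def add_to_index(image_id, feature, refs):
    word, *verif = encode(feature, refs)   # first code = visual word (index key)
    index[word].append((image_id, verif))  # remaining codes embedded for verification

def query(feature, refs, max_mismatch=0):
    word, *verif = encode(feature, refs)
    # candidates share the visual word; the embedded codes verify the match
    return [img for img, v in index[word]
            if sum(a != b for a, b in zip(v, verif)) <= max_mismatch]

# Usage: 128-d features, 4 sub-vectors of 32 dimensions, 3 reference points
rng = np.random.default_rng(0)
refs = rng.normal(size=(3, 32))
feat = rng.normal(size=128)
add_to_index("img_001", feat, refs)
print(query(feat, refs))  # ['img_001']
```

Under this reading, verification costs only a few integer comparisons per candidate, which is what makes embedding the extra codes in the index file cheap relative to re-ranking with raw feature vectors.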