Abstract:
Object retrieval methods based on the bag-of-visual-words model suffer from several problems: low time efficiency, poorly discriminative visual words, and weak visual-semantic resolution caused by missing spatial information and quantization error. To address these problems, this article proposes an object retrieval method based on an enhanced dictionary and spatially constrained similarity measurement. First, E2LSH (exact Euclidean locality-sensitive hashing) is used to identify and eliminate noisy and near-duplicate key points, which improves both the efficiency of dictionary construction and the quality of the resulting visual words. Then, stop words are removed from the dictionary with a chi-square model (CSM) to improve the discriminative power of the visual dictionary. Finally, a spatially constrained similarity measurement is introduced to perform object retrieval, together with a robust re-ranking method that uses the K-nearest neighbors of the query to automatically refine the initial search results. Experimental results indicate that the quality of the visual dictionary is enhanced, the discriminative power of the visual-semantic representation is effectively improved, and object retrieval performance is substantially better than that of traditional methods.
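The abstract's first step relies on E2LSH, whose core idea is a p-stable projection hash h(v) = floor((a·v + b) / w): nearby descriptors collide in the same bucket with high probability, so near-duplicate and noisy key points can be grouped and filtered cheaply. The following is a minimal sketch of one such hash function; the function names, dimensions, and the bucket width w are illustrative choices, not the paper's actual configuration.

```python
import math
import random

def make_e2lsh_hash(dim, w=4.0, seed=0):
    """Build one E2LSH-style hash h(v) = floor((a.v + b) / w).

    a is drawn from a Gaussian (2-stable) distribution, so the collision
    probability of two points decreases with their Euclidean distance.
    Parameters here (dim, w) are illustrative, not the paper's setup.
    """
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # random projection vector
    b = rng.uniform(0.0, w)                        # random offset in [0, w)
    def h(v):
        return math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / w)
    return h

# Two nearly identical descriptors usually land in the same bucket,
# while a distant descriptor usually does not; in practice several such
# hash functions are concatenated and used over multiple tables.
h = make_e2lsh_hash(dim=4, seed=42)
p1 = [1.0, 2.0, 3.0, 4.0]
p2 = [1.01, 2.0, 3.0, 4.01]   # near-duplicate key point
p3 = [10.0, 0.0, 0.0, 0.0]    # distant key point
```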
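The second step, chi-square stop-word removal, can be sketched as scoring each visual word by how unevenly it occurs across image groups: a word that appears uniformly everywhere carries little discriminative information, like a stop word in text retrieval, and is pruned. The exact CSM formulation is not given in the abstract, so the statistic below (chi-square against a uniform expectation) and the threshold are assumptions for illustration.

```python
def chi_square_score(observed):
    """Chi-square statistic of a visual word's frequencies across image
    groups against a uniform expectation. A low score means the word is
    spread evenly (a stop-word candidate); a high score means it is
    concentrated in few groups and thus discriminative."""
    total = sum(observed)
    expected = total / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

def prune_stop_words(word_counts, threshold):
    """Keep only visual words whose chi-square score exceeds threshold.

    word_counts maps word id -> per-group occurrence counts; the
    threshold is a hypothetical tuning parameter."""
    return {w: counts for w, counts in word_counts.items()
            if chi_square_score(counts) > threshold}

# A word occurring evenly across three groups scores 0 and is pruned;
# a word concentrated in one group survives.
vocab = {"w_even": [10, 10, 10], "w_peaked": [30, 0, 0]}
kept = prune_stop_words(vocab, threshold=5.0)
```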