Automatic image annotation is a significant and challenging problem in pattern recognition and computer vision. Existing models cannot describe the visual representations of the corresponding keywords, which leads to many irrelevant words in the final annotation results: annotations that are not related to any part of the image's visual content. To overcome this problem, we propose a new automatic image annotation model based on relevant visual keywords (VKRAM). Our model divides keywords into two categories: abstract and non-abstract words. First, we establish visual keyword seeds for each non-abstract word and propose a new method to extract visual keyword collections from the corresponding seeds. Second, exploiting the characteristics of abstract words, we propose an algorithm based on subtraction regions to extract the visual keyword seeds and corresponding collections of each abstract word. Third, we propose an adaptive-parameter method and a fast solution algorithm to determine the similarity threshold of each keyword. Finally, these methods are combined to improve annotation performance. Experimental results on the Corel 5K dataset verify the effectiveness of the proposed image annotation model, which improves the annotation results under most evaluation measures.
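A minimal sketch of the collection-extraction step described above, assuming cosine similarity over region feature vectors and a hypothetical median-based adaptive threshold (the abstract does not specify VKRAM's actual similarity measure, feature representation, or threshold rule, so all names and parameters here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def adaptive_threshold(seed, regions, quantile=0.5):
    """Hypothetical adaptive similarity threshold for one keyword:
    the median similarity of all candidate regions to the seed."""
    sims = sorted(cosine(seed, r) for r in regions)
    return sims[int(len(sims) * quantile)]

def extract_collection(seed, regions, threshold):
    """Keep the regions whose similarity to the keyword's visual
    seed meets the keyword's similarity threshold."""
    return [r for r in regions if cosine(seed, r) >= threshold]

# Toy data: 3-D features for one keyword seed and candidate image regions.
seed = [0.9, 0.1, 0.0]
regions = [[0.8, 0.2, 0.0], [0.1, 0.9, 0.0],
           [0.85, 0.15, 0.05], [0.0, 0.0, 1.0]]

t = adaptive_threshold(seed, regions)
collection = extract_collection(seed, regions, t)
```

With this toy data, only the two regions visually close to the seed survive the threshold; a per-keyword threshold of this kind is one way to keep keywords that have no matching region out of the final annotation.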