Automatic Image Annotation Based on Homogeneous Textual-Visual Groups
Abstract
Purpose: The problem of automatic image annotation is not trivial. Training images often carry unbalanced and incomplete annotations, leading to a semantic gap between the visual features and the textual description of an image. Existing methods rely on computationally complex algorithms that optimize the visual features and annotate a new image using all training images and keywords, which can reduce accuracy. A compact visual descriptor should be developed, along with a method for choosing a group of the most informative training images for each test image. Results: A methodology for automatic image annotation is formulated, based on finding the a posteriori probability of a keyword's association with a visual image descriptor. Six global descriptors were combined into a single descriptor, whose size was then reduced to several hundred elements using principal component analysis. The experimental results showed an improvement of annotation precision by 7% and recall by 1%. Practical relevance: The compact visual descriptor and the automatic image annotation method based on the formation of homogeneous textual-visual groups can be used in Internet retrieval systems to improve image search quality.
Published
2016-04-21
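The descriptor pipeline outlined in the abstract, concatenating several global descriptors into one vector and compressing it with principal component analysis, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the descriptor sizes, the helper name, and the use of an SVD-based PCA are assumptions.

```python
import numpy as np

def build_compact_descriptor(descriptor_groups, n_components=200):
    """Concatenate per-image global descriptors and reduce them with PCA.

    descriptor_groups: list of (n_images, d_i) arrays, one per global
    descriptor type (six types in the paper; the sizes used below are
    hypothetical). Returns an (n_images, n_components) compact descriptor.
    """
    # Stack the global descriptors into one long vector per image.
    X = np.hstack(descriptor_groups)            # shape: (n_images, sum(d_i))
    X = X - X.mean(axis=0)                      # center the data before PCA
    # PCA via SVD: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    # Project each image onto the leading principal components.
    return X @ Vt[:n_components].T

# Toy usage: 50 images, six descriptor types of assumed dimensionalities.
rng = np.random.default_rng(0)
groups = [rng.normal(size=(50, d)) for d in (64, 128, 80, 96, 72, 60)]
compact = build_compact_descriptor(groups, n_components=40)
print(compact.shape)  # (50, 40)
```

In practice the projection matrix would be fitted once on the training images and reused for each test image, so that training and test descriptors live in the same reduced space.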
How to Cite
Proskurin, A., & Favorskaya, M. (2016). Automatic Image Annotation Based on Homogeneous Textual-Visual Groups. Information and Control Systems, (2), 11-18. https://doi.org/10.15217/issn1684-8853.2016.2.11
Section
Information processing and control