RESEARCH ARTICLE
Combining Textual and Visual Information for Image Retrieval in the Medical Domain
Yiannis Gkoufas*, Anna Morou*, Theodore Kalamboukis*
Article Information
Identifiers and Pagination:
Year: 2011
Volume: 5
Issue: Suppl 1
First Page: 50
Last Page: 57
Publisher Id: TOMINFOJ-5-50
DOI: 10.2174/1874431101105010050
Article History:
Received Date: 15/5/2011
Revision Received Date: 20/5/2011
Acceptance Date: 24/5/2011
Electronic publication date: 27/7/2011
Collection year: 2011
open-access license: This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.
Abstract
In this article we summarize the experience gained from our participation in the ImageCLEF evaluation task over the past two years. We explore linear combinations of the visual and textual sources of images for retrieval. From our experiments we conclude that a mixed retrieval technique, in which textual and visual retrieval are applied alternately and repeatedly, improves performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) increased from 0.01 to 0.15 and to 0.087 on the 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) was performed on the top 1000 results of textual retrieval based on natural language processing (NLP).
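To make the two-stage scheme concrete, the sketch below illustrates one way such a mixed retrieval pipeline could be wired together: a textual ranking over the whole collection, followed by content-based rescoring restricted to the top-k textual hits, with the two scores fused linearly. This is not the authors' implementation; the toy scoring functions (word overlap for text, histogram intersection for visual features), the fusion weight alpha, and the data layout are all illustrative assumptions.

```python
def text_score(query_terms, doc_terms):
    # Toy textual relevance: fraction of query terms found in the caption.
    # A real system would use an NLP-based retrieval model (e.g. TF-IDF/BM25).
    if not query_terms:
        return 0.0
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))


def visual_score(query_hist, doc_hist):
    # Toy visual similarity: histogram intersection of normalized feature vectors.
    return sum(min(q, d) for q, d in zip(query_hist, doc_hist))


def mixed_retrieval(query_terms, query_hist, collection, k=1000, alpha=0.5):
    """collection: list of (doc_id, caption_terms, feature_histogram) tuples."""
    # Stage 1: textual retrieval over the entire collection, keep the top k.
    text_ranked = sorted(
        collection,
        key=lambda d: text_score(query_terms, d[1]),
        reverse=True,
    )[:k]

    # Stage 2: CBIR restricted to the top-k textual results,
    # linearly combining the textual and visual scores.
    fused = [
        (doc_id,
         alpha * text_score(query_terms, terms)
         + (1 - alpha) * visual_score(query_hist, hist))
        for doc_id, terms, hist in text_ranked
    ]
    return sorted(fused, key=lambda pair: pair[1], reverse=True)
```

Restricting the visual stage to the top-k textual results is what keeps the approach scalable: the expensive image-to-image comparisons are performed on at most k documents rather than on the full collection.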