IEEE TCDL Bulletin
Volume 2, Issue 1, 2005

A Signal/Semantic Framework for Image Retrieval

Mohammed Belkhatir
MRIM-IMAG/CNRS
belkhatm@imag.fr

 

This article presents an approach that integrates perceptive signal features (i.e., color and texture) and semantic information within a unified architecture for image retrieval. It relies on an expressive knowledge representation formalism handling high-level image descriptions and on a full-text query framework. It consequently brings image retrieval closer to users' needs by translating low-level signal features into high-level data and coupling them with semantics within index and query structures. Visual semantics and signal features are integrated within a multi-facetted image model consisting of:

  • A first facet, the object facet, describes an image as a set of image objects (IOs): abstract structures representing visual entities within an image. For instance, in Figure 1, three IOs are highlighted: Io1, Io2 and Io3.
  • A second facet, the visual semantics facet, characterizes the image semantic content by labeling each IO with a semantic concept. For instance, in Figure 1, the second IO (Io2) is tagged with the semantic concept Hut.
  • A third facet, the signal facet, describes the image signal content in terms of symbolic perceptive features by characterizing IOs with signal concepts.

The third facet is itself divided into two sub-facets. The color sub-facet describes the image signal content in terms of symbolic colors: in Figure 1, the first IO (Io1) is associated with the symbolic colors Cyan and White. The texture sub-facet describes the signal content in terms of symbolic textures: in Figure 1, the second IO (Io2) is associated with the symbolic texture Lined.
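To make the model concrete, the three facets can be read as a small data structure. The following is a minimal sketch in Python; the class and field names are ours rather than the SIR prototype's, and the semantic labels of Io1 and Io3, which the text does not give, are invented placeholders.

    from dataclasses import dataclass, field

    @dataclass
    class ImageObject:
        """One IO: a visual entity carrying semantic and signal descriptions."""
        oid: str                                    # object facet, e.g. "Io2"
        semantic_concept: str                       # visual semantics facet, e.g. "Hut"
        colors: set = field(default_factory=set)    # color sub-facet, e.g. {"Cyan", "White"}
        textures: set = field(default_factory=set)  # texture sub-facet, e.g. {"Lined"}

    @dataclass
    class ImageDescription:
        """Multi-facetted description of an image as a set of IOs."""
        image_id: str
        objects: list = field(default_factory=list)

    # The Figure 1 example encoded in this model (the semantic labels of
    # Io1 and Io3 are placeholders, as the text only names Hut for Io2):
    figure1 = ImageDescription("figure1", [
        ImageObject("Io1", "Water", colors={"Cyan", "White"}),
        ImageObject("Io2", "Hut", textures={"Lined"}),
        ImageObject("Io3", "Vegetation"),
    ])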

To instantiate this model within an image retrieval framework, we choose conceptual graphs (CGs), which have proven well suited to the symbolic approach to image retrieval. CGs are an expressive graph-based representation formalism that allows us to represent the components of our image retrieval architecture and to specify expressive index and query structures.
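As an illustration, an index or query graph can be flattened to a set of (concept, relation, concept) edges. The sketch below encodes the Figure 1 example this way; the relation names has_semantics, has_color and has_texture are assumptions for illustration, not the article's exact CG vocabulary.

    # Image index graph of Figure 1, flattened to (node, relation, node)
    # triples; relation names are illustrative, not the article's own.
    index_graph = {
        ("Io1", "has_color", "Cyan"),
        ("Io1", "has_color", "White"),
        ("Io2", "has_semantics", "Hut"),
        ("Io2", "has_texture", "Lined"),
    }

    # The full-text query "Find images with lined huts", translated into
    # an image query graph over a single unnamed IO:
    query_graph = {
        ("Io", "has_semantics", "Hut"),
        ("Io", "has_texture", "Lined"),
    }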

[Figure 1]

Every image in the corpus has a conceptual index representation in terms of CGs, called the image index graph, built with respect to the multi-facetted image description. As far as the query module is concerned, a user's full-text query is translated into a conceptual image representation, the image query graph, also elaborated with respect to the multi-facetted image characterization. In Figure 1, the query "Find images with lined huts", featuring semantic and texture characterizations, is translated into an image query graph that is compared to the index graphs of all image documents in the corpus. During the retrieval process, lattices organizing semantic, color and texture concepts are processed (we propose in Figure 2 the lattice that orders semantic concepts with respect to a specific/generic partial order), and a relevance value, estimating the degree of similarity between image query and index graphs, is computed in order to rank all image documents relevant to a query.
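A hedged sketch of this matching step follows, reusing the edge sets above. Both the toy lattice and the similarity measure are assumptions: the article reproduces neither the full hierarchy of Figure 2 nor its actual relevance function.

    # Toy specific/generic lattice over semantic concepts; these edges are
    # invented stand-ins for the hierarchy of Figure 2.
    PARENTS = {
        "Hut": {"Construction"},
        "House": {"Construction"},
        "Construction": {"Entity"},
    }

    def subsumes(general, specific):
        """True if `general` equals or generalizes `specific` in the lattice."""
        if general == specific:
            return True
        return any(subsumes(general, p) for p in PARENTS.get(specific, ()))

    def relevance(query_graph, index_graph):
        """Fraction of query edges matched by some index edge on the same
        relation, modulo concept subsumption. A real CG projection would
        also bind IO nodes consistently; this sketch ignores node identity."""
        def matched(q):
            return any(q[1] == i[1] and subsumes(q[2], i[2]) for i in index_graph)
        return sum(map(matched, query_graph)) / len(query_graph)

    # relevance(query_graph, index_graph) == 1.0 for the Figure 1 example.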

[Figure 2]

The SIR prototype implements the theoretical framework, and validation experiments are carried out on a corpus of 2,500 personal color photographs. Figure 3 presents the retrieval results for a query featuring semantic and color characterizations: "Find images with swimming-pool water (mostly cyan)".

[Figure 3]

 

© Copyright 2005 Mohammed Belkhatir
Some or all of these materials were previously published in the Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries, ACM 1-58113-876-8/05/0006.
