A Signal/Semantic Framework for Image Retrieval
This article presents an approach that integrates perceptual signal features (i.e., color and texture) and semantic information within a unified architecture for image retrieval. It relies on an expressive knowledge representation formalism that handles high-level image descriptions, coupled with a full-text query framework. It consequently brings image retrieval closer to users' needs by translating low-level signal features into high-level data and coupling them with semantics within index and query structures. Visual semantics and signal features are integrated within a multi-facetted image model comprising several facets.
The third facet is itself divided into two sub-facets. The color sub-facet describes the image signal content in terms of symbolic colors: in Figure 1, the first IO (Io1) is associated with the symbolic colors Cyan and White. The texture sub-facet describes the signal content in terms of symbolic textures: in Figure 1, the second IO (Io2) is associated with the symbolic texture Lined.
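To make the multi-facetted description concrete, the following sketch models an image object (IO) carrying semantic, color, and texture descriptions. The class and field names are illustrative assumptions; the article does not specify a data layout.

```python
from dataclasses import dataclass, field

@dataclass
class ImageObject:
    """A segmented image object (IO) and its facet descriptions.

    Field names are hypothetical; the article only states that IOs
    are associated with semantic concepts, symbolic colors (color
    sub-facet), and symbolic textures (texture sub-facet).
    """
    identifier: str
    semantic_concepts: list = field(default_factory=list)
    symbolic_colors: list = field(default_factory=list)   # color sub-facet
    symbolic_textures: list = field(default_factory=list) # texture sub-facet

# The two IOs described for Figure 1:
io1 = ImageObject("Io1", symbolic_colors=["Cyan", "White"])
io2 = ImageObject("Io2", symbolic_textures=["Lined"])
```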
To instantiate this model within an image retrieval framework, we choose conceptual graphs (CGs), which have proven well suited to the symbolic approach to image retrieval. CGs are an expressive graph-based representation formalism that allows us to represent the components of our image retrieval architecture and to specify expressive index and query structures.
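A conceptual graph is a bipartite structure of concept nodes (a type with an optional referent) linked by relation nodes. The minimal sketch below, with illustrative names, builds a CG fragment for the second IO of Figure 1; it is an assumption about structure only, not the article's implementation.

```python
class Concept:
    """A CG concept node: a type label and an optional referent."""
    def __init__(self, ctype, referent=None):
        self.ctype = ctype
        self.referent = referent

class Relation:
    """A CG relation node linking an ordered tuple of concepts."""
    def __init__(self, rtype, args):
        self.rtype = rtype
        self.args = args

class ConceptualGraph:
    def __init__(self):
        self.concepts = []
        self.relations = []

    def add_concept(self, concept):
        self.concepts.append(concept)
        return concept

    def add_relation(self, rtype, *args):
        self.relations.append(Relation(rtype, args))

# Index-graph fragment for Io2 in Figure 1: an image object
# bearing the symbolic texture Lined.
g = ConceptualGraph()
io = g.add_concept(Concept("ImageObject", "Io2"))
tex = g.add_concept(Concept("Texture", "Lined"))
g.add_relation("has_texture", io, tex)
```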
Every image in the corpus has a conceptual index representation in terms of CGs, called the image index graph, built with respect to the multi-facetted image description. As far as the query module is concerned, a user's full-text query is translated into an image conceptual representation, also elaborated with respect to the multi-facetted image characterization: the image query graph. In Figure 1, the query "Find images with lined huts", featuring semantic and texture characterizations, is translated into an image query graph that is then compared to the index graphs of all image documents in the corpus. During the retrieval process, lattices organizing semantic, color, and texture concepts are processed (Figure 2 shows the lattice that orders semantic concepts with respect to a specific/generic partial order), and a relevance value, estimating the degree of similarity between the image query and index graphs, is computed to rank all image documents relevant to a query.
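The lattice-driven matching step can be sketched as follows: a query concept matches an index concept when the index concept is equal to, or more specific than, the query concept in the concept lattice. The toy lattice and the fraction-of-matched-concepts score below are assumptions for illustration; the article does not give the concrete relevance formula.

```python
# Hypothetical fragment of the semantic-concept lattice of Figure 2,
# stored as a child -> parent (specific -> generic) map.
LATTICE = {
    "Hut": "Building",
    "Building": "Thing",
    "Water": "Thing",
}

def subsumes(general, specific):
    """True if `general` is equal to or an ancestor of `specific`."""
    node = specific
    while node is not None:
        if node == general:
            return True
        node = LATTICE.get(node)
    return False

def relevance(query_concepts, index_concepts):
    """Illustrative score: fraction of query concepts subsumed by
    at least one concept of the index graph."""
    if not query_concepts:
        return 0.0
    hits = sum(
        any(subsumes(q, i) for i in index_concepts)
        for q in query_concepts
    )
    return hits / len(query_concepts)

# An index graph describing a Hut satisfies a more generic query
# concept (Building) via the specific/generic partial order.
score = relevance(["Building"], ["Hut"])
```

In this sketch `relevance(["Building"], ["Hut"])` evaluates to 1.0, since Hut is ordered below Building in the lattice.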
The SIR prototype implements the theoretical framework, and validation experiments are carried out on a corpus of 2,500 personal color photographs. Figure 3 shows the retrieval results for a query featuring semantic and color characterizations: "Find images with swimming-pool water (mostly cyan)".
© Copyright 2005 Mohammed Belkhatir