Automatic model-based semantic object extraction algorithm
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2001-10, Vol. 11 (10), p. 1073-1084 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Automatic image segmentation and object extraction play an important role in supporting content-based image coding, indexing, and retrieval. However, the low-level homogeneity criteria (such as color, texture, and intensity) used for segmentation do not lead to semantic objects directly, because a semantic object can contain entirely different gray levels, colors, or textures. We propose an automatic model-based semantic object extraction algorithm that integrates object seeds with their region constraint graphs (perceptual models). Images are first partitioned into a set of homogeneous regions with accurate boundaries by integrating the results of similarity-based region growing and edge detection. We propose a 1-D fast entropic thresholding technique that automatically determines the thresholds used in region growing and edge detection. The object seeds, which are the intuitive and representative parts of semantic objects, are then distinguished among these homogeneous image regions. A seeded region aggregation procedure merges the regions adjacent to a detected object seed into a semantic object according to the object's perceptual model. We focus on semantic human object generation, taking faces as object seeds and using a ratio-based perceptual model. (Sketches of the thresholding and aggregation steps appear below the record.) |
ISSN: | 1051-8215; 1558-2205 |
DOI: | 10.1109/76.954494 |
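
The abstract's 1-D fast entropic thresholding step can be illustrated with a maximum-entropy (Kapur-style) threshold over a gray-level histogram. The sketch below is a minimal version under that assumption: the paper's exact "fast" formulation may differ, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def entropic_threshold(gray: np.ndarray, levels: int = 256) -> int:
    """Return the gray level t maximizing the summed entropies of the two
    classes {0..t} and {t+1..levels-1} induced by the image histogram."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                 # normalized histogram
    P = np.cumsum(p)                      # cumulative mass of class {0..t}
    # Precompute cumulative sums of p*log(p) so every candidate threshold
    # is scored in O(1) -- one linear pass instead of O(levels^2) work.
    plogp = np.zeros_like(p)
    nz = p > 0
    plogp[nz] = p[nz] * np.log(p[nz])
    S = np.cumsum(plogp)

    best_t, best_H = 0, -np.inf
    for t in range(levels - 1):
        w0, w1 = P[t], 1.0 - P[t]
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        H0 = np.log(w0) - S[t] / w0              # entropy of class {0..t}
        H1 = np.log(w1) - (S[-1] - S[t]) / w1    # entropy of class {t+1..}
        if H0 + H1 > best_H:
            best_H, best_t = H0 + H1, t
    return best_t
```

For a grayscale `uint8` image `img`, `entropic_threshold(img)` returns the gray level that maximizes the summed class entropies; in the paper's pipeline, thresholds of this kind are what let the region-growing and edge-detection stages run without manual tuning.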
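The seeded region aggregation step can likewise be sketched as a breadth-first merge over a region adjacency graph, where a bounding box derived from the face seed by fixed width/height ratios stands in for the ratio-based perceptual model. The ratios, data layout, and helper names below are assumptions for illustration, not the paper's actual model.

```python
from collections import deque

def aggregate_object(rag, regions, seed_id, w_ratio=3.0, h_ratio=7.0):
    """Merge regions adjacent to the seed into one semantic object.

    rag: {region_id: set of neighbor region_ids} (region adjacency graph)
    regions: {region_id: (x0, y0, x1, y1)} bounding boxes
    seed_id: id of the detected face region
    """
    fx0, fy0, fx1, fy1 = regions[seed_id]          # face bounding box
    fw, fh = fx1 - fx0, fy1 - fy0
    # Allowed extent of the whole human object, anchored at the face:
    # w_ratio face-widths wide, h_ratio face-heights tall from the face top.
    cx = (fx0 + fx1) / 2.0
    ox0, ox1 = cx - w_ratio * fw / 2.0, cx + w_ratio * fw / 2.0
    oy0, oy1 = fy0, fy0 + h_ratio * fh

    def inside(box):
        x0, y0, x1, y1 = box
        return x0 >= ox0 and x1 <= ox1 and y0 >= oy0 and y1 <= oy1

    obj, frontier = {seed_id}, deque([seed_id])
    while frontier:                                # breadth-first merge
        rid = frontier.popleft()
        for nb in rag.get(rid, ()):
            if nb not in obj and inside(regions[nb]):
                obj.add(nb)
                frontier.append(nb)
    return obj
```

A design note: constraining merges with a box anchored at the seed keeps the aggregation local, and each region is visited at most once, so the loop terminates regardless of merge order.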