Using CANNOT framework to generate video based applications
Saved in:
Main authors: | , , |
---|---|
Format: | Conference proceeding |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Content annotation is motivated by the enormous quantity of digital data produced daily. Because autonomously understanding video content is an open research problem, annotations usually complement video data with descriptors that provide a synthetic representation of their content. The annotation process generates high-level metadata that form the basis for organizing video repositories and later enable content-oriented video access. This paper presents a framework, called CANNOT (Coyote annotation), for supporting the video annotation process. Some real applications developed using the proposed framework are also presented. |
---|---|
DOI: | 10.1109/LAWEB.2005.45 |
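
To illustrate the kind of descriptor-based metadata the summary refers to, the sketch below models a hypothetical annotation record for a video segment and a simple content-oriented lookup over a small repository. It is not taken from the paper or from the CANNOT framework; all names (`VideoAnnotation`, `find_segments`, the example repository) are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical descriptor: high-level metadata attached to a time interval
# of a video, as produced by a manual or semi-automatic annotation step.
@dataclass
class VideoAnnotation:
    video_id: str          # identifier of the annotated video in the repository
    start_s: float         # segment start, in seconds
    end_s: float           # segment end, in seconds
    labels: List[str] = field(default_factory=list)  # content descriptors
    free_text: str = ""    # optional free-form description

def find_segments(repository: List[VideoAnnotation], label: str) -> List[VideoAnnotation]:
    """Content-oriented access: return all annotated segments carrying a given label."""
    return [a for a in repository if label in a.labels]

# Example usage with a tiny in-memory repository of annotations.
repo = [
    VideoAnnotation("lecture-01", 0.0, 95.5, labels=["introduction", "slides"]),
    VideoAnnotation("lecture-01", 95.5, 600.0, labels=["demo", "coding"]),
    VideoAnnotation("match-42", 1200.0, 1215.0, labels=["goal", "replay"]),
]

for seg in find_segments(repo, "demo"):
    print(f"{seg.video_id}: {seg.start_s:.1f}-{seg.end_s:.1f}s {seg.labels}")
```

In this reading, the annotations are stored separately from the raw video and queried by label, which is one plausible way such metadata could organize a repository and support content-oriented access as described in the summary.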