Altadmri, Amjad and Ahmed, Amr (2009) Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval. In: The 13th IASTED International Conference on Artificial Intelligence and Soft Computing, September 7-9, 2009, Palma de Mallorca, Spain. (KAR id:45944)
Abstract
The rapidly increasing number of video collections, especially on the web, has motivated the need for intelligent automated annotation tools for searching, rating, indexing and retrieval purposes. These collections contain all types of manually annotated videos. Because this annotation is usually incomplete, uncertain and contains misspelled words, a keyword search typically retrieves only a portion of the videos that actually contain the desired meaning. Hence, this annotation needs filtering, expansion and validation for better indexing and retrieval.
In this paper, we present a novel framework for video annotation enhancement, based on merging two widely known commonsense knowledgebases, namely WordNet and ConceptNet. In addition, a comparison between these knowledgebases in the video annotation domain is presented.
Experiments were performed on random wide-domain video clips from the vimeo.com website. Results show that searching for a video over tags enhanced by our proposed framework outperforms searching over the original tags. Moreover, the annotation enhanced by our framework outperforms that enhanced by either WordNet or ConceptNet individually, in terms of tag enrichment ability, concept diversity and, most importantly, retrieval performance.
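To illustrate the general idea of enriching video tags with related terms from the two knowledgebases, the sketch below expands a tag set using WordNet (via NLTK) and the public ConceptNet web API, then merges the results. This is a minimal, hypothetical sketch under assumed names and a naive union-based merging strategy; it is not the authors' published framework or implementation.

```python
# Hypothetical sketch: expand video tags with WordNet and ConceptNet terms.
# The merging strategy (simple set union) is an assumption for illustration.
import requests                        # client for the public ConceptNet API
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def wordnet_related(tag):
    """Collect synonyms and hypernym lemmas of a tag from WordNet."""
    related = set()
    for synset in wn.synsets(tag):
        related.update(l.name().replace("_", " ") for l in synset.lemmas())
        for hyper in synset.hypernyms():
            related.update(l.name().replace("_", " ") for l in hyper.lemmas())
    return related

def conceptnet_related(tag, limit=20):
    """Collect English concepts linked to the tag in ConceptNet."""
    url = "http://api.conceptnet.io/c/en/" + tag.lower()
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    related = set()
    for edge in edges:
        for node in (edge["start"], edge["end"]):
            if node.get("language") == "en":
                related.add(node["label"])
    return related

def enhance_tags(tags):
    """Merge the original tags with terms suggested by both knowledgebases."""
    enhanced = set(tags)
    for tag in tags:
        enhanced |= wordnet_related(tag)
        enhanced |= conceptnet_related(tag)
    return enhanced

# Example: expand the manual tags of a clip before indexing it.
print(enhance_tags(["dog", "beach"]))
```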