Run Run Shaw Library, City University of Hong Kong

Please use this identifier to cite or link to this item: http://dspace.cityu.edu.hk/handle/2031/3658
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lam, Sen
dc.date.accessioned | 2006-10-19T04:17:45Z
dc.date.accessioned | 2017-09-19T08:51:16Z
dc.date.accessioned | 2019-02-12T06:53:28Z | -
dc.date.available | 2006-10-19T04:17:45Z
dc.date.available | 2017-09-19T08:51:16Z
dc.date.available | 2019-02-12T06:53:28Z | -
dc.date.issued | 2006
dc.identifier.other | 2006csls811
dc.identifier.uri | http://144.214.8.231/handle/2031/3658 | -
dc.description.abstract | Current content-based video retrieval systems can be regarded as an extension of image retrieval and audio retrieval systems. However, the accuracy and convenience of the technologies used for image or audio retrieval may not satisfy users well when applied to video retrieval. This project investigates several important issues in story-based retrieval over a large broadcast video corpus. First, a text-based retrieval feature is built on the ASR transcripts of the videos with a word-weighting mechanism, and the performance of text-based retrieval of video clips is evaluated. Second, the evaluation is extended to a content-based retrieval feature that allows searching for clips with a series of images. The text-based and keyframe-based features are then combined to see whether precision can be improved. Finally, an implicit approach that retrieves stories without clip segmentation is introduced to exploit the inherent matching relationship between a given query and the videos; it is a new way to handle the retrieval of video clips. The technique used for keyframe-based retrieval is bipartite graph matching, namely Maximum Matching (MM) and Optimal Matching (OM). Experimental results show that text-based retrieval performs extremely well on long stories that may have a series of follow-up news items. On the other hand, with the help of keyframe information, the precision at certain points is raised; it compensates for misleading or poorly chosen keywords when locating specific scenes or chapters within a long story. Besides, retrieval without clip boundaries is also possible and gives encouraging results: while accurate clip segmentation usually requires additional semantic understanding and custom adjustment, the new approach eases the automation of clip indexing. To demonstrate these ideas, a sophisticated user interface is built to host five months (over a hundred hours) of CNN and ABC news video, with user-friendly query inputs for text and images. Specific work performed includes the development of the user interface, multimedia data indexing, image processing, and the retrieval approaches. | en
dc.format.extent | 163 bytes
dc.format.mimetype | text/html
dc.rights | This work is protected by copyright. Reproduction or distribution of the work in any format is prohibited without written permission of the copyright owner.
dc.rights | Access is restricted to CityU users.
dc.title | Searching news clips in large video archive | en
dc.contributor.department | Department of Computer Science | en
dc.description.supervisor | Ngo, C W. First Reader: Wang, Philips. Second Reader: Jia, Xiaohua | en
Appears in Collections: Computer Science - Undergraduate Final Year Projects

Files in This Item:
File | Size | Format
fulltext.html | 163 B | HTML
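
Illustrative note: the abstract above describes two retrieval components, word-weighted text search over ASR transcripts and keyframe-based retrieval via bipartite graph matching (Maximum/Optimal Matching), with the two kinds of evidence fused to improve precision. The sketch below is only a minimal illustration of that general idea, not the project's implementation: the TF-IDF weighting, cosine similarity, the Hungarian solver standing in for Optimal Matching, the keyframe feature vectors, and the fusion weight alpha are all assumptions made here for the example.

# Minimal sketch (illustrative only): fusing text-based and keyframe-based
# relevance scores for news-clip retrieval.
# Assumptions: ASR transcripts are plain strings, keyframes are feature
# vectors (e.g. colour histograms), and Optimal Matching is approximated
# with SciPy's Hungarian-algorithm solver (linear_sum_assignment).

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def text_scores(query, transcripts):
    """TF-IDF word weighting over ASR transcripts; returns one score per clip."""
    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(transcripts)
    query_vec = vec.transform([query])
    return cosine_similarity(query_vec, doc_matrix).ravel()


def keyframe_score(query_frames, clip_frames):
    """Maximum-weight bipartite matching between the query's example images
    and a clip's keyframes, both given as feature vectors; the mean
    similarity of the matched pairs is used as the clip score."""
    sim = cosine_similarity(query_frames, clip_frames)   # bipartite edge weights
    rows, cols = linear_sum_assignment(-sim)              # maximise total weight
    return sim[rows, cols].mean()


def fused_ranking(query_text, query_frames, clips, alpha=0.6):
    """Linear fusion of text and keyframe evidence (alpha is a free weight)."""
    t = text_scores(query_text, [c["transcript"] for c in clips])
    k = np.array([keyframe_score(query_frames, c["keyframes"]) for c in clips])
    fused = alpha * t + (1.0 - alpha) * k
    return np.argsort(-fused)                             # best-scoring clips first


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clips = [
        {"transcript": "hurricane damage along the florida coast",
         "keyframes": rng.random((5, 32))},
        {"transcript": "election results announced in the senate race",
         "keyframes": rng.random((4, 32))},
    ]
    query_frames = rng.random((2, 32))
    print(fused_ranking("florida hurricane", query_frames, clips))

Linear fusion is simply the most transparent way to combine the two scores in a sketch; the project itself may weight, normalise, or combine the text and keyframe evidence differently.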