Research on Video Summarization Based on Background Modeling and Attribute Learning (基于背景建模和屬性學(xué)習(xí)的視頻摘要研究)
[Abstract]: With the spread of high-definition camera equipment, the rise of the Internet of Things, and the Safe City and Smart City initiatives, surveillance cameras are now deployed in every corner of the city. Monitoring equipment plays an important role in fighting crime and maintaining social stability, but the huge volume of video data poses severe challenges for storage, archiving, and retrieval. Traditional approaches of direct storage and manual retrieval can no longer cope with large-scale video processing, so mass video storage and retrieval has become a research hotspot at home and abroad. This thesis addresses these two problems. After surveying a large body of domestic and foreign literature, we analyze the state of the art in video storage and retrieval. The main difficulties of current research are: how to detect foreground objects accurately and without omission; how to perform multi-concept detection on the detected foreground; and how to cross the semantic gap when classifying and describing multi-concept objects.

On this basis, this thesis proposes a video summarization method based on background modeling and attribute learning. An improved ViBe algorithm models the background of the video sequence, frames without foreground objects are discarded, and the remaining frames are saved to form a condensed video, which relieves the storage pressure caused by video files. After the foreground objects are obtained, attribute classifiers are built: attribute learning detects the concepts present in each foreground object, the objects are annotated with attribute labels, and the video summary is generated on top of the condensed video. The main contents of this thesis are as follows.

(1) Video background modeling and condensation based on improved ViBe. After studying and comparing background modeling algorithms, ViBe is selected because it is faster and uses less memory than other mainstream methods. The original ViBe still suffers from noise and flickering pixels in real surveillance scenes and introduces ghost regions during initialization, so three improvements are proposed: flicker removal based on a per-pixel count threshold, morphology-based noise removal, and ghost region detection and suppression. The improved ViBe is implemented, verified experimentally, and applied to foreground extraction and video condensation: the background is modeled first to obtain foreground objects, then frames containing no foreground are dropped to remove redundancy in the time dimension and condense the video. An illustrative sketch of this condensation step is given below.
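The following Python sketch illustrates the general idea of ViBe-style background modeling followed by frame-level condensation. It is not the thesis's improved algorithm: the count-threshold flicker removal and ghost suppression are reduced to a simple morphological clean-up, the neighbour-propagation update is omitted, and all parameter values (sample count, matching radius, the foreground-pixel ratio used to keep a frame) are illustrative assumptions.

```python
"""Minimal sketch of ViBe-style background modelling and video condensation."""
import numpy as np
import cv2  # used only for video I/O and morphology

N, R, MIN_MATCHES, PHI = 20, 20, 2, 16   # classic ViBe defaults (assumed)
FG_PIXEL_RATIO = 0.002                   # assumed "frame worth keeping" threshold


def init_model(gray):
    """Build N background samples per pixel from the first frame plus noise."""
    samples = np.repeat(gray[None].astype(np.int16), N, axis=0)
    samples += np.random.randint(-10, 11, samples.shape).astype(np.int16)
    return np.clip(samples, 0, 255)


def segment_and_update(samples, gray):
    """Return a binary foreground mask and refresh background samples in place."""
    dist = np.abs(samples - gray.astype(np.int16))
    fg = (dist < R).sum(axis=0) < MIN_MATCHES
    # Conservative update: only background pixels may overwrite one random
    # sample, each with probability 1/PHI (neighbour update omitted here).
    upd = (~fg) & (np.random.randint(0, PHI, gray.shape) == 0)
    ys, xs = np.nonzero(upd)
    samples[np.random.randint(0, N, ys.shape), ys, xs] = gray[ys, xs]
    return (fg * 255).astype(np.uint8)


def condense(src_path, dst_path):
    """Keep only frames whose cleaned foreground mask is non-trivial."""
    cap = cv2.VideoCapture(src_path)
    ok, frame = cap.read()
    if not ok:
        return
    h, w = frame.shape[:2]
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             cap.get(cv2.CAP_PROP_FPS) or 25.0, (w, h))
    samples = init_model(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = segment_and_update(samples, gray)
        # Opening then closing stands in for the thesis's noise-removal step:
        # it suppresses isolated flicker pixels and fills small holes.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        if np.count_nonzero(mask) > FG_PIXEL_RATIO * h * w:
            writer.write(frame)              # frame contains a foreground object
        ok, frame = cap.read()
    cap.release()
    writer.release()
```

Dropping frames whose mask falls below the threshold removes temporal redundancy while keeping every frame in which a foreground object appears, which is the condensation behaviour the abstract describes.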
(2) Multi-concept detection and summarization of the foreground based on multi-kernel attribute learning. Multi-kernel learning is introduced into the direct attribute prediction (DAP) framework, an optimization method for the kernel weight vector is given, and the resulting model is applied to video object classification. The model's multi-concept classification and attribute description abilities are then used to detect the concepts present in surveillance-video foregrounds, the detected objects are tagged with attribute labels, and the video summary is generated. A comparative experiment verifies the effectiveness of the proposed method (an illustrative sketch is given after this list).

(3) On the basis of the two points above, a prototype video summarization system based on background modeling and attribute learning is built using object-oriented software engineering methods. The system comprises a video condensation module, an attribute prediction model training module, and a video summarization module. It runs well and achieves the goals of this study.
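As an illustration of the multi-kernel direct attribute prediction idea in contribution (2), the sketch below trains one probabilistic SVM per attribute on a precomputed combined kernel and classifies a test object by matching its predicted attribute signature against a class-attribute matrix. The kernel weights are kept fixed here (the thesis optimizes them), and all names, parameters, and the scikit-learn-based implementation are assumptions for illustration, not the thesis's actual code.

```python
"""Minimal sketch of DAP-style attribute prediction with a combined kernel."""
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel


def combined_kernel(Xa, Xb, gammas, weights):
    """Weighted sum of RBF base kernels: K = sum_m w_m * K_m."""
    return sum(w * rbf_kernel(Xa, Xb, gamma=g) for g, w in zip(gammas, weights))


def train_attribute_classifiers(X, class_ids, class_attr, gammas, weights):
    """One probabilistic SVM per attribute, all sharing the combined kernel.

    class_attr is a binary (n_classes, n_attributes) matrix; each attribute
    column must contain both positive and negative training samples.
    """
    K_train = combined_kernel(X, X, gammas, weights)
    attr_labels = class_attr[class_ids]          # (n_samples, n_attributes)
    clfs = []
    for a in range(class_attr.shape[1]):
        clf = SVC(kernel="precomputed", probability=True)
        clf.fit(K_train, attr_labels[:, a])
        clfs.append(clf)
    return clfs


def predict_class(x, X_train, clfs, class_attr, gammas, weights):
    """DAP inference: pick the class whose attribute signature best matches."""
    K_test = combined_kernel(x[None], X_train, gammas, weights)
    p_attr = np.array([clf.predict_proba(K_test)[0, 1] for clf in clfs])
    # Score of each class = product over attributes of p or (1 - p),
    # depending on whether the class is supposed to have that attribute.
    scores = np.prod(np.where(class_attr == 1, p_attr, 1.0 - p_attr), axis=1)
    present = [a for a, p in enumerate(p_attr) if p > 0.5]
    return int(np.argmax(scores)), present   # class id plus attribute tags
```

The returned attribute tags are the kind of labels that, in the thesis's pipeline, would be attached to each detected foreground object when the video summary is generated on top of the condensed video.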
[Degree-granting institution]: Jiangsu University
[Degree level]: Master's
[Year conferred]: 2017
[Classification number]: TP391.41