

Research on Human Behavior Recognition Based on Convolutional Neural Networks

Published: 2018-10-16 12:42
[Abstract]: In recent years, the spread of high-definition video equipment has driven rapid progress in behavior-recognition-based artificial intelligence in areas such as smart safe cities, smart homes, and military security. Broad application prospects and economic value have made behavior analysis and recognition a research hotspot in computer vision. Traditional behavior-recognition algorithms typically proceed in three steps: moving-foreground detection, feature extraction, and training/recognition. Although their recognition rates are acceptable, they lack robustness and require substantial manual effort. Moreover, occlusion between targets, complex and varied backgrounds, and unconstrained camera angles in real scenes make traditional methods unreliable or even unusable. This thesis uses convolutional neural networks (CNNs) to address these shortcomings, improving robustness while raising recognition accuracy as far as possible. To overcome the inability of background subtraction and inter-frame differencing to extract a complete foreground when motion amplitude is small, the thesis proposes a human-silhouette extraction algorithm based on difference-of-Gaussian (DoG) images. The method subtracts two images at adjacent Gaussian scales to construct a difference image containing human contour information, then applies binarization and morphological processing to obtain a rough silhouette image; in a second step, each row of the rough silhouette region is scanned against a threshold, and operations such as morphological closing then yield a complete, accurate silhouette image.
To fuse the temporal information of the image sequence, the silhouette images within a period are accumulated into a two-dimensional feature map, which is fed to a CNN for training and recognition. After network tuning and five-fold cross-validation, this framework reaches an average accuracy of 85.3% on the public KTH dataset, demonstrating its feasibility. To handle video data better, researchers have extended convolutional networks to three dimensions. Experiments with a 3D CNN show that the feature combination "optical flow image + frame-difference image + three-frame-difference image" gives the best recognition results, reaching an average accuracy of 92.0% on KTH after network tuning and five-fold cross-validation. Further, by analyzing the class distribution of the KTH dataset against the corresponding per-class accuracies, the thesis proposes three improvements (secondary training, an oversampling strategy, and dataset expansion) that both confirm that class imbalance affects the experimental results and raise the recognition rate. The three improvements reach average accuracies of 93.5%, 92.8%, and 94.7% respectively, offering a remedy for classification on small or imbalanced datasets. In addition, the 3D CNN approach reduces the feature-extraction workload while improving robustness, alleviating the problems of traditional recognition methods.
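The DoG silhouette step described above can be sketched as follows. This is a minimal NumPy-only illustration, not the thesis's implementation: the sigma values and threshold are illustrative, and the morphological processing is approximated by the row-wise scan-and-fill of the second step.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure NumPy (edge-replicated borders)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img.astype(float), radius, mode="edge")
    # Filter rows, then columns, then crop the padding back off.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
    return out[radius:-radius, radius:-radius]

def dog_silhouette(frame, sigma1=1.0, sigma2=2.0, thresh=10):
    """Rough human silhouette from a difference-of-Gaussian image.

    Two images at adjacent Gaussian scales are subtracted; the result keeps
    contour (high-frequency) information, which is binarized. The second
    step scans each row of the rough mask and fills between its outermost
    silhouette pixels, closing interior holes (the thesis additionally
    applies morphological closing, omitted in this sketch).
    """
    g1 = gaussian_blur(frame, sigma1)
    g2 = gaussian_blur(frame, sigma2)
    mask = np.abs(g1 - g2) > thresh          # binary "strengthening"
    filled = mask.copy()
    for y in range(filled.shape[0]):
        xs = np.flatnonzero(filled[y])
        if xs.size >= 2:
            filled[y, xs[0]:xs[-1] + 1] = True
    return filled
```

On a synthetic frame with a bright rectangle, the DoG responds at the figure's contour and the row-wise fill recovers a solid interior, which is the property the accumulated feature map relies on.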
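Of the three input channels fed to the 3D CNN, the frame-difference and three-frame-difference images are simple to compute. The sketch below assumes grayscale frames and an illustrative threshold; it is not the thesis's exact formulation.

```python
import numpy as np

def frame_diff(a, b, thresh=15):
    """Binary difference image between two grayscale frames."""
    return (np.abs(a.astype(int) - b.astype(int)) > thresh).astype(np.uint8)

def three_frame_diff(f1, f2, f3, thresh=15):
    """Three-frame difference: intersect the two adjacent difference images.

    Keeping only pixels that changed both from f1 to f2 and from f2 to f3
    localizes the object at its middle-frame position and suppresses the
    'ghosts' that a single frame difference leaves at the old and new
    positions.
    """
    return frame_diff(f1, f2, thresh) & frame_diff(f2, f3, thresh)
```

For a square moving left to right across three frames, the result highlights only the middle frame's position, which is why the three-frame variant is a useful complement to the plain frame difference in the stacked input.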
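The oversampling strategy used against class imbalance can be sketched as random oversampling: minority classes are resampled with replacement until every class matches the largest one. Function names and the seed are illustrative; the thesis does not specify this exact procedure.

```python
import random

def oversample(samples, labels, seed=0):
    """Random oversampling: duplicate minority-class samples until every
    class reaches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        # Draw extra samples with replacement from the minority class.
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels
```

Because duplicated samples only rebalance the loss contribution of each class, this is usually combined with augmentation (here, the thesis's dataset expansion) to avoid overfitting on the repeated minority examples.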
【學(xué)位授予單位】:中國科學(xué)技術(shù)大學(xué)
【學(xué)位級別】:碩士
【學(xué)位授予年份】:2017
【分類號】:TP391.41;TP183



