B. Zhang, W. Fang, W. Chen, F. Bi, C. Tang and X. Huang, "Visual Tracking Based on Cooperative Model," 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, 2018, pp. 614-620. doi: 10.1109/FG.2018.00097
Visual tracking based on cooperative model
|Author:||Zhang, Bobin1; Fang, Weidong2; Chen, Wei1,2|
1School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China
2Key Laboratory of Wireless Sensor Network and Communication, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 201899, China
3Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Oulu, FI-90540, Finland
4Faculty of Information Technology and Electrical Engineering (ITEE), University of Oulu, Oulu, FI-90540, Finland
|Online Access:||PDF Full Text (PDF, 1.6 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2019042913520|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2019-04-29|
In this paper, we propose a cooperative model that combines a multi-task reverse sparse representation model (MTRSR) with an AdaBoost classifier to cope with the degradation of the target's gradient information caused by motion blur or severe occlusion; a descriptive dictionary is used to estimate the weight of each candidate. First, we use the MTRSR model to obtain the blur kernel, from which a blurred target template set is generated; the confidence of each candidate is also obtained from its reconstruction error. Then we build the descriptive dictionary from the HOG features of the target templates to calculate the weights of the candidates, and an AdaBoost classifier computes a second confidence for each candidate. Finally, the best target is the candidate that maximizes the product of its weight and the sum of the two confidences. The experimental results show that the proposed algorithm copes well with changes in target appearance caused by motion blur and occlusion in complex scenes, and further improves the accuracy and robustness of visual tracking.
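The final selection step described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name, the toy confidence values, and the exact fusion rule (weight multiplied by the sum of the two confidences) are assumptions inferred from the abstract's wording.

```python
import numpy as np

def select_best_candidate(weights, sparse_conf, adaboost_conf):
    """Pick the best tracking candidate by fusing three per-candidate scores.

    weights       -- descriptive-dictionary weight of each candidate (assumed)
    sparse_conf   -- confidence from the MTRSR reconstruction error (assumed)
    adaboost_conf -- confidence from the AdaBoost classifier (assumed)

    The combined score multiplies each candidate's weight by the sum of its
    two confidences, as suggested by the abstract.
    """
    weights = np.asarray(weights, dtype=float)
    scores = weights * (np.asarray(sparse_conf, dtype=float)
                        + np.asarray(adaboost_conf, dtype=float))
    return int(np.argmax(scores)), scores

# Toy example with three candidates; all numbers are illustrative.
best, scores = select_best_candidate(weights=[0.2, 0.5, 0.3],
                                     sparse_conf=[0.6, 0.9, 0.4],
                                     adaboost_conf=[0.7, 0.8, 0.5])
print(best)  # index of the highest-scoring candidate
```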
|Pages:||614 - 620|
13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
IEEE International Conference on Automatic Face and Gesture Recognition
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||213 Electronic, automation and communications engineering, electronics|
The research is supported in part by the NSFC and Shanxi Provincial People's Government Jointly Funded Project of China for Coal Base and Low Carbon (Nos. U1510115, 51104157), the Qing Lan Project, and the Jiangsu Province Natural Science Foundation of China (No. BK20150201). We gratefully acknowledge the Academy of Finland, the Jorma Ollila Grant of the Nokia Foundation, the Central Fund of the Finnish Cultural Foundation, the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research, the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20110095120008), and the China Postdoctoral Science Foundation (Nos. 2013T60574, 20100481181).
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.