From BoW to CNN: two decades of texture representation for texture classification
|Author:||Liu, Li (1,2); Chen, Jie (2); Fieguth, Paul (3); Zhao, Guoying (2); Chellappa, Rama (4); Pietikäinen, Matti (2)|
1 National University of Defense Technology, Changsha, China
2 University of Oulu, Oulu, Finland
3 University of Waterloo, Waterloo, Canada
4 University of Maryland, College Park, USA
|Online Access:||PDF Full Text (PDF, 9.5 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2019060719458|
|Publish Date:||2019-06-07|
Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition, one which has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words (BoW) and on Convolutional Neural Networks (CNNs) have been extensively studied, with impressive results. Given this period of remarkable evolution, this paper presents a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited, covering different aspects of the research, including benchmark datasets and state-of-the-art results. Looking back at what has been achieved so far, the survey discusses open challenges and directions for future research.
International Journal of Computer Vision
|Pages:||74 - 109|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
This work was partially supported by the Center for Machine Vision and Signal Analysis at the University of Oulu, the Academy of Finland, the Tekes Fidipro program (Grant No. 1849/31/2015), the Business Finland project (Grant No. 3116/31/2017), Infotech Oulu, and the National Natural Science Foundation of China under Grant 61872379.
© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.