University of Oulu

Land cover and forest segmentation using deep neural networks

Author: Bengana, Mohamed¹
Organizations: ¹University of Oulu, Faculty of Information Technology and Electrical Engineering, Computer Science
Format: ebook
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 4.9 MB)
Pages: 54
Persistent link: http://urn.fi/URN:NBN:fi:oulu-201905101715
Language: English
Published: Oulu : M. Bengana, 2019
Publish Date: 2019-05-10
Thesis type: Master's thesis
Tutor: Heikkilä, Janne
Reviewer: Heikkilä, Janne; Pedone, Matteo
Description:

Abstract

Land Use and Land Cover (LULC) information is important for a variety of applications, notably those related to forestry. The segmentation of remotely sensed images has attracted considerable research interest. However, it is no easy task, with several challenges to face, including the complexity of satellite images, the difficulty of obtaining them, and the lack of ready-made datasets. It has become clear that classifying into multiple classes requires more elaborate methods such as Deep Learning (DL), and Deep Neural Networks (DNNs) are a promising candidate for the task. However, DNNs require a huge amount of training data, including Ground Truth (GT) data. In this thesis, a pixel-based DL approach backed by state-of-the-art semantic segmentation methods is followed to tackle the problem of LULC mapping. The DNN used is based on the DeepLabv3 network with an encoder-decoder architecture. To address the lack of data, imagery from the Sentinel-2 satellite, whose data is provided for free by Copernicus, was used together with GT mapping from Corine Land Cover (CLC), provided by Copernicus and modified by Tyke to a higher resolution. From the Sentinel-2 multispectral images, the Red, Green, Blue (RGB) and Near-Infrared (NIR) channels were extracted, the fourth channel being extremely useful for the detection of vegetation. A DNN based on a ResNet-50 backbone achieved quite good accuracy, measured with the Mean Intersection over Union (MIoU) metric, reaching 0.53 MIoU. This data made it possible to transfer the learning to Very High Resolution (VHR) imagery from the Pleiades-1 satellite. The results were excellent, especially when compared with training directly on that data, reaching an accuracy of 0.98 and 0.85 MIoU.
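The Mean Intersection over Union (MIoU) figures quoted above average, over all classes, the overlap between the predicted and ground-truth label maps divided by their union. A minimal sketch of this metric, assuming integer-labeled NumPy arrays (the thesis's exact implementation is not shown here):

```python
# Hedged sketch: per-class IoU averaged into MIoU for two label maps.
# `pred` and `target` are assumed to be integer class maps of equal shape.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Average the intersection-over-union of each class present in either map."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(intersection / union)
    return float(np.mean(ious))

# Tiny illustrative example (hypothetical 2x3 label maps, 3 classes)
pred = np.array([[0, 0, 1],
                 [1, 2, 2]])
target = np.array([[0, 1, 1],
                   [1, 2, 2]])
print(mean_iou(pred, target, num_classes=3))  # → 0.722… (= (0.5 + 2/3 + 1) / 3)
```

A per-class breakdown of the same computation is often reported alongside the mean, since a high MIoU can hide poor performance on rare land-cover classes.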


Copyright information: © Mohamed Bengana, 2019. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.