University of Oulu

3D texture reconstruction from multi-view images

Author: Rajagopal, Satish¹
Organizations: ¹University of Oulu, Faculty of Information Technology and Electrical Engineering, Department of Computer Science and Engineering, Computer Science and Engineering
Format: ebook
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 1.9 MB)
Persistent link: http://urn.fi/URN:NBN:fi:oulu-201706022492
Language: English
Published: Oulu : S. Rajagopal, 2017
Publish Date: 2017-06-02
Physical Description: 45 p.
Thesis type: Master's thesis (tech)
Tutor: Heikkilä, Janne
Reviewer: Heikkilä, Janne
Kannala, Juho
Description:
Given an uncontrolled image dataset, there are several approaches for reconstructing the geometry of the scene and very few for reconstructing the texture. We analyze two different state-of-the-art, fully integrated texture reconstruction frameworks for generating a textured scene. Shan et al.'s approach [1] uses a shading model inspired by computer graphics rendering to formulate the scene and compute the texture; the texture is stored as an albedo reflectance parameter per vertex for each color channel. Waechter et al. [2] use a two-stage approach: in the first stage a view is selected for each face, and in the second stage global and local adjustments are performed to smooth out seam visibility between patches. Both approaches have their own occlusion removal stage. We analyze these two drastically different approaches under different conditions. We compare the input images and the scenes rendered from the same angle. We discuss occlusion removal in an unconstrained image dataset. We modify the shading model proposed by Shan et al. to solve for a controlled indoor scene. The analysis shows the advantages of each approach under specific conditions. The patch-based texture reconstruction provides a visually appealing scene reconstructed in reasonable time. The vertex-based texture reconstruction has a more complex model that provides a framework for solving for the lighting and environmental conditions under which the images were captured. We believe that these two approaches provide fully integrated frameworks that reconstruct both the geometry and the texture of a scene from an uncontrolled image dataset in a reasonable time, despite all the inherent challenges.
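
To make the vertex-based formulation concrete, the sketch below shows a simplified per-vertex shading model of the kind described above: a rendered vertex color is the product of a per-vertex, per-channel albedo and a shading term. This is a minimal illustration written for this summary, assuming a Lambertian sun term plus a constant sky term; the function and parameter names are illustrative and do not come from Shan et al.'s implementation [1] or from the thesis.

    # Minimal per-vertex shading sketch (illustrative; not the exact model from [1]).
    # Assumed lighting: a directional "sun" term plus a constant "sky" term.
    import numpy as np

    def render_vertex_colors(albedo, normals, sun_dir, sun_rgb, sky_rgb):
        # albedo:  (V, 3) per-vertex reflectance, one value per color channel
        # normals: (V, 3) unit vertex normals
        # sun_dir: (3,)   unit vector pointing towards the sun
        # sun_rgb, sky_rgb: (3,) per-channel light intensities
        ndotl = np.clip(normals @ sun_dir, 0.0, None)[:, None]  # Lambertian term, back-facing vertices clamped to 0
        shading = sun_rgb[None, :] * ndotl + sky_rgb[None, :]   # (V, 3) shading per vertex
        return albedo * shading                                 # rendered vertex colors

    # Texture reconstruction inverts this model: given the observed colors of a vertex
    # in the views that see it, the per-vertex albedo is recovered, e.g. by least squares.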

Copyright information: © Satish Rajagopal, 2017. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.