Mason Archival Repository Service

Learn to Synthesize Appearance, Shape and Motion Using Synthetic Data


dc.contributor.advisor Lien, Jyh-Ming
dc.contributor.author Liu, Guilin
dc.creator Liu, Guilin
dc.date.accessioned 2018-10-22T01:21:20Z
dc.date.available 2018-10-22T01:21:20Z
dc.date.issued 2017
dc.identifier.uri https://hdl.handle.net/1920/11321
dc.description.abstract Vast amounts of 2D images, 3D meshes, and point clouds are created every day. Many applications require extracting semantic information beyond these raw discrete representations of pixels, facets, and points. In this dissertation, I focus on extracting several types of shape and physical properties from such data. Estimating these properties, however, faces several difficulties: under-constrained problem settings, the lack of accurate ground-truth data, and high computational cost. I develop methods for three estimation tasks, each representing one of these difficulties: appearance synthesis, shape synthesis, and motion synthesis. For appearance synthesis, I develop an end-to-end deep learning framework that estimates material (reflectance) properties from 2D images, an inherently under-constrained problem; the key ingredient of the framework is a rendering layer, and I demonstrate its effectiveness for editing materials in 2D images. For shape synthesis, I show how to combine inaccurate, noisy ground-truth normal data with the image itself to predict fine-scale normals from 2D images using a deep learning framework; even though the ground-truth normals are inaccurate and far from detailed, the trained model still produces detailed normal predictions. For motion synthesis, I approximate the medial axis of a robot's configuration space, which is ordinarily very expensive to compute, by adapting the support vector machine formulation to solve this task efficiently. Detailed explanations of how these difficulties are resolved, along with extensive experimental results, are provided.
On the other hand, the methods for these tasks require sufficient, valid training data. I generate synthetic datasets to compensate for the lack of corresponding real-image datasets for appearance synthesis and shape synthesis; semantically meaningful segmentations of 3D shapes are used to produce plausible synthetic data for these two tasks. To train a model that approximates the medial axis of the robot's configuration space well, segmentations of the 3D models in the environment with bounded geometric constraints are needed. To generate segmentations that are both semantically meaningful and satisfy bounded geometric constraints, I propose a new part-aware shape feature and two nearly convex decomposition methods. Comparisons with human segmentations and other alternatives validate the effectiveness of the proposed feature and methods.
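To give a flavor of the SVM-based medial-axis idea mentioned in the abstract, here is a minimal sketch, not the dissertation's actual formulation: in a toy 2D configuration space with two point obstacles, free-space samples are labeled by their nearest obstacle, and a linear SVM (trained here with a Pegasos-style subgradient method) is fitted so that its decision boundary, the set of points the classifier cannot separate, approximates the medial axis (the locus equidistant from the two obstacles). All names, the obstacle geometry, and the training scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two point "obstacles" in the plane (illustrative stand-ins for C-obstacles).
obstacle_a = np.array([-1.0, 0.0])
obstacle_b = np.array([1.0, 0.0])

# Sample free-space configurations; label each by its nearest obstacle
# (+1 if closer to obstacle_b, -1 if closer to obstacle_a).
X = rng.uniform(-2.0, 2.0, size=(400, 2))
y = np.where(np.linalg.norm(X - obstacle_a, axis=1)
             > np.linalg.norm(X - obstacle_b, axis=1), 1.0, -1.0)

# Train a linear SVM with a Pegasos-style stochastic subgradient method.
# Its zero level set w.x + b = 0 approximates the medial axis between the
# two obstacles (here, the perpendicular bisector x = 0).
w = np.zeros(2)
b = 0.0
lam = 0.01  # regularization strength
for t in range(1, 20001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)  # decaying step size
    if y[i] * (X[i] @ w + b) < 1.0:  # margin violated: hinge-loss step
        w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        b += eta * y[i]
    else:                            # margin satisfied: shrink only
        w = (1.0 - eta * lam) * w
```

The appeal in this setting is that the boundary is learned from sampled distance comparisons alone, so no explicit (and expensive) medial-axis computation over the whole configuration space is required; a kernelized SVM would extend the same idea to curved medial axes.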
dc.format.extent 175 pages
dc.language.iso en
dc.rights Copyright 2017 Guilin Liu
dc.subject Computer science en_US
dc.subject Material Editing en_US
dc.subject Medial-Axis Motion Planning en_US
dc.subject Shape Descriptor en_US
dc.subject Shape Segmentation en_US
dc.title Learn to Synthesize Appearance, Shape and Motion Using Synthetic Data
dc.type Dissertation
thesis.degree.level Ph.D.
thesis.degree.discipline Computer Science
thesis.degree.grantor George Mason University

