Learn to Synthesize Appearance, Shape and Motion Using Synthetic Data

dc.contributor.advisor: Lien, Jyh-Ming
dc.contributor.author: Liu, Guilin
dc.creator: Liu, Guilin
dc.description.abstract: Vast numbers of 2D images, 3D meshes, and point clouds are created every day. Many applications require extracting semantic information beyond the raw discrete representation of pixels, facets, and points. In this dissertation, I focus on extracting several types of shape and physical properties from such data. Estimating these properties, however, faces several difficulties: under-constrained problem settings, a lack of accurate ground-truth data, and expensive computation. I develop methods for three estimation tasks, each representing one of these difficulties: appearance synthesis, shape synthesis, and motion synthesis. For appearance synthesis, I develop an end-to-end deep learning framework that estimates material (reflectance) properties from 2D images, an inherently under-constrained problem. The key ingredient of this framework is the introduction of a rendering layer. I demonstrate the framework's effectiveness for editing materials in 2D images. For shape synthesis, I discuss how to combine inaccurate, noisy ground-truth normal data with the image itself to predict fine-scale normals from 2D images using a deep learning framework. Results show that even though the ground-truth normals are inaccurate and far from detailed, the trained model can still produce detailed normal predictions. The motion synthesis part concerns approximating the medial axis of a robot's configuration space, which is ordinarily very expensive to compute. I show how the support vector machine formulation can be adapted to solve this task efficiently. Detailed explanations of how these difficulties are resolved, along with extensive experimental results, are provided.
The methods for these tasks also require sufficient, valid training data. I generate synthetic datasets to compensate for the lack of corresponding real-image datasets for appearance synthesis and shape synthesis; semantically meaningful segmentations of 3D shapes are used to produce plausible synthetic datasets for these two tasks. To train a model that approximates the medial axis of the robot's configuration space well, segmentations of the 3D models in the environment that satisfy bounded geometric constraints are needed. To generate segmentations that are both semantically meaningful and geometrically bounded, I propose a new part-aware shape feature and two nearly convex decomposition methods. Comparisons with human segmentations and other alternatives validate the effectiveness of the proposed feature and methods.
dc.format.extent: 175 pages
dc.rights: Copyright 2017 Guilin Liu
dc.subject: Computer science
dc.subject: Material Editing
dc.subject: Medial-Axis Motion Planning
dc.subject: Shape Descriptor
dc.subject: Shape Segmentation
dc.title: Learn to Synthesize Appearance, Shape and Motion Using Synthetic Data
thesis.degree.discipline: Computer Science
thesis.degree.grantor: George Mason University

