Open Access Dissertation
Doctor of Philosophy (PhD)
Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces
Digital representations of 3D shapes are becoming increasingly useful in several emerging applications, such as 3D printing, virtual reality, and augmented reality. However, traditional modeling software requires users to have extensive modeling experience, artistic skill, and training to handle complex interfaces and perform the necessary low-level geometric manipulation commands. Thus, there is a growing need for computer algorithms that help novice and casual users quickly and easily generate 3D content. In this work, I present deep learning algorithms that automatically infer parametric representations of shape families, which can be used to generate new 3D shapes from high-level user specifications, such as input sketches.
I will first present a deep learning technique that generates 3D shapes by translating an input sketch to parameters of a predefined procedural model. The inferred procedural model parameters then yield multiple, detailed output shapes that resemble the user's input sketch. At the heart of our approach is a deep convolutional network trained to map sketches to procedural model parameters.
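To make the input/output contract of this sketch-to-parameters mapping concrete, here is a minimal toy stand-in. The dissertation uses a deep convolutional network; the single random linear layer below, the 64x64 raster size, the 10-parameter output, and the sigmoid squashing are all illustrative assumptions, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

SKETCH_SIZE = 64   # assumed sketch raster resolution (64x64)
NUM_PARAMS = 10    # assumed number of procedural model parameters

# Random weights stand in for the trained deep convolutional network.
W = rng.normal(scale=0.01, size=(NUM_PARAMS, SKETCH_SIZE * SKETCH_SIZE))
b = np.zeros(NUM_PARAMS)

def sketch_to_parameters(sketch: np.ndarray) -> np.ndarray:
    """Map a 2D sketch raster to a procedural parameter vector in (0, 1)."""
    x = sketch.reshape(-1)                 # flatten the image
    logits = W @ x + b                     # one linear layer (a deep CNN in the thesis)
    return 1.0 / (1.0 + np.exp(-logits))   # squash to a normalized parameter range

sketch = rng.random((SKETCH_SIZE, SKETCH_SIZE))
params = sketch_to_parameters(sketch)
```

The returned `params` vector would then be fed to the predefined procedural model, whose execution produces the detailed output geometry.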
Procedural models are not always readily available, so I will also present a deep learning algorithm that automatically learns parametric models of shape families directly from 3D model collections. The parametric models are built from dense point correspondences between shapes. To compute correspondences, we propose a probabilistic graphical model that learns a collection of deformable templates describing a shape family. The probabilistic model is backed by a deep convolutional network that learns surface point descriptors such that accurate point correspondences can be established between shapes.
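The core matching step can be sketched as nearest-neighbor search in descriptor space. In the thesis the per-point descriptors come from the learned deep network; in this toy illustration they are random, and the point counts and 32-dimensional descriptor size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
desc_a = rng.normal(size=(100, 32))   # descriptors for 100 points on shape A (toy)
desc_b = rng.normal(size=(120, 32))   # descriptors for 120 points on shape B (toy)

def correspondences(da: np.ndarray, db: np.ndarray) -> np.ndarray:
    """For each point on shape A, return the index of its nearest
    descriptor-space neighbor on shape B (squared Euclidean distance)."""
    d2 = ((da[:, None, :] - db[None, :, :]) ** 2).sum(axis=-1)  # (100, 120) distances
    return d2.argmin(axis=1)

match = correspondences(desc_a, desc_b)   # match[i] = corresponding point on B
```

With a well-trained descriptor network, geometrically and semantically corresponding points land close together in this space, so the argmin recovers the desired dense correspondence.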
Based on the estimated shape correspondences, I will introduce a probabilistic generative model that hierarchically captures the statistical relationships of corresponding surface point positions and parts, as well as their existence across the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model produces control point arrangements that drive shape synthesis by combining and deforming parts from the input collection.
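A minimal sampler in the spirit of such a hierarchical model: first sample which parts exist, then sample control point positions for each existing part. The three-part layout, the existence probabilities, the Gaussian noise scale, and the five control points per part are all illustrative assumptions, not the model learned in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

part_exist_prob = np.array([0.9, 0.7, 0.4])   # assumed per-part existence probabilities
part_means = rng.normal(size=(3, 5, 3))       # 5 mean control points (3D) per part (toy)

def sample_shape():
    """Top level: sample part existence; lower level: sample control
    point positions around each existing part's mean arrangement."""
    exists = rng.random(3) < part_exist_prob
    points = {}
    for p in np.flatnonzero(exists):
        points[p] = part_means[p] + 0.05 * rng.normal(size=(5, 3))
    return exists, points

exists, points = sample_shape()
```

Each sampled control point arrangement would then guide which parts are retrieved from the collection and how they are deformed to assemble a coherent new shape.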
With these new data-driven modeling algorithms, I hope to significantly shorten the design cycle for 3D products and make the creation of detail-rich visual content easy for casual modelers.
Huang, Haibin, "DEEP-LEARNED GENERATIVE REPRESENTATIONS OF 3D SHAPE FAMILIES" (2017). Doctoral Dissertations. 1098.