Efficient Deep Learning System in Mobile Computing

dc.contributor.advisor: Chen, Xiang
dc.creator: Xu, Zirui
dc.description.abstract: In the past few years, fast-developing Deep Neural Networks (DNNs) and their broad applications have served as the primary driving force behind a new technology wave. However, DNNs remain computationally intensive for resource-constrained mobile systems, and many research efforts have therefore sought to compress and accelerate them for efficient computation on mobile devices. My Ph.D. research focuses on building efficient deep learning systems in mobile computing by addressing three specific challenges: system deployment, application scenarios, and large-scale collaboration. First, from the perspective of single-system deployment, to adapt DNNs to the varied hardware constraints of mobile devices, I propose DiReCtX, a dynamic resource-aware DNN model reconfiguration framework. DiReCtX is built on a set of accurate DNN profiling models that estimate different kinds of resource consumption and inference accuracy. With manageable consumption/accuracy trade-offs, DiReCtX can reconfigure a DNN model to meet distinct types and levels of resource constraints while maintaining the expected inference performance. Second, from the perspective of the application scenario, I observe that the input information in most mobile computing scenarios contains many redundancies (sparsity patterns) that are non-structural and randomly located on feature maps with non-identical shapes. I therefore develop a novel sparsity computing scheme, FalCon, which adapts well to these practical sparsity patterns while maintaining efficient computation, together with a decomposed convolution computing optimization paradigm that converts the sparsity into practical acceleration. Finally, from the perspective of large-scale collaboration, I propose Helios, a heterogeneity-aware federated learning (FL) framework that tackles the straggler issue in multi-device collaborative learning. Helios identifies each device's heterogeneous training capability and calculates the expected neural network model training volume for stragglers. For straggling devices, a "soft-training" method dynamically compresses the original identical training model to the expected volume through a rotating-neuron training approach. With extensive algorithm analysis and optimization schemes, stragglers can be accelerated while retaining convergence for both local training and federated collaboration. I hope the projects realized in this dissertation contribute to research on efficient deep learning systems and motivate further studies on model optimization, collaborative system design, and even compiler-level renovation.
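The profiling-then-reconfiguration loop that the abstract attributes to DiReCtX can be sketched as a simple budget-constrained selection; the configuration names and the (memory, accuracy) numbers below are hypothetical placeholders, not values from the dissertation.

```python
# Toy sketch of resource-aware model reconfiguration (in the spirit of
# DiReCtX): among candidate model configurations with profiled estimates
# of memory use and accuracy, pick the most accurate one that fits the
# current resource budget. All numbers are illustrative assumptions.
from typing import Optional, Tuple

# (configuration name, estimated memory in MB, estimated top-1 accuracy)
CONFIGS = [
    ("full",    220.0, 0.761),
    ("prune25", 165.0, 0.748),
    ("prune50", 110.0, 0.721),
    ("prune75",  60.0, 0.668),
]

def reconfigure(budget_mb: float) -> Optional[Tuple[str, float, float]]:
    """Return the highest-accuracy configuration within the memory budget."""
    feasible = [c for c in CONFIGS if c[1] <= budget_mb]
    return max(feasible, key=lambda c: c[2]) if feasible else None
```

In the actual framework the estimates would come from the profiling models and the budget would change at run time; this sketch only shows the trade-off selection step.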
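The idea of converting non-structural input sparsity into saved computation can be illustrated with a naive patch-skipping convolution; this is only a toy sketch of the general principle, not FalCon's decomposed convolution paradigm.

```python
import numpy as np

def patch_skipping_conv2d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D cross-correlation that skips input patches that are
    entirely zero. A toy illustration of exploiting input sparsity -- real
    sparsity-aware kernels restructure the computation far more aggressively.
    """
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + kh, j:j + kw]
            if patch.any():                 # skip all-zero (sparse) patches
                out[i, j] = float((patch * k).sum())
    return out
```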
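The notion of an "expected training volume" assigned to each straggler might be sketched as below; the device names, measured speeds, and the linear capability-scaling rule are assumptions for illustration, not Helios's actual calculation.

```python
# Toy sketch of a Helios-style "expected training volume": each straggler
# trains a compressed sub-model whose parameter budget is scaled to its
# measured training capability relative to the fastest device.

def expected_volume(full_params: int, speed: float, fastest: float) -> int:
    """Scale the trainable-parameter budget by relative device capability."""
    return round(full_params * min(1.0, speed / fastest))

speeds = {"phone_a": 1.0, "phone_b": 0.4, "tablet_c": 0.7}  # samples/sec (made up)
fastest = max(speeds.values())
budgets = {dev: expected_volume(1_000_000, s, fastest)
           for dev, s in speeds.items()}
```

In the full framework the compressed sub-model would then be trained with the rotating-neuron ("soft-training") approach so that, over rounds, all neurons of the original model are still updated.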
dc.format.extent: 104 pages
dc.format.medium: doctoral dissertations
dc.rights: Copyright 2022 Zirui Xu
dc.subject: Deep Learning
dc.subject: Deep Neural Networks
dc.subject: Federated Learning
dc.subject: Mobile Computing
dc.subject: Model Compression
dc.subject.keywords: Computer engineering
dc.subject.keywords: Computer science
dc.title: Efficient Deep Learning System in Mobile Computing
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: George Mason University
thesis.degree.name: Ph.D. in Electrical and Computer Engineering


Original bundle: 1 file, 1.66 MB, Adobe Portable Document Format