Development of Lagrange Multiplier Algorithms for Training Support Vector Machines

dc.contributor.advisor: Griva, Igor
dc.creator: Aregbesola, Mayowa Kassim
dc.date.accessioned: 2023-03-17T19:05:44Z
dc.date.available: 2023-03-17T19:05:44Z
dc.date.issued: 2022
dc.description.abstract: The Support Vector Machine (SVM) is a supervised learning method widely used for data classification and regression. SVM training times can be significant for large training datasets. In pursuit of efficient optimization techniques for training SVMs on very large datasets, the decomposition method is often used, in which the SVM problem is broken into a series of SVM sub-problems. The subset of elements used in each decomposition step, called a working set, must be selected efficiently while working toward the full SVM solution. SVM training time can be further reduced by parallel processing, allowing the training algorithms to run faster and more reliably. In this work, the Augmented Lagrangian Fast Projected Gradient Method (ALFPGM) and the Nonlinear Rescaling Augmented Lagrangian (NRAL) method are used to train the SVM sub-problems. We developed and implemented parallel algorithms for training SVMs, using optimized matrix-vector and matrix-matrix operations and memory management to speed up the ALFPGM and NRAL algorithms. We also proposed new working set selection (WSS) schemes for choosing the working sets used in the SVM decomposition. The proposed WSS schemes yield faster training times while achieving classification error similar to other approaches in the literature. Numerical results showing SVM training times and classification errors obtained with the ALFPGM and NRAL methods are compared with results obtained using LibSVM, a widely used SVM solver. These results show that NRAL achieves faster training times than LibSVM on large-dataset SVM problems while attaining similar, and in some cases smaller, training data classification errors.
dc.format.extent: 135 pages
dc.format.medium: doctoral dissertations
dc.identifier.uri: https://hdl.handle.net/1920/13182
dc.language.iso: en
dc.rights: Copyright 2022 Mayowa Kassim Aregbesola
dc.rights.uri: https://rightsstatements.org/vocab/InC/1.0
dc.subject: Classification
dc.subject: Convex optimization
dc.subject: Machine learning
dc.subject: Support vector machines
dc.subject: SVM
dc.subject: Working set selection
dc.subject.keywords: Mathematics
dc.subject.keywords: Computer science
dc.title: Development of Lagrange Multiplier Algorithms for Training Support Vector Machines
dc.type: Text
thesis.degree.discipline: Computational Sciences and Informatics
thesis.degree.grantor: George Mason University
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D. in Computational Sciences and Informatics
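The abstract describes selecting a working set at each step of the SVM decomposition. As a point of reference for that idea, the sketch below shows the standard first-order "maximal violating pair" working-set selection rule for the SVM dual (the rule used by solvers such as LibSVM, which the dissertation compares against). This is a minimal illustrative sketch, not the dissertation's proposed WSS schemes; the function name and tolerance parameter are illustrative choices.

```python
import numpy as np

def select_working_set(grad, y, alpha, C, tol=1e-3):
    """Maximal-violating-pair working-set selection for the SVM dual
    min 0.5*a^T Q a - e^T a,  s.t. y^T a = 0,  0 <= a_i <= C.

    Standard first-order rule (as in LibSVM); NOT the dissertation's
    proposed WSS schemes. Returns an index pair (i, j) to optimize
    next, or None when the KKT conditions hold within `tol`.
    """
    # "up" set: alpha_i may increase; "down" set: alpha_i may decrease
    up = ((alpha < C) & (y == 1)) | ((alpha > 0) & (y == -1))
    down = ((alpha < C) & (y == -1)) | ((alpha > 0) & (y == 1))
    yg = -y * grad  # -y_i * (gradient of the dual objective)_i
    i = int(np.argmax(np.where(up, yg, -np.inf)))   # most violating "up" index
    j = int(np.argmin(np.where(down, yg, np.inf)))  # most violating "down" index
    if yg[i] - yg[j] < tol:  # no violating pair: KKT-optimal within tolerance
        return None
    return i, j
```

In a decomposition loop, this rule is called once per iteration; the returned pair defines a two-variable sub-problem that can be solved analytically, after which the gradient is updated and selection repeats until `None` is returned.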

Files

Original bundle
Name: Aregbesola_gmu_0883E_12895.pdf
Size: 1.48 MB
Format: Adobe Portable Document Format