Development of Lagrange Multiplier Algorithms for Training Support Vector Machines






The Support Vector Machine (SVM) is a supervised learning method that is widely used for data classification and regression. SVM training times can be significant for large training datasets. In pursuit of efficient optimization techniques for training SVMs on very large datasets, the decomposition method is often used, in which the SVM problem is broken into a series of SVM sub-problems. The subset of elements used in each decomposition step, called a working set, must be selected efficiently while working toward the full SVM solution. SVM training time can be further reduced by parallel processing, allowing the training algorithms to run faster and more reliably. In this work, the Augmented Lagrangian Fast Projected Gradient Method (ALFPGM) and the Nonlinear Rescaling Augmented Lagrangian (NRAL) method are used to train the SVM sub-problems. We developed and implemented parallel algorithms for training SVMs, and we used optimized matrix-vector and matrix-matrix operations, together with careful memory management, to speed up the ALFPGM and NRAL algorithms. We also proposed new working set selection (WSS) schemes for selecting the working sets used in the SVM decomposition. The proposed WSS schemes yield faster training times while achieving classification errors similar to those of other approaches in the literature. SVM training times and classification errors obtained with the ALFPGM and NRAL methods are compared with those of LibSVM, a widely used SVM solver. Numerical results show that NRAL trains faster than LibSVM on large SVM problems while achieving similar, and in some cases smaller, training classification errors.
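To give a concrete sense of the kind of box-constrained dual subproblem such solvers address, the sketch below trains a toy SVM by plain projected-gradient ascent on a simplified dual. This is only an illustration, not the ALFPGM or NRAL algorithm from this work: it assumes a linear kernel, drops the bias term (so the equality constraint vanishes and only the box constraints 0 ≤ α ≤ C remain), uses a fixed step size, and solves the whole problem at once rather than decomposing it into working sets.

```python
import numpy as np

def train_svm_dual_pg(X, y, C=1.0, lr=0.01, iters=2000):
    """Projected-gradient ascent on a simplified (bias-free) SVM dual:
       maximize  sum(alpha) - 0.5 * alpha^T Q alpha
       subject to 0 <= alpha_i <= C,
       where Q_ij = y_i * y_j * <x_i, x_j> (linear kernel).
       Illustrative only; real solvers use decomposition and working sets."""
    Yx = y[:, None] * X            # rows scaled by labels
    Q = Yx @ Yx.T                  # kernel matrix with label signs folded in
    alpha = np.zeros(len(y))
    for _ in range(iters):
        grad = 1.0 - Q @ alpha                      # gradient of the dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)  # project onto the box [0, C]^n
    return alpha

def predict(X_train, y_train, alpha, X_test):
    """Decision rule f(x) = sign(sum_i alpha_i y_i <x_i, x>)."""
    w = X_train.T @ (alpha * y_train)
    return np.sign(X_test @ w)

# Tiny linearly separable example (hypothetical data, for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = train_svm_dual_pg(X, y)
preds = predict(X, y, alpha, X)
```

In practice the step size must be below 2/L, where L is the largest eigenvalue of Q, for the iteration to converge; accelerated (FISTA-style) projected gradient and augmented-Lagrangian treatment of the equality constraint are what distinguish methods such as ALFPGM from this bare sketch.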



Classification, Convex optimization, Machine learning, Support vector machines, SVM, Working set selection