Date of Award




Document Type


Degree Name

Doctor of Philosophy (PhD)


Department of Mathematics and Statistics

Content Description

1 online resource (ix, 109 pages) : color illustrations.

Dissertation/Thesis Chair

Yiming Ying

Committee Members

Yunlong Feng, Karin Reinhold, Boris Goldfarb


Keywords

AUC Optimization, Batch Learning, Binary Classification, Machine Learning, Online Learning, Computer algorithms, Mathematical optimization, Learning models (Stochastic processes), Receiver operating characteristic curves

Subject Categories

Physical Sciences and Mathematics


Abstract

Stochastic optimization algorithms such as stochastic gradient descent (SGD) are well suited to large-scale data analysis because they update the model sequentially with low per-iteration cost. Much of the existing work focuses on optimizing accuracy; however, accuracy is known to be an inappropriate measure for class-imbalanced data. The area under the ROC curve (AUC) is a standard metric for measuring classification performance in this setting. Therefore, developing stochastic learning algorithms that maximize AUC instead of accuracy is of both theoretical and practical interest. However, AUC maximization presents a challenge, since the learning objective is defined over pairs of instances from opposite classes. Existing methods overcome this issue and achieve online processing, but at the cost of higher space and time complexity. In this thesis, we develop two novel stochastic algorithms for AUC maximization. The first is an online method referred to as SPAM. In contrast to the previous literature, the algorithm can be applied to non-smooth penalty functions while achieving a convergence rate of O(log T / T). The second is a batch learning method referred to as SPDAM, for which we establish a linear convergence rate given a sufficiently large batch size. We demonstrate the effectiveness of these algorithms on standard benchmark data sets as well as data sets for anomaly detection tasks.
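The pairwise structure of the AUC objective mentioned above can be illustrated with a generic stochastic update on a pairwise squared-hinge surrogate. This is a minimal NumPy sketch on synthetic imbalanced data, not the SPAM or SPDAM algorithms themselves; the step size, iteration count, and data distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy class-imbalanced binary data: few positives, many negatives.
n_pos, n_neg, d = 20, 200, 5
X_pos = rng.normal(loc=1.0, size=(n_pos, d))
X_neg = rng.normal(loc=-1.0, size=(n_neg, d))

w = np.zeros(d)
eta = 0.01  # step size (illustrative choice)

# Each step samples one positive/negative pair and descends the
# pairwise surrogate  loss(w; x+, x-) = max(0, 1 - w.(x+ - x-))^2 .
for t in range(2000):
    xp = X_pos[rng.integers(n_pos)]
    xn = X_neg[rng.integers(n_neg)]
    margin = w @ (xp - xn)
    if margin < 1:
        # gradient of (1 - margin)^2 w.r.t. w is -2*(1 - margin)*(xp - xn)
        w += eta * 2 * (1 - margin) * (xp - xn)

# Empirical AUC: fraction of (positive, negative) pairs ranked correctly.
scores_pos = X_pos @ w
scores_neg = X_neg @ w
auc = np.mean(scores_pos[:, None] > scores_neg[None, :])
```

The need to sample a pair of opposite-class instances at every step is precisely the space/time bottleneck that online AUC methods must work around.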