Date of Award

1-1-2019

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

College/School/Department

Department of Computer Science

Content Description

1 online resource (xii, 147 pages) : color illustrations.

Dissertation/Thesis Chair

Won Namgoong

Committee Members

Petko Bogdanov, Siwei Lyu, Yiming Ying

Keywords

graph-structured, nonconvex, online learning, sparse learning, sparsity, stochastic optimization, Data structures (Computer science), Graph theory, Mathematical optimization, Machine learning, Sparse matrices, Graphical modeling (Statistics)

Subject Categories

Applied Mathematics | Computer Sciences

Abstract

Learning graph-structured sparse models has recently received significant attention thanks to its broad applicability to many important real-world problems. However, such models, although more effective and more interpretable than their unstructured counterparts, are difficult to learn because of the optimization challenges involved. This thesis presents optimization algorithms for learning graph-structured sparse models in three problem settings. First, in the batch learning setting, we develop methods that apply to a range of objective functions and enjoy linear convergence guarantees up to a constant error; they effectively optimize the statistical score functions that arise in subgraph detection. Second, in the stochastic learning setting, we propose a stochastic gradient-based algorithm to address the challenge of large-scale data, and we prove that its convergence guarantee is competitive with that of its batch learning counterpart. Third, in the online learning setting, we propose new dual-averaging-based methods that minimize a predefined regret. We show, both theoretically and empirically, that the proposed methods are more effective and learn more interpretable models than existing approaches.
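
For context on the online setting the abstract mentions, the following is a minimal sketch of classical l1-regularized dual averaging (RDA, in the style of Xiao 2010), the standard baseline that dual-averaging-based sparse online methods build on. It is not the thesis's graph-structured algorithm; the names grad_fn, lam, and gamma are illustrative assumptions.

```python
import numpy as np

def l1_rda(grad_fn, dim, T, lam=0.1, gamma=1.0):
    """Sketch of l1-regularized dual averaging (RDA).

    grad_fn(t, w) is assumed to return a subgradient of the round-t loss
    at the current iterate w; lam is the l1 penalty weight and gamma
    scales the strongly convex proximal term.
    """
    w = np.zeros(dim)
    g_sum = np.zeros(dim)           # running sum of observed subgradients
    for t in range(1, T + 1):
        g_sum += grad_fn(t, w)      # observe a subgradient at w_t
        g_bar = g_sum / t           # averaged dual vector
        # Closed-form minimizer of
        #   <g_bar, w> + lam*||w||_1 + (gamma / sqrt(t)) * ||w||_2^2 / 2,
        # i.e. soft-thresholding of the averaged gradient.
        shrink = np.maximum(np.abs(g_bar) - lam, 0.0)
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrink
    return w
```

The soft-thresholding step drives small averaged-gradient coordinates exactly to zero, which is why dual averaging is a natural starting point for sparse online learning; the thesis's contribution, per the abstract, is extending this family of methods to graph-structured sparsity with regret guarantees.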
