Inexact Proximal Gradient Methods with Optimal Support Identification

Abstract

We consider the proximal-gradient method for minimizing an objective function that is the sum of a smooth function and a non-smooth convex function. A feature that distinguishes our work from most in the literature is that we assume that the associated proximal-gradient subproblem does not admit a closed-form solution. To address this challenge, we study two adaptive and implementable termination conditions that dictate how accurately the proximal-gradient subproblem must be solved. We prove that the iteration complexity for the resulting inexact proximal-gradient method to reach an $\epsilon$-approximate first-order stationary point is $O(\epsilon^{-2})$. In addition, using the overlapping group $\ell_1$ regularizer as an example, we propose a new approach for approximately solving the proximal-gradient subproblem, and then establish a support identification complexity result.
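To make the setting concrete, here is a minimal sketch of an inexact proximal-gradient loop for an overlapping group $\ell_1$ regularizer. The subproblem solver (projected gradient ascent on the dual of the prox problem, stopped by an implementable tolerance test) and the tolerance schedule `1/(t+1)^2` are illustrative choices, not the termination conditions or solver analyzed in the talk; all names below are hypothetical.

```python
import numpy as np

def prox_overlapping_group_l1(v, groups, lam, tol, max_inner=1000):
    """Approximately evaluate the prox of lam * sum_g ||x_G||_2 for possibly
    overlapping index groups, via projected gradient ascent on the dual.
    With overlaps this prox has no closed form, hence the inner iteration."""
    y = [np.zeros(len(G)) for G in groups]  # dual block per group, ||y_g|| <= lam
    # Step size 1/L, where L = max number of groups covering any coordinate.
    cover = max(1, max(sum(1 for G in groups if i in G) for i in range(len(v))))
    step = 1.0 / cover
    for _ in range(max_inner):
        # Primal point induced by the current dual variables.
        x = v.copy()
        for G, yg in zip(groups, y):
            x[G] -= yg
        # Projected dual gradient step; track the largest dual change.
        max_change = 0.0
        for k, G in enumerate(groups):
            y_new = y[k] + step * x[G]
            nrm = np.linalg.norm(y_new)
            if nrm > lam:                      # project onto ||y_g|| <= lam
                y_new *= lam / nrm
            max_change = max(max_change, np.linalg.norm(y_new - y[k]))
            y[k] = y_new
        if max_change <= tol:                  # implementable stopping test
            break
    x = v.copy()
    for G, yg in zip(groups, y):
        x[G] -= yg
    return x

def inexact_proximal_gradient(grad_f, x0, lip, groups, lam, n_iter=50):
    """Outer loop: each prox subproblem is solved only to a tolerance that
    tightens with the iteration count (one possible adaptive schedule)."""
    x = x0
    for t in range(n_iter):
        v = x - grad_f(x) / lip
        x = prox_overlapping_group_l1(v, groups, lam / lip, tol=1.0 / (t + 1) ** 2)
    return x
```

For non-overlapping groups the prox reduces to block soft-thresholding, which gives a convenient way to sanity-check the inner solver; support identification then corresponds to the groups whose dual blocks reach the constraint boundary, zeroing the corresponding primal entries.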

Date
Oct 16, 2022 2:00 PM — 3:15 PM
Location
Indianapolis, United States