Advancing Distributed Optimization for Non-Convex Problems

Shahin Shahrampour

MIE Assistant Professor Shahin Shahrampour received a $500,000 NSF grant, in collaboration with Texas A&M University, to address “Consensus and Distributed Optimization in Non-Convex Environments with Applications to Networked Machine Learning.” The project will transform the understanding of consensus and coordination in non-convex environments, and will include educational components to introduce distributed optimization as a practical tool for the next generation of engineers.


Abstract (Source: NSF)

Distributed optimization is a vehicle for machine learning and data analysis over networks, as it provides the means to train models in a scalable fashion. This project aims to advance fundamental knowledge of distributed non-convex optimization by studying two important classes of non-convex problems. The intellectual merits of the project include investigating distributed retraction-free manifold optimization and distributed non-smooth weakly convex optimization. The scientific contributions of this project will bring transformative change to the understanding of consensus and coordination in non-convex environments. The broader impacts of the project include educational components that introduce distributed optimization as a practical tool for the next generation of engineers. These educational plans include seminar presentations for a broad audience as well as development of an advanced course covering state-of-the-art decentralized optimization techniques.
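To make the manifold-optimization theme concrete, the sketch below shows ordinary (single-node, full-gradient) Riemannian gradient ascent on the unit sphere, using the standard normalization retraction to stay on the manifold; retraction-free methods of the kind the project studies would avoid exactly this per-step renormalization. This is a generic illustrative toy (computing a leading eigenvector), not the project's algorithm, and all names in it are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
B = rng.standard_normal((d, d))
M = B @ B.T                          # symmetric PSD matrix; goal: its leading eigenvector

x = rng.standard_normal(d)
x /= np.linalg.norm(x)               # initialize on the unit sphere
for _ in range(500):
    g = 2 * M @ x                    # Euclidean gradient of f(x) = x^T M x
    rg = g - (x @ g) * x             # project onto the tangent space: Riemannian gradient
    x = x + 0.01 * rg                # ascent step in the tangent space
    x = x / np.linalg.norm(x)        # retraction: map the iterate back onto the sphere

leading = np.linalg.eigh(M)[1][:, -1]
align = abs(x @ leading)             # |cosine| between iterate and true leading eigenvector
```

The retraction (here a cheap normalization) is what keeps the iterates feasible; on matrix manifolds such as those arising in reduced-rank regression, retractions are far more expensive, which is one motivation for retraction-free schemes.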

The goal of this project is to develop distributed approaches to non-convex optimization implemented in heterogeneous computing environments. Retraction-free methods for distributed implementation of Riemannian stochastic gradient descent will be investigated, and the benefits of this technique will be illustrated in reduced-rank and sparse regression as well as in training sparse deep neural networks. In both cases, sparsity will be ensured by constraining the search for model parameters to a lower-dimensional Riemannian manifold. The study will further consider distributed non-smooth, non-convex (weakly convex) optimization and develop asynchronous subgradient descent methods for communication-efficient optimization. The goal will be to establish both global and local convergence results for the proposed methods.
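As a rough illustration of the second problem class, the following minimal sketch runs a synchronous decentralized subgradient method on a weakly convex problem (robust phase retrieval, a standard weakly convex example): each agent mixes its iterate with its ring-graph neighbors via a doubly stochastic matrix, then takes a diminishing-step subgradient step on its local loss. This is a generic textbook-style scheme under assumed problem data, not the asynchronous communication-efficient methods the project itself proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 8, 5

# Ground-truth signal; each agent holds one phaseless measurement b_i = <a_i, x*>^2
x_star = rng.standard_normal(dim)
A = rng.standard_normal((n_agents, dim))
b = (A @ x_star) ** 2

# Doubly stochastic mixing matrix for a ring graph (Metropolis weights: degree 2 everywhere)
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3
    W[i, i] = 1 / 3

def subgrad(i, x):
    """A subgradient of the weakly convex local loss f_i(x) = |<a_i, x>^2 - b_i|."""
    r = (A[i] @ x) ** 2 - b[i]
    return np.sign(r) * 2 * (A[i] @ x) * A[i]

X = rng.standard_normal((n_agents, dim))      # local iterates, one row per agent
for k in range(2000):
    step = 0.01 / np.sqrt(k + 1)              # diminishing step size
    G = np.array([subgrad(i, X[i]) for i in range(n_agents)])
    X = W @ X - step * G                      # gossip averaging + local subgradient step

# Maximum distance of any agent from the network average (consensus error)
disagreement = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
```

With a connected graph and diminishing steps, the agents' iterates cluster around their network average; the convergence analysis for such non-smooth non-convex losses is precisely the kind of question the project targets.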

This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.

Related Faculty: Shahin Shahrampour

Related Departments: Mechanical & Industrial Engineering