Posts

Showing posts from January, 2021

Online Optimization Specialization (2/4): Review of 'Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization'

Specialization Introduction: This specialization covers five selected grounding papers in online optimization. In each blog post, I discuss one paper, aiming to include a brief introduction and summary of the paper and its key takeaways. Note that all discussion and summaries in this specialization are based on the reviewed papers; none of the algorithms or theorems are proposed by myself.  Summary Paper Details: Title: Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization. Author(s): Alexander Rakhlin, Ohad Shamir, Karthik Sridharan. URL: https://icml.cc/2012/papers/261.pdf Abstract: Stochastic gradient descent (SGD) is a simple and popular method for solving stochastic optimization problems that arise in machine learning. For strongly convex problems, its convergence rate was known to be $\mathcal{O}(\log(T)/T)$, by running SGD for $T$ iterations and returning the average point. However, recent results showed that using a different algorithm...
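The averaging scheme mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm or analysis; the objective, noise level, and step size $\eta_t = 1/(\lambda t)$ are illustrative assumptions for a $\lambda$-strongly convex quadratic:

```python
import numpy as np

# Sketch: SGD with a running average on the strongly convex objective
# f(w) = 0.5 * ||w - w_star||^2, using noisy gradients.
# Step size eta_t = 1/(lam * t) is the classical choice for
# lam-strongly convex problems; all constants here are hypothetical.
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])   # hypothetical optimum
lam = 1.0                        # strong-convexity parameter of f
T = 5000                         # number of SGD iterations

w = np.zeros(2)
avg = np.zeros(2)
for t in range(1, T + 1):
    grad = (w - w_star) + 0.1 * rng.standard_normal(2)  # noisy gradient of f
    w = w - (1.0 / (lam * t)) * grad                    # SGD step
    avg += (w - avg) / t                                # running average of iterates

print(np.linalg.norm(avg - w_star))
```

Returning `avg` rather than the last iterate is the "average point" scheme the abstract refers to; the paper's contribution is comparing this rate against alternatives such as suffix averaging.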

Online Optimization Specialization (1/4): Review of 'Online Convex Programming and Generalized Infinitesimal Gradient Ascent' by Martin Zinkevich

Specialization Introduction: This specialization covers five selected grounding papers in online optimization. In each blog post, I discuss one paper, aiming to include a brief introduction and summary of the paper and its key takeaways. Note that all discussion and summaries in this specialization are based on the reviewed papers; none of the algorithms or theorems are proposed by myself.  Summary Paper Details: Title: Online Convex Programming and Generalized Infinitesimal Gradient Ascent. Author(s): Martin Zinkevich. URL: https://www.aaai.org/Library/ICML/2003/icml03-120.php Abstract: Convex programming involves a convex set $F \subseteq \mathbb{R}^n$ and a convex function $c : F \to \mathbb{R}$. The goal of convex programming is to find a point in $F$ which minimizes $c$. In this paper, we introduce online convex programming. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in $F$ ...
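The online setting from the abstract can be sketched as projected online gradient descent, in the spirit of Zinkevich's algorithm. The choice of $F$ as the unit ball, the quadratic cost sequence, and the step size $\eta_t = 1/\sqrt{t}$ are illustrative assumptions, not details from the excerpt:

```python
import numpy as np

# Sketch: at each round t the learner picks x_t in F (here: the unit ball),
# then observes a convex cost c_t and takes a projected gradient step.
rng = np.random.default_rng(1)

def project_unit_ball(x):
    """Euclidean projection onto F = {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)
targets = rng.uniform(-0.5, 0.5, size=(200, 2))  # hypothetical cost sequence
total_cost = 0.0
for t, z in enumerate(targets, start=1):
    total_cost += 0.5 * np.sum((x - z) ** 2)        # incur c_t(x_t) = 0.5||x_t - z_t||^2
    grad = x - z                                     # gradient of c_t at x_t
    x = project_unit_ball(x - grad / np.sqrt(t))     # step size eta_t = 1/sqrt(t)

# Regret vs. the best fixed point in hindsight (here, the mean of the targets):
best = project_unit_ball(targets.mean(axis=0))
best_cost = 0.5 * np.sum((targets - best) ** 2)
print(total_cost - best_cost)
```

The quantity printed is the regret against the best fixed decision in hindsight; the paper's main result is that this regret grows only sublinearly in the number of rounds.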