Introduction to online optimization: online gradient descent
Online gradient descent (OGD): starting from an arbitrary point $a_1 \in A$, play $a_{t+1} = \Pi_A(a_t - \eta_t g_t)$, where $g_t \in \partial \ell_t(a_t)$ is a subgradient of the loss at the current action and $\Pi_A$ denotes Euclidean projection onto $A$.
Theorem:
For any closed convex action set $A$ with diameter $D$ (i.e., $\|a - a'\| \le D$ for all $a, a' \in A$), and any sequence of subdifferentiable convex losses $\ell_1, \dots, \ell_T$ with bounded subgradients $\|g_t\| \le G$, the OGD strategy with parameters $a_1 \in A$ and $\eta = \frac{D}{G\sqrt{T}}$ satisfies:
$$R_T \;=\; \sum_{t=1}^{T} \ell_t(a_t) \;-\; \min_{a \in A} \sum_{t=1}^{T} \ell_t(a) \;\le\; D G \sqrt{T}.$$
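As a sanity check, the bound can be verified numerically. The sketch below is an illustration of my own (the absolute-value losses, the interval $A = [-1, 1]$, and the sequence $z_t$ are assumptions, not from the notes): it runs OGD with the fixed step size $\eta = D/(G\sqrt{T})$, where $D = 2$ and $G = 1$, and compares the realized regret against $DG\sqrt{T}$.

```python
import math

def project(x, lo, hi):
    """Euclidean projection onto the interval [lo, hi] (the action set A)."""
    return max(lo, min(hi, x))

def ogd_regret(zs, lo=-1.0, hi=1.0):
    """Run OGD on losses l_t(a) = |a - z_t| over A = [lo, hi].

    A subgradient of l_t at a is sign(a - z_t), so G = 1; the diameter
    is D = hi - lo.  Uses the fixed step eta = D / (G * sqrt(T)).
    Returns (realized regret, theoretical bound D * G * sqrt(T)).
    """
    T = len(zs)
    D, G = hi - lo, 1.0
    eta = D / (G * math.sqrt(T))
    a = 0.0  # a_1: an arbitrary point in A
    total_loss = 0.0
    for z in zs:
        total_loss += abs(a - z)
        g = 1.0 if a > z else (-1.0 if a < z else 0.0)  # subgradient of |a - z|
        a = project(a - eta * g, lo, hi)
    # Best fixed action in hindsight: any median of zs minimizes sum |a - z_t|;
    # the upper median sorted(zs)[T // 2] is one such minimizer.
    best = sorted(zs)[T // 2]
    best_loss = sum(abs(best - z) for z in zs)
    return total_loss - best_loss, D * G * math.sqrt(T)

regret, bound = ogd_regret([0.9] * 50 + [-0.9] * 50)
print(f"regret = {regret:.2f}, bound = {bound:.2f}")
```

On this sequence the realized regret stays (comfortably) below the $DG\sqrt{T} = 20$ guarantee; regret can even be negative, since OGD adapts while the comparator is a single fixed action.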
For an $\alpha$-strongly convex loss with bounded subgradients $\|g_t\| \le G$, OGD with the time-varying step size $\eta_t = \frac{1}{\alpha t}$ satisfies $R_T \le \frac{G^2}{2\alpha}\,(1 + \log T)$.
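The logarithmic regret in the strongly convex case can be checked the same way. The sketch below is again illustrative (the quadratic losses $\ell_t(a) = \frac{\alpha}{2}(a - z_t)^2$ and the interval are my choices): these losses are $\alpha$-strongly convex, and OGD is run with $\eta_t = 1/(\alpha t)$.

```python
import math

def ogd_strongly_convex(zs, alpha=1.0, lo=-1.0, hi=1.0):
    """OGD with decreasing steps eta_t = 1/(alpha*t) on the alpha-strongly
    convex losses l_t(a) = (alpha/2) * (a - z_t)^2 over A = [lo, hi].

    The gradient is alpha * (a - z_t), so G = alpha * (hi - lo) on A
    (assuming each z_t lies in A).  Returns (realized regret, bound).
    """
    T = len(zs)
    G = alpha * (hi - lo)
    a, total_loss = 0.0, 0.0
    for t, z in enumerate(zs, start=1):
        total_loss += 0.5 * alpha * (a - z) ** 2
        g = alpha * (a - z)                      # gradient of l_t at a
        a = min(hi, max(lo, a - g / (alpha * t)))  # step eta_t = 1/(alpha*t)
    # Best fixed action in hindsight: the mean minimizes the sum of
    # quadratics (valid when the mean lies inside A).
    best = sum(zs) / T
    best_loss = sum(0.5 * alpha * (best - z) ** 2 for z in zs)
    log_bound = (G ** 2 / (2 * alpha)) * (1 + math.log(T))
    return total_loss - best_loss, log_bound

regret, bound = ogd_strongly_convex([0.5, -0.5] * 50)
print(f"regret = {regret:.3f}, bound = {bound:.3f}")
```

Note the step size here depends on $\alpha$ and $t$ but not on the horizon $T$, unlike the fixed step in the convex case.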


