References:
Lecture 9 of Hongyi Li's ML course, "Tips for DL"
https://www.jianshu.com/p/aebcaf8af76e
"ADAM: A Method for Stochastic Optimization"
https://zhuanlan.zhihu.com/p/105788925

Gradient descent and its variants

  1. stochastic gradient descent
    Each update takes in a single data point or a mini-batch.
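    A minimal sketch of what this looks like (the toy data, loss, and learning rate are my own illustration, not from the lecture): each parameter update uses the gradient from a single data point.

```python
# Toy SGD sketch (data, loss, and learning rate are my own illustration):
# fit y = w * x by least squares, one update per data point.
import random

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # generated from y = 2x
w = 0.0
lr = 0.05
for epoch in range(200):
    random.shuffle(data)            # visit the points in a random order
    for x, y in data:               # batch size 1: one point per update
        grad = (w * x - y) * x      # d/dw of 0.5 * (w*x - y)^2
        w -= lr * grad
# w is now close to 2
```

    A mini-batch version would average the gradient over a small slice of `data` before each update.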

  2. Adagrad
    Uses the first derivative to estimate the second derivative.
    Adaptive learning rate = learning rate / RMS of all previous and current gradients.
    Large RMS (large accumulated gradients): small effective learning rate, i.e. we slow down the updates;
    small RMS: large effective learning rate.
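    A one-parameter sketch of this rule (the quadratic loss and hyperparameters are my own toy choices): the root of the accumulated squared gradients divides the learning rate, so steps shrink as gradients accumulate.

```python
# Adagrad sketch on f(w) = 0.5 * w^2 (toy loss and hyperparameters are mine):
# the learning rate is divided by the root of all accumulated squared gradients.
lr = 1.0
eps = 1e-8
w = 5.0
sq_sum = 0.0                        # sum of g_0^2 ... g_t^2
for t in range(100):
    g = w                           # gradient of 0.5 * w^2
    sq_sum += g * g
    w -= lr / (sq_sum ** 0.5 + eps) * g   # large accumulator -> small step
```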

  3. RMSProp
    Keeps an exponentially weighted average of the squared gradients.
    Adaptive learning rate = learning rate / sigma_t.
    sigma_t incorporates all previous gradients g_0 to g_{t-1} as well as the current gradient g_t.
    Small alpha: tends to trust g_t when updating w_{t-1};
    large alpha: tends to trust the previous gradients (via sigma_{t-1}) when updating w_{t-1}.
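    A one-parameter sketch under the same toy setup (loss and hyperparameters are my own choices); sigma_0 is initialized from the first gradient, and alpha sets the weighting between past and current squared gradients.

```python
# RMSProp sketch on f(w) = 0.5 * w^2 (toy loss and hyperparameters are mine):
# sigma^2 is an exponentially weighted average of squared gradients.
alpha = 0.9     # large alpha: trust past gradients; small alpha: trust g_t
lr = 0.1
eps = 1e-8
w = 5.0
sigma_sq = 0.0
for t in range(300):
    g = w                                        # gradient of 0.5 * w^2
    if t == 0:
        sigma_sq = g * g                         # sigma_0 from the first gradient
    else:
        sigma_sq = alpha * sigma_sq + (1 - alpha) * g * g
    w -= lr / (sigma_sq ** 0.5 + eps) * g
```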

  4. momentum
    The update roughly continues in the previous direction v_{t-1}; the newly computed gradient g_t corrects the previous update direction v_{t-1} by simply adding v_{t-1} onto g_t, which reinforces the components aligned with v_{t-1} and weakens the components opposed to it.
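    A one-parameter sketch of this idea (toy loss and hyperparameters are my own): the velocity v carries the previous direction, and the new gradient is blended in as a correction.

```python
# Momentum sketch on f(w) = 0.5 * w^2 (toy loss and hyperparameters are mine):
# v carries the previous direction; the new gradient corrects it.
lam = 0.9       # fraction of the previous movement v_{t-1} that is kept
lr = 0.05
w = 5.0
v = 0.0
for t in range(200):
    g = w                       # gradient of 0.5 * w^2
    v = lam * v - lr * g        # previous direction plus the new correction
    w += v
```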

  5. Adam
    m_t // 1st moment vector: the momentum-style movement vector. First Moment Estimation, i.e. the mean of the gradients.
    v_t // 2nd raw moment vector, i.e. E[X^2], as in RMSProp. Second Moment Estimation, i.e. the uncentered variance of the gradients.

Then bias correction:
Since m_0 and v_0 are initialized to 0, m_t and v_t are biased toward 0, especially in the early stage of training. We therefore bias-correct m_t and v_t to reduce this effect on early training: at the start 1 - beta^t is small (close to 0), and as t grows it gradually approaches 1, so dividing by it rescales the estimates.
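Putting the pieces together, a one-parameter sketch of Adam with the bias correction above (beta1, beta2, and eps are the paper's defaults; the toy loss and lr are my own choices):

```python
# Adam sketch with bias correction (beta1, beta2, eps are the paper's
# defaults; the toy loss f(w) = 0.5 * w^2 and lr are my own choices).
beta1, beta2 = 0.9, 0.999
lr, eps = 0.1, 1e-8
w = 5.0
m, v = 0.0, 0.0                   # m_0 = v_0 = 0, hence the bias toward 0
for t in range(1, 301):
    g = w                                     # gradient of 0.5 * w^2
    m = beta1 * m + (1 - beta1) * g           # 1st moment (momentum part)
    v = beta2 * v + (1 - beta2) * g * g       # 2nd raw moment (RMSProp part)
    m_hat = m / (1 - beta1 ** t)              # bias correction: 1 - beta^t
    v_hat = v / (1 - beta2 ** t)              #   starts near 0, tends to 1
    w -= lr * m_hat / (v_hat ** 0.5 + eps)
```

Note how the denominators 1 - beta^t matter most for small t, exactly where the zero-initialized moments are most biased.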
