Introduction
- Training methods
    - Singular Value Bounding (SVB): during training, keep each weight matrix close to orthogonal by bounding its singular values near 1.
    - Bounded Batch Normalization (BBN): applies the SVB idea to BN, removing BN's risk of ill-conditioning.
Algorithm
- Samples: $\{\mathbf{x}_i, \mathbf{y}_i\}_{i=1}^K$, with $\mathbf{x}_i \in \mathbb{R}^{N_x}$ and $\mathbf{y}_i \in \mathbb{R}^{N_y}$
- Feature of the $l$-th layer (of $L$ layers in total):
  $$\mathbf{x}^l = f(\mathbf{z}^l) = f(\mathbf{W}^l\mathbf{x}^{l-1} + \mathbf{b}^l) \in \mathbb{R}^{N_l}, \quad \mathbf{W}^l \in \mathbb{R}^{N_l \times N_{l-1}}, \; \mathbf{b}^l \in \mathbb{R}^{N_l}$$
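The layer-wise recursion above can be sketched directly (a minimal NumPy illustration; the nonlinearity `f` and all shapes here are placeholders, not the paper's architecture):

```python
import numpy as np

def forward(x, weights, biases, f=np.tanh):
    """Forward pass x^l = f(W^l x^(l-1) + b^l) through L layers.
    `f` is a placeholder nonlinearity; the paper's analysis below
    uses the identity activation."""
    for W, b in zip(weights, biases):
        x = f(W @ x + b)
    return x

# Shapes follow the note: W^l in R^{N_l x N_{l-1}}, b^l in R^{N_l}.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [np.zeros(4), np.zeros(2)]
y = forward(rng.standard_normal(3), Ws, bs)
print(y.shape)  # (2,)
```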
- Some theoretical work shows that initializing a network with random orthogonal Gaussian matrices leads to better performance. The authors therefore try to keep the weight matrices orthogonal throughout training; concretely:
  $$\min_{\Theta = \{\mathbf{W}^l, \mathbf{b}^l\}_{l=1}^L} \mathcal{L}\big(\{\mathbf{x}_i, \mathbf{y}_i\}_{i=1}^K; \Theta\big) \quad \text{s.t.} \;\; \forall l \in \{1, \dots, L\}, \; \mathbf{W}^l \in \mathcal{O}$$
  (where $\mathcal{O}$ is the set of matrices whose row (or column) vectors are mutually orthogonal, i.e. the Stiefel manifold mentioned in the paper)
- This is achieved by bounding the singular values of the weight matrices to $[1/(1+\epsilon),\; 1+\epsilon]$ while running SGD.
- SVB:
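The bounding step itself fits in a few lines of NumPy (an illustrative sketch of the idea behind Algorithm 1; `eps` and how often the projection is applied are hyperparameters, and the paper's exact procedure may differ in details):

```python
import numpy as np

def svb(W, eps=0.05):
    """Singular Value Bounding: project W so that all its singular
    values lie in [1/(1+eps), 1+eps]. Applied to each weight matrix
    every few SGD iterations."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.clip(s, 1.0 / (1.0 + eps), 1.0 + eps)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
W = svb(rng.standard_normal((64, 32)))
print(np.linalg.svd(W, compute_uv=False).round(3))  # all within bounds
```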
Derivation and proof
Forward
- For simplicity, first consider a two-layer network $\mathbf{W}^2\mathbf{W}^1\mathbf{x}$ (ignoring biases), with linear activation $f(z) = z$ and loss $L = \frac{1}{2K}\sum_{i=1}^K \|\mathbf{y}_i - \mathbf{W}^2\mathbf{W}^1\mathbf{x}_i\|_2^2$.
- Here:
  $$\begin{aligned}\|\mathbf{y}_i - \mathbf{W}^2\mathbf{W}^1\mathbf{x}_i\|_2^2 &= \mathrm{tr}\big[(\mathbf{y}_i - \mathbf{W}^2\mathbf{W}^1\mathbf{x}_i)^T(\mathbf{y}_i - \mathbf{W}^2\mathbf{W}^1\mathbf{x}_i)\big] \\ &= \mathrm{tr}(\mathbf{y}_i^T\mathbf{y}_i) - 2\,\mathrm{tr}\big[(\mathbf{W}^1\mathbf{x}_i)^T\mathbf{W}^{2T}\mathbf{y}_i\big] + \mathrm{tr}\big[(\mathbf{W}^1\mathbf{x}_i)^T(\mathbf{W}^{2T}\mathbf{W}^2)(\mathbf{W}^1\mathbf{x}_i)\big]\end{aligned}$$
- Differentiating the expression above with respect to $\mathbf{W}^2$ (see the Wikipedia table of matrix derivatives) gives:
  $$-2\mathbf{y}_i(\mathbf{W}^1\mathbf{x}_i)^T + 2\mathbf{W}^2(\mathbf{W}^1\mathbf{x}_i)(\mathbf{W}^1\mathbf{x}_i)^T = 2\big[-\mathbf{y}_i\mathbf{x}_i^T + \mathbf{W}^2\mathbf{W}^1(\mathbf{x}_i\mathbf{x}_i^T)\big]\mathbf{W}^{1T}$$
- Putting this together:
  $$\frac{\partial L}{\partial \mathbf{W}^2} = -(\mathbf{C}_{yx} - \mathbf{W}^2\mathbf{W}^1\mathbf{C}_{xx})\mathbf{W}^{1T}$$
  and similarly:
  $$\frac{\partial L}{\partial \mathbf{W}^1} = -\mathbf{W}^{2T}(\mathbf{C}_{yx} - \mathbf{W}^2\mathbf{W}^1\mathbf{C}_{xx})$$
  where:
  $$\mathbf{C}_{yx} = \frac{1}{K}\sum_{i=1}^K \mathbf{y}_i\mathbf{x}_i^T, \qquad \mathbf{C}_{xx} = \frac{1}{K}\sum_{i=1}^K \mathbf{x}_i\mathbf{x}_i^T$$
  Assuming the input data has been whitened, $\mathbf{C}_{yx}$ is the cross-covariance matrix of $\mathbf{x}$ and $\mathbf{y}$ (note that $\mathbf{x}$ is zero-mean, though $\mathbf{y}$ presumably is not, so treat this as an approximation), and $\mathbf{C}_{xx} = \mathbf{I}$.
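These closed-form gradients can be sanity-checked against central finite differences (a NumPy check with arbitrary shapes; the code writes the gradients with the leading minus sign of the exact derivative of $L$):

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nx, N1, Ny = 50, 4, 5, 3
X = rng.standard_normal((Nx, K))   # columns are samples x_i
Y = rng.standard_normal((Ny, K))
W1 = rng.standard_normal((N1, Nx))
W2 = rng.standard_normal((Ny, N1))

Cyx = Y @ X.T / K
Cxx = X @ X.T / K

def loss(W1, W2):
    R = Y - W2 @ W1 @ X
    return 0.5 / K * np.sum(R * R)

# closed-form gradients (exact derivative, hence the leading minus)
g2 = -(Cyx - W2 @ W1 @ Cxx) @ W1.T
g1 = -W2.T @ (Cyx - W2 @ W1 @ Cxx)

# central finite differences on one entry of each matrix
h = 1e-6
E2 = np.zeros_like(W2); E2[1, 2] = h
E1 = np.zeros_like(W1); E1[0, 1] = h
fd2 = (loss(W1, W2 + E2) - loss(W1, W2 - E2)) / (2 * h)
fd1 = (loss(W1 + E1, W2) - loss(W1 - E1, W2)) / (2 * h)
print(abs(fd2 - g2[1, 2]), abs(fd1 - g1[0, 1]))  # both tiny
```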
- Take the singular value decomposition of $\mathbf{C}_{yx}$:
  $$\mathbf{C}_{yx} = \mathbf{U}^y\mathbf{S}^{yx}\mathbf{V}^{xT}$$
  By the properties of the SVD, the left singular vectors forming $\mathbf{U}^y \in \mathbb{R}^{N_y \times N_y}$ give a basis of the output space $\mathbb{R}^{N_y}$ (in the paper's words, they "represent independent directions of output variations"), the right singular vectors forming $\mathbf{V}^x \in \mathbb{R}^{N_x \times N_x}$ give a basis of the input space $\mathbb{R}^{N_x}$, and $\mathbf{S}^{yx} \in \mathbb{R}^{N_y \times N_x}$ is the diagonal matrix of sorted singular values.
- Now initialize $\mathbf{W}^1$ and $\mathbf{W}^2$ as:
  $$\mathbf{W}^1 = \mathbf{R}\mathbf{S}^1\mathbf{V}^{xT}, \qquad \mathbf{W}^2 = \mathbf{U}^y\mathbf{S}^2\mathbf{R}^T$$
  where $\mathbf{R} \in \mathbb{R}^{N_1 \times N_1}$ is an arbitrary orthogonal matrix kept fixed during training, and $\mathbf{S}^1$ and $\mathbf{S}^2$ are both diagonal. The partial derivatives of the loss then become:
  $$\frac{\partial L}{\partial \mathbf{W}^1} = -\mathbf{R}\mathbf{S}^{2T}(\mathbf{S}^{yx} - \mathbf{S}^2\mathbf{S}^1)\mathbf{V}^{xT}, \qquad \frac{\partial L}{\partial \mathbf{W}^2} = -\mathbf{U}^y(\mathbf{S}^{yx} - \mathbf{S}^2\mathbf{S}^1)\mathbf{S}^{1T}\mathbf{R}^T$$
- Once $\mathbf{R}$ is fixed, $\mathbf{W}^1$ and $\mathbf{W}^2$ are guaranteed to be optimized along their respective bases (in the paper's words, they "are optimized along their respective independent directions of variations").
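This diagonalization can be verified numerically (a NumPy check with all dimensions set equal so every SVD factor is square, and whitened inputs so $\mathbf{C}_{xx} = \mathbf{I}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # Nx = Ny = N1, so all SVD factors are square
Uy, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal U^y
Vx, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal V^x
R, _  = np.linalg.qr(rng.standard_normal((n, n)))  # fixed orthogonal R
Syx = np.diag(rng.uniform(0.5, 2.0, n))
S1  = np.diag(rng.uniform(0.5, 2.0, n))
S2  = np.diag(rng.uniform(0.5, 2.0, n))

Cyx = Uy @ Syx @ Vx.T
W1, W2 = R @ S1 @ Vx.T, Uy @ S2 @ R.T

# gradient in the original coordinates (whitened inputs: Cxx = I)
g1 = -W2.T @ (Cyx - W2 @ W1)
# diagonalized form from the text
g1_diag = -R @ S2.T @ (Syx - S2 @ S1) @ Vx.T
print(np.allclose(g1, g1_diag))  # True
```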
- Write $s_m$, $t_m$, and $\sigma_m$ for the $m$-th diagonal elements of $\mathbf{S}^1$, $\mathbf{S}^2$, and $\mathbf{S}^{yx}$ respectively. Then (ignoring constant factors):
  $$\frac{\partial L}{\partial s_m} = -(\sigma_m - s_m t_m)\,t_m, \qquad \frac{\partial L}{\partial t_m} = -(\sigma_m - s_m t_m)\,s_m$$
  Here $L$ can be compared with the energy function
  $$\varepsilon(s_m, t_m) = \frac{1}{2}(\sigma_m - s_m t_m)^2,$$
  from which it is clear that the product $s_m t_m$ is being optimized toward $\sigma_m$.
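Simulating this scalar dynamics confirms the claim (plain gradient descent on $\varepsilon$; the value of $\sigma_m$, the initialization, and the learning rate are arbitrary):

```python
# gradient descent on eps(s, t) = 0.5 * (sigma - s*t)**2
sigma, s, t, lr = 2.0, 0.1, 0.1, 0.05
for _ in range(2000):
    g = sigma - s * t                       # shared factor (sigma_m - s_m t_m)
    s, t = s + lr * g * t, t + lr * g * s   # s -= lr*dL/ds, t -= lr*dL/dt
print(round(s * t, 4))  # 2.0, i.e. the product s*t converges to sigma
```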
- Extending the above analysis to $L$ layers:
(equation images from the paper: the $L$-layer extension of the above analysis)
where $\mathbf{W}^l = \mathbf{R}^l\mathbf{S}^l\mathbf{R}^{(l+1)T}$: each weight matrix's right singular vectors serve as the left singular vectors of the next layer's matrix. Algorithm 1 (SVB), however, does not actually enforce this coupling.
- (Note: the $\sigma_m$ are already determined once the training data is given.) It is easy to show that when $L$ is large, Eq. (10) of the paper cannot converge unless all singular values $s_m^l$ lie near 1.
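A one-line numeric check makes the point: over $L = 100$ layers, a per-layer singular value even slightly off 1 compounds into an enormous or vanishing factor.

```python
# product of L identical singular values: only values near 1 survive
L = 100
for s in (0.9, 1.0, 1.1):
    print(s, s ** L)  # 0.9**100 ~ 2.7e-5, 1.0**100 = 1.0, 1.1**100 ~ 1.4e4
```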
- The authors argue that because current training methods place no constraints on the singular values of the weight matrices, the weight matrices can amplify or shrink signals in arbitrary layers and directions. Training then easily falls into local minima where only part of the input-output correlations (in my reading, the vectors in the matrix $\mathbf{C}_{yx}$ above) are exploited.
- Consider a two-layer model $\mathbf{W}^{l+1}\mathbf{W}^l$ and take SVDs:
  $$\mathbf{W}^{l+1}\mathbf{W}^l = \mathbf{U}^{l+1}\mathbf{S}^{l+1}\mathbf{V}^{(l+1)T}\mathbf{U}^l\mathbf{S}^l\mathbf{V}^{lT}$$
  Write $\mathbf{M} = \mathbf{S}^{l+1}\mathbf{V}^{(l+1)T}\mathbf{U}^l\mathbf{S}^l$; its entry in row $m$, column $m'$ is
  $$\mathbf{M}_{m,m'} = s_m^{l+1}\, s_{m'}^l\, \big(\mathbf{v}_m^{(l+1)T}\mathbf{u}_{m'}^l\big)$$
  where $\mathbf{v}_m^{(l+1)T}\mathbf{u}_{m'}^l$ expresses the change of basis between layers $l$ and $l+1$ (i.e. how variation along the $m'$-th basis direction of layer $l$'s output space mixes into the $m$-th basis direction of layer $(l+1)$'s input space).
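The entrywise formula for $\mathbf{M}$ can be verified numerically (a quick NumPy check; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Wl  = rng.standard_normal((n, n))   # layer l
Wl1 = rng.standard_normal((n, n))   # layer l+1
Ul, sl, Vlt = np.linalg.svd(Wl)
Ul1, sl1, Vl1t = np.linalg.svd(Wl1)

# M = S^{l+1} V^{(l+1)T} U^l S^l
M = np.diag(sl1) @ Vl1t @ Ul @ np.diag(sl)

m, mp = 1, 2
# entrywise: s^{l+1}_m * s^l_{m'} * (v^{l+1}_m . u^l_{m'})
entry = sl1[m] * sl[mp] * (Vl1t[m] @ Ul[:, mp])
print(np.isclose(M[m, mp], entry))  # True
```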
- By bounding $s_m^l$ and $s_m^{l+1}$, Algorithm 1 (SVB) keeps the strengths of signal variations passed from one layer to the next appropriately balanced across directions (my understanding: it avoids ill-conditioning). The authors argue that without these bounds, variations along some directions can be amplified excessively while variations along other directions are shrunk severely.
Backward
(image from the paper: the backward-pass gradient expression)
- Writing $\mathbf{W}^i = \mathbf{R}^i\mathbf{S}^i\mathbf{R}^{(i+1)T}$, the backward pass takes the form shown above.
- When the network is very deep, the product $\prod_{i=l+1}^L s_m^i$ easily explodes or vanishes, ultimately causing exploding or vanishing gradients. The authors' SVB prevents this from happening.
- (Ideally, SVB ensures that the norm of $\partial L/\partial \mathbf{x}^l$ at every layer matches the norm of the error vector $\partial L/\partial \mathbf{x}^L$.)
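The effect on the backward pass can be illustrated numerically: backpropagating a vector through $L$ linear layers multiplies it by $\mathbf{W}^T$ at each step, and matrices with all singular values equal to 1 (orthogonal matrices, the ideal that SVB approximates) preserve its norm exactly. This is an illustration, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 32, 50

def backprop_norm_ratio(sample_W):
    """Push a gradient vector back through L linear layers (g <- W^T g)
    and report ||g_out|| / ||g_in||."""
    g = rng.standard_normal(n)
    g0 = np.linalg.norm(g)
    for _ in range(L):
        g = sample_W().T @ g
    return np.linalg.norm(g) / g0

orthogonal = lambda: np.linalg.qr(rng.standard_normal((n, n)))[0]
gaussian = lambda: rng.standard_normal((n, n)) / np.sqrt(n)

print(backprop_norm_ratio(orthogonal))  # 1.0: norm exactly preserved
print(backprop_norm_ratio(gaussian))    # typically drifts away from 1
```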
Compatibility with BN (BBN, Bounded Batch Normalization)
BN was introduced to address a known problem in training deep networks: internal covariate shift (training is slow because the distribution of each layer's inputs keeps changing). Inserting BN layers alleviates this.
- For a layer $f(\mathbf{z}) = f(\mathbf{W}\mathbf{x}) \in \mathbb{R}^N$, inserting BN before the activation gives $f(\mathrm{BN}(\mathbf{z})) = f(\mathrm{BN}(\mathbf{W}\mathbf{x}))$, where:
  $$\mathrm{BN}(\mathbf{z}) = \Gamma\Sigma(\mathbf{z} - \boldsymbol{\mu}) + \boldsymbol{\beta}$$
    - (zero-mean): $\boldsymbol{\mu} \in \mathbb{R}^N$ holds the mean output of each of the layer's $N$ neurons
    - (norm $\to$ 1): $\Sigma \in \mathbb{R}^{N \times N}$ is diagonal with entries $\{1/\varsigma_i\}_{i=1}^N$, the reciprocals of each neuron's output standard deviation plus a small constant
    - (scale): $\Gamma \in \mathbb{R}^{N \times N}$ is the diagonal matrix of scale elements $\{\gamma_i\}_{i=1}^N$
    - (shift): $\boldsymbol{\beta}$ is a trainable shift term
- Substituting $\mathbf{z} = \mathbf{W}\mathbf{x}$:
  $$\mathrm{BN}(\mathbf{W}\mathbf{x}) = \widetilde{\mathbf{W}}\mathbf{x} + \widetilde{\mathbf{b}}, \quad \text{where} \;\; \widetilde{\mathbf{W}} = \Gamma\Sigma\mathbf{W}, \;\; \widetilde{\mathbf{b}} = \boldsymbol{\beta} - \Gamma\Sigma\boldsymbol{\mu}$$
  and the diagonal matrix $\Gamma\Sigma$ has diagonal elements $\{\gamma_i/\varsigma_i\}_{i=1}^N$.
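This rewriting of BN as an affine map is easy to confirm numerically (a NumPy check of the identity above; all values are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nx = 5, 3
W = rng.standard_normal((N, Nx))
x = rng.standard_normal(Nx)

mu    = rng.standard_normal(N)    # per-neuron means
sig   = rng.uniform(0.5, 2.0, N)  # per-neuron std + small constant
gamma = rng.uniform(0.5, 2.0, N)  # BN scale
beta  = rng.standard_normal(N)    # BN shift

z = W @ x
bn = gamma / sig * (z - mu) + beta       # BN(z) = Gamma Sigma (z - mu) + beta

W_tilde = (gamma / sig)[:, None] * W     # W~ = Gamma Sigma W
b_tilde = beta - gamma / sig * mu        # b~ = beta - Gamma Sigma mu
print(np.allclose(bn, W_tilde @ x + b_tilde))  # True
```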
By Lemma 1:
(image from the paper: Lemma 1)
we see that both $\Gamma$ and $\Sigma$ in BN can change the distribution of the signal as it passes from layer to layer; when the diagonal elements $\{\gamma_i/\varsigma_i\}_{i=1}^N$ of $\Gamma\Sigma$ jointly stray far from 1, exploding/vanishing gradients appear easily.
- To avoid this, the authors would like to bound $\{\gamma_i/\varsigma_i\}_{i=1}^N$ near 1, but doing so directly would erase one of BN's advantages: with $\Gamma$ and $\boldsymbol{\beta}$ present, BN can degenerate to an approximate identity transform in particular cases ($\gamma_i \approx \varsigma_i$, i.e. the effect of BN is cancelled).
- In BN, the decoupled parameters $\{\gamma_i\}_{i=1}^N$ markedly improve the network's flexibility. Inspired by this, the authors introduce one more decoupled parameter $\alpha$ to make SVB compatible with BN: replace $\{\gamma_i/\varsigma_i\}_{i=1}^N$ with $\{\frac{1}{\alpha}\gamma_i/\varsigma_i\}_{i=1}^N$, and bound $\{\frac{1}{\alpha}\gamma_i/\varsigma_i\}_{i=1}^N$ within $[1/(1+\epsilon),\; 1+\epsilon]$ during training.
- BBN:
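A sketch of the bounding step that BBN adds on top of BN (illustrative only: how the paper's Algorithm 2 enforces the bound and chooses $\alpha$ may differ; here the bound is imposed by rescaling $\gamma_i$, and `alpha` is treated as a given scalar):

```python
import numpy as np

def bbn_bound(gamma, sigma, alpha, eps=0.05):
    """Clip the per-neuron scales (1/alpha) * gamma_i / sigma_i into
    [1/(1+eps), 1+eps] by rescaling gamma. A sketch of the idea only."""
    lo, hi = 1.0 / (1.0 + eps), 1.0 + eps
    ratio = np.clip(gamma / (alpha * sigma), lo, hi)
    return ratio * alpha * sigma  # bounded gamma_i

rng = np.random.default_rng(0)
gamma = rng.uniform(0.1, 3.0, 8)   # BN scales, some far from sigma
sigma = rng.uniform(0.5, 2.0, 8)   # per-neuron stds
g = bbn_bound(gamma, sigma, alpha=1.0)
print(np.all((g / sigma >= 1 / 1.05 - 1e-12) & (g / sigma <= 1.05 + 1e-12)))  # True
```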
Experiment
The authors tested SVB and BBN on the
- CIFAR-10
- CIFAR-100
- ImageNet
datasets, using
- standard convolutional neural networks
- ResNets
- Wide ResNets
(The experimental results show that BBN does perform better than plain BN when the network is deeper.)
(images from the paper: experimental result tables)