Reference:
https://blog.csdn.net/dcz1994/article/details/88837760

The conditional random field is characterized by a Gibbs distribution:

$$P(\mathbf{X} \mid \mathbf{I})=\frac{1}{Z(\mathbf{I})} \exp \left(-\sum_{c \in \mathcal{C}_{\mathcal{G}}} \phi_{c}\left(\mathbf{X}_{c} \mid \mathbf{I}\right)\right)$$

[Paper Reading] Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
The label assignment is taken to be the MAP solution of the random field:

$$\mathbf{x}^{*}=\arg \max_{\mathbf{x} \in \mathcal{L}^{N}} P(\mathbf{x} \mid \mathbf{I})$$
The Gibbs energy of the whole random field is:

$$E(\mathbf{x})=\sum_{i} \psi_{u}\left(x_{i}\right)+\sum_{i<j} \psi_{p}\left(x_{i}, x_{j}\right)$$
Here $\psi_{u}(x_{i})$ and $\psi_{p}(x_{i}, x_{j})$ are the unary and pairwise potentials, respectively.
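As a toy illustration of this energy (not code from the paper), a brute-force evaluation with a simple Potts pairwise penalty over all pixel pairs might look like:

```python
import numpy as np

# Toy sketch: evaluate the Gibbs energy
#   E(x) = sum_i psi_u(x_i) + sum_{i<j} psi_p(x_i, x_j)
# for a fully connected model over N pixels and L labels.
# `unary` is an (N, L) table of unary costs; the pairwise term is a plain
# Potts penalty here, a simplification of the Gaussian potentials below.

def gibbs_energy(labels, unary, potts_weight=1.0):
    """labels: (N,) int array; unary: (N, L) cost table."""
    n = len(labels)
    e = unary[np.arange(n), labels].sum()      # sum_i psi_u(x_i)
    for i in range(n):
        for j in range(i + 1, n):              # every pair with i < j
            if labels[i] != labels[j]:
                e += potts_weight              # mu(x_i, x_j) = [x_i != x_j]
    return e

unary = np.array([[0.1, 2.0], [0.2, 1.5], [1.8, 0.3]])
print(gibbs_energy(np.array([0, 0, 1]), unary))
print(gibbs_energy(np.array([0, 0, 0]), unary))
```

Lower energy corresponds to higher probability under the Gibbs distribution, so the MAP labeling above is exactly the energy minimizer.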
Consider the pairwise potential:

$$\psi_{p}\left(x_{i}, x_{j}\right)=\mu\left(x_{i}, x_{j}\right) \underbrace{\sum_{m=1}^{K} w^{(m)} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)}_{k\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)}$$
This is the potential function of a single pairwise clique in the graphical model; $K$ is the total number of Gaussian kernels. $\mu(x_i, x_j)$ is the label compatibility function (in the simplest case the Potts model, $\mu(x_i, x_j)=[x_i \neq x_j]$).
For multi-class image segmentation, contrast-sensitive two-kernel potentials are used, where $I_i$, $I_j$ are color vectors and $p_i$, $p_j$ are positions:

$$k\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)=\underbrace{w^{(1)} \exp \left(-\frac{\left|p_{i}-p_{j}\right|^{2}}{2 \theta_{\alpha}^{2}}-\frac{\left|I_{i}-I_{j}\right|^{2}}{2 \theta_{\beta}^{2}}\right)}_{\text{appearance kernel}}+\underbrace{w^{(2)} \exp \left(-\frac{\left|p_{i}-p_{j}\right|^{2}}{2 \theta_{\gamma}^{2}}\right)}_{\text{smoothness kernel}}$$
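A minimal numeric sketch of this two-kernel function; the weights and $\theta$ bandwidths below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Contrast-sensitive two-kernel function k(f_i, f_j):
# an appearance kernel over position + color, plus a smoothness
# kernel over position only. All parameter values are illustrative.

def two_kernel(p_i, p_j, I_i, I_j,
               w1=1.0, w2=1.0,
               theta_alpha=60.0, theta_beta=20.0, theta_gamma=3.0):
    d_pos = np.sum((p_i - p_j) ** 2)
    d_col = np.sum((I_i - I_j) ** 2)
    appearance = w1 * np.exp(-d_pos / (2 * theta_alpha ** 2)
                             - d_col / (2 * theta_beta ** 2))
    smoothness = w2 * np.exp(-d_pos / (2 * theta_gamma ** 2))
    return appearance + smoothness

# Nearby pixels with similar colors interact strongly under both kernels:
k_near = two_kernel(np.array([0, 0]), np.array([1, 0]),
                    np.array([10, 10, 10]), np.array([12, 10, 10]))
k_far = two_kernel(np.array([0, 0]), np.array([200, 0]),
                   np.array([10, 10, 10]), np.array([200, 10, 10]))
print(k_near, k_far)
```

Nearby, similar-colored pixels get a kernel value close to $w^{(1)}+w^{(2)}$, while distant, differently colored pixels get a value near zero.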

Efficient Inference in Fully Connected CRFs

We approximate the original distribution $P(X)$ by a distribution $Q(X)$, chosen to minimize the KL divergence $D(Q \| P)$.
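As a quick numeric illustration of the objective (toy numbers, assuming discrete distributions with full support):

```python
import numpy as np

# D(Q||P) = sum_x Q(x) log(Q(x)/P(x)) for discrete distributions.
def kl(Q, P):
    Q, P = np.asarray(Q, float), np.asarray(P, float)
    return float(np.sum(Q * np.log(Q / P)))

P = np.array([0.7, 0.2, 0.1])
print(kl(P, P))                # zero when Q equals P
print(kl([0.5, 0.3, 0.2], P))  # strictly positive otherwise
```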
The derivation follows the post FCN(5)——DenseCRF推导; I have copied it here so that it is easier to annotate.
The goal of the variational inference below is to find a distribution $Q(x)$ that approximates $P(x)$, reducing the model's complexity; the derivation shows this requires iterative updates. The CRF parameters $\theta$ and $w$ are not determined by this procedure and must be learned separately.
We first write down the Gibbs distribution of the dense CRF, and assume the mean-field factorization $Q(X)=\prod_i Q_i(X_i)$:

$$P(X)=\frac{1}{Z} \tilde{P}(X)=\frac{1}{Z} \exp \left(-\sum_{i} \psi_{u}\left(x_{i}\right)-\sum_{i<j} \psi_{p}\left(x_{i}, x_{j}\right)\right)$$
$$D(Q \| P)=\sum_{x} Q(x) \log \frac{Q(x)}{P(x)}=-\sum_{x} Q(x) \log P(x)+\sum_{x} Q(x) \log Q(x)$$

$$=-E_{X \sim Q}[\log P(X)]+E_{X \sim Q}[\log Q(X)]$$

$$=-E_{X \sim Q}[\log \tilde{P}(X)]+E_{X \sim Q}[\log Z]+\sum_{i} E_{X_{i} \sim Q_{i}}\left[\log Q_{i}\left(X_{i}\right)\right]$$

$$=-E_{X \sim Q}[\log \tilde{P}(X)]+\log Z+\sum_{i} E_{X_{i} \sim Q_{i}}\left[\log Q_{i}\left(X_{i}\right)\right]$$
Since we are solving for $Q$ and the $\log Z$ term does not involve $Q$, that term can be dropped.
$Q_i(x_i)$ is the probability that pixel $i$ takes label $x_i$ given the current input.

$Q$ must also satisfy the normalization constraint:

$$\sum_{x_{i}} Q_{i}\left(x_{i}\right)=1$$

Applying the method of Lagrange multipliers, we obtain

$$L\left(Q_{i}\right)=-E_{X \sim Q}[\log \tilde{P}(X)]+\sum_{i} E_{x_{i} \sim Q_{i}}\left[\log Q_{i}\left(x_{i}\right)\right]+\lambda\left(\sum_{x_{i}} Q_{i}\left(x_{i}\right)-1\right)$$
The last two terms are straightforward, but the first is more involved, so we treat it separately. Written out, this term is $-\sum_{x} Q(x) \log \tilde{P}(x)$:
$$-E_{X \sim Q}[\log \tilde{P}(X)]=-\int \prod_{i} Q_{i}\left(x_{i}\right) \log \tilde{P}(X)\, d X$$

$$=-\int Q_{i}\left(x_{i}\right)\left(\int \prod_{j \neq i} Q_{j}\left(x_{j}\right) \log \tilde{P}(X)\, d \overline{X}\right) d x_{i}$$

$$=-\int Q_{i}\left(x_{i}\right) E_{\overline{X} \sim Q}[\log \tilde{P}(X)]\, d x_{i}$$

where $\overline{X}$ denotes all variables except $x_i$.
Rearranged this way, we can take the partial derivative with respect to $Q_i(x_i)$:

$$\frac{\partial L\left(Q_{i}\right)}{\partial Q_{i}\left(x_{i}\right)}=-E_{\overline{X} \sim Q}\left[\log \tilde{P}(X) \mid x_{i}\right]+\log Q_{i}\left(x_{i}\right)+1+\lambda$$
Setting the derivative to zero gives the extremum:

$$Q_{i}\left(x_{i}\right)=\exp (-\lambda-1) \exp \left(E_{\overline{X} \sim Q}\left[\log \tilde{P}(X) \mid x_{i}\right]\right)$$
Since $\exp(-\lambda-1)$ is the same for every $Q_i$, we can treat it as a constant that cancels during renormalization, so:

$$Q_{i}\left(x_{i}\right)=\frac{1}{Z_{i}} \exp \left(E_{\overline{X} \sim Q}\left[\log \tilde{P}(X) \mid x_{i}\right]\right)$$
Substituting the definition of $\tilde{P}$ from the beginning (terms that do not involve $x_i$ are constant in $x_i$ and are absorbed into $Z_i$), we get

$$Q_{i}\left(x_{i}\right)=\frac{1}{Z_{i}} \exp \left(-\psi_{u}\left(x_{i}\right)-E_{\overline{X} \sim Q}\left[\sum_{j \neq i} \psi_{p}\left(x_{i}, x_{j}\right) \,\Big|\, x_{i}\right]\right)$$
Since $x_i$ is held fixed here, we obtain the result given in the paper's supplementary material (with slightly different variable names):

$$Q_{i}\left(x_{i}=l\right)=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{j \neq i} E_{X_{j} \sim Q_{j}}\left[\psi_{p}\left(l, X_{j}\right)\right]\right]$$
Expanding the pairwise potential, we get

$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} E_{X_{j} \sim Q_{j}}\left[\mu\left(l, X_{j}\right)\right] k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$

$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} \sum_{l^{\prime} \in \mathcal{L}} Q_{j}\left(l^{\prime}\right) \mu\left(l, l^{\prime}\right) k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$

$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{l^{\prime} \in \mathcal{L}} \mu\left(l, l^{\prime}\right) \sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} Q_{j}\left(l^{\prime}\right) k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$
This completes a message-passing style derivation, in which the innermost sum can be computed with (truncated) Gaussian filtering. Copying the last bit of the derivation: because $k^{(m)}(\mathbf{f}_i, \mathbf{f}_i)=1$, the sum over $j \neq i$ equals the full sum minus the self term,

$$\tilde{Q}_{i}^{(m)}(l)=\sum_{j \neq i} Q_{j}(l)\, k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)=\sum_{j} Q_{j}(l)\, k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)-Q_{i}(l)$$
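This bookkeeping can be sanity-checked numerically. A sketch with random 1-D features (a toy setup, not the paper's high-dimensional filtering):

```python
import numpy as np

# Summing over j != i equals the full convolution-style sum over all j
# minus the self term Q_i(l), because a Gaussian kernel evaluates to 1
# at zero distance: k(f_i, f_i) = 1.

rng = np.random.default_rng(0)
N = 6
f = rng.normal(size=N)                 # 1-D feature per pixel
Q = rng.random(N)                      # Q_j(l) for one fixed label l
K = np.exp(-0.5 * (f[:, None] - f[None, :]) ** 2)  # k(f_i, f_j), diag = 1

i = 2
lhs = sum(K[i, j] * Q[j] for j in range(N) if j != i)
rhs = K[i] @ Q - Q[i]                  # full sum minus self term
print(abs(lhs - rhs) < 1e-12)
```

The full sum over $j$ is what fast Gaussian filtering computes in one pass; the self term is then subtracted out.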
The final iterative update is:

$$Q_{i}\left(x_{i}=l\right)=\frac{1}{Z_{i}} \exp \left\{-\psi_{u}(l)-\sum_{l^{\prime} \in \mathcal{L}} \mu\left(l, l^{\prime}\right) \sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right) Q_{j}\left(l^{\prime}\right)\right\}$$
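The final update can be sketched as a brute-force $O(N^2)$ mean-field loop on a toy problem. Everything below is an illustrative assumption: a Potts compatibility $\mu(l,l')=[l \neq l']$, untuned bandwidths, and an explicit kernel matrix in place of the fast Gaussian filtering that is the paper's actual contribution:

```python
import numpy as np

# Brute-force mean-field iteration for a dense CRF with the two-kernel
# Gaussian pairwise potential and Potts compatibility. Toy-scale only:
# the kernel matrix K is built explicitly, which is O(N^2).

def mean_field(unary, positions, colors, n_iters=10,
               w1=1.0, w2=1.0, ta=30.0, tb=13.0, tg=3.0):
    """unary: (N, L) costs psi_u; positions: (N, 2); colors: (N, 3)."""
    N, L = unary.shape
    dp = np.sum((positions[:, None] - positions[None]) ** 2, axis=-1)
    dc = np.sum((colors[:, None] - colors[None]) ** 2, axis=-1)
    K = (w1 * np.exp(-dp / (2 * ta**2) - dc / (2 * tb**2))
         + w2 * np.exp(-dp / (2 * tg**2)))
    np.fill_diagonal(K, 0.0)                     # drop the j = i term
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)            # initialize from unaries
    for _ in range(n_iters):
        msg = K @ Q                              # sum_j k(f_i,f_j) Q_j(l')
        pairwise = msg.sum(axis=1, keepdims=True) - msg  # Potts: l' != l
        Q = np.exp(-unary - pairwise)
        Q /= Q.sum(axis=1, keepdims=True)        # renormalize (the 1/Z_i)
    return Q

unary = np.array([[0.2, 1.0], [0.4, 0.8], [1.0, 0.2], [0.9, 0.3]], float)
pos = np.array([[0, 0], [1, 0], [10, 0], [11, 0]], float)
col = np.array([[0, 0, 0], [5, 0, 0], [200, 0, 0], [205, 0, 0]], float)
Q = mean_field(unary, pos, col)
print(Q.argmax(axis=1))                          # MAP-style label readout
```

With these toy inputs the two left pixels, whose unaries and features agree, settle on label 0 and the two right pixels on label 1; nearby similar pixels reinforce each other while the distant dissimilar pair barely interacts.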

