Introduction
Paper: https://arxiv.org/pdf/1610.02501.pdf
Main contributions / advantages:
1) Previous multi-instance neural networks focus on estimating instance labels; this paper instead learns bag representations;
2) Both training and prediction are very fast.
1 Multi-Instance Neural Networks
The notation used throughout is as follows:
| Symbol | Meaning |
|---|---|
| $X = \{ X_1, X_2, \cdots, X_N \}$ | set of bags |
| $X_i = \{ x_{i1}, x_{i2}, \cdots, x_{im_i} \}$ | a bag |
| $x_{ij} \in \mathbb{R}^{d \times 1}$ | an instance |
| $N$ | number of bags |
| $m_i$ | bag size |
| $Y_i \in \{ 0, 1 \}$ | bag label |
| $y_{ij} \in \{ 0, 1 \}$ | instance label |
Among bag labels, $1$ denotes a positive bag and $0$ a negative one, and bag and instance labels satisfy the standard MI assumption:
$$Y_i = \begin{cases} 0, \qquad \forall y_{ij} = 0;\\ 1, \qquad \sum_{j = 1}^{m_i} y_{ij} \geq 1. \end{cases} \tag{1*}$$
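As a tiny illustration of this assumption, the bag label can be derived from the instance labels like so (`bag_label` is a hypothetical helper, not from the paper):

```python
def bag_label(instance_labels):
    """Standard MI assumption (Eq. 1*): a bag is positive (1) iff at
    least one instance label y_ij is positive, else negative (0)."""
    return 1 if sum(instance_labels) >= 1 else 0
```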
As noted in the introduction, multi-instance neural networks (MINNs) come in two flavors:
1) learn instance labels, i.e., placing instance probabilities of being positive as a hidden layer in the network $^\text{[1, 2, 3]}$;
2) (this paper) learn bag representations and classify bags directly.
Consider passing a single bag $X_i$ through a MINN: the network has $L$ layers, each implementing a function $H^{\ell}(\cdot)$ (a fully connected layer with an activation), where $\ell$ is the layer index; let $x_{ij}^{\ell}$ denote the output of instance $x_{ij}$ at the $\ell^{\text{th}}$ layer.
1.1 mi-Net: Instance-Space MIL Algorithm
The traditional MINN $^\text{[1, 2, 3]}$, i.e. mi-Net, proceeds roughly as in Fig. 1, which uses four fully connected layers with ReLU activations. The instance features obtained at layer $L - 2$ are denoted $x_{ij}^{L - 2}$; the corresponding probability outputs are $p_{ij}^{L - 1}$, normalized to $[0, 1]$; the bag probability output is written $P^L(X_i)$.
Since instances in MIL carry no labels, the training stage treats instance labels as latent variables and then aggregates the instance probability outputs into a bag probability output by some pooling method.
mi-Net can be formalized as:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ P_i^L = M^L (p_{ij \mid j = 1 \ldots m_i}^{L - 1}). \end{cases} \tag{1}$$
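A minimal sketch of the instance-space pipeline in Eq. (1): the $H^{\ell}$ stack is collapsed into a single toy linear scorer (`w`, `b` are hypothetical parameters, not the paper's architecture), and max pooling plays the role of $M^L$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mi_net_forward(bag, w, b):
    # 1) Score every instance first: p_ij, the probability of being
    # positive (one linear layer + sigmoid stands in for the H^l stack).
    p = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in bag]
    # 2) Aggregate instance probabilities into the bag probability P_i^L
    # (max pooling as M^L; mean or LSE slot in the same way).
    return max(p)
```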
1.2 MI-Net: A New Embedded-Space MIL Algorithm
Rather than relying on instance probability outputs, MI-Net learns the bag representation directly, as in Fig. 2. In summary:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ X_i^{\ell} = M^{\ell} (x_{ij \mid j = 1 \ldots m_i}^{\ell - 1}). \end{cases} \tag{2}$$
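The embedded-space order of operations in Eq. (2) can be sketched as follows; again a single toy linear classifier (`w`, `b` are hypothetical) replaces the layer stack, and dimension-wise max pooling plays $M^{\ell}$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def minet_embedded_forward(bag, w, b):
    # 1) Pool instance features into a bag representation X_i first
    # (dimension-wise max pooling as M^l).
    bag_repr = [max(x[d] for x in bag) for d in range(len(bag[0]))]
    # 2) Classify the bag representation directly -- no per-instance
    # probabilities are ever computed.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, bag_repr)) + b)
```

The contrast with mi-Net is only the order of pooling and classification: mi-Net classifies then pools, MI-Net pools then classifies.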
1.3 MI-Net with Deep Supervision
Inspired by Deeply-Supervised Nets (DSN) $^\text{[4]}$, deep supervision is added to MI-Net, as in Fig. 3. Formally:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ X_i^{\ell, k} = M^{\ell} (x_{ij \mid j = 1 \ldots m_i}^k), \quad k \in \{ 1, 2, 3 \}. \end{cases} \tag{3}$$
where $k$ indicates that bag features are learned from all the different levels of instance features.
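A rough sketch of the idea in Eq. (3): each supervised branch pools instance features from a different depth into its own bag feature, and each branch receives its own loss during training (mean pooling and the `deep_supervision_bag_reprs` helper are assumptions for illustration):

```python
def deep_supervision_bag_reprs(layer_feats):
    """layer_feats[k] holds the instance feature vectors x_ij^k for
    k = 1, 2, 3; each branch pools one level into a bag feature X_i^{l,k}."""
    def pool(instances):  # mean pooling as M^l
        d = len(instances[0])
        return [sum(x[j] for x in instances) / len(instances) for j in range(d)]
    return [pool(feats) for feats in layer_feats]
```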
1.4 MI-Net with Residual Connections
Formally:
$$\begin{cases} x_{ij}^{\ell} = H^{\ell} (x_{ij}^{\ell - 1});\\ X_i^{1} = M^{1} (x_{ij \mid j = 1 \ldots m_i}^{1});\\ X_i^{\ell} = M^{\ell} (x_{ij \mid j = 1 \ldots m_i}^{\ell}) + X_i^{\ell - 1}, \quad \ell > 1. \end{cases} \tag{4}$$
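The residual recursion over bag representations can be sketched like this (mean pooling stands in for $M^{\ell}$; `minet_rc_bag_repr` and its input layout are assumptions, not the paper's code):

```python
def minet_rc_bag_repr(layer_feats):
    """layer_feats[l-1] holds the per-instance feature vectors x_ij^l
    at layer l = 1..L; returns the final residual bag representation."""
    def pool(instances):  # mean pooling as M^l
        d = len(instances[0])
        return [sum(x[k] for x in instances) / len(instances) for k in range(d)]

    X = pool(layer_feats[0])                          # X_i^1 = M^1(x_ij^1)
    for feats in layer_feats[1:]:                     # l > 1
        X = [p + r for p, r in zip(pool(feats), X)]   # M^l(x_ij^l) + X_i^{l-1}
    return X
```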
1.5 MIL Pooling Methods
Three pooling methods are used: max, mean, and log-sum-exp (LSE) $^\text{[5]}$; LSE is a smooth version of max and mean pooling. Concretely:
$$\left\{\begin{array}{ll} \max : & M^{\ell}\left(x_{i j \mid j=1 \ldots m_{i}}^{\ell-1}\right)=\max _{j} x_{i j}^{\ell-1}; \\ \operatorname{mean}: & M^{\ell}\left(x_{i j \mid j=1 \ldots m_{i}}^{\ell-1}\right)=\frac{1}{m_{i}} \sum_{j=1}^{m_{i}} x_{i j}^{\ell-1}; \\ \mathrm{LSE}: & M^{\ell}\left(x_{i j \mid j=1 \ldots m_{i}}^{\ell-1}\right)=r^{-1} \log \left[\frac{1}{m_{i}} \sum_{j=1}^{m_{i}} \exp \left(r \cdot x_{i j}^{\ell-1}\right)\right]. \end{array}\right. \tag{5}$$
where $r$ is a hyper-parameter: the larger $r$ is, the closer LSE gets to max pooling; the smaller, the closer to mean pooling.
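The three pooling operators of Eq. (5), written for a single feature dimension (the LSE variant is shifted by the max before exponentiating for numerical stability, which does not change its value):

```python
import math

def max_pool(xs):
    return max(xs)

def mean_pool(xs):
    return sum(xs) / len(xs)

def lse_pool(xs, r):
    # r^{-1} * log( (1/m) * sum_j exp(r * x_j) ), computed stably by
    # factoring exp(r * max(xs)) out of the sum before taking the log.
    m = max(xs)
    return m + math.log(sum(math.exp(r * (x - m)) for x in xs) / len(xs)) / r
```

With `xs = [1.0, 2.0, 3.0]`, `lse_pool(xs, 100.0)` lies close to `max_pool(xs) = 3.0`, while `lse_pool(xs, 0.01)` lies close to `mean_pool(xs) = 2.0`, illustrating the interpolation controlled by $r$.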
1.6 Training Loss
The training loss is the sum, over all bags, of a per-bag loss on the bag score $S_i$, computed as:
$$\text{Loss} (S_i, Y_i) = - \{ (1 - Y_i) \log (1 - S_i) + Y_i \log S_i \}. \tag{6}$$
The network is trained with standard back-propagation using stochastic gradient descent.
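Eq. (6) is the usual bag-level cross-entropy; summed over bags it gives the training objective (a straightforward sketch, not the authors' implementation):

```python
import math

def bag_loss(s, y):
    # Cross-entropy between the bag score S_i in (0, 1) and label Y_i (Eq. 6).
    return -((1 - y) * math.log(1 - s) + y * math.log(s))

def total_loss(scores, labels):
    # Training loss: sum of per-bag losses over all bags.
    return sum(bag_loss(s, y) for s, y in zip(scores, labels))
```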
[1]: J. Ramon and L. De Raedt, “Multi instance neural networks,” in Proceedings of the ICML-2000 workshop on attribute-value and relational learning, 2000, pp. 53–60.
[2]: Z.-H. Zhou and M.-L. Zhang, “Neural networks for multi-instance learning,” in Proceedings of the International Conference on Intelligent Information Technology, Beijing, China, 2002, pp. 455–459.
[3]: J. Wu, Y. Yu, C. Huang, and K. Yu, “Deep multiple instance learning for image classification and auto-annotation,” in CVPR, 2015, pp. 3460–3469.
[4]: C. Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply-Supervised Nets,” in AISTATS, 2015, pp. 562–570.
[5]: S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.