Orderless Recurrent Models for Multi-label Classification

Introduction

Multi-label classification is the task of assigning a wide range of visual concepts (labels) to images. The large variety of concepts and the uncertain relations among them make this a very challenging task. RNNs have demonstrated good performance on many tasks that require processing variable-length sequential data, including multi-label classification, and this type of model naturally incorporates the co-occurrence patterns among labels into the training process. However, since RNNs produce sequential outputs, the labels need to be ordered for the multi-label classification task.

Several recent works have tried to address this issue by imposing an arbitrary, but consistent, ordering on the ground truth label sequences. Although these approaches alleviate the problem, they fall short of solving it, and many of the original issues remain. For example, in an image that features a clearly visible and prominent dog, the LSTM may choose to predict that label first, as the evidence for it is very strong. However, if dog is not the label that happens to come first in the chosen ordering, the network will be penalized for that output, and then penalized again for not predicting dog at the "correct" step of the ground truth sequence. As a result, training can become very slow.

In this paper, we propose ways to dynamically align the ground truth labels with the predicted label sequence. There are two ways of doing this: predicted label alignment (PLA) and minimal loss alignment (MLA). We empirically show that these approaches lead to faster training and also eliminate other nuisances, such as repeated labels in the predicted sequence.

Innovation

  1. Orderless recurrent models trained with minimal loss alignment (MLA) and predicted label alignment (PLA)

Method

Image-to-sequence model

This type of model consists of a CNN (encoder) part that extracts a compact visual representation from the image, and of an RNN (decoder) part that uses the encoding to generate a sequence of labels, modeling the label dependencies.

Linearized activations from the fourth convolutional layer are used as input to the attention module, along with the hidden state of the LSTM at each time step, so that the attention module can focus on different parts of the image at every step. These attention-weighted features are then concatenated with the word embedding of the class predicted in the previous time step and given to the LSTM as input for the current time step.
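A minimal sketch of how such an attention module could look, assuming a generic additive-attention formulation and hypothetical layer sizes (the paper does not publish this exact code):

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only.
feat_dim, hidden_dim, attn_dim = 2048, 512, 512

proj_feat = nn.Linear(feat_dim, attn_dim)    # projects image regions
proj_hidden = nn.Linear(hidden_dim, attn_dim)  # projects LSTM state
score = nn.Linear(attn_dim, 1)               # scalar score per region

def attend(features, h_prev):
    """features: (regions, feat_dim) linearized conv activations
    h_prev:   (hidden_dim,) LSTM hidden state from the previous step."""
    e = score(torch.tanh(proj_feat(features) + proj_hidden(h_prev)))
    alpha = torch.softmax(e, dim=0)          # attention weight per region
    return (alpha * features).sum(dim=0)     # attention-weighted feature
```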

The predictions for the current time step $t$ are computed in the following way:

$$
\begin{aligned}
x_{t} &= E \cdot \hat{l}_{t-1} \\
h_{t} &= \mathrm{LSTM}(x_{t}, h_{t-1}, c_{t-1}) \\
p_{t} &= W \cdot h_{t} + b
\end{aligned}
$$

where $E$ is a word embedding matrix and $\hat{l}_{t-1}$ is the label index predicted at the previous time step. $c_{t-1}$ and $h_{t-1}$ are the cell and hidden states of the previous LSTM unit. The prediction vector is denoted by $p_t$, and $W$ and $b$ are the weights and bias of the fully connected layer.
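These three equations map directly onto a PyTorch `nn.LSTMCell` decoder. Below is a rough sketch with hypothetical dimensions; the attention-weighted image features, which the full model concatenates to $x_t$, are omitted for brevity:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration; the paper's exact values may differ.
num_labels = 82          # m, including start and end tokens (assumed)
embed_dim, hidden_dim = 256, 512

E = nn.Embedding(num_labels, embed_dim)    # word embedding matrix E
lstm = nn.LSTMCell(embed_dim, hidden_dim)  # recurrent decoder
fc = nn.Linear(hidden_dim, num_labels)     # W and b of the output layer

def decode_step(prev_label, h_prev, c_prev):
    """One decoding step: x_t = E·l̂_{t-1}; h_t = LSTM(...); p_t = W·h_t + b."""
    x_t = E(prev_label)                     # embed previous predicted label
    h_t, c_t = lstm(x_t, (h_prev, c_prev))  # update hidden and cell states
    p_t = fc(h_t)                           # class scores for this step
    return p_t, h_t, c_t
```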

Training recurrent models

To train the model, a dataset with pairs of images and sets of labels is used. Let $(I, L)$ be one of the pairs, containing an image $I$ and its $n$ labels $L = \{l_1, l_2, \ldots, l_n\}$, $l_i \in \mathcal{L}$, with $\mathcal{L}$ the set of all labels with cardinality $m = |\mathcal{L}|$, including the start and end tokens.

The predictions $p_t$ of the LSTM are collected in the matrix $P = [p_1\, p_2\, \ldots\, p_n]$, $P \in \mathbb{R}^{m \times n}$. When the number of predicted labels $k$ is larger than $n$, we only keep the first $n$ prediction vectors. When $k$ is smaller than $n$, we pad the matrix with empty vectors to obtain the desired dimensions.

We can now define the standard cross-entropy loss for recurrent models as:
$$
\mathfrak{L} = -\operatorname{tr}\left(T \log(P)\right), \quad \text{with} \quad
\begin{cases}
T_{tj} = 1, & \text{if } l_{t} = j \\
T_{tj} = 0, & \text{otherwise}
\end{cases}
\tag{3}
$$

where $T \in \mathbb{R}^{n \times m}$ contains the ground truth label for each time step $t$. The loss is computed by comparing the prediction of the model at step $t$ with the label at the same step of the ground truth sequence.
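A minimal sketch of this loss, assuming `logits` holds the per-step class scores with one row per time step (names and shapes are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def ordered_loss(logits, label_seq):
    """Cross-entropy against a fixed label order, i.e. -tr(T log(P)).

    logits:    (n, m) scores for n time steps over m classes
    label_seq: (n,)   ground truth label indices in the imposed order
    """
    log_p = F.log_softmax(logits, dim=1)        # rows of log(P)
    steps = torch.arange(label_seq.numel())
    # T is one-hot per row, so the trace reduces to one entry per step.
    return -log_p[steps, label_seq].sum()
```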
For inherently orderless tasks like multi-label classification, where labels often come in an arbitrary order, it is essential to minimize unnecessary penalization, and several approaches have been proposed in the literature. The most popular solution to improve the alignment between ground truth and predicted labels consists of defining an arbitrary criterion by which the labels are sorted, such as frequent-first, rare-first, or dictionary order. However, those methods delay convergence, as the network has to learn the arbitrary ordering in addition to predicting the correct labels for the image. Furthermore, any misalignment between the predictions and the labels still results in a higher loss and misleading updates to the network.

Orderless recurrent models

To alleviate the problems caused by imposing a fixed order on the labels, we propose to align the ground truth labels with the predictions of the network before computing the loss. We consider two different strategies to achieve this.
The first strategy, called minimal loss alignment (MLA), is computed as:

$$
\mathfrak{L} = \min_{T}\; -\operatorname{tr}\left(T \log(P)\right) \quad \text{s.t.} \quad
\begin{cases}
T_{tj} \in \{0, 1\}, & \sum_{j} T_{tj} = 1 \\
\sum_{t} T_{tj} = 1, & \forall j \in L \\
\sum_{t} T_{tj} = 0, & \forall j \notin L
\end{cases}
$$
where $T \in \mathbb{R}^{n \times m}$ is a permutation-like matrix, constrained so that exactly one ground truth label is assigned to each time step ($\sum_{j} T_{tj} = 1$) and each label in the ground truth set $L$ is assigned to exactly one time step. The matrix $T$ is chosen so as to minimize the summed cross-entropy loss. This minimization is an assignment problem and can be solved with the Hungarian algorithm.
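A sketch of MLA using SciPy's Hungarian solver (`scipy.optimize.linear_sum_assignment`); variable names and the row-per-step orientation of `log_p` are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mla_align(log_p, gt_labels):
    """Minimal loss alignment: reorder the ground truth labels so that
    the total cross-entropy over all time steps is minimized.

    log_p:     (n, m) array of per-step log-probabilities
    gt_labels: the n ground truth label indices, in any order
    """
    gt_labels = np.asarray(gt_labels)
    cost = -log_p[:, gt_labels]                  # cost of label j at step t
    steps, labels = linear_sum_assignment(cost)  # Hungarian algorithm
    aligned = np.empty_like(gt_labels)
    aligned[steps] = gt_labels[labels]           # label assigned to each step
    return aligned                               # train with this target order
```

The resulting `aligned` sequence would then play the role of `label_seq` in the ordered loss sketched above.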

We also consider the predicted label alignment (PLA) solution. If we predict a label which is in the set of ground truth labels for the image, then we do not wish to change it. That leads to the following optimization problem:

$$
\mathfrak{L} = \min_{T}\; -\operatorname{tr}\left(T \log(P)\right) \quad \text{s.t.} \quad
\begin{cases}
T_{tj} \in \{0, 1\}, & \sum_{j} T_{tj} = 1 \\
T_{tj} = 1, & \text{if } \hat{l}_{t} \in L \text{ and } j = \hat{l}_{t} \\
\sum_{t} T_{tj} = 1, & \forall j \in L \\
\sum_{t} T_{tj} = 0, & \forall j \notin L
\end{cases}
$$

where $\hat{l}_{t}$ is the label predicted by the model at step $t$. Here we first fix those elements of the matrix $T$ for which we know the prediction is in the ground truth set $L$, and then apply the Hungarian algorithm to assign the remaining labels. This second approach results in higher losses than the first one, since there are more restrictions on the matrix $T$. Nevertheless, it is more consistent with the labels that were actually predicted by the LSTM.
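A sketch of PLA under the same assumptions as the MLA snippet: correct predictions are pinned to their time steps first, and the Hungarian algorithm only assigns the leftover labels to the free steps:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pla_align(log_p, gt_labels, predicted):
    """Predicted label alignment: keep correctly predicted labels where
    the LSTM emitted them, then optimally assign the remaining labels.

    log_p:     (n, m) array of per-step log-probabilities
    gt_labels: the n ground truth label indices
    predicted: the n labels the LSTM predicted, one per time step
    """
    n = len(gt_labels)
    aligned = np.full(n, -1, dtype=int)
    remaining = list(gt_labels)
    for t, l_hat in enumerate(predicted):
        if l_hat in remaining:               # prediction is in L:
            aligned[t] = l_hat               # fix T_{t, l_hat} = 1
            remaining.remove(l_hat)
    free = [t for t in range(n) if aligned[t] == -1]
    if remaining:                            # assign leftover labels
        cost = -log_p[np.ix_(free, remaining)]
        rows, cols = linear_sum_assignment(cost)
        for r, c in zip(rows, cols):
            aligned[free[r]] = remaining[c]
    return aligned
```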

To further illustrate our proposed approach for training orderless recurrent models, consider an example image and its cost matrix (see Figure 4). The cost matrix shows the cost of assigning each label to the different time steps, computed as the negative logarithm of the probability at the corresponding time step. Although MLA achieves the ordering that yields the lowest loss, in some cases this can produce misguided gradients, as in the example in the figure: MLA places the label chair at time step $t_3$, although the network already predicts it at time step $t_4$. The gradients therefore push the network to output chair instead of sports ball, even though sports ball is also one of the labels.

Experiments

Convergence rate


Co-occurrence in the predictions


Comparison of different ordering methods


Comparison with the state of the art

