论文原文:LINK
论文年份:2012
论文被引:67950(20/08/20)
文章目录
- ImageNet Classification with Deep Convolutional Neural Networks
- Abstract
- 1. Introduction
- 2. The Dataset
- 3. The Architecture
- 3.1 ReLU Nonlinearity
- 3.2 Training on Multiple GPUs
- 3.3 Local Response Normalization
- 3.4 Overlapping Pooling
- 3.5 Overall Architecture
- 4. Reducing Overfitting
- 4.1 Data Augmentation
- 4.2 Dropout
- 5. Details of learning
- 6. Results
- 6.1 Qualitative Evaluations
- 7. Discussion
ImageNet Classification with Deep Convolutional Neural Networks
Abstract
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
我们训练了一个大型的深度卷积神经网络,将ImageNet LSVRC-2010竞赛中的120万张高分辨率图像分类到1000个不同的类别中。在测试数据上,我们取得了37.5%的top-1错误率和17.0%的top-5错误率,这比此前的最佳水平要好得多。该神经网络具有6000万个参数和65万个神经元,由五个卷积层(其中一些后面跟着最大池化层)和三个全连接层组成,最后是一个1000路的softmax。为了加快训练,我们使用了非饱和神经元以及卷积运算的高效GPU实现。为了减少全连接层中的过拟合,我们采用了一种最近开发的、被称为“dropout”的正则化方法,它被证明非常有效。我们还将该模型的一个变体提交到ILSVRC-2012竞赛,并以15.3%的top-5测试错误率获胜,而第二名的错误率为26.2%。
1. Introduction
Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small — on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4]. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe [23], which consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of over 15 million labeled high-resolution images in over 22,000 categories.
当前的目标识别方法都离不开机器学习方法。为了提高其性能,我们可以收集更大的数据集,学习更强大的模型,并使用更好的技术来防止过拟合。直到最近,带标签图像的数据集还相对较小,只有数万张图像(例如NORB [16]、Caltech-101/256 [8, 9]和CIFAR-10/100 [12])。使用这种规模的数据集可以很好地解决简单的识别任务,尤其是在使用保留标签的变换对其进行扩充的情况下。例如,MNIST数字识别任务上目前最好的错误率(<0.3%)已接近人类的表现[4]。但是现实环境中的物体表现出相当大的可变性,因此要学会识别它们,就必须使用大得多的训练集。确实,小型图像数据集的缺点已得到广泛认可(例如Pinto等人[21]),但直到最近才有可能收集包含数百万张图像的带标签数据集。新的更大的数据集包括由数十万张完全分割的图像组成的LabelMe [23],以及由超过1500万张带标签的高分辨率图像组成、涵盖超过22,000个类别的ImageNet [6]。
To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don’t have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.
要从数百万张图像中学习识别数千种物体,我们需要一个具有很大学习容量的模型。但是,目标识别任务的巨大复杂性意味着,即使是像ImageNet这样大的数据集也不足以完全刻画这一问题,因此我们的模型还应当具有大量先验知识,以弥补我们所没有的数据。卷积神经网络(CNN)就是这样一类模型[16, 11, 13, 18, 15, 22, 26]。它们的容量可以通过改变深度和宽度来控制,而且它们对图像的性质(即统计特性的平稳性和像素依赖的局部性)做出了强有力且基本正确的假设。因此,与具有类似规模层的标准前馈神经网络相比,CNN的连接和参数要少得多,因而更易于训练,而其理论上的最佳性能可能只是稍差一些。
Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting.
尽管CNN具有吸引人的特性,尽管其局部结构相对高效,但将它们大规模应用于高分辨率图像的代价仍然高得令人望而却步。幸运的是,当前的GPU配合高度优化的2D卷积实现,已足够强大,可以支持训练规模相当大的CNN,而且像ImageNet这样的最新数据集包含足够多的带标签样本,可以在不发生严重过拟合的情况下训练此类模型。
The specific contributions of this paper are as follows: we trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions [2] and achieved by far the best results ever reported on these datasets. We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly. Our network contains a number of new and unusual features which improve its performance and reduce its training time, which are detailed in Section 3. The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, which are described in Section 4. Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model’s parameters) resulted in inferior performance.
本文的具体贡献如下:我们在ILSVRC-2010和ILSVRC-2012竞赛[2]所使用的ImageNet子集上训练了迄今为止最大的卷积神经网络之一,并取得了在这些数据集上迄今报道过的最好结果。我们编写了2D卷积以及训练卷积神经网络所需的所有其他操作的高度优化的GPU实现,并将其公开发布。我们的网络包含许多新的和不寻常的特性,它们可以提升性能并缩短训练时间,详见第3节。即使有120万个带标签的训练样本,我们网络的规模也使过拟合成为一个严重的问题,因此我们使用了几种有效的技术来防止过拟合,详见第4节。我们的最终网络包含五个卷积层和三个全连接层,而且这个深度似乎很重要:我们发现,删除任何一个卷积层(每个卷积层包含的参数都不超过模型参数的1%)都会导致性能下降。
In the end, the network’s size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.
最终,网络的规模主要受限于当前GPU的可用内存量以及我们愿意容忍的训练时间。我们的网络在两块GTX 580 3GB GPU上需要五到六天的训练时间。我们所有的实验都表明,只需等待更快的GPU和更大的数据集出现,我们的结果就能得到进一步改善。
2. The Dataset
ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon’s Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images.
ImageNet是超过1500万张带标签的高分辨率图像的数据集,这些图像大约属于22,000个类别。这些图像是从网上收集的,并由人工贴标签者使用亚马逊的Mechanical Turk众包工具贴上标签。从2010年开始,作为Pascal视觉对象挑战赛的一部分,每年举行一次名为ImageNet大规模视觉识别挑战赛(ILSVRC)的竞赛。 ILSVRC使用ImageNet的子集,在1000个类别中的每个类别中大约包含1000张图像。总共大约有120万张训练图像,50,000张验证图像和150,000张测试图像。
ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is the version on which we performed most of our experiments. Since we also entered our model in the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model.
ILSVRC-2010是ILSVRC中唯一提供测试集标签的版本,因此这是我们进行大部分实验所用的版本。由于我们也将模型提交到了ILSVRC-2012竞赛,在第6节中我们同样报告了在该版本数据集上的结果,但该版本的测试集标签不可获得。在ImageNet上,通常报告两个错误率:top-1和top-5,其中top-5错误率是指正确标签不在模型认为最可能的五个标签之中的测试图像所占的比例。
ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel. So we trained our network on the (centered) raw RGB values of the pixels.
ImageNet由分辨率不一的图像组成,而我们的系统需要固定的输入维数。因此,我们将图像下采样为256×256的固定分辨率。给定一张矩形图像,我们首先对其进行缩放,使较短的一边长度为256,然后从缩放后的图像中裁剪出中央的256×256图块。除了从每个像素中减去训练集上的平均激活值以外,我们没有以任何其他方式对图像进行预处理。因此,我们是在像素的(中心化的)原始RGB值上训练网络的。
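As a rough illustration of this preprocessing (not the authors' released code), the rescale, center-crop, and mean-subtraction steps could be sketched in Python as follows; `path` and `mean_image` are hypothetical placeholders, and Pillow is assumed to be available:

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

def preprocess(path, mean_image):
    """Rescale so the shorter side is 256, center-crop a 256x256 patch,
    then subtract the per-pixel mean computed over the training set."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = 256 / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    arr = np.asarray(img, dtype=np.float32)      # shape (H, W, 3)
    top = (arr.shape[0] - 256) // 2
    left = (arr.shape[1] - 256) // 2
    crop = arr[top:top + 256, left:left + 256]
    return crop - mean_image                     # centered raw RGB values
```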
3. The Architecture
The architecture of our network is summarized in Figure 2. It contains eight learned layers — five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network’s architecture. Sections 3.1-3.4 are sorted according to our estimation of their importance, with the most important first.
图2概括了我们网络的体系结构。它包含八个带学习参数的层:五个卷积层和三个全连接层。下面,我们介绍网络架构中一些新颖或不寻常的特性。第3.1至3.4节按照我们对其重要性的估计排序,最重要的放在最前面。
3.1 ReLU Nonlinearity
The standard way to model a neuron’s output f as a function of its input x is with $f(x) = \tanh(x)$ or $f(x) = (1 + e^{-x})^{-1}$. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity $f(x) = \max(0, x)$. Following Nair and Hinton [20], we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.
将神经元的输出f建模为其输入x的函数的标准方法是 $f(x) = \tanh(x)$ 或 $f(x) = (1 + e^{-x})^{-1}$。就梯度下降的训练时间而言,这些饱和非线性比非饱和非线性 $f(x) = \max(0, x)$ 慢得多。继Nair和Hinton [20]之后,我们将具有这种非线性的神经元称为修正线性单元(ReLU)。使用ReLU的深度卷积神经网络的训练速度比使用tanh单元的等效网络快几倍。图1证明了这一点,图中显示了一个特定的四层卷积网络在CIFAR-10数据集上达到25%训练误差所需的迭代次数。该图表明,如果使用传统的饱和神经元模型,我们将无法用如此大型的神经网络完成这项工作。
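For concreteness, the three nonlinearities mentioned above can be written down directly; this is a minimal illustrative sketch, not code from the paper:

```python
import numpy as np

# Saturating nonlinearities traditionally used for neuron outputs:
tanh = np.tanh
logistic = lambda x: 1.0 / (1.0 + np.exp(-x))

# Non-saturating Rectified Linear Unit (ReLU):
relu = lambda x: np.maximum(0.0, x)

x = np.linspace(-5.0, 5.0, 11)
# For large |x| the tanh/logistic gradients vanish, slowing gradient descent,
# while the ReLU gradient stays 1 for every positive input.
print(relu(x))
```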
We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. [11] claim that the nonlinearity $f(x) = |\tanh(x)|$ works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.
我们并不是第一个在CNN中考虑替代传统神经元模型的人。例如,Jarrett等人[11]声称,非线性 $f(x) = |\tanh(x)|$ 与他们的对比度归一化方式及随后的局部平均池化相结合,在Caltech-101数据集上效果特别好。但是,在该数据集上主要关注的是防止过拟合,因此他们观察到的效果不同于我们在使用ReLU时报告的对训练集加速拟合的能力。更快的学习对在大型数据集上训练的大型模型的性能有很大影响。
Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.
图1:带有ReLU的四层卷积神经网络(实线)在CIFAR-10上达到25%的训练错误率,比带有tanh神经元的等效网络(虚线)快六倍。每个网络的学习率都是独立选择的,以使训练尽可能快。没有使用任何形式的正则化。这里展示的效果的大小随网络结构的不同而变化,但带有ReLU的网络的学习速度始终比带有饱和神经元的等效网络快几倍。
3.2 Training on Multiple GPUs
A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another’s memory directly, without going through host machine memory. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of communication until it is an acceptable fraction of the amount of computation.
单块GTX 580 GPU只有3GB内存,这限制了可以在其上训练的网络的最大规模。事实证明,120万个训练样本足以训练一个大到无法装进一块GPU的网络。因此,我们将网络分布到两块GPU上。当前的GPU特别适合跨GPU并行化,因为它们能够直接读写彼此的内存,而无需经过主机内存。我们采用的并行化方案本质上是把一半的内核(或神经元)放在每块GPU上,并使用一个额外的技巧:GPU只在某些层进行通信。这意味着,例如,第3层的内核从第2层的所有内核映射中获取输入,而第4层的内核只从第3层中位于同一块GPU上的那些内核映射中获取输入。选择连接模式对于交叉验证来说是个问题,但这使我们能够精确地调节通信量,直到它只占计算量中可以接受的一部分。
The resultant architecture is somewhat similar to that of the “columnar” CNN employed by Ciresan et al. [5], except that our columns are not independent (see Figure 2). This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.
由此得到的架构与Ciresan等人[5]使用的“columnar” CNN有些相似,不同之处在于我们的列并不是独立的(见图2)。与在一块GPU上训练的、每个卷积层内核数减半的网络相比,该方案将我们的top-1和top-5错误率分别降低了1.7%和1.2%。两块GPU的网络的训练时间比单GPU网络略少。
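The restricted connectivity between the two halves of the network can be illustrated with a toy sketch (an assumption-laden illustration, not the authors' GPU code); spatial dimensions are collapsed so each layer reduces to a matrix multiply over kernel maps:

```python
import numpy as np
rng = np.random.default_rng(0)

# Previous layer's kernel maps, split as one half per "GPU" (192 maps each,
# spatial dims collapsed to keep the sketch to matrix multiplies).
maps_gpu0 = rng.standard_normal(192)
maps_gpu1 = rng.standard_normal(192)

# Non-communicating layer: each GPU's kernels see only its own 192 maps.
w_gpu0 = rng.standard_normal((192, 192))
w_gpu1 = rng.standard_normal((192, 192))
out_gpu0 = w_gpu0 @ maps_gpu0
out_gpu1 = w_gpu1 @ maps_gpu1

# Communicating layer: kernels on either GPU read the concatenation of all
# maps produced on both GPUs in the previous layer.
all_maps = np.concatenate([out_gpu0, out_gpu1])   # 384 maps in total
w_comm = rng.standard_normal((192, 384))
out_comm = w_comm @ all_maps
print(out_gpu0.shape, out_comm.shape)
```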
3.3 Local Response Normalization
ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by $a^i_{x,y}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then applying the ReLU nonlinearity, the response-normalized activity $b^i_{x,y}$ is given by the expression

$$b^i_{x,y} = a^i_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big(a^j_{x,y}\big)^2 \Big)^{\beta}$$
ReLU具有理想的特性:它们不需要输入归一化来防止饱和。只要至少有一些训练样本对某个ReLU产生正的输入,该神经元就会进行学习。但是,我们仍然发现下面的局部归一化方案有助于泛化。用 $a^i_{x,y}$ 表示在位置 $(x, y)$ 处应用内核 $i$、再应用ReLU非线性后计算得到的神经元活动,则响应归一化后的活动 $b^i_{x,y}$ 由下式给出:

$$b^i_{x,y} = a^i_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big(a^j_{x,y}\big)^2 \Big)^{\beta}$$
where the sum runs over n “adjacent” kernel maps at the same spatial position, and N is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants k, n, α, and β are hyper-parameters whose values are determined using a validation set; we used $k = 2$, $n = 5$, $\alpha = 10^{-4}$, and $\beta = 0.75$. We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5).
其中,求和在相同空间位置上的 $n$ 个“相邻”内核映射上进行,$N$ 是该层中内核的总数。内核映射的顺序当然是任意的,并在训练开始之前确定。这种响应归一化实现了一种受真实神经元中侧抑制启发的机制,使得用不同内核计算出的神经元输出之间对较大的激活值产生竞争。常数 $k$、$n$、$\alpha$ 和 $\beta$ 是超参数,其值通过验证集确定;我们使用 $k = 2$、$n = 5$、$\alpha = 10^{-4}$、$\beta = 0.75$。在某些层中应用ReLU非线性之后,我们应用了此归一化(请参见第3.5节)。
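A direct, unoptimized sketch of this normalization (assuming activations laid out as one 2-D map per kernel) might look like this; it is meant only to make the formula concrete, not to reproduce the paper's GPU implementation:

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """a: ReLU activations of shape (N, H, W), one map per kernel.
    Each map is divided by a term summing the squared activations of up to
    n neighboring maps at the same spatial position."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

a = np.maximum(0.0, np.random.randn(96, 55, 55))  # hypothetical conv-layer output
print(local_response_norm(a).shape)
```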
This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al. [11], but ours would be more correctly termed “brightness normalization”, since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization.
该方案与Jarrett等人[11]的局部对比度归一化方案有些相似,但我们的方案称为“亮度归一化”更为准确,因为我们没有减去平均激活值。响应归一化使我们的top-1和top-5错误率分别降低了1.4%和1.2%。我们还在CIFAR-10数据集上验证了该方案的有效性:一个四层CNN在不使用归一化时测试错误率为13%,使用归一化后为11%。
3.4 Overlapping Pooling
Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighborhood of size z × z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling as commonly employed in CNNs. If we set s < z, we obtain overlapping pooling. This is what we use throughout our network, with s = 2 and z = 3. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme s = 2, z = 2, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.
CNN中的池化层对同一内核映射中相邻神经元组的输出进行汇总。传统上,相邻池化单元汇总的邻域互不重叠(例如[17, 11, 4])。更准确地说,可以把一个池化层看作由间隔为s个像素的池化单元网格组成,每个单元汇总以其位置为中心、大小为z×z的邻域。如果设置s = z,就得到CNN中常用的传统局部池化;如果设置s < z,就得到重叠池化。我们在整个网络中使用的正是后者,其中s = 2、z = 3。与产生相同输出尺寸的非重叠方案(s = 2、z = 2)相比,该方案将top-1和top-5错误率分别降低了0.4%和0.3%。我们在训练中普遍观察到,采用重叠池化的模型稍微更不容易过拟合。
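A small sketch of max pooling with an adjustable stride s and window z (illustrative only; the 55×55 input size is an assumption chosen so that both settings give the same output size):

```python
import numpy as np

def max_pool(x, z, s):
    """x: (H, W) feature map; z: pooling window size; s: stride between units."""
    out_h = (x.shape[0] - z) // s + 1
    out_w = (x.shape[1] - z) // s + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * s:i * s + z, j * s:j * s + z].max()
    return out

x = np.random.randn(55, 55)
print(max_pool(x, z=3, s=2).shape)  # overlapping pooling used in the paper
print(max_pool(x, z=2, s=2).shape)  # non-overlapping baseline, same output size
```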
3.5 Overall Architecture
Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net contains eight layers with weights; the first five are convolutional and the remaining three are fullyconnected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.
现在我们可以描述CNN的总体架构了。如图2所示,该网络包含八个带权重的层:前五个是卷积层,其余三个是全连接层。最后一个全连接层的输出被送入一个1000路的softmax,从而在1000个类别标签上产生一个分布。我们的网络最大化多项逻辑回归目标,这等价于最大化预测分布下正确标签的对数概率在所有训练样本上的平均值。
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.
第二,第四和第五卷积层的内核仅连接到位于同一GPU上的上一层中的那些内核映射(请参见图2)。第三卷积层的内核连接到第二层中的所有内核映射。全连接层中的神经元连接到上一层中的所有神经元。响应归一化层位于第一和第二卷积层之后。第3.4节中所述的最大池化层位于响应归一化层以及第五卷积层之后。 ReLU非线性应用于每个卷积和全连接层的输出。
The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192, and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each.
第一个卷积层使用96个大小为11×11×3的内核,以4个像素的步幅(即内核映射中相邻神经元感受野中心之间的距离)对224×224×3的输入图像进行滤波。第二个卷积层以第一个卷积层(经响应归一化和池化后)的输出作为输入,并用256个大小为5×5×48的内核对其进行滤波。第三、第四和第五个卷积层彼此相连,中间没有任何池化或归一化层。第三个卷积层有384个大小为3×3×256的内核,连接到第二个卷积层(经归一化、池化后)的输出。第四个卷积层有384个大小为3×3×192的内核,第五个卷积层有256个大小为3×3×192的内核。每个全连接层各有4096个神经元。
Figure 2: An illustration of the architecture of our CNN, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer-parts at the top of the figure while the other runs the layer-parts at the bottom. The GPUs communicate only at certain layers. The network’s input is 150,528-dimensional, and the number of neurons in the network’s remaining layers is given by 253440–186624–64896–64896–43264– 4096–4096–1000.
图2:我们的CNN架构示意图,明确显示了两个GPU之间的职责划分。一个GPU在图的顶部运行图层部分,而另一个GPU在图的底部运行图层部分。 GPU仅在某些层进行通信。网络的输入为150,528维,网络其余层的神经元数量为253440–186624–64896–64896–43264– 4096–4096-1000。
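To keep the layer-by-layer description easy to scan, the configuration from Section 3.5 can be collected as plain data; the `conv1`/`fc6`-style names are conventional labels rather than the paper's own, and strides or paddings beyond those stated above are deliberately left out:

```python
# Kernel counts and sizes as stated in Section 3.5; depths of 48/192 reflect
# the two-GPU split described in Section 3.2.
alexnet_layers = [
    {"name": "conv1", "kernels": 96,  "size": (11, 11, 3), "stride": 4,
     "after": ["ReLU", "response norm", "overlapping max-pool"]},
    {"name": "conv2", "kernels": 256, "size": (5, 5, 48),
     "after": ["ReLU", "response norm", "overlapping max-pool"]},
    {"name": "conv3", "kernels": 384, "size": (3, 3, 256), "after": ["ReLU"]},
    {"name": "conv4", "kernels": 384, "size": (3, 3, 192), "after": ["ReLU"]},
    {"name": "conv5", "kernels": 256, "size": (3, 3, 192),
     "after": ["ReLU", "overlapping max-pool"]},
    {"name": "fc6", "neurons": 4096, "after": ["ReLU", "dropout"]},
    {"name": "fc7", "neurons": 4096, "after": ["ReLU", "dropout"]},
    {"name": "fc8", "neurons": 1000, "after": ["softmax"]},
]
```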
4. Reducing Overfitting
Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting.
我们的神经网络架构具有6000万个参数。尽管ILSVRC的1000个类别使每个训练样本对从图像到标签的映射施加了10比特的约束,但事实证明,这不足以在不产生明显过拟合的情况下学习如此多的参数。下面,我们介绍对抗过拟合的两种主要方法。
4.1 Data Augmentation
The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms of data augmentation, both of which allow transformed images to be produced from the original images with very little computation, so the transformed images do not need to be stored on disk. In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free.
减少图像数据过拟合的最简单、最常见的方法,是使用保留标签的变换(例如[25, 4, 5])人为地扩充数据集。我们采用两种不同形式的数据增强,这两种形式都只需极少的计算就能从原始图像生成变换后的图像,因此变换后的图像无需存储在磁盘上。在我们的实现中,变换后的图像由CPU上的Python代码生成,与此同时GPU正在训练上一批图像。因此,这些数据增强方案实际上不增加计算开销。
The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224×224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five 224 × 224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network’s softmax layer on the ten patches.
第一种数据增强形式包括生成图像平移和水平翻转。为此,我们从256×256的图像中提取随机的224×224图块(及其水平翻转),并在这些提取出的图块上训练网络。这使我们的训练集规模扩大了2048倍,当然,由此得到的训练样本是高度相互依赖的。如果没有这种方案,我们的网络会出现严重的过拟合,从而迫使我们使用小得多的网络。在测试时,网络提取五个224×224图块(四个角上的图块和中央图块)及其水平翻转(因此共十个图块),并对网络的softmax层在这十个图块上做出的预测取平均来进行预测。
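A sketch of both the training-time random crops/reflections and the test-time ten-crop averaging described above (illustrative only; images are assumed to be 256×256×3 NumPy arrays):

```python
import numpy as np
rng = np.random.default_rng(0)

def random_crop_and_flip(img):
    """Training: a random 224x224 patch, horizontally reflected half the time.
    32*32 positions * 2 reflections = 2048 variants per image."""
    top = rng.integers(0, 256 - 224 + 1)
    left = rng.integers(0, 256 - 224 + 1)
    patch = img[top:top + 224, left:left + 224]
    return patch[:, ::-1] if rng.random() < 0.5 else patch

def ten_crops(img):
    """Test: four corner patches, the center patch, and their reflections;
    the softmax predictions over these ten patches are averaged."""
    offsets = [(0, 0), (0, 32), (32, 0), (32, 32), (16, 16)]
    crops = [img[t:t + 224, l:l + 224] for t, l in offsets]
    crops += [c[:, ::-1] for c in crops]
    return np.stack(crops)   # shape (10, 224, 224, 3)
```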
The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$ we add the following quantity:

$$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3]\,[\alpha_1 \lambda_1, \alpha_2 \lambda_2, \alpha_3 \lambda_3]^T$$
第二种数据增强形式包括改变训练图像中RGB通道的强度。具体来说,我们对整个ImageNet训练集中的RGB像素值集合执行PCA。对每张训练图像,我们加上所求得主成分的若干倍,其幅度与相应特征值成正比,再乘以一个从均值为0、标准差为0.1的高斯分布中抽取的随机变量。因此,我们向每个RGB图像像素 $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$ 添加如下数量:

$$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3]\,[\alpha_1 \lambda_1, \alpha_2 \lambda_2, \alpha_3 \lambda_3]^T$$
where $\mathbf{p}_i$ and $\lambda_i$ are the $i$th eigenvector and eigenvalue of the 3 × 3 covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is the aforementioned random variable. Each $\alpha_i$ is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.
其中,$\mathbf{p}_i$ 和 $\lambda_i$ 分别是RGB像素值的3×3协方差矩阵的第 $i$ 个特征向量和特征值,$\alpha_i$ 是上述随机变量。对于一张特定训练图像的所有像素,每个 $\alpha_i$ 只抽取一次,直到该图像再次被用于训练时才重新抽取。该方案近似地刻画了自然图像的一个重要性质:物体的身份对光照强度和颜色的变化是不变的。此方案将top-1错误率降低了1%以上。
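A minimal sketch of this color jitter, assuming the training-set RGB pixels are available as a `(num_pixels, 3)` array (random stand-in data below, not real ImageNet statistics):

```python
import numpy as np
rng = np.random.default_rng(0)

pixels = rng.standard_normal((10000, 3))   # stand-in for training-set RGB values
cov = np.cov(pixels, rowvar=False)         # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # columns of eigvecs are the p_i

def pca_color_jitter(img):
    """Adds [p1 p2 p3][a1*l1, a2*l2, a3*l3]^T to every pixel of `img`,
    with each a_i drawn once per presentation from N(0, 0.1)."""
    alphas = rng.normal(0.0, 0.1, size=3)
    shift = eigvecs @ (alphas * eigvals)   # one RGB offset, shape (3,)
    return img + shift                     # broadcast over H x W x 3
```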
4.2 Dropout
Combining the predictions of many different models is a very successful way to reduce test errors [1, 3], but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called “dropout” [10], consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are “dropped out” in this way do not contribute to the forward pass and do not participate in backpropagation. So every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a reasonable approximation to taking the geometric mean of the predictive distributions produced by the exponentially-many dropout networks.
组合许多不同模型的预测是降低测试误差的一种非常成功的方法[1, 3],但对于本来就需要训练好几天的大型神经网络来说,这样做的代价似乎太高。不过,有一种非常高效的模型组合方式,训练开销仅约为原来的两倍。这种最近提出的技术称为“dropout”[10],其做法是以0.5的概率把每个隐藏神经元的输出置零。以这种方式被“丢弃”的神经元既不参与前向传播,也不参与反向传播。因此,每次送入一个输入时,神经网络都会采样出一个不同的结构,但所有这些结构共享权重。这项技术减少了神经元之间复杂的协同适应,因为一个神经元不能依赖特定的其他神经元的存在,因而被迫学习更鲁棒的特征,这些特征在与其他神经元的许多不同随机子集结合时都有用。在测试时,我们使用所有神经元,但将它们的输出乘以0.5,这是对指数级多个dropout网络所产生预测分布取几何平均的一个合理近似。
We use dropout in the first two fully-connected layers of Figure 2. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.
我们在图2的前两个全连接层中使用dropout。如果没有dropout,我们的网络会表现出严重的过拟合。使用dropout大约会使收敛所需的迭代次数增加一倍。
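A minimal sketch of the train/test behaviour of dropout as described above (assuming a drop probability of 0.5 and activations stored in a NumPy array):

```python
import numpy as np
rng = np.random.default_rng(0)

def dropout_train(h, p=0.5):
    """Training: zero each hidden activation with probability p; dropped
    units take no part in the forward or backward pass."""
    return h * (rng.random(h.shape) >= p)

def dropout_test(h, p=0.5):
    """Test: keep every unit but scale its output by (1 - p), approximating
    the geometric mean over the exponentially many thinned networks."""
    return h * (1.0 - p)
```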
5. Details of learning
We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model’s training error. The update rule for weight $w$ was

$$v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \epsilon \cdot w_i - \epsilon \cdot \left\langle \frac{\partial L}{\partial w}\Big|_{w_i} \right\rangle_{D_i}, \qquad w_{i+1} := w_i + v_{i+1}$$

where $i$ is the iteration index, $v$ is the momentum variable, $\epsilon$ is the learning rate, and $\left\langle \frac{\partial L}{\partial w}|_{w_i} \right\rangle_{D_i}$ is the average over the $i$th batch $D_i$ of the derivative of the objective with respect to $w$, evaluated at $w_i$.
我们使用随机梯度下降训练模型,批次大小为128个样本,动量为0.9,权重衰减为0.0005。我们发现,这一少量的权重衰减对模型的学习很重要。换句话说,这里的权重衰减不仅仅是一个正则化器:它还能降低模型的训练误差。权重 $w$ 的更新规则为

$$v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \epsilon \cdot w_i - \epsilon \cdot \left\langle \frac{\partial L}{\partial w}\Big|_{w_i} \right\rangle_{D_i}, \qquad w_{i+1} := w_i + v_{i+1}$$

其中,$i$ 是迭代序号,$v$ 是动量变量,$\epsilon$ 是学习率,$\left\langle \frac{\partial L}{\partial w}|_{w_i} \right\rangle_{D_i}$ 是目标函数关于 $w$ 的导数在 $w_i$ 处求值后,在第 $i$ 批样本 $D_i$ 上的平均值。
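A direct transcription of this update rule (a sketch only; `grad` stands for the batch-averaged gradient of the objective):

```python
import numpy as np

def sgd_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One step of: v <- 0.9*v - 0.0005*lr*w - lr*grad ;  w <- w + v."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    w = w + v
    return w, v

w = np.zeros(10)               # toy weights
v = np.zeros_like(w)           # momentum buffer, initialized to zero
w, v = sgd_step(w, v, grad=np.ones_like(w), lr=0.01)
```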
We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers, as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron biases in the remaining layers with the constant 0.
我们从零均值高斯分布(标准差为0.01)初始化每一层的权重。我们使用常数1初始化第二,第四和第五卷积层以及完全连接的隐藏层中的神经元偏差。此初始化通过为ReLU提供正输入来加速学习的早期阶段。我们用常数0初始化其余层中的神经元偏差。
We used an equal learning rate for all layers, which we adjusted manually throughout training. The heuristic which we followed was to divide the learning rate by 10 when the validation error rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and reduced three times prior to termination. We trained the network for roughly 90 cycles through the training set of 1.2 million images, which took five to six days on two NVIDIA GTX 580 3GB GPUs.
我们对所有层使用相同的学习率,并在整个训练过程中手动调整。我们遵循的启发式规则是:当验证错误率在当前学习率下不再改善时,就将学习率除以10。学习率初始化为0.01,并在终止前降低了三次。我们用120万张图像的训练集对网络训练了大约90个周期,在两块NVIDIA GTX 580 3GB GPU上花了五到六天。
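The manual schedule could be approximated by a simple check like the following (a hypothetical helper, since the paper adjusted the rate by hand; the `patience` of one epoch is an assumption):

```python
def maybe_decay_learning_rate(lr, val_errors, patience=1):
    """Divide the learning rate by 10 when the validation error has not
    improved over the last `patience` epochs."""
    if len(val_errors) > patience and \
       min(val_errors[-patience:]) >= min(val_errors[:-patience]):
        return lr / 10.0
    return lr

lr = 0.01   # initial learning rate used in the paper
lr = maybe_decay_learning_rate(lr, val_errors=[0.40, 0.39, 0.39])  # -> 0.001
```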
6. Results
Our results on ILSVRC-2010 are summarized in Table 1. Our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%. The best performance achieved during the ILSVRC-2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features [2], and since then the best published results are 45.7% and 25.7% with an approach that averages the predictions of two classifiers trained on Fisher Vectors (FVs) computed from two types of densely-sampled features [24].
表1总结了我们在ILSVRC-2010上的结果。我们的网络取得了37.5%的top-1和17.0%的top-5测试集错误率。在ILSVRC-2010竞赛期间取得的最好成绩是47.1%和28.2%,其方法是对六个在不同特征上训练的稀疏编码模型的预测取平均[2];此后发表的最好结果是45.7%和25.7%,其方法是对两个在由两类密集采样特征计算出的Fisher向量(FV)上训练的分类器的预测取平均[24]。
We also entered our model in the ILSVRC-2012 competition and report our results in Table 2. Since the ILSVRC-2012 test set labels are not publicly available, we cannot report test error rates for all the models that we tried. In the remainder of this paragraph, we use validation and test error rates interchangeably because in our experience they do not differ by more than 0.1% (see Table 2). The CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%. Training one CNN, with an extra sixth convolutional layer over the last pooling layer, to classify the entire ImageNet Fall 2011 release (15M images, 22K categories), and then “fine-tuning” it on ILSVRC-2012 gives an error rate of 16.6%. Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features [7].
我们也将模型提交到了ILSVRC-2012竞赛,结果见表2。由于ILSVRC-2012的测试集标签并不公开,我们无法报告所有尝试过的模型的测试错误率。在本段其余部分,我们将验证错误率和测试错误率互换使用,因为根据我们的经验,二者相差不超过0.1%(见表2)。本文描述的CNN取得了18.2%的top-5错误率。对五个相似CNN的预测取平均得到16.4%的错误率。训练一个在最后一个池化层之上增加了第六个卷积层的CNN,用它对整个ImageNet 2011年秋季版本(1500万张图像,2.2万个类别)进行分类,再在ILSVRC-2012上进行“微调”,得到16.6%的错误率。将两个在整个2011年秋季版本上预训练的CNN与上述五个CNN的预测取平均,得到15.3%的错误率。竞赛第二名的方法取得了26.2%的错误率,其做法是对多个在由不同类型的密集采样特征计算出的FV上训练的分类器的预测取平均[7]。
Finally, we also report our error rates on the Fall 2009 version of ImageNet with 10,184 categories and 8.9 million images. On this dataset we follow the convention in the literature of using half of the images for training and half for testing. Since there is no established test set, our split necessarily differs from the splits used by previous authors, but this does not affect the results appreciably. Our top-1 and top-5 error rates on this dataset are 67.4% and 40.9%, attained by the net described above but with an additional, sixth convolutional layer over the last pooling layer. The best published results on this dataset are 78.1% and 60.9% [19].
最后,我们还报告了在2009年秋季版ImageNet上的错误率,该版本包含10,184个类别和890万张图像。在这个数据集上,我们遵循文献中的惯例,一半图像用于训练,一半用于测试。由于不存在既定的测试集,我们的划分必然与以前作者使用的划分不同,但这不会明显影响结果。我们在该数据集上的top-1和top-5错误率分别为67.4%和40.9%,这是由上述网络在最后一个池化层之上再加一个第六卷积层后取得的。在该数据集上已发表的最好结果是78.1%和60.9%[19]。
6.1 Qualitative Evaluations
Figure 3 shows the convolutional kernels learned by the network’s two data-connected layers. The network has learned a variety of frequency- and orientation-selective kernels, as well as various colored blobs. Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).
图3显示了网络中两个与数据相连的层所学到的卷积核。网络学到了各种频率选择性和方向选择性的卷积核,以及各种彩色斑块。请注意两块GPU表现出的分工,这是第3.5节中描述的受限连接的结果:GPU 1上的卷积核在很大程度上与颜色无关,而GPU 2上的卷积核在很大程度上是颜色相关的。这种分工在每次运行中都会出现,并且与任何特定的随机权重初始化无关(只差GPU编号的交换)。
图4 :(左)八个ILSVRC-2010测试图像和我们的模型认为最可能的五个标签。正确的标签写在每个图像下,并且分配给正确标签的概率也用红色条显示(如果它恰好位于前5位)。 (右)第一列中的五张ILSVRC-2010测试图像。其余的列显示了六个训练图像,这些图像在最后一个隐藏层中生成特征矢量,这些特征矢量与测试图像的特征矢量之间的欧式距离最小。
In the left panel of Figure 4 we qualitatively assess what the network has learned by computing its top-5 predictions on eight test images. Notice that even off-center objects, such as the mite in the top-left, can be recognized by the net. Most of the top-5 labels appear reasonable. For example, only other types of cat are considered plausible labels for the leopard. In some cases (grille, cherry) there is genuine ambiguity about the intended focus of the photograph.
在图4的左侧,我们通过计算网络在八张测试图像上的前五个预测,定性地评估网络学到了什么。请注意,即使是偏离中心的物体,例如左上角的螨虫,也能被网络识别。大多数top-5标签看起来都是合理的。例如,只有其他种类的猫科动物被认为是豹的合理标签。在某些情况下(如格栅、樱桃),照片的预期关注点确实存在歧义。
Another way to probe the network’s visual knowledge is to consider the feature activations induced by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation vectors with a small Euclidean separation, we can say that the higher levels of the neural network consider them to be similar. Figure 4 shows five images from the test set and the six images from the training set that are most similar to each of them according to this measure. Notice that at the pixel level, the retrieved training images are generally not close in L2 to the query images in the first column. For example, the retrieved dogs and elephants appear in a variety of poses. We present the results for many more test images in the supplementary material.
探究网络视觉知识的另一种方法,是考察图像在最后的4096维隐藏层中引起的特征激活。如果两张图像产生的特征激活向量之间的欧氏距离很小,我们可以说神经网络的较高层认为它们是相似的。图4显示了测试集中的五张图像,以及按这一度量与它们各自最相似的六张训练集图像。请注意,在像素层面,检索到的训练图像在L2距离上通常与第一列中的查询图像并不接近。例如,检索到的狗和大象以各种姿势出现。我们在补充材料中给出了更多测试图像的结果。
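A sketch of this retrieval-by-feature-distance idea (the 4096-dimensional activations here are random stand-ins, not real network outputs):

```python
import numpy as np
rng = np.random.default_rng(0)

query = rng.standard_normal(4096)                # last-hidden-layer activations of a query image
train_feats = rng.standard_normal((1000, 4096))  # activations for a pool of training images

# Training images whose feature vectors are closest in Euclidean distance are
# the ones the upper layers of the network consider most similar.
dists = np.linalg.norm(train_feats - query, axis=1)
print(np.argsort(dists)[:6])                     # indices of the six nearest images
```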
Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors to short binary codes. This should produce a much better image retrieval method than applying autoencoders to the raw pixels [14], which does not make use of image labels and hence has a tendency to retrieve images with similar patterns of edges, whether or not they are semantically similar.
通过使用两个4096维实值向量之间的欧几里德距离来计算相似度效率不高,但是可以通过训练自动编码器将这些向量压缩为短二进制代码来提高效率。与将自动编码器应用于原始像素相比,这应该产生一种更好的图像检索方法[14],该方法不使用图像标签,因此有检索具有相似边缘模式的图像的趋势,而无论它们在语义上是否相似。
7. Discussion
Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.
我们的结果表明,大型的深度卷积神经网络能够使用纯监督学习在具有高度挑战性的数据集上实现创纪录的结果。值得注意的是,如果移除单个卷积层,我们的网络性能就会下降。例如,删除任何中间层都会导致网络的top-1性能损失约2%。因此深度对于实现我们的结果确实很重要。
To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.
为了简化实验,我们没有使用任何无监督的预训练,尽管我们预计它会有所帮助,尤其是当我们获得足够的计算能力来显著增大网络规模,而带标签数据量却没有相应增加时。到目前为止,随着我们把网络做得更大、训练得更久,结果一直在改善,但要达到人类视觉系统的下颞叶(infero-temporal)通路的水平,我们还差很多个数量级。最终,我们希望在视频序列上使用非常大且非常深的卷积网络,因为视频的时间结构提供了非常有用的信息,而这些信息在静态图像中缺失或远不明显。