Abstract

This paper is the first to apply an attention mechanism to machine translation. Unlike traditional statistical machine translation (SMT), neural machine translation (NMT) aims to build a single neural network that maximizes translation performance. Recently proposed NMT models are based on an encoder-decoder architecture: the source sentence is encoded into a fixed-length vector, from which a decoder generates the target sentence. This paper conjectures that encoding the source sentence into a fixed-length vector is a bottleneck for improving translation performance, and instead proposes letting the model automatically soft-search for the parts of the source sentence that are relevant to the target word being generated. Experiments confirm that this approach achieves strong results on English-to-French translation.

1 Introduction

Neural machine translation (NMT) was proposed by Kalchbrenner and Blunsom (2013), Sutskever et al. (2014), and Cho et al. (2014b). Whereas traditional phrase-based translation systems consist of many sub-components that are tuned separately, NMT builds one large neural network that maps the source sentence directly to the target sentence.

Most NMT models are based on the encoder-decoder architecture: the source sentence is encoded into a fixed-length vector, and the decoder then generates the target sentence from that vector. A potential problem with this approach is that the network must compress all the information of a sentence into a single fixed-length vector. This is manageable for short sentences, but becomes very difficult for long ones. Cho et al. (2014b) found that the performance of a basic encoder-decoder model indeed degrades rapidly as the length of the input sentence increases.

To address this problem, the paper introduces an extension of the encoder-decoder model that learns to align and translate jointly. Each time the model generates a word of the translation, it soft-searches for a set of positions in the source sentence where the most relevant information is concentrated. It then predicts the next target word from the context associated with these source positions together with the previously generated target words.

Because it aligns and translates jointly, the model no longer has to squash all the information of a source sentence into a single fixed-length vector. The advantage is especially pronounced on long sentences. On the English-to-French translation task, this single model already approaches the performance of the conventional phrase-based system (until then, NMT had consistently fallen short of SMT).

2 Background: Neural Machine Translation

From a probabilistic point of view, translation is equivalent to finding a target sentence y that maximizes the conditional probability of y given a source sentence x.

In formula form:

$$\hat{y} = \arg\max_{y} \, p(y \mid x)$$

In NMT, the model's parameters are fitted on a parallel corpus of sentence pairs so that this conditional probability is maximized during translation.

2.1 RNN Encoder-Decoder

This section briefly describes the underlying framework, the RNN Encoder-Decoder proposed by Cho et al. (2014a) and Sutskever et al. (2014), on top of which the joint alignment-and-translation model is built.

  • Encoding: in the encoder-decoder framework, the encoder reads the source sentence, a sequence of vectors $x = (x_1, \ldots, x_{T_x})$, into a context vector $c$. The most common approach is

$$h_t = f(x_t, h_{t-1}), \qquad c = q(\{h_1, \ldots, h_{T_x}\})$$

    where $h_t$ is the hidden state at time $t$, $c$ is a vector generated from the sequence of hidden states, and $f$ and $q$ are nonlinear functions. For example, Sutskever et al. (2014) use an LSTM as $f$ and set $q(\{h_1, \ldots, h_T\}) = h_T$.

  • Decoding: the decoder predicts the next word $y_{t'}$ given the context vector $c$ and all previously predicted words $\{y_1, \ldots, y_{t'-1}\}$. That is, it defines a probability over the translation $y = (y_1, \ldots, y_{T_y})$ by decomposing the joint probability into ordered conditionals:

$$p(y) = \prod_{t=1}^{T} p(y_t \mid \{y_1, \ldots, y_{t-1}\}, c)$$

    Each conditional probability is modeled as $p(y_t \mid \{y_1, \ldots, y_{t-1}\}, c) = g(y_{t-1}, s_t, c)$, where $g$ is a nonlinear, potentially multi-layered function and $s_t$ is the hidden state of the RNN.
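A minimal NumPy sketch of the encoder described above: a tanh RNN stands in as a hypothetical choice of $f$, and $q$ simply returns the last hidden state, as in Sutskever et al. All weights and dimensions here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h = 4, 8                              # illustrative input / hidden sizes
W = rng.normal(scale=0.1, size=(d_h, d_x))   # input-to-hidden weights (assumed)
U = rng.normal(scale=0.1, size=(d_h, d_h))   # hidden-to-hidden weights (assumed)

def encode(xs):
    """Run h_t = f(x_t, h_{t-1}) over the source; here f is a plain tanh RNN."""
    h = np.zeros(d_h)
    states = []
    for x in xs:
        h = np.tanh(W @ x + U @ h)           # h_t = f(x_t, h_{t-1})
        states.append(h)
    return states

source = [rng.normal(size=d_x) for _ in range(5)]  # toy "source sentence"
states = encode(source)
c = states[-1]                               # q({h_1..h_T}) = h_T
print(c.shape)                               # (8,)
```

The fixed-length bottleneck criticized by the paper is visible here: whatever the sentence length, everything the decoder will see is the single vector `c`.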

3 Learning to Align and Translate

The paper's model uses a bidirectional RNN as the encoder, together with a decoder that emulates searching through the source sentence while decoding the translation.

(Figure 1 of the paper: a graphical illustration of the proposed model generating the $t$-th target word $y_t$ given a source sentence $(x_1, \ldots, x_T)$.)

3.1 Decoder

Here the conditional probability is defined as

$$p(y_i \mid y_1, \ldots, y_{i-1}, x) = g(y_{i-1}, s_i, c_i)$$

where $g$ is a nonlinear, potentially multi-layered function.

$s_i$ is the RNN hidden state at time $i$, computed as $s_i = f(s_{i-1}, y_{i-1}, c_i)$. Concretely (Appendix A of the paper), $s_i = (1 - z_i) \circ s_{i-1} + z_i \circ \tilde{s}_i$, where $\circ$ denotes element-wise multiplication (not a dot product).

The candidate state is computed as

$$\tilde{s}_i = \tanh\!\left(W E y_{i-1} + U\,[r_i \circ s_{i-1}] + C c_i\right)$$

$E y_{i-1}$ is the word embedding of the word $y_{i-1}$, where $E$ is the embedding matrix.

$r_i$ is the output of the reset gate, which controls how much of the previous state is reset: $r_i = \sigma(W_r E y_{i-1} + U_r s_{i-1} + C_r c_i)$

$z_i$ is the update gate, which allows each hidden unit to decide how much of its previous content to keep: $z_i = \sigma(W_z E y_{i-1} + U_z s_{i-1} + C_z c_i)$

where $\sigma(\cdot)$ is the logistic sigmoid function.
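The gated decoder update can be sketched in NumPy as follows. The weight matrices and the embedding/state/context dimensions are random placeholders assumed for illustration; only the gating structure mirrors the equations above.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
d_e, d_s, d_c = 6, 8, 8          # illustrative embedding / state / context sizes

def triple():
    """One (W*, U*, C*) weight triple, filled with placeholder values."""
    return (rng.normal(scale=0.1, size=(d_s, d_e)),
            rng.normal(scale=0.1, size=(d_s, d_s)),
            rng.normal(scale=0.1, size=(d_s, d_c)))

W, U, C = triple()               # candidate state s~_i
Wr, Ur, Cr = triple()            # reset gate r_i
Wz, Uz, Cz = triple()            # update gate z_i

def decoder_step(Ey_prev, s_prev, c_i):
    """One gated update s_i = f(s_{i-1}, y_{i-1}, c_i)."""
    r = sigmoid(Wr @ Ey_prev + Ur @ s_prev + Cr @ c_i)            # reset gate
    z = sigmoid(Wz @ Ey_prev + Uz @ s_prev + Cz @ c_i)            # update gate
    s_tilde = np.tanh(W @ Ey_prev + U @ (r * s_prev) + C @ c_i)   # candidate
    return (1.0 - z) * s_prev + z * s_tilde   # element-wise (∘) interpolation

s = decoder_step(rng.normal(size=d_e), np.zeros(d_s), rng.normal(size=d_c))
print(s.shape)                   # (8,)
```

Note that `r * s_prev` and `(1 - z) * s_prev` are element-wise products, matching the $\circ$ in the equations.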

 

The context vector $c_i$ depends on a sequence of annotations $(h_1, \ldots, h_{T_x})$ to which the encoder maps the input sentence. Each annotation $h_i$ contains information about the whole input sequence, with a strong focus on the parts surrounding the $i$-th word.

$$c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j$$

$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}$$

where $e_{ij} = a(s_{i-1}, h_j)$ is an alignment model scoring how well the inputs around position $j$ and the output at position $i$ match. The weight $\alpha_{ij}$ reflects the importance of annotation $h_j$: the larger $\alpha_{ij}$, the more the model attends to that position, which is exactly the attention mechanism.
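The two formulas above can be sketched end to end in NumPy. As the alignment model $a$, this sketch assumes a small additive scorer $a(s_{i-1}, h_j) = v^\top \tanh(W_a s_{i-1} + U_a h_j)$ (the paper parametrizes $a$ as a feedforward network; the weights and dimensions here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
d_h, d_s, d_a = 8, 8, 10                     # illustrative dimensions

# Assumed additive alignment model parameters.
Wa = rng.normal(scale=0.1, size=(d_a, d_s))
Ua = rng.normal(scale=0.1, size=(d_a, d_h))
v = rng.normal(scale=0.1, size=d_a)

def context(s_prev, annotations):
    """Compute c_i = sum_j alpha_ij h_j for one decoder step."""
    # Scores e_ij = a(s_{i-1}, h_j) for every source position j.
    e = np.array([v @ np.tanh(Wa @ s_prev + Ua @ h) for h in annotations])
    alpha = np.exp(e - e.max())              # numerically stable softmax
    alpha /= alpha.sum()                     # attention weights alpha_ij
    c = sum(a * h for a, h in zip(alpha, annotations))  # expected annotation
    return c, alpha

hs = [rng.normal(size=d_h) for _ in range(5)]   # toy encoder annotations
c, alpha = context(np.zeros(d_s), hs)
print(alpha.sum())                           # weights sum to 1
```

Because the weights are a softmax over all source positions, $c_i$ is an expectation over annotations rather than a hard selection; this is the "soft-search" of the title.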

3.2 Encoder: Bidirectional RNN for Annotating Sequences

An ordinary RNN reads the input sequence strictly from first symbol to last. A BiRNN instead considers both the forward and the reverse order of the sequence: it consists of a forward RNN and a backward RNN. The forward RNN reads the input in order and computes the forward hidden states $(\overrightarrow{h}_1, \ldots, \overrightarrow{h}_{T_x})$; the backward RNN reads the input in reverse and computes the backward hidden states $(\overleftarrow{h}_1, \ldots, \overleftarrow{h}_{T_x})$.

The annotation for each word $x_j$ is therefore the concatenation $h_j = \big[\overrightarrow{h}_j^\top ; \overleftarrow{h}_j^\top\big]^\top$, so that $h_j$ summarizes both the preceding and the following words.
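A NumPy sketch of the bidirectional annotation step, again with plain tanh RNNs standing in for the paper's gated units and with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
d_x, d_h = 4, 8                              # illustrative sizes

# Separate (assumed) weights for the forward and backward RNNs.
Wf = rng.normal(scale=0.1, size=(d_h, d_x))
Uf = rng.normal(scale=0.1, size=(d_h, d_h))
Wb = rng.normal(scale=0.1, size=(d_h, d_x))
Ub = rng.normal(scale=0.1, size=(d_h, d_h))

def run(xs, W, U):
    """Simple tanh RNN returning the hidden state at every position."""
    h, out = np.zeros(d_h), []
    for x in xs:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return out

def annotate(xs):
    fwd = run(xs, Wf, Uf)                    # forward states over x_1..x_T
    bwd = run(xs[::-1], Wb, Ub)[::-1]        # backward states, realigned to j
    # h_j = [forward; backward] concatenation.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

hs = annotate([rng.normal(size=d_x) for _ in range(5)])
print(hs[0].shape)                           # (16,)
```

Each annotation is twice the hidden size, since it stacks the forward and backward states for the same position.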
