Attention is all you need

Abstract

Transformer: no recurrence and no convolutions; based entirely on attention

Introduction

Recurrent models are seq2seq models: h_t = f(position, h_{t-1}); the sequential dependency prevents parallel computation.
RNNs forget long-range dependencies; the Transformer instead averages attention-weighted positions.
Self-attention: relating different positions of a single sequence
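A minimal NumPy sketch of the contrast above (toy sizes `T`, `d` and random weights are illustrative assumptions, not the paper's setup): the recurrent update must run step by step, while self-attention relates all positions in one matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 4                      # toy sequence length and hidden size (assumed)
x = rng.normal(size=(T, d))      # toy input sequence

# Recurrent model: h_t = f(x_t, h_{t-1}) -- each step needs the previous one,
# so the T steps are inherently sequential.
W = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(T):
    h = np.tanh(x[t] + h @ W)

# Self-attention: every position attends to every other position at once,
# so all T outputs come from parallel matrix operations.
scores = x @ x.T / np.sqrt(d)                                   # (T, T) relevance
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ x                # attention-weighted average of positions
```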

Model Architecture

  1. The encoder is composed of a stack of N = 6 identical layers; each layer consists of a multi-head self-attention sub-layer and a fully connected feed-forward network.
  2. The decoder is also composed of a stack of N = 6 identical layers, each with 3 sub-layers: masked multi-head self-attention, multi-head attention over the encoder output, and a fully connected feed-forward network.
  3. Position-wise feed-forward network: FFN(x) = max(0, xW1 + b1)W2 + b2
  4. The decoder output is converted to predicted next-token probabilities.
  5. To make use of the order of the sequence, positional information is added to the embeddings.
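Points 3 and 5 above can be sketched in NumPy. The FFN is the formula as written (ReLU between two linear maps); the positional encoding follows the paper's sinusoidal form PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...). Matrix shapes here are placeholders, not the paper's d_model = 512, d_ff = 2048.

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2 -- applied identically at each position
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def positional_encoding(T, d_model):
    # Sinusoidal positional encoding; even columns get sin, odd columns get cos.
    pos = np.arange(T)[:, None]                      # (T, 1) positions
    i = np.arange(0, d_model, 2)[None, :]            # (1, d_model/2) dim indices
    angles = pos / np.power(10000.0, i / d_model)    # (T, d_model/2)
    pe = np.zeros((T, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```

The encoding is added element-wise to the input embeddings, so it must have the same width d_model.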

Attention

Attention maps a query and a set of key-value pairs to an output; the relevance between query and keys can be computed with a dot product or other compatibility functions.
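A minimal sketch of the dot-product variant (with the paper's 1/sqrt(d_k) scaling; the softmax normalization here is written out by hand):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # query-key relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)                       # rows sum to 1
    return w @ V                                             # weighted sum of values
```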
Attention is all you need 2020-05-15
multi-head: the queries, keys, and values are linearly projected h times to dk, dk, and dv dimensions respectively, each time with different learned linear projections
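A sketch of that per-head projection (the `params` dict layout is an assumption for illustration; the paper fuses these loops into batched tensor operations):

```python
import numpy as np

def multi_head_attention(x, params):
    # params["heads"]: list of (W_Q, W_K, W_V), one learned projection triple
    # per head; params["W_O"]: final output projection back to d_model.
    heads = []
    for W_Q, W_K, W_V in params["heads"]:
        Q, K, V = x @ W_Q, x @ W_K, x @ W_V      # project to d_k, d_k, d_v
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        heads.append(w @ V)                      # (T, d_v) output per head
    # Concatenate the h head outputs, then project back to model width.
    return np.concatenate(heads, axis=-1) @ params["W_O"]
```

Each head sees a different learned projection, so different heads can attend to different kinds of relations between positions.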
## Diagram: self-attention, multi-head, and adding positional information
## Parallel computation and understanding the Transformer
