The paper emphasizes combining syntactic and semantic information for the NLI (natural language inference) task.
In many problems, syntax and semantics interact closely, as generally phrased in the slogan “the syntax and the semantics work together in tandem” (Barker and Jacobson, 2007), among others.

The model architecture is shown in the figure below:

[Figure: model architecture of "Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference" (ACL 2017)]

Given two sentences $a = (a_1, \dots, a_{l_a})$ and $b = (b_1, \dots, b_{l_b})$, where $a$ is the premise and $b$ is the hypothesis, each $a_i$ and $b_j$ is an $l$-dimensional word vector, which can be obtained from pretrained embeddings. The goal is to predict a label $y$ indicating the logical relation between $a$ and $b$.

A BiLSTM processes each sequence token by token; the hidden state $\bar{a}_i$ at position $i$ of the premise and $\bar{b}_j$ at position $j$ of the hypothesis are computed as:

$$\bar{a}_i = \mathrm{BiLSTM}(a, i), \quad i \in \{1, \dots, l_a\}$$

$$\bar{b}_j = \mathrm{BiLSTM}(b, j), \quad j \in \{1, \dots, l_b\}$$
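As a minimal sketch of the bidirectional encoding (not the paper's implementation: the embedding size, hidden size, and random weights below are toy assumptions), a BiLSTM runs one LSTM left-to-right and another right-to-left, then concatenates the two hidden states at each position:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 6, 4  # toy embedding / hidden sizes (assumed, not from the paper)

def make_params():
    # Weights for the four gates packed as [input, forget, output, candidate]
    return (rng.standard_normal((4 * HID, EMB)) * 0.1,   # input-to-hidden W
            rng.standard_normal((4 * HID, HID)) * 0.1,   # hidden-to-hidden U
            np.zeros(4 * HID))                           # bias b

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step on a single token vector x."""
    z = W @ x + U @ h + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:HID]), sig(z[HID:2*HID]), sig(z[2*HID:3*HID])
    g = np.tanh(z[3*HID:])
    c = f * c + i * g
    return o * np.tanh(c), c

def bilstm(seq, fwd_params, bwd_params):
    def run(xs, params):
        h, c, out = np.zeros(HID), np.zeros(HID), []
        for x in xs:
            h, c = lstm_step(x, h, c, *params)
            out.append(h)
        return out
    fwd = run(seq, fwd_params)
    bwd = run(seq[::-1], bwd_params)[::-1]
    # The hidden state at position i concatenates forward and backward states
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])

a = [rng.standard_normal(EMB) for _ in range(3)]  # premise word vectors
a_bar = bilstm(a, make_params(), make_params())   # shape (3, 2 * HID)
```

Each row of `a_bar` plays the role of $\bar{a}_i$: it encodes the word at position $i$ together with its left and right context.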

To capture the relatedness between the premise and the hypothesis, an attention weight matrix is built, with elements $e_{ij}$. Concretely, for the hidden state of a word in the premise, $\bar{a}_i$ (which already encodes the word itself and its context), its semantic relatedness to the hypothesis is defined as:

$$e_{ij} = \bar{a}_i^\top \bar{b}_j$$

$\tilde{a}_i$ is a weighted sum of $\{\bar{b}_j\}_{j=1}^{l_b}$; that is, the content of $\{\bar{b}_j\}_{j=1}^{l_b}$ that is relevant to $\bar{a}_i$ is collected into $\tilde{a}_i$:

$$\tilde{a}_i = \sum_{j=1}^{l_b} \frac{\exp(e_{ij})}{\sum_{k=1}^{l_b} \exp(e_{ik})} \bar{b}_j$$

Equation 13 in the paper can be read analogously: $\tilde{b}_j$ is the weighted sum of $\{\bar{a}_i\}_{i=1}^{l_a}$, with the attention weights normalized over $i$ instead of $j$.
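The soft alignment above can be sketched in NumPy (the hidden states below are toy random values standing in for BiLSTM outputs; sizes are assumptions, not from the paper):

```python
import numpy as np

# Hypothetical hidden states: premise of length 3, hypothesis of length 4,
# hidden size 5 (toy values standing in for BiLSTM outputs).
rng = np.random.default_rng(0)
a_bar = rng.standard_normal((3, 5))  # {a-bar_i}
b_bar = rng.standard_normal((4, 5))  # {b-bar_j}

# Attention weight matrix: e_ij = a-bar_i . b-bar_j
e = a_bar @ b_bar.T  # shape (3, 4)

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    ex = np.exp(x)
    return ex / ex.sum(axis=axis, keepdims=True)

# a-tilde_i: weighted sum of {b-bar_j}, weights normalized over j (Eq. 12)
a_tilde = softmax(e, axis=1) @ b_bar    # shape (3, 5)
# b-tilde_j: weighted sum of {a-bar_i}, weights normalized over i (Eq. 13)
b_tilde = softmax(e, axis=0).T @ a_bar  # shape (4, 5)
```

Note that the same matrix `e` is normalized along different axes for the two directions: over the hypothesis positions for $\tilde{a}_i$, and over the premise positions for $\tilde{b}_j$.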
