1. How Word2vec works

CBOW (Continuous Bag-of-Words model)

CBOW predicts the center word from the words in its surrounding context window: the context word vectors are averaged (or summed) and used to score the center word over the whole vocabulary.

[Figure: CBOW model architecture]

Skip-Gram model

Skip-Gram is the mirror image: it uses the center word to predict each word in its context window. It trains more slowly than CBOW but usually represents infrequent words better.

[Figure: Skip-Gram model architecture]
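Both architectures learn the embeddings as a by-product of a simple prediction task over a sliding window. Taking Skip-Gram as the example (the sg=1 setting used in the code below), training maximizes the average log-probability of the context words given the center word:

\frac{1}{T}\sum_{t=1}^{T}\;\sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t), \qquad p(w_O \mid w_I) = \frac{\exp\left(u_{w_O}^{\top} v_{w_I}\right)}{\sum_{w=1}^{V} \exp\left(u_w^{\top} v_{w_I}\right)}

Here v and u are the input (center) and output (context) vectors, c is the window size, T the corpus length, and V the vocabulary size. Because the softmax denominator runs over the entire vocabulary, implementations such as gensim approximate it with hierarchical softmax or negative sampling.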

2. Word2vec word vectors in practice

import pickle

import gensim
import numpy as np
import pandas as pd


# Load the competition training data; nrows=500 keeps this demo small and fast.
df = pd.read_csv('./new_data/train_set.csv', index_col='id', nrows=500)


def sentence2list(sentence):
    """Split a space-separated word-id string into a list of tokens."""
    return sentence.strip().split()


sentences = list(df.loc[:, 'word_seg'].apply(sentence2list))

# Train a Skip-Gram model (sg=1). Note that gensim >= 4.0 renamed
# size -> vector_size and iter -> epochs; use size/iter on gensim 3.x.
model = gensim.models.Word2Vec(sentences=sentences, vector_size=100, window=5,
                               min_count=5, workers=8, sg=1, epochs=5)

wv = model.wv
vocab_list = wv.index_to_key  # wv.index2word on gensim < 4.0

# Map each word to its row in the final embedding matrix. Indices start
# at 1 because row 0 is reserved for the all-zeros 'unk' vector below.
word_idx_dict = {}
for idx, word in enumerate(vocab_list):
    word_idx_dict[word] = idx + 1

vectors_arr = wv.vectors
vectors_arr = np.concatenate((np.zeros(100)[np.newaxis, :], vectors_arr), axis=0)  # row 0: 'unk'

print(word_idx_dict)
print(vectors_arr)

# Persist the vocabulary mapping and the embedding matrix for later reuse.
with open('word_seg_word_idx_dict.pkl', 'wb') as f_wordidx:
    pickle.dump(word_idx_dict, f_wordidx)
with open('word_seg_vectors_arr.pkl', 'wb') as f_vectors:
    pickle.dump(vectors_arr, f_vectors)
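Once training finishes, the vectors can be sanity-checked directly through gensim's KeyedVectors interface. A minimal sketch, assuming the model from the script above is still in memory; the probe token is picked arbitrarily because the competition corpus uses anonymized word ids:

# Pick an arbitrary token from the learned vocabulary as a probe.
probe = wv.index_to_key[0]

# Nearest neighbors by cosine similarity in the embedding space.
print(wv.most_similar(probe, topn=5))

# The raw 100-dimensional vector for the probe token.
print(wv[probe])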
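Downstream (for example, when feeding a neural network), the two pickle files are all that is needed to turn a document into rows of the embedding matrix. A minimal sketch; text_to_ids is a hypothetical helper and the example word-id string is made up for illustration:

import pickle

import numpy as np

with open('word_seg_word_idx_dict.pkl', 'rb') as f:
    word_idx_dict = pickle.load(f)
with open('word_seg_vectors_arr.pkl', 'rb') as f:
    vectors_arr = pickle.load(f)

def text_to_ids(text):
    # Hypothetical helper: map a space-separated document to row indices;
    # out-of-vocabulary words fall back to row 0, the all-zeros 'unk' vector.
    return [word_idx_dict.get(w, 0) for w in text.strip().split()]

ids = text_to_ids('520477 1179526 7368')  # made-up word ids, for illustration only
doc_matrix = vectors_arr[ids]             # shape: (len(ids), 100)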
