This post is based on the programming exercise from Week 3 of Course 5 of Andrew Ng's Deep Learning Specialization.
0. Background
As covered in the course, the purpose of trigger word detection is to wake a device when it hears its trigger word. The goal of this post is to implement such a detector, using "Activate" as the trigger word.
The third-party libraries, dataset, and helper code needed for this exercise can be downloaded here.
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
import ffmpeg
from td_utils import *
1. Data analysis: building the speech dataset
To make learning effective, the dataset must contain both positive words ("Activate") and negative words (distractor words other than "Activate").
1.1 Listening to the data
The raw dataset contains three kinds of data: positives, negatives, and backgrounds, where each background clip is 10 s of background noise.
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
Note: IPython.display.Audio plays audio only inside a Jupyter notebook.
1.2 Converting to spectrograms
A microphone records the tiny variations in air pressure produced as we speak, which is the same principle by which the human ear perceives sound. Since the microphone samples at 44100 Hz, a 10 s recording contains 441,000 data points. From such raw data it is hard to tell whether the word "Activate" occurs in the clip. To make detection easier for our sequence model, we compute a spectrogram of the raw audio, which shows directly how active each frequency band is over time:
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
Since each audio clip is 10 s long, its spectrogram has 5511 time steps. This is what the model takes as input features, so Tx = 5511.
_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:, 0].shape)
print("Time steps in input after spetrogram", x.shape)
Time steps in audio recording before spectrogram (441000,)
Time steps in input after spectrogram (101, 5511)
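Where does 5511 come from? A minimal sketch, assuming graph_spectrogram wraps matplotlib's specgram with a 200-sample window, an 8000 Hz rate, and a 120-sample overlap (these parameter values are my assumption about the helper code):
from scipy.io import wavfile
import matplotlib.pyplot as plt

_, data = wavfile.read("audio_examples/example_train.wav")
nfft, fs, noverlap = 200, 8000, 120   # assumed window length, sampling rate, and overlap
# One spectrogram column per window: (441000 - 120) // (200 - 120) = 5511 time steps
pxx, freqs, bins, im = plt.specgram(data[:, 0], NFFT=nfft, Fs=fs, noverlap=noverlap)
print(pxx.shape)   # (101, 5511): 101 frequency bins by 5511 time steps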
From this analysis we can finally fix:
Tx = 5511
n_freq = 101
Ty = 1375
Here Tx is the number of time steps output by the spectrogram, which is also the length of the network's input sequence; n_freq is the number of frequency bins at each time step; and Ty is the length of the model's output: the 10 s of audio are discretized into 1375 intervals, each predicting whether the word "Activate" has just been said.
1.3 Generating a training example
Because audio data is hard to label by hand, we synthesize training examples from the positive, negative, and background clips. To build one example, we first pick a random 10 s background clip, then insert 0-4 random positive clips into it, and finally insert 0-2 random negative clips. Because we control exactly where each "Activate" is placed, we know its position and can derive the labels y<t>.
We use the pydub library to load the raw audio files into pydub's AudioSegment data structure, which operates at a resolution of 1 ms, so a 10 s clip corresponds to 10,000 steps.
activates, negatives, backgrounds = load_raw_audio()
print("background len : " + str(len(backgrounds[0])))
print("activate[0] len : " + str(len(activates[0])))
print("activate[1] len : " + str(len(activates[1])))
background len : 10000
activate[0] len : 721
activate[1] len : 731
Overlaying positive and negative words on the background
Since we insert several positive and negative clips at random positions, we must keep track of the segments already inserted, so that a new clip never overlaps a previous one.
Creating the labels while overlaying the background
A randomly chosen background contains no "activate", so initially y<t> = 0 everywhere. After inserting a positive clip, the corresponding time steps must be set to 1: we label the 50 output steps that follow the end of the inserted clip. For example, if an "activate" clip ends at the 5-second mark, then y<688> = y<689> = ... = y<737> = 1.
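To see where index 688 comes from, the end of the clip at 5000 ms maps onto the 1375-step output scale as follows:
segment_end_ms = 5000                                 # "activate" ends at the 5-second mark
segment_end_y = int(segment_end_ms * Ty / 10000.0)    # 5000 * 1375 / 10000 = 687
print(segment_end_y + 1, segment_end_y + 50)          # the 50 labeled steps: 688 .. 737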
Steps to generate a training example
(1) Pick a random time segment from the background: get_random_time_segment(segment_ms);
(2) Check whether the chosen segment overlaps any previously chosen one: is_overlapping(segment_time, existing_segments);
(3) Insert the audio clip into the background: insert_audio_clip(background, audio_clip, existing_times);
(4) After inserting an "activate" clip, update the labels y<t>: insert_ones(y, segment_end_ms).
def get_random_time_segment(segment_ms):
    # Pick a random start so the segment fits entirely inside the 10,000 ms background
    segment_start = np.random.randint(low=0, high=10000 - segment_ms)
    segment_end = segment_start + segment_ms - 1
    return (segment_start, segment_end)
def is_overlapping(segment_time, previous_segments):
    segment_start, segment_end = segment_time
    overlap = False
    # Two segments overlap iff neither one ends before the other starts
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True
    return overlap
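A quick check of the interval test (the expected outputs follow directly from the logic above):
print(is_overlapping((950, 1430), [(2000, 2550), (260, 949)]))                   # False
print(is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)]))   # True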
def insert_audio_clip(background, audio_clip, previous_segments):
    segment_ms = len(audio_clip)
    # Keep drawing random segments until one does not overlap any previous insertion
    segment_time = get_random_time_segment(segment_ms)
    while is_overlapping(segment_time, previous_segments):
        segment_time = get_random_time_segment(segment_ms)
    previous_segments.append(segment_time)
    # Overlay the clip onto the background at the chosen start position (in ms)
    new_background = background.overlay(audio_clip, position=segment_time[0])
    return new_background, segment_time
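For example, inserting the first "activate" clip into the first background while avoiding a segment that is already occupied (insert_test.wav is just an illustrative output name):
np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)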
def insert_ones(y, segment_end_ms):
    # Convert the segment end from ms (out of 10,000) to an index on the Ty-step output scale
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    # Label the 50 output steps that follow the end of the segment
    for i in range(segment_end_y + 1, segment_end_y + 51):
        if i < Ty:
            y[0, i] = 1.0
    return y
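A small demonstration, labeling the steps after the 9.7 s and 4.251 s marks (plt is available via the td_utils import):
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0, :])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])   # 0.0 1.0 0.0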
Finally, we use insert_audio_clip and insert_ones to create a training example:
def create_training_example(background, activates, negatives):
    np.random.seed(18)
    # Make the background quieter
    background = background - 20
    # Initialize all labels to 0
    y = np.zeros((1, Ty))
    previous_segments = []
    # Insert 0-4 random "activate" clips, labeling the steps after each one
    number_of_activates = np.random.randint(0, 5)
    random_indices = np.random.randint(len(activates), size=number_of_activates)
    random_activates = [activates[i] for i in random_indices]
    for random_activate in random_activates:
        background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
        segment_start, segment_end = segment_time
        y = insert_ones(y, segment_end)
    # Insert 0-2 random negative clips (these leave the labels unchanged)
    number_of_negatives = np.random.randint(0, 3)
    random_indices = np.random.randint(len(negatives), size=number_of_negatives)
    random_negatives = [negatives[i] for i in random_indices]
    for random_negative in random_negatives:
        background, _ = insert_audio_clip(background, random_negative, previous_segments)
    # Standardize the volume, export the audio, and compute its spectrogram
    background = match_target_amplitude(background, -20.0)
    file_handle = background.export("train" + ".wav", format="wav")
    print("File (train.wav) was saved in your directory")
    x = graph_spectrogram("train.wav")
    return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
plt.plot(y[0])
1.4 The full training set
To save training time, we load a pre-generated training set:
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")
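As a quick sanity check on the loaded arrays (the shapes below assume the course-provided files, where the training set holds 26 examples):
print(X.shape)   # expected (26, 5511, 101): (examples, Tx, n_freq)
print(Y.shape)   # expected (26, 1375, 1): (examples, Ty, 1)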
1.5 The dev set
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
2. The model
The model uses a 1-D convolutional layer, GRU layers, and a dense layer; the required Keras imports are shown below.
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
2.1 Building the model
The overall structure of the model is as follows:
def model(input_shape):
    X_input = Input(shape=input_shape)
    # Step 1: CONV layer, downsampling the 5511 input steps to (5511 - 15) // 4 + 1 = 1375
    X = Conv1D(196, 15, strides=4)(X_input)
    X = BatchNormalization()(X)
    X = Activation('relu')(X)
    X = Dropout(0.8)(X)
    # Step 2: first GRU layer (128 units, returning the full sequence)
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)
    # Step 3: second GRU layer
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)
    X = Dropout(0.8)(X)
    # Step 4: time-distributed dense layer, one sigmoid prediction per output step
    X = TimeDistributed(Dense(1, activation="sigmoid"))(X)
    model = Model(inputs=X_input, outputs=X)
    return model
model = load_model('./models/tr_model.h5')
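We can verify the loaded architecture with model.summary(), which should show the Conv1D layer reducing the 5511 input steps to 1375 output steps:
model.summary()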
2.2 Training the model
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size=5, epochs=1)
2.3 Evaluating the model
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev Set accuracy = ", acc)
Epoch 1/1
26/26 [==============================] - 17s 638ms/step - loss: 0.0597 - acc: 0.9806
25/25 [==============================] - 2s 95ms/step
Dev Set accuracy = 0.9312872886657715
3. Making predictions
Having built the trigger word detection system, let's run a few tests.
def detect_triggerword(filename):
    plt.subplot(2, 1, 1)
    # Compute the spectrogram and reshape it from (n_freq, Tx) to (1, Tx, n_freq) for the model
    x = graph_spectrogram(filename)
    x = x.swapaxes(0, 1)
    x = np.expand_dims(x, axis=0)
    predictions = model.predict(x)
    # Plot the predicted probability of a recent "activate" at each output step
    plt.subplot(2, 1, 2)
    plt.plot(predictions[0, :, 0])
    plt.ylabel('probability')
    plt.show()
    return predictions
When the word "activate" is detected we play a chime, but y<t> contains a long run of 1s, each of which would trigger the chime, while we only want to chime once per detection, on the first 1 of the run. The chime_on_activate function handles this: it inserts the chime only when the predicted probability exceeds the threshold and more than 75 output steps have passed since the last chime.
chime_file = 'audio_examples/chime.wav'
def chime_on_activate(filename, predictions, threshold):
    audio_clip = AudioSegment.from_wav(filename)
    chime = AudioSegment.from_wav(chime_file)
    Ty = predictions.shape[1]
    # Count the output steps elapsed since the last chime
    consecutive_timesteps = 0
    for i in range(Ty):
        consecutive_timesteps += 1
        # Chime when the probability crosses the threshold and more than 75 steps have passed
        if predictions[0, i, 0] > threshold and consecutive_timesteps > 75:
            audio_clip = audio_clip.overlay(chime, position=((i / Ty) * audio_clip.duration_seconds) * 1000)
            consecutive_timesteps = 0
    audio_clip.export("chime_output.wav", format='wav')
3.1 Testing on the dev set
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_out.wav")
filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_out.wav")