1. Basic Code and Model Files

1.1 Saving a model

saver = tf.train.Saver(max_to_keep=5)
saver.save(sess, 'Model/my_model.ckpt', global_step=epoch)

1.2 Model files

TensorFlow saves a model as several files:
checkpoint: records information about the latest model files
.data and .index: the trained weights, biases, and other variable data
.meta: the graph structure, including variables, operations, collections, etc.
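For example, after saving with global_step as in 1.1, the directory might contain something like the following (a hypothetical listing; the shard count in the .data filename can vary):

```
checkpoint
my_model.ckpt-4.data-00000-of-00001
my_model.ckpt-4.index
my_model.ckpt-4.meta
```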

1.3 Loading a model

new_saver = tf.train.import_meta_graph('./Model/my_model.ckpt.meta')
new_saver.restore(sess, tf.train.latest_checkpoint('./Model'))
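Under the hood, tf.train.latest_checkpoint resolves the newest save by reading the plain-text checkpoint file. A minimal pure-Python sketch of that lookup (latest_checkpoint here is a hypothetical helper, written without TensorFlow so it runs standalone; the real API also handles absolute paths via a CheckpointState proto):

```python
import os
import re
import tempfile

def latest_checkpoint(ckpt_dir):
    # The 'checkpoint' file records the newest save in its
    # model_checkpoint_path field; return that path, joined to the dir.
    with open(os.path.join(ckpt_dir, 'checkpoint')) as f:
        for line in f:
            m = re.match(r'model_checkpoint_path:\s*"(.+)"', line)
            if m:
                return os.path.join(ckpt_dir, m.group(1))
    return None

# Demo with a hand-written checkpoint file.
ckpt_dir = tempfile.mkdtemp()
with open(os.path.join(ckpt_dir, 'checkpoint'), 'w') as f:
    f.write('model_checkpoint_path: "my_model.ckpt-9"\n')
    f.write('all_model_checkpoint_paths: "my_model.ckpt-9"\n')
print(latest_checkpoint(ckpt_dir))
```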

2. Example: a Non-linear Regression Model

2.1 Training the model and saving it

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# use numpy to generate 20 sample points
x_data = np.linspace(-0.5,0.5,20)[:,np.newaxis]
noise = np.random.normal(0,0.02,x_data.shape)
y_data = np.square(x_data) + noise

# define two placeholders (named so they can be retrieved after loading)
x = tf.placeholder(tf.float32,[None,1],name='input_x')
y = tf.placeholder(tf.float32,[None,1],name='input_y')

# network structure: 1-30-1
w1 = tf.Variable(tf.random_normal([1,30]))
b1 = tf.Variable(tf.zeros([30]))
z1 = tf.matmul(x,w1) + b1
a1 = tf.nn.tanh(z1)

w2 = tf.Variable(tf.random_normal([30,1]))
b2 = tf.Variable(tf.zeros([1]))
z2 = tf.matmul(a1,w2) + b2
prediction = tf.nn.tanh(z2, name='prediction')

# quadratic cost function (mean squared error)
loss = tf.losses.mean_squared_error(y,prediction)
# minimize the loss with gradient descent
optimizer = tf.train.GradientDescentOptimizer(0.1)
train_step = optimizer.minimize(loss, name='train_step')
saver = tf.train.Saver(max_to_keep=4)
with tf.Session() as sess:
    # initialize variables
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        for j in range(1000):
            sess.run(train_step,feed_dict={x:x_data,y:y_data})
        saver.save(sess, 'Model/non_linear_model.ckpt', global_step=epoch)
        print('epoch:%d, loss:%f' % (epoch, sess.run(loss,feed_dict={x:x_data,y:y_data})))
    # get the predictions
    prediction_value = sess.run(prediction,feed_dict={x:x_data})
    # plot the fit
    plt.scatter(x_data, y_data)
    plt.plot(x_data, prediction_value, 'r-', lw=5)
    plt.show()
======================================== output ========================================
epoch:0, loss:0.305729
epoch:1, loss:0.171138
epoch:2, loss:0.048052
epoch:3, loss:0.007044
epoch:4, loss:0.001280
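The output above comes from the 1-30-1 tanh network defined earlier; its forward pass can be sketched in plain NumPy (illustrative random weights here; the trained values are what the checkpoint actually stores):

```python
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(1, 30)), np.zeros(30)   # hidden layer: 1 -> 30
w2, b2 = rng.normal(size=(30, 1)), np.zeros(1)    # output layer: 30 -> 1

def forward(x):
    a1 = np.tanh(x @ w1 + b1)      # hidden activation
    return np.tanh(a1 @ w2 + b2)   # prediction, bounded in (-1, 1)

x = np.linspace(-0.5, 0.5, 20)[:, np.newaxis]
print(forward(x).shape)  # (20, 1)
```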

fig1. Fitting result after 5 training epochs

fig2. The model files

  • From the output and fig1, after 5 epochs of training the loss is 0.001280 and the curve fits the data well
  • From fig2, although 10 (sets of) model files were generated, only 4 of them are kept, since max_to_keep=4 was specified; the numbers 6, 7, 8 and 9 are the epoch values passed as global_step
  • Note: give an explicit name in advance to any tensor or op that may need to be retrieved after loading
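The retention behaviour described above can be sketched with a bounded queue (a toy model of Saver's max_to_keep bookkeeping, not its actual implementation):

```python
from collections import deque

max_to_keep = 4
kept = deque(maxlen=max_to_keep)   # oldest entries fall off the front
for epoch in range(10):
    kept.append('non_linear_model.ckpt-%d' % epoch)
print(list(kept))  # only epochs 6..9 survive
```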

2.2 Loading the model to predict and continue training

x_da = np.linspace(-0.5,0.5,20)[:,np.newaxis]
noise = np.random.normal(0,0.02,x_da.shape)
y_da = np.square(x_da) + noise
with tf.Session() as sess:
    meta = tf.train.import_meta_graph('./Model/non_linear_model.ckpt-9.meta')
    meta.restore(sess, './Model/non_linear_model.ckpt-9')
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name('input_x:0')
    y = graph.get_tensor_by_name('input_y:0')
    prediction = graph.get_tensor_by_name('prediction:0')
    loss = tf.losses.mean_squared_error(y,prediction)
    train_step = graph.get_operation_by_name('train_step')
    for epoch in range(10):
        for j in range(100):
            sess.run(train_step,feed_dict={x:x_da, y:y_da})
        print('epoch:%d, loss:%f' % (epoch, sess.run(loss,feed_dict={x:x_da,y:y_da})))
    # get the predictions
    prediction_value = sess.run(prediction,feed_dict={x:x_da})
    # plot the fit
    plt.scatter(x_da, y_da)
    plt.plot(x_da, prediction_value, 'r-', lw=5)
    plt.show()
======================================== output ========================================
epoch:0, loss:0.052003
epoch:1, loss:0.039929
epoch:2, loss:0.033029
epoch:3, loss:0.007565
epoch:4, loss:0.002487
epoch:5, loss:0.001337
epoch:6, loss:0.000974
epoch:7, loss:0.000806
epoch:8, loss:0.000706
epoch:9, loss:0.000642

fig3. Fitting result after another 10 training epochs

  • From the output and fig3, the loss is already small at the very start, because the model had been pretrained for 5 epochs; in fact, with 10 epochs of pretraining the initial loss here is about 0.0004, which makes the effect even more obvious. The fit has also improved further.
  • Notes:
  1. The following 3 lines are fixed boilerplate: the first two read the model files, and the third gets the current default graph; since tensors and ops all live on the graph, this step cannot be omitted.
    meta = tf.train.import_meta_graph('./Model/non_linear_model.ckpt-9.meta')
    meta.restore(sess, './Model/non_linear_model.ckpt-9')
    graph = tf.get_default_graph()
    
  2. Keep the concepts of tensor and op distinct, since the two are referenced differently. In the code above:
    • Referencing a tensor: x = graph.get_tensor_by_name('input_x:0'); the trailing :0 is the tensor's output index, appended automatically by TensorFlow, and op names never carry it.
    • Referencing an op: train_step = graph.get_operation_by_name('train_step')
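The naming convention behind the two calls can be sketched as follows: a tensor name is its producing op's name plus an output index, so 'prediction:0' is output 0 of the op named 'prediction' (split_tensor_name is an illustrative helper, not a TensorFlow API):

```python
def split_tensor_name(name):
    # 'input_x:0' -> ('input_x', 0); a bare op name has no index.
    op_name, sep, index = name.partition(':')
    return op_name, int(index) if sep else None

print(split_tensor_name('input_x:0'))   # ('input_x', 0)
print(split_tensor_name('train_step'))  # ('train_step', None)
```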
