Implementing a CNN Autoencoder in TensorFlow

1. Implementation:

An autoencoder with a convolutional encoder and a transposed-convolution (deconvolution) decoder.

2. Dataset:

The MNIST dataset.

3. Pipeline:

3.1 Encoder: convolution layers

| Layer | Input shape | Activation | Kernel shape | Strides | Output shape |
| --- | --- | --- | --- | --- | --- |
| Conv 1 | [batch,28,28,1] | LeakyReLU | [3,3,1,16] | [1,2,2,1] | [batch,14,14,16] |
| Conv 2 | [batch,14,14,16] | LeakyReLU | [3,3,16,32] | [1,2,2,1] | [batch,7,7,32] |
| Conv 3 | [batch,7,7,32] | LeakyReLU | [3,3,32,64] | [1,2,2,1] | [batch,4,4,64] |
| Conv 4 | [batch,4,4,64] | LeakyReLU | [2,2,64,64] | [1,2,2,1] | [batch,2,2,64] |
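Every encoder layer uses padding='SAME' with stride 2, so each spatial size shrinks to ceil(size/2); the table's 28 → 14 → 7 → 4 → 2 progression can be checked with a few lines of Python (a sketch of the shape arithmetic only, not a TensorFlow graph):

```python
import math

def conv_same_out(size: int, stride: int) -> int:
    """Spatial output size of a convolution with padding='SAME'."""
    return math.ceil(size / stride)

# Trace the encoder's spatial dimension through four stride-2 layers.
size, trace = 28, [28]
for _ in range(4):
    size = conv_same_out(size, stride=2)
    trace.append(size)

print(trace)  # [28, 14, 7, 4, 2]
```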

3.2 Decoder: transposed convolutions

| Layer | Input shape | Activation | Kernel shape | Strides | Output shape |
| --- | --- | --- | --- | --- | --- |
| Deconv 1 | [batch,2,2,64] | LeakyReLU | [2,2,64,64] | [1,2,2,1] | [batch,4,4,64] |
| Deconv 2 | [batch,4,4,64] | LeakyReLU | [3,3,32,64] | [1,2,2,1] | [batch,7,7,32] |
| Deconv 3 | [batch,7,7,32] | LeakyReLU | [3,3,16,32] | [1,2,2,1] | [batch,14,14,16] |
| Deconv 4 | [batch,14,14,16] | LeakyReLU | [3,3,1,16] | [1,2,2,1] | [batch,28,28,1] |

Note: for tf.nn.conv2d_transpose the kernel shape is [height, width, output_channels, input_channels], so each decoder layer reuses the kernel layout of its encoder counterpart.
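tf.nn.conv2d_transpose takes an explicit output_shape because, with padding='SAME' and stride s, any output size o with ceil(o/s) equal to the input size is legal: an input of 4 could map to either 7 or 8, and the table pins it to 7. A small sketch of that validity check (shape arithmetic only, not TensorFlow):

```python
import math

def valid_transpose_output(in_size: int, out_size: int, stride: int) -> bool:
    """True if a conv2d_transpose with padding='SAME' accepts this output size:
    the forward convolution of out_size must map back to in_size."""
    return math.ceil(out_size / stride) == in_size

# The decoder's upsampling path: 2 -> 4 -> 7 -> 14 -> 28
assert valid_transpose_output(2, 4, 2)
assert valid_transpose_output(4, 7, 2)    # 7 is legal: ceil(7/2) == 4
assert valid_transpose_output(4, 8, 2)    # ...but so is 8; output_shape disambiguates
assert not valid_transpose_output(4, 9, 2)
```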

3.3 Main network

- Input: a placeholder, dtype tf.float32, shape [batch,28,28,1]
- Forward pass: encoder output [batch,2,2,64], then decoder output [batch,28,28,1]
- Backward pass: the loss is the mean squared error between the decoder output and the input
- Optimizer: tf.train.AdamOptimizer()

(Note: the reference implementation in section 4 uses only three convolution layers, so its latent shape is [batch,4,4,64] rather than [batch,2,2,64].)
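The backward pass's mean-squared-error loss is simple enough to sketch in NumPy (illustrative only; the real graph computes the same quantity with tf.reduce_mean):

```python
import numpy as np

def mse(reconstruction: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error, the same quantity as
    tf.reduce_mean((deconv_layer - in_x) ** 2)."""
    return float(np.mean((reconstruction - target) ** 2))

x = np.zeros((2, 28, 28, 1))          # stand-in input batch
x_hat = np.full((2, 28, 28, 1), 0.1)  # stand-in reconstruction
print(mse(x_hat, x))                  # ≈ 0.01
```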

3.4 Training the network

  1. Load the MNIST dataset.
  2. Take 100 images from the training set per step.
  3. The fetched batch has shape [100,784]; reshape it to [100,28,28,1].
  4. Compute the loss.
  5. Every 20 steps, print the loss.
  6. Take 100 images from the test set, feed them through the network, and display a decoded image.
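Step 3's reshape from flat MNIST vectors to NHWC image tensors can be sketched with NumPy (random data stands in for the real batch):

```python
import numpy as np

batch = np.random.rand(100, 784)                # stand-in for mnist.train.next_batch(100)
train_xs = np.reshape(batch, [100, 28, 28, 1])  # NHWC layout expected by tf.nn.conv2d

print(train_xs.shape)                           # (100, 28, 28, 1)
# the reshape is lossless: flattening recovers the original batch
assert np.array_equal(train_xs.reshape(100, 784), batch)
```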

3.5 Test code

Test flow (batch = 100):

  1. Fetch a test batch: samples of shape [batch,784]
  2. Reshape the samples: [batch,28,28,1]
  3. Feed the data into the main network: [batch,28,28,1]
  4. Fetch the decoder output: predictions of shape [batch,28,28,1]
  5. Display one original image
  6. Display one decoded image
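Steps 4 to 6 pull a single [28,28] image out of the batched decoder output before plotting; a NumPy sketch (random data stands in for the network output):

```python
import numpy as np

decoded = np.random.rand(100, 28, 28, 1)  # stand-in for sess.run(net.deconv_layer, ...)
photo = np.reshape(decoded[1], [28, 28])  # drop batch index and channel dim for imshow

print(photo.shape)  # (28, 28)
```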

4. Network implementation code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
import matplotlib.pyplot as plt
mnist = input_data.read_data_sets(r'E:\PycharmProjects\TensorflowTest\MNIST_data',one_hot=True)

# Convolution (encoder)
class Convolution:
    def __init__(self):
        # input: [100,28,28,1]
        # output: (100, 14, 14, 16)
        self.filter1 = tf.Variable(tf.truncated_normal([3,3,1,16], stddev=0.1))
        self.b1 = tf.Variable(tf.zeros([16]))
        # output: (100, 7, 7, 32)
        self.filter2 = tf.Variable(tf.truncated_normal([3,3,16,32], stddev=0.1))
        self.b2 = tf.Variable(tf.zeros([32]))
        # output: (100, 4, 4, 64)
        self.filter3 = tf.Variable(tf.truncated_normal([3,3,32,64], stddev=0.1))
        self.b3 = tf.Variable(tf.zeros([64]))

    def forward(self,in_x):
        conv1 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d(in_x,
                                                     self.filter1,
                                                     [1,2,2,1],
                                                     padding='SAME'), self.b1))
        conv2 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d(conv1,
                                                     self.filter2,
                                                     [1,2,2,1],
                                                     padding='SAME'), self.b2))
        conv3 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d(conv2,
                                                     self.filter3,
                                                     [1,2,2,1],
                                                     padding='SAME'), self.b3))

        return conv3
# Deconvolution (decoder)
class Deconvolution:
    def __init__(self):
        # conv2d_transpose kernels are [h, w, output_channels, input_channels]
        # input: [100,4,4,64]  output: [100,7,7,32]
        self.filter3 = tf.Variable(tf.truncated_normal([3, 3, 32, 64], stddev=0.1))
        self.b3 = tf.Variable(tf.zeros([32]))
        # output: [100,14,14,16]
        self.filter2 = tf.Variable(tf.truncated_normal([3, 3, 16, 32], stddev=0.1))
        self.b2 = tf.Variable(tf.zeros([16]))
        # output: [100,28,28,1]
        self.filter1 = tf.Variable(tf.truncated_normal([3, 3, 1, 16], stddev=0.1))
        self.b1 = tf.Variable(tf.zeros([1]))

        self.strides = [1,2,2,1]
    def forward(self,conv_layer):
        # add the bias before the activation; passing it as the second argument
        # of leaky_relu would silently be interpreted as the alpha slope
        deconv1 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d_transpose(conv_layer,
                                                                 self.filter3,
                                                                 [100,7,7,32],
                                                                 strides=self.strides), self.b3))
        deconv2 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d_transpose(deconv1,
                                                                 self.filter2,
                                                                 [100,14,14,16],
                                                                 strides=self.strides), self.b2))
        deconv3 = tf.nn.leaky_relu(tf.add(tf.nn.conv2d_transpose(deconv2,
                                                                 self.filter1,
                                                                 [100,28,28,1],
                                                                 strides=self.strides), self.b1))
        return deconv3
# Main network
class Net:
    def __init__(self):
        self.conv = Convolution()
        self.deconv = Deconvolution()

        self.in_x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 1])

        self.forward()
        self.backward()

    def forward(self):
        self.conv_layer = self.conv.forward(self.in_x)
        self.deconv_layer = self.deconv.forward(self.conv_layer)
    def backward(self):
        self.loss = tf.reduce_mean((self.deconv_layer-self.in_x)**2)
        self.opt = tf.train.AdamOptimizer().minimize(self.loss)

5. Training the network: loss and decoded images

if __name__ == '__main__':
    net = Net()
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        saver = tf.train.Saver()
        plt.ion()
        loss_sum = []
        for epoch in range(10000):
            xs,_ = mnist.train.next_batch(100)
            train_xs = np.reshape(xs, [100, 28, 28, 1])
            loss, _ = sess.run([net.loss, net.opt],feed_dict={net.in_x: train_xs})
            if epoch % 20 == 0:
                saver.save(sess,r'E:\PycharmProjects\TensorflowTest\log\CNNAEtrain1.ckpt')
                test_xs,_ = mnist.test.next_batch(100)  # evaluate on the test set, per step 6
                test_xs = np.reshape(test_xs, [100, 28, 28, 1])
                photo_x = np.reshape(sess.run(net.deconv_layer, feed_dict={net.in_x: test_xs})[1], [28, 28])
                loss_sum.append(loss)
                print('loss: ', loss)
                plt.imshow(photo_x)
                plt.pause(0.1)
        plt.figure("CNN_AE_Loss")
        plt.plot(loss_sum,label='CNN_AE_Loss')
        plt.legend()
        plt.show()

6. Sample output:

loss:  0.10667889
loss:  0.0795733 
loss:  0.051609386 
loss:  0.030548936 

7. Loss curve

(figure: training loss curve)

8. Results:

Original image:

(figure: original MNIST digit)

Generated image:

(figure: reconstructed digit)

9. Testing the network:

Test code:

if __name__ == '__main__':
    net = Net()
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        saver = tf.train.Saver()
        saver.restore(sess, r'E:\PycharmProjects\TensorflowTest\log\CNNAEtrain1.ckpt')
        test_xs, _ = mnist.test.next_batch(100)  # evaluate on the test set
        test_xs = np.reshape(test_xs, [100, 28, 28, 1])
        photo_x = np.reshape(sess.run(net.deconv_layer, feed_dict={net.in_x: test_xs})[1], [28, 28])
        photo_xs = np.reshape(test_xs[1], [28, 28])
        plt.figure('Original image')
        plt.imshow(photo_xs)
        plt.show()
        plt.figure('Generated image')
        plt.imshow(photo_x)
        plt.show()

Original image:

(figure: original MNIST digit)

Generated image:

(figure: reconstructed digit)
