Batch Normalization - Tensorflow

2024-01-01

I have seen a few examples of BN, but I'm still a little confused. So I am currently using this function, which calls the function documented here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md

from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm
import tensorflow as tf

def bn(x, is_training, name):
    # Intended training branch: normalise with batch statistics and update the moving averages.
    bn_train = batch_norm(x, decay=0.9, center=True, scale=True,
                          updates_collections=None,
                          is_training=True,
                          reuse=None,
                          trainable=True,
                          scope=name)
    # Intended inference branch: reuse the variables of the same scope and the stored statistics.
    bn_inference = batch_norm(x, decay=1.00, center=True, scale=True,
                              updates_collections=None,
                              is_training=False,
                              reuse=True,
                              trainable=False,
                              scope=name)
    z = tf.cond(is_training, lambda: bn_train, lambda: bn_inference)
    return z

The next part is a toy run where I just check whether the function reuses the mean and variance computed during the training steps for the two features. When I run this part of the code in test mode, i.e. with is_training=False, the running mean/variance computed during the training steps are changing, which can be seen when we print out the BN variables obtained by calling bnParams.

if __name__ == "__main__":
    print("Example")

    import os
    import numpy as np
    import scipy.stats as stats
    np.set_printoptions(suppress=True,linewidth=200,precision=3)
    np.random.seed(1006)
    import pdb
    path = "batchNorm/"
    if not os.path.exists(path):
        os.mkdir(path)
    savePath = path + "bn.model"

    nFeats = 2
    X = tf.placeholder(tf.float32,[None,nFeats])
    is_training = tf.placeholder(tf.bool,name="is_training")
    Y = bn(X,is_training=is_training,name="bn")
    mvn = stats.multivariate_normal([0,100])
    bs = 4
    load = 0
    train = 1
    saver = tf.train.Saver()
    def bnCheck(x, mu, std):
        # Manual check: normalise x with the stored running statistics.
        return (x - mu)/(std + 0.001)
    with tf.Session() as sess:
        if load == 1:
            saver.restore(sess,savePath)
        else:
            tf.global_variables_initializer().run()
        #### TRAINING #####
        if train == 1:
            for i in xrange(100):
                x = mvn.rvs(bs)
                y = Y.eval(feed_dict={X:x, is_training.name: True})

        def bnParams():
            beta, gamma, mean, var = [v.eval() for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="bn")]
            return beta, gamma, mean, var

        beta, gamma, mean, var = bnParams()
        #### TESTING #####
        for i in xrange(10):
            x = mvn.rvs(1).reshape(1,-1)
            check = bnCheck(x,mean,np.sqrt(var))
            y = Y.eval(feed_dict={X:x, is_training.name: False})
            print("x = {0}, y = {1}, check = {2}".format(x,y,check))
            beta, gamma, mean, var = bnParams()
            print("BN Params: Beta {0} Gamma {1} mean {2} var{3} \n".format(beta,gamma,mean,var))

        saver.save(sess,savePath)

The first three iterations of the test loop look like this:

x = [[  -1.782  100.941]], y = [[-1.843  1.388]], check = [[-1.842  1.387]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.2   99.93] var[ 0.818  0.589] 

x = [[  -1.245  101.126]], y = [[-1.156  1.557]], check = [[-1.155  1.557]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [  -0.304  100.05 ] var[ 0.736  0.53 ] 

x = [[ -0.107  99.349]], y = [[ 0.23  -0.961]], check = [[ 0.23 -0.96]]
BN Params: Beta [ 0.  0.] Gamma [ 1.  1.] mean [ -0.285  99.98 ] var[ 0.662  0.477] 

I am not doing backpropagation, so beta and gamma do not change. My running mean/variance, however, are changing. Where am I going wrong?

Edit: It would also be good to know why these variables do or do not need to change between testing and training:

updates_collections, reuse, trainable

Your bn function is wrong. Use this instead:

def bn(x, is_training, name):
    return batch_norm(x, decay=0.9, center=True, scale=True,
                      updates_collections=None,
                      is_training=is_training,
                      reuse=None,
                      trainable=True,
                      scope=name)

is_training is a 0-D bool tensor signalling whether to update the running mean etc. You then just change the tensor is_training to indicate whether you are in the training or the test phase.
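
For reference, here is a minimal sketch of driving the corrected bn() above from a session. It assumes the batch_norm import from the question; the data being fed is just random noise for illustration, and the variable names are only illustrative.

import numpy as np
import tensorflow as tf

nFeats = 2
X = tf.placeholder(tf.float32, [None, nFeats])
is_training = tf.placeholder(tf.bool, name="is_training")
Y = bn(X, is_training=is_training, name="bn")  # corrected bn() from the answer above

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training: feeding True normalises with batch statistics and updates the moving averages.
    y_train = sess.run(Y, feed_dict={X: np.random.randn(4, nFeats), is_training: True})
    # Testing: feeding False uses the stored moving averages and leaves them unchanged.
    y_test = sess.run(Y, feed_dict={X: np.random.randn(1, nFeats), is_training: False})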

Edit: Many operations in TensorFlow accept tensors, not constant True/False arguments.
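
A quick sketch of the difference, assuming tf.contrib.layers.batch_norm as above (the scope names "bn_fixed" and "bn_switch" are only illustrative): a Python constant bakes the phase into the graph, while a bool tensor lets you choose the phase at feed time.

from tensorflow.contrib.layers.python.layers import batch_norm
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2])

# Constant argument: this graph is permanently in training mode.
y_fixed = batch_norm(x, is_training=True, updates_collections=None, scope="bn_fixed")

# Tensor argument: one graph, with the phase chosen when the placeholder is fed.
phase = tf.placeholder(tf.bool, name="phase")
y_switch = batch_norm(x, is_training=phase, updates_collections=None, scope="bn_switch")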
