You're right: tf.nn.batch_normalization only provides the basic building block for batch normalization. You have to add the extra logic yourself to track the moving mean and variance during training, and then use those accumulated statistics at inference time. You can look at this example https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py#L116 for a fairly general implementation, but here is a quick version that doesn't use gamma (the scale parameter):
# Requires: from tensorflow.python.training import moving_averages
beta = tf.Variable(tf.zeros(shape), name='beta')
moving_mean = tf.Variable(tf.zeros(shape), name='moving_mean',
                          trainable=False)
moving_variance = tf.Variable(tf.ones(shape), name='moving_variance',
                              trainable=False)
control_inputs = []
if is_training:
    # Batch statistics over the N, H, W axes: one value per channel.
    mean, variance = tf.nn.moments(image, [0, 1, 2])
    update_moving_mean = moving_averages.assign_moving_average(
        moving_mean, mean, self.decay)
    update_moving_variance = moving_averages.assign_moving_average(
        moving_variance, variance, self.decay)
    control_inputs = [update_moving_mean, update_moving_variance]
else:
    # At inference time, use the accumulated moving statistics instead.
    mean = moving_mean
    variance = moving_variance
# Ensure the moving averages are updated whenever the output is computed.
with tf.control_dependencies(control_inputs):
    return tf.nn.batch_normalization(
        image, mean=mean, variance=variance, offset=beta,
        scale=None, variance_epsilon=0.001)
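To make the training/inference split concrete without a TensorFlow session, here is a minimal NumPy sketch of the same idea. The function name, argument names, and decay value are illustrative assumptions, not part of any TensorFlow API; the update rule mirrors assign_moving_average (m := decay * m + (1 - decay) * value).

```python
import numpy as np

def batch_norm(x, beta, moving_mean, moving_var, decay=0.99,
               is_training=True, eps=1e-3):
    """Normalize x of shape (N, H, W, C) per channel.

    In training mode, uses batch statistics and updates the moving
    statistics in place; in inference mode, uses the moving statistics.
    """
    if is_training:
        # Per-channel batch statistics, like tf.nn.moments(x, [0, 1, 2]).
        mean = x.mean(axis=(0, 1, 2))
        var = x.var(axis=(0, 1, 2))
        # Moving-average update: m := decay * m + (1 - decay) * value.
        moving_mean *= decay
        moving_mean += (1 - decay) * mean
        moving_var *= decay
        moving_var += (1 - decay) * var
    else:
        mean, var = moving_mean, moving_var
    # Offset (beta) only; no scale (gamma), as in the snippet above.
    return (x - mean) / np.sqrt(var + eps) + beta
```

Calling it with is_training=True normalizes the batch to roughly zero mean and unit variance per channel while nudging the moving statistics toward the batch statistics; with is_training=False the stored statistics are used unchanged.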