After asking for help and getting a better understanding of how automatic differentiation (or autodiff) works, I managed to put together a working, simple example of what I want to achieve. Even though this approach does not fully solve the problem, it puts us a step forward in understanding how to tackle it.
Reference model
I have reduced the model to a smaller one:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, Layer, Flatten, Conv2D
import numpy as np
tf.random.set_seed(0)
# 3 batches, 10x10 images, 1 channel
x = tf.random.uniform((3, 10, 10, 1))
y = tf.cast(tf.random.uniform((3, 1)) > 0.5, tf.float32)
layer_0 = Sequential([Conv2D(filters=6, kernel_size=2, activation="relu")])
layer_1 = Sequential([Conv2D(filters=6, kernel_size=2, activation="relu")])
layer_2 = Sequential([Flatten(), Dense(1), Activation("sigmoid")])
loss_fn = tf.keras.losses.MeanSquaredError()
We split it into three parts: layer_0, layer_1 and layer_2. The plain approach is simply to put everything together and compute the gradients one by one (or in a single step):
with tf.GradientTape(persistent=True) as tape:
    out_layer_0 = layer_0(x)
    out_layer_1 = layer_1(out_layer_0)
    out_layer_2 = layer_2(out_layer_1)
    loss = loss_fn(y, out_layer_2)
The different gradients can then be computed with simple calls to tape.gradient:
ref_conv_dLoss_dWeights2 = tape.gradient(loss, layer_2.trainable_weights)
ref_conv_dLoss_dWeights1 = tape.gradient(loss, layer_1.trainable_weights)
ref_conv_dLoss_dWeights0 = tape.gradient(loss, layer_0.trainable_weights)
ref_conv_dLoss_dY = tape.gradient(loss, out_layer_2)
ref_conv_dLoss_dOut1 = tape.gradient(loss, out_layer_1)
ref_conv_dOut2_dOut1 = tape.gradient(out_layer_2, out_layer_1)
ref_conv_dLoss_dOut0 = tape.gradient(loss, out_layer_0)
ref_conv_dOut1_dOut0 = tape.gradient(out_layer_1, out_layer_0)
ref_conv_dOut0_dWeights0 = tape.gradient(out_layer_0, layer_0.trainable_weights)
ref_conv_dOut1_dWeights1 = tape.gradient(out_layer_1, layer_1.trainable_weights)
ref_conv_dOut2_dWeights2 = tape.gradient(out_layer_2, layer_2.trainable_weights)
We will use these values later to check the correctness of our approach.
Split model with manual autodiff
By split we mean that each layer_x needs to have its own GradientTape, responsible for generating its own gradients:
with tf.GradientTape(persistent=True) as tape_0:
    out_layer_0 = layer_0(x)

with tf.GradientTape(persistent=True) as tape_1:
    tape_1.watch(out_layer_0)
    out_layer_1 = layer_1(out_layer_0)

with tf.GradientTape(persistent=True) as tape_2:
    tape_2.watch(out_layer_1)
    out_layer_2 = layer_2(out_layer_1)
    loss = loss_fn(y, out_layer_2)
Now, simply using tape_n.gradient for each step will not work: we basically lose a lot of information that cannot be recovered afterwards. Instead, we have to use tape.jacobian (https://www.tensorflow.org/api_docs/python/tf/GradientTape#jacobian) and tape.batch_jacobian (https://www.tensorflow.org/api_docs/python/tf/GradientTape#batch_jacobian), except for the gradient of the loss, as there we only have one scalar value as a source:
dOut0_dWeights0 = tape_0.jacobian(out_layer_0, layer_0.trainable_weights)
dOut1_dOut0 = tape_1.batch_jacobian(out_layer_1, out_layer_0)
dOut1_dWeights1 = tape_1.jacobian(out_layer_1, layer_1.trainable_weights)
dOut2_dOut1 = tape_2.batch_jacobian(out_layer_2, out_layer_1)
dOut2_dWeights2 = tape_2.jacobian(out_layer_2, layer_2.trainable_weights)
dLoss_dOut2 = tape_2.gradient(loss, out_layer_2) # or dL/dY
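To see why plain tape_n.gradient calls are not enough here, it helps to look at the shapes: gradient with a non-scalar target implicitly sums over the target's elements, so the per-output-element information needed for the chain rule is gone, while the Jacobians keep it. A minimal shape check, as a sketch (the concrete sizes follow from the 10x10 inputs defined above):
# gradient() contracts over the non-scalar target: the result has the same
# shape as the source, so the contribution of each output element is lost.
print(tape_1.gradient(out_layer_1, out_layer_0).shape)  # (3, 9, 9, 6)

# batch_jacobian() keeps one derivative per (output element, input element) pair,
# which is exactly what the chain rule below needs.
print(dOut1_dOut0.shape)  # (3, 8, 8, 6, 9, 9, 6)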
We will use a couple of utility functions to reshape the results into what we need:
def add_missing_axes(source_tensor, target_tensor):
    """Append trailing tf.newaxis to source_tensor until it has as many axes as target_tensor."""
    len_missing_axes = len(target_tensor.shape) - len(source_tensor.shape)
    # note: the number of tf.newaxis is determined by the number of axes missing to reach
    # the same rank as the target tensor
    assert len_missing_axes >= 0
    # convenience renaming
    source_tensor_extended = source_tensor
    # add every missing axis
    for _ in range(len_missing_axes):
        source_tensor_extended = source_tensor_extended[..., tf.newaxis]
    return source_tensor_extended

def upstream_gradient_loss_weights(dOutUpstream_dWeightsLocal, dLoss_dOutUpstream):
    """Contract an upstream loss gradient with a local weight Jacobian (chain rule for weights)."""
    dLoss_dOutUpstream_extended = add_missing_axes(dLoss_dOutUpstream, dOutUpstream_dWeightsLocal)
    # reduce over the batch and output axes, keeping only the weight axes
    len_reduce = range(len(dLoss_dOutUpstream.shape))
    return tf.reduce_sum(dOutUpstream_dWeightsLocal * dLoss_dOutUpstream_extended, axis=len_reduce)

def upstream_gradient_loss_out(dOutUpstream_dOutLocal, dLoss_dOutUpstream):
    """Contract an upstream loss gradient with a local output Jacobian (chain rule for activations)."""
    dLoss_dOutUpstream_extended = add_missing_axes(dLoss_dOutUpstream, dOutUpstream_dOutLocal)
    # reduce over the upstream output axes only, keeping the batch axis
    len_reduce = range(len(dLoss_dOutUpstream.shape))[1:]
    return tf.reduce_sum(dOutUpstream_dOutLocal * dLoss_dOutUpstream_extended, axis=len_reduce)
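As a sanity check, both helpers are essentially batched tensor contractions. For the concrete shapes produced by the model above they could equivalently be written with tf.einsum (a sketch; the axis labels assume the (3, 1, 384, 1) kernel Jacobian and the (3, 1, 8, 8, 6) batch Jacobian computed earlier):
# Contract the batch and output axes of the weight Jacobian against dLoss_dOut2 (3, 1),
# leaving the (384, 1) kernel gradient: the same result as upstream_gradient_loss_weights.
dLoss_dWeights2_check = tf.einsum("bouv,bo->uv", dOut2_dWeights2[0], dLoss_dOut2)

# For the activation Jacobian only the output axis is contracted and the batch axis
# is kept, matching upstream_gradient_loss_out.
dLoss_dOut1_check = tf.einsum("bohwc,bo->bhwc", dOut2_dOut1, dLoss_dOut2)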
Finally, we can apply the chain rule:
dOut2_dOut1 = tape_2.batch_jacobian(out_layer_2, out_layer_1)
dOut2_dWeights2 = tape_2.jacobian(out_layer_2, layer_2.trainable_weights)
dLoss_dOut2 = tape_2.gradient(loss, out_layer_2) # or dL/dY
dLoss_dWeights2 = upstream_gradient_loss_weights(dOut2_dWeights2[0], dLoss_dOut2)
dLoss_dBias2 = upstream_gradient_loss_weights(dOut2_dWeights2[1], dLoss_dOut2)
dLoss_dOut1 = upstream_gradient_loss_out(dOut2_dOut1, dLoss_dOut2)
dLoss_dWeights1 = upstream_gradient_loss_weights(dOut1_dWeights1[0], dLoss_dOut1)
dLoss_dBias1 = upstream_gradient_loss_weights(dOut1_dWeights1[1], dLoss_dOut1)
dLoss_dOut0 = upstream_gradient_loss_out(dOut1_dOut0, dLoss_dOut1)
dLoss_dWeights0 = upstream_gradient_loss_weights(dOut0_dWeights0[0], dLoss_dOut0)
dLoss_dBias0 = upstream_gradient_loss_weights(dOut0_dWeights0[1], dLoss_dOut0)
print("dLoss_dWeights2 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights2[0], dLoss_dWeights2).numpy())
print("dLoss_dBias2 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights2[1], dLoss_dBias2).numpy())
print("dLoss_dWeights1 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights1[0], dLoss_dWeights1).numpy())
print("dLoss_dBias1 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights1[1], dLoss_dBias1).numpy())
print("dLoss_dWeights0 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights0[0], dLoss_dWeights0).numpy())
print("dLoss_dBias0 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights0[1], dLoss_dBias0).numpy())
The output will be:
dLoss_dWeights2 valid: True
dLoss_dBias2 valid: True
dLoss_dWeights1 valid: True
dLoss_dBias1 valid: True
dLoss_dWeights0 valid: True
dLoss_dBias0 valid: True
because all the values are close to each other. Note that with the Jacobian-based approach we get a certain degree of error/approximation, around 10^-7, but in my opinion this is good enough.
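For completeness, once the split gradients match the reference ones they can be applied exactly like gradients from a single tape. A minimal sketch, assuming a plain SGD optimizer (the optimizer and learning rate are arbitrary choices, not part of the original setup):
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
optimizer.apply_gradients([
    (dLoss_dWeights0, layer_0.trainable_weights[0]),
    (dLoss_dBias0, layer_0.trainable_weights[1]),
    (dLoss_dWeights1, layer_1.trainable_weights[0]),
    (dLoss_dBias1, layer_1.trainable_weights[1]),
    (dLoss_dWeights2, layer_2.trainable_weights[0]),
    (dLoss_dBias2, layer_2.trainable_weights[1]),
])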
Gotchas
This is perfect and works nicely for small or toy models. However, in real scenarios you would have large images with huge numbers of dimensions. That is not ideal when dealing with Jacobians, which can quickly reach very high dimensionality. But that is a problem of its own.
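To make the scaling issue concrete, here is a rough back-of-the-envelope calculation (the image and filter sizes below are made up purely for illustration):
# Hypothetical sizes: a batch of 32 RGB images of 224x224 and a first conv layer
# producing a 222x222x64 feature map.
batch, in_elems, out_elems = 32, 224 * 224 * 3, 222 * 222 * 64
# The batch Jacobian of that single layer holds batch * out_elems * in_elems float32 values:
jacobian_bytes = batch * out_elems * in_elems * 4
print(f"{jacobian_bytes / 1e12:.1f} TB")  # ~60 TB for one layer, clearly impractical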
You can read more about the topic in the following resources:
- (EN) https://mblondel.org/teaching/autodiff-2020.pdf
- (EN) https://www.sscardapane.it/assets/files/nnds2021/Lecture_3_fully_connected.pdf
- (ITA) https://iaml.it/blog/differenziazione-automatica-parte-1