My X_test images are 128x128x3 and my Y_test images are 512x512x3. After each epoch, I want to display what the input (X_test) looks like, what the expected output (Y_test) looks like, and what the actual output looks like. So far, I only know how to add the first two to TensorBoard. Here is the code that sets up the callbacks:
model.fit(X_train,
          Y_train,
          epochs=epochs,
          verbose=2,
          shuffle=False,
          validation_data=(X_test, Y_test),
          batch_size=batch_size,
          callbacks=get_callbacks())
Here is the code for the callbacks:
import tensorflow as tf
from keras.callbacks import Callback
from keras.callbacks import TensorBoard
import io
from PIL import Image

from constants import batch_size


def get_callbacks():
    tbCallBack = TensorBoard(log_dir='./logs',
                             histogram_freq=1,
                             write_graph=True,
                             write_images=True,
                             write_grads=True,
                             batch_size=batch_size)
    tbi_callback = TensorBoardImage('Image test')
    return [tbCallBack, tbi_callback]
def make_image(tensor):
    """
    Convert a numpy representation of an image to an Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    height, width, channel = tensor.shape
    print(tensor)
    image = Image.fromarray(tensor.astype('uint8'))  # TODO: maybe float ?
    output = io.BytesIO()
    image.save(output, format='JPEG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                            width=width,
                            colorspace=channel,
                            encoded_image_string=image_string)
class TensorBoardImage(Callback):
    def __init__(self, tag):
        super().__init__()
        self.tag = tag

    def on_epoch_end(self, epoch, logs={}):
        # Load images
        img_input = self.validation_data[0][0]  # X_test
        img_valid = self.validation_data[1][0]  # Y_test

        print(self.validation_data[0].shape)  # (8, 128, 128, 3)
        print(self.validation_data[1].shape)  # (8, 512, 512, 3)

        image = make_image(img_input)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        image = make_image(img_valid)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return
I would like to know where/how to get the actual output of the network.
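My understanding is that the trained model is reachable as `self.model` inside a Keras callback, so something like `self.model.predict(self.validation_data[0][:1])` in `on_epoch_end` would presumably give me that output. Here is a minimal stand-in sketch of the shapes I expect (`FakeModel` is a hypothetical placeholder for my 128x128x3 → 512x512x3 network, just to illustrate):

```python
import numpy as np

# Hypothetical stand-in for my upscaling network; the real call would
# presumably be self.model.predict(self.validation_data[0][:1]) inside
# on_epoch_end.
class FakeModel:
    def predict(self, x):
        # maps a batch of 128x128x3 inputs to 512x512x3 outputs
        n = x.shape[0]
        return np.zeros((n, 512, 512, 3))

X_val = np.zeros((8, 128, 128, 3))           # same shape as my X_test batch
img_pred = FakeModel().predict(X_val[:1])[0]  # first validation sample only
print(img_pred.shape)  # (512, 512, 3), same shape as one Y_test image
```

Is that the right way to do it, or is there a cheaper hook for grabbing the prediction that was already computed during validation?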
Another problem I'm running into: here is a sample of one of the images being written to TensorBoard:
[[[0.10909907 0.09341043 0.08224604]
[0.11599099 0.09922747 0.09138277]
[0.15596421 0.13087936 0.11472746]
...
[0.87589591 0.72773653 0.69428956]
[0.87006552 0.7218123 0.68836991]
[0.87054225 0.72794635 0.6967475 ]]
...
[[0.26142332 0.16216267 0.10314116]
[0.31526875 0.18743924 0.12351286]
[0.5499796 0.35461449 0.24772873]
...
[0.80937942 0.62956016 0.53784871]
[0.80906054 0.62843601 0.5368183 ]
[0.81046278 0.62453899 0.53849678]]]
Is that why my image = Image.fromarray(tensor.astype('uint8')) line may be generating images that look completely different from the actual output? Here is a sample from TensorBoard:
I did try .astype('float64'), but it raised an error because it is apparently not a supported type.
In any case, I'm not sure this is really the problem, since all the other images I display in TensorBoard are just white/gray/black squares (that one right there, conv2D_7, is actually the last layer of my network, so it should be displaying images of the actual output, shouldn't it?):
Ultimately, I would like something like this, which I already display after training via matplotlib:
Finally, I would like to address the fact that this callback takes a long time to run. Is there a more efficient way to do it? It almost doubles my training time (probably because it needs to convert the numpy arrays to images before saving them to the TensorBoard log file).
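One thing I suspect: my on_epoch_end opens and closes a fresh tf.summary.FileWriter for every image, every epoch. If that's the bottleneck, reusing a single writer (opened once in on_train_begin, closed in on_train_end) should remove most of the overhead. A toy illustration of the two patterns (CountingWriter is a hypothetical stand-in for tf.summary.FileWriter that only counts how many times a writer gets constructed):

```python
# Hypothetical stand-in for tf.summary.FileWriter, counting constructions.
class CountingWriter:
    opens = 0

    def __init__(self):
        CountingWriter.opens += 1

    def add_summary(self, summary, step):
        pass  # the real writer would serialize to the log file here

    def close(self):
        pass

epochs, images_per_epoch = 50, 2

# Current pattern: a fresh writer per image, per epoch.
CountingWriter.opens = 0
for epoch in range(epochs):
    for _ in range(images_per_epoch):
        writer = CountingWriter()
        writer.add_summary('img', epoch)
        writer.close()
print(CountingWriter.opens)  # 100

# Reuse pattern: one writer for the whole training run.
CountingWriter.opens = 0
writer = CountingWriter()
for epoch in range(epochs):
    for _ in range(images_per_epoch):
        writer.add_summary('img', epoch)
writer.close()
print(CountingWriter.opens)  # 1
```

Would that alone account for the slowdown, or is the numpy → JPEG conversion itself the expensive part?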