Keras ConvLSTM2D: ValueError on the output layer

2024-02-07

I'm trying to train a 2D convolutional LSTM to make categorical predictions from video data. However, I seem to be having a problem with my output layer:

"ValueError: Error when checking target: expected dense_1 to have 5 dimensions, but got array with shape (1, 1939, 9)"

My current model is based on the ConvLSTM2D example provided by the Keras team: https://github.com/keras-team/keras/blob/master/examples/conv_lstm.py. I believe the error above is the result of my misunderstanding of that example and its underlying assumptions.

Data

I have an arbitrary number of videos, where each video contains an arbitrary number of frames. Each frame is 135x240x1 (channels last). This gives an input shape of (None, None, 135, 240, 1), where the two "None" values are the batch size and the number of timesteps, in that order. If I train on a single video of 1052 frames, my input shape becomes (1, 1052, 135, 240, 1).

For each frame, the model should predict a value between 0 and 1 for each of 9 classes. This means my output shape is (None, None, 9). If I train on a single video of 1052 frames, this shape becomes (1, 1052, 9).
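For reference, a minimal sketch of a Sequence that yields batches of these shapes (the class name, frame counts, and random arrays here are placeholders rather than my actual loader):

import numpy as np
from keras.utils import Sequence

class VideoSequence(Sequence):
    # One whole video per batch: x is (1, frames, 135, 240, 1), y is (1, frames, 9).
    def __init__(self, frame_counts):
        self.frame_counts = frame_counts   # e.g. [1052, 1389, ...]

    def __len__(self):
        return len(self.frame_counts)

    def __getitem__(self, idx):
        frames = self.frame_counts[idx]
        x = np.random.random((1, frames, 135, 240, 1)).astype('float32')
        y = np.random.random((1, frames, 9)).astype('float32')
        return x, y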

Model

Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
dense_1 (Dense)              (None, None, 135, 240, 9) 369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240

Source

from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, Dense

classes = 9  # 9 output classes, as described above

model = Sequential()

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        input_shape=(None, 135, 240, 1),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(Dense(
        units=classes,
        activation='softmax'
))
model.compile(
        loss='categorical_crossentropy',
        optimizer='adadelta'
)
model.fit_generator(generator=training_sequence)

Traceback

Epoch 1/1
Traceback (most recent call last):
  File ".\lstm.py", line 128, in <module>
    main()
  File ".\lstm.py", line 108, in main
    model.fit_generator(generator=training_sequence)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\models.py", line 1253, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 2244, in fit_generator
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1884, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1487, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 5 dimensions, but got array with shape (1, 1939, 9)

A sample input shape printed with the batch size set to 1 is (1, 1389, 135, 240, 1). This shape matches the requirements I described above, so I believe my Keras Sequence subclass ("training_sequence" in the source) is correct.

I suspect the problem is caused by going straight from BatchNormalization() into Dense(). After all, the traceback indicates that the problem occurs in dense_1 (the final layer). However, I don't want to lead anyone astray with my novice knowledge, so please take my assessment with a grain of salt.
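If I understand the Dense docs correctly, Dense only transforms the last axis of its input, so stacked directly on a (batch, time, 135, 240, 40) tensor it produces (batch, time, 135, 240, 9), which would explain why Keras expects a 5-dimensional target. A quick way to confirm (a throwaway model, just for checking shapes):

from keras.models import Sequential
from keras.layers import ConvLSTM2D, Dense

check = Sequential()
check.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 135, 240, 1),
                     padding='same', return_sequences=True))
check.add(Dense(units=9, activation='softmax'))

# Prints (None, None, 135, 240, 9): Dense maps only the channel axis,
# leaving one 9-vector per pixel per frame.
print(check.output_shape)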

Edit (March 27, 2018)

After reading this thread (https://stackoverflow.com/a/49468183/4674553), which deals with a similar model, I changed the final ConvLSTM2D layer so that its return_sequences argument is set to False instead of True. I also added a GlobalAveragePooling2D layer before the Dense layer. The updated model is below:

Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, 135, 240, 40)      115360
_________________________________________________________________
batch_normalization_3 (Batch (None, 135, 240, 40)      160
_________________________________________________________________
global_average_pooling2d_1 ( (None, 40)                0
_________________________________________________________________
dense_1 (Dense)              (None, 9)                 369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240
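In code, the change amounts to roughly the following for the final layers (a sketch; the earlier layers are unchanged from the source above):

from keras.layers import GlobalAveragePooling2D

# The final ConvLSTM2D now discards the time dimension...
model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=False))
model.add(BatchNormalization())

# ...and the spatial grid is averaged away before the classifier:
# (None, 135, 240, 40) -> (None, 40) -> (None, 9)
model.add(GlobalAveragePooling2D())
model.add(Dense(units=9, activation='softmax'))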

Here is the new traceback:

Traceback (most recent call last):
  File ".\lstm.py", line 131, in <module>
    main()
  File ".\lstm.py", line 111, in main
    model.fit_generator(generator=training_sequence)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\models.py", line 1253, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 2244, in fit_generator
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1884, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1487, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (1, 1034, 9)

I printed the x and y shapes on this run. x was (1, 1034, 135, 240, 1) and y was (1, 1034, 9). This may narrow the problem down. It looks like the issue is with the y data rather than the x data; specifically, the Dense layer does not like the time dimension. However, I'm not sure how to correct this.
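To make the mismatch concrete (the shapes are the ones printed for this run; x and y stand for one batch drawn from training_sequence, my Sequence subclass):

x, y = training_sequence[0]   # x: (1, 1034, 135, 240, 1), y: (1, 1034, 9)
print(model.output_shape)     # (None, 9) -- one prediction per video
print(y.shape)                # (1, 1034, 9) -- one label per frame, hence the 2D-vs-3D complaint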

Edit (March 28, 2018)

Yu-Yang's solution worked. For anyone who runs into a similar problem and wants to see the final model, here is the summary:

Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
average_pooling3d_1 (Average (None, None, 1, 1, 40)    0
_________________________________________________________________
reshape_1 (Reshape)          (None, None, 40)          0
_________________________________________________________________
dense_1 (Dense)              (None, None, 9)           369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240

And the source code:

from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, AveragePooling3D, Reshape, Dense

model = Sequential()

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        input_shape=(None, 135, 240, 1),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(AveragePooling3D((1, 135, 240)))
model.add(Reshape((-1, 40)))
model.add(Dense(
        units=9,
        activation='sigmoid'))

model.compile(
        loss='categorical_crossentropy',
        optimizer='adadelta'
)

Answer

If you want per-frame predictions, then you should definitely set return_sequences=True in your last ConvLSTM2D layer.

For the ValueError on the target shape, replace the GlobalAveragePooling2D() layer with AveragePooling3D((1, 135, 240)) plus Reshape((-1, 40)) so that the output shape is compatible with your target array.
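As a quick sanity check of the shape flow through the fixed model (dummy data; the frame count of 7 is arbitrary):

import numpy as np

# AveragePooling3D((1, 135, 240)) collapses the spatial grid but keeps the time axis:
# (None, None, 135, 240, 40) -> (None, None, 1, 1, 40) -> Reshape -> (None, None, 40) -> Dense -> (None, None, 9)
dummy = np.random.random((1, 7, 135, 240, 1)).astype('float32')
print(model.predict(dummy).shape)   # (1, 7, 9): one 9-class prediction per frame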
