Multi-Feature Causal CNN - Keras Implementation

2024-02-04

I am currently using a basic LSTM for regression forecasting, and I would like to implement a causal CNN instead, since it should be more computationally efficient.

I am struggling to figure out how to reshape my current data so that it fits a causal CNN cell while representing the same data/timestep relationships, and what the dilation rate should be set to.

My current data has this shape: (number of examples, lookback, features). Here is a basic example of the LSTM NN I am using now.
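For concreteness, here is a hedged sketch (not part of the original post) that fabricates random arrays in that shape, so the snippets below can run end to end; the dataset sizes are arbitrary placeholders:

import numpy as np

# Random stand-ins shaped (number of examples, lookback, features),
# matching lookback = 20 and n_features = 5 from the model code below
n_train, n_val, n_test = 1000, 200, 200
lookback, n_features = 20, 5

X_train = np.random.rand(n_train, lookback, n_features).astype('float32')
y_train = np.random.rand(n_train, 1).astype('float32')
X_val   = np.random.rand(n_val, lookback, n_features).astype('float32')
y_val   = np.random.rand(n_val, 1).astype('float32')
X_test  = np.random.rand(n_test, lookback, n_features).astype('float32')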

from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Activation, Dense

lookback = 20   #  height -- timeseries
n_features = 5  #  width  -- features at each timestep

# Build an LSTM to perform regression on time series input/output data
model = Sequential()
model.add(LSTM(units=256, return_sequences=True, input_shape=(lookback, n_features)))
model.add(Activation('elu'))

model.add(LSTM(units=256, return_sequences=True))
model.add(Activation('elu'))

model.add(LSTM(units=256))
model.add(Activation('elu'))

model.add(Dense(units=1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(X_train, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val, y_val),
          verbose=1, shuffle=True)

prediction = model.predict(X_test)

Then I created a new CNN model (although according to the Keras documentation, 'causal' padding is only an option for Conv1D, not Conv2D. If I understand correctly, having multiple features means I need to use Conv2D rather than Conv1D, but if I set Conv2D(padding='causal') I get the following error: Invalid padding: causal.)
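A quick sanity check confirms this (a minimal sketch; the exact error wording varies by Keras version):

from tensorflow.keras.layers import Conv1D, Conv2D

# Works: causal padding is defined for 1D temporal convolutions
conv1d = Conv1D(filters=8, kernel_size=3, padding='causal')

# Raises a ValueError (e.g. "Invalid padding: causal")
try:
    conv2d = Conv2D(filters=8, kernel_size=3, padding='causal')
except ValueError as e:
    print(e)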

In any case, I was also able to fit the data with the new shape (number of examples, lookback, features, 1) and run the following model with Conv2D layers:
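The trailing channel axis can be added with NumPy, for example (a sketch; this mutates the arrays from the earlier example in place):

import numpy as np

# (examples, lookback, features) -> (examples, lookback, features, 1)
X_train = X_train[..., np.newaxis]
X_val = X_val[..., np.newaxis]
X_test = X_test[..., np.newaxis]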

lookback = 20   #  height -- timeseries
n_features = 5  #  width  -- features at each timestep

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
# padding='same' is needed here: with the default 'valid' padding the small
# 20x5 input shrinks below the 3x3 kernel after the first pooling step
model.add(Conv2D(128, 3, activation='elu', padding='same', input_shape=(lookback, n_features, 1)))
model.add(MaxPool2D())
model.add(Conv2D(128, 3, activation='elu', padding='same'))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(X_train, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val, y_val),
          verbose=1, shuffle=True)

prediction = model.predict(X_test)

However, as I understand it, this does not propagate the data causally; it just treats the whole (lookback, features, 1) set as an image.
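One way to see what 'causal' buys you (a minimal sketch, not part of the original post): with causal padding, the output at step t cannot change when a later input changes, which can be checked numerically with an untrained layer:

import numpy as np
from tensorflow.keras.layers import Conv1D

layer = Conv1D(4, kernel_size=3, padding='causal', dilation_rate=2)

x = np.random.rand(1, 20, 5).astype('float32')
y1 = layer(x).numpy()

x2 = x.copy()
x2[:, 10:, :] = 0.0          # perturb only the "future" (steps 10..19)
y2 = layer(x2).numpy()

# Outputs before step 10 are identical: no information leaks backwards in time
print(np.allclose(y1[:, :10], y2[:, :10]))  # True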

Is there any way either to reshape my data to fit a Conv1D(padding='causal') layer with multiple features, or to somehow run the same data and input shape through Conv2D with 'causal' padding?


I believe that you can have causal padding with dilation for any number of input features. Here is the solution I propose.

The TimeDistributed layer (https://keras.io/layers/wrappers/#TimeDistributed) is key here.

From the Keras documentation: "This wrapper applies a layer to every temporal slice of an input. The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension."

For our purposes, we want this layer to apply "something" to each feature, so we move the features into the temporal index, which is 1.
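As a small sketch (not from the original answer) to make the shape bookkeeping explicit:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import InputLayer, Permute, Reshape

demo = Sequential([
    InputLayer(input_shape=(20, 5)),   # (batch, lookback, n_features)
    Permute(dims=(2, 1)),              # -> (batch, n_features, lookback)
    Reshape(target_shape=(5, 20, 1)),  # -> (batch, n_features, lookback, 1)
])
# TimeDistributed will now iterate over axis 1 (the 5 features),
# handing each wrapped Conv1D a slice of shape (lookback, 1)
print(demo.output_shape)  # (None, 5, 20, 1)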

Also relevant is the Conv1D documentation: https://keras.io/layers/convolutional/#Conv1D.

Specifically regarding channels: "The ordering of the dimensions in the inputs. 'channels_last' corresponds to inputs with shape (batch, steps, channels) (the default format for temporal data in Keras)."

from tensorflow.keras import Sequential, backend
from tensorflow.keras.layers import (Conv1D, Dense, GlobalMaxPool1D, InputLayer,
                                     MaxPool1D, Permute, Reshape, TimeDistributed)

backend.clear_session()
lookback = 20
n_features = 5

filters = 128

model = Sequential()
# NOTE: takes the original 3D (lookback, n_features) input; the channel
# dimension is added by the Reshape below
model.add(InputLayer(input_shape=(lookback, n_features)))
# Causal layers are first applied to the features independently
model.add(Permute(dims=(2, 1)))  # UPDATE: must permute prior to adding the new dim and reshaping
model.add(Reshape(target_shape=(n_features, lookback, 1)))
# After the reshape, the 5 input features are now treated as the temporal
# dimension for the TimeDistributed layer

# When Conv1D is applied to each input feature, it thinks the shape of the layer is (20, 1)
# with the default "channels_last", therefore...

# 20 times steps is the temporal dimension
# 1 is the "channel", the new location for the feature maps

model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0)))
# You could add pooling here if you want. 
# If you want interaction between features AND causal/dilation, then apply later
model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1)))
model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2)))


# Stack feature maps on top of each other so each time step can look at 
# all features produced earlier
model.add(Permute(dims=(2, 1, 3)))  # UPDATED to fix issue with reshape
model.add(Reshape(target_shape=(lookback, n_features * filters)))  # (20 time steps, 5 features * 128 filters)
# Causal layers are applied to the 5 input features dependently
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0))
model.add(MaxPool1D())
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1))
model.add(MaxPool1D())
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2))
model.add(GlobalMaxPool1D())
model.add(Dense(units=1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.summary()

Final model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
permute (Permute)            (None, 5, 20)             0         
_________________________________________________________________
reshape (Reshape)            (None, 5, 20, 1)          0         
_________________________________________________________________
time_distributed (TimeDistri (None, 5, 20, 128)        512       
_________________________________________________________________
time_distributed_1 (TimeDist (None, 5, 20, 128)        49280     
_________________________________________________________________
time_distributed_2 (TimeDist (None, 5, 20, 128)        49280     
_________________________________________________________________
permute_1 (Permute)          (None, 20, 5, 128)        0         
_________________________________________________________________
reshape_1 (Reshape)          (None, 20, 640)           0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 20, 128)           245888    
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 10, 128)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 10, 128)           49280     
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 5, 128)            0         
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 5, 128)            49280     
_________________________________________________________________
global_max_pooling1d (Global (None, 128)               0         
_________________________________________________________________
dense (Dense)                (None, 1)                 129       
=================================================================
Total params: 443,649
Trainable params: 443,649
Non-trainable params: 0
_________________________________________________________________
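A hedged usage note (not from the original answer): the model consumes the same 3D arrays as the LSTM, so training is unchanged. With kernel size 3 and dilation rates 1, 2, 4, each causal stack has a receptive field of 1 + (3-1)*(1+2+4) = 15 time steps, which covers most of the 20-step lookback:

# Assumes X_train, y_train, X_val, y_val, X_test are the original 3D arrays;
# if you added a trailing channel axis for the Conv2D experiment, drop it first:
# X_train = X_train.squeeze(-1)

model.fit(X_train, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val, y_val),
          verbose=1, shuffle=True)

prediction = model.predict(X_test)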

Edit:

"Why do you need to reshape and use n_features as the temporal layer?"

The reason n_features initially needs to be in the temporal dimension is that Conv1D with dilation and causal padding only works with one feature at a time, and also because of how the TimeDistributed layer is implemented.

From their documentation: "Consider a batch of 32 samples, where each sample is a sequence of 10 vectors of 16 dimensions. The batch input shape of the layer is then (32, 10, 16), and the input_shape, not including the samples dimension, is (10, 16).

You can then use TimeDistributed to apply a Dense layer to each of the 10 timesteps, independently:"

By applying the TimeDistributed layer independently to each feature, the dimensionality of the problem is reduced as if there were only a single feature (which readily allows for dilation and causal padding). Since there are 5 features, they first need to be handled separately.
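A minimal sketch of that per-slice behavior, adapted from the Keras docs example quoted above (not from the original answer):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, TimeDistributed

# The same Dense(8) weights are applied independently to each of the 10 slices
td = Sequential([TimeDistributed(Dense(8), input_shape=(10, 16))])
print(td.output_shape)  # (None, 10, 8)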

  • With the edit, this recommendation still applies.

  • There should be no difference network-wise between including the InputLayer in the first layer or keeping it separate, so you can definitely fold it into the first CNN if that resolves the issue.
