How do I embed a sequence of sentences in an RNN?

2024-01-11

I am trying to build an RNN model (in PyTorch) that takes several sentences and then classifies them as either Class 0 or Class 1.

To make the problem concrete, assume the max_len of a sentence is 4 and the max number of time steps (sentences) is 5. Each data point then has the following form (0 is the padding value):

    x[1] = [
        # Input features at timestep 1
        [1, 48, 91, 0],
        # Input features at timestep 2
        [20, 5, 17, 32],
        # Input features at timestep 3
        [12, 18, 0, 0],
        # Input features at timestep 4
        [0, 0, 0, 0],
        # Input features at timestep 5
        [0, 0, 0, 0],
    ]
    y[1] = [1]

When I had just one sentence per target, I simply passed each word through an embedding layer and then into an LSTM or GRU. But I am not sure what to do when there is a sequence of sentences per target.

How can I build an embedding that handles sentences?


The simplest approach is to use two LSTMs: a word-level LSTM that encodes each sentence into a single vector, and a sentence-level LSTM that encodes the resulting sequence of sentence vectors into one vector for classification.

Prepare a toy dataset

    import torch
    import torch.nn as nn

    xi = [
        # Input features at timestep 1
        [1, 48, 91, 0],
        # Input features at timestep 2
        [20, 5, 17, 32],
        # Input features at timestep 3
        [12, 18, 0, 0],
        # Input features at timestep 4
        [0, 0, 0, 0],
        # Input features at timestep 5
        [0, 0, 0, 0],
    ]
    yi = 1

    x = torch.tensor([xi, xi])
    y = torch.tensor([yi, yi])

    print(x.shape)
    # torch.Size([2, 5, 4])

    print(y.shape)
    # torch.Size([2])

Here, x is a batch of inputs with batch_size = 2.
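As an aside, if your real sentences have variable lengths, a padded batch like the one above can be built with torch.nn.utils.rnn.pad_sequence. A minimal sketch; the token ids below are made-up illustration values, not from the original data:

    from torch.nn.utils.rnn import pad_sequence

    # Two sentences of different lengths (arbitrary example token ids)
    sents = [torch.tensor([1, 48, 91]), torch.tensor([20, 5, 17, 32])]

    # Right-pad with 0 so both rows have length 4
    padded = pad_sequence(sents, batch_first=True, padding_value=0)
    print(padded.shape)  # torch.Size([2, 4])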

Embed the input

    vocab_size = 1000
    embed_size = 100
    hidden_size = 200
    embed = nn.Embedding(vocab_size, embed_size)

    # shape [2, 5, 4, 100]
    x = embed(x)
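One refinement worth considering (my addition, not part of the original answer): since 0 is the padding id, passing padding_idx=0 to nn.Embedding pins the padding embedding to a zero vector and excludes it from gradient updates:

    # Assumption: id 0 is reserved for padding, so its embedding stays zero
    embed = nn.Embedding(vocab_size, embed_size, padding_idx=0)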

The first, word-level LSTM encodes each sentence into a single vector

    # Convert x into one flat batch of sentences:
    # reshape [2, 5, 4, 100] into [10, 4, 100]
    bs = 2
    x = x.view(bs * 5, 4, 100)

    wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)

    # Keep only the final hidden state of each sentence
    _, (hn, _) = wlstm(x)

    # hn shape [1, 10, 200] (num_layers, batch, hidden_size)

    # Take the hidden state of the last (only) layer
    hn = hn[0]  # [10, 200]
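Note that the final hidden state above is computed over the padding tokens too (and timesteps 4 and 5 are entirely padding). If that matters for your task, pack_padded_sequence makes the LSTM stop at each sentence's true length. A minimal sketch, assuming the true lengths are known; the values here are hypothetical, and all-padding rows still need a length of at least 1:

    from torch.nn.utils.rnn import pack_padded_sequence

    # Hypothetical true lengths of the 10 sentences (padding-only rows get 1)
    lengths = torch.tensor([3, 4, 2, 1, 1, 3, 4, 2, 1, 1])

    packed = pack_padded_sequence(x, lengths, batch_first=True,
                                  enforce_sorted=False)
    _, (hn, _) = wlstm(packed)  # hn now reflects only the real tokens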

The second, sentence-level LSTM encodes the sequence of sentence vectors into a single vector

    # Reshape hn into [bs, num_seq, hidden_size]
    hn = hn.view(2, 5, 200)

    # Pass through another LSTM and keep the final hidden state
    slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
    _, (hn, _) = slstm(hn)  # [1, 2, 200]

    # Similarly, take the hidden state of the last layer
    hn = hn[0]  # [2, 200]

Add a classification layer on top

    pred_linear = nn.Linear(hidden_size, 1)

    # [2, 1]
    output = torch.sigmoid(pred_linear(hn))
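For completeness, here is a minimal sketch that wraps the steps above into a single nn.Module and runs one training step. The class name HierarchicalRNN and the choice of BCELoss with Adam are my own assumptions, not part of the original answer:

    class HierarchicalRNN(nn.Module):
        """Word-level LSTM over tokens, then sentence-level LSTM over sentences."""

        def __init__(self, vocab_size, embed_size, hidden_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_size, padding_idx=0)
            self.wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
            self.slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
            self.pred_linear = nn.Linear(hidden_size, 1)

        def forward(self, x):
            bs, num_seq, seq_len = x.shape
            x = self.embed(x)                      # [bs, num_seq, seq_len, embed]
            x = x.view(bs * num_seq, seq_len, -1)  # one flat batch of sentences
            _, (hn, _) = self.wlstm(x)
            hn = hn[0].view(bs, num_seq, -1)       # [bs, num_seq, hidden]
            _, (hn, _) = self.slstm(hn)
            return torch.sigmoid(self.pred_linear(hn[0]))  # [bs, 1]

    model = HierarchicalRNN(vocab_size, embed_size, hidden_size)
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on the toy batch
    x = torch.tensor([xi, xi])  # rebuild the integer batch (x was overwritten above)
    optimizer.zero_grad()
    loss = criterion(model(x), y.float().unsqueeze(1))
    loss.backward()
    optimizer.step()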