TensorFlow Eager Execution does not work with learning rate decay

2024-06-19

I'm trying to get an eager-execution model to work with learning rate decay here, but with no success. It looks like a bug, because the learning rate decay tensor does not seem to be updated. Could you help me out if I'm missing something? Thanks.

The code below learns some word embeddings. However, the learning rate decay part does not work at all.

import os

import tensorflow as tf
import tensorflow.contrib.eager as tfe

import word2vec_utils

tf.enable_eager_execution()

# Hyperparameters (NUM_SAMPLED, VOCAB_SIZE, EMBED_SIZE, BATCH_SIZE, SKIP_WINDOW,
# SKIP_STEP, NUM_TRAIN_STEPS, DOWNLOAD_URL, EXPECTED_BYTES, VISUAL_FLD) are
# defined elsewhere in the original script.


class Word2Vec(tf.keras.Model):
    def __init__(self, vocab_size, embed_size, num_sampled=NUM_SAMPLED):
        # tf.keras.Model requires super().__init__() before variables are created
        super(Word2Vec, self).__init__()
        self.vocab_size = vocab_size
        self.num_sampled = num_sampled
        self.embed_matrix = tfe.Variable(tf.random_uniform(
            [vocab_size, embed_size]), name="embedding_matrix")
        self.nce_weight = tfe.Variable(tf.truncated_normal(
            [vocab_size, embed_size],
            stddev=1.0 / (embed_size ** 0.5)), name="weights")
        self.nce_bias = tfe.Variable(tf.zeros([vocab_size]), name="biases")

    def compute_loss(self, center_words, target_words):
        """Computes the forward pass of word2vec with the NCE loss."""
        embed = tf.nn.embedding_lookup(self.embed_matrix, center_words)
        loss = tf.reduce_mean(tf.nn.nce_loss(weights=self.nce_weight,
                                             biases=self.nce_bias,
                                             labels=target_words,
                                             inputs=embed,
                                             num_sampled=self.num_sampled,
                                             num_classes=self.vocab_size))
        return loss


def gen():
    yield from word2vec_utils.batch_gen(DOWNLOAD_URL, EXPECTED_BYTES,
                                        VOCAB_SIZE, BATCH_SIZE, SKIP_WINDOW,
                                        VISUAL_FLD)


def main():
    dataset = tf.data.Dataset.from_generator(gen, (tf.int32, tf.int32),
                                             (tf.TensorShape([BATCH_SIZE]),
                                              tf.TensorShape([BATCH_SIZE, 1])))

    global_step = tf.train.get_or_create_global_step()
    starter_learning_rate = 1.0
    end_learning_rate = 0.01
    decay_steps = 1000
    learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step.numpy(),
                                              decay_steps, end_learning_rate,
                                              power=0.5)

    train_writer = tf.contrib.summary.create_file_writer('./checkpoints')
    train_writer.set_as_default()

    optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)
    model = Word2Vec(vocab_size=VOCAB_SIZE, embed_size=EMBED_SIZE)
    grad_fn = tfe.implicit_value_and_gradients(model.compute_loss)
    total_loss = 0.0  # for average loss in the last SKIP_STEP steps

    checkpoint_dir = "./checkpoints/"
    checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
    root = tfe.Checkpoint(optimizer=optimizer,
                          model=model,
                          optimizer_step=tf.train.get_or_create_global_step())

    while global_step < NUM_TRAIN_STEPS:

        for center_words, target_words in tfe.Iterator(dataset):

            with tf.contrib.summary.record_summaries_every_n_global_steps(100):

                if global_step >= NUM_TRAIN_STEPS:
                    break

                loss_batch, grads = grad_fn(center_words, target_words)
                tf.contrib.summary.scalar('loss', loss_batch)
                tf.contrib.summary.scalar('learning_rate', learning_rate)

                # print(grads)
                # print(len(grads))
                total_loss += loss_batch
                optimizer.apply_gradients(grads, global_step)
                if (global_step.numpy() + 1) % SKIP_STEP == 0:
                    print('Average loss at step {}: {:5.1f}'.format(
                        global_step.numpy(), total_loss / SKIP_STEP))
                    total_loss = 0.0

        root.save(file_prefix=checkpoint_prefix)

if __name__ == '__main__':
    main()

Note that when eager execution is enabled, tf.Tensor objects represent concrete values (see https://www.tensorflow.org/programmers_guide/eager#setup_and_basic_usage), as opposed to symbolic handles to computation that will only happen when Session.run() is called.
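
As a quick illustration of that difference, here is a minimal sketch (not from the original post; it assumes a TF 1.x install and only shows that an eager op yields a fixed value the moment it runs):

import tensorflow as tf

tf.enable_eager_execution()

# The op executes immediately and returns a concrete value...
x = tf.multiply(2.0, 3.0)
print(x)          # tf.Tensor(6.0, shape=(), dtype=float32)
# ...there is no symbolic graph node left over for a later Session.run() to re-evaluate.
print(x.numpy())  # 6.0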

Therefore, in the snippet above, the line:

learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step.numpy(),
                                          decay_steps, end_learning_rate,
                                          power=0.5)

computes the decayed value exactly once, using the value of global_step at the time it is called, and when the optimizer is created with:

optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)

it is given a fixed learning rate.
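
A minimal sketch of that behavior (hypothetical numbers, assuming a TF 1.x release from around the time of the question, where the decay helpers return a tensor rather than a callable): the decayed value is a one-off snapshot and does not follow later updates to global_step.

import tensorflow as tf

tf.enable_eager_execution()

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.polynomial_decay(1.0, global_step.numpy(),
                                          decay_steps=1000,
                                          end_learning_rate=0.01, power=0.5)
print(learning_rate.numpy())  # 1.0 -- decayed value at step 0

global_step.assign_add(500)
print(learning_rate.numpy())  # still 1.0 -- the snapshot is never recomputed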

To actually decay the learning rate, you need to call tf.train.polynomial_decay repeatedly (with the updated value of global_step). One way to do this is to replicate what is done in the RNN example at https://github.com/tensorflow/tensorflow/blob/8753e2ebde6c58b56675cc19ab7ff83072824a62/tensorflow/contrib/eager/python/examples/rnn_ptb/rnn_ptb.py#L319, using something like this:

starter_learning_rate = 1.0
learning_rate = tfe.Variable(starter_learning_rate)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)
while global_step < NUM_TRAIN_STEPS:
    # ....
    learning_rate.assign(tf.train.polynomial_decay(starter_learning_rate,
                                                   global_step, decay_steps,
                                                   end_learning_rate, power=0.5))

This way you capture the learning_rate in a variable that can be updated. Moreover, it is simple to include the current learning_rate in the checkpoint as well, by including it when the Checkpoint object is created.
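
For example, here is a hedged sketch of that last point, reusing names from the snippets above (learning_rate is the tfe.Variable created before the optimizer; the keyword names given to Checkpoint are illustrative):

root = tfe.Checkpoint(optimizer=optimizer,
                      model=model,
                      learning_rate=learning_rate,
                      optimizer_step=tf.train.get_or_create_global_step())
root.save(file_prefix=checkpoint_prefix)
# Restoring later brings back the model, the optimizer state, the global step
# and the current learning_rate together:
# root.restore(tf.train.latest_checkpoint(checkpoint_dir))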

Hope that helps.
