I currently have my Trainer set up as:
training_args = TrainingArguments(
    output_dir=f"./results_{model_checkpoint}",
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=2,
    weight_decay=0.01,
    push_to_hub=True,
    save_total_limit=1,
    resume_from_checkpoint=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_qa["train"],
    eval_dataset=tokenized_qa["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)
After training finishes, my output_dir contains several files saved by the Trainer:
['README.md',
'tokenizer.json',
'training_args.bin',
'.git',
'.gitignore',
'vocab.txt',
'config.json',
'checkpoint-5000',
'pytorch_model.bin',
'tokenizer_config.json',
'special_tokens_map.json',
'.gitattributes']
From the docs at https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.train.resume_from_checkpoint, it looks like resume_from_checkpoint
should continue training the model from the last checkpoint:
resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
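Note that the docs excerpt above describes resume_from_checkpoint as a parameter of Trainer.train() itself, not only as a TrainingArguments field. A minimal stand-in class (my own sketch, not the real transformers API) illustrates the call shapes the docs describe:

```python
class FakeTrainer:
    """Hypothetical stand-in for transformers.Trainer, only to show the
    resume_from_checkpoint call shapes described in the quoted docs."""

    def train(self, resume_from_checkpoint=None):
        # bool True: load the last checkpoint found in args.output_dir
        if resume_from_checkpoint is True:
            return "resume from last checkpoint in args.output_dir"
        # str: load the specific checkpoint directory given
        if isinstance(resume_from_checkpoint, str):
            return f"resume from {resume_from_checkpoint}"
        # default: start a fresh training run
        return "start training from scratch"

trainer = FakeTrainer()
print(trainer.train(resume_from_checkpoint=True))
# prints "resume from last checkpoint in args.output_dir"
```

If the real Trainer behaves like the docs say, passing the flag to train() rather than to TrainingArguments would be the documented way to resume.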
But when I call trainer.train(),
it seems to delete the last checkpoint and start a new one:
Saving model checkpoint to ./results_distilbert-base-uncased/checkpoint-500
...
Deleting older checkpoint [results_distilbert-base-uncased/checkpoint-5000] due to args.save_total_limit
Does it actually resume training from the last checkpoint (i.e., 5000) and simply restart the checkpoint numbering from 0 (so its first checkpoint after 500 steps is "checkpoint-500"), or does it just not resume training at all? I haven't found a way to test this, and the docs aren't clear on it either.
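One possible signal (this is my assumption, based on the checkpoint directories being named after the step count) is that the number in the directory name encodes the training step, so a genuine resume from step 5000 saving every 500 steps would presumably produce "checkpoint-5500" next, whereas a fresh run would first produce "checkpoint-500". A tiny helper for reading those names:

```python
import re

def checkpoint_step(name: str) -> int:
    """Extract the step encoded in a checkpoint dir name,
    e.g. 'checkpoint-5000' -> 5000 (hypothetical helper)."""
    m = re.fullmatch(r"checkpoint-(\d+)", name)
    if m is None:
        raise ValueError(f"not a checkpoint dir: {name}")
    return int(m.group(1))

# A resumed run's next save should carry a step *larger* than 5000;
# seeing a smaller step would suggest the counter restarted.
assert checkpoint_step("checkpoint-5000") == 5000
assert checkpoint_step("checkpoint-500") == 500
```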