Save and Load the Model
Created On: Feb 09, 2021 | Last Updated: Oct 15, 2024 | Last Verified: Nov 05, 2024
In this section we will look at how to persist model state with saving, loading and running model predictions.
import torch
import torchvision.models as models
Saving and Loading Model Weights
PyTorch models store the learned parameters in an internal state dictionary, called ``state_dict``. These can be persisted via the ``torch.save`` method:
model = models.vgg16(weights='IMAGENET1K_V1')
torch.save(model.state_dict(), 'model_weights.pth')
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg16-397923af.pth
  0%|          | 0.00/528M [00:00<?, ?B/s]
100%|##########| 528M/528M [00:02<00:00, 225MB/s]
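The same pattern works for any ``nn.Module``. A minimal sketch with a small hypothetical model (``TinyNet`` is an illustration, not part of torchvision) shows that the saved checkpoint is an ordinary dictionary mapping parameter names to tensors:

```python
import torch
import torch.nn as nn

# A small hypothetical model, used so the example runs without any download.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
torch.save(model.state_dict(), 'tiny_weights.pth')

# The checkpoint is a plain dict of parameter names to tensors.
state = torch.load('tiny_weights.pth', weights_only=True)
print(sorted(state.keys()))  # ['fc.bias', 'fc.weight']
```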
To load model weights, you need to create an instance of the same model first, and then load the parameters using the ``load_state_dict()`` method.
In the code below, we set ``weights_only=True`` to limit the functions executed during unpickling to only those necessary for loading weights. Using ``weights_only=True`` is considered a best practice when loading weights.
model = models.vgg16() # we do not specify ``weights``, i.e. create untrained model
model.load_state_dict(torch.load('model_weights.pth', weights_only=True))
model.eval()
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
Note
Be sure to call the ``model.eval()`` method before inferencing to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.
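To see why this matters, here is a minimal sketch using a standalone ``nn.Dropout`` layer: in training mode it randomly zeroes elements, while in evaluation mode it acts as the identity, so inference is deterministic:

```python
import torch
import torch.nn as nn

# A bare dropout layer, to illustrate the effect of train vs. eval mode.
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()                     # training mode: elements are randomly zeroed
train_out = drop(x)              # stochastic output (varies between calls)

drop.eval()                      # evaluation mode: dropout becomes the identity
eval_out = drop(x)

print(torch.equal(eval_out, x))  # True: nothing is dropped in eval mode
```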
保存和加载带有形状的模型¶
在加载模型权重时,我们需要先实例化模型类,因为类
定义网络的结构。我们可能希望将这个类的结构与
模型,在这种情况下,我们可以将 (而不是 ) 传递给 saving 函数:model
model.state_dict()
torch.save(model, 'model.pth')
We can then load the model as demonstrated below.
As described in Saving and loading torch.nn.Modules, saving ``state_dict`` is considered the best practice. However, below we use ``weights_only=False`` because this involves loading the model, which is a legacy use case for ``torch.save``.
model = torch.load('model.pth', weights_only=False)
Note
This approach uses the Python pickle module when serializing the model, so it relies on the actual class definition being available when loading the model.
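A minimal sketch of what this dependency means in practice, again using a hypothetical ``TinyNet`` class: pickle stores only a reference to the class, not its code, so the same class definition must be importable wherever the model is loaded.

```python
import torch
import torch.nn as nn

# Hypothetical model class. When saving the whole model, pickle records a
# reference to this class; the class code itself is NOT stored in the file.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
torch.save(model, 'tiny_model.pth')

# This works here because TinyNet is defined in this process; in a fresh
# script that does not define (or import) TinyNet, this load would fail.
loaded = torch.load('tiny_model.pth', weights_only=False)
print(isinstance(loaded, TinyNet))  # True
```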