Knowledge Distillation Tutorial¶
Created On: Aug 22, 2023 | Last Updated: Jul 30, 2024 | Last Verified: Nov 05, 2024
Knowledge distillation is a technique that enables knowledge transfer from large, computationally expensive models to smaller ones without losing validity. This allows for deployment on less powerful hardware, making evaluation faster and more efficient.
In this tutorial, we will run a number of experiments focused on improving the accuracy of a lightweight neural network, using a more powerful network as a teacher. The computational cost and the speed of the lightweight network will remain unaffected; our intervention only focuses on its weights, not on its forward pass. Applications of this technology can be found in devices such as drones or mobile phones.
In this tutorial, we do not use any external packages, as everything we need is available in torch and torchvision.
In this tutorial, you will learn:
How to modify model classes to extract hidden representations and use them for further calculations
How to modify regular train loops in PyTorch to include additional losses on top of, for example, cross-entropy for classification
How to improve the performance of lightweight models by using more complex models as teachers
Prerequisites¶
1 GPU, 4GB of memory
PyTorch v2.0 or later
CIFAR-10 dataset (downloaded by the script and saved in a directory called /data)
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.datasets as datasets
# Check if GPU is available, and if not, use the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Loading CIFAR-10¶
CIFAR-10 is a popular image dataset with ten classes. Our objective is to predict one of the following classes for each input image.
The input images are RGB, so they have 3 channels and are 32x32 pixels. Basically, each image is described by 3 x 32 x 32 = 3072 numbers ranging from 0 to 255. A common practice in neural networks is to normalize the input, which is done for multiple reasons, including avoiding saturation in commonly used activation functions and increasing numerical stability. Our normalization process consists of subtracting the mean and dividing by the standard deviation along each channel. The tensors "mean=[0.485, 0.456, 0.406]" and "std=[0.229, 0.224, 0.225]" were already computed, and they represent the mean and standard deviation of the predefined subset of CIFAR-10 intended to be the training set. Notice how we use these values for the test set as well, without recomputing the mean and standard deviation from scratch. This is because the network was trained on features produced by subtracting and dividing by the numbers above, and we want to maintain consistency. Furthermore, in real life, we would not be able to compute the mean and standard deviation of the test set since, under our assumptions, this data would not be accessible at that point.
As a closing point, we often refer to this held-out set as the validation set, and we use a separate set, called the test set, after optimizing a model's performance on the validation set. This is done to avoid selecting a model based on the greedy and biased optimization of a single metric.
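For intuition, the per-channel normalization described above is equivalent to the following standalone sketch (a hypothetical tensor, shown only for illustration; the actual preprocessing below uses transforms.Normalize):
# A fake image already scaled to [0, 1], as transforms.ToTensor() would produce
img = torch.rand(3, 32, 32)
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
# Subtract the per-channel mean and divide by the per-channel standard deviation
normalized = (img - mean) / std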
# Below we are preprocessing data for CIFAR-10. We use an arbitrary batch size of 128.
transforms_cifar = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Loading the CIFAR-10 dataset:
train_dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms_cifar)
test_dataset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms_cifar)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
100%|##########| 170M/170M [00:01<00:00, 108MB/s]
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified
Note
This section is for CPU users only who are interested in quick results. Use this option only if you're interested in a small-scale experiment. Keep in mind that the code should run fairly quickly using any GPU. Select only the first num_images_to_keep images from the train/test dataset.
#from torch.utils.data import Subset
#num_images_to_keep = 2000
#train_dataset = Subset(train_dataset, range(min(num_images_to_keep, 50_000)))
#test_dataset = Subset(test_dataset, range(min(num_images_to_keep, 10_000)))
#Dataloaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=128, shuffle=False, num_workers=2)
Defining model classes and utility functions¶
Next, we need to define our model classes. Several user-defined parameters need to be set here. We use two different architectures, keeping the number of filters fixed across our experiments to ensure fair comparisons. Both architectures are Convolutional Neural Networks (CNNs) with a different number of convolutional layers that serve as feature extractors, followed by a classifier with 10 classes. The number of filters and neurons is smaller for the student.
# Deeper neural network class to be used as teacher:
class DeepNN(nn.Module):
def __init__(self, num_classes=10):
super(DeepNN, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 128, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(128, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(64, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(512, num_classes)
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
# Lightweight neural network class to be used as student:
class LightNN(nn.Module):
def __init__(self, num_classes=10):
super(LightNN, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(16, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, num_classes)
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
We employ 2 functions to help us produce and evaluate the results on our original classification task. One function is called train and takes the following arguments:
model: A model instance to train (update its weights) via this function.
train_loader: We defined our train_loader above, and its job is to feed the data into the model.
epochs: The number of times we loop over the dataset.
learning_rate: The learning rate determines how large our steps towards convergence should be. Too large or too small steps can be detrimental.
device: Determines the device to run the workload on. Can be either CPU or GPU depending on availability.
Our test function is similar, but it will be invoked with test_loader to load images from the test set.
def train(model, train_loader, epochs, learning_rate, device):
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
model.train()
for epoch in range(epochs):
running_loss = 0.0
for inputs, labels in train_loader:
# inputs: A collection of batch_size images
# labels: A vector of dimensionality batch_size with integers denoting class of each image
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = model(inputs)
# outputs: Output of the network for the collection of images. A tensor of dimensionality batch_size x num_classes
# labels: The actual labels of the images. Vector of dimensionality batch_size
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
print(f"Epoch {epoch+1}/{epochs}, Loss: {running_loss / len(train_loader)}")
def test(model, test_loader, device):
model.to(device)
model.eval()
correct = 0
total = 0
with torch.no_grad():
for inputs, labels in test_loader:
inputs, labels = inputs.to(device), labels.to(device)
outputs = model(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
accuracy = 100 * correct / total
print(f"Test Accuracy: {accuracy:.2f}%")
return accuracy
Cross-entropy runs¶
For reproducibility, we need to set the torch manual seed. We train networks using different techniques, so to compare them fairly, it makes sense to initialize the networks with the same weights. Start by training the teacher network using cross-entropy:
torch.manual_seed(42)
nn_deep = DeepNN(num_classes=10).to(device)
train(nn_deep, train_loader, epochs=10, learning_rate=0.001, device=device)
test_accuracy_deep = test(nn_deep, test_loader, device)
# Instantiate the lightweight network:
torch.manual_seed(42)
nn_light = LightNN(num_classes=10).to(device)
Epoch 1/10, Loss: 1.3366997722164748
Epoch 2/10, Loss: 0.8720864758772009
Epoch 3/10, Loss: 0.6820237918583023
Epoch 4/10, Loss: 0.5375950956893394
Epoch 5/10, Loss: 0.41490358377204223
Epoch 6/10, Loss: 0.3123219789141584
Epoch 7/10, Loss: 0.22101545549185989
Epoch 8/10, Loss: 0.17098309014878615
Epoch 9/10, Loss: 0.13455525941460791
Epoch 10/10, Loss: 0.12078842208208636
Test Accuracy: 75.45%
We instantiate one more lightweight network model to compare their performances. Back propagation is sensitive to weight initialization, so we need to make sure these two networks have the exact same initialization.
torch.manual_seed(42)
new_nn_light = LightNN(num_classes=10).to(device)
To ensure we have created a copy of the first network, we inspect the norm of its first layer. If it matches, then we are safe to conclude that the networks are indeed the same.
# Print the norm of the first layer of the initial lightweight model
print("Norm of 1st layer of nn_light:", torch.norm(nn_light.features[0].weight).item())
# Print the norm of the first layer of the new lightweight model
print("Norm of 1st layer of new_nn_light:", torch.norm(new_nn_light.features[0].weight).item())
Norm of 1st layer of nn_light: 2.327361822128296
Norm of 1st layer of new_nn_light: 2.327361822128296
Print the total number of parameters in each model:
total_params_deep = "{:,}".format(sum(p.numel() for p in nn_deep.parameters()))
print(f"DeepNN parameters: {total_params_deep}")
total_params_light = "{:,}".format(sum(p.numel() for p in nn_light.parameters()))
print(f"LightNN parameters: {total_params_light}")
DeepNN parameters: 1,186,986
LightNN parameters: 267,738
Train and test the lightweight network with cross entropy loss:
train(nn_light, train_loader, epochs=10, learning_rate=0.001, device=device)
test_accuracy_light_ce = test(nn_light, test_loader, device)
Epoch 1/10, Loss: 1.466049101346594
Epoch 2/10, Loss: 1.1519653670623173
Epoch 3/10, Loss: 1.0232561651398153
Epoch 4/10, Loss: 0.9235453337354733
Epoch 5/10, Loss: 0.8479179534156
Epoch 6/10, Loss: 0.7824301378196462
Epoch 7/10, Loss: 0.7184310383199121
Epoch 8/10, Loss: 0.6588469929707325
Epoch 9/10, Loss: 0.6075488568266945
Epoch 10/10, Loss: 0.556159371533967
Test Accuracy: 70.15%
As we can see, based on test accuracy, we can now compare the deeper network that is to be used as a teacher with the lightweight network that is our supposed student. So far, our student has not intervened with the teacher, therefore this performance is achieved by the student itself. The metrics so far can be seen with the following lines:
print(f"Teacher accuracy: {test_accuracy_deep:.2f}%")
print(f"Student accuracy: {test_accuracy_light_ce:.2f}%")
Teacher accuracy: 75.45%
Student accuracy: 70.15%
Knowledge distillation run¶
Now let's try to improve the test accuracy of the student network by incorporating the teacher. Knowledge distillation is a straightforward technique to achieve this, based on the fact that both networks output a probability distribution over our classes. Therefore, the two networks share the same number of output neurons. The method works by incorporating an additional loss into the traditional cross entropy loss, which is based on the softmax output of the teacher network. The assumption is that the output activations of a properly trained teacher network carry additional information that can be leveraged by a student network during training. The original work suggests that utilizing ratios of smaller probabilities in the soft targets can help achieve the underlying objective of deep neural networks, which is to create a similarity structure over the data where similar objects are mapped closer together. For example, in CIFAR-10, a truck could be mistaken for an automobile or an airplane if its wheels are present, but it is less likely to be mistaken for a dog. Therefore, it makes sense to assume that valuable information resides not only in the top prediction of a properly trained model but in the entire output distribution. However, cross entropy alone does not sufficiently exploit this information, as the activations for the non-predicted classes tend to be so small that the propagated gradients do not meaningfully change the weights to construct this desirable vector space.
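For reference, the distillation objective that the training function below implements can be written compactly as follows (our own notation, not part of the original text): with teacher logits $z^t$, student logits $z^s$, true label $y$, and temperature $T$,
$$\mathcal{L} = w_{\text{soft}} \cdot T^2 \cdot \mathrm{KL}\!\left(\mathrm{softmax}(z^t/T)\,\|\,\mathrm{softmax}(z^s/T)\right) + w_{\text{ce}} \cdot \mathrm{CE}(z^s, y)$$
where $w_{\text{soft}}$ and $w_{\text{ce}}$ correspond to soft_target_loss_weight and ce_loss_weight below, and the KL term is averaged over the batch.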
As we continue defining our first helper function that introduces a teacher-student dynamic, we need to include a few extra parameters:
T: Temperature controls the smoothness of the output distributions. A larger T leads to smoother distributions, so smaller probabilities get a larger boost.
soft_target_loss_weight: A weight assigned to the extra objective we're about to include.
ce_loss_weight: A weight assigned to cross-entropy. Tuning these weights pushes the network towards optimizing for either objective.
def train_knowledge_distillation(teacher, student, train_loader, epochs, learning_rate, T, soft_target_loss_weight, ce_loss_weight, device):
ce_loss = nn.CrossEntropyLoss()
optimizer = optim.Adam(student.parameters(), lr=learning_rate)
teacher.eval() # Teacher set to evaluation mode
student.train() # Student to train mode
for epoch in range(epochs):
running_loss = 0.0
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
# Forward pass with the teacher model - do not save gradients here as we do not change the teacher's weights
with torch.no_grad():
teacher_logits = teacher(inputs)
# Forward pass with the student model
student_logits = student(inputs)
#Soften the student logits by applying softmax first and log() second
soft_targets = nn.functional.softmax(teacher_logits / T, dim=-1)
soft_prob = nn.functional.log_softmax(student_logits / T, dim=-1)
# Calculate the soft targets loss. Scaled by T**2 as suggested by the authors of the paper "Distilling the knowledge in a neural network"
soft_targets_loss = torch.sum(soft_targets * (soft_targets.log() - soft_prob)) / soft_prob.size()[0] * (T**2)
# Calculate the true label loss
label_loss = ce_loss(student_logits, labels)
# Weighted sum of the two losses
loss = soft_target_loss_weight * soft_targets_loss + ce_loss_weight * label_loss
loss.backward()
optimizer.step()
running_loss += loss.item()
print(f"Epoch {epoch+1}/{epochs}, Loss: {running_loss / len(train_loader)}")
# Apply ``train_knowledge_distillation`` with a temperature of 2. Arbitrarily set the weights to 0.75 for CE and 0.25 for distillation loss.
train_knowledge_distillation(teacher=nn_deep, student=new_nn_light, train_loader=train_loader, epochs=10, learning_rate=0.001, T=2, soft_target_loss_weight=0.25, ce_loss_weight=0.75, device=device)
test_accuracy_light_ce_and_kd = test(new_nn_light, test_loader, device)
# Compare the student test accuracy with and without the teacher, after distillation
print(f"Teacher accuracy: {test_accuracy_deep:.2f}%")
print(f"Student accuracy without teacher: {test_accuracy_light_ce:.2f}%")
print(f"Student accuracy with CE + KD: {test_accuracy_light_ce_and_kd:.2f}%")
Epoch 1/10, Loss: 2.3962801237545355
Epoch 2/10, Loss: 1.87831118161721
Epoch 3/10, Loss: 1.6540942881113427
Epoch 4/10, Loss: 1.4959803764777415
Epoch 5/10, Loss: 1.367971143454237
Epoch 6/10, Loss: 1.2519247448048019
Epoch 7/10, Loss: 1.1570622474336258
Epoch 8/10, Loss: 1.0719402747995712
Epoch 9/10, Loss: 0.9970421949615869
Epoch 10/10, Loss: 0.9293939061177051
Test Accuracy: 70.75%
Teacher accuracy: 75.45%
Student accuracy without teacher: 70.15%
Student accuracy with CE + KD: 70.75%
Cosine loss minimization run¶
Feel free to play around with the temperature parameter that controls the softness of the softmax function and the loss coefficients.
In neural networks, it is easy to include additional loss functions to the main objectives to achieve goals like better generalization. Let's try including an objective for the student, but now let's focus on their hidden states rather than their output layers. Our goal is to convey information from the teacher's representation to the student by including a naive loss function, whose minimization implies that the flattened vectors that are subsequently passed to the classifiers have become more similar as the loss decreases. Of course, the teacher does not update its weights, so the minimization depends only on the student's weights. The rationale behind this method is that we are operating under the assumption that the teacher model has a better internal representation that is unlikely to be achieved by the student without external intervention, therefore we artificially push the student to mimic the internal representation of the teacher. Whether or not this will end up helping the student is not straightforward, though, because pushing the lightweight network to reach this point could be a good thing, assuming that we have found an internal representation that leads to better test accuracy, but it could also be harmful because the networks have different architectures and the student does not have the same learning capacity as the teacher. In other words, there is no reason for these two vectors, the student's and the teacher's, to match per component. The student could reach an internal representation that is a permutation of the teacher's, and it would be just as efficient. Nonetheless, we can still run a quick experiment to figure out the impact of this method. We will be using the CosineEmbeddingLoss, which is given by the following formula:
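$$\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max\big(0, \cos(x_1, x_2) - \text{margin}\big), & \text{if } y = -1 \end{cases}$$
(This is the standard definition of nn.CosineEmbeddingLoss. In our training loop the target y will be a vector of ones, so minimizing this loss amounts to maximizing the cosine similarity between the student's and the teacher's flattened representations.)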
Naturally, we first need to fix one thing. When we applied distillation to the output layer, we mentioned that both networks have the same number of neurons, equal to the number of classes. However, this is not the case for the layer following our convolutional layers. Here, the teacher has more neurons than the student after the flattening of the final convolutional layer. Our loss function accepts two vectors of equal dimensionality as inputs, therefore we need to somehow match them. We will solve this by including an average pooling layer after the teacher's convolutional layers to reduce its dimensionality to match that of the student.
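As a quick sanity check of this pooling trick (a standalone sketch with a random tensor, not part of the tutorial's pipeline), halving the flattened teacher representation with avg_pool1d matches the student's dimensionality:
# Hypothetical flattened teacher output of shape (batch_size, 2048)
flat_teacher = torch.randn(128, 2048)
# Averaging every 2 adjacent values halves the last dimension
pooled = torch.nn.functional.avg_pool1d(flat_teacher, 2)
print(pooled.shape)  # torch.Size([128, 1024])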
To proceed, we will modify our model classes, or create new ones. Now, the forward function returns not only the logits of the network but also the flattened hidden representation after the convolutional layers. We include the aforementioned pooling for the modified teacher.
class ModifiedDeepNNCosine(nn.Module):
def __init__(self, num_classes=10):
super(ModifiedDeepNNCosine, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 128, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(128, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(64, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(512, num_classes)
)
def forward(self, x):
x = self.features(x)
flattened_conv_output = torch.flatten(x, 1)
x = self.classifier(flattened_conv_output)
flattened_conv_output_after_pooling = torch.nn.functional.avg_pool1d(flattened_conv_output, 2)
return x, flattened_conv_output_after_pooling
# Create a similar student class where we return a tuple. We do not apply pooling after flattening.
class ModifiedLightNNCosine(nn.Module):
def __init__(self, num_classes=10):
super(ModifiedLightNNCosine, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(16, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, num_classes)
)
def forward(self, x):
x = self.features(x)
flattened_conv_output = torch.flatten(x, 1)
x = self.classifier(flattened_conv_output)
return x, flattened_conv_output
# We do not have to train the modified deep network from scratch of course, we just load its weights from the trained instance
modified_nn_deep = ModifiedDeepNNCosine(num_classes=10).to(device)
modified_nn_deep.load_state_dict(nn_deep.state_dict())
# Once again ensure the norm of the first layer is the same for both networks
print("Norm of 1st layer for deep_nn:", torch.norm(nn_deep.features[0].weight).item())
print("Norm of 1st layer for modified_deep_nn:", torch.norm(modified_nn_deep.features[0].weight).item())
# Initialize a modified lightweight network with the same seed as our other lightweight instances. This will be trained from scratch to examine the effectiveness of cosine loss minimization.
torch.manual_seed(42)
modified_nn_light = ModifiedLightNNCosine(num_classes=10).to(device)
print("Norm of 1st layer:", torch.norm(modified_nn_light.features[0].weight).item())
Norm of 1st layer for deep_nn: 7.5062713623046875
Norm of 1st layer for modified_deep_nn: 7.5062713623046875
Norm of 1st layer: 2.327361822128296
Naturally, we need to change the train loop because now the model returns a tuple (logits, hidden_representation). Using a sample input tensor we can print their shapes.
# Create a sample input tensor
sample_input = torch.randn(128, 3, 32, 32).to(device) # Batch size: 128, Filters: 3, Image size: 32x32
# Pass the input through the student
logits, hidden_representation = modified_nn_light(sample_input)
# Print the shapes of the tensors
print("Student logits shape:", logits.shape) # batch_size x total_classes
print("Student hidden representation shape:", hidden_representation.shape) # batch_size x hidden_representation_size
# Pass the input through the teacher
logits, hidden_representation = modified_nn_deep(sample_input)
# Print the shapes of the tensors
print("Teacher logits shape:", logits.shape) # batch_size x total_classes
print("Teacher hidden representation shape:", hidden_representation.shape) # batch_size x hidden_representation_size
Student logits shape: torch.Size([128, 10])
Student hidden representation shape: torch.Size([128, 1024])
Teacher logits shape: torch.Size([128, 10])
Teacher hidden representation shape: torch.Size([128, 1024])
In our case, hidden_representation_size is 1024. This is the flattened feature map of the final convolutional layer of the student and, as you can see, it is the input for its classifier. It is 1024 for the teacher too, because we made it so with avg_pool1d from 2048.
The loss applied here only affects the weights of the student prior to the loss calculation. In other words, it does not affect the classifier of the student. The modified training loop is the following:
def train_cosine_loss(teacher, student, train_loader, epochs, learning_rate, hidden_rep_loss_weight, ce_loss_weight, device):
ce_loss = nn.CrossEntropyLoss()
cosine_loss = nn.CosineEmbeddingLoss()
optimizer = optim.Adam(student.parameters(), lr=learning_rate)
teacher.to(device)
student.to(device)
teacher.eval() # Teacher set to evaluation mode
student.train() # Student to train mode
for epoch in range(epochs):
running_loss = 0.0
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
# Forward pass with the teacher model and keep only the hidden representation
with torch.no_grad():
_, teacher_hidden_representation = teacher(inputs)
# Forward pass with the student model
student_logits, student_hidden_representation = student(inputs)
# Calculate the cosine loss. Target is a vector of ones. From the loss formula above we can see that is the case where loss minimization leads to cosine similarity increase.
hidden_rep_loss = cosine_loss(student_hidden_representation, teacher_hidden_representation, target=torch.ones(inputs.size(0)).to(device))
# Calculate the true label loss
label_loss = ce_loss(student_logits, labels)
# Weighted sum of the two losses
loss = hidden_rep_loss_weight * hidden_rep_loss + ce_loss_weight * label_loss
loss.backward()
optimizer.step()
running_loss += loss.item()
print(f"Epoch {epoch+1}/{epochs}, Loss: {running_loss / len(train_loader)}")
For the same reason, we need to modify our test function. Here we ignore the hidden representation returned by the model.
def test_multiple_outputs(model, test_loader, device):
model.to(device)
model.eval()
correct = 0
total = 0
with torch.no_grad():
for inputs, labels in test_loader:
inputs, labels = inputs.to(device), labels.to(device)
outputs, _ = model(inputs) # Disregard the second tensor of the tuple
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
accuracy = 100 * correct / total
print(f"Test Accuracy: {accuracy:.2f}%")
return accuracy
In this case, we could easily include both knowledge distillation and cosine loss minimization in the same function; it is common in teacher-student paradigms to combine methods to achieve better performance. A minimal sketch of what such a combined objective might look like is shown right below. For now, though, we just run a simple train-test session with the cosine loss.
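The sketch below is our own illustration and is not used in the runs that follow; it assumes models that return (logits, hidden_representation) tuples, as the modified classes above do:
# Hypothetical combined loss: distillation on the logits plus cosine similarity
# on the hidden representations, on top of the regular cross-entropy.
def combined_distillation_loss(student_logits, teacher_logits,
                               student_hidden, teacher_hidden, labels,
                               T=2, kd_weight=0.25, cosine_weight=0.25, ce_weight=0.5):
    # Distillation term on the softened logits, scaled by T**2
    soft_targets = nn.functional.softmax(teacher_logits / T, dim=-1)
    soft_prob = nn.functional.log_softmax(student_logits / T, dim=-1)
    kd_loss = torch.sum(soft_targets * (soft_targets.log() - soft_prob)) / soft_prob.size()[0] * (T ** 2)
    # Cosine term on the hidden representations (target of ones -> maximize similarity)
    cosine_loss = nn.CosineEmbeddingLoss()(student_hidden, teacher_hidden,
                                           torch.ones(student_hidden.size(0), device=student_hidden.device))
    # Regular cross-entropy on the true labels
    label_loss = nn.CrossEntropyLoss()(student_logits, labels)
    return kd_weight * kd_loss + cosine_weight * cosine_loss + ce_weight * label_loss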
# Train and test the lightweight network with cross entropy and cosine losses
train_cosine_loss(teacher=modified_nn_deep, student=modified_nn_light, train_loader=train_loader, epochs=10, learning_rate=0.001, hidden_rep_loss_weight=0.25, ce_loss_weight=0.75, device=device)
test_accuracy_light_ce_and_cosine_loss = test_multiple_outputs(modified_nn_light, test_loader, device)
Epoch 1/10, Loss: 1.3019573956804202
Epoch 2/10, Loss: 1.0648430796230541
Epoch 3/10, Loss: 0.9631246839033063
Epoch 4/10, Loss: 0.8873560082577073
Epoch 5/10, Loss: 0.8324901197877381
Epoch 6/10, Loss: 0.7894727920022462
Epoch 7/10, Loss: 0.7494128462298751
Epoch 8/10, Loss: 0.7148379474649649
Epoch 9/10, Loss: 0.6762727979199051
Epoch 10/10, Loss: 0.6496843107216194
Test Accuracy: 71.17%
Intermediate regressor run¶
Our naive minimization does not guarantee better results for several reasons, one being the dimensionality of the vectors. Cosine similarity generally works better than Euclidean distance for vectors of higher dimensionality, but we were dealing with vectors with 1024 components each, so it is much harder to extract meaningful similarities. Furthermore, as we mentioned, pushing towards a match of the hidden representations of the teacher and the student is not supported by theory; there are no good reasons why we should be aiming for a 1:1 match of these vectors. We will provide a final example of training intervention by including an extra network called a regressor. The objective is to first extract the feature map of the teacher after a convolutional layer, then extract a feature map of the student after a convolutional layer, and finally try to match these maps. However, this time, we will introduce a regressor between the networks to facilitate the matching process. The regressor will be trainable and ideally will do a better job than our naive cosine loss minimization scheme. Its main job is to match the dimensionality of these feature maps so that we can properly define a loss function between the teacher and the student. Defining such a loss function provides a teaching "path", which is basically a flow to back-propagate gradients that will change the student's weights. Focusing on the output of the convolutional layers right before each classifier of our original networks, we have the following shapes:
# Pass the sample input only from the convolutional feature extractor
convolutional_fe_output_student = nn_light.features(sample_input)
convolutional_fe_output_teacher = nn_deep.features(sample_input)
# Print their shapes
print("Student's feature extractor output shape: ", convolutional_fe_output_student.shape)
print("Teacher's feature extractor output shape: ", convolutional_fe_output_teacher.shape)
Student's feature extractor output shape: torch.Size([128, 16, 8, 8])
Teacher's feature extractor output shape: torch.Size([128, 32, 8, 8])
We have 32 filters for the teacher and 16 filters for the student. We will include a trainable layer that converts the feature map of the student to the shape of the feature map of the teacher. In practice, we modify the lightweight class to return the hidden state after an intermediate regressor that matches the sizes of the convolutional feature maps, and the teacher class to return the output of the final convolutional layer without pooling or flattening.
class ModifiedDeepNNRegressor(nn.Module):
def __init__(self, num_classes=10):
super(ModifiedDeepNNRegressor, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 128, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(128, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(64, 32, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(2048, 512),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(512, num_classes)
)
def forward(self, x):
x = self.features(x)
conv_feature_map = x
x = torch.flatten(x, 1)
x = self.classifier(x)
return x, conv_feature_map
class ModifiedLightNNRegressor(nn.Module):
def __init__(self, num_classes=10):
super(ModifiedLightNNRegressor, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(16, 16, kernel_size=3, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
# Include an extra regressor (in our case linear)
self.regressor = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=3, padding=1)
)
self.classifier = nn.Sequential(
nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, num_classes)
)
def forward(self, x):
x = self.features(x)
regressor_output = self.regressor(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x, regressor_output
After that, we have to update our train loop again. This time, we extract the regressor output of the student and the feature map of the teacher, we calculate the MSE on these tensors (they have the exact same shape, so it is properly defined), and we back-propagate gradients based on that loss, in addition to the regular cross entropy loss of the classification task.
def train_mse_loss(teacher, student, train_loader, epochs, learning_rate, feature_map_weight, ce_loss_weight, device):
ce_loss = nn.CrossEntropyLoss()
mse_loss = nn.MSELoss()
optimizer = optim.Adam(student.parameters(), lr=learning_rate)
teacher.to(device)
student.to(device)
teacher.eval() # Teacher set to evaluation mode
student.train() # Student to train mode
for epoch in range(epochs):
running_loss = 0.0
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
# Again ignore teacher logits
with torch.no_grad():
_, teacher_feature_map = teacher(inputs)
# Forward pass with the student model
student_logits, regressor_feature_map = student(inputs)
# Calculate the loss
hidden_rep_loss = mse_loss(regressor_feature_map, teacher_feature_map)
# Calculate the true label loss
label_loss = ce_loss(student_logits, labels)
# Weighted sum of the two losses
loss = feature_map_weight * hidden_rep_loss + ce_loss_weight * label_loss
loss.backward()
optimizer.step()
running_loss += loss.item()
print(f"Epoch {epoch+1}/{epochs}, Loss: {running_loss / len(train_loader)}")
# Notice how our test function remains the same here with the one we used in our previous case. We only care about the actual outputs because we measure accuracy.
# Initialize a ModifiedLightNNRegressor
torch.manual_seed(42)
modified_nn_light_reg = ModifiedLightNNRegressor(num_classes=10).to(device)
# We do not have to train the modified deep network from scratch of course, we just load its weights from the trained instance
modified_nn_deep_reg = ModifiedDeepNNRegressor(num_classes=10).to(device)
modified_nn_deep_reg.load_state_dict(nn_deep.state_dict())
# Train and test once again
train_mse_loss(teacher=modified_nn_deep_reg, student=modified_nn_light_reg, train_loader=train_loader, epochs=10, learning_rate=0.001, feature_map_weight=0.25, ce_loss_weight=0.75, device=device)
test_accuracy_light_ce_and_mse_loss = test_multiple_outputs(modified_nn_light_reg, test_loader, device)
Epoch 1/10, Loss: 1.7312568727966464
Epoch 2/10, Loss: 1.3489013407236474
Epoch 3/10, Loss: 1.2052425062260055
Epoch 4/10, Loss: 1.108028480921255
Epoch 5/10, Loss: 1.028127753673612
Epoch 6/10, Loss: 0.9665506834264301
Epoch 7/10, Loss: 0.9110444729285472
Epoch 8/10, Loss: 0.861484837196672
Epoch 9/10, Loss: 0.8197113380712622
Epoch 10/10, Loss: 0.7801015764246206
Test Accuracy: 71.08%
It is expected that the final method will work better than CosineLoss, because now we have allowed a trainable layer between the teacher and the student, which gives the student some wiggle room when it comes to learning, rather than pushing the student to copy the teacher's representation. Including the extra network is the idea behind hint-based distillation.
print(f"Teacher accuracy: {test_accuracy_deep:.2f}%")
print(f"Student accuracy without teacher: {test_accuracy_light_ce:.2f}%")
print(f"Student accuracy with CE + KD: {test_accuracy_light_ce_and_kd:.2f}%")
print(f"Student accuracy with CE + CosineLoss: {test_accuracy_light_ce_and_cosine_loss:.2f}%")
print(f"Student accuracy with CE + RegressorMSE: {test_accuracy_light_ce_and_mse_loss:.2f}%")
Teacher accuracy: 75.45%
Student accuracy without teacher: 70.15%
Student accuracy with CE + KD: 70.75%
Student accuracy with CE + CosineLoss: 71.17%
Student accuracy with CE + RegressorMSE: 71.08%
Conclusion¶
None of the methods above increases the number of parameters of the network or the inference time, so the performance increase comes at the little cost of calculating gradients during training. In ML applications, we mostly care about inference time because training happens before the model deployment. If our lightweight model is still too heavy for deployment, we can apply different ideas, such as post-training quantization. Additional losses can be applied in many tasks, not just classification, and you can experiment with quantities like coefficients, temperature, or the number of neurons. Feel free to tune any numbers in the tutorial above, but keep in mind that if you change the number of neurons/filters, a shape mismatch is likely to occur.
For more information, see:
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: Neural Information Processing Systems Deep Learning Workshop (2015)
Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., Bengio, Y.: FitNets: Hints for thin deep nets. In: Proceedings of the International Conference on Learning Representations (2015)
Total running time of the script: (7 minutes 45.085 seconds)