
Using pretrained models

This tutorial explains how to use pretrained models in TorchRL.

At the end of this tutorial, you will be capable of using pretrained models for efficient image representation, and of fine-tuning them.

TorchRL provides pretrained models that can be used either as transforms or as components of the policy. Since their semantics are the same, they can be used interchangeably in one context or the other. In this tutorial we will be using R3M (https://arxiv.org/abs/2203.12601), but other models (e.g. VIP) will work equally well.
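
For instance, assuming TorchRL's VIPTransform accepts the same constructor arguments as R3MTransform (an assumption worth checking against the API), swapping backbones would be a one-line change:

from torchrl.envs import VIPTransform

# Hypothetical swap: use VIP instead of R3M (same arguments assumed; the output
# entry would then be "vip_vec" rather than "r3m_vec").
vip = VIPTransform("resnet50", in_keys=["pixels"], download=True)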

import multiprocessing

import torch.cuda
from tensordict.nn import TensorDictSequential
from torch import nn
from torchrl.envs import R3MTransform, TransformedEnv
from torchrl.envs.libs.gym import GymEnv
from torchrl.modules import Actor

is_fork = multiprocessing.get_start_method() == "fork"
device = (
    torch.device(0)
    if torch.cuda.is_available() and not is_fork
    else torch.device("cpu")
)

Let us first create an environment. For the sake of simplicity, we will be using a common gym environment. In practice, this will work in more challenging, embodied AI contexts (e.g. have a look at our Habitat wrappers).

base_env = GymEnv("Ant-v4", from_pixels=True, device=device)
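
As a quick sanity check (not part of the original tutorial), we can reset the raw environment and confirm that the observation is an image stored under the "pixels" entry:

td = base_env.reset()
# The raw environment returns image observations; for Ant-v4 rendered from pixels
# this is a (480, 480, 3) uint8 frame.
print(td["pixels"].shape)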

Let us fetch our pretrained model. We ask for the pretrained version of the model through the download=True flag; by default this is turned off. Next, we append the transform to the environment. In practice, each batch of collected data will go through the transform and be mapped onto a "r3m_vec" entry in the output tensordict. Our policy, consisting of a single-layer MLP, will then read this vector and compute the corresponding action.

r3m = R3MTransform(
    "resnet50",
    in_keys=["pixels"],
    download=True,
)
env_transformed = TransformedEnv(base_env, r3m)
net = nn.Sequential(
    nn.LazyLinear(128, device=device),
    nn.Tanh(),
    nn.Linear(128, base_env.action_spec.shape[-1], device=device),
)
policy = Actor(net, in_keys=["r3m_vec"])
Downloading: "https://pytorch.s3.amazonaws.com/models/rl/r3m/r3m_50.pt" to /root/.cache/torch/hub/checkpoints/r3m_50.pt
100%|██████████| 374M/374M [00:06<00:00, 57.8MB/s]

Let us check the number of parameters of our policy:

print("number of params:", len(list(policy.parameters())))
number of params: 4
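
These four tensors are the weights and biases of the two linear layers of the MLP head (the first layer is lazily initialized, so its shape is only materialized after the first forward pass). As an optional, illustrative check we could list their names:

# List the parameter tensors of the policy head: weight and bias of each Linear layer.
for name, _param in policy.named_parameters():
    print(name)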

We collect a rollout of 32 steps and print its output:

rollout = env_transformed.rollout(32, policy)
print("rollout with transform:", rollout)
rollout with transform: TensorDict(
    fields={
        action: Tensor(shape=torch.Size([32, 8]), device=cpu, dtype=torch.float32, is_shared=False),
        done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                r3m_vec: Tensor(shape=torch.Size([32, 2048]), device=cpu, dtype=torch.float32, is_shared=False),
                reward: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([32]),
            device=cpu,
            is_shared=False),
        r3m_vec: Tensor(shape=torch.Size([32, 2048]), device=cpu, dtype=torch.float32, is_shared=False),
        terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([32]),
    device=cpu,
    is_shared=False)

For fine-tuning, we integrate the transform into the policy after making its parameters trainable. In practice, it may be wiser to restrict this to a subset of the parameters (say, the last layer of the MLP).

r3m.train()
policy = TensorDictSequential(r3m, policy)
print("number of params after r3m is integrated:", len(list(policy.parameters())))
number of params after r3m is integrated: 163
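
If we prefer to fine-tune only part of the model, one possible approach, sketched below (not part of the original tutorial), is to freeze the R3M backbone and keep only the MLP head trainable:

# Illustrative sketch: freeze the R3M backbone so that only the MLP head is trained.
for param in r3m.parameters():
    param.requires_grad_(False)

trainable = [p for p in policy.parameters() if p.requires_grad]
print("trainable parameter tensors:", len(trainable))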

Again, we collect a rollout with R3M. The structure of the output has changed slightly, as the environment now returns pixels (and not an embedding). The embedding "r3m_vec" is an intermediate result of our policy.

rollout = base_env.rollout(32, policy)
print("rollout, fine tuning:", rollout)
rollout, fine tuning: TensorDict(
    fields={
        action: Tensor(shape=torch.Size([32, 8]), device=cpu, dtype=torch.float32, is_shared=False),
        done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([32, 480, 480, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([32]),
            device=cpu,
            is_shared=False),
        r3m_vec: Tensor(shape=torch.Size([32, 2048]), device=cpu, dtype=torch.float32, is_shared=False),
        terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([32]),
    device=cpu,
    is_shared=False)

The ease with which we have swapped the transform from the environment to the policy is due to the fact that both behave like TensorDictModule: they have a set of "in_keys" and "out_keys" that make it easy to read and write outputs in different contexts.
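
To illustrate that interface, here is a minimal sketch of a TensorDictModule that reads one entry and writes another (the "features" output key is hypothetical):

from tensordict.nn import TensorDictModule

# A TensorDictModule reads its in_keys from the tensordict and writes its out_keys back.
proj = TensorDictModule(nn.Linear(2048, 64), in_keys=["r3m_vec"], out_keys=["features"])
print(proj.in_keys, proj.out_keys)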

To conclude this tutorial, let us have a look at how we could use R3M to read images stored in a replay buffer (e.g. in an offline RL context). First, let's build our dataset:

from torchrl.data import LazyMemmapStorage, ReplayBuffer

storage = LazyMemmapStorage(1000)
rb = ReplayBuffer(storage=storage, transform=r3m)

We can now collect the data (random rollouts for our purpose) and fill the replay buffer with it:

total = 0
while total < 1000:
    tensordict = base_env.rollout(1000)
    rb.extend(tensordict)
    total += tensordict.numel()
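
As a small optional check, we can confirm how many frames were written to the buffer:

# The buffer should now contain at least 1000 frames.
print("frames in buffer:", len(rb))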

Let's check what our replay buffer storage looks like. It should not contain a "r3m_vec" entry, since we haven't used it yet:

print("stored data:", storage._storage)
stored data: TensorDict(
    fields={
        action: MemoryMappedTensor(shape=torch.Size([1000, 8]), device=cpu, dtype=torch.float32, is_shared=False),
        done: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: MemoryMappedTensor(shape=torch.Size([1000, 480, 480, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([1000]),
            device=cpu,
            is_shared=False),
        pixels: MemoryMappedTensor(shape=torch.Size([1000, 480, 480, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: MemoryMappedTensor(shape=torch.Size([1000, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([1000]),
    device=cpu,
    is_shared=False)

At sampling time, the data will go through the R3M transform, providing us with the processed data we wanted. In this way, we can train an algorithm offline on a dataset made of images:

batch = rb.sample(32)
print("data after sampling:", batch)
data after sampling: TensorDict(
    fields={
        action: Tensor(shape=torch.Size([32, 8]), device=cpu, dtype=torch.float32, is_shared=False),
        done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([32, 480, 480, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                reward: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.float32, is_shared=False),
                terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([32]),
            device=cpu,
            is_shared=False),
        r3m_vec: Tensor(shape=torch.Size([32, 2048]), device=cpu, dtype=torch.float32, is_shared=False),
        terminated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([32]),
    device=cpu,
    is_shared=False)
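
To sketch how such an offline pipeline could look, here is a minimal behavior-cloning example (not part of the original tutorial): it reuses the MLP head defined above and regresses the stored actions from the R3M features. The loss, optimizer and number of steps are illustrative assumptions.

from torch.optim import Adam

# Illustrative behavior-cloning sketch: fit the MLP head to the stored actions.
actor_head = Actor(net, in_keys=["r3m_vec"])  # reuses the MLP defined earlier
optimizer = Adam(actor_head.parameters(), lr=1e-3)

for _ in range(10):  # a few illustrative gradient steps
    batch = rb.sample(32).to(device)
    target_action = batch["action"].clone()  # keep the stored actions as targets
    batch = actor_head(batch)                # writes a fresh "action" from "r3m_vec"
    loss = (batch["action"] - target_action).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()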

Total running time of the script: (0 minutes 55.393 seconds)

Estimated memory usage: 2354 MB
