Hyperparameter tuning with Ray Tune
Created On: Aug 31, 2020 | Last Updated: Oct 31, 2024 | Last Verified: Nov 05, 2024
Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model performance.

Fortunately, there are tools that help with finding the best combination of parameters. Ray Tune is an industry standard tool for distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search algorithms, integrates with various analysis libraries, and natively supports distributed training through Ray's distributed machine learning engine.

In this tutorial, we will show you how to integrate Ray Tune into your PyTorch training workflow. We will extend the tutorial from the PyTorch documentation on training a CIFAR10 image classifier.

As you will see, we only need to add some slight modifications. In particular, we need to

1. wrap data loading and training in functions,
2. make some network parameters configurable,
3. add checkpointing (optional),
4. and define the search space for the model tuning.
To run this tutorial, please make sure the following packages are installed:

ray[tune]: distributed hyperparameter tuning library
torchvision: for the data transformers
Setup / Imports

Let's start with the imports:
from functools import partial
import os
import tempfile
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import random_split
import torchvision
import torchvision.transforms as transforms
from ray import tune
from ray import train
from ray.train import Checkpoint, get_checkpoint
from ray.tune.schedulers import ASHAScheduler
import ray.cloudpickle as pickle
Most of the imports are needed for building the PyTorch model. Only the last few imports are for Ray Tune.

Data loaders

We wrap the data loaders in their own function and pass a global data directory. This way we can share a data directory between different trials.
def load_data(data_dir="./data"):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform
    )

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform
    )

    return trainset, testset
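As a quick illustration (a sketch, not part of the tutorial code itself), calling the function once with an absolute path downloads CIFAR10 into a directory that every trial can then reuse:

# A minimal sketch: download CIFAR10 once into a shared directory so
# later trials reuse it instead of downloading again.
data_dir = os.path.abspath("./data")
trainset, testset = load_data(data_dir)
print(len(trainset), len(testset))  # CIFAR10 has 50000 train / 10000 test images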
Configurable neural network

We can only tune those parameters that are configurable. In this example, we can specify the layer sizes of the fully connected layers:
class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
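As a quick sanity check (a sketch, not part of the original tutorial), we can pass a random CIFAR10-sized batch through the network to confirm that the 16 * 5 * 5 flattened feature size and the configurable layer sizes line up:

# After two conv+pool stages a 3x32x32 image becomes a 16x5x5 feature
# map, so the flattened input to fc1 has 400 features.
net = Net(l1=64, l2=32)
dummy = torch.randn(4, 3, 32, 32)  # a batch of 4 CIFAR10-sized images
print(net(dummy).shape)  # torch.Size([4, 10]) -- one logit per class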
The train function

Now it gets interesting, because we introduce some changes to the example from the PyTorch documentation.

We wrap the training script in a function train_cifar(config, data_dir=None). The config parameter receives the hyperparameters we would like to train with. The data_dir specifies the directory where we load and store the data, so that multiple runs can share the same data source. We also load the model and optimizer state at the start of the run, if a checkpoint is provided. Further down in this tutorial you will find information on how to save the checkpoint and what it is used for.
net = Net(config["l1"], config["l2"])

checkpoint = get_checkpoint()
if checkpoint:
    with checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            checkpoint_state = pickle.load(fp)
        start_epoch = checkpoint_state["epoch"]
        net.load_state_dict(checkpoint_state["net_state_dict"])
        optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
else:
    start_epoch = 0
The learning rate of the optimizer is made configurable, too:
optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)
We also split the training data into a training and validation subset. We thus train on 80% of the data and calculate the validation loss on the remaining 20%. The batch sizes with which we iterate through the training and validation sets are configurable as well.
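In the full training function below, this split looks as follows (excerpted here for clarity; trainset comes from load_data and config carries the batch_size key):

# 80/20 train/validation split with a configurable batch size
test_abs = int(len(trainset) * 0.8)
train_subset, val_subset = random_split(
    trainset, [test_abs, len(trainset) - test_abs]
)
trainloader = torch.utils.data.DataLoader(
    train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)
valloader = torch.utils.data.DataLoader(
    val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)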
Adding (multi) GPU support with DataParallel

Image classification benefits largely from GPUs. Luckily, we can continue to use PyTorch's abstractions in Ray Tune. Thus, we can wrap our model in nn.DataParallel to support data parallel training on multiple GPUs:
device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)
net.to(device)
By using a device variable, we make sure that training also works when no GPUs are available. PyTorch requires us to send our data to the GPU memory explicitly, like this:
for i, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)
The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray also supports fractional GPUs, so we can share GPUs among trials as long as the model still fits in GPU memory. We will come back to that later.

Communicating with Ray Tune

The most interesting part is the communication with Ray Tune:
checkpoint_data = {
    "epoch": epoch,
    "net_state_dict": net.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
with tempfile.TemporaryDirectory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    with open(data_path, "wb") as fp:
        pickle.dump(checkpoint_data, fp)

    checkpoint = Checkpoint.from_directory(checkpoint_dir)
    train.report(
        {"loss": val_loss / val_steps, "accuracy": correct / total},
        checkpoint=checkpoint,
    )
Here we first save a checkpoint and then report some metrics back to Ray Tune. Specifically, we send the validation loss and accuracy back to Ray Tune. Ray Tune can then use these metrics to decide which hyperparameter configuration leads to the best results. These metrics can also be used to stop badly performing trials early, in order to avoid wasting resources on those trials.

The checkpoint saving is optional. However, it is necessary if we want to use advanced schedulers like Population Based Training. Also, by saving the checkpoint we can later load the trained models and validate them on a test set. Lastly, saving checkpoints is useful for fault tolerance, as it allows us to interrupt training and continue it afterwards.
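Since the checkpoint is just a pickled dictionary, inspecting or reusing it outside of Ray Tune is straightforward; a sketch, assuming a hypothetical checkpoint path:

# A sketch (the path is hypothetical): load the pickled checkpoint
# dictionary and restore the model weights from it.
with open("/path/to/checkpoint_dir/data.pkl", "rb") as fp:
    state = pickle.load(fp)
print(state["epoch"])  # epoch at which the checkpoint was taken
net = Net()  # default layer sizes; use the trial's l1/l2 values in practice
net.load_state_dict(state["net_state_dict"])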
Full training function

The full code example looks like this:
def train_cifar(config, data_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(net)
    net.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    checkpoint = get_checkpoint()
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "rb") as fp:
                checkpoint_state = pickle.load(fp)
            start_epoch = checkpoint_state["epoch"]
            net.load_state_dict(checkpoint_state["net_state_dict"])
            optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
    else:
        start_epoch = 0

    trainset, testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs]
    )

    trainloader = torch.utils.data.DataLoader(
        train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )
    valloader = torch.utils.data.DataLoader(
        val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )

    for epoch in range(start_epoch, 10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(
                    "[%d, %5d] loss: %.3f"
                    % (epoch + 1, i + 1, running_loss / epoch_steps)
                )
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = net(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        checkpoint_data = {
            "epoch": epoch,
            "net_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        }
        with tempfile.TemporaryDirectory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "wb") as fp:
                pickle.dump(checkpoint_data, fp)

            checkpoint = Checkpoint.from_directory(checkpoint_dir)
            train.report(
                {"loss": val_loss / val_steps, "accuracy": correct / total},
                checkpoint=checkpoint,
            )

    print("Finished Training")
As you can see, most of the code is adapted directly from the original example.

Test set accuracy

Commonly, the performance of a machine learning model is tested on a hold-out test set with data that has not been used for training the model. We also wrap this in a function:
def test_accuracy(net, device="cpu"):
    trainset, testset = load_data()

    testloader = torch.utils.data.DataLoader(
        testset, batch_size=4, shuffle=False, num_workers=2
    )

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct / total
The function also expects a device parameter, so we can do the test set validation on a GPU.
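For example (an illustrative sketch; the layer sizes are placeholders), evaluating a trained network on the GPU if one is available:

# Illustrative usage: pick a device, move the trained network there,
# and evaluate it on the held-out test set.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
net = Net(l1=64, l2=32).to(device)  # hypothetical layer sizes
print(test_accuracy(net, device))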
Configuring the search space

Lastly, we need to define Ray Tune's search space. Here is an example:
config = {
    "l1": tune.choice([2 ** i for i in range(9)]),
    "l2": tune.choice([2 ** i for i in range(9)]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16]),
}
tune.choice() accepts a list of values that are uniformly sampled from. In this example, the l1 and l2 parameters are powers of 2 between 1 and 256, so 1, 2, 4, 8, 16, 32, 64, 128, or 256. The lr (learning rate) is sampled log-uniformly between 0.0001 and 0.1. Lastly, the batch size is a choice between 2, 4, 8, and 16.
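For each trial, Ray Tune draws one concrete value from every entry in this dictionary; a sampled configuration might look like the following (values are hypothetical):

# One hypothetical draw from the search space above: every key maps to
# a concrete value that a single trial trains with.
sampled_config = {"l1": 64, "l2": 32, "lr": 0.0037, "batch_size": 4}
net = Net(sampled_config["l1"], sampled_config["l2"])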
Now, at each trial, Ray Tune will randomly sample a combination of parameters from this search space. It will then train a number of models in parallel and find the best performing one among them. We also use the ASHAScheduler, which terminates badly performing trials early.
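A sketch of the scheduler configuration, mirroring the one constructed in the main function below (where max_num_epochs is a parameter):

# ASHA stops the worst-performing trials early. max_t caps the number
# of training iterations, grace_period protects young trials, and
# reduction_factor controls how aggressively trials are culled.
scheduler = ASHAScheduler(
    metric="loss",
    mode="min",
    max_t=max_num_epochs,
    grace_period=1,
    reduction_factor=2,
)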
We wrap the train_cifar function with functools.partial to set the constant data_dir parameter. We can also tell Ray Tune what resources should be available for each trial:
gpus_per_trial = 2
# ...
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 8, "gpu": gpus_per_trial},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
    checkpoint_at_end=True,
)
You can specify the number of CPUs, which are then available, e.g., to increase the num_workers of the PyTorch DataLoader instances. The selected number of GPUs is made visible to PyTorch in each trial. Trials do not have access to GPUs that haven't been requested for them, so you don't have to care about two trials using the same set of resources.
Here we can also specify fractional GPUs, so something like gpus_per_trial=0.5 is completely valid. The trials will then share GPUs among each other. You just have to make sure that the models still fit in the GPU memory.
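A sketch of such a call, reusing the tune.run arguments from the snippet above with a fractional GPU request:

# With gpu=0.5, two trials share one physical GPU; this only works if
# both models fit into GPU memory at the same time.
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 2, "gpu": 0.5},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
)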
After training the models, we will find the best performing one and load the trained network from the checkpoint file. We then obtain the test set accuracy and report everything by printing.

The full main function looks like this:
def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
    data_dir = os.path.abspath("./data")
    load_data(data_dir)
    config = {
        "l1": tune.choice([2**i for i in range(9)]),
        "l2": tune.choice([2**i for i in range(9)]),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16]),
    }
    scheduler = ASHAScheduler(
        metric="loss",
        mode="min",
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2,
    )
    result = tune.run(
        partial(train_cifar, data_dir=data_dir),
        resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
        config=config,
        num_samples=num_samples,
        scheduler=scheduler,
    )

    best_trial = result.get_best_trial("loss", "min", "last")
    print(f"Best trial config: {best_trial.config}")
    print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
    print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if gpus_per_trial > 1:
            best_trained_model = nn.DataParallel(best_trained_model)
    best_trained_model.to(device)

    best_checkpoint = result.get_best_checkpoint(trial=best_trial, metric="accuracy", mode="max")
    with best_checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            best_checkpoint_data = pickle.load(fp)

        best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])
        test_acc = test_accuracy(best_trained_model, device)
        print("Best trial test set accuracy: {}".format(test_acc))


if __name__ == "__main__":
    # You can change the number of GPUs per trial here:
    main(num_samples=10, max_num_epochs=10, gpus_per_trial=0)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to /var/lib/workspace/beginner_source/data/cifar-10-python.tar.gz
0% 0.00/170M [00:00<?, ?B/s]
0% 459k/170M [00:00<00:37, 4.53MB/s]
5% 7.96M/170M [00:00<00:03, 45.7MB/s]
11% 18.8M/170M [00:00<00:02, 74.4MB/s]
17% 29.5M/170M [00:00<00:01, 87.2MB/s]
24% 40.6M/170M [00:00<00:01, 95.7MB/s]
30% 51.7M/170M [00:00<00:01, 101MB/s]
37% 62.8M/170M [00:00<00:01, 104MB/s]
43% 73.9M/170M [00:00<00:00, 106MB/s]
50% 84.9M/170M [00:00<00:00, 107MB/s]
56% 96.0M/170M [00:01<00:00, 108MB/s]
63% 107M/170M [00:01<00:00, 109MB/s]
69% 118M/170M [00:01<00:00, 110MB/s]
76% 130M/170M [00:01<00:00, 111MB/s]
83% 141M/170M [00:01<00:00, 111MB/s]
89% 152M/170M [00:01<00:00, 111MB/s]
96% 163M/170M [00:01<00:00, 110MB/s]
100% 170M/170M [00:01<00:00, 102MB/s]
Extracting /var/lib/workspace/beginner_source/data/cifar-10-python.tar.gz to /var/lib/workspace/beginner_source/data
Files already downloaded and verified
2025-01-02 21:58:16,732 WARNING services.py:1889 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 2147479552 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=10.24gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
2025-01-02 21:58:16,992 INFO worker.py:1642 -- Started a local Ray instance.
2025-01-02 21:58:18,354 INFO tune.py:228 -- Initializing Ray automatically. For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run(...)`.
2025-01-02 21:58:18,356 INFO tune.py:654 -- [output] This will use the new output engine with verbosity 2. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
+--------------------------------------------------------------------+
| Configuration for experiment train_cifar_2025-01-02_21-58-18 |
+--------------------------------------------------------------------+
| Search algorithm BasicVariantGenerator |
| Scheduler AsyncHyperBandScheduler |
| Number of trials 10 |
+--------------------------------------------------------------------+
View detailed results here: /var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18
To visualize your results with TensorBoard, run: `tensorboard --logdir /var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18`
Trial status: 10 PENDING
Current time: 2025-01-02 21:58:18. Total running time: 0s
Logical resource usage: 0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_acb57_00000 PENDING 16 1 0.00213327 2 |
| train_cifar_acb57_00001 PENDING 1 2 0.013416 4 |
| train_cifar_acb57_00002 PENDING 256 64 0.0113784 2 |
| train_cifar_acb57_00003 PENDING 64 256 0.0274071 8 |
| train_cifar_acb57_00004 PENDING 16 2 0.056666 4 |
| train_cifar_acb57_00005 PENDING 8 64 0.000353097 4 |
| train_cifar_acb57_00006 PENDING 16 4 0.000147684 8 |
| train_cifar_acb57_00007 PENDING 256 256 0.00477469 8 |
| train_cifar_acb57_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_acb57_00009 PENDING 2 16 0.0286986 2 |
+-------------------------------------------------------------------------------+
Trial train_cifar_acb57_00007 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00007 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 256 |
| l2 256 |
| lr 0.00477 |
+--------------------------------------------------+
Trial train_cifar_acb57_00001 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00001 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 1 |
| l2 2 |
| lr 0.01342 |
+--------------------------------------------------+
Trial train_cifar_acb57_00003 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00003 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 64 |
| l2 256 |
| lr 0.02741 |
+--------------------------------------------------+
Trial train_cifar_acb57_00000 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00000 config |
+--------------------------------------------------+
| batch_size 2 |
| l1 16 |
| l2 1 |
| lr 0.00213 |
+--------------------------------------------------+
Trial train_cifar_acb57_00002 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00002 config |
+--------------------------------------------------+
| batch_size 2 |
| l1 256 |
| l2 64 |
| lr 0.01138 |
+--------------------------------------------------+
Trial train_cifar_acb57_00004 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00004 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 16 |
| l2 2 |
| lr 0.05667 |
+--------------------------------------------------+
(func pid=4869) Files already downloaded and verified
Trial train_cifar_acb57_00006 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00006 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 16 |
| l2 4 |
| lr 0.00015 |
+--------------------------------------------------+
Trial train_cifar_acb57_00005 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00005 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 8 |
| l2 64 |
| lr 0.00035 |
+--------------------------------------------------+
(func pid=4868) [1, 2000] loss: 2.321
(func pid=4886) Files already downloaded and verified [repeated 15x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)
Trial status: 8 RUNNING | 2 PENDING
Current time: 2025-01-02 21:58:48. Total running time: 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_acb57_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00003 RUNNING 64 256 0.0274071 8 |
| train_cifar_acb57_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_acb57_00006 RUNNING 16 4 0.000147684 8 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 |
| train_cifar_acb57_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_acb57_00009 PENDING 2 16 0.0286986 2 |
+-------------------------------------------------------------------------------+
(func pid=4868) [1, 4000] loss: 1.153 [repeated 8x across cluster]
(func pid=4871) [1, 4000] loss: 1.047 [repeated 7x across cluster]
(func pid=4868) [1, 6000] loss: 0.768
(func pid=4869) [1, 6000] loss: 0.770
Trial train_cifar_acb57_00007 finished iteration 1 at 2025-01-02 21:59:18. Total running time: 1min 0s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 54.19759 |
| time_total_s 54.19759 |
| training_iteration 1 |
| accuracy 0.4812 |
| loss 1.46991 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000000
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000000)
Trial status: 8 RUNNING | 2 PENDING
Current time: 2025-01-02 21:59:18. Total running time: 1min 0s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+----------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+----------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_acb57_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00003 RUNNING 64 256 0.0274071 8 |
| train_cifar_acb57_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_acb57_00006 RUNNING 16 4 0.000147684 8 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 1 54.1976 1.46991 0.4812 |
| train_cifar_acb57_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_acb57_00009 PENDING 2 16 0.0286986 2 |
+----------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_acb57_00006 finished iteration 1 at 2025-01-02 21:59:18. Total running time: 1min 0s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 53.9304 |
| time_total_s 53.9304 |
| training_iteration 1 |
| accuracy 0.1185 |
| loss 2.30605 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00006 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00006 completed after 1 iterations at 2025-01-02 21:59:18. Total running time: 1min 0s
Trial train_cifar_acb57_00008 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_acb57_00008 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 128 |
| l2 256 |
| lr 0.03062 |
+--------------------------------------------------+
(func pid=4886) Files already downloaded and verified
Trial train_cifar_acb57_00003 finished iteration 1 at 2025-01-02 21:59:20. Total running time: 1min 1s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00003 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 55.97928 |
| time_total_s 55.97928 |
| training_iteration 1 |
| accuracy 0.2109 |
| loss 2.082 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00003 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00003_3_batch_size=8,l1=64,l2=256,lr=0.0274_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00003 completed after 1 iterations at 2025-01-02 21:59:20. Total running time: 1min 1s
Trial train_cifar_acb57_00009 started with configuration:
+-------------------------------------------------+
| Trial train_cifar_acb57_00009 config |
+-------------------------------------------------+
| batch_size 2 |
| l1 2 |
| l2 16 |
| lr 0.0287 |
+-------------------------------------------------+
(func pid=4886) Files already downloaded and verified
(func pid=4870) [1, 6000] loss: 0.734 [repeated 3x across cluster]
(func pid=4871) Files already downloaded and verified [repeated 2x across cluster]
(func pid=4868) [1, 8000] loss: 0.576
(func pid=4869) [1, 8000] loss: 0.577
(func pid=4887) [2, 2000] loss: 1.389 [repeated 4x across cluster]
(func pid=4868) [1, 10000] loss: 0.441 [repeated 3x across cluster]
(func pid=4872) [1, 10000] loss: 0.467 [repeated 3x across cluster]
Trial status: 8 RUNNING | 2 TERMINATED
Current time: 2025-01-02 21:59:48. Total running time: 1min 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_acb57_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 1 54.1976 1.46991 0.4812 |
| train_cifar_acb57_00008 RUNNING 128 256 0.0306227 8 |
| train_cifar_acb57_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4871) [1, 4000] loss: 1.169
(func pid=4870) [1, 10000] loss: 0.463
Trial train_cifar_acb57_00005 finished iteration 1 at 2025-01-02 22:00:00. Total running time: 1min 41s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 95.20012 |
| time_total_s 95.20012 |
| training_iteration 1 |
| accuracy 0.3406 |
| loss 1.74255 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00005 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000000
(func pid=4885) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000000) [repeated 3x across cluster]
Trial train_cifar_acb57_00001 finished iteration 1 at 2025-01-02 22:00:00. Total running time: 1min 42s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00001 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 96.32778 |
| time_total_s 96.32778 |
| training_iteration 1 |
| accuracy 0.0989 |
| loss 2.30402 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00001 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00001_1_batch_size=4,l1=1,l2=2,lr=0.0134_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00001 completed after 1 iterations at 2025-01-02 22:00:00. Total running time: 1min 42s
Trial train_cifar_acb57_00004 finished iteration 1 at 2025-01-02 22:00:01. Total running time: 1min 42s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00004 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 96.47434 |
| time_total_s 96.47434 |
| training_iteration 1 |
| accuracy 0.101 |
| loss 2.33135 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00004 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00004_4_batch_size=4,l1=16,l2=2,lr=0.0567_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00004 completed after 1 iterations at 2025-01-02 22:00:01. Total running time: 1min 42s
(func pid=4871) [1, 6000] loss: 0.777 [repeated 4x across cluster]
Trial train_cifar_acb57_00007 finished iteration 2 at 2025-01-02 22:00:08. Total running time: 1min 50s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 50.0624 |
| time_total_s 104.25999 |
| training_iteration 2 |
| accuracy 0.5473 |
| loss 1.28808 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000001
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000001) [repeated 3x across cluster]
Trial train_cifar_acb57_00008 finished iteration 1 at 2025-01-02 22:00:10. Total running time: 1min 52s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00008 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 51.9493 |
| time_total_s 51.9493 |
| training_iteration 1 |
| accuracy 0.2172 |
| loss 2.05322 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00008 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-01-02_21-58-18/checkpoint_000000
(func pid=4885) [2, 2000] loss: 1.727 [repeated 3x across cluster]
(func pid=4871) [1, 8000] loss: 0.584
Trial status: 6 RUNNING | 4 TERMINATED
Current time: 2025-01-02 22:00:18. Total running time: 2min 0s
Logical resource usage: 12.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 1 95.2001 1.74255 0.3406 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 2 104.26 1.28808 0.5473 |
| train_cifar_acb57_00008 RUNNING 128 256 0.0306227 8 1 51.9493 2.05322 0.2172 |
| train_cifar_acb57_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4868) [1, 16000] loss: 0.247
(func pid=4871) [1, 10000] loss: 0.467 [repeated 5x across cluster]
(func pid=4885) [2, 6000] loss: 0.536 [repeated 2x across cluster]
(func pid=4868) [1, 20000] loss: 0.197 [repeated 5x across cluster]
Trial status: 6 RUNNING | 4 TERMINATED
Current time: 2025-01-02 22:00:48. Total running time: 2min 30s
Logical resource usage: 12.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 1 95.2001 1.74255 0.3406 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 2 104.26 1.28808 0.5473 |
| train_cifar_acb57_00008 RUNNING 128 256 0.0306227 8 1 51.9493 2.05322 0.2172 |
| train_cifar_acb57_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_acb57_00007 finished iteration 3 at 2025-01-02 22:00:48. Total running time: 2min 30s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000002 |
| time_this_iter_s 40.35045 |
| time_total_s 144.61044 |
| training_iteration 3 |
| accuracy 0.5602 |
| loss 1.258 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000002
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000002) [repeated 2x across cluster]
(func pid=4870) [1, 18000] loss: 0.257 [repeated 2x across cluster]
Trial train_cifar_acb57_00008 finished iteration 2 at 2025-01-02 22:00:53. Total running time: 2min 34s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00008 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 42.5872 |
| time_total_s 94.5365 |
| training_iteration 2 |
| accuracy 0.2199 |
| loss 2.0719 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00008 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-01-02_21-58-18/checkpoint_000001
Trial train_cifar_acb57_00008 completed after 2 iterations at 2025-01-02 22:00:53. Total running time: 2min 34s
(func pid=4885) [2, 10000] loss: 0.305 [repeated 2x across cluster]
Trial train_cifar_acb57_00000 finished iteration 1 at 2025-01-02 22:01:01. Total running time: 2min 42s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00000 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 156.91532 |
| time_total_s 156.91532 |
| training_iteration 1 |
| accuracy 0.2024 |
| loss 1.95374 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00000 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-01-02_21-58-18/checkpoint_000000
(func pid=4868) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-01-02_21-58-18/checkpoint_000000) [repeated 2x across cluster]
(func pid=4870) [1, 20000] loss: 0.232 [repeated 3x across cluster]
Trial train_cifar_acb57_00005 finished iteration 2 at 2025-01-02 22:01:08. Total running time: 2min 49s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 68.21913 |
| time_total_s 163.41925 |
| training_iteration 2 |
| accuracy 0.449 |
| loss 1.50449 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00005 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000001
(func pid=4885) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000001)
(func pid=4868) [2, 2000] loss: 1.954
(func pid=4871) [1, 18000] loss: 0.260
Trial status: 5 RUNNING | 5 TERMINATED
Current time: 2025-01-02 22:01:18. Total running time: 3min 0s
Logical resource usage: 10.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 1 156.915 1.95374 0.2024 |
| train_cifar_acb57_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 2 163.419 1.50449 0.449 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 3 144.61 1.258 0.5602 |
| train_cifar_acb57_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4885) [3, 2000] loss: 1.484 [repeated 2x across cluster]
Trial train_cifar_acb57_00002 finished iteration 1 at 2025-01-02 22:01:21. Total running time: 3min 3s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00002 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 177.56105 |
| time_total_s 177.56105 |
| training_iteration 1 |
| accuracy 0.0985 |
| loss 2.3243 |
+------------------------------------------------------------+
(func pid=4870) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2025-01-02_21-58-18/checkpoint_000000)
Trial train_cifar_acb57_00002 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00002 completed after 1 iterations at 2025-01-02 22:01:21. Total running time: 3min 3s
(func pid=4871) [1, 20000] loss: 0.233 [repeated 2x across cluster]
Trial train_cifar_acb57_00007 finished iteration 4 at 2025-01-02 22:01:26. Total running time: 3min 8s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000003 |
| time_this_iter_s 37.85795 |
| time_total_s 182.46839 |
| training_iteration 4 |
| accuracy 0.5818 |
| loss 1.22092 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000003
(func pid=4885) [3, 4000] loss: 0.735
(func pid=4868) [2, 6000] loss: 0.645
(func pid=4887) [5, 2000] loss: 1.068
Trial train_cifar_acb57_00009 finished iteration 1 at 2025-01-02 22:01:39. Total running time: 3min 21s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00009 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 139.43723 |
| time_total_s 139.43723 |
| training_iteration 1 |
| accuracy 0.1015 |
| loss 2.3257 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00009 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00009_9_batch_size=2,l1=2,l2=16,lr=0.0287_2025-01-02_21-58-18/checkpoint_000000
Trial train_cifar_acb57_00009 completed after 1 iterations at 2025-01-02 22:01:39. Total running time: 3min 21s
(func pid=4871) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00009_9_batch_size=2,l1=2,l2=16,lr=0.0287_2025-01-02_21-58-18/checkpoint_000000) [repeated 2x across cluster]
(func pid=4885) [3, 6000] loss: 0.487
Trial status: 3 RUNNING | 7 TERMINATED
Current time: 2025-01-02 22:01:48. Total running time: 3min 30s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 1 156.915 1.95374 0.2024 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 2 163.419 1.50449 0.449 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 4 182.468 1.22092 0.5818 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4885) [3, 8000] loss: 0.357 [repeated 2x across cluster]
(func pid=4885) [3, 10000] loss: 0.284 [repeated 3x across cluster]
Trial train_cifar_acb57_00007 finished iteration 5 at 2025-01-02 22:01:59. Total running time: 3min 41s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000004 |
| time_this_iter_s 32.97702 |
| time_total_s 215.44542 |
| training_iteration 5 |
| accuracy 0.5554 |
| loss 1.30401 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 5 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000004
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000004)
Trial train_cifar_acb57_00005 finished iteration 3 at 2025-01-02 22:02:06. Total running time: 3min 48s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000002 |
| time_this_iter_s 58.1904 |
| time_total_s 221.60965 |
| training_iteration 3 |
| accuracy 0.4754 |
| loss 1.45713 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00005 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000002
(func pid=4885) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000002)
(func pid=4868) [2, 14000] loss: 0.276 [repeated 2x across cluster]
(func pid=4885) [4, 2000] loss: 1.381 [repeated 2x across cluster]
Trial status: 3 RUNNING | 7 TERMINATED
Current time: 2025-01-02 22:02:18. Total running time: 4min 0s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 1 156.915 1.95374 0.2024 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 3 221.61 1.45713 0.4754 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 5 215.445 1.30401 0.5554 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4887) [6, 4000] loss: 0.534 [repeated 2x across cluster]
Trial train_cifar_acb57_00007 finished iteration 6 at 2025-01-02 22:02:32. Total running time: 4min 13s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000005 |
| time_this_iter_s 32.56537 |
| time_total_s 248.01078 |
| training_iteration 6 |
| accuracy 0.5841 |
| loss 1.24011 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 6 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000005
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000005)
(func pid=4885) [4, 6000] loss: 0.450 [repeated 3x across cluster]
(func pid=4887) [7, 2000] loss: 0.965 [repeated 2x across cluster]
Trial status: 3 RUNNING | 7 TERMINATED
Current time: 2025-01-02 22:02:48. Total running time: 4min 30s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 RUNNING 16 1 0.00213327 2 1 156.915 1.95374 0.2024 |
| train_cifar_acb57_00005 RUNNING 8 64 0.000353097 4 3 221.61 1.45713 0.4754 |
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 6 248.011 1.24011 0.5841 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_acb57_00000 finished iteration 2 at 2025-01-02 22:02:49. Total running time: 4min 30s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00000 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 107.74446 |
| time_total_s 264.65979 |
| training_iteration 2 |
| accuracy 0.2192 |
| loss 1.91832 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00000 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-01-02_21-58-18/checkpoint_000001
Trial train_cifar_acb57_00000 completed after 2 iterations at 2025-01-02 22:02:49. Total running time: 4min 30s
(func pid=4868) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-01-02_21-58-18/checkpoint_000001)
(func pid=4885) [4, 10000] loss: 0.266 [repeated 2x across cluster]
Trial train_cifar_acb57_00005 finished iteration 4 at 2025-01-02 22:02:59. Total running time: 4min 41s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000003 |
| time_this_iter_s 53.03366 |
| time_total_s 274.64331 |
| training_iteration 4 |
| accuracy 0.5127 |
| loss 1.37224 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00005 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000003
Trial train_cifar_acb57_00005 completed after 4 iterations at 2025-01-02 22:02:59. Total running time: 4min 41s
(func pid=4885) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-01-02_21-58-18/checkpoint_000003)
Trial train_cifar_acb57_00007 finished iteration 7 at 2025-01-02 22:03:02. Total running time: 4min 44s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000006 |
| time_this_iter_s 30.3697 |
| time_total_s 278.38048 |
| training_iteration 7 |
| accuracy 0.5745 |
| loss 1.33752 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 7 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000006
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000006)
(func pid=4887) [8, 2000] loss: 0.951 [repeated 2x across cluster]
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-01-02 22:03:18. Total running time: 5min 0s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 7 278.38 1.33752 0.5745 |
| train_cifar_acb57_00000 TERMINATED 16 1 0.00213327 2 2 264.66 1.91832 0.2192 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00005 TERMINATED 8 64 0.000353097 4 4 274.643 1.37224 0.5127 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4887) [8, 4000] loss: 0.502
Trial train_cifar_acb57_00007 finished iteration 8 at 2025-01-02 22:03:29. Total running time: 5min 10s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000007 |
| time_this_iter_s 26.38767 |
| time_total_s 304.76815 |
| training_iteration 8 |
| accuracy 0.5755 |
| loss 1.28171 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 8 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000007
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000007)
(func pid=4887) [9, 2000] loss: 0.934
(func pid=4887) [9, 4000] loss: 0.492
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-01-02 22:03:49. Total running time: 5min 30s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 8 304.768 1.28171 0.5755 |
| train_cifar_acb57_00000 TERMINATED 16 1 0.00213327 2 2 264.66 1.91832 0.2192 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00005 TERMINATED 8 64 0.000353097 4 4 274.643 1.37224 0.5127 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_acb57_00007 finished iteration 9 at 2025-01-02 22:03:55. Total running time: 5min 37s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000008 |
| time_this_iter_s 26.51591 |
| time_total_s 331.28406 |
| training_iteration 9 |
| accuracy 0.5687 |
| loss 1.35061 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 9 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000008
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000008)
(func pid=4887) [10, 2000] loss: 0.893
(func pid=4887) [10, 4000] loss: 0.492
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-01-02 22:04:19. Total running time: 6min 0s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00007 RUNNING 256 256 0.00477469 8 9 331.284 1.35061 0.5687 |
| train_cifar_acb57_00000 TERMINATED 16 1 0.00213327 2 2 264.66 1.91832 0.2192 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00005 TERMINATED 8 64 0.000353097 4 4 274.643 1.37224 0.5127 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_acb57_00007 finished iteration 10 at 2025-01-02 22:04:21. Total running time: 6min 3s
+------------------------------------------------------------+
| Trial train_cifar_acb57_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000009 |
| time_this_iter_s 26.11893 |
| time_total_s 357.40299 |
| training_iteration 10 |
| accuracy 0.5642 |
| loss 1.3626 |
+------------------------------------------------------------+
Trial train_cifar_acb57_00007 saved a checkpoint for iteration 10 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000009
Trial train_cifar_acb57_00007 completed after 10 iterations at 2025-01-02 22:04:21. Total running time: 6min 3s
Trial status: 10 TERMINATED
Current time: 2025-01-02 22:04:21. Total running time: 6min 3s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_acb57_00000 TERMINATED 16 1 0.00213327 2 2 264.66 1.91832 0.2192 |
| train_cifar_acb57_00001 TERMINATED 1 2 0.013416 4 1 96.3278 2.30402 0.0989 |
| train_cifar_acb57_00002 TERMINATED 256 64 0.0113784 2 1 177.561 2.3243 0.0985 |
| train_cifar_acb57_00003 TERMINATED 64 256 0.0274071 8 1 55.9793 2.082 0.2109 |
| train_cifar_acb57_00004 TERMINATED 16 2 0.056666 4 1 96.4743 2.33135 0.101 |
| train_cifar_acb57_00005 TERMINATED 8 64 0.000353097 4 4 274.643 1.37224 0.5127 |
| train_cifar_acb57_00006 TERMINATED 16 4 0.000147684 8 1 53.9304 2.30605 0.1185 |
| train_cifar_acb57_00007 TERMINATED 256 256 0.00477469 8 10 357.403 1.3626 0.5642 |
| train_cifar_acb57_00008 TERMINATED 128 256 0.0306227 8 2 94.5365 2.0719 0.2199 |
| train_cifar_acb57_00009 TERMINATED 2 16 0.0286986 2 1 139.437 2.3257 0.1015 |
+------------------------------------------------------------------------------------------------------------------------------------+
Best trial config: {'l1': 256, 'l2': 256, 'lr': 0.00477468908087826, 'batch_size': 8}
Best trial final validation loss: 1.362598385155201
Best trial final validation accuracy: 0.5642
(func pid=4887) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-01-02_21-58-18/train_cifar_acb57_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-01-02_21-58-18/checkpoint_000009)
Files already downloaded and verified
Files already downloaded and verified
Best trial test set accuracy: 0.5794
If you run the code, an example output could look like this:
Number of trials: 10/10 (10 TERMINATED)
+-----+--------------+------+------+-------------+--------+---------+------------+
| ... | batch_size | l1 | l2 | lr | iter | loss | accuracy |
|-----+--------------+------+------+-------------+--------+---------+------------|
| ... | 2 | 1 | 256 | 0.000668163 | 1 | 2.31479 | 0.0977 |
| ... | 4 | 64 | 8 | 0.0331514 | 1 | 2.31605 | 0.0983 |
| ... | 4 | 2 | 1 | 0.000150295 | 1 | 2.30755 | 0.1023 |
| ... | 16 | 32 | 32 | 0.0128248 | 10 | 1.66912 | 0.4391 |
| ... | 4 | 8 | 128 | 0.00464561 | 2 | 1.7316 | 0.3463 |
| ... | 8 | 256 | 8 | 0.00031556 | 1 | 2.19409 | 0.1736 |
| ... | 4 | 16 | 256 | 0.00574329 | 2 | 1.85679 | 0.3368 |
| ... | 8 | 2 | 2 | 0.00325652 | 1 | 2.30272 | 0.0984 |
| ... | 2 | 2 | 2 | 0.000342987 | 2 | 1.76044 | 0.292 |
| ... | 4 | 64 | 32 | 0.003734 | 8 | 1.53101 | 0.4761 |
+-----+--------------+------+------+-------------+--------+---------+------------+
Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.0037339984519545164, 'batch_size': 4}
Best trial final validation loss: 1.5310075663924216
Best trial final validation accuracy: 0.4761
Best trial test set accuracy: 0.4737
Most trials have been stopped early in order to avoid wasting resources. The best performing trial achieved a validation accuracy of about 47%, which could be confirmed on the test set.

So that's it! You can now tune the parameters of your PyTorch models.

Total running time of the script: (6 minutes 20.572 seconds)