Importing and training the model: a pretrained model can be imported using PyTorch's torchvision. The device can also be switched to a GPU, which reduces training time.

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision.models as models
import time
import copy
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_ft = models.vgg16(pretrained=True)
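As an optional transfer-learning variant (not used in this article, which fine-tunes all parameters, as the comment below points out), the pretrained convolutional features can be frozen so that only the new classifier head is trained; a minimal sketch, assuming you want feature extraction rather than full fine-tuning:

# Optional variant: freeze the pretrained feature extractor so only the
# replacement classifier head (added below) receives gradient updates.
for param in model_ft.features.parameters():
    param.requires_grad = False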
The dataset is further split into a training set and a validation set to avoid overfitting. Some of the parameters used while training this model are as follows:
criterion: cross-entropy loss
optimizer: stochastic gradient descent with learning rate = 0.001 and momentum = 0.9
step learning-rate scheduler: decays the learning rate by a factor of gamma = 0.1 every 7 epochs
Finally, a linear fully connected layer is added so that the output converges to the two predicted labels.

Note that VGG16 stores its head in model_ft.classifier rather than in a .fc attribute (which ResNet uses), so the last linear layer is replaced as follows:

num_ftrs = model_ft.classifier[6].in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
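The training loop below relies on dataloaders, dataset_sizes, and class_names, which the article never defines. A minimal sketch of how they might be built, assuming an ImageFolder-style directory layout with hypothetical paths data/train and data/val and typical ImageNet preprocessing:

import os
from torchvision import datasets, transforms

# Hypothetical layout: data/train/<class>/*.jpg and data/val/<class>/*.jpg
data_dir = 'data'
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
}
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                              shuffle=True, num_workers=4)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes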
These parameters can be chosen according to your own needs and dataset. Initially, we pass the inputs and labels to the model and obtain the predicted labels as output. These predictions and the actual label values are used to compute the cross-entropy loss, which is then used in backpropagation to update the weights and biases.

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            if phase == 'train':
                scheduler.step()
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))
            # deep copy the model if it is the best seen so far on validation
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:.4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=25)
After this step, you have successfully trained the model.
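To actually use the trained model, you can run inference on a single image. A minimal sketch (not from the original article), assuming a hypothetical image path test.jpg and the data_transforms and class_names defined in the data-loading sketch above:

from PIL import Image

model_ft.eval()
img = Image.open('test.jpg').convert('RGB')  # hypothetical path
x = data_transforms['val'](img).unsqueeze(0).to(device)
with torch.no_grad():
    outputs = model_ft(x)
    _, pred = torch.max(outputs, 1)
print('Predicted class:', class_names[pred.item()])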