6、好莱坞明星识别

 
 

学习要求

  • 保存训练过程中的最佳模型权重
  • 调用官方的VGG-16网络框架

学习提高

  • 测试集准确率达到60%(难度有点大,但是这个过程可以学到不少)
  • 手动搭建VGG-16网络框架
 

对于模型的优化

第一次优化

首先采用baseline,我们的test_acc只能够达到20%不到,连train_acc也是如此,明显是欠拟合。对于一个模型我们第一步起码要让train_acc能够能够到95%的过拟合程度才配说进行test_acc的优化。由于VGG16模型本身较为复杂,并不可能是由于模型复杂度较低导致的欠拟合。由此我首先进行了学习率的调大,调整lr到10看看是否是由于学习率太小导致的不足。

结果得到如下:

Epoch:40, Train_acc:64.7%, Train_loss:134.587, Test_acc:45.8%, Test_loss:208.556, Lr:4.34E+00

这说明确实提高学习率能够增加精确度。

第二次优化

由于第一次优化中train_acc仍然在提高中,由此决定进一步提高lr,这次提高到50,同时将lr衰减机制去除。 结果如下

Epoch:40, Train_acc:60.3%, Train_loss:1157.049, Test_acc:38.9%, Test_loss:2019.087, Lr:5.00E+01

这次train_acc较快达到40%,但是到60%花了很久,而且最终性能不如第一次优化结果。这说明提高lr已经走到了能力范围内的极限。

第三次优化

仔细审阅模型,发现dropout为0.5,这对于一个欠拟合的模型来说显然是不合适的,虽然保留dropout层,但是将0.5提高到0.9进行尝试,同时将lr调整到10 结果如下

Epoch:40, Train_acc:18.0%, Train_loss:9157.342, Test_acc:30.0%, Test_loss:1639.763, Lr:8.17E+00

进行修改后train_acc真的蛮低的,不如dropout之前,但是test_acc反而较高,再次尝试调大lr。

Epoch:40, Train_acc:19.3%, Train_loss:43577.336, Test_acc:30.0%, Test_loss:9402.872, Lr:4.09E+01

好像并没有太多的用处,可以推断出与lr相关性不大。为什么提高dropout(减少正则化)反而导致结果下降?

第四次优化

反向操作,将dropout降低到0.2,保持lr不变。好像搞反了dropout的定义,对于欠拟合的要调低,过拟合的调高,一般维持在0.3-0.5区间内。

Epoch:39, Train_acc:87.2%, Train_loss:140.784, Test_acc:48.3%, Test_loss:1002.215, Lr:4.17E+01

成功将train_acc提高到88.3%,虽然在test_acc上仍然较低为43.1%。

第五次优化

这次将dropout调整到0.3,同时改变了优化器,采用Adam优化器,15个epoch train_acc就达到了80%的数值,但是后续就稍微有点卡住了,最好结果如下

Epoch:29, Train_acc:85.6%, Train_loss:3392.161, Test_acc:48.1%, Test_loss:23952.387, Lr:4.34E+01

第六次优化

将dropout设置为0,结果很容易就发生了过拟合

Epoch:39, Train_acc:97.5%, Train_loss:317.341, Test_acc:47.5%, Test_loss:25803.093, Lr:4.17E+01

 

一、前期工作准备部分

1、设置GPU

In [1]:
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision
from torchvision import transforms, datasets
from sklearn.model_selection import KFold
from torch.optim.lr_scheduler import StepLR, MultiStepLR, LambdaLR, ExponentialLR, CosineAnnealingLR, ReduceLROnPlateau

import os,PIL,pathlib,random

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

device
Out[1]:
device(type='cuda')
 

2、导入数据

In [2]:
data_dir = '../data/6-data'
# 通过Path类创建路径对象
data_dir = pathlib.Path(data_dir)
# 获取路径下所有文件路径
paths= list(data_dir.glob('*'))
# 获取所有文件夹的名字,也就是图片类别
classNames = [str(path).split("\\")[3] for path in paths] # K哥classNames中间会多一个e
classNames
Out[2]:
['Angelina Jolie',
 'Brad Pitt',
 'Denzel Washington',
 'Hugh Jackman',
 'Jennifer Lawrence',
 'Johnny Depp',
 'Kate Winslet',
 'Leonardo DiCaprio',
 'Megan Fox',
 'Natalie Portman',
 'Nicole Kidman',
 'Robert Downey Jr',
 'Sandra Bullock',
 'Scarlett Johansson',
 'Tom Cruise',
 'Tom Hanks',
 'Will Smith']
In [3]:
# 关于transforms.Compose的更多介绍可以参考:https://blog.csdn.net/qq_38251616/article/details/124878863
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),  # 将输入图片resize成统一尺寸
    # transforms.RandomHorizontalFlip(), # 随机水平翻转
    transforms.ToTensor(),          # 将PIL Image或numpy.ndarray转换为tensor,并归一化到[0,1]之间
    transforms.Normalize(           # 标准化处理-->转换为标准正太分布(高斯分布),使模型更容易收敛
        mean=[0.485, 0.456, 0.406], 
        std=[0.229, 0.224, 0.225])  # 其中 mean=[0.485,0.456,0.406]与std=[0.229,0.224,0.225] 从数据集中随机抽样计算得到的。
])

total_data = datasets.ImageFolder("../data/6-data/",transform=train_transforms)
total_data
Out[3]:
Dataset ImageFolder
    Number of datapoints: 1800
    Root location: ../data/6-data/
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=None)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
In [4]:
total_data.class_to_idx
Out[4]:
{'Angelina Jolie': 0,
 'Brad Pitt': 1,
 'Denzel Washington': 2,
 'Hugh Jackman': 3,
 'Jennifer Lawrence': 4,
 'Johnny Depp': 5,
 'Kate Winslet': 6,
 'Leonardo DiCaprio': 7,
 'Megan Fox': 8,
 'Natalie Portman': 9,
 'Nicole Kidman': 10,
 'Robert Downey Jr': 11,
 'Sandra Bullock': 12,
 'Scarlett Johansson': 13,
 'Tom Cruise': 14,
 'Tom Hanks': 15,
 'Will Smith': 16}
 

3、划分数据集

In [9]:
train_size = int(0.8 * len(total_data))
test_size  = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset
Out[9]:
(<torch.utils.data.dataset.Subset at 0x2723c54cb00>,
 <torch.utils.data.dataset.Subset at 0x2723c54cac8>)
In [6]:
kf = KFold(n_splits=10,shuffle=True, random_state=42)  # 初始化KFold
for train_index , test_index in kf.split(total_dataset):  # split  
    # get train, val 根据索引划分
    train_dataset = torch.utils.data.dataset.Subset(total_dataset, train_index)
    test_dataset = torch.utils.data.dataset.Subset(total_dataset, test_index) 

    # package type of DataLoader
    train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True)

    train_loader
 
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-6-4e419c68ba6d> in <module>()
      1 kf = KFold(n_splits=10,shuffle=True, random_state=42)  # 初始化KFold
----> 2for train_index , test_index in kf.split(total_dataset):  # split
      3     # get train, val 根据索引划分
      4     train_dataset = torch.utils.data.dataset.Subset(total_dataset, train_index)
      5     test_dataset = torch.utils.data.dataset.Subset(total_dataset, test_index)

NameError: name 'total_dataset' is not defined
In [10]:
batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=3)

test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=True,
                                      num_workers=3)
In [11]:
for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
 
Shape of X [N, C, H, W]:  torch.Size([32, 3, 224, 224])
Shape of y:  torch.Size([32]) torch.int64
 

二、调用官方的VGG-16模型

对于一般的CNN网络来说,都是由特征提取网络和分类网络构成,其中特征提取网络用于提取图片的特征,分类网络用于将图片进行分类。

In [17]:
from torchvision.models import vgg16

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
    
# 加载预训练模型,并且对模型进行微调
model = vgg16(pretrained = True).to(device) # 加载预训练的vgg16模型

for param in model.parameters():
    param.requires_grad = False # 冻结模型的参数,这样子在训练的时候只训练最后一层的参数

# 修改classifier模块的第6层(即:(6): Linear(in_features=4096, out_features=1000, bias=True))
# 注意查看我们下方打印出来的模型
model.classifier._modules['2'] = nn.Dropout(p=0, inplace=False) # 修改vgg16模型中最后一层全连接层,输出目标类别个数
model.classifier._modules['5'] = nn.Dropout(p=0, inplace=False) # 修改vgg16模型中最后一层全连接层,输出目标类别个数
model.classifier._modules['6'] = nn.Linear(4096,len(classNames)) # 修改vgg16模型中最后一层全连接层,输出目标类别个数
model.to(device)  
model
 
Using cuda device
 
C:\Anaconda\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
C:\Anaconda\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Out[17]:
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0, inplace=False)
    (6): Linear(in_features=4096, out_features=17, bias=True)
  )
)
 

三、训练模型

1、编写训练函数

In [13]:
# 训练循环
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # 训练集的大小
    num_batches = len(dataloader)   # 批次数目, (size/batch_size,向上取整)

    train_loss, train_acc = 0, 0  # 初始化训练损失和正确率
    
    for X, y in dataloader:  # 获取图片及其标签
        X, y = X.to(device), y.to(device)
        
        # 计算预测误差
        pred = model(X)          # 网络输出
        loss = loss_fn(pred, y)  # 计算网络输出和真实值之间的差距,targets为真实值,计算二者差值即为损失
        
        # 反向传播
        optimizer.zero_grad()  # grad属性归零
        loss.backward()        # 反向传播
        optimizer.step()       # 每一步自动更新
        
        # 记录acc与loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
            
    train_acc  /= size
    train_loss /= num_batches

    return train_acc, train_loss
 

2、编写训练函数

In [14]:
def test (dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # 测试集的大小
    num_batches = len(dataloader)          # 批次数目, (size/batch_size,向上取整)
    test_loss, test_acc = 0, 0
    
    # 当不进行训练时,停止梯度更新,节省计算内存消耗
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            
            # 计算loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)
            
            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc  /= size
    test_loss /= num_batches

    return test_acc, test_loss
 

3、设置动态学习率

In [18]:
learn_rate = 50 # 初始学习率
# 调用官方动态学习率接口时使用
lambda1 = lambda epoch: 0.98 ** (epoch // 4)
optimizer = torch.optim.Adam(model.parameters(), lr=learn_rate)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) #选定调整方法
 

4、正式训练

In [19]:
import copy

loss_fn    = nn.CrossEntropyLoss() # 创建损失函数
epochs     = 40

train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []

best_acc = 0    # 设置一个最佳准确率,作为最佳模型的判别指标

for epoch in range(epochs): 
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    scheduler.step() # 更新学习率(调用官方动态学习率接口时使用)
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    # 保存最佳模型到 best_model
    if epoch_test_acc > best_acc:
        best_acc   = epoch_test_acc
        best_model = copy.deepcopy(model)
    
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    # 获取当前的学习率
    lr = optimizer.state_dict()['param_groups'][0]['lr']
    
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, 
                          epoch_test_acc*100, epoch_test_loss, lr))
    
# 保存最佳模型到文件中
PATH = './best_model.pth'  # 保存的参数文件名
torch.save(model.state_dict(), PATH)

print('Done')
 
Epoch: 1, Train_acc:23.5%, Train_loss:18419.112, Test_acc:27.8%, Test_loss:16487.549, Lr:5.00E+01
Epoch: 2, Train_acc:55.8%, Train_loss:7237.138, Test_acc:34.2%, Test_loss:14506.850, Lr:5.00E+01
Epoch: 3, Train_acc:72.6%, Train_loss:3589.199, Test_acc:39.2%, Test_loss:13085.656, Lr:5.00E+01
Epoch: 4, Train_acc:74.9%, Train_loss:3257.284, Test_acc:37.8%, Test_loss:15517.393, Lr:4.90E+01
Epoch: 5, Train_acc:79.7%, Train_loss:2418.423, Test_acc:35.6%, Test_loss:16051.575, Lr:4.90E+01
Epoch: 6, Train_acc:82.6%, Train_loss:2074.764, Test_acc:35.8%, Test_loss:17977.143, Lr:4.90E+01
Epoch: 7, Train_acc:86.2%, Train_loss:1567.173, Test_acc:36.7%, Test_loss:16448.556, Lr:4.90E+01
Epoch: 8, Train_acc:89.4%, Train_loss:1207.302, Test_acc:38.9%, Test_loss:17045.840, Lr:4.80E+01
Epoch: 9, Train_acc:91.6%, Train_loss:847.697, Test_acc:41.9%, Test_loss:17879.542, Lr:4.80E+01
Epoch:10, Train_acc:91.0%, Train_loss:844.074, Test_acc:38.9%, Test_loss:17644.222, Lr:4.80E+01
Epoch:11, Train_acc:93.1%, Train_loss:589.103, Test_acc:39.2%, Test_loss:18139.325, Lr:4.80E+01
Epoch:12, Train_acc:90.6%, Train_loss:1052.865, Test_acc:38.6%, Test_loss:19511.559, Lr:4.71E+01
Epoch:13, Train_acc:88.8%, Train_loss:1511.585, Test_acc:41.1%, Test_loss:19193.926, Lr:4.71E+01
Epoch:14, Train_acc:92.2%, Train_loss:863.491, Test_acc:43.9%, Test_loss:21514.613, Lr:4.71E+01
Epoch:15, Train_acc:94.4%, Train_loss:546.319, Test_acc:41.4%, Test_loss:21077.479, Lr:4.71E+01
Epoch:16, Train_acc:95.4%, Train_loss:480.770, Test_acc:39.4%, Test_loss:22590.432, Lr:4.61E+01
Epoch:17, Train_acc:95.3%, Train_loss:400.498, Test_acc:41.1%, Test_loss:21691.482, Lr:4.61E+01
Epoch:18, Train_acc:95.1%, Train_loss:514.818, Test_acc:40.3%, Test_loss:24339.756, Lr:4.61E+01
Epoch:19, Train_acc:95.1%, Train_loss:573.919, Test_acc:40.0%, Test_loss:22368.741, Lr:4.61E+01
Epoch:20, Train_acc:96.9%, Train_loss:291.304, Test_acc:42.2%, Test_loss:21021.291, Lr:4.52E+01
Epoch:21, Train_acc:96.5%, Train_loss:365.276, Test_acc:43.6%, Test_loss:21516.765, Lr:4.52E+01
Epoch:22, Train_acc:95.9%, Train_loss:536.665, Test_acc:43.6%, Test_loss:24450.165, Lr:4.52E+01
Epoch:23, Train_acc:97.7%, Train_loss:270.193, Test_acc:43.3%, Test_loss:22180.139, Lr:4.52E+01
Epoch:24, Train_acc:96.5%, Train_loss:402.112, Test_acc:42.8%, Test_loss:25207.530, Lr:4.43E+01
Epoch:25, Train_acc:94.4%, Train_loss:745.123, Test_acc:42.8%, Test_loss:22773.804, Lr:4.43E+01
Epoch:26, Train_acc:95.1%, Train_loss:694.442, Test_acc:45.6%, Test_loss:24067.765, Lr:4.43E+01
Epoch:27, Train_acc:95.6%, Train_loss:363.762, Test_acc:41.9%, Test_loss:24654.352, Lr:4.43E+01
Epoch:28, Train_acc:95.8%, Train_loss:525.909, Test_acc:43.6%, Test_loss:24264.356, Lr:4.34E+01
Epoch:29, Train_acc:95.6%, Train_loss:614.229, Test_acc:47.5%, Test_loss:23170.986, Lr:4.34E+01
Epoch:30, Train_acc:96.8%, Train_loss:437.881, Test_acc:43.9%, Test_loss:25827.857, Lr:4.34E+01
Epoch:31, Train_acc:96.7%, Train_loss:419.391, Test_acc:45.3%, Test_loss:25716.313, Lr:4.34E+01
Epoch:32, Train_acc:98.1%, Train_loss:258.675, Test_acc:46.1%, Test_loss:24814.016, Lr:4.25E+01
Epoch:33, Train_acc:98.5%, Train_loss:179.266, Test_acc:43.1%, Test_loss:26414.059, Lr:4.25E+01
Epoch:34, Train_acc:98.9%, Train_loss:90.506, Test_acc:45.6%, Test_loss:27010.712, Lr:4.25E+01
Epoch:35, Train_acc:97.7%, Train_loss:253.113, Test_acc:42.8%, Test_loss:30575.033, Lr:4.25E+01
Epoch:36, Train_acc:96.8%, Train_loss:367.504, Test_acc:43.3%, Test_loss:29246.973, Lr:4.17E+01
Epoch:37, Train_acc:97.6%, Train_loss:302.682, Test_acc:41.9%, Test_loss:28744.326, Lr:4.17E+01
Epoch:38, Train_acc:96.9%, Train_loss:348.813, Test_acc:43.3%, Test_loss:29835.622, Lr:4.17E+01
Epoch:39, Train_acc:97.5%, Train_loss:317.341, Test_acc:47.5%, Test_loss:25803.093, Lr:4.17E+01
Epoch:40, Train_acc:96.7%, Train_loss:463.330, Test_acc:44.7%, Test_loss:28343.533, Lr:4.09E+01
Done
 

四、结果可视化

1、Loss与Accuracy图

In [12]:
import matplotlib.pyplot as plt
#隐藏警告
import warnings
warnings.filterwarnings("ignore")               #忽略警告信息
plt.rcParams['font.sans-serif']    = ['SimHei'] # 用来正常显示中文标签
plt.rcParams['axes.unicode_minus'] = False      # 用来正常显示负号
plt.rcParams['figure.dpi']         = 100        #分辨率

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
 
<Figure size 1200x300 with 2 Axes>
 

2、指定图片进行预测

In [13]:
from PIL import Image 

classes = list(train_dataset.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)  # 展示预测的图片

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)
    
    model.eval()
    output = model(img)

    _,pred = torch.max(output,1)
    pred_class = classes[pred]
    print(f'预测结果是:{pred_class}')
In [14]:
# 预测训练集中的某张照片
predict_one_image(image_path='E:/jupyter-notebook/data/6-data/Angelina Jolie/001_fe3347c0.jpg', 
                  model=model, 
                  transform=train_transforms, 
                  classes=classes)
 
预测结果是:nike
 

3、模型评估

In [58]:
best_model.eval()
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)
epoch_test_acc, epoch_test_loss
Out[58]:
(0.21944444444444444, 2.4482046564420066)
In [59]:
# 查看是否与我们记录的最高准确率一致
epoch_test_acc
Out[59]:
0.21944444444444444
posted @ 2022-11-04 21:01  CASTWJ  阅读(220)  评论(0)    收藏  举报