Getting Started with MNIST Handwritten Digit Recognition

About the dataset

MNIST contains 60,000 28x28 training images and 10,000 test images. Nearly every tutorial "takes a crack" at it, and it has almost become a canonical example: you could call it the Hello World of computer vision.

Import the required packages

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import numpy as np

Set the hyperparameters

BATCH_SIZE = 512
EPOCHS = 10
# use the GPU when available, otherwise fall back to the CPU
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
lr = 0.01

Load the data

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        root='data',
        train=True,
        download=True,  # download the dataset if it is not already under 'data'
        transform=transforms.Compose([
            transforms.ToTensor(),
            # mean and std of the MNIST training pixels
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
    batch_size=BATCH_SIZE,
    shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(
        root='data',
        train=False,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
    batch_size=BATCH_SIZE,
    shuffle=False)  # no need to shuffle the test set
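
As a quick sanity check (my addition, not in the original post), we can pull one batch from train_loader, inspect its shape, and display the first digit with the matplotlib we imported earlier. The constants 0.1307 and 0.3081 passed to Normalize are the commonly quoted mean and standard deviation of the MNIST training pixels.

# pull a single batch from the training loader and inspect it
images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([512, 1, 28, 28]): batch x channel x height x width
print(labels[:10])    # the first ten labels in the batch

# visualize the first digit (squeeze removes the channel dimension for imshow)
plt.imshow(images[0].squeeze().numpy(), cmap='gray')
plt.title('label: {}'.format(labels[0].item()))
plt.show()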

Define the model

Let's first try a multilayer perceptron (MLP).

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        in_size = x.size(0)
        out = x.view(in_size, -1)  # flatten each image to a 784-dim vector
        out = self.fc1(out)        # in: 28*28=784, out: 256
        out = F.relu(out)
        out = self.fc2(out)        # in: 256, out: 10
        out = F.log_softmax(out, dim=1)
        return out

model_mlp = MLP().to(DEVICE)
optimizer_mlp = optim.SGD(model_mlp.parameters(), lr=lr)
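
Before training, a quick forward pass with a dummy batch (my own check, not in the original post) confirms the output shape and the parameter count:

# a dummy batch of 4 fake "images" of shape 1x28x28
dummy = torch.randn(4, 1, 28, 28).to(DEVICE)
print(model_mlp(dummy).shape)  # torch.Size([4, 10]): one row of log-probabilities per image

# trainable parameters: 784*256 + 256 + 256*10 + 10 = 203,530
print(sum(p.numel() for p in model_mlp.parameters()))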

Training and testing

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)  # negative log-likelihood, pairs with log_softmax
        loss.backward()
        optimizer.step()
        if (batch_idx + 1) % 30 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():  # no gradients needed for evaluation
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum now, average over the dataset later
            pred = output.max(1, keepdim=True)[1]  # index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, EPOCHS + 1):
    train(model_mlp, DEVICE, train_loader, optimizer_mlp, epoch)
    test(model_mlp, DEVICE, test_loader)
Train Epoch: 1 [14848/60000 (25%)]	Loss: 1.943828
Train Epoch: 1 [30208/60000 (50%)]	Loss: 1.579372
Train Epoch: 1 [45568/60000 (75%)]	Loss: 1.273092

Test set: Average loss: 1.0416, Accuracy: 8151/10000 (82%)

Train Epoch: 2 [14848/60000 (25%)]	Loss: 0.892413
Train Epoch: 2 [30208/60000 (50%)]	Loss: 0.777179
Train Epoch: 2 [45568/60000 (75%)]	Loss: 0.703713

Test set: Average loss: 0.6323, Accuracy: 8635/10000 (86%)

Train Epoch: 3 [14848/60000 (25%)]	Loss: 0.581855
Train Epoch: 3 [30208/60000 (50%)]	Loss: 0.585811
Train Epoch: 3 [45568/60000 (75%)]	Loss: 0.575292

Test set: Average loss: 0.4979, Accuracy: 8789/10000 (88%)

Train Epoch: 4 [14848/60000 (25%)]	Loss: 0.524915
Train Epoch: 4 [30208/60000 (50%)]	Loss: 0.506676
Train Epoch: 4 [45568/60000 (75%)]	Loss: 0.489618

Test set: Average loss: 0.4336, Accuracy: 8860/10000 (89%)

Train Epoch: 5 [14848/60000 (25%)]	Loss: 0.502646
Train Epoch: 5 [30208/60000 (50%)]	Loss: 0.453554
Train Epoch: 5 [45568/60000 (75%)]	Loss: 0.422082

Test set: Average loss: 0.3956, Accuracy: 8945/10000 (89%)

Train Epoch: 6 [14848/60000 (25%)]	Loss: 0.428002
Train Epoch: 6 [30208/60000 (50%)]	Loss: 0.442678
Train Epoch: 6 [45568/60000 (75%)]	Loss: 0.420096

Test set: Average loss: 0.3706, Accuracy: 8983/10000 (90%)

Train Epoch: 7 [14848/60000 (25%)]	Loss: 0.350920
Train Epoch: 7 [30208/60000 (50%)]	Loss: 0.367541
Train Epoch: 7 [45568/60000 (75%)]	Loss: 0.376175

Test set: Average loss: 0.3527, Accuracy: 9011/10000 (90%)

Train Epoch: 8 [14848/60000 (25%)]	Loss: 0.371939
Train Epoch: 8 [30208/60000 (50%)]	Loss: 0.364408
Train Epoch: 8 [45568/60000 (75%)]	Loss: 0.398144

Test set: Average loss: 0.3379, Accuracy: 9041/10000 (90%)

Train Epoch: 9 [14848/60000 (25%)]	Loss: 0.381710
Train Epoch: 9 [30208/60000 (50%)]	Loss: 0.350960
Train Epoch: 9 [45568/60000 (75%)]	Loss: 0.364110

Test set: Average loss: 0.3270, Accuracy: 9071/10000 (91%)

Train Epoch: 10 [14848/60000 (25%)]	Loss: 0.261994
Train Epoch: 10 [30208/60000 (50%)]	Loss: 0.329554
Train Epoch: 10 [45568/60000 (75%)]	Loss: 0.388473

Test set: Average loss: 0.3165, Accuracy: 9103/10000 (91%)

As we can see, the accuracy still has room to improve, so next we try a CNN.

Define the CNN (convolutional neural network)

class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        # input: batch_size*1*28*28
        # Conv2d arguments below: in_channels, out_channels, kernel_size
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.conv2 = nn.Conv2d(10, 20, 3)
        self.fc1 = nn.Linear(20*10*10, 500)  # in: 2000, out: 500
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        in_size = x.size(0)
        out = self.conv1(x)            # input batch_size*1*28*28, 5x5 conv -> batch_size*10*24*24
        out = F.relu(out)
        out = F.max_pool2d(out, 2, 2)  # 2x2 max pooling -> batch_size*10*12*12
        out = self.conv2(out)          # 3x3 conv -> batch_size*20*10*10
        out = F.relu(out)
        out = out.view(in_size, -1)    # flatten to batch_size*2000 (in_size is the batch size)
        out = self.fc1(out)            # linear layer: 2000 -> 500
        out = F.relu(out)
        out = self.fc2(out)            # linear layer: 500 -> 10
        out = F.log_softmax(out, dim=1)
        return out
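
To double-check the shape arithmetic in the comments above, here is a short trace on a throwaway CPU instance (a sanity check of my own, not part of the original post):

net = ConvNet()                 # temporary CPU instance, only for the shape trace
x = torch.randn(2, 1, 28, 28)   # dummy batch of 2 images
x = F.relu(net.conv1(x)); print(x.shape)   # torch.Size([2, 10, 24, 24])
x = F.max_pool2d(x, 2, 2); print(x.shape)  # torch.Size([2, 10, 12, 12])
x = F.relu(net.conv2(x)); print(x.shape)   # torch.Size([2, 20, 10, 10])
print(x.view(2, -1).shape)                 # torch.Size([2, 2000])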
model_cnn = ConvNet().to(DEVICE)
optimizer_cnn = optim.Adam(model_cnn.parameters())  # Adam with its default learning rate (1e-3)
for epoch in range(1, EPOCHS + 1):
    train(model_cnn, DEVICE, train_loader, optimizer_cnn, epoch)
    test(model_cnn, DEVICE, test_loader)
Train Epoch: 1 [14848/60000 (25%)]	Loss: 0.386750
Train Epoch: 1 [30208/60000 (50%)]	Loss: 0.218018
Train Epoch: 1 [45568/60000 (75%)]	Loss: 0.137457

Test set: Average loss: 0.1020, Accuracy: 9667/10000 (97%)

Train Epoch: 2 [14848/60000 (25%)]	Loss: 0.099728
Train Epoch: 2 [30208/60000 (50%)]	Loss: 0.056601
Train Epoch: 2 [45568/60000 (75%)]	Loss: 0.094554

Test set: Average loss: 0.0563, Accuracy: 9819/10000 (98%)

Train Epoch: 3 [14848/60000 (25%)]	Loss: 0.085307
Train Epoch: 3 [30208/60000 (50%)]	Loss: 0.038580
Train Epoch: 3 [45568/60000 (75%)]	Loss: 0.059188

Test set: Average loss: 0.0443, Accuracy: 9851/10000 (99%)

Train Epoch: 4 [14848/60000 (25%)]	Loss: 0.036482
Train Epoch: 4 [30208/60000 (50%)]	Loss: 0.036975
Train Epoch: 4 [45568/60000 (75%)]	Loss: 0.036004

Test set: Average loss: 0.0460, Accuracy: 9850/10000 (98%)

Train Epoch: 5 [14848/60000 (25%)]	Loss: 0.022025
Train Epoch: 5 [30208/60000 (50%)]	Loss: 0.052678
Train Epoch: 5 [45568/60000 (75%)]	Loss: 0.022728

Test set: Average loss: 0.0395, Accuracy: 9886/10000 (99%)

Train Epoch: 6 [14848/60000 (25%)]	Loss: 0.032698
Train Epoch: 6 [30208/60000 (50%)]	Loss: 0.053429
Train Epoch: 6 [45568/60000 (75%)]	Loss: 0.034514

Test set: Average loss: 0.0349, Accuracy: 9880/10000 (99%)

Train Epoch: 7 [14848/60000 (25%)]	Loss: 0.007919
Train Epoch: 7 [30208/60000 (50%)]	Loss: 0.011484
Train Epoch: 7 [45568/60000 (75%)]	Loss: 0.044704

Test set: Average loss: 0.0278, Accuracy: 9918/10000 (99%)

Train Epoch: 8 [14848/60000 (25%)]	Loss: 0.018610
Train Epoch: 8 [30208/60000 (50%)]	Loss: 0.028454
Train Epoch: 8 [45568/60000 (75%)]	Loss: 0.012064

Test set: Average loss: 0.0300, Accuracy: 9909/10000 (99%)

Train Epoch: 9 [14848/60000 (25%)]	Loss: 0.016430
Train Epoch: 9 [30208/60000 (50%)]	Loss: 0.009306
Train Epoch: 9 [45568/60000 (75%)]	Loss: 0.025945

Test set: Average loss: 0.0345, Accuracy: 9896/10000 (99%)

Train Epoch: 10 [14848/60000 (25%)]	Loss: 0.032782
Train Epoch: 10 [30208/60000 (50%)]	Loss: 0.006292
Train Epoch: 10 [45568/60000 (75%)]	Loss: 0.009470

Test set: Average loss: 0.0304, Accuracy: 9903/10000 (99%)

The accuracy reaches 99%.
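
To actually use the trained network, we can classify a few test images (a minimal sketch of my own, assuming model_cnn was trained as above):

model_cnn.eval()
images, labels = next(iter(test_loader))
with torch.no_grad():
    preds = model_cnn(images.to(DEVICE)).argmax(dim=1).cpu()
print('predicted:', preds[:10].tolist())
print('actual:   ', labels[:10].tolist())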


That said, MNIST is a very simple dataset. Its limitations mean it is mainly useful for research, and the value it brings to real applications is quite limited. As the saying goes: if your model can't even handle MNIST, it is worthless; and even if your model does handle MNIST, it may still be worthless. But these two models are all I know.

posted @ 2020-02-22 22:26  尾巴一米八