CS231N Assignment 1 SVM Notes

svm.ipynb

  • Implement a fully-vectorized loss function for the SVM
  • Implement the fully-vectorized expression for its analytic gradient
  • Check the implementation against a numerical gradient
  • Use a validation set to tune the learning rate and regularization strength
  • Optimize the loss function with SGD
  • Visualize the final learned weights

Part 1

1. Setup and library imports

# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10  # loads the CIFAR-10 dataset
import matplotlib.pyplot as plt  # plotting library, aliased as plt

# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)     # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'  # nearest-neighbor interpolation for images
plt.rcParams['image.cmap'] = 'gray'              # grayscale colormap

2. Set the dataset path, delete any previously loaded (now redundant) training and test data, and reload. The output, e.g. Training data shape: (50000, 32, 32, 3), means the training set contains 50,000 samples, each a 32x32-pixel image with 3 channels (RGB).
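Roughly, this step looks like the following sketch (the dataset path is the assignment's default layout; the original cell uses a bare except, narrowed here to NameError, which covers the first run when nothing has been loaded yet):

# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'

# Clean up variables to prevent loading the data multiple times,
# which may cause memory issues when re-running the cell.
try:
    del X_train, y_train
    del X_test, y_test
    print('Clear previously loaded data.')
except NameError:
    pass

X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
print('Training data shape: ', X_train.shape)  # (50000, 32, 32, 3)
print('Test data shape: ', X_test.shape)       # (10000, 32, 32, 3)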

3. Visualize a few samples from each class.

4. Split the data into training, validation, and test sets, and create a small dev set as a subset of the training data:

mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# take samples num_training .. num_training + num_validation of train as the validation set

mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# take the first num_training samples of train as the training set

mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# randomly pick num_dev samples from train as the dev set
# np.random.choice draws num_dev samples from the num_training available
# replace=False means no sample can be picked twice

mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# take the first num_test samples of the original test data as the test set

5. Reshape the image data into row vectors

X_train = np.reshape(X_train, (X_train.shape[0], -1))
# (X_train.shape[0], -1) keeps the first dimension (the number of samples) and flattens the rest.
# The -1 lets NumPy infer the flattened size so the total element count is unchanged,
# i.e. each image becomes a row vector of length 32*32*3 = 3072.

6. Subtract the mean image. Note that the train, val, test and dev sets all subtract the mean computed on the training set, as in the sketch below.
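A minimal sketch of the mean subtraction, using the variable names from the notebook:

# first: compute the mean image from the *training* data only
mean_image = np.mean(X_train, axis=0)  # shape (3072,)

# second: subtract the training mean from every split
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image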

Then append a column of ones to the train, val, test and dev sets as the SVM bias term (the bias trick):

# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])

Part 2: SVM Classifier

The main task is to complete the svm_loss_naive and svm_loss_vectorized functions in cs231n/classifiers/linear_svm.py.
 
1. The SVM loss function with loops: svm_loss_naive
It operates on minibatches of N examples, with data dimensionality D and C classes.
Inputs: the weight matrix W of shape (D, C), the minibatch data array X of shape (N, D), the training-label array y of length N, and a float reg giving the regularization strength.
It returns a tuple: the loss as a single float, and the gradient with respect to W as an array of the same shape as W.
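For reference, the quantity being computed is the multiclass hinge loss with margin delta = 1 (the "+ 1" in the code below) plus L2 regularization:

L = (1/N) * sum_i sum_{j != y_i} max(0, s_j - s_{y_i} + 1) + reg * sum_k W_k^2

where s = X[i].dot(W) are the class scores for example i.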
Initialize before looping over the images:
dW = np.zeros(W.shape)  # initialize the gradient as zero
# compute the loss and the gradient
num_classes = W.shape[1]  # number of columns of W (D, C), i.e. the number of classes C
num_train = X.shape[0]    # number of rows of X (N, D), i.e. the number of training samples N
loss = 0.0

Compute the loss and accumulate dW:

for i in range(num_train):
    scores = X[i].dot(W)  # class scores for image i
    correct_class_score = scores[y[i]]  # score of the correct class
    for j in range(num_classes):
        if j == y[i]:
            continue  # skip the correct class
        margin = scores[j] - correct_class_score + 1  # note delta = 1
        if margin > 0:
            loss += margin
            # for each violating class j, add X[i] to column j of dW,
            # and subtract it from the correct class's column
            dW[:, j] += X[i]
            dW[:, y[i]] -= X[i]

# Right now the loss is a sum over all training examples, but we want it
# to be an average instead so we divide by num_train.
loss /= num_train
dW /= num_train

# Add regularization to the loss (and the corresponding term to the gradient).
loss += reg * np.sum(W * W)
dW += 2 * reg * W

return loss, dW

With this in place, the naive implementation of the loss in svm.ipynb outputs a value of about 9, and checking the numerical gradient against the analytic one prints:

numerical: -5.307963 analytic: -5.307963, relative error: 6.594235e-11
numerical: -0.682609 analytic: -0.682609, relative error: 2.391609e-10
numerical: -13.345338 analytic: -13.345338, relative error: 6.576081e-12
numerical: 14.802119 analytic: 14.802119, relative error: 4.652609e-12
numerical: 10.270928 analytic: 10.270928, relative error: 1.109950e-11
numerical: -8.296183 analytic: -8.296183, relative error: 3.013113e-11
numerical: 16.433458 analytic: 16.433458, relative error: 8.212111e-12
numerical: -6.156451 analytic: -6.156451, relative error: 8.508562e-12
numerical: 16.405473 analytic: 16.405473, relative error: 2.345135e-11
numerical: 18.069155 analytic: 18.069155, relative error: 9.391046e-12
numerical: -4.456779 analytic: -4.456779, relative error: 7.112725e-11
numerical: 11.308422 analytic: 11.308422, relative error: 1.921026e-11
numerical: 14.150021 analytic: 14.150021, relative error: 7.236039e-12
numerical: -19.405984 analytic: -19.405984, relative error: 1.218480e-11
numerical: -2.109591 analytic: -2.109591, relative error: 1.329375e-10
numerical: 3.450493 analytic: 3.450493, relative error: 5.982507e-11
numerical: -7.641039 analytic: -7.641039, relative error: 2.339206e-11
numerical: -22.684454 analytic: -22.684454, relative error: 9.529207e-12
numerical: -8.240884 analytic: -8.240884, relative error: 4.336906e-11
numerical: 11.273042 analytic: 11.273042, relative error: 2.986500e-11

(I still haven't figured out how to tell from this output whether regularization makes a difference in the check.)

2. The vectorized SVM loss function: svm_loss_vectorized

def svm_loss_vectorized(W, X, y, reg):

    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    num_train = X.shape[0]
    scores = X.dot(W)  # (N, C) score matrix
    # score of the correct class for each example, as an (N, 1) column
    correct_class_scores = scores[np.arange(num_train), y].reshape(-1, 1)
    margins = np.maximum(0, scores - correct_class_scores + 1)  # delta = 1
    margins[np.arange(num_train), y] = 0  # the correct class contributes no loss
    loss = np.sum(margins) / num_train + reg * np.sum(W * W)

    # coefficient matrix: 1 where the margin is violated, and minus the
    # number of violations in the row at the correct-class position
    binary = margins
    binary[margins > 0] = 1
    row_sum = np.sum(binary, axis=1)
    binary[np.arange(num_train), y] = -row_sum
    dW = X.T.dot(binary) / num_train + 2 * reg * W

    return loss, dW
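The notebook then checks that the two implementations agree on the dev set; a sketch of that comparison (the reg value here is illustrative):

loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 5e1)
loss_vectorized, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 5e1)
# both differences should be essentially zero (up to floating-point error)
print('loss difference: %f' % (loss_naive - loss_vectorized))
print('gradient difference: %f' % np.linalg.norm(grad_naive - grad_vectorized, ord='fro'))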

Part 3: Stochastic Gradient Descent

cs231n/classifiers/linear_classifier.py

First, implement SGD in LinearClassifier.train().
Inputs:

  • X: data array of shape (N, D)
  • y: array of training labels of length N
  • learning_rate: float, the learning rate for optimization
  • reg: float, the regularization strength
  • num_iters: integer, the number of optimization steps
  • batch_size: integer, the number of training samples used at each step
  • verbose: boolean, whether to print progress

Output:

  • a list containing the value of the loss function at each training iteration
class LinearClassifier(object):
    def __init__(self):
        self.W = None

    def train(
        self,
        X,
        y,
        learning_rate=1e-3,
        reg=1e-5,
        num_iters=100,
        batch_size=200,
        verbose=False,
    ):
        num_train, dim = X.shape
        num_classes = (
            np.max(y) + 1
        )  # assume y takes values 0...K-1 where K is number of classes
        if self.W is None:
            # lazily initialize W
            self.W = 0.001 * np.random.randn(dim, num_classes)

        # Run stochastic gradient descent to optimize W
        loss_history = []
        for it in range(num_iters):
            # sample a minibatch with replacement (faster than sampling without)
            indices = np.random.choice(num_train, batch_size, replace=True)
            X_batch = X[indices]
            y_batch = y[indices]

            # evaluate loss and gradient
            loss, grad = self.loss(X_batch, y_batch, reg)
            loss_history.append(loss)

            # perform parameter update
            self.W -= learning_rate * grad

            if verbose and it % 100 == 0:  # print progress every 100 steps
                print("iteration %d / %d: loss %f" % (it, num_iters, loss))

        return loss_history

The output:

iteration 0 / 1500: loss 785.501628
iteration 100 / 1500: loss 287.509919
iteration 200 / 1500: loss 107.628805
iteration 300 / 1500: loss 41.988279
iteration 400 / 1500: loss 18.835256
iteration 500 / 1500: loss 9.834987
iteration 600 / 1500: loss 6.770916
iteration 700 / 1500: loss 6.600116
iteration 800 / 1500: loss 5.691690
iteration 900 / 1500: loss 5.231099
iteration 1000 / 1500: loss 5.708499
iteration 1100 / 1500: loss 5.363125
iteration 1200 / 1500: loss 4.935640
iteration 1300 / 1500: loss 5.723507
iteration 1400 / 1500: loss 5.441428
That took 8.358134s

Plot the loss history as a curve.
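A minimal plotting sketch, assuming the training call returned its history as loss_hist (the learning rate and reg values here are only illustrative):

loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
                      num_iters=1500, verbose=True)
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()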

Then implement the LinearSVM.predict function.

It takes X and returns the predicted labels y:

    def predict(self, X):
        y_pred = np.zeros(X.shape[0])
        scores = X.dot(self.W)
        y_pred = np.argmax(scores, axis=1)  # np.argmax returns the index of the highest score in each row
        return y_pred

The next task is to tune the hyperparameters to push the accuracy up to around 0.39 (it was only 0.37-0.38 before).

 
results = {}
best_val = -1   # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.

To save training time, start with a small number of iterations; once good parameters are found, increase it.

 
learning_rates = [1.5e-7, 1.4e-7, 1.3e-7, 1.2e-7]  # hyperparameter grid
regularization_strengths = [2.5e4,3e4]

for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=500)  # train with a small num_iters first
        
        # compute accuracy on the training and validation sets
        train_accuracy = np.mean(y_train == svm.predict(X_train))
        val_accuracy = np.mean(y_val == svm.predict(X_val))
        
        # store the accuracies in the results dictionary
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        
        # track the best validation accuracy and the corresponding LinearSVM object
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm

for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))
    
print('best validation accuracy achieved during cross-validation: %f' % best_val)

After tuning, the results ended up roughly like this (with num_iters=1500):

lr 1.400000e-07 reg 2.100000e+04 train accuracy: 0.368980 val accuracy: 0.374000
lr 1.400000e-07 reg 2.200000e+04 train accuracy: 0.372510 val accuracy: 0.387000
lr 1.400000e-07 reg 2.500000e+04 train accuracy: 0.366163 val accuracy: 0.393000
lr 1.400000e-07 reg 2.700000e+04 train accuracy: 0.364918 val accuracy: 0.384000
lr 1.400000e-07 reg 3.000000e+04 train accuracy: 0.351837 val accuracy: 0.353000
best validation accuracy achieved during cross-validation: 0.393000
Random search over the hyperparameters also works (I'm not good at tuning it this way; the results were never great):
np.random.seed(7)

learning_rates = 10**(np.random.rand(5) * 1.5 - 7.5)        # replaces the manual grid [1.4e-7]
regularization_strengths = 10**(np.random.rand(5) * 2 + 3)  # replaces [2.1e4, 2.2e4, 2.5e4, 2.7e4, 3e4]

Plot the results as a scatter plot.
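A sketch of the scatter plot, coloring each (learning rate, regularization) point by its validation accuracy, with both axes on a log10 scale (similar in spirit to the plotting code the notebook provides):

import math

x_scatter = [math.log10(x[0]) for x in results]  # log learning rate
y_scatter = [math.log10(x[1]) for x in results]  # log regularization strength
colors = [results[x][1] for x in results]        # validation accuracy

plt.scatter(x_scatter, y_scatter, 100, c=colors, cmap=plt.cm.coolwarm)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()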

Evaluate the results on the test set:

# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)

Finally, the learned weights are visualized; they need to be rescaled to the range 0 to 255 to be displayed as images.
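A sketch of that visualization (following the plotting code the notebook provides; the class names are CIFAR-10's ten categories):

w = best_svm.W[:-1, :]  # strip out the bias row
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
plt.show()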

 

 
 