PyTorch Learning Notes, Part One
1. Tensors
Directly from data:
data = [[1, 2],[3, 4]]
x_data = torch.tensor(data)
From a NumPy array:
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
Random or constant tensors:
shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
>>>
Random Tensor:
 tensor([[0.8772, 0.1196, 0.4408],
        [0.2220, 0.1709, 0.6351]])
Ones Tensor:
 tensor([[1., 1., 1.],
        [1., 1., 1.]])
Zeros Tensor:
 tensor([[0., 0., 0.],
        [0., 0., 0.]])
Tensor attributes:
tensor = torch.rand(3,4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
>>>
Shape of tensor: torch.Size([3, 4])
Datatype of tensor: torch.float32
Device tensor is stored on: cpu
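The device attribute above shows where the tensor lives. As a minimal sketch (my own addition, not from the original notes), a tensor can be moved onto the GPU when one is available:
# Move the tensor to the GPU if CUDA is available (illustrative sketch).
if torch.cuda.is_available():
    tensor = tensor.to("cuda")
print(f"Device tensor is stored on: {tensor.device}")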
Concatenating tensors with torch.cat (the output below assumes tensor is a 4x4 tensor of ones whose second column has been set to 0):
tensor = torch.ones(4, 4)
tensor[:, 1] = 0
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
>>>
tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
2. Datasets & DataLoader
Straight to the code:
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class MaskImageDataset(Dataset):
    def __init__(self, annotations_file, img_dir, transform=None):  # target_transform=None
        super(MaskImageDataset, self).__init__()
        self.img_labels_dir = annotations_file
        self.img_dir = img_dir
        self.transform = transform
        # self.target_transform = target_transform

    def __len__(self):
        # One sample per image file in the image directory
        return len(os.listdir(self.img_dir))

    def __getitem__(self, idx):
        # Pair the idx-th label file with the idx-th image file
        img_label_path = os.path.join(self.img_labels_dir, os.listdir(self.img_labels_dir)[idx])
        img_path = os.path.join(self.img_dir, os.listdir(self.img_dir)[idx])
        image = Image.open(img_path).convert('RGB')
        label = np.loadtxt(img_label_path)
        if self.transform:
            image = self.transform(image)
        # if self.target_transform:
        #     label = self.target_transform(label)
        return image, label
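The section title also mentions DataLoader, which wraps a Dataset and yields shuffled mini-batches. A minimal usage sketch; the directory names are hypothetical and a ToTensor transform is assumed so that samples can be collated into batches:
from torch.utils.data import DataLoader
from torchvision import transforms

# Hypothetical paths, for illustration only; images must have equal sizes to be batched.
dataset = MaskImageDataset(annotations_file="labels/", img_dir="images/",
                           transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=4, shuffle=True)

# Each iteration yields a mini-batch of image tensors and their labels.
images, labels = next(iter(loader))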
3. Transforms
All transform APIs are contained in torchvision.transforms.
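A small sketch of how they are typically composed (my own example, not from the original notes): ToTensor converts a PIL image to a float tensor and Normalize standardizes it.
from torchvision import transforms

# Compose chains several transforms into one callable.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # PIL image -> float tensor in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],  # example statistics, not dataset-specific
                         std=[0.5, 0.5, 0.5]),
])

# Can be passed as the transform argument of MaskImageDataset above.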
4. Building a Network
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        # Layers are defined here as module attributes.

    def forward(self, x):
        # The forward pass; this skeleton simply returns its input.
        return x
torch.nn contains all the building-block APIs.
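As a concrete sketch of filling in the skeleton above (the layer sizes and class name are my own assumptions, not from the original notes), a small fully connected classifier could look like this:
import torch
import torch.nn as nn

class SimpleClassifier(nn.Module):
    def __init__(self):
        super(SimpleClassifier, self).__init__()
        self.flatten = nn.Flatten()
        self.layers = nn.Sequential(
            nn.Linear(28 * 28, 512),  # example sizes for 28x28 single-channel inputs
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        return self.layers(x)

model = SimpleClassifier()
logits = model(torch.rand(1, 28, 28))  # raw scores for 10 classes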
5. Automatic Differentiation: torch.autograd
import torch
x = torch.ones(5) # input tensor
y = torch.zeros(3) # expected output
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
z = torch.matmul(x, w)+b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
To optimize, we need the gradients of the loss function with respect to the parameters, so w and b are created with requires_grad=True so that they can be updated. The tensors computed from them (the output z and the loss) carry a grad_fn attribute, which stores the function used to compute their gradients during the backward pass.
Call loss.backward() to compute the gradients in the backward pass.
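A short continuation of the snippet above (my own illustration) showing where the computed gradients end up:
loss.backward()   # populates .grad on tensors created with requires_grad=True
print(w.grad)     # dloss/dw, shape (5, 3)
print(b.grad)     # dloss/db, shape (3,)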
Gradient tracking: tensors with requires_grad=True record the operations applied to them so that gradients can be computed. Sometimes this bookkeeping is unnecessary, e.g. when we only need a forward pass for inference, so the computation can be wrapped in torch.no_grad():
with torch.no_grad():
    z = torch.matmul(x, w)+b
print(z.requires_grad)
>>> False
Jacobian matrix: in vector analysis, the Jacobian matrix is the matrix of first-order partial derivatives arranged in a fixed layout, and its determinant is the Jacobian determinant. The Jacobian matters because it gives the best linear approximation of a differentiable map at a given point; it therefore plays the role of the derivative for multivariable functions.
Rather than the full Jacobian, PyTorch computes the vector-Jacobian product v^T·J: calling backward with a vector v as its argument returns v multiplied by the Jacobian. Note also that gradients accumulate across backward() calls, which is why inp.grad doubles on the second call below and must be reset with inp.grad.zero_():
inp = torch.eye(5, requires_grad=True)
out = (inp+1).pow(2)
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"First call\n{inp.grad}")
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nSecond call\n{inp.grad}")
inp.grad.zero_()
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nCall after zeroing gradients\n{inp.grad}")
>>>
First call
tensor([[4., 2., 2., 2., 2.],
        [2., 4., 2., 2., 2.],
        [2., 2., 4., 2., 2.],
        [2., 2., 2., 4., 2.],
        [2., 2., 2., 2., 4.]])
Second call
tensor([[8., 4., 4., 4., 4.],
        [4., 8., 4., 4., 4.],
        [4., 4., 8., 4., 4.],
        [4., 4., 4., 8., 4.],
        [4., 4., 4., 4., 8.]])
Call after zeroing gradients
tensor([[4., 2., 2., 2., 2.],
        [2., 4., 2., 2., 2.],
        [2., 2., 4., 2., 2.],
        [2., 2., 2., 4., 2.],
        [2., 2., 2., 2., 4.]])
6. Optimizers
Hyperparameters:
The learning rate is typically chosen from 0.001, 0.01, 0.1, 1, i.e. swept in steps of a factor of 10.
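A typical set of hyperparameter definitions (illustrative values of my own, not from the original notes):
learning_rate = 1e-3   # step size for parameter updates
batch_size = 64        # number of samples per mini-batch
epochs = 5             # number of full passes over the training set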
Optimization loop: each epoch consists of two main parts, a training loop (iterate over the training set and try to converge to good parameters) and a test loop (iterate over the test set to check model performance).
def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")

def test_loop(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
Using the optimizer: inside the training loop, three calls happen each iteration: optimizer.zero_grad() resets the accumulated gradients, loss.backward() computes new ones, and optimizer.step() updates the parameters. A setup sketch follows below.
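A minimal sketch of tying the pieces together; the model, loss function, and dataloader names here are my assumptions, not from the original notes:
import torch
import torch.nn as nn

model = SimpleClassifier()           # hypothetical model from the Section 4 sketch
loss_fn = nn.CrossEntropyLoss()      # suitable for multi-class classification
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# train_dataloader and test_dataloader are assumed to be DataLoader instances.
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)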
7. Saving and Loading Models
Model parameters are stored in the model's internal state dictionary; calling model.state_dict() returns them, and torch.save() writes them to disk.
import torch
from torchvision import models

model = models.vgg16(pretrained=True)
torch.save(model.state_dict(), 'model_weights.pth')

model = models.vgg16()  # we do not specify pretrained=True, i.e. do not load default weights
model.load_state_dict(torch.load('model_weights.pth'))
model.eval()
To load a model this way, first create an instance with the same architecture, then load the saved weight file with load_state_dict().
Note: if you are evaluating rather than training, remember to call model.eval(); it switches layers such as dropout and batch normalization to evaluation mode (on its own it does not freeze the parameters; wrap inference in torch.no_grad() for that). Conversely, model.train() puts the model back into training mode so those layers behave as they do during training.
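Besides saving only the weights, the whole model object (architecture plus parameters) can also be serialized; a short sketch:
# Saves the full model object via pickle (the class definition must be importable when loading).
torch.save(model, 'model.pth')

# Later: restores both the architecture and the weights.
model = torch.load('model.pth')
model.eval()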
