PyTorch


import torch

# Initialize x, y and z to values 4, -3 and 5
x = torch.tensor(4., requires_grad=True)
y = torch.tensor(-3., requires_grad=True)
z = torch.tensor(5., requires_grad=True)

# Set q to sum of x and y, set f to product of q with z
q = x + y
f = q * z

# Compute the derivatives
f.backward()

# Print the gradients
print("Gradient of x is: " + str(x.grad))
print("Gradient of y is: " + str(y.grad))
print("Gradient of z is: " + str(z.grad))
 
-------------------------------------------------------------------------------------------------
torch.matmul() performs matrix multiplication, while the * operator performs elementwise multiplication.
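
A minimal sketch of the difference (the tensors here are made up for illustration):

import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])

print(torch.matmul(a, b))   # matrix product:      [[19., 22.], [43., 50.]]
print(a * b)                # elementwise product: [[ 5., 12.], [21., 32.]]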
 
 
Calculating gradients in PyTorch is as easy as calculating derivatives: call backward() on a scalar result, and every input tensor created with requires_grad=True gets its partial derivative stored in its .grad attribute.
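
For the example above, f = (x + y) * z, so the chain rule gives df/dx = z = 5, df/dy = z = 5, and df/dz = x + y = 1, which is exactly what is printed as x.grad, y.grad and z.grad.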
 

Your first neural network

You are going to build a neural network in PyTorch the hard way, using explicit weight matrices and matrix multiplications instead of torch.nn. Your input will be images of size (28, 28), i.e. images containing 784 pixels. Your network will contain an input layer (the input_layer tensor, provided for you), a hidden layer with 200 units, and an output layer with 10 classes. You are going to create the weights and then perform the matrix multiplications to get the output of the network.

# Initialize the weights of the neural network
weight_1 = torch.rand(784, 200)
weight_2 = torch.rand(200, 10)

# Multiply input_layer with weight_1
hidden_1 = torch.matmul(input_layer, weight_1)

# Multiply hidden_1 with weight_2
output_layer = torch.matmul(hidden_1, weight_2)
print(output_layer)
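
Assuming input_layer holds a single flattened image of shape (1, 784) (the exercise supplies it), the shapes flow as (1, 784) x (784, 200) -> (1, 200) for hidden_1 and (1, 200) x (200, 10) -> (1, 10) for output_layer, so the final tensor has one row of 10 class scores.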
 
 
 
 
-----------------------------------------------------------------------------------
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        # Instantiate both linear layers
        self.fc1 = nn.Linear(784, 200)
        self.fc2 = nn.Linear(200, 10)

    def forward(self, x):
      
        # Use the instantiated layers and return x
        x = self.fc1(x)
        x = self.fc2(x)
        return x
PyTorch neural network
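
A quick usage sketch (a minimal example, assuming a random batch of 32 flattened images):

import torch

net = Net()
batch = torch.rand(32, 784)   # 32 flattened 28 x 28 images
output = net(batch)           # forward pass through fc1 and fc2
print(output.shape)           # torch.Size([32, 10])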
 
 

 

 

 


# Import torch and torch.nn
import torch
import torch.nn as nn

# Initialize random logits over 1000 classes and the ground-truth class index
logits = torch.rand(1, 1000)
ground_truth = torch.tensor([111])

# Instantiate cross-entropy loss
criterion = nn.CrossEntropyLoss()

# Calculate and print the loss
loss = criterion(logits, ground_truth)
print(loss)
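
CrossEntropyLoss applies log-softmax to the logits and takes the negative log-probability of the true class, so the same number can be reproduced by hand (a minimal sketch, reusing the tensors above):

import torch.nn.functional as F

manual_loss = -F.log_softmax(logits, dim=1)[0, ground_truth[0]]
print(manual_loss)   # same value as the loss printed above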
 
 
 
 

Preparing MNIST dataset

You are going to prepare dataloaders for the MNIST training and testing sets. As explained in the lecture, MNIST differs from CIFAR-10 in a few ways, the main one being that MNIST images are grayscale (1 channel) instead of RGB (3 channels).

Instructions

  • Transform the data to torch tensors and normalize it with mean 0.1307 and std 0.3081.
  • Prepare the trainset and the testset.
  • Prepare the dataloaders for training and testing so that only 32 pictures are processed at a time.

 

# Import torch, torchvision and the transforms module
import torch
import torchvision
import torchvision.transforms as transforms

# Transform the data to torch tensors and normalize it
# (0.1307 and 0.3081 are the mean and std of the MNIST training set)
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])

# Prepare the datasets
trainset = torchvision.datasets.MNIST('mnist', train=True,
                                      download=True, transform=transform)
testset = torchvision.datasets.MNIST('mnist', train=False,
                                     download=True, transform=transform)

# Prepare the dataloaders
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32,
                                          shuffle=True, num_workers=0)
testloader = torch.utils.data.DataLoader(testset, batch_size=32,
                                         shuffle=False, num_workers=0)       
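
A quick sketch of pulling one minibatch out of trainloader, just to confirm the shapes (32 grayscale 28 x 28 images and 32 labels):

images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([32, 1, 28, 28])
print(labels.shape)   # torch.Size([32])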
 
-------------------------------------------------------------------------------------
 
 
# Compute the shape of the training set and testing set
# (recent torchvision versions expose the raw images as .data;
#  .train_data and .test_data are deprecated aliases)
trainset_shape = trainloader.dataset.data.shape
testset_shape = testloader.dataset.data.shape

# Print the computed shapes
print(trainset_shape, testset_shape)

# Compute the size of the minibatch for training set and testing set
trainset_batchsize = trainloader.batch_size
testset_batchsize = testloader.batch_size

# Print sizes of the minibatch
print(trainset_batchsize, testset_batchsize)
 
 
 

 
