Build the Neural Network

Neural networks comprise layers/modules that perform operations on data. The torch.nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses nn.Module. A neural network is itself a module that consists of other modules (layers). This nested structure allows complex architectures to be built and managed easily.

In the following sections, we’ll build a neural network to classify images in the FashionMNIST dataset.

import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

Get Device for Training

We want to be able to train our model on a hardware accelerator such as a GPU, if one is available. Let's check whether torch.cuda is available; otherwise we continue to use the CPU.

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Using {device} device")

Out:

Using cuda device
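If your machine has a different accelerator, the same pattern extends to other backends. A minimal sketch, assuming a PyTorch build recent enough (1.12+) to expose the Apple-silicon MPS backend:

if torch.cuda.is_available():
    device = 'cuda'
elif torch.backends.mps.is_available():  # Apple-silicon GPU backend (assumes PyTorch >= 1.12)
    device = 'mps'
else:
    device = 'cpu'
print(f"Using {device} device")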

Define the Class

We define our neural network by subclassing nn.Module and initializing the network layers in __init__. Every nn.Module subclass implements the operations on input data in the forward method.

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

We create an instance of NeuralNetwork, move it to the device, and print its structure.

model = NeuralNetwork().to(device)
print(model)

Out:

NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

To use the model, we pass it the input data. This executes the model's forward method, along with some background operations. Do not call model.forward() directly!

Calling the model on the input returns a tensor with one row per input sample, each containing 10 raw predicted values, one per class. We get the prediction probabilities by passing it through an instance of the nn.Softmax module.

X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")

Out:

Predicted class: tensor([6], device='cuda:0')
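Since the model is randomly initialized and the input is random noise, the predicted class above is meaningless. For real inference it is common practice to switch the model to evaluation mode and disable gradient tracking; a minimal sketch of that pattern:

model.eval()                       # put layers such as dropout/batchnorm into eval behavior
with torch.no_grad():              # gradients are not needed for inference
    X = torch.rand(1, 28, 28, device=device)
    logits = model(X)
    pred_probab = nn.Softmax(dim=1)(logits)
    print(f"Predicted class: {pred_probab.argmax(1)}")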

Model Layers

Let's break down the layers in the FashionMNIST model. To illustrate, we will take a sample minibatch of 3 images of size 28x28 and see what happens to it as we pass it through the network.

input_image = torch.rand(3,28,28)
print(input_image.size())

Out:

torch.Size([3, 28, 28])

nn.Flatten

We initialize the nn.Flatten layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension, at dim=0, is maintained).

flatten = nn.Flatten()
flat_image = flatten(input_image)
print(flat_image.size())

Out:

torch.Size([3, 784])
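For this input, nn.Flatten is equivalent to reshaping the tensor while keeping the batch dimension; a quick check of that equivalence:

reshaped = input_image.reshape(3, -1)      # flatten everything except dim=0
print(torch.equal(flat_image, reshaped))   # True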

nn.Linear

The linear layer is a module that applies a linear transformation to the input using its stored weights and biases.

layer1 = nn.Linear(in_features=28*28, out_features=20)
hidden1 = layer1(flat_image)
print(hidden1.size())

Out:

torch.Size([3, 20])
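Under the hood, the layer stores a weight of shape (out_features, in_features) and a bias of shape (out_features,), and computes x @ W.T + b. A short sketch that reproduces hidden1 from those parameters:

print(layer1.weight.size())                # torch.Size([20, 784])
print(layer1.bias.size())                  # torch.Size([20])
manual = flat_image @ layer1.weight.T + layer1.bias
print(torch.allclose(hidden1, manual))     # True, up to floating-point rounding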

nn.ReLU

Non-linear activations are what create the complex mappings between the model’s inputs and outputs. They are applied after linear transformations to introduce nonlinearity, helping neural networks learn a wide variety of phenomena.

In this model, we use nn.ReLU between our linear layers, but there are other activations that can introduce non-linearity into your model.

print(f"Before ReLU: {hidden1}\n\n")
hidden1 = nn.ReLU()(hidden1)
print(f"After ReLU: {hidden1}")

Out:

Before ReLU: tensor([[-0.4007,  0.5836, -0.0140, -0.0956, -0.2288,  0.0181, -0.3303, -0.4466,
          0.3118, -0.3199, -0.3786, -0.1970,  0.4361,  0.8707, -0.0524, -0.3900,
          0.3749, -0.6645, -0.1192,  0.0752],
        [-0.6737,  0.4766,  0.1678, -0.1205, -0.2410, -0.0120, -0.1661, -0.2173,
          0.8240, -0.0305, -0.4806, -0.0143,  0.3055,  0.6513, -0.1032, -0.2957,
          0.3505, -0.5618,  0.2258,  0.1818],
        [-0.3843,  0.5948,  0.0975, -0.2577, -0.4811, -0.1273, -0.1702, -0.8380,
          0.8268,  0.0564, -0.2664,  0.3355,  0.1078,  0.7216, -0.2549, -0.3097,
          0.2744, -0.6682, -0.1338,  0.1401]], grad_fn=<AddmmBackward>)


After ReLU: tensor([[0.0000, 0.5836, 0.0000, 0.0000, 0.0000, 0.0181, 0.0000, 0.0000, 0.3118,
         0.0000, 0.0000, 0.0000, 0.4361, 0.8707, 0.0000, 0.0000, 0.3749, 0.0000,
         0.0000, 0.0752],
        [0.0000, 0.4766, 0.1678, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.8240,
         0.0000, 0.0000, 0.0000, 0.3055, 0.6513, 0.0000, 0.0000, 0.3505, 0.0000,
         0.2258, 0.1818],
        [0.0000, 0.5948, 0.0975, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.8268,
         0.0564, 0.0000, 0.3355, 0.1078, 0.7216, 0.0000, 0.0000, 0.2744, 0.0000,
         0.0000, 0.1401]], grad_fn=<ReluBackward0>)
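nn.ReLU has no parameters; it is just the elementwise function max(x, 0), so the functional form torch.relu, or clamp(min=0), gives the same result. A quick sketch on a fresh tensor (hidden1 was overwritten above):

z = torch.randn(3, 20)                             # fresh example tensor
print(torch.equal(nn.ReLU()(z), torch.relu(z)))    # True
print(torch.equal(nn.ReLU()(z), z.clamp(min=0)))   # True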

nn.Sequential

nn.Sequential is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like seq_modules.

seq_modules = nn.Sequential(
    flatten,
    layer1,
    nn.ReLU(),
    nn.Linear(20, 10)
)
input_image = torch.rand(3,28,28)
logits = seq_modules(input_image)
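By default, the submodules of an nn.Sequential are indexed by position (0, 1, 2, ...), as seen in the model printout above. nn.Sequential also accepts an OrderedDict if you prefer named submodules; a brief sketch:

from collections import OrderedDict

named_seq = nn.Sequential(OrderedDict([
    ('flatten', nn.Flatten()),
    ('hidden', nn.Linear(28*28, 20)),
    ('act', nn.ReLU()),
    ('output', nn.Linear(20, 10)),
]))
print(named_seq.hidden)   # Linear(in_features=784, out_features=20, bias=True)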

nn.Softmax

The last linear layer of the neural network returns logits, raw values in (-∞, ∞), which are passed to the nn.Softmax module. The logits are scaled to values in [0, 1] representing the model's predicted probabilities for each class. The dim parameter indicates the dimension along which the values must sum to 1.

softmax = nn.Softmax(dim=1)
pred_probab = softmax(logits)
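Because we chose dim=1, the probabilities in each row (one row per image in the minibatch) sum to 1; a quick check:

print(pred_probab.sum(dim=1))   # each of the 3 rows sums to 1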

Model Parameters

Many layers inside a neural network are parameterized, i.e. have associated weights and biases that are optimized during training. Subclassing nn.Module automatically tracks all fields defined inside your model object, and makes all parameters accessible using your model’s parameters() or named_parameters() methods.

In this example, we iterate over each parameter, and print its size and a preview of its values.

print("Model structure: ", model, "\n\n")

for name, param in model.named_parameters():
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")

Out:

Model structure:  NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)


Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[-0.0093,  0.0135, -0.0168,  ...,  0.0094, -0.0052,  0.0270],
        [-0.0338,  0.0073,  0.0204,  ..., -0.0005, -0.0114, -0.0174]],
       device='cuda:0', grad_fn=<SliceBackward>)

Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([ 0.0220, -0.0176], device='cuda:0', grad_fn=<SliceBackward>)

Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0187, -0.0218, -0.0009,  ...,  0.0177,  0.0298,  0.0088],
        [-0.0263, -0.0118,  0.0365,  ...,  0.0343, -0.0027,  0.0099]],
       device='cuda:0', grad_fn=<SliceBackward>)

Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([0.0220, 0.0358], device='cuda:0', grad_fn=<SliceBackward>)

Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[-0.0159, -0.0107,  0.0169,  ...,  0.0394,  0.0397,  0.0147],
        [-0.0331,  0.0396, -0.0088,  ..., -0.0409,  0.0039, -0.0358]],
       device='cuda:0', grad_fn=<SliceBackward>)

Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([ 0.0171, -0.0308], device='cuda:0', grad_fn=<SliceBackward>)
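Since parameters() yields every weight and bias tensor, it is also handy for counting trainable parameters; a short sketch:

total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Total trainable parameters: {total_params}")   # 669,706 for the model defined above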
