
10 Principles of PyTorch

Author: Refrigeration Plant

Welcome to this concise guide to the principles of PyTorch[1]. Whether you're a beginner or have some experience, knowing these principles can make your journey smoother. Let's get started!

1. Tensors: Building Blocks

Tensors in PyTorch are multidimensional arrays. They are similar to NumPy's ndarray, but can run on GPUs.

import torch

# Create a 2x3 tensor
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(tensor)
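
Since tensors share a design with NumPy's ndarray, round-tripping between the two is cheap; a minimal sketch (torch.from_numpy shares memory with the source array on CPU):

import numpy as np

# NumPy array -> tensor (zero-copy on CPU; they share memory)
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
t = torch.from_numpy(arr)

# Tensor -> NumPy array
back = t.numpy()
print(t.shape, t.dtype)  # torch.Size([2, 3]) torch.float64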
           

2. Dynamic Computation Graph

PyTorch uses a dynamic computation graph: the graph is built on the fly as operations are executed. This makes it possible to change the graph's structure at runtime with ordinary Python control flow.

# Define two tensors
a = torch.tensor([2.], requires_grad=True)
b = torch.tensor([3.], requires_grad=True)

# Compute the result and backpropagate
c = a * b
c.backward()

# Gradients: dc/da = b, dc/db = a
print(a.grad)  # tensor([3.])
print(b.grad)  # tensor([2.])
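
Because the graph is rebuilt on every forward pass, plain Python control flow can change its shape from run to run; a minimal sketch:

x = torch.tensor([1.5], requires_grad=True)

# An ordinary Python `if` decides which ops get recorded
if x.item() > 1.0:
    y = x * 2
else:
    y = x ** 3

y.backward()
print(x.grad)  # tensor([2.]) on this run, since the first branch ran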
           

3. GPU acceleration

PyTorch allows for easy switching between CPU and GPU. Use .to(device) to move tensors (and models) onto the appropriate device.

device = "cuda" if torch.cuda.is_available() else "cpu"
tensor = tensor.to(device)
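
Models move the same way, and inputs must live on the same device as the model's parameters; a short sketch using a throwaway linear layer:

import torch.nn as nn

model = nn.Linear(1, 1).to(device)    # move the model's parameters
batch = torch.randn(8, 1).to(device)  # inputs must be on the same device
out = model(batch)                    # runs on the GPU if one is available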
           

4. Autograd: Automatic differentiation

PyTorch's autograd provides automatic differentiation for all operations on tensors. Set requires_grad=True to track computations on a tensor.

x = torch.tensor([2.], requires_grad=True)
y = x**2
y.backward()
print(x.grad)  # dy/dx = 2x = tensor([4.])
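
When tracking isn't wanted (for example during inference), it can be suspended with torch.no_grad(); a brief sketch:

x = torch.tensor([2.], requires_grad=True)

# Operations inside the block are not recorded in the graph
with torch.no_grad():
    y = x * 3
print(y.requires_grad)  # False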
           

5. Modular Neural Networks with nn.Module

PyTorch provides the nn.Module class for defining neural network architectures. Create custom layers and models by subclassing it.

import torch.nn as nn

class SimpleNN(nn.Module):

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 1)
        
    def forward(self, x):
        return self.fc(x)
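
A quick usage sketch for the class above:

model = SimpleNN()
x = torch.randn(4, 1)  # batch of 4 samples, 1 feature each
out = model(x)         # nn.Module.__call__ invokes forward()
print(out.shape)       # torch.Size([4, 1])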
           

6. Predefined layers and loss functions

PyTorch provides a variety of predefined layers and loss functions in the nn module, and optimization algorithms in torch.optim.

# `model` here is any nn.Module, e.g. the SimpleNN defined above
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
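
A minimal sketch of the loss function in action (the shapes are illustrative):

logits = torch.randn(4, 3)            # 4 samples, 3 classes (raw scores)
targets = torch.tensor([0, 2, 1, 0])  # ground-truth class indices
loss = loss_fn(logits, targets)       # CrossEntropyLoss applies softmax internally
print(loss.item())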
           

7. Datasets and DataLoader

For efficient data loading and batching, PyTorch provides the Dataset and DataLoader classes.

from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, data, targets):
        self.data, self.targets = data, targets
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]

dataset = CustomDataset(torch.randn(100, 1), torch.randn(100, 1))
data_loader = DataLoader(dataset, batch_size=32, shuffle=True)
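
Iterating the loader then yields shuffled batches; a short sketch assuming the dataset above:

for batch_data, batch_targets in data_loader:
    print(batch_data.shape)  # torch.Size([32, 1]) for full batches
    break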
           

8. Model training loop

In general, training in PyTorch follows this pattern: forward pass, loss computation, backward pass, and parameter update.

for epoch in range(epochs):
    for data, target in data_loader:
        optimizer.zero_grad()           # clear gradients from the last step
        output = model(data)            # forward pass
        loss = loss_fn(output, target)  # compute the loss
        loss.backward()                 # backward pass
        optimizer.step()                # update parameters
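
After (or between) training epochs, evaluation typically switches the model to eval mode and disables gradient tracking; a hedged sketch, assuming a hypothetical val_loader with held-out data:

model.eval()                          # disable dropout, use running batch-norm stats
with torch.no_grad():                 # no graph needed for evaluation
    for data, target in val_loader:   # val_loader: hypothetical validation DataLoader
        output = model(data)
        # ... accumulate metrics here
model.train()                         # switch back before the next epoch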
           

9. Model Serialization

Use torch.save() and torch.load() to save and load models. Saving the state_dict (the model's parameters) rather than the whole model object is the recommended approach.

# Save
torch.save(model.state_dict(), 'model_weights.pth')

# Load
model.load_state_dict(torch.load('model_weights.pth'))
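
A common extension (not shown in the original article) bundles optimizer state and metadata into one checkpoint, assuming the model, optimizer, and epoch from the training loop above:

checkpoint = {
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),  # needed to resume training exactly
    'epoch': epoch,
}
torch.save(checkpoint, 'checkpoint.pth')  # 'checkpoint.pth' is an arbitrary name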
           

10. Eager Execution and JIT

While PyTorch runs eagerly by default, it provides just-in-time (JIT) compilation via TorchScript for production-ready models.

scripted_model = torch.jit.script(model)
scripted_model.save("model_jit.pt")
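
The scripted model can later be reloaded and run without the original Python class definition; a sketch:

loaded = torch.jit.load("model_jit.pt")
out = loaded(torch.randn(1, 1))  # input shape assumes the SimpleNN above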
           

Reference

[1] Source: "10 Principles of PyTorch", Medium: https://medium.com/@kasperjuunge/10-principles-of-pytorch-bbe4bf0c42cd