Module 04: PyTorch Practice
Outcomes
- Build and train baseline models in PyTorch with clean training loops.
- Implement CNN/LSTM/Transformer projects and evaluate results.
- Diagnose training issues (overfitting, unstable loss, data leakage).
Primary Resource:
- Daniel Bourke's PyTorch Bootcamp: https://www.youtube.com/watch?v=Z_ikDlimN6A
Practice Projects (in order):
- MNIST with CNN (Week 5)
```python
import torch
import torch.nn as nn
import torch.optim as optim                       # used later in the training loop
from torchvision import datasets, transforms     # used later for MNIST loading

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))  # 28x28 -> 14x14
        x = self.pool(self.relu(self.conv2(x)))  # 14x14 -> 7x7
        x = x.view(x.size(0), -1)                # flatten per sample (safer than hard-coding -1 first)
        x = self.dropout(self.relu(self.fc1(x)))
        x = self.fc2(x)                          # raw logits, for use with nn.CrossEntropyLoss
        return x
```
- CIFAR-10 with ResNet (Week 5)
- Sentiment Analysis with LSTM (Week 6)
- Text Classification with Transformer (Week 6)
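For the sentiment-analysis project, a minimal sketch of an LSTM classifier: the vocabulary size, embedding/hidden dimensions, and the use of the final hidden state are placeholder choices for illustration, not prescribed by the course.

```python
import torch
import torch.nn as nn

class LSTMSentiment(nn.Module):
    """Binary sentiment classifier: embed token ids, run an LSTM,
    classify from the final hidden state."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)       # two classes: negative / positive

    def forward(self, x):                         # x: (batch, seq_len) token ids
        embedded = self.embedding(x)              # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])                # logits: (batch, 2)

model = LSTMSentiment()
tokens = torch.randint(1, 10000, (4, 20))         # fake batch: 4 sequences of 20 token ids
logits = model(tokens)
print(logits.shape)                               # torch.Size([4, 2])
```

Swap in real tokenized text (e.g. IMDB) and train with `nn.CrossEntropyLoss`, exactly as with the CNN above.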
Interview checkpoints
- Write a minimal train/eval loop with DataLoader, device placement, and metrics.
- Explain why learning rate schedules and weight decay help.
- Debug common issues: shape mismatches, exploding gradients, data leakage.
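The first checkpoint can be sketched as follows. The linear model and synthetic tensors are placeholders standing in for a real model and dataset; gradient clipping is included as one common guard against exploding gradients.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and synthetic data; swap in a real model/dataset.
model = nn.Linear(10, 2).to(device)
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
train_loader = DataLoader(data, batch_size=16, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

def train_one_epoch():
    model.train()
    total_loss = 0.0
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)  # device placement
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        # Guard against exploding gradients before the optimizer step.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        total_loss += loss.item() * inputs.size(0)
    return total_loss / len(train_loader.dataset)

@torch.no_grad()
def evaluate(loader):
    model.eval()
    correct = 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        preds = model(inputs).argmax(dim=1)
        correct += (preds == targets).sum().item()
    return correct / len(loader.dataset)

loss = train_one_epoch()
acc = evaluate(train_loader)
print(f"loss={loss:.3f} acc={acc:.3f}")
```

Note the asymmetry that interviewers look for: `model.train()` vs `model.eval()` (dropout/batch-norm behavior) and `torch.no_grad()` during evaluation (no graph, less memory).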