RNN and CNN Implementations on the MNIST Handwritten Digit Dataset with PyTorch

2020-10-27 12:01:16


LSTM on the handwritten digit dataset (a classification problem)
LSTM predicting cos from sin (a regression problem)

Constructor parameters of PyTorch's nn.LSTM

input_size – the feature dimension of the input at each time step. For word embeddings this is the length of a word vector (e.g., 300 for 300-dimensional embeddings); when an image is fed in row by row, it is the image width, e.g., 28 for MNIST.

hidden_size – the feature dimension of the hidden state; this can be chosen freely.

num_layers – the number of stacked recurrent layers (not to be confused with the unrolling over time steps), usually 1 or 2.

bias – if False, the LSTM does not use the bias weights b_ih and b_hh; defaults to True.

batch_first – if True, the input and output tensors have shape (batch, time_step, input_size); otherwise (time_step, batch, input_size).

dropout – if non-zero, applies dropout to the outputs of each LSTM layer except the last.

bidirectional – if True, the LSTM becomes bidirectional; defaults to False.


time_step: the sequence length, i.e. the number of words in a sentence (or rows in an image)
batch_size: the number of samples fed to the RNN in each batch (a minimal constructor call with these arguments is sketched below)
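As a quick reference, here is a minimal sketch of an nn.LSTM constructed with the arguments listed above; the concrete values (28 input features, 64 hidden units, one layer) are placeholders, not part of any particular model.

from torch import nn

rnn = nn.LSTM(
    input_size=28,        # feature dimension per time step
    hidden_size=64,       # hidden state dimension, chosen freely
    num_layers=1,         # number of stacked layers
    bias=True,            # use the bias weights b_ih and b_hh
    batch_first=True,     # tensors are (batch, time_step, input_size)
    dropout=0.0,          # dropout between layers (all but the last)
    bidirectional=False,  # unidirectional
)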
The following input/output specification is taken from the LSTM documentation:

The LSTM inputs are input, (h_0, c_0):
input shape: (time_step, batch, input_size); if batch_first=True, batch comes first.

(h_0, c_0) are the initial hidden and cell states:
h_0 shape: (num_layers * num_directions, batch, hidden_size)
c_0 shape: (num_layers * num_directions, batch, hidden_size)

The LSTM outputs are output, (h_n, c_n):
output shape: (time_step, batch, hidden_size * num_directions), containing the output features at every time step; if batch_first=True, batch comes first.
(h_n, c_n) are the final hidden and cell states:
h_n shape: (num_layers * num_directions, batch, hidden_size)
c_n shape: (num_layers * num_directions, batch, hidden_size)

In some documents, time_step and seq_len both denote the number of time steps.
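These shape conventions can be verified with a small throw-away check; the sizes below (batch of 32, 28 time steps, input_size of 28) are arbitrary placeholders.

import torch
from torch import nn

lstm = nn.LSTM(input_size=28, hidden_size=64, num_layers=1, batch_first=True)

x = torch.randn(32, 28, 28)        # (batch, time_step, input_size)
output, (h_n, c_n) = lstm(x)       # omitting (h_0, c_0) defaults them to zeros

print(output.shape)  # torch.Size([32, 28, 64]) -> (batch, time_step, hidden_size * num_directions)
print(h_n.shape)     # torch.Size([1, 32, 64])  -> (num_layers * num_directions, batch, hidden_size)
print(c_n.shape)     # torch.Size([1, 32, 64])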

import torch
from torch import nn

INPUT_SIZE = 28   # width of an MNIST image; one row of pixels is fed per time step

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.LSTM(
            input_size=INPUT_SIZE,
            hidden_size=64,
            num_layers=1,
            batch_first=True,     # (batch, time_step, input_size)
        )

        self.out = nn.Linear(64, 10)  # Linear(hidden_size, number of classes)

    def forward(self, x):
        # x shape     (batch, time_step, input_size)
        # r_out shape (batch, time_step, hidden_size)
        # h_n shape   (num_layers, batch, hidden_size)
        # h_c shape   (num_layers, batch, hidden_size)
        r_out, (h_n, h_c) = self.rnn(x, None)   # None: the initial state defaults to zeros
        out = self.out(r_out[:, -1, :])         # classify on the last time step only
        return out

Here the line out = self.out(r_out[:, -1, :]) takes the output at the last time step as the input to the linear layer. At each step the LSTM consumes x_t and h_(t-1) and produces an output slice of r_out together with (h_n, h_c); h_n is fed back into the LSTM to produce the next step's output, so only the final time step's output is used for classification.
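For a single-layer, unidirectional LSTM the output at the last time step is exactly the final hidden state, which a small check confirms (a throw-away sketch, not part of the model above):

import torch
from torch import nn

rnn = nn.LSTM(input_size=28, hidden_size=64, num_layers=1, batch_first=True)
x = torch.randn(4, 28, 28)                 # (batch, time_step, input_size)
r_out, (h_n, h_c) = rnn(x, None)

print(torch.allclose(r_out[:, -1, :], h_n[-1]))  # True: last-step output == final hidden state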

The complete code will be published once it has been cleaned up.
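Until then, a minimal training step for this RNN could look like the sketch below. It assumes an MNIST DataLoader like the train_loader built in the CNN section further down (batches of shape (batch, 1, 28, 28)); the learning rate is a placeholder, not the author's setting.

rnn = RNN()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)   # assumed learning rate
criterion = nn.CrossEntropyLoss()

for img, label in train_loader:        # img: (batch, 1, 28, 28)
    x = img.view(-1, 28, 28)           # drop the channel dim: one image row per time step
    output = rnn(x)                    # (batch, 10)
    loss = criterion(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()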
The CNN implementation for the handwritten digit dataset is as follows:

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import torch.optim as optim



BATCH_SIZE=512
EPOCHS=20
DEVICE=torch.device("cuda" if torch.cuda.is_available() else "cpu")


transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))   # global mean and std of the MNIST training set
])
train_data = datasets.MNIST(root='./mnist', download=True, train=True, transform=transform)
test_data = datasets.MNIST(root='./mnist', download=True, train=False, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=10, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=20, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.out = nn.Linear(20 * 5 * 5, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        output = self.out(x)
        return output


model = Net().to(DEVICE)  # instantiate the network and move it to the GPU if available
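# Optional shape check (not in the original code): the in_features of the final
# linear layer follow from the spatial sizes after each stage:
# 28x28 -> Conv2d(k=5) -> 24x24 -> MaxPool(2) -> 12x12
#       -> Conv2d(k=3) -> 10x10 -> MaxPool(2) -> 5x5, with 20 output channels,
# hence 20 * 5 * 5 = 500 flattened features. Uncomment to verify:
# print(model.conv2(model.conv1(torch.zeros(1, 1, 28, 28, device=DEVICE))).shape)
# # torch.Size([1, 20, 5, 5])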
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()


def train(model, device, train_loader, optimizer, epoch, criterion):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)  # move the batch to the same device as the model

        output = model(data)

        optimizer.zero_grad()
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

        if (batch_idx + 1) % 30 == 0:  # len(train_loader) is len(train_loader.dataset) / batch_size, rounded up
            print('Train Epoch:{} [{}/{} ({:.0f}%)]\tLoss:{:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader),
                loss.item()
            ))


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    test_corr = 0
    with torch.no_grad():
        for img, label in test_loader:
            img, label = img.to(device), label.to(device)
            output = model(img)
            test_loss += criterion(output, label).item()  # sum of per-batch mean losses (not divided by the number of batches)
            pred = output.max(1, keepdim=True)[1]         # index of the max logit
            test_corr += pred.eq(label.view_as(pred)).sum().item()

        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, test_corr, len(test_loader.dataset), 100. * (test_corr / len(test_loader.dataset))
        ))


for epoch in range(1, EPOCHS + 1):
    train(model, DEVICE, train_loader, optimizer, epoch, criterion)
    test(model, DEVICE, test_loader)

The training results are as follows:
Train Epoch:1 [14848/60000 (25%)] Loss:0.809119
Train Epoch:1 [30208/60000 (50%)] Loss:0.332066
Train Epoch:1 [45568/60000 (75%)] Loss:0.248601

Test set: Average loss: 3.3879, Accuracy: 9515/10000 (95%)

Train Epoch:2 [14848/60000 (25%)] Loss:0.200926
Train Epoch:2 [30208/60000 (50%)] Loss:0.167642
Train Epoch:2 [45568/60000 (75%)] Loss:0.129635

Test set: Average loss: 1.9960, Accuracy: 9700/10000 (97%)

Train Epoch:3 [14848/60000 (25%)] Loss:0.097073
Train Epoch:3 [30208/60000 (50%)] Loss:0.078300
Train Epoch:3 [45568/60000 (75%)] Loss:0.095262

Test set: Average loss: 1.5412, Accuracy: 9764/10000 (98%)

Train Epoch:4 [14848/60000 (25%)] Loss:0.067570
Train Epoch:4 [30208/60000 (50%)] Loss:0.091387
Train Epoch:4 [45568/60000 (75%)] Loss:0.058170

Test set: Average loss: 1.3722, Accuracy: 9795/10000 (98%)

Train Epoch:5 [14848/60000 (25%)] Loss:0.081385
Train Epoch:5 [30208/60000 (50%)] Loss:0.069488
Train Epoch:5 [45568/60000 (75%)] Loss:0.108909

Test set: Average loss: 1.1676, Accuracy: 9818/10000 (98%)

Train Epoch:6 [14848/60000 (25%)] Loss:0.060494
Train Epoch:6 [30208/60000 (50%)] Loss:0.070833
Train Epoch:6 [45568/60000 (75%)] Loss:0.085588

Test set: Average loss: 1.0887, Accuracy: 9833/10000 (98%)

Train Epoch:7 [14848/60000 (25%)] Loss:0.067081
Train Epoch:7 [30208/60000 (50%)] Loss:0.082414
Train Epoch:7 [45568/60000 (75%)] Loss:0.045014

Test set: Average loss: 1.0601, Accuracy: 9837/10000 (98%)

Train Epoch:8 [14848/60000 (25%)] Loss:0.062390
Train Epoch:8 [30208/60000 (50%)] Loss:0.048241
Train Epoch:8 [45568/60000 (75%)] Loss:0.042879

Test set: Average loss: 0.9528, Accuracy: 9836/10000 (98%)

Train Epoch:9 [14848/60000 (25%)] Loss:0.048539
Train Epoch:9 [30208/60000 (50%)] Loss:0.055073
Train Epoch:9 [45568/60000 (75%)] Loss:0.055796

Test set: Average loss: 0.8623, Accuracy: 9866/10000 (99%)

Train Epoch:10 [14848/60000 (25%)] Loss:0.051431
Train Epoch:10 [30208/60000 (50%)] Loss:0.045435
Train Epoch:10 [45568/60000 (75%)] Loss:0.075674

Test set: Average loss: 0.7783, Accuracy: 9874/10000 (99%)

Train Epoch:11 [14848/60000 (25%)] Loss:0.028392
Train Epoch:11 [30208/60000 (50%)] Loss:0.049267
Train Epoch:11 [45568/60000 (75%)] Loss:0.042472

Test set: Average loss: 0.8189, Accuracy: 9875/10000 (99%)

Train Epoch:12 [14848/60000 (25%)] Loss:0.058731
Train Epoch:12 [30208/60000 (50%)] Loss:0.025470
Train Epoch:12 [45568/60000 (75%)] Loss:0.029647

Test set: Average loss: 0.7829, Accuracy: 9871/10000 (99%)

Train Epoch:13 [14848/60000 (25%)] Loss:0.052567
Train Epoch:13 [30208/60000 (50%)] Loss:0.028609
Train Epoch:13 [45568/60000 (75%)] Loss:0.020649

Test set: Average loss: 0.7527, Accuracy: 9872/10000 (99%)

Train Epoch:14 [14848/60000 (25%)] Loss:0.039200
Train Epoch:14 [30208/60000 (50%)] Loss:0.019106
Train Epoch:14 [45568/60000 (75%)] Loss:0.067107

Test set: Average loss: 0.7386, Accuracy: 9886/10000 (99%)

Train Epoch:15 [14848/60000 (25%)] Loss:0.038181
Train Epoch:15 [30208/60000 (50%)] Loss:0.022419
Train Epoch:15 [45568/60000 (75%)] Loss:0.016036

Test set: Average loss: 0.7954, Accuracy: 9862/10000 (99%)

Train Epoch:16 [14848/60000 (25%)] Loss:0.018675
Train Epoch:16 [30208/60000 (50%)] Loss:0.039494
Train Epoch:16 [45568/60000 (75%)] Loss:0.017992

Test set: Average loss: 0.8029, Accuracy: 9859/10000 (99%)

Train Epoch:17 [14848/60000 (25%)] Loss:0.019442
Train Epoch:17 [30208/60000 (50%)] Loss:0.014947
Train Epoch:17 [45568/60000 (75%)] Loss:0.024432

Test set: Average loss: 0.6863, Accuracy: 9874/10000 (99%)

Train Epoch:18 [14848/60000 (25%)] Loss:0.013267
Train Epoch:18 [30208/60000 (50%)] Loss:0.022075
Train Epoch:18 [45568/60000 (75%)] Loss:0.024906

Test set: Average loss: 0.6707, Accuracy: 9887/10000 (99%)

Train Epoch:19 [14848/60000 (25%)] Loss:0.031900
Train Epoch:19 [30208/60000 (50%)] Loss:0.014791
Train Epoch:19 [45568/60000 (75%)] Loss:0.037303

Test set: Average loss: 0.7329, Accuracy: 9878/10000 (99%)

Train Epoch:20 [14848/60000 (25%)] Loss:0.030795
Train Epoch:20 [30208/60000 (50%)] Loss:0.016112
Train Epoch:20 [45568/60000 (75%)] Loss:0.020148

Test set: Average loss: 0.6894, Accuracy: 9884/10000 (99%)