In this second notebook on sequence-to-sequence models using PyTorch and TorchText, we'll be implementing the model from Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. This model will achieve improved test perplexity whilst only using a single layer RNN in both the encoder and the decoder.
Let's remind ourselves of the general encoder-decoder model.
We use our encoder (green) over the source sequence to create a context vector (red). We then use that context vector with the decoder (blue) and a linear layer (purple) to generate the target sentence.
In the previous model, we used a multi-layer LSTM as the encoder and decoder.
One downside of the previous model is that the decoder is trying to cram lots of information into the hidden states. Whilst decoding, the hidden state will need to contain information about the whole of the source sequence, as well as all of the tokens that have been decoded so far. By alleviating some of this information compression, we can create a better model!
We'll also be using a GRU (Gated Recurrent Unit) instead of an LSTM (Long Short-Term Memory). Why? Mainly because that's what they did in the paper (which also introduced GRUs) and also because we used LSTMs last time. If you want to understand how GRUs (and LSTMs) differ from standard RNNs, check out this link. Is a GRU better than an LSTM? Research has shown they're pretty much the same, and both are better than standard RNNs.
All of the data preparation will be (almost) the same as last time, so I'll very briefly detail what each code block does. See the previous notebook if you've forgotten.
We'll import PyTorch, TorchText, spaCy and a few standard modules.
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import TranslationDataset, Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import random
import math
import time
Then set a random seed for deterministic results/reproducibility.
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
Instantiate our German and English spaCy models.
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
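Note: on newer spaCy releases the short names above may not be available. In that case (an assumption about your spaCy install, not something the original notebook needs), you can download and load the full model names instead:

# Assumed alternative for newer spaCy releases where the 'de'/'en' shortcuts
# are unavailable; download with:
#   python -m spacy download de_core_news_sm
#   python -m spacy download en_core_web_sm
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')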
Previously we reversed the source (German) sentence, however in the paper we are implementing they don't do this, so neither will we.
def tokenize_de(text):
    """
    Tokenizes German text from a string into a list of strings
    """
    return [tok.text for tok in spacy_de.tokenizer(text)]

def tokenize_en(text):
    """
    Tokenizes English text from a string into a list of strings
    """
    return [tok.text for tok in spacy_en.tokenizer(text)]
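As a quick sanity check (not in the original notebook), we can run one of the tokenizers on a sample sentence; the exact tokens depend on your spaCy model, but the output should look roughly like this:

print(tokenize_en("Two young men are outside near many bushes."))
#['Two', 'young', 'men', 'are', 'outside', 'near', 'many', 'bushes', '.']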
Create our fields to process our data. These will add the "start of sentence" and "end of sentence" tokens as well as converting all words to lowercase.
SRC = Field(tokenize = tokenize_de,
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)

TRG = Field(tokenize = tokenize_en,
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)
Load our data.
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
                                                    fields = (SRC, TRG))
We'll also print out an example just to double check they're not reversed.
print(vars(train_data.examples[0]))
{'src': ['zwei', 'junge', 'weiße', 'männer', 'sind', 'im', 'freien', 'in', 'der', 'nähe', 'vieler', 'büsche', '.'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
Then create our vocabulary, converting all tokens appearing fewer than two times into <unk> tokens.
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
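We can also check how large the resulting vocabularies are and peek at the special tokens. With min_freq = 2 the sizes should match the embedding sizes shown in the model printout later (7855 for German, 5893 for English), although treat the exact order of the special tokens as an assumption:

print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
print(TRG.vocab.itos[:4]) #usually ['<unk>', '<pad>', '<sos>', '<eos>']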
Finally, define the device and create our iterators.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)
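If you want to confirm the tensor layout the models below expect, you can pull one batch out of the iterator; the sentence length varies per batch, but the second dimension should be the batch size:

batch = next(iter(train_iterator))
print(batch.src.shape) #[src sent len, 128]
print(batch.trg.shape) #[trg sent len, 128]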
The encoder is similar to the previous one, with the multi-layer LSTM swapped for a single-layer GRU. We also don't pass the dropout as an argument to the GRU, as that dropout is used between the layers of a multi-layered RNN. As we only have a single layer, PyTorch will display a warning if we try to pass a dropout value to it.
Another thing to note about the GRU is that it only requires and returns a hidden state; there is no cell state like in the LSTM.
$$h_t = \text{GRU}(x_t, h_{t-1})$$
$$(h_t, c_t) = \text{LSTM}(x_t, (h_{t-1}, c_{t-1}))$$
$$h_t = \text{RNN}(x_t, h_{t-1})$$
From the equations above, it looks like the RNN and the GRU are identical. Inside the GRU, however, are a number of gating mechanisms that control the information flow into and out of the hidden state (similar to an LSTM). Again, for more info, check out this excellent post.
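To make the interface difference concrete, here is a small standalone comparison (arbitrary dimensions) showing that nn.LSTM returns a (hidden, cell) tuple whereas nn.GRU returns only a hidden state:

x = torch.randn(5, 2, 8) #[sent len, batch size, input dim]

lstm = nn.LSTM(8, 16)
gru = nn.GRU(8, 16)

lstm_outputs, (lstm_hidden, lstm_cell) = lstm(x) #LSTM returns a hidden state AND a cell state
gru_outputs, gru_hidden = gru(x)                 #GRU returns only a hidden state

print(lstm_hidden.shape, lstm_cell.shape) #both [1, 2, 16]
print(gru_hidden.shape)                   #[1, 2, 16]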
The rest of the encoder should be very familiar from the last tutorial: it takes in a sequence, $X = \{x_1, x_2, ..., x_T\}$, recurrently calculates hidden states, $H = \{h_1, h_2, ..., h_T\}$, and returns a context vector (the final hidden state), $z = h_T$.
$$h_t = \text{EncoderGRU}(x_t, h_{t-1})$$
This is identical to the encoder of the general seq2seq model, with all the "magic" happening inside the GRU (green squares).
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, dropout):
        super().__init__()

        self.input_dim = input_dim
        self.emb_dim = emb_dim
        self.hid_dim = hid_dim
        self.dropout = dropout

        self.embedding = nn.Embedding(input_dim, emb_dim) #no dropout as only one layer!

        self.rnn = nn.GRU(emb_dim, hid_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, src):

        #src = [src sent len, batch size]

        embedded = self.dropout(self.embedding(src))

        #embedded = [src sent len, batch size, emb dim]

        outputs, hidden = self.rnn(embedded) #no cell state!

        #outputs = [src sent len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]

        #outputs are always from the top hidden layer

        return hidden
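A quick shape check with made-up dimensions (these are not the values we'll train with) confirms the encoder returns a hidden state of size [1, batch size, hid dim]:

toy_enc = Encoder(input_dim = 100, emb_dim = 32, hid_dim = 64, dropout = 0.5)
toy_src = torch.randint(0, 100, (7, 4)) #[src sent len = 7, batch size = 4]
print(toy_enc(toy_src).shape) #[1, 4, 64]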
The decoder is where the implementation differs significantly from the previous model and we alleviate some of the information compression.
Instead of the GRU in the decoder taking just the target token, $y_t$, and the previous hidden state, $s_{t-1}$, as inputs, it also takes the context vector, $z$.
$$s_t = \text{DecoderGRU}(y_t, s_{t-1}, z)$$
Note how this context vector, $z$, does not have a $t$ subscript, meaning we re-use the same context vector returned by the encoder for every time-step in the decoder.
Before, we predicted the next token, $\hat{y}_{t+1}$, with the linear layer, $f$, using only the top-layer decoder hidden state at that time-step, $s_t$, as $\hat{y}_{t+1} = f(s_t^L)$. Now, we also pass the current token, $\hat{y}_t$, and the context vector, $z$, to the linear layer.
$$\hat{y}_{t+1} = f(y_t, s_t, z)$$
Thus, our decoder now looks something like this:
Note, the initial hidden state, $s_0$, is still the context vector, $z$, so when generating the first token we are actually inputting two identical context vectors into the GRU.
How do these two changes reduce the information compression? Well, hypothetically, the decoder hidden states, $s_t$, no longer need to contain information about the source sequence, as it is always available as an input. Thus, they only need to contain information about which tokens have been generated so far. The addition of $y_t$ to the linear layer also means this layer can directly see what the token is, without having to extract this information from the hidden state.
However, this hypothesis is just that, a hypothesis; it is impossible to determine how the model actually uses the information provided to it (don't listen to anyone who tells you differently). Nevertheless, it is a solid intuition and the results seem to indicate that these modifications are a good idea!
Within the implementation, we will pass $y_t$ and $z$ to the GRU by concatenating them together, so the input dimensions to the GRU are now emb_dim + hid_dim (as the context vector will be of size hid_dim). The linear layer will take $y_t$, $s_t$ and $z$, also concatenated together, hence its input dimensions are now emb_dim + hid_dim * 2; the short sketch below verifies these sizes. We also don't pass a value of dropout to the GRU as it only uses a single layer.
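These concatenations are easy to verify with dummy tensors (arbitrary sizes, purely to show the resulting dimensions; the real decoder below does exactly this):

emb_dim, hid_dim, batch_size = 32, 64, 4

y_t = torch.randn(1, batch_size, emb_dim) #embedded target token
z   = torch.randn(1, batch_size, hid_dim) #context vector
s_t = torch.randn(1, batch_size, hid_dim) #decoder hidden state

emb_con = torch.cat((y_t, z), dim = 2)
print(emb_con.shape) #[1, 4, 96] = emb_dim + hid_dim

lin_in = torch.cat((y_t.squeeze(0), s_t.squeeze(0), z.squeeze(0)), dim = 1)
print(lin_in.shape) #[4, 160] = emb_dim + hid_dim * 2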
forward now takes a context argument. Inside of forward, we concatenate $y_t$ and $z$ as emb_con before feeding it to the GRU, and we concatenate $y_t$, $s_t$ and $z$ together as output before feeding it through the linear layer to receive our predictions, $\hat{y}_{t+1}$.
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, dropout):
        super().__init__()

        self.emb_dim = emb_dim
        self.hid_dim = hid_dim
        self.output_dim = output_dim
        self.dropout = dropout

        self.embedding = nn.Embedding(output_dim, emb_dim)

        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)

        self.out = nn.Linear(emb_dim + hid_dim * 2, output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, context):

        #input = [batch size]
        #hidden = [n layers * n directions, batch size, hid dim]
        #context = [n layers * n directions, batch size, hid dim]

        #n layers and n directions in the decoder will both always be 1, therefore:
        #hidden = [1, batch size, hid dim]
        #context = [1, batch size, hid dim]

        input = input.unsqueeze(0)

        #input = [1, batch size]

        embedded = self.dropout(self.embedding(input))

        #embedded = [1, batch size, emb dim]

        emb_con = torch.cat((embedded, context), dim = 2)

        #emb_con = [1, batch size, emb dim + hid dim]

        output, hidden = self.rnn(emb_con, hidden)

        #output = [sent len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]

        #sent len, n layers and n directions will always be 1 in the decoder, therefore:
        #output = [1, batch size, hid dim]
        #hidden = [1, batch size, hid dim]

        output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)),
                           dim = 1)

        #output = [batch size, emb dim + hid dim * 2]

        prediction = self.out(output)

        #prediction = [batch size, output dim]

        return prediction, hidden
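A matching shape check for the decoder (again with made-up dimensions) shows it produces one prediction per batch element plus a new hidden state:

toy_dec = Decoder(output_dim = 200, emb_dim = 32, hid_dim = 64, dropout = 0.5)

toy_input   = torch.randint(0, 200, (4,)) #[batch size = 4]
toy_hidden  = torch.randn(1, 4, 64)       #[1, batch size, hid dim]
toy_context = torch.randn(1, 4, 64)       #[1, batch size, hid dim]

toy_prediction, toy_hidden = toy_dec(toy_input, toy_hidden, toy_context)
print(toy_prediction.shape) #[4, 200]
print(toy_hidden.shape)     #[1, 4, 64]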
Putting the encoder and decoder together, we get:
Again, in this implementation we need to ensure the hidden dimensions in both the encoder and the decoder are the same.
Briefly going over all of the steps:

- the outputs tensor is created to hold all predictions, $\hat{Y}$
- the source sequence, $X$, is fed into the encoder to receive a context vector
- the initial decoder hidden state is set to be the context vector, $s_0 = z = h_T$
- we use a batch of <sos> tokens as the first input, $y_1$
- we then decode within a loop, inserting the input token, $y_t$, the previous hidden state, $s_{t-1}$, and the context vector, $z$, into the decoder, receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$, and then deciding whether or not to teacher force the next input

class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()

        self.encoder = encoder
        self.decoder = decoder
        self.device = device

        assert encoder.hid_dim == decoder.hid_dim, \
            "Hidden dimensions of encoder and decoder must be equal!"

    def forward(self, src, trg, teacher_forcing_ratio = 0.5):

        #src = [src sent len, batch size]
        #trg = [trg sent len, batch size]
        #teacher_forcing_ratio is probability to use teacher forcing
        #e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time

        batch_size = trg.shape[1]
        max_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim

        #tensor to store decoder outputs
        outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)

        #last hidden state of the encoder is the context
        context = self.encoder(src)

        #context also used as the initial hidden state of the decoder
        hidden = context

        #first input to the decoder is the <sos> tokens
        input = trg[0,:]

        for t in range(1, max_len):
            #insert input token embedding, previous hidden state and the context state
            #receive output tensor (predictions) and new hidden state
            output, hidden = self.decoder(input, hidden, context)

            #place predictions in a tensor holding predictions for each token
            outputs[t] = output

            #decide if we are going to use teacher forcing or not
            teacher_force = random.random() < teacher_forcing_ratio

            #get the highest predicted token from our predictions
            top1 = output.argmax(1)

            #if teacher forcing, use actual next token as next input
            #if not, use predicted token
            input = trg[t] if teacher_force else top1

        return outputs
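Before moving on to the real data, a toy forward pass (made-up vocabulary sizes and sequence lengths) is a cheap way to check the pieces fit together; the output should have shape [trg sent len, batch size, output dim]:

toy_device = torch.device('cpu')
toy_model = Seq2Seq(Encoder(100, 32, 64, 0.5),
                    Decoder(200, 32, 64, 0.5),
                    toy_device).to(toy_device)

toy_src = torch.randint(0, 100, (7, 4)) #[src sent len, batch size]
toy_trg = torch.randint(0, 200, (9, 4)) #[trg sent len, batch size]

print(toy_model(toy_src, toy_trg).shape) #[9, 4, 200]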
The rest of this tutorial is very similar to the previous one.
We initialise our encoder, decoder and seq2seq model (placing it on the GPU if we have one). As before, the embedding dimensions and the amount of dropout used can be different between the encoder and the decoder, but the hidden dimensions must remain the same.
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Seq2Seq(enc, dec, device).to(device)
Next, we initialize our parameters. The paper states the parameters are initialized from a normal distribution with a mean of 0 and a standard deviation of 0.01, i.e. $\mathcal{N}(0, 0.01)$.
It also states we should initialize the recurrent parameters with a special initialization, however to keep things simple we'll also initialize them to $\mathcal{N}(0, 0.01)$.
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.normal_(param.data, mean=0, std=0.01)
model.apply(init_weights)
Seq2Seq(
  (encoder): Encoder(
    (embedding): Embedding(7855, 256)
    (rnn): GRU(256, 512)
    (dropout): Dropout(p=0.5, inplace=False)
  )
  (decoder): Decoder(
    (embedding): Embedding(5893, 256)
    (rnn): GRU(768, 512)
    (out): Linear(in_features=1280, out_features=5893, bias=True)
    (dropout): Dropout(p=0.5, inplace=False)
  )
)
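As an aside, if you wanted to follow the paper's special recurrent initialization more closely, one common reading is to give the hidden-to-hidden weight matrices an orthogonal initialization. This is our interpretation rather than what the notebook above does, so treat it as an optional sketch:

def init_weights_paper(m):
    for name, param in m.named_parameters():
        if 'weight_hh' in name:
            #recurrent (hidden-to-hidden) weights: orthogonal init (our assumption)
            nn.init.orthogonal_(param.data)
        else:
            nn.init.normal_(param.data, mean=0, std=0.01)

#model.apply(init_weights_paper) #optional alternative to init_weights above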
We print out the number of parameters.
Even though we only have a single-layer RNN for both our encoder and our decoder, we actually have more parameters than the last model. This is due to the increased size of the inputs to the GRU and the linear layer. However, the increase is not significant and it only adds a minimal amount of training time (~3 seconds extra per epoch).
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 14,220,293 trainable parameters
We initialize our optimizer.
optimizer = optim.Adam(model.parameters())
We also initialize the loss function, making sure to ignore the loss on <pad> tokens.
PAD_IDX = TRG.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX)
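To see what ignore_index does, here is a tiny made-up example: positions whose target equals the pad index contribute nothing to the loss, so the average is taken over the remaining positions only:

toy_logits = torch.randn(4, 10)          #4 positions, vocabulary of 10
toy_targets = torch.tensor([3, 7, 1, 1]) #pretend index 1 is the <pad> token
toy_criterion = nn.CrossEntropyLoss(ignore_index = 1)
print(toy_criterion(toy_logits, toy_targets)) #averaged over the 2 non-pad positions only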
We then create the training loop...
def train(model, iterator, optimizer, criterion, clip):

    model.train()

    epoch_loss = 0

    for i, batch in enumerate(iterator):

        src = batch.src
        trg = batch.trg

        optimizer.zero_grad()

        output = model(src, trg)

        #trg = [trg sent len, batch size]
        #output = [trg sent len, batch size, output dim]

        output = output[1:].view(-1, output.shape[-1])
        trg = trg[1:].view(-1)

        #trg = [(trg sent len - 1) * batch size]
        #output = [(trg sent len - 1) * batch size, output dim]

        loss = criterion(output, trg)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

        optimizer.step()

        epoch_loss += loss.item()

    return epoch_loss / len(iterator)
...and the evaluation loop, remembering to set the model to eval mode and turn off teacher forcing.
def evaluate(model, iterator, criterion):

    model.eval()

    epoch_loss = 0

    with torch.no_grad():

        for i, batch in enumerate(iterator):

            src = batch.src
            trg = batch.trg

            output = model(src, trg, 0) #turn off teacher forcing

            #trg = [trg sent len, batch size]
            #output = [trg sent len, batch size, output dim]

            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)

            #trg = [(trg sent len - 1) * batch size]
            #output = [(trg sent len - 1) * batch size, output dim]

            loss = criterion(output, trg)

            epoch_loss += loss.item()

    return epoch_loss / len(iterator)
We'll also define the function that calculates how long an epoch takes.
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
Then, we train our model, saving the parameters that give us the best validation loss.
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut2-model.pt')

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
Epoch: 01 | Time: 0m 28s
    Train Loss: 5.072 | Train PPL: 159.561
     Val. Loss: 5.065 |  Val. PPL: 158.309
Epoch: 02 | Time: 0m 27s
    Train Loss: 4.446 | Train PPL:  85.290
     Val. Loss: 5.474 |  Val. PPL: 238.400
Epoch: 03 | Time: 0m 27s
    Train Loss: 4.148 | Train PPL:  63.299
     Val. Loss: 4.868 |  Val. PPL: 130.008
Epoch: 04 | Time: 0m 27s
    Train Loss: 3.830 | Train PPL:  46.041
     Val. Loss: 4.617 |  Val. PPL: 101.215
Epoch: 05 | Time: 0m 27s
    Train Loss: 3.535 | Train PPL:  34.283
     Val. Loss: 4.411 |  Val. PPL:  82.338
Epoch: 06 | Time: 0m 27s
    Train Loss: 3.272 | Train PPL:  26.362
     Val. Loss: 4.143 |  Val. PPL:  62.964
Epoch: 07 | Time: 0m 27s
    Train Loss: 3.044 | Train PPL:  20.994
     Val. Loss: 3.923 |  Val. PPL:  50.562
Epoch: 08 | Time: 0m 27s
    Train Loss: 2.817 | Train PPL:  16.733
     Val. Loss: 3.872 |  Val. PPL:  48.041
Epoch: 09 | Time: 0m 27s
    Train Loss: 2.610 | Train PPL:  13.601
     Val. Loss: 3.777 |  Val. PPL:  43.706
Epoch: 10 | Time: 0m 27s
    Train Loss: 2.453 | Train PPL:  11.625
     Val. Loss: 3.732 |  Val. PPL:  41.772
Finally, we test the model on the test set using these "best" parameters.
model.load_state_dict(torch.load('tut2-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
| Test Loss: 3.672 | Test PPL: 39.311 |
Just looking at the test loss, we get better performance. This is a pretty good sign that this model architecture is doing something right! Relieving the information compression seems like the way forward, and in the next tutorial we'll expand on this even further with attention.