In this project, you will use an RNN to generate your own Simpsons TV script. You will work with a dataset of scripts from 27 seasons of The Simpsons, and the neural network you build will generate a new script for a scene in Moe's Tavern.
The data has already been provided for you. You will use a subset of the original dataset that only includes scenes set in Moe's Tavern; other versions of the tavern, such as "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", and so on, are not included in the data.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Use view_sentence_range to view different parts of the data.
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
The first step on this dataset is preprocessing. Implement the following two preprocessing functions: a lookup-table function and a punctuation-tokenizing function.
To create a word embedding, you first need to convert each word to an id. In this function, create the following two dictionaries:
vocab_to_int
int_to_vocab
Return these dictionaries in the following tuple:
(vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words = set(" ".join(text).split(" "))  # unique words in the script
int_to_vocab = {i: word for i, word in enumerate(words)}
vocab_to_int = {word: i for i, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
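As a quick, illustrative sanity check (this is an assumption about usage, not part of the required project code), the two dictionaries returned by create_lookup_tables should be inverses of each other:
# Illustrative check: every word maps to an id and back to the same word.
example_words = ['homer', 'moe', 'bart', 'moe']
v2i, i2v = create_lookup_tables(example_words)
assert all(i2v[v2i[word]] == word for word in set(example_words))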
We will split the script into an array of words using spaces as delimiters. However, punctuation such as periods and exclamation marks makes it hard for the neural network to distinguish between "goodbye" and "goodbye!".
Implement the function token_lookup to return a dictionary that maps each punctuation symbol to a token, for example mapping "!" to "||Exclamation_Mark||". Create a dictionary entry for each of the following symbols, where the symbol is the key and its token is the value: period (.), comma (,), quotation mark ("), semicolon (;), exclamation mark (!), question mark (?), left parenthesis ((), right parenthesis ()), dash (--), and return (\n).
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them, so that each symbol becomes its own word and it is easier for the neural network to predict the next word. Make sure you don't pick a token that could be confused with an ordinary word; instead of a token like "dash", use something like "||dash||".
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_parenthesis||',
')':'||right_parenthesis||',
'--':'||dash||',
'\n':'||return||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
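For intuition, here is a small illustrative sketch (an assumption about how the tokens end up being applied, not the project's required code) showing how the token dictionary turns punctuation into separate words before the text is split on spaces:
# Illustrative only: replace each symbol with its token, padded by spaces,
# so punctuation becomes its own "word" when the text is split.
sample_line = "Moe_Szyslak: Hey, Homer!\n"
for symbol, token in token_lookup().items():
    sample_line = sample_line.replace(symbol, ' {} '.format(token))
print(sample_line.lower().split())
# ['moe_szyslak:', 'hey', '||comma||', 'homer', '||exclamation_mark||', '||return||']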
Running the following code will preprocess all of the data and save it to a file.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
This is your first checkpoint. If you ever decide to come back to this notebook or need to restart it, you can start from here: the preprocessed data has already been saved to disk.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the get_inputs() function to create TF placeholders for the neural network. It should create the following placeholders:
Input text placeholder named "input", using the placeholder's name parameter.
Targets placeholder.
Learning rate placeholder.
Return the placeholders in the following tuple: (Input, Targets, LearningRate)
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
i = tf.placeholder(tf.int32, (None, None), name="input") # word ids are integers
t = tf.placeholder(tf.int32, (None, None), name="target") # word ids are integers
lr = tf.placeholder(tf.float32, name="lr")
# naming the placeholders lets them be retrieved by name from the loaded graph later
return (i, t, lr)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
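A minimal illustrative check (assumed, not required by the project): because each placeholder is given an explicit name, it can be fetched back from a graph by name, which is exactly what get_tensors() relies on near the end of this notebook.
# Illustrative: named tensors can be retrieved from the graph by name.
with tf.Graph().as_default() as check_graph:
    get_inputs()
    print(check_graph.get_tensor_by_name("input:0"))  # Tensor("input:0", shape=(?, ?), dtype=int32)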
Stack one or more BasicLSTMCells in a MultiRNNCell.
Use rnn_size to set the size of the RNN.
Initialize the cell state using the zero_state() function.
Apply the name "initial_state" to the initial state using tf.identity().
Return the cell and the initial state in the following tuple: (Cell, InitialState)
def get_init_cell(batch_size, rnn_size, n_cell=1):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
# build a separate BasicLSTMCell per layer so stacked layers do not share weights
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(n_cell)]
cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
init = cell.zero_state(batch_size, tf.float32)
return cell, tf.identity(init, name="initial_state")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
Use TensorFlow to apply an embedding to input_data. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1)) # embedding matrix, see https://www.tensorflow.org/tutorials/representation/word2vec?hl=zh-cn
return tf.nn.embedding_lookup(embedding, input_data) # look up each word id's embedding vector (https://youtu.be/D-xK6gu1ohE)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
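A quick illustrative shape check (an assumption, using made-up sizes): the embedded output should gain one extra dimension of size embed_dim.
# Illustrative: word ids of shape (batch, time) become vectors of shape (batch, time, embed_dim).
with tf.Graph().as_default():
    check_input = tf.placeholder(tf.int32, (None, None))
    check_embed = get_embed(check_input, vocab_size=100, embed_dim=64)
    print(check_embed.get_shape().as_list())  # [None, None, 64]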
You have already created an RNN cell in the get_init_cell() function. Now it's time to use that cell to create an RNN.
Create the RNN using tf.nn.dynamic_rnn().
Apply the name "final_state" to the final state using tf.identity().
Return the outputs and the final state in the following tuple: (Outputs, FinalState)
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return (outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
Apply the functions you implemented above:
Apply embeddings to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
Build the RNN from cell using your build_rnn(cell, inputs) function.
Apply a fully connected layer with vocab_size as the number of outputs.
Return the logits and the final state in the following tuple: (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
x = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, x)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# no activation here: sequence_loss and the softmax in the training graph expect raw logits
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
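Another illustrative shape check under assumed toy sizes: the logits returned by build_nn should have vocab_size as their last dimension, one score per word in the vocabulary.
# Illustrative: logits have shape (batch, time, vocab_size).
with tf.Graph().as_default():
    check_cell, _ = get_init_cell(batch_size=4, rnn_size=64)
    check_input = tf.placeholder(tf.int32, (None, None))
    check_logits, _ = build_nn(check_cell, 64, check_input, vocab_size=100, embed_dim=32)
    print(check_logits.get_shape().as_list())  # [None, None, 100]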
Implement get_batches to create batches of inputs and targets from int_text. The batches should be a Numpy array with shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
A single batch of inputs with shape [batch size, sequence length]
A single batch of targets with shape [batch size, sequence length]
If you are unable to fill the last batch with enough data, drop that batch. Note that the unit test also expects the very last target value to wrap around to the first input value of the first batch.
For example get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
returns the following Numpy array:
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 1]]
]
]
import math
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batch = len(int_text) // (batch_size * seq_length) # int division
x = np.array(int_text[:n_batch * batch_size * seq_length])
y = np.array(int_text[1:n_batch * batch_size * seq_length + 1])
"""The last target of the last batch should be the first input of the first batch.
Found [4476 4477 4478 4479 4480] but expected [4476 4477 4478 4479 0]"""
y[-1] = x[0]
batch = []
for i in range(n_batch):
x_ = []
y_ = []
for ii in range(batch_size):
# row ii reads from its own contiguous chunk of the text; batch i takes the i-th window of that chunk
from_ = ii*n_batch*seq_length + i*seq_length
to_ = from_+seq_length
x_.append(x[from_:to_])
y_.append(y[from_:to_])
batch.append([x_, y_])
return np.array(batch)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
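As an extra illustrative sanity check (assumed, beyond the provided unit test), the example from the description above can be reproduced directly:
# Illustrative: 15 word ids, batch_size=2, seq_length=3 -> 2 batches of inputs and targets.
example_batches = get_batches(list(range(1, 16)), 2, 3)
print(example_batches.shape)   # (2, 2, 2, 3): (n_batches, input/target, batch_size, seq_length)
print(example_batches[0][0])   # [[1 2 3], [7 8 9]]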
Adjust the following parameters:
num_epochs sets the number of training epochs.
batch_size sets the batch size.
rnn_size sets the size of the RNN (number of hidden units).
embed_dim sets the size of the embedding.
seq_length sets the sequence length.
learning_rate sets the learning rate.
show_every_n_batches sets how often (in number of batches) the neural network prints its training progress.
# Number of Epochs
num_epochs = 128
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
Build the graph using the neural network you implemented.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Save seq_length and save_dir so they can be used later to generate a new TV script.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Use the get_tensor_by_name() function to get the tensors from loaded_graph. Get the tensors using the following names:
"input:0"
"initial_state:0"
"final_state:0"
"probs:0"
Return the tensors in the following tuple: (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
# TODO: Implement Function
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
Implement the pick_word() function to use probabilities to select the next word.
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
# sample the next word id from the predicted probability distribution
i = int(np.random.choice(len(probabilities), 1, p=probabilities))
return int_to_vocab[i]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
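A small illustrative example (with made-up probabilities and a toy vocabulary) of how pick_word behaves: it samples from the distribution rather than always taking the argmax, which adds variety to the generated text.
# Illustrative: 'moe' is returned roughly 80% of the time with these probabilities.
example_probs = np.array([0.1, 0.8, 0.1])
example_vocab = {0: 'homer', 1: 'moe', 2: 'bart'}
print(pick_word(example_probs, example_vocab))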
This will generate the TV script for you. Set gen_length to adjust the length of the script you want to generate.
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
It's okay if the generated TV script doesn't make much sense: the training text is less than a megabyte. To get better results you would need a smaller vocabulary or more data. Luckily, there is more data! As mentioned at the beginning of this project, this is a subset of a larger dataset. We didn't have you train on all of the data because that would take a long time, but you are free to train your neural network on the full dataset once you have completed this project.
When submitting your project, make sure you run all of the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and also export it as an HTML file via "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files with your submission.