Generate a TV Script

In this project, you will use an RNN to generate your own Simpsons TV script. You will use part of a dataset of Simpsons scripts from 27 seasons. The neural network you build will generate a new script for a scene in Moe's Tavern.

Get the Data

We have already provided the data for you. You will be using a subset of the original dataset that consists only of the scenes in Moe's Tavern. It does not include other versions of the tavern, such as "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", and so on.

In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]

Explore the Data

Use view_sentence_range to view different parts of the data.

In [18]:
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555

The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.


Implement Preprocessing Functions

The first step with any dataset is preprocessing. Implement the following two preprocessing functions:

  • Lookup table
  • Tokenize punctuation

Lookup Table

To create a word embedding, you first need to convert the words to ids. In this function, create two dictionaries:

  • A dictionary that maps each word to an id, which we will call vocab_to_int
  • A dictionary that maps each id back to its word, which we will call int_to_vocab

Return these dictionaries in the following tuple: (vocab_to_int, int_to_vocab)

In [19]:
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # Enumerate the unique words to build both lookup dictionaries
    words = list(set(" ".join(text).split(" ")))
    int_to_vocab = dict(enumerate(words))
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
Tests Passed

Tokenize Punctuation

We will split the script into a word array, using spaces as delimiters. However, punctuation such as periods and exclamation marks makes it hard for the neural network to distinguish between "goodbye" and "goodbye!".

Implement the function token_lookup to return a dictionary that will be used to tokenize symbols such as "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols, where the symbol is the key and its token is the value:

  • period ( . )
  • comma ( , )
  • quotation mark ( " )
  • semicolon ( ; )
  • exclamation mark ( ! )
  • question mark ( ? )
  • left parenthesis ( ( )
  • right parenthesis ( ) )
  • dash ( -- )
  • return ( \n )

This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word; instead of using a token like "dash", try something like "||dash||".

In [20]:
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    return {'.':'||period||', 
             ',':'||comma||', 
             '"':'||quotation_mark||', 
             ';':'||semicolon||',
             '!':'||exclamation_mark||', 
             '?':'||question_mark||', 
             '(':'||left_parenthesis||',
             ')':'||right_parenthesis||', 
             '--':'||dash||', 
             '\n':'||return||'}

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
Tests Passed

Preprocess and Save All the Data

Running the code below will preprocess all the data and save it to a file.

In [21]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
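
For reference, the sketch below shows roughly what a preprocessing step like helper.preprocess_and_save_data is expected to do with the two functions you just wrote. This is an assumption about the helper's internals, not its actual code: it tokenizes the punctuation, lowercases and splits the text, converts the words to ids, and saves everything for the later cells.

# Hypothetical sketch of the hidden preprocessing step (assumed, not the real helper code)
token_dict = token_lookup()
for symbol, token in token_dict.items():
    text = text.replace(symbol, ' {} '.format(token))
words = text.lower().split()

vocab_to_int, int_to_vocab = create_lookup_tables(words)
int_text = [vocab_to_int[word] for word in words]
# the helper then pickles (int_text, vocab_to_int, int_to_vocab, token_dict) for load_preprocess()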

Checkpoint

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.

In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()

Create the Neural Network

You will create the necessary elements for building the RNN by implementing the following functions:

  • get_inputs
  • get_init_cell
  • get_embed
  • build_rnn
  • build_nn
  • get_batches

Check the TensorFlow version and check for access to a GPU.

In [2]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.3.0
Default GPU Device: /gpu:0

Input

Implement the get_inputs() function to create TF placeholders for the neural network. It should create the following placeholders:

  • Input text placeholder named "input", using the TF placeholder name parameter
  • Targets placeholder
  • Learning rate placeholder

Return the placeholders in the following tuple: (Input, Targets, LearningRate)

In [3]:
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    # int32 because the inputs and targets are word ids from the lookup dictionaries
    i = tf.placeholder(tf.int32, (None, None), name="input")
    t = tf.placeholder(tf.int32, (None, None), name="target")
    lr = tf.placeholder(tf.float32, name="lr")
    # the name="..." arguments let these tensors be retrieved later with get_tensor_by_name()
    return (i, t, lr)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
Tests Passed

Create an RNN Cell and Initialize It

Stack one or more BasicLSTMCells in a MultiRNNCell:

  • Set the RNN size using rnn_size.
  • Initialize the cell state using the MultiRNNCell's zero_state() function.
  • Apply the name "initial_state" to the initial state using tf.identity().

Return the cell and the initial state in the following tuple: (Cell, InitialState)

In [4]:
def get_init_cell(batch_size, rnn_size, n_cell=1):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :param n_cell: Number of stacked LSTM layers
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm] * n_cell)
    init = cell.zero_state(batch_size, tf.float32)
    return cell, tf.identity(init, name="initial_state")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
Tests Passed
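
One caveat: MultiRNNCell([lstm] * n_cell) reuses the same BasicLSTMCell object for every layer. With the default n_cell=1 this passes the test, but for a deeper stack, TensorFlow 1.1+ expects a distinct cell instance per layer. A minimal sketch of that variant is shown below (an alternative, not required by the tests; the name get_init_cell_stacked is just for illustration):

def get_init_cell_stacked(batch_size, rnn_size, n_cell=2):
    # build one distinct BasicLSTMCell per layer to avoid variable-reuse errors
    cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(n_cell)]
    cell = tf.contrib.rnn.MultiRNNCell(cells)
    init = cell.zero_state(batch_size, tf.float32)
    return cell, tf.identity(init, name="initial_state")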

Word Embedding

Apply embedding to input_data using TensorFlow. Return the embedded sequence.

In [5]:
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Function
    init = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1)) #https://www.tensorflow.org/tutorials/representation/word2vec?hl=zh-cn
    return tf.nn.embedding_lookup(init, input_data) #https://youtu.be/D-xK6gu1ohE


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
Tests Passed

Create the RNN

You already created an RNN cell in the get_init_cell() function. Now it's time to use that cell to create an RNN. Build the RNN with tf.nn.dynamic_rnn(), and apply the name "final_state" to the final state using tf.identity().

Return the outputs and the final state in the following tuple: (Outputs, FinalState)

In [6]:
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name="final_state")
    return (outputs, final_state)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
Tests Passed

Build the Neural Network

Apply the functions you implemented above to:

  • Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
  • Build the RNN using cell and your build_rnn(cell, inputs) function.
  • Apply a fully connected layer with a linear activation, with vocab_size as the number of outputs.

Return the logits and the final state in the following tuple: (Logits, FinalState)

In [7]:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    x = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, x)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    # linear activation (activation_fn=None): the logits are fed to softmax / sequence_loss later
    return logits, final_state


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
Tests Passed

Batches

Implement get_batches to create batches of inputs and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:

  • The first element is a single batch of inputs and has the shape [batch size, sequence length].
  • The second element is a single batch of targets and has the shape [batch size, sequence length].

If you can't fill the last batch with enough data, drop that batch.

For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return the following Numpy array:

[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]
In [8]:
import math

def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batch = len(int_text) // (batch_size * seq_length) # int division
    x = np.array(int_text[:n_batch * batch_size * seq_length])
    y = np.array(int_text[1:n_batch * batch_size * seq_length + 1])
    
    """The last target of the last batch should be the first input of the first batch.
    Found [4476 4477 4478 4479 4480] but expected [4476 4477 4478 4479    0]"""
    y[-1] = x[0]
    
    batch = []
    for i in range(n_batch):
        x_ = []
        y_ = []
        for ii in range(batch_size):
            from_ = ii*n_batch*seq_length + i*seq_length
            to_ = from_+seq_length
            x_.append(x[from_:to_])
            y_.append(y[from_:to_])
        batch.append([x_, y_])
    return np.array(batch)
    

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
Tests Passed
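
As a quick, optional sanity check (not one of the graded cells), calling get_batches on the example above should give the documented shape. Note that because of the wrap-around fix, the very last target id is the first input id rather than 13:

example = get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
print(example.shape)   # (2, 2, 2, 3) -> (number of batches, 2, batch size, sequence length)
print(example[0][0])   # first batch of inputs: [[1 2 3] [7 8 9]]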

Neural Network Training

Hyperparameters

Tune the following parameters:

  • Set num_epochs to the number of training epochs.
  • Set batch_size to the batch size.
  • Set rnn_size to the size of the RNNs.
  • Set embed_dim to the size of the embedding.
  • Set seq_length to the sequence length.
  • Set learning_rate to the learning rate.
  • Set show_every_n_batches to how often (in batches) the neural network should print its training progress.
In [26]:
# Number of Epochs
num_epochs = 128
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 32

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'

Build the Graph

Build the graph using the neural network you implemented.

In [27]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)

Training

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forum to see if anyone has run into the same problem.

In [28]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})

        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)

            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{}   train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')
Epoch   0 Batch    0/8   train_loss = 8.822
Epoch   4 Batch    0/8   train_loss = 4.355
Epoch   8 Batch    0/8   train_loss = 3.290
Epoch  12 Batch    0/8   train_loss = 2.656
Epoch  16 Batch    0/8   train_loss = 2.244
Epoch  20 Batch    0/8   train_loss = 1.866
Epoch  24 Batch    0/8   train_loss = 1.571
Epoch  28 Batch    0/8   train_loss = 1.336
Epoch  32 Batch    0/8   train_loss = 1.155
Epoch  36 Batch    0/8   train_loss = 1.021
Epoch  40 Batch    0/8   train_loss = 0.881
Epoch  44 Batch    0/8   train_loss = 0.762
Epoch  48 Batch    0/8   train_loss = 0.648
Epoch  52 Batch    0/8   train_loss = 0.569
Epoch  56 Batch    0/8   train_loss = 0.523
Epoch  60 Batch    0/8   train_loss = 0.472
Epoch  64 Batch    0/8   train_loss = 0.384
Epoch  68 Batch    0/8   train_loss = 0.343
Epoch  72 Batch    0/8   train_loss = 0.302
Epoch  76 Batch    0/8   train_loss = 0.274
Epoch  80 Batch    0/8   train_loss = 0.274
Epoch  84 Batch    0/8   train_loss = 0.208
Epoch  88 Batch    0/8   train_loss = 0.165
Epoch  92 Batch    0/8   train_loss = 0.147
Epoch  96 Batch    0/8   train_loss = 0.142
Epoch 100 Batch    0/8   train_loss = 0.143
Epoch 104 Batch    0/8   train_loss = 0.125
Epoch 108 Batch    0/8   train_loss = 0.113
Epoch 112 Batch    0/8   train_loss = 0.103
Epoch 116 Batch    0/8   train_loss = 0.098
Epoch 120 Batch    0/8   train_loss = 0.094
Epoch 124 Batch    0/8   train_loss = 0.092
Model Trained and Saved

Save Parameters

Save seq_length and save_dir so they can be used to generate a new TV script.

In [29]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))

Checkpoint

In [30]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()

Implement the Generate Functions

Get Tensors

Use the get_tensor_by_name() function to get the tensors from loaded_graph. Get the tensors using the following names:

  • "input:0"
  • "initial_state:0"
  • "final_state:0"
  • "probs:0"

Return the tensors in the following tuple: (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)

In [31]:
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    InputTensor = loaded_graph.get_tensor_by_name("input:0")
    InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
    FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
    ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
    # TODO: Implement Function
    return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
Tests Passed

Choose Word

Implement the pick_word() function to select the next word using probabilities.

In [32]:
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilites of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # TODO: Implement Function
    # Sample the next word id from the probability distribution
    i = int(np.random.choice(len(probabilities), 1, p=probabilities))
    return int_to_vocab[i]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
Tests Passed
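
The np.random.choice draw above samples the next word from the full probability distribution, which keeps the generated text varied. A greedy alternative (not required by the tests) would be return int_to_vocab[int(np.argmax(probabilities))], which always picks the most likely word and tends to produce more repetitive output.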

Generate TV Script

This will generate the TV script for you. Set gen_length to the length of the script you want to generate.

In [33]:
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})
        
        pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)
    
    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')
        
    print(tv_script)
INFO:tensorflow:Restoring parameters from ./save
moe_szyslak: goodnight, moon.
moe_szyslak: goodnight, broom. good night jukebox that won't play a tune.
moe_szyslak: goodnight eggs. goodnight dregs. goodnight bugs crawlin' up my dad. he's back on which, but they love less when they're as clean as your foot. wanna come / hey you, behind the bushes."(tapping mic) uh, is this thing on?
moe_szyslak: that seems is a crowbar!
moe_szyslak: see? they do worse!
homer_simpson: moe, i don't know.
carny: teacup? how'd that what was that?
barney_gumble: why did i don't blame her. i won't be coming in tomorrow-- religious holiday... the uh, feast of...(looking at sign) maximum occupancy.
moe_szyslak: ya know, i can't say him.
homer_simpson:(handing maggie over) but he goes a all ripping...
homer_simpson: wow, that's the farthest that one of my eggs ever. i'm gonna, like, your beloved isotopes are. i'm talkin' about deadly, i

The TV Script is Nonsensical

It's OK if the TV script doesn't make any sense. We trained on less than a megabyte of text. To get better results, you will have to use a smaller vocabulary or get more data. Luckily, there is more data! As we mentioned at the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

Submit the Project

When submitting your project, make sure you run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and also export it as an HTML file via "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
