Face Generation

In this project, you will use Generative Adversarial Networks (GANs) to generate new face images.

Get the Data

The project will use the following data sets:

  • MNIST
  • CelebA

Because the CelebA dataset is more complex and this is your first time working with GANs, we want you to test your GAN model on the MNIST dataset first, so you can evaluate its performance more quickly.

If you are using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
data_dir = '/data'
!pip install matplotlib==2.0.2
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Collecting matplotlib==2.0.2
  Downloading https://files.pythonhosted.org/packages/60/d4/6b6d8a7a6bc69a1602ab372f6fc6e88ef88a8a96398a1a25edbac636295b/matplotlib-2.0.2-cp36-cp36m-manylinux1_x86_64.whl (14.6MB)
    100%|███████████████████████████████| 14.6MB 37kB/s eta 0:00:01
Requirement already satisfied: pyparsing!=2.0.0,!=2.0.4,!=2.1.2,!=2.1.6,>=1.5.6 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.0.2)
Requirement already satisfied: python-dateutil in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.0.2)
Requirement already satisfied: six>=1.10 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.0.2)
Requirement already satisfied: pytz in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.0.2)
Requirement already satisfied: cycler>=0.10 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib==2.0.2)
Requirement already satisfied: numpy>=1.7.1 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.0.2)
Installing collected packages: matplotlib
  Found existing installation: matplotlib 2.1.0
    Uninstalling matplotlib-2.1.0:
      Successfully uninstalled matplotlib-2.1.0
Successfully installed matplotlib-2.0.2
You are using pip version 9.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Found mnist Data
Found celeba Data

Exploring the Data

MNIST

MNIST is a dataset of handwritten digit images. You can change show_n_images to explore this dataset.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x7f5e3c784400>

CelebA

CelebFaces Attributes Dataset (CelebA) is a dataset of more than 200,000 celebrity images with attribute annotations. You will use this dataset to generate faces, without using the annotations. You can change show_n_images to explore this dataset.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x7f5e3c6b1080>

Preprocess the Data

Since the focus of the project is building the GAN model, we will preprocess the data for you.

After preprocessing, the pixel values of both the MNIST and CelebA datasets are in the range [-0.5, 0.5] for 28x28 images. The CelebA images are cropped to remove the parts of the image that are not the face, then resized to 28x28.

The MNIST images are single-channel grayscale images, while the CelebA images are three-channel RGB color images.
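The provided helper module already does this preprocessing for you, so you do not need to write it yourself. For reference only, here is a minimal sketch of the kind of cropping and scaling involved; the function name, crop box, and resampling filter below are illustrative assumptions (and Pillow is assumed to be installed), not the helper's actual code.

import numpy as np
from PIL import Image

def preprocess_face(path, crop_box=(25, 65, 153, 193), size=(28, 28)):
    """Illustrative only: crop roughly to the face region, resize to 28x28,
    and scale pixel values from [0, 255] down to [-0.5, 0.5]."""
    image = Image.open(path)
    image = image.crop(crop_box).resize(size, Image.BILINEAR)
    data = np.array(image, dtype=np.float32) / 255.0  # now in [0, 1]
    return data - 0.5                                 # now in [-0.5, 0.5]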

Build the Neural Network

You will build the main components of the GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the TensorFlow version and access the GPU

Check that you are using the correct version of TensorFlow and that a GPU is available.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.3.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create the TF Placeholders for the neural network. Create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders as a tuple of (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    real_input = tf.placeholder(tf.float32, shape=(None, image_height, image_width, image_channels))
    z = tf.placeholder(tf.float32, shape=(None, z_dim))  # None in the first dimension is the batch size
    lr = tf.placeholder(tf.float32, shape=())  # rank-0 (scalar) learning rate

    return real_input, z, lr


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
ERROR:tensorflow:==================================
Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>):
<tf.Operation 'assert_rank_2/Assert/Assert' type=Assert>
If you want to mark it as used call its "mark_used()" method.
===========================================
Tests Passed

Discriminator

Implement the discriminator function to create a neural network that discriminates on images. This function should be able to reuse its variables. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused.

This function should return a tuple of the form (tensor output of the discriminator, tensor logits of the discriminator).

In [6]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param image: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
#     initializer = tf.contrib.layers.variance_scaling_initializer()
    initializer = tf.random_normal_initializer(stddev=0.02)

    # Reuse the "discriminator" variables when the function is called a second time
    with tf.variable_scope('discriminator', reuse=reuse):
        # input: 28x28x(1 or 3)
        x = images
#         x = tf.nn.dropout(x, 0.9) # optional dropout to help prevent mode collapse
        x = tf.layers.conv2d(x, 64, 5, strides=2, kernel_initializer=initializer, padding='same')
        x = tf.maximum(x, 0.1 * x)  # leaky ReLU with slope 0.1
        # 14x14x64
        x = tf.layers.conv2d(x, 128, 5, strides=2, kernel_initializer=initializer, padding='same')
        x = tf.layers.batch_normalization(x, training=True)
        x = tf.maximum(x, 0.1 * x)
        # 7x7x128
        x = tf.layers.conv2d(x, 256, 5, strides=2, kernel_initializer=initializer, padding='same')
        x = tf.layers.batch_normalization(x, training=True)
        x = tf.maximum(x, 0.1 * x)
        # 4x4x256

        x = tf.reshape(x, (-1, 4*4*256))  # flatten before the fully connected layers
        x = tf.layers.dense(x, 4*4*256)
        x = tf.layers.batch_normalization(x, training=True)
        x = tf.maximum(x, 0.1 * x)

        logits = tf.layers.dense(x, 1)
        output = tf.sigmoid(logits)

    return output, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed
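A note on the tf.maximum(x, 0.1 * x) lines used above (and again in the generator below): this is simply a hand-written leaky ReLU with slope 0.1. A minimal sketch of the equivalence, assuming only the tf import made earlier:

def leaky_relu(x, alpha=0.1):
    # Positive values pass through unchanged; negative values are scaled by
    # alpha, which keeps a small gradient flowing for negative activations.
    return tf.maximum(x, alpha * x)

Newer TensorFlow releases (1.4 and later, if memory serves) also provide tf.nn.leaky_relu, but the 1.3.0 used in this notebook predates it, which is presumably why the manual form appears here.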

Generator

Implement the generator function to generate an image from z. This function should be able to reuse its variables. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused.

This function should return the generated 28 x 28 x out_channel_dim images.

In [7]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
#     initializer = tf.contrib.layers.variance_scaling_initializer()
    initializer = tf.random_normal_initializer(stddev=0.02)

    # Reuse the "generator" variables at inference time (is_train=False)
    with tf.variable_scope('generator', reuse=not is_train):
        # project z and reshape to 7x7x256
        x = tf.layers.dense(z, 7 * 7 * 256)
        x = tf.reshape(x, (-1, 7, 7, 256))
        x = tf.maximum(x, 0.1 * x)  # leaky ReLU with slope 0.1
        # 7x7x256
        x = tf.layers.conv2d_transpose(x, 128, 5, strides=2, kernel_initializer=initializer, padding='same')
        x = tf.layers.batch_normalization(x, training=is_train)
        x = tf.maximum(x, 0.1 * x)
        # 14x14x128
        x = tf.layers.conv2d_transpose(x, 64, 5, strides=2, kernel_initializer=initializer, padding='same')
        x = tf.layers.batch_normalization(x, training=is_train)
        x = tf.maximum(x, 0.1 * x)
        # 28x28x64
        logits = tf.layers.conv2d_transpose(x, out_channel_dim, 5, strides=1, kernel_initializer=initializer, padding='same')
        output = tf.tanh(logits)

    return output  # only the tanh output is needed, no logits

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement the model_loss function to compute the training losses for the GAN. This function should return a tuple of the form (discriminator loss, generator loss).

Use the function you have implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
In [8]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    # Reference: the GAN loss lesson in the Udacity classroom
    # https://classroom.udacity.com/nanodegrees/nd101-cn-advanced/parts/34ed075b-3ca2-45f0-916c-00db3186f18f/modules/af4b44d7-35bd-4408-bf77-22e469eec31b/lessons/1411d674-356f-4a26-961e-bc04a059f36e/concepts/3bf52eeb-a50a-4734-bd26-d9603b1fcc84

    g_fake_output = generator(input_z, out_channel_dim, is_train=True)

    # Build the discriminator on the real images first (reuse=False creates the
    # variables), then reuse those variables for the generated images.
    d_real_output, d_real_logits = discriminator(input_real, reuse=False)
    d_fake_output, d_fake_logits = discriminator(g_fake_output, reuse=True)

    # The generator is trained to make the discriminator classify its images as real
    g_fake_label = tf.ones_like(d_fake_output)
    d_fake_label = tf.zeros_like(d_fake_output)
    d_real_label = tf.ones_like(d_real_output) * 0.9  # one-sided label smoothing

    g_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake_logits, labels=g_fake_label))
    d_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_fake_logits, labels=d_fake_label))
    d_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_real_logits, labels=d_real_label))

    return d_real_loss + d_fake_loss, g_fake_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed
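All three losses above rely on tf.nn.sigmoid_cross_entropy_with_logits. As a quick, optional sanity check (not part of the project), the small NumPy sketch below evaluates the same cross-entropy for a single logit both from its definition and from the numerically stable form documented for that op, using the smoothed real label of 0.9:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logit, label = 2.0, 0.9  # e.g. a real-image logit with the smoothed label

# Definition: -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))
p = sigmoid(logit)
by_definition = -label * np.log(p) - (1 - label) * np.log(1 - p)

# Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
stable_form = max(logit, 0) - logit * label + np.log(1 + np.exp(-abs(logit)))

print(by_definition, stable_form)  # both come out to roughly 0.327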

Optimization

Implement the model_opt function to create the optimization operations for the GAN. Use tf.trainable_variables to get all trainable variables, then filter them by the variable scope names discriminator and generator. This function should return a tuple of the form (discriminator training operation, generator training operation).

In [9]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    variables = tf.trainable_variables()
    d_vars = [var for var in variables if var.name.startswith('discriminator')]
    g_vars = [var for var in variables if var.name.startswith('generator')]

    # Make sure the batch normalization statistics are updated before each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed
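If you want to convince yourself that the scope-based filtering picks up the right variables, you can (optionally, outside the graded cells) print the trainable variable names after building the model. A sketch, assuming a graph containing both scopes has already been constructed:

# Optional sanity check: list trainable variables grouped by their top-level scope
for var in tf.trainable_variables():
    scope = var.name.split('/')[0]   # 'discriminator' or 'generator'
    print(scope, var.name, var.shape)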

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you evaluate how well the GAN is training.

In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Training

Implement the train function to build and train the GAN. Remember to use the following functions you have already implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use the show_generator_output function to show the generator's output during training.

Note: Running show_generator_output for every batch will dramatically increase training time and the size of the notebook. It's recommended to print the generator's output once every 100 batches.

In [11]:
from tqdm import tqdm_notebook as tqdm

def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
    # note that data_shape[0] is batch
    real_input, z, lr = model_inputs(image_width=data_shape[1], image_height=data_shape[2], image_channels=data_shape[3], z_dim=z_dim)
    d_loss, g_loss = model_loss(real_input, z, out_channel_dim=data_shape[3])
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)
    
    
    with tf.Session() as sess:  # run the graph inside a TensorFlow session
        sess.run(tf.global_variables_initializer())
        loss_g_numpy = None
        loss_d_numpy = None
        for epoch_i in range(epoch_count):
            pbar = tqdm(get_batches(batch_size))
            for b, batch_images in enumerate(pbar):
                """ Here tanh is applied to the output of the generator, the tanh function output is between -1 and 1, 
                but the batch_images range is between -0.5 and 0.5, 
                so this place needs to rescale the real image to -1 to 1 Between, 
                this can be achieved by batch_images = batch_images*2, 
                so that the real image passed to the discriminator and the fake image of the generator are in the same scope.""" 
                batch_images  =  batch_images * 2 
                # TODO: Train Model 
                z_noise  =  np . Random . Uniform ( - . 1 ,  . 1 ,  size = ( the batch_size ,  z_dim )) 
                Sess . RUN (d_train_opt, feed_dict={real_input: batch_images, z: z_noise, lr:learning_rate})
                sess.run(g_train_opt, feed_dict={real_input: batch_images, z: z_noise, lr:learning_rate})
                
                # I still cannot understand tf
                # It took me a long time to search for how to get the loss from packed tensorflow session
                
                if b%100 == 0:
                    show_generator_output(sess, n_images=batch_size, input_z=z, out_channel_dim=data_shape[3], image_mode=data_image_mode)
                    
                    loss_g_numpy = d_loss.eval({real_input: batch_images, z: z_noise, lr:learning_rate})
                    loss_d_numpy = g_loss.eval({z: z_noise, lr:learning_rate})

                
                pbar.set_description("E{}B{}, G_loss={} D_loss={}".format(epoch_i, b, loss_g_numpy, loss_d_numpy))
                

MNIST

Test your GAN model on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the generator's loss is lower than the discriminator's loss, or close to zero.
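If you want to inspect the relationship between the two losses after training rather than only in the progress bar, one option is to append the values from the d_loss.eval and g_loss.eval calls in train to two Python lists and plot them afterwards. Below is a sketch of such a plotting helper; the d_losses and g_losses lists are hypothetical (you would fill them yourself, they are not part of the train function above):

def plot_losses(d_losses, g_losses):
    """Plot recorded discriminator and generator losses over training."""
    pyplot.plot(d_losses, label='discriminator loss')
    pyplot.plot(g_losses, label='generator loss')
    pyplot.xlabel('checkpoint (every 100 batches)')
    pyplot.ylabel('loss')
    pyplot.legend()
    pyplot.show()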

In [12]:
batch_size = 32
z_dim = 64 # a z_dim that is too large or too small can make mode collapse more likely
learning_rate = 0.001
beta1 = 0.4


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)

CelebA

Run your GAN model on CelebA. On a typical GPU, each epoch takes around 20 minutes. You can run the full epoch, or stop once the GAN starts producing realistic face images.

In [13]:
batch_size = 32
z_dim = 128
learning_rate = 0.001
beta1 = 0.4


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-13-fd6f032e9dd2> in <module>()
     13 with tf.Graph().as_default():
     14     train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
---> 15           celeba_dataset.shape, celeba_dataset.image_mode)

<ipython-input-11-dda36c70445d> in train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode)
     35                 # TODO: Train Model
     36                 z_noise = np.random.uniform(-1, 1, size=(batch_size, z_dim))
---> 37                 sess.run(d_train_opt, feed_dict={real_input: batch_images, z: z_noise, lr:learning_rate})
     38                 sess.run(g_train_opt, feed_dict={real_input: batch_images, z: z_noise, lr:learning_rate})
     39 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1122     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1123       results = self._do_run(handle, final_targets, final_fetches,
-> 1124                              feed_dict_tensor, options, run_metadata)
   1125     else:
   1126       results = []

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1319     if handle is None:
   1320       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321                            options, run_metadata)
   1322     else:
   1323       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1325   def _do_call(self, fn, *args):
   1326     try:
-> 1327       return fn(*args)
   1328     except errors.OpError as e:
   1329       message = compat.as_text(e.message)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1304           return tf_session.TF_Run(session, options,
   1305                                    feed_dict, fetch_list, target_list,
-> 1306                                    status, run_metadata)
   1307 
   1308     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

Submit the Project

Before submitting this project, be sure to save the file after running all cells.

Save the file as "dlnd_face_generation.ipynb" and save it as HTML "File" -> "Download as". Please include the "helper.py" and "problem_unittests.py" files when submitting your project.
