{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%208%20-%20Lesson%203%20-%20Notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "rX8mhOLljYeM"
      },
      "source": [
        "##### Copyright 2019 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "cellView": "form",
        "colab": {},
        "colab_type": "code",
        "id": "BZSlp3DAjdYf"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "RXZT2UsyIVe_"
      },
      "outputs": [],
      "source": [
        "!wget --no-check-certificate \\\n",
        "    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \\\n",
        "    -O /tmp/horse-or-human.zip"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "0mLij6qde6Ox"
      },
      "outputs": [],
      "source": [
        "!wget --no-check-certificate \\\n",
        "    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \\\n",
        "    -O /tmp/validation-horse-or-human.zip"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "9brUxyTpYZHy"
      },
      "source": [
        "The following python code will use the OS library to use Operating System libraries, giving you access to the file system, and the zipfile library allowing you to unzip the data. "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "PLy3pthUS0D2"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import zipfile\n",
        "\n",
        "local_zip = '/tmp/horse-or-human.zip'\n",
        "zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
        "zip_ref.extractall('/tmp/horse-or-human')\n",
        "local_zip = '/tmp/validation-horse-or-human.zip'\n",
        "zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
        "zip_ref.extractall('/tmp/validation-horse-or-human')\n",
        "zip_ref.close()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "o-qUPyfO7Qr8"
      },
      "source": [
        "The contents of the .zip are extracted to the base directory `/tmp/horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.\n",
        "\n",
        "In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc. \n",
        "\n",
        "One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc.  Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step. \n",
        "\n",
        "Let's define each of these directories:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "NR_M9nWN-K8B"
      },
      "outputs": [],
      "source": [
        "# Directory with our training horse pictures\n",
        "train_horse_dir = os.path.join('/tmp/horse-or-human/horses')\n",
        "\n",
        "# Directory with our training human pictures\n",
        "train_human_dir = os.path.join('/tmp/horse-or-human/humans')\n",
        "\n",
        "# Directory with our training horse pictures\n",
        "validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')\n",
        "\n",
        "# Directory with our training human pictures\n",
        "validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "LuBYtA_Zd8_T"
      },
      "source": [
        "Now, let's see what the filenames look like in the `horses` and `humans` training directories:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "4PIP1rkmeAYS"
      },
      "outputs": [],
      "source": [
        "train_horse_names = os.listdir(train_horse_dir)\n",
        "print(train_horse_names[:10])\n",
        "\n",
        "train_human_names = os.listdir(train_human_dir)\n",
        "print(train_human_names[:10])\n",
        "\n",
        "validation_horse_hames = os.listdir(validation_horse_dir)\n",
        "print(validation_horse_hames[:10])\n",
        "\n",
        "validation_human_names = os.listdir(validation_human_dir)\n",
        "print(validation_human_names[:10])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "HlqN5KbafhLI"
      },
      "source": [
        "Let's find out the total number of horse and human images in the directories:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "H4XHh2xSfgie"
      },
      "outputs": [],
      "source": [
        "print('total training horse images:', len(os.listdir(train_horse_dir)))\n",
        "print('total training human images:', len(os.listdir(train_human_dir)))\n",
        "print('total validation horse images:', len(os.listdir(validation_horse_dir)))\n",
        "print('total validation human images:', len(os.listdir(validation_human_dir)))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "C3WZABE9eX-8"
      },
      "source": [
        "Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplot parameters:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "b2_Q0-_5UAv-"
      },
      "outputs": [],
      "source": [
        "%matplotlib inline\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "import matplotlib.image as mpimg\n",
        "\n",
        "# Parameters for our graph; we'll output images in a 4x4 configuration\n",
        "nrows = 4\n",
        "ncols = 4\n",
        "\n",
        "# Index for iterating over images\n",
        "pic_index = 0"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "xTvHzGCxXkqp"
      },
      "source": [
        "Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Wpr8GxjOU8in"
      },
      "outputs": [],
      "source": [
        "# Set up matplotlib fig, and size it to fit 4x4 pics\n",
        "fig = plt.gcf()\n",
        "fig.set_size_inches(ncols * 4, nrows * 4)\n",
        "\n",
        "pic_index += 8\n",
        "next_horse_pix = [os.path.join(train_horse_dir, fname) \n",
        "                for fname in train_horse_names[pic_index-8:pic_index]]\n",
        "next_human_pix = [os.path.join(train_human_dir, fname) \n",
        "                for fname in train_human_names[pic_index-8:pic_index]]\n",
        "\n",
        "for i, img_path in enumerate(next_horse_pix+next_human_pix):\n",
        "  # Set up subplot; subplot indices start at 1\n",
        "  sp = plt.subplot(nrows, ncols, i + 1)\n",
        "  sp.axis('Off') # Don't show axes (or gridlines)\n",
        "\n",
        "  img = mpimg.imread(img_path)\n",
        "  plt.imshow(img)\n",
        "\n",
        "plt.show()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "5oqBkNBJmtUv"
      },
      "source": [
        "## Building a Small Model from Scratch\n",
        "\n",
        "But before we continue, let's start defining the model:\n",
        "\n",
        "Step 1 will be to import tensorflow."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "qvfZg3LQbD-5"
      },
      "outputs": [],
      "source": [
        "import tensorflow as tf"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "BnhYCP4tdqjC"
      },
      "source": [
        "We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "gokG5HKpdtzm"
      },
      "source": [
        "Finally we add the densely connected layers. \n",
        "\n",
        "Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0)."
      ]
    },
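    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick illustration (an addition to this notebook, not part of the original lesson), `tf.sigmoid` squashes any real value into the open interval (0, 1), which is why a single sigmoid neuron can encode the probability of class 1:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Added illustration: sigmoid maps any real number into (0, 1).\n",
        "# Large negative inputs approach 0, large positive inputs approach 1.\n",
        "print(tf.sigmoid([-5.0, 0.0, 5.0]).numpy())  # roughly [0.0067, 0.5, 0.9933]"
      ]
    },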
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "PixZ2s5QbYQ3"
      },
      "outputs": [],
      "source": [
        "model = tf.keras.models.Sequential([\n",
        "    # Note the input shape is the desired size of the image 300x300 with 3 bytes color\n",
        "    # This is the first convolution\n",
        "    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),\n",
        "    tf.keras.layers.MaxPooling2D(2, 2),\n",
        "    # The second convolution\n",
        "    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
        "    tf.keras.layers.MaxPooling2D(2,2),\n",
        "    # The third convolution\n",
        "    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
        "    tf.keras.layers.MaxPooling2D(2,2),\n",
        "    # The fourth convolution\n",
        "    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
        "    tf.keras.layers.MaxPooling2D(2,2),\n",
        "    # The fifth convolution\n",
        "    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
        "    tf.keras.layers.MaxPooling2D(2,2),\n",
        "    # Flatten the results to feed into a DNN\n",
        "    tf.keras.layers.Flatten(),\n",
        "    # 512 neuron hidden layer\n",
        "    tf.keras.layers.Dense(512, activation='relu'),\n",
        "    # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')\n",
        "    tf.keras.layers.Dense(1, activation='sigmoid')\n",
        "])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "s9EaFDP5srBa"
      },
      "source": [
        "The model.summary() method call prints a summary of the NN "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "7ZKj8392nbgP"
      },
      "outputs": [],
      "source": [
        "model.summary()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "DmtkTn06pKxF"
      },
      "source": [
        "The \"output shape\" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions."
      ]
    },
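    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (added here, not part of the original lesson), we can reproduce the spatial sizes in the summary with a little arithmetic: a 3x3 'valid' convolution loses one pixel at each edge, and a 2x2 max-pool halves each dimension (rounding down). The sizes printed below should match the `model.summary()` output above:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Added check: trace the feature-map size through the five conv/pool blocks.\n",
        "size = 300\n",
        "for block in range(1, 6):\n",
        "  size = size - 2   # 3x3 convolution with 'valid' padding loses 2 pixels\n",
        "  size = size // 2  # 2x2 max-pooling halves each dimension\n",
        "  print('after block', block, '->', size, 'x', size)"
      ]
    },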
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "PEkKSpZlvJXA"
      },
      "source": [
        "Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.\n",
        "\n",
        "**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "8DHWhFP_uhq3"
      },
      "outputs": [],
      "source": [
        "from tensorflow.keras.optimizers import RMSprop\n",
        "\n",
        "model.compile(loss='binary_crossentropy',\n",
        "              optimizer=RMSprop(lr=0.001),\n",
        "              metrics=['accuracy'])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Sn9m9D3UimHM"
      },
      "source": [
        "### Data Preprocessing\n",
        "\n",
        "Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).\n",
        "\n",
        "As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).\n",
        "\n",
        "In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit`, `evaluate_generator`, and `predict_generator`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ClebU9NJg99G"
      },
      "outputs": [],
      "source": [
        "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
        "\n",
        "# All images will be rescaled by 1./255\n",
        "train_datagen = ImageDataGenerator(rescale=1/255)\n",
        "validation_datagen = ImageDataGenerator(rescale=1/255)\n",
        "\n",
        "# Flow training images in batches of 128 using train_datagen generator\n",
        "train_generator = train_datagen.flow_from_directory(\n",
        "        '/tmp/horse-or-human/',  # This is the source directory for training images\n",
        "        target_size=(300, 300),  # All images will be resized to 150x150\n",
        "        batch_size=128,\n",
        "        # Since we use binary_crossentropy loss, we need binary labels\n",
        "        class_mode='binary')\n",
        "\n",
        "# Flow training images in batches of 128 using train_datagen generator\n",
        "validation_generator = validation_datagen.flow_from_directory(\n",
        "        '/tmp/validation-horse-or-human/',  # This is the source directory for training images\n",
        "        target_size=(300, 300),  # All images will be resized to 150x150\n",
        "        batch_size=32,\n",
        "        # Since we use binary_crossentropy loss, we need binary labels\n",
        "        class_mode='binary')"
      ]
    },
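    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check (added here, not part of the original lesson), we can pull a single batch from the training generator to confirm the shapes and binary labels it yields. Note that `flow_from_directory` assigns class indices alphabetically, so 'horses' is 0 and 'humans' is 1:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Added check: inspect one batch from the generator.\n",
        "sample_images, sample_labels = next(train_generator)\n",
        "print(sample_images.shape)  # expected: (128, 300, 300, 3), rescaled to [0, 1]\n",
        "print(sample_labels[:10])   # binary labels: 0.0 ('horses') or 1.0 ('humans')"
      ]
    },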
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "mu3Jdwkjwax4"
      },
      "source": [
        "### Training\n",
        "Let's train for 15 epochs -- this may take a few minutes to run.\n",
        "\n",
        "Do note the values per epoch.\n",
        "\n",
        "The Loss and Accuracy are a great indication of progress of training. It's making a guess as to the classification of the training data, and then measuring it against the known label, calculating the result. Accuracy is the portion of correct guesses. "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Fb1_lgobv81m"
      },
      "outputs": [],
      "source": [
        "history = model.fit(\n",
        "      train_generator,\n",
        "      steps_per_epoch=8,  \n",
        "      epochs=15,\n",
        "      verbose=1,\n",
        "      validation_data = validation_generator,\n",
        "      validation_steps=8)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "o6vSHzPR2ghH"
      },
      "source": [
        "###Running the Model\n",
        "\n",
        "Let's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, it will then upload them, and run them through the model, giving an indication of whether the object is a horse or a human."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "DoWp43WxJDNT"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "from google.colab import files\n",
        "from keras.preprocessing import image\n",
        "\n",
        "uploaded = files.upload()\n",
        "\n",
        "for fn in uploaded.keys():\n",
        " \n",
        "  # predicting images\n",
        "  path = '/content/' + fn\n",
        "  img = image.load_img(path, target_size=(300, 300))\n",
        "  x = image.img_to_array(img)\n",
        "  x = np.expand_dims(x, axis=0)\n",
        "\n",
        "  images = np.vstack([x])\n",
        "  classes = model.predict(images, batch_size=10)\n",
        "  print(classes[0])\n",
        "  if classes[0]>0.5:\n",
        "    print(fn + \" is a human\")\n",
        "  else:\n",
        "    print(fn + \" is a horse\")\n",
        " "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "-8EHQyWGDvWz"
      },
      "source": [
        "### Visualizing Intermediate Representations\n",
        "\n",
        "To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet.\n",
        "\n",
        "Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "-5tES8rXFjux"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "import random\n",
        "from tensorflow.keras.preprocessing.image import img_to_array, load_img\n",
        "\n",
        "# Let's define a new Model that will take an image as input, and will output\n",
        "# intermediate representations for all layers in the previous model after\n",
        "# the first.\n",
        "successive_outputs = [layer.output for layer in model.layers[1:]]\n",
        "#visualization_model = Model(img_input, successive_outputs)\n",
        "visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)\n",
        "# Let's prepare a random input image from the training set.\n",
        "horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]\n",
        "human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]\n",
        "img_path = random.choice(horse_img_files + human_img_files)\n",
        "\n",
        "img = load_img(img_path, target_size=(300, 300))  # this is a PIL image\n",
        "x = img_to_array(img)  # Numpy array with shape (150, 150, 3)\n",
        "x = x.reshape((1,) + x.shape)  # Numpy array with shape (1, 150, 150, 3)\n",
        "\n",
        "# Rescale by 1/255\n",
        "x /= 255\n",
        "\n",
        "# Let's run our image through our network, thus obtaining all\n",
        "# intermediate representations for this image.\n",
        "successive_feature_maps = visualization_model.predict(x)\n",
        "\n",
        "# These are the names of the layers, so can have them as part of our plot\n",
        "layer_names = [layer.name for layer in model.layers]\n",
        "\n",
        "# Now let's display our representations\n",
        "for layer_name, feature_map in zip(layer_names, successive_feature_maps):\n",
        "  if len(feature_map.shape) == 4:\n",
        "    # Just do this for the conv / maxpool layers, not the fully-connected layers\n",
        "    n_features = feature_map.shape[-1]  # number of features in feature map\n",
        "    # The feature map has shape (1, size, size, n_features)\n",
        "    size = feature_map.shape[1]\n",
        "    # We will tile our images in this matrix\n",
        "    display_grid = np.zeros((size, size * n_features))\n",
        "    for i in range(n_features):\n",
        "      # Postprocess the feature to make it visually palatable\n",
        "      x = feature_map[0, :, :, i]\n",
        "      x -= x.mean()\n",
        "      x /= x.std()\n",
        "      x *= 64\n",
        "      x += 128\n",
        "      x = np.clip(x, 0, 255).astype('uint8')\n",
        "      # We'll tile each filter into this big horizontal grid\n",
        "      display_grid[:, i * size : (i + 1) * size] = x\n",
        "    # Display the grid\n",
        "    scale = 20. / n_features\n",
        "    plt.figure(figsize=(scale * n_features, scale))\n",
        "    plt.title(layer_name)\n",
        "    plt.grid(False)\n",
        "    plt.imshow(display_grid, aspect='auto', cmap='viridis')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "tuqK2arJL0wo"
      },
      "source": [
        "As you can see we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being \"activated\"; most are set to zero. This is called \"sparsity.\" Representation sparsity is a key feature of deep learning.\n",
        "\n",
        "\n",
        "These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "j4IBgYCYooGD"
      },
      "source": [
        "## Clean Up\n",
        "\n",
        "Before running the next exercise, run the following cell to terminate the kernel and free memory resources:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "651IgjLyo-Jx"
      },
      "outputs": [],
      "source": [
        "import os, signal\n",
        "os.kill(os.getpid(), signal.SIGKILL)"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "include_colab_link": true,
      "name": "Course 1 - Part 8 - Lesson 3 - Notebook.ipynb",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}