A simple neural network with Python and Keras

If you’ve been following along with this series of blog posts, then you already know what a huge fan I am of Keras.

Keras is a super powerful, easy-to-use Python library for building neural networks and deep learning networks.

In the remainder of this blog post, I’ll demonstrate how to build a simple neural network using Python and Keras, and then apply it to the task of image classification.


A simple neural network with Python and Keras

To start this post, we’ll quickly review the most common neural network architecture — feedforward networks.

We’ll then write some Python code to define our feedforward neural network and specifically apply it to the Kaggle Dogs vs. Cats classification challenge. The goal of this challenge is to correctly classify whether a given image contains a dog or a cat.

Finally, we’ll review the results of our simple neural network architecture and discuss methods to improve it.

Feedforward neural networks

While there are many, many different neural network architectures, the most common architecture is the feedforward network:

Figure 1: An example of a feedforward neural network with 3 input nodes, a hidden layer with 2 nodes, a second hidden layer with 3 nodes, and a final output layer with 2 nodes.


In this type of architecture, a connection between two nodes is only permitted from nodes in layer i to nodes in layer i + 1 (hence the term feedforward; there are no backwards or inter-layer connections allowed).

Furthermore, the nodes in layer i are fully connected to the nodes in layer i + 1. This implies that every node in layer i connects to every node in layer i + 1. For example, in the figure above, there are a total of 3 x 2 = 6 connections between layer 0 and layer 1 — this is where the term “fully connected”, or “FC” for short, comes from.

We normally use a sequence of integers to quickly and concisely describe the number of nodes in each layer.

For example, the network above is a 3-2-3-2 feedforward neural network:

  • Layer 0 contains 3 inputs, our x_i values. These could be raw pixel intensities or entries from a feature vector.
  • Layers 1 and 2 are hidden layers, containing 2 and 3 nodes, respectively.
  • Layer 3 is the output layer or the visible layer — this is where we obtain the overall output classification from our network. The output layer normally has as many nodes as class labels; one node for each potential output. In our Kaggle Dogs vs. Cats example, we have two output nodes — one for “dog” and another for “cat”.

Implementing our own neural network with Python and Keras

Now that we understand the basics of feedforward neural networks, let’s implement one for image classification using Python and Keras.

To start, you’ll want to follow this tutorial to ensure you have Keras and the associated prerequisites installed on your system.

From there, open up a new file, name it simple_neural_network.py, and we’ll get coding:

We start off by importing our required Python packages. We’ll be using a number of scikit-learn implementations along with Keras layers and activation functions. If you do not already have your development environment configured for Keras, please see this blog post.

We’ll also be using imutils, my personal library of OpenCV convenience functions. If you do not already have imutils installed on your system, you can install it via pip:
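A sketch of the install command:

```shell
$ pip install imutils
```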

Next, let’s define a method to accept an image and describe it. In previous tutorials, we’ve extracted color histograms from images and used these distributions to characterize the contents of an image.

This time, let’s use the raw pixel intensities instead. To accomplish this, we define the image_to_feature_vector function, which accepts an input image and resizes it to a fixed size, ignoring the aspect ratio:

We resize our image to fixed spatial dimensions to ensure each and every image in the input dataset has the same “feature vector” size. This is a requirement when utilizing our neural network — each image must be represented by a vector.

In this case, we resize our image to 32 x 32 pixels and then flatten the 32 x 32 x 3 image (where we have three channels, one each for Red, Green, and Blue) into a 3,072-d feature vector.

The next code block handles parsing our command line arguments and taking care of a few initializations:
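A sketch of the argument parsing, wrapped in a function here so it can be exercised directly (the script itself would call ap.parse_args() at module level):

```python
import argparse

def parse_arguments(argv=None):
    # --dataset is our only switch: the path to the directory of
    # Kaggle Dogs vs. Cats images
    ap = argparse.ArgumentParser()
    ap.add_argument("-d", "--dataset", required=True,
                    help="path to input dataset")
    return vars(ap.parse_args(argv))
```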

We only need a single switch here, --dataset, which is the path to the input directory containing the Kaggle Dogs vs. Cats images. This dataset can be downloaded from the official Kaggle Dogs vs. Cats competition page.

Line 28 grabs the paths to our --dataset of images residing on disk. We then initialize the data and labels lists, respectively, on Lines 31 and 32.

Now that we have our imagePaths, we can loop over them individually, load the images from disk, convert them to feature vectors, and then update the data and labels lists:

The data list now contains the flattened 32 x 32 x 3 = 3,072-d representations of every image in our dataset. However, before we can train our neural network, we first need to perform a bit of preprocessing:
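A sketch of the preprocessing, with NumPy’s np.eye standing in for Keras’s to_categorical when one-hot encoding the labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

def preprocess_and_split(data, labels, num_classes=2):
    # scale the raw pixel intensities to the range [0, 1]
    data = np.array(data, dtype="float") / 255.0
    # convert the string labels to one-hot vectors, a requirement
    # for the cross-entropy loss used during training
    ints = LabelEncoder().fit_transform(labels)
    onehot = np.eye(num_classes)[ints]
    # use 75% of the data for training and 25% for testing
    return train_test_split(data, onehot, test_size=0.25,
                            random_state=42)
```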

Lines 59 and 60 handle scaling the input data to the range [0, 1], followed by converting the labels from a set of integers to a set of vectors (a requirement for the cross-entropy loss function we will apply when training our neural network).

We then construct our training and testing splits on Lines 65 and 66, using 75% of the data for training and the remaining 25% for testing.

For a more detailed review of the data preprocessing stage, please see this blog post.

We are now ready to define our neural network using Keras:
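A sketch of the architecture in current Keras syntax (the API of the era also took a weight initialization argument, omitted here):

```python
import keras
from keras.models import Sequential
from keras.layers import Dense

# define the 3072-768-384-2 feedforward architecture
model = Sequential()
model.add(keras.Input(shape=(3072,)))
model.add(Dense(768, activation="relu"))
model.add(Dense(384, activation="relu"))
model.add(Dense(2, activation="softmax"))
```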

On Lines 69-74 we construct our neural network architecture — a 3072-768-384-2 feedforward neural network.

Our input layer has 3,072 nodes, one for each of the 32 x 32 x 3 = 3,072 raw pixel intensities in our flattened input images.

We then have two hidden layers with 768 and 384 nodes, respectively. These node counts were determined via a cross-validation and hyperparameter tuning experiment performed offline.

The output layer has 2 nodes — one for each of the “dog” and “cat” labels.

We then apply a softmax activation function on top of the network — this will give us our actual output class label probabilities.

The next step is to train our model using Stochastic Gradient Descent (SGD):
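A sketch of the training step, wrapped as a function here (learning_rate is the current spelling of the older lr argument):

```python
from keras.optimizers import SGD

def train_model(model, trainData, trainLabels, epochs=50):
    # train the model using SGD with a learning rate of 0.01 and
    # binary cross-entropy loss (we have exactly two class labels)
    sgd = SGD(learning_rate=0.01)
    model.compile(loss="binary_crossentropy", optimizer=sgd,
                  metrics=["accuracy"])
    model.fit(trainData, trainLabels, epochs=epochs,
              batch_size=128, verbose=1)
    return model
```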

To train our model, we’ll set the learning rate parameter of SGD to 0.01. We’ll use the binary_crossentropy loss function for the network as well.

In most cases, you’ll want to use categorical_crossentropy, but since there are only two class labels here, we use binary_crossentropy. For > 2 class labels, make sure you use categorical_crossentropy.

The network is then allowed to train for a total of 50 epochs, meaning that the model “sees” each individual training example 50 times in an attempt to learn an underlying pattern.

The final code block evaluates our Keras neural network on the testing data:
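A sketch of that evaluation (the print format mirrors the log output shown in the comments below):

```python
def evaluate_model(model, testData, testLabels):
    # show the accuracy on the held-out testing set
    print("[INFO] evaluating on testing set...")
    (loss, accuracy) = model.evaluate(testData, testLabels,
                                      batch_size=128, verbose=1)
    print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(
        loss, accuracy * 100))
    return loss, accuracy
```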

Classifying images using neural networks with Python and Keras

To execute our simple_neural_network.py script, make sure you have:

  1. Downloaded the source code to this post by using the “Downloads” section at the bottom of this tutorial.
  2. Downloaded the Kaggle Dogs vs. Cats dataset from the Kaggle competition page.

The following command can be used to train our neural network using Python and Keras:
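Assuming your dataset directory is named kaggle_dogs_vs_cats:

```shell
$ python simple_neural_network.py --dataset kaggle_dogs_vs_cats
```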

Note: You might need to rename your Kaggle dataset directory (or simply update the path supplied to --dataset) before executing the command above.

The output of our script can be seen in the screenshot below:

Figure 2: Training a simple neural network using the Keras deep learning library and the Python programming language.


On my Titan X GPU, the entire process of feature extraction, training the neural network, and evaluation took a total of 1m 15s, with each epoch taking less than one second to complete.

At the end of the 50th epoch, we see that we are getting ~76% accuracy on the training data and 67% accuracy on the testing data.

This ~9% difference in accuracy implies that our network is overfitting a bit; however, it is very common to see ~10% gaps in training versus testing accuracy, especially if you have limited training data.

You should start to become very worried regarding overfitting when your training accuracy reaches 90%+ and your testing accuracy is substantially lower than that.

In either case, this 67.376% is the highest accuracy we’ve obtained thus far in this series of tutorials. As we’ll find out later on, we can easily obtain > 95% accuracy by utilizing Convolutional Neural Networks.

Summary

In today’s blog post, I demonstrated how to train a simple neural network using Python and Keras.

We then applied our neural network to the Kaggle Dogs vs. Cats dataset and obtained 67.376% accuracy utilizing only the raw pixel intensities of the images.

Starting next week, I’ll begin discussing optimization methods such as gradient descent and Stochastic Gradient Descent (SGD). I’ll also include a tutorial on backpropagation to help you understand the inner-workings of this important algorithm.

Before you go, be sure to enter your email address in the form below to be notified when future blog posts are published — you won’t want to miss them!



25 Responses to A simple neural network with Python and Keras

  1. Stan September 26, 2016 at 9:48 pm #

    That is awesome. Thanks. Please keep posting that stuff.

    • Adrian Rosebrock September 27, 2016 at 6:36 am #

      Thanks Stan! I’ll certainly be doing more neural network and deep learning tutorials in the future.

  2. Bogomil September 27, 2016 at 4:01 pm #

    Hi Adrian,

    I used the default backend for Keras (TensorFlow) and got the following:

    Epoch 50/50
    18750/18750 [==============================] – 12s – loss: 0.4859 – acc: 0.7707
    [INFO] evaluating on testing set…
    6250/6250 [==============================] – 1s
    [INFO] loss=0.6020, accuracy: 68.0960%

    is this difference normal ?

    • Adrian Rosebrock September 28, 2016 at 10:42 am #

      Absolutely. Keep in mind that neural networks are stochastic algorithms meaning there is a level of randomness involved with them (specifically the weight initializations). It’s totally normal to see a bit of variance between training runs.

  3. Gilad September 28, 2016 at 1:06 am #

    Hi,
    wonderful post!
    I have a question – how did you manage to pick your parameters (including the NN scheme)?
    No matter what I did (and I did a lot – including adding 2 more NN levels, adding dropout, and changing the SGD parameters and all other parameters), I didn’t manage to get more than your 67%.
    In particular, I wonder why adding more levels and increasing the depth of each didn’t contribute to my score (but, as expected, it did contribute to my run time ;-))
    Only when I increased the resolution to 64×64, and the depth of the 2 NN levels, did I manage to get 68%, and I wonder why it is so low.

    • Adrian Rosebrock September 28, 2016 at 10:37 am #

      Hey Gilad — as the blog post states, I determined the parameters to the network using hyperparameter tuning.

      Regarding the accuracy, keep in mind that this is a simple feedforward neural network. 68% accuracy is actually quite good for only considering the raw pixel intensities. And again, as the blog post states, we require a more powerful network architecture (i.e., Convolutional Neural Networks) to obtain higher accuracy. I’ll be covering how to apply CNNs to the Dogs vs. Cats dataset in a future blog post. In the meantime, I would suggest reading this blog post on MNIST + LeNet to help you get started with CNNs.

  4. Max Kostka September 28, 2016 at 2:10 pm #

    Yes, absolutely awesome Adrian, i am already totally eager for a simple convolutional neural network. I love your blog :) Been following it for a year now. Keep up the great work.
    Btw, i did this simple neural network on a raspberry Pi 2 and FYI it took almost 5 hours 😀

    • Adrian Rosebrock September 30, 2016 at 6:51 am #

      Thanks for the kind words Max, I’m happy the tutorial helped you (and that you’ve been a long time reader)!

      If you would like a simple CNN, take a look at this blog post on LeNet to help you get started. Future posts will discuss each of the layer types in detail, etc.

      • Max Kostka September 30, 2016 at 1:29 pm #

        i did that right away, another awesome post:D and fyi, the training there on a raspi 2 took almost about 19 hours.

  5. Marios September 29, 2016 at 11:09 pm #

    You could also do an implementation of your NN using TensorFlow!

    • Adrian Rosebrock September 30, 2016 at 6:40 am #

      Keras can use either Theano or TensorFlow as a backend — it’s really your choice. I personally like using Keras because it adds a layer of abstraction over what would otherwise be a lot more code to accomplish the same task. In future blog posts I’m planning on continuing using Keras, but I’ll also consider the “nitty-gritty” with TensorFlow as well!

  6. roberto October 1, 2016 at 7:52 am #

    Hello Adrian, awesome work!

    I ran the code, and I would like to use it to classify some images, but I don’t want to retrain it every time. How can I save the model and use it to classify?

    ps: I’ll be waiting for next post to improve the accuracy!

    Regards!!!

    • Adrian Rosebrock October 2, 2016 at 9:02 am #

      Once your model is trained you can serialize it to disk using model.save and then load it again via load_model. Take a look at the Keras documentation for more information and a code example.

  7. Atti November 29, 2016 at 10:53 am #

    great article thanks for all the insights

  8. Alberto Franzaroli December 1, 2016 at 6:06 am #

    Now there is also an open-source library, the Microsoft Cognitive Toolkit.
    Would you like to try it and compare it with Keras?

    • Adrian Rosebrock December 1, 2016 at 7:20 am #

      I haven’t used the Microsoft Cognitive Toolkit before, but I’ll look into it. I don’t normally use Microsoft products.

  9. Dharma KC December 11, 2016 at 9:06 am #

    Please can you provide the link to the tutorial with convolutional neural network to solve this problem with 95% accuracy. Thank you.

    • Adrian Rosebrock December 11, 2016 at 10:46 am #

      I will be covering how to obtain 95%+ accuracy in the Dogs vs. Cats challenge in my upcoming deep learning book. Stay tuned!

  10. UDAY December 12, 2016 at 7:25 am #

    How much time will it take to train without a GPU?
    And how can we get a GPU for trial?

  11. Tajj kasem December 14, 2016 at 4:57 pm #

    Hi Adrian
    How can I use model.predict() after training my neural network?
    I get this error:

    Exception: Error when checking : expected dense_input_1 to have 2 dimensions, but got array with shape (303, 400, 3)
    How can I fix it?

    • Adrian Rosebrock December 18, 2016 at 9:10 am #

      You need to call image_to_feature_vector on your image before passing it into model.predict.

  12. azhng December 26, 2016 at 10:23 pm #

    Thank you so much for this awesome tutorial. However, when I ran the code on my laptop, the process terminated with an exit code of 137.
    Any idea what that means?

  13. Yunhwan Kim January 10, 2017 at 11:06 pm #

    Hi Adrian,

    Thank you for awesome tutorial.
    I just wonder how you could use Titan X GPU on your (seemingly) OSX machine. I see “ssh” in the top of the terminal window figure, and I guess that you access other (probably linux) machine with GPU from your OSX machine via ssh.
    Then, do you have any plan to post about that process? It would be much helpful if I (and other readers) could use GPU in other machine from OSX machine.
    Thank you again.

    Yunhwan

    • Adrian Rosebrock January 11, 2017 at 10:35 am #

      You are correct, Yunhwan — I am ssh’ing into my Ubuntu GPU box and then running any scripts over the SSH session. Does that help clarify your question? If you are looking to learn more about SSH and how to SSH into machines I would suggest reading up on SSH basics.

  14. Vincent FOUCAULT January 15, 2017 at 1:48 pm #

    Hi Adrien,
    didn’t you forget, in Figure 1, the connection from the first node in layer 2 to the second node in layer 3?

    I’m impatient to see your next books.

    CU.
    Vincent
