Traffic Sign Classification with Keras and Deep Learning

In this tutorial, you will learn how to train your own traffic sign classifier/recognizer capable of obtaining over 95% accuracy using Keras and Deep Learning.

Last weekend I drove down to Maryland to visit my parents. As I pulled into their driveway I noticed something strange — there was a car I didn’t recognize sitting in my dad’s parking spot.

I parked my car, grabbed my bags out of the trunk, and before I could even get through the front door, my dad came out, excited and enlivened, exclaiming that he had just gotten back from the car dealership and traded in his old car for a brand new 2020 Honda Accord.

Most everyone enjoys getting a new car, but for my dad, who puts a lot of miles on his car each year for work, getting a new car is an especially big deal.

My dad wanted the family to go for a drive and check out the car, so my dad, my mother, and I climbed into the vehicle, the “new car scent” hitting us like bad cologne you’re ashamed to admit you like.

As we drove down the road my mother noticed that the speed limit was automatically showing up on the car’s dashboard — how was that happening?

The answer?

Traffic sign recognition.

In the 2020 Honda Accord models, a front camera sensor is mounted to the interior of the windshield behind the rearview mirror.

That camera polls frames, looks for signs along the road, and then classifies them.

The recognized traffic sign is then shown on the LCD dashboard as a reminder to the driver.

It’s admittedly a pretty neat feature and the rest of the drive quickly turned from a vehicle test drive into a lecture on how computer vision and deep learning algorithms are used to recognize traffic signs (I’m not sure my parents wanted that lecture but they got it anyway).

When I returned from visiting my parents I decided it would be fun (and educational) to write a tutorial on traffic sign recognition — you can use this code as a starting point for your own traffic sign recognition projects.

To learn more about traffic sign classification with Keras and Deep Learning, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Traffic Sign Classification with Keras and Deep Learning

In the first part of this tutorial, we’ll discuss the concept of traffic sign classification and recognition, including the dataset we’ll be using to train our own custom traffic sign classifier.

From there we’ll review our directory structure for the project.

We’ll then implement TrafficSignNet, a Convolutional Neural Network which we’ll train on our dataset.

Given our trained model we’ll evaluate its accuracy on the test data and even learn how to make predictions on new input data as well.

What is traffic sign classification?

Figure 1: Traffic sign recognition is a two-stage process consisting of (1) detection/localization and (2) classification. In this blog post we will focus only on the classification of traffic signs with Keras and deep learning.

Traffic sign classification is the process of automatically recognizing traffic signs along the road, including speed limit signs, yield signs, merge signs, etc. Being able to automatically recognize traffic signs enables us to build “smarter cars”.

Self-driving cars need traffic sign recognition in order to properly parse and understand the roadway. Similarly, “driver alert” systems inside cars need to understand the roadway around them to help aid and protect drivers.

Traffic sign recognition is just one of the problems that computer vision and deep learning can solve.

Our traffic sign dataset

Figure 2: The German Traffic Sign Recognition Benchmark (GTSRB) dataset will be used for traffic sign classification with Keras and deep learning. (image source)

The dataset we’ll be using to train our own custom traffic sign classifier is the German Traffic Sign Recognition Benchmark (GTSRB).

The GTSRB dataset consists of 43 traffic sign classes and nearly 50,000 images.

A sample of the dataset can be seen in Figure 2 above. Notice how the traffic signs have been pre-cropped for us, implying that the dataset annotators/creators manually labeled the signs in the images and extracted the traffic sign Region of Interest (ROI) for us, thereby simplifying the project.

In the real world, traffic sign recognition is a two-stage process:

  1. Localization: Detect and localize where in an input image/frame a traffic sign is.
  2. Recognition: Take the localized ROI and actually recognize and classify the traffic sign.

Deep learning object detectors can perform localization and recognition in a single forward-pass of the network — if you’re interested in learning more about object detection and traffic sign localization using Faster R-CNNs, Single Shot Detectors (SSDs), and RetinaNet, be sure to refer to my book, Deep Learning for Computer Vision with Python, where I cover the topic in detail.

Challenges with the GTSRB dataset

There are a number of challenges in the GTSRB dataset, the first being that images are low resolution, and worse, have poor contrast (as seen in Figure 2 above). These images are pixelated, and in some cases, it’s extremely challenging, if not impossible, for the human eye and brain to recognize the sign.

The second challenge with the dataset is handling class skew:

Figure 3: The German Traffic Sign Recognition Benchmark (GTSRB) dataset is an example of an unbalanced dataset. We will account for this when training our traffic sign classifier with Keras and deep learning. (image source)

The top class (Speed limit 50km/h) has over 2,000 examples while the least represented class (Speed limit 20km/h) has under 200 examples — that’s an order of magnitude difference!

In order to successfully train an accurate traffic sign classifier we’ll need to devise an experiment that can:

  • Preprocess our input images to improve contrast.
  • Account for class label skew.

Project structure

Go ahead and use the “Downloads” section of this article to download the source code. Once downloaded, unzip the files on your machine.

From here we’ll download the GTSRB dataset from Kaggle. Simply click the “Download (300MB)” button in the Kaggle menubar and follow the prompts to sign in to Kaggle using one of the third-party authentication partners or with your email address. You may then click the “Download (300MB)” button once more and your download will commence as shown:

Figure 4: How to download the GTSRB dataset from Kaggle for traffic sign recognition with Keras and deep learning.

I extracted the dataset into my project directory as you can see here:

Our project contains three main directories and one Python module:

  • gtsrb-german-traffic-sign/ : Our GTSRB dataset.
  • output/ : Contains our output model and training history plot generated by our training script.
  • examples/ : Contains a random sample of 25 annotated images generated by our prediction script.
  • pyimagesearch : A module that comprises our TrafficSignNet CNN.

We will also walk through our training and prediction scripts. Our training script loads the data, compiles the model, trains, and outputs the serialized model and plot image to disk. From there, our prediction script generates annotated images for visual validation purposes.

Configuring your development environment

For this article, you’ll need to have the following packages installed:

  • OpenCV
  • NumPy
  • scikit-learn
  • scikit-image
  • imutils
  • matplotlib
  • TensorFlow 2.0 (CPU or GPU)

Luckily each of these is easily installed with pip, a Python package manager.

Let’s install the packages now, ideally into a virtual environment as shown (you’ll need to create the environment):

Using pip to install OpenCV is hands-down the fastest and easiest way to get started with OpenCV. Instructions on how to create your virtual environment are included in the tutorial at this link. This method (as opposed to compiling from source) simply checks prerequisites and places a precompiled binary that will work on most systems into your virtual environment site-packages. Optimizations may or may not be active. Just keep in mind that the maintainer has elected not to include patented algorithms for fear of lawsuits. Sometimes on PyImageSearch, we use patented algorithms for educational and research purposes (there are free alternatives that you can use commercially). Nevertheless, the pip method is a great option for beginners — just remember that you don’t have the full install. If you need the full install, refer to my install tutorials page.

If you are curious about (1) why we are using TensorFlow 2.0, and (2) wondering why I didn’t instruct you to install Keras, you may be surprised to know that Keras is actually included as part of TensorFlow now. Admittedly, the marriage of TensorFlow and Keras is built upon an interesting past. Be sure to read Keras vs. tf.keras: What’s the difference in TensorFlow 2.0? if you are curious about why TensorFlow now includes Keras.

Once your environment is ready to go, it is time to work on recognizing traffic signs with Keras!

Implementing TrafficSignNet, our CNN traffic sign classifier

Figure 5: The Keras deep learning framework is used to build a Convolutional Neural Network (CNN) for traffic sign classification.

Let’s go ahead and implement a Convolutional Neural Network to classify and recognize traffic signs.

Note: Be sure to review my Keras Tutorial if this is your first time building a CNN with Keras.

I have decided to name this classifier TrafficSignNet — open up the file in your project directory and then insert the following code:
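The original file isn’t reproduced here, but based on the walkthrough that follows, a self-contained sketch of the architecture might look like the code below. The filter counts, dense layer size, and dropout rate are assumptions, and the line numbers referenced in the walkthrough refer to the original file, not this sketch:

```python
# a sketch of a TrafficSignNet-style architecture, assembled from the
# layer sets described in this tutorial (CONV => RELU => BN => POOL,
# then two (CONV => RELU) * 2 => POOL sets, then an FC + softmax head)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization,
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D)

class TrafficSignNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1

        # CONV => RELU => BN => POOL with a 5x5 kernel to capture
        # larger traffic sign shapes and color blobs
        model.add(Conv2D(8, (5, 5), padding="same",
            input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))

        # first (CONV => RELU => CONV => RELU) => POOL set
        model.add(Conv2D(16, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(16, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))

        # second, deeper (CONV => RELU => CONV => RELU) => POOL set
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))

        # fully connected head with dropout, then the softmax classifier
        model.add(Flatten())
        model.add(Dense(128))
        model.add(Activation("relu"))
        model.add(Dropout(0.5))
        model.add(Dense(classes))
        model.add(Activation("softmax"))
        return model

# build the network for 32x32x3 inputs and the 43 GTSRB classes
model = TrafficSignNet.build(width=32, height=32, depth=3, classes=43)
```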

Our tf.keras  imports are listed on Lines 2-9. We will be taking advantage of Keras’ Sequential API to build our TrafficSignNet CNN (Line 2).

Line 11 defines our TrafficSignNet class followed by Line 13 which defines our build method. The build method accepts four parameters: the image width and height, the depth, and the number of classes in the dataset.

Lines 16-19 initialize our Sequential  model and specify the CNN’s inputShape .

Let’s define our CONV => RELU => BN => POOL  layer set:

This set of layers uses a 5×5 kernel to learn larger features — it will help to distinguish between different traffic sign shapes and color blobs on the traffic signs themselves.

From there we define two sets of (CONV => RELU => CONV => RELU) * 2 => POOL layers:

These sets of layers deepen the network by stacking two sets of CONV => RELU => BN  layers before applying max-pooling to reduce volume dimensionality.

The head of our network consists of two sets of fully connected layers and a softmax classifier:

Dropout is applied as a form of regularization which aims to prevent overfitting. The result is often a more generalizable model.

Line 54 returns our model; we will compile and train the model in our training script next.

If you struggled to understand the terms in this class, be sure to refer to Deep Learning for Computer Vision with Python for conceptual knowledge on the layer types. My Keras Tutorial also provides a brief overview.

Implementing our training script

Now that our TrafficSignNet architecture has been implemented, let’s create our Python training script that will be responsible for:

  • Loading our training and testing split from the GTSRB dataset
  • Preprocessing the images
  • Training our model
  • Evaluating our model’s accuracy
  • Serializing the model to disk so we can later use it to make predictions on new traffic sign data

Let’s get started — open up the file in your project directory and add the following code:

Lines 2-18 import our necessary packages:

  • matplotlib : The de facto plotting package for Python. We use the "Agg"  backend ensuring that we are able to export our plots as image files to disk (Lines 2 and 3).
  • TrafficSignNet : Our traffic sign Convolutional Neural Network that we coded with Keras in the previous section (Line 6).
  • tensorflow.keras : Ensures that we can handle data augmentation, Adam  optimization, and one-hot encoding (Lines 7-9).
  • classification_report : A scikit-learn method for printing a convenient evaluation for training (Line 10).
  • skimage : We will use scikit-image for preprocessing our dataset in lieu of OpenCV as scikit-image provides some additional preprocessing algorithms that OpenCV does not (Lines 11-13).
  • numpy : For array and numerical operations (Line 15).
  • argparse : Handles parsing command line arguments (Line 16).
  • random : For shuffling our dataset randomly (Line 17).
  • os : We’ll use this module for grabbing our operating system’s path separator (Line 18).

Let’s go ahead and define a function to load our data from disk:

The GTSRB dataset is pre-split into training/testing splits for us. Line 20 defines load_split  to load each training split respectively. It accepts a path to the base of the dataset as well as a .csv  file path which contains the class label for each image.

Lines 22 and 23 initialize our data  and labels  lists which this function will soon populate and return.

Line 28 loads our .csv  file, strips whitespace, and grabs each row via the newline delimiter, skipping the first header row. The result is a list of rows  which Line 29 then shuffles randomly.

The result of Lines 28 and 29 can be seen here (i.e. if you were to print the first three rows in the list via  print(rows[:3]) ):

The format of the data is: Width, Height, X1, Y1, X2, Y2, ClassID, Image Path .
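As a concrete illustration, here is a minimal sketch of that parsing logic run against two hypothetical rows (the header names and image paths below are made up for this example; the real .csv files ship with the Kaggle download):

```python
import random

# two hypothetical rows in the GTSRB annotation format described above:
# Width, Height, X1, Y1, X2, Y2, ClassID, Image Path
csv_data = """Width,Height,X1,Y1,X2,Y2,ClassId,Path
27,26,5,5,22,20,20,Train/20/00020_00000_00000.png
28,27,5,6,23,22,0,Train/0/00000_00000_00029.png"""

# strip whitespace, split on the newline delimiter, and skip the header
rows = csv_data.strip().split("\n")[1:]
random.shuffle(rows)

# the last two comma-separated fields of each row are the class ID
# (label) and the image path
for row in rows:
    (label, imagePath) = row.strip().split(",")[-2:]
```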

Let’s go ahead and loop over the rows  now and extract + preprocess the data that we need:

Line 32 loops over the rows . Inside the loop, we proceed to:

  • Display a status update to the terminal for every 1000th image processed (Lines 34 and 35).
  • Extract the ClassID ( label) and imagePath  from the row  (Line 39).
  • Derive the full path to the image file + load the image with scikit-image (Lines 42 and 43).

As mentioned in the “Challenges with the GTSRB dataset” section above, one of the biggest issues with the dataset is that many images have low contrast, making it challenging for the human eye to recognize a given sign (let alone a computer vision/deep learning model).

We can automatically improve image contrast by applying an algorithm called Contrast Limited Adaptive Histogram Equalization (CLAHE), the implementation of which can be found in the scikit-image library.

Using CLAHE we can improve the contrast of our traffic sign images:

Figure 6: As part of preprocessing our GTSRB dataset for deep learning classification of traffic signs, we apply a method known as Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve image contrast. Original input images can be seen on the left; notice how contrast is very low and some signs cannot be recognized. By applying CLAHE (right) we can improve image contrast.

While our images may seem a bit “unnatural” to the human eye, the improvement in contrast will better aid our computer vision algorithms in automatically recognizing our traffic signs.

Note: A big thanks to Thomas Tracey who proposed using CLAHE to improve traffic sign recognition in his 2017 article.

Let’s preprocess our images by applying CLAHE now:
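Here is a minimal sketch of the resize + CLAHE step using scikit-image, run on a random low-contrast array standing in for a loaded image (the clip_limit value is an assumption, not necessarily what the original script used):

```python
import numpy as np
from skimage import exposure, transform

# a random low-contrast "image" stands in for a traffic sign loaded
# from disk; in the real script the image comes from skimage.io.imread
image = np.random.uniform(low=0.4, high=0.6, size=(60, 60, 3))

# resize the image to 32x32 pixels, ignoring the aspect ratio
image = transform.resize(image, (32, 32))

# apply CLAHE contrast correction to boost the image's contrast
image = exposure.equalize_adapthist(image, clip_limit=0.1)
```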

To complete our loop over the rows , we:

  • Resize the image to 32×32 pixels (Line 48).
  • Apply CLAHE image contrast correction (Line 49).
  • Update data  and labels  lists with the image  itself and the class label  (Lines 52 and 53).

Then, Lines 56-60 convert the data  and labels  into NumPy arrays and return  them to the calling function.

With our load_split  function defined, now we can move on to parsing command line arguments:

Our three command line arguments consist of:

  • --dataset : The path to our GTSRB dataset.
  • --model : The desired path/filename of our output model.
  • --plot : The path to our training history plot.
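A sketch of the corresponding argparse block, parsing a hypothetical command line rather than sys.argv (the short flag names are assumptions):

```python
import argparse

# construct the argument parser; the flag names follow the list above
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="path to input GTSRB dataset")
ap.add_argument("-m", "--model", required=True,
    help="path to output model")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to training history plot")

# parse a hypothetical command line (the real script parses sys.argv)
args = vars(ap.parse_args([
    "--dataset", "gtsrb-german-traffic-sign",
    "--model", "output/trafficsignnet.model"]))
```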

Let’s initialize a few hyperparameters and load our class label names:

Lines 74-76 initialize the number of epochs to train for, our initial learning rate, and batch size.

Lines 79 and 80 load the class labelNames  from a .csv  file. Unnecessary markup in the file is automatically discarded.

Now let’s go ahead and load + preprocess our data:

In this block we:

  • Derive paths to the training and testing splits (Lines 83 and 84).
  • Use our load_split function to load each of the training/testing splits, respectively (Lines 88 and 89).
  • Preprocess the images by scaling them to the range [0, 1] (Lines 92 and 93).
  • One-hot encode the training/testing class labels (Lines 96-98).
  • Account for skew in our dataset (i.e. the fact that we have significantly more images for some classes than others). Lines 101 and 102 assign a weight to each class for use during training.
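The skew-handling step can be sketched with plain NumPy on a tiny, hypothetical set of labels (the original likely used Keras’ to_categorical for one-hot encoding; np.eye is an equivalent trick):

```python
import numpy as np

# hypothetical integer labels for a small, heavily skewed "dataset"
numLabels = 4
trainY = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 3])

# one-hot encode the labels: each label becomes a row of np.eye
trainY = np.eye(numLabels)[trainY]

# count examples per class, then weight each class by how
# under-represented it is relative to the largest class
classTotals = trainY.sum(axis=0)
classWeight = dict(enumerate(classTotals.max() / classTotals))
```

During training, passing this dictionary as the class_weight parameter makes mistakes on rare classes cost proportionally more, counteracting the skew shown in Figure 3.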

From here, we’ll prepare + train our model :

Lines 105-113 initialize our data augmentation object with random rotation, zoom, shift, and shear settings. Notice that we are not applying horizontal or vertical flips, as traffic signs in the wild will not be flipped.

Lines 117-121 compile our TrafficSignNet  model with the Adam  optimizer and learning rate decay.

Lines 125-131 train the model  using Keras’ fit_generator method. Notice the class_weight  parameter is passed to accommodate the skew in our dataset.
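A sketch of the augmentation and compilation step, assuming the hyperparameter values shown (which may differ from the original). A tiny stand-in model keeps the example self-contained, and the training call itself is shown only as a comment since it needs the full dataset:

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# assumed hyperparameters
NUM_EPOCHS = 30
INIT_LR = 1e-3
BS = 64

# data augmentation: modest rotations/zooms/shifts/shears, and
# explicitly NO flips, since a mirrored sign changes its meaning
aug = ImageDataGenerator(
    rotation_range=10,
    zoom_range=0.15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.15,
    horizontal_flip=False,
    vertical_flip=False,
    fill_mode="nearest")

# stand-in model so this sketch is self-contained; the real script
# compiles TrafficSignNet.build(32, 32, 3, numLabels) instead
model = Sequential([
    Input(shape=(32, 32, 3)),
    Flatten(),
    Dense(43, activation="softmax")])
opt = Adam(learning_rate=INIT_LR)  # the original also used LR decay
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# training would then look like this (not run here; fit_generator was
# the TF 2.0-era API, newer versions use model.fit directly):
# H = model.fit_generator(
#     aug.flow(trainX, trainY, batch_size=BS),
#     validation_data=(testX, testY),
#     steps_per_epoch=trainX.shape[0] // BS,
#     epochs=NUM_EPOCHS,
#     class_weight=classWeight,
#     verbose=1)
```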

Next, we will evaluate the model  and serialize it to disk:

Line 135 evaluates the model  on the testing set. From there, Lines 136 and 137 print a classification report in the terminal.

Line 141 serializes the Keras model  to disk so that we can later use it for inference in our prediction script.

Finally, the following code block plots the training accuracy/loss curves and exports the plot to an image file on disk:

Take special note here that TensorFlow 2.0 has renamed the training history keys:

  • H.history["acc"]  is now H.history["accuracy"] .
  • H.history["val_acc"]  is now H.history["val_accuracy"] .

At this point, you should be using TensorFlow 2.0 (with Keras built-in), but if you aren’t, you can adjust the key names (Lines 149 and 150).
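One version-agnostic way to handle the renamed keys, shown here against a stand-in history dictionary:

```python
# stand-in for H.history: TF 2.0 produces "accuracy"/"val_accuracy",
# while TF 1.x produced "acc"/"val_acc"
history = {
    "loss": [1.2, 0.6], "val_loss": [1.1, 0.7],
    "accuracy": [0.60, 0.85], "val_accuracy": [0.55, 0.80],
}

# pick whichever key name this TensorFlow version produced, then use
# acc_key/val_acc_key when indexing into the history for plotting
acc_key = "accuracy" if "accuracy" in history else "acc"
val_acc_key = "val_" + acc_key
```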

Personally, I still haven’t figured out why the TensorFlow developers made the change to spell out “accuracy” but did not spell out “validation”. It seems counterintuitive to me. That said, all frameworks and codebases have certain nuances that we need to learn to deal with.

Training TrafficSignNet on the traffic sign dataset

To train our traffic sign classification model make sure you have:

  1. Used the “Downloads” section of this tutorial to download the source code.
  2. Followed the “Project structure” section above to download our traffic sign dataset.

From there, open up a terminal and execute the following command:

Note: Some class names have been shortened for readability in the terminal output block.

Figure 7: Keras and deep learning is used to train a traffic sign classifier.

Here you can see we are obtaining 95% accuracy on our testing set!

Implementing our prediction script

Now that our traffic sign recognition model is trained, let’s learn how to:

  1. Load the model from disk
  2. Load sample images from disk
  3. Preprocess the sample images in the same manner as we did for training
  4. Pass our images through our traffic sign classifier
  5. Obtain our final output predictions

To accomplish these goals we’ll need to inspect the contents of our prediction script:

Lines 2-12 import our necessary packages, modules, and functions. Most notably we import load_model  from tensorflow.keras.models , ensuring that we can load our serialized model from disk. You can learn more about saving and loading Keras models here.

We’ll use scikit-image to preprocess our images, just like we did in our training script.

But unlike in our training script, we’ll utilize OpenCV to annotate and write our output image to disk.

Let’s parse our command line arguments:

Lines 15-22 parse three command line arguments:

  • --model : The path to the serialized traffic sign recognizer Keras model on disk (we trained the model in the previous section).
  • --images : The path to a directory of testing images.
  • --examples : Our path to the directory where our annotated output images will be stored.

With each of these paths in the args  dictionary, we’re ready to proceed:

Line 26 loads our trained traffic sign model from disk into memory.

Lines 29 and 30 load and parse the class labelNames .

Lines 34-36 grab the paths to the input images, shuffle  them, and grab 25  sample images.

We’ll now loop over the samples:

Inside our loop over the imagePaths (beginning on Line 39), we:

  • Load the input image with scikit-image (Line 43).
  • Preprocess the image in the same manner as we did for the training data (Lines 44-48). It is absolutely crucial to preprocess our images in the same way we did for training, including (1) resizing, (2) CLAHE contrast adjustment, and (3) scaling to the range [0, 1]. If we don’t preprocess our testing data in the same manner as our training data, our model predictions won’t make sense.
  • Add a dimension to the image — we will perform inference on a batch size of 1 (Line 49).
  • Make a prediction and grab the class label with the highest probability (Lines 52-54).
  • Using OpenCV we load, resize, annotate the image with the label, and write the output image to disk (Lines 58-65).

This process is repeated for all 25 image samples.
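The post-prediction bookkeeping in that loop can be sketched with NumPy alone, using a hypothetical probability vector in place of a real model.predict call (the label names below are illustrative):

```python
import numpy as np

# illustrative subset of the GTSRB class names
labelNames = ["Speed limit (20km/h)", "Speed limit (30km/h)", "Yield"]

# a preprocessed 32x32 image is expanded into a batch of size 1
image = np.random.rand(32, 32, 3)
batch = np.expand_dims(image, axis=0)  # shape: (1, 32, 32, 3)

# model.predict(batch) would return one probability vector per image;
# this hypothetical vector stands in for the real prediction
preds = np.array([[0.05, 0.90, 0.05]])

# grab the index of the class with the highest probability and map it
# to its human-readable label
j = preds.argmax(axis=1)[0]
label = labelNames[j]
```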

Make predictions on traffic sign data

To make predictions on traffic sign data using our trained TrafficSignNet model, make sure you have used the “Downloads” section of this tutorial to download the source code and pre-trained model.

From there, open up a terminal and execute the following command:

Figure 8: Keras deep learning traffic sign classification results.

As you can see, our traffic sign classifier is correctly recognizing our input traffic signs!

Where can I learn more about traffic sign recognition?

Figure 9: In my deep learning book, I cover multiple object detection methods. I actually cover how to build the CNN, train the CNN, and make inferences. Not to mention deep learning fundamentals, best practices, and my personal set of rules of thumb. Grab your copy now so you can start learning new skills.

This tutorial frames traffic sign recognition as a classification problem, meaning that the traffic signs have been pre-cropped from the input image — this process was done when the dataset curators manually annotated and created the dataset.

However, in the real world, traffic sign recognition is actually an object detection problem.

Object detection enables you to not only recognize the traffic sign but also localize where in the input frame the traffic sign is.

The process of object detection is not as simple and straightforward as image classification. It is actually far, far more complicated; the details and intricacies are outside the scope of this blog post. They are, however, within the scope of my deep learning book.

If you’re interested in learning how to:

  1. Prepare and annotate your own image datasets for object detection
  2. Fine-tune and train your own custom object detectors, including Faster R-CNNs, SSDs, and RetinaNet on your own datasets
  3. Uncover my best practices, techniques, and procedures to utilize when training your own deep learning object detectors

…then you’ll want to be sure to take a look at my new deep learning book. Inside Deep Learning for Computer Vision with Python, I will guide you, step-by-step, on building your own deep learning object detectors.

You will learn to replicate my very own experiments by:

  • Training a Faster R-CNN from scratch to localize and recognize 47 types of traffic signs.
  • Training a Single Shot Detector (SSD) on a dataset of front and rear views of vehicles.
  • Recognizing familiar product logos in images using a custom RetinaNet model.
  • Building a weapon detection system using RetinaNet that is capable of real-time video object detection.

Be sure to take a look — and don’t forget to grab your free sample chapters + table of contents PDF while you’re there!


Summary

In this tutorial, you learned how to perform traffic sign classification and recognition with Keras and Deep Learning.

To create our traffic sign classifier, we:

  • Utilized the popular German Traffic Sign Recognition Benchmark (GTSRB) as our dataset.
  • Implemented a Convolutional Neural Network called TrafficSignNet using the Keras deep learning library.
  • Trained TrafficSignNet on the GTSRB dataset, obtaining 95% accuracy.
  • Created a Python script that loads our trained TrafficSignNet model and then classifies new input images.

I hope you enjoyed today’s post on traffic sign classification with Keras!

If you’re interested in learning more about training your own custom deep learning models for traffic sign recognition and detection, be sure to refer to Deep Learning for Computer Vision with Python where I cover the topic in more detail.

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


26 Responses to Traffic Sign Classification with Keras and Deep Learning

  1. DH November 4, 2019 at 10:56 am #

    Cool article, thanks once again!!

    Any thoughts on how Honda and other car manufacturers are actually identifying street signs? I would have thought using a CNN would be too computationally expensive for a neat, but not something people would pay much of a premium for, feature.

    • Adrian Rosebrock November 6, 2019 at 10:03 am #

      Most are running deep learning-based object detectors. I’m not sure on the specifics of each architecture as it’s proprietary information, but I imagine most are quantized in some manner and running on an embedded GPU.

  2. Yash Rathod November 4, 2019 at 1:55 pm #

    Amazing tutorial..
    Well documented..
    Looking forward to implement it ASAP…
    LOVE ADRIAN…❤️❤️

    • Adrian Rosebrock November 6, 2019 at 10:01 am #

      Thanks Yash!

  3. Sarmad Gulzar November 4, 2019 at 2:43 pm #

    Correction in

    On line 152,
    H.history[“accuracy”] should be H.history[“acc”]

    and on 153,
    H.history[“val_accuracy”] should be H.history[“val_acc”]

    • Adrian Rosebrock November 6, 2019 at 10:02 am #

      – In TensorFlow 1.x it’s “acc” and “val_acc”
      – In TensorFlow 2.0 it’s “accuracy” and “val_accuracy”

  4. Andrey November 4, 2019 at 4:13 pm #

    Traceback (most recent call last):
    File “”, line 155, in
    plt.plot(N, H.history[“accuracy”], label=”train_acc”)
    KeyError: ‘accuracy’

    Maybe it should be “acc” and “val_acc”?

    • Adrian Rosebrock November 6, 2019 at 10:02 am #

      See my reply to Sarmad. You’re using TensorFlow 1.x with Keras but this tutorial assumes you’re using TensorFlow 2.0.

  5. JP Cassar November 4, 2019 at 4:48 pm #

    Again, an excellent article. Thanks Adrian!
    As we got a lot of differences between Europe and US, I was wondering if using this example, the US dataset available at can be used.
    Any ideas?

  6. Nikolay Klimchuk November 4, 2019 at 7:18 pm #

    Nice but why all examples I find for European road signs only?
    Is it because US road signs are so different nobody actually able to do any kind of decent recognition for them?

    • Adrian Rosebrock November 6, 2019 at 10:05 am #

      No, it’s simply because the German Traffic Sign database was the easiest and most straightforward to use in this example. If you would like to use a US dataset, see Deep Learning for Computer Vision with Python where I show you how to detect and recognize US signs.

  7. Tony Holdroyd November 5, 2019 at 4:16 am #

    Fantastic tutorial, as always Adrian. Thank you for the work you put into this. Best, Tony

    • Adrian Rosebrock November 6, 2019 at 10:05 am #

      Thanks Tony!

  8. Zizo November 5, 2019 at 5:35 pm #

    Hi Adrian, I’m trying to implement traffic sign recognition project on real-time webcam capturing.

  9. Andy November 6, 2019 at 5:33 am #

    Thanks Adrian, for yet another excellent tutorial with concise instructions.

    Is there a (somewhat) straightforward way to plug this trained model into an OpenCV based object detection routine?

    Would you consider a ‘Part 2’ to this blog post that discusses this further?

    You wrote a great post that used a caffe based model – (
    Can the model we trained here be used in a similar manner via the cv2.dnn.readNetFromTensorflow function?

    • Adrian Rosebrock November 6, 2019 at 10:07 am #

      OpenCV’s “dnn” module has come a long way and is nice to use, but it can complicate things a bit. Instead of trying to fit a Keras/TensorFlow model into OpenCV’s dnn module, just use OpenCV and Keras together. This tutorial will show you how to do exactly that.

  10. Davi November 7, 2019 at 5:57 am #

    Hi adrian why is the trafficsignnet.model on my pc becomes folder? Have any idea?

    • Adrian Rosebrock November 7, 2019 at 10:01 am #

      TensorFlow 2.0 creates a directory with the model files.

  11. Nick Butts November 8, 2019 at 7:57 pm #

    When I try and re-train the network by downloading the code, downloading the gtsrb-german-traffic-sign data set and running (using a virtualenv):
    python -d gtsrb-german-traffic-sign -m output2 -p output2/train_nlb

    It runs for a while and generates the plot. In your run the training and validation accuracy both approach 1.0. In my run the training accuracy approaches 1.0, but the validation accuracy approaches 0.

    • Adrian Rosebrock November 11, 2019 at 12:43 pm #

      What version of Keras/TensorFlow are you using?

  12. Suraj November 9, 2019 at 9:44 pm #

    Hi Adrian, thanks for the wonderful post!

    I have a question regarding the dataset. Are the .csv files created by you or were they a part of the original download? I’m not able to see them in the zip file I downloaded from Kaggle, and I’m not able to find anything about them in the article as well. Wondering what did I miss!

    • Adrian Rosebrock November 11, 2019 at 12:44 pm #

      Those files were part of the Kaggle dataset that I downloaded.

  13. Denis November 11, 2019 at 7:54 am #

    nice tutorial – as always
    your contribution has been very helpful to me, professionally and personally – thanks for that

    I wonder about the second “flatten()” on line 53 ? (in the trafficSignNet)

    I assume the 2D image has been flattened into a 1D structure already on line 46

    So why flattening again ? or am I mistaken ? or is it a typo ?

    Thanks for you insight

    • Adrian Rosebrock November 11, 2019 at 12:43 pm #

      Thats a typo, thanks for pointing it out Denis!
