Label smoothing with Keras, TensorFlow, and Deep Learning

In this tutorial, you will learn two ways to implement label smoothing using Keras, TensorFlow, and Deep Learning.

When training your own custom deep neural networks there are two critical questions that you should constantly be asking yourself:

  1. Am I overfitting to my training data?
  2. Will my model generalize to data outside my training and testing splits?

Regularization methods are used to help combat overfitting and help our model generalize. Examples of regularization methods include dropout, L2 weight decay, data augmentation, etc.

However, there is another regularization technique we haven’t discussed yet — label smoothing.

Label smoothing:

  • Turns “hard” class label assignments to “soft” label assignments.
  • Operates directly on the labels themselves.
  • Is dead simple to implement.
  • Can lead to a model that generalizes better.

In the remainder of this tutorial, I’ll show you how to implement label smoothing and utilize it when training your own custom neural networks.

To learn more about label smoothing with Keras and TensorFlow, just keep reading!


Label smoothing with Keras, TensorFlow, and Deep Learning

In the first part of this tutorial I’ll address three questions:

  1. What is label smoothing?
  2. Why would we want to apply label smoothing?
  3. How does label smoothing improve our output model?

From there I’ll show you two methods to implement label smoothing using Keras and TensorFlow:

  1. Label smoothing by explicitly updating your labels list
  2. Label smoothing using your loss function

We’ll then train our own custom models using both methods and examine the results.

What is label smoothing and why would we want to use it?

When performing image classification tasks we typically think of labels as hard, binary assignments.

For example, let’s consider the following image from the MNIST dataset:

Figure 1: Label smoothing with Keras, TensorFlow, and Deep Learning is a regularization technique with a goal of enabling your model to generalize to new data better.

This digit is clearly a “7”, and if we were to write out the one-hot encoded label vector for this data point it would look like the following:

[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]

Notice how we’re performing hard label assignment here: every entry in the vector is 0 except for the entry at index 7 (the eighth position, which corresponds to the digit 7), which is 1.

Hard label assignment is natural to us and maps to how our brains want to efficiently categorize and store information in neatly labeled and packaged boxes.

For example, we would look at Figure 1 and say something like:

“I’m sure that’s a 7. I’m going to label it a 7 and put it in the ‘7’ box.”

It would feel awkward and unintuitive to say the following:

“Well, I’m sure that’s a 7. But even though I’m 100% certain that it’s a 7, I’m going to put 90% of that 7 in the ‘7’ box and then divide the remaining 10% into all boxes just so my brain doesn’t overfit to what a ‘7’ looks like.”

If we were to apply soft label assignment to our one-hot encoded vector above it would now look like this:

[0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.91, 0.01, 0.01]

Notice how summing the list of values equals 1, just like in the original one-hot encoded vector.

This type of label assignment is called soft label assignment.

Unlike hard label assignments, where class labels are binary (i.e., positive for one class and negative for all others), soft label assignment allows:

  • The positive class to have the largest probability
  • All other classes to have a very small, nonzero probability
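As a quick sanity check, a minimal NumPy sketch (using the 10-class vector above) shows the soft assignment still behaves like a probability distribution:

```python
import numpy as np

# the soft label vector from above: 0.91 for the "7" class, 0.01 elsewhere
soft = np.full(10, 0.01)
soft[7] = 0.91

print(soft.sum())     # still sums to 1.0 (up to float precision), like the one-hot vector
print(soft.argmax())  # the positive class (digit 7) keeps the largest probability
```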

So, why go through all the trouble?

The answer is that we don’t want our model to become too confident in its predictions.

By applying label smoothing we can lessen the confidence of the model and prevent it from descending into deep crevices of the loss landscape where overfitting occurs.

For a mathematically motivated discussion of label smoothing, I would recommend reading the following article by Lei Mao.

Additionally, be sure to read Müller et al.’s 2019 paper, When Does Label Smoothing Help? as well as He et al.’s Bag of Tricks for Image Classification with Convolutional Neural Networks for detailed studies on label smoothing.

In the remainder of this tutorial, I’ll show you how to implement label smoothing with Keras and TensorFlow.

Project structure

Go ahead and grab today’s files from the “Downloads” section of this tutorial.

Once you have extracted the files, you can use the tree command to view the project structure:

Inside the pyimagesearch module you’ll find two files:

We will not be covering the above implementations today and will instead focus on our two label smoothing methods:

  1. Method #1 uses label smoothing by explicitly updating your labels list in .
  2. Method #2 covers label smoothing using your TensorFlow/Keras loss function in .

Method #1: Label smoothing by explicitly updating your labels list

The first label smoothing implementation we’ll be looking at directly modifies our labels after one-hot encoding — all we need to do is implement a simple custom function.

Let’s get started.

Open up the file in your project structure and insert the following code:

Lines 2-16 import our packages, modules, classes, and functions. In particular, we’ll work with the scikit-learn LabelBinarizer (Line 9).

The heart of Method #1 lies in the smooth_labels function:

Line 18 defines the smooth_labels function. The function accepts two parameters:

  • labels: Contains one-hot encoded labels for all data points in our dataset.
  • factor: The optional “smoothing factor”, set to 10% (i.e., 0.1) by default.

The remainder of the smooth_labels function is best explained with a two-step example.

To start, let’s assume that the following one-hot encoded vector is supplied to our function:

[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]

Notice how we have a hard label assignment here — the true class label is 1 while all others are 0.

Line 20 reduces our hard assignment label of 1 by the supplied factor amount. With factor=0.1, the operation on Line 20 yields the following vector:

[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0, 0.0]

Notice how the hard assignment of 1.0 has been dropped to 0.9.

The next step is to apply a very small amount of confidence to the rest of the class labels in the vector.

We accomplish this task by taking factor and dividing it by the total number of possible class labels. In our case there are 10 possible class labels, so when factor=0.1 we have 0.1 / 10 = 0.01 — that value is added to our vector on Line 21, resulting in:

[0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.91, 0.01, 0.01]

Notice how the “incorrect” classes here have a very small amount of confidence. It doesn’t seem like much, but in practice it can help keep our model from overfitting.

Finally, Line 24 returns the smoothed labels to the calling function.
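Putting those steps together, here is a minimal sketch of the smooth_labels function, reconstructed from the description above (the original implementation may differ slightly in details):

```python
import numpy as np

def smooth_labels(labels, factor=0.1):
    # scale down the "hot" 1.0 entries by the smoothing factor
    # (the operation the text attributes to Line 20)
    labels = labels * (1.0 - factor)
    # spread the removed confidence evenly across all classes
    # (the operation the text attributes to Line 21)
    labels = labels + (factor / labels.shape[1])
    # return the smoothed labels to the caller
    return labels

one_hot = np.zeros((1, 10))
one_hot[0, 7] = 1.0
print(smooth_labels(one_hot, factor=0.1))
# each row: ~0.91 for the true class, ~0.01 for every other class
```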

Note: The smooth_labels function in part comes from Chengwei’s article where they discuss the Bag of Tricks for Image Classification with Convolutional Neural Networks paper. Be sure to read the article if you’re interested in implementations from the paper.

Let’s continue on with our implementation:

Our two command line arguments include:

  • --smoothing: The smoothing factor (refer to the smooth_labels function and example above).
  • --plot: The path to the output plot file.

Let’s prepare our hyperparameters and data:

Lines 36-38 initialize three training hyperparameters including the total number of epochs to train for, initial learning rate, and batch size.

Lines 41 and 42 then initialize our labelNames list for the CIFAR-10 dataset.

Lines 47-49 handle loading the CIFAR-10 dataset.

Mean subtraction, a form of normalization covered in the Practitioner Bundle of Deep Learning for Computer Vision with Python, is applied to the data via Lines 52-54.

Let’s apply label smoothing via Method #1:

Lines 58-62 one-hot encode the labels and convert them to floats.

Line 67 applies label smoothing using our smooth_labels function.
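A sketch of that encoding step, assuming scikit-learn’s LabelBinarizer (imported earlier) and the smoothing arithmetic described above (the stand-in labels and variable names here are illustrative, not the actual CIFAR-10 data):

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

SMOOTHING = 0.1  # would come from the --smoothing command line argument

# stand-in integer class labels (the real script uses the CIFAR-10 labels)
trainY = np.array([3, 0, 3, 1])

# one-hot encode the labels and convert them to floats
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY).astype("float32")

# apply label smoothing exactly as smooth_labels does
trainY = trainY * (1.0 - SMOOTHING) + SMOOTHING / trainY.shape[1]
```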

From here we’ll prepare data augmentation and our learning rate scheduler:

Lines 71-75 instantiate our data augmentation object.

Lines 78-80 initialize learning rate decay via a callback that will be executed at the start of each epoch. To learn about creating your own custom Keras callbacks, be sure to refer to the Starter Bundle of Deep Learning for Computer Vision with Python.
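One common choice for such a per-epoch callback is polynomial learning rate decay; the exact schedule and hyperparameter values below are assumptions for illustration, not necessarily what the script uses:

```python
NUM_EPOCHS = 70   # assumed value of the training-epochs hyperparameter
INIT_LR = 1e-2    # assumed initial learning rate

def poly_decay(epoch):
    # compute the new learning rate via polynomial decay;
    # with power=1.0 this linearly decays the rate to zero at the final epoch
    power = 1.0
    alpha = INIT_LR * (1 - (epoch / float(NUM_EPOCHS))) ** power
    return alpha
```

In Keras, a schedule like this would be hooked up with something like callbacks = [LearningRateScheduler(poly_decay)] and passed to model.fit.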

We then compile and train our model (Lines 84-97).

Once the model is fully trained, we go ahead and generate a classification report as well as a training history plot:

Method #2: Label smoothing using your TensorFlow/Keras loss function

Our second method to implement label smoothing utilizes Keras/TensorFlow’s CategoricalCrossentropy class directly.

The benefit here is that we don’t need to implement any custom function — label smoothing can be applied on the fly when instantiating the CategoricalCrossentropy class with the label_smoothing parameter, like so:

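A minimal sketch of that instantiation (the smoothing value of 0.1 is assumed here; in the training script it comes from the --smoothing argument):

```python
import tensorflow as tf

# label smoothing is applied on the fly by the loss itself
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

# internally this is equivalent to training against
# y_true * (1 - 0.1) + 0.1 / num_classes
```

The loss object is then handed straight to model.compile(loss=loss, ...) with no changes to the labels themselves.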

Again, the benefit here is that we don’t need any custom implementation.

The downside is that we don’t have access to the raw labels list which would be a problem if you need it to compute your own custom metrics when monitoring the training process.

With all that said, let’s learn how to utilize the CategoricalCrossentropy class for label smoothing.

Our implementation is very similar to the previous section but with a few exceptions — I’ll be calling out the differences along the way. For a detailed review of our training script, refer to the previous section.

Open up the file in your directory structure and we’ll get started:

Lines 2-17 handle our imports. Most notably, Line 10 imports CategoricalCrossentropy.

Our --smoothing and --plot command line arguments are the same as in Method #1.

Our next codeblock is nearly the same as Method #1 with the exception of the very last part:

Here we:

  • Initialize training hyperparameters (Lines 29-31).
  • Initialize our CIFAR-10 class names (Lines 34 and 35).
  • Load CIFAR-10 data (Lines 40-42).
  • Apply mean subtraction (Lines 45-47).

Each of those steps is the same as Method #1.

Lines 50-52 one-hot encode labels with a caveat compared to our previous method. The CategoricalCrossentropy class will take care of label smoothing for us, so there is no need to directly modify the trainY and testY lists, as we did previously.

Let’s instantiate our data augmentation and learning rate scheduler callbacks:

And from there we will initialize our loss with the label smoothing parameter:

Lines 84 and 85 initialize our optimizer and loss function.

The heart of Method #2 lies here, in our loss initialization with label smoothing. Notice how we’re passing the label_smoothing parameter to the CategoricalCrossentropy class, which will automatically apply label smoothing for us.

We then compile the model, passing in our loss with label smoothing.

To wrap up, we’ll train our model, evaluate it, and plot the training history:

Label smoothing results

Now that we’ve implemented our label smoothing scripts, let’s put them to work.

Start by using the “Downloads” section of this tutorial to download the source code.

From there, open up a terminal and execute the following command to apply label smoothing using our custom smooth_labels function:

Figure 2: The results of training using our Method #1 of Label smoothing with Keras, TensorFlow, and Deep Learning.

Here you can see we are obtaining ~89% accuracy on our testing set.

But what’s really interesting to study is our training history plot in Figure 2.

Notice that:

  1. Validation loss is significantly lower than the training loss.
  2. Yet the training accuracy is better than the validation accuracy.

That’s quite strange behavior — typically, lower loss correlates with higher accuracy.

How is it possible that the validation loss is lower than the training loss, yet the training accuracy is better than the validation accuracy?

The answer lies in label smoothing — keep in mind that we only smoothed the training labels. The validation labels were not smoothed.

Thus, you can think of the training labels as having additional “noise” in them.

The ultimate goal of applying regularization when training our deep neural networks is to reduce overfitting and increase the ability of our model to generalize.

Typically we achieve this goal by sacrificing training loss/accuracy during training time in hopes of a better generalizable model — that’s the exact behavior we’re seeing here.
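One way to see why smoothing raises training loss (a small NumPy illustration using the 10-class example from earlier): even a model that fits the smoothed target perfectly pays a nonzero cross-entropy, because the smoothed target is no longer a one-hot distribution:

```python
import numpy as np

hard = np.zeros(10)
hard[7] = 1.0
soft = hard * 0.9 + 0.1 / 10.0   # the smoothed training target

# cross-entropy of a "perfect" prediction against each target
perfect_vs_hard = -np.sum(hard[hard > 0] * np.log(hard[hard > 0]))  # exactly 0
perfect_vs_soft = -np.sum(soft * np.log(soft))                      # ~0.50, a hard floor

print(perfect_vs_hard, perfect_vs_soft)
```

That nonzero floor exists only for the (smoothed) training labels, so the training loss sits above the validation loss even when the model generalizes well.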

Next, let’s use Keras/TensorFlow’s CategoricalCrossentropy class when performing label smoothing:

Figure 3: The results of training using our Method #2 of Label smoothing with Keras, TensorFlow, and Deep Learning.

Here we are obtaining ~90% accuracy, but that does not mean the CategoricalCrossentropy method is “better” than the smooth_labels technique — for all intents and purposes these results are “equal” and would likely follow the same distribution if averaged over multiple runs.

Figure 3 displays the training history for the loss-based label smoothing method.

Again, note that our validation loss is lower than our training loss yet our training accuracy is higher than our validation accuracy — this is totally normal behavior when using label smoothing so don’t be alarmed by it.

When should I apply label smoothing?

I recommend applying label smoothing when you are having trouble getting your model to generalize and/or your model is overfitting to your training set.

When those situations happen we need to apply regularization techniques. Label smoothing is just one type of regularization, however. Other types of regularization include:

  • Dropout
  • L1, L2, etc. weight decay
  • Data augmentation
  • Decreasing model capacity

You can mix and match these methods to combat overfitting and increase the ability of your model to generalize.

What’s next?

Figure 4: Learn computer vision and deep learning using my proven method. Students of mine have gone on to change their careers to CV/DL, win research grants, publish papers, and land R&D positions in research labs. Grab your free table of contents and sample chapters here!

Your future in the field of computer vision, deep learning, and data science depends upon furthering your education.

You need to start with a plan rather than bouncing from website to website fumbling for answers without a solid foundation or an understanding of what you are doing and why.

Your plan begins with my deep learning book.

Join 1000s of PyImageSearch website readers like yourself who have mastered deep learning using my book.

After reading my deep learning book and replicating the examples/experiments, you’ll be well-equipped to:

  • Write deep learning Python code independently.
  • Design and tweak custom Convolutional Neural Networks to successfully complete your very own deep learning projects.
  • Train production-ready deep learning models, impressing your teammates and superiors with results.
  • Understand and evaluate emerging and state-of-the-art techniques and publications giving you a leg-up in your research, studies, and workplace.

I’ll be at your side to answer your questions as you embark on your deep learning journey.

Be sure to take a look — and while you’re at it, don’t forget to grab your (free) table of contents and sample chapters.


Summary

In this tutorial you learned two methods to apply label smoothing using Keras, TensorFlow, and Deep Learning:

  1. Method #1: Label smoothing by updating your labels lists using a custom label parsing function
  2. Method #2: Label smoothing using your loss function in TensorFlow/Keras

You can think of label smoothing as a form of regularization that improves the ability of your model to generalize to testing data, but perhaps at the expense of accuracy on your training set — typically this tradeoff is well worth it.

I normally recommend Method #1 of label smoothing when either:

  1. Your entire dataset fits into memory and you can smooth all labels in a single function call.
  2. You need direct access to your label variables.

Otherwise, Method #2 tends to be easier to utilize as (1) it’s baked right into Keras/TensorFlow and (2) does not require any hand-implemented functions.

Regardless of which method you choose, they both do the same thing — smooth your labels, thereby attempting to improve the ability of your model to generalize.

I hope you enjoyed the tutorial!

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!




8 Responses to Label smoothing with Keras, TensorFlow, and Deep Learning

  1. younus December 31, 2019 at 3:11 am #

Hi there, thanks for your tutorials. Can you please recommend a setup (GPU, etc.) for deep learning?

  2. Mohanad January 1, 2020 at 1:39 pm #

    Thank you Adrian, you always brilliant.

    • Adrian Rosebrock January 2, 2020 at 8:43 am #

      Thank you for the kind words, Mohanad 🙂

  3. Zayne January 1, 2020 at 10:23 pm #

    Is this method helpful for solving the problem of unbalanced trainset?

    • Adrian Rosebrock January 2, 2020 at 8:43 am #

No, label smoothing won’t help much with an unbalanced dataset. Try weighting your classes instead.

  4. Yaser Sakkaf January 3, 2020 at 1:51 am #

    Will this still be useful in case of Binary Classification?

    • Adrian Rosebrock January 3, 2020 at 1:15 pm #

      Yes, it can still be used (and useful) for binary classification.

  5. Avinandan Banerjee January 5, 2020 at 8:46 am #

    Thank you for the brilliant post.

    I want to point out that label smoothing is also a technique that helps GANs to converge when training.
