How to use Keras fit and fit_generator (a hands-on tutorial)

In this tutorial, you will learn how the Keras .fit  and .fit_generator  functions work, including the differences between them. To help you gain hands-on experience, I’ve included a full example showing you how to implement a Keras data generator from scratch.

Today’s blog post is inspired by PyImageSearch reader, Shey.

Shey asks:

Hi Adrian, thanks for your tutorials. I’ve been methodically going through every one. They’ve really helped me learn deep learning.

I have a question about the Keras “.fit_generator” function.

I’ve noticed you use it quite a bit in your blog posts but I’m not really sure how the function is different than Keras’ standard “.fit” function.

How is it different? How do I know when to use each? And how do I create a data generator for the “.fit_generator” function?

Shey asks a great question.

The Keras deep learning library includes three separate functions that can be used to train your own models:

  • .fit
  • .fit_generator
  • .train_on_batch

If you’re new to Keras and deep learning you may feel a bit overwhelmed trying to determine which function you’re supposed to use — this confusion is only compounded if you need to work with your own custom data.

To help lift the cloud of confusion regarding the Keras fit and fit_generator functions, I’m going to spend this tutorial discussing:

  1. The differences between Keras’ .fit , .fit_generator , and .train_on_batch  functions
  2. When to use each when training your own deep learning models
  3. How to implement your own Keras data generator and utilize it when training a model using .fit_generator
  4. How to use the .predict_generator  function when evaluating your network after training

To learn more about Keras’ .fit  and .fit_generator  functions, including how to train a deep learning model on your own custom dataset, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

In the first part of today’s tutorial we’ll discuss the differences between Keras’ .fit , .fit_generator , and .train_on_batch  functions.

From there I’ll show you an example of a “non-standard” image dataset which doesn’t contain any actual PNG, JPEG, etc. images at all! Instead, the entire image dataset is represented by two CSV files, one for training and the second for evaluation.

Our goal will be to implement a Keras generator capable of training a network on this CSV image data (don’t worry, I’ll show you how to implement such a generator function from scratch).

Finally, we’ll train and evaluate our network.

When to use Keras’ fit, fit_generator, and train_on_batch functions?

Keras provides three functions that can be used to train your own deep learning models:

  1. .fit
  2. .fit_generator
  3. .train_on_batch

All three of these functions can essentially accomplish the same task — but how they go about doing it is very different.

Let’s explore each of these functions one-by-one, looking at an example function call, and then discussing how they are different from each other.

The Keras .fit function

Figure 1: The Keras .fit function signature.

Let’s start with a call to .fit :
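
A minimal sketch of that call (here, model is assumed to be a compiled Keras model, with trainX and trainY already loaded into memory as NumPy arrays):

```python
# train the network -- the entire dataset lives in RAM and no data
# augmentation is applied
model.fit(trainX, trainY, batch_size=32, epochs=50, verbose=1)
```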

Here you can see that we are supplying our training data (trainX) and training labels (trainY).

We then instruct Keras to allow our model to train for 50 epochs with a batch size of 32.

The call to .fit  is making two primary assumptions here:

  1. Our entire training set can fit into RAM
  2. There is no data augmentation going on (i.e., there is no need for Keras generators)

Instead, our network will be trained on the raw data.

The raw data itself will fit into memory — we have no need to move old batches of data out of RAM and move new batches of data into RAM.

Furthermore, we will not be manipulating the training data on the fly using data augmentation.

The Keras fit_generator function

Figure 2: The Keras .fit_generator function allows for data augmentation and data generators.

For small, simplistic datasets it’s perfectly acceptable to use Keras’ .fit  function.

These datasets are often not very challenging and do not require any data augmentation.

However, real-world datasets are rarely that simple:

  • Real-world datasets are often too large to fit into memory.
  • They also tend to be challenging, requiring us to perform data augmentation to avoid overfitting and increase the ability of our model to generalize.

In those situations we need to utilize Keras’ .fit_generator  function:
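
A sketch of what such a call looks like (the augmentation parameters are illustrative, and model is again assumed to be a compiled Keras model with trainX/trainY and testX/testY in memory):

```python
from keras.preprocessing.image import ImageDataGenerator

# initialize the number of epochs and batch size
NUM_EPOCHS = 50
BS = 32

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
    width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15,
    horizontal_flip=True, fill_mode="nearest")

# train the network on batches yielded by the (infinite) generator
H = model.fit_generator(
    aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY),
    steps_per_epoch=len(trainX) // BS,
    epochs=NUM_EPOCHS)
```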

Here we start by first initializing the number of epochs we are going to train our network for along with the batch size.

We then initialize aug, a Keras ImageDataGenerator object that is used to apply data augmentation, randomly translating, rotating, resizing, etc., our images on the fly.

Performing data augmentation is a form of regularization, enabling our model to generalize better.

However, applying data augmentation implies that our training data is no longer “static” — the data is constantly changing.

Each new batch of data is randomly adjusted according to the parameters supplied to ImageDataGenerator .

Thus, we now need to utilize Keras’ .fit_generator  function to train our model.

As the name suggests, the .fit_generator  function assumes there is an underlying function that is generating the data for it.

The function itself is a Python generator.

Internally, Keras is using the following process when training a model with .fit_generator :

  1. Keras calls the generator function supplied to .fit_generator  (in this case, aug.flow ).
  2. The generator function yields a batch of size BS  to the .fit_generator  function.
  3. The .fit_generator  function accepts the batch of data, performs backpropagation, and updates the weights in our model.
  4. This process is repeated until we have reached the desired number of epochs.

You’ll notice we now need to supply a steps_per_epoch  parameter when calling .fit_generator  (the .fit  method had no such parameter).

Why do we need steps_per_epoch ?

Keep in mind that a Keras data generator is meant to loop infinitely — it should never return or exit.

Since the function is intended to loop infinitely, Keras has no ability to determine when one epoch starts and a new epoch begins.

Therefore, we compute the steps_per_epoch  value as the total number of training data points divided by the batch size. Once Keras hits this step count it knows that it’s a new epoch.
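
In code, that computation is a single integer division (the image count here is purely illustrative):

```python
# e.g., with 1,020 training images and a batch size of 32, one epoch
# consists of 1020 // 32 = 31 steps
steps_per_epoch = NUM_TRAIN_IMAGES // BS
```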

The Keras train_on_batch function

Figure 3: The .train_on_batch function in Keras offers expert-level control over training Keras models.

If you are a deep learning practitioner looking for the finest-grained control over training your Keras models, you may wish to use the .train_on_batch  function:
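
A minimal sketch of the call (batchX and batchY stand in for a single batch of data and labels you have assembled yourself):

```python
# perform a single gradient update on one batch of data
loss = model.train_on_batch(batchX, batchY)
```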

The train_on_batch  function accepts a single batch of data, performs backpropagation, and then updates the model parameters.

The batch of data can be of arbitrary size (i.e., it does not require an explicit batch size to be provided).

The data itself can be generated however you like as well. This data could be raw images on disk or data that has been modified or augmented in some manner.

You’ll typically use the .train_on_batch  function when you have very explicit reasons for wanting to maintain your own training data iterator, such as the data iteration process being extremely complex and requiring custom code.

If you find yourself asking whether you need the .train_on_batch  function, then in all likelihood you don’t.

In 99% of situations you will not need such fine-grained control over training your deep learning models. Instead, a custom Keras .fit_generator  function is likely all you need.

That said, it’s good to know that the function exists if you ever need it.

I typically only recommend using the .train_on_batch  function if you are an advanced deep learning practitioner/engineer, and you know exactly what you’re doing and why.

An image dataset…as a CSV file?

Figure 4: The Flowers-17 dataset has been serialized into two CSV files (training and evaluation). In this blog post we’ll write a custom Keras generator to parse the CSV data and yield batches of images to the .fit_generator function. (credits: image & icon)

The dataset we will be using here today is the Flowers-17 dataset, a collection of 17 different flower species with 80 images per class.

Our goal will be to train a Keras Convolutional Neural Network to correctly classify each species of flowers.

However, there’s a bit of a twist to this project:

  • Instead of working with the raw image files residing on disk…
  • …I’ve serialized the entire image dataset to two CSV files (one for training, and one for evaluation).

To construct each CSV file I:

  • Looped over all images in our input dataset
  • Resized them to 64×64 pixels
  • Flattened the 64×64×3 = 12,288 RGB pixel intensities into a single list
  • Wrote the 12,288 pixel values + class label to the CSV file (one row per image)

Our goal is to now write a custom Keras generator to parse the CSV file and yield batches of images and labels to the .fit_generator  function.

Wait, why bother with a CSV file if you already have the images?

Today’s tutorial is meant to be an example of how to implement your own Keras generator for the .fit_generator  function.

In the real world, datasets are not nicely curated for you:

  • You may have unstructured directories of images.
  • You could be working with both images and text.
  • Your images could be serialized in a particular format, whether that’s a CSV file, a Caffe or TensorFlow record file, etc.

In these situations, you will need to know how to write your own Keras generator functions.

Keep in mind that it’s not the particular data format that’s important here — it’s the actual process of writing your own Keras generator that you need to learn (and that’s exactly what’s covered in the rest of the tutorial).

Project structure

Let’s inspect the project tree for today’s example:
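
It looks roughly like the following (the exact layout ships with the “Downloads”; the module path for MiniVGGNet shown here is illustrative):

```
$ tree --dirsfirst
.
├── pyimagesearch
│   ├── __init__.py
│   └── minivggnet.py
├── flowers17_testing.csv
├── flowers17_training.csv
├── plot.png
└── train.py
```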

Today we’ll be using the MiniVGGNet CNN. We won’t be covering the implementation here today as I’ll assume you already know how to implement a CNN. If not, no worries — just refer to my Keras tutorial.

Our serialized image dataset is contained within flowers17_training.csv and flowers17_testing.csv (included in the “Downloads” associated with today’s post).

We’ll be reviewing train.py , our training script, in the next two sections.

Implementing a custom Keras fit_generator function

Figure 5: What’s our fuel source for our ImageDataGenerator? Two CSV files with serialized image text strings. The generator engine is the ImageDataGenerator from Keras coupled with our custom csv_image_generator. The generator will burn the CSV fuel to create batches of images for training.

Let’s go ahead and get started.

I’ll be assuming you have the following libraries installed on your system:

  • NumPy
  • TensorFlow + Keras
  • Scikit-learn
  • Matplotlib

Each of these packages can be installed via pip in your virtual environment. If you have virtualenvwrapper installed, you can create an environment with mkvirtualenv and activate it with the workon  command. From there you can use pip to set up your environment:
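
For example (the environment name is arbitrary):

```sh
$ mkvirtualenv keras_generators -p python3
$ workon keras_generators
$ pip install numpy tensorflow keras scikit-learn matplotlib
```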

Once your virtual environment is set up, you can proceed with writing the training script. Make sure you use the “Downloads” section of today’s post to grab the source code and Flowers-17 CSV image dataset.

Open up the train.py  file and insert the following code:
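
A sketch of the import block described below (the MiniVGGNet module path is illustrative; the module itself is included with the “Downloads”):

```python
# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from pyimagesearch.minivggnet import MiniVGGNet
import matplotlib.pyplot as plt
import numpy as np
```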

Lines 2-12 import our required packages and modules. Since we’ll be saving our training plot to disk, Line 3 sets matplotlib’s backend appropriately.

Notable imports include ImageDataGenerator , which contains the data augmentation and image generator functionality, along with  MiniVGGNet , our CNN that we will be training.

Let’s define the csv_image_generator  function:
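
The function opens with its signature and the CSV file handle (a sketch consistent with the walkthrough below):

```python
def csv_image_generator(inputPath, bs, lb, mode="train", aug=None):
    # open the CSV file for reading
    f = open(inputPath, "r")
```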

On Line 14 we’ve defined the csv_image_generator . This function is responsible for reading our CSV data file and loading images into memory. It yields batches of data to our Keras .fit_generator  function.

As such, the function accepts the following parameters:

  • inputPath : the path to the CSV dataset file.
  • bs : The batch size. We’ll be using 32.
  • lb : A label binarizer object which contains our class labels.
  • mode : (default is "train" ) If and only if mode=="eval" , a special accommodation is made so that data augmentation is not applied via the aug  object (if one is supplied).
  • aug : (default is None ) If an augmentation object is specified, then we’ll apply it before we yield our images and labels.

On Line 16 we’ll go ahead and open the CSV data file for reading.

Let’s begin looping over the lines of data:
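
Continuing the body of csv_image_generator (again, a sketch matching the explanation that follows):

```python
    # loop indefinitely -- .fit_generator will keep requesting batches
    while True:
        # initialize our batch of images and labels
        images = []
        labels = []
```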

Each line of data in the CSV file contains an image serialized as a text string. Again, I generated the text strings from the Flowers-17 dataset. Additionally, I know this isn’t the most efficient way to store an image, but it is great for the purposes of this example.

Our Keras generator must loop indefinitely, as defined on Line 19. The .fit_generator  function will call our csv_image_generator  function each time it needs a new batch of data.

Furthermore, Keras maintains an internal cache/queue of data, ensuring the model we are training always has data to train on. Keras constantly keeps this queue full, so even if you have reached the total number of epochs to train for, Keras may still be pulling batches from your generator to keep the queue topped up.

Always make sure your function yields data; otherwise, Keras will error out, saying it could not obtain more training data from your generator.

At each iteration of the loop, we’ll reinitialize our images  and labels  to empty lists (Lines 21 and 22).

From there, we’ll begin appending images and labels to these lists until we’ve reached our batch size:
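
The inner loop looks something like this (a sketch; the parsing assumes each CSV row holds a class label followed by the 12,288 pixel values, as described above):

```python
        # keep looping until we reach our batch size
        while len(images) < bs:
            # attempt to read the next line of the CSV file
            line = f.readline()

            # check to see if the line is empty, indicating we have
            # reached the end of the file
            if line == "":
                # reset the file pointer to the beginning of the file
                # and re-read the line
                f.seek(0)
                line = f.readline()

                # if we are evaluating, break from the loop so we
                # don't keep filling the batch with samples from the
                # top of the file
                if mode == "eval":
                    break

            # extract the label and construct the image
            line = line.strip().split(",")
            label = line[0]
            image = np.array([int(x) for x in line[1:]], dtype="uint8")
            image = image.reshape((64, 64, 3))

            # update our corresponding batch lists
            images.append(image)
            labels.append(label)
```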

Let’s walk through this loop:

  • First, we read a line  from our text file object, f  (Line 27).
  • If line  is empty:
    • …we reset our file pointer and try to read a line  (Lines 34 and 35).
    • And if we’re in evaluation mode , we go ahead and break  from the loop (Lines 40 and 41).
  • At this point, we’ll parse our image  and label  from the CSV file (Lines 44-46).
  • We go ahead and call .reshape  to reshape our 1D array into our image which is 64×64 pixels with 3 color channels (Line 47).
  • Finally, we append the image  and label  to their respective lists, repeating this process until our batch of images is full (Lines 50 and 51).

Note: The key to making evaluation work here is that we supply the number of steps  to model.predict_generator , ensuring that each image in the testing set is predicted only once. I’ll be covering how to do this process later in the tutorial.

With our batch of images and corresponding labels ready, we can now take two steps before yielding our batch:
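
Those two steps, plus the yield itself, close out the generator’s while loop (a sketch):

```python
        # one-hot encode the labels
        labels = lb.transform(np.array(labels))

        # if the data augmentation object is not None, apply it
        if aug is not None:
            (images, labels) = next(aug.flow(np.array(images),
                labels, batch_size=bs))

        # yield the batch to the calling function
        yield (np.array(images), labels)
```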

Our final steps include:

  • One-hot encoding labels  (Line 54)
  • Applying data augmentation if necessary (Lines 57-59)

Finally, our generator “yields” our array of images and our list of labels to the calling function on request (Line 62). If you aren’t familiar with the yield  keyword, it is used by Python generator functions as a convenient, memory-efficient shortcut in place of building an iterator class. You can read more about Python Generators here.

Let’s initialize our training parameters:
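
A sketch of those initializations (the epoch count is illustrative; the batch size of 32 matches the text above):

```python
# initialize the paths to our training and testing CSV files
TRAIN_CSV = "flowers17_training.csv"
TEST_CSV = "flowers17_testing.csv"

# initialize the number of epochs to train for and the batch size
NUM_EPOCHS = 75
BS = 32

# initialize the total number of training and testing images
NUM_TRAIN_IMAGES = 0
NUM_TEST_IMAGES = 0
```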

A number of initializations are hardcoded in this example training script:

  • Our training and testing CSV filepaths (Lines 65 and 66).
  • The number of epochs and batch size for training (Lines 69 and 70).
  • Two variables which will hold the number of training and testing images (Lines 73 and 74).

Let’s take a look at the next block of code:
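
A sketch of that block; it simply reads each CSV once, pulling the class label out of the first column of every row:

```python
# open the training CSV file, then initialize the unique set of class
# labels in the dataset along with the testing labels
f = open(TRAIN_CSV, "r")
labels = set()
testLabels = []

# loop over all rows of the training CSV file
for line in f:
    # extract the class label, update the labels set, and increment
    # the total number of training images
    label = line.strip().split(",")[0]
    labels.add(label)
    NUM_TRAIN_IMAGES += 1

# close the training CSV file and open the testing CSV file
f.close()
f = open(TEST_CSV, "r")

# loop over the lines in the testing file
for line in f:
    # extract the class label, update the test labels list, and
    # increment the total number of testing images
    label = line.strip().split(",")[0]
    testLabels.append(label)
    NUM_TEST_IMAGES += 1

# close the testing CSV file
f.close()
```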

This block of code is long, but it has three purposes:

  1. Extract all labels from our training dataset so that we can subsequently determine unique labels. Notice that labels  is a set  which only allows unique entries.
  2. Assemble a list of testLabels .
  3. Count the NUM_TRAIN_IMAGES  and NUM_TEST_IMAGES .

Let’s build our LabelBinarizer  object and construct the  data augmentation object:
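
Something like the following (the specific augmentation parameters are illustrative):

```python
# create the label binarizer for one-hot encoding labels, then encode
# the testing labels
lb = LabelBinarizer()
lb.fit(list(labels))
testLabels = lb.transform(testLabels)

# construct the training image generator for data augmentation
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
    horizontal_flip=True, fill_mode="nearest")
```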

Using the unique labels, we’ll .fit  our LabelBinarizer  object (Lines 107 and 108).

We’ll also go ahead and transform our testLabels  into one-hot encoded vectors (Line 109).

From there, we’ll construct aug , an ImageDataGenerator  (Lines 112-114). Our image data augmentation object will randomly rotate, flip, shear, etc. our training images.

Now let’s initialize our training and testing image generators:
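
A sketch of both initializations:

```python
# initialize both the training and testing image generators
trainGen = csv_image_generator(TRAIN_CSV, BS, lb,
    mode="train", aug=aug)
testGen = csv_image_generator(TEST_CSV, BS, lb,
    mode="train", aug=None)
```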

Our trainGen  and testGen  generator objects generate image data from their respective CSV files using the csv_image_generator  (Lines 117-120).

Notice the subtle similarities and differences:

  • We’re using mode="train"  for both generators
  • Only trainGen  will perform data augmentation

Let’s initialize + compile our MiniVGGNet model with Keras and begin training:
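
A sketch of the compile-and-train block (MiniVGGNet.build is assumed to accept the image width, height, depth, and number of classes):

```python
# initialize our Keras model and compile it
model = MiniVGGNet.build(64, 64, 3, len(lb.classes_))
opt = SGD(lr=1e-2, momentum=0.9, decay=1e-2 / NUM_EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network with our custom data generator
H = model.fit_generator(
    trainGen,
    steps_per_epoch=NUM_TRAIN_IMAGES // BS,
    validation_data=testGen,
    validation_steps=NUM_TEST_IMAGES // BS,
    epochs=NUM_EPOCHS)
```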

Lines 123-126 compile our model. We’re using a Stochastic Gradient Descent optimizer with a hardcoded initial learning rate of 1e-2. Learning rate decay is applied at each epoch. Categorical crossentropy is used since we have more than 2 classes (binary crossentropy would be used otherwise). Be sure to refer to my Keras tutorial for additional reading.

On Lines 130-135 we call .fit_generator  to start training.

The trainGen  generator object is responsible for yielding batches of data and labels to the .fit_generator  function.

Notice how we compute the steps per epoch and validation steps based on the number of images and the batch size. It’s paramount that we supply the steps_per_epoch  value, otherwise Keras will not know when one epoch starts and another begins.

Now let’s evaluate the results of training:
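
A sketch of the evaluation block:

```python
# re-initialize our testing data generator, this time for evaluation
testGen = csv_image_generator(TEST_CSV, BS, lb,
    mode="eval", aug=None)

# make predictions on the testing images, then grab the index of the
# label with the largest predicted probability for each sample
predIdxs = model.predict_generator(testGen,
    steps=(NUM_TEST_IMAGES // BS) + 1)
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testLabels.argmax(axis=1), predIdxs,
    target_names=lb.classes_))
```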

We go ahead and re-initialize our testGen , this time changing the mode  to "eval"  for evaluation purposes.

After re-initialization, we make predictions using our .predict_generator  function and our testGen  (Lines 143 and 144). At the end of this process, we’ll proceed to grab the max prediction indices (Line 145).

Using the testLabels  and predIdxs , we’ll generate a classification_report  via scikit-learn (Lines 149 and 150). The classification report is printed nicely to our terminal for inspection at the end of training and evaluation.

As a final step, we’ll use our training history dictionary, H , to generate a plot with matplotlib:
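
A sketch of the plotting block (note that depending on your Keras version, the history keys may be "acc"/"val_acc" or "accuracy"/"val_accuracy"):

```python
# plot the training loss and accuracy
N = NUM_EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig("plot.png")
```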

The accuracy/loss plot is generated and saved to disk as plot.png  for inspection upon script exit.

Training a Keras model using fit_generator and evaluating with predict_generator

To train our Keras model using our custom data generator, make sure you use the “Downloads” section to download the source code and example CSV image dataset.

From there, open a terminal, navigate to where you downloaded the source code + dataset, and execute the following command:
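
The command itself is simply:

```sh
$ python train.py
```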

Figure 6: Our accuracy/loss Keras training plot for MiniVGGNet trained on Flowers-17.

Here you can see that our network has obtained 80% accuracy on the evaluation set, which is quite respectable for the relatively shallow CNN used.

Most importantly, you learned how to utilize:

  • Data generators
  • .fit_generator
  • .predict_generator

…all to train and evaluate your own custom Keras model!

Again, it’s not the actual format of the data itself that’s important here. Instead of CSV files, we could have been working with Caffe or TensorFlow record files, a combination of numerical/categorical data along with images, or any other synthesis of data that you may encounter in the real-world.

Instead, it’s the actual process of implementing your own Keras data generator that matters here.

Follow the steps in this tutorial and you’ll have a blueprint that you can use for implementing your own Keras data generators.

Need more hands-on experience working with large datasets and Keras generators?

Figure 7: My deep learning book, Deep Learning for Computer Vision with Python.

Are you interested in gaining more hands-on experience working with large datasets and deep learning?

If so, you’ll want to take a look at my book, Deep Learning for Computer Vision with Python.

Inside the book you’ll find:

  1. Super practical walkthroughs that present solutions to actual, real-world image classification problems on large datasets.
  2. Hands-on tutorials (with lots of code) that not only show you the algorithms behind deep learning for computer vision but their implementations as well, including how to work with large amounts of data and train Keras deep learning models on top of your dataset.
  3. A no-nonsense teaching style that is guaranteed to help you master deep learning for image understanding and visual recognition.

To learn more about my deep learning book (and grab your free PDF of sample chapters and table of contents), just click here.

Summary

In this tutorial you learned the differences between Keras’ three primary functions used to train a deep neural network:

  1. .fit : Used when the entire training dataset can fit into memory and no data augmentation is applied.
  2. .fit_generator : Should be used when either (1) the dataset is too large to fit into memory, (2) data augmentation needs to be applied, or (3) in any situation when it’s more convenient to yield training data in batches (i.e., using the flow_from_directory  function).
  3. .train_on_batch : Can be used to train a Keras model on a single batch of data. Should be utilized only when you need the finest-grained control over training your network, such as in situations where your data iterator is highly complex.

From there, we discovered how to:

  1. Implement our own custom Keras generator function
  2. Use our custom generator along with Keras’ .fit_generator  to train our deep neural network

You can use today’s example code as a template when implementing your own Keras generators in your own projects.

I hope you enjoyed today’s blog post!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


59 Responses to How to use Keras fit and fit_generator (a hands-on tutorial)

  1. Sivarama Krishnan Rajaraman December 24, 2018 at 11:10 am #

    Hi Adrian,
    Thanks for this wonderful post. We have been working with images all the time. However, there is no clear information online on how to serialize an image dataset along with its labels to CSV files. I am sure many enthusiastic readers of your blog would love to see this kind of post. Looking forward.
    Best,
    Shiva

    • Adrian Rosebrock December 27, 2018 at 10:36 am #

      The general algorithm is actually quite simple:

      1. Loop over all images in your dataset
      2. Load image
      3. Resize to fixed dimensions (or embed the dimensions as the first entries for the row)
      4. Flatten the image to a list of pixels
      5. Write label, flattened list, and any other meta data (such as dimension info) to the CSV file

      • Xu Zhang December 27, 2018 at 5:32 pm #

        If you would like to show some code, it would help a lot. Thank you so much for your great post.

        • Sivarama Krishnan Rajaraman December 29, 2018 at 10:09 am #

          I agree with Zhang’s request. If you can point us to some reliable code for the process, it would be very helpful.

          • Adrian Rosebrock December 31, 2018 at 11:01 am #

            I’ve uploaded the .zip associated with this post (available via the “Downloads” section) to include my build_dataset.py file which can be used to create a CSV file of images. Enjoy!

      • Safi January 5, 2019 at 11:09 pm #

        Hi Sir,

        I’ve downloaded the code and tried to use build_dataset.py to convert some images into CSV files, but I’m stuck on the parsed arguments. In the source code you provided, do I need to supply the input through ap.argument, or how? When I try to run it I get this error: ” error: the following arguments are required: -d/–dataset”.

        I didn’t quite get the concept of the parsers here.

        Thanks Sir.

        • Adrian Rosebrock January 7, 2019 at 6:44 am #

          It’s okay if you are new to command line arguments, but make sure you read this tutorial on argparse first. From there you will have the knowledge you need to continue.

          • Safi January 8, 2019 at 3:33 am #

            Thanks so much Adrian Rosebrock, the tutorial on argparse is so helpful. I was able to figure it out after reading the tutorial.

            Thanks so much. Keep up the good work. You’re amazing and talented.

          • Adrian Rosebrock January 8, 2019 at 6:37 am #

            Thanks Safi, I’m glad it helped you 🙂

  2. Sanjeevi December 24, 2018 at 12:24 pm #

    Actually, data augmentation is used to produce more data by rotating and shifting images. Data augmentation is used when our dataset is small, right?

    • Adrian Rosebrock December 27, 2018 at 10:35 am #

      Your understanding of data augmentation is slightly incorrect. See my reply to Sagar.

      • Xu Zhang April 16, 2019 at 7:11 pm #

        In Francois Chollet’s book “Deep Learning with Python” on page 139, he wrote ” Data augmentation takes the approach of generating more training data from existing training samples, ……. The goal is that at training time. your model will never see the exact same picture twice. …..”

        Would you like to explain your opinions?

        • Adrian Rosebrock April 18, 2019 at 6:50 am #

          You’re not understanding Francois’ explanation. He is saying that data augmentation takes the original training data and then modifies it on the fly via random perturbations. Data augmentation is not an additive operation, meaning that the network is NOT trained on the original data + augmented data. Instead, it’s trained on data that is augmented, on the fly, from the original training data.

          I would strongly encourage you, or anyone else who has this same question, to read through Deep Learning for Computer Vision with Python where I discuss data augmentation and how it works in more detail.

          • Martin June 23, 2019 at 5:11 pm #

            Hello, Adrian,

            there are Augmentor tools out there that create a bunch of augmented images and still keep the original images. But in this case, you first generate data, save the images, and create a matching CSV file. That is, if I read this correctly, the number of images is also correct. But you’re right about the “on-the-fly” method, which you use here. The dataset doesn’t get bigger. I think the questioner has stumbled over that.

            Best regards
            Martin

          • Adrian Rosebrock June 26, 2019 at 1:29 pm #

            You should take a look at my dedicated tutorial on data augmentation coming out in a few weeks 🙂 Keep an eye on the PyImageSearch blog.

  3. Tom December 24, 2018 at 1:34 pm #

    Thanks for the post. May I know if you can post a sample on classification of moving video objects, such as whether a person is walking or has fallen on the ground, based on the video.

    • Adrian Rosebrock December 27, 2018 at 10:34 am #

      I think what you are referring to is called “human activity recognition”. I don’t have any tutorials on human activity recognition but I will consider it for the future.

  4. Ravi December 25, 2018 at 11:10 pm #

    Thanks for the post Adrian. Excuse me for posting a slightly off-topic question.

    We can train a model with the Keras wrapper over TF and save the model to H5 format when we follow your instructions above. Is there a way to export the model to ckpt files? What changes do we need to make in the code while saving? Is it possible to export a TF MetaGraph directly from Keras?

  5. Kim, Eun-ho December 26, 2018 at 1:10 am #

    Dear Adrian,

    Thank you for this very useful article.

    For data augmentation, the total number of training data points per epoch is steps_per_epoch (len(trainX) // BS) multiplied by batch_size (BS). Therefore, no data augmentation is occurring.

    And you have said that the proper number of training data points per class is 1000 ~ 5000. So, the total number of training data points per epoch should be the number of classes multiplied by (1000 ~ 5000).
    I think that steps_per_epoch should therefore be:
    class_number x (1000 ~ 5000) // batch_size

    • Adrian Rosebrock December 27, 2018 at 10:19 am #

      No, the number of steps per epoch is the total number of training examples divided by the batch size. Data augmentation is applied internally inside the data generator. I’m not sure where the multiplication comment is coming from so perhaps you can clarify your comment but my general intuition is that I believe you have a misunderstanding on how data augmentation actually works. Make sure you see my reply to “Sagar”.

      • Kim, Eun-ho January 2, 2019 at 1:46 am #

        I think that the total number of training examples per epoch for data augmentation is not training data points but the number of classes times (1000 ~ 5000).

        • Adrian Rosebrock January 2, 2019 at 9:09 am #

          No, that is incorrect. The number of training examples per epoch with data augmentation is the number of total training data points. Applying data augmentation does not add more data to your training set, it simply augments it by randomly perturbing each and every data point with some transformation.

          Again, your understanding of data augmentation inside of Keras is incorrect. Data augmentation is not “additive” — data augmentation replaces the original training set with randomly perturbed examples.

          • Kim, Eun-ho January 2, 2019 at 10:25 pm #

            Yes, you are right. But I think it is possible to increase the total number of training examples per epoch by changing the steps_per_epoch of the fit_generator method.
            Accordingly, I think that NUM_TRAIN_IMAGES in steps_per_epoch should be not the number of training data points but the number of classes times (1000 ~ 5000).

          • Adrian Rosebrock January 5, 2019 at 8:57 am #

            You can arbitrarily increase the number of batches or images per epoch, yes. But there’s no reason to do that.

            As for your second remark, no that is 100% false. The number of training steps per epoch is the total number of training images divided by the batch size. The total number of class labels has absolutely nothing do with the batch size.

            I would highly encourage you to read through Deep Learning for Computer Vision with Python. The book will help you understand the fundamentals and remove any confusions you have surrounding batch sizes and steps per epoch. Be sure to take a look.

  6. Atul December 26, 2018 at 1:49 am #

    Hi Adrian,
    I closely follow you and your tutorials and thanks for this one.
    I have one question: above you provided a tutorial to train on custom data in Keras, but as you know Keras has a few models like VGG16, ResNet50, etc., so is there any way to fine-tune these models? Because I want to add a few more classes to an existing Keras model; they have 1000 classes and I want to add 10 more in the same model.

  7. Sagar Rathod December 26, 2018 at 2:02 am #

    Good Explanation, Adrian !!!

    I have one doubt here:

    These lines of code in the csv_image_generator function are going to modify all images in the current batch if augmentation is supplied, right? If yes, then it means the model is never going to see the original images in the dataset.

    # if the data augmentation object is not None, apply it
    if aug is not None:
        (images, labels) = next(aug.flow(np.array(images),
            labels, batch_size=bs))

    What I thought was that the data augmentation technique augments the training set by adding additional images, in particular to increase the size of the training set.

    • Adrian Rosebrock December 27, 2018 at 10:17 am #

      When applying data augmentation the goal is to purposely apply data augmentation for each and every batch of images, implying that each image is randomly transformed in some way. The goal is not to replace the dataset, it’s to randomly modify each image. You can think of data augmentation is applying a set of transformations with probability “p”. The goal is not to enlarge the dataset, it’s simply to augment it on the fly.

      • Rajesh Agrawal March 24, 2019 at 1:05 am #

        In the case of data augmentation, will the batch size remain the same? Or will it be the batch size of training images plus the augmented images?

        • Adrian Rosebrock March 27, 2019 at 9:10 am #

          The batch size will remain the same. Data augmentation does not add new images to the training set, it just augments the existing ones on the fly. I would suggest you read Deep Learning for Computer Vision with Python so you can learn more about data augmentation and how it works.

      • Xu Zhang April 16, 2019 at 6:58 pm #

        If the purpose of data augmentation is not to enlarge the dataset, how can data augmentation reduce overfitting? What are the mechanisms?

        If I use this technique, I can generate more images from the original ones, save them into the dataset, then load the rebuilt dataset and train the model. Is there a difference between doing data augmentation in the code versus training on a dataset enlarged with the same augmentation technique? Thanks

        • Adrian Rosebrock April 18, 2019 at 6:50 am #

          Kindly refer to my reply to your other comment.

  8. Xu Zhang December 27, 2018 at 6:19 pm #

    Thank you for your tutorial. What was the purpose of changing the image files into .csv files? Thanks a lot.

    • Adrian Rosebrock December 28, 2018 at 11:37 am #

      Take a look at the “Wait, why bother with a CSV file if you already have the images?” section.

  9. Kleyson Rios January 14, 2019 at 6:38 am #

    Hi Adrian,

    A few days before you posted this very nice blog post, I had been playing with Keras generators, and after validating my code I noticed some strange behaviors.

    One of them concerns steps_per_epoch and validation_steps. Doing it as you did, which is correct based on the Keras documentation, might not feed the model the full dataset as expected. See the discussion on this thread – https://github.com/keras-team/keras/issues/11877

    The correct way should be: math.ceil(NUM_TRAIN_IMAGES / BS)

    The second one is regarding the .fit_generator itself, please take a look on this thread – https://github.com/keras-team/keras/issues/11878 to understand better the issue.

    Still regarding the second issue, I would like to see a new blog post 🙂 using a Sequence instead of a generator, as suggested by a member in the respective thread.

    Best Regards.
    Kleyson Rios.

    • Adrian Rosebrock January 16, 2019 at 9:54 am #

      Regarding the first issue, that’s normally an implementation-specific choice by the DL engineer, whether or not they want to pass the final non-full batch through the model. I wouldn’t call that an “issue”, just a matter of preference.

      As for the sequence vs. generator question, I’ve never run into that before. I’ll have to take a look.

  10. Ivan Donadello February 28, 2019 at 5:11 am #

    Hi Adrian,

    first of all, thank you very much for all your posts. PyImageSearch is a very precious and useful resource for researchers, workers, and Computer Vision lovers.

    Regarding this post, do you have any hint or tutorial for writing our own generators with data augmentation?

    Thank a lot

    Ivan

    • Adrian Rosebrock February 28, 2019 at 1:38 pm #

      Thanks Ivan.

      As for your question, this tutorial actually shows how you can apply data augmentation within the generator, so perhaps I’m not understanding your question properly?

  11. JP Cassar March 3, 2019 at 8:20 am #

    Thanks Adrian for the post,
    I was wondering if you can add an example of classification (classify.py) using the MiniVGGNet model created by this post and images from Flowers-17.

  12. Mike March 6, 2019 at 10:27 pm #

    Hi Adrian,
    Shouldn’t the mode of testGen also be set to ‘eval’ when training?

    • Davy Jones March 21, 2019 at 9:08 am #

      No, eval is to stop generating data when you reach the end of the file (for predicting after training is complete).

  13. Mike March 6, 2019 at 10:44 pm #

    And another question:
    Why do you reset the file pointer to the beginning of the file once the end of the file is reached? I think this will never happen during training since you set the number of steps per epoch to the number of examples divided by the batch size.

  14. Rajesh Agrawal March 24, 2019 at 1:07 am #

    I am a bit confused with model.fit: when we mention batch_size, how can you say it fits the whole data in RAM? Or is it the weights and biases of the model?

    • Adrian Rosebrock March 27, 2019 at 9:09 am #

      The call to “model.fit” assumes your entire dataset is in RAM. The “.fit” method does not use a data generator so the entire dataset must be loaded into RAM before calling it.

      • Rajesh April 3, 2019 at 11:04 am #

        Thanks Adrian for clearing that up for me. I got your point: fit needs the training data to be readily available in the code before calling fit. It perfectly makes sense. Thank you!

  15. rabah March 29, 2019 at 5:42 am #

    There’s something I really don’t understand.
    You train the model with an output of lb.classes size. So that will depend on the batch size, right? Since you add a label in that loop.
    But what if you have X labels and your loop length is only X/2?

    • Adrian Rosebrock April 2, 2019 at 6:19 am #

      No, the “lb” is “fit” on all the input “labels”. It has nothing to do with the batch size. Go back and review the code again.

  16. Tiago Carvalho April 8, 2019 at 11:43 am #

    Hi Adrian

    first of all, thanks a lot for your blog. It is very useful for following advances in the CV, ML, and PI fields when working with Python, OpenCV, and deep learning frameworks.

    My question is about performance on fit() and fit_generator() methods.

    Currently I’m trying to reproduce the results I got when using fit(), but now using fit_generator(). However, even using the same parameters and inputs, my results with fit_generator() are much worse than my results using just fit(). I believe this is because of the way fit() splits the input data into training batches, but I’m not completely sure.

    Do you have any guesses? Is there some way you know of to obtain the exact same results?

    Since now, thank you so much.

    Bests

    • Adrian Rosebrock April 12, 2019 at 12:33 pm #

      I would go back and double-check your code. Make sure you are using the same hyperparameters between the two examples. It could also be the case that you have a bug in your generator function causing incorrect data + corresponding labels to be generated.

  17. donto April 12, 2019 at 10:46 pm #

    Hi @adrian,

    somehow I get confused by the ‘steps_per_epoch’ parameter. You wrote:

    “Since the function is intended to loop infinitely, Keras has no ability to determine when one epoch starts and a new epoch begins.”

    In my case, I use a custom generator (https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly) to generate my data and I simply set how many epochs I need. So, what is the correlation between ‘epoch’ and ‘steps_per_epoch’?

    • Adrian Rosebrock April 18, 2019 at 7:41 am #

      You can absolutely set the number of epochs you want your network to train for. However, if you are using a data generator you also need to supply the number of steps per epoch. The steps per epoch is the total number of training images divided by your batch size.

  18. Martin April 25, 2019 at 6:49 pm #

    Hej Adrian,

    thanks for a nice tutorial! I wonder if you have any suggestions for how one shuffles the data when using fit_generator?

    Best regards
    Martin

    • Adrian Rosebrock May 1, 2019 at 12:07 pm #

      There are a few things you could do:

      1. Pre-shuffle the data
      2. At each epoch, pick a random index into your data and then start generating your batches from there

  19. Alan May 23, 2019 at 2:05 pm #

    Hi Adrian,

    I was wondering why we train on the testGen sample and also evaluate on the testGen sample?

    Thanks!

    • Adrian Rosebrock May 30, 2019 at 9:44 am #

      You’re not training on the “testGen”, you’re training with the “trainGen”.

  20. Martin June 23, 2019 at 5:22 pm #

    Usually, I don’t have a question and understand source code fluently, but I am a little bit confused right now about how the image generator really works.

    Let’s assume my RAM can only handle 1000 pictures at a time, but I want to train my machine learning model with 100,000 pictures. Then I have to set the BS value to 1000 or even smaller, right?

    So on the first batch/chunk, it reads in the first 1000 images with labels and it will train on them. On the 2nd chunk it has to start reading lines 1001 to 2001 of your CSV file. But I can’t find this in your code above. In my humble opinion, it always starts at line 0 when I call the method. Is the method treated like a thread? And do I only have to reset the value for the next epoch? Or why don’t you save the last line number you were at and start from that line?

    • Adrian Rosebrock June 26, 2019 at 1:29 pm #

      I get the example that you’re including, but typical batch sizes are BS={8,16,32,64,128,256}. Very rarely would a batch size be larger than 256.

      As for always starting at line 0 of the file, that’s not the case. The file pointer only restarts if the line read was empty (which would happen at the end of the file).

  21. Ponraj July 18, 2019 at 4:31 am #

    Hello Adrian,
    Thanks for your post.

    When performing model.fit_generator, is it necessary to have a common BS for both trainGen and testGen? In my dataset I have a small number of testing samples, so can I arbitrarily provide the validation steps quantity?
