Change input shape dimensions for fine-tuning with Keras

In this tutorial, you will learn how to change the input shape tensor dimensions for fine-tuning using Keras. After going through this guide you’ll understand how to apply transfer learning to images with different image dimensions than what the CNN was originally trained on.

A few weeks ago I published a tutorial on transfer learning with Keras and deep learning — soon after the tutorial was published, I received a question from Francesca Maepa who asked the following:

Do you know of a good blog or tutorial that shows how to implement transfer learning on a dataset that has a smaller shape than the pre-trained model?

I created a really good pre-trained model, and would like to use some features for the pre-trained model and transfer them to a target domain that is missing certain feature training datasets and I’m not sure if I’m doing it right.

Francesca asks a great question.

Typically we think of Convolutional Neural Networks as accepting fixed size inputs (i.e., 224×224, 227×227, 299×299, etc.).

But what if you wanted to:

  1. Utilize a pre-trained network for transfer learning…
  2. …and then update the input shape dimensions to accept images with different dimensions than what the original network was trained on?

Why might you want to utilize different image dimensions?

There are two common reasons:

  • Your input image dimensions are considerably smaller than what the CNN was trained on and increasing their size introduces too many artifacts and dramatically hurts loss/accuracy.
  • Your images are high resolution and contain small objects that are hard to detect. Resizing to the original input dimensions of the CNN hurts accuracy and you postulate increasing resolution will help improve your model.

In these scenarios, you would wish to update the input shape dimensions of the CNN and then be able to perform transfer learning.

The question then becomes, is such an update possible?

Yes, in fact, it is.


Change input shape dimensions for fine-tuning with Keras

In the first part of this tutorial, we’ll discuss the concept of an input shape tensor and the role it plays with input image dimensions to a CNN.

From there we’ll discuss the example dataset we’ll be using in this blog post. I’ll then show you how to:

  1. Update the input image dimensions of a pre-trained CNN using Keras.
  2. Fine-tune the updated CNN.

Let's get started!

What is an input shape tensor?

Figure 1: Convolutional Neural Networks built with Keras for deep learning have different input shape expectations. In this blog post, you’ll learn how to change input shape dimensions for fine-tuning with Keras.

When working with Keras and deep learning, you’ve probably either utilized or run into code that loads a pre-trained network via:
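
A minimal sketch of that pattern (using the standalone keras imports current at the time of writing; substitute tensorflow.keras if that is what your install provides):

from keras.applications import VGG16

model = VGG16(weights="imagenet")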

The code above is initializing the VGG16 architecture and then loading the weights for the model (pre-trained on ImageNet).

We would typically use this code when our project needs to classify input images that have class labels inside ImageNet (as this tutorial demonstrates).

When performing transfer learning or fine-tuning you may use the following code to leave off the fully-connected (FC) layer heads:
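
A sketch of that variation:

from keras.applications import VGG16

model = VGG16(weights="imagenet", include_top=False)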

We're still indicating that the pre-trained ImageNet weights should be used, but now we're setting include_top=False, indicating that the FC head should not be loaded.

This code would typically be utilized when you’re performing transfer learning either via feature extraction or fine-tuning.

Finally, we can update our code to include an input_tensor dimension:
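
Something along these lines (the Input class comes from keras.layers):

from keras.applications import VGG16
from keras.layers import Input

model = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))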

We're still loading VGG16 with weights pre-trained on ImageNet and we're still leaving off the FC layer heads…but now we're specifying an input shape of 224×224×3 (which are the input image dimensions that VGG16 was originally trained on, as seen in Figure 1, left).

That’s all fine and good — but what if we now wanted to fine-tune our model on 128×128px images?

That’s actually just a simple update to our model initialization:
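
For example, keeping everything else the same but changing the input tensor:

model = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(128, 128, 3)))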

Figure 1 (right) provides a visualization of the network updating the input tensor dimensions — notice how the input volume is now 128x128x3 (our updated, smaller dimensions) versus the previous 224x224x3 (the original, larger dimensions).

Updating the input shape dimensions of a CNN via Keras is that simple!

But there are a few caveats to look out for.

Can I make the input dimensions anything I want?

Figure 2: Updating a Keras CNN's input shape is straightforward; however, there are a few caveats to take into consideration.

There are limits to how much you can update the image dimensions, both from an accuracy/loss perspective and from limitations of the network itself.

Consider the fact that CNNs reduce volume dimensions via two methods:

  1. Pooling (such as max-pooling in VGG16)
  2. Strided convolutions (such as in ResNet)

If your input image dimensions are too small then the CNN will naturally reduce volume dimensions during the forward propagation and then effectively “run out” of data.

In that case your input dimensions are too small.

For example, when I tried using 48×48 input images, Keras threw an error during model construction.

In that case, Keras complains that the volume is too small. You will encounter similar errors for other pre-trained networks as well. When you see this type of error, you know you need to increase your input image dimensions.

You can also make your input dimensions too large.

You won’t run into any errors per se, but you may see your network fail to obtain reasonable accuracy due to the fact that there are not enough layers in the network to:

  1. Learn robust, discriminative filters.
  2. Naturally reduce volume size via pooling or strided convolution.

If that happens, you have a few options:

  • Explore other (pre-trained) network architectures that are trained on larger input dimensions.
  • Tune your hyperparameters exhaustively, focusing first on learning rate.
  • Add additional layers to the network. For VGG16 you’ll use 3×3 CONV layers and max-pooling. For ResNet you’ll include residual layers with strided convolution.

The final suggestion will require you to update the network architecture and then perform fine-tuning on the newly initialized layers.

To learn more about fine-tuning and transfer learning, along with my tips, suggestions, and best practices when training networks, make sure you refer to my book, Deep Learning for Computer Vision with Python.

Our example dataset

Figure 3: A subset of the Kaggle Dogs vs. Cats dataset is used for this Keras input shape example. Using a smaller dataset not only proves the point more quickly, but also allows just about any computer hardware to be used (i.e. no expensive GPU machine/instance necessary).

The dataset we’ll be using here today is a small subset of Kaggle’s Dogs vs. Cats dataset.

We also use this dataset inside Deep Learning for Computer Vision with Python to teach the fundamentals of training networks, ensuring that readers with either CPUs or GPUs can follow along and learn best practices when training models.

The dataset itself contains 2,000 images belonging to 2 classes ("cat" and "dog"):

  • Cat: 1,000 images
  • Dog: 1,000 images

A visualization of the dataset can be seen in Figure 3 above.

In the remainder of this tutorial you’ll learn how to take this dataset and:

  1. Update the input shape dimensions for a pre-trained CNN.
  2. Fine-tune the CNN with the smaller image dimensions.

Installing necessary packages

All of today’s packages can be installed via pip.

I recommend that you create a Python virtual environment for today's project, but it is not strictly required. To learn how to create a virtual environment quickly and to install OpenCV into it, refer to my pip install opencv tutorial.

To install the packages for today’s project, just enter the following commands:
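
A reasonable set of commands looks like the following (the exact package list is my assumption based on the imports used later; install them into your virtual environment if you created one):

$ pip install tensorflow keras
$ pip install opencv-contrib-python
$ pip install scikit-learn matplotlib imutils numpy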

Project structure

Go ahead and grab the code + dataset from the "Downloads" section of today's blog post.

Once you've extracted the .zip archive, you may inspect the project structure using the tree command:
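
The layout should look roughly like this (a sketch; I am assuming cats/ and dogs/ subdirectory names, and the exact image counts come from your download):

$ tree --dirsfirst --filelimit 10
.
├── dogs_vs_cats_small
│   ├── cats [1000 entries]
│   └── dogs [1000 entries]
├── plot.png
└── train.py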

Our dataset is contained within the dogs_vs_cats_small/ directory. The two subdirectories contain images of our classes. If you're working with a different dataset, be sure the structure is <dataset>/<class_name>.

Today we'll be reviewing the train.py script. The training script generates plot.png containing our accuracy/loss curves.

Updating the input shape dimensions with Keras

It’s now time to update our input image dimensions with Keras and a pre-trained CNN.

Open up the train.py file in your project structure and insert the following code:
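
The import block looks roughly like this (a reconstruction using the standalone keras package; the line numbers cited in the prose below refer to the original downloadable train.py, not this sketch):

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import VGG16
from keras.layers import Input, Flatten, Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam
from keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os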

Lines 2-20 import required packages:

  • keras and sklearn are for deep learning/machine learning. Be sure to refer to my extensive deep learning book, Deep Learning for Computer Vision with Python, to become more familiar with the classes and functions we use from these tools.
  • paths from imutils traverses a directory and enables us to list all images in a directory.
  • matplotlib will allow us to plot our training accuracy/loss history.
  • numpy is a Python package for numerical operations; one of the ways we'll put it to work is for "mean subtraction", a scaling/normalization technique.
  • cv2 is OpenCV.
  • argparse will be used to read and parse command line arguments.

Let’s go ahead and parse the command line arguments now:
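
A sketch of the argument parsing (the short flag abbreviations are my assumption):

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True, help="path to input dataset")
ap.add_argument("-e", "--epochs", type=int, default=25, help="number of epochs to train for")
ap.add_argument("-p", "--plot", type=str, default="plot.png", help="path to output loss/accuracy plot")
args = vars(ap.parse_args())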

Our script accepts three command line arguments via Lines 23-30:

  • --dataset: The path to our input dataset. We're using a condensed version of Dogs vs. Cats, but you could use other binary, 2-class datasets with little or no modification as well (provided they follow a similar structure).
  • --epochs: The number of times we'll pass our data through the network during training; by default, we'll train for 25 epochs unless a different value is supplied.
  • --plot: The path to our output accuracy/loss plot. Unless otherwise specified, the file will be named plot.png and placed in the project directory. If you are conducting multiple experiments, be sure to give your plots a different name each time for future comparison purposes.

Next, we will load and preprocess our images:
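
A sketch of that loading/preprocessing loop, under the assumption that the class name is encoded in each image's parent directory (as described in the project structure above):

# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))
data = []
labels = []

# loop over the image paths
for imagePath in imagePaths:
    # extract the class label from the path (<dataset>/<class_name>/<image>)
    label = imagePath.split(os.path.sep)[-2]

    # load the image, swap the color channels from BGR to RGB, and
    # resize it to a fixed 128x128 pixels, ignoring aspect ratio
    image = cv2.imread(imagePath)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (128, 128))

    # update the data and labels lists, respectively
    data.append(image)
    labels.append(label)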

First, we grab our imagePaths on Line 35 and then initialize our data and labels (Lines 36 and 37).

Lines 40-52 loop over the imagePaths while first extracting the labels. Each image is loaded, the color channels are swapped, and the image is resized. The images and labels are added to the data and labels lists, respectively.

VGG16 was trained on 224×224px images; however, I'd like to draw your attention to Line 48. Notice how we've resized our images to 128×128px. This resizing is the first step in applying transfer learning to images with dimensions different from what the network was originally trained on.

Although Line 48 doesn’t fully answer Francesca Maepa’s question yet, we’re getting close.

Let’s go ahead and one-hot encode our labels as well as split our data:
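
A sketch of the encoding and split (LabelBinarizer plus to_categorical is one way to produce the two-element one-hot vectors described below):

# convert the data and labels to NumPy arrays
data = np.array(data)
labels = np.array(labels)

# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)

# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25, random_state=42)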

Lines 55 and 56 convert our data and labels to NumPy array format.

Then, Lines 59-61 perform one-hot encoding on our labels. Essentially, this process converts our two labels ("cat" and "dog") to arrays indicating which label is active/hot. If a training image is representative of a dog, then the value would be [0, 1] where "dog" is hot. Otherwise, for a "cat", the value would be [1, 0].

To reinforce the point, if, for example, we had 5 classes of data, a one-hot encoded array may look like [0, 0, 0, 1, 0] where the 4th element is hot, indicating that the image is from the 4th class. For further details, please refer to Deep Learning for Computer Vision with Python.

Lines 65 and 66 mark 75% of our data for training and the remaining 25% for testing via the train_test_split function.

Let’s now initialize our data augmentation generator. We’ll also establish our ImageNet mean for mean subtraction:
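
A sketch of those objects (the specific augmentation ranges are my choices; the three values are the standard ImageNet per-channel RGB means):

# initialize the training data augmentation object
trainAug = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# initialize the validation/testing data augmentation object (which
# we'll be adding mean subtraction to)
valAug = ImageDataGenerator()

# define the ImageNet mean subtraction (in RGB order) and set the
# mean subtraction value for each of the data augmentation objects
# NOTE: depending on your keras_preprocessing version, the generators
# may only apply .mean when featurewise_center=True is also set
mean = np.array([123.68, 116.779, 103.939], dtype="float32")
trainAug.mean = mean
valAug.mean = mean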

Lines 69-76 initialize a data augmentation object for performing random manipulations on our input images during training.

Line 80 also takes advantage of the ImageDataGenerator class for validation, but without any augmentation parameters — we won't manipulate validation images with the exception of performing mean subtraction.

Both the training and validation/testing generators will conduct mean subtraction. Mean subtraction is a scaling/normalization technique proven to increase accuracy. Line 85 contains the mean for each respective RGB channel, while Lines 86 and 87 then assign that mean to each generator. Later, our data generators will automatically perform the mean subtraction on our training/validation data.

Note: I’ve covered data augmentation in detail in this blog post as well as in the Practitioner Bundle of Deep Learning for Computer Vision with Python. Scaling and normalization techniques such as mean subtraction are covered in DL4CV as well.

We’re performing transfer learning with VGG16. Let’s initialize the base model now:
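
A sketch of the base model initialization and summary described on Lines 92, 93, and 97 of the original script:

# load VGG16, ensuring the head FC layer sets are left off, while at
# the same time adjusting the size of the input image tensor to the network
baseModel = VGG16(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(128, 128, 3)))

# show a summary of the base model
print("[INFO] summary of base model...")
baseModel.summary()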

Lines 92 and 93 load VGG16 with an input shape dimension of 128×128 using 3 channels.

Remember, VGG16 was originally trained on 224×224 images; now we're updating the input shape dimensions to handle 128×128 images.

Effectively, we have now fully answered Francesca Maepa’s question! We accomplished changing the input dimensions via two steps:

  1. We resized all of our input images to 128×128.
  2. Then we set the input tensor shape to (128, 128, 3).

Line 97 will print a model summary in our terminal so that we can inspect it. Alternatively, you may visualize the model graphically by studying Chapter 19 “Visualizing Network Architectures” of Deep Learning for Computer Vision with Python.

Since we're performing transfer learning, the include_top parameter is set to False (Line 92) — we chopped off the head!

Now we’re going to perform surgery by erecting a new head and suturing it onto the CNN:
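
A sketch of that surgery (the 512-node FC layer and 50% dropout in the new head are my assumptions):

# construct the head of the model that will be placed on top of the
# base model
headModel = baseModel.output
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
    layer.trainable = False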

Line 101 takes the output from the baseModel and sets it as input to the headModel.

From there, Lines 102-106 construct the rest of the head.

The baseModel is already initialized with ImageNet weights per Line 92. On Lines 114 and 115, we set the base layers in VGG16 as not trainable (i.e., they will not be updated during the backpropagation phase). Be sure to read my previous fine-tuning tutorial for further explanation.

We’re now ready to compile and train the model with our data:
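
A sketch of the compile and fit calls (the batch size of 32 is an assumption; fit_generator was the API at the time this was written, while newer versions accept generators directly in fit):

# compile our model (this needs to be done after setting our base
# layers to be non-trainable)
print("[INFO] compiling model...")
opt = Adam(lr=1e-4)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network head (all base layers are frozen)
print("[INFO] training head...")
H = model.fit_generator(
    trainAug.flow(trainX, trainY, batch_size=32),
    steps_per_epoch=len(trainX) // 32,
    validation_data=valAug.flow(testX, testY, batch_size=32),
    validation_steps=len(testX) // 32,
    epochs=args["epochs"])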

Our model is compiled with the Adam optimizer and a 1e-4 learning rate (Lines 120-122).

We use "binary_crossentropy" for 2-class classification. If you have more than two classes of data, be sure to use "categorical_crossentropy".

Lines 128-133 then train our transfer learning network. Our training and validation generators are put to work in the process.

Upon training completion, we’ll evaluate the network and plot the training history:
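
A sketch of the evaluation and plotting code (I apply the same mean subtraction to the test data before calling predict; also note that older Keras versions use the "acc"/"val_acc" history keys while newer ones use "accuracy"/"val_accuracy"):

# make predictions on the testing set and print a classification report
print("[INFO] evaluating network...")
predictions = model.predict(testX.astype("float32") - mean, batch_size=32)
print(classification_report(testY.argmax(axis=1),
    predictions.argmax(axis=1), target_names=lb.classes_))

# plot the training loss and accuracy
N = args["epochs"]
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])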

Lines 137-139 evaluate our model and print a classification report for statistical analysis.

We then employ matplotlib to plot our accuracy and loss history during training (Lines 142-152). The plot figure is saved to disk via Line 153.

Fine-tuning a CNN using the updated input dimensions

Figure 4: Changing Keras input shape dimensions for fine-tuning produced the following accuracy/loss training plot.

To fine-tune our CNN using the updated input dimensions, first make sure you've used the "Downloads" section of this guide to download the (1) source code and (2) example dataset.

From there, open up a terminal and execute the following command:
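
Assuming the dataset directory sits alongside train.py, the command is simply (add --epochs or --plot if you want non-default values):

$ python train.py --dataset dogs_vs_cats_small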

Our first set of output shows our updated input shape dimensions.

Notice how our input_1 (i.e., the InputLayer) has input dimensions of 128x128x3 versus the normal 224x224x3 for VGG16.

The input image will then forward propagate through the network until the final MaxPooling2D layer (i.e., block5_pool).

At this point, our output volume has dimensions of 4x4x512 (for reference, VGG16 with a 224x224x3 input volume would have the shape 7x7x512 after this layer).

Note: If your input image dimensions are too small then you risk the model, effectively, reducing the tensor volume into “nothing” and then running out of data, leading to an error. See the “Can I make the input dimensions anything I want?” section of this post for more details.

We then flatten that volume and apply the FC layers from the headModel, ultimately leading to our final classification.

Once our model is constructed we can then fine-tune it:

At the end of fine-tuning we see that our model has obtained 93% accuracy, respectable given our small image dataset.

As Figure 4 demonstrates, our training is also quite stable, with no signs of overfitting.

More importantly, you now know how to change the input image shape dimensions of a pre-trained network and then apply feature extraction/fine-tuning using Keras!

Be sure to use this tutorial as a template for whenever you need to apply transfer learning to a pre-trained network with different image dimensions than what it was originally trained on.

Summary

In this tutorial, you learned how to change input shape dimensions for fine-tuning with Keras.

We typically perform such an operation when we want to apply transfer learning, including both feature extraction and fine-tuning.

Using the methods in this guide, you can update your input image dimensions for your pre-trained CNN and then perform transfer learning; however, there are two caveats you need to look out for:

  1. If your input images are too small, Keras will error out.
  2. If your input images are too large, you may not obtain your desired accuracy.

Be sure to refer to the “Can I make the input dimensions anything I want?” section of this post for more details on these caveats, including suggestions on how to solve them.

I hope you enjoyed this tutorial!



39 Responses to Change input shape dimensions for fine-tuning with Keras

  1. David Bonn June 24, 2019 at 10:34 am #

    Adrian,

    Thanks for the extremely cool post.

    • Adrian Rosebrock June 24, 2019 at 1:49 pm #

      Thanks David, I’m glad you liked it!

  2. wally June 24, 2019 at 12:20 pm #

    This is interesting, I’ve not gotten into training or re-training models yet, but I’ve quite a lot of experience using the MobileNet-SSD and MobileNet-SSD-V2 models from your OpenVINO and Coral tutorials on 1920×1080 and 1280×720 images from real security cameras for “person detection” — in real world varying lighting and weather conditions.

    I’m amazed how well they both detect when a 1920×1080 image is resized to 300×300 for the inference. OTOH 4K (3840×2160) is definitely too much resolution.

    My goal for security system purposes is to make the false detection rate be as close to zero as possible, false negatives are of little consequence if the frame rate is decent as the person will be detected as they move to more “favorable” locations or orientations within the frame.

    My most successful approach so far has been to run with a fairly low initial detection confidence (~0.65) and then crop the full image to the detection box (startX, startY, endX, endY) and rerun the inference requiring a higher confidence (~0.75) to verify.

    I'm currently testing with 15 camera RTSP streams, with the images pushed to the AI as MQTT buffers to two simultaneously running systems (so each system gets the "same" images): one using MobileNet-SSDv2 on the Coral TPU, the other using the NCS2 with MobileNet-SSD.

    So far I've got the false detection rate well below 1 per million, and it keeps getting lower as more images continue to be processed without a false positive on either system!

    Debugging code I've inserted shows the COCO-trained MobileNetSSDv2 is significantly better in both initial detection sensitivity and verification (many fewer detections are "wrongly" rejected because the zoomed image failed to increase the confidence). Reduction of confidence on zoomed false detections (plants, bushes, trees, background clutter, vehicles, etc.) has been the key to the system improvement. On balance I'd say the lowering of the initial confidence threshold has more than made up for the "true detections" lost from failure to gain enough confidence when zoomed for verification.

    So to modify the question of this topic: given a better model for a different tensor processor architecture, how feasible is it to convert to another tensor coprocessor? We have the NCS2, Coral TPU, and Jetson Nano at present, with more expected on the market soon.

    • Adrian Rosebrock June 24, 2019 at 1:51 pm #

      Thanks for detailing your project, Wally!

      As for converting to different coprocessors it’s a bit too early to say there. I’ve been doing work with the NCS, Coral, and Nano but I haven’t found an easy way to convert between their own optimized versions.

  3. Dan June 24, 2019 at 2:37 pm #

    I have a question about the other end. I have a nicely trained network that extracts features from an image. I also have observations that are the xyz velocities of my robot. I’d like to combine the outputs from the network and the velocities to use as input to a couple fully connected layers. Then train from the output of the fully connected layers.

    I have this currently working by using openCV to extract features (e.g., green balls and red balls), outputting the size and pixel coordinates of the largest red ball and the largest green ball. I take those values along with the robot velocities, normalize each one, then use that as input to two fully connected layers and output a selected action for the robot (e.g., move forward, turn right,..).

    Is there some way to backpropagate from the fully connected layers up into a CNN, so that I can train the CNN to recognize whatever features are important, rather than having to pick my own features and use openCV to find them?

    My current thought is to just have some extra outputs from the CNN that I then ignore, substituting the velocities for those values as inputs to the fully connected layers, but this is clearly a kludge.

    • Adrian Rosebrock June 25, 2019 at 1:02 pm #

      So if I understand correctly, you:

      1. Have some various data points collected from a different sensor.
      2. Are using a CNN for feature extraction

      And then you want to combine the data points from the sensors with the CNN features and then train a separate model which could be Logistic Regression, SVM, or another NN?

      I guess I would tell you not to limit yourself to a NN. Why not something more straightforward? If you think that the dimensionality of the features from the CNN might be an issue you should consider applying feature selection on the extracted features (scikit-learn can help with that).

      • dan June 25, 2019 at 4:03 pm #

        That's a good point. Something like a decision tree model may work well. But for now, I have a framework that is working really well and I'm hoping to slightly generalize it by substituting a CNN for the openCV part. Is there some way to combine outputs from a CNN and another sensor (e.g., velocities), feed that combination to a network, and then be able to backpropagate from the bottom back up through the CNN?

        Trying to do that with Tensorflow has me in “placeholder hell.” Specifically, I tried to take the outputs from the CNN and add three more nodes for normalized velocities, and put all that as input to a fully connected net, then run the entire thing. Couldn’t get it to go.

        My next thought is to use the velocities as bias inputs to the last three output nodes of the CNN. Basically, force those nodes to be zero, then the velocities go in as biases, and the whole thing will train. But this seems likely to fail, as it implies that pixel images have some spatial relationship (it is a CNN) to the velocities, when they really don’t.

        • Adrian Rosebrock June 26, 2019 at 11:20 am #

          You could do something like multi-input and multiple outputs but I really think it’s overkill and not the way to go.

          • dan June 26, 2019 at 1:00 pm #

            I agree that for this particular project, it’s overkill, but in the future when there are many different types of inputs, it could be very useful.
            The post you referenced is exactly what I was looking for.
            Thanks once again for the outstanding guidance!

  4. Aayush June 25, 2019 at 2:26 am #

    Hi Adrian,

    Thanks for the great post.
    One thing I would like to ask: while working with satellite image classification (SAT-4 and SAT-6 datasets) I was stuck with a similar issue, as the number of channels is 4 in my case.
    Is there a way around the number of channels?

    Thanks

  5. Mark June 25, 2019 at 3:00 am #

    Binary classification: we may prefer the last layer to be Dense(1, activation='sigmoid') instead of Dense(2, activation='softmax').

  6. MOHAMED AWNI HAMED June 25, 2019 at 8:26 am #

    Thanks Adrian for this very good explanation. You use transfer learning to fine-tune the pre-trained model, but what if I want to use the pre-trained model as a feature extractor for a target domain whose images have different dimensions? There will be a problem with the CNN dimensions since the target domain images need to propagate forward through the pre-trained model.

    • Adrian Rosebrock June 25, 2019 at 12:59 pm #

      Why not just resize your input images to a fixed, known size and then forward propagate for feature extraction? That will ensure the output volumes are the same size.

  7. Victor Arias June 25, 2019 at 8:40 am #

    Hi Mr. Adrian, thank you very much for your tutorial. This was a thing that wasn't in the book and it was very important, so I'm very happy about it. Is it possible that you do a tutorial on how to do this but with images larger than 224×224? The case you mentioned, "images are high resolution and contain small objects that are hard to detect", is exactly what happens to me with eye fundus images. It would be a great help to me, thank you.

    • Adrian Rosebrock June 25, 2019 at 12:58 pm #

      Hey Victor — would you be able to share your dataset with myself and others? If so, I can take a look and potentially it could be made into a blog post.

  8. Hamid June 25, 2019 at 9:41 am #

    Hey Adrian,
    Thanks a lot for your great post. I have trained/fine-tuned a model using transfer learning and my base model is VGG16 with an input shape of 224×224. If I increase my input size, does this help the model to generalize better? My input images are definitely larger than 224×224.

    • Adrian Rosebrock June 25, 2019 at 12:57 pm #

      You would need to run that as an experiment and verify. Every dataset is different so it’s hard for me to provide that level of general advice.

  9. oscar June 25, 2019 at 6:28 pm #

    I've been trying to use VGGFace16(weights="imagenet", include_top=False, input_tensor=Input(shape=(128, 128, 3))) with a size smaller than 197x197x3 but it does not allow me. What can I do?

    Thanks.

    • Adrian Rosebrock June 26, 2019 at 11:19 am #

      Hey Oscar — I replied to your thread in the PyImageSearch Gurus forums, can you check there instead? Thanks!

  10. Isaac June 26, 2019 at 10:56 am #

    Hi Adrian,

    Can I use asymmetric shape input in a ResNet model? The shape could be 256×128?

    • Adrian Rosebrock June 26, 2019 at 11:13 am #

      Yes, you can use asymmetric shape, provided that you don’t run into the dimension issues highlighted in the post.

  11. Sahar June 26, 2019 at 6:54 pm #

    Hey Adrian,
    Thanks a lot for your great post. I want to know how to change the input layer size of a pre-trained VGG16 from 227x227x3 to 32x32x1. I want to change the channel count of the input to the CNN. How can I do this?

    • Adrian Rosebrock June 28, 2019 at 9:21 am #

      You cannot do that. Just expand your input image to 3 channels:

      image = np.dstack([image] * 3)

      That will create a 3 channel image out of a 1 channel image.

  12. Evgeny June 28, 2019 at 2:38 am #

    Hi, Adrian!

    Thank you for the post.
    I didn't understand what happens to the weights in the network when we resize the input.
    It is clear to me that if we change the head, then the output is adjusted to the problem.
    If we change the input, how does the network still work?
    Are the weights averaged or something else?

    Best regards,
    Evgeny

  13. Xu Zhang June 28, 2019 at 8:12 pm #

    Hi Adrian,

    Thank you for your great post.

    I am not sure if you are familiar with progressive resizing. I know that the fastai library implemented this method to train a CNN from a small input size to a larger input size step-by-step. I hope you could write a post about this using Keras.

    • Adrian Rosebrock July 4, 2019 at 10:47 am #

      I have heard of progressive resizing and used it before. I'm debating whether I want to write a tutorial on it. I'll definitely let you know if I do!

  14. Abkul July 3, 2019 at 8:10 am #

    Hi Adrian,

    Thanks for the excellent post on learning how to change the input shape tensor dimensions for fine-tuning using Keras.

    Kindly look at the following paper, which looks at scaling the width, height, and resolution "simultaneously", and shed light on (or write a tutorial blog about) its implementation.

    EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

    authored by Mingxing Tan and Quoc V. Le.

  15. rahul July 4, 2019 at 3:37 pm #

    Bro, can you explain to me the implementation of SSD (Single Shot MultiBox Detector) from scratch? It would be really helpful for me.

  16. Nikesh July 6, 2019 at 12:03 pm #

    Can you please explain to me the parameters "steps_per_epoch" and "validation_steps" in the fit or fit_generator methods in Keras?

    • Adrian Rosebrock July 10, 2019 at 9:56 am #

      The “steps_per_epoch” is the number of batches per epoch, meaning how many batches of data are there per epoch. You determine the steps per epoch by dividing the total number of training images by the batch size (same goes for the validation steps). To learn more about these values you should read Deep Learning for Computer Vision with Python.

  17. Breeve July 19, 2019 at 12:59 am #

    Hello sir Adrian,

    Thank you very much.. this makes my project complete…

    • Adrian Rosebrock July 25, 2019 at 9:43 am #

      Congrats on completing your project!

  18. lii ismail August 12, 2019 at 5:09 am #

    Hi adrian,

    Do you also have some tips on how to change input shape dimensions for fine-tuning with PyTorch? In my case, the trained network is based on 224×224 but my image input is 64×64. Thus, how do we adjust the weights for fine-tuning? Hope you can share some tips.

    thank you in advance.

    • Adrian Rosebrock August 16, 2019 at 5:44 am #

      Sorry, I don’t have any guides on PyTorch.

  19. Oscar August 18, 2019 at 2:29 pm #

    Hello, thanks for this cool tutorial!

    One question:

    Can I just load a model pre-trained on ImageNet like VGG16, train it with my dataset but with shape=128, and then use transfer learning with my same model but with input shape=224 and re-train it?

    thank you!

  20. psimeson August 20, 2019 at 12:11 am #

    Does this work if the image dimension is 64x64x1? Basically, there is only one channel rather than 3 channels.

    • Adrian Rosebrock September 5, 2019 at 9:57 am #

      Just stack your 1 channel image to form a 3 channel image:

      output = np.dstack([image] * 3)

