Keras Conv2D and Convolutional Layers

In today’s tutorial, we are going to discuss the Keras Conv2D class, including the most important parameters you need to tune when training your own Convolutional Neural Networks (CNNs). From there we are going to use the Keras Conv2D class to implement a simple CNN. We’ll then train and evaluate this CNN on the CALTECH-101 dataset.

The inspiration for today’s post came from PyImageSearch reader, Danny.

Danny asked:

Hi Adrian, I’m having some trouble understanding the parameters to Keras’ Conv2D class.

Which ones are the important ones?

Which ones should I just leave at their default values?

I’m a bit new to deep learning so I’m a bit confused on how to choose the parameter values when creating my own CNN.

Danny asks a great question — there are quite a few parameters to Keras’ Conv2D class. The sheer number can be a bit overwhelming if you’re new to the world of computer vision and deep learning.

In today’s tutorial I’m going to discuss each of the parameters to the Keras Conv2D class, explain each one, and provide examples of situations where and when you would want to set specific values, enabling you to:

  1. Quickly determine if you need to utilize a specific parameter to the Keras Conv2D class
  2. Decide on a proper value for that specific parameter
  3. Effectively train your own Convolutional Neural Network

Overall, my goal is to help reduce any confusion, anxiety, or frustration when using Keras’ Conv2D class. After going through this tutorial you will have a strong understanding of the Keras Conv2D parameters.

To learn more about the Keras Conv2D class and convolutional layers, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Keras Conv2D and Convolutional Layers

In the first part of this tutorial, we are going to discuss the parameters to the Keras Conv2D class.

From there we are going to utilize the Conv2D class to implement a simple Convolutional Neural Network.

We’ll then take our CNN implementation and then train it on the CALTECH-101 dataset.

Finally, we’ll evaluate the network and examine its performance.

Let’s go ahead and get started!

The Keras Conv2D class

The Keras Conv2D class constructor has the following signature:
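
For reference, here is the full constructor signature as it appears in Keras 2.x (the version this post was written against), including the default value of every parameter we will discuss:

    Conv2D(filters, kernel_size, strides=(1, 1), padding="valid",
        data_format=None, dilation_rate=(1, 1), activation=None,
        use_bias=True, kernel_initializer="glorot_uniform",
        bias_initializer="zeros", kernel_regularizer=None,
        bias_regularizer=None, activity_regularizer=None,
        kernel_constraint=None, bias_constraint=None)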

Looks a bit overwhelming, right?

How in the world are you supposed to properly set these values?

No worries — let’s examine each of these parameters individually, giving you a strong understanding of not only what each parameter controls but also how to properly set each parameter as well.

filters

Figure 1: The Keras Conv2D parameter, filters determines the number of kernels to convolve with the input volume. Each of these operations produces a 2D activation map.

The first required Conv2D parameter is the number of filters  that the convolutional layer will learn.

Layers early in the network architecture (i.e., closer to the actual input image) learn fewer convolutional filters while layers deeper in the network (i.e., closer to the output predictions) will learn more filters.

Conv2D layers in between will learn more filters than the early Conv2D layers but fewer filters than the layers closer to the output. Let’s go ahead and take a look at an example:
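
The snippet below is a minimal sketch of that pattern, assuming 128×128 RGB inputs (the kernel sizes and activations here are illustrative; the increasing filter counts are what matter):

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D

    model = Sequential()
    model.add(Conv2D(32, (3, 3), padding="same", activation="relu",
        input_shape=(128, 128, 3)))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3), padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(128, (3, 3), padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))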

In the first Conv2D layer we learn a total of 32 filters. Max pooling is then used to reduce the spatial dimensions of the output volume.

We then learn 64 filters in the second Conv2D layer. Again, max pooling is used to reduce the spatial dimensions.

The final Conv2D layer learns 128  filters.

Notice that as our output spatial volume is decreasing, our number of filters learned is increasing — this is a common practice in designing CNN architectures and one I recommend you follow as well. As far as choosing the appropriate number of filters, I nearly always recommend using powers of 2 as the values.

You may need to tune the exact value depending on (1) the complexity of your dataset and (2) the depth of your neural network, but I recommend starting with filters in the range [32, 64, 128] in the earlier layers and increasing up to [256, 512, 1024] in the deeper layers.

Again, the exact range of the values may be different for you, but start with a smaller number of filters and only increase when necessary.

kernel_size

Figure 2: The Keras deep learning Conv2D parameter, filter_size, determines the dimensions of the kernel. Common dimensions include 1×1, 3×3, 5×5, and 7×7 which can be passed as (1, 1), (3, 3), (5, 5), or (7, 7) tuples.

The second required parameter you need to provide to the Keras Conv2D class is the kernel_size , a 2-tuple specifying the width and height of the 2D convolution window.

The kernel_size value is also typically an odd integer so that the kernel has a true center.

Typical values for kernel_size  include: (1, 1) , (3, 3) , (5, 5) , (7, 7) . It’s rare to see kernel sizes larger than 7×7.

So, when do you use each?

If your input images are greater than 128×128 you may choose to use a kernel size > 3 to help (1) learn larger spatial filters and (2) to help reduce volume size.

Other networks, such as VGGNet, exclusively use (3, 3)  filters throughout the entire network.

More advanced architectures such as Inception, ResNet, and SqueezeNet design entire micro-architectures which are “modules” inside the network that learn local features at different scales (i.e., 1×1, 3×3, and 5×5) and then combine the outputs.

A great example can be seen in the Inception module below:

Figure 3: The Inception/GoogLeNet CNN architecture uses “micro-architecture” modules inside the network that learn local features at different scales (filter_size) and then combine the outputs.

The Residual module in the ResNet architecture uses 1×1 and 3×3 filters as a form of dimensionality reduction which helps to keep the number of parameters in the network low (or as low as possible given the depth of the network):

Figure 4: The ResNet “Residual module” uses 1×1 and 3×3 filters for dimensionality reduction. This helps keep the overall network smaller with fewer parameters.

So, how should you choose your kernel_size ?

First, examine your input image — is it larger than 128×128?

If so, consider using a 5×5 or 7×7 kernel to learn larger features and then quickly reduce spatial dimensions — then start working with 3×3 kernels:
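
A sketch of that strategy might look like the following (the filter counts, strides, and 256×256 input size are illustrative; model is a Sequential instance as in the earlier snippet):

    # larger kernel + stride to quickly reduce spatial dimensions
    model.add(Conv2D(32, (7, 7), strides=(2, 2), padding="valid",
        activation="relu", input_shape=(256, 256, 3)))

    # from here on, stick with 3x3 kernels
    model.add(Conv2D(32, (3, 3), padding="same", activation="relu"))
    model.add(Conv2D(64, (3, 3), padding="same", activation="relu"))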

If your images are smaller than 128×128 you may want to consider sticking with strictly 1×1 and 3×3 filters.

And if you intend on using ResNet or Inception-like modules you’ll want to implement the associated modules and architectures by hand. Covering how to implement these modules is outside the scope of this tutorial, but if you’re interested in learning more about them (including how to hand-code them), please refer to my book, Deep Learning for Computer Vision with Python.

strides

The strides  parameter is a 2-tuple of integers, specifying the “step” of the convolution along the x and y axis of the input volume.

The strides  value defaults to (1, 1) , implying that:

  1. A given convolutional filter is applied to the current location of the input volume
  2. The filter takes a 1-pixel step to the right and again the filter is applied to the input volume
  3. This process is performed until we reach the far-right border of the volume in which we move our filter one pixel down and then start again from the far left

Typically you’ll leave the strides  parameter with the default (1, 1)  value; however, you may occasionally increase it to (2, 2)  to help reduce the size of the output volume (since the step size of the filter is larger).

Typically you’ll see strides of 2×2 as a replacement to max pooling:
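
A sketch of what that looks like in Keras (the filter counts are illustrative; model is a Sequential instance as before):

    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding="same", activation="relu"))
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding="same", activation="relu"))
    model.add(Conv2D(128, (3, 3), strides=(2, 2), padding="same", activation="relu"))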

Here we can see our first two Conv2D layers have a stride of 1×1. The final Conv2D layer, however, takes the place of a max pooling layer, and instead reduces the spatial dimensions of the output volume via strided convolution.

In 2014, Springenberg et al. published a paper entitled Striving for Simplicity: The All Convolutional Net which demonstrated that replacing pooling layers with strided convolutions can increase accuracy in some situations.

ResNet, a popular CNN, has embraced this finding — if you ever look at the source code to a ResNet implementation (or implement it yourself), you’ll see that ResNet relies on strided convolution rather than max pooling to reduce spatial dimensions in between residual modules.

padding

Figure 5: A 3×3 kernel applied to an image with padding. The Keras Conv2D padding parameter accepts either "valid" (no padding) or "same" (padding + preserving spatial dimensions). This animation was contributed to StackOverflow (source).

The padding  parameter to the Keras Conv2D class can take on one of two values: valid  or same .

With the valid  parameter the input volume is not zero-padded and the spatial dimensions are allowed to reduce via the natural application of convolution.

The following example would naturally reduce the spatial dimensions of our volume:
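
A minimal example of valid padding (no zero-padding applied):

    model.add(Conv2D(32, (3, 3), padding="valid", activation="relu"))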

Note: See this tutorial on the basics of convolution if you need help understanding how and why spatial dimensions naturally reduce when applying convolutions.

If you instead want to preserve the spatial dimensions of the volume such that the output volume size matches the input volume size, then you would want to supply a value of same  for the padding :
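
For example:

    model.add(Conv2D(32, (3, 3), padding="same", activation="relu"))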

While the default Keras Conv2D value is valid , I will typically set it to same  for the majority of the layers in my network and then reduce the spatial dimensions of my volume by either:

  1. Max pooling
  2. Strided convolution

I would recommend that you use a similar approach to padding with the Keras Conv2D class as well.

data_format

Figure 6: Keras, as a high-level framework, supports multiple deep learning backends. Thus, it includes support for both “channels last” and “channels first” channel ordering.

The data format value in the Conv2D class can be either channels_last  or channels_first :

  • The TensorFlow backend to Keras uses channels last ordering.
  • The Theano backend uses channels first ordering.

You typically shouldn’t ever have to touch this value when using Keras, for two reasons:

  1. You are more than likely using the TensorFlow backend to Keras
  2. And if not, you’ve likely already updated your ~/.keras/keras.json  configuration file to set your backend and associated channel ordering

My advice is to never explicitly set the data_format  in your Conv2D class unless you have a very good reason to do so.

dilation_rate

Figure 7: The Keras deep learning Conv2D parameter, dilation_rate, accepts a 2-tuple of integers to control dilated convolution (source).

The dilation_rate  parameter of the Conv2D class is a 2-tuple of integers, controlling the dilation rate for dilated convolution. Dilated convolution is a basic convolution only applied to the input volume with defined gaps, as Figure 7 above demonstrates.

You may use dilated convolution when:

  1. You are working with higher resolution images but fine-grained details are still important
  2. You are constructing a network with fewer parameters

Discussing dilated convolution is outside the scope of this tutorial so if you are interested in learning more, please refer to this tutorial.

activation

Figure 8: Keras provides a number of common activation functions. The activation parameter to Conv2D is a matter of convenience and allows the activation function for use after convolution to be specified.

The activation  parameter to the Conv2D class is simply a convenience parameter, allowing you to supply a string specifying the name of the activation function you want to apply after performing the convolution.

In the following example we perform convolution and then apply a ReLU activation function:
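
For instance:

    model.add(Conv2D(32, (3, 3), activation="relu"))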

The above code is equivalent to:
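
That is, an explicit Activation layer added immediately after the convolution:

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation("relu"))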

My advice?

Use the activation  parameter if you like and if it helps keep your code cleaner — it’s entirely up to you and won’t have an impact on the performance of your Convolutional Neural Network.

use_bias

The use_bias  parameter of the Conv2D class controls whether a bias vector is added to the convolutional layer.

Typically you’ll want to leave this value as True , although some implementations of ResNet will leave the bias parameter out.

I recommend keeping the bias unless you have a good reason not to.

kernel_initializer and bias_initializer

Figure 9: Keras offers a number of initializers for the Conv2D class. Initializers can be used to help train deeper neural networks more effectively.

The kernel_initializer  controls the initialization method used to initialize all values in the Conv2D class prior to actually training the network.

Similarly, the bias_initializer  controls how the bias vector is initialized before training starts.

A full list of initializers can be found in the Keras documentation; however, here is what I recommend:

  1. Leave the bias_initializer  alone — it is filled with zeros by default (you’ll rarely, if ever, have to change the bias initialization method).
  2. The kernel_initializer  defaults to glorot_uniform , the Xavier Glorot uniform initialization method, which is perfectly fine for the majority of tasks; however, for deeper neural networks you may want to use  he_normal  (MSRA/He et al. initialization) which works especially well when your network has a large number of parameters (i.e., VGGNet).

In the vast majority of CNNs I implement I am either using glorot_uniform  or he_normal  — I recommend you do the same unless you have a specific reason to use a different initializer.
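
Switching a layer over to MSRA/He initialization is just a matter of supplying the string name of the initializer (the filter count here is illustrative):

    model.add(Conv2D(64, (3, 3), padding="same", activation="relu",
        kernel_initializer="he_normal"))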

kernel_regularizer, bias_regularizer, and activity_regularizer

Figure 10: Regularization hyperparameters should be adjusted especially when working with large datasets and really deep networks. The kernel_regularizer parameter in particular is one that I adjust often to reduce overfitting and increase the ability for a model to generalize to unfamiliar images.

The kernel_regularizer , bias_regularizer , and activity_regularizer  control the type and amount of regularization method applied to the Conv2D layer.

Applying regularization helps you to:

  1. Reduce the effects of overfitting
  2. Increase the ability of your model to generalize

When working with large datasets and deep neural networks applying regularization is typically a must.

Normally you’ll encounter either L1 or L2 regularization being applied — I will use L2 regularization on my networks if I detect signs of overfitting:
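
A sketch of applying L2 regularization to a layer (the 0.0005 strength is illustrative; see the note on ranges below):

    from keras.regularizers import l2

    model.add(Conv2D(64, (3, 3), padding="same", activation="relu",
        kernel_regularizer=l2(0.0005)))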

The amount of regularization you apply is a hyperparameter you will need to tune for your own dataset, but I find values of 0.0001-0.001 are good ranges to start with.

I would suggest leaving your bias regularizer alone — regularizing the bias typically has very little impact on reducing overfitting.

I also suggest leaving the activity_regularizer  at its default value (i.e., no activity regularization).

While weight regularization methods penalize the weights themselves, an activity regularizer instead penalizes the outputs of a layer (i.e., the activations the layer produces).

Unless there is a very specific reason you’re looking to regularize the output it’s best to leave this parameter alone.

kernel_constraint and bias_constraint

The final two parameters to the Keras Conv2D class are the kernel_constraint  and bias_constraint .

These parameters allow you to impose constraints on the Conv2D layer, including non-negativity, unit normalization, and min-max normalization.

You can see the full list of supported constraints in the Keras documentation.
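
If you do need one, the usage is straightforward. For example, a max-norm constraint on the kernel weights (the value of 2.0 is illustrative):

    from keras.constraints import max_norm

    model.add(Conv2D(64, (3, 3), padding="same", activation="relu",
        kernel_constraint=max_norm(2.0)))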

Again, I would recommend leaving both the kernel constraint and bias constraint alone unless you have a specific reason to impose constraints on the Conv2D layer.

The CALTECH-101 (subset) dataset

Figure 11: The CALTECH-101 dataset consists of 101 object categories with 40 to 800 images per class. The dataset for today’s blog post example consists of just 4 of those classes: faces, leopards, motorbikes, and airplanes (source).

The CALTECH-101 dataset is a dataset of 101 object categories with 40 to 800 images per class.

Most classes have approximately 50 images.

The goal of the dataset is to train a model capable of predicting the target class.

Prior to the resurgence of neural networks and deep learning, the state-of-the-art accuracy on this dataset was only ~65%.

However, by using Convolutional Neural Networks, it’s been possible to achieve 90%+ accuracy (as He et al. demonstrated in their 2014 paper, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition).

Today we are going to implement a simple yet effective CNN that is capable of achieving 96%+ accuracy on a 4-class subset of the dataset:

  • Faces: 436 images
  • Leopards: 201 images
  • Motorbikes: 799 images
  • Airplanes: 801 images

The reason we are using a subset of the dataset is so you can easily follow along with this example and train the network from scratch, even if you do not have a GPU.

Again, the purpose of this tutorial is not meant to deliver state-of-the-art results on CALTECH-101 — it’s instead meant to teach you the fundamentals of how to use Keras’ Conv2D class to implement and train a custom Convolutional Neural Network.

Downloading the dataset and source code

Interested in following along with today’s tutorial? If so, you’ll need to download both:

  1. The source code to this post (using the “Downloads” section of the post)
  2. The CALTECH-101 dataset

After you have downloaded the .zip of the source code, unarchive it, and then change directory into the keras-conv2d-example  directory:
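
Assuming the download is named keras-conv2d-example.zip (your filename may differ slightly):

    $ unzip keras-conv2d-example.zip
    $ cd keras-conv2d-example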

From there, use the following wget  command to download and unarchive the CALTECH-101 dataset:
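
At the time of writing the dataset was hosted at the URL below; if it has since moved, grab 101_ObjectCategories.tar.gz from the official CALTECH-101 page and extract it the same way:

    $ wget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz
    $ tar -zxvf 101_ObjectCategories.tar.gz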

Now that we’ve downloaded our code and dataset, we can move on to inspecting the project structure.

Project structure

To see how our project is organized, simply use the tree  command:
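
A trimmed sketch of the layout (only the four class folders we care about are shown; the remaining category folders, and any files generated after training, are omitted):

    $ tree --dirsfirst -L 2
    .
    ├── 101_ObjectCategories
    │   ├── Faces
    │   ├── Leopards
    │   ├── Motorbikes
    │   └── airplanes
    │   ...
    ├── pyimagesearch
    │   ├── __init__.py
    │   └── stridednet.py
    ├── train.py
    └── plot.png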

The first directory, 101_ObjectCategories/  is our dataset that we extracted in the last section. It contains 102 folders, so I eliminated the lines that we don’t care about for today’s blog post. What remains is the subset of four object categories previously discussed.

The pyimagesearch/  module is not pip installable. You must use the “Downloads” section to grab the files. Inside the module, you’ll find stridednet.py  which contains the StridedNet class.

In addition to stridednet.py , we’ll review train.py in the root folder. Our training script will make use of StridedNet and our small dataset to train a model for example purposes.

The training script will produce a training history plot, plot.png .

A Keras Conv2D Example

Figure 12: A deep learning CNN dubbed “StridedNet” serves as the example for today’s blog post about Keras Conv2D parameters.

Now that we’ve reviewed both (1) how the Keras Conv2D class works and (2) the dataset we’ll be training our network on, let’s go ahead and implement the Convolutional Neural Network we’ll be training.

The CNN we’ll be using today, “StridedNet”, is one I made up for the purposes of this tutorial.

StridedNet has three important characteristics:

  1. It uses strided convolutions rather than pooling operations to reduce volume size
  2. The first CONV layer uses 7×7 filters but all other layers in the network use 3×3 filters (similar to VGG)
  3. The MSRA/He et al. normal distribution algorithm is used to initialize all weights in the network

Let’s go ahead and implement StridedNet now.

Open up a new file, name it stridednet.py , and insert the following code:
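
The listing below is a sketch of the top of the file. The line numbers referenced throughout this walkthrough correspond to the full script from the “Downloads”; this sketch mirrors its structure:

    # import the necessary packages
    from keras.models import Sequential
    from keras.layers import BatchNormalization
    from keras.layers import Conv2D
    from keras.layers import Activation
    from keras.layers import Flatten
    from keras.layers import Dropout
    from keras.layers import Dense
    from keras import backend as K

    class StridedNet:
        @staticmethod
        def build(width, height, depth, classes, reg, init="he_normal"):
            # initialize the model along with the input shape to be
            # "channels last" and the channels dimension itself
            model = Sequential()
            inputShape = (height, width, depth)
            chanDim = -1

            # if we are using "channels first", update the input shape
            # and channels dimension
            if K.image_data_format() == "channels_first":
                inputShape = (depth, height, width)
                chanDim = 1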

All of our Keras modules are imported on Lines 2-9, most notably Conv2D .

Our StridedNet  class is defined on Line 11 with a single build  method on Line 13.

The build  method accepts six parameters:

  • width : Image width in pixels.
  • height : The image height in pixels.
  • depth : The number of channels for the image.
  • classes : The number of classes the model needs to predict.
  • reg : Regularization method.
  • init : The kernel initializer.

The width , height , and depth  parameters affect the input volume shape.

For "channels_last"  ordering, the input shape is specified on Line 17 where the depth  is last.

We can use the Keras backend to check the image_data_format  to see if we need to accommodate "channels_first"  ordering (Lines 22-24).

Let’s take a look at how we can build the first three CONV layers:
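
A sketch of those layers, continuing inside build (the exact filter counts here are illustrative; what matters is the 7×7 first layer and the strided 3×3 layers that follow):

    # our first CONV layer will learn a total of 16 filters, each
    # of which are 7x7 -- we then apply 2x2 strides to reduce
    # the spatial dimensions of the volume
    model.add(Conv2D(16, (7, 7), strides=(2, 2), padding="valid",
        kernel_initializer=init, kernel_regularizer=reg,
        input_shape=inputShape))

    # here we stack two CONV layers, where each layer learns a
    # total of 32 (3x3) filters; the second uses strided convolution
    # in place of pooling to reduce the spatial dimensions
    model.add(Conv2D(32, (3, 3), padding="same",
        kernel_initializer=init, kernel_regularizer=reg))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(32, (3, 3), strides=(2, 2), padding="same",
        kernel_initializer=init, kernel_regularizer=reg))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Dropout(0.25))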

Each Conv2D  is stacked on the network with model.add .

Notice that for the first Conv2D  layer, we’ve explicitly specified our inputShape  so that the CNN architecture has somewhere to start and build off of. Then, from here forward, each time model.add  is called, the previous layer acts as the input to the next layer.

Taking into account the parameters to Conv2D  discussed previously, you’ll notice that we are using strided convolution to reduce spatial dimensions rather than pooling operations.

ReLU activation is applied (refer to Figure 8) along with batch normalization and dropout.

I nearly always recommend batch normalization because it tends to stabilize training and make tuning hyperparameters easier. That said, it can double or triple your training time. Use it wisely.

Dropout’s purpose is to help your network generalize and not overfit. Neurons from the current layer, with probability p, will randomly disconnect from neurons in the next layer so that the network has to rely on the existing connections. I highly recommend utilizing dropout.

Let’s take a look at more layers of StridedNet:
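
A sketch of the next block, which follows the same CONV => RELU => BN pattern but doubles the number of filters (a 128-filter block with the same layout would follow it):

    # stack two more CONV layers, keeping the filter size at 3x3
    # but increasing the total number of learned filters to 64
    model.add(Conv2D(64, (3, 3), padding="same",
        kernel_initializer=init, kernel_regularizer=reg))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(64, (3, 3), strides=(2, 2), padding="same",
        kernel_initializer=init, kernel_regularizer=reg))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Dropout(0.25))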

The deeper the network goes, the more filters we learn.

At the end of most networks we add a fully connected layer:
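
A sketch of the tail of the network:

    # fully connected layer
    model.add(Flatten())
    model.add(Dense(512, kernel_initializer=init))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    # softmax classifier
    model.add(Dense(classes))
    model.add(Activation("softmax"))

    # return the constructed network architecture
    return model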

A single fully connected layer with 512  nodes is appended to the CNN.

Finally, a "softmax"  classifier is added to the network — the output of this layer are the prediction values themselves.

That’s a wrap.

As you can see, Keras syntax is quite straightforward once you know what the parameters mean (even if Conv2D  does have the potential for quite a few of them).

Let’s learn how to write a script to train StridedNet with some data!

Implementing the training script

Now that we have implemented our CNN architecture, let’s create the driver script used to train the network.

Open up the train.py  file and insert the following code:
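
A sketch of the imports (again, the line numbers cited in this walkthrough refer to the full train.py from the “Downloads”; this sketch mirrors its layout):

    # set the matplotlib backend so figures can be saved in the background
    import matplotlib
    matplotlib.use("Agg")

    # import the necessary packages
    from pyimagesearch.stridednet import StridedNet
    from sklearn.preprocessing import LabelBinarizer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report
    from keras.preprocessing.image import ImageDataGenerator
    from keras.optimizers import Adam
    from keras.regularizers import l2
    from imutils import paths
    import matplotlib.pyplot as plt
    import numpy as np
    import argparse
    import cv2
    import os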

We import our modules and packages on Lines 2-18. Notice that we aren’t importing Conv2D  anywhere. Our CNN implementation is contained within stridednet.py  and our StridedNet  import handles it (Line 6).

Our matplotlib  backend is set on Line 3 — this is necessary so we can save our plot as an image file rather than viewing it in the GUI.

We import functionality from sklearn  on Lines 7-9:

  • LabelBinarizer : For “one-hot” encoding our class labels.
  • train_test_split : For splitting our data such that we have training and evaluation sets.
  • classification_report : We’ll use this to print statistics from evaluation.

From keras  we’ll be using:

  • ImageDataGenerator : For data augmentation. See last week’s blog post for more information on Keras data generators.
  • Adam : An optimizer alternative to SGD.
  • l2 : The regularizer we’ll be using. Scroll up to read about regularizers. Applying regularization reduces overfitting and helps with generalization.

My imutils paths  module will be used to grab the paths to our images in the dataset.

We’ll use argparse  to handle command line arguments at runtime, and OpenCV ( cv2 ) will be used to load and preprocess images from the dataset.

Let’s go ahead and parse the command line arguments now:
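
A sketch of the argument parser (the short flags are assumptions; the three long flags match the arguments described below):

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-d", "--dataset", required=True,
        help="path to input dataset")
    ap.add_argument("-e", "--epochs", type=int, default=50,
        help="# of epochs to train our network for")
    ap.add_argument("-p", "--plot", type=str, default="plot.png",
        help="path to output loss/accuracy plot")
    args = vars(ap.parse_args())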

Our script can accept three command line arguments:

  • --dataset : The path to the input dataset.
  • --epochs : The number of epochs to train for. By default , we’ll train for 50  epochs.
  • --plot : Our loss/accuracy plot will be output to disk. This argument contains the file path. By default, it is simply "plot.png" .

Let’s prepare to load our dataset:
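
A sketch of those initializations (the class names match the four folders in our CALTECH-101 subset):

    # initialize the set of labels from the CALTECH-101 dataset we are
    # going to train our network on
    LABELS = set(["Faces", "Leopards", "Motorbikes", "airplanes"])

    # grab the list of images in our dataset directory, then initialize
    # the lists of data (i.e., images) and class labels
    print("[INFO] loading images...")
    imagePaths = list(paths.list_images(args["dataset"]))
    data = []
    labels = []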

Before we actually load our dataset, we’ll go ahead and initialize:

  • LABELS : The labels we’ll use for training.
  • imagePaths : A list of image paths for the dataset directory. We’ll filter these based on the parsed class labels from the file paths soon.
  • data : A list to hold our images that our network will be trained on.
  • labels : A list to hold our class labels that correspond to the data.

Let’s populate our data  and labels  lists:
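
A sketch of the loop (the class label is assumed to be the name of the image’s parent directory, which is how CALTECH-101 is organized on disk):

    # loop over the image paths
    for imagePath in imagePaths:
        # extract the class label from the filename
        label = imagePath.split(os.path.sep)[-2]

        # if the label of the current image is not part of the labels
        # we are interested in, then ignore the image
        if label not in LABELS:
            continue

        # load the image and resize it to a fixed 96x96 pixels,
        # ignoring aspect ratio
        image = cv2.imread(imagePath)
        image = cv2.resize(image, (96, 96))

        # update the data and labels lists, respectively
        data.append(image)
        labels.append(label)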

Beginning on Line 42, we’ll loop over all imagePaths . Inside the loop we:

  • Extract the label  from the path (Line 44).
  • Filter only the classes in the LABELS  set (Lines 48 and 49). These two lines cause us to skip any label  not belonging to the Faces, Leopards, Motorbikes, or Airplanes classes, as defined on Line 32.
  • Load and resize  our image  (Lines 53 and 54).
  • And finally, add the image  and label  to their respective lists (Lines 57 and 58).

There are four actions taking place in the next block:
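
A sketch of that block (the exact split ratio and augmentation settings are illustrative):

    # convert the data into a NumPy array, then preprocess it by scaling
    # all pixel intensities to the range [0, 1]
    data = np.array(data, dtype="float") / 255.0

    # perform one-hot encoding on the labels
    lb = LabelBinarizer()
    labels = lb.fit_transform(labels)

    # partition the data into training and testing splits using 75% of
    # the data for training and the remaining 25% for testing
    (trainX, testX, trainY, testY) = train_test_split(data, labels,
        test_size=0.25, stratify=labels, random_state=42)

    # initialize the training image generator for data augmentation
    aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
        width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15,
        horizontal_flip=True, fill_mode="nearest")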

These actions include:

  • Converting data  to a NumPy array with each image  scaled to the range [0, 1] (Line 62).
  • Binarize our labels  into “one-hot encoding” with our LabelBinarizer  (Lines 65 and 66). This means that our labels  are now represented numerically where “one-hot” examples might be:
    • [0, 0, 0, 1]  for “airplane”
    • [0, 1, 0, 0]  for “Leopards”
    • etc.
  • Split our data  into training and testing (Lines 70 and 71).
  • Initialize our ImageDataGenerator  for data augmentation (Lines 74-76). You can read more about it here.

Now we’re ready to write code to actually train our model:
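
A sketch of the compile-and-train step (the learning rate, decay schedule, regularization strength, and batch size are illustrative starting points):

    # initialize the optimizer and model
    print("[INFO] compiling model...")
    opt = Adam(lr=1e-4, decay=1e-4 / args["epochs"])
    model = StridedNet.build(width=96, height=96, depth=3,
        classes=len(lb.classes_), reg=l2(0.0005))
    model.compile(loss="categorical_crossentropy", optimizer=opt,
        metrics=["accuracy"])

    # train the network
    print("[INFO] training network for {} epochs...".format(args["epochs"]))
    H = model.fit_generator(aug.flow(trainX, trainY, batch_size=32),
        validation_data=(testX, testY),
        steps_per_epoch=len(trainX) // 32,
        epochs=args["epochs"])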

Lines 80-84 prepare our StridedNet  model , building it with the Adam  optimizer and learning rate decay, our specified input shape, number of classes, and l2  regularization.

From there, on Lines 89-91 we’ll fit our model to the data. In this case, “fit” means “train”, and calling .fit_generator  means we’re training with our data augmentation image data generator.

To evaluate our model, we’ll use the testX  data and we’ll print a classification_report :
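
A sketch of the evaluation step:

    # evaluate the network
    print("[INFO] evaluating network...")
    predictions = model.predict(testX, batch_size=32)
    print(classification_report(testY.argmax(axis=1),
        predictions.argmax(axis=1), target_names=lb.classes_))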

And finally we’ll plot our accuracy/loss training history and save it to disk:
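
A sketch of the plotting code (note that older versions of Keras store accuracy under the "acc"/"val_acc" history keys, while newer versions use "accuracy"/"val_accuracy"):

    # plot the training loss and accuracy
    N = args["epochs"]
    plt.style.use("ggplot")
    plt.figure()
    plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
    plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
    plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
    plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
    plt.title("Training Loss and Accuracy on Dataset")
    plt.xlabel("Epoch #")
    plt.ylabel("Loss/Accuracy")
    plt.legend(loc="lower left")
    plt.savefig(args["plot"])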

Training and evaluating our Keras CNN

At this point, we are ready to train our network!

Make sure you have used the “Downloads” section of today’s tutorial to download the source code and example images.

From there, open up a terminal, change directory to where you have downloaded the code and CALTECH-101 dataset, and then execute the following command:
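
Assuming the dataset was extracted into the project directory as shown above, the command looks like this:

    $ python train.py --dataset 101_ObjectCategories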

Figure 13: My accuracy/loss plot generated with Keras and matplotlib for training StridedNet, an example CNN to showcase Keras Conv2D parameters.

As you can see, our network is obtaining ~97% accuracy on the testing set with minimal overfitting!

You can apply deep learning to your own projects

Figure 14: My deep learning book, Deep Learning for Computer Vision with Python, is trusted by members of major universities and corporations — it has helped them and it will help you too.

You don’t need a degree in computer science or mathematics to study deep learning.

Instead, what you need is a book that is designed with the practitioner in mind — a book that not only teaches you the theory of how and why deep learning algorithms work, but then takes that theory and implements it in code so you fully understand it.

Sound too good to be true?

It’s not.

Inside my book, Deep Learning for Computer Vision with Python, you’ll find over 900+ pages of the most complete, comprehensive computer vision and deep learning education available online.

Regardless of whether you’re just getting started in deep learning or you’re already a seasoned deep learning practitioner, my book will help you master computer vision and deep learning through:

  • Super practical walkthroughs that present solutions to actual, real-world problems through image classification (ResNet, Inception, etc.), object detection (Faster R-CNN, SSDs, RetinaNet), and instance segmentation (Mask R-CNNs).
  • Hands-on tutorials (with lots of code) that not only show you the algorithms behind deep learning for computer vision but their implementations as well.
  • A no-nonsense teaching style that is guaranteed to help you master deep learning for image understanding and visual recognition.

But don’t take my word for it — Deep Learning for Computer Vision with Python is trusted by members of major universities and corporations. It has helped them on their journey to CV/DL mastery and I have no doubt it will help you too.

To learn more (and grab your free table of contents + sample chapters PDF), just use the link below:

Grab your free table of contents + sample chapters

Summary

In today’s tutorial, we discussed convolutional layers and the Keras Conv2D class.

You now know:

  • What the most important parameters are to the Keras Conv2D class ( filters , kernel_size , strides , padding )
  • What proper values are for these parameters
  • How to use the Keras Conv2D class to create your own Convolutional Neural Network
  • How to train your CNN and evaluate it on an example dataset

I hope you found this tutorial helpful in understanding the parameters to Keras’ Conv2D Class — if you did, please leave a comment in the comments section.

If you would like to download the source code to this blog post (and to be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


29 Responses to Keras Conv2D and Convolutional Layers

  1. ALTAF HUSSAIN December 31, 2018 at 11:44 am #

    Hi, your articles are very interesting, please write article about different type of convolution, like normal convolution, depth wise, shuffle convolution etc.

    • Adrian Rosebrock December 31, 2018 at 1:03 pm #

      Thank you for the suggestion, Altaf. I cannot guarantee if/when I would cover such a topic but I will certainly consider it for the future.

  2. Jaffar December 31, 2018 at 3:05 pm #

    Thanks awesome article.Keep going Adrian.

    • Adrian Rosebrock January 2, 2019 at 9:15 am #

      Thanks Jaffar!

  3. Hilman December 31, 2018 at 6:46 pm #

    Adrian, you never stop giving the sweet stuff!

    Although I am subscribed to your gurus course and already bought your DL4CV book, your blog still is one of my primary source of learning! Everytime I see your post, my adrenaline just rise high. Thank you and keep it coming!

    • Adrian Rosebrock January 2, 2019 at 9:15 am #

      Thanks so much for supporting the PyImageSearch blog, Hillman! I really appreciate it! I’m so happy to hear you are enjoying the PyImageSearch blog 😀

  4. Mwangi Kabare January 2, 2019 at 3:49 am #

    Hello Adrian your blogpost was really informative and for that i salute you.I wanted to request if you could do a blogpost that shows one how to run the keras trained model on a live webcam feed.I will highly appreciate.

  5. Adrian Gay January 2, 2019 at 6:51 am #

    Hi Adrian

    Thanks for creating this tutorial 🙂 You say that “As far as choosing the appropriate number of filters , I nearly always recommend using powers of 2 as the values.” Why a power of 2? Why not something else? And, why not run the training multiple times with different values of the number of filters and compare the performance metrics?

    Thanks

    Adrian

    • Adrian Rosebrock January 2, 2019 at 9:04 am #

      Linear algebra libraries tend to work optimally with powers of two. They are also a nice “round” number for computer scientists. You’ll often see work such as VGGNet, ResNet, etc. that increase the number of filters by powers of 2.

Secondly, you should absolutely run experiments multiple times with varying parameters — that is called “hyperparameter tuning”.

  6. Xi Wang January 2, 2019 at 8:20 pm #

    As always, your tutorials are so well-written, insightful and easy to understand. I really enjoyed learning from your posts. Thank you for sharing, great work!

    • Adrian Rosebrock January 5, 2019 at 8:58 am #

      Thanks so much, I’m glad you enjoyed them! Thank you for being a PyImageSearch reader.

  7. Alejandro January 8, 2019 at 11:33 pm #

    Thanks Adrian, this is the best tutorial I’ve read about Keras.

    You have a very simple way to explain. I congratulate you.

    Please, How I can get the keras pretrained files using this method and make inferences for object detection (using open cv) . I mean a) model weights and b) model configuration…for example:

    – .pb and .pbtxt (tensorflow)

    – .caffemodel and .prototxt (caffe)

    – yolov3.weights or yolo.cfg (yolo)

    Great work¡¡¡

  8. Pranav Agarwal January 12, 2019 at 8:29 am #

    Great article. Cleared many of my doubts. But a thing is still confusing me. Does it matter if we use Batch Normalization(BN) before Activation function? I recently checked some of the implementation of CNN and many have BN before Activation and getting almost similar results. I am not able to get the reasoning behind this?

    • Adrian Rosebrock January 16, 2019 at 10:15 am #

      Typically I recommend you put your BN layer AFTER your activation. If you place it BEFORE your activation then approximately half of your values will be set to zero during a ReLU. That said, some architectures do put their BN before the activation — it’s an architecture hyperparameter worth tuning. If you’re interested in learning more about where to place the batch normalization layer, including my other tips, suggestions, and best practices, be sure to refer to Deep Learning for Computer Vision with Python.

      • Pranav Agarwal January 20, 2019 at 2:45 pm #

        Thanks!

  9. Klaus Klein February 13, 2019 at 7:41 am #

    Thanks for publishing such an easily comprehensible tutorial, Adrian.

    Just to nitpick: You’re occasionally referring to the kernel_size argument as filter_size (or nxn filters in the text), which might result in mild confusion.

  10. walid March 1, 2019 at 9:09 am #

    Great illustration
    I have still one question

    Why is the Dilation_rate in Keras a tuple and not a single value?

  11. Manuel March 10, 2019 at 6:21 pm #

    I am really enjoying your book.

    I would be interested in applying Grad CAM or Class Activation Maps on my models but all I can see are code for other people pre-trained models.any suggestions?

    • Adrian Rosebrock March 13, 2019 at 3:41 pm #

      Thanks Manuel, I’m glad you’re enjoying it. As far as your question goes, do you have an example for what you are trying to achieve?

  12. Emal Khan March 13, 2019 at 2:14 pm #

    I love your tutorials it really helped me a lot in my final project. But whenever I start reading a post I usually get some doubts then after searching I found the answer here though.
    I have a suggestion it would be better if something like a table of contents of a list of was provided so everyone would be able to find and read topics of their interest.

    • Adrian Rosebrock March 13, 2019 at 3:05 pm #

      Hey Eman — do you mean a table of contents for each individual post? Or an index/list of every post on the PyImageSearch blog?

  13. Martin June 27, 2019 at 12:42 pm #

    Hi Adrian, is there a reason why your images are always 96×96 pixels or 128×128 pixels? I mean, you also choose your kernel that way. But shouldn’t something odd like 128×96 with an 3×2 kernel work as well?

    • Adrian Rosebrock July 4, 2019 at 10:53 am #

      The underlying linear algebra optimization libraries work best for square inputs, but yes, you could use other sizes if you wish.

  14. Jakub October 10, 2019 at 5:46 am #

    If I’ve got grayscale images as an Input. How could I train the model without having to convert it to RGB? I reshaped the image with OpenCV but I’m not quite sure if it’s a correct form to do that, and kearas.fit_generator() gives me an error

    • Adrian Rosebrock October 10, 2019 at 10:08 am #

      Set the depth to be 1 when instantiating the network. From there you can use your grayscale images.

  15. Ibrahim November 10, 2019 at 3:02 am #

    Thanks sir Adrian for this nice tutorial but I am still not clear about “bias vector”. I have another question that what is the criteria under which we categorize data sets into large and small data sets. The same case applies to neural networks. what are light and deep networks ? Please explain these things. Thanks
