Deep Learning with OpenCV

Two weeks ago OpenCV 3.3 was officially released, bringing with it a highly improved deep learning (dnn) module. This module now supports a number of deep learning frameworks, including Caffe, TensorFlow, and Torch/PyTorch.

Furthermore, this API for using pre-trained deep learning models is compatible with both the C++ API and the Python bindings, making it dead simple to:

  1. Load a model from disk.
  2. Pre-process an input image.
  3. Pass the image through the network and obtain the output classifications.

While we cannot train deep learning models using OpenCV (nor should we), this does allow us to take our models trained using dedicated deep learning libraries/tools and then efficiently use them directly inside our OpenCV scripts.

In the remainder of this blog post I’ll demonstrate the fundamentals of how to take a pre-trained deep learning network on the ImageNet dataset and apply it to input images.

To learn more about deep learning with OpenCV, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Deep Learning with OpenCV

In the first part of this post, we’ll discuss the OpenCV 3.3 release and the overhauled dnn module.

We’ll then write a Python script that will use OpenCV and GoogLeNet (pre-trained on ImageNet) to classify images.

Finally, we’ll explore the results of our classifications.

Deep Learning inside OpenCV 3.3

The dnn module of OpenCV has been part of the opencv_contrib repository since version 3.1. With the release of OpenCV 3.3, it has been promoted to the main repository.

Why should you care?

Deep Learning is a fast growing domain of Machine Learning and if you’re working in the field of computer vision/image processing already (or getting up to speed), it’s a crucial area to explore.

With OpenCV 3.3, we can utilize pre-trained networks with popular deep learning frameworks. The fact that they are pre-trained implies that we don’t need to spend many hours training the network — rather we can complete a forward pass and utilize the output to make a decision within our application.

OpenCV is not (and does not intend to be) a tool for training networks — there are already great frameworks available for that purpose. Since a trained network (such as a CNN) can be used as a classifier, it makes logical sense that OpenCV has a deep learning module that we can leverage easily within the OpenCV ecosystem.

Popular network architectures compatible with OpenCV 3.3 include:

  • GoogLeNet (used in this blog post)
  • AlexNet
  • SqueezeNet
  • VGGNet (and associated flavors)
  • ResNet

The release notes for this module are available on the OpenCV repository page.

Aleksandr Rybnikov, the main contributor to the dnn module, has ambitious plans for it, so be sure to stay on the lookout and read his release notes (they are in Russian, so make sure you have Google Translate enabled in your browser if Russian is not your native language).

It’s my opinion that the dnn module will have a big impact on the OpenCV community, so let’s get the word out.

Configure your machine with OpenCV 3.3

Installing OpenCV 3.3 is on par with installing other versions. The same install tutorials can be utilized — just make sure you download and use the correct release.

Simply follow these instructions for macOS or Ubuntu while making sure to use the opencv and opencv_contrib releases for OpenCV 3.3. If you opt for the macOS + Homebrew install instructions, be sure to use the --HEAD switch (among the others mentioned) to get the bleeding-edge version of OpenCV.

If you’re using virtual environments (highly recommended), you can easily install OpenCV 3.3 alongside a previous version. Just create a brand new virtual environment (and name it appropriately) as you follow the tutorial corresponding to your system.

OpenCV deep learning functions and frameworks

OpenCV 3.3 supports the Caffe, TensorFlow, and Torch/PyTorch frameworks.

Keras is currently not supported (since Keras is actually a wrapper around backends such as TensorFlow and Theano), although I imagine it’s only a matter of time until Keras is directly supported given the popularity of the deep learning library.

Using OpenCV 3.3, we can pre-process input images into blobs ready for classification using the following functions inside dnn:

  • cv2.dnn.blobFromImage
  • cv2.dnn.blobFromImages

We can directly import models from various frameworks via the “create” methods:

  • cv2.dnn.createCaffeImporter
  • cv2.dnn.createTensorFlowImporter
  • cv2.dnn.createTorchImporter

Although I think it’s easier to simply use the “read” methods and load a serialized model from disk directly:

  • cv2.dnn.readNetFromCaffe
  • cv2.dnn.readNetFromTensorFlow
  • cv2.dnn.readNetFromTorch
  • cv2.dnn.readTorchBlob

Once we have loaded a model from disk, the .forward method is used to forward-propagate our image and obtain the actual classification.

To learn how all these OpenCV deep learning pieces fit together, let’s move on to the next section.

Classifying images using deep learning and OpenCV

In this section, we’ll be creating a Python script that can be used to classify input images using OpenCV and GoogLeNet (pre-trained on ImageNet) using the Caffe framework.

The GoogLeNet architecture (now known as “Inception” after the novel micro-architecture) was introduced by Szegedy et al. in their 2014 paper, Going deeper with convolutions.

Other architectures are also supported with OpenCV 3.3 including AlexNet, ResNet, and SqueezeNet — we’ll be examining these architectures for deep learning with OpenCV in a future blog post.

In the meantime, let’s learn how we can load a pre-trained Caffe model and use it to classify an image using OpenCV.

To begin, open up a new file, name it , and insert the following code:

On Lines 2-5 we import our necessary packages.

Then we parse command line arguments:
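A sketch of that parsing step with the four required switches; the sample values passed to parse_args here are illustrative stand-ins for what the real script reads from sys.argv:

```python
import argparse

# construct the argument parser and establish the four
# required command line arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-l", "--labels", required=True,
    help="path to ImageNet labels (i.e., syn-sets)")

# in the real script this would be ap.parse_args() with no
# arguments; a demo list is supplied here so the sketch runs
args = vars(ap.parse_args([
    "--image", "images/jemma.png",
    "--prototxt", "bvlc_googlenet.prototxt",
    "--model", "bvlc_googlenet.caffemodel",
    "--labels", "synset_words.txt"]))
```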

On Line 8 we create an argument parser followed by establishing four required command line arguments (Lines 9-16):

  • --image : The path to the input image.
  • --prototxt : The path to the Caffe “deploy” prototxt file.
  • --model : The pre-trained Caffe model (i.e., the network weights themselves).
  • --labels : The path to ImageNet labels (i.e., “syn-sets”).

Now that we’ve established our arguments, we parse them and store them in a variable, args, for easy access later.

Let’s load the input image and class labels:

On Line 20, we load the image from disk via cv2.imread.

Let’s take a closer look at the class label data which we load on Lines 23 and 24:

As you can see, we have a unique identifier followed by a space, some class labels, and a new-line. Parsing this file line-by-line is straightforward and efficient using Python.

First, we load the class label rows from disk into a list. To do this we strip whitespace from the beginning and end of each line while using the new-line ('\n') as the row delimiter (Line 23). The result is a list of IDs and labels:

Second, we use a list comprehension to extract the relevant class label from each entry in rows by looking for the space (' ') after the ID and then keeping only the first comma-delimited class label. The result is simply a list of class labels:
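The two parsing steps can be demonstrated on a couple of sample rows in the same format as the synset file:

```python
# two sample rows in the ImageNet synset format: an ID, a space,
# then one or more comma-delimited class labels
data = "n01440764 tench, Tinca tinca\nn01443537 goldfish, Carassius auratus"

# step 1: split on the new-line delimiter into rows
rows = data.strip().split("\n")

# step 2: keep everything after the first space, then take
# the first comma-delimited class label
classes = [r[r.find(" ") + 1:].split(",")[0] for r in rows]
print(classes)  # ['tench', 'goldfish']
```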

Now that we’ve taken care of the labels, let’s dig into the dnn module of OpenCV 3.3:

Taking note of the comment in the block above, we use cv2.dnn.blobFromImage to perform mean subtraction and normalize the input image, which results in a known blob shape (Line 31).

We then load our model from disk:

Since we’ve opted to use Caffe, we utilize cv2.dnn.readNetFromCaffe to load our Caffe model definition prototxt and pre-trained model from disk (Line 35).

If you are familiar with Caffe, you’ll recognize the prototxt file as a plain-text configuration which follows a JSON-like structure — I recommend that you open bvlc_googlenet.prototxt from the “Downloads” section in a text editor to inspect it.

Note: If you are unfamiliar with configuring Caffe CNNs, then this is a great time to consider the PyImageSearch Gurus course — inside the course you’ll get an in depth look at using deep nets for computer vision and image classification.

Now let’s complete a forward pass through the network with blob as the input:
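Sketched as a function (the real net object requires the model files from the “Downloads” section), the forward pass and its timing look like:

```python
import time

def classify(net, blob):
    # set the blob as the input to the network ...
    net.setInput(blob)
    start = time.time()
    # ... and perform a single forward pass (no back-propagation)
    preds = net.forward()
    end = time.time()
    print("[INFO] classification took {:.5} seconds".format(end - start))
    return preds
```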

It is important to note at this step that we aren’t training a CNN — rather, we are making use of a pre-trained network. Therefore we are just passing the blob through the network (i.e., forward propagation) to obtain the result (no back-propagation).

First, we specify blob as our input (Line 39). Second, we record a start timestamp (Line 40), followed by passing our input image through the network and storing the predictions. Finally, we set an end timestamp (Line 42) so we can calculate the difference and print the elapsed time (Line 43).

Let’s finish up by determining the top five predictions for our input image:
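With a made-up prediction vector, the NumPy idiom is to sort ascending, reverse to descending, and slice:

```python
import numpy as np

# hypothetical network output: one row of class probabilities
preds = np.array([[0.10, 0.70, 0.05, 0.15]])

# argsort ascending, reverse to descending, keep the top five
idxs = np.argsort(preds[0])[::-1][:5]
print(idxs)  # [1 3 0 2]
```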

Using NumPy, we can easily sort and extract the top five predictions on Line 47.

Next, we will display the top five class predictions:

The idea for this loop is to (1) draw the top prediction label on the image itself and (2) print the associated class label probabilities to the terminal.

Lastly, we display the image to the screen (Line 64) and wait for the user to press a key before exiting (Line 65).

Deep learning and OpenCV classification results

Now that we have implemented our Python script to utilize deep learning with OpenCV, let’s go ahead and apply it to a few example images.

Make sure you use the “Downloads” section of this blog post to download the source code + pre-trained GoogLeNet architecture + example images.

From there, open up a terminal and execute the following command:
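One possible invocation, with the script name as an illustrative placeholder and the image, prototxt, model, and labels filenames assumed to follow the naming used elsewhere in this post:

```shell
$ python deep_learning_with_opencv.py --image images/jemma.png \
    --prototxt bvlc_googlenet.prototxt \
    --model bvlc_googlenet.caffemodel --labels synset_words.txt
```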

Figure 1: Using OpenCV and deep learning to predict the class label for an input image.

In the above example, we have Jemma, the family beagle. Using OpenCV and GoogLeNet we have correctly classified this image as “beagle”.

Furthermore, inspecting the top-5 results, we can see that the other top predictions are also relevant: all of them are dogs with a similar physical appearance to beagles.

Taking a look at the timing we also see that the forward pass took < 1 second, even though we are using our CPU.

Keep in mind that the forward pass is substantially faster than the backward pass as we do not need to compute the gradient and backpropagate through the network.

Let’s classify another image using OpenCV and deep learning:

Figure 2: OpenCV and deep learning is used to correctly label this image as “traffic light”.

OpenCV and GoogLeNet correctly label this image as “traffic light” with 100% certainty.

In this example we have a “bald eagle”:

Figure 3: The “deep neural network” (dnn) module inside OpenCV 3.3 can be used to classify images using pre-trained models.

We are once again able to correctly classify the input image.

Our final example is a “vending machine”:

Figure 4: Since our GoogLeNet model is pre-trained on ImageNet, we can classify each of the 1,000 labels inside the dataset using OpenCV + deep learning.

OpenCV + deep learning once again correctly classifies the image.


In today’s blog post we learned how to use OpenCV for deep learning.

With the release of OpenCV 3.3 the deep neural network (dnn) module has been substantially overhauled, allowing us to load pre-trained networks via the Caffe, TensorFlow, and Torch/PyTorch frameworks and then use them to classify input images.

I imagine Keras support will also be coming soon, given how popular the framework is. This will likely be a non-trivial implementation as Keras itself can support multiple numeric computation backends.

Over the next few weeks we’ll:

  1. Take a deeper dive into the dnn module and how it can be used inside our Python + OpenCV scripts.
  2. Learn how to modify Caffe .prototxt  files to be compatible with OpenCV.
  3. Discover how we can apply deep learning using OpenCV to the Raspberry Pi.

This is a can’t-miss series of blog posts, so before you go, make sure you enter your email address in the form below to be notified when these posts go live!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


80 Responses to Deep Learning with OpenCV

  1. Hermann-Marcus Behrens August 21, 2017 at 10:51 am #

    Very cool work! Thanks for your blogposts.

  2. Bayo August 21, 2017 at 11:27 am #

    hello, does the code work on raspberry pi?

    • Adrian Rosebrock August 21, 2017 at 3:37 pm #

      This method will work on the Raspberry Pi, but you’ll need a network small enough to run on the Pi. I’ll be covering this in detail in a future blog post.

    • Mas August 24, 2017 at 11:48 am #

      Strongly yes

  3. Ansh August 21, 2017 at 12:38 pm #

    This is great, can’t wait to try it! It was about time that OpenCV introduced Deep Learning. I was wondering of the following though –

    It would be great to see if we can use DNNs for tracking objects, like the “tracking a ball” example you had blogged. Most of the neural net examples I have seen involve classification or labeling objects. Are neural networks efficient at tracking objects as well? Or is dlib’s object correlation better at it? Which CV method is good (efficient) for what? It would be great if you could blog about the CV landscape, as there are so many methods efficient for different things.

    I am motivated by robotics applications of CV. Also, I am assuming that your subsequent blogs will have methods to train a model as well?


    • Adrian Rosebrock August 21, 2017 at 3:36 pm #

      It really depends on exactly what types of objects you are trying to track and under which conditions. Deep learning can be used to track objects, but typically we use correlation filters for this (like in dlib). I’ll consider doing a survey of object tracking methods in the future, thanks for the suggestion!

    • Aleksandr Rybnikov August 21, 2017 at 4:53 pm #

      Object tracking is already in OpenCV dnn. Lightweight yet accurate SSD with MobileNet backbone is in the samples directory

      • Adrian Rosebrock August 22, 2017 at 10:49 am #

        Thanks for sharing (and for your contributions!) Aleksandr. What you’re referring to is actually object detection, the process of determining the (x, y)-coordinates of a given object in an image. Object tracking normally takes place after a location has been identified (which is what I assume Ansh is referring to). “Object detection” and “object tracking” are two different operations.

        Thanks again for the comment I’ll make sure object detection with OpenCV + deep learning is covered in a future blog post as well.

  4. Steven Barnes August 21, 2017 at 12:51 pm #

    It might be useful to mention where to get the python opencv library for python3 for each platform as it is not obvious. You also mention following the install instructions but do not have a link to them, and again they are not that easy to find on the OpenCV site.

    • Adrian Rosebrock August 21, 2017 at 3:35 pm #

      Hi Steven — I actually link to this page which includes OpenCV + Python install instructions for a variety of different platforms and operating systems.

  5. Diogo Aleixo August 21, 2017 at 12:56 pm #

    Hi Adrian

    Is there a way to train another category on imageNet? The one that i want is not available.

  6. Maham Khan August 21, 2017 at 3:03 pm #

    Wow! This is the best thing ever. Deep learning will be so easy with OpenCV. And also thank you Adrian for making the tutorial so quickly and keeping us updated with the latest release. You are making a great contribution to the Computer Vision community!
    Much appreciated tutorials. Just by going through your post, one can get the whole idea of the process.

    • Adrian Rosebrock August 21, 2017 at 3:34 pm #

      Thanks Maham! I’m glad you enjoyed the post. There will be plenty more on deep learning + OpenCV 🙂

      • Supra August 21, 2017 at 9:20 pm #

        It doesn’t work with the Raspberry Pi 3 on the latest version of Raspbian Stretch.
        I’m using OpenCV 3.3.0, and the problem is “No module named cv2”.

        • Adrian Rosebrock August 22, 2017 at 10:46 am #

          You need to install OpenCV first. It doesn’t matter if you’re using Raspbian Wheezy, Jessie, or Stretch — OpenCV must first be installed.

  7. Aleksandr Rybnikov August 21, 2017 at 4:59 pm #

    BTW, there is an error in the article. Correct name of the developer of the dnn is Aleksandr Rybnikov, actually it’s me

    • Adrian Rosebrock August 22, 2017 at 10:48 am #

      Thank you for bringing this to my attention. I have updated the blog post 🙂 Thank you again for your wonderful contributions to the OpenCV library. I look forward to helping spread the word more regarding your work!

  8. Saumya Rajen Shah August 22, 2017 at 3:29 am #

    Where can we find the imageNet labels?

    • Adrian Rosebrock August 22, 2017 at 10:44 am #

      Please use the “Downloads” section of this blog post. There you will find a .txt file containing the ImageNet labels.

  9. Vincent Thon August 22, 2017 at 5:34 am #

    Hi Adrian, love your work! Your blog is my main go-to place when it comes to computer vision. I have some models trained with TFLearn. Do you think I’d be able to utilize those with cv2.dnn.createTensorFlowImporter?

    • Adrian Rosebrock August 22, 2017 at 10:43 am #

      Hi Vincent — I haven’t tried importing a model trained via TFLearn. I would suggest giving it a try.

  10. Mansoor Nasir August 22, 2017 at 3:11 pm #

    Adrian, this is amazing work. I really appreciate all the effort you put into this step-by-step tutorial. My only question is: how will we use this with a model trained by TensorFlow?

    Thank you for all your help.

    • Adrian Rosebrock August 22, 2017 at 5:17 pm #

      You would replace cv2.dnn.readNetFromCaffe with cv2.dnn.readNetFromTensorFlow.

  11. knaffe August 23, 2017 at 11:45 pm #

    Thank you for your blogs. I have read all of them.
    How could I load my model trained by myself with tensorflow and use it ?
    By the way, Do you know some effective deep or traditional methods for motion detection running on raspberry PI3 with real-time performance?
    Thank you for your great job again and look forward to your new blogs!!

    • Adrian Rosebrock August 24, 2017 at 3:32 pm #

      1. Please see my reply to “Mansoor” above regarding TensorFlow.

      2. Take a look at this blog post for simple motion detection on the Raspberry Pi.

  12. oguzhan August 24, 2017 at 7:12 am #

    So cool, thanks!! We are waiting for the Raspberry Pi tutorial 🙂

  13. Megha Shanbhag August 28, 2017 at 4:16 am #

    Hi, I have installed and built OpenCV 3.3 on my laptop. I have not built opencv_contrib. When I run the example given in the, I get an error stating

    ” File “”, line 34, in
    blob = cv2.dnn.blobFromImage(image, 1, (224, 224), (104, 117, 123))
    AttributeError: ‘module’ object has no attribute ‘blobFromImage'”

    Can you please tell me what could be the issue?

    • Adrian Rosebrock August 28, 2017 at 4:21 pm #

      Can you confirm that you are running OpenCV 3.3?

      The output should be 3.3.0.

      • Boikobo September 5, 2017 at 4:41 am #

        I have a similar issue. It is showing that its opencv 3.3.0 but saying

        blob = cv2.dnn.blobFromImage(image, 1, (224, 224), (104, 117, 123))
        AttributeError: ‘module’ object has no attribute ‘blobFromImage’”

        • Adrian Rosebrock September 5, 2017 at 9:10 am #

          Hi Boikobo — that is indeed very strange. For whatever reason it appears your version of OpenCV was not compiled with “dnn”. I would go back to installing OpenCV and ensure that “dnn” is listed in the “modules to be built” output of CMake.

  14. Smartos August 28, 2017 at 6:08 am #

    great post!

  15. Tham August 29, 2017 at 12:42 am #

    Do you know how to save the model of PyTorch?
    I trained and saved a simple CNN model with PyTorch, but it cannot be loaded by the dnn module (I am using 3.3).

    Complete question can view at StackOverflow(

    • Adrian Rosebrock August 31, 2017 at 8:45 am #

      I have not used PyTorch so unfortunately I do not know the answer to this question. I hope another PyImageSearch reader can help!

  16. Imaduddin A Majid August 29, 2017 at 10:43 am #

    Really great article. Thank you for sharing this with us. I also expected this will work with Keras soon.

  17. Lg September 6, 2017 at 6:05 am #

    Thanks for this post. Really cool stuff.

    I’ve tried with other models like squeezenet, alexnet, bvlc_reference_caffenet with success, the accuracy is good as well.

    Some errors, like a white cat jumping in a meadow recognized as an arctic fox.

    Are there caffe models trained to recognize people ?

    • Adrian Rosebrock September 7, 2017 at 7:06 am #

      Yes, I will actually be covering one for object detection that can detect people in next week’s blog post. Stay tuned 🙂

      • Lg September 9, 2017 at 4:41 am #

        Hi Adrian,

        Looking for models on the Internet, I found several articles about “OXFORD VGG Face dataset”.

        References :

        Then I installed keras_vggface.

        I finally found the caffe model and prototxt. This works very well with your code: “deep-learning-with-opencv”.

        [INFO] classification took 0.66553 seconds
        [INFO] 1. label: Adelaide_Kane, probability: 0.99818
        [INFO] 2. label: Lucy_Hale, probability: 0.00031506
        [INFO] 3. label: Jamie_Gray_Hyder, probability: 0.0001969
        [INFO] 4. label: Odeya_Rush, probability: 0.00010968
        [INFO] 5. label: Sasha_Barrese, probability: 8.4347e-05

        Now, the question is how to train this model with our own pictures or add more people to the dataset.

        I am looking forward to reading your article.

  18. Komal September 7, 2017 at 4:16 am #

    Hey Adrian,
    in OpenCV 3.2 I’m getting an error while using the blobFromImage function of dnn, saying that it’s not there. What are the differences between OpenCV 3.2 and OpenCV 3.3?

    • Adrian Rosebrock September 7, 2017 at 6:54 am #

      Hi Komal — the “dnn” sub-module was totally re-engineered in OpenCV 3.3. You need to upgrade to OpenCV 3.3.

  19. Sean McLeod September 24, 2017 at 1:31 pm #

    Hi Adrian

    Where are the values used for the mean subtraction (104, 117, 123) documented?


    • Adrian Rosebrock September 26, 2017 at 8:38 am #

      They are the mean values of the ImageNet training set. These values don’t change since the ImageNet dataset is pre-split. Nearly all deep learning publications/implementations that are trained on ImageNet report these values as the mean. I’ve also trained networks on ImageNet by hand and can confirm the values.

  20. Andrew Craton October 2, 2017 at 10:07 am #

    Thanks for all the great work here! Your script works perfectly on the model. However, I’ve trained my own MobileNetSSD caffe model, but am struggling with using the trained model with the script. There appears to be a difference between the trained model and a “deploy” optimized version of the model, where a script called “” is necessary to merge the batchnorm, scale layer weights to the conv layer, to improve the performance. ( I continue to get errors like : Message type “caffe.LayerParameter” has no field named “permute_param”. and Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: MobileNetSSD_deploy.prototxt. Is it possible that my OpenCV build does not contain the correct modules?

    • Adrian Rosebrock October 2, 2017 at 10:32 am #

      OpenCV’s “dnn” module is brand new, so it’s entirely possible that not all features in Caffe/TensorFlow/Torch etc. have 1-to-1 equivalents in OpenCV. I would suggest posting the error on the official OpenCV forums and seeing what the developers say. Again, I’m not sure what the exact issue is here. Thanks for sharing though, myself and other PyImageSearch readers appreciate it!

    • Steve Cox October 13, 2017 at 9:54 am #

      I have run into this situation as well. 10/13/17. I am using Tensorflow 1.* and Python 3.5 on Windows 10 x64. I don’t want to deal with Docker containers and virtual environments, too much to keep track of. I just want to retrain models and play them back through OpenCV. Using TF on deployed models is overkill. I am using the flowers example. I got TF to retrain the Inception model using the flower images. I then take the output graph and try to load it with OpenCV 3.3 (cv2.dnn.readNetFromTensorflow) and get different Unknown Layer errors. I have looked at several different Python scripts you have to run to strip out layers OpenCV can’t deal with “yet” that exist in the retrained TF model. If someone knows where an ALL IN ONE python script exists, that takes a previously re-trained TF model and converts it so it can be loaded into OpenCV 3.3 dnn, that would be great. I am using Python 2.7 and OpenCV 3.3 to do the model prediction. This works fine with Adrian’s Caffe example, but not with a retrained TF model. I realize I have a lot to learn on all the nuts and bolts of TF and deployment. Sorry if there is another link somewhere on this site that covers this material. This is by far the best site I have seen on this subject.

      • Adrian Rosebrock October 14, 2017 at 10:42 am #

        As far as I understand it, the TensorFlow loading capabilities of OpenCV 3.3 are nowhere near as good as the Caffe ones. I’m sure this will mature in future releases of OpenCV, but for the time being, I might try to (1) use Caffe to train your network or (2) try to convert your TF weights to Caffe format. You might also consider posting on the official OpenCV forums.

  21. Igor October 24, 2017 at 8:09 am #

    Good afternoon Adrian, thanks for the interesting article! Tell me how to determine the coordinates of the detected object, i.e. to outline the detected object.

    • Adrian Rosebrock October 24, 2017 at 10:36 am #

      Hi Igor — I suggest you look at Lines 38-67 on this blog post, Object detection with deep learning and OpenCV

      • Igor October 25, 2017 at 12:17 pm #

        Adrian, thank you for a great course. For convenience when working from your posts, it would be helpful to display a table of contents for your course; if there is one, please tell me where it is. I can’t find it.

  22. RomRoc October 26, 2017 at 10:50 am #

    Really excellent site! Thanks Adrian for your help to go into deep learning and computer vision programming.

    From what I understood, a crucial part is training deep learning models. It is challenging mostly because we need a huge dataset to obtain good models.
    Is there any public archive on the Internet to download models for the most common objects?


  23. Nasir October 28, 2017 at 2:39 pm #

    Hi ,

    I want to get the box around the detection. I have also read your other post, but it only supports up to 20 object detections. The objects I want identified are not supported by that model, but with this model they are. Kindly can you let me know how I can get the coordinates of detected objects to track them with this model?

    Thanks in advance

    • Adrian Rosebrock October 31, 2017 at 8:09 am #

      You can’t directly convert a model used for image classification and then use it for object detection. You would need to either (1) train your own object detection network from scratch on objects you are interested in recognizing or (2) fine-tune an existing network that is used for object detection.

  24. Dixon Dick October 29, 2017 at 2:50 am #


    Thanks for all you do, your work is easy to use, foundational and informative.

    Just completed an install for OpenCV 3.3, Python 2.7.12, Ubuntu 16.04.3 using your previous install instructions here:

    Perhaps you have updated these? Didn’t find anything for 3.3 except the back ref to this so I went through it from scratch.

    There were a couple of challenges and I captured my notes. Let me know how to post to you. Might be a bit long for this comment tool.

    Warmest regards,


    • Adrian Rosebrock October 30, 2017 at 3:13 pm #

      Hi Dixon — the instructions should work for OpenCV 3.3 but you’d download and reference OpenCV 3.3 instead. Thanks for your comment.

  25. Nasir October 29, 2017 at 8:26 pm #

    I want coordinates with the detection, but unfortunately I am not able to get the coordinates because I want to get them detected in real time. I have also gone through your other posts, but the models you are using in them do not have the objects which I want detected. I want to achieve it via the dnn module but I am not able to find the direction. I’ll be really grateful if you can help in this perspective.

    • Adrian Rosebrock October 30, 2017 at 1:56 pm #

      Hi Nasir — see this post on Real-time object detection with deep learning and OpenCV.

      • Nasir November 2, 2017 at 1:07 pm #

        Hi Adrian. I have already gone through this post, but the problem is that the model being given in that example is just capable of detecting 20 objects, whereas the model given in this example is capable of detecting many more. So my concern here is that I want to detect a punching bag in real time and find it in the image with a bounding box, but unfortunately I am not able to do so with this model. I am seeking help in this perspective.

        Thanks in advance

        • Adrian Rosebrock November 2, 2017 at 2:08 pm #

          Hi Nasir — I understand your question, please see my reply above. You cannot use the model used in this blog post for object detection — it can only be used for image classification. You would need to either (1) train an object detection model from scratch to detect any objects you are interested in or (2) perform fine-tuning of an existing object detection model. For what it’s worth, I’m covering object detection models (and how to train them) inside Deep Learning for Computer Vision with Python.

  26. Mario November 3, 2017 at 12:08 pm #

    Hi Adrian, love your work, very very useful!

    I’ve a problem, this algorithm doesn’t find a person in this photo

    How can I add it (and other images) to the training?

  27. lee November 27, 2017 at 11:17 am #


  28. ahmed mansour December 26, 2017 at 6:10 am #

    thank you very much
    you are a very helpful man
    how can I make my module
    identify person faces and
    identify mobiles, chairs, and laptops?

    • Adrian Rosebrock December 26, 2017 at 3:49 pm #

      You would need to use a dedicated library for face recognition. I cover face recognition inside the PyImageSearch Gurus course. You can also look into strictly deep learning-based face recognition algorithms such as OpenFace and face embeddings.

  29. Dann December 28, 2017 at 12:47 pm #

    Hi Adrian
    I am still new here and hope that you will be able to help me in this. Thankyou

    So I have already downloaded the whole folder from the email. Also, I am using the Raspberry Pi
    For the Deep learning and OpenCV classification results part, am I supposed to right click the deep-learning-opencv folder and click on “open in terminal”?

    After opening the terminal, am I also just supposed to copy and paste this only?
    “$ python”

    therefore the whole sentence will be pi@raspberrypi:~/Desktop/deep-learning-opencv $ python

    However this was wrong because there was an error popping out. Where did I go wrong and what should I copy and paste in order for it to work?

    • Adrian Rosebrock December 28, 2017 at 2:04 pm #

      I would suggest using the “cd” command to change directory to where you downloaded the code. The “$” indicates the shell. You do not need to copy and paste it. You can see examples of how to execute the script in the blog post.

  30. Ajeya B Jois December 29, 2017 at 2:08 am #

    cv2.dnn.blobFromImage(image, scalefactor=1, size=(224,224), mean=(104,117,123)) — this line is giving me an error even though I am using OpenCV 3.3

    please help anyone

    • Adrian Rosebrock December 31, 2017 at 9:53 am #

      What is the error you are getting?

      • Muthukumar January 1, 2018 at 12:52 pm #

        usage: [-h] -i IMAGE -p PROTOTXT -m MODEL -l
        LABELS error: argument -i/–image is required

        • Adrian Rosebrock January 3, 2018 at 1:14 pm #

          You need to supply the command line arguments as I do in this script. Please read on up command line arguments before continuing.

  31. Muthukumar January 1, 2018 at 12:50 pm #

    hi adrian
    while I am running the code it shows an error.
    the problem is when I am trying to give the “--image images/jemma.png”,
    it shows ‘images’ is an invalid syntax.
    I can’t give input arguments from the command window.
    how to solve this?

  32. hfad January 3, 2018 at 9:56 am #

    how can I read a .caffemodel file? Can I look at the code used in it?
    Also, can I look at the 1000+ images in the datasets used? How and where?
    Thank you so much

    • Adrian Rosebrock January 3, 2018 at 12:51 pm #

      The Caffe model file contains the weights obtained after training a neural network. The networks covered in this post were trained on the ImageNet dataset. I discuss how to train your own neural networks inside Deep Learning for Computer Vision with Python.

  33. yates January 20, 2018 at 3:54 am #

    That’s cool.Where Can I get the demo?

    • Adrian Rosebrock January 20, 2018 at 8:06 am #

      Please use the “Downloads” section of this blog post to download the source code + example images.


  1. Object detection with deep learning and OpenCV - PyImageSearch - September 11, 2017

    […] couple weeks ago we learned how to classify images using deep learning and OpenCV 3.3’s deep neural network ( dnn ) […]

  2. Deep learning on the Raspberry Pi with OpenCV - PyImageSearch - October 2, 2017

    […] The source code from this blog post is heavily based on my previous post, Deep learning with OpenCV. […]
