Real-time object detection with deep learning and OpenCV

Today’s blog post was inspired by PyImageSearch reader, Emmanuel. Emmanuel emailed me after last week’s tutorial on object detection with deep learning + OpenCV and asked:

“Hi Adrian,

I really enjoyed last week’s blog post on object detection with deep learning and OpenCV, thanks for putting it together and for making deep learning with OpenCV so accessible.

I want to apply the same technique to real-time video.

What is the best way to do this?

How can I achieve the most efficiency?

If you could do a tutorial on real-time object detection with deep learning and OpenCV I would really appreciate it.”

Great question, thanks for asking Emmanuel.

Luckily, extending our previous tutorial on object detection with deep learning and OpenCV to real-time video streams is fairly straightforward — we simply need to combine some efficient, boilerplate code for real-time video access and then add in our object detection.

By the end of this tutorial you’ll be able to apply deep learning-based object detection to real-time video streams using OpenCV and Python — to learn how, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Real-time object detection with deep learning and OpenCV

Today’s blog post is broken into two parts.

In the first part we’ll learn how to extend last week’s tutorial to apply real-time object detection with deep learning and OpenCV to video streams and video files. This will be accomplished using the highly efficient VideoStream class discussed in this tutorial.

From there, we’ll apply our deep learning + object detection code to actual video streams and measure the FPS processing rate.

Object detection in video with deep learning and OpenCV

To build our deep learning-based real-time object detector with OpenCV we’ll need to (1) access our webcam/video stream in an efficient manner and (2) apply object detection to each frame.

To see how this is done, open up a new file, name it real_time_object_detection.py, and insert the following code:
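
As a sketch (the exact listing ships with the downloads), the imports likely look as follows, pulling in the imutils video helpers referenced below along with NumPy, argparse, time, and OpenCV:

    # import the necessary packages
    from imutils.video import VideoStream
    from imutils.video import FPS
    import numpy as np
    import argparse
    import imutils
    import time
    import cv2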

We begin by importing packages on Lines 2-8. For this tutorial, you will need imutils and OpenCV 3.3.

To get your system set up, simply install OpenCV using the relevant instructions for your system (while ensuring you’re following any Python virtualenv commands).

Note: Make sure to download and install the opencv and opencv-contrib releases for OpenCV 3.3. This will ensure that the deep neural network (dnn) module is installed. You must have OpenCV 3.3 (or newer) to run the code in this tutorial.

Next, we’ll parse our command line arguments:
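
A sketch of the argument parsing, matching the three switches described below (the -p and -m short flags are assumptions, though a reader comment confirms -c):

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-p", "--prototxt", required=True,
        help="path to Caffe 'deploy' prototxt file")
    ap.add_argument("-m", "--model", required=True,
        help="path to Caffe pre-trained model")
    ap.add_argument("-c", "--confidence", type=float, default=0.2,
        help="minimum probability to filter weak detections")
    args = vars(ap.parse_args())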

Compared to last week, we don’t need the image argument since we’re working with streams and videos — other than that the following arguments remain the same:

  • --prototxt: The path to the Caffe prototxt file.
  • --model: The path to the pre-trained model.
  • --confidence: The minimum probability threshold to filter weak detections. The default is 20%.

We then initialize a class list and a color set:
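
A sketch of that initialization; the list below is the standard 20-class PASCAL VOC set (plus a background class) that the MobileNet SSD from last week’s post was trained on:

    # initialize the list of class labels MobileNet SSD was trained to
    # detect, then generate a set of bounding box colors for each class
    CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
        "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
        "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
        "sofa", "train", "tvmonitor"]
    COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))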

On Lines 22-26 we initialize our CLASSES labels and corresponding random COLORS. For more information on these classes (and how the network was trained), please refer to last week’s blog post.

Now, let’s load our model and set up our video stream:
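
A sketch of this step; src=0 is the webcam index most readers will want (see the discussion in the comments), and the two-second warm-up pause is an assumption:

    # load our serialized model from disk
    print("[INFO] loading model...")
    net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

    # initialize the video stream, allow the camera sensor to warm up,
    # and initialize the FPS counter
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(2.0)
    fps = FPS().start()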

We load our serialized model, providing the references to our prototxt and model files on Line 30 — notice how easy this is in OpenCV 3.3.

Next, let’s initialize our video stream (this can be from a video file or a camera). First we start the VideoStream (Line 35), then we wait for the camera to warm up (Line 36), and finally we start the frames per second counter (Line 37). The VideoStream and FPS classes are part of my imutils package.

Now, let’s loop over each and every frame (for speed purposes, you could skip frames):
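
A sketch of the top of the loop; the 400-pixel resize width and the 300×300 network input size are confirmed in the comments, while the 0.007843 scale factor and 127.5 mean value are the usual MobileNet preprocessing constants and are assumptions here:

    # loop over the frames from the video stream
    while True:
        # grab the frame from the threaded video stream and resize it
        # to have a maximum width of 400 pixels
        frame = vs.read()
        frame = imutils.resize(frame, width=400)

        # grab the frame dimensions and convert the frame to a blob
        (h, w) = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
            0.007843, (300, 300), 127.5)

        # set the blob as input to the network and run a forward pass
        # to obtain the detections
        net.setInput(blob)
        detections = net.forward()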

First, we read a frame (Line 43) from the stream, followed by resizing it (Line 44).

Since we will need the width and height later, we grab these now on Line 47. This is followed by converting the frame to a blob with the dnn module (Lines 48 and 49).

Now for the heavy lifting: we set the blob as the input to our neural network (Line 53) and feed the input through the net (Line 54), which gives us our detections.

At this point, we have detected objects in the input frame. It is now time to look at confidence values and determine if we should draw a box and label surrounding the object. You’ll recognize this code block from last week:
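
A sketch of that block (continuing inside the while loop), assuming the usual SSD output layout of (class index, confidence, normalized box coordinates) per detection:

        # loop over the detections
        for i in np.arange(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated
            # with the prediction
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the confidence
            # is greater than the minimum confidence
            if confidence > args["confidence"]:
                # extract the index of the class label, then compute
                # the (x, y)-coordinates of the bounding box
                idx = int(detections[0, 0, i, 1])
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")

                # build the text label and display the prediction in
                # the terminal
                label = "{}: {:.2f}%".format(CLASSES[idx],
                    confidence * 100)
                print("[INFO] {}".format(label))

                # draw a colored rectangle around the object
                cv2.rectangle(frame, (startX, startY), (endX, endY),
                    COLORS[idx], 2)

                # display the label above the rectangle if there is
                # room, otherwise just below its top
                y = startY - 15 if startY - 15 > 15 else startY + 15
                cv2.putText(frame, label, (startX, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)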

We start by looping over our detections, keeping in mind that multiple objects can be detected in a single image. We also apply a check to the confidence (i.e., probability) associated with each detection. If the confidence is high enough (i.e., above the threshold), then we’ll display the prediction in the terminal as well as draw the prediction on the image with text and a colored bounding box. Let’s break it down line-by-line:

Looping through our detections, first we extract the confidence value (Line 60).

If the confidence is above our minimum threshold (Line 64), we extract the class label index (Line 68) and compute the bounding box coordinates around the detected object (Line 69).

Then, we extract the (x, y)-coordinates of the box (Line 70), which we will use shortly for drawing a rectangle and displaying text.

We build a text label containing the CLASS name and the confidence (Lines 73 and 74).

Let’s also draw a colored rectangle around the object using our class color and previously extracted (x, y)-coordinates (Lines 75 and 76).

In general, we want the label to be displayed above the rectangle, but if there isn’t room, we’ll display it just below the top of the rectangle (Line 77).

Finally, we overlay the colored text onto the frame using the y-value that we just calculated (Lines 78 and 79).

The remaining steps in the frame capture loop involve (1) displaying the frame, (2) checking for a quit key, and (3) updating our frames per second counter:
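
A sketch of those steps, still inside the frame loop:

        # show the output frame and capture any key press
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

        # if the `q` key was pressed, break from the loop
        if key == ord("q"):
            break

        # update the FPS counter
        fps.update()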

The above code block is pretty self-explanatory — first we display the frame (Line 82). Then we capture a key press (Line 83) while checking if the ‘q’ key (for “quit”) is pressed, at which point we break out of the frame capture loop (Lines 86 and 87).

Finally we update our fps counter (Line 90).

If we break out of the loop (‘q’ key press or end of the video stream), we have some housekeeping to take care of:
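
A sketch of that housekeeping; the elapsed-time and FPS messages match the output readers report in the comments:

    # stop the timer and display FPS information
    fps.stop()
    print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
    print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

    # do a bit of cleanup
    cv2.destroyAllWindows()
    vs.stop()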

When we’ve exited the loop, we stop the fps counter (Line 93) and print information about the frames per second to our terminal (Lines 94 and 95).

We close the open window (Line 98) followed by stopping the video stream (Line 99).

If you’ve made it this far, you’re probably ready to give it a try with your webcam — to see how it’s done, let’s move on to the next section.

Real-time deep learning object detection results

To see our real-time deep-learning based object detector in action, make sure you use the “Downloads” section of this guide to download the example code + pre-trained Convolutional Neural Network.

From there, open up a terminal and execute the following command:
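
The invocation readers report in the comments looks like this; the prototxt and model filenames are the ones included in the downloads:

    $ python real_time_object_detection.py \
        --prototxt MobileNetSSD_deploy.prototxt.txt \
        --model MobileNetSSD_deploy.caffemodel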

Provided that OpenCV can access your webcam, you should see the output video frame with any detected objects. I have included sample results of applying deep learning object detection to an example video below:

Figure 1: A short clip of real-time object detection with deep learning and OpenCV + Python.

Notice how our deep learning object detector can detect not only myself (a person), but also the sofa I am sitting on and the chair next to me — all in real-time!

The full video can be found below:

Summary

In today’s blog post we learned how to perform real-time object detection using deep learning + OpenCV + video streams.

We accomplished this by combining two separate tutorials:

  1. Object detection with deep learning and OpenCV
  2. Efficient, threaded video streams with OpenCV

The end result is a deep learning-based object detector that can process approximately 6-8 FPS (depending on the speed of your system, of course).

Further speed improvements can be obtained by:

  1. Applying skip frames (see the sketch after this list).
  2. Swapping different variations of MobileNet (that are faster, but less accurate).
  3. Potentially using a quantized variation of SqueezeNet (I haven’t tested this, but imagine it would be faster due to its smaller network footprint).
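
For the first idea, here is a minimal sketch of a skip-frames variant of the main loop; SKIP_FRAMES is an illustrative constant, not part of the original script:

    # hypothetical skip-frames variant: run the expensive forward pass
    # only on every Nth frame and reuse the last detections in between
    SKIP_FRAMES = 5
    total = 0
    detections = None

    while True:
        frame = vs.read()
        frame = imutils.resize(frame, width=400)

        # only pay for a forward pass on every Nth frame
        if total % SKIP_FRAMES == 0:
            (h, w) = frame.shape[:2]
            blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                0.007843, (300, 300), 127.5)
            net.setInput(blob)
            detections = net.forward()
        total += 1

        # ...draw the most recent detections on the frame as before...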

In future blog posts we’ll be discussing deep learning object detection methods in more detail.

In the meantime, be sure to take a look at my book, Deep Learning for Computer Vision with Python, where I’ll be reviewing object detection frameworks such as Faster R-CNNs and Single Shot Detectors!

If you’re interested in studying deep learning for computer vision and image classification tasks, you just can’t beat this book — click here to learn more.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


435 Responses to Real-time object detection with deep learning and OpenCV

  1. Daniel Funseth September 18, 2017 at 10:55 am #

    wow! this is really impressive, will it be able to run on a RPI 3?

    • Adrian Rosebrock September 18, 2017 at 1:56 pm #

      Please see my reply to “Flávio”.

  2. Flávio Rodrigues September 18, 2017 at 11:01 am #

    Hi, Adrian. Thanks a lot for another great tutorial. Have you got a real-time example working on a Pi 3? (Maybe skipping frames?) I’m just using a Pi (for OpenCV and DL) and I’d like to know to what extent it’d be usable. What about the frame rate? Nice work as always. Cheers.

    • Adrian Rosebrock September 18, 2017 at 1:56 pm #

      I haven’t applied this method to the Raspberry Pi (yet) but I will very soon. It will be covered in a future blog post. I’ll be sharing any optimizations I’ve made.

      • adams March 19, 2018 at 6:57 am #

        have you done it?

        • Adrian Rosebrock March 19, 2018 at 4:54 pm #

          Yes. Refer to this post.

  3. Nicolas September 18, 2017 at 11:01 am #

    Hi Adrian.

    Excellent, you are a great developer! I want to know how to develop face tracking with OpenCV and Python on the backend, capturing video in an HTML5 canvas in real time, and then drawing an object, for example a moustache, depending on the backend’s response. The tracking should also follow head movement so that the moustache adapts.

    Thanks.

  4. tommy September 18, 2017 at 11:22 am #

    hi, what’s the best FPS on, say, a typical MacBook, assuming you used threading and other optimisations?

    • tommy September 18, 2017 at 11:27 am #

      in particular, does it mean if i used N threads to process the frames, the FPS will be N times better?

      does the dnn module use any threading / multi-core underneath the hood?

      • Adrian Rosebrock September 18, 2017 at 1:54 pm #

        No, the threading here only applies to polling of frames from the video sensor. The threading helps with I/O latency on the video itself. If you’re looking to speedup the actual detection you should push the object detection to the GPU (which I think can only be done with the C++ bindings of OpenCV).

        • Peter December 21, 2017 at 12:58 am #

          Hello Adrian,
          I plan to use jetson TX2 to do the object detection with deeplearning.
          i don’t know if there will be faster if just port the above code to tx2?
          can i have better performance by using tx2 for the upper code with opencv’s deep learning library?

          or do you have any suggestions to make the object detection faster on tx2, what framework and training net is better? use mxnet + mobielnet?

          • Adrian Rosebrock December 22, 2017 at 6:57 am #

            I haven’t tried this code with the TX2 yet, but yes, in general this should run faster on the TX2 provided you can run the model on the GPU directly. I would suggest using the code as is and then obtaining your benchmark. From there it will be possible to provide suggestions. The model included in the blog post uses the MobileNet framework.

        • Peter January 4, 2018 at 10:10 pm #

          Use C++ binding for opencv will speedup the detection on TX2 lot than python binding? do you have a bench mark?

          • Adrian Rosebrock January 5, 2018 at 1:30 pm #

            Sorry, I do not have a benchmark for the TX2 and Python or C++ bindings.

        • mirror January 28, 2018 at 9:34 am #

          hello,what should i do if i want to apply the detection code to a local video on my computer??

          • Adrian Rosebrock January 30, 2018 at 10:28 am #

            You can use the cv2.VideoCapture function and provide the path to your input video. If you’re new to working with OpenCV I would recommend going through Practical Python and OpenCV where I teach the fundamentals. I hope that helps point you on the right path!

        • zhenyuchen March 19, 2018 at 9:41 pm #

          Hi Adrian,
          I replaced the caffe model I trained myself, but I didn’t show a rectangular box. I want to know what the reason is, I look forward to your reply
          Best wishes!
          thank you!

          • Adrian Rosebrock March 20, 2018 at 8:23 am #

            The code in this post filters out “weak” detections by discarding any predictions with a confidence of less than 20%. You can try to set the confidence to zero just to see if your detections are being filtered out.

            If not, your network is simply not returning predictions for your input image. You should consider training your network with (1) more data and (2) data that more closely resembles the test images.

    • Adrian Rosebrock September 18, 2017 at 1:55 pm #

      I used my MacBook Pro to collect the results to this blog post — approximately 5-7 FPS depending on your system specs.

  5. vinu September 18, 2017 at 1:00 pm #

    Thanks,
    it’s so much help,
    and I need to detect only helmets in real time.

    • Ashwin Venkat September 18, 2017 at 9:28 pm #

      Hi, interesting thought. Did it work?

  6. Eng.AAA September 18, 2017 at 1:31 pm #

    Thanks for the awesome tutorials.
    I have a question: can I track the location of the chair in a video? I mean, if the chair is moving, can I track its location?
    Thanks

    • Adrian Rosebrock September 18, 2017 at 1:52 pm #

      I would suggest using object detection (such as the method used in this post) to detect the chair/object. Then once you have the object detected pass the location into an object tracking algorithm, such as correlation tracking.

      • Eng.AAA September 18, 2017 at 5:59 pm #

        I hope it will cover with an example in new Deep Learning Book

        Thanks

  7. Sydney September 18, 2017 at 1:36 pm #

    Thanks for the tutorial man. The method is quite effective, running better on a CPU. I am still trying to figure out how i can use a video as my source instead of the webcam.

    • Adrian Rosebrock September 18, 2017 at 1:52 pm #

      Thanks Sydney, I’m glad it worked for you 🙂

      As far as working with video files, take a look at this post.

  8. Ashraf September 18, 2017 at 2:07 pm #

    great article! continue the great way!

    • Adrian Rosebrock September 18, 2017 at 2:21 pm #

      Thanks Ashraf 🙂

  9. Walid Ahmed September 18, 2017 at 2:18 pm #

    Thanks. I waited for 18 Sep to read this blog!

    Just one question: isn’t 0.2 quite low as a confidence threshold?
    Wouldn’t this result in low precision?

    • Adrian Rosebrock September 18, 2017 at 2:21 pm #

      With object detection we typically have lower probabilities in the localization. You can tune the threshold probability to your liking.

  10. Jacques September 18, 2017 at 2:18 pm #

    Hey Mate,

    Many thanks for the great example code – just what I needed :)..

    How would this perform on a Pi 3? I intend testing it asap, but I would guess the classification function would be really slow (I was getting up to 4 seconds on your previous tutorial using cv DNN)? Any views on how to compensate for the slower processor?

    Do you believe that this code would be too much for the Pi3?

    -J

    • Adrian Rosebrock September 18, 2017 at 2:21 pm #

      Hi Jacques — please see my reply to “Flávio”. I haven’t yet tried the code on the Raspberry Pi but will be documenting my experience with it in a future blog post.

  11. amitoz September 18, 2017 at 3:20 pm #

    Hey Adrian,

    Once we have detected an object, how difficult do you think it will be to segment it in real time using deep learning? Please share your insight.

    • Adrian Rosebrock September 20, 2017 at 7:26 am #

      Object detection and object segmentation are two totally different techniques. You would need to use a deep learning network that was trained to perform object segmentation. Take a look at DeepMask.

  12. Kamel Rush September 18, 2017 at 3:46 pm #

    Hi,

    I tried to run the code, but got this:

    File “C:\ProgramData\Miniconda33\lib\site-packages\imutils\convenience.py”, line 69, in resize
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock September 20, 2017 at 7:25 am #

      It sounds like OpenCV cannot access your webcam. I detail common causes for NoneType errors in this blog post.

    • Leopard Li September 20, 2017 at 9:06 pm #

      Hi, have you resolved this problem? I got this problem too.
      But when I changed src=1 to src=0 in line “vs = VideoStream(src=1).start()”, it just worked!
      Hope this could be helpful to you if it still bothers you.

      • Adrian Rosebrock September 21, 2017 at 7:14 am #

        Thank you for mentioning this. I used src=1 because I have two webcams hooked up to my system. Most people should be using src=0 as they likely only have one webcam. I will update the blog post.

    • Enjoy September 28, 2017 at 9:02 am #

      you can try changing Line 35: vs = VideoStream(src=0).start() to vs = VideoStream(usePiCamera=args["picamera"] > 0).start()

      and adding ap.add_argument("-pi", "--picamera", type=int, default=-1,
      help="whether or not the Raspberry Pi camera should be used") after Line 14.

      It worked for me.

      • Abhi October 1, 2017 at 12:34 am #

        I get the AttributeError: ‘NoneType’ object has no attribute ‘shape’ error as well, and I tried the solution recommended by Enjoy (since I am getting this error with src=0), but the code does not run on my Pi 3. Every time I run this code the OS crashes and the Pi reboots. Not sure what I am doing wrong here. Any help is appreciated.

        • Adrian Rosebrock October 2, 2017 at 9:47 am #

          Are you using a USB camera or the Raspberry Pi camera module? Please make sure OpenCV can access your webcam. I detail these types of NoneType errors and why they occur in this post.

          I’ll also be doing a deep learning object detection + Raspberry Pi post later this month.

          • Abhi October 4, 2017 at 8:35 pm #

            I am using the raspberry pi camera. And I can access the camera fine, since I tested it with your pi home surveillance code.

          • Abhi October 4, 2017 at 9:16 pm #

            Also I forgot to mention that I tried the following from your unifying pi camera and cv2 video capture post with the appropriate argument.

            # initialize the video stream and allow the cammera sensor to warmup
            vs = VideoStream(usePiCamera=args["picamera"] > 0).start()

          • Adrian Rosebrock October 6, 2017 at 5:12 pm #

            Hi Abhi — thank you for the additional comments. Unfortunately, without direct access to your Raspberry Pi I’m not sure what the exact issue is. I’ll be covering object detection using deep learning on the Raspberry Pi the week of October 16th. I would suggest checking the code once I post the tutorial and see if you have the same error.

      • DreamChaser October 8, 2017 at 11:36 pm #

        Thanks for the post! I was having the same ‘NoneType’ error. I changed the camera source but that didn’t fix it. I added your argument update, along with adding –pi=1 to the command line and it worked. Thanks to the author (and everyone else who have posted) – it’s great to have help when you start.

        • Adrian Rosebrock October 9, 2017 at 12:18 pm #

          Thanks for the comment. I’ll be covering how to perform real-time object detection with the Raspberry Pi here on the PyImageSearch blog in the next couple of weeks.

        • Deepak January 27, 2018 at 8:42 am #

          can you please mention the modifications??

    • zhenyuchen March 21, 2018 at 2:00 am #

      I also encountered this problem. Did you solve it? Could we compare notes? Thank you very much.

  13. zakizadeh September 18, 2017 at 3:55 pm #

    Hi.
    I want to get the position of a specified object in an image. All the examples are about multi-object detection, but I want to get the position of only one object, for example a book, not all the objects in the image. How can I do that?

    • Adrian Rosebrock September 20, 2017 at 7:23 am #

      First, you would want to ensure that your model has been trained to detect books. Then you can simply ignore all classes except the book class by checking only the index and probability associated with the book class. Alternatively, you could fine-tune your network to detect only books.

  14. Kevin Lee September 19, 2017 at 1:19 am #

    Thanks for great tutorial.

    Is it running on the cpu? If so, is there a parameter we can change to gpu mode?

    kevin

    • Adrian Rosebrock September 20, 2017 at 7:19 am #

      This runs on the CPU. I don’t believe it’s possible to access the GPU via the Python bindings. I would suggest checking the OpenCV documentation for the C++ bindings.

  15. Arvind Gautam September 19, 2017 at 1:27 am #

    Hi Adrian .

    It’s really a great tutorial. You are the rock star of computer vision.

    I have also implemented a Faster-RCNN with VGG16 and ZF networks on my own sports videos to detect logos in the video. I am getting good accuracy with both networks, but I am able to process only 7-8 frames/sec with VGG16 and 14-15 frames/sec with ZF. To process the video in real time, I am skipping every 5th frame. I have compared the results in both cases (without skipping frames and skipping every 5th frame) and the accuracy is almost the same. Can you tell me whether I am doing the right thing? What would be the optimal number of frames to skip so as to process in real time without hurting accuracy?

    • Adrian Rosebrock September 20, 2017 at 7:17 am #

      There is no real “optimal” number of frames to skip — you are doing the skip frames correctly. You normally tune the number of skip frames to give you the desired frame processing rate without sacrificing accuracy. This is normally done experimentally.

    • Cong March 14, 2018 at 3:30 am #

      Hi Arvind,

      I have replaced the zf_test.prototxt and ZF_faster_rcnn_final.caffemodel files for use with ZF, but I can’t get it working.

      Can you teach me how to change the code to get it working like tutorial above (Real-time object detection) ?

      Thx !

  16. Luke Cheng September 19, 2017 at 2:32 am #

    Hi I’m just curious how you trained your caffe model because I feel like the training process you used could be really good. thanks!

    • Adrian Rosebrock September 20, 2017 at 7:16 am #

      Please see my reply to “Thang”.

  17. David Killen September 19, 2017 at 8:34 am #

    This is very interesting, thank you. Unless I missed it, you aren’t using prior and posterior probabilities across frames at all. I appreciate that if an object doesn’t move then there is no more information to be extracted but if it were to move slightly but change because of rotation or some other movement then there is some independence and the information can be combined. We can see this when you turn the swivel-chair; the software loses track of it when it’s face on (t=28 to t=30). Is this something that can be done or is it too difficult?

    PS Can you explain why the human-identification is centred correctly at the start of the full video but badly off at the end please? It’s almost as if the swivel chair on the left of the picture is pushing the human-box off to the right, but I can’t see why it would do that.

    • Adrian Rosebrock September 20, 2017 at 7:13 am #

      I’m only performing object detection in this post, not object tracking. You could apply object tracking to detected objects and achieve a smoother tracking. Take a look at correlation tracking methods.

      As for the “goodness” of a detection this is based on the anchor points of the detection. I can’t explain the entire Single Shot Detector (SSD) framework in a comment, but I would suggest reading the original paper to understand how the framework is used. Please see the first blog post in the series for more information.

  18. Jacques September 19, 2017 at 2:53 pm #

    I ran it on my Pi3 last night. works nicely! Each frame takes a little over a second to classify. The rate is quite acceptable. Cool work and looking forward to any optimisations that you think will work..

    How much do you think rewriting the app in C++ will increase the performance on the Pi? I know CV is C/C++, but I am keen to profile the diff in a purely compiled language.

    • Adrian Rosebrock September 20, 2017 at 7:09 am #

      In general you can expect some performance increases when using C/C++. Exactly how much depends on the algorithms that are being executed. Since we are already using compiled OpenCV functions the primary overhead is the function call from Python. I would expect a C++ program to execute faster but I don’t think it will make a substantial difference.

  19. Hubert de Lassus September 19, 2017 at 8:45 pm #

    Great example code! Thank you. How would you modify the code to read an mp4 file instead of the camera?

    • Adrian Rosebrock September 20, 2017 at 7:00 am #

      You would swap out the VideoStream class for a FileVideoStream.

      • Rohit Thakur January 10, 2018 at 2:50 am #

        Can you please explain a little what do you mean by swap out the VideoStream class? As i was trying to use this code for mp4 file and got an error. Please take a look:

        [INFO] loading model…
        [INFO] starting video stream…
        Traceback (most recent call last):
        File “new.py”, line 49, in
        frame = imutils.resize(frame, width=400)

        (h, w) = image.shape[:2]
        AttributeError: ‘tuple’ object has no attribute ‘shape’

        If possible can you tell me where i have to modify the code ?

        • Adrian Rosebrock January 10, 2018 at 12:48 pm #

          By “swapping out” the VideoStream class I mean either:

          1. Editing the videostream.py classes and subclasses in your site-packages directory after installing imutils
          2. Or more easily, copying the code and storing it in your project and then importing your own implementation of VideoStream rather than the one from imutils

          Looking at your error, it appears your call to .read() of VideoStream is returning a tuple, not an image. You would need to debug your code to resolve this. Using "print" statements can be helpful here.

  20. Thang September 20, 2017 at 2:49 am #

    Many thanks, but can you show me how to produce the trained file? In your project you used the MobileNetSSD_deploy.caffemodel file.

    • Adrian Rosebrock September 20, 2017 at 6:58 am #

      As I mention in the previous post the MobileNet SSD was pre-trained. If you’re interested in training your own deep learning models, in particular object detectors, you’ll want to go through the ImageNet Bundle of Deep Learning for Computer Vision with Python.

  21. memeka September 20, 2017 at 5:55 am #

    Hi Adrian,

    Thanks for the great series of articles.
    I’ve tried this on an Odroid XU4 (which is more powerful than the RPi – better CPU, better GPU, USB3) – with OpenCV compiled with NEON optimizations and OpenCL enabled (Odroid XU4 has OpenCL working, and GPU in theory should reach 60GFlops).

    Do you know if OpenCL is actually used by net.forward()? It would be interesting to benchmark CPU vs GPU if OpenCL is indeed used.

    I was able to run detection at 3fps (3.01 fps to be exact :D) with no optimizations and 640×480 resolution (no resize in the code), but I am confident I can reach >5fps with some optimizations, because:
    * I have my board underclocked to 1.7Ghz (stock is 2 Ghz, but I can try overclocking up to 2.2 Ghz)
    * I think I/O was the bottleneck, since even underclocked, CPU cores were used ~60%; adding some threading and buffering to the input should speed things up
    * to remove some delay from GTK, I used gstreamer output to tcpsink, and viewed the stream with VLC. This would also work great in the “security camera” scenario, where you want to see a stream remotely over the web.
    (PS: with gstreamer – from command line – I can actually use the hw h264 encoder in the odroid; but the exact same pipeline – well, except the appsink detour – is not working in opencv python; this would be useful to save the CPU for detection and still have h264 streaming, IF I can make it work…)

    I can’t wait to see your optimizations for the RPi, I’ll keep you posted with the framerate I can get on my Odroid 🙂

    • Adrian Rosebrock September 20, 2017 at 6:56 am #

      I’m not sure if OpenCL is enabled by default, that’s a good question. And thanks for sharing your current setup! I’ll be doing a similar blog post as this one for the Raspberry Pi in the future — I’ll be sure to read through your comments and share optimizations and discuss which ones did or did not work (along with an explanation as to why). Thanks again!

      • memeka September 20, 2017 at 9:02 am #

        So I was wrong – webcam was not the bottleneck. Even with threading, I still get 3fps max. I timed and indeed net.forward() takes 300ms. So the only way I may speed this up is getting the CPU to 2 or 2.2Ghz, and trying to write it in C++…

        • Mark November 20, 2017 at 6:27 am #

          FYI, when I switched my face tracking/detector code on the RPi3 from Python to C++, I got more than a 300% FPS improvement. Now, with multiple faces tracked at 640×480 resolution, I easily maintain 10-15 FPS on an optimised OpenCV 3.3.1.
          I’m currently exploring how to use the Movidius stick’s VPU with 12 SHAVEs to boost the performance further and get similar FPS at much higher resolutions…

          • Peter January 4, 2018 at 10:39 pm #

            Could you share your C++ code? I want to make a benchmark on TX2 with opencv3.4 compared to python bindings.

            Thanks

      • Tom September 21, 2017 at 4:21 pm #

        For comparison:
        Running the script from this blog post gave 0.4 fps on Raspberry Pi 3.
        Demo that comes with Movidius Compute Stick running SqueezeNet gives 3 fps, though having inference in separate thread from video frames display assures a nice smooth experience. Just mind that it does not support Raspbian Stretch yet, so use archived card img’s (that’s due to built in Python 3.4 vs 3.5).

        • Adrian Rosebrock September 22, 2017 at 8:57 am #

          Thanks for sharing Tom!

  22. memeka September 21, 2017 at 2:08 am #

    With C++, as expected, performance is very similar.
    I’ve tried using Halide dnn, but CPU target didn’t really get an output (I lost patience after >10s), and OpenCL target resulted in a crash due to some missing function in some library…

    So 3 fps is as best as I could get in the end.

    With CPU at 2GHz, it scales it down to 1.8Ghz due to heat.
    But still, cores don’t get used 100% – any idea why? As you can see from here: https://imgur.com/a/D9tdp max usage stays just below 300% from the max 400%, and no core gets used more than 80% – do you know if this is an OpenCV thing?

  23. Ldw September 21, 2017 at 10:40 pm #

    Hi Adrian, I tried running the code and got this : AttributeError: ‘module’ object has no attribute ‘dnn’
    Any ideas what’s the issue? Thanks in advance!

    • Ldw September 21, 2017 at 11:29 pm #

      Just to add on: I’ve downloaded OpenCV 3.3’s zip file here. Did I download it from the wrong place, or did I download it the wrong way? What I did was just download the zip file from that website and add it to my home directory via the archive manager. Sorry for bothering!

      • Adrian Rosebrock September 22, 2017 at 8:55 am #

        Once you download OpenCV 3.3 you still need to compile and install it. Simply downloading the .zip files is not enough. Please see this page for OpenCV install instructions on your system.

        • Nermion December 7, 2017 at 7:40 am #

          hi adrian, since you have no instructions for how to run this on the Windows platform, does that mean OpenCV and this tutorial are not compatible with Windows? If it is possible, have you got any links that explain how to set it up, so I can finish this tutorial? Thanks 🙂

          • Adrian Rosebrock December 8, 2017 at 4:49 pm #

            I don’t support Windows here on the PyImageSearch blog; I really recommend Unix-based systems for computer vision and especially deep learning. That said, if you have OpenCV 3.3 installed on your Windows machine this tutorial will work. The official OpenCV website provides Windows install instructions.

  24. Roberto Maurizzi September 22, 2017 at 4:02 am #

    Hi Adrian, thanks for your many interesting and useful posts!

    I missed this post and I did try to adapt your previous post to do what you did here by myself, searching docs and more examples, to read frames from a stream coming from an ONVIF surveillance camera that streams rtsp h264 video.

    I’m having trouble with the rtsp part: on Windows I get warnings from the cv2.VideoCapture() call ([rtsp @ 000001c517212180] Nonmatching transport in server reply, and the same from your imutils.VideoStream); on Linux I get nothing, but the capture isn’t detected as open.

    Any advice about this? I already checked my ffmpeg installation and copied it to the same folder from which my Python loads the OpenCV DLLs; if I try ffplay, it can stream from the camera (after a few warnings: “‘circular_buffer_size’ option was set but it is not supported on this build (pthread support is required)”).

    I was able to use ffserver to convert the rtsp stream from rtsp/h264 to a mjpeg stream, but it consumes more CPU than running the dnn… any advice?

    Roberto

  25. Lin September 22, 2017 at 5:06 am #

    Hi Adrian,

    Yesterday I left a reply about an error like:

    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    But today I read the board carefully and found that someone had encountered the same problem,
    and I have already resolved it.

    Thanks.

    • Adrian Rosebrock September 22, 2017 at 8:51 am #

      Thanks for the update, Lin!

    • Adrian Rosebrock September 22, 2017 at 8:58 am #

      Change src=0 and then read this post on NoneType errors.

  26. Jorge September 23, 2017 at 10:48 pm #

    Hi Adrian. Thanks for your great work. I’m thinking about the possibility of applying the recognition only to people, in real time, on the video stream of four security cameras in a mosaic. It would be like having someone watching four cameras at a time and triggering alerts if people are detected in x consecutive frames. Maybe send an email with the pics. What do you think about this, and how could it be implemented?

    • Adrian Rosebrock September 24, 2017 at 8:43 am #

      You would need to have four video cameras around the area you want to monitor. Depending on how they are setup you could stream the frames over the network, although this would include a lot of I/O latency. You might want to use a Raspberry Pi on each camera to do local on-board processing, although I haven’t had a chance to investigate how fast this code would run on the Raspberry Pi. You also might want to consider doing basic motion detection as a first step.

      • Jorge September 24, 2017 at 9:38 am #

        I was referring to using the mosaic of the four cameras as a single image and running the CNN detector from this post on that image, only for the person category. Do you think it would be possible? And what observations or suggestions would you make?

        • Adrian Rosebrock September 24, 2017 at 10:03 am #

          Ah, got it. I understand now.

          Yes, that is certainly possible and would likely work. You might get a few false detections from time to time, such as if there are parts of a person moving in each of the four corners and a classification is applied across the borders of the detections. But that is easily remedied since you’ll be constructing the mosaic yourself and you can filter out detections that are on the borders.

          So yes, this approach should work.

          • Jorge September 24, 2017 at 10:18 am #

            Thanks for the feedback Adrian!!!

  27. Enjoy September 24, 2017 at 12:56 am #

    WHY ?

    Traceback (most recent call last):

    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock September 24, 2017 at 8:41 am #

      Please see my reply to “Lin” above. Change src=0 in the VideoStream class. I’ve also updated the blog post to reflect this change.

  28. Aleksandr Rybnikov September 24, 2017 at 4:06 am #

    Hi Adrian!
    Thanks for another great post and tutorial!
    As you’ve maybe noticed, the bounding boxes are inaccurate: they’re very wide compared to the real size of the object. This happens for the following reason: you’re using the blobFromImage function, but it takes a central crop from the frame, and this central crop goes to the SSD model. Later, however, you multiply the unit box coordinates by the full frame size. To fix it you can simply pass cv2.resize(frame, (300, 300)) as the first parameter of blobFromImage() and all will be ok.

    • Adrian Rosebrock September 24, 2017 at 8:41 am #

      Thank you for pointing this out, Aleksandr! I’ve updated the code in the blog post. I’ll also make sure to do a tutorial dedicated to the parameters of cv2.blobFromImage (and how it works) so other readers will not run into this issue as well. Thanks again!

  29. Enjoy September 24, 2017 at 10:44 am #

    OpenCV: out device of bound (0-0): 1
    OpenCV: camera failed to properly initialize!

    • Adrian Rosebrock September 24, 2017 at 12:26 pm #

      Please double-check that you can access your webcam/USB camera via OpenCV. Based on your error messages you might have compiled OpenCV without video support.

  30. RicardoGomes September 24, 2017 at 9:46 pm #

    Nice tutorial. I managed to make it run on the RPi, but it detects objects with errors: my television appeared as a sofa and the fan as a chair. What could it be?

    • RicardoGomes September 24, 2017 at 9:48 pm #

      On the RPi it was very slow; would it need some kind of optimization?

      • Adrian Rosebrock September 26, 2017 at 8:32 am #

        I’ll be covering deep learning-based object detection using the Raspberry Pi in the next two blog posts. Stay tuned!

  31. Henry September 25, 2017 at 2:27 am #

    Hi Adrian,

    Nice tutorial, thank you so much.

    Besides, can the same code accept rtsp/rtmp video stream?
    If the answer is “No”, do you know any python module that can support rtsp/rtmp as video stream input? Many thanks.

    • Adrian Rosebrock September 26, 2017 at 8:29 am #

      This exact code couldn’t be used, but you could explore using the cv2.VideoCapture function for this.

  32. Sydney September 26, 2017 at 11:16 am #

    Hi Adrian. Any pointers on how I can implement this as a web-based application?

    • Adrian Rosebrock September 28, 2017 at 9:28 am #

      Are you trying to build this as a REST API? Or trying to build a system that can access a user’s webcam through the browser and then apply object detection to the frame read from the webcam?

      • Sydney September 28, 2017 at 12:14 pm #

        I want to be able to upload a video using a web interface, then perform object detection on the uploaded video showing results on the webpage.

        • Adrian Rosebrock September 28, 2017 at 12:29 pm #

          Can you elaborate on what you mean by “showing the results”? Do you plan on processing the video in the background and then once it’s done show the output video to the user? If you can explain a little more of what exactly you’re trying to accomplish and what your end goal is myself and others can try to provide suggestions.

          • sydney September 30, 2017 at 10:45 am #

            Ok. I need to run the application on google cloud platform and provide an interface for users to upload their own videos.

          • Adrian Rosebrock October 2, 2017 at 9:57 am #

            Have users upload the videos and then bulk process the videos in the background and save the annotations. You can either then (1) draw the bounding boxes on the resulting images or (2) generate a new video via cv2.VideoWriter with the bounding boxes drawn on top of them.

  33. memeka September 27, 2017 at 12:55 am #

    Hi Adrian,

    As mentioned above, I am getting 3fps on detection (~330ms in net.forward()), and I’m saving the output via a gstreamer pipeline (convert to h264, then either store in file, or realtime streaming with hls).

    In order to improve the output fps, I decided to read a batch of 5 frames, do detection on the first, then apply the boxes and text to all 5 before sending them to the gst pipeline.

    Using cv2.VideoCapture, I end up with around the half the previous framerate (so an extra 300-350ms spent in 4xVideoCapture.read()), which I am not very happy with.

    So I decided to modify imutils.WebcamVideoStream to do 5 reads, and I have (f1, f2, f3, f4, f5) = MyWebcamVideoStream.read() – using this approach I only lose ~50ms and I can get close to 15fps output. However, the problem here is that the resulting video has frames out of order. I tried having the 5 read() protected by a Lock, but without much success.

    Any suggestion on how I can keep the correct frame order with threaded WebcamVideoStream?

    Thanks.

    • Adrian Rosebrock September 27, 2017 at 6:40 am #

      The WebcamVideoStream class is threaded so I would suggest using a thread-safe data structure to store the frames. Something like Python’s Queue class would be a really good start.

      • memeka September 27, 2017 at 8:28 am #

        Thanks Adrian,
        I figured out what the problem was: reading 5 frames was actually taking longer than net.forward(), so WebcamVideoStream was returning the same 5 frames as before; by reducing the batch to 4 frames, and also synchronising the threads, I managed to get 2.5 fps detected + extra 3 frames for each detection for a total of 10fps webcam input/ pipeline output.

        • Adrian Rosebrock September 27, 2017 at 8:49 am #

          Congrats on resolving the issue! The speed you’re getting is very impressive; I’ll have to play around with the Odroid in the future 🙂

          • memeka September 28, 2017 at 4:58 am #

            Thanks Adrian

            Since there are many here who, like me, would like to use this for a security camera, I would like to share my end script, maybe somebody else would find it useful: http://paste.debian.net/988135/
            It reads the input from a .json file, such as: http://paste.debian.net/988136/

            * gst_input defines the source (doesn’t actually have to be gst, “0” will work for /dev/video0 webcam)
            * gst_output defines the output
            * batch_size defines the number of frames read at once. On my system, 4 was optimal (reading 4 frames took similar amount of time to detection on 1 frame)
            * base_confidence defines the minimum confidence for an object to be considered
            * detect_classes contains “class_name”:”confidence” that you want to track (e.g. ‘person’). Note that confidence here can be lower than “base_confidence”
            * detect_timeout defines a time (in s) since a class is considered “detected” again. E.g. if detect_time = 10s, and same class was detected 2s ago, it won’t be considered “detected” again
            * detect_action contains a script to be executed on detection. Script needs to have as input “class”, “confidence”, and “filename”

            The output video (e.g. the HLS stream in the json example above) contains all classes detected w/ surrounding boxes and labels. Of course, detection is done only on the 1st frame out of batch_size, but all frames have the boxes and labels.
            On detecting a class specified in “detect_classes”, the script saves the image in a ‘detected’ folder (in the format timestamp_classname.jpg), then executes the action specified.
            In my case, I can always watch the stream online and see what the camera detects, but I can choose to have an action taken (e.g. send email/notification with a picture) when certain objects are detected.
            With ~330ms net.forward() and a batch of 4, I can achieve reliably 10fps output.

            If somebody has suggestions on how I can improve this, please leave a comment 🙂

          • Adrian Rosebrock September 28, 2017 at 8:58 am #

            Awesome, thanks for sharing memeka!

  34. Ying September 27, 2017 at 12:34 pm #

    hi Adrian,

    thank you so much for your tutorial! I am a big fan!

    I was wondering, can I use pre-recorded video clips instead of a live camera to feed the video stream? Could you suggest how I can achieve this, please?

    • Adrian Rosebrock September 28, 2017 at 9:12 am #

      Yes. Please see my reply to “Hubert de Lassus”.

  35. tiago September 27, 2017 at 4:36 pm #

    How can I provide the --prototxt and --model arguments directly in the source code?

    args = vars(ap.parse_args())

    • Adrian Rosebrock September 28, 2017 at 9:06 am #

      Please read up on command line arguments. You need to execute the script via the command line — that is where you supply the arguments. The code DOES NOT have to be edited.

  36. Foobar October 1, 2017 at 8:30 pm #

    When the network was trained, did the training data have bounding boxes in it? Or was it trained without them, and OpenCV can just get the bounding boxes by itself?

    • Adrian Rosebrock October 2, 2017 at 9:37 am #

      When you train an object detector you need the class labels + the bounding boxes. OpenCV cannot generate the bounding boxes itself.

      • Foobar October 2, 2017 at 8:22 pm #

        Are the bounding boxes drawn on the training data or is there some other method of doing it?

        • Adrian Rosebrock October 3, 2017 at 11:05 am #

          The bounding boxes are not actually drawn on the raw image. The bounding box (x, y)-coordinates are saved in a separate file, such as a .txt, .json, or .xml file.

          • Foobar October 3, 2017 at 4:26 pm #

            Thank you Adrian for your help.

  37. Jussi Wright October 5, 2017 at 4:38 am #

    Hi,

    I got the detector to work on the video with the help of your another blog (https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/).

    But I have a couple of supplementary questions.
    1. How can I easily get a saved video in which the recognitions are displayed (can I use cv2.imwrite)?
    2. How can I remove the unnecessary labels I do not need (cat, bottle, etc.)? Removing only the label name produces an error.
    3. How do I adjust the code so that only detections with an accuracy of more than 70-80% are displayed?
    4. Do you know of ready-made models for identifying road signs, for example?

  38. Jussi October 5, 2017 at 6:18 am #

    Ok, I found a point for adjusting the accuracy of the detection: ap.add_argument("-c", "--confidence", type=float, default=0.2 <—

    Also I found your blog (https://www.pyimagesearch.com/2016/02/22/writing-to-video-with-opencv/), but I could not find the right settings for me… I get the error:
    …argument -c/--codec: conflicting option string: -c

    • Adrian Rosebrock October 6, 2017 at 5:06 pm #

      You need to update your command line arguments. If you have conflicting options, change the key for the command line arguments. I would suggest reading up on command line arguments before continuing.

      To address your other questions:

      1. Answered from your comment.
      2. You cannot remove just the label name. Check the index of the label (i.e., idx) and ignore all that you are uninterested in.
      3. Provide --confidence 0.7 as a command line arguments.
      4. It really depends on the road signs. Most road signs are different in various countries.

  39. chetan j October 6, 2017 at 3:55 am #

    hi,
    great work, nice tutorial

    just one question: I tried to run this code on my system; it works nicely but has a delay of 5 to 8 seconds when detecting objects.

    How can I overcome this problem?

    • Adrian Rosebrock October 6, 2017 at 4:54 pm #

      What are the specs of your system? 5-8 seconds is a huge delay. It sounds like your install of OpenCV may not be optimized.

      • chetan j October 9, 2017 at 3:15 am #

        hi,
        I’m using a Raspberry Pi 3. The code runs fine but has a delay of 5 to 8 seconds.

        How can I resolve this problem?

        • Adrian Rosebrock October 9, 2017 at 12:14 pm #

          I will be discussing optimizations and how to improve the frames per second object detection rate on the Raspberry Pi in future posts. I would suggest starting here with a discussion on how to optimize your OpenCV + Raspberry Pi install.

      • inayatullah October 16, 2017 at 3:44 am #

        I have reimplemented the same, but using SSD Caffe for Python. When the detector is applied to every second frame, I can get 12 to 14 frames per second on my system. My code is available here:

        https://github.com/inayatkh/realTimeObjectDetection

        • Adrian Rosebrock October 16, 2017 at 12:19 pm #

          Thanks for sharing, Inayatullah!

  40. Chetan J October 7, 2017 at 9:51 am #

    I’m using a Raspberry Pi 3.
    The code runs fine, but operation is slow.

  41. vinu October 9, 2017 at 7:29 pm #

    hi Adrian,
    how can I assign a unique ID number to each and every human object?

    • Adrian Rosebrock October 13, 2017 at 9:16 am #

      What you’re referring to is called “object tracking”. Once you detect a particular object you can track it. I would suggest researching correlation trackers. Centroid-based tracking would be the easiest to implement.

  42. Justin October 9, 2017 at 10:19 pm #

    Hi Adrian.
    Thank you for this post.
    I followed you and made it with an RPi3,
    but it’s too slow…
    How can I fix it?

    and when I started real_time_~.py
    I got this message.
    [INFO] loading model…
    [INFO]starting video stream…

    ** (Frame:3570): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files

    what should I do??

    • Adrian Rosebrock October 13, 2017 at 9:12 am #

      Please see this post on optimizing the Raspberry Pi for OpenCV. The commenter “jsmith” also has a solution.

      For what it’s worth, this is NOT an error message. It’s just a warning from the GUI library and it can be ignored.

      • Stevie t. May 13, 2018 at 6:42 pm #

        I don’t get it. I have the same error and the page didn’t say anything. Can you tell me a command I can use to bypass it, if it is not a real error?

  43. Dim October 10, 2017 at 1:25 am #

    First of all, thank you for this tutorial; very informative. Maybe I missed this, but do you have any tutorials on real-time custom object detection? I want to add an additional object that is not included in the trained model…

    • Adrian Rosebrock October 13, 2017 at 9:11 am #

      Hi Dim — I cover object detection in detail inside the PyImageSearch Gurus course. I would suggest starting there.

  44. Mindfreak October 11, 2017 at 9:59 am #

    Great work sir.
    but when I try to run the code it gives me an error:

    AttributeError: module ‘cv2’ has no attribute ‘dnn’

    How do I solve this error?
    I am using OpenCV version 3.2.0.
    Thanks in advance.

    • Adrian Rosebrock October 13, 2017 at 8:56 am #

      The dnn module is only available in OpenCV 3.3 and newer. Please upgrade to OpenCV 3.3 and you’ll be able to run the code.

  45. Mahsa October 12, 2017 at 8:32 am #

    Thank you for this awesome tutorial. It works quite nicely on my laptop, whereas it has too much delay on the ODROID (so I might try the optimized OpenCV you’ve posted).

    But is there a way to retrain the exact model with fewer classes? I only need two of those classes.

    • Adrian Rosebrock October 13, 2017 at 8:45 am #

      You would need to apply fine-tuning to reduce the number of classes, or you can just ignore the indexes of the classes you don’t care about. However, keep in mind that the total number of classes isn’t going to significantly slow down the network. Yes, fewer classes means less computation, but there is a ton of computation and depth earlier in the network.

  46. Shenghui Yang October 13, 2017 at 3:53 pm #

    Hi Adrian

    Thanks for the wonderful tutorial. I have a small question: I got an error when running the code:

    AttributeError: ‘module’ object has no attribute ‘dnn’

    I have installed OpenCV 3.3.0 and it works. How can I deal with this?

    Thank you.

    • Adrian Rosebrock October 14, 2017 at 10:38 am #

      Hmm, I know you mentioned having OpenCV 3.3 installed but it sounds like you may not have it properly installed. What is the output of:

  47. Andrey October 16, 2017 at 12:37 pm #

    This is a very motivating post to try this technique. Thank you, Adrian.
    How difficult would it be to switch to TensorFlow instead?

    • Adrian Rosebrock October 16, 2017 at 12:54 pm #

      TensorFlow instead of Caffe? That depends on the model. You would need a TensorFlow-based model trained for object detection. As far as I understand, the OpenCV loading capabilities of pre-trained TensorFlow models is still in heavy development and not as good as the Caffe ones (yet). For what it’s worth, I’ll be demonstrating how to train your own custom deep learning object detectors and then deploy them inside Deep Learning for Computer Vision with Python.

  48. Adel October 18, 2017 at 6:32 pm #

    Thanks very much for the tutorial… how do you train the SSD on custom data, like hand detection?

  49. Sunil Badak October 19, 2017 at 11:42 am #

    hi Adrian,
    we are doing a final-year B.E. project in which we need to give movement to a robot depending upon the object it has detected, in such a way that the robot will approach the detected object. Any idea how to achieve this? Thanks

    • Adrian Rosebrock October 19, 2017 at 4:42 pm #

      Congrats on doing your final B.E. project, that’s very exciting. Exactly how you achieve this project is really dependent on your robot and associated libraries. Are you using a Raspberry Pi? If so, take a look at the GPIO libraries to help you get started.

  50. John McDonald October 20, 2017 at 9:17 pm #

    Adrian, this is amazing. But what if we want to detect something else besides a chair etc. How could we make our own detector?

  51. Darren October 22, 2017 at 7:13 am #

    will this work on mobile phones? I’m currently working on object detection as well, but I’m using mobile phones for it.

    • Adrian Rosebrock October 22, 2017 at 8:22 am #

      This code is for Python so you would need to translate it to the OpenCV bindings for the programming language associated with your phone, typically Java, Swift, or Objective-C.

  52. Ying October 23, 2017 at 11:47 am #

    Hi Adrian,

    Can I use another Caffe model to run this Python code, e.g. YOLOv2, etc.?

    • Adrian Rosebrock October 23, 2017 at 12:20 pm #

      OpenCV 3.3’s “dnn” module is still pretty new and not all Caffe models/layers are supported; however, a good many are. You’ll need to test with each model you are interested in.

  53. Justin October 23, 2017 at 12:15 pm #

    Hi Adrian! I’m back!
    Thank you for the answer again.
    Now, I’m trying to use this program for my school project.
    I want to make a sushi detection machine.
    So I need to have the pre-trained data (a sushi-images caffemodel).
    How can I get it? How can I train and obtain my own data?
    Please let me know. Thank you.
    Have a good day.

  54. Win October 24, 2017 at 11:30 am #

    Hi, I just want to ask what algorithms you’ve used in doing this. Thanks!

    • Adrian Rosebrock October 24, 2017 at 2:49 pm #

      Hi Win — this blog post was part of a two part series and I detailed MobileNet Single Shot Detectors (the algorithm used) in the prior week’s blog post. You can access that blog post here: Object detection with deep learning and OpenCV.

      • Peter January 4, 2018 at 9:37 pm #

        Hello Adrian,

        I saw that the latest OpenCV version 3.4 was released. And in the release notes, it says: “In particular, MobileNet-SSD networks now run ~7 times faster than in OpenCV 3.3.1.”
        So I thought that if I used OpenCV 3.4 for your real_time_object_detection.py code, the FPS would increase a lot. But in fact, it seems there is no significant improvement with 3.4.
        1. I used the TX2 platform for the test: one run with OpenCV 3.3 bound to Python 3.5, the other with OpenCV 3.4 built with CUDA support (http://www.jetsonhacks.com/2017/04/05/build-opencv-nvidia-jetson-tx2/)

        Do you know where is the problem?

        2. My goal is to reach 24 FPS for object detection on an embedded platform. Now I am trying MobileNet-SSD on the TX2 with the OpenCV dnn lib, but it seems there is a big gap. Do you have any suggestions?

        Thanks very much. Waiting for your reply…

      • Peter January 4, 2018 at 10:03 pm #

        On the TX2 with OpenCV 3.4 with CUDA support, only ~5 FPS at 400×400:

        nvidia@tegra-ubuntu:~/Downloads/real-time-object-detection$ python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
        [INFO] loading model…
        [INFO] starting video stream…
        [INFO] elapsed time: 64.09
        [INFO] approx. FPS: 5.30

  55. Ife Ade October 31, 2017 at 9:48 am #

    Hi, please, I was wondering if there is a way I could count the number of detections in any image that is passed through the network. Thanks

    • Adrian Rosebrock November 2, 2017 at 2:46 pm #

      Count the number of detections per object? Or for all objects?

      In either case I would use a Python dictionary with the object index as the key and the count as the value. You can then loop over the detections, check to see if the detection passes the threshold, and if so, update the dictionary.

      At the end of the loop you’ll have the object counts. To obtain the number of ALL objects in the image just sum the values of the dictionary.
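
      As a rough sketch, reusing the detections array and confidence threshold from this post’s loop:

      counts = {}
      for i in np.arange(0, detections.shape[2]):
          confidence = detections[0, 0, i, 2]
          if confidence > args["confidence"]:
              idx = int(detections[0, 0, i, 1])
              # the class index is the key, the running count is the value
              counts[idx] = counts.get(idx, 0) + 1
      # the number of ALL objects is just the sum of the dictionary's values
      total = sum(counts.values())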

      • Yadullah Abidi May 2, 2018 at 11:29 am #

        Hey Adrian, I was trying this approach of yours but it doesn’t work. For example, I open my webcam and I am the only person (and object) detected. The confidence is above 90% and the counter just keeps going up. Let’s say there are 4 people in the video stream I am passing to the dnn. I’ve implemented if (CLASSES[idx] == "person"): so that only humans get marked. Now, in this case, as soon as a person is detected with 90% confidence, the counter just keeps going up.

        How do I solve this?

        • Adrian Rosebrock May 3, 2018 at 9:32 am #

          You need to reset your dictionary at the end of your loop. I assume you are counting on a per-frame basis, right? If you do not reset your dictionary, then yes, the counter will keep going up.

          • Yadullah Abidi May 4, 2018 at 2:43 am #

            I assume by resetting my dictionary you are referring to the dict.clear() method, which just empties the whole dictionary. I don’t see how that helps me in a video stream. I need to count the number of detections and show them on the output screen at all times, which means I need to save them in a variable.

          • Adrian Rosebrock May 9, 2018 at 10:33 am #

            In that case you would need to apply object tracking methods so you don’t accidentally “recount” objects that were already counted. Be sure to take a look at object tracking algorithms such as “centroid tracking” and “correlation tracking”.

  56. olivia November 10, 2017 at 7:51 am #

    Hello Adrian,
    I have a project to detect an object from a YCbCr streaming video and crop the object.
    Do you have a tutorial that can help me? Thanks a lot, Adrian.

    • Adrian Rosebrock November 13, 2017 at 2:12 pm #

      I would suggest basic template matching to get you started.

  57. apollo November 20, 2017 at 5:46 am #

    Thank you for your great help. Could you explain how we can count passengers with a bus-mounted overhead camera?

  58. chandiran November 21, 2017 at 7:30 am #

    Hi sir,
    I would like to detect whether a mobile phone is showing in the webcam or not. Will this program help me or not, sir? If so, how can I do it? Help me, sir.

  59. Rocky November 21, 2017 at 8:22 pm #

    I stumbled upon your website. This is just awesome, and thank you for the detailed description. I am getting some ideas on how I can apply your concepts/code to other areas 🙂

    I am thinking of applying this to my project, which is to highlight text on a computer screen. The idea is simple: a user points his mouse at a word, which may be in a Word document, PDF, or picture on his computer screen. If the same word exists elsewhere on his screen, it will be highlighted. I know this is different, but this is still using a real-time screen-recording video stream and tracking the highlighted words. Do you think this can be achieved, or do you have any good ideas? Thanks again

    • Adrian Rosebrock November 22, 2017 at 9:58 am #

      This seems doable, especially if the fonts are the same. There are many ways to approach this problem but if you’re just getting started I would use multi-scale template matching. More advanced solutions would attempt to OCR both texts and compare words.

  60. Sagar November 24, 2017 at 9:20 am #

    I am trying to use this code for GoogLeNet, but it is not working and I can’t figure out the changes. Can you please suggest some changes to the code to implement bvlc_googlenet.caffemodel and bvlc_googlenet.prototxt?

    • Adrian Rosebrock November 25, 2017 at 12:24 pm #

      Hi Sagar — I’m not sure what you mean by “it’s not working and I can’t find the changes”. Could you elaborate?

  61. Jacqueline November 24, 2017 at 5:10 pm #

    I am using my MacBook Pro and within VirtualBox Ubuntu doing all of the tutorials. For some reason, I keep getting the message: “no module named imutils.video.” Any idea why this may be? I did the tutorial on drawing the box around the red game and that worked.

    • Adrian Rosebrock November 25, 2017 at 12:19 pm #

      Make sure you install imutils into your Python virtual environment:
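
      $ workon cv
      $ pip install imutils

      (Here “cv” is whatever you named your Python virtual environment.)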

  62. Jaitun December 2, 2017 at 6:53 am #

    Hey Adrian! The code is just wonderful, but I have one question. Once we have detected these objects, how can we track them? I saw your blog post on tracking a ball, but how will we track so many detected objects from their coordinates?

    • Adrian Rosebrock December 2, 2017 at 7:16 am #

      Once you have an object detected you can apply a dedicated tracking algorithm. I’ll be covering tracking algorithms here on the PyImageSearch blog, but in the meantime take a look at “correlation tracking” and “centroid tracking”. Centroid tracking is by far the easiest to implement. I hope that helps!

  63. Zaira Zafar December 2, 2017 at 9:34 am #

    I tried calling the prototxt and model through the file system, but it gives me an error on reading the model file. Can you please guide me on how to read the files through the file system, instead of passing them as arguments?

    • Adrian Rosebrock December 5, 2017 at 7:55 am #

      If you do not want to parse command line arguments you can hardcode the paths in your script. You’ll want to delete all code used for command line arguments and then create variables such as:
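
      # hardcoded paths in place of the argument parser (filenames from
      # the "Downloads" of this post)
      args = {
          "prototxt": "MobileNetSSD_deploy.prototxt.txt",
          "model": "MobileNetSSD_deploy.caffemodel",
          "confidence": 0.2,
      }

      Keeping the same args dictionary keys means the rest of the script runs unchanged.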

      And from there use the hardcoded paths.

      This is really overkill though and if you read up on command line arguments you’ll be able to get the script up and running without modifying the code.

      It might also be helpful to see the command you are trying to execute.

      • Zaira Zafar December 9, 2017 at 7:12 am #

        It’s a user-oriented application, like how Snapchat uses deep learning. I can’t have the user passing parameters; the user needs to remain ignorant of what is happening in the code.

        • Adrian Rosebrock December 9, 2017 at 7:20 am #

          In that case you should hardcode the parameters. How you package up and distribute the project is up to you but a configuration file or hardcoded values are your best bet.

    • Wajeeha January 4, 2018 at 5:46 am #

      Dear Zaira, I am facing the same issue. Can you please guide me on how you ran this code after getting this issue?

  64. Fardan December 11, 2017 at 2:45 am #

    Hello Adrian, I’m wondering how the SSD does the image pre-processing step so it can detect region proposals. Sorry for my foolish question.

    • Adrian Rosebrock December 12, 2017 at 9:13 am #

      Which pre-processing step are you referring to? Calling cv2.dnn.blobFromImage on the input frame pre-processes the frame and prepares it for prediction.

  65. Tarik December 18, 2017 at 3:08 pm #

    Hello Adrian,

    Thanks for the great tutorial. I have a question regarding the number of classes. Is there any model from Caffe that we can use for more classes? If so, can you please point me to where I can download one to use in the way described in this tutorial? Thanks!

  66. Nicolas December 23, 2017 at 3:25 am #

    How can I train new objects? I do not see the image database!

    • Adrian Rosebrock December 26, 2017 at 4:36 pm #

      For training networks for your own custom objects please take a look at this GitHub repo. The model used in this blog post was pre-trained by the author of the GitHub I just linked to. If you’re interested in training your own custom object detectors from scratch I would also refer you to Deep Learning for Computer Vision with Python.

  67. Huzzi December 25, 2017 at 4:19 pm #

    Hey! This was pretty neat and I am looking forward to taking it further from here.

    I have a few things to clarify: entering q in the console doesn’t seem to quit the program. I believe entering q is supposed to break out of the while loop, but it doesn’t seem to do so.
    Also, out of curiosity, did you develop the algorithms for MobileNet SSD? And is it only trained for the specific objects mentioned when defining the classes?

    • Adrian Rosebrock December 26, 2017 at 3:58 pm #

      You need to click on the active window opened by OpenCV and then hit the q key. This will close the window.

      I did not train this particular MobileNet SSD. A network can only predict objects that it was trained on. However, I do train SSDs (and other deep learning object detection algorithms) inside Deep Learning for Computer Vision with Python.

      • Huzzi January 9, 2018 at 1:00 pm #

        For an autonomous RC car, I might need a model that detects STOP/START etc. signs. I’m wondering if you know of any existing model that I could use?

        • Adrian Rosebrock January 10, 2018 at 12:53 pm #

          I don’t know of a pre-trained model off the top of my head. And realistically, the accuracy of the model will depend on your own stop/start signs. You will likely need to train your model from scratch.

          • Huzaifa Asif January 11, 2018 at 7:13 am #

            The issue is I don’t have any experience in machine learning. Do you have any guide for beginners?

          • Adrian Rosebrock January 11, 2018 at 7:31 am #

            If you are brand new to computer vision and deep learning I would recommend the PyImageSearch Gurus course to help you get up to speed. If you have prior Python experience I would recommend Deep Learning for Computer Vision with Python where I start by discussing the fundamentals of machine learning and then work to more advanced deep learning examples.

            I hope that helps!

  68. Akshra December 28, 2017 at 12:25 pm #

    I’m very new to this. I’m attempting to detect multiple objects and find their distance from the camera of a moving vehicle. Where do you suggest I start?
    Also, the error I’m getting when I run the above code is: “error: the following arguments are required: -p/--prototxt, -m/--model”
    How do I enter those?
    Thanks

    • Adrian Rosebrock December 28, 2017 at 2:05 pm #

      The reason you are getting this error is because you are not supplying the command line arguments. Please see the blog post for examples on how to execute the script. I would also suggest reading up on command line arguments.

      • akshra December 28, 2017 at 10:21 pm #

        Thanks. I got it to work. How can I use this for a moving camera if it is, say, attached to a vehicle?
        I’m attempting to detect multiple objects and find their distance from the camera of a moving vehicle.

        • Andre January 14, 2018 at 7:39 am #

          May I know how you solved it? I’ve read the command line arguments page and still can’t figure it out.

  69. latha December 28, 2017 at 11:25 pm #

    If I want to change the size of the class list (I want to detect only person and cat), what would I have to change to get rid of this error?
    label = "{}: {:.2f}%".format(CLASSES[idx],
    confidence * 100)
    list index out of range

    • Adrian Rosebrock December 31, 2017 at 9:55 am #

      There are a few ways to do this. If you want to truthfully recognize only “person” and “cat” you should consider fine-tuning your network. This will require re-training the network. If you instead want to ignore all classes other than “person” and “cat”, you can check CLASSES[idx] and see if the predicted label is “person” or “cat”.
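
      A minimal sketch of that second approach, inside this post’s loop over the detections (after idx has been computed):

      # ignore any detection that is not a "person" or a "cat"
      if CLASSES[idx] not in ("person", "cat"):
          continue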

      • latha January 1, 2018 at 1:47 am #

        thank you so much. This works.

        • FanWah March 12, 2018 at 1:13 pm #

          Hi latha, can you tell me which part of the code you changed? Can you show me?

  70. akshra December 30, 2017 at 11:38 am #

    If I want to get the x and y coordinates of the detected object, how can I do it?

    • Adrian Rosebrock December 31, 2017 at 9:40 am #

      Please see Line 69 where the starting and ending (x, y)-coordinates of the bounding box are computed.

  71. ramky January 1, 2018 at 2:49 am #

    I gotta say, this works amazingly. In fact, it even works to some extent on a dynamic camera if it’s attached to the front of a vehicle on a highway (if one reduces the confidence level).
    you’re a life saver.

    • Adrian Rosebrock January 3, 2018 at 1:16 pm #

      Thanks Ramky, I’m glad the script is working for you 🙂

  72. Huzzi January 3, 2018 at 5:49 am #

    Did anyone have any issues related to OpenCV? It ran the first time, but since then I haven’t been able to run it as I keep getting this error:
    ImportError: No module named cv2

    Upon running pip install python-opencv, it gives the following error:
    Could not find a version that satisfies the requirement python-opencv (from versions: )
    No matching distribution found for python-opencv

    Anyone?

    • Adrian Rosebrock January 3, 2018 at 12:53 pm #

      Please follow one of my tutorials for installing OpenCV.

      • Huzzi January 8, 2018 at 8:02 am #

        I did, and I got this error:
        real_time_object_detection_OLD.py: error: the following arguments are required: -p/--prototxt, -m/--model

        • Adrian Rosebrock January 8, 2018 at 2:35 pm #

          Please see my reply to Akshra on December 28, 2017. You need to supply the command line arguments to the script.

  73. ahangchen January 4, 2018 at 4:43 am #

    When we use cv2.dnn.blobFromImage to convert an image array to a blob, 0.007843 is the multiplier on the image; why is this value so small? I found that the default value is 1.0.

    • Adrian Rosebrock January 5, 2018 at 1:35 pm #

      Take a look at this blog post where I discuss the parameters to cv2.dnn.blobFromImage, what they mean, and how they are used.
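
      For context, here is a sketch of the call from this post with the relationship spelled out (0.007843 is just 1/127.5):

      # paired with a mean of 127.5, the 1/127.5 scale factor maps pixel
      # values from [0, 255] into roughly [-1, 1], which MobileNet expects
      blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
          1 / 127.5, (300, 300), 127.5)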

  74. Reece January 4, 2018 at 7:22 am #

    Hello Adrian,

    Is it possible to use a different model instead of MobileNet SSD? I find it’s very bad at detecting cars, trucks, and the like using footage from a dash cam.

    As per the tutorial, I would like to track the object whilst providing a label and bounding box, and be able to apply better detection algorithms/methods.

    Any suggestions on which tools to use and how?

    Thanks.

    • Adrian Rosebrock January 5, 2018 at 1:34 pm #

      Right now this is the primary, pre-trained model provided by OpenCV. You cannot take a network trained using MobileNet + SSD and then swap in Faster R-CNN. You would need to re-train the network. Again, I cover this inside Deep Learning for Computer Vision with Python.

      As for tracking, please see my reply to “Eng.AAA” on September 18, 2017.

      I hope that helps!

      • Reece January 7, 2018 at 8:10 am #

        I would like to replace the MobileNet architecture with the VGG16 network architecture. Is this possible, in the sense that I would be able to detect objects in a video at a better mAP?

        I have replaced the protobuf files for use with VGG16, but I can’t get it working. Does your book detail how I could use this network to get it working like your tutorial above but, as I said, at a better precision rate?

        • Adrian Rosebrock January 8, 2018 at 2:47 pm #

          I wouldn’t recommend swapping in VGG, instead use a variant of ResNet. From there you will need to retrain the entire network. You cannot hot-swap the architectures. My book details how to train custom object detectors from scratch on your own datasets. This enables you to create scripts like the one covered in this blog post.

  75. Stefan January 4, 2018 at 7:37 pm #

    Hello there! Loving the tutorial! I just have one question. When I run the code you sent me via email, I get this error:
    AttributeError: 'NoneType' object has no attribute 'shape'
    Any help would be appreciated! Thank you!

    • Adrian Rosebrock January 5, 2018 at 1:31 pm #

      If you’re getting an error related to “NoneType” I’m assuming the traceback points to where the image is read from your camera sensor. Please take a look at this blog post on NoneType errors and how to resolve them.

  76. Mulia January 5, 2018 at 1:58 am #

    Hi Adrian,
    Thank you for sharing this wonderful knowledge. I tried the code above and executed the command accordingly, but I got this reply on my command line:

    [INFO] loading model

    net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
    cv2.error: /home/pi/opencv-3.3.0/modules/dnn/src/caffe/caffe_io.cpp:1113: error: (-2) FAILED: fs.is_open(). Can't open "MobileNetSSD_deploy.prototxt.txt" in function ReadProtoFromTextFile

    Help me with this problem Sir…
    Thank You.

    • Adrian Rosebrock January 5, 2018 at 1:26 pm #

      Double-check your path to the input prototxt and model weights (and make sure you use the “Downloads” section of this blog post to download the code + additional files). The problem here is that your input paths are incorrect.

  77. Dan January 10, 2018 at 8:15 am #

    Hi Adrian, may I know how to create/train my own Caffe model file? Let’s say, for example, I would like to create one for a new set of pills for a hospital. How can I do it? The second thing would be: if a new set of pills comes in, do I have to recreate a whole new Caffe model file or can I use the same one?

    • Adrian Rosebrock January 10, 2018 at 12:45 pm #

      Hey Dan, great questions.

      1. If you want to train your own object detectors you can either (1) see this GitHub repo of the developer who trained the model used in this example or (2) take a look at my book, Deep Learning for Computer Vision with Python where I discuss training your own custom object detectors in detail.

      2. If you want to add new objects that you want to recognize/detect you would need to either re-train from scratch or apply transfer learning.

      • dan January 12, 2018 at 2:03 pm #

        Hi, honestly I don’t mind ordering the book; however, I feel that it’s kind of a waste for me to spend so much because I would only be using it once, as it’s more of a school project. Once it’s over, I won’t have to do this anymore.
        Is there any way that I am able to get the content on training my own custom object detectors only? Thank you

        • Adrian Rosebrock January 15, 2018 at 9:29 am #

          If you’re using it for a one-off school project then DL4CV might not be the best fit for you. Training your own custom CNN-based object detectors can be challenging and requires knowledge of a large number of deep learning concepts (all of which the book covers). If you want to share a bit more about your school project and your experience with machine learning/deep learning, I can let you know if the book would be a good fit for you. Or, in the absolute worst case, I can let you know if your school project is feasible.

  78. Rohit Thakur January 11, 2018 at 11:56 pm #

    Hi Adrian,

    I want to ask you a simple question. It may sound silly.
    How can we save the detected result as a video file like .mp4 or .avi? As I know, we can use the cv2.VideoWriter function for this with different codecs. Can you help, if possible, with an example?

    • Adrian Rosebrock January 12, 2018 at 5:27 am #

      I have two tutorials on using cv2.VideoWriter to write video to disk. You can use them to modify this script to save your video. Take a look at this tutorial to get started. Then from there read this one on only saving specific clips.
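
      As a minimal sketch of the idea (the codec, filename, FPS, and frame size here are illustrative, and the frame size must match your actual frames):

      fourcc = cv2.VideoWriter_fourcc(*"MJPG")
      writer = cv2.VideoWriter("output.avi", fourcc, 20.0, (400, 300))
      # inside the frame loop, after drawing the detections:
      #     writer.write(frame)
      # and once the loop exits:
      #     writer.release()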

  79. Atul Soni January 13, 2018 at 5:46 am #

    Hello,
    After running the command I am getting this:

    python real_time_object_detection.py \
    > --prototxt MobileNetSSD_deploy.prototxt.txt \
    > --model MobileNetSSD_deploy.caffemodel

    [INFO] loading model…
    [INFO] starting video stream…

    VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
    Unable to stop the stream: Device or resource busy

    (h, w) = image.shape[:2]
    AttributeError: 'NoneType' object has no attribute 'shape'

    So what does this error mean?

    • Adrian Rosebrock January 15, 2018 at 9:25 am #

      It sounds like OpenCV cannot access your webcam. When you try to read a frame from the webcam it is returning “None”. You can read more about NoneType errors here.

  80. Atul Soni January 15, 2018 at 1:27 am #

    Hello Adrian,
    I tried this tutorial and it’s working very well.
    But can you please tell me what I need to do if I want to add more objects, like a watch or a wallet? In short, how can I provide my own trained model?

    • Adrian Rosebrock January 15, 2018 at 9:12 am #

      Hey Atul — you would need to:

      1. Gather images of objects you want to detect
      2. Either train your model from scratch or apply transfer learning, such as fine-tuning

      I discuss easy methods to gather your own training dataset here. I then discuss training your own deep learning-based object detectors inside Deep Learning for Computer Vision with Python.

      • Atul Soni January 16, 2018 at 1:40 am #

        Can you please guide me on how I can train my own model from scratch or apply transfer learning?

  81. Marta January 15, 2018 at 3:39 pm #

    Hi Adrian,

    This might look like a really simple question, but I can’t figure it out:

    $ python3 real_time_object_detection.py \ --prototxt MobileNetSSD_deploy.prototxt-txt \ --model MobileNetSSD_deploy.caffemodel
    usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: the following arguments are required: -p/--prototxt, -m/--model

    I get this error when I try to run it on the terminal, I don’t understand it because supposedly I define those arguments when I run it, why is this happening?

    Thanks so much,

    Marta.

    • Adrian Rosebrock January 16, 2018 at 1:01 pm #

      It looks like you have properly passed in the command line arguments so I’m not actually sure why this is happening. Can you try replacing --prototxt with -p and --model with -m and see if that helps? Again, the command line arguments look okay to me so I’m not sure why you are getting that error.

  82. ope January 15, 2018 at 10:27 pm #

    I keep getting this error, thanks.
    usage: deep_learning_object_detection.py [-h] -i IMAGE -p PROTOTXT -m MODEL
    [-c CONFIDENCE]
    deep_learning_object_detection.py: error: the following arguments are required: -i/--image, -p/--prototxt, -m/--model
    [Finished in 7.0s]

    • Adrian Rosebrock January 16, 2018 at 12:54 pm #

      Hey Ope, I have covered in this in a few different replies. Please ctrl + f and search the comments for your error message. See my reply to “Akshra” on December 28, 2017 for the solution.

  83. Mario Kristanto January 15, 2018 at 10:41 pm #

    Hello Adrian,
    This tutorial is amazing.
    But is it possible to use this code on a video that I have?
    How do I change it so it works with the video and not my webcam?

    • Adrian Rosebrock January 16, 2018 at 12:53 pm #

      There are a number of ways to accomplish this. You can use the FileVideoStream class I implemented or you can use a non-thread version using cv2.VideoCapture (also discussed in the post I linked to).
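
      A sketch of the simpler, non-threaded route (the filename is illustrative):

      import cv2
      vs = cv2.VideoCapture("my_video.mp4")
      while True:
          (grabbed, frame) = vs.read()
          if not grabbed:
              break  # we have reached the end of the file
          # ...apply the same per-frame detection code to `frame`...
      vs.release()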

  84. Amit January 16, 2018 at 3:08 am #

    Hi Adrian,

    Here in this tutorial we have used a pre-trained Caffe model. What if we want to train the model according to our requirements? Is there any tutorial which explains how to train a Caffe model according to our own requirements? Your response will be very useful.

    Thanks!

    • Adrian Rosebrock January 16, 2018 at 12:48 pm #

      Hey Amit, thanks for the comment. If you want to train your own custom deep learning-based object detector please refer to the GitHub of the author who trained the network. Otherwise, I cover how to train your own custom deep learning object detectors inside Deep Learning for Computer Vision with Python.

  85. hashir January 19, 2018 at 10:19 am #

    How much time will it take to complete this process on a Raspberry Pi 3?
    python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
    [INFO] loading model…
    [INFO] starting video stream…

    • Adrian Rosebrock January 22, 2018 at 6:42 pm #

      I have provided benchmarks and timings of the code used here over in this blog post.

  86. Deepak January 27, 2018 at 3:07 am #

    I am using the PiCamera and I got an error like this:

    [INFO] loading model…
    [INFO] starting video stream…
    Traceback (most recent call last):
    File "real_time_object_detection.py", line 47, in
    frame = imutils.resize(frame, width=400)
    File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]
    AttributeError: 'NoneType' object has no attribute 'shape'

    • Adrian Rosebrock January 30, 2018 at 10:39 am #

      Hey Deepak — make sure you read the comments and/or do a ctrl + f and search the page for your error. I have addressed this question a number of times in the comments section. See my reply to “Atul Soni” on January 13, 2018 to start. Thanks!

  87. Justin January 27, 2018 at 1:06 pm #

    Hey Adrian,

    Do you have any pre-trained models for detecting drones outside?

    • Adrian Rosebrock January 30, 2018 at 10:33 am #

      Sorry, I do not.

  88. Matthew January 30, 2018 at 5:27 pm #

    Do you know how I can take the data that I get from tracking objects and use it in another program? For example, I want to try to find open parking spaces at my school, and I want to be able to track cars to see if there is an open space or not.

    • Adrian Rosebrock January 31, 2018 at 6:42 am #

      I think that depends on what you mean by “use that towards another program”? The computer vision/deep learning aspect of this would be detecting the open parking spaces. Once you detect an open parking spot it’s up to you what you do with the data. You could send it to mobile devices who have downloaded your parking monitor app. You could send it to a server. It’s pretty arbitrary at that point. I would suggest focusing on training a model to recognize open parking spots to get started.

  89. AMRUDESH BALAKRISHNAN January 31, 2018 at 1:02 am #

    I’m getting the following error:
    usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: the following arguments are required: -p/--prototxt, -m/--model

    What can I do?

    • Adrian Rosebrock January 31, 2018 at 6:37 am #

      To start I would suggest going back to the “Real-time deep learning object detection results” section of this post where I demonstrate how to execute the script. You need to supply the command line arguments to the script when you execute it. If you’re new to command line arguments I would encourage you to read up on them before continuing.

  90. Jimmy January 31, 2018 at 11:20 am #

    Hi Adrian! Good job on this tutorial. I have a question: how can I remove the other classes? For example, I only want to detect the chair. If it is possible, how can I do it? I’m receiving an error and the frame freezes when I try to remove the other classes on Line 22 of real_time_object_detection.

    • Adrian Rosebrock January 31, 2018 at 12:40 pm #

      Hey Jimmy — you don’t want to remove any of the classes on Line 22. Instead, when you’re looping over the detected objects, use an “if” statement, such as:
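
      # assuming `idx` from the loop over the detections:
      # skip everything except "chair" detections
      if CLASSES[idx] != "chair":
          continue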

      • Jimmy February 8, 2018 at 10:53 am #

        Hi there! I tried using that statement under Line 70 but the other classes still appear when I run the code.

        • Adrian Rosebrock February 12, 2018 at 6:44 pm #

          Make sure you double-check your code. If you properly make the check you’ll only be examining “chair” objects — all others will be ignored.

  91. Tinamore February 2, 2018 at 12:05 am #

    Hi, thanks for your great article.

    If I input video from the Pi camera, detection is very good; it works very well. I think that is because the image is very detailed, with less noise.

    But when I input a stream from an HD CCTV camera, most detections are good, but sometimes a detection is wrong. These are URLs of the wrong images:

    https://imgur.com/7Q6ijy7
    https://imgur.com/OOaJAqh

    P.S.: I changed lines 48-49 from 300 to 400. I found that 300 only finds large images of a person, but after changing to 400 it detects small images of a person.

    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (400, 400)),
    0.007843, (400, 400), 127.5)

    I am monitoring a CCTV system with an alert when a person is detected, but I am often falsely alarmed by non-person detections.

    How can I make detection more accurate?

    • Adrian Rosebrock February 3, 2018 at 10:46 am #

      There are a few things you can do here, such as increasing the minimum required confidence for a detection from 20% to 50% or even 75%.
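
      For example, the threshold can be raised right from the command line via the --confidence flag from this post’s argument parser:

      $ python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt \
          --model MobileNetSSD_deploy.caffemodel --confidence 0.5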

      Provided you have enough example data from your cameras you may want to try (1) training a MobileNet + SSD from scratch or (2) fine-tuning the network.

  92. TinaMore February 5, 2018 at 3:22 am #

    Hi,

    I think the output cv2.imshow("Frame", cv2.resize(frame, (300, 300))) should show the same frame that is input to the dnn: cv2.resize(frame, (300, 300)).

    Because if not, the dnn will look at the image with a different aspect ratio than the real frame; for example, the image of a person will be stretched taller.

  93. Vijay February 5, 2018 at 6:13 am #

    When I used the readNetFromDarknet method, the detections array (= net.forward()) is very different (with shape (845, 6)) from that of the Caffe model (which has shape (1, 1, 1, 7)). Could you please guide me on how to proceed with the Darknet model detections array? Also, could you please provide some reference for a deeper understanding of net.forward? Thanks!

    • Adrian Rosebrock February 6, 2018 at 10:19 am #

      Hey Vijay — I haven’t tried the readNetFromDarknet method so I’m not sure about it. I’ll give it a try in the future and, if need be, write a blog post on it. I discuss how deep learning object detection works inside Deep Learning for Computer Vision with Python — this will help you gain insight into what is going on inside a deep learning object detection model.

  94. Rahul February 5, 2018 at 10:59 am #

    Hello Adrian,

    Thanks for putting together this great article.

    I have one question here: if I want to detect trees and buildings, how can I do that? Is there a simple solution, or will it take some effort?

    Could you please help me with this?

  95. Amit February 6, 2018 at 3:37 am #

    Hi Adrian,

    Could you please suggest a tutorial that explains how to create the regression (bounding) box for the detected objects?

    Thanks,
    Amit

  96. Valentin February 8, 2018 at 4:07 am #

    Hi.
    Great tutorial, I was able to make it work without much trouble using a conda environment (install OpenCV using conda to avoid any problems).
    What do I need to do to:
    1) save the number of persons in the video stream (as a people counter)?
    2) make it work with a previously recorded video?

    Thanks!

    • Adrian Rosebrock February 8, 2018 at 7:49 am #

      1. To build a people counter you would want to apply a tracking algorithm so you do not double-count people. Take a look at correlation tracking or even centroid tracking.

      2. You can use this with pre-recorded video by using the cv2.VideoCapture function or my FileVideoStream class.

      If you’re interested in learning more about the fundamentals of OpenCV, take a look at my book, Practical Python and OpenCV.

  97. Abhiraj Biswas February 13, 2018 at 1:09 pm #

    How do we use another training set instead of the one you put in the code? Please help me, because it’s not recognizing everything.

    • Adrian Rosebrock February 18, 2018 at 10:21 am #

      Unfortunately, it’s not that simple. You would need to train your own object detector from scratch or apply fine-tuning to an existing model.

  98. susanna js February 14, 2018 at 1:31 am #

    I have downloaded the code from your page. When I executed it on my Raspberry Pi, I got this error:

    usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: argument -p/--prototxt is required

    I don’t know how to proceed further. Can you send me the procedure to detect objects?

    • Adrian Rosebrock February 18, 2018 at 10:13 am #

      I’ve addressed this question a handful of times in the comments. See my replies to Zaira Zafar, AMRUDESH, and tiago.

  99. Abhishek February 14, 2018 at 4:36 am #

    Hi Adrian,

    I’d like to know how long it took to train the object pool in this real-time object detection system. Also, what did you use for training? And could you explain the Caffe model file in it?

  100. Vineet February 14, 2018 at 5:32 am #

    What are the advantages of using a blob here?

    • Adrian Rosebrock February 18, 2018 at 10:11 am #

      The “blob” contains the frame we are passing through the deep neural network. OpenCV requires the “blob” to be in a specific format. You can learn more about it here.

  101. Snair February 15, 2018 at 9:59 pm #

    Hey, how long did it take you to train the network? Also, what did you train it on?

    • Adrian Rosebrock February 18, 2018 at 9:54 am #

      See my response to “Nicolas” on December 23, 2017.

  102. owais February 16, 2018 at 11:02 am #

    Hi Adrian, I am a big fan and follower of yours. I want to know: can I detect my own object in real time using this program? If yes, please let me know.

    • Adrian Rosebrock February 18, 2018 at 9:50 am #

      What is “your own object”? Is it an object class that the SSD was already trained on? If so, yes. If not, you would need to train your own SSD from scratch or apply fine-tuning.

  103. Tahirhan February 17, 2018 at 10:17 am #

    Can you make a tutorial about how we can train our MobileNet SSD on our own dataset? Thanks!

  104. safal bk February 18, 2018 at 12:15 am #

    I have one question, sir:
    how can I run
    python real_time_object_detection.py \
    --prototxt MobileNetSSD_deploy.prototxt.txt \
    --model MobileNetSSD_deploy.caffemodel
    in the Windows cmd?

    • Adrian Rosebrock February 18, 2018 at 9:39 am #

      The command should run just fine on the Windows command line. Did you try running it?

  105. ProjectForKids February 18, 2018 at 11:03 am #

    Dear Adrian,

    I’m amazed by your example code.
    It took me less than 5 minutes to demo real-time object detection to my kids, thanks to you!
    Thank you for that!

    I’m running it on my laptop and it takes a bit of CPU.
    I have a NVIDIA GeForce GPU on my laptop.
    Is there a way to redirect some of the computation-intensive tasks to this GPU to offload the main CPU?

    Wish you a good day

    • Adrian Rosebrock February 22, 2018 at 9:34 am #

      Congrats on getting up and running with real-time object detection so quickly, great job! The models used with OpenCV + Python are not meant to be used on the GPU (easily). This is a big feature request for OpenCV so I imagine it will come soon.

  106. Richard February 19, 2018 at 12:30 pm #

    Hi, I’m Richard. Is it possible to run your code in PyCharm? I’m getting these errors:

    usage: real_time_object_detection.py [-h] -p MOBILENETSSD_DEPLOY.PROTOTXT -m
    MOBILENETSSD_DEPLOY.CAFFEMODEL
    [-c CONFIDENCE]
    real_time_object_detection.py: error: the following arguments are required: -p/--MobileNetSSD_deploy.prototxt, -m/--MobileNetSSD_deploy.caffemodel

    • Adrian Rosebrock February 22, 2018 at 9:27 am #

      You can use PyCharm to execute the code, but you’ll need to update the command line arguments in the project settings. See this StackOverflow thread for more details.

  107. pooja g. February 21, 2018 at 4:02 am #

    Sir, can we do the object detection demo without using an internet connection?

    • Adrian Rosebrock February 21, 2018 at 9:33 am #

      Yes. Just download the code and run it. You don’t need an internet connection once the code is downloaded.

  108. neha February 23, 2018 at 9:58 am #

    Can I use another model instead of Caffe?

    • Adrian Rosebrock February 26, 2018 at 2:07 pm #

      Right now the OpenCV bindings are most stable with Caffe models, but you can potentially use TensorFlow or Torch as well.

  109. Gal February 24, 2018 at 8:08 am #

    Thanks Adrian, the tutorial is very easy and your explanation very helpful. However, the object detector has plenty of false negatives and false positives. Is there a way to improve the detection or to plug in a better model? I understand there are constraints. I look forward to hearing from you.

    Gal

    • Adrian Rosebrock February 26, 2018 at 2:00 pm #

      You may want to consider tuning the minimum confidence parameter to help filter out false negatives. Depending on your dataset and use case you may want to gather example images of classes you want to recognize from your own sensors (such as where the system will be deployed) and then fine-tune the model on these example images.

  110. Niladri February 26, 2018 at 3:06 am #

    Hi Adrian,

    A big thanks for all your posts, I follow them regularly, and you have done superb work in deep learning. One concept I developed uses my voice message as input: my drone searches and replies to me with a voice message for the detected object. I would like to share my drone video. (https://dms.licdn.com/playback/C5100AQGI8Yxgy8JTrg/442b93fc59874c00aae4de3480dcc90b/feedshare-mp4_500/1479932728445-v0ch3x?e=1519722000&v=alpha&t=vBxMhCBwvc9TLuesd-ME7keC2Plc-2iVCx-QlOS8lz8)

    Keep up the good work.

  111. debasmita February 27, 2018 at 4:38 am #

    What modification is needed if I want to only detect motion? My purpose is to use deep learning techniques for the detection of motion, NOT classification. Please help.

    • Adrian Rosebrock February 27, 2018 at 11:26 am #

      Is there a particular reason you want to use deep learning for motion detection? Basic motion detection can be accomplished through simple image processing functions, as I discuss in this blog post.

  112. satyar March 2, 2018 at 12:03 am #

    Hi Adrian,

    Great tutorial. I just need a small clarification. I want to add/detect an object/thing which is not in the class list you provided. What should be the criteria to add/detect it in the video? For example, I want to detect my mobile phone. So, to detect it, I need to add a class called ‘Mobile’ to the class list. After that, do I need to make any additions to the ‘MobileNetSSD_deploy.prototxt’ file? Guide me in developing the code. Thanks

    • Adrian Rosebrock March 2, 2018 at 10:28 am #

      The .prototxt file does not have to be updated, but you DO need to either re-train the network from scratch or fine-tune the network to recognize new classes. I discuss how to train and fine-tune your own object detection networks inside Deep Learning for Computer Vision with Python.

  113. Zachiya March 2, 2018 at 4:02 am #

    I got an error, and I don’t know why.

    box = detections[0, 0, i, 3:7] * np.array([w, h , w, h])
    ^
    SyntaxError: invalid syntax

    Please help.

    • Adrian Rosebrock March 2, 2018 at 10:25 am #

      Make sure you use the “Downloads” section of this post to download the source code instead of copying and pasting it. It looks like you likely introduced a syntax error when copying and pasting the code.

  114. hashir March 2, 2018 at 8:40 am #

    Hey bro,
    How long did it take to complete this program? I didn’t get any output. Could you please explain how to solve this? It’s very urgent.
    After running this command (below), it looks like this even after 2 hours:
    python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
    [INFO] loading model…
    [INFO] starting video stream…

    • Adrian Rosebrock March 2, 2018 at 10:21 am #

      Hey Hashir — the script will run indefinitely until you click on the open window and press the “q” key on your keyboard.

      • hashir March 7, 2018 at 6:50 am #

        Sorry bro, I didn’t get any proper result after pressing q on my keyboard.

        • Adrian Rosebrock March 7, 2018 at 9:07 am #

          You need to click on the window opened by OpenCV and then press the “q” key on your keyboard.

  115. srikanth March 3, 2018 at 9:17 am #

    Is OpenCV 3.3 or above mandatory? I do all my CV coding in OpenCV 2.10. Can you please help me figure out how to convert this code to run on that version?

    • Adrian Rosebrock March 7, 2018 at 9:45 am #

      Yes, OpenCV 3.3+ is mandatory for the deep neural network (dnn) module. The code cannot be converted to OpenCV 2.4. You need to use OpenCV 3.3+.

  116. yousuf March 5, 2018 at 5:30 am #

    Hi, I am using TensorFlow for object detection, but my model is not detecting objects from the live camera, though it can detect objects in pre-recorded video.

  117. Jakub Fracisz March 7, 2018 at 4:58 pm #

    Hi, when I try to run this code it tells me: usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: the following arguments are required: -p/--prototxt, -m/--model. Do you know what to do?

    Ps. Great article

    • Adrian Rosebrock March 9, 2018 at 9:25 am #

      You need to download the source code to the post, open up a terminal, navigate to where you downloaded it, and execute the script, ensuring you supply the command line arguments. If you’re new to command line arguments, that’s okay, but you should read up on them before trying to execute the script.

      • Jakub Fracisz March 10, 2018 at 10:38 am #

        And how do I navigate to where I downloaded it?

        P.S. Can we get in contact by mail or Messenger? I have some questions.

        • Adrian Rosebrock March 14, 2018 at 1:23 pm #

          You need to use the “cd” command. If you’re new to the terminal and Unix/Linux environments that’s totally okay, but I would recommend that you spend a few days learning the fundamentals of how to use the command line before you try executing this code.

  118. Anar March 9, 2018 at 4:40 pm #

    Hi Adrian,

    How do I use an IP camera instead of a webcam?

    Thanks

    • Adrian Rosebrock March 14, 2018 at 1:29 pm #

      I do not have any tutorials on IP cameras yet but I’ll try do one soon. Depending on your webcam and IP stream it’s either very easy and straightforward or quite complicated.

  119. Ahsan Tariq March 11, 2018 at 12:50 pm #

    Hi Adrian, I tried the code but i am facing a problem. I have asked the question in stackoverflow.
    Link to my question is https://stackoverflow.com/questions/44020713/an-exception-has-occurred-use-tb-to-see-the-full-traceback-python

    Kindly check and answer please.

    (email removed by my spam filter)

  120. Alice March 12, 2018 at 4:59 am #

    Hi Adrian, I did try to follow your tutorial at: https://www.pyimagesearch.com/2016/12/26/opencv-resolving-nonetype-errors/
    And others, but I still have this error:

    File "real_time_object_detection.py", line 59, in

    (h, w) = image.shape[:2]
    AttributeError: 'tuple' object has no attribute 'shape'

    • Adrian Rosebrock March 14, 2018 at 1:09 pm #

      Double-check that OpenCV can access your USB camera or webcam. Based on the error, it looks to me like OpenCV is unable to access the video stream.

  121. Dev March 13, 2018 at 3:51 am #

    How can I use other training image datasets to train the model?
    For example, if I want to detect a UAV in the image, what open source training data is available for this?

    • Adrian Rosebrock March 14, 2018 at 12:47 pm #

      I believe Stanford has a pretty cool UAV dataset.

  122. Yadullah Abidi March 14, 2018 at 6:56 pm #

    Hi Adrian, I’d just like to know how I can reduce the number of classes you provided in the CLASSES array. I’d only like to detect humans and cars. What are the necessary changes that I have to make?

    I tried simply deleting those elements from the CLASSES array, but that seems to have broken the code.

    Thanks

  123. Yadullah Abidi March 14, 2018 at 6:57 pm #

    Ahh, never mind. It was a blunder on my part. The code runs just fine.

    • Adrian Rosebrock March 19, 2018 at 6:06 pm #

      You don’t want to delete elements from the CLASSES array. That will cause an error. Instead, filter on the idx of the detection. See my reply to “latha” December 28, 2017.

  124. Walter suarez March 14, 2018 at 7:29 pm #

    Hello, excellent tutorial!
    First of all, forgive me for my bad English. I wanted to know: how can you reconnect the camera when there is an error? And second, how can the code be modified so that it recognizes only people? Thank you

    • Adrian Rosebrock March 19, 2018 at 6:05 pm #

      1. Can you elaborate on what you mean by “reconnect the camera when there is an error”? I’m not sure what you mean.

      2. See my reply to “latha” December 28, 2017.

  125. Jay Dodia March 18, 2018 at 3:26 am #

    Okay sir, I’ve successfully done the obstacle detection using my Logitech webcam and OpenCV on my Raspberry Pi 3. I would now like to ask you: how do I do obstacle avoidance if I mount my webcam on a bot that is running autonomously, maybe by reducing its speed when an obstacle is detected or changing its path when it detects one? Please help me out with it, sir.
    You can respond to this at my email address: (email removed by spam filter) as soon as possible.
    Thank you so much.

  126. chirag patil March 21, 2018 at 11:12 am #

    I am getting a segmentation fault while running the code. I have successfully installed OpenCV version 3 with the dnn module enabled. Any explanation for this?

    • Adrian Rosebrock March 22, 2018 at 9:56 am #

      Sorry, I’m not sure what would be causing this issue. Can you pinpoint exactly which line is causing the error?

  127. harini March 25, 2018 at 12:23 pm #

    While executing the above code I get the following error:

    Can't open "MobileNetSSD_deploy.prototxt.txt") in ReadProtoFromTextFile, file /home/pi/opencv-3.3.0/modules/dnn/src/caffe/caffe_io.cpp, line 1113 "MobileNetSSD_deploy.prototxt.txt" in function ReadProtoFromTextFile

    Can anyone help me out in solving this?

    • Adrian Rosebrock March 27, 2018 at 6:22 am #

      Your path to the .prototxt file is incorrect. Double-check your file paths and be sure you read up on command line arguments before continuing.

  128. Mathieu March 31, 2018 at 9:03 pm #

    Hi Adrian,

    Thanks for all those tutorials, they’re helping a lot in learning how to use Python and OpenCV!

    I’m able to make this program work, but I’m wondering how to perform an action based on the result (do something if there is one person detected, do something else if there are two, etc.).

    Hope you are still having fun with deep learning.

    Math

    • Adrian Rosebrock April 4, 2018 at 12:37 pm #

      You would want to add an “if” statement in the “for” loop on Line 56 that loops over the detections. More specifically, after Line 63, you would want to do something like this:
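
      # assuming `idx` computed from the detections array on Line 63:
      # take an action based on which class was detected
      if CLASSES[idx] == "person":
          # do something here, e.g. print, log, or sound an alarm
          print("person detected!")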

      • Mathieu April 6, 2018 at 6:56 pm #

        Working perfectly, thank you!

        Is it possible to count the number of people on the screen? (If there is one person, print 1; if there are two, print 2.)

        Math

        • Adrian Rosebrock April 10, 2018 at 12:47 pm #

          Yes. Maintain a dictionary for each frame that stores:

          1. The key of the dictionary as the detected class
          2. The value as the number of objects

          You can loop over each of the detected objects and update the dictionary.

          • Mathieu April 14, 2018 at 7:08 pm #

            Hi Adrian

            Sounds logical, but I’m a little bit confused about how it will look in the code. Can you show me a little example?

            Thanks again for your time.

            Math

          • Adrian Rosebrock April 16, 2018 at 2:29 pm #

            Sorry, I’m absolutely happy to help point you in the right direction but I cannot write code for you. Take a little bit of time to work with Python dictionaries and create a simple script that can count the number of words in a sentence. The same method applies here. Loop over the detected objects and count the number of objects for each class. I have faith in you and I’m confident you can do it! 🙂

  129. Raj April 2, 2018 at 2:35 am #

    Hi Adrian, I need some help. When I give a video as the input (original video length: 5 sec), it runs for about 3 minutes. What is the reason for this? Can you please help me with this?

    • Adrian Rosebrock April 4, 2018 at 12:29 pm #

      Most video files will play between 18-24 FPS. This method can only run at ~6-7 FPS on most standard CPUs. That said, 3 minutes for about 5 seconds of video is an incredibly long time. What type of hardware are you trying to run this code and object detector on?

      • Raj April 6, 2018 at 4:33 am #

        Ubuntu 16.04
        16GB RAM
        64-bit OS

        • Adrian Rosebrock April 6, 2018 at 8:42 am #

          Given your system specs the object detector should certainly be running at a higher frame rate. How large (in terms of width and height) are your input images?

          • Raj April 9, 2018 at 5:48 am #

            The resolution of the video is 640×352.

          • Adrian Rosebrock April 10, 2018 at 12:12 pm #

            640×352 should be easily processable by a standard laptop/desktop. To be honest I think there might be an install/configuration problem with your version of OpenCV. Try to re-compile and re-install OpenCV, ideally on a fresh install of an operating system.

          • Raj April 9, 2018 at 5:50 am #

            Is there any method to feed the video in at 1 FPS?

  130. Ganesh April 2, 2018 at 5:14 am #

    Hello sir, how can I estimate the speed of multiple vehicles using OpenCV and Python?

    • Adrian Rosebrock April 4, 2018 at 12:26 pm #

      There are a few ways to build such a project. The first is to calibrate your camera. You will then need a method to localize the vehicle. Then apply an object tracking algorithm for each object in the video. Given a calibrated camera and known FPS you can determine how far, and therefore, how fast, an object has moved between subsequent frames in a video. It’s a bit of a tricky process so you’ll want to take your time with it.

  131. Raj April 4, 2018 at 12:42 am #

    I have to count the number of objects in each frame of the video, and if the number of objects is less than the previous count, I have to give a notification that objects are missing. Can you help me do this, please?

    • Adrian Rosebrock April 4, 2018 at 12:07 pm #

      You will be able to accomplish this using the source code of this post with only tiny modifications. Create a dictionary that counts the number of objects in subsequent frames. If the counts for an object differ, send the alert.
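
      A rough sketch of the comparison, building on the per-frame counting idea from earlier in these comments (the names are illustrative):

      prev_counts = {}
      # ...inside the frame loop, after building this frame's `counts`:
      for (label, prev) in prev_counts.items():
          if counts.get(label, 0) < prev:
              print("[ALERT] one or more '{}' objects missing".format(label))
      prev_counts = counts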

      • Raj April 6, 2018 at 1:36 am #

        thank you:)

  132. Alina April 5, 2018 at 8:49 am #

    Hello Adrian,

    I installed everything on an Ubuntu machine with no errors; however, when I run the script I get the following error. Any ideas on how to fix it?

    python real_time_object_detection.py \
    > --prototxt MobileNetSSD_deploy.prototxt.txt \
    > --model MobileNetSSD_deploy.caffemodel

    AttributeError: module ‘cv2’ has no attribute ‘dnn’

    Cheers,
    Alina

    • Adrian Rosebrock April 6, 2018 at 8:55 am #

      Hey Alina — you need to install OpenCV 3.3 or greater. Previous versions of OpenCV did not include the “dnn” module. Double-check your OpenCV version and upgrade if necessary.

      • Alina May 5, 2018 at 7:33 am #

        You were right, I had installed OpenCV 3 in the beginning. Could I ask you a question?

        I am trying to give a .webm video file as input, but it throws an error. What tutorial can I watch so I can make this work? Would I need to make any changes to the code apart from the part giving the input stream?

        • Adrian Rosebrock May 9, 2018 at 10:26 am #

          1. Nice, I’m glad the OpenCV issue was resolved.

          2. Without knowing exactly what the error is I cannot provide any guidance. Please keep in mind that I can only provide suggestions or help if you can tell me exactly what issue you are running into.

  133. Fensius April 10, 2018 at 10:11 am #

    Hi Adrian, I get stuck here:

    [INFO] loading model…

    (h, w) = image.shape[:2]
    AttributeError: 'NoneType' object has no attribute 'shape'

    I’ve seen the comment from Atul Soni and have tried the explanation you gave. I have checked whether the PiCamera works, and I also had to install libjpeg, but it still doesn’t work. How do I solve it? Thank you

  134. Fensius Aritonang April 11, 2018 at 9:14 pm #

    Thanks Adrian, it worked! But there is a problem when streaming: frames are displayed very slowly. Is there any way to speed up the FPS on a Raspberry Pi?

    • Adrian Rosebrock April 13, 2018 at 6:53 am #

      The Pi by itself isn’t suitable for real-time detection using these deep learning models. I provide some benchmarks and explain why in this blog post. For additional speed, try the Movidius NCS.

  135. Anthony April 12, 2018 at 8:29 am #

    Hi Adrian,

    I would like to apologize in advance because my English isn’t the best it could be, but I really wanted to tell you how much I appreciate your tutorials. They have really helped me deepen my knowledge in the field of OpenCV.

    In a personal project of mine, where I try to incorporate your code in a ROS node, I face the problem of converting your while loop (where the whole frame processing takes place) into a function.

    But I really struggle to create the appropriate return statement to receive the same results.

    Thanks in advance for your response.
    Cheers

    • Adrian Rosebrock April 13, 2018 at 6:49 am #

      Hi Anthony — thanks for the comment, and no worries, your English is very easy to understand.

      You mentioned a problem with the “while” loop and trying to return a particular result. Could you elaborate more on what the specific issue is with the “while” loop and what you are trying to accomplish?

  136. Aman Sharma April 12, 2018 at 6:23 pm #

    Hi Adrian
    I executed the code but got an error stating that ‘module’ object has no attribute ‘dnn’.
    I’m using OpenCV 3.3 and also have opencv_contrib 3.3.
    The modules folder has a dnn folder as well,
    yet I’m getting the error.
    Could you please help me out with it?
    Thank you

    • Adrian Rosebrock April 13, 2018 at 6:40 am #

      Hey Aman — it sounds like, for whatever reason, your version of OpenCV does not include the “dnn” module. Perhaps you are using a previous version of OpenCV accidentally? To confirm your OpenCV version open up a Python shell and check:
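
      import cv2
      print(cv2.__version__)

      Anything that prints a version lower than 3.3.0 will not include the dnn module.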

  137. Yin April 16, 2018 at 6:18 am #

    Hi, I ran your project on my Ubuntu 16.04 machine. No errors occurred, but the window called 'Frame' is full of green; nothing is shown from my notebook's front camera. My camera runs normally under Windows 10.
    How can I solve this problem? I will be grateful if you can help me!

    • Adrian Rosebrock April 16, 2018 at 2:15 pm #

      Hey Yin — are you using the code + model included with this blog post? Or a different model + code?

      It sounds like there are hundreds if not thousands of detections coming from your model. This could be due to false-positives or a bug in your model. Double-check your confidence threshold used to filter out weak predictions as well (you may need to increase the threshold).
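
      As a sketch of what that filtering step looks like (detections is the 4-D array returned by net.forward() in the tutorial; the helper itself is just illustrative):

      def strong_detections(detections, min_confidence=0.5):
          # a higher min_confidence discards more weak, likely-false-positive
          # detections
          keep = []
          for i in range(0, detections.shape[2]):
              confidence = detections[0, 0, i, 2]
              if confidence > min_confidence:
                  keep.append(i)
          return keep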

      • Yin April 17, 2018 at 3:39 am #

        Yes, I am using the code + model included with this blog post.
        Increasing the threshold doesn't solve the problem.
        I think there may be something wrong with my notebook's front camera driver under Linux, because I can't get the full video from my camera: only the top half of the video is shown, and the bottom half is all green with no signal.

        • Adrian Rosebrock April 17, 2018 at 9:21 am #

          Unfortunately it does sound like there is a problem with your laptop camera. I would also suggest getting your hands on a USB camera as well so you can debug further.

          • Yin April 17, 2018 at 11:39 am #

            Thank you very much!
            I have solved the problem.
            By the way, if I want to get the video stream from an external camera instead of the notebook's front camera, can you recommend one? That way I can monitor other places rather than just the objects in front of my computer.

          • Adrian Rosebrock April 18, 2018 at 3:06 pm #

            Nice, congrats on resolving the problem. As far as a USB camera goes, I really like my Logitech C920. It’s plug and play compatible with most systems.

          • Daniel April 25, 2018 at 3:04 am #

            Hi Yin,

            Can you share how you solved the problem? I'm facing the same issue but can't find a solution. I'm working with Ubuntu 16.04, and the webcam works all right in Windows 10 and in guvcview on Ubuntu.

            Thanks!!

          • Yin April 29, 2018 at 9:21 am #

            Hi Daniel. First, check the connection between your camera and the Ubuntu VM; they should be connected via USB 3.0. Then install Cheese in your Ubuntu shell with these commands:
            $ sudo apt-get install cheese
            $ cheese
            It should display the video captured by your camera.

          • Daniel May 2, 2018 at 4:05 am #

            Hi Yin,

            thanks for your advice, it solved the problem!

          • Adrian Rosebrock May 3, 2018 at 9:36 am #

            Awesome, I’m glad it worked! 🙂

  138. Abdoul April 16, 2018 at 4:07 pm #

    As always, your tutorials are very clear, thank you. I tried it on the Raspberry Pi, and although the rendering is a little slow, that's not a problem because I want to count (e.g. each 5 fsp) the number of cats. Please can you help me with the syntax to add?
    Thank you in advance

    • Adrian Rosebrock April 17, 2018 at 9:29 am #

      Hey Abdoul, just to clarify from your comment, are you trying to increase the FPS processing rate of the object detection? Or count the total number of cats in each frame? The reason I ask is because I don’t know what you mean by “each 5 fsp” which I interpreted as a typo of “5 FPS” so I’m a bit confused on what you are trying to accomplish.

  139. Mamta April 18, 2018 at 3:15 am #

    Hi,
    I am trying to run your code on an NVIDIA Jetson setup. The code only uses the CPU and the GPU utilization is zero; the FPS is only 5.

    1. Can you tell me if there is a way to assign specific tasks (like inference) to the GPU using OpenCV?

    Thanks

    • Adrian Rosebrock April 18, 2018 at 2:55 pm #

      This may be a silly question, but I assume you compiled OpenCV with GPU and OpenCL support already?

      • Mamta April 20, 2018 at 6:50 am #

        Yes, compiled with both GPU and OpenCL support.
        If I use SSD MobileNet in TensorFlow and OpenCV, the GPUs are utilized to maximum capacity.

        Is there an option to set/enable the GPU for inference?

        • Adrian Rosebrock April 20, 2018 at 9:36 am #

          My understanding (which could be incorrect) is that OpenCL should determine the most optimized way to run the code. If that isn't happening for you, I would suggest opening an issue on the official OpenCV GitHub page. Once you do, definitely post the link back so others, myself included, can learn from it.

  140. AGarg April 19, 2018 at 10:13 am #

    Hello,

    Really useful article; I was all set up in one day!

    SSD seems to produce lower confidence levels for small objects. Any suggestions to improve this?

    • Adrian Rosebrock April 20, 2018 at 10:06 am #

      There are a few ways to handle small objects with SSDs. The "hack" recommended by others is to increase the resolution of the image passed into the network. This will slow down inference but will help when detecting smaller objects.
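
      As a rough sketch of that hack (not the exact code from this post; the model was trained at 300x300, so accuracy at other input sizes isn't guaranteed, and depending on the network definition you may also need to update the input dimensions in the prototxt):

      import numpy as np
      import cv2

      frame = np.zeros((720, 1280, 3), dtype="uint8")  # stand-in for a real frame

      # build a larger blob than the usual 300x300 so small objects cover
      # more pixels
      blob = cv2.dnn.blobFromImage(cv2.resize(frame, (512, 512)),
          0.007843, (512, 512), 127.5)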

  141. Lee April 24, 2018 at 1:30 am #

    Hi Adrian, this is a really nice article. Any suggestions on how to add more classes to the model so that we can detect more objects?

    thank you if you can answer my questions.

    • Adrian Rosebrock April 25, 2018 at 5:53 am #

      Hey Lee, I would suggest skimming the comments as I've addressed how to add more classes to the model. The gist is that you have two options:

      1. Train a network from scratch
      2. Apply fine-tuning

      I cover both inside Deep Learning for Computer Vision with Python.

  142. beta farhan April 24, 2018 at 6:40 am #

    Hello Adrian, how can I train on my own data? For example, I want to train it to detect my book. Thank you.

    • Adrian Rosebrock April 25, 2018 at 5:44 am #

      Hey Beta, it’s awesome to hear that you would like to train your own custom deep learning object detector. I actually cover how to train your own deep learning object detectors inside Deep Learning for Computer Vision with Python. I would suggest starting there.

  143. Yadullah Abidi April 26, 2018 at 4:08 pm #

    Hey Adrian!

    Any ideas on how can I “count” the number of detections? Let’s say I had 3 people walk into the frame from one side and exit from the other side, so how can I count those 3 people and like save that count to a variable?

    • Adrian Rosebrock April 28, 2018 at 6:11 am #

      See my reply to “Ife Ade” on October 31, 2017.

  144. Hari April 27, 2018 at 11:53 pm #

    Hello Adrian, how can I know the position of the object?
    For example, I want to detect fire/flames, use the position to drive a servo, and then point it at the fire.

    Thank you

    • Adrian Rosebrock April 28, 2018 at 6:02 am #

      Object detection will give you the (x, y)-coordinates of an object in a frame. Are you trying to move a servo for object tracking? If so, you can move the servo relative to where the object is moving. See this blog post for more information.
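
      As a quick sketch of the idea (example numbers only): take the bounding box produced by the detection loop, compute the object's center, and treat its offset from the frame center as the error a pan/tilt servo loop drives toward zero.

      # example bounding box from the detection loop and example frame size
      (startX, startY, endX, endY) = (120, 80, 280, 240)
      (frameW, frameH) = (400, 300)

      # center of the detected object
      centerX = (startX + endX) / 2.0
      centerY = (startY + endY) / 2.0

      # signed offsets from the frame center; a servo controller would move
      # to drive these toward zero
      errorX = centerX - frameW / 2.0
      errorY = centerY - frameH / 2.0
      print(errorX, errorY)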

  145. Randy April 29, 2018 at 4:01 pm #

    Hello Adrian, I tried running the detection on a local video file as the input using the OpenCV video capture function; however, I faced the error mentioned below.

    File "C:\Users\Raghav\Anaconda3\lib\site-packages\imutils\convenience.py", line 69, in resize
    (h, w) = image.shape[:2]

    AttributeError: 'tuple' object has no attribute 'shape'

    Your help would be highly appreciated. Thanks

    • Adrian Rosebrock May 3, 2018 at 10:17 am #

      It looks like the full tuple returned by cv2.VideoCapture.read is being passed to imutils.resize. That method returns a (grabbed, frame) tuple, so unpack it and pass only the frame. See this blog post for more information on "NoneType" errors.

  146. ghiz April 29, 2018 at 8:36 pm #

    hello

    I used an Arducam Mini 2MP; will it work for this?

    • Adrian Rosebrock April 30, 2018 at 12:46 pm #

      I’ve heard that Arducam is making Raspberry Pi compatible cameras due to demand, but that’s all I know. I haven’t tried any of the Arducam cameras with my Raspberry Pi.

  147. Tamer April 30, 2018 at 2:15 am #

    Hi Adrian, I tried to use bvlc_googlenet because I want to detect a soccer ball. I am making a robo-keeper for my graduation project, and I want to detect the ball and its coordinates in each frame, but it gives me an error: Can't open "bvlc_googlenet.prototxt".

    • Adrian Rosebrock April 30, 2018 at 12:37 pm #

      Double check the filepath for your .prototxt file. That’s my best guess. I’ve also heard of cases where the prototxt needs to be modified to be compatible with OpenCV’s DNN module.

  148. Tamer May 3, 2018 at 7:59 pm #

    Can I try it with the GoogLeNet model while keeping the sliding window approach?

  149. Jahnavi May 6, 2018 at 2:59 pm #

    Hey! Great post.

    When I'm executing the code I'm getting an error:
    ImportError: No module named imutils.video

    How do I rectify it?

    • Adrian Rosebrock May 9, 2018 at 10:14 am #

      Make sure you install the “imutils” library on your system:

      $ pip install imutils

      If you are using Python virtual environments do not forget to use the “workon” command to access the virtual environment first.

  150. Ann May 9, 2018 at 1:48 am #

    Hi Adrian,
    this blog was just mind-blowing.
    I was wondering, if I want to detect a cup, how should I train the model?

    • Adrian Rosebrock May 9, 2018 at 9:32 am #

      Hi Ann — thanks for the comment. I’m so happy to hear you are enjoying the PyImageSearch blog! If you want to train your own model to detect a cup, I would recommend you:

      1. Use this blog post to build your own deep learning dataset of “cup” images
      2. Follow the instructions inside Deep Learning for Computer Vision with Python to train your own deep learning object detector

  151. Sp May 10, 2018 at 3:44 pm #

    Thanks,
    you're really helping many people understand how deep learning works.
    I suggest that you make a course on deep learning on Udemy.
    If you already have any courses or YouTube tutorials, please tell me.

    • Adrian Rosebrock May 14, 2018 at 12:13 pm #

      I offer a book/complete self-study program on deep learning called Deep Learning for Computer Vision with Python. The book is sold through my website, PyImageSearch. Give it a look and let me know if you have any questions.

  152. Ferdows May 13, 2018 at 4:20 pm #

    Dear Adrian,
    Thank you a lot for such a nice learning environment.
    I have a question: how can I change this code to detect objects from a video file instead of a live stream? I know your previous tutorials work from files, but not with deep learning.
    I have tried a lot, but it only opens the video like a still picture and freezes.

    • Adrian Rosebrock May 14, 2018 at 11:57 am #

      You would need to use the cv2.VideoCapture class and supply the path to the input file. Here is an example of reading frames from a video file. I hope that helps!
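
      A minimal sketch (the "input.mp4" path is just a placeholder):

      import cv2

      cap = cv2.VideoCapture("input.mp4")

      while True:
          (grabbed, frame) = cap.read()
          if not grabbed:  # reached the end of the file
              break
          # run the same per-frame detection code from this post on `frame`

      cap.release()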

  153. Brandon May 14, 2018 at 3:32 am #

    Hi Adrian,

    First off I’d like to thank you for your wonderful tutorials. It is very helpful for a python and opencv beginner like myself (computer programming beginner actually).

    I'd like to ask a question about this code, specifically about its usage with pre-recorded videos rather than a live stream.

    I am trying to run the code on a 40-second test video; however, it is taking approximately 5 minutes to process (it appears to slow down the video in order to run detection on it). At first I thought it would be harder for the code to handle a live stream than a pre-recorded video; however, this is obviously not the case, as you've shown it can process live streams in real time. Can you explain why this may be so? Both my webcam and test videos are 30 FPS at 1280w/720h resolution, so I had expected that the recorded video would run the same if not faster.

    Note: for clarification, I have read the comments on your other tutorial on faster video processing with threading; however, I am on a newer version of Python/OpenCV and the "slow" cv2.VideoCapture is faster.

    I hope to get a reply on this likely very beginner question.

    Kind regards

    • Adrian Rosebrock May 14, 2018 at 11:51 am #

      It's actually not always the case that processing a recorded video will be faster. Recorded videos are normally compressed in some manner, and depending on which video codec was used and which video libraries you have installed, it may actually take longer to process a video file than a true video stream. Without knowing your file format or system configuration I'm unfortunately not able to give further advice, but I hope that at least points you in the right direction.

  154. Aagam May 14, 2018 at 9:47 am #

    Hello Adrian, great post! I want to add some other objects like phone, laptop, ball, etc. Does that require a different model? Which model should I use? And if I have the datasets, how do I train it?

  155. Ajay May 16, 2018 at 4:45 am #

    Hi Sir, after downloading the code, I ran the command "python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel" in cmd. It opens the webcam and recognizes objects, but the frame is green. My webcam works fine when accessed normally, but it shows a green frame while running the code.

    • Adrian Rosebrock May 17, 2018 at 6:58 am #

      Hey Ajay, that sounds like a driver issue with your webcam/a problem with OpenCV accessing your webcam. Since your webcam is working normally it’s most likely an OpenCV-specific issue. What model of webcam are you using and on which OS?

      • Ajay May 18, 2018 at 5:56 am #

        Hi Sir, I am using an Intel RealSense 3D camera, which comes built into my Windows 10-based Lenovo laptop. Thanks!

        • Adrian Rosebrock May 22, 2018 at 6:44 am #

          Sorry, I’m not familiar with the Intel RealSense 3D camera. I would suggest contacting Intel support or posting on their developer forums. Sorry I couldn’t be of more help here!

          • Ajay May 22, 2018 at 12:51 pm #

            It's okay; anyway, your blogs on computer vision and neural networks are really good. Thanks for the help 🙂

  156. Bob Inventor May 16, 2018 at 9:56 pm #

    Hi, what do I have to do to get the code to detect only one object class, for example, bird? I have tried deleting the classes but get errors and am unsure what to do.

    All I want to detect is birds.

    • Adrian Rosebrock May 17, 2018 at 6:45 am #

      This blog post will solve your exact problem. 🙂

      • Zack Inventor May 19, 2018 at 1:02 pm #

        Thanks! What I am doing is checking to see if 'bird' is detected. Even if I ignore all other objects, they are still part of CLASSES[idx], so if I do

        if CLASSES[idx] == "bird":

        bird is only detected when it is the only thing in the camera view.

        If I put a picture of a car next to it, it only detects the bird on screen; it does not print 'bird detected' because it sees the car as well.

        Is there a way so that the only thing detected is 'bird'?

        thanks!

        • Adrian Rosebrock May 22, 2018 at 6:19 am #

          I think you have a problem with your implementation of class filtering. The code in the post I linked to above will enable you to ignore all classes except the ones that you want. Make sure you are using the “Downloads” section of the blog post to download the code rather than implementing it on your own.
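
          As a sketch of what that filtering looks like inside this post's detection loop (CLASSES, detections, and args are the names from the tutorial's code):

          # labels to skip: everything except "bird"
          IGNORE = set(CLASSES) - {"bird"}

          for i in range(0, detections.shape[2]):
              confidence = detections[0, 0, i, 2]
              if confidence < args["confidence"]:
                  continue

              idx = int(detections[0, 0, i, 1])
              if CLASSES[idx] in IGNORE:
                  continue

              # each detection is checked independently, so this fires for a
              # bird even when other objects are in the frame
              print("bird detected")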

  157. vishakraj May 18, 2018 at 3:31 pm #

    usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: unrecognized arguments: caffemodel
    What should I do?
    Thanks in advance

  158. Dimuthu May 24, 2018 at 11:02 pm #

    Dear Adrian,

    Using TensorFlow transfer learning, I created my own custom object detector. It works pretty well with the webcam, but the problem is that when I run the code using the live feed from the IP cam, it does not detect as expected. Kindly guide me in solving this problem.

    By the way, earlier there was a delay in the live stream, but thanks to your post on real-time object detection with deep learning and OpenCV, there is now no delay. 😀

    • Adrian Rosebrock May 25, 2018 at 5:46 am #

      You should be able to take Line 35:

      vs = VideoStream(src=0).start()

      And modify it to be:

      vs = VideoStream(src="rtsp://192.168.1.2:8080/out.h264").start()

      Under the hood VideoStream is threading the cv2.VideoCapture object so you’ll want to research the cv2.VideoCapture function and whatever particular stream you are using.

  159. Devrim Ayyildiz June 13, 2018 at 9:13 am #

    Hi Adrian,

    First of all, thank you for your excellent tutorials. I am new to Python and a complete rookie when it comes to the concepts of image recognition, deep learning, etc. Despite that, I was able to somewhat follow your code and get it running on my Ubuntu VM with a USB camera in a few hours. This is really great and motivating.

    My goal is to get this setup running on a Raspberry Pi board with a USB camera, and what I want to do is control a dog repellent circuit when the Python program detects a dog (which will be my dog at home, whom I don't want near our main door as she scratches it when I leave her alone). Your code will probably work just fine to meet my goal, but what I had in mind in the beginning was to train a simple model with some images (or video) of my dog only, so that I will have a very limited training set for one target (i.e. my dog). It will be enough if the algorithm just detects my dog and does not care about detecting any other objects.

    Is there a lightweight library (one that will run on a Raspberry Pi board) that I can use to train a basic model? I may be using the terminology incorrectly here, but I hope I was able to make myself clear.

    Thanks again!

    • Adrian Rosebrock June 15, 2018 at 12:42 pm #

      There are lots of models that can run on the Raspberry Pi, ranging from simple "toy" models for educational purposes all the way up to state-of-the-art networks like SqueezeNet and MobileNet. My suggestion would be to start with this post to train your own model. From there you should go through Deep Learning for Computer Vision with Python to learn how to train more advanced models. I hope that helps point you in the right direction!

  160. HSU June 15, 2018 at 7:47 am #

    When I run it, I get the following error.
    Can you kindly advise?

    usage: real_time_object_detection.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    real_time_object_detection.py: error: the following arguments are required: -p/--prototxt, -m/--model

  161. Denis June 17, 2018 at 4:58 pm #

    Hi Adrian,
    When I ran the code (using Spyder on Windows), I got SystemExit: 2. I understand that it has something to do with the argparse module.

    What I did was simply download your code, open it in Anaconda's Spyder, and then run it. Is there anything else I should be running along with the main code that I downloaded, or is there some blatant mistake that I might be making here?
    Thanks.

    • Denis June 17, 2018 at 5:31 pm #

      Hi Adrian,

      Sorry to spam this comment section. I found the problem and fixed it (it was the argparse module). These two comments of mine should probably be removed from the thread as they do not contribute anything.
      Thanks!

      • Adrian Rosebrock June 19, 2018 at 8:49 am #

        Congrats on resolving the issue, Denis. I think the comments should stay as other readers may have this question as well.

        I would recommend that anyone reading this comment read up on command line arguments, as doing so will save you headaches if you are running into errors with them.
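
        One common workaround when running the script inside an IDE like Spyder (a sketch, not the method used in this post) is to comment out the argparse block and hardcode the values it would have parsed:

        # replaces the argparse block when running inside an IDE
        args = {
            "prototxt": "MobileNetSSD_deploy.prototxt.txt",
            "model": "MobileNetSSD_deploy.caffemodel",
            "confidence": 0.2,
        }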

        Thanks again, Denis! 🙂

  162. Rusiru June 20, 2018 at 10:08 am #

    I just want to say THANK YOU !!!!!!!!!!!!

    • Adrian Rosebrock June 20, 2018 at 4:00 pm #

      And thank you Rusiru for being a PyImageSearch reader 🙂

Trackbacks/Pingbacks

  1. Raspberry Pi: Deep learning object detection with OpenCV - PyImageSearch - October 16, 2017

    […] few weeks ago I demonstrated how to perform real-time object detection using deep learning and OpenCV on a standard […]
