Basic motion detection and tracking with Python and OpenCV


That son of a bitch. I knew he took my last beer.

These are words a man should never, ever have to say. But I muttered them to myself in an exasperated sigh of disgust as I closed the door to my refrigerator.

You see, I had just spent over 12 hours writing content for the upcoming PyImageSearch Gurus course. My brain was fried, practically leaking out my ears like half-cooked scrambled eggs. And after calling it quits for the night, all I wanted to do was relax and watch my all-time favorite movie, Jurassic Park, while sipping an ice-cold Finestkind IPA from Smuttynose, a brewery I have become quite fond of as of late.

But that son of a bitch James had come over last night and drank my last beer.

Well, allegedly.

I couldn’t actually prove anything. In reality, I didn’t really see him drink the beer as my face was buried in my laptop, fingers floating above the keyboard, feverishly pounding out tutorials and articles. But I had a feeling he was the culprit. He is my only (ex-)friend who drinks IPAs.

So I did what any man would do.

I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:

Figure 1: Don't steal my damn beer. Otherwise I'll mount a Raspberry Pi + camera on top of my kitchen cabinets and catch you.



But I take my beer seriously. And if James tries to steal my beer again, I’ll catch him red-handed.


A 2-part series on motion detection

This is the first post in a two part series on building a motion detection and tracking system for home surveillance. 

The remainder of this article will detail how to build a basic motion detection and tracking system for home surveillance using computer vision techniques. This example will work with both pre-recorded videos and live streams from your webcam; however, we’ll be developing this system on our laptops/desktops.

In the second post in this series I’ll show you how to update the code to work with your Raspberry Pi and camera board — and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.

And maybe at the end of all this we can catch James red handed…

A little bit about background subtraction

Background subtraction is critical in many computer vision applications. We use it to count the number of cars passing through a toll booth. We use it to count the number of people walking in and out of a store.

And we use it for motion detection.

Before we get started coding in this post, let me say that there are many, many ways to perform motion detection, tracking, and analysis in OpenCV. Some are very simple. And others are very complicated. The two primary methods are forms of Gaussian Mixture Model-based foreground and background segmentation:

  1. An improved adaptive background mixture model for real-time tracking with shadow detection by KaewTraKulPong et al., available through the cv2.BackgroundSubtractorMOG  function.
  2. Improved adaptive Gaussian mixture model for background subtraction by Zivkovic, and Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction, also by Zivkovic, available through the cv2.BackgroundSubtractorMOG2  function.

And in newer versions of OpenCV we have Bayesian (probability) based foreground and background segmentation, implemented from Godbehere et al.’s 2012 paper, Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation. We can find this implementation in the cv2.createBackgroundSubtractorGMG  function (we’ll be waiting for OpenCV 3 to fully play with this function though).

All of these methods are concerned with segmenting the background from the foreground (and they even provide mechanisms for us to discern between actual motion and just shadowing and small lighting changes)!

So why is this so important? And why do we care what pixels belong to the foreground and what pixels are part of the background?

Well, in motion detection, we tend to make the following assumption:

The background of our video stream is largely static and unchanging over consecutive frames of a video. Therefore, if we can model the background, we can monitor it for substantial changes. If there is a substantial change, we can detect it — this change normally corresponds to motion in our video.

Now obviously in the real world this assumption can easily fail. Due to shadowing, reflections, lighting conditions, and any other possible change in the environment, our background can look quite different in various frames of a video. And if the background appears to be different, it can throw our algorithms off. That’s why the most successful background subtraction/foreground detection systems use fixed-mounted cameras in controlled lighting conditions.

The methods I mentioned above, while very powerful, are also computationally expensive. And since our end goal is to deploy this system to a Raspberry Pi at the end of this 2-part series, it’s best that we stick to simple approaches. We’ll return to these more powerful methods in future blog posts, but for the time being we are going to keep it simple and efficient.

In the rest of this blog post, I’m going to detail (arguably) the most basic motion detection and tracking system you can build. It won’t be perfect, but it will be able to run on a Pi and still deliver good results.

Basic motion detection and tracking with Python and OpenCV

Alright, are you ready to help me develop a home surveillance system to catch that beer stealing jackass?

Open up an editor, create a new file, name it , and let’s get coding:

Lines 2-7 import our necessary packages. All of these should look pretty familiar, except perhaps the imutils package, which is a set of convenience functions that I have created to make basic image processing tasks easier. If you do not already have imutils installed on your system, you can install it via pip: pip install imutils.

Next up, we’ll parse our command line arguments on Lines 10-13. We’ll define two switches here. The first, --video , is optional. It simply defines a path to a pre-recorded video file that we can detect motion in. If you do not supply a path to a video file, then OpenCV will utilize your webcam to detect motion.

We’ll also define --min-area , which is the minimum size (in pixels) for a region of an image to be considered actual “motion”. As I’ll discuss later in this tutorial, we’ll often find small regions of an image that have changed substantially, likely due to noise or changes in lighting conditions. In reality, these small regions are not actual motion at all — so we’ll define a minimum size of a region to combat and filter out these false-positives.
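Since the listing isn't reproduced here, a sketch of the two switches described above might look like this (the -a shorthand and the default of 500 are assumptions, not the post's verbatim code):

```python
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500,
                help="minimum area size to count as motion")

# passing an explicit list here just simulates an empty command line
args = vars(ap.parse_args([]))
# note: argparse converts --min-area into the dictionary key "min_area"
```

With no flags supplied, args["video"] is None (so the webcam is used) and args["min_area"] falls back to its default.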

Lines 16-22 handle grabbing a reference to our vs  object. In the case that a video file path is not supplied (Lines 16-18), we’ll grab a reference to the webcam and wait for it to warm up. And if a video file is supplied, then we’ll create a pointer to it on Lines 21 and 22.

Lastly, we’ll end this code snippet by defining a variable called firstFrame .

Any guesses as to what firstFrame  is?

If you guessed that it stores the first frame of the video file/webcam stream, you’re right.

Assumption: The first frame of our video file will contain no motion and just background — therefore, we can model the background of our video stream using only the first frame of the video.

Obviously we are making a pretty big assumption here. But again, our goal is to run this system on a Raspberry Pi, so we can’t get too complicated. And as you’ll see in the results section of this post, we are able to easily detect motion while tracking a person as they walk around the room.

So now that we have a reference to our video file/webcam stream, we can start looping over each of the frames on Line 28.

A call to  on Line 31 returns a frame that we ensure we are grabbing properly on Line 32.

We’ll also define a string named text  and initialize it to indicate that the room we are monitoring is “Unoccupied”. If there is indeed activity in the room, we can update this string.

And in the case that a frame is not successfully read from the video file, we’ll break from the loop on Lines 37 and 38.

Now we can start processing our frame and preparing it for motion analysis (Lines 41-43). We’ll first resize it down to have a width of 500 pixels — there is no need to process the large, raw images straight from the video stream. We’ll also convert the image to grayscale since color has no bearing on our motion detection algorithm. Finally, we’ll apply Gaussian blurring to smooth our images.

It’s important to understand that even consecutive frames of a video stream will not be identical!

Due to tiny variations in the digital camera sensors, no two frames will be 100% the same — some pixels will almost certainly have different intensity values. To account for this, we apply Gaussian smoothing to average pixel intensities across a 21 x 21 region (Line 43). This helps smooth out the high-frequency noise that could throw our motion detection algorithm off.

As I mentioned above, we need to model the background of our image somehow. Again, we’ll make the assumption that the first frame of the video stream contains no motion and is a good example of what our background looks like. If the firstFrame  is not initialized, we’ll store it for reference and continue on to processing the next frame of the video stream (Lines 46-48).

Here’s an example of the first frame of an example video:

Figure 2: Example first frame of a video file. Notice how it’s a still shot of the background; no motion is taking place.

The above frame satisfies the assumption that the first frame of the video is simply the static background — no motion is taking place.

Given this static background image, we’re now ready to actually perform motion detection and tracking:

Now that we have our background modeled via the firstFrame  variable, we can utilize it to compute the difference between the initial frame and subsequent new frames from the video stream.

Computing the difference between two frames is a simple subtraction, where we take the absolute value of their corresponding pixel intensity differences (Line 52):

delta = |background_model – current_frame|
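In NumPy terms, that subtraction looks like this (a toy sketch; cv2.absdiff performs the same computation with the type handling taken care of for us):

```python
import numpy as np

background = np.array([[100, 100],
                       [100, 100]], dtype=np.uint8)
current = np.array([[100, 180],
                    [ 30, 100]], dtype=np.uint8)

# widen to a signed type first so the subtraction cannot wrap around
delta = np.abs(background.astype(np.int16) -
               current.astype(np.int16)).astype(np.uint8)
# delta is [[0, 80], [70, 0]] -- equivalent to cv2.absdiff(background, current)
```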

An example of a frame delta can be seen below:

Figure 3: An example of the frame delta, the difference between the original first frame and the current frame.

Notice how the background of the image is clearly black. However, regions that contain motion (such as me walking through the room) are much lighter. This implies that larger frame deltas indicate that motion is taking place in the image.

We’ll then threshold the frameDelta on Line 53 to reveal only the regions of the image with significant changes in pixel intensity. If the delta is less than 25, we discard the pixel and set it to black (i.e., background). If the delta is greater than 25, we’ll set it to white (i.e., foreground). An example of our thresholded delta image can be seen below:

Figure 4: Thresholding the frame delta image to segment the foreground from the background.

Again, note that the background of the image is black, whereas the foreground (and where the motion is taking place) is white.

Given this thresholded image, it’s simple to apply contour detection to find the outlines of these white regions (Lines 58-60).

We start looping over each of the contours on Line 63, where we’ll filter out the small, irrelevant contours on Lines 65 and 66.

If the contour area is larger than our supplied --min-area , we’ll draw the bounding box surrounding the foreground and motion region on Lines 70 and 71. We’ll also update our text  status string to indicate that the room is “Occupied”.

The remainder of this example simply wraps everything up. We draw the room status on the image in the top-left corner, followed by a timestamp (to make it feel like “real” security footage) on the bottom-left.

Lines 81-83 display the results of our work, allowing us to visualize if any motion was detected in our video, along with the frame delta and thresholded image so we can debug our script.

Note: If you download the code to this post and intend to apply it to your own video files, you’ll likely need to tune the values for cv2.threshold  and the --min-area  argument to obtain the best results for your lighting conditions.

Finally, Lines 91 and 92 clean up and release the video stream pointer.


Obviously I want to make sure that our motion detection system is working before James, the beer stealer, pays me a visit again — we’ll save that for Part 2 of this series. To test out our motion detection system using Python and OpenCV, I have created two video files.

The first, example_01.mp4, monitors the front door of my apartment and detects when the door opens. The second, example_02.mp4, was captured using a Raspberry Pi mounted to my kitchen cabinets. It looks down on the kitchen and living room, detecting motion as people move and walk around.

Let’s give our simple detector a try. Open up a terminal and execute the following command:

Below is a .gif of a few still frames from the motion detection:

Figure 5: A few example frames of our motion detection system in Python and OpenCV in action.

Notice how no motion is detected until the door opens — then we detect motion as I walk through the door. You can see the full video here:

Now, what about when I mount the camera such that it’s looking down on the kitchen and living room? Let’s find out. Just issue the following command:

A sampling of the results from the second video file can be seen below:


Figure 6: Again, our motion detection system is able to track a person as they walk around a room.

And again, here is the full video of our motion detection results:

So as you can see, our motion detection system is performing fairly well despite how simplistic it is! We are able to detect, without a problem, when I enter and leave a room.

However, to be realistic, the results are far from perfect. We get multiple bounding boxes even though there is only one person moving around the room — this is far from ideal. And we can clearly see that small changes to the lighting, such as shadows and reflections on the wall, trigger false-positive motion detections.

To combat this, we can lean on the more powerful background subtraction methods in OpenCV, which can actually account for shadowing and small amounts of reflection (I’ll be covering these more advanced background subtraction/foreground detection methods in future blog posts).

But in the meantime, consider our end goal.

This system, while developed on our laptop/desktop systems, is meant to be deployed to a Raspberry Pi where the computational resources are very limited. Because of this, we need to keep our motion detection methods simple and fast. An unfortunate downside to this is that our motion detection system is not perfect, but it still does a fairly good job for this particular project.

Finally, if you want to perform motion detection on your own raw video stream from your webcam, just leave off the --video  switch:


In this blog post we found out that my friend James is a beer stealer. What an asshole.

And in order to catch him red handed, we have decided to build a motion detection and tracking system using Python and OpenCV. While basic, this system is capable of taking video streams and analyzing them for motion while obtaining fairly reasonable results given the limitations of the method we utilized.

The end goal of this system is to deploy it to a Raspberry Pi, so we did not leverage some of the more advanced background subtraction methods in OpenCV. Instead, we relied on a simple yet reasonably effective assumption — that the first frame of our video stream contains the background we want to model and nothing more.

Under this assumption we were able to perform background subtraction, detect motion in our images, and draw a bounding box surrounding the region of the image that contains motion.

In the second part of this series on motion detection, we’ll be updating this code to run on the Raspberry Pi.

We’ll also be integrating with the Dropbox API, allowing us to monitor our home surveillance system and receive real-time updates whenever our system detects motion.

Stay tuned!



613 Responses to Basic motion detection and tracking with Python and OpenCV

  1. Fabio G May 26, 2015 at 12:33 pm #

    Freakin awesome! Thanks for the tutorial, waiting for the part 2 😀

    • Adrian Rosebrock May 26, 2015 at 1:12 pm #

      Thanks Fabio, I’m glad you enjoyed it! 🙂

      • Anje May 25, 2016 at 5:49 am #

        This will work only for stationary camera right?? as for moving camera is there any code for motion detection??

        • Adrian Rosebrock May 25, 2016 at 3:20 pm #

          Correct, this code is meant to work with only a stationary, non-moving camera. If you’re using a moving camera, this approach will not work. I do not have any code for motion detection with a moving camera.

          • Raphaël Morency October 18, 2017 at 11:15 pm #

            For moving cameras, i would suggest having a cycle of the movement as the firstframe and reset camera position at every capture, comparing each position to the first frame at that camera position.

      • Wil October 24, 2016 at 10:49 am #

        I want a program made that detects the individual change in a pixel from a streamed video. Can you help?

        • Adrian Rosebrock November 1, 2016 at 9:53 am #

          Detecting changes in individual pixel values is as simple as subtracting the two images:

          diff = frame1 - frame2

          The diff variable will then contain the changes in value for each pixel.

      • vinay May 10, 2017 at 10:38 pm #

        File “”, line 55, in
        ValueError: too many values to unpack
        please help!

        • Adrian Rosebrock May 11, 2017 at 8:44 am #

          I would suggest you read the previous comments to this post as the question has been answered multiple times. Take a look at my response to “Alejandro Barredo” for the solution.

      • Ratan January 8, 2019 at 2:44 pm #

        Hello Adrian I want to involve in a similar project but of continuous audio detection in a room and its continuous availability via Dropbox. Have you come across any ideas related to this.

        • Adrian Rosebrock January 11, 2019 at 9:59 am #

          Sorry, I don’t have much experience working with audio detection or audio classification so I can’t really comment here.

  2. Shashank May 26, 2015 at 3:42 pm #

    Very useful and easy to understand tutorial ! Had no clue on motion detection till now , was a really good intro to it!

  3. Andre May 26, 2015 at 3:56 pm #

    Thank you! This is Awesome!
    Can’t wait to implement on my Pi – Part 2

    • Adrian Rosebrock May 26, 2015 at 4:44 pm #

      Glad you enjoyed it Andre! Part 2 is going to be really awesome as well.

  4. David Hoffman May 26, 2015 at 4:55 pm #

    Yet another great article on PyImageSearch. Thanks for the tutorial Adrian!

    • Adrian Rosebrock May 26, 2015 at 5:53 pm #

      Thank you for the kind words David! 😀

  5. Pablo May 26, 2015 at 5:02 pm #

    Awesome work!! Thanks for the code 🙂

    • Adrian Rosebrock May 26, 2015 at 5:53 pm #

      No problem, enjoy!

  6. T. Adachi May 26, 2015 at 7:03 pm #

    Hi, nice article. What was the camera you used? I’m looking for one right now and your choice of camera and the rasp pi might be suitable for my needs.

    • Adrian Rosebrock May 26, 2015 at 7:20 pm #

      I’m using this camera board for the Raspberry Pi. It’s fairly cheap and does a really nice job.

  7. Andrew Bainbridge May 27, 2015 at 4:28 am #

    If you convert the image to HSV instead of grayscale and just look at the H channel, would that improve performance? I suspect it would reject a lot of the shadow because shadows are typically only a variance in V. I don’t think it would increase the cost significantly. I guess I should download your code and try myself.

    • Satyajityh August 17, 2015 at 1:55 pm #

      Did it work?

      • Raphaël Morency October 18, 2017 at 11:19 pm #

        Could work, but i think HSV is more for color detection.

        With my camera, i find applying no blur and a binary threshold work the best

  8. Moeen May 27, 2015 at 10:33 pm #

    Thank you for this fantastic post.

    I was wondering how this code reacts to a moving camera. Is there any robust and lightweight method to detect moving objects with a moving camera, “camera mounted on a quad-copter”?

    • Adrian Rosebrock May 28, 2015 at 6:28 am #

      Hey Moeen, if your camera is not fixed, such as a camera mounted on a quad-copter, you’ll need to use a different set of algorithms — this code will not work since it assumes a fixed, static background. For color based tracking you could use something like CamShift, which is a very lightweight and intuitive algorithm to understand. And for object/structural tracking, HOG + Linear SVM is also a good choice. My personal suggestion would be to use adaptive correlation filters, which I’ll be covering in a blog post soon.

  9. xcl May 28, 2015 at 11:20 pm #

    hello,I’m doing a task for moving objects detecting and tracking under the dynamic background,so can you give me a good advice ?thanks

    • Adrian Rosebrock May 29, 2015 at 6:45 am #

      How “dynamic” is your background? How often does it change? If it doesn’t change rapidly, you might be able to use some of the more advanced motion detection methods I detailed at the top of this blog post. However, if your environment is totally unconstrained and is constantly changing, I would treat this as an object detection problem rather than a motion detection problem. A standard approach to object detection is to use HOG + Linear SVM, but there are many, many ways to detect objects in images.

    • Mika Peltokorpi March 15, 2017 at 3:57 am #

      Try masking the dynamic and/or non relevant background out before analyzing movement. That is what we did with motion detectors back in 90’s.

      (Semi) auto detection of dynamic background needs a dynamic background video in order to be able to (assist creation of)/create that needed background mask.

  10. sos June 1, 2015 at 5:24 am #

    Hi Adrian,

    very nice tutorial. Thank you but I have a question. Isn’t that, technically speaking, presence detection? If you stop moving around your office and just stay still the algorithm will box you. Same if you place something on the table/floor. I understand motion as continuously checking the difference between each current and past frame. I used capture.sequence from picamera to capture 3 frames as 3 different arrays, then process them, diff and it gives me quite fair results.

    • Adrian Rosebrock June 1, 2015 at 6:30 am #

      Presence detection, motion detection, and background subtraction/foreground extraction all tend to get wrapped up into the same bucket in computer vision. They are slightly different twists on each other and used for different purposes. I have a second new post coming out today on motion detection that you should definitely check out, as it’s more true to motion detection than this post is.

  11. Inker June 1, 2015 at 10:13 am #

    Hello Adrian!

    Thank you so much for the comprehensive tutorials! Best that I have seen. 🙂
    Quick question: in this post, you say:

    “You might guess that we are going to use the cv2.VideoCapture  function here — but I actually recommend against this. Getting cv2.VideoCapture  to play nice with your Raspberry Pi is not a nice experience (you’ll need to install extra drivers) and something you should generally avoid.”

    However in this tutorial, you use cv2.VideoCapture.

    Can you explain the change?

    Thank you again!


    • Adrian Rosebrock June 1, 2015 at 10:25 am #

      Hey Evan, the code in this post is actually not meant to be run on the Raspberry Pi — it’s meant to be run on your desktop/laptop. The motion detection and home surveillance code for the Raspberry Pi is actually available on over here.

      • Inker June 1, 2015 at 10:44 am #

        Ah, ok. My bad.

        The following above threw me off:

        “So I did what any man would do.

        I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:”

        • Adrian Rosebrock June 1, 2015 at 10:59 am #

          Yeah, perhaps I could have been a bit more clear on that. In the section below it I say:

          In the second post in this series I’ll show you how to update the code to work with your Raspberry Pi and camera board — and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.

          Indicating that there is a second part to the series, but I can definitely see how it’s confusing.

      • Muzaffar July 29, 2017 at 8:38 am #

        Hi adrian. I just bought myself a raspberry pi 3 model b and a camera board. I have no knowledge on how to use it to run the basic motion detection on it, would u mind guiding me on the steps of how to actually use this raspberry pi 3 b???

        • Adrian Rosebrock August 1, 2017 at 9:50 am #

          It’s great to hear that you just purchased a Raspberry Pi 3 and camera board. If you’re just getting started I would suggest you work through Practical Python and OpenCV. This book will teach you the fundamentals of computer vision and image processing. The Quickstart Bundle and Hardcopy Bundle also include a pre-configured Raspbian .img file with OpenCV pre-installed. Just download the .img flash it to your SD card, and boot. It’s by far the fastest way to get up and running with OpenCV. Be sure to take a look!

  12. asha June 14, 2015 at 10:30 pm #

    Wow! Great tutorial. Thanks.

  13. Matthew June 25, 2015 at 6:50 pm #

    I am stepping through these tutorials on a Pi B+. I am able to get through this tutorial; the only major issue was that initially I had not installed imutils, but after installing it the code works (kinda): the cursor simply moves to the next line, blinks a handful of times, and then the prompt pops back up. I have dropped a few debug lines in the code to ensure the code is executing (and it is), it just doesn’t seem to be executing in a meaningful way. The camera for sure works (tested it after running the code). Any ideas as to what might be happening?

    EDIT: Oops….. I just read the comment that says that this was not meant to be run on a pi….my bad

    • Adrian Rosebrock June 26, 2015 at 5:57 am #

      No worries Matthew! The reason the script doesn’t work is because it’s trying to use the cv2.VideoCapture function to access the Raspberry Pi camera module, which will not work unless you have special drivers installed. To access the Raspberry Pi camera you’ll need the picamera module. I have created a motion detection system for the Raspberry Pi which you can read more about here. I hope that helps!

  14. Almog June 26, 2015 at 1:59 pm #

    Hello Mr Adrian,

    When I’m trying to launch the code, I am getting this error ” File “”, line 8, in from picamera.array import PiRGBArray”

    I am using a raspberry pi camera, and I used your guide on how to install opencv on rapsberry pi and I didn’t have any error.

    What did I do wrong?

    Thank you

    • Adrian Rosebrock June 26, 2015 at 7:15 pm #

      Hey Almog, have you installed the “picamera[array]” module yet? Executing:

      $ pip install "picamera[array]"

      will install the picamera module with NumPy support. You should also read this post on the basics of accessing the camera module of the Raspberry Pi.

  15. John Beale July 3, 2015 at 10:11 pm #

    I started with your code and got something that is pretty good for detecting cars, and sometimes pedestrians too.
    With an outdoor scene, trees waving around etc. the trick is to update the background reference image without getting it contaminated by moving objects. I’d be happy to make my version available, but it is based on yours and I’m not sure if your code is open source.

    • Adrian Rosebrock July 4, 2015 at 7:38 am #

      Awesome, very nice work John! Feel free to share, I would be very curious to take a look at the code, as I’m sure the rest of the PyImageSearch readers would be as well!

  16. John Beale July 5, 2015 at 10:13 pm #

    Hi Adrian,
    Ok, I put my code here:
    also a post with picture here:
    The code is very specific to that particular camera view; for example there is a line that restricts objects of interest to the upper half of the screen (based on yc coordinate), where the road is, to ignore pedestrians and moving tree shadows in the lower part of the frame.

    • Adrian Rosebrock July 6, 2015 at 6:17 am #

      Thanks so much for sharing John, I look forward to playing around with it! Great work! 🙂

  17. mohamad July 6, 2015 at 3:35 am #

    Dear Adrian
    where is the ‘imutils’ path?
    I need to know the folder that includes this file on my Raspberry Pi 2, after “pip install imutils”
    I searched and did not find it in the /usr folder.

    • Adrian Rosebrock July 6, 2015 at 6:15 am #

      Check in the site-packages directory for the Python version that you are using.

      But in general, you don’t need to “know” where pip installs the files. You can simply start using them:

      $ python
      >>> import imutils
      >>> ...

  18. tc July 7, 2015 at 2:19 am #

    Hi, thanks for this great tutorial.
    I am new to opencv (and python as well), and trying to follow your steps on this tutorial, but when running the script, I got this error:
    from convenience import translate
    ImportError: No module named 'convenience'

    I have installed the imutils, but seem something is missing in the package. Any idea why?


    • Adrian Rosebrock July 7, 2015 at 6:29 am #

      Hey TC, what version of Python are you using?

      • TC July 7, 2015 at 6:37 am #

        I am using python 3.4 on a Linux Arch machine.
        However I am able to fix the problem by replacing
        from convenience import ...
        with
        from imutils.convenience import ...
        in the

        However, I got another error when trying to execute the code (which I downloaded from your site):
        File "", line 61, in
        ValueError: too many values to unpack (expected 2)

        ermm…missing one variables in this line ?
        (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,

        • Adrian Rosebrock July 7, 2015 at 8:45 am #

          I figured it was Python 3. The imutils package is only compatible with Python 2.7 — I’ll be updating it to Python 3 very soon. Also, at the top of this post I mention that the code detailed is for Python 2.7 and OpenCV 2.4.X. You’re using OpenCV 3.0 and Python 3, hence the error. The cv2.findContours function changed in OpenCV 3, so change your line to:

          (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

          and it will work.
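
          Since this thread mixes OpenCV 2.4 and OpenCV 3 users, a small helper can absorb the difference in return signatures. This is only a sketch (the tuple-length check is the one assumption; later imutils releases shipped a similar grab_contours helper):

```python
def grab_contours(result):
    # cv2.findContours returns (contours, hierarchy) in OpenCV 2.4/4.x
    # and (image, contours, hierarchy) in OpenCV 3 -- pick the right slot.
    if len(result) == 2:
        return result[0]
    if len(result) == 3:
        return result[1]
    raise ValueError("unexpected cv2.findContours return signature")

# Usage (shown as a comment since it needs a real image):
# cnts = grab_contours(cv2.findContours(thresh.copy(),
#     cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
```

          With this helper the same script runs unmodified on either OpenCV version.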

          • tc July 7, 2015 at 1:13 pm #

            Yes, thank you very much. Now it’s working. The problem now is that the tracking doesn’t seem as accurate as in the demos above. Does this have something to do with the camera model? Because right now I am using my laptop’s built-in webcam.

          • Adrian Rosebrock July 7, 2015 at 1:31 pm #

            Poor tracking could be due to any number of things, including camera quality, background noise, and more importantly — lighting conditions.

          • tc July 7, 2015 at 2:55 pm #

            I see. Thanks for everything!

          • Nasrullah NMPS May 28, 2017 at 10:09 pm #

            Thanks, Adrian 🙂

  19. Kaspars July 11, 2015 at 4:41 pm #

    Hello Adrian,

    Thank you for your tutorial. It has been very helpful to me. I also have to admit that John’s code has been useful as well.

    I’m trying to make a vehicle detection and tracking program (nothing fancy – mainly for fun). So far I have been very satisfied with the program, but I feel that finding the difference between the current frame and the first one is not the best solution for me, because in some test videos it results in false detections, mainly due to large changes between frames.

    Maybe you can give some advice on how to improve or fix this? Also, if you have any other advice on vehicle detection and tracking, I would be very glad to hear it.

    Anyway – Thank you in advance.

    • Adrian Rosebrock July 12, 2015 at 7:44 am #

      Hey Kaspars, take a look at my post on performing home surveillance using a (slightly) more robust algorithm on the Raspberry Pi. This method uses a running average as the background model to help prevent those false positives.
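
      For readers curious what the running-average model looks like, here is a minimal numpy sketch of the update step (this mirrors the effect of cv2.accumulateWeighted; the alpha value is an illustrative choice, not taken from the post):

```python
import numpy as np

def update_background(background, frame, alpha=0.5):
    # Blend the current frame into the background model: slow scene
    # changes get absorbed instead of being flagged as motion forever.
    return alpha * frame + (1.0 - alpha) * background

# A pixel that stays at 100 pulls the model toward 100 over time.
bg = np.zeros((2, 2), dtype="float64")
for _ in range(10):
    bg = update_background(bg, np.full((2, 2), 100.0))
```

      Because the model keeps drifting toward recent frames, a parked car or shifted shadow eventually becomes "background" rather than a permanent false positive.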

      • Kaspars July 12, 2015 at 8:56 am #

        Okay, I will take a look.

        Thank you once again. 🙂

  20. Gabriel Bosse July 13, 2015 at 6:17 pm #

    Thanks a lot for this tutorial. Do you know what would be the best way to record that motion? Like distance travelled (in pixels) or velocity?

    • Adrian Rosebrock July 14, 2015 at 6:23 am #

      Hey Gabriel, I have not done any tutorials related to velocity, but it is certainly possible. In its most simplistic form, the algorithm is straightforward: define two identifiable markers in the video stream and know the distance between them (in feet, meters, etc.). Then, when an object moves from one marker to the other, you can record how long that travel took and derive a speed. Again, while I don’t have any tutorials related to velocity, I think this tutorial on computing the distance to an object might be interesting for you.
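
      As a toy illustration of that two-marker idea (all numbers here are made up for the example, not taken from any tutorial):

```python
def estimate_speed(marker_distance_m, t_enter_s, t_exit_s):
    # Speed of an object that crossed two markers a known distance apart.
    elapsed = t_exit_s - t_enter_s
    if elapsed <= 0:
        raise ValueError("exit time must come after enter time")
    return marker_distance_m / elapsed  # meters per second

# Hypothetical: markers 10 m apart, crossed in 0.8 s
speed = estimate_speed(10.0, t_enter_s=3.2, t_exit_s=4.0)
```

      The timestamps would come from the frames in which the tracked bounding box crosses each marker, so timing accuracy is limited by the camera’s frame rate.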

  21. Supra July 19, 2015 at 8:23 am #

    Can you send me the code? I’m using Python 3 (run with sudo python3),
    so I am focused only on Python 3.

  22. Alexandre July 25, 2015 at 11:29 am #

    Hello, thank you for the tutorial, it was really very good.
    I need to build a system similar to this one, but using an IP camera. Do you know what I should do? I could not get the video from an IP address.
    Thank you so much

    • Adrian Rosebrock July 25, 2015 at 11:51 am #

      Hey Alexandre, you can still use this code with an IP camera, you just need to change the cv2.VideoCapture function to accept the address of the camera. Another approach is to try to parse the stream of the camera directly. I personally have not done this before, but I hope it helps get you started.
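
      To make the cv2.VideoCapture change concrete: the function accepts a stream URL in place of a device index. The host, port, and path below are placeholders for illustration; the real values depend on your camera’s firmware:

```python
def build_mjpeg_url(host, port=8080, path="video"):
    # Assemble an HTTP MJPEG stream URL; consult your camera's manual
    # for the actual port and path it exposes.
    return "http://{}:{}/{}".format(host, port, path)

url = build_mjpeg_url("192.168.1.64")
# camera = cv2.VideoCapture(url)  # instead of cv2.VideoCapture(0)
```

      Many IP cameras expose RTSP instead of MJPEG-over-HTTP, in which case the URL scheme changes but the cv2.VideoCapture call stays the same.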

  23. mohammad July 25, 2015 at 5:13 pm #

    wow . thanks for the tutorial . and thanks for the time you spend to write these tutorials for us 🙂

    thank you very very … much 😉

  24. Anthony July 27, 2015 at 1:30 pm #

    Hello Adrian,

    I have installed imutils in the terminal inside the cv environment; if I am not in cv and try to install it, I get an error message. When I am in the Python editor and input “import imutils”, I get an error stating no module named imutils. I am using Python 2.7.3. Please let me know what I am doing wrong.


    • Adrian Rosebrock July 28, 2015 at 6:40 am #

      You must be in the cv virtual environment to access any packages installed in that environment. Your cv virtual environment is entirely independent from all other packages installed on your system.

      Be sure to access your virtual environment by using the workon command:

      $ workon cv
      $ python
      >>> import imutils

      • Tony July 29, 2015 at 11:52 am #


        Thanks for this. However, I get syntax errors every time I input “Firstframe = none” and “camera.release()”, which starts over at >>> instead of …, meaning I have to do it over again, but that doesn’t change the outcome. Also, just curious: I noticed that in some places if I put in the “# code” the following code doesn’t work, and in other spots if I don’t put it in the following code doesn’t work. Could you let me know when I need to input the “# code”?

        Thanks, Tony.

        • Adrian Rosebrock July 30, 2015 at 6:42 am #

          Tony: This code is meant to be executed via command line, not via Python IDLE. Please download the source code using the form at the bottom of this post and execute it that way.

      • Felipe M November 12, 2016 at 10:24 am #

        Hi Adrian

        I’m having this same issue, and I also tried to run it in the cv environment without success. Do you have any idea what is happening?

        Best regards

        • Adrian Rosebrock November 14, 2016 at 12:10 pm #

          Are you referring to the imutils error? If so, you likely did not install imutils into the cv virtual environment:

  25. SAF August 4, 2015 at 10:49 am #

    Hi Adrian,

    Excellent tutorials, both this and the one detailing the use of the camera.
    I am however worried about the performance of the motion detection, even on an RPi 2.
    Due to the capturing process already using lots of CPU, I tried using different threads for capturing and for motion detection, to spread the load on the cores. Thing is, even at 4 FPS, the motion detection consistently lags behind the capturing thread.
    What was your experience with this?

    Code here:


    • Adrian Rosebrock August 4, 2015 at 1:14 pm #

      4 FPS sounds a bit slow. Have you tried processing smaller frames? The smaller you resize the frames, the less data you have to process, and thus the faster your algorithms will run.
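
      The dimension math is the only subtle part of resizing, since you want to preserve the aspect ratio. A sketch of the calculation (this is how imutils.resize derives its output size; the 500-pixel default matches the width used in this post’s code):

```python
def resize_dims(h, w, width=500):
    # New (height, width) that preserves aspect ratio for a target width.
    ratio = width / float(w)
    return (int(h * ratio), width)

# A 1280x720 frame shrunk to 500 px wide keeps its 16:9 shape.
dims = resize_dims(720, 1280)
# Note cv2.resize expects (width, height): cv2.resize(frame, (dims[1], dims[0]))
```

      Halving each dimension quarters the pixel count, which is why even a modest resize noticeably speeds up the per-frame processing.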

      • SAF August 5, 2015 at 7:42 am #

        Yes, I thought about that. I don’t know which would have better precision: capturing directly at a smaller resolution, or capturing at a higher resolution and resizing before processing?

        • Adrian Rosebrock August 6, 2015 at 6:24 am #

          Capturing directly at a smaller resolution should be faster than capturing at a higher resolution and resizing afterwards (since you can skip the resizing/interpolation step). However, that is something to test directly so you can compare the results.

  26. irfan August 9, 2015 at 5:54 am #

    hello adrian

    Thank you for this tutorial, but I have a problem; I got this message:
    File “/usr/local/lib/python2.7/dist-packages/imutils/” line 37, in resize
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    can you help me ?

    • Adrian Rosebrock August 9, 2015 at 7:04 am #

      Hey Irfan, if you’re getting an error related to the shape of the matrix being None, then the problem is almost certainly that the frame is not being properly read from the webcam/video. Make sure the path you supplied to the video file is correct.

    • taufiq March 23, 2016 at 12:10 pm #

      Did you solve this problem? I have the same problem and have no idea how to solve it. I’m new, btw.

      • Adrian Rosebrock March 24, 2016 at 5:17 pm #

        Double check that you can access the builtin/USB webcam on your system. If you’re getting an error related to an image/frame being None, then frames are not being properly read from your video stream. If you’re using the Raspberry Pi, you should use this tutorial instead.

  27. Ori August 10, 2015 at 2:53 pm #

    Hi Adrian,

    Thanks for the tutorial!

    I have a question: if we are detecting motion using a delta between the first frame and the new one, I’m guessing we are doing something like this:
    delta_pixel = abs(firstFrame_pixel - newFrame_pixel).
    If the new pixel is black, and the number that represents black is 0, won’t we just get the original pixel value back unchanged?
    So how will this pixel be detected?


    • Adrian Rosebrock August 11, 2015 at 6:32 am #

      Yes, computing the absolute difference is a really simple method to detect change in pixel values from frame to frame. I would take a look at Lines 50 and 51 where I compute the absolute difference and then threshold the absolute difference image. All pixels that have a difference > 25 are marked as “motion”.
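
      To see why a pixel that goes black still triggers: the comparison is on the magnitude of the change, not the new value. A numpy sketch of what cv2.absdiff followed by cv2.threshold computes (the threshold of 25 is the value from the post; the sample pixel values are invented):

```python
import numpy as np

def motion_mask(first_frame, frame, thresh=25):
    # Widen to a signed type so the subtraction cannot wrap around,
    # then mark every pixel whose absolute change exceeds the threshold.
    delta = np.abs(first_frame.astype(np.int16) - frame.astype(np.int16))
    return np.where(delta > thresh, 255, 0).astype(np.uint8)

first = np.array([[200, 30]], dtype=np.uint8)
cur = np.array([[0, 40]], dtype=np.uint8)  # left pixel went black
mask = motion_mask(first, cur)
# left: |200 - 0| = 200 > 25 -> motion (255); right: |30 - 40| = 10 -> 0
```

      So a pixel dropping to 0 produces a large delta whenever the background there was bright; the only case it goes undetected is when the background pixel was already nearly black.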

  28. Kitae August 12, 2015 at 5:10 am #

    hello adrian
    thank you for the tutorial!!
    I followed the whole tutorial, from installing Python and OpenCV to testing video,
    but I have a problem opening ‘’.
    Nothing happens when I type ‘python’.
    I recorded the problem.
    I would be very thankful if you could help me.

    thank you!

    • Adrian Rosebrock August 12, 2015 at 6:19 am #

      It seems like for whatever reason OpenCV is not pulling frames from the video or camera feed, I’m not sure exactly why that is. When you compiled and installed OpenCV on your Raspberry Pi, did you see if it had camera/video support? I would suggest using the OpenCV install tutorial I have detailed on the PyImageSearch blog. Step 4 is really important since that is where you pull in the video pre-requisites.

      • Kitae August 13, 2015 at 12:53 pm #

        Thank you for feedback!

        I tried it and it says they are the newest versions.
        I wonder why ‘python’ works very well
        and ‘python’ doesn’t work…

        • Adrian Rosebrock August 14, 2015 at 7:22 am #

          Oh, I see the problem now. The script uses the picamera module to access the Raspberry Pi camera. However, the code for this blog post uses the cv2.VideoCapture function, which will only work if you have the V4L drivers installed. Instead, use this post on motion detection for the Raspberry Pi.

          • abhi March 24, 2017 at 11:24 am #

            please provide solution for this problem.

          • Adrian Rosebrock March 25, 2017 at 9:20 am #

            Please see my previous comment — I have already addressed how to resolve the issue.

          • Jeff April 29, 2017 at 4:43 pm #

            Hi Adrian,

            Awesome website. I was going through the script here and was having quite a bit of fun with it using my night-vision camera. It was interesting to see there was quite a bit of noise from frame to frame. Anyway, the point is it was working well when all of a sudden, after a reboot, I am having this problem: the script doesn’t run. Essentially (grabbed) is False and the script breaks. I spent hours scouring this site and other web searches to see what went wrong. I gave up and reinstalled a new version on my Pi 3, the most recent NOOBS. I went through it all and still it does not work. When I try to install libv4l-dev it says the most recent version is installed. I am not sure what is going on, but it was incredibly frustrating because I had it working once!

            A couple of other things: I was using an older version of Raspbian (at least six months old) when I first had it working. If I remember right, I might have had an update pending after a reboot. However, being sloppy, I just kept working. I also installed programs like VLC. This was all before reinstalling the new version of NOOBS.

            Since this was a recent comment, I am just wondering if something was broken in a recent update. This is just a guess, and the likely scenario is that I am doing something wrong. But I had it working, reinstalled the OS, tried the instructions line by line, and still nothing. If you could provide any extra help or direction on the matter, I would much appreciate it.

          • Jeff April 29, 2017 at 8:48 pm #

            My previous comment can be amended. The solution was to run the command:

            sudo modprobe bcm2835-v4l2

            I then tested the v4l2 capture using the command

            v4l2-ctl --overlay=1

            and turned it off

            v4l2-ctl --overlay=0

            For whatever reason this fixed the problem.

            This is not intuitive and maybe there is a better approach. But I hope someone with a similar problem may find this helpful.

          • Adrian Rosebrock May 1, 2017 at 1:27 pm #

            Hi Jeff — thanks for sharing. I assume this was for the Raspberry Pi camera module?

          • Eric July 30, 2017 at 4:06 am #

            please provide the link to solve this problem

  29. Hanna August 20, 2015 at 2:06 pm #

    Thanks for another great tutorial Adrian! Your tutorials have given me the ability to jump into working with OpenCV without much startup time.

    • Adrian Rosebrock August 21, 2015 at 7:15 am #

      I’m glad you enjoyed it Hanna! 🙂

  30. haim August 23, 2015 at 8:26 am #


    thanks for the great tutorial! it’s very helpful.
    one question though, in this tutorial you use: camera = cv2.VideoCapture(0)
    while in this tutorial:

    you said you prefer to use picamera module: (from comments)
    “When accessing the camera through the Raspberry Pi, I actually prefer to use the picamera module rather than cv2.VideoCapture. It gives you much more flexibility, including obtaining native resolution. Please see the rest of this blog post for more information on manually setting the resolution of the camera”

    so what changed here?

    • Adrian Rosebrock August 24, 2015 at 6:44 am #

      The main difference is that in the second post I am using the picamera Python module to access the camera attached to the Raspberry Pi. Take a look at the source code of the post and you’ll notice I use the capture_continuous method rather than the cv2.VideoCapture function to access the camera. But again, that post is specific to the Raspberry Pi and the Pi’s camera module.

  31. Dan September 3, 2015 at 9:44 pm #

    I am getting an import error: no module named pyimagesearch.transform. Any ideas what I’ve done wrong?

    • Adrian Rosebrock September 4, 2015 at 6:39 am #

      Hey Dan, did you download the source code to this post using the form at the bottom of the page? The .zip of the code download includes the pyimagesearch module. I’m not sure where the transform error is coming from, I assume from the imutils package. So make sure you install imutils:

      $ pip install imutils

  32. Alejandro Barredo September 14, 2015 at 7:51 am #

    I’m trying to test this first part and I’m having a problem when running it:

    Traceback (most recent call last):
    File “***********”, line 60, in
    (_,cnts) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    I’ve been looking for a solution but couldn’t find one.
    Could you give me a push in the right direction?
    Thank you

    • Adrian Rosebrock September 14, 2015 at 9:57 am #

      It sounds like you’re using OpenCV 3 which has made changes to the return signature of the cv2.findContours function. Change the line of code to:

      (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      and the method will work with OpenCV 3.

      • Alciomar October 30, 2017 at 8:37 am #

        You’re the man, hahaha 😀

      • Mukesh Bhuriya August 29, 2018 at 1:46 pm #

        Sir, I have changed the code but am still getting the same error:

        Traceback (most recent call last):
        File “”, line 61, in
        ValueError: too many values to unpack (expected 2)

        • Adrian Rosebrock August 30, 2018 at 8:57 am #

          Are you using the code for the most recent blog post? If so, no changes are required. Just download and execute as is.

  33. urswin September 22, 2015 at 4:42 am #

    Cant wait to try this out, thanks man.

  34. Talha September 29, 2015 at 4:31 pm #

    Hi, I tried to run this code with Python 2.7 and OpenCV 3.0, but it’s not working. I am a final-year student working on my FYP: a gesture-controlled wheelchair. We are trying our hardest to make progress on this project, but some issues keep coming up.

    Is it possible I can get some help from you. I shall be very thankful to you if you guide me.


    • Adrian Rosebrock September 30, 2015 at 6:31 am #

      Hey Talha — when you say the code is “not working”, what do you mean? Are you getting an error of some kind?

  35. jai October 5, 2015 at 7:22 am #

    Hi, thanks for the excellent post.
    I was learning object detection with OpenCV and Python using your code. The moving object in my video was small (rather than a human, it’s an insect moving on a white background), and the video was captured with a 13-megapixel mobile camera. When the object starts to move, it leaves a permanent footprint at its initial point, and hence the tracker shows two rectangles: one at the point of origin, and the other moving according to the object’s current position.

    Why does it detect two contours instead of just the one that is actually tracking the movement?

    • Adrian Rosebrock October 5, 2015 at 7:08 pm #

      The reason two contours are detected is because the original video frame did not contain the footprint. This is a super simplistic motion detection algorithm that isn’t very robust. For a more advanced method that will help solve this problem, please see this post.

  36. Tim Clemans October 9, 2015 at 6:22 pm #

    I’m a Seattle Police software developer tasked with figuring out how to auto redact police videos to post on Youtube, see Using your code from this post I was able to generate which is a huge improvement on just blurring all the frames. I haven’t figured out how to blur inside the contour. Could you please provide an example of how to do that? So far this is the most reliable thing I’ve found yet. Both tracking an ROI and doing head detection are problematic.

    • Adrian Rosebrock October 10, 2015 at 6:45 am #

      Hey Tim — thanks for the comment. I’ll add doing a blog post on blurring inside head and body ROIs to my queue.

  37. Arm October 29, 2015 at 12:33 am #

    Thank you Adrian! I tried to read and follow along, and it was amazing how one can detect motion like that!! 🙂

    • Adrian Rosebrock November 3, 2015 at 10:37 am #

      Thanks for the kind words Arm! 😀

  38. Luis LLanos November 21, 2015 at 10:31 pm #

    Hey Adrian, as usual a great post. Could you suggest some good books or blogs about OpenCV with Java, C++, or Android? Python is great, but sometimes in industry we need faster results and quicker execution. Thanks!

  39. MAK December 3, 2015 at 11:18 am #

    Hi!! Great Tutorial.. 🙂

    I was wondering if you could do a tutorial on object detection and tracking from a moving
    camera (UAV/drone). It would be highly appreciated.


    • Adrian Rosebrock December 3, 2015 at 12:50 pm #

      I’ll certainly consider it for the future!

  40. Seungwon Ju December 5, 2015 at 10:21 am #

    Hello. My name is Seungwon Ju from South Korea.

    This is fascinating. I’m following your guide for my high school research presentation.
    Thanks to you, I could make a CCTV system with my Raspberry Pi without a PIR sensor.

    Thank you very much!

    • Adrian Rosebrock December 6, 2015 at 7:19 am #

      I’m happy you enjoyed the post Seungwon Ju — best of luck on your presentation!

  41. Ahmed December 8, 2015 at 7:00 pm #

    Hello Adrian, thank you for sharing this tutorial. It really helped me complete some tasks. Nice to meet you, and I’m waiting for the other tutorials 😀

    • Adrian Rosebrock December 9, 2015 at 6:54 am #

      Thanks Ahmed! 🙂

  42. Martin Cremona December 16, 2015 at 3:56 pm #

    Hi Adrian, thank you for this great tutorial! i was looking for something like this.

    I have to ask, how do you achieve such speed? I have your exact same configuration (or at least that’s what I think), but I can’t make it work as fast as you do. I started from scratch: I followed your tutorial on how to install OpenCV and Python, then imutils, and then this project. Did you do something else to improve the performance, or am I missing something?

    P.S.: sorry for my bad English 🙂

    • Adrian Rosebrock December 17, 2015 at 6:28 am #

      No worries, your english is great. To start, make sure you are using a Pi 2. That’s definitely a requirement for real-time video processing with the Raspberry Pi. Secondly, try to make the image you are processing for motion as small as possible. The smaller the image is, the less data there is, and thus the pipeline will run faster.

      Also, keep an eye on the PyImageSearch blog over the next few weeks. I’ll be releasing some code that allows the frames to be read in a separate thread (versus the main thread). This can give some huge performance gains.

  43. slava December 19, 2015 at 12:37 pm #

    Hey, Adrian, thanks for your work.
    I have a problem when trying to run the code. When I type:


    in order to get motion detection from the webcam, nothing happens.
    (I mean I can’t see any result; I think the code just executes and that’s it.)

    And when I try to execute your example (which I downloaded):

    python --video videos/example_02.mp4

    I get an error.

    Can you give me some advice?

    • Adrian Rosebrock December 20, 2015 at 9:45 am #

      Hey Slava: please read through the comments before submitting. I’ve answered this question twice before on this post — see my reply to “Alejandro” and “TC” above for the cv2.findContours fix.

      As for a video stream not displaying up, ensure that your webcam is properly plugged in and OpenCV has been compiled with webcam support.

  44. Aldi December 21, 2015 at 9:56 pm #

    Hello Adrian, great tutorial. I’m using Python 3 and OpenCV 3, and I’ve successfully installed imutils.
    The question is: why, every time I start the program, does it show no result and no error? It just starts and stops. I know I am supposed to use Python 2.7 and OpenCV 2.4.X, but the Raspberry Pi I’m using has OpenCV 3 and Python 3 installed. Is there any way to make it work on the system I’m using?

    • Adrian Rosebrock December 22, 2015 at 6:30 am #

      You’re using your Raspberry Pi? I also assume you’re using the Raspberry Pi camera module and not a USB camera? If so, you’ll need to access the Pi camera module. An updated motion detection script that works with the Raspberry Pi can be found here.

  45. slava December 22, 2015 at 4:39 am #

    Yeah, sorry, I found the answer a few minutes after I wrote my question.
    Anyway, thank you for your reply, and for not ignoring a question that had already been answered.

    • Adrian Rosebrock December 22, 2015 at 6:22 am #

      No worries, I’m happy to hear you found the solution.

  46. Nicholas January 13, 2016 at 10:28 pm #

    Can you help me? If I want to use another algorithm, like phase-only correlation or Haar-like features, what am I supposed to do?

    • Adrian Rosebrock January 14, 2016 at 6:14 am #

      If you want to train your own Haar classifier, I would give this tutorial a try. I’ll be covering correlation tracking on the PyImageSearch blog in the future.

      Another great alternative is to use HOG + Linear SVM, which tends to have a lower false-positive detection rate than Haar. I cover the implementation inside PyImageSearch Gurus.

  47. Mithun.S January 14, 2016 at 3:21 am #

    Hey Adrian! I’m Mithun from India. I would like to know whether this can be used to do a project on accident detection using video camera.

    • Adrian Rosebrock January 14, 2016 at 6:13 am #

      It certainly could, but you might need to add a bit of machine learning to classify what is a car/truck, and if traffic is flowing in a strange pattern (indicating a car accident).

  48. Nghia Le January 18, 2016 at 10:48 am #

    Thank you, great article and useful to me. I’ll wait for part 2. By the way, I’m building a traffic monitoring device (detecting speeding, lane encroachment, and red-light running). Can the Raspberry Pi do that?

    • Adrian Rosebrock January 18, 2016 at 3:18 pm #

      I personally haven’t done traffic monitoring on the Pi, so I can’t give an exact answer. My guess is that it can do basic monitoring, but anything above a few FPS is likely unrealistic unless you want to code in C++. To be honest, I think you might need a more powerful system.

  49. Jason Turner February 2, 2016 at 6:00 pm #

    Hi, great article and very useful. Could the code be changed to work with an IP camera, as I don’t have a Pi camera as of yet?

    • Adrian Rosebrock February 4, 2016 at 9:22 am #

      Yes, this code could certainly be used with a Raspberry Pi camera. I’ll try to do a blog post on this in the future.

  50. duygu February 9, 2016 at 9:44 am #

    Hi Adrian,
    Lovely tutorial!!!

    I have a quick question. I made a video with my phone camera, and the implementation is quite shadow-sensitive. It detects small light changes on the keyboard of my computer as movement, for instance.

    Any suggestions to reduce shadow/light sensitivity?

    • Adrian Rosebrock February 9, 2016 at 3:52 pm #

      Lighting conditions are extremely important to consider when developing a computer vision application. As I discuss in the PyImageSearch Gurus course, the success of a computer vision app starts before a single line of code is even written — with the lighting and environment. It’s hard to write code to compensate for poor lighting conditions.

      All that said, I will try to do some blog posts on shadow detection and perhaps even removal in the future.

  51. liudr February 15, 2016 at 12:46 am #

    Thanks for the tutorial. For some reason my setup is not working. I tested with raspistill and my camera has a live feed. The program runs a few seconds without output and then quits. Stepping through a few lines of the code, I found that the camera fails to grab any frames and quits. Any ideas why the camera may fail to grab frames?

    • Adrian Rosebrock February 15, 2016 at 3:07 pm #

      That’s definitely some strange behavior on the camera’s part. Are you executing the code provided in the source code download of this post? Or executing it line-by-line in IDLE?

      • İsmet May 28, 2016 at 2:26 pm #

        Hi Adrian.
        I use an RPi 3 and the RPi Camera Module v1.3. I can’t run it with a live stream. I tried in the terminal and in the Python 2 IDLE. It didn’t give an error, but the camera LED didn’t light up. How can I run it with a live stream?

        • Adrian Rosebrock May 29, 2016 at 1:57 pm #

          It sounds like your Raspberry Pi is having trouble accessing the camera module. I would start with this tutorial and work your way through it to help debug the issue.

          • Ismet June 2, 2016 at 12:52 pm #

            I can run your surveillance cam code with Dropbox, but I can’t run this code.

          • Adrian Rosebrock June 3, 2016 at 3:05 pm #

            If you can run the home surveillance code, then I presume you’re using the Raspberry Pi camera module. This post assumes you’re using a USB webcam and the cv2.VideoCapture function. You can either update this code to use the Raspberry Pi camera module, or better yet, unify access between USB and Pi camera modules.

  52. Mathilda February 19, 2016 at 8:54 am #

    Hi Adrian,
    thanks for the great tutorial.
    I’ve got a problem… the code works, but only with the sample video…
    I want to run it on my own Raspberry Pi camera video…
    What exactly should I do?
    Is it possible to make it work in real time?

  53. Danish March 1, 2016 at 1:52 am #

    Can you please give me something with which I can track motion using my webcam? I don’t have a Raspberry Pi.
    Thanks in Advance

    • Adrian Rosebrock March 1, 2016 at 3:43 pm #

      You can use the code detailed in the blog post you just commented on to track motion using a built-in/USB webcam. All you need is the cv2.VideoCapture function, which this blog post explains how to use. I also cover how to use the cv2.VideoCapture function for face detection and object tracking inside Practical Python and OpenCV.

  54. Joe March 3, 2016 at 2:05 am #

    So I am getting this error and I am not sure what is going on. Could I get some help and your opinion on it? I get the same error with the downloaded code as well as when I copy the code out myself.

    ValueError: too many values to unpack

    • Adrian Rosebrock March 3, 2016 at 7:01 am #

      Please see my reply to “TC” above. You’ll also want to read this blog post on checking your OpenCV version. You’re using OpenCV 3, but the blog post assumes OpenCV 2.4. It’s a simple fix to resolve the issue once you give the post a read.

  55. Rishabh March 13, 2016 at 8:29 am #

    Hi Adrian,

    Could you link us to some of your posts about image processing specific with the PiCamera.
    I keep running into errors trying your code, except for the “accessing-the-raspberry-pi-camera-with-opencv-and-python” post, which works flawlessly. I’d like to see how we can build on that: again, any sort of image processing specific to the PiCamera.

  56. Shrikrishna Padalkar March 14, 2016 at 11:52 am #

    Hello Adrian.
    I am getting the following error:-

    Traceback (most recent call last):
    ValueError: too many values to unpack

    Please help me solve this error.


    • Adrian Rosebrock March 14, 2016 at 3:18 pm #

      Please read the previous comments before posting. Specifically, my replies to Alejandro and TC detail how to solve this problem.

  57. qlkvg March 16, 2016 at 9:26 am #

    I had a brain orgasm while reading. Thanks for awesome tutorial.

  58. Jean-Pierre Lavoie March 19, 2016 at 3:10 pm #

    Hi Adrian,
    This is great and thanks for your feedback for the first tutorials! Now in this one, when I execute the python script: python, I get these error messages:

    Traceback (most recent call last):
    File “”, line 58, in
    ValueError: too many values to unpack

    Any idea what is the problem?
    Thanks a bunch!

    • Adrian Rosebrock March 20, 2016 at 10:43 am #

      Please read through the comments before posting — your question has already been answered multiple times. See my reply to “TC” and “Alejandro” above.

  59. Abhijit March 22, 2016 at 8:59 am #

    I have tried to implement this script on the Windows operating system. When I run the script, it does not display an error, but it does not display any frames either.

    When I run the command below, it returns to the prompt but does not display any video frames as shown in your blog:

    C:\Python27>python --video example_01.mp4


    • Adrian Rosebrock March 22, 2016 at 4:16 pm #

      I’m not a Windows user (and I don’t recommend Windows for working with computer vision), but I would suggest (1) double checking that the path to the video file is valid and (2) ensuring that your Windows system has the valid codecs to read the .mp4 file.

  60. Shivam March 28, 2016 at 2:03 am #

    Superb work, sir. Thanks very much for this tutorial; it is really helpful and the code is easily understandable to a rookie in programming.

    • Adrian Rosebrock March 28, 2016 at 1:31 pm #

      I’m happy I could help Shivam 🙂

  61. Bleddyn Raw-Rees March 30, 2016 at 5:10 am #

    Hi Adrian,

    Firstly, thanks for a brilliant tutorial.

    And secondly I was wondering whether you’d be willing to suggest a way of splitting input video? So what I mean is, for example, if there’s a 10minute clip with 30seconds of motion somewhere in the middle – I would want the output video to just be the 30s (+ a couple of seconds either side perhaps). I’ve worked out that this can be done using FFMPEG, but I’m not sure how to retrieve the in and out points from your code to feed into FFMPEG.

    So I suppose that my questions are:

    1) Is using FFMPEG a necessary/wise choice for splitting the video?
    2) How do I get in and out points from your motion detection code?

    Any advice you could give would be greatly appreciated.


  62. Reza April 16, 2016 at 4:42 am #

    It works, thanks Adrian… you are a pro.

    • Adrian Rosebrock April 17, 2016 at 3:32 pm #

      Thanks Reza! 🙂

  63. Ankit Pitroda April 19, 2016 at 3:42 am #

    hey adrian
    Really awesome tutorial from your side
    I always appreciate your work
    You are really a god of opencv

    I am facing one problem.
    If I capture video from my camera as in the two tutorial videos you put up, it works fine.

    But with the live camera it won’t work properly.

    What will be the solution?

    • Adrian Rosebrock April 19, 2016 at 6:52 am #

      What type of camera are you using? I would start with that question and then do a bit of research to see if it’s compatible with your system and/or OpenCV. I think the real problem is that your system is unable to access your webcam. Do some debugging and find out why that is. From there, you’ll be able to move forward.

      • ankit May 9, 2016 at 6:24 am #

        no no
        camera is working fine.

        But at the start of the first frame it shows occupied in my case.
        So even if there is no object movement inside the frame, it still shows occupied.

        Awaiting your reply, and thanks for the quick reply..

        • Adrian Rosebrock May 9, 2016 at 6:56 pm #

          Hi Ankit — I think the issue is with your camera sensor warming up and causing the initial frame to be distorted. I would place a call to time.sleep(2.0) after cv2.VideoCapture to ensure your camera sensor has had time to warm up. Another option is to apply a more advanced motion detection algorithm such as the one detailed in this blog post.

          • Sandeep July 5, 2017 at 2:43 am #

            Placing time.sleep(2.0) didn’t work for me.

          • Adrian Rosebrock July 5, 2017 at 5:53 am #

            Are you using a camera or a video file?

  64. Akhil April 19, 2016 at 6:12 am #

    Hi Adrian,

    Your article is very helpful and actually, all the content on this website is very useful. I wanted to ask: is part 2 out?

    • Adrian Rosebrock April 19, 2016 at 6:47 am #

      Thanks Akhil! And by “Part 2”, do you mean the Raspberry Pi + motion detection post? If so, you can find it here.

  65. Ali April 20, 2016 at 3:45 pm #

    Hi Adrian,

    Thank you very much for this tutorial. I’m new to computer vision! I’m currently working on a project which involves background subtraction technique. Your code uses the first frame as a reference to next frames and that is how it detects motion. All what I need is to have a reference frame that changes over a specified period of time, and then do exactly what the rest of the code does. How do I modify your code (if that’s okay) to achieve that?

    To be more specific; a reference frame that continuously changes over a specified period of time.

    • Adrian Rosebrock April 20, 2016 at 5:57 pm #

      I actually cover how to solve this exact question in this post 🙂

  66. Kevin April 25, 2016 at 11:23 pm #

    Hi Adrian,

    Thank you very much for this tutorial. I’m a student first time learning this.
    I want to know if this can really be used with a servo motor for tracking? If the camera moves while tracking, the background changes and everything becomes the target.

    I want to know anything that can help me follow the object once it has been found.

    • Adrian Rosebrock April 26, 2016 at 5:15 pm #

      With this method, you won’t be able to use a servo since the algorithm assumes a static, non-moving background.

  67. Jean-Pierre Lavoie April 28, 2016 at 8:47 pm #

    Hi Adrian. This is a simple question, but how do you rotate the camera 180 degrees in your code? Now it’s upside down the way my camera is setup. Normally with PiCamera I do the following:

    camera.rotation = 180

    and it works. But in your code if I do this after your line:
    camera = cv2.VideoCapture(0)

    I get an error message.

    • Adrian Rosebrock April 30, 2016 at 4:04 pm #

      I would use the cv2.flip function to flip the image upside down:

      frame = cv2.flip(frame, 0)

  68. Wanderson May 1, 2016 at 11:26 pm #

    Hi Adrian, how are you?
    My code doesn’t work very well.

    When I run the program it always shows “occupied”, even when the first frame contains only the background. My webcam is good quality (Philips SPC 1330). What do you think it is?

    Thanks a bunch!

    • Adrian Rosebrock May 2, 2016 at 7:48 pm #

      This is likely due to your camera sensor still warming up when the first frame is grabbed. Either use time.sleep(2.0) after the initial call to cv2.VideoCapture to allow the sensor to warm up, or better yet, use the motion detection method utilized in this blog post.

      • Wanderson Souza May 2, 2016 at 9:16 pm #

        Thanks Adrian!

  69. Akhil May 3, 2016 at 2:19 am #

    HI Adrian,
    I just wanted to know the time complexity of this code; what complexity would these predefined functions be running in?

    • Adrian Rosebrock May 3, 2016 at 5:47 pm #

      Which functions are you specifically referring to?

  70. Wanderson May 4, 2016 at 12:41 am #

    Hello, again, Adrian

    Is it possible to use a folder of background images as the first frame?

    Thanks a bunch

    • Adrian Rosebrock May 4, 2016 at 12:32 pm #

      Absolutely! Instead of using a folder of images, I instead use the past N images from a video stream to model the background in this post, but you can easily update it to use a folder of images. The key to this method is to use the cv2.addWeighted function.

  71. furrki May 6, 2016 at 11:51 pm #

    Hi bro. Really nice tutorial. I really enjoyed it. Thank you for this well-worked tutorial ^_^
    Greetings from Turkey

    • Adrian Rosebrock May 7, 2016 at 12:36 pm #

      No problem, I’m glad you enjoyed it!

  72. Roberto May 10, 2016 at 1:52 pm #

    This has been wonderful to read/follow. Thanks for all the work you put into these, along with the descriptions that really help build an understanding of what’s actually taking place.

    I do have one question, however – What would be the best way to have this change from “Occupied” to “Unoccupied” and reset the motion tracking process? Unless I’ve missed something above I don’t see how that would take place.

    • Adrian Rosebrock May 10, 2016 at 6:17 pm #

      If you would like to totally reset the tracking process, then you need to update the firstFrame variable to be the current frame at the time you would like to reset the background.
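One way to sketch that reset is with a frame counter (the update_background helper and the RESET_INTERVAL value are hypothetical, purely for illustration):

```python
import numpy as np

# reset the background model every N frames so the room can drop
# back to "Unoccupied" once the scene settles (tune the interval)
RESET_INTERVAL = 250
frame_count = 0
firstFrame = None

def update_background(gray):
    """gray: the blurred grayscale frame from the main loop;
    returns the reference frame to diff against."""
    global firstFrame, frame_count
    frame_count += 1
    if firstFrame is None or frame_count % RESET_INTERVAL == 0:
        firstFrame = gray.copy()
    return firstFrame
```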

      • Roberto May 11, 2016 at 9:48 am #

        Ahh, that makes perfect sense! I implemented this and some other changes and I have learned much.

        I’m capturing the images now when certain triggers are met with cv2.imwrite(‘\localpath’, img) but now I need to figure out how to clear the “buffer” of the image that is written locally. Each time it does save to local disk it just keeps writing the same image over and over again. What I have tried so far seems to actually release the camera all together instead of just resetting the frame. Any suggestions?

        • Adrian Rosebrock May 12, 2016 at 3:44 pm #

          I’m not sure what you mean by “clear the buffer of the image written locally”? Do you mean simply overwrite the image?

  73. amrutha May 11, 2016 at 3:04 pm #

    Thank you sir, awesome tutorial.
    Which algorithm is the detection and tracking based on here? Is it the MeanShift algorithm or another one???

    • Adrian Rosebrock May 12, 2016 at 3:38 pm #

      Neither MeanShift nor CamShift is used in this blog post — the tracking is done simply by examining the areas of the frame that contain motion. However, you could certainly incorporate MeanShift or CamShift if you wanted.

  74. amrutha May 12, 2016 at 10:38 am #

    Hello sir, awesome post. I tried the program by reading a static video to detect moving cars on the road, and the code worked well. I need some more detail on how the motion detection and tracking works: is it only the background subtraction method, or some other algorithm as well?
    I hope you will help me out.

    • Adrian Rosebrock May 12, 2016 at 3:30 pm #

      So if I understand your question correctly, your goal is to create an algorithm that uses machine learning to detect cars in images? If so, I would recommend using the HOG + Linear SVM framework.

  75. Rainyban May 25, 2016 at 8:48 am #

    Hello Adrian!
    First, thank you for your RPi source code!
    I adapted your code on my RPi 3
    and it is operating normally.
    I want to expand its functionality!
    I want to save the original image when background subtraction fires.

    Where should I move the imwrite() function??
    Currently, the saved image includes the square.

    Once again, thank you for your RPi tutorial!

    • Adrian Rosebrock May 25, 2016 at 3:17 pm #

      You can save the original frame to disk by creating a copy of the frame once it’s been read from the video stream:

      frameOrig = frame.copy()

      Then, you can utilize cv2.imwrite to write the original frame to disk:

      cv2.imwrite("path/to/output/file.jpg", frameOrig)

      • Rainyban May 26, 2016 at 3:29 am #

        Thank you Adrian!
        I solved the problem~~
        and now the saved image is the original frame.

        I have a new question… haha..;;
        I want to reduce the saving time.
        I thought of one method.
        Is it possible??

        1. one thread operates -> if an image is detected; flag = 1
        2. another thread operates -> if flag == 1; imwrite
        I know that Python runs one thread at a time,
        so perhaps pass the value (flag) from one terminal’s Python code to another terminal’s Python code.

        What should I do??

        • Adrian Rosebrock May 26, 2016 at 6:20 am #

          Sure, you can absolutely pass saving the image on to another thread. This is a pretty standard producer/consumer relationship. Your main thread puts the frame to be written in a queue. And a thread reads from the queue and writes the frame to file.

  76. Sarai May 29, 2016 at 11:28 pm #

    Awesome tutorial! Totally loved it! easy to understand and very helpful! Thank you for this series! Please keep doing them!

  77. Raghuvaran P May 30, 2016 at 2:38 am #

    Can you please provide the sample video?

    • Adrian Rosebrock May 31, 2016 at 4:05 pm #

      Please use the “Downloads” section of this blog post to download the source code to this post — it includes example videos that you can use.

  78. Alessio Michelini May 30, 2016 at 9:12 am #

    Did anybody try to run this script on a raspberry pi nano?

    • Adrian Rosebrock May 31, 2016 at 3:54 pm #

      The Pi Nano? Do you mean the Pi Zero? If so, I wouldn’t recommend it. The FPS would be quite low, as I discuss in this blog post.

      • tarun June 2, 2016 at 11:36 am #

        I am using OpenCV 3.0.0. I followed all the steps in the motion detection tutorial and did not get an error, but my result was NOTHING!!!!!!

        • Adrian Rosebrock June 3, 2016 at 3:06 pm #

          If you did not receive an error message at all and the script automatically stopped, then OpenCV is having trouble accessing your webcam. Are you using a webcam? Or the Raspberry Pi camera module?

  79. kev June 1, 2016 at 6:45 pm #

    To gracefully exit, you may want to switch your last two lines: first close all windows, then release the camera. Otherwise, the system will break with a segmentation fault.

    • Adrian Rosebrock June 3, 2016 at 3:14 pm #

      I haven’t encountered this error before, but if that resolves the issue, thanks for pointing it out Kev!

  80. Bleddyn June 4, 2016 at 8:37 am #

    How hard would it be to track detected motion regions between consecutive frames?

    Using createBackgroundSubtractorMOG2() for example for use with more dynamic backgrounds doesn’t have the results it could have. In ‘Real-time bird detection based on background subtraction’ by Moein Shakeri and Hong Zhang, they deal with the problem by tracking objects between frames and if it is present for N frames then it’s probably a moving object.

    I had a look at your post [] which was interesting and using moments, created lists for x and y coordinates thinking that i could compare elements in a list between successive frames but this happens:

    current_frame_x [0, 159, 139, 31]
    previous_frame_x [0, 141, 29]

    there’s a new element ‘159’ so I cant compare elements like for like…

    Is there a better way basically? I couldn’t figure it out!

    • Adrian Rosebrock June 5, 2016 at 11:31 am #

      There are multiple methods to track motion regions between frames. Correlation-based methods work well. But a simple method is to compute the centroids of the objects, store them, compute the centroids from the next frame, and then compute the Euclidean distances between the centroids. The centroids that have the smallest distances can be considered the “same” objects.
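That centroid-matching idea can be sketched in a few lines (a greedy nearest-neighbor sketch; the match_centroids helper is hypothetical, and a real tracker must also handle objects appearing and disappearing, as in the bird-tracking paper mentioned above):

```python
import numpy as np

def match_centroids(prev, curr):
    """Greedily pair each previous centroid with its nearest
    unclaimed current centroid by Euclidean distance."""
    matches = {}
    used = set()
    for i, p in enumerate(prev):
        # distance from this previous centroid to every current one
        dists = [np.linalg.norm(np.subtract(p, c)) for c in curr]
        for j in np.argsort(dists):
            if int(j) not in used:
                matches[i] = int(j)
                used.add(int(j))
                break
    return matches

# centroids from two consecutive frames
previous = [(10, 10), (100, 50)]
current = [(98, 52), (12, 11)]
pairs = match_centroids(previous, current)
print(pairs)  # {0: 1, 1: 0}
```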

  81. Daniele June 8, 2016 at 8:30 am #

    Hi Adrian,

    First of all, thanks for the great tutorial 😀

    I’m working on a video surveillance system for my thesis and I need a background subtraction algorithm that permits continuously detecting objects even if they stop for a while. I have done various experiments with cv2.createBackgroundSubtractorMOG2(), changing the parameter “history”, but even if I set it to a very big value, objects that stop for just a second are recognized as background.
    So, from this point of view, is it possible that your approach is better than the one proposed by Zivkovic?

    • Adrian Rosebrock June 9, 2016 at 5:25 pm #

      MOG and MOG2 are certainly good algorithms for background subtraction. This method certainly isn’t “better” — it’s just less computationally expensive. MOG and MOG2 are less suitable for resource constrained devices (such as the Raspberry Pi) since they don’t have enough “computational horsepower” to get the job done.

      • Daniele July 7, 2016 at 1:21 pm #

        If you test the MOG2 algorithm on your video (the one in which you open the door and enter the room), you can notice that it detects many false positives, many more than the absolute difference between frames.
        Probably MOG2 is not the best indoor detection algorithm, so in this case the absolute difference performs better.

  82. Obiajulu June 8, 2016 at 12:40 pm #


    Thank you for the awesome tutorial. I implemented the techniques but I have difficulty saving the video feed on my Raspberry Pi and Mac laptop. I tried writing the frames so they would save in the default directory, but to no avail. My question is how do I save the video feed using Python, and also hash and sign the video feed to prevent modification. I look forward to a positive response soon.

    • Adrian Rosebrock June 9, 2016 at 5:22 pm #

      I detail how to save webcam clips to file in this blog post. I hope that helps!

  83. Dishant June 14, 2016 at 7:58 am #

    Any suggestions on how it can be use to detect speed of moving object?

    • Adrian Rosebrock June 15, 2016 at 12:37 pm #

      You need to calibrate your camera so you can determine the number of pixels per measurable unit (such as centimeters, inches, etc.). I detail how to calibrate your camera and use it for measuring the distance between objects in this blog post.

      Once you can measure the distance between objects, you just need to keep track of the Frames Per Second of your pipeline. Dividing the distance traveled by the elapsed time (the number of frames elapsed divided by your FPS rate) will give you the speed.
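As a worked example with hypothetical numbers (the calibration and FPS values below are made up purely for illustration):

```python
# hypothetical calibration: 8 pixels per centimeter, and the
# pipeline runs at 30 frames per second
pixels_per_cm = 8.0
fps = 30.0

# the object centroid moved 120 pixels over 15 frames
pixels_moved = 120.0
frames_elapsed = 15

distance_cm = pixels_moved / pixels_per_cm   # 15.0 cm
elapsed_s = frames_elapsed / fps             # 0.5 s
speed_cm_per_s = distance_cm / elapsed_s     # 30.0 cm/s
```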

  84. Lokesh June 15, 2016 at 2:49 am #

    Hi Adrian,
    Thank you for the awesome tutorial. It is working fine, but when I try to execute this Python script through a web server using PHP it doesn’t show anything. Can you please help me out with how to execute this Python script with PHP?

    My index.php looks like this :-

    • Adrian Rosebrock June 15, 2016 at 12:29 pm #

      Hey Lokesh — can you elaborate more on what you mean by “executing the Python script with PHP”? You likely don’t want to do that. You can call the system function to call any arbitrary program (including a Python script), but that’s not a good idea, since your PHP script will hang until the Python script finishes.

      • Lokesh June 16, 2016 at 2:29 am #

        I am trying to run this Python script integrated with PHP, so that it will capture the video from the webcam when I run it through the browser, but when I try to do this it doesn’t open the webcam.

        • Adrian Rosebrock June 18, 2016 at 8:25 am #

          This won’t work. Python does not interface with PHP and you can’t pass the result from Python to PHP (unless you figured out how to use message passing between the two scripts). Instead, you should use Python to create a web stream and then have PHP read the results from the web stream. That way, these will be two separate, independent processes.

  85. Teknokent June 22, 2016 at 7:40 am #

    Hi Adrian,
    Well done on all your studies. That is a great job. What do you think about counting people? Did you try it before?
    Nice day!

    • Adrian Rosebrock June 23, 2016 at 1:18 pm #

      It’s certainly possible using this technique. But depending on the types of images/videos you’re working with, you might want to use OpenCV’s built-in person detector.

      • Teknokent July 1, 2016 at 7:40 am #

        thank you so much!

  86. James June 24, 2016 at 5:43 am #

    Hi there,
    I am doing something somewhat similar to this.
    If you were to get the center of the rectangle in each frame, and then make a line joining these centers together (effectively tracking the moving person) how would you go about doing this?

    I have been able to identify the centers in each frame but am struggling to create a list that stores all the history of the centres.

    • Adrian Rosebrock June 25, 2016 at 1:33 pm #

      Hey James — I already explain how to do this in this blog post.

  87. Madhukar Chaubey June 28, 2016 at 9:46 am #

    Can this work with a sequence of images instead of live camera frames? What would the changes be? Need help…

    • Adrian Rosebrock June 28, 2016 at 10:46 am #

      Sure, this can absolutely work with a sequence of images instead of a live stream. Instead of looping over video frames, loop over your images from disk: replace the while loop that loops infinitely over frames from the video stream with a loop over all relevant images on your disk.

  88. Izzat June 28, 2016 at 4:45 pm #

    Hello Adrian, your work is fabulous; I can’t believe how amazingly it works.
    One more question: I am using an RPi 2 for streaming image frames wirelessly over WiFi using the MJPG-streamer method (so far I receive video frames on a fixed IP address and specific port 8080), and now I need to open those frames in your code and apply the same object detection on the received frames. Can I do it? Will you please help me out..??

    • Adrian Rosebrock June 29, 2016 at 2:06 pm #

      It’s been a long time since I’ve had to pass an IP stream into cv2.VideoCapture, but this is exactly how you would do it. I would suggest doing some research on IP streams and the cv2.VideoCapture function together. Otherwise, another approach would be to use a message passing library such as ZeroMQ or pyzmq and pass the serialized frames back and forth.

  89. Andrew July 6, 2016 at 2:04 pm #

    It keeps saying that ‘frame’ and ‘gray’ are not defined. Help please? Otherwise, great tutorial.

    • Adrian Rosebrock July 6, 2016 at 4:10 pm #

      Hey Andrew — it’s hard to know exactly why you might be running into that issue. Please make sure you have used the “Downloads” section of this tutorial to download the code to this post. If you are copying and pasting the code (or typing it in yourself), you might (unknowingly) be introducing errors to the code.

  90. JP July 26, 2016 at 4:33 pm #

    Thanks for letting me search for my own answer.
    If the issue “too many values to unpack” occurs,
    I found my answer here:

  91. Wanderson Souza July 27, 2016 at 11:33 am #

    I have a big question: in your opinion, what is the best technique to segment a dense crowd of people viewed from the top? For example, people getting through a train door. Thank you!

    • Adrian Rosebrock July 27, 2016 at 1:54 pm #

      That really depends on the quality of your video stream, the accuracy level required, lighting conditions, computational considerations, etc. For situations with controlled lighting conditions background subtraction methods will work very, very well. For situations where lighting can change dramatically or the “poses” you need to recognize people in can change, then you might need to utilize a machine learning-based approach. That said, I normally recommend starting off with simple background subtraction and seeing how far that gets you.

  92. San July 28, 2016 at 5:45 pm #

    Excellent tutorial as always. Just a small question. So for cosmetics I used

    feed = np.concatenate((frame, thresh), axis=1)
    cv2.imshow("Feed", feed)

    Obviously, it cannot concatenate since frame and thresh have different dimensions. Is there a workaround?

    • Adrian Rosebrock July 29, 2016 at 8:28 am #

      Do your frame and thresh have the same height? If not, resize the images such that they have the same height so you can concatenate them vertically.

      Secondly, thresh is a single-channel binary image while frame is a 3-channel RGB image. That's not an issue; all you need to do is create a 3-channel version of thresh:

      thresh = np.dstack([thresh] * 3)

      From there, you'll be able to concatenate the images.

  93. Cristian Bello August 10, 2016 at 1:27 am #

    Hello Adrian, I first want to say that your work is excellent, but a doubt arises for me. I can broadcast live, but I have a problem: the screen suspends after some idle time without keyboard or mouse input. How can I avoid that?

    • Adrian Rosebrock August 10, 2016 at 9:24 am #

      Hey Cristian — can you elaborate more on what you mean by the screen being “suspended”? I’m not sure what you mean.

      • Cristian Bello August 11, 2016 at 12:55 am #

        Hello Adrian, I mean when you stop moving the mouse or keyboard for a good while and the screen turns off, but all processes continue: the energy saving mode of many computers.

        • Adrian Rosebrock August 11, 2016 at 10:37 am #

          This really depends on your computer. You would need to investigate any type of “System Preferences” and turn off any settings that would put your system into “Sleep” or “Hibernate” mode.

  94. Tiago Martins August 19, 2016 at 11:26 am #

    Hi Adrian,

    Amazing posts you have… and those bundles, super helpful 🙂
    I have a question about the step where we calculate the delta between the past frame and the current one. Can we know each pixel coordinate that has changed from one frame to another?

    Best regards,

    Tiago Martins

    PS. – Please don’t stop 🙂

    • Adrian Rosebrock August 22, 2016 at 1:36 pm #

      Can you elaborate more on what you mean by “know each pixel coordinate that has changed”? I assume you want to know every pixel value that has changed by some amount? If so, just take a look at the thresholded delta image. You can trivially adjust the threshold to be one, but the problem is that you’ll get a lot of “noise” by doing that.

  95. Dong il Kum August 27, 2016 at 2:49 am #

    Hi Adrian, I’m really impressed by your motion detection project.
    As I am a novice in OpenCV and Python, I have some questions.
    In our project we want to use this program in an alley, so there could be parked cars or other things lying around. In that case the program may stay in the ‘occupied’ condition because of the cars or other objects. Thus I want to add a function that replaces the first-frame image with a new frame of whatever the webcam is currently looking at whenever nothing new is detected by the camera. But in my opinion this is really difficult to make TT. Could you help or advise us??

  96. Yashvardhan September 23, 2016 at 3:18 pm #

    Hey Adrian,
    I’m trying to run this code on my laptop running Windows 8. I have installed all the necessary packages but it is still giving me a ValueError: too many values to unpack at line 57. Please help me out of this error.

    • Adrian Rosebrock September 27, 2016 at 8:56 am #

      It sounds like you are using OpenCV 3, but this blog post requires OpenCV 2.4. No worries though, this is an easy fix. Please see my reply to “TC” above for the solution.

  97. Julian Harris September 24, 2016 at 2:20 am #

    Really fantastic tutorial, thanks Adrian! It passes the “sleeping kids test”: could I get the whole thing running before my kids woke up? Yes! 🙂

    • Adrian Rosebrock September 27, 2016 at 8:54 am #

      Awesome, great job Julian!

  98. swapnil October 6, 2016 at 3:20 pm #

    It’s really the best tutorial; I like it. In this program I want to store the video of the occupied frames. Please tell me which command I should use to store the occupied-object video.

  99. Benjamin October 9, 2016 at 8:13 am #

    great stuff! thanks for the tutorial!
    I’m using a Pi camera with the v4l2 driver on wheezy. The script works very well with it. I tried it with the old and new camera modules; running it with the new camera module it is not so easy to find a good threshold level.
    Also, I wondered if I could run the script with the NoIR camera module..? I guess not, but do you have an idea how I could run it?

    • Adrian Rosebrock October 11, 2016 at 1:03 pm #

      I personally haven’t worked with the NoIR camera before. The thresholding is a little different but you can still apply the same basic principles.

  100. Berkay Aras October 13, 2016 at 4:22 am #

    I solved this problem by reinstalling OpenCV.

    But now, when I do sudo python
    it gives no problem but it’s not showing anything.

    The program is not running?

    Any ideas?

    • Adrian Rosebrock October 13, 2016 at 9:09 am #

      Is the Python script starting and then immediately exiting? Are you trying to access your webcam or use the video file provided in the “Downloads” section of this tutorial?

      • Gbenga January 24, 2017 at 9:06 am #

        Hi Adrian, thanks once again for the amazing tutorial. I have exactly the same problem as “Berkay Aras”: when I do sudo python
        it gives no problem but it’s not showing anything.

        The program is not running?

        I am using a Raspberry Pi 2 with OpenCV 3.1.0 installed and a picamera. I have no idea why I did not get anything; I am using the downloaded code from your blog.

        Any idea please!

        It stops here after executing the run command…..

        pi@GbeTest:~ $ python --video videos/example_1.mp4
        pi@GbeTest:~ $

        • Adrian Rosebrock January 24, 2017 at 2:19 pm #

          It looks like the Python script is running just fine, but you aren’t able to read frames from the .mp4 file. I would suggest following this install instructions to ensure you have the proper video codecs installed.

          • Gbenga January 25, 2017 at 6:16 am #

            Adrian, thanks for your reply. In your code, I think you grabbed the frame from your camera as shown here:

            (grabbed, frame) =
            text = “Unoccupied”

            How can I do that if I want to grab it from a file?

            thanks for your help!

          • Adrian Rosebrock January 26, 2017 at 8:25 am #

            If you want to grab a video frame from a file, just update the cv2.VideoCapture initialization to include the path to your input video:


  101. Ravi October 14, 2016 at 1:21 pm #

    Hey Adrian,

    Thank you for sharing it with the community.

    Is it possible to use this for object motion detection? Like, moving car or ball detection?

    What will I have to change to detect the specific shape object without any false detection?

    • Adrian Rosebrock October 15, 2016 at 9:55 am #

      You can certainly use this for object detection, but you’ll need a little extra “special sauce”. I would use motion detection to detect “candidate regions” that need to be classified. From there, I would pass these regions into trained machine learning classifiers (such as HOG + Linear SVM, CNNs, etc.) for the final classification.

  102. saluka October 20, 2016 at 10:56 am #

    When I put a camera outdoors, does it detect rain as motion? How can I make it detect only humans and sense their motion?

  103. Devid October 23, 2016 at 10:54 pm #

    Hi Adrian,
    I used your code and did object tracking using the CamShift algorithm.
    It works nicely.
    I just want to implement pan/tilt tracking,
    so could you please guide us on controlling 2 servos (x and y direction) according to the CamShift tracking?
    Thanks a lot

    • Adrian Rosebrock October 24, 2016 at 8:29 am #

      I don’t have any tutorials on utilizing servos, but I will certainly consider it for a future blog post.

      • Devid October 25, 2016 at 5:54 am #

        Thanks Adrian

  104. Josh October 31, 2016 at 10:45 am #

    Hi Adrian,

    I followed your tutorial and this is really awesome. Thank you so much for sharing your work! I have a question for you. How can I show the frame delta like you have done in some of your tutorial screen shots?


    • Adrian Rosebrock November 1, 2016 at 8:59 am #

      Hey Josh — thanks for the kind words, I’m happy I could help. To display the delta frame simply insert:

      cv2.imshow("Delta", delta)

      I would personally put that line with the other cv2.imshow statements.

  105. tringuyen November 11, 2016 at 12:59 am #

    Does this code run on Linux on a PC, and not just on the Raspberry Pi? Because I have an error.

    • Adrian Rosebrock November 14, 2016 at 12:17 pm #

      You need to install the imutils package into the “cv” virtual environment, i.e. workon cv followed by pip install imutils.

      From there you’ll be able to execute your script without error.

  106. bharath November 24, 2016 at 10:40 pm #

    hello sir
    Since I am a beginner in computer vision and image processing,
    I want to detect our own custom objects. Please let me know if you have any source code or some useful information that could help me resolve this problem.
    Thank you in advance, sir

  107. kane November 28, 2016 at 10:52 pm #

    I am doing a final project on “people motion detection with a Raspberry Pi”: after detecting people with the Pi camera, a SIM900 will send a message to the owner. So I have 2 questions:
    1. Can I run this code for my project?
    2. How can I use the SIM900 with the Raspberry Pi?
    I read your “home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv”, but that uses Dropbox, and I want to run in a no-WiFi environment. So I think I can do it with this code: basic motion detection and tracking with Python and OpenCV.

    • Adrian Rosebrock November 29, 2016 at 8:01 am #

      To use this code for your project use the “Downloads” section to download the source code. I provide an example of executing the script at the top of the source files.

      From there you should use the accessing Raspberry Pi camera post to modify the code to work with your Raspberry Pi camera module.

      I don’t have any experience with the “SIM900” (and honestly don’t know what it is off the top of my head). I presume you mean sending a text message. If so, check out the Twilio API.

  108. TBlack November 29, 2016 at 9:29 pm #

    Thanks Adrian,

    I tried it on a sample video and it works great.

    • Adrian Rosebrock December 1, 2016 at 7:42 am #

      Nice job! 🙂

  109. siyer November 30, 2016 at 4:40 am #

    Hi Adrian

    Thanks for the tutorial.

    frame is always returning None, even if I pass a local video file to cv2.VideoCapture. No errors per se.

    • siyer November 30, 2016 at 11:56 pm #


      I downloaded the code as-is and ran it; it now seems to exit while finding the contours (line 60) without any errors.

      Kindly advise.

      Kindly ignore; it looks like in the OpenCV version I am running, cv2.findContours returns 3 values instead of 2 as originally expected in the code. It now moves past.

      • Adrian Rosebrock December 1, 2016 at 7:24 am #

        In OpenCV 2.4, the cv2.findContours function returns 2 values. In OpenCV 3, the function returns 3 values. You can learn more about the differences here.
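        If you want the script to run under both versions, one option is to unpack based on the length of the returned tuple; here is a quick sketch on a tiny synthetic image:

        ```python
        import numpy as np
        import cv2

        # a tiny binary image with one white square standing in for a motion region
        thresh = np.zeros((100, 100), dtype="uint8")
        thresh[30:60, 30:60] = 255

        # OpenCV 2.4 (and 4.x) return (contours, hierarchy), while OpenCV 3
        # returns (image, contours, hierarchy) -- pick the contours accordingly
        ret = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cnts = ret[0] if len(ret) == 2 else ret[1]
        print(len(cnts))  # prints 1
        ```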

    • Adrian Rosebrock December 1, 2016 at 7:34 am #

      In that case your version of OpenCV was likely compiled without video codec support. I would suggest following one of my OpenCV install tutorials.

      • siyer December 1, 2016 at 9:13 am #

        Thanks Adrian

        It was not a codec issue. I had to place the opencv_ffmpeg DLLs in one of the PATH’s…

        Secondly, for some reason it does not recognise relative paths for the video file. Have to provide full path.

        Works like a charm (few false positives on a self made video) but great start.

        thanks much

        • Adrian Rosebrock December 5, 2016 at 1:52 pm #

          Nice, congrats on resolving the issue!

      • Sen Young December 5, 2016 at 4:17 am #

        Hello Adrian! Good morning! Thank you very very much!

        I am a student from China. Recently, I was stumped by the question of how to build a system that can count how many people are in a classroom. It’s this tutorial of yours that gave me ideas and approaches!

        I’m so glad and lucky to find your website in this wonderful world !

        But some questions still confuse me: can motion detection detect many individuals and count the number of people at the same time? Does this need a face detector or a head-and-shoulders detector in OpenCV? Could you give me some ideas or solutions? Thank you very much.

        • Adrian Rosebrock December 5, 2016 at 1:26 pm #

          You can use motion detection to count the number of people in a room provided that the motion in the room is only because of people.

          Otherwise, you should consider applying object detection of some kind. I demonstrate how to detect humans in images here.

  110. Chandough December 6, 2016 at 5:48 pm #


    Amazing code. But when I try to execute it, the command line gives me a syntax error for
    File “”, line 1.

    I am not entirely sure where I am wrong, any help is appreciated!

    • Adrian Rosebrock December 7, 2016 at 9:39 am #

      Hey Chandough — I would suggest that you use the “Downloads” section of this tutorial to download the code and execute it. It seems like you copied and pasted the code from the post into your own project. That’s totally fine, but it can lead to errors like these. This is why I suggest using the “Downloads” section to ensure the code is properly executing on your system.

  111. Moon ki Park December 11, 2016 at 1:47 pm #

    Hi Adrian~

    I saw the video in your tutorial about facial recognition with a camera: the camera analyzes someone, and if they do not match, the computer sends a message to your phone!

    I have a question: what kind of API do you use (Twilio, Textlocal, etc.)? And are you paying when the computer sends a message to your phone?

    If you are using something free, can you tell me?

    • Adrian Rosebrock December 12, 2016 at 10:34 am #

      I am using the Twilio API. To send picture messages you would have to pay for the API.

  112. David December 11, 2016 at 3:46 pm #

    Interested in whether you think this can run fast enough to track a rocket launch.

    I’m considering automating a tracker to improve model rocket photography/video (3D-printed gearbox/tripod head driven by servos).

    High-end of “small” rockets:

    A bit bigger:

    I realize the changing background is an issue – but if you look at the videos, once the camera head has tilted up, it doesn’t have to move much. I’m thinking I could create a system adapted for the rapid acceleration that only lasts the first fraction of a second.

    Interested in any ideas.

    • Adrian Rosebrock December 12, 2016 at 10:34 am #

      The issue here isn’t so much the speed of the actual pipeline, it’s the FPS of your camera used to capture the video. If you can get a 60-120 FPS camera, sure, I think you could potentially use this method for tracking. The problem here is the changing background, so you should instead try color or correlation filters.

  113. GK December 12, 2016 at 9:06 am #

    Hi Adrian, These are some amazing tutorials. Thank you for sharing it with us.
    Could you tell us how to execute the code from the Python shell and not from cmd?
    That would be of great help.

    Thank you,

    • Adrian Rosebrock December 12, 2016 at 10:25 am #

      Which Python shell are you referring to? The command line version of the Python shell? Or the GUI version? I don’t recommend using the GUI version of IDLE. You should use Jupyter Notebooks for that.

      • GK December 12, 2016 at 11:37 am #

        I was referring to the IDLE shell. I’d like the program to run when I hit “F5”, instead of executing it from the cmd. Would that be possible?
        If you’d like, I can send you a detailed email on what I’m trying to do, and why I’d like the program that way.
        Thank you

        • Adrian Rosebrock December 12, 2016 at 12:41 pm #

          If that’s the case I would suggest using a more advanced IDE such as Sublime Text 2 or PyCharm. Both of these will allow you to run the program via a “hot key” and display the results within the IDE.

          • GK December 12, 2016 at 12:49 pm #

            That’s wonderful. Thank you Adrian. Shall try it out right away.

          • GK December 12, 2016 at 1:57 pm #

            Hi Adrian,
            I tried both PyCharm, and Sublime Text 3, neither of the IDEs would run the program directly. I’m able to run it from the command prompt in the PyCharm, but I was hoping to run it with either “Ctrl+B” or “F5”. Would you be able to shed some light on this issue?
            Thank you,

          • Adrian Rosebrock December 14, 2016 at 8:48 am #

            To be honest, I always execute my programs via command line. I never execute them via the IDE, so I’m not sure what the exact issue would be.

  114. navya December 19, 2016 at 12:17 am #

    I want to stream the USB cam from the Raspberry Pi and see it live on a Windows PC monitor.

    Can I achieve this using just Linux commands? (I have never worked with Python before.)
    I have installed PuTTY recently and I am working with it.
    I am a newbie, so kindly advise me.

    BTW sorry, forgot to mention.

    ELP-USB130W01MT-L21 is the model of the camera which I am using,

    and I want the live video on the Windows PC, but not over the web.


    • Adrian Rosebrock December 21, 2016 at 10:42 am #

      If all you want to do is see the frames on a separate machine other than the Pi just use X11 forwarding:

      $ ssh -X pi@your_ip_address

      From there, execute your script and you’ll see the results on your screen.

  115. Jax December 23, 2016 at 1:52 am #

    Hello Adrain.

    I am planning to incorporate a live stream of motion detection, face detection, and face recognition, and currently I am having problems running the face detection code. When I tried to run part of your code, it showed AttributeError: ‘module’ object has no attribute ‘cv’. I am using OpenCV 3, by the way.

    Greatly appreciate your advice.


    • Adrian Rosebrock December 23, 2016 at 10:52 am #

      What is your exact error message? And what line of code is throwing the error?

      • Jax December 25, 2016 at 3:51 am #

        Thank you for the fast reply

        • Adrian Rosebrock December 31, 2016 at 1:46 pm #

          It looks like you’re using OpenCV 3. Change it to:

          flags = cv2.CASCADE_SCALE_IMAGE
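          If you need the same script to keep working under OpenCV 2.4 as well, a small sketch of a fallback:

          ```python
          import cv2

          # OpenCV 2.4 exposed the flag as cv2.cv.CV_HAAR_SCALE_IMAGE;
          # OpenCV 3+ renamed it to cv2.CASCADE_SCALE_IMAGE
          try:
              flags = cv2.CASCADE_SCALE_IMAGE
          except AttributeError:
              flags = cv2.cv.CV_HAAR_SCALE_IMAGE
          ```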

  116. Danny January 22, 2017 at 7:47 am #

    I am thankful for your tutorials and how well you explain everything, thanks a lot!
    I’m in the last year of my engineering degree and currently looking for a job! When I have the money, I will buy your book, because I’m interested in doing my thesis on OpenCV.
    Again, thanks!

    • Adrian Rosebrock January 22, 2017 at 10:11 am #

      Thanks Danny, I’m happy the tutorials have been helpful to you 🙂

  117. Akarsh February 1, 2017 at 5:53 am #


    There seems to be a problem while working with the code you provided. It gives a “too many values to unpack” error on line 60 of your code. Please have a look at it.

    I’m using python 2.7.6 and Opencv 3.1.0

    • Adrian Rosebrock February 1, 2017 at 12:46 pm #

      Hey Akarsh — please be sure to look through the other comments before posting or at least ctrl+f for your error message. You can resolve the issue by looking at my reply to “TC” above.

  118. silver February 2, 2017 at 2:51 pm #

    thank you.
    How can I count the people in the street, or the cars in the street?
    I hope you can add a tutorial about calculating distance with a webcam.
    Best regards

  119. Dayle February 2, 2017 at 3:46 pm #

    Hi Adrian,

    I was revisiting this post and noticed that you coded a 21 x 21 pixel area for blurring, yet in the text you refer to a 11 x 11 pixel blurring region.

    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    You scale the image width down to 500 pixels, so do you recommend using 4% (20/500) ratio to set up a blurring region (odd number of pixels of course).

    I figure it was a typo, but couldn’t pass up the opportunity to pick your brain:)

    Thanks again and I look forward to reading the new book.

    • Adrian Rosebrock February 3, 2017 at 11:07 am #

      This is a typo in the blog post, thanks for pointing it out. I have updated the text to correctly say 21×21.

      As for your question, you typically choose a blurring size that fits the problem. In some cases this involves trial and error.

  120. Ted February 2, 2017 at 5:48 pm #

    Hi Adrian,

    Whatever I do, I still get the error ImportError: No module named imutils. Your suggested workon cv is not working; the error is bash: workon: not found.

    I hope you get me out of this.

    thanks in advance,


    • Adrian Rosebrock February 3, 2017 at 11:05 am #

      Hey Ted — it sounds like your virtual environment has not been configured correctly. If you are using Ubuntu/Linux you’ll want to make sure you have updated your ~/.bashrc file. For Mac, update your ~/.bash_profile
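      For reference, these are the lines the install tutorials add (the exact path to virtualenvwrapper.sh can vary by system, so check where pip placed it):

      ```shell
      # append to ~/.bashrc (Linux) or ~/.bash_profile (macOS), then open a new terminal
      export WORKON_HOME=$HOME/.virtualenvs
      source /usr/local/bin/virtualenvwrapper.sh
      ```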

  121. honey February 17, 2017 at 10:13 am #

    Thank you for all your responses to these queries; they are really helping me complete my entire project in detail.

  122. sandeep February 18, 2017 at 7:11 am #

    ValueError: too many values to unpack
    How do I fix this?

    • Adrian Rosebrock February 20, 2017 at 7:58 am #

      Please read the other comments on this post or do a ctrl+f search for “ValueError”, as I have already discussed this question multiple times in the comments section.

  123. Henk February 18, 2017 at 7:19 am #

    Hi Adrian,
    I installed OpenCV 3.1.0 following your tutorial on a Raspberry Pi 3 with no errors!
    Now I tried to run your ‘Basic motion detection and tracking’ and got this error on line 56:

    cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack (expected 2)

    can you help me out?

    btw, Thanks for the tutorials!

    thanks, Henk

    • Adrian Rosebrock February 20, 2017 at 7:57 am #

      Please read the other comments before posting (or searching for the error message). I have already discussed this question.

  124. noor hamer February 18, 2017 at 10:34 am #

    hello sir ^^
    I chose this topic as a project for my last year in information technology college,
    but I don’t have the full knowledge to do it.
    Can you help me?

    • Adrian Rosebrock February 20, 2017 at 7:55 am #

      What part of this project are you struggling with? If you need to understand the basics of computer vision and OpenCV, definitely consider going through my book, Practical Python and OpenCV, which will help you understand the fundamentals of computer vision and image processing.

  125. sandeep February 20, 2017 at 4:53 am #

    how to display the coordinates of the tracked contour and its centroid

    thanks in advance

    • Adrian Rosebrock February 20, 2017 at 7:37 am #

      You can use the cv2.putText function to display any text you would like on the image.
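      As a rough sketch, you can compute the centroid from the contour moments and overlay it; the frame and contour below are synthetic stand-ins for the ones in the post:

      ```python
      import numpy as np
      import cv2

      # synthetic frame plus a mask with one white rectangle as the "motion"
      frame = np.zeros((200, 200, 3), dtype="uint8")
      mask = np.zeros((200, 200), dtype="uint8")
      mask[50:120, 60:140] = 255

      # handle both findContours return signatures
      ret = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      c = (ret[0] if len(ret) == 2 else ret[1])[0]

      # centroid of the contour from its image moments
      M = cv2.moments(c)
      (cX, cY) = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

      # draw the centroid and overlay its coordinates
      cv2.circle(frame, (cX, cY), 4, (0, 255, 0), -1)
      cv2.putText(frame, "({}, {})".format(cX, cY), (cX + 10, cY),
          cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 255, 0), 1)
      ```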

  126. Luke A February 22, 2017 at 11:39 am #

    Hi – will this only work with specific FPS video streams?

    I am just thinking if the video feed source is 30FPS, will the script run as fast as the video is feeding frames? Or will this result in a backlog of frames being processed?


    • Adrian Rosebrock February 22, 2017 at 1:29 pm #

      This script will run as fast as it can decode and process the frames.

  127. Suganya February 24, 2017 at 5:18 am #

    Hello Adrian,
    I am getting an error no module named numpy. But python packages is updated


    • Adrian Rosebrock February 24, 2017 at 11:23 am #

      It sounds like you may have forgotten to install NumPy on your system. If you are using a Python virtual environment make sure you have NumPy installed there as well:
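      Assuming you are using the “cv” environment from my install tutorials:

      ```shell
      $ workon cv
      $ pip install numpy
      ```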

      • Suganya February 26, 2017 at 8:24 am #

        Thank you, it’s solved. I am using OpenCV 3. It registers the background as a contour; the status is always “occupied”.

  128. Ming February 27, 2017 at 9:35 am #

    Hi. First, thank you for the helpful tutorials. I have some questions. When I run the program, it doesn’t show anything; it seems the program just exits, and then I can enter the next command. I don’t know if the program is behaving normally. Can you help me?

    (cv)pi@raspberrypi:python_pj/basic-motion-detection $ python3
    (cv)pi@raspberrypi:python_pj/basic-motion-detection $ python3 --video videos
    (cv)pi@raspberrypi:python_pj/basic-motion-detection $

  129. Jim March 2, 2017 at 5:06 pm #

    Hello there, I know that you have mentioned my error before, but I’m not sure how to solve it. To be clear, my error is:

    Traceback (most recent call last):
    File “”, line 4, in
    import imutils
    ImportError: No module named imutils


    I’m using OpenCV 3.1.0 and Python 2.7.9 on Raspbian.

    Thank you for your tutorial.

    • Jim March 2, 2017 at 5:25 pm #

      Hello again. I just realised that I have two different versions of Python on my system. Could that have something to do with it?


    • Adrian Rosebrock March 4, 2017 at 9:43 am #

      Are you executing the code inside a Python virtual environment? Or outside the environment? Determine which Python version you are using and then install imutils:

      $ pip install --upgrade imutils

  130. Calix March 4, 2017 at 8:52 am #

    Can I set a video file as my first frame? If so, please help me with what code I need. Thanks bro!

  131. Terry March 4, 2017 at 2:14 pm #

    Trying to use this code to track squirrels in my backyard off a video feed. Code is working well. Unfortunately, despite trying different arguments for min size and threshold, there is too much stuff moving and it is putting bounding boxes around many, many items. This is despite me rewriting the first frame to the current frame about every 3 frames of the feed.

    Maybe someone can point me in the right direction as to a methodology. I am trying to: 1. identify likely squirrel objects from a video feed. 2. Grab that image and put it through a CNN to determine squirrel or not. 3. If a squirrel, then track it. I have the tensorflow CNN working. Just not sure of the right approach for 1. and 3.

    Right now the camera is stationary, but in the future I would like the camera to also be panning, if that makes a difference in the recommendation. Thanks in advance for any help.

    • Adrian Rosebrock March 6, 2017 at 3:47 pm #

      If your goal is to recognize various objects and animals, then yes, machine learning is the right way to go here. Squirrels (and other animals) can look very different depending on their poses, in which case you will likely need CNNs for the classification. I would suggest using basic motion detection to give you the ROIs of objects to classify, then passing these ROIs into a CNN to obtain the classification.

      • Terry March 6, 2017 at 6:00 pm #

        Thank you for the response. Your website and examples have been a huge help. My CNN for classification is working well. The motion detection algorithm for an outdoor video is providing far too many ROIs to analyze as many things are moving. This will be especially true if the camera pans.

        I tried simple blob detection converting images to HSV and filtering for grey (squirrel color) and that works well if the squirrel is on a green lawn and not so well when the squirrel is in woods (where there are many things colored grey). Trying adaptive correlation filters worked well on something like deer walking because they move slowly, but has been a bust because the squirrel moves in bursts and changes shape rapidly and the algorithm can’t keep up. I am considering trying YOLO next.

        • David Hoffman May 11, 2017 at 5:11 pm #


          After seeing your comment, I recalled a video from a few years ago at PyCon. The video is here:

          In the video, the presenter describes analyzing the entropy of the squirrel blob (because they have a bushy tail, and hair on their body).

          I hope this helps you.

          Oh…and he demonstrates how he shoots the squirrels with water off of his birdfeeder! There’s a video on his youtube page of that as well!


  132. Hakty March 10, 2017 at 6:14 am #

    Hi Adrian, great job. I am trying to develop a system to count the number of people in a cafeteria. I tried your example, but it misses a lot of people because they are too close to each other. You mention a more sophisticated method in the article; can you link me to it, as I could not find it? Thanks, and keep up the good work.

    • Adrian Rosebrock March 10, 2017 at 3:43 pm #

      There are many methods to detect/count objects in images/videos. My first suggestion would be to use HOG + Linear SVM.

  133. George March 15, 2017 at 6:38 pm #

    I didn’t expect such an awesome post when I started reading this! Love how simplistic it is. Will definitely be buying a raspberry PI and a web cam to try this out (if it works I see myself ending up with many webcams and PIs… hehehe). I have to write a review article on motion detection for my course and this seems to be a solid explanation to start with. Thanks Adrian!

    • Adrian Rosebrock March 17, 2017 at 9:33 am #

      Thank you for the kind words George, I’m glad you enjoyed the post 🙂

  134. Suganya March 21, 2017 at 1:04 am #

    Hi Adrian,
    All your posts are useful for completing my project. Really, I could find answers to all my queries on your page. Now I want to use some of the packages installed in both Python 2.7 and 3.4 in a single program. Is it possible? If so, please guide me.

    • Adrian Rosebrock March 21, 2017 at 7:08 am #

      I’m not sure what you mean by “use some package installed in both Python 2.7 and Python 3.4” in a single program. Can you elaborate on what you mean and what you are trying to accomplish?

  135. Danijel March 22, 2017 at 6:38 am #

    Your program works fine with opencv version 3.1 but with version 3.2 I got this error

    Traceback (most recent call last):
    File “”, line 61, in
    ValueError: too many values to unpack

    Do you know which changes I need to made in the code in order to not get error?

    • Adrian Rosebrock March 22, 2017 at 8:31 am #

      Actually, this script will need to be updated for all OpenCV 3 versions. It will work out-of-the-box with OpenCV 2.4 (keep in mind this blog post was written well before OpenCV 3 was ever released).

      You should also read the comments before posting or do a ctrl + f search on this error message. See my reply to “TC” above for the solution to your problem.

  136. Alex March 24, 2017 at 10:36 am #

    Hey Adrian thank you a lot for your work !
    I tried your code but I have a problem with the firstFrame ( To import the background image).
    In fact, when I run the code the Thresh window is completely white. I import the background:

    I think I am doing something wrong when importing the background image.

    Thank you

    • Adrian Rosebrock March 25, 2017 at 9:23 am #

      It sounds like your background image is being marked entirely as motion. Are you using the code from the “Downloads” section of this blog post? Please use this as a starting point.

  137. Rouzbeh Shirvani March 25, 2017 at 12:43 am #

    Thanks for the incredible post. It saved me a lot of time and I learned a lot from it. I have a quick question: when I try to do the same task with a different video, it gives me the following error.
    Also, I made sure to put the video in the same folder, and I tried videos with .mp4, .avi, and .mov formats, and none of them except your own video worked. I would appreciate your help.

    VIDEOIO(cvCreateFileCapture_AVFoundation (filename)): raised unknown C++ exception!

    • Rouzbeh Shirvani March 25, 2017 at 12:53 am #

      I figured what was the problem, the videos were not in the same folder

      • Adrian Rosebrock March 25, 2017 at 9:14 am #

        Congrats on resolving the issue Rouzbeh!

        • Rouzbeh Asghari Shirvani March 26, 2017 at 10:41 am #

          Thanks Adrian, I have another question. In the beginning of the post you mentioned that “The methods I mentioned above, while very powerful, are also computationally expensive. And since our end goal is to deploy this system to a Raspberry Pi at the end of this 2 part series, it’s best that we stick to simple approaches. We’ll return to these more powerful methods in future blog posts, but for the time being we are going to keep it simple and efficient.” Are you planning to have some post on the more powerful methods because I tired to look for it in the blog but I was not able to find it.

          • Rouzbeh Shirvani March 26, 2017 at 10:45 am #

            sorry for the typo, I meant tried in the previous comment.

          • Adrian Rosebrock March 28, 2017 at 1:06 pm #

            Yes, I will be covering more advanced background subtraction/motion detection methods in future blog posts (I have not written them yet).

  138. Danijel April 2, 2017 at 3:24 pm #

    The motion detection on the videos you provide works on my Raspberry Pi 3, but you said: “Finally, if you want to perform motion detection on your own raw video stream from your webcam, just leave off the --video switch.”
    But when I run without the --video switch I don’t get any video output on screen, and the terminal finishes executing the command in approximately one second, which suggests the program does not detect motion from the cam stream. Is this problem related to my OpenCV version, which is 3.2 in my case? Can you publish code which will do motion detection from video taken on a Raspberry Pi 3 with OpenCV 3.2?

    • Adrian Rosebrock April 3, 2017 at 1:57 pm #

      Are you using the Raspberry Pi camera module? If so, you’ll want to update the code to use the template referenced in this post.

      • Danijel April 5, 2017 at 12:49 pm #

        Yes, I’m using camera module version 1.
        Yes, I want to update the code to detect motion from camera module video in real time. Where can I find code which enables that?

        • Adrian Rosebrock April 8, 2017 at 12:43 pm #

          As I mentioned in my previous reply to you, I don’t have the code pre-updated for you. But you can modify this source code to use the Raspberry Pi camera using this post. Alternatively, I recommend using this post which uses the Raspberry Pi camera by default.

  139. Adri Rizal April 8, 2017 at 9:44 am #

    Hi Adrian,
    How can I use this code with my own camera?
    Thank you very much

    • Adrian Rosebrock April 8, 2017 at 12:35 pm #

      What type of camera are you using? USB? Built-in webcam?

  140. Mouhamed Ksiksi April 11, 2017 at 11:08 am #

    Hey Adrian
    I use your source code of motion detection from your link
    The problem is that when the camera tries to detect me while I’m moving, it tracks and draws contours around everything with dark colors.
    I’m using a USB PC camera that captures 640×480 pictures.
    I need your help. Could you please help me?

    • Adrian Rosebrock April 12, 2017 at 1:07 pm #

      I would suggest using a more advanced method of motion detection, as described in this blog post.

    • Fernando September 12, 2017 at 8:16 am #

      Try showing the firstFrame. In my case, the first frame was darker.

      The solution was to time.sleep(2) and to throw away the first frame, so before the loop I did _, frame =

  141. Musa April 17, 2017 at 3:45 pm #

    Sorry I asked this question on the wrong post, was meant for

    • Adrian Rosebrock April 19, 2017 at 12:54 pm #

      Check your edge map and ensure the region you are trying to detect is being correctly found in the edge map (based on your comment, it sounds like it’s not).

  142. Sam April 19, 2017 at 2:48 am #

    Thank you Adrian for this Tutorial.
    The functionality works fine, but the accuracy was off.
    As soon as I run the program using my mounted webcam, the bounding box for the contour (green) fits the whole window scene (imshow(frame)) with Room Status: Occupied the whole time, while there was no motion at all.

    • Adrian Rosebrock April 19, 2017 at 12:45 pm #

      Hey Sam — it sounds like your camera sensor is still warming up, thus causing the entire region to be marked as motion. I would suggest looping over the first ~10-30 frames and ignoring them before trying to compute motion.

      • Sam April 23, 2017 at 3:13 am #

        Thank you Adrian for your help.
        Unfortunately, it is still the same: the entire region is marked as motion (green).

        Maybe it’s my webcam! I will keep trying to fix this problem.
        What is weird is that I built basic motion detection in Java using the same webcam, and it was fine!

        • Adrian Rosebrock April 24, 2017 at 9:49 am #

          That is indeed very strange. I would also suggest trying this post which computes a rolling average of the frames which is more robust to issues such as this.

  143. Joe April 27, 2017 at 3:45 pm #

    Do you have any pointers for using HoughLinesP in conjunction with the createBackgroundSubtractor() method?

  144. jenith May 3, 2017 at 1:53 am #

    Hello Sir
    I implemented code which is almost the same as yours. My question is that I want to know a communication protocol that can make transmission between client and server secure, such as CoAP or DTLS. Lastly, I also want to know about its implementation.

    Waiting for positive response…

    • Adrian Rosebrock May 3, 2017 at 5:42 pm #

      Hi Jenith — this isn’t exactly a computer vision question, but I would suggest encoding the image and transmitting. I like ZeroMQ and RabbitMQ for these types of tasks.

  145. Alvaro May 4, 2017 at 4:04 am #

    First of all, thank you for this amazing code. I have been looking for something like this for a while.
    I’m working on a laptop, and for real-time capture I would like to use an external USB camera. How can I select that external camera instead of the laptop’s?

    Thanks for your help 🙂

    • Adrian Rosebrock May 4, 2017 at 12:31 pm #

      You simply change the index of cv2.VideoCapture. Assuming your laptop’s built-in camera is the 0-th index, your USB webcam is likely the first index:

      camera = cv2.VideoCapture(1)

  146. thayjes May 5, 2017 at 4:47 pm #

    I am getting an error :
    Error opening file

    I am using OpenCv 3.1 with Python 2.7.
    Anyone has any idea how to fix the error?
    Thanks in advance!

    • thayjes May 5, 2017 at 4:48 pm #

      This is my error:

      error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp python

      • Adrian Rosebrock May 8, 2017 at 12:31 pm #

        It sounds like there is an issue with the video support in OpenCV. I would suggest following one of my tutorials to install OpenCV on your system.

  147. achraj May 7, 2017 at 10:32 am #

    Thanks for such a good tutorial. I was trying to save the video when any motion occurs on the webcam. How can I do this?
    Is there any way to convert that video to a GIF and save it to my laptop?

    • Adrian Rosebrock May 8, 2017 at 12:23 pm #

      Please see this tutorial where I demonstrate how to save key event video clips to disk.

  148. AdamBlade May 16, 2017 at 8:36 am #

    Hey Adrian, at first really great tutorial (just as any other you have on your website)

    I’m facing one problem trying to run the python program. Nothing happens (python is probably breaking it)
    I’m trying to run it from the file (even from those from you and with your code), not from the webcam.

    When I’m commenting out the “if not grabbed:” check, I get the ‘NoneType’ error “object has no attribute ‘shape’”, so it looks like the path to the file is wrong, but…
    I’ve also checked your object tracking tutorial (with the tennis ball) and there, running the program with the same file works.

    I’m confused, do you know what can be the cause of it? OpenCV installed properly just as you demonstrated.

    Will be very grateful for advice.

    • Adrian Rosebrock May 17, 2017 at 9:58 am #

      It sounds like your system is unable to read the frame from the video, likely because your OpenCV install was not compiled with video support. You can read more about these types of NoneType errors with OpenCV here.

  149. Ruben May 21, 2017 at 12:39 pm #

    Hi Adrian,

    Great tutorial, it’s working for me. For my application I’d like to know if it’s possible to save only the moving part (the region in the green rectangle)?

    • Adrian Rosebrock May 25, 2017 at 4:45 am #

      You can use cv2.imwrite to save individual frames. To save just the region in the green rectangle, simply extract the region of interest using NumPy array slicing. If you’re interested in learning more about the fundamentals of computer vision and image processing, be sure to take a look at Practical Python and OpenCV. My book will help you get up to speed quickly.
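      A small sketch of the slicing, with a hypothetical bounding box (x, y, w, h) standing in for the one cv2.boundingRect returns in the post’s code:

      ```python
      import numpy as np
      import cv2

      # hypothetical frame and bounding box from cv2.boundingRect(c)
      frame = np.zeros((480, 640, 3), dtype="uint8")
      (x, y, w, h) = (100, 50, 200, 150)

      # NumPy slicing is [startY:endY, startX:endX]
      roi = frame[y:y + h, x:x + w]
      cv2.imwrite("motion_roi.png", roi)
      print(roi.shape)  # prints (150, 200, 3)
      ```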

  150. Ajit May 30, 2017 at 2:07 am #

    Thanks for uploading the project;
    it helped me a lot.

    • Adrian Rosebrock May 31, 2017 at 1:15 pm #

      Fantastic, I’m happy to hear it Ajit 🙂

  151. navin May 30, 2017 at 6:25 am #

    Hey Adrian,
    Please help me sort out the error. I am working with opencv-python 3.2.0 on Windows 8. When I run the code, it doesn’t display anything in the Python shell, and when I execute the command $ python --video videos/example_01.mp4 it gives the error SyntaxError: invalid syntax.
    I am new to OpenCV. Please help me run the code.

    • Adrian Rosebrock May 31, 2017 at 1:13 pm #

      Hey Navin — it’s been a solid 10 years since I’ve used Windows, so I’m not sure what the exact issue could be. I don’t recommend using Windows for computer vision development. I would suggest using either macOS or Ubuntu, which I provide OpenCV install guides for. In either case, this sounds like a video codec issue. You’ll likely need to re-install OpenCV with video codec support.

  152. Henry June 2, 2017 at 4:55 am #

    Hi Adrian, awesome tutorial! I have a problem: ValueError: too many values to unpack (expected 2)

    I’m using Python 3.6 and OpenCV 3 on Windows; the ball tracking program works fine. Thanks in advance

    • Adrian Rosebrock June 4, 2017 at 5:43 am #

      Hi Henry — please read the other comments on this blog post before posting. I’ve already addressed this question a number of times. Please see my reply to “TC” and “Alejandro” in particular.

  153. Sanu Ann Abraham June 12, 2017 at 12:33 pm #

    When I run the code, I get an error saying “no module named imutils” even though I have installed it. Please help!

    • Adrian Rosebrock June 13, 2017 at 10:57 am #

      Are you using Python virtual environments? How did you install OpenCV?

      • Julian Espinoza June 19, 2017 at 7:19 pm #

        I actually had a question about running Python in a virtual environment compared to Python’s regular environment. I installed Python + OpenCV two different ways, your method and another one I found on YouTube. Then I ran the code using both the regular and virtual environments and didn’t see a significant change (except that my other install has OpenCV 3; I read the “Alejandro” post, thanks, it worked perfectly).

        Oh, but those who didn’t use your install method will be missing the imutils module, and I ran into errors using “pip install imutils”. But if you use “sudo pip install imutils” then it will install perfectly (for those who didn’t use your install method).

        BTW, read your Practical Python+OpenCV book and loved it. Very easy to comprehend and appreciated how you explained everything. I was curious if you will be coming out with another book that specifically tailors towards camera tracking and more advanced topics?

        • Adrian Rosebrock June 20, 2017 at 10:53 am #

          Hi Julian, thanks for the comment. In terms of “significant change”, I’m not sure what you mean. Can you elaborate?

          As for more advanced content, it sounds like you would be the perfect fit for the PyImageSearch Gurus course. Inside the course I cover much more advanced computer vision algorithms (and in more detail). Be sure to take a look!

          I also have plans to write more books in the future, especially regarding tracking algorithms. But for the time being, be sure to start with PyImageSearch Gurus.

  154. ali June 28, 2017 at 3:14 am #

    Hi Mr. Adrian,
    I followed your “test_video” tutorial on the Raspberry Pi and it worked very well.
    But I can’t run this tutorial (Basic motion detection and tracking with Python and OpenCV). It doesn’t work. I write this line, ” python ”, and without any error it goes to the next line and does nothing.

    • Adrian Rosebrock June 30, 2017 at 8:21 am #

      Keep in mind that this tutorial assumes you are using a USB webcam. Are you using a USB webcam with your Raspberry Pi or the Pi camera module? In either case, consider using the VideoStream class to make the code compatible with both your Pi camera module and a USB camera.

  155. pani July 19, 2017 at 7:01 am #

    I am working on robot simulation under ROS and I want to use this code for my robot, but when I run the code a syntax error occurs: line 13, unexpected token ‘)’.
    The problem is that I’ve worked with C++ and don’t know Python,
    so I would appreciate it if you could help me get this code running.

    • Adrian Rosebrock July 21, 2017 at 8:59 am #

      Hi Pani — make sure you use the “Downloads” section of this blog post to download the source code and example video. This will ensure your code matches mine.

  156. Milan July 26, 2017 at 4:39 pm #


    First, thank you for this tutorial!
    The only problem is that the video plays back too fast, even with the examples, when I run the program.

    Have you got an idea why this happening?

    • Adrian Rosebrock July 28, 2017 at 9:57 am #

      The goal of OpenCV is to process frames as quickly as possible. The reason it seems “fast” to you is that OpenCV can run this particular algorithm faster than the video’s normal playback rate. If you want to slow it down, insert a time.sleep call at the end of the loop.
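      As a sketch of that idea (a hypothetical frame loop; sleeping only the remainder of each frame period keeps playback close to real time no matter how long the per-frame processing takes):

```python
import time

FPS = 25.0  # target playback rate
FRAME_PERIOD = 1.0 / FPS  # seconds each frame should occupy on screen

start = time.time()
for _ in range(5):  # stand-in for the while-loop over video frames
    frame_start = time.time()
    # ...per-frame motion detection work would go here...
    # sleep only whatever is left of this frame's time slice
    remaining = FRAME_PERIOD - (time.time() - frame_start)
    if remaining > 0:
        time.sleep(remaining)
elapsed = time.time() - start
```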

  157. Albert Franz July 28, 2017 at 5:10 am #

    Hi Adrian,
    great tutorial !

    But having read the title, I don’t find an implementation of tracking.
    The code is an implementation of detection, not tracking; in other words, tracking is when, after a detection, you identify the detected object and, frame by frame, keep information about it (location, speed, etc.) and build a model to predict its position in the next video frame.
    Algorithms like the Kalman filter, optical flow, mean-shift, or CAMShift do this.
    I would appreciate an implementation in future tutorials or courses.

    Thanks so much


    • Adrian Rosebrock July 28, 2017 at 9:44 am #

      Hi Albert — please see this post for more information on object tracking.

  158. tim kim August 2, 2017 at 3:24 pm #

    Does the Raspberry Pi not work with this code? What do you mean when you say you will show us how to update it to work with the Raspberry Pi? Thanks

    • Adrian Rosebrock August 4, 2017 at 6:57 am #

      Presuming you are using the Raspberry Pi camera module (not a USB webcam), you can use this tutorial to build a motion detection system using the Raspberry Pi.

  159. Kate August 6, 2017 at 3:03 pm #

    Do you have sample code/ tutorial for swiping and zooming gestures.

  160. steven August 12, 2017 at 10:16 pm #

    I don’t know why I thought this was meant to be run on a Pi. I spent a lot of time trying to figure out what I was doing wrong, then read in the comment section that you had a separate one for the Pi! Haha, well, the joke is on me. Great tutorial, thanks.

    • Adrian Rosebrock August 14, 2017 at 1:12 pm #

      Hi Steven — you are correct, the tutorial on this page is not meant for the Raspberry Pi; however, this one is.

  161. Khang Tran August 15, 2017 at 4:45 pm #

    Thanks Adrian for the tutorial! Everything worked fine except for the Dropbox package. After contacting Dropbox support, I was informed that Client.pyc and no longer exist and is taken over by one file With this curve ball, I was wondering how I can still connect to my Dropbox account without having access to these files.

    Another Question: I am in the middle of creating an Android app to host the live feed and was wondering if there’s a way to stream the video live given the fact that I am programming in Java.

    • Adrian Rosebrock August 17, 2017 at 9:16 am #

      I think you might be replying to the incorrect blog post? The home surveillance + Pi + Dropbox post is over here. In any case, I will be updating that blog post in the next two weeks to work with the latest Dropbox API release.

  162. tonie August 22, 2017 at 1:24 am #

    Sorry, it’s working.

    • Adrian Rosebrock August 22, 2017 at 10:44 am #

      Congrats on resolving the issue, Tonie!

  163. denish August 26, 2017 at 2:52 am #

    own video not recording,already stored video to operate it.

    • Adrian Rosebrock August 27, 2017 at 10:36 am #

      Hi Denish — can you elaborate on your comment? Are you trying to apply motion detection to a video file? Or save the results of motion detection to a video file?

  164. Mir haris August 28, 2017 at 4:12 am #

    I want to ask: if I want to capture a specific pattern of motion in low light only, then what will the procedure be?
    Do I have to already store such a pattern for it to match against, or something like that?

    • Adrian Rosebrock August 28, 2017 at 4:22 pm #

      Can you elaborate on what you mean by “pattern of motion”?

      • Mir haris August 29, 2017 at 1:50 am #

        Like seizures…
        And what if it is a live stream?
        Then it needs to match the pattern and raise an alarm.
        I hope you understand now.

        • Adrian Rosebrock August 31, 2017 at 8:44 am #

          It sounds like you are referring to “activity recognition”. This is a very open area of research in computer vision and machine learning. Unfortunately there is no “one size fits all” solution. Most approaches I’ve seen try to build very large datasets first. But again, I can’t recommend a general technique to you.

  165. Mir haris August 28, 2017 at 4:19 am #

    plus can i get your Skype id ?

    • Adrian Rosebrock August 28, 2017 at 4:22 pm #

      Sorry, I don’t share my Skype ID.

      • Mir haris August 29, 2017 at 1:50 am #

        I just wanted some help but that’s fine.

        • Adrian Rosebrock August 31, 2017 at 8:43 am #

          Please realize that I receive over 100 emails and 50+ comments per day on the PyImageSearch blog. I can’t simply hand out my Skype ID, personal Facebook, etc.

  166. Sree August 28, 2017 at 6:10 am #

    Hi sir,
    I am actually new to the development field. Will you please help me do the same thing in Android programmatically?

  167. Raj September 5, 2017 at 4:40 am #

    Is it possible to automatically differentiate between a human, an animal, and a vehicle along with the picked-up motion? Can you throw some light on this and, if possible, share code for it?

  168. michael September 6, 2017 at 8:21 am #

    Hi Adrian, I have copied the whole code into an editor, but when I try to run it in the Python shell, it just restarts and nothing actually happens. Can you tell me what is wrong?

    • Adrian Rosebrock September 7, 2017 at 7:03 am #

      Hi Michael — instead of copying and pasting the code please use the “Downloads” section to download the code. This will ensure the project structure is correct and there are no spacing issues related to copying and pasting. From there, execute the script via your command line.

  169. Mac September 9, 2017 at 11:23 am #

    Hi Adrian, I have a project which requires me to detect the motion of multiple humans, with the camera connected to a servo motor. It should only detect the motion of humans and nothing else; moving trees, for example, should be ignored. As soon as there’s movement, the camera stops via the servo motor and records the movement. This seems very complex to me as I have little knowledge of Python. Can you help me out with this, please?

    • Adrian Rosebrock September 11, 2017 at 9:19 am #

      If you need to detect just humans try using OpenCV’s built-in pedestrian detection.

      I don’t have any tutorials on combining object tracking with a servo but I’ll absolutely consider this for the future.

  170. Ravi Teja September 26, 2017 at 5:55 pm #

    Hi Adrian,

    I want to send an email alert or ring an alarm if it detects any motion in the video. COuld you please help me out how can i do that ?

    • Adrian Rosebrock September 28, 2017 at 9:26 am #

      If you’re interested in ringing an alarm, see this post. Sending an email can be accomplished a number of ways. I would actually recommend uploading the image to Amazon S3 and then including a link to it in the email. There are a lot of different libraries you can use for this. I actually prefer external services such as SendGrid as they are very reliable.

      For what it’s worth, I demonstrate how to build your exact application (and send out txt messages) inside the PyImageSearch Gurus course.

  171. David September 28, 2017 at 3:45 am #

    Hi Adrian,

    How to track objects only moving with certain speed in a video ?

    • Adrian Rosebrock September 28, 2017 at 8:58 am #

      Can you elaborate more on what you mean by “certain speed in a video”?

  172. David September 29, 2017 at 4:56 am #

    I don’t want to track all moving objects in a video. For example, if many people are walking in a street I want to track only if someone runs in the street. For that, I have to calculate the speed first.

    • Adrian Rosebrock October 2, 2017 at 10:19 am #

      Hi David — I will try to cover speed calculation in a future blog post. Thank you for the suggestion!

      • David November 26, 2017 at 1:13 pm #

        Hi Adrian,

        Did u get a chance to write a blog post on this? I am eagerly waiting for this.

        • Adrian Rosebrock November 27, 2017 at 1:03 pm #

          I have not, but I do have it in my “ideas list”.

          • David March 2, 2018 at 8:11 am #

            Hi Adrian,
            I am still waiting for your post on this.

          • Adrian Rosebrock March 2, 2018 at 10:23 am #

            I will try to do a blog post on it, but I cannot guarantee if or when that will happen. I’m happy to accept idea requests and suggestions, but that is not a guarantee that I will cover them. I’m happy to publish these free tutorials, but please do not make assumptions on my time or assume that by commenting on this thread many times I will absolutely cover the topic. I do my best to provide as many free tutorials as I can and I kindly ask for your respect in return. Thank you.

  173. Junaid October 19, 2017 at 2:56 pm #

    Sir, is this code for Python 2.7 or Python 3, and which OpenCV version? Please reply, I am waiting for your response.

    • Adrian Rosebrock October 19, 2017 at 4:40 pm #

      The code in the post covers OpenCV 2.4 and Python 2.7. The comments detail how to use OpenCV 3 and Python 3. I’ll also be updating this post in the future.

  174. Junaid October 20, 2017 at 4:18 pm #

    Thanks for your response

  175. Mat October 22, 2017 at 8:30 am #

    Great tutorial; however, when I run it the green box covers the entire screen, and the room status only shows occupied while the green box is displayed.

    • Adrian Rosebrock October 22, 2017 at 8:49 am #

      Try inserting a time.sleep(3.0) call before the for loop starts. It sounds like your camera sensor needs time to warm up. I would also suggest using a more advanced motion detection method — this tutorial will help you get started.

      • Mat October 24, 2017 at 10:11 am #

        Maybe, but my camera is still though maybe its not sitting possible maybe?

        • Adrian Rosebrock October 24, 2017 at 10:44 am #

          Yes, your camera does need to sit still. Can you rephrase your question please?

          • Mat October 25, 2017 at 5:52 am #

            The case I use means that my camera sits on a slight tilt, so maybe there is slight movement. But my concern is that it assumes the entire screen is motion, as a green border takes up all around the edges.

          • Adrian Rosebrock October 25, 2017 at 1:03 pm #

            It is possible that lighting conditions can cause this. Does the camera’s environment have consistent lighting?

  176. Robert October 22, 2017 at 8:49 am #

    For some reason imutils won’t work. I downloaded it and even did a pip freeze, and it shows up. When I launch a Python shell it imports fine, but when I try to run a script it says it can’t be found.

    • Adrian Rosebrock October 22, 2017 at 8:54 am #

      Hey Robert — I assume you are using Python virtual environments? If so, make sure you are in the appropriate Python virtual environment (normally named “cv” if you follow a PyImageSearch + OpenCV install tutorial).

      • Robert October 24, 2017 at 10:01 am #

        But when I run it in a Python shell it imports fine.

        yes I start by:

        source ~/.profile
        workon cv
        cd ~/Python_programs
        sudo python

        however an error comes up saying no module named imutils was found. But when I’m in my virtual environment and type python to bring up the shell, then type import imutils, it works fine.

        Yes, I’ve checked and I am in my virtual environment when it says imutils is imported; it just won’t work with scripts. Does it have something to do with it not supporting Python 3.5 or Raspbian Stretch?

        • Adrian Rosebrock October 24, 2017 at 10:41 am #

          Hi Robert. Try updating imutils in your environment:

          $ pip install --upgrade imutils

  177. Febrian Dwi P R S November 1, 2017 at 9:24 am #

    Thanks for this great tutorial!
    It really helps a lot for a newbie like me, but I have a problem.

    I finished writing the script with the exact name from your tutorial, but when I run the “python --video videos/example_01.mp4” command, the video window doesn’t pop up.

    I already finished your previous tutorial, the one on accessing the Pi camera with Python and OpenCV, and it worked without problems; the video window does pop up.
    I need help solving this, but I don’t know where the error is. I’ll be waiting for your response.

    P.S. sorry for my bad english, 😀

    • Adrian Rosebrock November 2, 2017 at 2:27 pm #

      Please keep in mind that this script assumes you are using a USB/built-in webcam, not the Raspberry Pi camera module. You can either (1) use a USB camera, (2) update the code to use the picamera module, or (3) use the VideoStream class.

      If you’re new to OpenCV and computer vision I would recommend working through Practical Python and OpenCV to help you get up to speed quickly and learn the fundamentals.

      I hope that helps!

      • Febrian Dwi P R S November 3, 2017 at 12:46 am #

        Ahhh, so that’s why.
        Do you mind explaining how to make this script work with the Pi camera module, or at least giving me clues about what I should change to make it work?

        I’m going into this project completely blind, since I want to learn by doing your tutorials.

        • Adrian Rosebrock November 6, 2017 at 10:50 am #

          Hi Febrian — please see my previous comment. I would suggest using the VideoStream class to make the code compatible with the Raspberry Pi. You should also refer to Practical Python and OpenCV if you need help learning the fundamentals.

  178. Gradn December 10, 2017 at 10:48 am #

    $ sudo python
    Traceback (most recent call last):
    File “”, line 4, in
    import imutils
    ImportError: No module named imutils
    $ pip install imutils
    Requirement already satisfied: imutils in ./.virtualenvs/cv/lib/python2.7/site-packages

    • Adrian Rosebrock December 12, 2017 at 9:15 am #

      This is because you are trying to execute the script as sudo. To execute the script as root you’ll need to supply the full path to your “cv” Python binary:

      $ sudo ~/.virtualenvs/cv/bin/python

  179. marina December 16, 2017 at 7:17 am #

    Hello! Great tutorial!
    I have a question: can you explain how the threshold works?
    I am trying to run your code, but the Thresh frame’s background is always white and turns black when there is movement, unlike yours, which is black and turns white when detecting movement.
    Do you know why that is?
    I have tested the program in both dark and bright rooms but it is still not working.

  180. marina December 18, 2017 at 10:38 am #

    Hello! Great tutorial!
    I have a problem running your code. In the ‘Thresh’ window I can see that almost everything in the background is white and the room always reads occupied.
    Also, it doesn’t catch every movement.
    Do you know what I am doing wrong?
    I am running the code using my laptop’s camera.

    • Adrian Rosebrock December 19, 2017 at 4:20 pm #

      I would suggest including a time.sleep(3) call and allowing your camera sensor to warm up before you start polling frames from it.

      • marina December 20, 2017 at 8:35 am #

        I have tried that already, and it worked for only once. After I tried to rerun the program, the room was always occupied again.

        • Adrian Rosebrock December 20, 2017 at 9:18 am #

          Hm, that is certainly a problem then! My suggestion would be to use a more advanced background subtraction method that uses a rolling average of frames. You can find an implementation here on the PyImageSearch blog.