Basic motion detection and tracking with Python and OpenCV

That son of a bitch. I knew he took my last beer.

These are words a man should never, ever have to say. But I muttered them to myself in an exasperated sigh of disgust as I closed the door to my refrigerator.

You see, I had just spent over 12 hours writing content for the upcoming PyImageSearch Gurus course. My brain was fried, practically leaking out my ears like half-cooked scrambled eggs. And after calling it quits for the night, all I wanted to do was relax and watch my all-time favorite movie, Jurassic Park, while sipping an ice cold Finestkind IPA from Smuttynose, a brewery I have become quite fond of as of late.

But that son of a bitch James had come over last night and drank my last beer.

Well, allegedly.

I couldn’t actually prove anything. In reality, I didn’t really see him drink the beer as my face was buried in my laptop, fingers floating above the keyboard, feverishly pounding out tutorials and articles. But I had a feeling he was the culprit. He is my only (ex-)friend who drinks IPAs.

So I did what any man would do.

I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:

Figure 1: Don’t steal my damn beer. Otherwise I’ll mount a Raspberry Pi + camera on top of my kitchen cabinets and catch you.

Excessive?

Perhaps.

But I take my beer seriously. And if James tries to steal my beer again, I’ll catch him red-handed.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
In order to run this example, you’ll need Python 2.7 and OpenCV 2.4.X.

A 2-part series on motion detection

This is the first post in a two-part series on building a motion detection and tracking system for home surveillance.

The remainder of this article will detail how to build a basic motion detection and tracking system for home surveillance using computer vision techniques. This example will work with both pre-recorded videos and live streams from your webcam; however, we’ll be developing this system on our laptops/desktops.

In the second post in this series I’ll show you how to update the code to work with your Raspberry Pi and camera board — and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.

And maybe at the end of all this we can catch James red-handed…

A little bit about background subtraction

Background subtraction is critical in many computer vision applications. We use it to count the number of cars passing through a toll booth. We use it to count the number of people walking in and out of a store.

And we use it for motion detection.

Before we get started coding in this post, let me say that there are many, many ways to perform motion detection, tracking, and analysis in OpenCV. Some are very simple. And others are very complicated. The two primary methods are forms of Gaussian Mixture Model-based foreground and background segmentation:

  1. An improved adaptive background mixture model for real-time tracking with shadow detection by KaewTraKulPong et al., available through the cv2.BackgroundSubtractorMOG function.
  2. Improved adaptive Gaussian mixture model for background subtraction by Zivkovic, and Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction, also by Zivkovic, available through the cv2.BackgroundSubtractorMOG2 function.

And in newer versions of OpenCV we have Bayesian (probability) based foreground and background segmentation, implemented from Godbehere et al.’s 2012 paper, Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation. We can find this implementation in the cv2.createBackgroundSubtractorGMG function (we’ll be waiting for OpenCV 3 to fully play with this function though).

All of these methods are concerned with segmenting the background from the foreground (and they even provide mechanisms for us to discern between actual motion and just shadowing and small lighting changes)!

So why is this so important? And why do we care what pixels belong to the foreground and what pixels are part of the background?

Well, in motion detection, we tend to make the following assumption:

The background of our video stream is largely static and unchanging over consecutive frames of a video. Therefore, if we can model the background, we can monitor it for substantial changes. If there is a substantial change, we can detect it — this change normally corresponds to motion in our video.

Now obviously in the real world this assumption can easily fail. Due to shadowing, reflections, lighting conditions, and any other possible change in the environment, our background can look quite different in various frames of a video. And if the background appears to be different, it can throw our algorithms off. That’s why the most successful background subtraction/foreground detection systems use fixed-mounted cameras in controlled lighting conditions.

The methods I mentioned above, while very powerful, are also computationally expensive. And since our end goal is to deploy this system to a Raspberry Pi at the end of this two-part series, it’s best that we stick to simple approaches. We’ll return to these more powerful methods in future blog posts, but for the time being we are going to keep it simple and efficient.

In the rest of this blog post, I’m going to detail (arguably) the most basic motion detection and tracking system you can build. It won’t be perfect, but it will be able to run on a Pi and still deliver good results.

Basic motion detection and tracking with Python and OpenCV

Alright, are you ready to help me develop a home surveillance system to catch that beer stealing jackass?

Open up an editor, create a new file, name it motion_detector.py, and let’s get coding:
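
What follows is a minimal sketch of the setup code, reconstructed to match the walkthrough below — the script in the downloads section is the authoritative version, and the line numbers referenced in the text refer to that file. The --min-area default of 500 pixels and the short camera warm-up pause are simply reasonable starting choices:

# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
# a default of 500 pixels is just a starting point -- tune it for your footage
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if a video path was not supplied, grab a reference to the webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)  # allow the camera sensor to warm up

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None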

Lines 2-6 import our necessary packages. All of these should look pretty familiar, except perhaps the imutils package, which is a set of convenience functions that I have created to make basic image processing tasks easier. If you do not already have imutils installed on your system, you can install it via pip: pip install imutils.

Next up, we’ll parse our command line arguments on Lines 9-12. We’ll define two switches here. The first, --video, is optional. It simply defines a path to a pre-recorded video file that we can detect motion in. If you do not supply a path to a video file, then OpenCV will utilize your webcam to detect motion.

We’ll also define --min-area, which is the minimum size (in pixels) for a region of an image to be considered actual “motion”. As I’ll discuss later in this tutorial, we’ll often find small regions of an image that have changed substantially, likely due to noise or changes in lighting conditions. In reality, these small regions are not actual motion at all — so we’ll define a minimum size for a region to filter out these false positives.

Lines 15-21 handle grabbing a reference to our camera object. In the case that a video file path is not supplied (Lines 15-17), we’ll grab a reference to the webcam. And if a video file is supplied, then we’ll create a pointer to it on Lines 20 and 21.

Lastly, we’ll end this code snippet by defining a variable called firstFrame.

Any guesses as to what firstFrame is?

If you guessed that it stores the first frame of the video file/webcam stream, you’re right.

Assumption: The first frame of our video file will contain no motion and just background — therefore, we can model the background of our video stream using only the first frame of the video.

Obviously we are making a pretty big assumption here. But again, our goal is to run this system on a Raspberry Pi, so we can’t get too complicated. And as you’ll see in the results section of this post, we are able to easily detect motion while tracking a person as they walk around the room.

So now that we have a reference to our video file/webcam stream, we can start looping over each of the frames on Line 27.

A call to camera.read() returns a 2-tuple for us. The first value of the tuple is grabbed, indicating whether or not the frame was successfully read from the buffer. The second value of the tuple is the frame itself.

We’ll also define a string named text and initialize it to indicate that the room we are monitoring is “Unoccupied”. If there is indeed activity in the room, we can update this string.

And in the case that a frame is not successfully read from the video file, we’ll break from the loop on Lines 35 and 36.

Now we can start processing our frame and preparing it for motion analysis (Lines 39-41). We’ll first resize it down to have a width of 500 pixels — there is no need to process the large, raw images straight from the video stream. We’ll also convert the image to grayscale since color has no bearing on our motion detection algorithm. Finally, we’ll apply Gaussian blurring to smooth our images.

It’s important to understand that even consecutive frames of a video stream will not be identical!

Due to tiny variations in the digital camera sensor, no two frames will be 100% the same — some pixels will most certainly have different intensity values. To account for this, we apply Gaussian smoothing to average pixel intensities across an 11 x 11 region (Line 41). This helps smooth out high frequency noise that could throw our motion detection algorithm off.

As I mentioned above, we need to model the background of our image somehow. Again, we’ll make the assumption that the first frame of the video stream contains no motion and is a good example of what our background looks like. If the firstFrame is not initialized, we’ll store it for reference and continue on to processing the next frame of the video stream (Lines 44-46).
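
Pieced together, the frame-processing loop described above looks roughly like this sketch (using an 11 x 11 Gaussian kernel for the smoothing region mentioned earlier):

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, we have reached the end of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)

    # if the first frame is None, initialize it and move on to the next frame
    if firstFrame is None:
        firstFrame = gray
        continue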

Here’s an example of the first frame of an example video:

Figure 2: Example first frame of a video file. Notice how it’s a still-shot of the background, no motion is taking place.

The above frame satisfies the assumption that the first frame of the video is simply the static background — no motion is taking place.

Given this static background image, we’re now ready to actually perform motion detection and tracking:
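
A sketch of this stage is below. Note the extra dilation step, which simply fills in holes in the thresholded image before we hunt for contours, and that the cv2.findContours call uses the OpenCV 2.4 signature — OpenCV 3 returns a 3-tuple instead (see the comments below this post):

    # compute the absolute difference between the current frame and first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours on it
    thresh = cv2.dilate(thresh, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"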

Now that we have our background modeled via the firstFrame  variable, we can utilize it to compute the difference between the initial frame and subsequent new frames from the video stream.

Computing the difference between two frames is a simple subtraction, where we take the absolute value of their corresponding pixel intensity differences (Line 50):

delta = |background_model - current_frame|

An example of a frame delta can be seen below:

Figure 3: An example of the frame delta, the difference between the original first frame and the current frame.

Notice how the background of the image is clearly black. However, regions that contain motion (such as the region of myself walking through the room) are much lighter. This implies that larger frame deltas indicate that motion is taking place in the image.

We’ll then threshold the frameDelta on Line 51 to reveal only the regions of the image with significant changes in pixel intensity values. If the delta is less than 25, we discard the pixel and set it to black (i.e. background). If the delta is greater than 25, we’ll set it to white (i.e. foreground). An example of our thresholded delta image can be seen below:

Figure 4: Thresholding the frame delta image to segment the foreground from the background.

Again, note that the background of the image is black, whereas the foreground (and where the motion is taking place) is white.

Given this thresholded image, it’s simple to apply contour detection to find the outlines of these white regions (Line 56).

We start looping over each of the contours on Line 60, where we’ll filter out the small, irrelevant contours on Lines 62 and 63.

If the contour area is larger than our supplied --min-area, we’ll draw the bounding box surrounding the foreground and motion region on Lines 67 and 68. We’ll also update our text status string to indicate that the room is “Occupied”.

The remainder of this example simply wraps everything up. We draw the room status on the image in the top-left corner, followed by a timestamp (to make it feel like “real” security footage) on the bottom-left.

Lines 77-80 display the results of our work, allowing us to visualize if any motion was detected in our video, along with the frame delta and thresholded image so we can debug our script.

Note: If you download the code to this post and intend to apply it to your own video files, you’ll likely need to tune the values for cv2.threshold and the --min-area argument to obtain the best results for your lighting conditions.

Finally, Lines 88 and 89 clean up and release the video stream pointer.
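
Pieced together, that wrap-up looks something like the sketch below — the window names, font settings, and q-to-quit key are just sensible choices:

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame, the frame delta, and the thresholded image
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)

    # if the `q` key is pressed, break from the loop
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()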

Results

Obviously I want to make sure that our motion detection system is working before James, the beer stealer, pays me a visit again — we’ll save that for Part 2 of this series. To test out our motion detection system using Python and OpenCV, I have created two video files.

The first, example_01.mp4, monitors the front door of my apartment and detects when the door opens. The second, example_02.mp4, was captured using a Raspberry Pi mounted to my kitchen cabinets. It looks down on the kitchen and living room, detecting motion as people move and walk around.

Let’s give our simple detector a try. Open up a terminal and execute the following command:
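
$ python motion_detector.py --video videos/example_01.mp4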

Below is a .gif of a few still frames from the motion detection:

Figure 5: A few example frames of our motion detection system in Python and OpenCV in action.

Notice how no motion is detected until the door opens — then we are able to detect myself walking through the door. You can see the full video here:

Now, what about when I mount the camera such that it’s looking down on the kitchen and living room? Let’s find out. Just issue the following command:
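
$ python motion_detector.py --video videos/example_02.mp4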

A sampling of the results from the second video file can be seen below:

Figure 6: Again, our motion detection system is able to track a person as they walk around a room.

And again, here is the full video of our motion detection results:

So as you can see, our motion detection system is performing fairly well despite how simplistic it is! We are able to detect as I am entering and leaving a room without a problem.

However, to be realistic, the results are far from perfect. We get multiple bounding boxes even though there is only one person moving around the room — this is far from ideal. And we can clearly see that small changes to the lighting, such as shadows and reflections on the wall, trigger false-positive motion detections.

To combat this, we can lean on the more powerful background subtraction methods in OpenCV, which can actually account for shadowing and small amounts of reflection (I’ll be covering the more advanced background subtraction/foreground detection methods in future blog posts).

But in the meantime, consider our end goal.

This system, while developed on our laptop/desktop systems, is meant to be deployed to a Raspberry Pi where the computational resources are very limited. Because of this, we need to keep our motion detection methods simple and fast. An unfortunate downside to this is that our motion detection system is not perfect, but it still does a fairly good job for this particular project.

Finally, if you want to perform motion detection on your own raw video stream from your webcam, just leave off the --video switch:
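
$ python motion_detector.py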

Summary

In this blog post we found out that my friend James is a beer stealer. What an asshole.

And in order to catch him red-handed, we have decided to build a motion detection and tracking system using Python and OpenCV. While basic, this system is capable of taking video streams and analyzing them for motion while obtaining fairly reasonable results given the limitations of the method we utilized.

The end goal of this system is to deploy it to a Raspberry Pi, so we did not leverage some of the more advanced background subtraction methods in OpenCV. Instead, we relied on a simple yet reasonably effective assumption — that the first frame of our video stream contains the background we want to model and nothing more.

Under this assumption we were able to perform background subtraction, detect motion in our images, and draw a bounding box surrounding the region of the image that contains motion.

In the second part of this series on motion detection, we’ll be updating this code to run on the Raspberry Pi.

We’ll also be integrating with the Dropbox API, allowing us to monitor our home surveillance system and receive real-time updates whenever our system detects motion.

Stay tuned!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

277 Responses to Basic motion detection and tracking with Python and OpenCV

  1. Fabio G May 26, 2015 at 12:33 pm #

    Freakin awesome! Thanks for the tutorial, waiting for the part 2 😀

    • Adrian Rosebrock May 26, 2015 at 1:12 pm #

      Thanks Fabio, I’m glad you enjoyed it! :-)

      • Anje May 25, 2016 at 5:49 am #

        This will work only for stationary camera right?? as for moving camera is there any code for motion detection??

        • Adrian Rosebrock May 25, 2016 at 3:20 pm #

          Correct, this code is meant to work with only a stationary, non-moving camera. If you’re using a moving camera, this approach will not work. I do not have any code for motion detection with a moving camera.

      • Wil October 24, 2016 at 10:49 am #

        I want a program made that detects
        The individual change in a pixel. From
        A streamed video. Can you help.

        • Adrian Rosebrock November 1, 2016 at 9:53 am #

          Detecting changes in individual pixel values is as simple as subtracting the two images:

          diff = frame1 - frame2

          The diff variable will then contain the changes in value for each pixel.

  2. Shashank May 26, 2015 at 3:42 pm #

    Very useful and easy to understand tutorial ! Had no clue on motion detection till now , was a really good intro to it!

  3. Andre May 26, 2015 at 3:56 pm #

    Thank you! This is Awesome!
    Can’t wait to implement on my Pi – Part 2

    • Adrian Rosebrock May 26, 2015 at 4:44 pm #

      Glad you enjoyed it Andre! Part 2 is going to be really awesome as well.

  4. David Hoffman May 26, 2015 at 4:55 pm #

    Yet another great article on PyImageSearch. Thanks for the tutorial Adrian!

    • Adrian Rosebrock May 26, 2015 at 5:53 pm #

      Thank you for the kind words David! 😀

  5. Pablo May 26, 2015 at 5:02 pm #

    Awesome work!! Thanks for the code :)

    • Adrian Rosebrock May 26, 2015 at 5:53 pm #

      No problem, enjoy!

  6. T. Adachi May 26, 2015 at 7:03 pm #

    Hi, nice article. What was the camera you used? I’m looking for one right now and your choice of camera and the rasp pi might be suitable for my needs.

    • Adrian Rosebrock May 26, 2015 at 7:20 pm #

      I’m using this camera board for the Raspberry Pi. It’s fairly cheap and does a really nice job.

  7. Andrew Bainbridge May 27, 2015 at 4:28 am #

If you convert the image to HSV instead of grayscale and just look at the H channel, would that improve performance? I suspect it would reject a lot of the shadow because shadows are typically only a variance in V. I don’t think it would increase the cost significantly. I guess I should download your code and try myself.

    • Satyajityh August 17, 2015 at 1:55 pm #

      Did it work?

  8. Moeen May 27, 2015 at 10:33 pm #

    Thank you so this fantastic post.

    I was wondering how does this code react towards a moving camera? Is there any robust and light weight method to detect moving objects with a moving camera, “camera mounted on a quad-copter” ?

    • Adrian Rosebrock May 28, 2015 at 6:28 am #

      Hey Moeen, if your camera is not fixed, such as a camera mounted on a quad-copter, you’ll need to use a different set of algorithms — this code will not work since it assumes a fixed, static background. For color based tracking you could use something like CamShift, which is a very lightweight and intuitive algorithm to understand. And for object/structural tracking, HOG + Linear SVM is also a good choice. My personal suggestion would be to use adaptive correlation filters, which I’ll be covering in a blog post soon.

  9. xcl May 28, 2015 at 11:20 pm #

    hello,I’m doing a task for moving objects detecting and tracking under the dynamic background,so can you give me a good advice ?thanks

    • Adrian Rosebrock May 29, 2015 at 6:45 am #

      How “dynamic” is your background? How often does it change? If it doesn’t change rapidly, you might be able to use some of the more advanced motion detection methods I detailed at the top of this blog post. However, if your environment is totally unconstrained and is constantly changing, I would treat this as an object detection problem rather than a motion detection problem. A standard approach to object detection is to use HOG + Linear SVM, but there are many, many ways to detect objects in images.

  10. sos June 1, 2015 at 5:24 am #

    Hi Adrian,

very nice tutorial. Thank you, but I have a question. Isn’t that, technically speaking, presence detection? If you stop moving around your office and just stay still, the algorithm will box you. Same if you place something on the table/floor. I understand motion as continuously checking the difference between each present and past frame. I used capture.sequence from picamera to capture 3 frames as 3 different arrays, then process them and diff, and it gives me quite fair results.

    • Adrian Rosebrock June 1, 2015 at 6:30 am #

Presence detection, motion detection, and background subtraction/foreground extraction all tend to get wrapped up into the same bucket in computer vision. They are slightly different twists on each other and used for different purposes. I have a second new post coming out today on motion detection that you should definitely check out, as it’s more true to motion detection than this post is.

  11. Inker June 1, 2015 at 10:13 am #

    Hello Adrian!

    Thank you so much for the comprehensive tutorials! Best that I have seen. :)
    Quick question: in this post (http://bit.ly/1EbNeyY), you say:

    “You might guess that we are going to use the cv2.VideoCapture  function here — but I actually recommend against this. Getting cv2.VideoCapture  to play nice with your Raspberry Pi is not a nice experience (you’ll need to install extra drivers) and something you should generally avoid.”

    However in this tutorial, you use cv2.VideoCapture.

    Can you explain the change?

    Thank you again!

    ~Evan

    • Adrian Rosebrock June 1, 2015 at 10:25 am #

      Hey Evan, the code in this post is actually not meant to be run on the Raspberry Pi — it’s meant to be run on your desktop/laptop. The motion detection and home surveillance code for the Raspberry Pi is actually available on over here.

      • Inker June 1, 2015 at 10:44 am #

        Ah, ok. My bad.

        The following above threw me off:

        “So I did what any man would do.

        I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:”

        • Adrian Rosebrock June 1, 2015 at 10:59 am #

          Yeah, perhaps I could have been a bit more clear on that. In the section below it I say:

          In the second post in this series I’ll show you how to update the code to work with your Raspberry Pi and camera board — and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.

          Indicating that there is a second part to the series, but I can definitely see how it’s confusing.

  12. asha June 14, 2015 at 10:30 pm #

    Wow! Great tutorial. Thanks.

  13. Matthew June 25, 2015 at 6:50 pm #

I am stepping through these tutorials on a Pi B+. I am able to get through this tutorial; the only major issue was that initially I had not installed imutils, but after installing it the code works (kinda): the cursor simply moves to the next line, blinks a handful of times, and then the prompt pops back up. I have dropped a few debug lines in the code to ensure the code is executing (and it is), it just doesn’t seem to be executing in a meaningful way. The camera for sure works (tested it after running the code). Any ideas as to what might be happening?

    EDIT: Oops….. I just read the comment that says that this was not meant to be run on a pi….my bad

    • Adrian Rosebrock June 26, 2015 at 5:57 am #

      No worries Matthew! The reason the script doesn’t work is because it’s trying to use the cv2.VideoCapture function to access the Raspberry Pi camera module, which will not work unless you have special drivers installed. To access the Raspberry Pi camera you’ll need the picamera module. I have created a motion detection system for the Raspberry Pi which you can read more about here. I hope that helps!

  14. Almog June 26, 2015 at 1:59 pm #

    Hello Mr Adrian,

    When I’m trying to lunch the code, I am getting this error ” File “pi_surveillance.py”, line 8, in from picamera.array import PiRGBArray”

    I am using a raspberry pi camera, and I used your guide on how to install opencv on rapsberry pi and I didn’t have any error.

    What did I do wrong?

    Thank you

    • Adrian Rosebrock June 26, 2015 at 7:15 pm #

      Hey Almog, have you installed the “picamera[array]” module yet? Executing:

      $ pip install "picamera[array]"

      will install the picamera module with NumPy support. You should also read this post on the basics of accessing the camera module of the Raspberry Pi.

  15. John Beale July 3, 2015 at 10:11 pm #

    I started with your code and got something that is pretty good for detecting cars, and sometimes pedestrians too. https://www.youtube.com/watch?v=unMbtizfeUY&feature=youtu.be
    With an outdoor scene, trees waving around etc. the trick is to update the background reference image without getting it contaminated by moving objects. I’d be happy to make my version available, but it is based on yours and I’m not sure if your code is open source.

    • Adrian Rosebrock July 4, 2015 at 7:38 am #

      Awesome, very nice work John! Feel free to share, I would be very curious to take a look at the code, as I’m sure the rest of the PyImageSearch readers would be as well!

  16. John Beale July 5, 2015 at 10:13 pm #

    Hi Adrian,
    Ok, I put my code here: https://github.com/jbeale1/OpenCV/blob/master/motion3.py
    also a post with picture here:
    https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=114550&p=784460#p784460
    The code is very specific to that particular camera view; for example there is a line that restricts objects of interest to the upper half of the screen (based on yc coordinate), where the road is, to ignore pedestrians and moving tree shadows in the lower part of the frame.

    • Adrian Rosebrock July 6, 2015 at 6:17 am #

      Thanks so much for sharing John, I look forward to playing around with it! Great work! :-)

  17. mohamad July 6, 2015 at 3:35 am #

    Dear Adrian
    where is the ‘imutils’ path?
    I need to know folder that include this file on My Raspberry pi 2, after “pip install imutils”
    I search and not found in /usr folder.

    • Adrian Rosebrock July 6, 2015 at 6:15 am #

      Check in the site-packages directory for the Python version that you are using.

      But in general, you don’t need to “know” where pip installs the files. You can simply start using them:

      $ python
      >>> import imutils
>>> ....

  18. tc July 7, 2015 at 2:19 am #

    Hi, thanks for this great tutorial.
    I am new to opencv (and python as well), and trying to follow your steps on this tutorial, but when I running the script, I got this error:
    from convenience import translate
    ImportError: No module named 'convenience'

    I have installed the imutils, but seem something is missing in the package. Any idea why?

    TC

    • Adrian Rosebrock July 7, 2015 at 6:29 am #

      Hey TC, what version of Python are you using?

      • TC July 7, 2015 at 6:37 am #

        I am using python 3.4 on a Linux Arch machine.
        However I am able to fix the problem by replacing the
        from convenience import ...
        to
        from imutils.convenience import ....
        in the __init__.py

        However, I got another error when trying to execute the code (which I downloaded from your site):
        File "motion_detector.py", line 61, in
        cv2.CHAIN_APPROX_SIMPLE)
        ValueError: too many values to unpack (expected 2)

        ermm…missing one variables in this line ?
        (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)

        • Adrian Rosebrock July 7, 2015 at 8:45 am #

          I figured it was Python 3. The imutils package is only compatible with Python 2.7 — I’ll be updating it to Python 3 very soon. Also, at the top of this post I mention that the code detailed is for Python 2.7 and OpenCV 2.4.X. You’re using OpenCV 3.0 and Python 3, hence the error. The cv2.findContours function changed in OpenCV 3, so change your line to:

          (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

          and it will work.

          • tc July 7, 2015 at 1:13 pm #

            yes..thank you very much. Now it’s working. The problem now is the tracking seem not accurate like the demos above. Is this has something to do with the camera model? Because now I am using the laptop builtin webcam.

          • Adrian Rosebrock July 7, 2015 at 1:31 pm #

            Poor tracking could be due to any number of things, including camera quality, background noise, and more importantly — lighting conditions.

          • tc July 7, 2015 at 2:55 pm #

            I see. Thanks for everything!

  19. Kaspars July 11, 2015 at 4:41 pm #

    Hello Adrian,

    Thank you for your tutorial. It has been very helpful to me. I also have to admit that John’s code has been useful as well.

    I’m trying to make a vehicle detection and tracking program (nothing fancy – mainly for fun). So far I have been very satisfied with the program, but I feel like, that finding a difference between the current frame and the first one is not the best solution for me, because in some test videos it results in false detection, mainly because of huge changes between frames etc.

    Maybe you can give any advice how to improve or fix this? Also – if you have other advices in terms of vehicle detection and tracking, I would be very glad to hear about them.

    Anyway – Thank you in advance.

    • Adrian Rosebrock July 12, 2015 at 7:44 am #

      Hey Kaspars, take a look at my post on performing home surveillance using a (slightly) more robust algorithm on the Raspberry Pi. This method uses a running average as the background model to help prevent those false positives.

      • Kaspars July 12, 2015 at 8:56 am #

        Okay, I will take a look.

        Thank you once again. :)

  20. Gabriel Bosse July 13, 2015 at 6:17 pm #

    Thanks a lot for this tutorial. Do you know what would be the best way to record that motion ? Like distance travelled (in pixel) or velocity ?

    • Adrian Rosebrock July 14, 2015 at 6:23 am #

      Hey Gabriel, I have not done any tutorials related to velocity, but it is certainly possible. But in the most simplistic form, the algorithm is quite simple if you define two identifiable markers in a video stream and know the distance between them (in feet, meters, etc.) Then, when an object moves from one marker to the other, you can record how long that travel took, and be able to derive a speed. Again, while I don’t have any tutorials related to velocity, I think this tutorial on computing the distance to an object might be interesting for you.

  21. Supra July 19, 2015 at 8:23 am #

    @tc
Can you send me code? I’m using python3. But I used sudo python3,
so I am focused only on python3.

  22. Alexandre July 25, 2015 at 11:29 am #

    Hello , thank you for the tutorial , it was really very good.
    I needed to do a system similar to his but with the use of ip camera . You know what should I do ? I could not get the video from an IP address.
    Thank you so much

    • Adrian Rosebrock July 25, 2015 at 11:51 am #

Hey Alexandre, you can still use this code with an IP camera, you just need to change the cv2.VideoCapture function to accept the address of the camera. Another approach is to try to parse the stream of the camera directly. I personally have not done this before, but I hope it helps get you started.

  23. mohammad July 25, 2015 at 5:13 pm #

    wow . thanks for the tutorial . and thanks for the time you spend to write these tutorials for us :)

    thank you very very … much 😉

  24. Anthony July 27, 2015 at 1:30 pm #

    Hello Adrian,

    I have installed imutils in the terminal under CV, if i am not under CV and try to install i get an error message. When i am in python editor and input “import imutils” i get an error stating no module named imutils. I am using Python 2.7.3. Please let me know what I am doing wrong.

    Tony

    • Adrian Rosebrock July 28, 2015 at 6:40 am #

      You must be in the cv virtual environment to access any packages installed in that environment. Your cv virtual environment is entirely independent from all other packages installed on your system.

      Be sure to access your virtual environment by using the workon command:

      $ workon cv
      $ python
      >>> import imutils
      ...

      • Tony July 29, 2015 at 11:52 am #

        Adrian,

        Thanks for this, however, I get syntax errors every time i input “Firstframe = none” and “camera.release()” which starts over at >>> instead of … which means I have to do it over again but doesn’t change the outcome. Also, just curious. I noticed at some places if i put in the “# code” the following code doesn’t work and other spots if i don’t put it in the following code doesn’t work. Could you let me know if I need to input the “# code”?

        Thanks, Tony.

        • Adrian Rosebrock July 30, 2015 at 6:42 am #

          Tony: This code is meant to be executed via command line, not via Python IDLE. Please download the source code using the form at the bottom of this post and execute it that way.

      • Felipe M November 12, 2016 at 10:24 am #

        Hi Adrian

        I’m having this same issue, and I also tried to run on cv mode without success, do you have any idea about what is happening?

        Best regards

        • Adrian Rosebrock November 14, 2016 at 12:10 pm #

          Are you referring to the imutils error? If so, you likely did not install imutils into the cv virtual environment:

  25. SAF August 4, 2015 at 10:49 am #

    Hi Adrian,

    Excellent tutorials, both this and the one detailing the use of the camera.
    I am however worried about the performance of the motion detection, even on an RPi 2.
    Due to the capturing process already using lots of CPU, I tried using different threads for capturing and for motion detection, to spread the load on the cores. Thing is, even at 4 FPS, the motion detection consistently lags behind the capturing thread.
    What was your experience with this?

    Code here: https://github.com/smarmie/rpi-art

    Thanks.

    • Adrian Rosebrock August 4, 2015 at 1:14 pm #

      4 FPS sounds a bit slow. Have you tried processing smaller frames? If you resize the frames to a smaller size, the less data you have to process, and thus the faster your algorithms will run.

      • SAF August 5, 2015 at 7:42 am #

        Yes, I though about that. I don’t know which would have a better precision: capturing directly at a smaller resolution, or capturing at a higher resolution and resizing before processing?

        • Adrian Rosebrock August 6, 2015 at 6:24 am #

          Capturing directly at a smaller resolution should have better speed tradeoffs than capturing at a higher resolution and resizing afterwards (since you can skip the resizing/interpolation step). However, that would be something to test directly and view the results.

  26. irfan August 9, 2015 at 5:54 am #

    hello adrian

    thank you for this tutorial, but i have a problem, i got message
    File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py” line 37, in resize
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    can you help me ?

    • Adrian Rosebrock August 9, 2015 at 7:04 am #

      Hey Ifran, if you’re getting an error related to the shape of the matrix being None, then the problem is almost certainly that the frame is not being properly read from the webcam/video. Make sure the path you supplied to the video file is correct.

    • taufiq March 23, 2016 at 12:10 pm #

      do u solved this problem ? i have same problem and dont have idea how to solve. im new btw

      • Adrian Rosebrock March 24, 2016 at 5:17 pm #

        Double check that you can access the builtin/USB webcam on your system. If you’re getting an error related to an image/frame being None, then frames are not being properly read from your video stream. If you’re using the Raspberry Pi, you should use this tutorial instead.

  27. Ori August 10, 2015 at 2:53 pm #

    Hi Adrian,

    Thanks for the tutorial!

    I have a question, if we are detecting motion using a delta between the FirstFrame and the new one, and i’m guessing that we are doing something like this:
    delta pixel=abs(firstFrame_pixel – newFrame_pixel).
    if the new pixel will be black and the number that represent black is 0 so we will get the original pixel without ant change.
    and how this pixel will be detect?

    Thanks!

    • Adrian Rosebrock August 11, 2015 at 6:32 am #

Yes, computing the absolute difference is a really simple method to detect change in pixel values from frame to frame. I would take a look at Lines 50 and 51 where I compute the absolute difference and then threshold the absolute difference image. All pixels that have a difference > 25 are marked as “motion”.

  28. Kitae August 12, 2015 at 5:10 am #

    hello adrian
    thank you for the tutorial!!
    i followed all tutorial from installing python, opencv and testing video.
    but i have a problem opening ‘motion_detection.py’
    nothing happens when i type ‘python motion_detection.py’
    i recorded the problem.
    i would be very thankful if you help me.

    thank you!

    https://youtu.be/rXeMjQXMtpU

    • Adrian Rosebrock August 12, 2015 at 6:19 am #

      It seems like for whatever reason OpenCV is not pulling frames from the video or camera feed, I’m not sure exactly why that is. When you compiled and installed OpenCV on your Raspberry Pi, did you see if it had camera/video support? I would suggest using the OpenCV install tutorial I have detailed on the PyImageSearch blog. Step 4 is really important since that is where you pull in the video pre-requisites.

      • Kitae August 13, 2015 at 12:53 pm #

        Thank you for feedback!

        I tried it and it says they are the newest version.
        I wonder that ‘python test_video.py’ works very well
        and ‘python motion_detector.py’ doesn’t work…

        • Adrian Rosebrock August 14, 2015 at 7:22 am #

Oh, I see the problem now. The test_video.py script uses the picamera module to access the Raspberry Pi camera. However, the code for this blog post uses the cv2.VideoCapture function, which will only work if you have the V4L drivers installed. Instead, see this post on motion detection for the Raspberry Pi.

  29. Hanna August 20, 2015 at 2:06 pm #

    Thanks for another great tutorial Adrian! Your tutorials have given me the ability to jump into working with OpenCV without much startup time.

    • Adrian Rosebrock August 21, 2015 at 7:15 am #

      I’m glad you enjoyed it Hanna! :-)

  30. haim August 23, 2015 at 8:26 am #

    Hi,

    thanks for the great tutorial! it’s very helpful.
    one question though, in this tutorial you use: camera = cv2.VideoCapture(0)
    while in this tutorial:
    http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/

    you said you prefer to use picamera module: (from comments)
    “When accessing the camera through the Raspberry Pi, I actually prefer to use the picamera module rather than cv2.VideoCapture. It gives you much more flexibility, including obtaining native resolution. Please see the rest of this blog post for more information on manually setting the resolution of the camera”

    so what changed here?

    • Adrian Rosebrock August 24, 2015 at 6:44 am #

      The main difference is that in the second post I am using the picamera Python module to access the camera attached to the Raspberry Pi. Take a look at the source code of the post and you’ll notice I use the capture_continuous method rather than the cv2.VideoCapture function to access the webcam. But again, that post is specific to the Raspberry Pi and the Pis camera module.

  31. Dan September 3, 2015 at 9:44 pm #

    I am getting an import error no module named pyimagesearch .transform.any ideas what I’ve done wrong

    • Adrian Rosebrock September 4, 2015 at 6:39 am #

      Hey Dan, did you download the source code to this post using the form at the bottom of the page? The .zip of the code download includes the pyimagesearch module. I’m not sure where the transform error is coming from, I assume from the imutils package. So make sure you install imutils:

      $ pip install imutils

  32. Alejandro Barredo September 14, 2015 at 7:51 am #

    Hello,
    I’m trying to test this first part and im having a problem when compiling it:

    Traceback (most recent call last):
    File “***********”, line 60, in
    (_,cnts) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    I’ve looking for a solution but i couldnt
    could you give me a push
    Thank you

    • Adrian Rosebrock September 14, 2015 at 9:57 am #

      It sounds like you’re using OpenCV 3 which has made changes to the return signature of the cv2.findContours function. Change the line of code to:

      (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) and the method will work with OpenCV 3.

  33. urswin September 22, 2015 at 4:42 am #

    Cant wait to try this out, thanks man.

  34. Talha September 29, 2015 at 4:31 pm #

    Hi, I tried to run this code on my python 2.7 with opencv 3.0 but its not working. I am student of Final year and doing fyp. We have a fyp of gesture wheel control chair. We are trying our hard to get more close in this project but some issues are coming.

    Is it possible I can get some help from you. I shall be very thankful to you if you guide me.

    thanks

    • Adrian Rosebrock September 30, 2015 at 6:31 am #

      Hey Talha — when you say the code is “not working”, what do you mean? Are you getting an error of some kind?

  35. jai October 5, 2015 at 7:22 am #

    Hi, Thanks for the excellent post.
I was learning object detection with OpenCV and Python using your code. The moving object in my video was small (rather than a human, it’s an insect moving on a white background) and the video was captured by a 13 megapixel mobile camera. When the object starts to move, it leaves a permanent footprint at the initial point, and hence the tracker will show two rectangles: one at the point of origin, and the other tracking the object’s current position.

    Why does it detect two contour instead of one which is actually tracking the movement.

    • Adrian Rosebrock October 5, 2015 at 7:08 pm #

      The reason two contours are detected is because the original video frame did not contain the footprint. This is a super simplistic motion detection algorithm that isn’t very robust. For a more advanced method that will help solve this problem, please see this post.

  36. Tim Clemans October 9, 2015 at 6:22 pm #

    I’m a Seattle Police software developer tasked with figuring out how to auto redact police videos to post on Youtube, see http://www.nytimes.com/2015/04/27/us/downside-of-police-body-cameras-your-arrest-hits-youtube.html Using your code from this post I was able to generate https://www.youtube.com/watch?v=w-g1fJs3LgE&feature=youtu.be which is a huge improvement on just blurring all the frames. I haven’t figured out how to blur inside the contour. Could you please provide an example of how to do that? So far this is the most reliable thing I’ve found yet. Both tracking an ROI and doing head detection are problematic.

    • Adrian Rosebrock October 10, 2015 at 6:45 am #

      Hey Tim — thanks for the comment. I’ll add doing a blog post on blurring inside head and body ROIs to my queue.

  37. Arm October 29, 2015 at 12:33 am #

    Thank you Adrian! I’ve tried to read and follow and it was so amazing how one can detect motions like that!! :-)

    • Adrian Rosebrock November 3, 2015 at 10:37 am #

      Thanks for the kind words Arm! 😀

  38. Luis LLanos November 21, 2015 at 10:31 pm #

    Hey Adrian as usual a great post, Maybe Could You suggest some good books or blogs about opencv and java or c++ or android.???? Python is great but sometimes in Industry we need faster results, quickly executions THANKS

  39. MAK December 3, 2015 at 11:18 am #

    Hi!! Great Tutorial.. :)

    I was wondering if you can do a tutorial on object detection and tracking from a moving
    camera(UAV/drone). It would be highly appreciated.

    Thanx!

    • Adrian Rosebrock December 3, 2015 at 12:50 pm #

      I’ll certainly consider it for the future!

  40. Seungwon Ju December 5, 2015 at 10:21 am #

    Hello. My name is Seungwon Ju from South Korea.

    This is fascinating. I’m following your guide for my Highschool Research Presentation.
    Thanks to you, I could make CCTV with my raspberry Pi without PIR sensor.

    Thank you very much!

    • Adrian Rosebrock December 6, 2015 at 7:19 am #

      I’m happy you enjoyed the post Seungwon Ju — best of luck on your presentation!

  41. Ahmed December 8, 2015 at 7:00 pm #

    Hello Adrian, thank you for sharing this tutorial, it really helped me for completing some tasks, nice to meet you and i’m waiting for the other tutorials 😀

    • Adrian Rosebrock December 9, 2015 at 6:54 am #

      Thanks Ahmed! :-)

  42. Martin Cremona December 16, 2015 at 3:56 pm #

    Hi Adrian, thank you for this great tutorial! i was looking for something like this.

    I have to ask, how do you achieve it at such a speed?? i have your exact same configuration (or at least that’s what i think), but i can’t make it work as fast as you do. I started from scratch. I followed your tutorial on how to install opencv and python, then imutils and then this project. Do you have something else to improve the performance?? or i’m missing something??

    P.d:sorry for my bad english :)

    • Adrian Rosebrock December 17, 2015 at 6:28 am #

      No worries, your english is great. To start, make sure you are using a Pi 2. That’s definitely a requirement for real-time video processing with the Raspberry Pi. Secondly, try to make the image you are processing for motion as small as possible. The smaller the image is, the less data there is, and thus the pipeline will run faster.

      Also, keep an eye on the PyImageSearch blog over the next few weeks. I’ll be releasing some code that allows the frames to be read in a separate thread (versus the main thread). This can give some huge performance gains.

  43. slava December 19, 2015 at 12:37 pm #

    Hey, Adrian, thanks for your work.
    I have a problem while trying to run the code. When i’m typing like:

    python motion_detector.py

    in order to get motion detection from the webcam, nothing is going on.
    (i mean i can’t see any result, i think code just executes and that’s it)

    And when i’m trying to execute your example (i downloaded it):

    python motion_detector.py --video videos/example_02.mp4

    i get an error

    Can you give me some advice?
    Thanks

    • Adrian Rosebrock December 20, 2015 at 9:45 am #

      Hey Slava: please read through the comments before submitting. I’ve answered this question twice before on this post — see my reply to “Alejandro” and “TC” above for the cv2.findContours fix.

      As for a video stream not displaying up, ensure that your webcam is properly plugged in and OpenCV has been compiled with webcam support.

  44. Aldi December 21, 2015 at 9:56 pm #

Hello Adrian, great tutorial. I’m using Python 3 and OpenCV 3, and I’ve successfully installed imutils.
The question is: why, every time I start the program, does it show no result or error — it just starts and stops? I know I have to use Python 2.7 and OpenCV 2.4.X, but the Raspberry I’m using has OpenCV 3 and Python 3 installed. Is there any way to make it work on the system I’m using?

    • Adrian Rosebrock December 22, 2015 at 6:30 am #

      You’re using your Raspberry Pi? I also assume you’re using the Raspberry Pi camera module and not a USB camera? If so, you’ll need to access the Pi camera module. An updated motion detection script that works with the Raspberry Pi can be found here.

  45. slava December 22, 2015 at 4:39 am #

    Yeah, sorry, i found the answer in few mins after i wrote my question.
    Anyway thank you for your reply, that you do not ignore the question that has already been answered.

    • Adrian Rosebrock December 22, 2015 at 6:22 am #

      No worries, I’m happy to hear you found the solution.

  46. Nicholas January 13, 2016 at 10:28 pm #

    can you help me if i want to use another algorithm like phase only correlation or haar-like features, what I must suppose to do??

    • Adrian Rosebrock January 14, 2016 at 6:14 am #

      If you want to train your own Haar classifier, I would give this tutorial a try. I’ll be covering correlation tracking on the PyImageSearch blog in the future.

      Another great alternative is to use HOG + Linear SVM, which tends to have a lower false-positive detection rate than Haar. I cover the implementation inside PyImageSearch Gurus.

  47. Mithun.S January 14, 2016 at 3:21 am #

    Hey Adrian! I’m Mithun from India. I would like to know whether this can be used to do a project on accident detection using video camera.

    • Adrian Rosebrock January 14, 2016 at 6:13 am #

      It certainly could, but you might need to add a bit of machine learning to classify what is a car/truck, and if traffic is flowing in a strange pattern (indicating a car accident).

  48. Nghia Le January 18, 2016 at 10:48 am #

    Thank you, great article and useful to me. I’ll wait for part 2. By the way, I’m doing a traffic monitoring device (detecting speeding, lane encroachment, red light). Raspberry can do that?

    • Adrian Rosebrock January 18, 2016 at 3:18 pm #

I personally haven’t done traffic monitoring on the Pi, so I can’t give an exact answer. My guess is that it can do basic monitoring, but anything above a few FPS is likely unrealistic unless you want to code in C++. To be honest, I think you might need a more powerful system.

  49. Jason Turner February 2, 2016 at 6:00 pm #

    Hi great article and very useful could the code be changed to work with an IP Camera as I Don’t have an pi camera as of yet.

    • Adrian Rosebrock February 4, 2016 at 9:22 am #

Yes, this could certainly be used for a Raspberry Pi camera. I’ll try to do a blog post on this in the future.

  50. duygu February 9, 2016 at 9:44 am #

    Hi Adrian,
    Lovely tutorial!!!

    I have a quick question. I made a video shot with my phone cam and implementation is quite shadow sensitive. It detects small light changes on keyboard of my computer as movement for instance.

    Any suggestions to reduce shadow/light sensitivty?

    • Adrian Rosebrock February 9, 2016 at 3:52 pm #

      Lighting conditions are extremely important to consider when developing a computer vision application. As I discuss in the PyImageSearch Gurus course, the success of a computer vision app starts before a single line of code is even written — with the lighting and environment. It’s hard to write code to compensate for poor lighting conditions.

      All that said, I will try to do some blog posts on shadow detection and perhaps even removal in the future.

  51. liudr February 15, 2016 at 12:46 am #

    Thanks for the tutorial. For some reason my setup is not working. I tested with raspistill and my camera has a live feed. Th program will run a few seconds with out output and quits. If I run a few lines of the code, I found that the camera fails to grab any frames with camera.read() and quits. Any ideas of why the camera may fail to grab frames?

    • Adrian Rosebrock February 15, 2016 at 3:07 pm #

      That’s definitely some strange behavior on the camera.read part. Are you executing the code provided in the source code download of this post? Or executing it line-by-line in IDLE?

      • İsmet May 28, 2016 at 2:26 pm #

        Hi Adrian.
        I use Rpi 3 and Rpi Camera Module v1.3. I cant run with live stream. I tried on terminal and Python2 idle. I didnt give error. Camera led didnt light. How can i run with live stream?

        • Adrian Rosebrock May 29, 2016 at 1:57 pm #

          It sounds like your Raspberry Pi is having trouble accessing the camera module. I would start with this tutorial and work your way through it to help debug the issue.

          • Ismet June 2, 2016 at 12:52 pm #

            I can run your code survilance cam with dropbox. But i cant run this code.

          • Adrian Rosebrock June 3, 2016 at 3:05 pm #

            If you can run the home surveillance code, then I presume you’re using the Raspberry Pi camera module. This post assumes you’re using a USB webcam and the cv2.VideoCapture function. You can either update this code to use the Raspberry Pi camera module, or better yet, unify access between USB and Pi camera modules.

  52. Mathilda February 19, 2016 at 8:54 am #

    hi adrian
    thanks for the great tutorial
    I’ve got a problem… the code works, but only for the sample video…
    I want to run it on my own raspberry pi camera video…
    what should I do exactly?
    is it possible to make it work real-time?

  53. Danish March 1, 2016 at 1:52 am #

    Can you please give me something with which I can track motion using my webcam. I don’t have raspberry pi.
    Thanks in Advance

    • Adrian Rosebrock March 1, 2016 at 3:43 pm #

You can use the code detailed in the blog post you just commented on to track motion using a builtin/USB webcam. All you need is the cv2.VideoCapture function, which this blog post explains how to use. I also cover how to use the cv2.VideoCapture function for face detection and object tracking inside Practical Python and OpenCV.

  54. Joe March 3, 2016 at 2:05 am #

    So I am getting this error and I am not sure what is going on. Could I get some help and your opinion on it? I get the same error with the downloaded Code along with just copying down the code myself.

    ValueError: too many values to unpack

    • Adrian Rosebrock March 3, 2016 at 7:01 am #

      Please see my reply to “TC” above. You’ll also want to read this blog post on checking your OpenCV version. You’re using OpenCV 3, but the blog post assumes OpenCV 2.4. It’s a simple fix to resolve the issue once you give the post a read.

  55. Rishabh March 13, 2016 at 8:29 am #

    Hi Adrian,

    Could you link us to some of your posts about image processing specific to the PiCamera?
    I keep running into errors trying your code, except for the “accessing-the-raspberry-pi-camera-with-opencv-and-python” post, which works flawlessly. But I'd like to see how we can build from that. Again, any sort of image processing specific to the PiCamera.

  56. Shrikrishna Padalkar March 14, 2016 at 11:52 am #

    Hello Adrian.
    I am getting the following error:

    Traceback (most recent call last):
    ValueError: too many values to unpack

    Please help me solve this error.

    Thanks.

    • Adrian Rosebrock March 14, 2016 at 3:18 pm #

      Please read the previous comments before posting. Specifically, my replies to Alejandro and TC detail how to solve this problem.

  57. qlkvg March 16, 2016 at 9:26 am #

    I had a brain orgasm while reading. Thanks for the awesome tutorial.

  58. Jean-Pierre Lavoie March 19, 2016 at 3:10 pm #

    Hi Adrian,
    This is great and thanks for your feedback for the first tutorials! Now in this one, when I execute the python script: python motion_detector.py, I get these error messages:

    Traceback (most recent call last):
    File “motion_detector.py”, line 58, in
    cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    Any idea what is the problem?
    Thanks a bunch!
    JP

    • Adrian Rosebrock March 20, 2016 at 10:43 am #

      Please read through the comments before posting — your question has already been answered multiple times. See my reply to “TC” and “Alejandro” above.

  59. Abhijit March 22, 2016 at 8:59 am #

    Hello,
    I have tried to implement this script on the Windows operating system. When I run the script it does not display an error, but it does not display any frame either.

    When I run the command below, it returns to the next prompt but does not display any video frame as shown in your blog:

    C:\Python27>python motion_detector.py --video example_01.mp4

    C:\Python27>

    • Adrian Rosebrock March 22, 2016 at 4:16 pm #

      I’m not a Windows user (and I don’t recommend Windows for working with computer vision), but I would suggest (1) double checking that the path to the video file is valid and (2) ensuring that your Windows system has the valid codecs to read the .mp4 file.

  60. Shivam March 28, 2016 at 2:03 am #

    Superb work, sir. Thanks very much for this tutorial. It is really helpful, and the code is easily understandable to a rookie in programming.

    • Adrian Rosebrock March 28, 2016 at 1:31 pm #

      I’m happy I could help Shivam :-)

  61. Bleddyn Raw-Rees March 30, 2016 at 5:10 am #

    Hi Adrian,

    Firstly, thanks for a brilliant tutorial.

    And secondly I was wondering whether you'd be willing to suggest a way of splitting input video? What I mean is, for example, if there's a 10-minute clip with 30 seconds of motion somewhere in the middle, I would want the output video to be just the 30 seconds (plus a couple of seconds on either side, perhaps). I've worked out that this can be done using FFMPEG, but I'm not sure how to retrieve the in and out points from your code to feed into FFMPEG.

    So I suppose that my questions are:

    1) Is using FFMPEG a necessary/wise choice for splitting the video?
    2) How do I get in and out points from your motion detection code?

    Any advice you could give would be greatly appreciated.

    Thanks

  62. Reza April 16, 2016 at 4:42 am #

    It works, thanks Adrian… you are a pro.

    • Adrian Rosebrock April 17, 2016 at 3:32 pm #

      Thanks Reza! :-)

  63. Ankit Pitroda April 19, 2016 at 3:42 am #

    Hey Adrian,
    Really awesome tutorial from your side.
    I always appreciate your work.
    You are really a god of OpenCV.

    I am facing one problem.
    If I run the code on captured video from my camera, like the two tutorial videos you provided, it works fine.

    But with the live camera it won't work properly.

    What will be the solution?

    • Adrian Rosebrock April 19, 2016 at 6:52 am #

      What type of camera are you using? I would start with that question and then do a bit of research to see if it’s compatible with your system and/or OpenCV. I think the real problem is that your system is unable to access your webcam. Do some debugging and find out why that is. From there, you’ll be able to move forward.

      • ankit May 9, 2016 at 6:24 am #

        No no,
        the camera is working fine.

        But from the very first frame it shows “Occupied” in my case,
        so even if there is no object movement inside the frame it still shows “Occupied”.

        Awaiting your reply, and thanks for the quick response.

        • Adrian Rosebrock May 9, 2016 at 6:56 pm #

          Hi Ankit — I think the issue is with your camera sensor warming up and causing the initial frame to be distorted. I would place a call to time.sleep(2.0) after cv2.VideoCapture to ensure your camera sensor has had time to warm up. Another option is to apply a more advanced motion detection algorithm such as the one detailed in this blog post.
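
          Something along these lines (the 2.0 second pause is a rule of thumb, not a hard requirement):

          import cv2
          import time

          camera = cv2.VideoCapture(0)
          time.sleep(2.0)  # let the sensor warm up before grabbing the first frame
          (grabbed, firstFrame) = camera.read()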

  64. Akhil April 19, 2016 at 6:12 am #

    Hi Adrian,

    Your article is very helpful, and actually all the content on this website is very useful. I wanted to ask: is part 2 out?

    • Adrian Rosebrock April 19, 2016 at 6:47 am #

      Thanks Akhil! And by “Part 2”, do you mean the Raspberry Pi + motion detection post? If so, you can find it here.

  65. Ali April 20, 2016 at 3:45 pm #

    Hi Adrian,

    Thank you very much for this tutorial. I'm new to computer vision! I'm currently working on a project which involves a background subtraction technique. Your code uses the first frame as a reference for subsequent frames, and that is how it detects motion. All I need is a reference frame that changes over a specified period of time, and then to do exactly what the rest of the code does. How do I modify your code (if that's okay) to achieve that?

    To be more specific; a reference frame that continuously changes over a specified period of time.

    • Adrian Rosebrock April 20, 2016 at 5:57 pm #

      I actually cover how to solve this exact question in this post :-)

  66. Kevin April 25, 2016 at 11:23 pm #

    Hi Adrian,

    Thank you very much for this tutorial. I'm a student learning this for the first time.
    I want to know if this can really be used with a servo motor for tracking? If the camera tracks and the background changes, everything will become a target.

    I want to know anything that can help me follow the object once it has been found.

    • Adrian Rosebrock April 26, 2016 at 5:15 pm #

      With this method, you won’t be able to use a servo since the algorithm assumes a static, non-moving background.

  67. Jean-Pierre Lavoie April 28, 2016 at 8:47 pm #

    Hi Adrian. This is a simple question, but how do you rotate the camera 180 degrees in your code? Right now it's upside down the way my camera is set up. Normally with PiCamera I do the following:

    camera.rotation = 180

    and it works. But in your code if I do this after your line:
    camera = cv2.VideoCapture(0)

    I get an error message.

    • Adrian Rosebrock April 30, 2016 at 4:04 pm #

      I would use the cv2.flip function to flip the image upside down:

      frame = cv2.flip(frame, 0)
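
      Note that a flipCode of 0 flips around the x-axis only; if you want the exact equivalent of picamera's 180 degree rotation, flipping around both axes should do it:

      frame = cv2.flip(frame, -1)  # flip both axes = rotate 180 degrees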

  68. Wanderson May 1, 2016 at 11:26 pm #

    Hi Adrian, how are you?
    My code doesn’t work very well.

    When I run the program it always shows “Occupied”, even when the first frame contains only the background. My webcam is good quality (a Philips SPC 1330). What do you think it is?

    Thanks a bunch!

    • Adrian Rosebrock May 2, 2016 at 7:48 pm #

      This is likely due to your camera sensor still warming up when the first frame is grabbed. Either use time.sleep(2.0) after the initial call to cv2.VideoCapture to allow the sensor to warm up, or better yet, use the motion detection method utilized in this blog post.

      • Wanderson Souza May 2, 2016 at 9:16 pm #

        Thanks Adrian!

  69. Akhil May 3, 2016 at 2:19 am #

    HI Adrian,
    I just wanted to know the time complexity of this code. What complexity do these predefined functions run in?

    • Adrian Rosebrock May 3, 2016 at 5:47 pm #

      Which functions are you specifically referring to?

  70. Wanderson May 4, 2016 at 12:41 am #

    Hello, again, Adrian

    Is it possible to use a folder of background images as the first frame?

    Thanks a bunch

    • Adrian Rosebrock May 4, 2016 at 12:32 pm #

      Absolutely! Instead of using a folder of images, I instead use the past N images from a video stream to model the background in this post, but you can easily update it to use a folder of images. The key to this method is to use the cv2.addWeighted function.
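
      As a rough sketch of averaging a folder of background images with cv2.addWeighted (the folder path is just a placeholder):

      import cv2
      import glob

      # hypothetical folder of background-only images
      paths = sorted(glob.glob("backgrounds/*.jpg"))
      avg = cv2.imread(paths[0]).astype("float")

      for (i, p) in enumerate(paths[1:], start=2):
          image = cv2.imread(p).astype("float")
          # fold each new image into the running average, weighted by count
          avg = cv2.addWeighted(avg, (i - 1) / float(i), image, 1.0 / i, 0)

      # use the averaged background as the reference frame
      firstFrame = cv2.cvtColor(avg.astype("uint8"), cv2.COLOR_BGR2GRAY)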

  71. furrki May 6, 2016 at 11:51 pm #

    Hi bro. Really nice tutorial. I really enjoyed it. Thank you for this well-worked tutorial ^_^
    Greetings from Turkey

    • Adrian Rosebrock May 7, 2016 at 12:36 pm #

      No problem, I’m glad you enjoyed it!

  72. Roberto May 10, 2016 at 1:52 pm #

    This has been wonderful to read/follow. Thanks for all the work you put into these, along with the descriptions to really help build and understanding of what’s actually taking place.

    I do have one question, however: what would be the best way to have this change from “Occupied” to “Unoccupied” and reset the motion tracking process? Unless I've missed something above, I don't see how that would take place.

    • Adrian Rosebrock May 10, 2016 at 6:17 pm #

      If you would like to totally reset the tracking process, then you need to update the firstFrame variable to be the current frame at the time you would like to reset the background.
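
      For instance, something like this inside the main loop (re-seeding on a key press is just one possible trigger):

      key = cv2.waitKey(1) & 0xFF
      if key == ord("r"):
          # re-seed the background model with the current grayscale frame
          firstFrame = gray.copy()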

      • Roberto May 11, 2016 at 9:48 am #

        Ahh, that makes perfect sense! I implemented this and some other changes and I have learned much.

        I'm capturing the images now when certain triggers are met with cv2.imwrite('\localpath', img), but now I need to figure out how to clear the “buffer” of the image that is written locally. Each time it saves to local disk it just keeps writing the same image over and over again. What I have tried so far seems to actually release the camera altogether instead of just resetting the frame. Any suggestions?

        • Adrian Rosebrock May 12, 2016 at 3:44 pm #

          I’m not sure what you mean by “clear the buffer of the image written locally”? Do you mean simply overwrite the image?

  73. amrutha May 11, 2016 at 3:04 pm #

    Thank you sir, awesome tutorial.
    Which algorithm is the detection and tracking based on here? Is it the MeanShift algorithm or another?

    • Adrian Rosebrock May 12, 2016 at 3:38 pm #

      Neither MeanShift nor CamShift is used in this blog post — the tracking is done simply by examining the areas of the frame that contain motion. However, you could certainly incorporate MeanShift or CamShift if you wanted.

  74. amrutha May 12, 2016 at 10:38 am #

    Hello sir, awesome post. I tried the program by reading a static video to detect moving cars on the road, and the code worked well. I need some detailed info on how the motion detection and tracking works: is it only the background subtraction method or some other algorithm?
    I hope you will help me out.

    • Adrian Rosebrock May 12, 2016 at 3:30 pm #

      So if I understand your question correctly, your goal is to create an algorithm that uses machine learning to detect cars in images? If so, I would recommend using the HOG + Linear SVM framework.

  75. Rainyban May 25, 2016 at 8:48 am #

    Hello Adrian!
    First, thank you for your RPi source code!
    I ran your code on my RPi 3
    and it is operating normally.
    I want to expand its functionality!
    I want to save the original image when background subtraction triggers.

    Where should I move the imwrite() function?
    Currently, the saved image includes the bounding square.

    Once again, thank you for your RPi tutorial!

    • Adrian Rosebrock May 25, 2016 at 3:17 pm #

      You can save the original frame to disk by creating a copy of the frame once it’s been read from the video stream:

      frameOrig = frame.copy()

      Then, you can utilize cv2.imwrite to write the original frame to disk:

      cv2.imwrite("path/to/output/file.jpg", frameOrig)

      • Rainyban May 26, 2016 at 3:29 am #

        Thank you Adrian!
        I solved the problem,
        and now the saved image is the original frame.

        Hmm…
        I have a new question… haha.
        I want to reduce the time spent saving images.
        I am thinking of one method.
        Is it possible?

        1. One thread runs detection -> if an image is detected, set flag = 1
        2. Another thread runs -> if flag == 1, call imwrite
        I know that Python effectively runs one thread at a time,
        or I could pass the flag value from a Python script in one terminal to a Python script in another terminal.

        What should I do?

        • Adrian Rosebrock May 26, 2016 at 6:20 am #

          Sure, you can absolutely pass saving the image on to another thread. This is a pretty standard producer/consumer relationship. Your main thread puts the frame to be written in a queue. And a thread reads from the queue and writes the frame to file.
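
          A bare-bones sketch of that producer/consumer pattern using Python's built-in Queue and threading modules (Python 2 here, matching this post):

          from Queue import Queue  # "queue" on Python 3
          from threading import Thread
          import cv2

          q = Queue()

          def writer():
              # consumer: pull (path, frame) pairs off the queue and write to disk
              while True:
                  (path, frame) = q.get()
                  cv2.imwrite(path, frame)
                  q.task_done()

          t = Thread(target=writer)
          t.daemon = True  # don't block program exit
          t.start()

          # producer: in the main loop, enqueue the frame instead of writing it
          # q.put(("detections/frame_0001.jpg", frameOrig))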

  76. Sarai May 29, 2016 at 11:28 pm #

    Awesome tutorial! Totally loved it! easy to understand and very helpful! Thank you for this series! Please keep doing them!

  77. Raghuvaran P May 30, 2016 at 2:38 am #

    Can you please provide the sample video?

    • Adrian Rosebrock May 31, 2016 at 4:05 pm #

      Please use the “Downloads” section of this blog post to download the source code to this post — it includes example videos that you can use.

  78. Alessio Michelini May 30, 2016 at 9:12 am #

    Did anybody try to run this script on a raspberry pi nano?

    • Adrian Rosebrock May 31, 2016 at 3:54 pm #

      The Pi Nano? Do you mean the Pi Zero? If so, I wouldn’t recommend it. The FPS would be quite low, as I discuss in this blog post.

      • tarun June 2, 2016 at 11:36 am #

        I am using OpenCV 3.0.0. I followed all the steps in the motion detection tutorial but got nothing. I did not get an error, but my result was NOTHING!

        • Adrian Rosebrock June 3, 2016 at 3:06 pm #

          If you did not receive an error message at all and the script automatically stopped, then OpenCV is having trouble accessing your webcam. Are you using a webcam? Or the Raspberry Pi camera module?

  79. kev June 1, 2016 at 6:45 pm #

    To gracefully exit, you may want to switch your last two lines. First close all windows, then release the camera. Otherwise, the system will break with a segmentation fault.
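
    For reference, the suggested teardown order would look like this (assuming the camera object from this post):

    cv2.destroyAllWindows()  # close the display windows first
    camera.release()  # then release the camera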

    • Adrian Rosebrock June 3, 2016 at 3:14 pm #

      I haven’t encountered this error before, but if that resolves the issue, thanks for pointing it out Kev!

  80. Bleddyn June 4, 2016 at 8:37 am #

    How hard would it be to track detected motion regions between consecutive frames?

    Using createBackgroundSubtractorMOG2(), for example, for more dynamic backgrounds doesn't give the results it could. In ‘Real-time bird detection based on background subtraction’ by Moein Shakeri and Hong Zhang, they deal with the problem by tracking objects between frames: if an object is present for N frames, then it's probably a moving object.

    I had a look at your post [http://www.pyimagesearch.com/2016/02/01/opencv-center-of-contour/], which was interesting, and using moments I created lists of x and y coordinates, thinking that I could compare elements in a list between successive frames, but this happens:

    current_frame_x [0, 159, 139, 31]
    previous_frame_x [0, 141, 29]

    there's a new element ‘159’, so I can't compare elements like for like…

    Is there a better way basically? I couldn’t figure it out!

    • Adrian Rosebrock June 5, 2016 at 11:31 am #

      There are multiple methods to track motion regions between frames. Correlation-based methods work well. But a simple method is to compute the centroids of the objects, store them, compute the centroids from the next frame, and then compute the Euclidean distances between the centroids. The centroids that have the smallest distances can be considered the “same” objects.
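
      A quick sketch of that centroid-matching idea (the centroid values here are hypothetical):

      import numpy as np
      from scipy.spatial import distance as dist

      # centroids from the previous and current frames
      prev = np.array([(10, 20), (200, 150)])
      curr = np.array([(12, 22), (205, 148), (400, 90)])

      # pairwise Euclidean distances: D[i, j] = distance from prev[i] to curr[j]
      D = dist.cdist(prev, curr)

      # for each old centroid, the closest new centroid is its best match
      matches = D.argmin(axis=1)
      # any unmatched current centroid (index 2 here) is treated as a new object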

  81. Daniele June 8, 2016 at 8:30 am #

    Hi Adrian,

    First of all, thanks for the great tutorial 😀

    I'm working on a video surveillance system for my thesis and I need a background subtraction algorithm that permits continuously detecting the objects even if they stop for a while. I have done various experiments with cv2.createBackgroundSubtractorMOG2(), changing the “history” parameter, but even if I set it to a very big value, objects that stop for just a second are recognized as background.
    So, from this point of view, is it possible that your approach is better than those proposed by Zivkovic?

    • Adrian Rosebrock June 9, 2016 at 5:25 pm #

      MOG and MOG2 are certainly good algorithms for background subtraction. This method certainly isn’t “better” — it’s just less computationally expensive. MOG and MOG2 are less suitable for resource constrained devices (such as the Raspberry Pi) since they don’t have enough “computational horsepower” to get the job done.

      • Daniele July 7, 2016 at 1:21 pm #

        If you test the MOG2 algorithm on your video (the one in which you open the door and enter the room), you can notice that it detects many false positives, many more than the absolute difference between frames.
        Probably MOG2 is not the best indoor detection algorithm, so in this case the absolute difference performs better.

  82. Obiajulu June 8, 2016 at 12:40 pm #

    Hi

    Thank you for the awesome tutorial. I implemented the techniques, but I have difficulty saving the video feed on my Raspberry Pi and Mac laptop. I tried writing the frames so they would save in the default directory, but to no avail. My question is how do I save the video feed using Python, and also hash and sign the video feed to prevent modification? I look forward to a positive response soon.

    • Adrian Rosebrock June 9, 2016 at 5:22 pm #

      I detail how to save webcam clips to file in this blog post. I hope that helps!

  83. Dishant June 14, 2016 at 7:58 am #

    Any suggestions on how this can be used to detect the speed of a moving object?

    • Adrian Rosebrock June 15, 2016 at 12:37 pm #

      You need to calibrate your camera so you can determine the number of pixels per measurable unit (such as inches, centimeters, etc.). I detail how to calibrate your camera and use it for measuring the distance between objects in this blog post.

      Once you can measure the distance between objects, you just need to keep track of the frames per second (FPS) of your pipeline. Dividing the distance traveled by the elapsed time (the number of frames elapsed divided by the FPS rate) will give you the speed.
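
      As a back-of-the-envelope sketch (all numbers hypothetical):

      # assume calibration gives 0.5 cm per pixel and the pipeline runs at 30 FPS
      cm_per_pixel = 0.5
      fps = 30.0

      pixels_traveled = 120  # centroid displacement measured over 15 frames
      frames_elapsed = 15

      distance_cm = pixels_traveled * cm_per_pixel  # 60 cm
      elapsed_s = frames_elapsed / fps  # 0.5 seconds
      speed = distance_cm / elapsed_s  # 120 cm/s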

  84. Lokesh June 15, 2016 at 2:49 am #

    Hi Adrian,
    Thank you for the awesome tutorial. It is working fine, but when I try to execute this Python script through a web server using PHP it doesn't show anything. Can you please help me figure out how to execute this Python script with PHP?

    My index.php looks like this:

    • Adrian Rosebrock June 15, 2016 at 12:29 pm #

      Hey Lokesh — can you elaborate more on what you mean by “executing the Python script with PHP”? You likely don’t want to do that. You can call the system function to call any arbitrary program (including a Python script), but that’s not a good idea, since your PHP script will hang until the Python script finishes.

      • Lokesh June 16, 2016 at 2:29 am #

        I am trying to run this Python script integrated with PHP, so that it will capture the video from the webcam when I run it through the browser, but when I try to do this it doesn't open the webcam.

        • Adrian Rosebrock June 18, 2016 at 8:25 am #

          This won’t work. Python does not interface with PHP and you can’t pass the result from Python to PHP (unless you figured out how to use message passing between the two scripts). Instead, you should use Python to create a web stream and then have PHP read the results from the web stream. That way, these will be two separate, independent processes.

  85. Teknokent June 22, 2016 at 7:40 am #

    Hi Adrian,
    Well done on all your studies; that is a great job. What do you think about counting people? Did you try it before?
    Nice day!

    • Adrian Rosebrock June 23, 2016 at 1:18 pm #

      It’s certainly possible using this technique. But depending on the types of images/videos you’re working with, you might want to use OpenCV’s built-in person detector.
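
      A minimal sketch of OpenCV's built-in HOG + Linear SVM person detector (the input image path is a placeholder):

      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      image = cv2.imread("people.jpg")
      (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
          padding=(8, 8), scale=1.05)

      # draw a bounding box around each detected person
      for (x, y, w, h) in rects:
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)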

      • Teknokent July 1, 2016 at 7:40 am #

        thank you so much!

  86. James June 24, 2016 at 5:43 am #

    Hi there,
    I am doing something somewhat similar to this.
    If you were to get the center of the rectangle in each frame, and then make a line joining these centers together (effectively tracking the moving person) how would you go about doing this?

    I have been able to identify the centers in each frame but am struggling to create a list that stores all the history of the centres.

    • Adrian Rosebrock June 25, 2016 at 1:33 pm #

      Hey James — I already explain how to do this in this blog post.

  87. Madhukar Chaubey June 28, 2016 at 9:46 am #

    Can this work with a sequence of images instead of live camera frames? What would the changes be? Need help…

    • Adrian Rosebrock June 28, 2016 at 10:46 am #

      Sure, this can absolutely work with a sequence of images instead of a live stream. Instead of looping over video frames, loop over your images from disk. Replace the while loop that loops infinitely over frames from the video stream with a loop that loops over all relevant images on your disk.
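
      A sketch of that change (the directory path is a placeholder):

      import cv2
      import glob

      # instead of "while True:" + camera.read(), loop over images on disk
      for imagePath in sorted(glob.glob("frames/*.jpg")):
          frame = cv2.imread(imagePath)
          if frame is None:
              continue
          # ...the rest of the motion detection pipeline stays the same...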

  88. Izzat June 28, 2016 at 4:45 pm #

    Hello Adrian, your work is fabulous; I can't believe how amazingly it works.
    One more question: I am using an RPi 2 for streaming image frames wirelessly over WiFi using the MJPG-Streamer method (so far I receive video frames on a fixed IP address and a specific port, 8080), and now I need to open those frames in your code and apply the same object detection on the received frames. Can I do it? Will you please help me out?

    • Adrian Rosebrock June 29, 2016 at 2:06 pm #

      It’s been a long time since I’ve had to pass an IP stream into cv2.VideoCapture, but this is exactly how you would do it. I would suggest doing some research on IP streams and the cv2.VideoCapture function together. Otherwise, another approach would be to use a message passing library such as ZeroMQ or pyzmq and pass the serialized frames back and forth.
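
      Something like the following may work, assuming MJPG-Streamer's default stream URL (the address is a placeholder and I haven't verified this exact format):

      import cv2

      # MJPG-Streamer typically exposes the feed at /?action=stream on port 8080
      camera = cv2.VideoCapture("http://192.168.1.10:8080/?action=stream")
      (grabbed, frame) = camera.read()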

  89. Andrew July 6, 2016 at 2:04 pm #

    It keeps saying that ‘frame’ and ‘gray’ are not defined. Help please? Otherwise, great tutorial.

    • Adrian Rosebrock July 6, 2016 at 4:10 pm #

      Hey Andrew — it’s hard to know exactly why you might be running into that issue. Please make sure you have used the “Downloads” section of this tutorial to download the code to this post. If you are copying and pasting the code (or typing it in yourself), you might (unknowingly) be introducing errors to the code.

  90. JP July 26, 2016 at 4:33 pm #

    Thanks for letting me search out my own answer.
    If the issue “too many values to unpack” occurs,
    I found my answer here:
    http://stackoverflow.com/questions/25504964/opencv-python-valueerror-too-many-values-to-unpack

  91. Wanderson Souza July 27, 2016 at 11:33 am #

    I have a big question: in your opinion, what is the best technique to segment a dense crowd of people viewed from the top? For example, people entering a train door. Thank you!

    • Adrian Rosebrock July 27, 2016 at 1:54 pm #

      That really depends on the quality of your video stream, the accuracy level required, lighting conditions, computational considerations, etc. For situations with controlled lighting conditions background subtraction methods will work very, very well. For situations where lighting can change dramatically or the “poses” you need to recognize people in can change, then you might need to utilize a machine learning-based approach. That said, I normally recommend starting off with simple background subtraction and seeing how far that gets you.

  92. San July 28, 2016 at 5:45 pm #

    Excellent tutorial as always. Just a small question. For cosmetics I used

    feed = np.concatenate((frame, thresh), axis=1)
    cv2.imshow("Feed", feed)

    Obviously, these cannot be concatenated since frame and thresh have different dimensions. Is there a workaround?

    • Adrian Rosebrock July 29, 2016 at 8:28 am #

      Do your frame and thresh have the same height? If not, resize the images such that they have the same height so you can concatenate them horizontally.

      Secondly, thresh is a single-channel binary image while frame is a 3-channel RGB image. That's not an issue; all you need to do is create a 3-channel version of thresh:

      thresh = np.dstack([thresh] * 3)

      From there, you'll be able to concatenate the images.

  93. Cristian Bello August 10, 2016 at 1:27 am #

    Hello Adrian, I first want to say that your work is excellent, but a doubt arises for me. I can broadcast live, but I have a problem: the screen is suspended after some time of keyboard or mouse inactivity. How can I avoid that?

    • Adrian Rosebrock August 10, 2016 at 9:24 am #

      Hey Cristian — can you elaborate more on what you mean by the screen being “suspended”? I’m not sure what you mean.

      • Cristian Bello August 11, 2016 at 12:55 am #

        Hello Adrian, I mean when you stop moving the mouse or keyboard for a while and the screen turns off, but all processes continue: the energy saving mode of many computers.

        • Adrian Rosebrock August 11, 2016 at 10:37 am #

          This really depends on your computer. You would need to investigate any type of “System Preferences” and turn off any settings that would put your system into “Sleep” or “Hibernate” mode.

  94. Tiago Martins August 19, 2016 at 11:26 am #

    Hi Adrian,

    Amazing posts you have… and the bundles, super helpful :)
    I have a question about the step where we calculate the delta between the past frame and the current one. Can we know each pixel coordinate that has changed from one frame to another?

    Best regards,

    Tiago Martins

    PS. – Please don’t stop :)

    • Adrian Rosebrock August 22, 2016 at 1:36 pm #

      Can you elaborate more on what you mean by “know each pixel coordinate that has changed”? I assume you want to know every pixel value that has changed by some amount? If so, just take a look at the thresholded delta image. You can adjust the threshold to trivially be one, but the problem is that you'll get a lot of “noise” by doing that.

  95. Dong il Kum August 27, 2016 at 2:49 am #

    Hi Adrian, I'm really impressed by your motion detecting project.
    As I am a novice in OpenCV and Python, I have some questions.
    In our project we want to use this program on an alley, so there could be parked cars or other things left there. In that case, the program may stay in the ‘occupied’ condition because of the cars or other objects. Thus I want to add a function that replaces the first frame with a new frame of whatever the webcam is currently looking at, if nothing new is detected by the camera. But in my opinion this is really difficult to implement. Could you help or advise us?

  96. Yashvardhan September 23, 2016 at 3:18 pm #

    Hey Adrian,
    I'm trying to run this code on my laptop running Windows 8. I have installed all the necessary packages, but it is still giving me a ValueError: too many values to unpack at line 57. Please help me out with this error.

    • Adrian Rosebrock September 27, 2016 at 8:56 am #

      It sounds like you are using OpenCV 3, but this blog post requires OpenCV 2.4. No worries though, this is an easy fix. Please see my reply to “TC” above for the solution.

  97. Julian Harris September 24, 2016 at 2:20 am #

    Really fantastic tutorial, thanks Adrian! It passes the “sleeping kids test”: could I get the whole thing running before my kids woke up? Yes! :)

    • Adrian Rosebrock September 27, 2016 at 8:54 am #

      Awesome, great job Julian!

  98. swapnil October 6, 2016 at 3:20 pm #

    It's really the best tutorial. I like it. In this program I want to store the video while the frame is occupied; please tell me which command I should use to store that video.

  99. Benjamin October 9, 2016 at 8:13 am #

    Hey,
    great stuff! Thanks for the tutorial!
    I'm using a Pi camera with the v4l2 driver on Wheezy. The script works very well with it. I tried it with both the old and new camera modules; running it with the new camera module, it is not so easy to find a good threshold level.
    I also wondered if I could run the script with the NoIR camera module? I guess not, but do you have an idea how I could run it?

    • Adrian Rosebrock October 11, 2016 at 1:03 pm #

      I personally haven’t worked with the NoIR camera before. The thresholding is a little different but you can still apply the same basic principles.

  100. Berkay Aras October 13, 2016 at 4:22 am #

    I solved this problem by reinstalling OpenCV.

    But now, when I do sudo python motion_detector.py,
    it gives no error but it's not showing anything.

    Is the program not running?

    Any ideas?

    • Adrian Rosebrock October 13, 2016 at 9:09 am #

      Is the Python script starting and then immediately exiting? Are you trying to access your webcam or use the video file provided in the “Downloads” section of this tutorial?

  101. Ravi October 14, 2016 at 1:21 pm #

    Hey Adrian,

    Thank you for sharing it with the community.

    Is it possible to use this for object motion detection? Like, moving car or ball detection?

    What will I have to change to detect an object of a specific shape without any false detections?

    • Adrian Rosebrock October 15, 2016 at 9:55 am #

      You can certainly use this for object detection, but you’ll need a little extra “special sauce”. I would use motion detection to detect “candidate regions” that need to be classified. From there, I would pass these regions into trained machine learning classifiers (such as HOG + Linear SVM, CNNs, etc.) for the final classification.

  102. saluka October 20, 2016 at 10:56 am #

    When I put a camera outdoors, does it detect rain as motion? How can I make it detect only humans when sensing motion?

  103. Devid October 23, 2016 at 10:54 pm #

    Hi Adrian,
    I used your code and did object tracking using the CamShift algorithm: https://www.youtube.com/watch?v=T3e5z6qoCpA
    It works nicely.
    I just want to implement pan/tilt tracking.
    So could you please guide us on controlling 2 servos (x and y directions) according to the CamShift tracking?
    Thanks a lot

    • Adrian Rosebrock October 24, 2016 at 8:29 am #

      I don’t have any tutorials on utilizing servos, but I will certainly consider it for a future blog post.

      • Devid October 25, 2016 at 5:54 am #

        Thanks Adrian

  104. Josh October 31, 2016 at 10:45 am #

    Hi Adrian,

    I followed your tutorial and this is really awesome. Thank you so much for sharing your work! I have a question for you. How can I show the frame delta like you have done in some of your tutorial screen shots?


    Josh

    • Adrian Rosebrock November 1, 2016 at 8:59 am #

      Hey Josh — thanks for the kind words, I’m happy I could help. To display the delta frame simply insert:

      cv2.imshow("Delta", delta)

      I would personally put that line with the other cv2.imshow statements.

  105. tringuyen November 11, 2016 at 12:59 am #

    Does this code run on Linux on a PC, or only on the Raspberry Pi? I ask because I am getting an error.

    • Adrian Rosebrock November 14, 2016 at 12:17 pm #

      You need to install the imutils package into the “cv” virtual environment:
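
      $ workon cv
      $ pip install imutils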

      From there you’ll be able to execute your script without error.

  106. bharath November 24, 2016 at 10:40 pm #

    Hello sir,
    since I am a beginner in computer vision and image processing,
    I want to detect our own custom objects. So please let me know if you have any source code or some useful information with which I can solve this problem.
    Thank you in advance, sir

  107. kane November 28, 2016 at 10:52 pm #

    I am doing a final project on “people motion detection with Raspberry Pi”. That means after detecting people with the Pi camera, a SIM900 will send a message to the owner. So I have 2 questions:
    1. Can I use this code for my project?
    2. How can I use the SIM900 with the Raspberry Pi?
    I read your “home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv” post, but that uses Dropbox and I want to run in a no-WiFi environment. So I think I can do it with this code: basic motion detection and tracking with Python and OpenCV.

    • Adrian Rosebrock November 29, 2016 at 8:01 am #

      To use this code for your project use the “Downloads” section to download the source code. I provide an example of executing the script at the top of the source files.

      From there you should use the accessing Raspberry Pi camera post to modify the code to work with your Raspberry Pi camera module.

      I don't have any experience with the “SIM900” (and honestly don't know what it is off the top of my head). I presume you mean sending a text message. If so, check out the Twilio API.

  108. TBlack November 29, 2016 at 9:29 pm #

    Thanks Adrian,

    I tried it on a sample video; it works great.

    https://youtu.be/HJBOOZVefXA

    • Adrian Rosebrock December 1, 2016 at 7:42 am #

      Nice job! :-)

  109. siyer November 30, 2016 at 4:40 am #

    Hi Adrian

    Thanks for the tutorial.

    frame always returns None, even if I pass a local video file to cv2.VideoCapture. No errors per se.

    • siyer November 30, 2016 at 11:56 pm #

      Adrian

      I downloaded the code as-is and ran it; it now seems to exit while finding the contours (line 60) without any errors.

      Kindly advise.

      Kindly ignore: it looks like in the OpenCV version I am running, cv2.findContours returns 3 values instead of the 2 originally expected in the code. It now moves past that point.

      • Adrian Rosebrock December 1, 2016 at 7:24 am #

        In OpenCV 2.4, the cv2.findContours function returns 2 values. In OpenCV 3, the function returns 3 values. You can learn more about the differences here.

    • Adrian Rosebrock December 1, 2016 at 7:34 am #

      In that case your version of OpenCV was likely compiled without video codec support. I would suggest following one of my OpenCV install tutorials.

      • siyer December 1, 2016 at 9:13 am #

        Thanks Adrian

        It was not a codec issue. I had to place the opencv_ffmpeg DLLs in one of the PATH directories…

        Secondly, for some reason it does not recognize relative paths for the video file. I have to provide the full path.

        It works like a charm (a few false positives on a self-made video), but it's a great start.

        thanks much

        • Adrian Rosebrock December 5, 2016 at 1:52 pm #

          Nice, congrats on resolving the issue!

      • Sen Young December 5, 2016 at 4:17 am #

        Hello Adrian! Good morning! Thank you very, very much!

        I am a student from China. Recently, I was stumped by the question of how to build a system that can count how many people are in a classroom. It's this tutorial of yours that gave me ideas and approaches!

        I'm so glad and lucky to have found your website in this wonderful world!

        But some questions still confuse me: how can motion detection detect many individuals and count the number of people at the same time? Does this need a face detector or a head-and-shoulders detector in OpenCV? Could you give me some ideas or solutions? Thank you very much

        • Adrian Rosebrock December 5, 2016 at 1:26 pm #

          You can use motion detection to count the number of people in a room provided that the motion in the room is only because of people.

          Otherwise, you should consider applying object detection of some kind. I demonstrate how to detect humans in images here.

  110. Chandough December 6, 2016 at 5:48 pm #

    Hey!

    Amazing code. But when I try to execute it, the command line gives me a syntax error for
    File “”, line 1.

    I am not entirely sure where I am wrong, any help is appreciated!

    • Adrian Rosebrock December 7, 2016 at 9:39 am #

      Hey Chandough — I would suggest that you use the “Downloads” section of this tutorial to download the code and execute it. It seems like you copied and pasted the code from the post into your own project. That's totally fine, but it can lead to errors like these. This is why I suggest using the “Downloads” section to ensure the code properly executes on your system.

  111. Moon ki Park December 11, 2016 at 1:47 pm #

    Hi Adrian~

    I saw a video in your tutorial about facial recognition by camera.

    In detail:
    the camera analyzes someone, and if they do not match,
    the computer sends a message to your phone!

    I have a question here!
    What kind of API do you use? Like Twilio, Textlocal, etc.?
    And are you paying when the computer sends a message to your phone?

    If you are using a free one, can you tell me?

    • Adrian Rosebrock December 12, 2016 at 10:34 am #

      I am using the Twilio API. To send pictures messages you would have to pay for the API.

  112. David December 11, 2016 at 3:46 pm #

    Interested in whether you think this can run fast enough to track a rocket launch.

    I’m considering automating a tracker to improve model rocket photography/video (3D-printed gearbox/tripod head driven by servos).

    High-end of “small” rockets:

    https://www.youtube.com/watch?v=2xuUloxHdBE

    A bit bigger:

    https://www.youtube.com/watch?v=2xuUloxHdBE

    I realize the changing background is an issue – but if you look at the videos, once the camera head has tilted up, it doesn’t have to move much. I’m thinking I could create a system adapted for the rapid acceleration that only lasts the first fraction of a second.

    Interested in any ideas.

    • Adrian Rosebrock December 12, 2016 at 10:34 am #

      The issue here isn’t so much the speed of the actual pipeline, it’s the FPS of your camera used to capture the video. If you can get a 60-120 FPS camera, sure, I think you could potentially use this method for tracking. The problem here is the changing background, so you should instead try color or correlation filters.

  113. GK December 12, 2016 at 9:06 am #

    Hi Adrian, these are some amazing tutorials. Thank you for sharing them with us.
    Could you tell us how to execute the code from the Python shell and not from cmd?
    That would be of great help.

    Thank you,
    GK

    • Adrian Rosebrock December 12, 2016 at 10:25 am #

      Which Python shell are you referring to? The command line version of the Python shell? Or the GUI version? I don’t recommend using the GUI version of IDLE. You should use Jupyter Notebooks for that.

      • GK December 12, 2016 at 11:37 am #

        I was referring to the IDLE shell. I’d like the program to run when I hit “F5”, instead of executing it from the cmd. Would that be possible?
        If you’d like, I can send you a detailed email on what I’m trying to do, and why I’d like the program that way.
        Thank you

        • Adrian Rosebrock December 12, 2016 at 12:41 pm #

          If that’s the case I would suggest using a more advanced IDE such as Sublime Text 2 or PyCharm. Both of these will allow you to run the program via a “hot key” and display the results within the IDE.

          • GK December 12, 2016 at 12:49 pm #

            That’s wonderful. Thank you Adrian. Shall try it out right away.

          • GK December 12, 2016 at 1:57 pm #

            Hi Adrian,
            I tried both PyCharm and Sublime Text 3; neither of the IDEs would run the program directly. I'm able to run it from the command prompt in PyCharm, but I was hoping to run it with either “Ctrl+B” or “F5”. Would you be able to shed some light on this issue?
            Thank you,
            GK

          • Adrian Rosebrock December 14, 2016 at 8:48 am #

            To be honest, I always execute my programs via command line. I never execute them via the IDE, so I’m not sure what the exact issue would be.

  114. navya December 19, 2016 at 12:17 am #

    Hey,
    I want to stream the USB cam from the Raspberry Pi and see it live on the Windows PC monitor.

    Can I achieve this using just Linux commands? (I have never worked with Python before.)
    I installed PuTTY recently and I am working with it.
    I am a newbie; kindly advise me.

    BTW, sorry, I forgot to mention:

    ELP-USB130W01MT-L21 is the model of the camera I am using,

    and I want the live video on the Windows PC, but not over the web.

    Thanks.

    • Adrian Rosebrock December 21, 2016 at 10:42 am #

      If all you want to do is see the frames on a separate machine other than the Pi just use X11 forwarding:

      $ ssh -X pi@your_ip_address

      From there, execute your script and you’ll see the results on your screen.

  115. Jax December 23, 2016 at 1:52 am #

    Hello Adrian.

    I am planning to incorporate a live stream of motion detection, face detection, and face recognition, and currently I am having problems running the face detection code. When I tried to run a part of your code, it showed AttributeError: ‘module’ object has no attribute ‘cv’. I am using OpenCV 3, by the way.

    Greatly appreciate your advice.

    Thankyou

    • Adrian Rosebrock December 23, 2016 at 10:52 am #

      What is your exact error message? And what line of code is throwing the error?

      • Jax December 25, 2016 at 3:51 am #

        flags=cv2.cv.CV_HAAR_SCALE_IMAGE

        Thank you for the fast reply

        • Adrian Rosebrock December 31, 2016 at 1:46 pm #

          It looks like you’re using OpenCV 3. Change it to:

          flags = cv2.CASCADE_SCALE_IMAGE

Trackbacks/Pingbacks

  1. Home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox - PyImageSearch - June 1, 2015

    […] last week’s blog post on building a basic motion detection system was awesome. It was a lot of fun to write and the feedback I got from readers like yourself made […]
