Home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox


Wow, last week’s blog post on building a basic motion detection system was awesome! It was a lot of fun to write, and the feedback I got from readers like yourself made it well worth the effort to put together.

For those of you who are just tuning in, last week’s post on building a motion detection system using computer vision was motivated by my friend James sneaking into my refrigerator and stealing one of my last coveted beers. And while I couldn’t prove it was him, I wanted to see if it was possible to use computer vision and a Raspberry Pi to catch him in the act if he tried to steal one of my beers again.

And as you’ll see by the end of this post, the home surveillance and motion detection system we are about to build is not only cool and simple, but it’s also quite powerful for this particular goal.

Today we are going to extend our basic motion detection approach and:

  1. Make our motion detection system a little more robust so that it can run continuously throughout the day and not be (as) susceptible to lighting condition changes.
  2. Update our code so that our home surveillance system can run on the Raspberry Pi.
  3. Integrate with the Dropbox API so that our Python script can automatically upload security photos to our personal Dropbox account.

We’ll be looking at a lot of code in this post, so be prepared. But we’re going to learn a lot, and more importantly, by the end of this post you’ll have a working Raspberry Pi home surveillance system of your own.

You can find the full demo video directly below, along with a bunch of other examples towards the bottom of this post.

Update: 24 August 2017 — All code in this blog post has been updated to work with the Dropbox V2 API so you no longer have to copy and paste the verification key used in the video. Please see the remainder of this blog post for more details.

Looking for the source code to this post?
Jump right to the downloads section.

Before we start

Let’s go ahead and get the prerequisites out of the way. I am going to assume that you already have a Raspberry Pi and camera board.

You should also already have OpenCV installed on your Raspberry Pi and be able to access your Raspberry Pi video stream using OpenCV. I’ll also assume that you have already read and familiarized yourself with last week’s post on building a basic motion detection system.

Finally, if you want to upload your home security photos to your personal Dropbox, you’ll need to register with the Dropbox Core API to obtain your API keys — but having Dropbox API access is not a requirement for this tutorial, just a little something extra that’s nice to have.

Other than that, we just need to pip-install a few extra packages.

If you don’t already have my latest imutils package installed, you’ll want to grab it from GitHub or install/update it via pip install --upgrade imutils .

And if you’re interested in having your home surveillance system upload security photos to your Dropbox, you’ll also need the dropbox  package: pip install --upgrade dropbox

Note: The Dropbox API v1 is deprecated. This post and associated code download now works with Dropbox API v2.

Now that everything is installed and setup correctly, we can move on to actually building our home surveillance and motion detection system using Python and OpenCV.

So here’s our setup:

As I mentioned last week, the goal of this home surveillance system is to catch anyone who tries to sneak into my refrigerator and nab one of my beers.

To accomplish this, I have set up a Raspberry Pi + camera on top of my kitchen cabinets:

Figure 1: Mounting the Raspberry Pi to the top of my kitchen cabinets.

Which then looks down towards the refrigerator and front door of my apartment:

Figure 2: The Raspberry Pi is pointed at my refrigerator. If anyone tries to steal my beer, the motion detection code will trigger an upload to my personal Dropbox.

If anyone tries to open the refrigerator door and grab one of my beers, the motion detection code will kick in, upload a snapshot of the frame to my Dropbox, and allow me to catch them red-handed.

DIY: Home surveillance and motion detection with the Raspberry Pi, Python, and OpenCV

Alright, so let’s go ahead and start working on our Raspberry Pi home surveillance system. We’ll start by taking a look at the directory structure of our project:

Our main home surveillance code and logic will be stored in pi_surveillance.py . Instead of using command line arguments or hardcoding values inside the pi_surveillance.py file, we’ll use a JSON configuration file named conf.json .

For projects like these, I really find it useful to break away from command line arguments and simply rely on a JSON configuration file. There comes a point when you just have too many command line arguments, and a JSON file is just as easy to use and far tidier.

Finally, we’ll define a pyimagesearch  package for organization purposes, which will house a single class, TempImage , which we’ll use to temporarily write images to disk before they are shipped off to Dropbox.

So with the directory structure of our project in mind, open up a new file, name it pi_surveillance.py , and start by importing the following packages:

Wow, that’s quite a lot of imports — much more than we normally use on the PyImageSearch blog. The first import statement simply imports our TempImage  class from the PyImageSearch package. Lines 3-4 import classes from picamera  that will allow us to access the raw video stream of the Raspberry Pi camera (which you can read more about here). And then Line 8 grabs the Dropbox API. The remaining import statements round off the other packages we’ll need. Again, if you have not already installed imutils , you’ll need to do that before continuing with this tutorial.

Lines 15-18 handle parsing our command line arguments. All we need is a single switch, --conf , which is the path to where our JSON configuration file lives on disk.

Line 22 filters warning notifications from Python, specifically ones generated from urllib3  and the dropbox  packages. And lastly, we’ll load our JSON configuration dictionary from disk on Line 23 and initialize our Dropbox client  on Line 24.
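
The original listing isn’t reproduced here, but the setup described above can be sketched as a small helper. Note that the original script does this at the top level rather than in a function; the `load_conf` name and `argv` parameter are my own, added so the sketch is self-contained:

```python
import argparse
import json
import warnings


def load_conf(argv=None):
    # parse the single --conf switch pointing at our JSON configuration file
    ap = argparse.ArgumentParser()
    ap.add_argument("-c", "--conf", required=True,
        help="path to the JSON configuration file")
    args = vars(ap.parse_args(argv))

    # filter warning notifications (e.g. ones generated by urllib3/dropbox)
    warnings.filterwarnings("ignore")

    # load the configuration dictionary from disk
    with open(args["conf"]) as f:
        return json.load(f)
```

Calling `load_conf(["--conf", "conf.json"])` then hands back the configuration dictionary used throughout the rest of the script.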

Our JSON configuration file

Before we get too much further, let’s take a look at our conf.json  file:

This JSON configuration file stores a bunch of important variables. Let’s look at each of them:

  • show_video : A boolean indicating whether or not the video stream from the Raspberry Pi should be displayed to our screen.
  • use_dropbox : Boolean indicating whether or not the Dropbox API integration should be used.
  • dropbox_access_token : Your Dropbox API access token.
  • dropbox_base_path : The name of your Dropbox App directory that will store uploaded images.
  • min_upload_seconds : The number of seconds to wait in between uploads. For example, if an image was uploaded to Dropbox 5m 33s after starting our script, a second image would not be uploaded until 5m 36s. This parameter simply controls the frequency of image uploads.
  • min_motion_frames : The minimum number of consecutive frames containing motion before an image can be uploaded to Dropbox.
  • camera_warmup_time : The number of seconds to allow the Raspberry Pi camera module to “warmup” and calibrate.
  • delta_thresh : The minimum absolute value difference between our current frame and averaged frame for a given pixel to be “triggered” as motion. Smaller values will lead to more motion being detected, larger values to less motion detected.
  • resolution : The width and height of the video frame from our Raspberry Pi camera.
  • fps : The desired Frames Per Second from our Raspberry Pi camera.
  • min_area : The minimum area (in pixels) a region must have to be considered motion. Smaller values will lead to more areas being marked as motion, whereas higher values of min_area  will only mark larger regions as motion.
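
Putting those variables together, a conf.json file might look something like this. The values below are illustrative placeholders rather than the exact ones from the code download:

```json
{
  "show_video": true,
  "use_dropbox": true,
  "dropbox_access_token": "YOUR_DROPBOX_ACCESS_TOKEN",
  "dropbox_base_path": "/YourAppDirectory",
  "min_upload_seconds": 3.0,
  "min_motion_frames": 8,
  "camera_warmup_time": 2.5,
  "delta_thresh": 5,
  "resolution": [640, 480],
  "fps": 16,
  "min_area": 5000
}
```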

Now that we have defined all of the variables in our conf.json  configuration file, we can get back to coding.

Integrating with Dropbox

If we want to integrate with the Dropbox API, we first need to setup our client:

On Line 27 we check our JSON configuration to see if Dropbox should be used or not. If it should, Line 29 authorizes our app with the access token.

At this point it is important that you have edited the configuration file with your access token and path. To find your access token, you can create an app on the app creation page. Once you have an app created, the access token may be generated under the OAuth section of the app’s page on the App Console (simply click the “Generate” button and copy/paste the token into the configuration file).
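
The client setup can be sketched as a small helper. The `create_dropbox_client` name and the lazy import are my own additions (so the script still runs without the `dropbox` package installed); `dropbox.Dropbox` is the standard v2 SDK client class:

```python
def create_dropbox_client(conf):
    # only authorize with the API when the config enables Dropbox integration
    if not conf.get("use_dropbox", False):
        return None

    # imported lazily so the rest of the script works without the package
    import dropbox
    return dropbox.Dropbox(conf["dropbox_access_token"])
```

When `use_dropbox` is false, the helper simply returns `None` and the upload code paths are skipped.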

Home surveillance and motion detection with the Raspberry Pi

Alright, now we can finally start performing some computer vision and image processing.

We setup our raw capture to the Raspberry Pi camera on Lines 33-36 (for more information on accessing the Raspberry Pi camera, you should read this blog post).

We’ll also allow the Raspberry Pi camera module to warm up for a few seconds, ensuring that the sensors are given enough time to calibrate. Finally, we’ll initialize the average background frame, along with some bookkeeping variables on Lines 42-44.

Let’s start looping over frames directly from our Raspberry Pi video stream:

The code here should look pretty familiar to last week’s post on building a basic motion detection system.

We pre-process our frame a bit by resizing it to have a width of 500 pixels, converting it to grayscale, and applying a Gaussian blur to remove high frequency noise, allowing us to focus on the “structural” objects of the image.

On Line 60 we make a check to see if the avg  frame has been initialized or not. If not, we initialize it as the current frame.

Lines 69 and 70 are really important and where we start to deviate from last week’s implementation.

In our previous motion detection script we made the assumption that the first frame of our video stream would be a good representation of the background we wanted to model. For that particular example, this assumption worked well enough.

But this assumption is also easily broken. As the time of day (and the lighting conditions) change, and as new objects are introduced into our field of view, our system will falsely detect motion where there is none!

To combat this, we instead take the weighted mean of previous frames along with the current frame. This means that our script can dynamically adjust to the background, even as the time of day changes along with the lighting conditions. This is still quite basic and not a “perfect” method to model the background versus foreground, but it’s much better than the previous method.

We then subtract this weighted average from the current frame, leaving us with what we call a frame delta:

delta = |background_model – current_frame|
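
The actual script does this with OpenCV’s `cv2.accumulateWeighted` and `cv2.absdiff`; as a library-free sketch of the same arithmetic (function names are my own), the background update and frame delta look like this:

```python
import numpy as np


def update_background(avg, gray, alpha=0.5):
    # running weighted average of the background:
    #   avg = alpha * gray + (1 - alpha) * avg
    # (cv2.accumulateWeighted performs this update in place)
    if avg is None:
        # first frame seen: use it to seed the background model
        return gray.astype("float")
    return alpha * gray + (1.0 - alpha) * avg


def frame_delta(avg, gray):
    # delta = |background_model - current_frame|
    return np.abs(gray.astype("float") - avg).astype("uint8")
```

Larger `alpha` values make the model adapt faster to scene changes; smaller values make it more stable but slower to absorb lighting shifts.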

Figure 3: An example of the frame delta, the difference between the averaged frames and the current frame.

We can then threshold this delta to find regions of our image that contain substantial difference from the background model — these regions thus correspond to “motion” in our video stream:

To find regions in the image that pass the thresholding test, we simply apply contour detection. We then loop over each of these contours individually (Line 82) and see if they pass the min_area  test (Lines 84 and 85). If a region is sufficiently large, then we can indicate that we have indeed found motion in our current frame.

Lines 89-91 then compute the bounding box of the contour, draw the box around the motion, and update our text  variable.

Finally, Lines 94-98 take our current timestamp and status text  and draw them both on our frame.
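
The min_area filtering step boils down to a one-liner. The actual script checks `cv2.contourArea` on each contour; this sketch (with a function name of my own) uses bounding-box area as a simple stand-in:

```python
def filter_motion_regions(boxes, min_area):
    # keep only candidate regions whose area passes the min_area test;
    # each box is an (x, y, w, h) tuple as returned by cv2.boundingRect
    return [(x, y, w, h) for (x, y, w, h) in boxes if w * h >= min_area]
```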

Now, let’s create the code to handle uploading to Dropbox:

We make a check on Line 101 to see if we have indeed found motion in our frame. If so, we make another check on Line 103 to ensure that enough time has passed between now and the previous upload to Dropbox — if enough time has indeed passed, we’ll increment our motion counter.

If our motion counter reaches a sufficient number of consecutive frames (Line 109), we’ll then write our image to disk using the TempImage  class, upload it via the Dropbox API, and then reset our motion counter and last uploaded timestamp.

If motion is not found in the room (Lines 129 and 130), we simply reset our motion counter to 0.
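
The upload-gating logic described above can be distilled into a small helper. The function name and the `state` dictionary are my own, introduced so the bookkeeping is explicit; the original script tracks `lastUploaded` and `motionCounter` as plain variables in the main loop:

```python
def update_upload_state(motion, now, state, conf):
    # motion: was motion found in the current frame?
    # now: current timestamp in seconds
    # state: dict with "last_uploaded" and "motion_counter"
    # returns True when an image should be uploaded to Dropbox
    upload = False

    if motion:
        # only count this frame if enough time has passed since the
        # previous upload
        if (now - state["last_uploaded"]) >= conf["min_upload_seconds"]:
            state["motion_counter"] += 1

            # enough consecutive motion frames: trigger an upload and
            # reset the counter and last-uploaded timestamp
            if state["motion_counter"] >= conf["min_motion_frames"]:
                upload = True
                state["last_uploaded"] = now
                state["motion_counter"] = 0
    else:
        # no motion in the room: reset the motion counter to 0
        state["motion_counter"] = 0

    return upload
```

Requiring `min_motion_frames` consecutive detections before uploading keeps one-frame flickers of noise from spamming your Dropbox account.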

Finally, let’s wrap up this script by handling if we want to display the security stream to our screen or not:

Again, this code is quite self-explanatory. We make a check to see if we are supposed to display the video stream to our screen (based on our JSON configuration), and if we are, we display the frame and check for a key-press used to terminate the script.

As a matter of completeness, let’s also define the TempImage  class in our pyimagesearch/tempimage.py  file:

This class simply constructs a random filename on Lines 8 and 9, followed by providing a cleanup  method to remove the file from disk once we are finished with it.
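
Since the listing itself isn’t shown here, a minimal sketch of the class (default argument values are my assumptions) would be:

```python
import os
import uuid


class TempImage:
    def __init__(self, base_path="./", ext=".jpg"):
        # construct a random filename for the temporary image
        self.path = os.path.join(base_path,
            "{}{}".format(uuid.uuid4(), ext))

    def cleanup(self):
        # remove the file from disk once we are finished with it
        os.remove(self.path)
```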

Raspberry Pi Home Surveillance

We’ve made it this far. Let’s see our Raspberry Pi + Python + OpenCV + Dropbox home surveillance system in action. Simply navigate to the source code directory for this post and execute the following command: python pi_surveillance.py --conf conf.json

Depending on the contents of your conf.json  file, your output will (likely) look quite different than mine. As a quick refresher from earlier in this post, I have my Raspberry Pi + camera mounted to the top of my kitchen cabinets, looking down at my kitchen and refrigerator — just monitoring and waiting for anyone who tries to steal any of my beers.

Here’s an example of video being streamed from my Raspberry Pi to my MacBook via X11 forwarding, which will happen when you set show_video: true :

And in this video, I have disabled the video stream and enabled the Dropbox API integration via use_dropbox: true , so we can see motion being detected in images and the results sent to my personal Dropbox account:

Here are some example frames that the home surveillance system captured after running all day:

Figure 4: Examples of the Raspberry Pi home surveillance system detecting motion in video frames and uploading them to my personal Dropbox account.

And in this one you can clearly see me reaching for a beer in the refrigerator:

Figure 5: In this example frame captured by the Raspberry Pi camera, you can clearly see that I am reaching for a beer in the refrigerator.

If you’re wondering how you can make this script start each time your Pi powers up without intervention, see my post on Running a Python + OpenCV script on reboot.

Given my rant from last week, this home surveillance system should easily be able to capture James if he tries to steal my beers again — and this time I’ll have conclusive proof from the frames uploaded to my personal Dropbox account.


In this blog post we explored how to use Python + OpenCV + Dropbox + a Raspberry Pi and camera module to create our own personal home surveillance system.

We built upon our previous example on basic motion detection from last week and extended it to (1) be slightly more robust to changes in the background environment, (2) work with our Raspberry Pi, and (3) integrate with the Dropbox API so we can have our home surveillance footage uploaded directly to our account for instant viewing.

This has been a great two-part series on motion detection, and I really hope you enjoyed it. But we’re honestly only scratching the surface on motion detection/background subtraction — this will most certainly not be the last time we cover it on the PyImageSearch blog. So if you want to keep up to date regarding new posts on PyImageSearch, I would definitely recommend signing up for the PyImageSearch Newsletter at the bottom of this page.

And finally, if you enjoyed this tutorial, please consider sharing it with others!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


687 Responses to Home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox

  1. Cristian TG June 1, 2015 at 1:00 pm #

    Your proyects are awesome. They inspire me.

    Keep it up!

    • Adrian Rosebrock June 1, 2015 at 1:52 pm #

      Thanks Cristian!

      • Andres Acevedo May 31, 2017 at 3:16 pm #

        so do we need the wifi dongle to make this work or what?
        I am just curious

        • Adrian Rosebrock June 4, 2017 at 6:33 am #

          The Raspberry Pi 2 would require a WiFi dongle. The Pi 3 ships with WiFi built-in. Otherwise you could use an ethernet cable. Please note that an internet connection is only required if you want to upload individual frames to Dropbox.

          • mira December 13, 2019 at 2:24 am #

            I got a problem
            after they print ” [info]warming up”

            it shows
            illegal instruction .
            I don’t know why.

          • Adrian Rosebrock December 18, 2019 at 9:54 am #

            How are you accessing your webcam? Is it a USB webcam? Or an RPi camera module?

    • Andrew Hao December 13, 2019 at 4:46 am #

      I will do a project that for human detection on Raspebbery Pi 4. Would will work? I can apply your code for human detection? I will use raspberry model B, picamera with infrared (night vision).

  2. Andy June 1, 2015 at 3:40 pm #

    Hey Adrian-

    What version Pi are you using? What is the oldest version Pi that you think could be used for this project? I would envision trying to set up multiple cameras at my home (I think 4, for the number of entry points into my home) and as a DIY solution I would try to get the oldest/cheapest version Pi that would still be effective 24/7.

    Thanks for posting this! It inspires me to get involved with projects like this!

    • Adrian Rosebrock June 1, 2015 at 4:25 pm #

      Hey Andy, no worries — I updated your previous comment so it reads correctly 🙂

      I am using the Raspberry Pi 2 for this project. You might be able to get away with the B+ for this, but I would really recommend against it. The Pi 2 is substantially faster (4 cores and 1gb of RAM) and is well worth it. In reality, a Pi 2 will cost you $35, along with a camera module per each, so you’re probably looking at $60 per system, which isn’t too bad.

  3. berkay celik June 1, 2015 at 5:40 pm #

    Thanks for the great tutorial. it’s very informative and very well explained.

    • Adrian Rosebrock June 1, 2015 at 7:32 pm #

      Thanks so much Berkay, I’m glad you found it helpful! 🙂

  4. Flo June 1, 2015 at 9:29 pm #

    As always a great tutorial.

    Though instead of copy-pasting the url for the dropbox auth, you could let python handle that as well with webbrowser.open()

    • Adrian Rosebrock June 2, 2015 at 6:42 am #

      Good point. I was using X11 at that point, so launching a browser over X11, even on a local network, can be quite slow, hence I went with the copy-and-paste solution. Definitely not the most elegant solution, but it worked!

  5. Alex June 1, 2015 at 10:57 pm #

    Hi Adrian,

    This is a great post!

    Recently I am trying to make a little surveillance system and have read plenty of your tutorials.

    I have a rough idea that 1) using the frame difference method you mentioned here and last week for initializing the search area; 2) implement OpenCV default hog human detection method around the initialization area and find the bounding box of human beings as the input box in step 3; 3) using the dlib library correlation tracker to track the detected people.

    Will this work or not and is there any suggestions to improve the surveillance performance to make it robust?

    Thanks again for your wonderful tutorials!

    • Adrian Rosebrock June 2, 2015 at 6:41 am #

      Hi Alex — thanks so much, I’m glad you enjoyed the post!

      So in general, the solution to your surveillance system will depend on (1) what gives you the best results, and (2) how complicated you want to make it. Using HOG requires training your own custom object detector which can non-trivial, especially if you are just getting started in computer vision and machine learning. Furthermore, this classifier would only really work to detect what you trained it to detect — it wouldn’t be able to detect true “motion” for arbitrary objects in a video stream.

      That said, I think you’re on the right track. You want to use some basic motion detection, followed by more advanced methods for tracking the bounding box. I don’t think dlib’s correlation tracker has Python bindings, but I know this one for OpenCV does.

      • Tim Clemans June 2, 2015 at 9:59 am #

        Dlib just added a Python binding for correlation tracker, see https://github.com/davisking/dlib/blob/master/python_examples/correlation_tracker.py

        • Adrian Rosebrock June 2, 2015 at 12:46 pm #

          Nice, thanks for passing this along Tim! I’m really excited to play around with it.

          • Gary Lee November 9, 2015 at 8:48 pm #

            Adrian and everyone else here. Does anyone know of sample applications for installing and then calling either DLIB or the MOSSE.PY mentioned above? I’m ready to get the tracking side of this working well. I have detection working pretty well, but now need to go to the next level.

            When I run the MOSSE.PY standalone, it never seems to allow me to draw the rectangles needed to track an object. I’d like to pass an object to it for tracking, but am not sure how.

            And on DLIB, I am not sure how to best install into the virtual environment being used here (Workon CV).

            Adrian, please write a new blog post on this!!!! (grin)

            If anyone has experience, please comment. Thanks

          • Adrian Rosebrock November 10, 2015 at 6:21 am #

            I’ve used the dlib tracker with success in many applications. I’ll add doing a post on installing dlib, along with a post on how to do track with dlib to my queue.

      • Onur February 14, 2018 at 8:45 pm #

        Is it possible to extend it to night surveillance camera as well? Using this night vision camera? Without adding more custom code?


    • asha July 12, 2015 at 2:55 pm #

      Hi Alex, I have made similiar project. With some modification, I combine Adrian’s scripts with OpenCV peopledetect.py sample. I perform HOG human detection when the contours found (countour>0). Need 2-3 seconds to get result from HOG human detection for every frame loop. Not very efficient, but it’s enough for my case. I use Raspberry Pi 2 with Pi Camera.

      Sorry for my bad english.

      • Rohit sharma May 28, 2018 at 2:47 am #

        I think your work will be helpful for me so. If you can provide the sample code and the command. It will be great.
        you can contact me on EMAIL REMOVED

  6. Quan June 2, 2015 at 12:47 am #

    Hi Adrian,

    Thank you for works, it’s very interesting,
    If I don’t use Pi Camera but another usb webcam,
    Is this OK ? how about a usb hub for multi webcam ?

    • Adrian Rosebrock June 2, 2015 at 6:35 am #

      Hi Quan, if you have a USB webcam you can certainly use this code. You’ll just need to modify the code that actually grabs the frames from the camera to use the cv2.VideoCapture function like in this post.

  7. Alain June 3, 2015 at 4:58 pm #

    It looks like really fun to play with (ok, it costed me already an extra Raspberry2B & cam … My A&B wouldn’t do it properly I think, and my 2B was in use already)… I got this even working, which is of course easy with this much details… But instead of writing pictures, I would like to combine the “compromising” pictures into a video… either one for the whole running time or for each “occupied session”.
    It doesn’t look as I can use VideoWriter for rawCapture frames … Would there be an option? And if so, what function should I look at?

    • Adrian Rosebrock June 3, 2015 at 8:25 pm #

      Hi Alain. Indeed, I would definitely suggest using the B+ or the Pi 2 for this example. You’ll get much better results with the Pi 2 since it’s much faster than the B+. As for your question, you can certainly use the cv2.VideoWriter. Take a look at Line 57: frame = f.array. The frame variable is simply a NumPy array which you can pass to the cv2.VideoWriter.

      • Alain June 4, 2015 at 6:11 am #

        Thanks Adrian,
        I found after I wrote the question, an example with the writing of ‘raw’ pictures … But not as clear as yours … I hope I find time to implement this in the next days (and follow the basic advice … RTFM :)) … But I don’t have enough network ports in my living room to do all I need to right now…

        • Alain June 4, 2015 at 7:19 am #

          OK … Found one of my errors already … I was trying to write “frame” to the file, but that one was modified already, and not writing at all … using f.array works, I now added a frame_2 (which is a copy of frame), so I could add the timestamp … but now play-time is over … Time to do something useful … but this will be continued…

          • Eric Page December 29, 2015 at 12:20 am #

            Hi Alain, did you ever get this solved? I need to do the same thing – capture video of my dog when he’s playing around. I’ll be poking around but if you already have the code written, would love to borrow because I’m a python and rPi newbie.

  8. MHB June 4, 2015 at 4:19 am #

    I’m a beginner to the RPi, Python and OpenCV and find your blog posts really helpful! So thank you.

    Maybe I am being silly, but is there anything that should be included in the project directory/pyimagesearch/__init__.py file?

    • Adrian Rosebrock June 4, 2015 at 6:22 am #

      Technically no, the __init__.py file indicates that the pyimagesearch directory is a Python module that can be imported into a script. There are special commands you can put in the __init__.py file, but its real purpose is to indicate the Python interpreter that the directory is a module.

      • MHB June 4, 2015 at 9:59 am #

        Aah, I see. Thanks for replying.

        I’m going to take this further and try to implement a human detection/ known person recognition feature. I’ll do some more research.

        The pi-in-the-sky goal would be getting some crude navigation going using computer vision for a RPi robot 🙂

  9. Jose Carrera June 4, 2015 at 11:44 am #

    Hi Adrian,
    Again your work is amazing, I have a question, the camera that you are using works during night???
    Thanks for your time and work.

    • Adrian Rosebrock June 4, 2015 at 7:42 pm #

      Hey Jose, thank you for such a kind compliment 🙂 The camera I am using does not work well at night. For that you’ll want an IR (infrared) camera.

  10. Mike Brandt June 4, 2015 at 3:52 pm #

    Welp, got it working. You write wonderful tutorials. I just *really* need to pay attention to details. Is there any way to customize this script so that I don’t have to re-authenticate to the API everytime?

    • Adrian Rosebrock June 4, 2015 at 7:38 pm #

      Hey Mike, awesome job getting it working, I’m very excited for you! As for the re-authentication, I’m not sure about that one. I have really only used the Dropbox API for this particular example, so you might want to chat with a more experienced Dropbox developer.

  11. jason June 5, 2015 at 2:13 pm #

    I would like to know if you have ever tinkered with adding audio to the video or what recommendation you might have to address audio.

  12. Grant June 7, 2015 at 3:41 pm #

    Hi Adrian, thanks for the great tutorial!

    I am new to Raspberry Pi and have what is probably a really silly question. I keep getting an error ” No JSON object could be decoded”, even though I have the complete conf.json file in the folder with pi_surveillance.py. Any ideas what I’m doing wrong? Any help would be greatly appreciated.

    • Adrian Rosebrock June 8, 2015 at 6:47 am #

      Hey Grant, that’s definitely quite the strange error message! Did you download the source code to this post or did you copy and paste it into your editor? There is a chance that the copying and pasting might introduce some extra characters. As for debugging the error, I think this StackOverflow thread should be helpful.

      • Grant June 10, 2015 at 12:28 pm #

        Thanks Adrian, I copy and pasted rather than downloading the source code. After downloading it, everything worked wonderfully! Thanks for the help!

  13. Ryan June 10, 2015 at 3:27 am #

    These are great tutorials!
    I don’t understand what the rawCapture variable is for. It seems all the work is done with the frame variable taken from f.array. Do rawCapture and frame point to the same thing and rawCapture.truncate(0) is just used to clear it?

    • Adrian Rosebrock June 10, 2015 at 7:01 am #

      The rawCapture variable actually interfaces with the Raspberry Pi camera and determines the format of the image that is grabbed from the sensor (in this case, in BGR order). Without using rawCapture the capture_continuous wouldn’t know how to grab the frame from the camera sensor.

  14. mohamad June 17, 2015 at 4:12 am #

    Mr Adrian
    thanks for this tutorial. When I run this program, it asks “Enter auth code here:” — what is this?

    • Adrian Rosebrock June 17, 2015 at 6:07 am #

      If you want to use the Dropbox API integration (so that images can be uploaded to your personal Dropbox account), you need to enter your Dropbox API credentials in the .json file, followed by supplying an authorization code. If you do not want to use the Dropbox API integration, just set the Dropbox variables in the .json file to null:

      "dropbox_key": null,
      "dropbox_secret": null,
      "dropbox_base_path": null,

      • Robert July 7, 2015 at 9:16 pm #

        I have done both with and without Dropbox, but i am curious – is it possible to hard code the auth code so that I don’t have to use a web browser to start it every time?

        • Adrian Rosebrock July 8, 2015 at 6:17 am #

          Hey Robert, that’s a great question — the answer is that I’m honestly not sure. This project was the first time I had used the Dropbox API. I would check the Dropbox API documentation and look for alternative authorization methods.

        • Danny October 20, 2015 at 4:43 pm #

          Hey Robert,
          This might be a few months too late, but I was having the same issue and figured out how to solve it.

          1. The first step is to understand all of the codes that you’re getting from Dropbox.
          When you paste the Dropbox link into your browser, enter your email and password, they give you an auth code which is a temporary and can only be used once. You enter this into the command line and the code pulls your access token and uses it to link to your account. The access token never changes and this is what you need to use.

          2. Now you need to find out what your access token actually is.
          I did this by adding a line of code that says:

          print accessToken

          You’ll have to run the program again, copy the link into your browser, get the auth code, etc. Once you’ve done all that, it should print out your access token; save it.

          3. Hard code the access token into the code.
          Comment out the lines that reference the auth code (so you don’t have to deal with those dang auth codes any more)
          Add in a line to define the access token.

          Here’s what my final code looked like…

          Hope this helped!
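          In the same spirit, here is a minimal sketch of persisting the token so the one-time auth flow never has to run again (the function name and the "dropbox_access_token" key are hypothetical additions, not part of the original script):

          ```python
          import json

          def load_or_store_token(conf_path, token=None):
              # hypothetical helper: return the long-lived access token from the
              # JSON config if it is already there; otherwise store the freshly
              # obtained token so the browser-based auth flow only runs once
              with open(conf_path) as f:
                  conf = json.load(f)
              if conf.get("dropbox_access_token"):
                  return conf["dropbox_access_token"]
              conf["dropbox_access_token"] = token
              with open(conf_path, "w") as f:
                  json.dump(conf, f, indent=4)
              return token
          ```

          On the first run you would pass in the token printed by Danny’s print accessToken line; every run after that reads it straight from conf.json.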

          • Adrian Rosebrock October 21, 2015 at 5:50 am #

            Awesome, thanks for sharing Danny!

          • Michael November 7, 2015 at 3:09 am #

            Hi Danny – I’m having the same issue. Where did you put
            print accessToken

          • Danny November 18, 2015 at 9:28 pm #

            Hi Michael – I put it at the end of this section of code so it looked like this:

          • Adrian Rosebrock November 19, 2015 at 6:20 am #

            Thanks for sharing Danny. In general, I would recommend commenting out that entire section or even deleting it if you do not want to use the Dropbox API.

          • John Tran March 19, 2016 at 3:13 am #

            Another way to get your access token is:
            Go to your app’s info page
            Scroll down to the “Generated access token” button and click it to obtain your access token. You will see a warning that says:
            This access token can be used to access your account (your dropbox account) via the API. Don’t share your access token with anyone

          • Robert May 24, 2016 at 8:35 pm #

            Late to the party, actually here looking for something else, noticed you responded…forever ago. Thank you so much for this! It worked flawlessly! Excellent work my friend.

          • I Ketut Gede Baskara February 6, 2017 at 5:36 pm #

            Hi Danny, I have already done this and it works, but only on the first run; after that I cannot upload the captured image. Why? Any help? Thanks

          • Chandramauli Kaushik March 21, 2017 at 10:49 am #

            Thanks, It saved my project.
            Thanks so much

          • Reed November 1, 2017 at 1:50 pm #

            Hi Danny
            thanks for sharing. But when I use the same code as yours, I get this error message:
            IndentationError: unexpected indent

            Am I right to put my access token there? And should I keep my Dropbox browser tab open?

          • Adrian Rosebrock November 2, 2017 at 2:21 pm #

            Please make sure you are using the “Downloads” section of this blog post to download the source code. It seems that you copied and pasted the code and likely introduced an indentation error.

      • Rohan Khosla August 11, 2016 at 9:16 am #

        Even after doing what you said above, I am still not able to run the program. It still asks for the auth code. What should I do now?
        Please help.

  15. nipuna June 17, 2015 at 7:45 am #

    Thank you for the tutorial, I learned a lot.
    I’m trying to create a system that will track people moving in a corridor and identify the ones spending too much time in a given area using a Raspberry Pi. Currently I’m thinking about using CamShift + Kalman filters. Can you give me some advice, please? It would be much appreciated. Thank you.

    • Adrian Rosebrock June 17, 2015 at 7:56 am #

      Obviously, the first step is to perform some sort of motion detection to determine where people are moving in the corridor. From there, I would probably suggest optical flow. A better choice could be correlation-based methods such as MOSSE. Once you have (1) detected the person and (2) started the tracking, it’s fairly trivial to start a timer to keep track of the amount of time a person spends in the corridor.

      • nipuna June 17, 2015 at 9:02 am #

        Thank you for your advice. So basically what I have to do is use background subtraction to detect motion, and when people are detected, use a correlation-based method (MOSSE) to track them. Am I correct? And can I track multiple people using this method?
        (I’m fairly new to this field.) Thank you!!!

        • Adrian Rosebrock June 17, 2015 at 9:39 am #

          Yep, that’s the general idea! Correlation based methods require an initial bounding box, so you’ll utilize motion detection to grab that initial bounding box and then pass it on to your tracker, whether that’s optical flow, correlation, etc. And if you’re new to computer vision and OpenCV, I would definitely suggest taking a look at Practical Python and OpenCV + Case Studies, it will definitely help you jumpstart your computer vision education.

          • nipuna June 17, 2015 at 10:03 am #

            Thank you so much for your advice. I will definitely go through the links you provided. Keep up the good work. 🙂

  16. mohamad June 23, 2015 at 7:44 am #

    Mr Adrian, I use a Logitech C615 webcam for better frame quality, but this code is for the PiCamera. I changed line 5 from “from picamera.array import PiRGBArray” to “from camera.array import PiRGBArray”, which gives this error: “No module named camera.array”.
    I know that the frame capture (line 54) needs to work properly. Please help me get this working.

    • Adrian Rosebrock June 23, 2015 at 10:00 am #

      If you are using a Logitech camera rather than the Raspberry Pi camera, then you will not be able to use the picamera module to access the frames of the video feed. Instead, you’ll have to use the cv2.VideoCapture function as detailed in this post.

  17. asha June 26, 2015 at 1:45 am #

    Great tutorial. I’ve tried it at my office. It works perfectly. Thanks Adrian.

    • Adrian Rosebrock June 26, 2015 at 5:50 am #

      I’m glad it worked for you Asha! 🙂

  18. Robert June 29, 2015 at 10:06 pm #

    I am curious if it is possible to get this running headless. I have tried via SSH with the video not being displayed, but the program shuts down upon exiting the session. Could this be done via xrdp?

    • Adrian Rosebrock June 30, 2015 at 6:33 am #

      This should be possible to run headless provided your camera is connected. You could always SSH into the Pi, start the script, and then push it to the background so it’s still running before exiting your session. You could also start the script on reboot using a cronjob.

  19. Dinika July 2, 2015 at 1:36 am #

    Dear Mr.Adrian,

    Great tutorial. Thank you. I have a small request. Would you be able to do a small tracking example based on correlation filters such as dlib or MOSSE to track multiple objects? I have been trying to do so for a while now with no luck.

    • Adrian Rosebrock July 2, 2015 at 6:38 am #

      Absolutely! Doing a post on correlation filters is very, very high up on my priority list!

      • Babitha June 28, 2017 at 11:30 pm #

        If I don’t want to store video in Dropbox, then what are the changes in the code?

        • Adrian Rosebrock June 30, 2017 at 8:14 am #

          This question has been addressed multiple times in the comments section. Please read the comments before posting. You simply need to comment out the dropbox import, the code used to connect to the Dropbox API, and the actual upload code.

  20. Scott July 3, 2015 at 7:33 am #

    If you don’t want to have the camera LED active then add:

    disable_camera_led=1

    to /boot/config.txt and the LED will no longer be active

    • Adrian Rosebrock July 3, 2015 at 10:25 am #

      Nice, thanks for the tip Scott!

    • Tom Kiernan September 30, 2015 at 10:20 pm #

      Where is the config.txt file? Can this disable_camera_led setting go into the conf.json file?

      • Adrian Rosebrock October 1, 2015 at 6:11 am #

        It should be located in /boot/config.txt. There should also be an option in there that allows the LED to be disabled. Once you modify it, you’ll need to reboot your Pi. This configuration (since it’s a boot configuration) cannot go into the conf.json file.

    • Andrew October 4, 2015 at 2:56 pm #

      This is awesome, great tip Scott!

  21. nipuna July 4, 2015 at 12:37 am #

    Mr. Adrian,

    After performing background subtraction, is there a way to create a “fixed size bounding box” instead of using the looping-over-contours method mentioned here? So it can be passed to the dlib tracker? Any advice would be a great help. Thank you.

    • Adrian Rosebrock July 4, 2015 at 7:35 am #

      Hey Nipuna, I’m not sure what you mean by a “fixed size bounding box”. If you have the initial bounding box that should be enough to pass into the dlib correlation tracker, no?

      • nipuna July 4, 2015 at 10:12 am #

        Sorry Mr. Adrian, my mistake. Yes, what I want to know is how to create the initial bounding box. Is using the contour method the only way, or is there another way to create the bounding box?

        • Adrian Rosebrock July 4, 2015 at 6:15 pm #

          The actual bounding box is created via the cv2.findContours and cv2.boundingRect functions. If you can obtain the bounding box for an object you want to track, you can then pass it on to something like MOSSE or dlib without too much of an issue.
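          As a short sketch of that idea, assuming you already have a thresholded motion mask (the helper name is mine, not from the post):

          ```python
          import cv2

          def initial_bounding_box(mask):
              # hypothetical helper: grab the largest contour from a thresholded
              # motion mask and return its (x, y, w, h) bounding box; the [-2]
              # index makes the unpacking work on OpenCV 2.4, 3, and 4 alike
              cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                  cv2.CHAIN_APPROX_SIMPLE)[-2]
              if not cnts:
                  return None
              c = max(cnts, key=cv2.contourArea)
              return cv2.boundingRect(c)
          ```

          The returned box can then seed a MOSSE or dlib correlation tracker.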

  22. nipuna July 5, 2015 at 8:25 am #

    Thank you Mr. Adrian. You have helped a lot. I looked at your Case Studies bundle and learned a lot in a small amount of time. I regret not having a look at it sooner; then I would have been able to save a lot of the time I spent searching the web about image processing. Thank you.

    • Adrian Rosebrock July 5, 2015 at 8:53 am #

      I’m glad myself and the Practical Python and OpenCV + Case Studies books were able to help! 🙂

  23. Martin Maw July 9, 2015 at 5:34 am #

    Thanks a lot Adrian, this is a great tutorial and it helped me (as a python novice) immensely!
    I integrated the flask html streaming from
    and would like to share.

    • Adrian Rosebrock July 9, 2015 at 6:29 am #

      Very nice Martin! I had to remove the code from the bottom of the comment since the formatting got messed up. Can you please create a GitHub Gist for the code and link it by replying to this comment?

    • Ron W January 11, 2016 at 12:33 am #

      Hi Martin,
      Do you have the code for this? I’m attempting to do the exact same thing, your code would help.


  24. David July 14, 2015 at 1:00 am #

    Great work on these tutorials, worked through the pi-camera and opencv installation and setup without a hitch.

    I like your implementation of the dropbox oauth2 process, but made a small change that allows the generated access token to be stored in a text file or in the conf.json. Here’s the file on github for saving the token in the JSON: https://github.com/levybooth/pi_surveillance_auth/blob/master/pi_surveillance_auth.py

    Note that I added: import os.path to the list of imports, and changed the path for saving the images on line 144.

    Thanks again for your excellent courses – so far they’re the only walk-through of opencv with the pi camera module that actually worked for me.

    • Andrew October 4, 2015 at 3:20 pm #

      Thanks for the mods that allow permanent storage of the access token. Does it ever need to be refreshed or is it truly permanent once stored in conf.json?


  25. Neilesh July 15, 2015 at 3:39 pm #

    Hello Adrian,

    While writing this code, I initially tried hardcoding the values in the JSON file (I also didn’t want to use Dropbox) and I kept getting a syntax error on line 139 (the “if conf[“show_video”]” part). Then I tried writing the JSON part in IDLE, but I’m not sure if that’s the correct way to write a JSON file. I was wondering what workaround there is for the JSON file, or if there is none, how to properly write the JSON file.

    Thank you in advance.

    • Adrian Rosebrock July 16, 2015 at 6:29 am #

      The easiest way to get around the JSON file is to just hardcode the values into the code. The JSON file is just meant to make configuration easier — but if you do not want any configuration (and no Dropbox), just hardcode the variables.

      Secondly, I would suggest downloading the code for the post instead of writing it out line by line. Writing it out is a great exercise and something that can help you learn a new language or technique, but for this problem, it would be best to download the code and have a working “standard” that you can base your modifications on.

  26. thomas July 19, 2015 at 6:12 am #


    thanks for the project.

    Is there a way to integrate Dropbox permanently, without requesting an auth code every time I start the program?

    • Adrian Rosebrock July 19, 2015 at 7:41 am #

      Hi Thomas, that’s a great question, thanks for asking. I honestly do not know the answer to that question off the top of my head. This project was the first time I had used the Dropbox API. I would suggest going through the Core API and seeing what other functions are available.

  27. nomasteryoda July 23, 2015 at 11:32 am #


    I used the dropbox-uploader script listed on this site. It maintains the API key so that you don’t have to request it each time: http://raspberrypitutorials.themichaelvieth.com/tutorials/raspberry-pi-surveillance-camera-dropbox-upload/

    • Lucas July 28, 2015 at 7:02 am #

      Hey Guys

      I turned off Dropbox integration and am using the other Dropbox uploader script. How do I configure where the images are saved? I have a shared folder on my desktop that is synced via cron regularly and would like them to go there.

      Another great feature would be to have an email notification when motion is triggered, can anyone give any tips on that? I’m new to Pi and Python 🙂

      Thanks for the awesome tutorial and script btw Adrian 🙂

  28. Darius July 28, 2015 at 6:00 am #

    Thanks for the great tutorial!

    I am facing an issue in letting the python script run on cronjob. I would like it to run every single time the Rpi reboots, without the aid of a monitor.

    I have created a launcher.sh in my /home/pi directory.
    # launcher.sh
    # activate the cv environment, then execute the python script

    cd /home/pi

    source ~/.profile && workon cv

    python /home/pi/pi_surveillance.py --conf conf.json
    Then I add on the reboot command at crontab on the last line.

    $ sudo crontab -e

    @reboot bash /home/pi/launcher.sh

    But to no avail, it gives this error.

    stdin: is not a tty
    Traceback (most recent call last):
    File “/home/pi/pi_surveillance.py”, line 13, in
    import imutils
    File “/usr/local/lib/python2.7/dist-packages/imutils/__init__.py”, line 5, in$
    from convenience import translate
    File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py”, line 7,$
    import cv2
    ImportError: No module named cv2

    Anyone has any idea to make it work? Thanks in advance.

    • Adrian Rosebrock July 28, 2015 at 6:31 am #

      Hey Darius, it looks like your cronjob is running as root, where the cv virtual environment does not exist. You have two options to resolve this. The first is to create the cv virtual environment for the root user. The second option is to modify your launcher.sh script to switch to the pi user at the top of the file.

      • Darius July 29, 2015 at 1:37 am #

        Thanks for the advice! But now I am facing another problem which I am not too sure about.

        I have created a cron log for debugging purposes. Now, it reflects this instead:

        Traceback (most recent call last):
        File “/home/pi/pi_surveillance.py”, line 28, in
        conf = json.load(open(args[“conf”]))
        IOError: [Errno 13] Permission denied: ‘conf.json’

        I am pretty sure I have granted permission to all the files. Any solution will be much appreciated!! Thanks!

        • Adrian Rosebrock July 29, 2015 at 6:28 am #

          The user executing the script does not have permission to read the conf.json file. You should use chown to change the ownership of the file. A completely terrible hack would be to give full permissions to everyone on the file using chmod 777 conf.json. I would suggest reading up on Unix file permissions before proceeding any further.

      • Eli January 21, 2016 at 8:37 pm #

        The solution I adopted is to activate the user’s crontab, rather than root’s. That way the virtual environment can be initiated as if the user logs in.

        This small change to Darius’ method helped install a background process that survives reboot.

        $ crontab -e -u pi

        where pi is the user name. The rest of the steps are largely the same as his.

        • Adrian Rosebrock January 22, 2016 at 4:47 pm #

          Nice, thanks for sharing Eli! 🙂

  29. Lucas Young July 28, 2015 at 5:19 pm #

    Hey Adrian

    Great tutorial 🙂 If we don’t use Dropbox (because of the need to re-authenticate), how do we make the script save the image to a folder instead?

    I see you do this:

    if conf["use_dropbox"]:
        # write the image to a temporary file
        t = TempImage()
        cv2.imwrite(t.path, frame)

    Could you have an else that sets a path maybe from the conf json and writes the image there?


    • Adrian Rosebrock July 29, 2015 at 6:31 am #

      Hey Lucas, as you suggested, I would just have an else statement and then have a configuration that points to the directory where images should be saved. From there, all you need to do is generate the filename and write it to file using cv2.imwrite.
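      A minimal sketch of generating that filename (the helper name and the local_base_path config key are my own illustrations, not part of the post’s code):

      ```python
      import datetime
      import os

      def build_local_path(base_dir, when=None):
          # hypothetical helper: build a filesystem-safe filename for the
          # captured frame; dashes instead of colons in the timestamp, since
          # colons are not legal in Windows filenames
          when = when or datetime.datetime.now()
          return os.path.join(base_dir, when.strftime("%Y-%m-%d_%H-%M-%S") + ".jpg")
      ```

      The else branch would then just call cv2.imwrite(build_local_path(conf["local_base_path"]), frame), assuming a local_base_path entry is added to conf.json.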

    • chqshaitan January 10, 2016 at 4:19 pm #

      Hi Andrew,

      Firstly I would like to say thanks for a great site. I have spent many hours on it during the last few days getting to grips with image collection and the Raspberry Pi.

      What a great resource.

      Lucas, I am new to Python (though I’ve been developing in PHP for years off and on, so I am familiar with programming). To answer your question, I modified Andrew’s script so that right after the t.cleanup() in the Dropbox block I do an else and then write the frame out to a local/network path.

      Here is the code:

      Add the file_base_path to the JSON configuration file (use forward slashes instead of backslashes; Python will convert them for you, which saves having to escape them).

      • Adrian Rosebrock January 11, 2016 at 6:40 am #

        It’s Adrian, actually 😉 But thanks for sharing your code.

        • chqshaitan January 11, 2016 at 7:23 am #

          Yeah, realised that after I had replied to the post, duh 🙂

  30. Darius July 29, 2015 at 6:31 am #

    Hey Adrian, I have found my errors. Now it can boot up with a cronjob!
    But when motion is detected and it starts to upload to the Dropbox client, it comes up with this error:

    Traceback (most recent call last):
    File “/home/pi/pi_surveillance.py”, line 138, in
    client.put_file(path, open(t.path, “rb”))
    IOError: [Errno 13] Permission denied: ‘.//88eec3c0-5b20-406f-9d68-49bd941a7410.jpg’

    I was thinking it was probably because the absolute path should be declared, since I am running via cronjob.

    But I have no idea how you change t.path to the absolute path. I need some enlightenment!

    • Adrian Rosebrock July 29, 2015 at 6:39 am #

      The reason you are getting that error is because your account does not have permission to create a file under that directory. I would suggest reading up on Unix file permissions before you continue. Alternatively, you might be able to get away with modifying the TempImage line to look something like this:

      t = TempImage(basePath="/tmp/")

      The /tmp directory should be writeable without any file permission changes.

  31. Darius July 29, 2015 at 7:01 am #


  32. Bhuvan August 7, 2015 at 7:41 am #

    pi@raspberrypi ~/pi-home-surveillance $ python pi_surveillance.py --conf conf.json
    Traceback (most recent call last):
    File “pi_surveillance.py”, line 6, in
    from dropbox.client import DropboxOAuth2FlowNoRedirect
    ImportError: No module named dropbox.client

    Can you please help me sort out this error ..
    thanks in advance

    • Adrian Rosebrock August 7, 2015 at 8:13 am #

      You need to install the dropbox Python package first:

      $ pip install dropbox

      • davidsilva June 2, 2016 at 7:01 pm #

        I’m having the same problem as Bhuvan. I’ve already run pip install dropbox.

        • Adrian Rosebrock June 3, 2016 at 3:02 pm #

          Make sure you’re installing it into the same Python virtual environment you’re using for OpenCV as well. For example,

          • sahil February 7, 2017 at 4:23 am #

            Sir, where do I find the Dropbox path? Also, my capturing stops as soon as “Occupied” appears on the image and only one image is stored. What should I do to have continuous capture and storage of occupied images?
            Please help, sir.

    • Sky May 28, 2017 at 10:19 am #

      The problem is caused by an old version of urllib3. You need to download pip via GitHub and update your urllib3. I faced this problem because my pip could not upgrade urllib3; it said the package was owned by the OS. Anyway, I managed to solve it this way.

      • Alejandro June 15, 2017 at 4:13 pm #

        Hi Sky,
        I am currently facing the same problem. However, I did not completely understand how you solved it. Did you update pip by installing it from GitHub, or did you get urllib3 from GitHub?
        Your help would be much appreciated.

        • Rob July 9, 2017 at 9:34 am #

          Anyone get a solution to this? I’ve installed everything in the same virtual environment and am still getting an error here. So close yet so far!!!

          • Max November 15, 2017 at 4:55 am #

            Do you still need a solution?

          • C M December 3, 2017 at 5:01 pm #

            Same here. I’ve run the dropbox install within the virtual environment and done the same with urllib3, but still the same error as Bhuvan. Any ideas on what I could try next?

          • Adrian Rosebrock December 5, 2017 at 7:41 am #

            I would suggest updating to the latest version of the Dropbox library to see if that resolves the issue:

            $ pip install --upgrade dropbox

            If you are using Python virtual environments make sure you access them first.

          • C M December 5, 2017 at 5:04 pm #

            Never mind – I was still using the old version I downloaded before August 2017. Redownloaded and it’s working well, thanks very much.

  33. Gab August 7, 2015 at 2:29 pm #

    I was wondering if you are aware of some code that would allow me to check for motion in a specific ROI within a live video? I am not sure what would be the best way to do it.


    • Adrian Rosebrock August 8, 2015 at 6:32 am #

      If you want to detect motion within only a specific ROI, there are two ways to do it. The first way is to perform NumPy array slicing to crop the region of the image you want to check for motion — then you apply motion detection to only the cropped image, not the entire image.

      Another option is to perform motion tracking on the entire image, and then check the bounding boxes of the contours. If they fall into the (x, y)-coordinates of your ROI, then you know there is motion in your specific region.
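      Both options can be sketched in a few lines (the helper names and the ROI tuple layout are my own illustration, not code from the post):

      ```python
      import numpy as np

      def crop_roi(frame, roi):
          # option 1: NumPy slicing -- run motion detection on this crop only
          (x1, y1, x2, y2) = roi
          return frame[y1:y2, x1:x2]

      def box_in_roi(box, roi):
          # option 2: detect motion on the full frame, then keep only bounding
          # boxes (x, y, w, h) that overlap the ROI rectangle (x1, y1, x2, y2)
          (x, y, w, h) = box
          (rx1, ry1, rx2, ry2) = roi
          return x < rx2 and x + w > rx1 and y < ry2 and y + h > ry1
      ```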

      • Gab August 8, 2015 at 12:12 pm #

        Thanks for this advice. I will try that. I’m quite a novice with Python and programming in general, so your website and advice are greatly appreciated!

  34. Kitae August 15, 2015 at 3:51 am #

    Thank you for tutorial!

    Line 2 in ‘pi_surveillance.py’ gives:

    no module named pyimagesearch.tempimage

    Is there any other package that I have to install?
    I installed imutils and dropbox.
    I made a folder ‘pyimagesearch’ and created a file ‘tempimage.py’ in it.

    • Adrian Rosebrock August 15, 2015 at 6:26 am #

      Download the source code using the form at the bottom of this page. You will receive a .zip file of the source code download that includes the pyimagesearch module.

      • Chris October 6, 2016 at 6:46 pm #

        The zip file I downloaded does not include the full directory structure; it only includes the main pi_surveillance.py and JSON files, but no pyimagesearch module (so no init or tempimage files). Am I missing something? I keep getting the error about the missing pyimagesearch module. Please help, thanks.

        • Adrian Rosebrock October 7, 2016 at 7:24 am #

          Hi Chris — I just checked the .zip of the download. It does indeed include the conf.json, pi_surveillance.py, and pyimagesearch files and directories. Perhaps you accidentally deleted the directory? I would suggest re-downloading the .zip archive.

          • hashir January 16, 2018 at 7:49 am #

            Hi Adrian, how many files should there be after extracting the zip file?

      • sahil January 14, 2017 at 6:15 am #

        Please tell me what needs to be edited in the downloaded code.
        It is showing something like a max-retry error and a new connection error.

        • Adrian Rosebrock January 15, 2017 at 12:07 pm #

          Hey Sahil — this sounds like you have a problem with your internet connection. Please ensure you have a strong connection and retry the download.

  35. Hackpoint August 17, 2015 at 1:00 pm #

    Thank you so much for all your hard work! I followed your tutorial and finally finished my own surveillance system, cheers!

    • Adrian Rosebrock August 18, 2015 at 6:45 am #

      Awesome, glad to hear it! 😀

  36. Andre Tampubolon August 30, 2015 at 10:35 pm #

    Hi Adrian,

    This looks awesome. Seems like I can use it for a pet project.
    BTW, do you have any idea how to adapt the code to use a webcam instead of the Raspberry Pi camera module?

    I use a Logitech C170 webcam, and your other code:

    Works nicely.
    This one doesn’t, though.

    Thank you 🙂

    • Adrian Rosebrock August 31, 2015 at 7:03 am #

      Indeed, this code is meant for the Raspberry Pi camera which uses the picamera module. The picamera module (obviously) is only compatible with the Raspberry Pi camera.

      Luckily, switching it over to use a normal webcam is very simple — checkout the sister post to this one here. All it really amounts to is changing some boilerplate code related to cv2.VideoCapture.

  37. Andre September 13, 2015 at 3:25 pm #

    Hello Adrian

    The tutorials you write are just amazing. Thank you very much.

    When I run your code, Adrian, I see that the processor is only running at about 25% and I am getting quite a lag in real time.
    I am wondering, since I have the B+ model that has 4 cores, is this program running on just one, or should it be adapted with multiprocessing to run on all 4 cores?

    • Adrian Rosebrock September 14, 2015 at 6:17 am #

      Thank you very much Andre, I’m glad you are enjoying the tutorials!

      Which B+ model are you running? The original B+ model had only one core, whereas the Pi 2 has four cores. If I were to make use of multiple cores for this project, I would give a core entirely to performing the “movement detection” allowing the frames from the camera to be read in a non-blocking fashion. That would take a considerable amount of hacking on the codebase, but it certainly could be done.
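      A rough sketch of that split follows: read frames on a dedicated thread so the main loop never blocks on camera I/O. The class and the callable `source` are my own stand-ins, not code from the post or from picamera:

      ```python
      import queue
      import threading

      class FrameGrabber:
          # rough sketch: a dedicated thread keeps grabbing frames while the
          # main loop does motion detection and Dropbox uploads; `source` is
          # any zero-argument callable returning the next frame (a hypothetical
          # stand-in for the picamera capture code)
          def __init__(self, source, maxsize=2):
              self.source = source
              self.frames = queue.Queue(maxsize=maxsize)
              self.stopped = False
              self.thread = threading.Thread(target=self._loop, daemon=True)

          def start(self):
              self.thread.start()
              return self

          def _loop(self):
              while not self.stopped:
                  frame = self.source()
                  try:
                      # drop frames when the consumer falls behind
                      self.frames.put(frame, timeout=0.1)
                  except queue.Full:
                      pass

          def read(self):
              # block until the grabber thread has produced a frame
              return self.frames.get()

          def stop(self):
              self.stopped = True
              self.thread.join()
      ```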

      • Andre September 14, 2015 at 2:21 pm #

        Cool, I will give it a go. I made a mistake, yes I am using the Pi 2.

  38. Jie September 24, 2015 at 11:50 pm #

    Good. Thank you.

  39. tass September 27, 2015 at 7:33 am #

    Thanks for the tutorial.
    How do I fix these problems:
    1. (cv)pi@raspberrypi ~ $ ~/.profile
    -bash: /home/pi/.profile: Permission denied

    2. (cv)pi@raspberrypi ~ $ sudo python test_image.py
    (Image:3311): Gtk-WARNING **: cannot open display:


    • Adrian Rosebrock September 27, 2015 at 8:06 am #

      1. Are you trying to edit it or reload it? You need to supply a command, such as vi ~/.profile or source ~/.profile

      2. You should enable X11 forwarding when you login to your pi: ssh -X pi@your_ip_address

      • pramesh March 2, 2019 at 5:30 am #

        Gtk-WARNING **: cannot open display:

        How can we solve this one? (I tried ssh -X pi@ipaddress but it did not work.)

  40. tass September 27, 2015 at 2:58 pm #

    Thanks, OK for the first question, but as for your second answer, it didn’t work.
    I have Windows 10 and PuTTY, of course.

    • Adrian Rosebrock September 28, 2015 at 6:45 am #

      I’m not a Windows user, so unfortunately I’m not sure how to enable X11 forwarding on Windows and PuTTY. However, there has been some discussion about it over in the comments section of this post, so I would start there.

  41. Tom Kiernan September 29, 2015 at 9:31 pm #

    Hi Adrian, I’m enjoying this project, but need your help diagnosing this error message just after launching:

    Traceback (most recent call last):
    File “sophiecam.py”, line 105, in
    ValueError: too many values to unpack

    • Adrian Rosebrock September 30, 2015 at 6:28 am #

      It sounds like you’re using OpenCV 3. This blog post was meant to be used with OpenCV 2.4. But you can change the cv2.findContours line to be:

      cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

      And that will make it compatible with both OpenCV 3 and OpenCV 2.4.

      EDIT: Below follows a much better method to access contours, regardless of which version of OpenCV you are using:

      • Tom Kiernan September 30, 2015 at 10:43 am #

        Thanks Adrian, I’ll try that tonight. So you and others know how I wound up with OpenCV 3, I’m new at this and started following the steps at the top of this tutorial with:

        “Let’s go ahead and get the prerequisites out of the way. I am going to assume that you already have a Raspberry Pi and camera board.

        You should also already have OpenCV installed on your Raspberry Pi…”

        That link led to: “Install OpenCV and Python on your Raspberry Pi 2 and B+” with an UPDATE: “I have just released a brand new tutorial that covers installing OpenCV 3…”

        So thinking “newer is better”, I went down that path, but ended in the ValueError message. I’m really glad it’s easy to make it compatible with both OpenCV versions!

        • Tom Kiernan September 30, 2015 at 9:42 pm #

          That worked, the “Occupied” image uploaded to the Dropbox server! But Dropbox wouldn’t sync with my Windows PC because the date_timestamp.jpg filename had colons. I replaced the colons with dashes in the ts = timestamp formatting command.

          Next: WiFi, Static IP, launch from boot, live video stream to phone, SMS alerts

          • Adrian Rosebrock October 1, 2015 at 6:11 am #

            Nice, congrats on getting it to work 🙂

          • Tom Kiernan October 1, 2015 at 7:32 pm #

            Adrian, I got WiFi and a static IP to work, and am now wondering how you would approach adding a live video stream to an IP port? I found this link, but I’m not sure: http://stackoverflow.com/questions/5825173/pipe-raw-opencv-images-to-ffmpeg. The pipe-to-VLC approach sounds the most reliable.

            I want to stick with OpenCV and your planned features. TIA

          • Adrian Rosebrock October 2, 2015 at 7:14 am #

            Hey Tom, I’ll be honest — I have not tried to set up video streaming from the Pi to another system that then reads in the frames and processes them. I’ll look into it and perhaps try to do a post on it in the future.

          • Andryan VT October 14, 2015 at 8:21 am #

            Hey Tom Kiernan, did you manage to get it to stream to VLC?

      • Martin December 12, 2015 at 9:04 am #

        Hi Adrian,

        thank you for the tutorial!

        using OpenCV 3 I still get the error:

        What might be wrong?

        Kind Regards:


        • Adrian Rosebrock December 12, 2015 at 10:07 am #

          This blog post was meant to run with OpenCV 2.4, hence the error message. Please see my reply to Tom Kiernan above to fix the error. Additionally, be sure to read this post.

          • Martin December 12, 2015 at 11:28 am #

            Hi Adrian,

            you fixed it! Thank you.

            this worked:
            (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

            this did not work:
            cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

            Kind regards:


          • Adrian Rosebrock December 13, 2015 at 7:40 am #

            Were you getting an error for the second one? Because the code does the exact same thing, only with list slicing.

          • Eric Page December 28, 2015 at 10:49 pm #

            Adrian, same thing for me. Your suggestion for Tom did not work but Martin’s code did. Everything else is 100% your code.

            btw, really really nice work on this and every other post I’ve seen of yours.

      • Dan Bornman March 16, 2016 at 11:43 am #

        The opencv 3 documentation lists the following for findContours()
        Python: cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]]) → image, contours, hierarchy

        Why did you add the ‘[-2]’ ?

        • Adrian Rosebrock March 16, 2016 at 11:51 am #

          That was an experiment to make it compatible with both OpenCV 2.4 and OpenCV 3, which did not quite work out 🙂 This is how I suggest grabbing contours irrespective of your OpenCV version:

          • Dan Bornman March 16, 2016 at 12:33 pm #

            That works for my opencv 3 install. Thanks!

          • Adrian Rosebrock March 16, 2016 at 1:17 pm #

            No problem, happy to help!

          • Kerem May 29, 2016 at 11:48 pm #

            Hi Adrian,

            The code above is throwing the following error :

            Traceback (most recent call last):
            File “pi_surveillance.py”, line 90, in
            cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            NameError: name ‘edged’ is not defined

            When I change it to the first suggestion you had on how to make the code compatible with OpenCV 3 which is as follows, it works!

            cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

            Any ideas why I’m having this trouble? Since I have a working solution this question is more academic than anything.

            Thanks much for all your support. Regards,

          • Adrian Rosebrock May 31, 2016 at 4:08 pm #

            Hi Kerem:

            My original comment was incorrect. I actually suggest the following method for the cv2.findContours return tuple to be compatible with both OpenCV 2.4 and OpenCV 3:

            You can read more about the change to cv2.findContours between OpenCV versions in this blog post.

      • mati March 29, 2016 at 11:33 am #

        I’ve just tested this with OpenCV 3 and Python 3.4.2.
        In addition, the print function has been changed.

        • Adrian Rosebrock March 29, 2016 at 3:38 pm #

          Other than the print function, the cv2.findContours change mentioned above should be the only changes to convert the code from OpenCV 2.4 to OpenCV 3.

  42. Søren Døygaard October 1, 2015 at 11:13 am #

    This is just what I have been waiting for, for a long time. Any ideas on how to make a trigger that starts recording from an event? I only want to record when there is a burglar and not when I am at home. I have been thinking of using a microphone triggered by my alarm system (reading the noise that the siren produces) to start taking pictures.

    • Adrian Rosebrock October 2, 2015 at 7:12 am #

      Only recording when a specific event happens would be straightforward. Just modify the if statement on Line 107 to be if text == "Occupied" and YOUR_EVENT: where you can define whatever criteria you want for the event trigger.

    • Søren Døygaard October 9, 2015 at 3:59 pm #

      Hello again, I have now found out how to implement a microphone. Thank you to FabLab RUC. The microphone that I have ordered is a Microphone Sensor High Sensitivity Sound Detection Module for Arduino/AVR/PIC. I will keep you posted.

  43. Andrew October 3, 2015 at 8:56 pm #

    This is truly awesome! I got this working tonight! A logical extension of this would be to add scheduling constraints in conf.json (so that the capture is only performed on certain days, during certain times). For example, I want to keep surveillance on jewelry in a bedroom, but not when my wife is getting dressed (which usually happens around 6AM on weekdays, and 7AM on weekends).

    I am going to investigate launching this script via cron and killing it off somehow. Not sure how this will work since it is running in a virtual python environment.

    Thanks for writing!


    • Adrian Rosebrock October 4, 2015 at 7:01 am #

      Great suggestion Andrew!

      • Andrew October 4, 2015 at 9:24 pm #

        Here is my solution for starting the script at 8AM on weekdays only, and then killing it after 9 hours (approximately 5PM). Also, I used Scott’s method of storing the Dropbox auth code in conf.json.

        The key is the use of the timeout command, which will kill the process after x hours:

        Here is my /etc/crontab:

        • Adrian Rosebrock October 5, 2015 at 6:30 am #

          Thanks for sharing Andrew!

  44. Cameron October 10, 2015 at 9:52 am #

    Adrian –

    Your work is great! Thanks for providing all of the basics and guidance.

    I’m developing an animatronic scarecrow for halloween. I will be using the motion detection to trigger a sequence of other effects running on another pi. I would like to be able to count the number of people in the frame to alter the behavior of the scarecrow. Exact numbers aren’t needed. I also want to track the direction of the motion so that I can move the scarecrow’s head to follow the “primary” object moving in the frame.

    I’m thinking that it will be best to keep the camera fixed so we don’t introduce additional variables into the motion detection routines. What are your thoughts on how to estimate the object’s distance from the camera? I would need that in order to triangulate its position in the frame to calculate the pan & tilt for the head.

    I’d love your insight and any advice you have.

    • Adrian Rosebrock October 11, 2015 at 8:13 am #

      Hey Cameron — I love the idea of using the Pi for Halloween. Here are some tips to point you in the right direction:

      1. Use the len(cnts) to get an approximate number of regions detected containing motion. These may or may not be people, but it will be a good estimate.

      2. As far as determining the primary direction of movement, take a look at this post.

      3. Calculating the distance to an object is also straightforward.

  45. Mindaugas October 13, 2015 at 8:42 am #

    Hi, Adrian,
    I would like to know if it is possible to write video to Dropbox (or somewhere else), not just pictures?
    For example: start recording when the state is occupied and check every minute if the state is still occupied; if so, continue recording, else if unoccupied, stop recording.

    • Adrian Rosebrock October 13, 2015 at 2:44 pm #

      You can certainly upload any arbitrary data type to Dropbox, it is not specific to pictures. I don’t have a tutorial related to saving actual video streams to file, but that’s something I can cover on the PyImageSearch blog in the future.

      • Nelson Candia February 10, 2018 at 12:47 pm #

        Did you do this? I’m looking like crazy for a video streaming and recording but no one seems to have found a solution

        • Adrian Rosebrock February 12, 2018 at 6:34 pm #

          I covered how to write video clips to disk here.

  46. Fabs October 13, 2015 at 3:30 pm #

    Is there a cv function which allows me to look exclusively at one specific part of the camera’s video? For example, I want one Python script to look at the left half and another one at the right half of the screen. I checked cv2.rectangle but it just draws instead of “cropping” (?). Thanks

    • Adrian Rosebrock October 14, 2015 at 6:21 am #

      You bet, all you need is simple NumPy array slicing. See the “cropping” section towards the bottom of this post.
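For example, splitting a frame into halves is a couple of slices (the 640x480 resolution is just for illustration):

```python
import numpy as np

# A dummy 480x640 BGR frame; a real frame from the camera
# has the same (height, width, channels) shape.
frame = np.zeros((480, 640, 3), dtype="uint8")
(h, w) = frame.shape[:2]

# NumPy slicing is [startY:endY, startX:endX]
left_half = frame[:, 0:w // 2]
right_half = frame[:, w // 2:]
print(left_half.shape)   # (480, 320, 3)
print(right_half.shape)  # (480, 320, 3)
```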

  47. Andryan VT October 14, 2015 at 4:29 am #

    Hello Adrian, I’m new to the Raspberry Pi and Python. Based on what I’m reading, the idea of drawing the box around the object subjected to motion is: first we capture the frame, then use background subtraction and so on, all the way to finding the contours and drawing the box around them, right?

    What if I want to build this kind of system?
    Basically I want to integrate it with a PIR sensor.
    First, when there’s movement captured by the PIR (a human body in this case), my RasPi will capture 1 image, then capture 10s of video. After that, the image and video will be sent to a specific email address, notifying of an intruder or motion. So if I want to do the subtraction and draw the contours within the video, is it possible? Or is it only possible frame by frame? Thanks

    • Adrian Rosebrock October 14, 2015 at 6:15 am #

      Your intuition is correct — background subtraction must be performed first before we can find contours and draw a bounding box.

      Your project is also 100% possible. I simply saved a single frame to disk which was then uploaded to Dropbox. You could also save a video file by writing each frame to it; that’s absolutely possible. I don’t have any tutorials on writing frames to video files, but I’ll be sure to do one in the future.

      • Andryan VT October 14, 2015 at 8:30 am #

        Do we use the VideoWriter module in OpenCV? So basically I change the part uploading the file to Dropbox to writing the frame to a video file using VideoWriter?

        • Adrian Rosebrock October 14, 2015 at 9:45 am #

          Yep, that is correct!

          • Andryan VT October 14, 2015 at 10:47 am #

            If you are not busy, can you give me a snippet for taking each frame in the loop and inserting it into the video? I’m trying every move possible but I’m kind of stuck (Python newbie here hehe)

          • Adrian Rosebrock October 14, 2015 at 11:25 am #

            As I said, I don’t have any code ready for that right now — I’ll be sure to do a tutorial on video writing in the future.

  48. Adams October 17, 2015 at 8:04 am #

    Hello Adrian,
    Your post has been very useful. Would it be possible to get a link to purchase the components required for this project, so that I don’t purchase an incompatible camera module and all that? I am new to the Raspberry Pi, and I would be very grateful if you could help me with that (via my email).

    • Adrian Rosebrock October 18, 2015 at 7:08 am #

      Hey Adams — I actually provide links to the Pi and camera modules I used inside this post.

  49. Cédric Verstraeten October 18, 2015 at 2:24 pm #

    Instead of motion you can also use Kerberos.io, it’s also open-source and a lot more user friendly to install and configure. You can find more information on the website.

  50. Miguel Angel Euclides October 20, 2015 at 3:51 pm #

    Thank you for this very good tutorial,
    How can I record sound that happens in the area of video surveillance?
    Best regards

    • Adrian Rosebrock October 20, 2015 at 5:18 pm #

      Please see my reply to Alain above. I’ll be covering how to write clips to video files in a future PyImageSearch blog post.

  51. Chih-Liang October 29, 2015 at 12:40 am #

    I am new to opencv and thanks for your tutorial.
    I followed your steps to bring it up on a Model B. It works perfectly.

    Look forward to your future opencv application.

    • Adrian Rosebrock November 3, 2015 at 10:36 am #

      Fantastic, I’m happy to hear it worked for you! 😀

  52. Diocletian November 5, 2015 at 11:35 pm #

    Hi, master!!

    I want to turn on a red LED (using the GPIO) when motion is detected. How can I do that?

    I’m a noob!


  53. Ryan November 6, 2015 at 4:05 pm #

    Is there any way to move this processing on to the GPU?

    • Adrian Rosebrock November 6, 2015 at 4:13 pm #

      Unfortunately not for this particular algorithm (or for the Pi in particular). But you can compile OpenCV itself with OpenCL/CUDA support on your laptop or desktop and leverage the GPU there.

  54. Fang Lin November 9, 2015 at 1:03 am #

    Thank you very much Adrian! This is really an awesome and comprehensive tutorial. You explained all the details about the techniques and the improvement methods, which I enjoyed most. I finished reading your book, which is very practical, and got this home surveillance project done.

    • Adrian Rosebrock November 9, 2015 at 6:26 am #

      Nice work Fang! I’m glad the book and tutorials were helpful 🙂

  55. Riens November 11, 2015 at 7:46 am #

    I have a problem when I try python pi_surveillance.py --conf conf.json

    I use the code that I downloaded from the download code segment via email.

    • Adrian Rosebrock November 12, 2015 at 5:50 am #

      It looks like you are using Python 3. The code for this post was intended for Python 2.7, not Python 3. That said, you can make the code compatible by doing a few things, namely changing the print statement to a print function:

      print("[INFO] Authorize this application: {}".format(flow.start()))

      You’ll need to do this for every print statement in the code.

  56. Clifford November 18, 2015 at 2:59 am #

    How can I use this without Dropbox? I’m a newbie at Python and OpenCV.

    • Adrian Rosebrock November 18, 2015 at 6:25 am #

      Simply comment out any code related to Dropbox.

  57. Sidd Saran November 20, 2015 at 9:06 am #

    Hi Adrian,

    Thanks for posting this project. I put it together and enjoyed the process of doing so. The instructions are detailed and well written.

    My one small suggestion would be to make the code compatible with OpenCV 3.0.0. The modifications are in the Q&A, but having them in the main body of the post would be useful.

    I look forward to your blog where we have a choice of recording video snippets rather than just pictures.

    • Adrian Rosebrock November 20, 2015 at 12:45 pm #

      Thanks for the feedback Sidd. I’m still trying to figure out how to handle the OpenCV 2.4 => OpenCV 3 conversion. As my blog post coming out on Monday will explain, the vast majority of users are still using OpenCV 2.4. It makes it a bit challenging to support both versions.

  58. halfcoder November 27, 2015 at 11:51 am #

    Does this project support any alarm system, so that it can notify the owner of an intrusion before uploading the photo to Dropbox?

    • Adrian Rosebrock November 28, 2015 at 6:46 am #

      You could certainly update the code to trigger an alarm.

  59. BrunoNFL November 28, 2015 at 7:43 am #

    Hey Adrian, I’d like to know if it would be possible to find the number of contours programmatically.
    I tried many ways to implement that, but I couldn’t get a precise output.
    I just need to get the number of contours that are being displayed. For example, if 2 people walk apart from each other, it displays 2 contours, but when I try to display that, I get a much higher number using len(cnts)

    • Adrian Rosebrock November 28, 2015 at 2:12 pm #

      In this particular case, using the number of contours detected isn’t the best method to “count” the number of moving people in the image. If you take a look at the thresholded image from the motion detection step you’ll notice that many parts of the person are actually disconnected. You could try using morphological operations to close these gaps, having only a single “blob” per person. Otherwise, you might want to look into training custom object detectors or depending on your case, use the people detector supplied with OpenCV.

  60. Peter Grove November 30, 2015 at 12:23 pm #

    Many thanks for this and related projects.
    I have eventually managed to get this working. I applied the mod
    "cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]"
    that you mention above to get it to work with OpenCV 3. I also changed the colons to dashes in the ts=timestamp formatting command (as mentioned above) to make it work with Windows. And I rotated the image 180 degrees before saving it to Dropbox, as my camera is upside down, using
    "frame = imutils.rotate(frame, angle=180)" at line 103.
    I am currently experimenting with capturing more images of the motion. Has anyone had good results without overloading the Pi? My config file currently uses:
    "min_upload_seconds": 0.5,
    "min_motion_frames": 1,
    "camera_warmup_time": 2.5,
    "delta_thresh": 5,
    "resolution": [640, 480],
    "fps": 8,
    "min_area": 2000

  61. chuanpan December 1, 2015 at 5:03 am #

    Hi, Adrian, I have used your code to build a motion detection system, and I have added some GPIO controls. However, the GPIO control must run with sudo, and your code cannot run with sudo, so could you please help me solve this?

    • Adrian Rosebrock December 1, 2015 at 6:27 am #

      The easiest way to solve this is to create a virtual environment for the root user:

      And then sym-link OpenCV into your root virtual environment. Then just make sure you run your GPIO script as root and everything will work.

  62. thom December 2, 2015 at 1:04 pm #

    what you mean

    ‘Finally, we’ll define a pyimagesearch package for organization purposes’

    • Adrian Rosebrock December 2, 2015 at 3:02 pm #

      This simply means to keep code tidy and organized. Otherwise all of your code would have to live in a single file which would be quite messy! Also, make sure you download the code using the form at the bottom of this post which includes the pyimagesearch module.

  63. Andryan VT December 4, 2015 at 4:14 am #

    Hello Adrian, I’m currently implementing your motion detection into my system ( I used the part 2 one) and I want to cite the technique to my paper. Is there any formal paper to cite on or books? Cause citing from website isn’t permitted from my uni. Based on what I’m reading, you are not implementing Improved Gaussian Mixture and that 2 paper above in the motion detection right? Thank you in advance (I’m already posting this comment in the part 1 section)

    • Adrian Rosebrock December 4, 2015 at 6:24 am #

      Hey Andryan, you are correct, I am not using a GMM based method or anything advanced. It’s simply keeping track of the past N frames and performing a subtraction. There is a well known and extremely simple method for background subtraction so there I’m not even sure what the “original” paper was that used this (if there even was one).

      • Andryan VT December 5, 2015 at 2:20 am #

        Thanks for your reply. I will try finding the paper for it haha ! Btw, does a GMM based method too much for Raspberry Pi?

        • Adrian Rosebrock December 5, 2015 at 6:21 am #

          If you’re using the Pi 2, you might be able to use the GMM method. If you don’t perform shadow detection that will also help speed things up. But in reality, I wouldn’t expect to get any more than a few frames per second performance.

  64. Peter Grove December 5, 2015 at 3:19 pm #

    I am trying to detect when evening/night prevents a good image. Can you please give me a pointer to a simple calculation I can do on the frame to see if evening has come?


    • Adrian Rosebrock December 6, 2015 at 7:18 am #

      If you keep track of the past N frames (where N would likely have to contain 15-30 minutes worth of frames), you could compute their average over a few minutes. Once the average falls below a preset threshold (meaning the frames are getting “darker”), you can say that night has come.
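A rough sketch of that idea (the threshold and window length are illustrative values you would tune for your camera):

```python
import numpy as np

# Flag "night" once the rolling average of per-frame brightness
# dips below a threshold. Both values are in 0-255 grayscale
# units and would be tuned for your camera and room.
DARK_THRESHOLD = 40
WINDOW = 5  # number of recent frames to average

def is_night(recent_means, threshold=DARK_THRESHOLD):
    return np.mean(recent_means) < threshold

# Simulated grayscale frames: daylight first, then dusk.
frames = [np.full((480, 640), v, dtype="uint8")
          for v in (120, 110, 35, 30, 25, 20, 15)]
means = [float(gray.mean()) for gray in frames]

print(is_night(means[:WINDOW]))   # mostly bright window -> False
print(is_night(means[-WINDOW:]))  # dark window -> True
```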

  65. Marc Boudreau December 8, 2015 at 5:48 pm #

    Hi Adrian,
    I kept getting:

    Couldn’t find my error, so I downloaded the source code with your form.
    I copied it and am getting the same error.

    Any thoughts?

    • Adrian Rosebrock December 9, 2015 at 6:55 am #

      Marc: You need to supply the --conf switch via command line argument, like this:

      $ python pi_surveillance.py --conf conf.json

      • Marc December 10, 2015 at 5:23 am #

        That did it!
        Lots of cheers when I got it to work!

        • Adrian Rosebrock December 10, 2015 at 6:46 am #

          Nice, glad to hear it!

  66. Dima December 8, 2015 at 9:14 pm #

    Hi, your tutorials are superb, thank you. I’m having noob issues with TempImage. I touched the files and made the directory with the files that you mentioned in the beginning, but when I run “python pi_surveillance.py --conf conf.json”, there’s an import error. ImportError: cannot import name TempImage. Help!

    • Adrian Rosebrock December 9, 2015 at 6:52 am #

      Hey Dima, I would suggest downloading the source code using the form at the bottom of this post and then comparing your code to mine. My guess is that you forgot to include the __init__.py file.

      • Dima December 9, 2015 at 11:10 am #

        Hey, thanks for the response, I created all the files myself and must have messed something up, downloading the code solved that.

        I do have another question: what JSON params should I tweak to send more photos to Dropbox? It would be nice for the camera to take more photos per second. Does this require tweaking the FPS or a combination of parameters, or am I missing something?

        • Adrian Rosebrock December 10, 2015 at 6:54 am #

          You’ll need to update both min_upload_seconds and min_motion_frames. The smaller you make those values, the more images will be uploaded to Dropbox.

  67. Bert December 10, 2015 at 10:21 pm #

    Hi Adrian,
    I am new to Raspberry Pi and like your project very much,

    I was wondering:
    now that you (and we all) can detect motion, would it be possible to enhance this project to detect which direction the motion is moving in?

    And as a next step: add a subproject with 2 servos to move the camera in that direction.
    And from there: a next subproject could be that if the detected motion area is small, maybe zoom in with the camera.

    By doing so we can monitor a bigger location (camera totally zoomed out),
    and if motion is detected in a part of the monitored area (N, NE, E, SE, S, SW, W, NW) we can center that area by moving the camera.

    The next step could be that if the motion-part is less than x percent of the whole screen we could zoom in.

    Wouldn’t that be great?

    I’ll definitely get a Raspberry of my own and will look into this.
    I’m not sure if I’m capable of enhancing the project this way 😉

    Keep up the good work!

  68. Marc Boudreau December 12, 2015 at 2:56 pm #

    Hi Adrian,
    Having trouble with DropBox integration.
    Does the app need to be “in production” for it to work?

    • Adrian Rosebrock December 13, 2015 at 7:38 am #

      Can you clarify what you mean by “in production”? Also, if you’re having issues with the Dropbox API, you might also want to post on the official Dropbox developer forums.

      • Marc Boudreau December 14, 2015 at 12:01 pm #

        I see your point. More a Dropbox issue than computer vision.
        By “in production” I meant: when you create an app, it’s listed as “in development”; to make it public, you need to apply for “in production”.

        I see now that this was not related to my problem of having it back up.
        It can back up with the status “in development”.

        I didn’t get it working like you have it in your video, but I did get it to work with an “access token”, which eliminates the problem of having to authorize each time.

        I see that a few people in the comments were asking about this.

        Here is the link I followed to get it working with a token.


        Hope it helps someone.

        • Adrian Rosebrock December 14, 2015 at 5:32 pm #

          Thanks for the followup Marc, I certainly appreciate it and I’m sure other PyImageSearch readers will as well 🙂

      • sahil May 27, 2017 at 5:14 am #

        The Dropbox app on mobile does not notify when an image is uploaded.
        Is there anything I can do to receive a notification?

        • Adrian Rosebrock May 28, 2017 at 12:59 am #

          I’m honestly not sure what the exact issue is there. I would post on the Dropbox Developer Forums to see if they can help further (I am not a Dropbox API expert).

  69. hiroshi December 18, 2015 at 12:53 am #

    Hi Adrian,
    When I run my “pi_surveillance.py” I get an error:

    Any idea?

    • Adrian Rosebrock December 18, 2015 at 5:52 am #

      Please see my comment to Kitae above. You need to download the source code associated with this post using the form at the bottom of the post, or re-create the exact same directory structure I used. If you’re just getting started with Python, I would suggest downloading the code; it will then run without error.

  70. luuqee December 21, 2015 at 8:08 am #

    Hey Adrian, you are indeed an awesome teacher.
    Thank you for all the tutorials you have done and everything you have taught us.

    Just wondering, is it possible to live stream the feed to any device connected to the LAN by accessing it through the browser?
    Perhaps something like MJPEG streamer?


    • Adrian Rosebrock December 21, 2015 at 5:36 pm #

      Sure, it’s definitely possible — although I wouldn’t use OpenCV directly for it. I would use something like ffmpeg.

      • luuqee April 14, 2016 at 1:11 am #

        Thanks Adrian. Now waiting for the lengthy process of compiling and installing and hopefully it will turn out well.

        Also, is it possible to integrate any kind of notification for this project? I’m trying to add a notification feature such as email or SMS notification when a picture has been uploaded to Dropbox.
        I hope you can guide me to the right source(s) so I can experiment with it.

        Thanks again.

        • Adrian Rosebrock April 14, 2016 at 4:46 pm #

          If you’re looking to create a SMS notification, then I would use the Twilio API. I’ll be doing a tutorial on this in the future.

          • luuqee July 20, 2016 at 6:48 am #

            Thanks, I would love to read it when it’s done.
            So far I managed to use the ZAPP integrated app for the email notifications.
            I will look at the Twilio API to compare.
            Also, I’m still in the process of trying ffmpeg; somehow I just can’t get it to work. Will try again, maybe I made a few careless mistakes.

            Thanks again Adrian.
            I might end up buying your book haha. Keep up the good work, cheers.

          • Adrian Rosebrock July 20, 2016 at 2:33 pm #

            I personally really like Twilio and find it extremely easy to use. I’ve used it successfully in a number of projects.

  71. claude December 22, 2015 at 1:37 am #

    just awesome!

    Thanks a bunch Adrian, can’t wait to find out about smarter image processing techniques to handle trickier changing background conditions.


    • Adrian Rosebrock December 22, 2015 at 6:28 am #

      No worries Claude, I’ll be covering more advanced motion detection/background subtraction algorithms in the future. Stay tuned!

  72. chqshaitan January 10, 2016 at 4:33 pm #

    Hi Adrian,

    I am going to use this code to monitor a bird feeding table that I have in the garden. What is involved if I want to do the following?

    1) Take an individual photo of each contour? (Would I simply do a cv2.imwrite just before you create the rectangle?)

    2) Would it be feasible, if motion is detected, to take a still photo at the full resolution of the camera (i.e. 5 megapixels)? I suspect that some of the birds are going to be very small, so they will not be very clear in the 640-pixel image.



    • Adrian Rosebrock January 11, 2016 at 6:40 am #

      Hey Ray, thanks for the comment. To address your questions:

      1. Yes, if you wanted to create a photo for each contour, just loop over them, extracting the bounding box (i.e., ROI), and use cv2.imwrite to write the image to file. All of this should be done before drawing the rectangles.

      2. I’m not sure if the Raspberry Pi allows for both video capture mode at a lower resolution and then single photo capture at a high resolution. You might want to consult the picamera documentation to see if it’s possible.

  73. Eric Page January 20, 2016 at 1:37 am #

    Adrian, we conversed briefly on Hacker News re the treat dispenser. I’ve had to modify your code quite a bit and, while overall successful, am definitely still having some issues.

    The big one is that I can’t seem to turn the camera off, i.e., once I initialize it, the camera red light is always on, even after I’m done using it for that particular round of treats. All of your examples seem to run continuously. I’ve looked through the OpenCV docs but can’t find anything. Is there some method that I’ve missed?

    btw, the way it runs is that, when the Pi receives an MQTT message, it instantiates a Dispenser object which in turn instantiates a Camera object. Red light turns on. Then
    if dispenser.isMotionVerified()

    isMotionVerified and takeVideo really just call the same methods in the Camera class.

    I’d like the camera to shut down once I’m done with that sequence above. Any direction on where I could look?

    btw, if what I’ve done is useful in any way, I’m happy to share that code. Maybe you want to build your own treat dispenser? Or maybe we collaborate on a beer dispenser…

    • Adrian Rosebrock January 20, 2016 at 1:47 pm #

      Hey Eric! A beer dispenser does sound pretty awesome! 😉

      After reading your comment, I’m not sure I entirely understand the question. You want to turn off the camera after a round of treats have been dispensed? If you do that, how will you know to turn it back on again so that motion will be detected?

      All that said, you might want to try using Python’s with statement; that way, when the camera instance goes out of scope, it’s automatically cleaned up. Something like the pseudo-code below should help:
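A rough sketch of that pattern, using a stand-in Camera class so it runs anywhere (picamera.PiCamera supports the same with usage, so the real code would be "with PiCamera() as camera:"):

```python
# Any object implementing __enter__/__exit__ releases its
# resources automatically when the with-block ends. This Camera
# class is a stand-in for the real camera object.
class Camera:
    def __enter__(self):
        self.recording = True   # the red LED would come on here
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.recording = False  # camera closed, LED turns off
        return False

with Camera() as camera:
    print(camera.recording)  # True while the block runs

print(camera.recording)      # False once the block exits
```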

      • Eric Page January 20, 2016 at 9:01 pm #

        thanks, Adrian.

        re knowing when to turn on, I only care about motion after the treat is dispensed (triggered either via email or MQTT), so I turn on the motion detection system ~10 seconds post dispensing. Either Pickles is home or not.

        I tried what you showed for 20-30 minutes. I was stuck, so I switched over to a different tactic and solved my first problem. My real problems, in descending priority:
        1) The 1st video would record fine, but subsequent videos weren’t overwriting the first one as I intended
        2) The red light is always on

        I moved the cv2.VideoWriter code from init to the takeVideo method of my camera class and that solved the first problem. The red light always being on is just an annoyance, not a huge problem.

        thanks again.

        • Adrian Rosebrock January 21, 2016 at 5:06 pm #

          Congrats on resolving the first issue!

          Regarding #2, have you tried editing your config.txt file?

          • Eric Page February 2, 2016 at 12:19 am #

            Hi Adrian, I missed your response. No but I will try that shortly. I’m sure that’ll work. Having some other issues related to the treat triggering mechanism that I’m working through right now…

  74. Andre Brown January 20, 2016 at 10:34 pm #

    Hi Adrian
    I have got OpenCV 3.1.0 running with Python 2.7 on a Raspberry Pi 2 using your great tutorial. The test_video.py script runs fine.
    However, when I try to run the code in this tutorial, at the step of running the pi_surveillance.py code to link Dropbox, I get an error:

    Please help. I assume I have not loaded the pyimagesearch module or package, but I am new to coding and can’t figure out what to do.

    • Adrian Rosebrock January 21, 2016 at 5:01 pm #

      Please be sure to download the source code associated with this post using the “Downloads” form above. You’ll be able to download a .zip of the code that has the correct directory structure. I would suggest starting there if you are new to coding.

  75. Misagh February 2, 2016 at 12:07 am #

    Hey Adrian, thank you for your great work. I was wondering how I can display the frame delta on the screen next to the live video stream.

    • Adrian Rosebrock February 2, 2016 at 10:27 am #

      All you need to do is use the cv2.imshow method:

      cv2.imshow("Frame Delta", frameDelta)

  76. Misagh February 2, 2016 at 12:46 am #

    I found a way to do it. Thank you for the great work again.

  77. halfcoder February 7, 2016 at 1:30 am #

    Will this project work with OpenCV 3.0, and is there any need for a WiFi adapter in this project?

    • Adrian Rosebrock February 8, 2016 at 3:51 pm #

      If you do not want to upload the images to Dropbox, then you do not need a WiFi adapter or an ethernet connection for this project. Simply comment out the code related to uploading to Dropbox.

      As for the code working with OpenCV 3, all you need to do is change the cv2.findContours call to:

      (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      You can read more about the change to cv2.findContours between OpenCV 2.4 and OpenCV 3 in this post.

  78. Miraj February 10, 2016 at 9:25 am #

    Awesome post Adrian, got everything working.

    I was wondering what’s the advantage of this method of background subtraction vs. the one outlined on the OpenCV site (here: http://docs.opencv.org/master/db/d5c/tutorial_py_bg_subtraction.html#gsc.tab=0)

    I understand median blur is slightly computationally intensive but removes speckles. I was really asking what the advantages are of using the already-available BackgroundSubtractor functions and calling them periodically.

    I'm essentially looking for a way to determine the background of a frame when there is existing movement throughout the video (such as people walking around and/or sitting down).

    Thanks a bunch.

    • Adrian Rosebrock February 10, 2016 at 4:32 pm #

      OpenCV does indeed have built-in methods for background subtraction. The problem is that they are quite computationally expensive and less suitable for running on the Raspberry Pi, but they do tend to do a better job at background subtraction. I plan on writing a blog post on the more advanced background subtraction methods built into OpenCV later this year, so be sure to keep an eye on the blog!

      • Miraj February 13, 2016 at 9:55 am #

        Oh that makes sense—thanks for the info.

        Quick question regarding the camera.capture_continuous function: how is it called? Is it just capturing the current frame present in the video capture, or does it build up a backlog/buffer of frames not yet processed (capturing each frame according to the frame rate)?


        • Adrian Rosebrock February 13, 2016 at 11:29 am #

          It's capturing the current frame. If you need a backlog/buffer, you should use Python's Queue data structure.
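          As a quick sketch of that producer/consumer buffer (the names here are hypothetical; in the real script the capture loop would put frames from capture_continuous):

```python
from queue import Queue  # Python 3; on Python 2 this module is named Queue

# a bounded FIFO buffer between a hypothetical capture thread (producer)
# and a processing thread (consumer)
frames = Queue(maxsize=128)

# the capture thread would put each new frame:
frames.put("frame-1")
frames.put("frame-2")

# the processing thread drains frames in the order they arrived:
first = frames.get()
```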

  79. Andreas Lan February 17, 2016 at 4:56 am #

    Hello Adrian,

    thank you for your code samples. It works brilliantly, and the movement detection in front of my door works without problem. Your introduction to OpenCV and your knowledge about setting things up on the Pi helped me a lot. Without your guide, installing all the libraries would have taken weeks.

    Any idea how to increase the frame rate? I already adopted a threading approach (a picamera frame-collection thread and a surveillance image-processing thread), but using your surveillance algorithm I only get 7 FPS on a Raspberry Pi 2 Model B using 640×480 video and a 500-pixel width for image processing. As another mystery, on another identical Pi 2 Model B I get an even lower frame rate.
    Any ideas to increase frame rates? Is it possible to split the movement detection into two threads so it runs on two cores?

    • Adrian Rosebrock February 17, 2016 at 12:35 pm #

      Fantastic! It’s great to hear that both the installation guide and motion detection tutorial worked for you 🙂

      As for increasing the FPS processing rate, yep, I cover that too. As the tutorial notes, you'll want to use your Pi 2 and threading to get the most performance out of the system.

  80. Satoshi February 19, 2016 at 7:36 pm #

    This blog is the perfect guide for me. I was able to replicate the home surveillance system on my Raspberry Pi with Dropbox. Thank you Adrian, great work!!

    • Adrian Rosebrock February 22, 2016 at 4:29 pm #

      Thanks Satoshi! 😀

  81. halfcoder February 23, 2016 at 1:09 am #

    How do I get these variables?
    “dropbox_key”: “YOUR_DROPBOX_KEY”,
    “dropbox_secret”: “YOUR_DROPBOX_SECRET”,
    “dropbox_base_path”: “YOUR_DROPBOX_APP_PATH”,

    • Adrian Rosebrock February 23, 2016 at 3:23 pm #

      You need to sign up for the Dropbox API.

      • halfcoder March 31, 2016 at 7:30 am #

        Thanks for the tutorial; it worked for me without Dropbox integration, but I'm facing some problems with Dropbox.
        I did get the key and secret, but what should the dropbox_base_path be? Is it the location where our snapshots are stored, or something else?

        • Adrian Rosebrock March 31, 2016 at 2:56 pm #

          The dropbox_base_path should be the full directory path to your “Apps” directory in Dropbox.

          • halfcoder April 1, 2016 at 4:12 am #

            Can you post a sample base path? I'm actually not getting it. I have made an app in Dropbox and downloaded the Dropbox client, and it has only one folder containing the Get Started PDF.

          • Adrian Rosebrock April 1, 2016 at 3:15 pm #

            The system I'm using right now doesn't have my original source code. The next time I'm back at my Pi, I'll post an example path. In the meantime, you should read the Dropbox development documentation, specifically regarding the "App" directory.

          • halfcoder April 1, 2016 at 4:05 pm #

            Thank you a lot… I managed to upload the snapshots to Dropbox.
            The base path was just home/dropbox… lol

          • halfcoder April 2, 2016 at 4:30 am #

            Thanks man, it finally worked for me!

        • gourav February 11, 2017 at 1:51 am #

          Hey, did you install Dropbox on the Raspberry Pi or on Windows? I am facing the same problem. Can you please explain how to do it?

  82. chuanpan February 28, 2016 at 1:40 pm #

    Hello Adrian. First of all, thanks for your guidance. I have built a motion detection system according to your blog; however, I want to add some GPIO control, so I am using the wiringpi module. The GPIO control functions well except inside the cv environment, where the error is: no module named wiringpi. Could you help me figure it out? Thanks very much!

    • Adrian Rosebrock February 29, 2016 at 3:31 pm #

      I haven’t used wiringpi before. Does it require root privileges to run? Or can you execute it as a normal user?

      • chuanpan March 1, 2016 at 6:31 am #

        I have figured it out, but my new problem is that I want to command the pin to output 3.3 volts when motion is detected, and this function should only execute once within one hour (even if motion is detected again). Could you help with that?

        • Adrian Rosebrock March 1, 2016 at 3:36 pm #

          I honestly don't have much experience working directly with voltage. If I do any tutorials on that in the future, I'll let you know. But as for the second part of your question, you can easily keep track of the one-hour mark by using either the time or datetime package in Python. Just record the timestamp of the last event, and check to see if an hour has passed. If so, fire the event again.
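          A minimal sketch of that cooldown logic (the helper name is hypothetical; the GPIO call would go where the comment indicates):

```python
import time

COOLDOWN = 3600   # one hour, in seconds
lastFired = None  # timestamp of the last trigger, None until the first event

def maybe_trigger(now=None):
    # return True (and record the timestamp) only if at least an hour has
    # passed since the last trigger; the caller would drive the GPIO pin
    # to 3.3V whenever True is returned
    global lastFired
    if now is None:
        now = time.time()
    if lastFired is None or now - lastFired >= COOLDOWN:
        lastFired = now
        return True
    return False
```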

          • chuanpan March 1, 2016 at 7:07 pm #

            thank you, love your posts very much, awesome!

  83. Robert Fullagar March 2, 2016 at 11:28 am #


    As requested link to my updated code – https://www.dropbox.com/sh/81qgnioyawlh961/AADBSe9y5_x3ejeGzX0DN7wDa?dl=0

    1. It has a check to see if Dropbox is available before trying to send the file, preventing the script from crashing.
    2. checker.py is like a launcher (run it, not pi_surveillance.py); if pi_surveillance.py isn't running or has stopped, checker.py will start/restart it.
    3. pi_surveillance.py logs into Dropbox automatically using a generated token you add to the config file, facilitating auto-restart with no human interaction needed!

    I am a newbie python programmer, but the code is functional 🙂

    Have fun!


    • Adrian Rosebrock March 3, 2016 at 7:08 am #

      Awesome, thanks for sharing Robert!

      • Robert Fullagar March 8, 2016 at 5:36 am #

        You're welcome…

        I wondered why the image files weren't showing up in my Windows Dropbox client… then I realized they had colons (:) as separators in the timestamp of the file name, and Windows doesn't like those. I changed them to - in the code, and all the images are now visible in my Windows Dropbox.

        Cheers again Adrian



  84. zainy March 11, 2016 at 3:59 pm #

    Hey, I need help. I have done motion detection using blob detection; now I need to draw a shape on the image using contours. Could you help me with that please? I am using Python (OpenCV 3.0).

  85. Chris March 14, 2016 at 1:41 pm #

    Hey Adrian,

    Sick post, man! Works like a charm.

    Thought I would post to let you and other people know where I encountered issues.
    1. I was using Python 3.4 and so I had some syntax issues (fixed via the comments on this post).
    2. (My own oops) I was editing the code on my Windows machine in Notepad++, and when I pasted it into PuTTY some lines didn't paste nicely (easy fix).
    3. Last issue: I was running as root for this project, so I ran into issues when forwarding X11 (I'm using Windows; the fix included allowing SSH login for root http://tinyurl.com/jo5fxrj and installing Xming for PuTTY http://tinyurl.com/zyrn7p8).

    Quick question: is there an easy way to edit the code so the picture taken doesn't include the green box?

    again, great post! Cheers!

    • Adrian Rosebrock March 14, 2016 at 3:14 pm #

      Thanks for sharing Chris! And yes, you can absolutely disregard the green box — simply comment out Line 96 and this will remove the green box from the image. Alternatively, if you still want the green box displayed to your screen (but not written to the file), make a copy of the image before drawing on it via: orig = frame.copy() and then only write the orig image to file.

  86. Brendan Allen March 15, 2016 at 4:48 pm #

    Is there any way to get the captured images to be sent in an email or text message? Btw I am really looking forward to building this system.

    • Adrian Rosebrock March 16, 2016 at 8:12 am #

      Absolutely. I’ll be doing a blog post on this in the future, but in the meantime, you’ll want to read up on the Twilio API.

      • Martin N. April 22, 2017 at 10:30 am #

        +1! I have successfully managed to trigger a text message, but only inside the if text == 'occupied' section. So you can imagine what I am getting… hundreds of text messages when I only want one or two.

  87. Dan Bornman March 16, 2016 at 10:11 am #

    I tried running this example using OpenCV 3 and Python 2.7, and I'm getting the following error:
    picamera.exc.PiCameraValueError: Incorrect buffer length for resolution 640x480

  88. Roger March 17, 2016 at 5:17 am #

    Hi I have the following issue:
    [INFO] warming up…
    [INFO] starting background model…
    Traceback (most recent call last):
    File “pi_surveillance.py”, line 88, in
    ValueError: too many values to unpack

    Any ideas on how to resolve this?

    • Adrian Rosebrock March 17, 2016 at 10:35 am #

      Please be sure to read the comments before submitting. I have answered this question in reply to “Tom Kiernan” above.

  89. Javier March 19, 2016 at 5:05 am #

    Hi Adrian,
    Thank you very much for this useful post.

    Sorry for my bad English. I have an RPi using mjpeg-streamer and the picam to keep an eye on the garden. I would like to adapt your code to use the remote stream as the source.
    Is that possible?


    • Adrian Rosebrock March 19, 2016 at 9:11 am #

      Absolutely! I’ll be doing a series of blog posts on how to write frames and read frames in MJPEG format using strictly Python and OpenCV.

  90. Yong Shean March 26, 2016 at 12:17 pm #

    Hey, your posts are amazing 🙂

    My SD card keeps running out of space while compiling OpenCV; apparently 8GB is too small 🙁 What size of SD card do you recommend so that I can follow all your steps? And what should the specs of the SD card be? Class 10? UHS?

    • Adrian Rosebrock March 27, 2016 at 9:08 am #

      I used an 8GB card for the install, and while it was tight, there was enough room to compile and install OpenCV successfully. That said, it would be worth upgrading to a 16GB card if at all possible. I normally go with the Sandisk cards. Otherwise, you might want to try deleting Wolfram Alpha and Libre Office from your install of Raspbian to free up some space.

  91. Chams March 29, 2016 at 9:44 am #

    Hi, I have a USB camera. Could you please send me the code modifications?

    • Adrian Rosebrock March 29, 2016 at 3:40 pm #

      I’m a bit too busy to modify the code myself, but I suggest starting with this blog post to adapt the code to work with both the Raspberry Pi and a USB camera. You also might be interested in this post as well.

  92. Matea Majstorovic April 4, 2016 at 8:31 am #

    Thank you for this post !

  93. franklin April 8, 2016 at 3:35 pm #

    I can get the camera to work for a few days and then it stops. I reset it and all is well for a few days. Has anyone had the same problem?

    • Adrian Rosebrock April 13, 2016 at 7:17 pm #

      It sounds like it might be a connection or a hardware problem. Double check the ribbon connection from the camera to your Pi and ensure it is still connected.

  94. Reza April 14, 2016 at 3:55 am #

    Hey Adrian, I was running this program but I got an error I can't resolve. Can you help me?

    The error is: "(Security Feed:1414): Gtk-WARNING **: cannot open display:"


    • Adrian Rosebrock April 14, 2016 at 4:43 pm #

      Please read the comments before posting — See my reply to “tass” above.

  95. joakim körling April 18, 2016 at 3:42 pm #

    Great stuff Adrian – even bought and read your book!

    For an application I am thinking about, I need to get up to at least 30 FPS with processing similar to this application.

    I get around 8 FPS with my Raspberry Pi 3 / Jessie Lite / VNC doing the above. What can I expect? Is it possible to reach 30 FPS?


    • Adrian Rosebrock April 18, 2016 at 4:38 pm #

      Thanks for picking up a copy, I hope you're enjoying it! As for 30 FPS: getting there is challenging, but it is possible. I would suggest starting here and reading up on how threading can improve your FPS processing rate. You'll also want to keep your frames as small as possible without sacrificing accuracy. The smaller the frames, the less data there is to process, and thus your pipeline will run faster.

  96. MJ April 20, 2016 at 12:50 am #

    Instead of using the Pi camera module, could you use a USB webcam like the Logitech C270 or C920? I'm thinking about handling the motion detection with a PIR sensor on the GPIO. Have you tried either approach?

    • Adrian Rosebrock April 20, 2016 at 6:05 pm #

      Absolutely, you can certainly use a USB webcam. My USB webcam of choice is the Logitech C920. You can learn how to access both USB webcams and the Raspberry Pi camera module (without changing a single line of code) in this post.

      I’ll also be doing some GPIO tutorials in the coming weeks, so be sure to keep an eye on the PyImageSearch blog!

  97. Eric April 20, 2016 at 7:31 pm #

    Hi Adrian,

    Amazing tutorials! Thank you for writing them! I got this project up and running with Dropbox and a Raspberry Pi 2 with a PiCamera.

    I’ve been reading your articles/tutorials on FPS and using threading. I was wondering if you had any advice on the best way to add threading to this project. I’m still a beginner, but learning more and more.

    Again, thank you for your tutorials and info, it’s awesome!

    -Eric V

    • Adrian Rosebrock April 21, 2016 at 5:01 pm #

      So if you've already read the post on adding threading to webcam access, then you know enough to get started. I would suggest ripping out the code related to cv2.VideoCapture and replacing it with the VideoStream. Start small and create a simple script that utilizes the VideoStream. Then, piece by piece, incorporate the rest of the home surveillance code.
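      If you want to see the shape of the idea before wiring in the real VideoStream, here is a rough, self-contained sketch of a threaded frame grabber (the ThreadedStream class and the FakeCamera source are hypothetical stand-ins, not the actual imutils API):

```python
import threading
import time

class ThreadedStream:
    # a background thread keeps grabbing the newest frame so read()
    # never blocks the main processing loop
    def __init__(self, source):
        self.source = source
        self.frame = None
        self.stopped = False
        self.lock = threading.Lock()

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        while not self.stopped:
            frame = self.source.read()
            with self.lock:
                self.frame = frame

    def read(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.stopped = True

class FakeCamera:
    # stand-in for picamera / cv2.VideoCapture; returns frame counters
    def __init__(self):
        self.n = 0
    def read(self):
        self.n += 1
        return self.n

vs = ThreadedStream(FakeCamera()).start()
time.sleep(0.1)       # let the background thread grab a few frames
frame = vs.read()
vs.stop()
```

The key design point is that the expensive grab happens off the main thread, so the motion detection pipeline always processes the most recent frame instead of waiting on I/O.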

  98. Jean-Pierre Lavoie April 21, 2016 at 3:15 pm #

    Hi Adrian. The code is working and I have a connection with Dropbox. When I run it, it creates the Dropbox base path directory I defined in the conf.json file. I see this in my Dropbox account on my PC. It does detect motion, and I see the UPLOAD lines in the terminal when there is motion. The one thing that doesn't work: I don't see the photos in my Dropbox account, even though the terminal says it is uploading the files. Do you have an idea about the problem?

    • Adrian Rosebrock April 21, 2016 at 4:55 pm #

      That sounds very, very strange. I would suggest reaching out to the Dropbox API forums to see if this is a known issue or a potential problem with your account.

      • Jean-Pierre Lavoie April 21, 2016 at 10:23 pm #

        I’m seeing another weird line that may explain the problem. When running the code, after the Success dropbox account linked, warming up and starting background model.. there is this line in terminal:

        xlib: extension “RANDR” missing on display “1:1.0”.

        Then when I create movement, I see the UPLOAD lines. Like I said, the folders for the specified path are created in my Dropbox account, but the photo files are not uploaded.

        Just wondering if that xlib line might interfere or explain something.

        • Adrian Rosebrock April 22, 2016 at 11:44 am #

          I'm honestly not sure. Again, I'm not a Dropbox developer; this is actually the first time I've ever used the API! I would suggest consulting the Dropbox docs or posting on the official Dropbox developer forums.

  99. Milla May 3, 2016 at 1:31 pm #

    Hi Adrian,
    I was wondering if I can do something like this using an IP camera instead of the Pi camera module or a USB camera, but I can't find any clear tutorial anywhere (I'm a noob). Also, is it possible to connect a streaming camera to sensors? I plan on building a kind of smoke detector using a TGS 2600 smoke sensor, so that when the sensor detects smoke, the camera will take a picture automatically, but I'm not sure what to do first 🙁 Please help me. Anyway, thank you so much for your always-useful tutorials. Keep it up 🙂

    • Adrian Rosebrock May 3, 2016 at 5:43 pm #

      You can absolutely do this using video streaming. In fact, the cv2.VideoCapture function can accept IP streams as input. I'll try to do a blog post on this in the future.

      As for your second question, you can indeed incorporate additional sensors and take an image based on the sensor output. Make sure you pay attention to next week’s blog post on detecting objects and then sounding an alarm using the GPIO library.

    • Paul February 14, 2018 at 7:21 pm #


      Did you get the IP webcam streaming to work? If so, what code needs to be updated/configured?

  100. Rijal Nasution May 5, 2016 at 8:45 am #

    Hi Rian!
    I have a problem when I try python pi_surveillance.py --conf conf.json

    Traceback (most recent call last):
    File “pi_surveillance.py”, line 2, in
    from pyimagesearch.tempimage import TempImage
    ImportError: No module named pyimagesearch.tempimage

    How do I solve it? Thank you.

    • Adrian Rosebrock May 5, 2016 at 9:07 am #

      It’s Adrian, actually. And please see my reply to “Kitae” above. It discusses how to resolve your error.

      • Rijal Nasution May 7, 2016 at 7:19 pm #

        Thank you very much for the great tutorials!

    • red July 4, 2016 at 11:18 pm #


      Try playing around with changing directories. Mine was something like /home/pi/pi-home-surveillance; then run python pi_surveillance.py --conf conf.json.

      Also download the code at the top of the post and copy the files like __init__.py, which tells the Python interpreter that pyimagesearch is a module.

  101. Nitin May 8, 2016 at 5:17 am #

    Hi Adrian,

    It's a really awesome project; I'm doing it for my personal use.
    But how do I make it auto-run and access the token automatically? Every time, I have to get an updated token and paste it in.
    How do I do this automatically?

    Please advise.

    • Adrian Rosebrock May 8, 2016 at 8:13 am #

      This project was the first time I used the Dropbox API, so I don't know the ins and outs of the API. Please see the other comments on this blog post, in particular the one from "Danny" above, who details how to hardcode the token.

      • Nitin May 8, 2016 at 3:27 pm #

        Thanks, I'll try it and report back after success.

  102. Raj Gonja May 12, 2016 at 10:32 am #

    Wonderfully explained project tutorial! I have set it up on a Raspberry Pi 3 and all is well except for a high number of contour tracking boxes (too numerous). What would be the ideal way to eliminate multiple detected contours? Sometimes the camera identifies too many motions when none are present and must be terminated and restarted to operate normally.

    Also, it seems that when more than half of the frame is taken up by "green tracking boxes", the status goes to Occupied and never returns to Unoccupied despite stillness in the frame; I wonder why this never resets?

    Any suggestions?

    • Adrian Rosebrock May 12, 2016 at 3:32 pm #

      To handle multiple bounding boxes that pertain to the same object, I would suggest utilizing morphological operations (such as closing/dilation) to bridge the gaps between the objects in the mask. As for the green tracking boxes taking up most of the image, be sure to debug the script's parameters (especially the threshold value) by examining the output of the mask.

  103. Rav May 23, 2016 at 6:11 pm #

    Awesome! Thanks for the post. I have a camera at our RC field, and it has a nice feature that would be nice to have in Python and open source: a motion detection alarm/action/snapshot trigger when motion crosses a line in a particular direction. With this feature I now only capture people when they are walking facing the camera… no more butts 🙂

  104. Andre Brown May 23, 2016 at 9:01 pm #

    Hi Adrian
    I’ve got all this working, thanks for a great tutorial.
    Do you know how I can get the files sent to Dropbox without needing to enter the code each time? I am trying to run it on a standalone, battery-powered Pi box with no keyboard or screen, so I would like to pre-authorize the Dropbox account rather than enter a new code each time.

    • Adrian Rosebrock May 25, 2016 at 3:33 pm #

      Please see the other comments on this post. I’ve answered this question multiple times. I’m also not a Dropbox developer. This was the first time I used their API, mainly for demonstration purposes.

    • Kerem May 30, 2016 at 6:31 pm #

      See the earlier comments on this topic. There is a way to do what you need; it's been discussed and documented.

  105. JJ June 10, 2016 at 11:33 am #

    Hi Adrian,

    Thanks for the awesome project.
    However, I encountered an error at the Dropbox integration step. I pasted the authorization code into the program, but I got a "dropbox.rest.ErrorResponse: [400] u'invalid_grant'".

    I alternatively tried typing the generated access token from the Dropbox app, but I got the same error. I don't know what I am doing wrong here; please help.

    • David May 8, 2017 at 4:42 am #

      Hi JJ,

      I ran into this problem as well and determined, by adding try/except blocks, that it is an authentication problem. Basically, you need to regenerate the key each time you start the Python application (use the link that is output in the terminal, then copy/paste the key into the terminal prompt).


  106. Jonathan June 28, 2016 at 10:30 pm #


    It’s a wonderful project!

    Is it possible to measure the size of the object in meters, feet, or another unit?

    Best Regards.

  107. Erwin June 29, 2016 at 12:56 am #

    Hi Adrian,

    I stumbled upon this hidden gem, and I would like to say thank you for your contribution to the programming community.
    I am trying to build a people counter using OpenCV, and this is the closest thing I've found to accomplishing it. 🙂
    Just wondering, how do you add counting of detected people to this script? I'm pretty new to Python as well.

    Thank you very much in advance.

    • Adrian Rosebrock June 29, 2016 at 2:01 pm #

      A simple, hacky way would be to check len(cnts) to count the number of contour regions in the mask. This would be a crude estimate of the number of people in the stream. Another approach would be to use a dedicated people detector.

      • Erwin June 30, 2016 at 12:00 am #

        Thank you again mate! I’ll update you once I complete this project.

  108. Brendan June 29, 2016 at 4:51 pm #

    Hi Adrian,

    When I try to execute the program, I get an error stating that there is no module named picamera.array. However, when I go into Python and search for the module, it comes up and I can see it, so the module does seem to exist. Why is this happening?

    • Adrian Rosebrock June 30, 2016 at 12:22 pm #

      Did you install the picamera[array] module in a virtual environment? Or globally? If you installed it into a virtual environment, then you’ll need to execute your Python script from within the virtual environment. If you installed it globally, re-install it in the Python virtual environment.

  109. red July 4, 2016 at 11:20 pm #

    This may have been mentioned, but I didn't see it specifically in the comments. Has anyone tried launching pi_surveillance.py --conf conf.json over SSH? This is the error I get. I tried using -Y and -X in my SSH command.

    [SUCCESS] dropbox account linked
    [INFO] warming up…
    [INFO] starting background model…

    (Security Feed:14093): Gtk-WARNING **: cannot open display:

    • Adrian Rosebrock July 5, 2016 at 1:43 pm #

      You need to SSH into your Pi with X11 forwarding enabled. If that is not working, then you should do some additional research on troubleshooting X11 forwarding:

      $ ssh -X pi@your_ip_address

  110. Benjamin Reynolds July 22, 2016 at 9:13 am #

    Could you do a tutorial with the Raspberry Pi for outside motion detection? I need to monitor my driveway which has trees, leaves, birds and squirrels that I don’t want it to detect. I only want it to detect cars and people. I have a Raspberry Pi 3 and the Pi Camera module.

    • Adrian Rosebrock July 22, 2016 at 10:53 am #

      Thanks for the suggestion Benjamin. I’ll try to do a tutorial for this. In the meantime, you might want to consider training a custom object detector for cars and people. Also, you can apply contour filtering to only detect and report on “large” objects, such as cars/people. This would be a simple heuristic that would work as a good starting point.

  111. Philip Hoyos August 20, 2016 at 5:10 am #

    Hi Adrian
    Great tutorials! I’ve been reading them with great interest! Thanks for sharing! I’ve been trying to adjust the size of the output picture and it doesn’t seem to help that I adjust the resolution. Do you know how I can change it?
    Thanks for your help!

    • Adrian Rosebrock August 22, 2016 at 1:33 pm #

      Hey Philip, I'm not sure what you mean by adjusting the size of the output picture. Can you please elaborate?

      • Philip Hoyos September 16, 2016 at 7:10 am #

        Hi Adrian
        Thanks for replying. I'm trying to output a high-resolution picture. Right now I get a very low-resolution picture at only <40KB. When I look at your pictures, they seem to be at a higher resolution. When I adjust the resolution to e.g. 1600×1200, the script does not output a picture at that size. How do I achieve this?
        Thank you for your time!

        • Adrian Rosebrock September 16, 2016 at 8:17 am #

          After Line 57, make a copy of the frame:

          orig = frame.copy()

          Then, when motion is detected, you can instead upload orig, which will be your higher resolution image. In general, we rarely process images larger than 600 pixels along the largest dimension, so if you want a higher resolution frame, just clone the original before processing it.

  112. Sebastián H. August 22, 2016 at 12:48 am #

    Hi Adrian!
    Thank you very much for your tutorials! They are extremely helpful!
    I've been trying to use this code on my Raspberry Pis (2 and 3), but I can't manage to make it work.
    My issue is that cv2.imwrite(…) saves a black image every time. I suspect that picamera.array.PiRGBArray is returning a zero array, but I have not been able to confirm it. I've also tried VideoStream from imutils.video and I get the same result: black/empty images.
    Any ideas on why this could be happening? Also, how do you debug something like this on a Raspberry Pi?

    • Adrian Rosebrock August 22, 2016 at 1:27 pm #

      It sounds like there is an issue with your version of the picamera package or a firmware issue with your Raspberry Pi camera module. Keep an eye on next week’s blog post where I’ll be addressing these issues directly.

  113. Islam August 24, 2016 at 8:51 am #

    Many thanks, Adrian, for your great effort.
    I use a Raspberry Pi 2, but the images uploaded to Dropbox are low quality…
    Secondly, how can I record video while motion is detected until the motion stops, and then upload that video to Dropbox or send a notification email so I can access the remote Pi, play the video, and stream it in real time?

    • Adrian Rosebrock August 24, 2016 at 12:13 pm #

      I'm not sure what you mean by the "low quality" of your images. If the images are low quality, then you might want to check that your Pi camera is reading "quality" images in the first place.

      Secondly, I cover writing video clips to file using OpenCV here. You can combine these two scripts to upload the video to Dropbox or send yourself an email notification.

      • Islam August 25, 2016 at 3:37 pm #

        Many thanks Adrian

  114. reza August 25, 2016 at 7:38 am #

    Can I run this project with OpenCV 3?

    • Adrian Rosebrock August 25, 2016 at 8:36 am #

      Please see the other comments on this post. Yes, you can run this with OpenCV 3. You just need to update the cv2.findContours function call. Either take a look at the other comments, or read this blog post on the differences between cv2.findContours between OpenCV 2.4 and OpenCV 3.

  115. laviniut August 25, 2016 at 9:55 am #

    I want to send an alarm over Bluetooth when I detect motion, but in the cv environment I get an error (no module named bluetooth) when I import it in Python.
    How can I use Bluetooth in the cv environment?

    • Adrian Rosebrock August 30, 2016 at 12:53 pm #

      I personally have never used the "bluetooth" package before, but you should be able to install it into the cv virtual environment via pip:
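      For example (assuming the PyBluez package, which provides the bluetooth module; the package name is my assumption, so double-check it for your setup):

```shell
# activate the cv virtual environment first, then install into it
workon cv
pip install pybluez
```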

  116. Islam September 1, 2016 at 7:05 pm #

    Many thanks, Adrian, for your great effort.
    The program is working fine, but I need to make it auto-start when the Raspberry Pi reboots.
    I tried launcher.sh, and it works when run as ./launcher.sh from a shell terminal, but when I added it to rc.local it doesn't run at reboot. What can I do? I'd appreciate a fast response.

    • Adrian Rosebrock September 2, 2016 at 7:02 am #

      Take a look at this blog post where I demonstrate how to run a Python + OpenCV script on reboot.

      • Islam September 3, 2016 at 7:52 pm #

        Many thanks, Adrian, for your help.
        I finally got it!
        My error was that while auto-starting via on_reboot.sh, home-surveillance.py started and the conf.json file loaded fine, and the Pi camera's red light indicated the Python script was running, but after only 5 seconds the red light went off, indicating the Python script had stopped.

        The solution is very, very simple: ONLY change the conf.json parameter "show_video" to false >>>> That is it!

  117. Jeck September 4, 2016 at 3:34 am #

    Hi adrian

    I just want to ask for advice. I want to make a vehicle and speed tracking system using this concept. I am using a Raspberry Pi 3 with Jessie and OpenCV 3 (I used your installation tutorial). I am about to buy a Pi camera v2, but is this the one I really need? There are wide-angle cameras and adjustable-focus cameras as well. Which one should I buy? Lastly, can you give me some tips on how I should do my project? Thanks

    • Adrian Rosebrock September 5, 2016 at 8:05 am #

      The version 2 of the Raspberry Pi camera module is indeed the latest version. If you want to use a Raspberry Pi camera module, go with this one. If you want a USB camera, I really prefer the Logitech C920. It’s plug-and-play compatible with the Pi and does a really good job for the price.

      As for vehicle speed detection and tracking, start by keeping the project simple. Use basic background subtraction to find cars in semi-controlled environments. Then use the approximate frame rate to derive speed. It won’t be extremely accurate, but it will get you started, which is the most important aspect in this case.
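
      As a rough sketch of deriving speed from the frame rate (the function name, parameters, and the meters-per-pixel calibration factor below are all illustrative assumptions, not part of the post):

```python
def estimate_speed(pixels_moved, meters_per_pixel, frames_elapsed, fps):
    # Convert a pixel displacement into meters using a calibrated
    # meters-per-pixel factor, then divide by the elapsed time implied
    # by the frame count and the (approximate) frame rate
    meters = pixels_moved * meters_per_pixel
    seconds = frames_elapsed / float(fps)
    return meters / seconds  # meters per second

# e.g. a car moving 100 px over 30 frames at 30 FPS, with 0.05 m/px:
# estimate_speed(100, 0.05, 30, 30) -> 5.0 m/s (18 km/h)
```

      The accuracy hinges entirely on the meters-per-pixel calibration, which varies across the frame unless the camera views the road from directly overhead.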

  118. Islam September 7, 2016 at 9:28 am #

    Dear Adrian
    When I use the KeyClipWriter to record video files and then upload them to Dropbox, my Raspberry Pi stops detecting motion until the upload is finished. Is there a procedure that would let the Pi keep detecting motion while the upload is in progress?
    Many thanks

    • Adrian Rosebrock September 8, 2016 at 1:22 pm #

      Spawn a different thread to upload the recorded files, problem solved 🙂
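
      A minimal sketch of that idea, assuming an upload function already exists (the names and the Dropbox call in the usage comment are illustrative):

```python
import threading

def upload_async(upload_fn, *args):
    # Run the (blocking) upload in a background daemon thread so the
    # main loop can keep reading frames and detecting motion
    t = threading.Thread(target=upload_fn, args=args)
    t.daemon = True  # don't block interpreter exit on a stuck upload
    t.start()
    return t

# usage sketch: upload_async(client.files_upload, data, "/clips/clip01.avi")
```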

  119. Stduiologe September 8, 2016 at 11:41 pm #

    Hi, I was wondering if it is possible to not have to authorize the application every time and instead do it somewhat automatically.

    I saw on the Dropbox developer page that an access token can be generated and that I have implicit access set to grant.

    What changes do I have to make to the Python script so I can just run it without having to authenticate?

    Thanks for any hints

  120. Robert mar September 14, 2016 at 7:33 am #


    Thanks for the tutorial. I tried it and it works! 🙂
    Instead of images, I would like to stream/record video of few seconds (perhaps until motion is not detected any more). Which part of the code I should change for that? do you have some good examples?

    I have also one question: I just read that it is possible to use picamera library for motion detection without needing opencv. Do you suggest using the picamera library alone for such a project to avoid the complexity of OpenCV and the time it takes to install and compile?

    • Adrian Rosebrock September 15, 2016 at 9:34 am #

      You certainly can use picamera without OpenCV, that’s not a problem at all. But if you do, then you completely lose the ability to process the frames for motion (or any other processing you want to apply). You could use a different library, such as scikit-image. SciPy also provides (very basic) image operations. But in general, you’ll end up using OpenCV eventually.

      As for updating the code to write video to file, I would use this blog post as inspiration to get you started.

    • Martin N November 11, 2017 at 1:07 pm #

      Hi Robert,
      I managed to combine Adrian’s home surveillance/motion detection on this page with the key event video clips to record video clips once motion is detected. I had a heck of a time figuring it out and thought I’d share it in case anyone else is running into the same issues.

      Code is on my github: https://github.com/mnoah66/home-surveillance-2

  121. Agustin Leira October 5, 2016 at 7:02 pm #

    Hi Adrian, thank you for all the work you are doing on this blog; it is outstanding and I find your work very interesting. I have been studying and testing some of your projects, especially this one, and I ran into something that maybe you or someone else has already found and fixed. I have a Raspberry Pi 2 and an infrared camera. I followed your tutorial and everything works like a charm: the program sends pictures to Dropbox like it should, but after a while (a couple of hours, I guess) my Raspberry hangs and I have to reboot to make it operational again. I have been searching for a solution; some people say it may be the power supply, others say it may be a memory leak. I just wanted to know if anyone has faced something similar. Thank you again for all your contribution; your work is an inspiration.

    • Adrian Rosebrock October 6, 2016 at 6:51 am #

      I personally haven’t encountered this issue before, but a good way to find out is to log the memory usage of the Raspberry Pi. I would set up a shell script that logs the output of $ free -m to a file, along with a timestamp, every 5 minutes. That way, when your Pi hangs you can reboot it and check the log. If memory usage is increasing, it’s a memory leak. If not, it’s a power supply problem.
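
      A sketch of that snapshot logger in Python (the function name and log file are arbitrary); calling it on a schedule, e.g. from cron every 5 minutes, and checking whether the numbers creep upward tells you whether you have a leak:

```python
import datetime
import subprocess

def log_snapshot(cmd, logfile):
    # Append a timestamped snapshot of the command's output
    # (e.g. ["free", "-m"]) to the log file, then return the timestamp
    stamp = datetime.datetime.now().isoformat()
    out = subprocess.check_output(cmd).decode()
    with open(logfile, "a") as f:
        f.write("[{}]\n{}\n".format(stamp, out))
    return stamp

# e.g. log_snapshot(["free", "-m"], "memory.log")
```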

      • Agustin Leira October 10, 2016 at 3:59 pm #

        Yes, it was definitely a power supply problem; there were no memory issues. It was my WiFi dongle: it entered a power-saving mode, and that is when the Raspberry Pi would lose its connection to the network.

        Before solving this issue I could not keep my Raspberry Pi running for more than a couple of hours without rebooting it; now it has been running for the last 4 days in a row.

        To solve my problem I just did the following:
        ping router >/dev/null &
        where router is the router’s address.

        Thank you very much.

        • Adrian Rosebrock October 11, 2016 at 12:54 pm #

          Great job resolving the issue! And thank you for sharing the solution.

  122. Davood October 9, 2016 at 10:02 pm #

    hello AA (Amazing Adrian),
    I have this error;
    File “pi_surveillance.py”, line 34
    print “[INFO] Authorize this application: {}”.format(flow.start())


    • Adrian Rosebrock October 11, 2016 at 1:01 pm #

      It sounds like you are using Python 3. The code for this blog post was written for Python 2.7. You can resolve the issue by changing the print statement to a print function:

      print("[INFO] Authorize this application: {}".format(flow.start()))

  123. Walter October 11, 2016 at 2:22 pm #

    Hi Adrian,
    Great tutorial, thank you.
    I would like to know how to position the frame at 0x0 in the display. Is there a utility or a command that can do that? Thanks

  124. Stein Castillo October 13, 2016 at 4:42 pm #


    Thanks a lot for this great tutorial. I am guessing this must be by far one of your most popular posts!

    Starting from your work, I’ve added a couple of functionalities in case that the community is interested in using them:
    *send an email to a Gmail account with an image attached when motion is detected
    *record the activity in a log file
    *Change the color of the room status text (just for extra coolness!)

    the code can be found at https://github.com/steincastillo/Pi_surveillance.git

    Hope you like it and thanks again for sharing your experience!!!

    • Adrian Rosebrock October 15, 2016 at 9:58 am #

      Nice job Stein, thanks for sharing! I really like the email functionality as well.

  125. Steve Silvi October 22, 2016 at 7:28 pm #

    After installing Silvan Melchior’s RPi_Cam_Web_Interface (which requires Apache2 to run), my Python scripts error out with an “Out of resources (other than memory)” message. I understand that only one process can use the camera at a time, so I’m thinking that even though I am not directly accessing the camera with the Cam_Web_Interface, there’s a process running in the background that’s preventing Python from successfully executing the picamera script. Anybody know if this could be the case, and if so, how to (temporarily) disable other processes from locking the camera? Thanks for any help provided.

    Update to my previous post:

    After opening the Cam_Web_Interface web page and clicking on the “Stop camera” button, I can now access the camera via the Python scripts.

  126. Dylan October 30, 2016 at 8:01 am #

    Hi there Adrian

    Dylan here again.

    Your code works amazingly well and I have added the code I need executed when motion is detected.

    Please please save me some pain and list the exact modification that one needs to make in order to run this blog post’s code using a USB webcam rather than the Picam.

    I have battled to adapt this code to use USB webcams using your other guides.

    Please do help.

    • Adrian Rosebrock November 1, 2016 at 9:10 am #

      Hey Dylan, while I’m happy to help others and point readers in the right direction, I cannot provide custom code for each and every exact situation. There are more than enough (free) resources on this blog to help you create a Python script that uses a USB webcam rather than a Raspberry Pi camera. I would suggest starting with this post, which will help you learn how to access both the Raspberry Pi camera and a USB webcam using the same functions.

  127. Dan October 30, 2016 at 4:20 pm #

    Thank you for such a great tutorial. I’m new to all this so I’m confused on where to extract the zip file to. I saw some folks with the same error as Kitae but I’m just not sure where to send the files. Thanks.

    • Adrian Rosebrock November 1, 2016 at 9:05 am #

      You can extract the .zip file anywhere you would like on your system. Then open up a terminal, change directory to where you unzipped the archive, and execute the Python script.

  128. Matt S November 11, 2016 at 12:39 pm #

    Adrian, I recently purchased the CanaKit Raspberry Pi 3 Complete Starter Kit and the Raspberry Pi 5MP 1080p Camera NoIR (No IR Filter) Night Vision Module with the intent of setting up a night-time camera to film myself sleepwalking, which I thought would be hilarious. I’m going to run through your tutorial just to get more experience with the Pi and Python coding (I work as a software engineer, so I don’t think I’m going to be too helpless). I’ll report back with any questions that pop up after I’m done with your tutorial in regards to modifying things to fit my specific night-time motion-sensing video-recording needs!

    • Adrian Rosebrock November 14, 2016 at 12:14 pm #

      This sounds like an awesome project Matt, I’m excited to see the end results!

      • Matt S February 28, 2017 at 3:58 pm #

        Adrian, I’ve been lazy recently, but overcame that and completed this in just a few minutes (you make it easy with steal-able source code). Now onto your ‘saving-key-event-video-clips-with-opencv’ tutorial to combine these two into a sleepwalking capture device. My only concern is my pi camera working in the dark (I’m not convinced it’s actually an IR camera!). I’ll update when I finish up.

        • Adrian Rosebrock March 2, 2017 at 6:56 am #

          Congrats on the progress Matt, nice job!

  129. Tommy November 14, 2016 at 8:47 am #

    Hi Adrian,

    I’m so happy about your website, but I’ve one question.

    After setting up my Dropbox access (as described in an earlier comment), my biggest problem is this error:

    ValueError: too many values to unpack


    • Adrian Rosebrock November 14, 2016 at 12:00 pm #

      Hey Tommy — before posting please do a simple ctrl + f and search for your error message in the comments section. I have already addressed this error multiple times. Look at my replies to Tom, Martin, and Roger above.

  130. Enrico Reticcioli November 15, 2016 at 7:51 am #

    Hi Adrian,
    thanks so much for your tutorial. It works!! I found some problems:
    1) I don’t have the virtualenvs folder, just the cv folder.
    2) I had a problem with cv2.CHAIN_APPROX_SIMPLE; on line 88 the return value was unpacked as (cnts, _), and after renaming it to just cnts it works with only that change.

    Now that it works, I have some questions:
    If a person stays static or moves really slowly, this system doesn’t detect them; is that correct? How can I recognize the person?
    Is there a variable in your project that counts the number of objects in motion?

    • Adrian Rosebrock November 16, 2016 at 1:46 pm #

      If a person doesn’t move or moves really slowly then eventually this method will “average” them out of the detection. In this case, you need to tune Line 76, specifically the alpha weight parameter. Once you’ve detected a person you might want to try using correlation tracking or CamShift to track them.

      As for the number of regions in an image that can contain motion, it’s simply: len(cnts)

      • Enrico Reticcioli November 19, 2016 at 5:48 pm #

        Hi Adrian, and thanks for your reply.
        This work is really great. Now I’m trying to turn on an LED when the system detects motion, but when I run the program it says RPi.GPIO isn’t a module. I tried to turn on the same LED outside the cv environment and it works. Do you know why I have this problem? Inside the cv environment I have already uninstalled and reinstalled the RPi library, but it doesn’t work.

        Edit: I’m sorry Adrian, I found the solution in one of your posts. Thanks again

        • Adrian Rosebrock November 21, 2016 at 12:36 pm #

          Nice job resolving the issue Enrico! For any other readers who have a question regarding using OpenCV + RPi.GPIO together, please refer to this post.

  131. David November 22, 2016 at 6:15 am #

    Dear Adrian,
    Thanks a lot for your great tutorials! I’m a beginner with Python and OpenCV, but nevertheless it made sense and was explained beautifully.
    I was able to combine your previous post (Basic motion detection and tracking with Python and OpenCV) and this one to make it run properly under Windows 7 with Python 3.5.2. The averaging has removed the noise, but the problem I’m now facing is that if an object moves in the frame and then stops moving, the system goes back to the “unoccupied” state, even though the object (i.e. the person stealing my beer) is still in the room. How would you suggest solving that issue?

    • Adrian Rosebrock November 22, 2016 at 12:29 pm #

      If an object stops moving, then by definition, no motion is occurring. If you want to maintain a larger history of motion you’ll want to play with the weight parameter to cv2.accumulateWeighted(gray, avg, 0.5). In this case it’s 0.5, but you can decrease the value to have the “history” of motion last longer.

      Alternatively, once you’ve detected motion you can apply methods such as CamShift and correlation filters to continue to track the object (even if it stops moving).
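
      The effect of that weight can be seen in a pure-Python sketch of what cv2.accumulateWeighted computes element-wise (the real function operates on NumPy image arrays; the function name below is illustrative):

```python
def accumulate_weighted(avg, frame, alpha):
    # avg <- alpha * frame + (1 - alpha) * avg, per pixel;
    # a smaller alpha means the running average forgets old frames
    # more slowly, i.e. a longer motion "history"
    return [alpha * f + (1.0 - alpha) * a for f, a in zip(frame, avg)]

# with alpha = 0.5, a new frame pulls the average halfway toward it:
# accumulate_weighted([0.0], [10.0], 0.5) -> [5.0]
```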

  132. amri yahya November 25, 2016 at 10:12 pm #

    Hi Adrian, I would like to ask: what is the purpose of the WiFi adapter attached to the USB port? Is it necessary for this tutorial? If this tutorial needs an internet connection, how can I connect it to my house’s WiFi?

    • Adrian Rosebrock Novem