Multiple cameras with the Raspberry Pi and OpenCV


I’ll keep the introduction to today’s post short, since I think the title of this post and GIF animation above speak for themselves.

Inside this post, I’ll demonstrate how to attach multiple cameras to your Raspberry Pi…and access all of them using a single Python script.

Regardless of whether your setup includes:

  • Multiple USB webcams.
  • Or the Raspberry Pi camera module + additional USB cameras…

…the code detailed in this post will allow you to access all of your video streams — and perform motion detection on each of them!

Best of all, our implementation of multiple camera access with the Raspberry Pi and OpenCV is capable of running in real-time (or near real-time, depending on the number of cameras you have attached), making it perfect for creating your own multi-camera home surveillance system.

Keep reading to learn more.


Multiple cameras with the Raspberry Pi and OpenCV

When building a Raspberry Pi setup to leverage multiple cameras, you have two options:

  • Simply use multiple USB web cams.
  • Or use one Raspberry Pi camera module and at least one USB web camera.

The Raspberry Pi board has only one camera port, so you will not be able to use multiple Raspberry Pi camera boards (unless you want to perform some extensive hacks to your Pi). So in order to attach multiple cameras to your Pi, you’ll need to leverage at least one (if not more) USB cameras.

That said, in order to build my own multi-camera Raspberry Pi setup, I ended up using:

  1. A Raspberry Pi camera module + camera housing (optional). We can interface with the camera using the picamera  Python package or (preferably) the threaded VideoStream  class defined in a previous blog post.
  2. A Logitech C920 webcam that is plug-and-play compatible with the Raspberry Pi. We can access this camera using either the cv2.VideoCapture  function built-in to OpenCV or the VideoStream  class from this lesson.

You can see an example of my setup below:

Figure 1: My multiple camera Raspberry Pi setup.

Here we can see my Raspberry Pi 2, along with the Raspberry Pi camera module (sitting on top of the Pi 2) and my Logitech C920 webcam.

The Raspberry Pi camera module is pointing towards my apartment door to monitor anyone that is entering and leaving, while the USB webcam is pointed towards the kitchen, observing any activity that may be going on:

Figure 2: The Raspberry Pi camera module and USB camera are both hooked up to my Raspberry Pi, but are monitoring different areas of the room.

Ignore the electrical tape and cardboard on the USB camera — this was from a previous experiment which should (hopefully) be published on the PyImageSearch blog soon.

Finally, you can see an example of both video feeds displayed to my Raspberry Pi in the image below:

Figure 3: An example screenshot of monitoring both video feeds from the multiple camera Raspberry Pi setup.

In the remainder of this blog post, we’ll define a simple motion detection class that can detect if a person/object is moving in the field of view of a given camera. We’ll then write a Python driver script that instantiates our two video streams and performs motion detection in both of them.

As we’ll see, by using the threaded video stream capture classes (where one thread per camera is dedicated to perform I/O operations, allowing the main program thread to continue unblocked), we can easily get our motion detectors for multiple cameras to run in real-time on the Raspberry Pi 2.

Let’s go ahead and get started by defining the simple motion detector class.

Defining our simple motion detector

In this section, we’ll build a simple Python class that can be used to detect motion in the field of view of a given camera.

For efficiency, this class will assume there is only one object moving in the camera view at a time — in future blog posts, we’ll look at more advanced motion detection and background subtraction methods to track multiple objects.

In fact, we have already (partially) reviewed this motion detection method in our previous lesson, home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox — we are now formalizing this implementation into a reusable class rather than just inline code.

Let’s get started by opening a new file, naming it basicmotiondetector.py , and adding in the following code:
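The complete file ships with the post’s code download; as a reference, here is a minimal sketch of the constructor, reconstructed from the walkthrough below. The accumWeight  default of 0.5 is stated in the text, while the deltaThresh  and minArea  defaults of 5 and 5000 are assumptions carried over from the earlier home surveillance post.

```python
# basicmotiondetector.py (sketch) -- reconstructed from the
# walkthrough; the exact file ships with the code download

class BasicMotionDetector:
    def __init__(self, accumWeight=0.5, deltaThresh=5, minArea=5000):
        # store the accumulated weight factor, the threshold for the
        # delta image, and the minimum area required for motion
        # (deltaThresh/minArea defaults assumed from the earlier post)
        self.accumWeight = accumWeight
        self.deltaThresh = deltaThresh
        self.minArea = minArea

        # initialize the running, weighted average of previous frames
        self.avg = None
```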

Line 6 defines the constructor to our BasicMotionDetector  class. The constructor accepts three optional keyword arguments, which include:

  • accumWeight : The floating point value used for taking the weighted average between the current frame and the previous set of frames. A larger accumWeight  will result in the background model having less “memory” and quickly “forgetting” what previous frames looked like. Using a high value of accumWeight  is useful if you expect lots of motion in a short amount of time. Conversely, smaller values of accumWeight  give more weight to the background model than the current frame, allowing you to detect larger changes in the foreground. We’ll use a default value of 0.5 in this example; just keep in mind that this is a tunable parameter you should consider experimenting with.
  • deltaThresh : After computing the difference between the current frame and the background model, we’ll need to apply thresholding to find regions in a frame that contain motion — this deltaThresh  value is used for the thresholding. Smaller values of deltaThresh  will detect more motion, while larger values will detect less motion.
  • minArea : After applying thresholding, we’ll be left with a binary image that we extract contours from. In order to handle noise and ignore small regions of motion, we can use the minArea  parameter. Any region with an area greater than minArea  is labeled as “motion”; otherwise, it is ignored.

Finally, Line 17 initializes avg , which is simply the running, weighted average of the previous frames the BasicMotionDetector  has seen.

Let’s move on to our update  method:

The update  function requires a single parameter — the image we want to detect motion in.

Line 21 initializes locs , the list of contours that correspond to motion locations in the image. However, if the avg  has not been initialized (Lines 24-26), we set avg  to the current frame and return from the method.

Otherwise, the avg  has already been initialized, so we accumulate the running, weighted average between the previous frames and the current frame, using the accumWeight  value supplied to the constructor (Line 32). Taking the absolute difference between the current frame and the running average yields regions of the image that contain motion — we call this our delta image.

However, in order to actually detect regions in our delta image that contain motion, we first need to apply thresholding and contour detection:

Calling cv2.threshold  using the supplied value of deltaThresh  allows us to binarize the delta image, which we then find contours in (Lines 37-45).

Note: Take special care when examining Lines 43-45. As we know, the cv2.findContours  method’s return signature changed between OpenCV 2.4 and OpenCV 3. This code block allows us to use cv2.findContours  in both OpenCV 2.4 and 3 without having to change a line of code (or worry about versioning issues).

Finally, Lines 48-52 loop over the detected contours, check to see if their area is greater than the supplied minArea , and if so, update the locs  list.

The list of contours containing motion is then returned to the calling method on Line 55.

Note: Again, for a more detailed review of the motion detection algorithm, please see the home surveillance tutorial.

Accessing multiple cameras on the Raspberry Pi

Now that our BasicMotionDetector  class has been defined, we are ready to create the multi_cam_motion.py  driver script to access multiple cameras with the Raspberry Pi — and apply motion detection to each of the video streams.

Let’s go ahead and get started defining our driver script:
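A sketch of how the top of the driver script likely looks, based on the walkthrough (this is a reconstruction, not the exact download, and it only runs on a Pi with both cameras attached and the pyimagesearch  module on your path):

```python
# multi_cam_motion.py (sketch) -- reconstructed from the walkthrough;
# requires a Raspberry Pi with a picamera module and a USB webcam
from pyimagesearch.basicmotiondetector import BasicMotionDetector
from imutils.video import VideoStream
import datetime
import imutils
import time
import cv2

# initialize the video streams and allow them to warm up
print("[INFO] starting cameras...")
webcam = VideoStream(src=0).start()
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# initialize the two motion detectors, along with the total
# number of frames read so far
camMotion = BasicMotionDetector()
piMotion = BasicMotionDetector()
total = 0
```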

We start off on Lines 2-9 by importing our required Python packages. Notice how we have placed the BasicMotionDetector  class inside the pyimagesearch  module for organizational purposes. We also import VideoStream , our threaded video stream class that is capable of accessing both the Raspberry Pi camera module and built-in/USB web cameras.

The VideoStream  class is part of the imutils package, so if you do not already have it installed, just execute the following command:
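If imutils  isn’t installed yet, pip will pull it in:

```shell
pip install imutils
```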

Line 13 initializes our USB webcam VideoStream  class while Line 14 initializes our Raspberry Pi camera module VideoStream  class (by specifying usePiCamera=True ).

If you do not want to use the Raspberry Pi camera module and instead want to leverage two USB cameras, simply change Lines 13 and 14 to:
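The two-USB-camera variant of the stream initialization would then look like this (same VideoStream  class from imutils, with src  selecting the camera index; this obviously requires the hardware to run):

```python
from imutils.video import VideoStream

# two USB cameras instead of the USB + picamera pairing; src
# selects which video device each stream reads from
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
```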

Where the src  parameter controls the index of the camera on your machine. Also note that you’ll have to replace webcam  and picam  with webcam1  and webcam2 , respectively, throughout the rest of this script as well.

Finally, Lines 19 and 20 instantiate two BasicMotionDetector ‘s, one for the USB camera and a second for the Raspberry Pi camera module.

We are now ready to perform motion detection in both video feeds:

On Line 24 we start an infinite loop that is used to constantly poll frames from our (two) camera sensors. We initialize the list of processed frames  on Line 26.

Then, Line 29 defines a for  loop that loops over each video stream and its corresponding motion detector. We use the stream  to read a frame  from our camera sensor and then resize the frame to have a fixed width of 400 pixels.

Further pre-processing is performed on Lines 37 and 38 by converting the frame to grayscale and applying a Gaussian smoothing operation to reduce high frequency noise. Finally, the processed frame is passed to our motion  detector where the actual motion detection is performed (Line 39).

However, it’s important to let our motion detector “run” for a bit so that it can obtain an accurate running average of what our background “looks like”. We’ll allow 32 frames to be used in the average background computation before applying any motion detection (Lines 43-45).

After we have allowed 32 frames to be passed into our BasicMotionDetector’s, we can check to see if any motion was detected:

Line 48 checks to see if motion was detected in the frame  of the current video stream .

Provided that motion was detected, we initialize the minimum and maximum (x, y)-coordinates associated with the contours (i.e., locs ). We then loop over the contours individually and use them to determine the smallest bounding box that encompasses all contours (Lines 51-59).

The bounding box is then drawn surrounding the motion region on Lines 62 and 63, and our list of frames  is updated on Line 66.

Again, the code detailed in this blog post assumes that there is only one object/person moving at a time in the given frame, hence this approach will obtain the desired result. However, if there are multiple moving objects, then we’ll need to use more advanced background subtraction and tracking methods — future blog posts on PyImageSearch will cover how to perform multi-object tracking.

The last step is to display our frames  to our screen:

Lines 70-72 increment the total  number of frames processed, followed by grabbing and formatting the current timestamp.

We then loop over each of the frames  we have processed for motion on Line 75 and display them to our screen.

Finally, Lines 82-86 check to see if the q  key is pressed, indicating that we should break from the frame reading loop. Lines 89-92 then perform a bit of cleanup.

Motion detection on the Raspberry Pi with multiple cameras

To see our multiple camera motion detector run on the Raspberry Pi, just execute the following command:
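Based on the script name used earlier in the post, the command would be:

```shell
python multi_cam_motion.py
```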

I have included a series of “highlight frames” in the following GIF that demonstrate our multi-camera motion detector in action:

Figure 4: An example of applying motion detection to multiple cameras using the Raspberry Pi, OpenCV, and Python.

Notice how I start in the kitchen, open a cabinet, reach for a mug, and head to the sink to fill the mug up with water — this series of actions and motion are detected on the first camera.

Finally, I head to the trash can to throw out a paper towel before exiting the frame view of the second camera.

A full video demo of multiple camera access using the Raspberry Pi can be seen below:

Summary

In this blog post, we learned how to access multiple cameras using the Raspberry Pi 2, OpenCV, and Python.

When accessing multiple cameras on the Raspberry Pi, you have two choices when constructing your setup:

  1. Either use multiple USB webcams.
  2. Or using a single Raspberry Pi camera module and at least one USB webcam.

Since the Raspberry Pi board has only one camera input, you cannot leverage multiple Pi camera boards — at least not without extensive hacks to your Pi.

In order to provide an interesting implementation of multiple camera access with the Raspberry Pi, we created a simple motion detection class that can be used to detect motion in the frame views of each camera connected to the Pi.

While basic, this motion detector demonstrated that multiple camera access can indeed run in real-time on the Raspberry Pi — especially with the help of our threaded PiVideoStream  and VideoStream  classes implemented in blog posts a few weeks ago.

If you are interested in learning more about using the Raspberry Pi for computer vision, along with other tips, tricks, and hacks related to OpenCV, be sure to sign up for the PyImageSearch Newsletter using the form at the bottom of this post.

See you next week!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


60 Responses to Multiple cameras with the Raspberry Pi and OpenCV

  1. Fred January 18, 2016 at 12:17 pm #

    Amazing useful post, as always!

    Keep on the good work Adrian, PyImageSearch is definitively THE blog for those who are developing their skills in cv.

    All of good!

    • Adrian Rosebrock January 18, 2016 at 3:17 pm #

      Thanks Fred! 😀

  2. @turbinetamer January 18, 2016 at 4:17 pm #

    Thanks for the improved line numbers for the python source !!!
    They are much more readable in my Firefox browser.

  3. Ryan January 18, 2016 at 7:29 pm #

    Awesome write up, is there a way to include IP camera’s rather than USB ones?

    • Adrian Rosebrock January 20, 2016 at 1:52 pm #

      Indeed, it is. I’ll try to cover IP cameras in a future blog post.

  4. Joe January 18, 2016 at 9:29 pm #

    Wow! I really need this :) thanks for sharing.

    More power!

    • Adrian Rosebrock January 20, 2016 at 1:52 pm #

      Thanks Joe!

  5. Ahmed Atef January 19, 2016 at 7:20 am #

    Hi,

    Appreciate your great work , thank you.

    I notice that you are using logitech webcam,

    Can i use microsoft lifecam cinema with raspberry pi?

    • Adrian Rosebrock January 20, 2016 at 1:51 pm #

      I have never used any of the Microsoft LifeCams before, but you should consult this list of USB compatible webcams for the Pi.

  6. Phil January 19, 2016 at 7:32 am #

    Hi Adrian, thanks for another great tutorial.
    Up until now, I’ve been running OpenCV on my Raspberry Pi, logged into the GUI. I just tried booting the Pi to the console instead and on running any OpenCV project which uses ‘imread’, I get a GTK error – ‘gtk warning cannot open display’. I’ve read that this is something to do with the X11 server.
    Have you tried OpenCV when booted into the console instead of the GUI? Basically I would like to be able to start my project as soon as the Pi boots up and figured it would be a waste of resources having the GUI running in the background.

    • Adrian Rosebrock January 20, 2016 at 1:50 pm #

      Indeed, anytime you want to use the cv2.imshow method, you’ll need to have a window server running (such as X11). If you want to start a Python script at boot and have it run in the background, just comment out all of your cv2.imshow and cv2.waitKey calls and your program will run just fine.

  7. Girish January 19, 2016 at 10:25 pm #

    Hi Adrian

    Great work, Thanks a lot for sharing the code, I implemented this code and tested it out.

    I can see it working but I see an error message on the command window it just says “select time out”.

    Can we ignore this or is there a way to fix this ?

    BTW did you see this error in your implementation ?

    Regards
    Girish

    • Adrian Rosebrock January 20, 2016 at 1:48 pm #

      I have not seen that error message before. It seems to be an I/O related error, perhaps with Python accessing the Raspberry Pi camera?

      • Girish January 21, 2016 at 1:58 am #

        Hi Adrian,

        Thanks for your response. I ran the code you had published for a long time and it worked fine. Though I am seeing the message “select time out”, it does not seem to be impacting the function (it may be dropping frames, but I’m not sure); it is still working fine with two Logitech C170 webcams. I do not have Pi cameras. (I am not sure why you are not seeing this message.)

        Once again, great work, fantastic post, thanks a lot for sharing your code, I will run the code with more time integrate my own image processing routines and see how it goes

        Regards
        Girish

        • Adrian Rosebrock January 21, 2016 at 5:00 pm #

          If you are using two Logitech cameras, then make sure you have changed the code to:

          Otherwise, you’ll end up trying to access a Raspberry Pi camera module that isn’t setup on your system. In fact, that’s likely where the “select time out” error is coming from.

          • Girish January 26, 2016 at 9:01 am #

            HI Adrian,

            I had done it exactly like the way you did, in the first time itself

            Still I see the message “Select Timeout”. My wild guess is it may be due to the OS or the USB/webcam drivers running on my RPi. Can you share which model of RPi and which Linux image you are using, so that I can replicate the exact setup you have and give it a try?

            Another difference I can think of is, I am using C170 Logitech camera not sure this will make a difference or not

  8. Bolkar January 20, 2016 at 2:44 am #

    Thanks for the very nice post.

    Would it be possible to use ip cameras? I have already deployed couple of them on a regular dvr. It would be very interesting to apply this in an ip setup.

    • Adrian Rosebrock January 20, 2016 at 1:43 pm #

      Absolutely. I’ll try to do post on IP cameras with OpenCV in the future.

  9. Melrick Nicolas January 20, 2016 at 8:05 am #

    Amazing! that would be helpful in the near future

    • Adrian Rosebrock January 20, 2016 at 1:41 pm #

      I should have another example of using multiple cameras on the Pi again next week :-) Stay tuned.

  10. amancio January 23, 2016 at 2:16 pm #

    Hey Adrian,
    your multiple-cameras–rpi does not display the images
    on my monitor;however, a separate program to just
    capture the image and immediately display the image
    using cv2.imgshow does work.

    I looked around in the net and I have seen instances
    in which people complained that cv2.imgshow does
    not update the window properly…

    Got any ideas as to why your script does not work?
    Thanks

    • Adrian Rosebrock January 25, 2016 at 4:14 pm #

      As your other comment mentioned, you need to use the cv2.waitKey method, which the Python script does include on Line 82.

  11. Dmitrii January 25, 2016 at 5:09 am #

    Hi, Adrain! Such a great story!
    Could u tell about the monitor u’ve used for?

  12. Wyn February 15, 2016 at 11:37 pm #

    I’d love to see this combined with storing the video or outputting to a web interface to get a full featured home surveillance system out of it.

    • Adrian Rosebrock February 16, 2016 at 3:40 pm #

      Absolutely. I’ll be doing some tutorials related to video streaming and saving “interesting” clips of a video soon. Keep an eye on the PyImageSearch blog! :-)

  13. Kaibofan February 18, 2016 at 3:43 am #

    great!

  14. salim February 20, 2016 at 4:06 am #

    Great work, Thanks
    can i use smart phone camera??

    • Adrian Rosebrock February 22, 2016 at 4:29 pm #

      Personally, I have never tried tying a smartphone camera to OpenCV. I’m not sure if this is possible for some devices without jailbreaking it.

  15. sarath February 29, 2016 at 11:18 pm #

    My Pi camera video quality are very poor. How could i improve it?

    • Adrian Rosebrock March 1, 2016 at 3:41 pm #

      Can you elaborate on what you mean by “video quality is very poor”? In what way?

  16. Krishna March 10, 2016 at 1:32 pm #

    Hi Adrian,
    Thanks for the tutorial, Is it possible to achieve stereoscopic vision with Rpi Camera and a USB webcam?

    • Adrian Rosebrock March 13, 2016 at 10:30 am #

      I personally haven’t tried with the Raspberry Pi, but in general, the same principles should apply. However, if you intend on doing stereo vision, you’ll need two USB webcams, not just one.

      • Leo April 2, 2016 at 5:55 pm #

        Why is not possible to use a RPi camera and a USB one? What is the maximum resolution?

        • Adrian Rosebrock April 3, 2016 at 10:23 am #

          You can, but I wouldn’t recommend it. For stereo vision applications (ideally) both cameras should have the same sensors.

          • vorney thomas June 21, 2016 at 6:51 pm #

            Stereo vision needs two of the same camera, since they share the same intrinsic and extrinsic parameters; you need these values to calculate the scene depth.

  17. Arnold Adikrishna March 16, 2016 at 12:02 am #

    Hi Adrian. Great tutorial. Great work. And thanks for sharing with us. I have one quick question.

    I ran the program, everything went smooth. Nonetheless, when I press the ‘q’ button the program terminated, but one of my webcams did not stop working, and the terminal did not show the ‘>>>’ anymore. It seemed working on an infinite loop.

    Any idea what is going wrong?

    I am using two usb-webcams (and I have already modified your code so that it can work well with two usb-webcams), and my OS is windows 10.

    Looking forward to hearing from you. Thanks.

    -Arnold

    • Adrian Rosebrock March 16, 2016 at 8:11 am #

      Just to clarify, are you executing the code via a Python shell/IDLE rather than the terminal? The code is meant to be executed via command line (not IDLE), so that could be the problem.

      • Arnold Adikrishna March 30, 2016 at 2:02 am #

        Yes, you are right. Once I executed the code from command prompt, everything was fine. Thanks for your response :)

        • Adrian Rosebrock March 30, 2016 at 12:49 pm #

          No problem, I’m happy it worked out :-)

  18. Mike Grainger April 2, 2016 at 9:40 am #

    Adrian:

    Please continue with these blogs I am finding them very educational. My question, you make a reference to a ‘multi-object tracking’ tutorial coming in the future. I would like to add a + to that article in hopes that it will land higher on your priority list. To that end, do you have an idea when you will be releasing such an article?

    Regards,

    Mike

    • Adrian Rosebrock April 3, 2016 at 10:28 am #

      Hey Mike — thanks for suggesting multi-object tracking. I will do a tutorial on it, but to be honest, I’m not sure exactly when that will be. I’ll be sure to keep you in the loop! Comments like these help me prioritize posts, so thanks for that :-)

  19. Glenn April 3, 2016 at 1:09 am #

    Hey Adrian,

    When I run this script my pi reboots. I was able to get both camera to turn on for a split second but then the pi shuts down pretty quickly. Any idea what could be going on?

    • Adrian Rosebrock April 3, 2016 at 10:22 am #

      That’s quite strange, I’m not sure what the problem is. It seems like the cameras might be drawing too much power and the Pi is shutting down? You might want to post on the official Raspberry Pi forums and see if they have any suggestions.

  20. Fad May 4, 2016 at 2:50 am #

    Hi Adrian
    what algorithms used to detect motion ?
    regards
    Fad

  21. William May 10, 2016 at 11:05 am #

    Hi,

    My context isnt exactly the same since I use the C++ interface of OpenCV, and I am using Linux on a PC (but I plan to go on Raspberry Pi after). I have a problem using multiple cameras though and I hoped that you would have some clues on the cause for that.

    The problem is that I cannot open 2 USB cameras at the same time without having an error from video4linux (the Linux’s API for webcams, which OpenCV relies on, or so I understand).

    Do you have any clues ?

    Regards

    • Adrian Rosebrock May 10, 2016 at 6:22 pm #

      Hey William, thanks for the comment. I’ve never tried to use the C++ interface to access multiple cameras before, so I’m unfortunately not sure what the error is. However, it seems like the same logic should apply. You should be able to create two separate pointers, where each points to the different USB camera src.

  22. James May 25, 2016 at 4:31 pm #

    I’m having a problem installing cv2. I have openCV installed, but cv2 still cannot be found on the Pi. Any suggestions?

    • Adrian Rosebrock May 27, 2016 at 1:36 pm #

      Please refer to the “Troubleshooting” section of this post for information on debugging your OpenCV install.

  23. tita June 1, 2016 at 8:42 pm #

    Wow great tutorial..
    how about 3 usb cameras???

    • Adrian Rosebrock June 3, 2016 at 3:12 pm #

      Sure, absolutely. You would just need to create third webcam variable and read from it:

  24. vorney thomas June 2, 2016 at 3:38 pm #

    Dear Adrian Rosebrock

    I plan to do the visual SLAM subject by using raspi computer board connected two usb camera Logitech C920, but i dont know to get the two image and stream frame at the same time,can you give me some practical advice?
    look forward your response!

    • Adrian Rosebrock June 3, 2016 at 3:04 pm #

      Using the exact code in this blog post, you read frames from two different video sensors at the same time. So I’m not sure what you’re asking?

  25. Arman June 3, 2016 at 3:13 pm #

    i want to see that camera’s view from another PC or desktop .. is that possible ??

    • Adrian Rosebrock June 3, 2016 at 3:15 pm #

      You would normally stream the output from the video stream to a second system. I haven’t created a tutorial on doing this, but it’s certainly something I will consider for the future!

  26. Carlos June 12, 2016 at 11:10 pm #

    Hey Adrian

    First of all, thanks for sharing this in such a detailed way, much appreciated!

    I would like to activate GPIO pins when each camera senses motion, like Camera 0 –> GPIO 22 and Camera 1 GPIO 23.

    How can I identify this?

    Thanks a lot!!

    • Adrian Rosebrock June 15, 2016 at 12:54 pm #

      I would suggest using this blog post as a starting point. You’ll need to combine GPIO code with OpenCV code, which may seem tricky, but once you see my example, it’s actually pretty straightforward.

  27. erik b. June 18, 2016 at 12:07 am #

    Adrian, would you be so kind as to point me in the direction of using just ONE camera (the PiCam (IR)) and being able to save the output motion capture mpeg (OR have the ability to save the output motion capture as PNG files) to a NAS on the same network of the raspberry pi?

    i just need the back end software thats processed on the Pi that does what i just mentioned. i am a python novice, but i am pretty sure that i can follow how things are being processed (like you have in this blog post..which is very awesome..almost exactly what i am trying to achieve..)

    also, how hard is it to change the motion box? instead of the motion sensor box being a solid red line..how could you change that into a box that looks like this one pictured (link: http://docs.unrealengine.com/latest/images/Engine/UMG/UserGuide/Styling/BorderExample.jpg) – without the arrows and the filled in box in the middle.
    OR something like this (link: http://www.codeproject.com/KB/audio-video/Motion_Detection/3.jpg)
    the second one being preferred method of highlighting the motion in the field of view.

    i can build out the frontend webpage to view either the mpeg captures and/or PNG captures stored on the NAS with no problems.

    thank you very much in advance..i am building several of these cameras..and the software you have shared is the best that i have found so far!

    • Adrian Rosebrock June 18, 2016 at 8:14 am #

      If you’re trying to save video clips that contain motion (or any other “key event”), I would recommend reading this tutorial where I explain how to do exactly that.

      As for your second question, changing the motion box becomes a “drawing” problem at that point. It’s not exactly hard, but it’s not exactly easy either. You’ll need to use built in OpenCV drawing functions to create arrows, rectangles, etc. It’s a bit of a pain in the ass, but certainly possible.

      If you want to draw just the motion field, I would get rid of the bounding box and just call cv2.drawContours on the region instead.
