Building a Raspberry Pi security camera with OpenCV

In this tutorial, you will learn how to build a Raspberry Pi security camera using OpenCV and computer vision. The Pi security camera will be IoT capable, making it possible for our Raspberry Pi to send TXT/MMS message notifications, images, and video clips when the security camera is triggered.

Back in my undergrad years, I had an obsession with hummus. Hummus and pita/vegetables were my lunch of choice.

I loved it.

I lived on it.

And I was very protective of my hummus — college kids are notorious for raiding each other’s fridges and stealing each other’s food. No one was to touch my hummus.

But — I was a victim of such hummus theft on more than one occasion…and I never forgot it!

I never figured out who stole my hummus, and even though my wife and I are the only ones who live in our house, I often hide the hummus in the back of the fridge (where no one will look) or under fruits and vegetables (which most people wouldn’t want to eat).

Of course, back then I wasn’t as familiar with computer vision and OpenCV as I am now. Had I known then what I know now, I would have built a Raspberry Pi security camera to capture the hummus heist in action!

Today I’m channeling my inner undergrad-self and laying the chickpea bandit to rest. And if he ever returns, beware: my fridge is monitored!

To learn how to build a security camera with a Raspberry Pi and OpenCV, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Building a Raspberry Pi security camera with OpenCV

In the first part of this tutorial, we’ll briefly review how we are going to build an IoT-capable security camera with the Raspberry Pi.

Next, we’ll review our project/directory structure and install the libraries/packages to successfully build the project.

We’ll also briefly review both Amazon AWS/S3 and Twilio, two services that when used together will enable us to:

  1. Upload an image/video clip when the security camera is triggered.
  2. Send the image/video clip directly to our smartphone via text message.

From there we’ll implement the source code for the project.

And finally, we’ll put all the pieces together and put our Raspberry Pi security camera into action!

An IoT security camera with the Raspberry Pi

Figure 1: Raspberry Pi + Internet of Things (IoT). Our project today will use two cloud services: Twilio and AWS S3. Twilio is an SMS/MMS messaging service. S3 is a file storage service to help facilitate the video messages.

We’ll be building a very simple IoT security camera with the Raspberry Pi and OpenCV.

The security camera will be capable of recording a video clip when the camera is triggered, uploading the video clip to the cloud, and then sending a TXT/MMS message which includes the video itself.

We’ll be building this project specifically with the goal of detecting when a refrigerator is opened and when the fridge is closed — everything in between will be captured and recorded.

Therefore, this security camera will work best in a similar “open” and “closed” environment where there is a large difference in light. For example, you could also deploy this inside a mailbox that opens and closes.

You can easily extend this method to work with other forms of detection, including simple motion detection and home surveillance, object detection, and more. I’ll leave that as an exercise for you, the reader, to implement — in that case, you can use this project as a “template” for implementing any additional computer vision functionality.

Project structure

Go ahead and grab the “Downloads” for today’s blog post.

Once you’ve unzipped the files, you’ll be presented with the following directory structure:

Today we’ll be reviewing four files:

  • config/config.json : This commented JSON file holds our configuration. I’m providing you with this file, but you’ll need to insert your API keys for both Twilio and S3.
  • pyimagesearch/notifications/twilionotifier.py : Contains the TwilioNotifier  class for sending SMS/MMS messages. This is the same exact class I use for sending text, picture, and video messages with Python inside my upcoming Raspberry Pi book.
  • pyimagesearch/utils/conf.py : The Conf  class is responsible for loading the commented JSON configuration.
  • detect.py : The heart of today’s project is contained in this driver script. It watches for significant light change, starts recording video, and alerts me when someone steals my hummus or anything else I’m hiding in the fridge.

Now that we understand the directory structure and files therein, let’s move on to configuring our machine and learning about S3 + Twilio. From there, we’ll begin reviewing the four key files in today’s project.

Installing package/library prerequisites

Today’s project requires that you install a handful of Python libraries on your Raspberry Pi.

In my upcoming book, all of these packages will be preinstalled in a custom Raspbian image. All you’ll have to do is download the Raspbian .img file, flash it to your micro-SD card, and boot! From there you’ll have a pre-configured dev environment with all the computer vision + deep learning libraries you need!

Note: If you want my custom Raspbian images right now (with both OpenCV 3 and OpenCV 4), you should grab a copy of either the Quickstart Bundle or Hardcopy Bundle of Practical Python and OpenCV + Case Studies which includes the Raspbian .img file.

This introductory book will also teach you OpenCV fundamentals so that you can learn how to confidently build your own projects. These fundamentals and concepts will go a long way if you’re planning to grab my upcoming Raspberry Pi for Computer Vision book.

In the meantime, you can get by with this minimal installation of packages to replicate today’s project:

  • opencv-contrib-python : The OpenCV library.
  • imutils : My package of convenience functions and classes.
  • twilio : The Twilio package allows you to send text/picture/video messages.
  • boto3 : The boto3  package will communicate with the Amazon S3 files storage service. Our videos will be stored in S3.
  • json-minify : Allows for commented JSON files (because we all love documentation!)

To install these packages, I recommend that you follow my pip install opencv guide to set up a Python virtual environment.

You can then pip install all required packages:
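With your virtual environment active, the installs look roughly like this (package names taken from the list above; exact versions are up to you):

```shell
# run inside your activated Python virtual environment
pip install opencv-contrib-python
pip install imutils
pip install twilio
pip install boto3
pip install json-minify
```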

Now that our environment is configured, each time you want to activate it, simply use the workon  command.

Let’s review S3, boto3, and Twilio!

What is Amazon AWS and S3?

Figure 2: Amazon’s Simple Storage Service (S3) will be used to store videos captured from our IoT Raspberry Pi. We will use the boto3 Python package to work with S3.

Amazon Web Services (AWS) has a service called Simple Storage Service, commonly known as S3.

S3 is a highly popular service for storing files. I actually use it to host some larger files such as GIFs on this blog.

Today we’ll be using S3 to host our video files generated by the Raspberry Pi Security camera.

S3 is organized by “buckets”. A bucket contains files and folders. It also can be set up with custom permissions and security settings.

A package called boto3  will help us to transfer the files from our Internet of Things Raspberry Pi to AWS S3.

Before we dive into boto3 , we need to set up an S3 bucket.

Let’s go ahead and create a bucket, resource group, and user. We’ll give the resource group permissions to access the bucket and then we’ll add the user to the resource group.

Step #1: Create a bucket

Amazon has great documentation on how to create an S3 bucket here.

Step #2: Create a resource group + user. Add the user to the resource group.

After you create your bucket, you’ll need to create an IAM user + resource group and define permissions.

  • Visit the resource groups page to create a group. I named my example “s3pi”.
  • Visit the users page to create a user. I named my example “raspberrypisecurity”.

Step #3: Grab your access keys. You’ll need to paste them into today’s config file.

These slides walk you through Steps 1-3, but refer to the documentation as well, since slides go out of date rapidly:

Figure 3: The steps to gain API access to Amazon S3. We’ll use boto3 along with the access keys in our Raspberry Pi IoT project.

Obtaining your Twilio API keys

Figure 4: Twilio is a popular SMS/MMS platform with a great API.

Twilio, a phone number service with an API, allows for voice, SMS, MMS, and more.

Twilio will serve as the bridge between our Raspberry Pi and our cell phone. I want to know exactly when the chickpea bandit is opening my fridge so that I can take countermeasures.

Let’s set up Twilio now.

Step #1: Create an account and get a free number.

Go ahead and sign up for Twilio and you’ll be assigned a temporary trial number. You can purchase a number + quota later if you choose to do so.

Step #2: Grab your API keys.

Now we need to obtain our API keys. Here’s a screenshot showing where to create one and copy it:

Figure 5: The Twilio API keys are necessary to send text messages with Python.

A final note about Twilio: it does support the popular WhatsApp messaging platform. WhatsApp support is welcomed by the international community; however, it is currently in beta. Today we’ll be demonstrating standard SMS/MMS only. I’ll leave it up to you to explore Twilio in conjunction with WhatsApp.

Our JSON configuration file

There are a number of variables that need to be specified for this project, and instead of hardcoding them, I decided to keep our code more modular and organized by putting them in a dedicated JSON configuration file.

Since JSON doesn’t natively support comments, our Conf  class will take advantage of JSON-minify to parse out the comments. If JSON isn’t your config file of choice, you can try YAML or XML as well.

Let’s take a look at the commented JSON file now:

Lines 5 and 6 contain two settings. The first is the light threshold for determining when the refrigerator is open. The second is a threshold for the number of seconds until it is determined that someone left the door open.

Now let’s handle AWS + S3 configs:

Each of the values on Lines 9-11 is available in your AWS console (we just generated them in the “What is Amazon AWS and S3?” section above).

And finally our Twilio configs:

Twilio security settings are on Lines 14 and 15. The "twilio_from"  value must match one of your Twilio phone numbers. If you’re using the trial, you only have one number. If you use the wrong number, are out of quota, etc., Twilio will likely send an error message to your email address.

Phone numbers can be formatted like this in the U.S.: "+1-555-555-5555" .
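Putting the pieces together, a sketch of the full config.json might look like the following. Only the "twilio_from" key is named in the text above; every other key name and value here is illustrative, so match them against the actual file in the “Downloads”:

```json
{
    // light threshold for determining when the fridge is open
    "thresh": 50,

    // seconds the door may stay open before a "left open" alert
    "open_threshold_seconds": 15,

    // AWS S3 credentials and bucket (from your AWS console)
    "aws_access_key_id": "YOUR_ACCESS_KEY_ID",
    "aws_secret_access_key": "YOUR_SECRET_ACCESS_KEY",
    "s3_bucket": "YOUR_BUCKET_NAME",

    // Twilio credentials and phone numbers
    "twilio_sid": "YOUR_TWILIO_SID",
    "twilio_auth": "YOUR_TWILIO_AUTH_TOKEN",
    "twilio_to": "+1-555-555-5555",
    "twilio_from": "+1-555-555-5555"
}
```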

Loading the JSON configuration file

Our configuration file includes comments (for documentation purposes), which unfortunately means we cannot use Python’s built-in json  package, as its loader cannot handle comments.

Instead, we’ll use a combination of JSON-minify and a custom  Conf  class to load our JSON file as a Python dictionary.

Let’s take a look at how to implement the Conf  class now:

This class is relatively straightforward. Notice that in the constructor, we use json_minify  (Line 9) to parse out the comments prior to passing the file contents to json.loads .

The __getitem__  method will grab any value from the configuration with dictionary syntax. In other words, we won’t call this method directly — rather, we’ll simply use dictionary syntax in Python to grab a value associated with a given key.
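As a sketch, the whole class fits in a few lines. The real implementation delegates comment removal to json_minify; here a naive regex stand-in keeps the example self-contained:

```python
import json
import re


class Conf:
    def __init__(self, confPath):
        # load the file, strip "//" comments, and parse the JSON
        # (the real class uses json_minify; this naive regex stand-in
        # would break on "//" appearing inside a string value)
        with open(confPath) as f:
            stripped = re.sub(r"//.*", "", f.read())
        self.conf = json.loads(stripped)

    def __getitem__(self, k):
        # enables dictionary syntax, e.g. conf["thresh"]
        return self.conf[k]
```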

Uploading key video clips and sending them via text message

Once our security camera is triggered we’ll need methods to:

  • Upload the images/video to the cloud (since the Twilio API cannot directly serve “attachments”).
  • Utilize the Twilio API to actually send the text message.

To keep our code neat and organized we’ll be encapsulating this functionality inside a class named TwilioNotifier  — let’s review this class now:

On Lines 2-4, we import the Twilio Client , Amazon’s  boto3 , and Python’s built-in  Thread .

From there, our TwilioNotifier  class and constructor are defined on Lines 6-9. Our constructor accepts a single parameter, the configuration, which we presume has been loaded from disk via the Conf  class.

This project only demonstrates sending messages. We’ll be demonstrating receiving messages with Twilio in an upcoming blog post as well as in the Raspberry Pi Computer Vision book.

The send  method is defined on Lines 11-14. This method accepts two key parameters:

  • The string text msg
  • The video file, tempVideo . Once the video is successfully stored in S3, it will be removed from the Pi to save space. Hence it is a temporary video.

The send  method kicks off a Thread  to actually send the message, ensuring the main thread of execution is not blocked.
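That kick-off pattern can be sketched with the standard library alone. In this stand-in, _send just records what would have been sent, so only the non-blocking structure is real; the actual class does the S3 upload and Twilio call inside the worker:

```python
from threading import Thread


class AsyncNotifier:
    """Illustrates the send/_send split: the public method returns
    immediately while a background thread does the slow work."""

    def __init__(self):
        self.sent = []
        self._thread = None

    def send(self, msg, tempVideo):
        # launch the worker so the caller (the frame loop) never blocks
        self._thread = Thread(target=self._send, args=(msg, tempVideo))
        self._thread.start()

    def _send(self, msg, tempVideo):
        # the real TwilioNotifier uploads to S3 and hits the Twilio API
        # here; we just record what would have been sent
        self.sent.append((msg, tempVideo))


n = AsyncNotifier()
n.send("fridge opened", "/tmp/video.mp4")
n._thread.join()
```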

Thus, the core text message sending logic is in the next method, _send :

The _send  method is defined on Line 16. It operates as an independent thread so as not to impact the driver script flow.

Parameters ( msg  and tempVideo ) are passed in when the thread is launched.

The _send  method first will upload the video to AWS S3 via:

  • Initializing the s3  client with the access key and secret access key (Lines 18-21).
  • Uploading the file (Lines 25-27).

Line 24 simply extracts the filename  from the video path since we’ll need it later.

Let’s go ahead and send the message:

To send the message and have the video show up in a cell phone messaging app, we need to send the actual text string along with a URL to the video file in S3.

Note: This must be a publicly accessible URL, so ensure that your S3 settings are correct.

The URL is generated on Lines 30-33.
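Under the common virtual-hosted S3 addressing scheme, that generation amounts to string formatting. The bucket name and filename below are made up, and your bucket’s region or settings may change the exact hostname:

```python
def build_s3_url(bucket, filename):
    # virtual-hosted style URL; the object must be publicly readable
    # for the MMS recipient's phone to fetch it
    return "https://{}.s3.amazonaws.com/{}".format(bucket, filename)


url = build_s3_url("my-fridge-cam", "2019-03-25-12-00-00.mp4")
# https://my-fridge-cam.s3.amazonaws.com/2019-03-25-12-00-00.mp4
```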

From there, we’ll create a Twilio client  (not to be confused with our boto3 s3  client) on Lines 36 and 37.

Lines 38 and 39 actually send the message. Notice the to , from_ , body , and media_url  parameters.

Finally, we’ll remove the temporary video file to save some precious space (Line 42). Without this cleanup, a Pi that is already low on disk space could quickly run out entirely.

The Raspberry Pi security camera driver script

Now that we have (1) our configuration file, (2) a method to load the config, and (3) a class to interact with the S3 and Twilio APIs, let’s create the main driver script for the Raspberry Pi security camera.

The way this script works is relatively simple:

  • It monitors the average amount of light seen by the camera.
  • When the refrigerator door opens, the light comes on, the Pi detects the light, and the Pi starts recording.
  • When the refrigerator door is closed, the light turns off, the Pi detects the absence of light, and the Pi stops recording + sends me or you a video message.
  • If someone leaves the refrigerator open for longer than the specified seconds in the config file, I’ll receive a separate text message indicating that the door was left open.
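Stripped of the camera and messaging details, the steps above describe a small state machine driven by the average light level. A stdlib-only sketch, with illustrative event names and a "left open" limit measured in frames rather than seconds:

```python
def run_fridge_monitor(light_levels, thresh=50, left_open_limit=3):
    """Consume a sequence of per-frame average light levels and
    emit the events the driver script would act on."""
    events = []
    prev_open = False
    open_frames = 0
    alerted = False
    for level in light_levels:
        is_open = level > thresh
        if is_open and not prev_open:
            # door just opened: start recording
            events.append("start_recording")
            open_frames, alerted = 0, False
        elif is_open and prev_open:
            # door still open: check for the "left open" condition
            open_frames += 1
            if open_frames >= left_open_limit and not alerted:
                events.append("door_left_open_alert")
                alerted = True
        elif not is_open and prev_open and not alerted:
            # door just closed normally: stop and send the clip
            events.append("stop_and_send_video")
        prev_open = is_open
    return events


print(run_fridge_monitor([10, 80, 85, 12]))
# ['start_recording', 'stop_and_send_video']
```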

Let’s go ahead and implement these features.

Open up the detect.py  file and insert the following code:

Lines 2-15 import our necessary packages. Notably, we’ll be using our TwilioNotifier , Conf  class, VideoStream , imutils , and OpenCV.

Let’s define an interrupt signal handler and parse for our config file path argument:

Our script will run headless because we don’t need an HDMI screen inside the fridge.

On Lines 18-21, we define a signal_handler  function to capture “ctrl + c” events from the keyboard gracefully. It isn’t always necessary to do this, but if you need anything to execute before the script exits (such as someone disabling your security camera!), you can put it in this function.

We have a single command line argument to parse. The --conf  flag (the path to the config file) can be provided directly in the terminal or in a launch-on-reboot script. You may learn more about command line arguments here.

Let’s perform our initializations:

Our initializations take place on Lines 30-52. Let’s review them:

  • Lines 30 and 31 instantiate our Conf  and TwilioNotifier  objects.
  • Two status variables are initialized to determine when the fridge is open and when a notification has been sent (Lines 34 and 35).
  • We’ll start our VideoStream  on Lines 39-41. I’ve elected to use a PiCamera, so Line 39 (USB webcam) is commented out. You can easily swap these if you are using a USB webcam.
  • Line 44 starts our signal_handler  thread to run in the background.
  • Our video writer  and frame dimensions are initialized on Lines 50-52.

It’s time to begin looping over frames:

Our while  loop begins on Line 55. We proceed to read  a frame  from our video stream (Line 58). The frame  undergoes a sanity check on Lines 62 and 63 to determine if we have a legitimate image from our camera.

Line 59 sets our fridgePrevOpen  flag. The previous value must always be set at the beginning of the loop and it is based on the current value which will be determined later.

Our frame  is resized to a dimension that will look reasonable on a smartphone and also make for a smaller filesize for our MMS video (Line 66).

On Line 67, we create a grayscale image from frame  — we’ll need this soon to determine the average amount of light in the frame.

Our dimensions are set via Lines 70 and 71 during the first iteration of the loop.

Now let’s determine if the refrigerator is open:

Determining if the refrigerator is open is a dead-simple, two-step process:

  1. Average all pixel intensities of our grayscale image (Line 75).
  2. Compare the average to the threshold value in our configuration (Line 78). I’m confident that a value of 50  (in the config.json  file) will be an appropriate threshold for most refrigerators with a light that turns on and off as the door is opened and closed. That said, you may want to experiment with tweaking that value yourself.

The fridgeOpen  variable is simply a boolean indicating if the refrigerator is open or not.
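In OpenCV terms this is just a grayscale conversion followed by .mean(); a pure-Python stand-in makes the two steps explicit (the pixel values here are made up, and 50 is the illustrative threshold from the config):

```python
def fridge_is_open(gray_pixels, thresh=50):
    # step 1: average all pixel intensities of the grayscale image
    flat = [p for row in gray_pixels for p in row]
    avg = sum(flat) / len(flat)
    # step 2: compare the average to the configured threshold
    return avg > thresh


# dark frame (door closed) vs. bright frame (light on)
dark = [[8, 12], [10, 9]]
bright = [[180, 200], [190, 210]]
print(fridge_is_open(dark), fridge_is_open(bright))
# False True
```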

Let’s now determine if we need to start capturing a video:

As shown by the conditional on Line 82, so long as the refrigerator was just opened (i.e. it was not previously opened), we will initialize our video writer .

We’ll go ahead and grab the startTime , create a tempVideo , and initialize our video writer  with the temporary file path (Lines 84-90). The constant 0x21  is for H264 video encoding.

Now we’ll handle the case where the refrigerator was previously open:

If the refrigerator was previously open, let’s check to ensure it wasn’t left open long enough to trigger an “Intruder has left your fridge open!” alert.

Kids can leave the refrigerator open by accident, or maybe after a holiday, you have a lot of food preventing the refrigerator door from closing all the way. You don’t want your food to spoil, so you may want these alerts!

For this message to be sent, the timeDiff  must be greater than the threshold set in the config (Lines 98-102).

This message will include a msg  and video to you, as shown on Lines 107-117. The msg  is defined, the writer  is released, and the notification is set.

Let’s now take care of the most common scenario where the refrigerator was previously open, but now it is closed (i.e. some thief stole your food, or maybe it was you when you became hungry):

The case beginning on Line 120 will send a video message indicating, “Your fridge was opened on {{ day }} at {{ time }} for {{ seconds }}.”

On Lines 123 and 124, our notifSent  flag is reset if needed. If the notification was already sent, we set this value to False , effectively resetting it for the next iteration of the loop.

Otherwise, if the notification has not been sent, we’ll calculate the totalSeconds  the refrigerator was open (Lines 131 and 132). We’ll also record the date the door was opened (Line 133).

Our msg  string is populated with these values (Lines 136-138).

Then the video writer  is released and the message and video are sent (Lines 142-147).
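The timestamp arithmetic behind that message is plain datetime work. A sketch with illustrative wording and format codes (the exact message text in the script may differ):

```python
from datetime import datetime


def format_fridge_msg(startTime, endTime):
    # total number of seconds the door was open
    totalSeconds = (endTime - startTime).seconds
    # date/time the door was opened, in a readable form
    dateOpened = startTime.strftime("%A, %B %d %Y at %I:%M%p")
    return "Your fridge was opened on {} for {} seconds.".format(
        dateOpened, totalSeconds)


opened = datetime(2019, 3, 25, 12, 0, 0)
closed = datetime(2019, 3, 25, 12, 0, 42)
print(format_fridge_msg(opened, closed))
# Your fridge was opened on Monday, March 25 2019 at 12:00PM for 42 seconds.
```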

Our final block finishes out the loop and performs cleanup:

To finish the loop, we’ll write the frame  to the video writer  object and then go back to the top to grab the next frame.

When the loop exits, the writer  is released, and the video stream is stopped.

Great job! You made it through a simple IoT project using a Raspberry Pi and camera.

It’s now time to place the bait. I know my thief likes hummus as much as I do, so I ran to the store and came back to put it in the fridge.

RPi security camera results

Figure 6: My refrigerator is armed with an Internet of Things (IoT) Raspberry Pi, PiCamera, and Battery Pack. And of course, I’ve placed some hummus in there for me and the thief. I’ll also know if someone takes a New Belgium Dayblazer beer of mine.

When deploying the Raspberry Pi security camera in your refrigerator to catch the hummus bandit, you’ll need to ensure that it will continue to run without a wireless connection to your laptop.

There are two great options for deployment:

  1. Run the computer vision Python script on reboot.
  2. Leave a screen  session running with the Python computer vision script executing within.

Be sure to visit the first link if you just want your Pi to run the script when you plug in power.

While this blog post isn’t the right place for a full screen demo, here are the basics:

  • Install screen via: sudo apt-get install screen
  • Open an SSH connection to your Pi and run it: screen
  • If the connection from your laptop to your Pi ever dies or is closed, don’t panic! The screen session is still running. You can reconnect by SSH’ing into the Pi again and then running screen -r . You’ll be back in your virtual window.
  • Keyboard shortcuts for screen:
    • “ctrl + a, c”: Creates a new “window”.
    • “ctrl + a, p” and “ctrl + a, n”: Cycles through “previous” and “next” windows, respectively.
  • For a more in-depth review of screen , see the documentation. Here’s a screen keyboard shortcut cheat sheet.

Once you’re comfortable with starting a script on reboot or working with screen , grab a USB battery pack that can source enough current. Shown in Figure 6, we’re using a RavPower 2200mAh battery pack connected to the Pi’s power input. The product specs claim to charge an iPhone 6+ times, and it seems to run a Raspberry Pi for roughly 10 hours (depending on the algorithm) as well.

Go ahead and plug in the battery pack, connect, and deploy the script (if you didn’t set it up to start on boot).

The commands are:

If you aren’t familiar with command line arguments, please read this tutorial. The command line argument is also required if you are deploying the script upon reboot.

Let’s see it in action!

Figure 7: Me testing the Pi Security Camera notifications with my iPhone.

I’ve included a full demo of the Raspberry Pi security camera below:

Interested in building more projects with the Raspberry Pi, OpenCV, and computer vision?

Figure 8: Catching a furry little raccoon with an infrared light/camera connected to the Raspberry Pi.

Are you interested in using your Raspberry Pi to build practical, real-world computer vision and deep learning applications, including:

  • Computer vision and IoT projects on the Pi
  • Servos, PID, and controlling the Pi with computer vision
  • Human activity, home surveillance, and facial applications
  • Deep learning on the Raspberry Pi
  • Fast, efficient deep learning with the Movidius NCS and OpenVINO toolkit
  • Self-driving car applications on the Raspberry Pi
  • Tips, suggestions, and best practices when performing computer vision and deep learning with the Raspberry Pi

If so, you’ll definitely want to check out my upcoming book, Raspberry Pi for Computer Vision. To learn more about the book (including release date information), just click the link below and enter your email address:

From there I’ll ensure you’re kept in the know on the RPi + Computer Vision book, including updates, behind the scenes looks, and release date information.

Summary

In this tutorial, you learned how to build a Raspberry Pi security camera from scratch using OpenCV and computer vision.

Specifically, you learned how to:

  • Access the Raspberry Pi camera module or USB webcam.
  • Set up your Amazon AWS/S3 account so you can upload images/video when your security camera is triggered (other services such as Dropbox, Box, Google Drive, etc. will work as well, provided you can obtain a public-facing URL for the media).
  • Obtain Twilio API keys used to send text messages with the uploaded images/video.
  • Create a Raspberry Pi security camera using OpenCV and computer vision.

Finally, we put all the pieces together and deployed the security camera to monitor a refrigerator:

  • Each time the door was opened we started recording
  • After the door was closed the recording stopped
  • The recording was then uploaded to the cloud
  • And finally, a text message was sent to our phone showing the activity

You can extend the security camera to include other components as well. My first suggestion would be to take a look at how to build a home surveillance system using a Raspberry Pi where we use a more advanced motion detection technique. It would be fun to implement Twilio SMS/MMS notifications into the home surveillance project as well.

I hope you enjoyed this tutorial!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


72 Responses to Building a Raspberry Pi security camera with OpenCV

  1. Adrian Mvd March 25, 2019 at 12:00 pm #

    Hi Adrian, looks really interesting! just a quick question regarding AWS: S3 is a 12 months free service. Just wondering if it’s not possible to use a service that will remain free of charge. Thank you

    • Adrian Rosebrock March 25, 2019 at 12:21 pm #

      You can use whatever service you want. Dropbox is another service with a really good API and free tiers. I used AWS/S3 since it’s very popular and one that I’ve used in past with success.

  2. David Bonn March 25, 2019 at 12:01 pm #

    Great post, Adrian. I love hummus too.

    • Adrian Rosebrock March 25, 2019 at 12:20 pm #

      Thanks David!

  3. Clarence chhoa hua sheng March 25, 2019 at 12:01 pm #

    What should I do if the source code is run in my desktop? How do I initialise my camera at the pi? What should I change in this line – vs=VideoStream(usePicamera=true).start().
    This works if the desktop connected straight to camera. But now my camera is at the pi. What should I change. Pls help me

    • Adrian Rosebrock March 25, 2019 at 12:21 pm #

      Sorry, I’m not sure I’m understanding your question. You would want to open up a command line, navigate to where you downloaded the source code, and execute the script.

    • stephan March 26, 2019 at 4:14 am #

      Run the code on the pi

    • Bob O March 30, 2019 at 4:32 pm #

      I would run the code on the pi, but save the videos to the desktop if you want to look at them from there. You can also set a watchdog script on the desktop to process the video when it is written into the folder (shared on the PC, shared on the PI, or shared on a NAS if you have one). Just be careful about storing them on the pi, because it fills up pretty quick if you are recording a common event like kids using the fridge.

  4. Rufino March 25, 2019 at 12:31 pm #

    Hi
    Have you considered that your wont have wifi with the fridge closed? what happens If I open and close the door 3 times in a minute?
    As getting staterd tutorial it’s ok, anyway thanks for sharing your knowledge.
    Waiting for your new book

    • Adrian Rosebrock March 25, 2019 at 1:43 pm #

      I didn’t have any issues with WiFi and the Pi in the fridge.

      You should also take a look at my reply to “Carl” as well.

  5. Carl March 25, 2019 at 12:45 pm #

    That is a strange looking case the pi is in. Is that moisture proof by chance? I can’t imagine a pi lasting long in a damp cold environment.

    • Adrian Rosebrock March 25, 2019 at 1:42 pm #

      Putting the Pi actually in the fridge was mainly done for fun. I would of course recommend putting this above the fridge, up high above a cabinet, etc. for a “long-term” monitoring solution

  6. Horelvis March 25, 2019 at 1:34 pm #

    hahaha, I like beer more!, I see that you too!! 😉

    • Adrian Rosebrock March 25, 2019 at 1:41 pm #

      Haha, thanks Horelvis 🙂

  7. Virgil March 25, 2019 at 1:40 pm #

    HI Adrian,
    Love your updates and am pleased to say I have completed a couple of your tutorials, one on facial recognition.
    My thought is, could you add face capture to this application and then use one of the free facial recognition services to add a name to the Hummus thief and not send notifications if it is a non-thief person?
    I know the full AI face rec would not run well (or at all) on a Pi or Pi zero, but using the WiFi and API features of either Google Vision or Amazon Rekognition could we pack a face grab into the project and still fit on the Pi?

    • Adrian Rosebrock March 25, 2019 at 1:44 pm #

      Nice, I’m glad you’re enjoying the tutorials, Virgil! And congrats on completing the face recognition guide.

      Yes, you could combine face recognition with this method and only send a notification if the person is not recognized. I would suggest starting with this Raspberry Pi face recognition tutorial.

      • Virgil March 25, 2019 at 3:28 pm #

        Thanks, will do!!!

  8. Mattia March 25, 2019 at 2:14 pm #

    Hi Adrian
    Great tutorial, two questions
    1)In your tutorials on surveillance systems with RPi you often use online storage services, what do you think about saving videos on a local disk (USB) and periodically delete older files.
    2)Have you ever tried telepot to send messagges /media on telegram?

    • Adrian Rosebrock March 25, 2019 at 4:16 pm #

      1. That’s entirely dependent on your use case and whether or not that makes sense. You could save to local storage if the media didn’t require immediate attention or backups offline (just in case the Pi dies).

      2. Sorry, I have not.

    • Martin N March 25, 2019 at 9:31 pm #

      Telegram has a pretty straightforward API with the ability to send an MP4 from the filesystem. See here: https://martinnoah.com/python-reminders-with-telegram.html

  9. Martin N March 25, 2019 at 9:35 pm #

    Yet another awesome how-to! Thank you Adrian!

    • Adrian Rosebrock March 27, 2019 at 8:46 am #

      Thanks Martin, I’m glad you enjoyed it!

  10. Bruno March 26, 2019 at 2:22 am #

    Great job. Can you show how you install this on the Raspberry Pi?

    • Adrian Rosebrock March 27, 2019 at 8:45 am #

      As in configure your Pi? If so, follow this tutorial.

  11. Chris March 26, 2019 at 9:22 am #

    Hi Adrian,
    perfect tutorial – as usual 🙂
    I wanted to know if there is an easy way to use this technique to create something like a “palm shutter” (see this video here: https://www.youtube.com/watch?v=YuznhHvOyc4)

    Thanks and regards
    Chris

  12. Shreyas March 26, 2019 at 12:34 pm #

    Hi adrian,
    I’m working on a front door security system and I am already using your facial recognition tutorial. Could you please tell me if I can use this code with the facial recognition one.
    As in, if I can send just a picture of the visitor at the door through this method and not if the face is in the dataset.
    Thanks.
    p.s: your tutorials are amazing and really interesting.

    • Adrian Rosebrock March 27, 2019 at 8:35 am #

      Great question, Shreyas. I’ll be covering that exact use case in my upcoming Raspberry Pi + Computer Vision book. Stay tuned!

  13. Jane March 27, 2019 at 4:47 am #

    Hi Adrian,
    Thanks for the tutorial!
    I have a question, is it possible to record the footage in H.265 codec?

  14. Katerina Romanchuk March 27, 2019 at 5:29 am #

    Very cool, thank you!

    • Adrian Rosebrock March 27, 2019 at 8:25 am #

      You are welcome!

  15. Zubair Ahmed March 27, 2019 at 5:30 am #

    As usual, a very well thought out and nicely crafted post. Thanks for writing it!

    There is a slight typo in the ‘I’ve included a full deme of the Raspberry Pi security camera below:’ line: ‘deme’ should be ‘demo’.

    • Adrian Rosebrock March 27, 2019 at 8:24 am #

      Thanks Zubair!

  16. Mohammed RAZZOK March 28, 2019 at 4:48 am #

    I like you man, keep going with what you do, wishing you every success in life that you can imagine.

    • Adrian Rosebrock April 2, 2019 at 6:33 am #

      Thanks Mohammed, I really appreciate that 🙂

  17. Aureo M Zanon March 29, 2019 at 1:21 am #

    Thanks Adrian for the initiative. I have been developing applications with the RPi 3B+ and the Intel Movidius NCS using YOLO. I am a big fan of open source hardware and software. Looking forward to reading your new book.

    • Adrian Rosebrock April 2, 2019 at 6:23 am #

      Thanks so much, Aureo 🙂

  18. Kostas March 31, 2019 at 8:48 am #

    Hello. Very interesting article. Can you please list the Raspberry Pi accessories that I need to build this system?

    • Adrian Rosebrock April 2, 2019 at 6:00 am #

      Hey Kostas — the hardware accessories are included in this tutorial (including links to them).

  19. bhavana April 2, 2019 at 3:23 am #

    Hello sir,
    Thank you for your tutorial.
    Is it possible to capture an image using “TemFile”?

    • Adrian Rosebrock April 2, 2019 at 5:41 am #

      You mean using the “TempFile” class? Yes, absolutely. Just write the individual frame to disk using “cv2.imwrite”.
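      A sketch of the approach Adrian describes: create a named temporary file, then hand its path to `cv2.imwrite`. The helper below defers the OpenCV import (so it loads without OpenCV installed) and accepts an injectable writer; the `.jpg` suffix is an assumption.

      ```python
      import tempfile

      def write_frame_to_temp(frame, imwrite=None):
          """Write a single frame to a named temporary .jpg and return its path."""
          if imwrite is None:
              import cv2  # deferred so this helper imports without OpenCV
              imwrite = cv2.imwrite
          # delete=False keeps the file on disk so the writer can fill it in
          tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
          tmp.close()
          imwrite(tmp.name, frame)
          return tmp.name
      ```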

  20. delta1071 April 14, 2019 at 5:38 am #

    I have the script functioning properly, but the videos are recorded upside down. I’ve tried adding camera.vflip = True to the script right after the imports are done, but it seems that PiCamera() needs to be imported first. When “from picamera import PiCamera” is added to the imports, the script exits with an “Out of resources” error. Apparently something else is already using the camera, which can produce this error. I can’t determine where the initial camera configuration code is so the “vflip” code can be inserted. Any ideas? Thanks.

    • Adrian Rosebrock April 18, 2019 at 7:27 am #

      1. You can use the cv2.flip function to flip your frame as well.
      2. As far as the error goes, it sounds like you’re not closing the script properly and something else is accessing your camera module. I’m not sure what is going on there, unfortunately.
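      A pure-NumPy sketch of the first suggestion: flipping around both axes is equivalent to `cv2.flip(frame, -1)` and performs the 180° rotation that corrects footage from an upside-down camera (flip code 0 would flip vertically only, 1 horizontally only).

      ```python
      import numpy as np

      def rotate_frame_180(frame):
          # reverse both axes, matching cv2.flip(frame, -1);
          # fixes video from a camera mounted upside down
          return np.ascontiguousarray(frame[::-1, ::-1])
      ```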

  21. Craig April 15, 2019 at 11:09 am #

    Great tutorial. Would there be a way to modify the code to pre-record the last 10 seconds of video in a circular buffer then pre-pend these 10 seconds of video to videos detected with motion or objects?

    I know there are security cameras on the market that already do this. It’s nice to have the pre-record feature as it gives you an accurate understanding of what is going on. (Before & After motion is detected)
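    Craig’s pre-record idea can be sketched with a fixed-length deque that always holds the most recent frames; when motion fires, the buffered frames are flushed and prepended to the clip. The ten-second window becomes `fps * 10` frames. This is a minimal illustration of the concept, not a full clip writer.

    ```python
    from collections import deque

    class PreRecordBuffer:
        """Keep the most recent buf_size frames for prepending to an event clip."""

        def __init__(self, buf_size):
            # deque with maxlen silently drops the oldest frame when full
            self.frames = deque(maxlen=buf_size)

        def update(self, frame):
            self.frames.append(frame)

        def flush(self):
            # return the buffered pre-event frames and start fresh
            pre = list(self.frames)
            self.frames.clear()
            return pre
    ```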

    • Adrian Rosebrock April 18, 2019 at 7:11 am #

      You mean something like this?

      • Craig April 30, 2019 at 12:18 am #

        Thank you, this is exactly what I’m looking for. If there were an iOS app to view the history of video clips in the cloud, I would have a solution to replace the ineffective surveillance cameras I’m currently using.

        • Adrian Rosebrock May 1, 2019 at 11:37 am #

          Just in case you’re interested, I’m showing how to build a security camera with the key clip writer inside Raspberry Pi for Computer Vision — that would also help you get a jumpstart on your project.

  22. Shreyas April 26, 2019 at 10:35 am #

    Hi Adrian,
    Can you tell me how to change the fridge-lighting trigger that starts the PiCamera so that recording starts when a face is detected instead?
    As in, when any face is detected it’ll start recording!
    Thanks.

  23. sophia April 26, 2019 at 10:56 am #

    This is an amazing tutorial! I’d greatly appreciate your guidance on a project that I’m interested in working on – People in my residential complex don’t always pick up after their dogs! I’d like to deploy a model on a camera that sends an alert whenever someone does not pick up after their dog. It’d help me greatly to get your advice on the steps involved for me to accomplish this. Thanks.

    • Adrian Rosebrock May 1, 2019 at 12:04 pm #

      That’s actually more of a challenging problem than it seems. You would want to look into activity recognition. Specifically, you need to gather images/video of dogs posturing as they go followed by their owners picking up after them. You can then train a model to detect that activity.

  24. quynhnt May 13, 2019 at 2:12 pm #

    Hi Adrian
    This is an amazing tutorial!
    I have a question: when I added the “cv2.imshow(“Frame”, frame)” command, the video uploaded to AWS was a 0-byte blank video. Can you explain why?

    • Adrian Rosebrock May 15, 2019 at 2:51 pm #

      That’s really odd, I’m not sure why that would have happened.

  25. savita shinde May 14, 2019 at 7:14 am #

    Can we use the Raspberry Pi Camera v2 module instead of a webcam? If yes, are any changes to the program needed?

    • Adrian Rosebrock May 15, 2019 at 2:39 pm #

      Yes. See Lines 39 and 40.

  26. Vikas June 20, 2019 at 11:27 pm #

    What version of the Raspberry Pi did you use for this?

    • Adrian Rosebrock June 26, 2019 at 1:48 pm #

      For this tutorial I used an RPi 3.

  27. izz August 1, 2019 at 3:54 am #

    Hi, can you explain how to send a video file as an email attachment using Python and a Raspberry Pi? Like an alert system.
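    A sketch of the email-alert idea using the standard library’s `email.message.EmailMessage`: build a MIME message with the clip attached as `video/mp4`, then hand it to `smtplib`. The addresses, subject, and SMTP host below are hypothetical placeholders.

    ```python
    from email.message import EmailMessage
    from pathlib import Path

    def build_alert_email(sender, recipient, video_path):
        # sender, recipient, and subject are hypothetical placeholders
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = recipient
        msg["Subject"] = "Security alert: motion detected"
        msg.set_content("A video clip from the security camera is attached.")
        data = Path(video_path).read_bytes()
        # attach the clip with a video/mp4 MIME type
        msg.add_attachment(data, maintype="video", subtype="mp4",
                           filename=Path(video_path).name)
        return msg

    # to send: smtplib.SMTP_SSL("smtp.example.com", 465).send_message(msg)
    ```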

  28. Anna September 18, 2019 at 3:13 am #

    Awesome post, thanks for sharing.

    • Adrian Rosebrock September 18, 2019 at 6:43 am #

      You are welcome!

  29. Iskandar September 25, 2019 at 5:18 am #

    Hey Adrian, I love your project but I have a question for you, plus I can’t find anyone in the comment section who is having the same problem as me.

    1. My problem was that I got

    “OpenCV: FFMPEG: tag 0x00000021/’!???’ is not found (format ‘mp4 / MP4 (MPEG-4 Part 14)’)'”

    Can you give me a solution for this problem? I would love to make this project successful.

    2. On lines 32 – 33, what do you mean by a publicly accessible URL for my S3 bucket? Do I need to copy and paste the URL for my S3 bucket into my browser?

    P.S. I have already made my bucket publicly accessible.

    I look forward to your answer, thank you so much for your time.

    • Adrian Rosebrock September 25, 2019 at 10:30 am #

      1. I’m not sure about that error. I have not encountered that one before.

      2. When you create an S3 bucket there is an option to make a folder/directory publicly available — make sure that directory is publicly accessible, otherwise you will not be able to view your uploads.

    • Arthur J Autz October 2, 2019 at 4:16 pm #

      Hi Iskandar, I had the same error on my RPi 3 with OpenCV 3.
      I changed the 0x021 tag to ‘avc1’ (lowercase) and was able to get it to work.

      Best.

      P.S. Thank you Adrian!
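      Arthur’s fix amounts to giving `cv2.VideoWriter` a proper four-character code instead of the raw 0x21 tag. The byte packing that `cv2.VideoWriter_fourcc(*"avc1")` performs can be sketched in plain Python; the filename, FPS, and frame size in the commented usage line are placeholders.

      ```python
      def fourcc(code):
          """Pack a four-character code the same way cv2.VideoWriter_fourcc does."""
          assert len(code) == 4
          # byte i of the result holds the i-th character
          return sum(ord(ch) << (8 * i) for i, ch in enumerate(code))

      # usage (placeholders):
      # writer = cv2.VideoWriter("output.mp4", fourcc("avc1"), 20.0, (640, 480))
      ```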

      • Adrian Rosebrock October 3, 2019 at 12:18 pm #

        Awesome, thanks so much for sharing this Arthur!

  30. Arthur September 27, 2019 at 5:04 pm #

    Is the Twilio AUTH_ID the same as the secret? Or is the auth ID your login ID?

    • Adrian Rosebrock October 3, 2019 at 12:38 pm #

      Yes, that would be your secret key.

  31. Arthur J Autz October 1, 2019 at 6:18 pm #

    Hi, I have been following your tutorial. I think it’s great; however, I’ve run into an issue.
    I have my AWS access info in my config.json file. I got a notification from AWS that I can’t have that public for security reasons. How do I best set up my GitHub repo for this project then? Is it possible to keep it public?

    Thank you,
    Arthur J.

    • Adrian Rosebrock October 3, 2019 at 12:23 pm #

      You should be able to make a bucket public via AWS’ interface. I would check with AWS support as that sounds like a problem with your account.

  32. Arthur J Autz October 2, 2019 at 5:03 pm #

    Great job, thank you very much.
    Question: in the detect script, for the condition when you send a message that the fridge was open on… for …

    Isn’t there a line missing that sets the notifSend variable from False to True?

    Thank you!

  33. David J Buscher October 3, 2019 at 10:36 am #

    Adrian,
    Excellent presentation. I am involved in a new venture requiring computer vision and LTE-M communications from a mobile platform. Raspberry Pi looks like a great platform to solve our problem. While I am an electrical engineer and a programmer on occasion, I am looking for a consultant who could help us prototype a mobile Raspberry Pi 4 video application, make some decisions on trigger events in a video, and send video and/or still frames to a cell phone or file as you suggest in your article.

    If you have an interest, could you send me a note with your consulting rate, and I can send you an NDA so we can discuss further. Thanks for considering this idea.

    Dave Buscher

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer’s code is a very time-consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.
