OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi

Last week we learned a bit about Python virtual environments and how to access the RPi.GPIO and GPIO Zero libraries along with OpenCV.

Today we are going to build on that knowledge and create an “alarm” that triggers both an LED light to turn on and a buzzer to go off whenever a specific visual action takes place.

To accomplish this, we’ll utilize OpenCV to process frames from a video stream. Then, when a pre-defined event takes place (such as a green ball entering our field of view), we’ll utilize RPi.GPIO/GPIO Zero to trigger the alarm.

Keep reading to find out how it’s done!


OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi

In the remainder of this blog post, we’ll be using OpenCV together with the RPi.GPIO/GPIO Zero libraries. We’ll use OpenCV to process frames from a video stream, and once a specific event happens, we’ll trigger an action on our attached TrafficHAT board.


To get started, we’ll need a bit of hardware, including a:

  • Raspberry Pi: I ended up going with the Raspberry Pi 3, but the Raspberry Pi 2 would also be an excellent choice for this project.
  • TrafficHAT: I purchased the TrafficHAT, a module for the Raspberry Pi from Ryanteck consisting of three LED lights, a buzzer, and a push button. Given that I have very little experience with GPIO programming, this kit was an excellent starting point for me to get some exposure to GPIO. If you’re just getting started as well, be sure to take a look at the TrafficHAT.

You can see the TrafficHAT itself in the image below:

Figure 1: The TrafficHAT module for the Raspberry Pi, which includes 3 LED lights, a buzzer, and push button, all of which are programmable via GPIO.


Notice how the module simply sits on top of the Raspberry Pi — no breakout board, extra cables, or soldering required!

To trigger the alarm, we’ll be writing a Python script to detect this green ball in our video stream:

Figure 2: The green ball we will be detecting in our video stream. If the ball is found, we'll trigger an alarm by buzzing the buzzer and lighting up the green LED light.


If this green ball enters the view of our camera, we’ll sound the alarm by ringing the buzzer and turning on the green LED light.

Below you can see an example of my setup:

Figure 3: My example setup including the Raspberry Pi, TrafficHAT board, USB webcam, and green ball that will be detected.


Coding the alarm

Before we get started, make sure you have read last week’s blog post on accessing RPi.GPIO and GPIO Zero with OpenCV. This post provides crucial information on configuring your development environment, including an explanation of Python virtual environments, why we use them, and how to install RPi.GPIO and GPIO Zero such that they are accessible in the same virtual environment as OpenCV.

Once you’ve given the post a read, you should be prepared for this project. Open up a new file, name it , and let’s get coding:

Lines 2-8 import our required Python packages. We’ll be using the VideoStream class from our “unifying picamera and cv2.VideoCapture into a single class with OpenCV” post. We’ll also be using imutils, a series of convenience functions that make common image processing operations with OpenCV easier. If you don’t already have imutils installed on your system, you can install it using pip:
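The install command looks like this:

```shell
$ pip install imutils
```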

Note: Make sure you install imutils into the same Python virtual environment as the one you’re using for OpenCV and GPIO programming!

Lines 11-14 then handle parsing our command line arguments. We only need a single switch here, --picamera, an integer indicating whether the Raspberry Pi camera module or a USB webcam should be used. If you’re using a USB webcam, you can ignore this switch (the USB webcam will be used by default). Otherwise, if you want to use the picamera module, supply --picamera 1 as a command line argument when you execute the script.

Now, let’s initialize some important variables:

Lines 19 and 20 access our VideoStream and allow the camera sensor to warm up.

We’ll be applying color thresholding to find the green ball in our video stream, so we initialize the lower and upper boundaries of the green ball pixel intensities in the HSV color space (be sure to see the “color thresholding” link for more information on how these values are defined — essentially, we are looking for all pixels that fall within this lower and upper range).
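For reference, the boundaries used in the related ball tracking post look like the values below; treat these as a starting point, not the exact numbers from this script’s listing, and tune them for your own ball and lighting conditions:

```python
# Lower and upper HSV boundaries for the green ball. These values come
# from the related ball tracking post; tune them for your own lighting.
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
```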

Line 28 initializes the TrafficHat class, which we use to interact with the TrafficHAT module (although we could certainly use the RPi.GPIO library here as well). We then initialize a boolean variable (Line 29) used to indicate whether the green LED is “on” (i.e., the green ball is in view of the camera).

Next, we have the main video processing loop of our script:

On Line 32 we start an infinite loop that continuously reads frames from our video stream. For each of these frames, we resize the frame to have a maximum width of 500 pixels (the smaller the frame is, the faster it is to process) and then convert it to the HSV color space.

Line 42 applies color thresholding using the cv2.inRange function. All pixels in the range greenLower <= pixel <= greenUpper are set to white and all pixels outside this range are set to black. This enables us to create a mask representing the foreground (i.e., the green ball, or lack thereof) in the frame. You can read more about color thresholding and ball detection here.
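To make the thresholding concrete, here is a pure NumPy equivalent of what cv2.inRange computes; this is an illustration only, as the script itself calls cv2.inRange directly:

```python
import numpy as np

def in_range(hsv, lower, upper):
    # NumPy equivalent of cv2.inRange: a pixel maps to 255 when every
    # channel satisfies lower <= value <= upper, and to 0 otherwise.
    lower = np.asarray(lower, dtype=hsv.dtype)
    upper = np.asarray(upper, dtype=hsv.dtype)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# A 1x2 "image": one pixel inside the green range, one outside
hsv = np.array([[[40, 100, 100], [10, 10, 10]]], dtype=np.uint8)
mask = in_range(hsv, (29, 86, 6), (64, 255, 255))
```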

Lines 43 and 44 perform a series of erosions and dilations to remove any small “blobs” (i.e., noise in the frame that does not correspond to the green ball) from the image.

From there, we apply contour detection on Lines 48-50, allowing us to find the outline of the ball in the mask.

We are now ready to perform a few checks, and if they pass, raise the alarm:

Line 54 checks that at least one contour was found. If so, we find the largest contour in our mask (Line 58), which we assume is the green ball.

Note: This is a reasonable assumption to make since we presume there will be no other large, green objects in view of our camera.

Once we have the largest contour, we compute the minimum enclosing circle of the region, along with the center (x, y)-coordinates — we’ll be using these values to actually highlight the area of the image that contains the ball.

In order to protect against false-positive detections, Line 64 checks that the radius of the circle is sufficiently large (in this case, the radius must be at least 10 pixels). If this check passes, we draw the circle and centroid on the frame (Lines 66-68).

Finally, if the LED light is not already on, we trigger the alarm by buzzing the buzzer and lighting up the LED (Lines 72-75).
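The on/off behavior on Lines 72-75 (and the turn-off case on Lines 78-80) boils down to a small state machine. Below is a hardware-free sketch of it; the blink arguments mirror GPIO Zero’s Buzzer.blink(on_time, off_time, n, background) signature, but the exact call in the original listing is an assumption:

```python
def update_alarm(th, ball_visible, led_on):
    # Turn the alarm on the first frame the ball appears, and off the
    # first frame it disappears; do nothing on the frames in between.
    # `th` is assumed to expose a TrafficHat-style interface
    # (th.lights.green and th.buzzer); the blink() arguments follow
    # GPIO Zero's Buzzer API (on_time, off_time, n, background).
    if ball_visible and not led_on:
        th.buzzer.blink(0.1, 0.1, 10, background=True)
        th.lights.green.on()
        led_on = True
    elif not ball_visible and led_on:
        th.lights.green.off()
        led_on = False
    return led_on
```

In the actual script, th would be the TrafficHat instance created on Line 28 and led_on the boolean from Line 29.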

We have now reached our last code block:

Lines 78-80 handle the case where no green ball is detected but the LED light is still on, indicating that we should turn the alarm off.

Now that we are done processing our frame, we can display it to our screen on Lines 83 and 84.

If the q key is pressed, we break from the loop and perform a bit of cleanup.

Arming the alarm

To execute our Python script, simply issue the following command:
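The script’s filename was elided earlier in the post, so substitute whatever you named it; with a hypothetical name of pi_alarm.py, the command would be:

```shell
$ python pi_alarm.py
```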

If you want to utilize the Raspberry Pi camera module rather than a USB webcam, then just append the --picamera 1 switch:
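With the same hypothetical filename standing in for the elided one:

```shell
$ python pi_alarm.py --picamera 1
```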

Below you can find a sample image from my output where the green ball is not present:

Figure 4: Since the green ball is not detected, the LED and buzzer remain off.


As the above figure demonstrates, there is no green ball in the image — and thus, the green LED is off.

However, once the green ball enters the view of the camera, OpenCV detects it, thereby allowing us to utilize GPIO programming to light up the green LED:

Figure 5: When the green ball is detected, the LED lights up and the buzzer goes off.


The full output of the script can be seen in the video below:


In this blog post, we learned how to make OpenCV and the RPi.GPIO/GPIO Zero libraries work together to accomplish a particular task. Specifically, we built a simple “alarm system” using a Raspberry Pi 3, a TrafficHAT module, and color thresholding to detect the presence of a green ball.

OpenCV was used to perform the core video processing and detect the ball. Once the ball was detected, we raised an alarm by buzzing the buzzer and lighting up an LED on the TrafficHAT.

Overall, my goal for this blog post was to demonstrate a straightforward computer vision application that blends OpenCV and GPIO libraries together. I hope it serves as a starting point for your future projects!

Next week we’ll learn how to take this example alarm program and make it run as soon as the Raspberry Pi boots up!


See you next week…




24 Responses to OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi

  1. Mike May 9, 2016 at 2:07 pm #

    Thanks as always for taking the time to do all of these tutorials. The GPIO info is the last piece in the puzzle to getting my number plate recognition system to open up a gate. (Using your motion detection tutorial too, to make sure it only runs the heavy number plate reader code when a car is pulling up).

    Looking forward to your next instalment. Perhaps we could delve into some machine learning?

    • Adrian Rosebrock May 9, 2016 at 6:44 pm #

      Very cool, congrats on the progress with your project! I’ll be doing more machine learning tutorials, focusing on deep learning tutorials in the near future as well.

  2. Jon F May 9, 2016 at 2:53 pm #

    How would you go about using opencv for use with a camera that switches to night mode where it uses IR LED’s and the image is black and white? Can we use HSV method for this? I am trying to get this to work with a RTSP stream on an outdoor camera.

    • Adrian Rosebrock May 9, 2016 at 6:43 pm #

      If you’re processing a black and white image, then the HSV color space won’t be much of a help. Instead, you would need to apply the cv2.threshold function to attempt to segment the object of interest based on grayscale pixel intensities. Unfortunately, this simple method isn’t always possible, so you may need to resort to applying template matching or even training a custom object detector based on what you’re looking for in the black and white image.

  3. suzaini May 24, 2016 at 1:34 am #

    Hi Andrian, I’m newbie in this field. Why when i run the program it’s keep telling me no module named

    • Adrian Rosebrock May 25, 2016 at 3:30 pm #

      Make sure you have installed imutils via pip:

      $ pip install imutils

  4. sagar gharte June 8, 2016 at 4:03 am #

    what if i used another buzzer hardware..??

    • Adrian Rosebrock June 9, 2016 at 5:29 pm #

      You can certainly use any other type of GPIO hardware, but the general flow of the program remains the same. You’ll still need to access the hardware via the GPIO library. Exactly which outputs you activate is entirely dependent on what hardware you use.

  5. erick Lopez August 2, 2016 at 2:26 pm #

    Hi ! I´m Erick From Mexico and i dont know why i have an error in the module
    hsv = cv2.cvtColor(frame,cv2.COLORBGR2HSV)

    • Adrian Rosebrock August 2, 2016 at 2:55 pm #

      Hey Erick — it looks like you’re missing the underscore:

      hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)

  6. Reza April 25, 2017 at 1:17 am #

    Good day.
    I checked your blog and found you use Raspberry pi for License plate recognition.
    So, would you please send me more information about it?
    I am CCTV camera distributor in Iran and have plan to produce some APNR camera.
    I need below information:
    1- Do you have image processor software?
    2-Do you have integrated software on Raspberry pi?
    3-I need specification of your product.
    4-How much is price of it?
    5-How can I have one PC of product as sample?

    Best Regards

    • Adrian Rosebrock April 25, 2017 at 11:50 am #

      Hi Reza — I teach students how to build their own custom ANPR systems inside the PyImageSearch Gurus course; however, the code isn’t meant to be licensed out as an off-the-shelf ANPR solution. If you’re interested in learning about ANPR, I would suggest joining the PyImageSearch Gurus course and trying the lessons.

  7. Aiden Ralph June 23, 2017 at 7:43 am #

    Hi Adrian,

    Great posts and examples as usual. I’ve really got the hang of making classifiers and getting OpenCV to do what I want based off your posts.

    I’m struggling to see where in the code you could substitute the traffic hat for something else i.e how would you get the above example to trigger a specific GPIO pin to turn on, when there is a positive match on the ball – as opposed to sending a command to your traffichat?

    Does anyone else know?

    • Adrian Rosebrock June 27, 2017 at 6:44 am #

      Hi Aiden — I cover that exact question in this blog post where I use the raw GPIO code rather than the TrafficHat library.

  8. leonardo November 29, 2017 at 12:34 am #

    Hi Adrian,
    I have a problem , When I run the program, only the message appears:
    “[INFO] waiting for camera to warmup…”

    • Adrian Rosebrock November 30, 2017 at 3:39 pm #

      Does the script automatically exit or report an error?

  9. Aditya Garg January 11, 2018 at 10:06 am #

    i used my python file to execute the programme .my programme is not executing the same on reboot

  10. Marcelo Rovai February 6, 2018 at 5:37 pm #

    Great tutorial!

    I adapted your code to have a LED turned ON when an object is detected by PiCam. Worked fine!
    Here the code:
    I will try to develop a Pan/Tilt Camera tracking. I published a tutorial with the Servos part. Now I will try to incorporate your code into it.
    Thanks a lot!

    • Chamara madushanka December 27, 2018 at 1:40 am #

      I am newbie and this was a great tutorial for me.Thanks for helping like this…
      I have a problem it detects the green ball without any problem.And i used another code to blink led.It also work.But it never turn on when detects the green ball.I used Marcelo Rovai’s code.Thanks for that also.Please give a help

      Thanks again..


  11. Bram Werbrouck August 6, 2018 at 3:24 am #

    When executing the code (in my virtual environment ‘CV’ & command: python –picamera 1)I have the following error:

    ImportError: No module names ‘picamera’

    Can you help me please??

    • Adrian Rosebrock August 7, 2018 at 6:46 am #

      You need to install the “picamera” library on your system:

      $ pip install "picamera[array]"

  12. Sudipta Chatterjee November 28, 2019 at 6:18 am #

    thankyou sir for your beautiful tutorials.
    i want to control GPIO from RGB open cv programming. Useing colour detection i want to control GPIO.
    but every time my programming is not running.
    can you please find out a solution ??


  1. Running a Python + OpenCV script on reboot - PyImageSearch - May 16, 2016

    […] In last week’s post, I demonstrated how to create an “alarm” program that detects this green ball in a video stream: […]
