Accessing the Raspberry Pi Camera with OpenCV and Python

Over the past year the PyImageSearch blog has had a lot of popular blog posts. Using k-means clustering to find the dominant colors in an image was (and still is) hugely popular. One of my personal favorites, building a kick-ass mobile document scanner has been the most popular PyImageSearch article for months. And the first (big) tutorial I ever wrote, Hobbits and Histograms, an article on building a simple image search engine, still gets a lot of hits today.

But by far, the most popular post on the PyImageSearch blog is my tutorial on installing OpenCV and Python on your Raspberry Pi 2 and B+. It’s really, really awesome to see the love you and the PyImageSearch readers have for the Raspberry Pi community — and I plan to continue writing more articles about OpenCV + the Raspberry Pi in the future.

Anyway, after I published the Raspberry Pi + OpenCV installation tutorial, many of the comments asked that I continue on and discuss how to access the Raspberry Pi camera using Python and OpenCV.

In this tutorial we’ll be using picamera, which provides a pure Python interface to the camera module. And best of all, I’ll be showing you how to use picamera to capture images in OpenCV format.

Read on to find out how…

IMPORTANT: We'll be building off my original tutorial on installing OpenCV and Python on your Raspberry Pi. If you do not already have OpenCV + Python configured and installed correctly on your Raspberry Pi, please take the time now to review that tutorial and set up your own Raspberry Pi with Python + OpenCV.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
This example will run on Python 2.7/Python 3.4+ and OpenCV 2.4.X/OpenCV 3.0+.

Step 1: What do I need?

To get started, you’ll need a Raspberry Pi camera board module.

I got my 5MP Raspberry Pi camera board module from Amazon for under $30, with shipping. It’s hard to believe that the camera board module is almost as expensive as the Raspberry Pi itself — but it just goes to show how much hardware has progressed over the past 5 years. I also picked up a camera housing to keep the camera safe, because why not?

Assuming you already have your camera module, you’ll need to install it. Installation is very simple and instead of creating my own tutorial on installing the camera board, I’ll just refer you to the official Raspberry Pi camera installation guide:

Assuming your camera board is properly installed and set up, it should look something like this:

Figure 1: Installing the Raspberry Pi camera board.

Step 2: Enable your camera module.

Now that you have your Raspberry Pi camera module installed, you need to enable it. Open up a terminal and execute the following command:
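$ sudo raspi-config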

This will bring up a screen that looks like this:

Figure 2: Enabling the Raspberry Pi camera module using the raspi-config command.

Use your arrow keys to scroll down to Option 5: Enable camera, hit your enter key to enable the camera, and then arrow down to the Finish button and hit enter again. Lastly, you’ll need to reboot your Raspberry Pi for the configuration to take effect.

Step 3: Test out the camera module.

Before we dive into the code, let’s run a quick sanity check to ensure that our Raspberry Pi camera is working properly.

Note: Trust me, you’ll want to run this sanity check before you start working with the code. It’s always good to ensure that your camera is working prior to diving into OpenCV code; otherwise, you could easily waste time wondering why your code isn’t working correctly when it’s simply the camera module itself that is causing you problems.

Anyway, to run my sanity check I connected my Raspberry Pi to my TV and positioned it such that it was pointing at my couch:

Figure 3: Example setup of my Raspberry Pi 2 and camera.

And from there, I opened up a terminal and executed the following command:
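$ raspistill -o output.jpg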

This command activates your Raspberry Pi camera module, displays a preview of the image, and then after a few seconds snaps a picture and saves it to your current working directory as output.jpg.

Here’s an example of me taking a photo of my TV monitor (so I could document the process for this tutorial) as the Raspberry Pi snaps a photo of me:

Figure 4: Sweet, the Raspberry Pi camera module is working!

And here’s what output.jpg  looks like:

Figure 5: The image captured using the raspistill command.

Clearly my Raspberry Pi camera module is working correctly! Now we can move on to some more exciting stuff.

Step 4: Installing picamera.

So at this point we know that our Raspberry Pi camera is working properly. But how do we interface with the Raspberry Pi camera module using Python?

The answer is the picamera module.

Remember from the previous tutorial how we utilized virtualenv  and virtualenvwrapper  to cleanly install and segment our Python packages from the system Python and packages?

Well, we’re going to do the same thing here.

Before installing picamera , be sure to activate our cv  virtual environment:
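$ source ~/.profile
$ workon cv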

By sourcing our .profile  file, we ensure that we have the paths to our virtual environments setup correctly. And from there we can access our cv  virtual environment.

Note: If you are installing the picamera  module system-wide, you can skip the previous commands. However, if you are following along from the previous tutorial, you’ll want to make sure you are in the cv  virtual environment before continuing to the next command.

And from there, we can install picamera by utilizing pip:
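$ pip install "picamera[array]"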

IMPORTANT: Notice how I specified picamera[array]  and not just picamera .

Why is this so important?

While the standard picamera  module provides methods to interface with the camera, we need the (optional) array  sub-module so that we can utilize OpenCV. Remember, when using Python bindings, OpenCV represents images as NumPy arrays — and the array  sub-module allows us to obtain NumPy arrays from the Raspberry Pi camera module.

Assuming that your install finished without error, you now have the picamera  module (with NumPy array support) installed.

Step 5: Accessing a single image from your Raspberry Pi camera using Python and OpenCV.

Alright, now we can finally start writing some code!

Open up a new file, name it test_image.py , and insert the following code:
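What follows is a minimal sketch of the script, using the picamera and picamera.array interfaces installed above; the spacing is laid out so the line numbers referenced below match:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera sensor to warm up
time.sleep(0.1)

# grab a single image from the camera in raw (uncompressed) BGR format
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.waitKey(0)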

We’ll start by importing our necessary packages on Lines 2-5.

From there, we initialize our PiCamera object on Line 8 and grab a reference to the raw capture component on Line 9. This rawCapture  object is especially useful since it (1) gives us direct access to the camera stream and (2) avoids the expensive compression to JPEG format, which we would then have to decode to OpenCV format anyway. I highly recommend that you use PiRGBArray  whenever you need to access the Raspberry Pi camera — the performance gains are well worth it.

From there, we sleep for a tenth of a second on Line 12 — this allows the camera sensor to warm up.

Next, we grab the actual photo from the rawCapture  object on Line 15, taking special care to ensure our image is in BGR format rather than RGB. OpenCV represents images as NumPy arrays in BGR order rather than RGB — this little nuance is subtle, but very important to remember, as it can lead to some confusing bugs in your code down the line.
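As an aside, if you ever need the image in RGB order (for example, to hand it to a library that expects RGB, such as matplotlib), a quick sketch of the conversion using OpenCV's cv2.cvtColor looks like this:

# convert OpenCV's BGR channel ordering to RGB for RGB-based libraries
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)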

Finally, we display our image to screen on Lines 19 and 20.

To execute this example, open up a terminal, navigate to your test_image.py  file, and issue the following command:
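$ python test_image.py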

If all goes as expected you should have an image displayed on your screen:

Figure 6: Grabbing a single image from the Raspberry Pi camera and displaying it on screen.

Note: I decided to add this section of the blog post after I had finished up the rest of the article, so I did not have my camera setup facing the couch (I was actually playing with some custom home surveillance software I was working on). Sorry for any confusion, but rest assured, everything will work as advertised provided you have followed the instructions in the article!

Step 6: Accessing the video stream of your Raspberry Pi using Python and OpenCV.

Alright, so we’ve learned how to grab a single image from the Raspberry Pi camera. But what about a video stream?

You might guess that we are going to use the cv2.VideoCapture  function here — but I actually recommend against this. Getting cv2.VideoCapture  to play nice with your Raspberry Pi is not a pleasant experience (you’ll need to install extra drivers) and is something you should generally avoid.

And besides, why would we use the cv2.VideoCapture  function when we can easily access the raw video stream using the picamera  module?

Let’s go ahead and take a look at how we can access the video stream. Open up a new file, name it test_video.py , and insert the following code:
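Again, here is a minimal sketch built on the same picamera and picamera.array interfaces (use_video_port=True asks picamera to capture from the faster video port), with spacing laid out so the line numbers referenced below match:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera sensor to warm up
time.sleep(0.1)

# capture frames from the camera as raw BGR NumPy arrays
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image from the
    # frame's .array property
    image = frame.array

    # show the frame and grab any keypress
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break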

This example starts off similarly to the previous one. We start off by importing our necessary packages on Lines 2-5.

And from there we construct our camera  object on Line 8, which allows us to interface with the Raspberry Pi camera. We also set the resolution of the camera (640 x 480 pixels) on Line 9 and the frame rate (i.e., frames per second, or simply FPS) on Line 10. Finally, we initialize our PiRGBArray  object on Line 11, taking care to specify the same resolution as on Line 9.

Accessing the actual video stream is handled on Line 17 by making a call to the capture_continuous  method of our camera  object.

This method returns a frame  from the video stream. The frame then has an array  property, which corresponds to the frame  in NumPy array format — all the hard work is done for us on Lines 17 and 20!

We then take the frame of the video and display it on screen on Lines 23 and 24.

An important line to pay attention to is Line 27: You must clear the current frame before you move on to the next one!

If you fail to clear the frame, your Python script will throw an error — so be sure to pay close attention to this when implementing your own applications!

Finally, if the user presses the q  key, we break from the loop and exit the program.

To execute our script, just open a terminal (making sure you are in the cv  virtual environment, of course) and issue the following command:
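$ python test_video.py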

Below follows an example of me executing the above command:

As you can see, the Raspberry Pi camera’s video stream is being read by OpenCV and then displayed on screen! Furthermore, the Raspberry Pi camera shows no lag when accessing frames at 32 FPS. Granted, we are not doing any processing on the individual frames, but as I’ll show in future blog posts, the Pi 2 can easily keep up with 24-32 FPS even when processing each frame.

So, what now?

Now that you can access the video stream of your Raspberry Pi, I would suggest taking a look at my post on building a custom home surveillance system using motion detection. This motion detection tutorial is one of my favorites on the PyImageSearch blog, and it’s super easy to follow — not to mention, you get to build a really cool, real-world computer vision project!

And if you’re really interested in leveling-up your computer vision skills, you should definitely check out my book, Practical Python and OpenCV + Case Studies. My book not only covers the basics of computer vision and image processing, but also teaches you how to solve real world computer vision problems including face detection in images and video streams, object tracking in video, and handwriting recognition.

All code examples covered in the book are guaranteed to run on the Raspberry Pi 2 as well! Most programs will also run on the B+ model, but might be a bit slow due to the limited computing power of the B+.

Just click here to learn more.

Summary

This article extended our previous tutorial on installing OpenCV and Python on your Raspberry Pi 2 and B+ and covered how to access the Raspberry Pi camera module using Python and OpenCV.

We reviewed two methods to access the camera. The first method allowed us to access a single photo. And the second method allowed us to access the raw video stream from the Raspberry Pi camera module.

In reality, there are many ways to access the Raspberry Pi camera module, as the picamera documentation details. However, the methods detailed in this blog post are used because (1) they are easily compatible with OpenCV and (2) they are quite speedy. There is certainly more than one way to skin this cat, but if you intend on using OpenCV + Python, I would suggest using the code in this article as “boilerplate” for your own applications.

In future blog posts we’ll take these examples and use them to build computer vision systems to detect motion in videos and recognize faces in images.

Be sure to sign up for the PyImageSearch Newsletter to receive updates when new Raspberry Pi and computer vision posts go live. You definitely don’t want to miss them!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

351 Responses to Accessing the Raspberry Pi Camera with OpenCV and Python

  1. Christian April 4, 2015 at 12:52 pm #

    Hello Adrian,
    Thank you for this new demonstration, which works very well!
    I'm waiting impatiently for the next episode, to understand how to detect motion and track a person.
    Thank you very much to share with us your experiment.
    Christian

  2. Fabio G April 4, 2015 at 2:33 pm #

    Awesome Adrian!
    Thanks a lot for this! (and for how cleanly everything is explained).
    I’ll make sure to stay tuned.

    Fabio Guarini

  3. Dieter April 7, 2015 at 8:58 am #

    Hello Adrian,
    Thanks for the documentation. But I have a question about the frame rate. In the example above we have a frame rate of 32, but I only get 10 images per second. Would it be faster to use C/C++ instead?

    Dieter

    • Adrian Rosebrock April 7, 2015 at 1:51 pm #

      Are you using the Raspberry Pi B/B+ or the Pi 2? You can easily get 32 FPS on the Pi 2. Using C/C++ will almost certainly be faster, but I would check to ensure that your camera is working properly. It could be possible that the frames are not being read from the camera fast enough.

      • Razmik January 22, 2016 at 9:26 pm #

        Hello Adrian:

        I thank you for opening the door for many of us beginners to run OpenCV on raspberry.

        I just ran “test_video.py” example above, but I am getting about 10 frames per second instead of 32. I am using raspberry Pi 2.
        Could I be missing something obvious?

        Very best regards,
        Razmik Karabed

        • Adrian Rosebrock January 23, 2016 at 2:10 pm #

          You can try increasing the FPS by using threading. Also, reducing the frame dimensions from 640 x 480 to 320 x 240 will also dramatically increase your frame rate.

      • Aurangzaib November 19, 2016 at 9:51 am #

        Is picamera capable of live video streaming to do face recognition?

        • Adrian Rosebrock November 21, 2016 at 12:38 pm #

          Face recognition algorithms don’t “care” where the video stream is from, as long as you can read the frames from a stream. Inside the PyImageSearch Gurus course I demonstrate how to do facial recognition using the Raspberry Pi picamera module.

      • Andres Acevedo June 1, 2017 at 1:12 pm #

        For Raspberry Pi 3, people should know that “Enable/disable connection to the Raspberry Pi Camera” is the option you want to click on after you have run sudo raspi-config. Then you will have to enable the camera from there. Different than Raspberry Pi 2 =]

  4. Kronos April 8, 2015 at 3:56 am #

    How did you get X11 to work with OpenCV? Are you using the same cv2.imshow("image", image) call? or something special? I can’t seem to get it to work with my windows system.

    • Adrian Rosebrock April 8, 2015 at 6:16 am #

      Are you using X11 forwarding when ssh’ing into your Pi? I’m not sure about windows systems, but the command on Unix systems is ssh -X pi@ipaddr. Your code will not have to change at all and you’ll still be able to use cv2.imshow.

      • Kronos April 8, 2015 at 10:06 am #

        I finally got it to work. It was a combination of things I believe. I’ve come to realize that the RasPi sometimes gets into a bad state and needs to be rebooted. Also, I don’t think that I was capturing the waitkey correctly and the script was ending too soon.

      • Kronos April 8, 2015 at 10:12 am #

        Another question though. I’ve noticed that using OpenCV doesn’t always get me the native resolution of the camera. Do you know how to get this? This is true when using the built-in Surface Pro 2 camera and my current web cam. Both should be 720p, but I’m getting something in the range of 480. Do you know how to get this to the native resolution?

        I’m simply using the cam = cv2.VideoCapture() to obtain the device and (grabbed, image) = cam.read() to grab the image.

        • Adrian Rosebrock April 8, 2015 at 10:36 am #

          When accessing the camera through the Raspberry Pi, I actually prefer to use the picamera module rather than cv2.VideoCapture. It gives you much more flexibility, including obtaining native resolution. Please see the rest of this blog post for more information on manually setting the resolution of the camera.

          • Kronos April 8, 2015 at 11:20 am #

            PiCamera isn’t limited to the native camera? Meaning that I can use that module with any webcam?

          • Adrian Rosebrock April 8, 2015 at 11:39 am #

            It depends. If it’s USB based you might be out of luck. But if your “native” camera can be plugged into the slot in the Pi, then you’re in business (I’m not familiar with the cameras you mentioned previously).

          • Kronos April 8, 2015 at 11:42 am #

            Ok that makes sense. I found this as well. I’ll check it out and see if it works:

            http://stackoverflow.com/a/20120262/447015

  5. Nhat Quang April 8, 2015 at 7:39 am #

    Thank you very much! This blog is very helpful! I’m looking forward to your new blogs!

  6. Dave April 10, 2015 at 3:12 pm #

    Hi Adrian!!
    First of all, THANKS A BUNCH MAN!! I understood almost all the processes of this tutorial and the previous one, and I had no errors.

    But i have a question, I am using a GoPiGo Robot from Dexter industries, and i would like to make the robot follow an object. Using this code, i’ve seen that the capturing of the images goes a little bit slow, so in real time i don’t know if i could get the same faster in order to do the tracking.

    Do you have an idea how it could be faster?

    And, Do you have any post related with the tracking objects?

    Thanks again Adrian!

    • Adrian Rosebrock April 11, 2015 at 8:57 am #

      Hi Dave, I’m not familiar with the GoPiGo, but if the image capturing is slow then you probably want to reduce the FPS and the resolution. I also cover tracking objects inside my book, Practical Python and OpenCV.

      • John Tran January 7, 2016 at 4:26 pm #

        Hello Adrian:

        First, thanks for the wonderful blogs.
        I have a similar problem to Dave’s, but I’ve been using the Pi camera module. I used your method (test_image.py) to capture the image before processing it, and I figured out that the majority of the time, approximately 1.3 sec (for the Pi 2), is spent on taking an image. I used the time() function to calculate the time. Also, when I tried to remove line 12 (time.sleep(0.1)), the quality of the taken image wasn’t good enough for my application. Would you describe in more detail how to reduce the FPS and resolution in order to decrease the time for taking an image?

        Thank you so much!

        • Adrian Rosebrock January 8, 2016 at 6:32 am #

          Hey John, the resolution of the camera is controlled by Lines 9 and 11. Lower the resolution to 320 x 240, and you should see a substantial pickup. Secondly, I’m not sure if you noticed or not, but I just did a series of blog posts on how to increase the FPS of your Pi using Python and OpenCV. Click here to read more.

  7. Max April 11, 2015 at 4:10 am #

    Hello Adrian,

    Amazing tutorial! Everything works great, except that I’m able to get only 2-3 fps with the code above. I am using the model B+. I have recorded videos with >30 fps using raspivid, so I know that the camera module works just fine. What is it that I am missing?

    Thanks,
    Max

    • Adrian Rosebrock April 11, 2015 at 8:53 am #

      Hi Max, that is definitely pretty strange, I’m not sure why you would only be getting 2-3 FPS. Maybe try reducing the image resolution and see if that helps?

  8. v-l April 18, 2015 at 12:41 pm #

    Adrian,

    thank you for these tutorials. i have successfully completed the first tutorial about the installation, and steps 1-4 of this tutorial. The code in step 5 however, returns: “gtk warning ** cannot open display: :0.0”.

    i have the rpi b+ connected to my television with an hdmi cable. have you, or anyone else, seen and fixed this problem before? when i google this error i only get the suggestion to export the display variable (as i did) but that does not seem to work.

    Thank you!

    • Adrian Rosebrock April 18, 2015 at 1:29 pm #

      Hey Vincent, are you running the command from the X GUI? It sounds like you’re simply executing the command from the command line (which you absolutely should do), but you need to have your GUI launched. Run startx to load the X interface, then open up a terminal and execute the script.

      • v-l April 19, 2015 at 3:14 pm #

        Adrian,

        that fixed it. thank you!

        • Adrian Rosebrock April 20, 2015 at 6:54 am #

          No problem!

      • David July 5, 2016 at 4:52 am #

        Hi,
        I understand that it is impossible to run a .py program with the uv4l driver without a GUI?
        I have the same problem of Gtk-warning ….. My script works fine under the X interface.

        Thanks

        • Adrian Rosebrock July 5, 2016 at 1:41 pm #

          How are you accessing your Raspberry Pi? Via SSH? VNC?

  9. Rick April 19, 2015 at 6:09 am #

    Hi Adrian,

    First of all, a wonderful post. I followed your previous post to install OpenCV and Python in Raspbian without any problem. I’ve just gone through this post and tried both test_image.py and test_video.py without ‘real’ problem as well.

    The only issue I have is it seems that I’m not quite getting 32 FPS for my RPi2 at 640×480. I compared the video with raspivid -d and feel that at 640×480, it wasn’t as smooth. On the other hand, if I change the resolution to 480×320, they seems comparable. So, I’m wondering how to get a more precise FPS info for realtime video. Any suggestions?

    Thanks

    • Adrian Rosebrock April 19, 2015 at 7:11 am #

      Are you running any other applications on the Pi while trying capturing the video? You should definitely be able to get 20+ FPS at 640 x 480 without a real issue. Also make sure that the camera is connected properly. It’s rare, but I’ve seen situations where the camera connection is a bit loose and while the Pi can see that the camera is there, the actual frame rate drops.

      • Rick April 19, 2015 at 8:01 am #

        Hi Adrian,

        No, just running X and a terminal. The connection to the camera should be fine as I simply change the resolution to 480×320 and it works. Also running raspivid -d without problem. Anyway, in order to quantify it, I need a way to measure the frame rate. May be record it and see from there.

  10. Desperated_user April 23, 2015 at 10:09 am #

    Hi Adrian,

    i was using exactly your code with the same peripherals; the only difference is that I am using Raspberry Pi and not Raspberry Pi 2.

    As there are more users having problems with the framerate:

    Might it be possible that the normal Raspberry Pi is that much worse when it comes to framerate than the new Raspberry Pi 2?

    I have a program running where I do some blob detection and get maybe 2-3 frames with a resolution of 640×480.

    Also with your code I do not get much more than maybe 5 fps @ 640×480.

    • Adrian Rosebrock April 23, 2015 at 10:30 am #

      Absolutely — the original Raspberry Pi is much, much slower than the Pi 2. I would recommend upgrading to the Pi 2 if you can, it’s definitely worth it.

      • Desperated_user April 23, 2015 at 10:37 am #

        Wow – that was an unexpected quick reply – thanks for that!

        Since I already have the hardware would it make sense to re-write the code to C in order to receive at least 15 frames?
        What would you say?

        • Adrian Rosebrock April 23, 2015 at 11:58 am #

          You’ll likely get some performance gains by dropping down into C, but in reality the previous Pi’s only had one core so there’s only so much performance that you can really gain. If you really want to obtain faster performance, pick up a Pi 2.

  11. poolsidebill April 27, 2015 at 10:46 pm #

    Nice blog entry, as always, Adrian!

    I was also seeing slow and sluggish frame rate when I was forwarding the video back to my PC via SSH. The frame rate was much better when I setup a VNC server on the Pi2, connected to it from the PC, and then started test_video.py on the VNC’s desktop.

    I’m pretty sure I wouldn’t have any issues if I just hooked a monitor to the Pi, but I like the monitor-less configuration better.

    FYI, link for VNC setup on Pi that I used: http://elinux.org/RPi_VNC_Server

    • Adrian Rosebrock May 1, 2015 at 7:03 pm #

      Absolutely, VNC will increase lag dramatically. The Pi 2 is actually running at a pretty high framerate, but the problem arises when you try to stream the results back to your VNC client.

  12. Pedro May 4, 2015 at 1:32 pm #

    Hi,

    I’ve managed to put your example working. I just need now to flip the image (to then convert the image to HSV and track the brightest spot on the image) and I can’t do that with cv.flip() and cv2.flip() functions.

    Can you give me any clue?

    • Adrian Rosebrock May 4, 2015 at 1:47 pm #

      Hey Pedro, what do you mean by “flipping” the image? Do you mean flipping the image horizontally or vertically? Or are you trying to convert directly to the HSV color space? If you want to find the brightest spot in an image, you’ll also want to take a look at this post.

      • Pedro May 5, 2015 at 4:33 am #

        Thanks for the reply Adrian.

        I want to rotate the image 180º.

        About that post, I’ll try that method using the camera to do the processing in live stream.

        • Adrian Rosebrock May 5, 2015 at 5:23 am #

          If you want to rotate the image 180 degrees, you’ll need the cv2.rotate function. I cover the very basic image processing functions in this post as well as in my book, Practical Python and OpenCV.

          • Pedro May 5, 2015 at 6:35 am #

            Thank you very much Adrian.

            I found on that post what I needed! And I’ve also managed to found the way to make it spot a laser!

            Thank you for all the help, that’s all I needed (I hope!)!

          • Adrian Rosebrock May 5, 2015 at 11:14 am #

            Very nice! What was your approach to spot the laser?

          • Pedro May 11, 2015 at 11:32 am #

            As I found in some articles, the laser spot must be the brightest point on the screen.
            But this isn’t so straightforward to apply on the Raspberry Pi camera. I have to be careful with the ambient brightness. If the ambient light is too bright, it detects false positives.

            Now I’m having another problem: processing speed. My program can only process a frame every 150ms. This is much too slow for my application 🙁 I now need to find another, faster solution, and when I find the solution to my problems I’ll post it here!

            Thank you once again!

          • Adrian Rosebrock May 11, 2015 at 12:55 pm #

            Very nice solution Pedro, congrats! And a quick way to obtain faster FPS is to simply downscale the image. Less data to process == faster runtime.

  13. Marcellus May 5, 2015 at 8:11 pm #

    Hello Adrian,

    Thanks for this awesome tutorial!!! Everything went smoothly!!!! Now, I just need to figure out how to read letters and numbers from images, taken by the pi camera….do you have a tutorial on reading text from images???

    Regards,

    M

    • Adrian Rosebrock May 6, 2015 at 7:08 am #

      Hi Marcellus, I actually cover the basics of recognizing digits inside my book, Practical Python and OpenCV + Case Studies. There is a chapter dedicated entirely to recognizing digits that you could absolutely use and modify to your needs.

      • Marcellus May 6, 2015 at 8:25 am #

        Thanks!

  14. Dave May 7, 2015 at 7:01 am #

    Hey Adrian!

    When i do:

    sudo python myfile.py i get this error

    ImportError:No module named cv2

    and when i do python myfile.py it works correctly. But i need to use opencv with sudo, due to i need to communicate with another extension wich requires super user permission. Is there some way i can do that ?

    Thanks again!

    • Adrian Rosebrock May 7, 2015 at 7:12 am #

      Hey Dave, I’ve actually never done this before, but here’s my best guess: If you want to use OpenCV inside a virtual environment for the root user, then you’ll need to switch over to the root user account and repeat steps 7 and 10 for root. I’m not sure if the sudo command will be able to access the root virtual environment once it’s created, you may actually have to switch over to the root account to run the script and ensure it accesses the virtual environment.

      • Dave May 7, 2015 at 11:24 am #

        Sorry about my ignorance Adrian, but what do you mean by “steps 7 and 10 for root”?

        Thanks a bunch!

        • Adrian Rosebrock May 7, 2015 at 11:59 am #

          You’ll need to launch a root shell: $ sudo /bin/bash and then create your virtual environment and sym-link OpenCV as the root user, which should be in the directory /root.

          • Dave May 9, 2015 at 7:17 am #

            Thanks Adrian! I’ll try it.

    • Kobe November 27, 2015 at 8:25 am #

      I had the same issue.
      You can fix it by changing which Python you use,
      like this: sudo /home/pi/.virtualenvs/opencv/bin/python
      and change opencv to your environment name. This did the trick for me.

  15. Andrew May 8, 2015 at 1:14 pm #

    Hello Adrian!

    First of all I’d like to say that you are one of the greatest computer-vision tutors I have seen 🙂

    I have a question related to this article: If I want to capture two consecutive frames from a video stream (at every iteration), what is the correct method? I would like to compute some differences between frames for detecting motion.

    Best regards!

    • Adrian Rosebrock May 8, 2015 at 3:04 pm #

      Thanks for the kind compliment Andrew! 😀

      If you want to compare the difference between two frames, you will need two variables: the previous frame and the current frame. Right after Line 20 I would check to see if the previous frame has been initialized or not. If not, initialize it as the current frame. Otherwise, you’ll have the current and previous frame together and you’ll be able to compute the differences between them.

      I do have some pretty epic plans to cover motion detection with the Raspberry Pi, so definitely stay tuned!

  16. PABLITO May 16, 2015 at 9:42 pm #

    Hi Adrian, I’m new with the Raspberry Pi so I have some questions. After finishing your tutorial I can’t import cv2, but if I work outside of the virtualenv I can import cv2. Do you know what’s happening?
    Please help me.

    • Adrian Rosebrock May 17, 2015 at 7:24 am #

      Hi Pablito, this tutorial actually built on my previous tutorial on installing OpenCV on your Raspberry Pi. Take a look at Step 10 where the OpenCV library is sym-linked into the virtual environment.

      You do not have to use virtual environments if you do not want, it’s just good practice.

  17. Stephen May 22, 2015 at 1:39 am #

    Hello Adrian! Firstly, excellent tutorial. I enjoyed the use of the virtual environment in the first tutorial and this one did a beautiful job of following on. I have got my RaspiPiCam working using the code you have provided and can take still and motion video. However, I am now lost at the next step in the OpenCV testing process.

    When I go to the available OpenCV Samples under the “/opencv-2.4.10/samples/python2” folder and attempt to run them, they do not recognize the RaspiPiCam stream. In particular, they do not like statements like: “try: video_src = video_src[0]” (as found in facedetect.py).

    I believe that there is a method to get OpenCV to directly play video from a RaspiPiCam using Python (as found here: http://raspberrypi.stackexchange.com/questions/17068/using-opencv-with-raspicam-and-python), but I can’t get it to work and was wondering if you had a more direct / elegant solution.

    I am trying to avoid re-writing all of the existing OpenCV samples simply so that they work with my RaspiPiCam instead of an actual USB cam. Thanks!!

    • Adrian Rosebrock May 22, 2015 at 5:14 am #

      Hi Stephen, I definitely appreciate not wanting to rewrite all of the OpenCV examples as that can be quite time consuming and tedious. If you want to work directly with the Raspberry Pi camera module, you can try installing the uv4l drivers. However, they can also be a pain to install. And more importantly, those drivers are not kernel level drivers — they will run as user threads. This means that they will be a bit slow.

      In general, I think you have two options:

      1. Update the OpenCV examples to use the Raspberry Pi code that I have detailed above.

      2. Purchase a USB camera, like the Logitech C210. All you need to do is plug the camera into the Pi and it should be automatically recognized. And from there you won’t have to change any code in the OpenCV examples.

      I know that’s probably not the answer you were hoping for, but there isn’t exactly a clean and elegant solution to this particular problem.

      • Stephen May 22, 2015 at 11:11 pm #

        I thought that this might be the case. Thank you very much for the quick response and your thoughts!

        • Adrian Rosebrock May 23, 2015 at 7:11 am #

          No problem, glad to help! Let me know which route you end up going.

  18. Joe Landau June 1, 2015 at 8:47 pm #

    Step 5, running headless with putty, fails for me with the message “Gtk-WARNING **: cannot open display”, at line 19. I got it to work using the RasPi desktop over VNC, but only after I had rerun my profile in that environment. Enabling X11 forwarding in the putty configuration did not work in my case.

    • Adrian Rosebrock June 2, 2015 at 6:44 am #

      Hi Joe — I’m sorry to hear that X11 forwarding did not work, that’s very strange. I don’t have a Windows system, and thus no access to Putty, so I can’t give it a shot to replicate the error. But whenever I ssh into my Pi with X11 forwarding from my command line, I can tell you that my command looks like this: $ ssh -X pi@my_ip_address

      I hope some fellow Windows users on the blog can help out!

  19. pipe June 3, 2015 at 11:36 am #

    Hi Adrian Rosebrock
    I am working on a project in which I have to scan QR codes and barcodes using Python. Does a tutorial exist for doing that?
    thx
    Best regards

    • Adrian Rosebrock June 3, 2015 at 8:27 pm #

      Hey, thanks for the comment. I don’t have any tutorials related to scanning the actual barcode, but I do have a tutorial on detecting barcodes in images which you may find useful.

  20. Rafi June 9, 2015 at 10:55 pm #

    Hi Adrian,

    I’m having a problem with step 5. I get an error when running ‘python test_image.py’:

    OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /home/pi/opencv-2.4.10/modules/highgui/src/window.cpp, line 501
    Traceback (most recent call last):
    File “test_image.py”, line 19, in
    cv2.imshow(“Image”, image)
    cv2.error: /home/pi/opencv-2.4.10/modules/highgui/src/window.cpp:501: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

    All the installation steps worked, so I don’t think this is a problem with that. I think this is an issue with X11 forwarding. I use this command to log in to the pi: ‘ssh -X pi@piaddr’ I’m using a mac and RPi B+. I tried running ‘startx &’ and ‘/etc/X11/Xsession’ but neither worked. How do you get images to display over your ssh connection? Is there any other setup required?

    Thanks,

    Rafi

    • Adrian Rosebrock June 10, 2015 at 7:05 am #

      Hey Rafi, it looks like you didn’t perform Step 3 of the Raspberry Pi + OpenCV install tutorial. Step 3 involves installing libgtk2.0-dev. Go back to Step 3, install libgtk2.0-dev, and then re-compile and install OpenCV and this should take care of the problem.

  21. glemar June 9, 2015 at 11:06 pm #

    hello guys.. may I ask something: is it possible, when the camera recognizes an object, to then convert it into speech? Because we have a project design related to this post. Our project is for visually impaired persons, entitled “Audio navigator for the blind impaired”. The concept is a device worn by a blind person; we plan to use the Pi cam to detect motion and objects, giving the blind person awareness of what’s happening in their environment. For example, the camera detects a traffic light, automatically recognizes that it is a traffic light, and then the color signs green, orange, and red can tell the blind person when he/she can cross a certain road to avoid accidents. In this case, we hope that this invention would help make their lives easier. Please, anyone who can give ideas.. honestly my knowledge of making this is very limited, so that’s why I decided to approach this site. I am only a student.. please forgive my grammar… thank you.. God bless you all

    • Adrian Rosebrock June 10, 2015 at 7:02 am #

      Converting an image to text (and then to speech) is a pretty challenging project and is still under active research. Both Stanford and Google are currently researching methods for automatic image captioning which captions images via text strings. From there, those text strings can be passed on to speech algorithms. But it’s still an incredibly challenging problem and very much in its infancy.

  22. JBeale July 2, 2015 at 12:16 pm #

    Very interesting; thanks for this RPi-OpenCV tutorial! I have started doing something similar as you can see here: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=114550
    although I am using OpenCV 3.0.0 instead of 2.4 as I gather you are using. As long as you’re compiling the library yourself, it is just as easy to install 3.0 and it was actually faster to compile than the Pi2 timings you listed for your install.

    Doing foreground extraction and blob detection I see only 8 fps at 320×240 resolution on a RPi2, but I have not tried to optimize my algorithms as yet, and that includes reading the h264 input file instead of taking a live feed.

    • Adrian Rosebrock July 2, 2015 at 5:15 pm #

      Very nice, thanks for sharing! I’ll actually have OpenCV 3.0 install instructions for the Raspberry Pi 2 online within the next 2 weeks (pretty excited to get them pushed online). Your project sounds great so far, congrats! And 8 FPS for background subtraction using the MOG methods sounds about right.

  23. Neilesh July 12, 2015 at 5:00 pm #

    Hello Adrian,
    First off, I want to thank you for these awesome tutorials.
    My problem is that when I write test_image.py, and run it I get an error “picamera.exc.PiCameraMMALError: Camera component couldn’t be enabled: Out of resources (other than memory)”. I was wondering if that is an error due to the RPi or with my camera, or with something else?

    • Adrian Rosebrock July 13, 2015 at 6:28 am #

      Hey Neilesh — I have honestly never encountered that error before. That definitely sounds like an error related to your Raspberry Pi or the camera, not OpenCV.

  24. Girish July 14, 2015 at 1:46 am #

    Hi Adrian

    Do you have sample code to access multiple cameras on a single Raspberry Pi? I mean attach two to three cameras to the same Raspberry Pi, read frames from each of them, process them, and save them?

    • Adrian Rosebrock July 14, 2015 at 6:18 am #

      Hey Girish — I don’t have any complete examples, but all you need is the cv2.VideoCapture(0) function where the 0 is the first camera, 1 would be the second camera, 2 would be the third, etc. Just maintain a list of capture objects and you’ll be able to access each of the cameras.

  25. Josh July 27, 2015 at 10:55 am #

    Hello Adrian,

    Step 5, i get

    “Traceback (most recent call last):
    File “”, line 1, in
    import cv2
    ImportError: No module named cv2″

    When running import cv2.

    Any ideas as to what might be happening and how to overcome this? I have completed your “installing openCV and Python on raspberry pi” tutorial. Thanks for your help.

    Josh

    • Adrian Rosebrock July 28, 2015 at 6:43 am #

      It sounds like you are not in the cv virtual environment. Use the workon command to access it before executing any code that accesses OpenCV:

      $ workon cv
      $ python
      >>> import cv2
      ...

  26. gTat August 5, 2015 at 4:12 am #

    Hey Adrian !

    thanks for this tutorial!

    I have an RPi 2 with the camera module installed correctly, I think, and performed all the steps from the OpenCV installation to this tutorial, and everything seems to work fine, but I can’t achieve an fps above 15 at 640×480 (a simple calculation displayed every 2 sec). Nothing else is running at the same time. I access the Pi through VNC; using direct access (HDMI cable, keyboard..) doesn’t seem to provide the desired performance (fps << 32).

    I have tried to remove the imshow thinking that displaying images with opencv would lead to performance drop but I've not noticed a significant gain in performance…
    I don’t think this is about the exposure, but I haven’t tested it in very bright conditions.
    should I deactivate automatic exposure ? ( is it even possible ?)

    I know you have answered several questions like this, but I can’t understand why I can’t achieve the 32 fps as you do 🙁

    do you have any idea ?

    thanks again for your time and tutorials which are excellent !

    • Adrian Rosebrock August 5, 2015 at 6:19 am #

      The first way you can increase FPS is to simply reduce your image size. One way you might be able to boost performance is to take a look at the V4L2 drivers for the Pi. I personally haven’t had much luck with them, but I know others that have. The V4L2 drivers can (theoretically) improve your frame rate and let you use the cv2.VideoCapture function rather than the picamera module which should improve the FPS a little bit.

  27. gTat August 5, 2015 at 4:46 am #

    Hi again,

    if opencv can only work with images at around 15 fps (I don’t know why, but let’s say it does),
    do you think it is possible to have a high frame rate display, with another thread processing the frames and giving back results as an overlay?

    thanks in advance !! 🙂

    • Adrian Rosebrock August 5, 2015 at 6:16 am #

      Absolutely! It’s very common to dedicate one thread to grabbing frames from the camera device and then handing them off to the thread that is doing the actual processing of the images. This ensures that the main thread is not delayed by the polling of the camera.

  28. Pippo August 10, 2015 at 8:53 am #

    Hi,

    Compliments for your article.

    Do you think it is possible to build a head counter with the Raspberry Pi?

    I have a shop and would like to count people coming in and going out from the shop door placing a camera over the door.

    Tks

    • Adrian Rosebrock August 10, 2015 at 10:32 am #

      Hey Pippo, thanks for the comment. And yes, it’s absolutely possible to build a head counter to count people with the Raspberry Pi. I think you might like this particular post on motion detection and tracking to get you started.

  29. Tyrone August 10, 2015 at 7:51 pm #

    So far so good. I have tried other ways to get OpenCV on my Pi using the Pi cam and have wasted some serious time. Dude, you totally hooked it up. Thank you so much. Have you thought of doing an automatic pan and tilt face follow with an Arduino and a couple of servos?

    • Adrian Rosebrock August 11, 2015 at 6:30 am #

      Hey Tyrone, thanks for the awesome suggestion, I’ll definitely look into it!

  30. irfan August 12, 2015 at 2:09 pm #

    hai adrian, thank you for your tutorial

    why, when I use cv2.absdiff with the picamera array as input, do I always get an error? the error says

    “size of input do not match (the operation is neither ‘array op array’ (where arrays have same size and the same numberof channels), nor ‘array op scalar’, nor ‘scalar op array’) in arithm_op….”

    i have the same resolution as the parameter in cv2.absdiff, but why do I have a problem like that? can you help me?

    • Adrian Rosebrock August 13, 2015 at 7:08 am #

      When you take the difference between two images they need to be the same size (in terms of width and height) and in terms of channels. Either the two images you are trying to compare do not have the same width and height and/or one is grayscale and the other is RGB. Make sure all the dimensions match before using the cv2.absdiff function.

  31. Gary Lee August 18, 2015 at 2:14 pm #

    This is failing to run with an import error “No module named cv2”

    Any ideas?

    This is a on a fresh install of Rasperian, Open CV / Python 2.74 (using your instructions), and I am in the virtual environment CV when running the code.

    thanks


    FIXED: Really strange… when I left the machine last night, I had completed all steps of the install CV and Python including the last step which tested it. I noticed earlier today that the .profile change made last night was no longer there (which I fixed) by repeating step 7. I then repeated step 10 to fix this problem of cv2 not being defined. If there is anything I am missing about the virtual environment, and things I need to add elsewhere in Unix, please let me know. Thanks

    • Adrian Rosebrock August 19, 2015 at 6:46 am #

      You’re the second person in the past 72 hours who has mentioned that their changes to the .profile file disappeared (in the other person’s case, it was after a reboot). That is really strange behavior, I’m honestly not sure about that one. If you don’t mind, could you post on the official Raspberry Pi forums and see if they have any suggestions? I would love to know why the updates are being overwritten.

  32. Rajat Saxena August 27, 2015 at 1:24 am #

    Hi, how can i get time for each frame using the cv.GetCaptureProperty() in the code mentioned above for capturing the video stream.

    • Adrian Rosebrock August 27, 2015 at 6:20 am #

      Since we’re using the picamera module, you won’t be able to access any other properties associated with the camera like you would in OpenCV. You can dump an image to file directly with the timestamp included; otherwise, just use the datetime module to grab the current time as the frame is read.

  33. Rafael Varela September 11, 2015 at 5:56 pm #

    Hi Adrian

    Thank you very much! This blog is very good.

  34. Isaac Low September 20, 2015 at 11:26 am #

    Hi, Adrian. Thanks for your guidance over Raspberry PI. But I have a question here. After the Raspberry Pi camera module has been installed, how could I actually open up the terminal as shown in Step 2? Is that necessary to use Linux platform? Thanks for your prompt reply, much appreciated.

    • Adrian Rosebrock September 21, 2015 at 7:06 am #

      There are many ways to open up a terminal using the Pi, but I would suggest going through the official Raspberry Pi documentation for more info on launching a terminal.

  35. Jeffrey Batista September 21, 2015 at 10:33 pm #

    Hi Adrian, before I purchased the course I wanted to know if it included a way that I can always have the camera live without recording, and take a picture when a hand gesture is detected? Is this possible? Thank you so much, I’m starting to become a big fan of the website.

    • Adrian Rosebrock September 22, 2015 at 12:01 am #

      The course itself does not include a method to perform hand gesture recognition, but that is something that I hope to cover in the future. In the meantime, it is covered inside the PyImageSearch Gurus course.

      • Jeffrey Batista September 22, 2015 at 8:24 am #

        I just claimed my spot. So I’m guessing i would have to wait for the next course to start in order for me to learn how to keep the camera feed live all the time ?

        • Adrian Rosebrock September 23, 2015 at 6:45 am #

          You certainly don’t have to join the course if you don’t want to, I was simply saying that I haven’t had a chance to cover hand gesture recognition on the PyImageSearch blog yet, but it will be covered inside the PyImageSearch Gurus course.

          Leaving your webcam feed running all the time is pretty easy. Just SSH into your Pi. Launch screen. Execute your script. Close your screen session. Then log out of your Pi. Your script will run without you having to be attached to the Pi.

  36. Soren September 26, 2015 at 10:11 am #

    Hi Adrian, I have an error when I try to run the code of test_video.py, which is:
    TypeError: 'float' object is not iterable
    Can you help me? And thank you for this post.

    • Adrian Rosebrock September 26, 2015 at 11:08 am #

      What line of code is throwing that error?

  37. slava October 13, 2015 at 1:03 am #

    Hey, Adrian! Thank you for this post, it’s wonderful.
    I got an error while trying to execute test_image.py.

    Can you help me with this problem?
    Thanks

    • Adrian Rosebrock October 13, 2015 at 7:11 am #

      If you’re getting an error related to the image not being defined, then I would go back to the test script and examine whether the output of image = rawCapture.array is valid. It could be that the Raspberry Pi camera itself is not configured properly.

  38. Zahra October 14, 2015 at 8:05 am #

    Hi
    I am new with the Raspberry Pi and I installed OpenCV and Python on my Raspberry Pi. Now I want to take an image with my Pi camera using OpenCV. I followed your steps carefully, but finally when I run $ python test_image.py, I see that the light of the camera is working and it wants to take a picture, but it doesn’t!! And this is the warning error that is displayed for me:
    (Image:2312):Gtk-WARNING**:cannot open display:

    How can I solve this error??? Please help me as soon as possible 🙁

    • Adrian Rosebrock October 14, 2015 at 9:47 am #

      Please see my response to Kronos and Joe Landau above.

  39. Murat Gozu October 25, 2015 at 9:24 am #

    Dear Adrian,
    First of all, thank you very much for your great support on both how to install OpenCV on the RPi and how to use picamera. In the past I tried some other methods on other web sites for installing OpenCV, but I was not successful; with your support I finally did it.
    Anyway, now I am able to run Python OpenCV examples with a USB camera, but after installing the Pi camera module using pip install picamera “array”
    I am having an issue. When I try your example, the Python compiler says there is no array method in picamera. What is wrong? Could you please help, thank you.

    • Adrian Rosebrock October 26, 2015 at 6:16 am #

      I can see two things that might have gone wrong here. First, make sure you are in your virtual environment when installing picamera[array]:
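      $ workon cv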

      Lastly, make sure you have quotes around “picamera[array]” when you install it:

      $ pip install "picamera[array]"

  40. Michael November 6, 2015 at 8:00 pm #

    A+ on a Pi 2. thanks Adrian

    • Adrian Rosebrock November 7, 2015 at 6:18 am #

      Nice, I’m glad it worked for you Michael! 🙂

  41. Fidel LC Ricafranca November 18, 2015 at 4:06 pm #

    Hi Adrian, this is some great work to get started with the picamera. However, I would like to clarify on the use of this particular line (#22) on video capture

    > key = cv2.waitKey(1) & 0xFF

    What’s the importance of this part?

    • Adrian Rosebrock November 18, 2015 at 6:52 pm #

      If you use cv2.imshow without using cv2.waitKey then your window will show up and then disappear immediately. The cv2.waitKey function allows the window to stay open and optionally grabs the key that is pressed.

  42. Sidd Saran November 19, 2015 at 12:25 pm #

    Hi Adrian,

    This is a detailed, well written and nicely explained tutorial. Thank you very much for sharing it.

    The only suggestion I would have is to add something on X11 forwarding. It may even be a link to your favorite blog that goes over how to do it. I had to take a little detour to get this working to see the images and the stream. I used Xming and Putty following the instructions from here: http://laptops.eng.uci.edu/instructional-computing/incoming-students/using-linux/how-to-configure-xming-putty

    I am looking forward to all the possibilities and interesting projects using CV.

    -Sidd

    • Adrian Rosebrock November 20, 2015 at 6:34 am #

      Hey Sidd — Thanks for the note on X11 forwarding. Once you have X11 installed (whether or OSX or Linux), it can be done using a simple command:

      $ ssh -X pi@pi_ip_address

      On OSX, you’ll need to download and install Quartz first.

      The only issue with X11 forwarding is that it can be a bit slow for streaming the results back from a webcam/video device.

      • Harsh December 17, 2016 at 8:49 pm #

        I have a raspberry pi and HDMI monitor but do not have the keyboard. Is it possible for me to log in to Pi using SSH and forward display to HDMI monitor?
        Currently, I am able to forward X11 on my Windows laptop using XMing and Putty, but video streaming is very slow, and I want to use the Raspberry Pi HDMI output on a monitor instead.

        • Adrian Rosebrock December 18, 2016 at 8:39 am #

          Streaming frames over a network will be slower than natively displaying them to an attached monitor. There are ways to speed up the process (i.e., gstreamer) but overall, if you want minimal lag you should be viewing the frames on a monitor attached to the Pi.

  43. Maniac November 20, 2015 at 12:58 pm #

    Just the one a newbie needs. Perfect. Keep it up.
    I enjoyed following the steps and for a change something from the net works as it is described.

    • Adrian Rosebrock November 21, 2015 at 7:28 am #

      I’m glad the tutorial helped! 🙂

  44. Stephan November 20, 2015 at 3:01 pm #

    Hi everybody,
    i installed OpenCV 3.0 and Python 3.2.3 on my RasPi 2 (followed Adrian’s nice tutorial…).
    When I start test_image.py I get an error as soon as I move the mouse around over the Image window:
    GLib-GObject-WARNING**: Attempt to add property GtkSettings::gtk-label-select-on-focus after class was initialised.
    I don’t ssh to my raspi… I use the HDMI output.

    When I google the error it looks like I’m not the only one – but I’m definitely one of the noobs who doesn’t know how to solve it 😉
    Can anybody help me with this?

    @Adrian..really nice work you are doing here. Nice tutorials and blog post. Keep it up!

    Stephan the Kraut

    • Adrian Rosebrock November 21, 2015 at 7:28 am #

      Hey Stephan — I’ve run into that GTK warning myself. I’ve installed OpenCV on hundreds of systems, but it only seems to happen on the Raspberry Pi. I’m honestly not sure what causes it, but it’s clearly from the GTK library. It doesn’t affect OpenCV at all, other than the warning message being displayed to the terminal, which can be a bit annoying.

    • Hayden June 27, 2016 at 10:26 am #

      Had the same exact problem. Did you find a solution? I want to believe it could be a bad connection, but upon trying the test_image.py the ole pi cam is working for sure just not the video stream.

  45. April lee November 22, 2015 at 6:13 pm #

    Hi

    I would like to know if the face recognition method can identify a person by comparing them with another photo taken before?

    Thanks

    • Adrian Rosebrock November 23, 2015 at 6:33 am #

      Absolutely. In fact, that’s how most face recognition algorithms such as Eigenfaces and LBPs for face recognition are trained. Both are covered inside the PyImageSearch Gurus course.

  46. oscar November 29, 2015 at 1:27 am #

    Hello Adrian,
    I’m developing a UAV (rover) , using also your code for item recognition.
    It works fine. THanks for your great job.

    I’d like now to live streaming the results ( captured images + the circle ).

    How can I pass those result to my web server?
    (I’m using tornado)
    BR
    Oscar

    • Adrian Rosebrock November 29, 2015 at 7:07 am #

      Hey Oscar — there are a number of ways to pass the results to your web server. Inside this post I show how to develop a computer vision web API that images can be uploaded to. I use Django for this project (not Tornado), but the principles are still the same.

  47. Dimas Rangga November 29, 2015 at 4:06 am #

    Thanks for the lesson sir, now I can access my camera on raspberry pi. hehe….

  48. whitney November 30, 2015 at 10:05 pm #

    Thank you so much for this! I am using it for my design project; we fried our Pi and had to start over. It worked perfectly the first time, so now we must do it again. But now we are getting a GTK warning, and I have read that I need to install GTK, then recompile and reinstall OpenCV. Thank YOU!!!!

    • Adrian Rosebrock December 1, 2015 at 6:29 am #

      Are you getting an error or a warning related to GTK? If you’re getting an error regarding unable to open the display, then I assume you’re SSH’ing into your Pi. You’ll need to enable X11 forwarding:

      $ ssh -X pi@ip_address

  49. Marcwolf November 30, 2015 at 11:44 pm #

    Many thanks for such a great article. I started to work with the PI v1 some time back but found that it was too slow for my processing needs. However seeing that the camera has been better integrated into Python makes it a lot easier.

    One thing that has always been an issue regarding OpenCV and SimpleCV is blobs. Hopefully the newer OpenCV has made it easier.

    Many thanks
    Marc

    • Adrian Rosebrock December 1, 2015 at 6:28 am #

      Hey Marc, are you referring to the blob detector that comes with OpenCV? If so, I can try to write a blog post on it in the future to help clarify things.

  50. Ragu December 6, 2015 at 12:14 am #

    Hi Adrian, Thanks for the article. Its so well explained. Followed the steps and was able to get the image and video working with Opencv.. Keep it up. 🙂

    • Adrian Rosebrock December 6, 2015 at 7:12 am #

      Thanks Ragu! 🙂

  51. bbz December 11, 2015 at 6:18 am #

    Hi, one week ago there was no error, now it's happening like this:

    pi@raspberrypi ~ $ raspistill -o output.jpg
    mmal: No data received from sensor. Check all connections, including the Sunny one on the camera board

    • Adrian Rosebrock December 11, 2015 at 6:27 am #

      I’m not sure about that one. I would (1) make sure that the camera is enabled via raspi-config (just in case it somehow got turned off) and (2) double and triple check the connections on your board. If you’re still getting an error, you might need to post on the official Raspberry Pi forums.

  52. Adams January 2, 2016 at 10:39 am #

    Thanks for the tutorial, Sir. I kindly wish to know if it is possible to get the camera running automatically when my Raspberry Pi starts: a script file which could source the profile, run workon, and subsequently get the camera working. I will really be grateful if you could help me out.

    2) How could i get dropbox to send me a notification anytime a picture is added into it

    3) what do i have to change if im to upload the images on some server, later to be retrieved via an app?

    • Adrian Rosebrock January 2, 2016 at 3:25 pm #

      To get a Python script automatically running when your Raspberry Pi starts, I would suggest using crontab. You can specify to run a shell script on reboot. Inside this shell script you should put the source, workon, and python commands to run your script.
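
      As a rough sketch (file names and paths below are just placeholders), the shell script might look something like:

      #!/bin/bash
      # launch_camera.sh -- enter the cv virtualenv and run the camera script
      source /home/pi/.profile
      source /home/pi/.virtualenvs/cv/bin/activate   # equivalent of "workon cv" for a non-interactive shell
      python /home/pi/test_video.py

      And the crontab entry (added via crontab -e) would be:

      @reboot /bin/bash /home/pi/launch_camera.sh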

      As for Dropbox sending you a notification, I’m not sure what you mean by that. Total disclaimer: I am not a Dropbox developer and this was the first time I used their API. I would suggest posting on the Dropbox Forum.

      Finally, you should look into using the pysftp package for uploading to a server.

  53. Yvder January 5, 2016 at 8:34 am #

    Thank you for the tutorial, very clear.

    But I’m facing an issue while running the command: python test_image.py (Step 5):

    (Image:1448): Gtk-WARNING **: cannot open display:

    Any help, please?

    • Adrian Rosebrock January 5, 2016 at 1:56 pm #

      Please see my reply to Joe Landau above.

  54. Stephan January 7, 2016 at 3:08 am #

    Can you explain me how i create a QR-Code reader with this?

    • Adrian Rosebrock January 7, 2016 at 6:35 am #

      I don’t cover how to read a QR code on this blog, but I demonstrate how to detect barcodes in video streams. I’ve never personally tried it, but I’ve heard that zbar is a good library for reading barcodes.

  55. Shameel January 11, 2016 at 7:23 am #

    Hi Adrian,
    GOOD WORK MAN!! Worked with me perfect.
    Now I want to change the settings of the camera programmatically. Settings like brightness, saturation, contrast, exposure, etc. Can you help me with this?
    Thanks

    • Adrian Rosebrock January 11, 2016 at 8:03 am #

      I would suggest taking a look at the picamera documentation. The page linked to demonstrates how to adjust brightness. Similar examples can be found throughout the docs.
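
      As a quick sketch of what that looks like (the values below are arbitrary examples):

      from picamera import PiCamera
      camera = PiCamera()
      camera.brightness = 60          # 0-100, default is 50
      camera.contrast = 10            # -100 to 100
      camera.saturation = 0           # -100 to 100
      camera.exposure_mode = "auto"   # see the picamera docs for the full list of modes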

  56. Dean January 26, 2016 at 6:36 pm #

    Hello Adrian,

    First I want to thank you for all your great tutorials, they have been a great resource. Next, I have installed OpenCV and Python on my raspberry pi 2 successfully. I then followed this tutorial and was able to run all the way through both step 5 and step 6 successfully the first time around. Because they worked as planned I set my work aside for about a week. When I returned and tried to run the same scripts in the virtual environment without making any changes I get the same GTK warning as those previous comments. The window opened the first time and now the window will not open. I am not running my pi through ssh but rather the HDMI port. My question is then, are there any updates or additional libraries needed to fix this warning? Your help is greatly appreciated.

    All the best

    • Adrian Rosebrock January 26, 2016 at 7:00 pm #

      Which GTK warning are you getting? An error related to the display being unable to open? Or the one related to gtk-label-select? If it's the latter, you can ignore this warning. I'm not sure why it happens, but it seems to be Raspberry Pi specific and it will not impact your usage of OpenCV. If it's the former, then make sure you have launched the Raspberry Pi desktop and are not trying to execute the script from the command line that the Pi boots into.

      As for the image opening a first time, but not a second, that is very, very strange behavior and not something I have encountered before.

  57. Primoz February 10, 2016 at 7:21 am #

    Hello!

    First of all Adrian Rosebrock thank you very much for a great tutorial.

    I have tried the live display code above and it works fine, but if I increase the resolution to something like 1296×972 the fps drop a lot. There is a big difference if I compare it with the raspivid live display.

    For now I’m not doing any image processing.

    Is there any solution to increase the fps to like 25 (resolution > 640×480) so that the stream will be smooth? I need these for kind of live magnification. And maybe I will have to draw a line on every image so that’s why I would like to use OpenCV and Python instead of raspivid.

    • Adrian Rosebrock February 10, 2016 at 4:34 pm #

      Realistically, if you want to obtain ~25 FPS, your images will need to be smaller than 640 x 480. The larger your resolution becomes, the more data there is, and hence the processing rate will drop. I personally haven’t tried this, but you might want to install the V4L2 drivers so you can access the Raspberry Pi camera module via the cv2.VideoCapture function and see if FPS rates improve.
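
      For reference, a rough sketch of that approach (again, I haven't verified this myself; bcm2835-v4l2 is the driver module commonly reported to work for the Pi camera):

      $ sudo modprobe bcm2835-v4l2

      And then in Python:

      import cv2
      cap = cv2.VideoCapture(0)   # the camera module is exposed as /dev/video0
      ret, frame = cap.read()     # grab a single frame in BGR order
      cap.release()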

      • Primoz February 11, 2016 at 1:36 am #

        Thank you very much for your answer.

        Yesterday I found that I can make an overlay if I use picamera preview. So I can easily draw a line on a live stream and it works great. That’s all I need for now.

        I will take a look at v4l2 driver if I ever need to make some processing on a stream.

        • Adrian Rosebrock February 11, 2016 at 9:23 am #

          Interesting. How do you draw on the picamera preview? I haven’t seen that done before.

  58. Marcus Ward February 26, 2016 at 11:30 am #

    Hey Adrian,

    I have a Raspberry Pi Model B+ and I was successful in setting up the picamera to get an image up to Step 5, but when it's time to get a video stream in Step 6, the Python code runs and I can clearly see that the LED on the camera is on, but I am not seeing the window come up with a video stream of myself. What am I doing wrong? Maybe my FPS shouldn't be the same as the one in your tutorial, but I already tried lowering it and it does the same thing.

    • Adrian Rosebrock February 26, 2016 at 1:50 pm #

      How are you accessing your Pi? Are you ssh’ing into your Pi or using VNC?

  59. Maria March 1, 2016 at 3:45 am #

    I need to detect circles in video with C++ using raspicam and the Hough transform.

    • Adrian Rosebrock March 1, 2016 at 3:39 pm #

      Hey Maria, I actually cover circle detection in this blog post, but I only have Python code, not C++. I hope that helps point you in the right direction at least.

  60. daniel March 6, 2016 at 1:46 pm #

    Hi Adrian
    First of all I want to thank you for this very useful tutorial.
    I want to ask about Step 4, where I'm trying to install picamera.
    How long should I wait for the installation? When the installation reaches "Running setup.py bdist_wheel for numpy . . . -" it stays there for a very long time and nothing progresses any further… (sorry for my bad English) thank you!

    • Adrian Rosebrock March 7, 2016 at 4:13 pm #

      For a Raspberry Pi 2, the installation can take 15-20 minutes. For a model B, it can take anywhere from 45-60 minutes. In either case, you'll likely want to go make a cup of coffee or go for a long walk while NumPy installs 🙂

  61. nico March 9, 2016 at 4:06 am #

    Dear Adrian,

    First of all, thanks for the great tutorial.
    Is it possible to show the frame in fullscreen without any border at the top?
    Or to show the frame at a specified coordinate? It always shows up in the bottom left corner.
    Thank you.

    Nico

    • Adrian Rosebrock March 9, 2016 at 4:41 pm #

      I don’t think it’s possible to show the frame “fullscreen” with OpenCV. The GUI functions included with OpenCV are meant to be barebones and used for debugging and building simple GUI-based projects. For more advanced GUI operations, I suggest using either Tkinter or Qt.

      As for placing the frame in a specified coordinate, yes, you can actually accomplish that using the cv2.moveWindow function:

      cv2.moveWindow("WindowName", x, y)

  62. mima March 10, 2016 at 8:45 am #

    Hi, I need to increase the fps to 60. I'm using the camera board with raspicam
    and my code is in C++.
    I tried using raspiCamCvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 60)
    but it has no effect 🙁

    • Adrian Rosebrock March 10, 2016 at 11:58 am #

      I don’t have any C++ code on this blog, but I would encourage you to read this blog post on increasing the FPS processing rate of your video pipeline.

  63. John Tran March 18, 2016 at 3:10 pm #

    Hello Adrian,

    Could you recommend a way to save the video stream to the working directory in order to play it back later?

    Thanks in advance!

  64. Jean-Pierre Lavoie March 18, 2016 at 3:47 pm #

    Hi Adrian,
    I did your previous tutorial to install OpenCV and did everything here up to Step 5 to try to display the image with the test_image.py script.

    I’m in cv environment and when I type:
    python test_image.py

    I get this error message:
    (Image:31875): Gtk-WARNING **: cannot open display:

    And I obviously don’t see the image. Any idea about my problem?
    Thanks. JP

    • Adrian Rosebrock March 19, 2016 at 9:15 am #

      It sounds like you’re SSH’ing into your Pi. Please see my reply to Kronos above — you need to enable X11 forwarding in your SSH command:

      $ ssh -X pi@your_ip_address

  65. Neal March 30, 2016 at 8:17 am #

    Hi, thank you for opening a new door for me to get to know the Pi. I am a student in China, and I have a question: when I try to run "sudo apt-get install libgtk2.0-dev", it doesn't work. And I am sure I am using the new "sources.list". How can I use the Pi B+ to install the OpenCV environment?
    Please help me figure it out! Many thanks!
    PS: Is it because this software is out of date?
    PS2: I can't buy your book in China! A pity!

    • Adrian Rosebrock March 30, 2016 at 12:44 pm #

      Hey Neal — I'm sorry to hear about the issue with the book in China. Send me an email, perhaps we can figure out a workaround. As for the libgtk2.0-dev issue, what is the error message you are getting?

  66. Kav March 31, 2016 at 4:11 pm #

    Hey Adrian,

    I’m measuring the read rates for capture_continuous and it looks like every three or four frames, it takes significantly longer to generate a frame.

    The read times look like this.

    Frame 1: .02 (Seconds)
    Frame 2: .03 (Seconds)
    Frame 3: .02 (Seconds)
    Frame 4: .11 (Seconds)

    It's happening in a somewhat regular pattern (every third/fourth frame takes 3-4 times as long).

    Any idea what’s going on or is that the expected behavior of capture_continuous.

    Kav

    • Adrian Rosebrock April 1, 2016 at 3:20 pm #

      Very interesting, I can’t say I’ve ever encountered that before (or measured it). You might want to try posting on the picamera GitHub to see if they know anything about it. I would also encourage you to try using threading to facilitate faster frame reads as well.

  67. Cem May 3, 2016 at 1:13 pm #

    Hi Adrian. Thanks for the tutorial. I want to ask how I can do real-time face recognition on the Raspberry Pi building on this tutorial, or do you have another tutorial for this? I will be very happy if you help me, thanks.

    • Adrian Rosebrock May 3, 2016 at 5:44 pm #

      I don’t have any real-time face recognition tutorials publicly available; however, I do cover it in detail inside PyImageSearch Gurus.

  68. John-Paul May 10, 2016 at 1:41 pm #

    Was wondering if it is possible to run 2x Pi Cameras from the same Raspberry Pi?

    I know there is only a single camera connector, but could a second camera be added via the GPIO pins, or could cameras be chained together?

    If a second Pi is required, what is the maximum length of the cable? Does it have to be a ribbon cable or are round cables available?

    Thanks for your time.

    • Adrian Rosebrock May 10, 2016 at 6:18 pm #

      I’ve seen various hacks online that have chained together up to 4 Pi Cameras, but in general, I don’t recommend this. Instead, I would just connect multiple USB cameras. It’s much easier this way 🙂

  69. Murat Gozu May 23, 2016 at 10:07 pm #

    Hi Adrian
    I have been trying to get the Raspberry Pi camera module working since last year.
    What I did is;
    – source ~/.profile
    – workon cv
    – pip install “picamera[array]”

    Finally I wrote the test_image.py code to test, but I have never had any luck running the code.

    It gives me the error
    ImportError: No module named array

    I guess I am the only one who is not able to use the Raspberry Pi camera.

    Any help will be appreciated.

    Thanks
    Murat Gozu

    • Adrian Rosebrock May 25, 2016 at 3:32 pm #

      It definitely sounds like picamera did not install correctly. Try manually typing in the pip install "picamera[array]" command to ensure there are no formatting issues during the copy and paste.

  70. Michael June 1, 2016 at 7:52 pm #

    Hi Adrian,

    Everything has been great and is working up to Step 5. I have copied the code for capturing an image and saved it, etc.

    Running ‘python testimage.py’ from the terminal executes the code correctly.

    However, when I run it from the python shell in IDLE3, i get the error:

    Traceback (most recent call last):
    File “/home/pi/.testimage.py”, line 6, in
    import cv2
    ImportError: No module named ‘cv2’

    I have run the source ~/.profile and workon cv commands in the terminal before opening IDLE3.
    I guess i’m lacking a little understanding about environments.

    Is this behaving correctly?
    If so, what is the advantage to running the program from the terminal, as opposed to from the IDLE3 shell? Because to me, it just feels more natural to press F5 from the actual program.

    Thanks!
    Michael

    • Adrian Rosebrock June 3, 2016 at 3:13 pm #

      I presume you're using the GUI version of IDLE? If so, that's the problem. Unfortunately, IDLE does not respect Python virtual environments like the command line does. I would suggest either using the command line version of IDLE, or, better yet, using something like IPython Notebooks.

  71. Anje June 9, 2016 at 3:04 am #

    Hi Adrian,

    When I run my code I’m getting error as

    from picamera.array import PiRGBArray
    ImportError : No module named picamera.array

    What would be the reason?

    Thanks.
    Angel Jenifer

    • Adrian Rosebrock June 9, 2016 at 5:15 pm #

      Make sure you have installed the picamera[array] library:
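
      $ pip install "picamera[array]"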

      • Antony May 31, 2017 at 9:20 am #

        Hi Adrian,

        I have the same issue above, though definitely installed “picamera[array]”, and now says already satisfied, but still same error when running in virt_env.
        “no module named picamera.array”?
        It looks to me that the issue is that picamera[array] is automatically installing to python3 when I have opencv3 installed for python2.

        I either get no picamera, or no cv2 module? I can't get the 2 to work together on python2. Even running:
        $ sudo pip2 install “picamera[array]” says already satisfied, though I know it is not?

        • Adrian Rosebrock May 31, 2017 at 12:59 pm #

          It sounds like you are not installing the picamera[array] library into your Python 2.7 virtual environment where OpenCV 3 is installed. Can you run pip freeze from your virtual environment and ensure that picamera is listed?

          • Stefan Gabriel June 13, 2017 at 12:13 pm #

            Yeah so i only had to run it with workon cv. Now it works.

          • Adrian Rosebrock June 16, 2017 at 11:38 am #

            Congrats on resolving the issue Stefan!

  72. Davood June 20, 2016 at 5:42 am #

    Thanks a lot for nice guidance and suggestions.

    My question is; in recording video, is it possible to change the “frame rate” and the “frame size”? I mean, for example, first I choose the frame rate =32 and size=(640,480), then after some time I change the frame rate =45 and size (320,240), but all in one program. is it possible?

    If it is possible, then can I store the output video in one file? I mean, some portion of output file has different frame rate and size with other portion. is it possible?

    • Adrian Rosebrock June 20, 2016 at 5:25 pm #

      If I understand your question correctly, you want to store one frame size at a given FPS for a period of time and then later store a different frame size at a different FPS? If so, no, that’s not possible. You can adjust the frame size simply by making your frame larger/smaller to fit the original output dimensions — but you cannot adjust the FPS. You would need to create two separate output files.
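
      As a rough sketch (codec and file names are arbitrary placeholders), that would mean one cv2.VideoWriter per frame size/FPS combination:

      import cv2
      fourcc = cv2.VideoWriter_fourcc(*"MJPG")   # OpenCV 3 name; OpenCV 2.4 uses cv2.cv.CV_FOURCC
      writer_a = cv2.VideoWriter("output_640x480_32fps.avi", fourcc, 32, (640, 480))
      writer_b = cv2.VideoWriter("output_320x240_45fps.avi", fourcc, 45, (320, 240))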

      • Davood June 21, 2016 at 5:43 am #

        thank you.

        You mean that for changing the FPS, I have to make another output file. But I can use just one piece of code. In that code, if I change the FPS, then I store the images in another output file.

        When I change the FPS or frame size, there is no need to reboot the Raspberry Pi and no need to use different code, am I right?

        • Adrian Rosebrock June 23, 2016 at 1:32 pm #

          Correct, you can use the same code. But if you want to change FPS or frame dimensions, you should create a separate output file.

  73. Jim June 21, 2016 at 4:03 pm #

    Hello Adrian, Thank you for the lesson, very helpful.

    I can successfully run the test_image.py script but when I run the test_video.py script I get no error but the image is black. I am using the Raspberry Pi 3 board and the 8 MP Ver2.1 camera.

    • Adrian Rosebrock June 23, 2016 at 1:24 pm #

      I’m not sure what you mean by “image is black” — but the code should still work with the newer v2 camera module.

      • jim June 23, 2016 at 3:45 pm #

        The video image output to the monitor (via HDMI) is solid black. However, if I test the camera using a raspivid command, like “raspivid -t 5000 -o”, the video image is as expected.

        I also tested with the V1 pi camera with same results. I tried a better power supply (2 Amp) with same results. I reduced fps and image size with same results.

        • Adrian Rosebrock June 25, 2016 at 1:38 pm #

          That is very strange. Can you confirm which version of picamera you are using? I’ve heard there are some issues with the latest 1.11 release.

      • jim June 24, 2016 at 4:18 pm #

        I found the fix to my Black Video issue and I hope you will add this simple step to your lesson. After setting up RP3/OpenCV using Install guide: Raspberry Pi 3 + Raspbian Jessie + OpenCV 3, simply reboot the RP before trying to execute any CV related code. Following reboot, the black video image issue goes away. I repeated the complete install twice with the same results, on the third try I rebooted before executing opencv related code and all is well.

        • Adrian Rosebrock June 25, 2016 at 1:43 pm #

          Hey Jim – Congrats on resolving the issue, but I've never encountered it before. It's a good tip for anyone else who runs into it, but to be honest, this sounds like something very specific to your setup.

      • Another-Jim June 26, 2016 at 11:39 am #

        Hi Adrian, thanks for all the lessons.

        I also have the same issue as the other Jim. When I run the test_video.py script a small window appears called Frame, but the window is just black, with no video appearing. I'm running on a Pi 3 using the first gen camera. The LED on the front of the camera turns red as normal. Any ideas? No error messages are output to the terminal window and the script exits when the Q key is pressed.

        I'm using a Raspbian installation as detailed in your lesson for installing the Pi 3 with OpenCV, and I'm in the cv virtual env.

        thanks
        Jim

        • Adrian Rosebrock June 28, 2016 at 11:00 am #

          Hi Jim — please see my comment to “red” below to see the resolution to this problem. Also, I assume you’re using Python 2.7? Or are you using Python 3?

          • Another_Jim June 28, 2016 at 12:55 pm #

            perfect! that fixed it, many thanks Adrian

            Now for the cool stuff

          • Adrian Rosebrock June 29, 2016 at 2:07 pm #

            Nice, glad to hear it 🙂

  74. Eric N June 23, 2016 at 2:04 pm #

    Hey Adrian,

    I’m running a Raspberry Pi 3 B with the new 8 MP PiCamera. I have installed and am successfully running the (cv) environment. I got the document scanner to work with your example images and with images I have previously saved, but would now like to integrate my PiCamera into the scanner and skin detection programs (and others).

    I’ve downloaded the source files, but am running into trouble at “Step 5: Accessing a single image of your Raspberry Pi using Python and OpenCV.” When I run test_example.py, it gets hung up on line 18, camera.capture(rawCapture, format=”bgr”). It gives me the following error: “TypeError: startswith first arg must be bytes or a tuple of bytes, not str”. Please let me know if you’ve got any insight into what could be going wrong or if you need more information.

    Thanks in advance, I really appreciate all of these tutorials.

    • Adrian Rosebrock June 25, 2016 at 1:46 pm #

      Man, I’ve been getting a lot of emails regarding this. Can you please confirm what version of picamera you’re running? I have a bad feeling that it might be the latest 1.11 version which if you look at the GitHub issues, is having a ton of problems. Luckily, I think the fix is simply to downgrade to a previous version. Please let me know which picamera version you’re running so I can confirm this (I’m traveling right now and don’t have physical access to my Pi).

      • Eric N June 27, 2016 at 8:56 am #

        Hi Adrian,

        Thanks so much for the response. I ran pip list while in the (cv) virtual environment, and it appears you were right. My picamera is version (1.11). What is the easiest way for me to downgrade to 1.10?

        • Adrian Rosebrock June 28, 2016 at 10:55 am #

          Please see my reply to “red” above.

      • Felipe L June 30, 2016 at 1:14 am #

        Adrian, thank you for sharing your knowledge. I also had the aforementioned issue and wanted to let you know that it worked fine once I downgraded picamera to 1.10 as suggested.

        Thank you!

        • Adrian Rosebrock June 30, 2016 at 12:18 pm #

          Thanks for letting me know Felipe, I appreciate it!

    • Jamin July 22, 2016 at 1:06 pm #

      It might help to uninstall picamera “sudo apt-get remove ” (python-picamera or python3-picamera [or both]) and then do a pip install… http://picamera.readthedocs.io/en/release-1.11/install.html#alternate-distro-installation

  75. red June 27, 2016 at 9:23 am #

    Away from my Pi unfortunately, but the “error” I get is something like gttk: lense focus was initialized, and my frame is pitch black (no picture) for python test_video.py, and I am in the environment after running source ~/.profile,
    workon cv,
    python.

    • Adrian Rosebrock June 28, 2016 at 10:55 am #

      I have a feeling that you’re using picamera v1.11 and Python 2.7. Try downgrading to picamera v1.10 and this should resolve the blank/black frame issue:
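
      $ pip uninstall picamera
      $ pip install "picamera[array]"==1.10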

      There are some issues with the most recent version of picamera that are causing a bunch of problems for Python 2.7 and Python 3 users.

      • Paul July 24, 2016 at 8:24 pm #

        Same problem here. I found that if I set the format to ‘rgb’, the sample will display the stream, but in the wrong colors. However, I did modify the sample so that a camera.capture occurs on the exit, and this file is saved as RGB with no issues on displaying the correct colors when I do an imshow on the resulting file..

        • Adrian Rosebrock July 27, 2016 at 2:35 pm #

          Hey Paul — thanks for sharing. I’ll write an updated blog post that details some of the common errors in this comment thread.

      • Ben May 31, 2017 at 8:46 pm #

        Downgrading picamera worked for me!

        • Adrian Rosebrock June 4, 2017 at 6:30 am #

          Congrats on resolving the issue, Ben!

  76. Marina July 3, 2016 at 5:39 pm #

    Hello Adrian,
    Thanks for all the tutorials. It’s been really helpful. This tutorial worked great through vnc, and everything run fine. However, when I try to run test_image.py through terminal I get the error, “(Image:2063): Gtk-WARNING **: cannot open display:”. I tried to ssh in using -x like you recommended to a previous commenter, but it didn’t fix the error. Do you know any way to resolve this issue?
    Thanks 🙂

    • Adrian Rosebrock July 5, 2016 at 1:53 pm #

      The “X” should be capitalized, like this:

      $ ssh -X pi@your_ip_address

      From there, try running the test_image.py script.

      • Marina July 11, 2016 at 7:00 pm #

        It still pulls up a Gtk warning. I tried going into ssh_config and changing ssh Forward X11 to yes, but it still doesn’t work. Then I tried using xhost, but it always pulls up an error.

        • Adrian Rosebrock July 12, 2016 at 4:35 pm #

          Hey Marina — I’m sorry to hear about the continued issues with X11 forwarding. But in this case, I’m honestly not sure what the particular problem may be.

          • Marina July 15, 2016 at 4:47 pm #

            Ok, well thanks anyway 🙂

  77. Aris July 4, 2016 at 8:48 pm #

    Hey Adrian

    I would like too first say a huge thanks for the tutorials you offer here and take the same opportunity to drop a quick question.

    I was able to run the image test, but the video test is giving me a black screen. I am not sure if you have any idea of what is happening.

    But if I use raspivid -o video.h264 -t 50000, it streams and records the video with no issue.

    • Adrian Rosebrock July 5, 2016 at 1:44 pm #

      It sounds like you are using picamera=1.11 which is known to have some issues. I would suggest upgrading your picamera installation:

      $ pip install --upgrade "picamera[array]"

  78. Pablo July 5, 2016 at 4:07 pm #

    Hi Adrian, great tutorial!

    I installed OpenCV 3.1 with python 3 on my Raspi3 from your tutorial: http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/

    Everything works, but the video feed is slow and laggy! Even using a resolution of 320×320. Isn’t this strange on a raspi3?

    • Adrian Rosebrock July 5, 2016 at 4:44 pm #

      It’s actually not surprising that the video feed feels laggy. The reason for this is because you’re performing frame polling in the same loop that displays it to your screen — your I/O latency is huge. You can actually increase the FPS of your processing pipeline by utilizing threading.

  79. Bipin July 6, 2016 at 2:42 am #

    Hi Adrian,
    I got these errors when i executed “test_image.py ” code

    how to solve this error ??

    • Adrian Rosebrock July 6, 2016 at 4:13 pm #

      It sounds like you're using picamera=1.11. There are some known issues with picamera=1.11. I would suggest upgrading to the latest release, which will resolve the error:

      $ pip install --upgrade picamera

  80. IgorPot July 8, 2016 at 10:49 am #

    Hello Adrian,

    I apologize for not sticking to the topic entirely (I don’t have OpenCV installed, yet). I captured the images with picamera (1.10 for now) in different formats, but I don’t know (I’m a beginner and am overwhelmed by the volume of Python documentation) how to view YUV, RGB and Bayer data stored in, e. g., ‘image.data’ file. Can it be done with some Python module or do I need to install some application like OpenCV? I also would like to compress the captured image – would you recommend gzip and other Python modules or something else?
    Thank you and I’m sorry if I’m bothering you.

    • Adrian Rosebrock July 9, 2016 at 7:30 am #

      Without knowing what exactly you’re trying to do with the different color spaces, I’m not sure what your end goal is. If you would like to capture a frame from a video stream, convert them to a bunch of different color spaces, and then write them to file, then yes, using OpenCV would be better for this. You can specify which color space you would like a frame returned as in the picamera module — but you’ll only get one color space. For all the others, you should use OpenCV to perform the color space conversion for you. After your script runs, it’s likely easy enough to create a tar or gzip file from the terminal.

      • IgorPot July 10, 2016 at 4:58 am #

        For now I captured a still image with raspistill and with the use of picamera library. In the latter case I captured in YUV, RGB, and Bayer data, each in the files, e. g. image.data. (I followed Advanced recipes from picamera 1.10 manual.) Yesterday I installed Gimp and opened them but I’m not sure if that was OK (I pasted one on https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=153414&p=1007734#p1007734).

        My goal (it's for my diploma) is to compress the captured image with different tools (I don't know which yet; probably zlib, gzip, jpeg, png, maybe even some isolated classical method), lossy and lossless – in Python and in Matlab.

        I would compare the results (by file size and visually) so I guess I need some unencoded, uncompressed image to begin with – that’s why I wanted YUV and/or RGB and raw Bayer. If that is how it’s done in practice (I’m a beginner …).

        • Adrian Rosebrock July 11, 2016 at 10:16 am #

          If your goal is to compare various compression algorithms, then you'll want to store the raw bitmaps of the images to start. Bitmap images perform no compression and are quite large since they store the full RGB values for every pixel location. I would start with bitmap files and then explore various compression algorithms.

  81. m.h.najafi July 21, 2016 at 2:26 am #

    Hi Adrian,
    Please Help me
    I ran “test_video.py” but I get a black window named “Frame”.
    If I delete “use_video_port=True” from the program, the window shows the camera pictures but it is very slow.
    What is wrong with my program?
    thanks

    • Adrian Rosebrock July 21, 2016 at 12:42 pm #

      It sounds like your Raspberry Pi camera drivers may be out of date. Try running:

      $ sudo rpi-update

      This will update your Pi's firmware; restart the Pi and then give it another try.

  82. Samuel Landin July 26, 2016 at 7:51 pm #

    Hello! First of all, thank you very much for your blog; it is super interesting and I've learned a lot in the last months about Python + OpenCV (I'd been using OpenCV in C++ during college). I have a problem: when I finish the program, the kernel dies and the camera does not shut down. Is there something (like freeing memory) I'm missing? I'm working in Spyder. Hope you can answer my question. Greetings from Mexico!

    • Adrian Rosebrock July 27, 2016 at 1:58 pm #

      I personally haven’t used Spyder — can you confirm the existence of this bug when executing via the terminal as well? Either way, it sounds like a threading error. I would suggest inserting print statements into the code to determine exactly where the threading error is happening.

  83. Nils H August 2, 2016 at 5:15 am #

    Hi! First off – thanks for some amazing tutorials!
    I'm a total rookie with the RPi and programming, but I've got this far now. When I try to use the output and test the camera, everything works fine. But when I run your test_image.py I get an image that is very under-exposed and almost dark. And when I run test_video.py, the frame is completely black. Have I done something wrong, or am I missing something?

    It's worth mentioning – I struggled quite a bit with the OpenCV installation, and although the installation of 3.0 went smoothly, perhaps some remains of the first try are corrupting the process?

    • Adrian Rosebrock August 2, 2016 at 3:03 pm #

      If your frame is completely black, it sounds like you might be using outdated Raspberry Pi drivers. I would suggest running rpi-update and updating the Raspberry Pi firmware.

  84. tom zhu August 10, 2016 at 10:54 pm #

    Hi Adrian,
    Thanks for the teaching.
    I followed every step of “Install OpenCV 3 on a Raspberry Pi 3 running Raspbian Jessie” successfully, and then followed every step in this post successfully, until I ran $ pip install “picamera[array]” and the computer said: picamera 1.12 does not provide the extra ‘array’.
    What is the problem, and how do I fix it?

    • Adrian Rosebrock August 11, 2016 at 10:39 am #

      Hey Tom — I’m not particularly sure about that error message. I just ran:

      $ pip install "picamera[array]"

      And it worked just fine. Just to make sure, type your commands into the command line instead of copying and pasting, just in case that is causing any problems.

  85. tom August 11, 2016 at 10:31 pm #

    Hi Adrian,

    Thanks for the quick reply.

    No error message is seen this time when I run $ pip install “picamera[array]” .
    But then I have following error message when I run $ python test_image.py:
    (Image:2229): Gtk-WARNING **: cannot open display:

    And I do not see any image on the TV on which my Raspberry Pi 3 Model B is connected, nor on the computer terminal which connected to the Raspberry Pi through SSH.

    I did all those steps through an SSH connection. I see that you already mentioned above to use $ ssh -X to overcome this problem. But I am using Windows, and I do not know how to get the same setting on Windows.

    I was trying to avoid the problem with SSH, therefore I tried to run $ python test_image.py locally (i.e. I opened a terminal on the TV with the keyboard and mouse that are on the Raspberry Pi 3 Model B). I first tried to run $ source ~/.profile, but I got the following error message:
    Bash: ~/.profile: No such file or directory

    Is this normal? I can run what you teach only through SSH, not locally?

    • Adrian Rosebrock August 12, 2016 at 10:49 am #

      You can still execute the code locally, that’s not a problem. As for your particular error message, it seems like your ~/.profile file is not being found. Are you sure it exists? And are you executing the code as any other user than the pi user?

      Finally, regarding the -X switch for X11 forwarding, your Windows SSH client should have an option for X11 forwarding — be sure to search through the options for it.

  86. tom August 13, 2016 at 9:40 pm #

    Hi Adrian,

    I see the image on my PC terminal now, after having installed the Xming on the PC and selected the proper setting for x11 on the putty on the PC. But only a portion of the image is displayed.

    I have run the test_video.py. It works fine. I got the image, though the frame rate was likely lower than 30.

    I still cannot run $ source ~/.profile locally. How can I tell what user I am when I am executing the code?

    I did not do anything extra locally. It was just whatever the state or user was when the Raspberry Pi was turned on. And I did not do anything extra remotely either. It was just whatever the state or user was when I logged in through SSH.

    • Adrian Rosebrock August 14, 2016 at 9:21 am #

      If you’re running the test_video.py script over the network, then yes, the frame rate will appear quite low due to I/O latency. This is because it takes considerable time to transmit each individual frame over the network from your Pi and to your machine.

      Finally, keep in mind that source ~/.profile should only be executed on your Pi (whether via a standard terminal or SSH). You do not need to execute it on your local machine.

  87. milad August 20, 2016 at 2:25 am #

    When I write pip install “picamera[array]”
    it outputs: “requirement already satisfied (use --upgrade to upgrade): picamera[array] in/us”

    • Adrian Rosebrock August 22, 2016 at 1:33 pm #

      It sounds like you already have the picamera[array] module installed on your system. To upgrade to the latest version you can use:

      $ pip install --upgrade "picamera[array]"

  88. vivek August 25, 2016 at 2:11 am #

    How do I capture an image manually by pressing a key?
    And for that, how do we get the camera stream on screen?

    • Adrian Rosebrock August 25, 2016 at 7:14 am #

      Your video stream should be automatically displayed to your screen using the cv2.imshow function as detailed in this image. You can then save any frame using cv2.imwrite.
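
      A minimal sketch of that idea, assuming the variable names from the test_video.py example and using the 's' key to save:

      # inside the for loop over camera.capture_continuous(...)
      image = frame.array
      cv2.imshow("Frame", image)
      key = cv2.waitKey(1) & 0xFF
      if key == ord("s"):
          cv2.imwrite("snapshot.jpg", image)   # write the current frame to disk
      rawCapture.truncate(0)                   # clear the stream for the next frame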

  89. Rossi August 25, 2016 at 4:22 pm #

    I’d like to thank you so much for your help. I’m using your documents to make my first steps with Raspberry Pi. I’m from Brazil, and it’s complicate to find out documents in portuguese so rich of details like yours. Thank you again.

    • Adrian Rosebrock August 30, 2016 at 12:47 pm #

      No problem Rossi, I’m happy I could have been of help.

  90. JinYoung Hwang August 29, 2016 at 11:34 pm #

    Hi
    I want to know how to have multiple accesses on one Raspberry Pi; I mean C++ and a shell script accessing VideoCapture(0) at the same time.
    In the C++ program, I use People Counting. Link: https://www.youtube.com/watch?v=0Q31goWSDgA
    In the shell, I also use web streaming.
    Can one Pi cam handle multiple accesses?
    Thank you…..

  91. Umesh August 30, 2016 at 1:45 pm #

    Hi. I have a problem. My camera works fine for image capturing, but when I run the video capturing code it won't display anything; only a black screen appears. What's the reason for this??? No error shows up though.

    • Adrian Rosebrock August 31, 2016 at 1:43 pm #

      Hey Umesh — take a look at my latest blog post where I demonstrate how to resolve this issue.

  92. Eduard October 6, 2016 at 2:17 pm #

    Hi Adrian,
    Thank you for this and other tutorials that helped me a lot.
    I run into this Gtk-WARNING **: cannot open display:
    But my situation is slightly different, and so the solutions from the comments above do not work for me.
    The first thing is that I'm using the Raspberry Pi Compute Module, and it does not have enough space to install full Raspbian, so I have Raspbian Lite: just the command line, no desktop, and I can't install xorg, as the device does not have enough space for both OpenCV and xorg.
    Without xorg I cannot forward X11 through SSH.
    Do you know what I can do to fix it? It has to be possible, as raspistill is able to display the image on the screen without a problem; it's only with imshow() that I have this problem.

    • Adrian Rosebrock October 7, 2016 at 7:32 am #

      Hmm, this is an interesting question. If you can’t install the X window manager can you see if there is any other (smaller, lighter) window manager you can install? I honestly haven’t tried this without a window manager so I don’t know the solution off the top of my head.

      • Eduard October 12, 2016 at 8:02 am #

        Ok, thank you.
        I thought that maybe the lighter solution would be to install something lighter than raspbian-lite. I'm trying minibian-wifi and using a nodejs server to output the video. I'll write if I'm successful with it and upload the code to GitHub. For now I'm fighting with the wifi, which for some reason is getting disconnected from time to time.
        I will try using node-python; I'm not sure if there is a better way to do it.

  93. Ekhwan October 7, 2016 at 1:20 am #

    Hey Adrian, thanks for another great post. As I followed along with your post I got test_image.py running perfectly, like magic. However, when I wrote (and then copy-pasted) the code for test_video.py it doesn't work. A window comes up, where I reckon the video would be streamed, but it's just a black window. NO video streaming. If you read this comment then please do walk me through this problem.

  94. Jasper October 12, 2016 at 2:00 pm #

    Hi Adrian! I meet some problems. I am connecting to the raspberry pi through SSH, using Putty on the windows and I use a VNC viewer. It all works nice until the step of “python test_image.py”, and it says that (Image:1554) Gtk-WARNING **: cannot open display. I have read the comments of Joe Landau but I still can not understand how to fix it. If I run “python test_image.py” on the SSH(Putty), the above error message occurs. If I run “python test_image.py” on the LXTerminal in the VNC viewer, it says that “no module named cv2”. And I try to type in the command “workon cv” in the LXTerminal in the VNC viewer, it says that workon:command not found.
    Do you have any clue?
    Thank you!

    • Adrian Rosebrock October 13, 2016 at 9:14 am #

      The GTK warning can be resolved by X11 forwarding. I’m not a Windows user and I don’t use Putty so I’m not sure how to enable X11 forwarding from Putty, but it is 100% possible — you’ll just need to do a little research from there.

      As for the LXTerminal from VNC try using the following commands:
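
      $ source ~/.profile
      $ workon cv
      $ python test_image.py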

  95. shankh October 22, 2016 at 1:19 pm #

    Hi Adrian. I'm doing a project on Raspberry Pi based face recognition for blind persons. I need to connect two cameras to the Raspberry Pi 3, but the problem is how to do the video stitching. And I also need a wearable device to notify the person about the video, in audio format. Any help on how to connect the wearable device to the Raspberry Pi?

  96. Omer Bulut October 27, 2016 at 3:55 pm #

    Hi Adrian;
    Thanks a lot for your documentation and demonstration. I have a little problem here. I used the same version of Raspbian with a Raspberry Pi 3. The image worked well, but I didn't get video; I got only a small window which was all black. Could you please help me with that? Thank you very much!

    • Adrian Rosebrock November 1, 2016 at 9:24 am #

      That sounds like an issue with your firmware version. This blog post details how to resolve the issue.

  97. David Bracke November 27, 2016 at 2:34 pm #

    Hello Adrian,

    Thanks so much! Your instructions (here and others I’ve read through) have been extremely insightful.

    However, in this instance, I’ve run into a difficulty with Step 5. I’ve got my virtual environment up and going, but I keep getting an “ImportError: No module named ‘picamera’ ” message. I’ve double-checked that picamera and picamera.array are installed in the virtual environment:

    pip-3.2 install picamera

    Attempting to upgrade is either a requirement already up-to-date (picamera), or has a major error and cancels itself (picamera[array]).

    So, then, picamera is installed in my virtual environment, if I understand correctly, but I keep getting the ImportError message whenever I run the test_video code.

    What’s going on? (And is there anything else you need to know?)

    • Adrian Rosebrock November 28, 2016 at 10:22 am #

      Hey David, I had to edit your comment to remove some of the output from the terminal — the output was destroying the formatting of the terminal.

      That said, it sounds like you're explicitly using pip-3.2 to install the picamera module, which is incorrect. Once you enter your virtual environment you only need to use pip; the virtual environment automatically determines the correct version of pip to use.

      For example, here is the correct way to install picamera into your virtual environment:
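
      $ workon cv
      $ pip install "picamera[array]"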

  98. Randy December 2, 2016 at 11:49 am #

    Hi, the single image works, but on the video stream I get only a black screen (without an error message). This is the case for 640×480 and 320×240 at 29-32 fps. System: RasPi 3, OpenCV 3.1.

    UPDATE: problem solved with RPI update:

    sudo rpi-update

    fantastic 🙂

  99. WANG Chen December 13, 2016 at 9:30 am #

    Sorry, there is one point blocking me -_-|| How do I create a new file and insert the code when I use Putty to connect to my Raspberry Pi?

    • Adrian Rosebrock December 14, 2016 at 8:30 am #

      You should use a terminal-based text editor. The easiest terminal text editor to use for beginners is nano:

      $ nano /path/to/your/file

  100. Vivek January 7, 2017 at 3:01 pm #

    Hi Adrian

    Could you please do a tutorial on how to stream using netcat, or gstreamer..? I am trying to make a pan and tilt object tracking device on raspberry pi using python and opencv. My confusion is how to use python bindings of gstreamer within the code so that the video can be monitored on a pc.

    Thank you..!

    • Adrian Rosebrock January 9, 2017 at 9:16 am #

      I will certainly consider doing a tutorial on gstreamer. It’s been on my “idea list” for awhile, but I’ve been focusing on neural network topics lately. I’ll try to get back to doing a tutorial on gstreamer, but I’m not sure when that might be.

  101. Leigh January 18, 2017 at 3:32 am #

    Hi Adrian, I appreciate your tutorials and will be buying your book as soon as my budget is opened for the new year. In the meantime, I am wanting to use the Raspi camera with your ball tracking tutorial. Can you tell me what I need to do to use the pi camera instead of a usb cam?

    • Adrian Rosebrock January 18, 2017 at 7:07 am #

      Hey Leigh — I would use this blog post as your starting point. Use the template I have provided and ensure you can read frames from the Pi camera video stream. From there, any code within the while loop of the ball tracking example needs to be included inside the for loop of this example. Keep in mind that you’ll have to replace .read with the appropriate picamera module calls.
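
      As a rough sketch of that structure (the ball tracking logic itself is only a placeholder comment here):

      from picamera.array import PiRGBArray
      from picamera import PiCamera
      import time
      import cv2

      camera = PiCamera()
      camera.resolution = (640, 480)
      camera.framerate = 32
      rawCapture = PiRGBArray(camera, size=(640, 480))
      time.sleep(0.1)   # allow the camera to warm up

      for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
          image = frame.array
          # --> the body of the ball tracking while loop goes here, operating on `image`
          cv2.imshow("Frame", image)
          key = cv2.waitKey(1) & 0xFF
          rawCapture.truncate(0)   # clear the stream in preparation for the next frame
          if key == ord("q"):
              break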

  102. Dhanika January 20, 2017 at 10:44 am #

    Hello Adrian,

    I have started doing a project with raspberry pi and image processing and the robot is supposed to track a ball. What camera module will be better to use for speedy applications? Is that RPi camera? Or a normal webcam?

    Awaiting quick reply 🙂

    • Adrian Rosebrock January 20, 2017 at 10:51 am #

      Either one will work for simple ball tracking. If you go with a USB webcam I would recommend the Logitech C920.

  103. Novice216 January 21, 2017 at 2:27 am #

    Hi Adrain, the blog was very helpful in setting up raspberry pi. Also the tips and tricks were very useful. Thanks a lot.

    • Adrian Rosebrock January 22, 2017 at 10:19 am #

      Fantastic, I’m happy to hear it! Congrats on getting OpenCV installed on your system.

  104. Adam January 21, 2017 at 7:05 pm #

    Another great tutorial! Thank you!

    • Adrian Rosebrock January 22, 2017 at 10:13 am #

      Thanks Adam, I’m happy you enjoyed it!

  105. seif El-Din January 23, 2017 at 9:48 am #

    Hello Adrian,

    Firstly, thanks for this tutorial, but I have a question

    I am a beginner in raspberry pi, and I want to make shapes recognition like cube or cylinder shape, so what is the steps to do it ?

    thank you 🙂

    • Adrian Rosebrock January 24, 2017 at 2:28 pm #

      I would suggest starting with this tutorial on shape detection.

  106. Jen February 2, 2017 at 10:21 am #

    Hello, this is an awesome post. This will be my first Pi project. What Pi do you use? What about the housing it is in? Lastly, what is cv, and how do you get it? Thanks!

    • Adrian Rosebrock February 3, 2017 at 11:11 am #

      I would recommend the Raspberry Pi 3 for computer vision. The Pi 2 is also suitable but since the Pi 3 is a little faster, I would use that. Regarding the “cv” virtual environment, you create it when following my OpenCV install tutorials. You could also purchase a copy of my Practical Python and OpenCV book and get a pre-configured Raspbian .img file with OpenCV pre-installed.

  107. Solveig February 16, 2017 at 6:53 am #

    Hello Adrian,
    I am writing my Bachelor's thesis now. I have a robot with a Raspberry Pi and an Android application. My job is to shorten the transfer time between the Raspberry Pi and Android when the Raspberry Pi's camera records a video.
    Previously, this process took about 2 seconds.
    I put your “test_video.py” code onto the Raspberry Pi, but when I then connect the robot to the Android application, “MediaPlayer error (1, -2147483648)” comes out. So my question is this: what should I do about it?

    The code was written previously, but not by me. I don't know where the problem is.

    • Adrian Rosebrock February 16, 2017 at 9:46 am #

      Hey Solveig — what method are you currently using to send frames from the Pi to your Android app? I would suggest using a message passing library such as PyZMQ or RabbitMQ.

  108. Milán Vincze February 19, 2017 at 10:47 am #

    Hello Adrian,

    I followed your tutorials, but I ran into a problem: the frame is fully black when I start test_video.py. I bought a Pi camera module V2, which is 8MP. I installed OpenCV 3.0 and Python 2.7. Could you recommend a solution?

    thank you

    • Adrian Rosebrock February 20, 2017 at 7:43 am #

      It sounds like you need to upgrade the firmware on your Raspberry Pi. See this post for more information.

      • Milán Vincze February 26, 2017 at 4:05 pm #

        Thank you!

        I updated the firmware and it worked.

        • Adrian Rosebrock February 27, 2017 at 11:10 am #

          Fantastic, I’m happy to hear that was the solution 🙂

  109. Satish Y C February 21, 2017 at 4:16 am #

    Hi Adrian,
    I'm trying to capture an image frame of a vehicle number plate using my camera module. Can you help me out with the OpenCV code?

  110. Ken W March 3, 2017 at 10:19 am #

    Another black image here, but I had just “upgraded” picamera to 1.13 from 1.10. After running “pip install picamera==1.10” to roll back to 1.10, the program works fine!

    • Adrian Rosebrock March 4, 2017 at 9:37 am #

      Thanks for sharing Ken — and congrats on resolving the issue! For what it's worth, I also discuss common Raspberry Pi camera module errors in this post.

  111. Giridharan Ravichandhran March 6, 2017 at 12:02 am #

    Dear Mr. Adrian, I am currently using a Raspberry Pi 3 with Jessie installed for my project, and I am trying to access the Raspberry Pi camera with OpenCV and Python. After completing all five steps, including creating the file test_image.py, when I run the $ python test_image.py command the following error occurs:

    /home/pi/.virtualenvs/cv/local/lib/python3.4/site-packages/picamera/encoders.py:545: PiCameraResolutionRounded: frame size rounded up from 1366×768 to 1376×768
    width, height, fwidth, fheight)))

    (Image:1235): Gtk-WARNING **: cannot open display:
    Please help me with this at the earliest.
    thanks

    • Adrian Rosebrock March 6, 2017 at 3:40 pm #

      It sounds like you might be SSH’ing into your Pi, in which case you should enable X11 forwarding to see the output image:

      $ ssh -X pi@your_ip_address

      • Giridharan Ravichandhran March 6, 2017 at 11:58 pm #

        Hello dearest Mr. Adrian, apparently the issue is fixed and I must thank you tons and tons; thank you so much for your real support, cheers!!! The thing is, I was using a Windows machine and I used to access the Raspberry Pi 3 remotely over SSH using Putty. After you clarified to use the command suggested above, I disabled SSH, worked on the RasPi itself, tried the commands python test_image.py / test_video.py, and they worked well. Then I enabled SSH again, accessed the RasPi remotely from the Windows machine through Putty, tried your suggested command $ ssh -X pi@10.1.105.37, and again tried to execute python test_image.py, and still the following different error occurs:

        Traceback (most recent call last):
        File “test_video.py”, line 5, in
        import cv2
        ImportError: No module named cv2
        Many Thanks,

        • Adrian Rosebrock March 8, 2017 at 1:14 pm #

          Once you SSH into your Raspberry Pi you need to access your Python virtual environment which would have your OpenCV bindings installed:
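
          $ source ~/.profile
          $ workon cv
          $ python test_image.py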

  112. Ulrich March 13, 2017 at 5:06 pm #

    Hello Adrian,
    Thanks for your for your great tutorial! I followed your introductions in this blog and it works after several problems. When I run your program test_video.py on my raspberry pi 2 I got the following warning:

    (cv) pi@raspberrypi ~ $ python test_video.py

    (Frame:2586): GLib-GObject-WARNING **: Attempt to add property GtkSettings::gtk-label-select-on-focus after class was initialised
    q(cv) pi@raspberrypi ~ $ q

    What does this warning mean? What is going wrong?
    Nevertheless the video stream is shown in the window. (sometimes the pi needs a restart)

    Would be great to get help.
    Thanks a lot,
    Regards Ulrich

    • Adrian Rosebrock March 15, 2017 at 9:06 am #

      This is not an error, just a warning. It comes from the GTK library. It can be ignored as it is not related to OpenCV.

  113. meshack March 14, 2017 at 8:57 am #

    hello Adrian,

    A while ago I followed your tutorial on how to install OpenCV 3 on Raspbian Jessie, and I successfully installed OpenCV and
    Python 3.4.2 as well. Now I am following your tutorial on how to access the Raspberry Pi camera with OpenCV and Python, and I stumbled across an error when I try to execute the test_image.py file. What can I do to fix this? Thank you in advance 🙂

    Error found below:

    (python test_image.py
    File “test_image.py”, line 1
    Python 3.4.2 (default, Oct 19 2014, 13:31:11)
    ^
    SyntaxError: invalid syntax)

    • Adrian Rosebrock March 15, 2017 at 8:54 am #

      It sounds like you are trying to execute the Python script inside IDLE — don’t do this. Execute the script via the command line (no IDLE).

  114. Patrick Ronan March 15, 2017 at 6:21 pm #

    Hi Adrian,

    Great tutorials. I am doing a final-year project on an autonomous car with vision processing. I've followed the OpenCV 3 install tutorial and now this one, and everything works fine until test_video.py; I just get a black screen when I run the script. Any ideas?

    Regards,

    Paddy

    • Adrian Rosebrock March 17, 2017 at 9:34 am #

      Refer to this post to resolve the issue.

  115. dakna March 21, 2017 at 4:56 am #

    Hello
    I faced the same problem of a slow frame rate (2 or 3 frames per second) when using PuTTY with X11 forwarding, but after switching to VNC everything works fine.

    • Adrian Rosebrock March 21, 2017 at 7:02 am #

      Thanks for sharing Dakna!

  116. Anbazhagan March 22, 2017 at 1:08 am #

    I got an error like:

    ImportError: cannot import name 'PiRGBArray'

    How do I solve it?

    • Adrian Rosebrock March 22, 2017 at 8:35 am #

      It sounds like you haven’t installed the picamera library. Go back and ensure picamera has been installed.
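
      For example, inside your Python virtual environment (assuming it is named "cv" as in the install tutorial):

      $ workon cv
      $ pip install "picamera[array]"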

  117. Grat Crabtree March 26, 2017 at 5:51 pm #

    Typo in test_video.py
    format=”bgr”
    should be “rgb”

    It messed with me for a bit; I was getting a black screen.

    Great resource anyway!

    • Adrian Rosebrock March 28, 2017 at 1:05 pm #

      That’s not a typo — it should be “bgr”. OpenCV represents images in BGR order rather than RGB. If you were getting a blank screen, you likely need to update your Raspberry Pi firmware, as discussed in this post.
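
      If it helps, here is a minimal sketch of what test_image.py from the post does: capture a single frame in BGR order so the resulting NumPy array can be passed straight to OpenCV.

      # capture a single frame as a NumPy array in BGR order and hand
      # it straight to OpenCV
      from picamera.array import PiRGBArray
      from picamera import PiCamera
      import time
      import cv2

      camera = PiCamera()
      rawCapture = PiRGBArray(camera)
      time.sleep(0.1)  # allow the sensor to warm up
      camera.capture(rawCapture, format="bgr")
      image = rawCapture.array
      cv2.imwrite("frame.jpg", image)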

  118. mehdi March 27, 2017 at 4:04 am #

    Xlib extension RANDR is missing on display 1:0. What does this mean?

    • mehdi March 27, 2017 at 4:08 am #

      I'm using tightvncserver on the Raspberry Pi and Remmina on my Ubuntu laptop.

      • mehdi March 27, 2017 at 4:12 am #

        Using sudo python test.py I get "client is not authorised to access the server" and no image is shown.

    • Adrian Rosebrock March 28, 2017 at 1:03 pm #

      Can you try SSH’ing into your Pi with X11 forwarding and seeing if that resolves your error?

    • Dapeng May 4, 2017 at 12:04 pm #

      Hi Mehdi, have you fixed this problem? I have the same problem. Can you share your solution with me? Thanks.

  119. Ahmed March 29, 2017 at 9:43 pm #

    Hi Adrian,
    I followed your steps for executing the code and also installing OpenCV. Everything was installed successfully. However, when I try to run the code through SSH using PuTTY and Xming on my laptop, I get an error message saying:


    PuTTY X11 proxy: Unsupported authorisation protocol

    (Image:927): Gtk-WARNING **: cannot open display: localhost:10.0

    However, when I connect my Raspberry Pi 3+ to my TV using an HDMI cable I don't get an error. I just want it to work through my laptop, as that would be much more convenient than having to connect and disconnect the TV to check if it is working. I hope you can still try to help me.

    • Adrian Rosebrock March 31, 2017 at 1:54 pm #

      You need to enable X11 forwarding from your SSH connection to see the image on your screen. As for the error message from PuTTY, I’m not sure what is causing that. I would check the PuTTY documentation.

  120. Sufi April 4, 2017 at 6:01 am #

    Hi Adrian,

    Can we do live face detection through the Pi camera?

    I have followed your tutorials on basic motion detection and home surveillance, and they worked for me, but can we do face detection through the Pi camera? If yes, can you guide me on what to do?

    Thanks in advance

  121. JX Gian April 14, 2017 at 11:34 am #

    Thanks for the great step-by-step guide. I've followed all your steps, but I encountered an error upon running "python test_image.py" (or "python test_video.py" for that matter). It says "cannot connect to X server". Would you happen to know what the cause might be? I'm using a Pi camera module as well, running on a Raspberry Pi 3 Model B.

    • Adrian Rosebrock April 16, 2017 at 8:56 am #

      How are you accessing your Raspberry Pi? Over SSH? VNC?

  122. djoseph April 14, 2017 at 3:51 pm #

    Hi,
    I got to the step:

    python test_image.py

    and got the following error (I called the above file picamera.py):

    Traceback (most recent call last):

    ImportError: No module named array

    Tried this from a python 3 virtual env as well.

    Will appreciate any input. Thanks.

    • Adrian Rosebrock April 16, 2017 at 8:55 am #

      Make sure you install NumPy + picamera[array] into your Python virtual environment:
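
      For example (assuming the virtual environment is named "cv"):

      $ workon cv
      $ pip install numpy
      $ pip install "picamera[array]"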

      • djoseph May 5, 2017 at 1:25 pm #

        Both packages are present; I still get the error.

        • Adrian Rosebrock May 8, 2017 at 12:34 pm #

          Hi djoseph — I’m sorry to hear about the continued error. Unfortunately, without physical access to your machine I’m not sure what the exact error is. I would validate that both packages have installed into your Python virtual environment via pip freeze. Also double-check that you are inside your Python virtual environment when executing the script.

  123. Alaa Zoghby April 17, 2017 at 10:37 pm #

    Thanks for your help! Please, I need Python code for video streaming using a webcam rather than the RPi camera.

    • Adrian Rosebrock April 19, 2017 at 12:51 pm #

      Please refer to this blog post to help accessing either the webcam or Raspberry Pi camera module using the same class.

  124. Matze April 22, 2017 at 2:19 pm #

    Dear Mr. Adrian,
    First, I have to say you do a great job!

    I have nearly the same problem as:
    Giridharan Ravichandhran March 6, 2017 at 12:02 am #

    I am also using a Raspberry Pi 3 with Jessie and I am trying to access the Raspberry Pi camera with OpenCV and Python 2.7.
    But after running $ python test_image.py, the following error occurs:

    (Image:807): Gtk-WARNING **: cannot open display:

    please help me with this problem
    (And I'm sorry for any mistakes in my English, I'm German 🙂)
    thanks

    • Adrian Rosebrock April 24, 2017 at 9:39 am #

      It sounds like you’re SSH’ing into your Raspberry Pi rather than executing the script using a keyboard + HDMI monitor. This isn’t a problem, but you do need to supply the -X flag to SSH to utilize X11 forwarding:

      $ ssh -X pi@your_ip_address

      • Matze April 24, 2017 at 10:30 am #

        I'm working directly on the Raspberry Pi (keyboard, mouse with a USB Bluetooth adapter, and an HDMI monitor).
        Is there another possible cause for this error?

        Thanks a lot!

        • Adrian Rosebrock April 24, 2017 at 12:04 pm #

          That is very strange if you’re using a local setup. I would suggest (re)installing X11, then re-compiling OpenCV.

  125. Matze April 24, 2017 at 4:19 pm #

    You are right, that's strange 🙂
    But that could be the core of my problem: what is X11?
    I'm so sorry, I'm really not an expert.

    Thanks!

    • Adrian Rosebrock April 25, 2017 at 11:51 am #

      X11 is the windowing system typically used by Unix-like operating systems to display graphical user interfaces.

  126. Ameer April 25, 2017 at 5:41 pm #

    hello Adrian
    I ran your code and it worked, but I had some issues:
    1) I need to detect faces and also recognize them, so I need high-resolution pictures. I try to keep the resolution around 768x480 most of the time, but I get a very high delay when I run the code. I blinked an LED in front of the Raspberry Pi camera and could only see the blinking on screen about 1 second later, and that's pretty bad for my application.

    Is there a way to reduce this delay?

    One more thing: how can I measure this delay?
    Thanks for your time, I appreciate it.

    • Adrian Rosebrock April 25, 2017 at 9:09 pm #

      To start, make sure you are using threading to access your video stream. Secondly, keep a large copy of the frame, but only apply face detection to a smaller, resized version. Face detection can be an expensive operation, especially on the Raspberry Pi, and the less data you have to process, the better. If a face is detected, extract it from the higher-resolution frame (multiply the bounding box coordinates by the ratio of the original frame size to the resized frame size), and then pass it into your face recognizer. I would also profile your code to determine where the bottleneck is, in case it is not the detection phase.
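
      As a rough sketch of that idea (hypothetical helper names, not code from this post), assuming a Haar cascade XML file sits in the working directory:

      # run detection on a smaller copy of the frame, then scale the
      # boxes back up so faces can be cropped from the full-res frame
      import cv2

      detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

      def detect_faces(frame, width=300):
          # ratio between the original width and the detection width
          ratio = frame.shape[1] / float(width)
          height = int(frame.shape[0] / ratio)
          small = cv2.resize(frame, (width, height))
          gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
          rects = detector.detectMultiScale(gray, scaleFactor=1.1,
              minNeighbors=5, minSize=(30, 30))
          # map each box back to full-resolution coordinates
          return [(int(x * ratio), int(y * ratio), int(w * ratio),
              int(h * ratio)) for (x, y, w, h) in rects]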

      • Ameer April 26, 2017 at 5:27 pm #

        Great, if I understood you correctly I reshaped the code as follows

        and indeed I got better results. One more thing is left: I want to try different resolutions and measure the corresponding camera delay for each. How could this be accomplished, and where should I look to do this? I have Googled all the OpenCV forums and Python blogs without any clue on how to calculate this delay, and I wish you could guide me through the first few steps too.
        Thanks a lot again, you're doing me a huge favor with your generous help.

        • Adrian Rosebrock April 28, 2017 at 9:43 am #

          I’m not sure what you mean by “delay”, but if you want to measure the frame processing rate of your pipeline, I would follow the instructions in this blog post by using the FPS class.
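
          As a minimal sketch, assuming the FPS class from the imutils package (pip install imutils) covered in that post:

          from imutils.video import FPS
          import time

          fps = FPS().start()
          for i in range(100):
              # grab and process a frame here; the sleep just stands in
              # for the per-frame work in your pipeline
              time.sleep(0.01)
              fps.update()
          fps.stop()
          print("elapsed: {:.2f}s, approx. FPS: {:.2f}".format(
              fps.elapsed(), fps.fps()))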

  127. Alex April 26, 2017 at 10:45 am #

    Hi Adrian,

    Thanks a lot for the great tutorials.

    I have (almost) everything working. I am using Python 2.7 with OpenCV.

    The problem is that I can't figure out how to install "picamera[array]" for Python 2.7; it is automatically installed into Python 3.4, which is not working with OpenCV for some reason. Thanks a lot.

    • Adrian Rosebrock April 28, 2017 at 9:51 am #

      Hey Alex — are you using Python virtual environments? Or are you trying to install “picamera[array]” globally on your system?

  128. Govini April 29, 2017 at 12:05 pm #

    Hi! I followed your earlier tutorial to install OpenCV 3.0.0 on Jessie. I installed it for Python 3.4.2. I created the test_image.py file, but when I try to execute it in the terminal it shows:
    Traceback (most recent call last):
    File “test_image.py”, line 5, in
    import cv2
    ImportError: No module named cv2
    Any reason for this?

    • Adrian Rosebrock May 1, 2017 at 1:32 pm #

      Hi Govini — if you are having trouble importing the cv2 library, you'll want to refer to the "Troubleshooting" section of this tutorial. Most of the time this error is caused by not being inside your Python virtual environment before executing the script.

  129. AKA April 29, 2017 at 5:48 pm #

    Hi, thanks for all this. Compiling OpenCV worked flawlessly (RPi 3, all fresh, including a fast SD card). The RPi processor (4 cores) became quite hot.

  130. Davids May 11, 2017 at 11:12 pm #

    Hi Adrian, I would like to know whether the Raspberry Pi camera module needs to be calibrated to obtain images without distortion and to determine the intrinsic and extrinsic parameters of the camera. If so, could you please tell me how I can calibrate it? Thank you very much.

    • Adrian Rosebrock May 15, 2017 at 9:00 am #

      You would need to calibrate the Raspberry Pi camera yourself. I would suggest starting here to learn how to do so.
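
      For reference, here is a very condensed sketch of OpenCV's chessboard-based calibration (hypothetical file names; the linked resource walks through the procedure properly):

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)  # inner corners of the calibration chessboard
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      objpoints, imgpoints = [], []
      for path in glob.glob("calib_*.jpg"):  # chessboard photos taken with the Pi camera
          gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
          found, corners = cv2.findChessboardCorners(gray, pattern, None)
          if found:
              objpoints.append(objp)
              imgpoints.append(corners)

      # mtx holds the intrinsic matrix, dist the distortion coefficients
      ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
          objpoints, imgpoints, gray.shape[::-1], None, None)
      # undistort any frame using the recovered parameters
      undistorted = cv2.undistort(cv2.imread("calib_01.jpg"), mtx, dist)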

  131. Ali May 13, 2017 at 10:56 am #

    Hi, I downloaded your program and tried to execute it, but nothing happens; after about two seconds the (cv) pi@raspberrypi: ~ $ prompt just reappears. Can you help me out? I installed OpenCV with the help of your tutorials.

    • Adrian Rosebrock May 15, 2017 at 8:47 am #

      It sounds like your Raspberry Pi cannot access your camera module. Make sure the camera module is enabled via raspi-config and then run raspistill to see if the camera can save an image to disk. You should also double-check the camera connection on the Pi as you may have installed it incorrectly.
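
      For example:

      $ sudo raspi-config    # enable the camera module, then reboot
      $ raspistill -o test.jpg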

  132. Matt May 15, 2017 at 1:14 am #

    Hi Adrian,

    I am just starting out on this, and your instructions are very good. I appreciate the dedication you have to this. I got all of my install done correctly (I believe), as I achieved all of the checks you showed. However, when I try to run test_image.py I get the error:
    “can’t find ‘__main__’ module in ‘test_image.py’ ”

    I’m sure it’s something simple, but do you have any idea what I did wrong?

    thank you,
    -Matt

    • Adrian Rosebrock May 15, 2017 at 8:34 am #

      Hi Matt — I would suggest that you use the "Downloads" section of this blog post to download the source code and then try executing it. It might be the case that there was an error during your copy and paste of the code into a new file.

  133. Adam May 20, 2017 at 3:53 pm #

    Hi Adrian,
    I was wondering if there is any way to use USB cameras with the Pi with OpenCV and Python. I'm trying to use two of them to get 3D images. Is there a better alternative?

    • Adrian Rosebrock May 21, 2017 at 5:10 am #

      Absolutely. See this post on accessing USB and/or Raspberry Pi camera module with the Pi. You should then read this post on multiple cameras with the Raspberry Pi.
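
      As a minimal sketch, assuming the VideoStream class from the imutils package used in those posts:

      from imutils.video import VideoStream
      import time
      import cv2

      # src=0 grabs the first USB camera (use src=1 for a second one);
      # usePiCamera=True would switch to the Pi camera module instead
      vs = VideoStream(src=0).start()
      time.sleep(2.0)  # give the sensor time to warm up
      frame = vs.read()
      cv2.imwrite("usb_frame.jpg", frame)
      vs.stop()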

  134. arslan May 28, 2017 at 2:52 pm #

    Can we do motion detection using a webcam instead of the Pi camera?

  135. Daniel June 2, 2017 at 5:20 pm #

    Adrian

    I know this post has been up for a couple of years now, but it's still great. Thanks a lot. Now I'm going to start on your tutorial using Django and try to stream video onto my Raspberry Pi web server so I can view it in a web browser. Any suggestions are welcome.

    • Adrian Rosebrock June 4, 2017 at 5:38 am #

      Hi Daniel — thank you for the kind words, it’s much appreciated. I don’t have any tutorials on streaming frames directly from the camera to a web server, but I’ll add it to my queue of ideas. Thank you for the suggestion!

Trackbacks/Pingbacks

  1. Install OpenCV and Python on your Raspberry Pi 2 and B+ - PyImageSearch - May 13, 2015

    […] I’m a big fan of learning by example, so a good first step would be to read this blog post on accessing your Raspberry Pi Camera with the picamera  module. This tutorial details the exact […]

  2. Common errors using the Raspberry Pi camera module - PyImageSearch - August 29, 2016

    […] start, I am going to assume that you’ve already followed the instructions in Accessing the Raspberry Pi Camera with OpenCV and Python and installed the picamera  library on your […]
