Unifying picamera and cv2.VideoCapture into a single class with OpenCV


Over the past two weeks on the PyImageSearch blog, we have discussed how to use threading to increase our FPS processing rate on both built-in/USB webcams, along with the Raspberry Pi camera module.

By utilizing threading, we learned that we can substantially reduce the effects of I/O latency, leaving the main thread to run without being blocked as it waits for I/O operations to complete (i.e., reading the most recent frame from the camera sensor).

Using this threading model, we can dramatically increase our frame processing rate by upwards of 200%.

While this increase in FPS processing rate is fantastic, there is still a (somewhat unrelated) problem that has been bothering me for quite a while.

You see, there are many times on the PyImageSearch blog where I write posts that are intended for use with a built-in or USB webcam, such as:

All of these posts rely on the cv2.VideoCapture  method.

However, this reliance on cv2.VideoCapture becomes a problem if you want to use the code on the Raspberry Pi. Provided that you are not using a USB camera with the Pi and are instead using the picamera module, you’ll need to modify the code to be compatible with picamera, as discussed in the accessing the Raspberry Pi Camera with Python and OpenCV post.

While there are only a few required changes to the code (i.e., instantiating the PiCamera  class and swapping out the frame read loop), it can still be troublesome, especially if you are just getting started with Python and OpenCV.

Conversely, there are other posts on the PyImageSearch blog which use the picamera  module instead of cv2.VideoCapture . A great example of such a post is home surveillance and motion detection with the Raspberry Pi, Python, OpenCV and Dropbox. If you do not own a Raspberry Pi (or want to use a built-in or USB webcam instead of the Raspberry Pi camera module), you would again have to swap out a few lines of code.

Thus, the goal of this post is to construct a unified interface to both picamera and cv2.VideoCapture using only a single class named VideoStream. This class will instantiate either WebcamVideoStream or PiVideoStream based on the arguments supplied to its constructor.

Most importantly, our implementation of the VideoStream  class will allow future video processing posts on the PyImageSearch blog to run on either a built-in webcam, a USB camera, or the Raspberry Pi camera module — all without changing a single line of code!

Read on to find out more.


Unifying picamera and cv2.VideoCapture into a single class with OpenCV

If you recall from two weeks ago, we have already defined our threaded WebcamVideoStream  class for built-in/USB webcam access. And last week we defined the PiVideoStream  class for use with the Raspberry Pi camera module and the picamera  Python package.

Today we are going to unify these two classes into a single class named VideoStream .

Depending on the parameters supplied to the VideoStream  constructor, the appropriate video stream class (either for the USB camera or picamera  module) will be instantiated. This implementation of VideoStream  will allow us to use the same set of code for all future video processing examples on the PyImageSearch blog.

You will only need to supply a single command line argument (or JSON configuration, etc.) to indicate whether you want to use your USB camera or the Raspberry Pi camera module — the code itself will not have to change one bit!

As I’ve mentioned in the previous two blog posts in this series, the functionality detailed here is already implemented inside the imutils package.

If you do not have imutils  already installed on your system, just use pip  to install it for you:
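```shell
$ pip install imutils
```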

Otherwise, you can upgrade to the latest version using:
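```shell
$ pip install --upgrade imutils
```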

Let’s go ahead and get started by defining the VideoStream  class:
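Here is a sketch of the constructor, modeled on the implementation that ships in imutils. The flat module names webcamvideostream and pivideostream follow the file names from the previous two posts; in imutils itself the WebcamVideoStream import sits at the top of the file, but deferring both imports keeps this sketch self-contained:

```python
# videostream.py -- a sketch of the VideoStream constructor
class VideoStream:
	def __init__(self, src=0, usePiCamera=False, resolution=(320, 240),
			framerate=32):
		# check to see if the picamera module should be used
		if usePiCamera:
			# only import the PiVideoStream class (and with it the
			# picamera dependencies) if we are explicitly told to --
			# laptop/desktop users likely won't have picamera installed
			from pivideostream import PiVideoStream

			# initialize the Raspberry Pi camera video stream
			self.stream = PiVideoStream(resolution=resolution,
				framerate=framerate)

		# otherwise, we are using OpenCV, so initialize the webcam
		# video stream -- its only dependency is a working OpenCV install
		else:
			from webcamvideostream import WebcamVideoStream
			self.stream = WebcamVideoStream(src=src)
```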

On Line 2 we import our WebcamVideoStream  class that we use for accessing built-in/USB web cameras.

Line 5 defines the constructor to our VideoStream . The src  keyword argument is only for the cv2.VideoCapture  function (abstracted away by the WebcamVideoStream  class), while usePiCamera , resolution , and framerate  are for the picamera  module.

We want to take special care to not make any assumptions about the type of hardware or the Python packages installed by the end user. If a user is programming on a laptop or a desktop, then it’s extremely unlikely that they will have the picamera module installed.

Thus, we’ll only import the PiVideoStream  class (which then imports dependencies from picamera ) if the usePiCamera  boolean indicator is explicitly defined (Lines 8-18).

Otherwise, we’ll simply instantiate the WebcamVideoStream  (Lines 22 and 23) which requires no dependencies other than a working OpenCV installation.

Let’s define the remainder of the VideoStream  class:
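A sketch of the remaining methods, shown as a continuation of the class (the constructor, described above, stores the chosen stream in self.stream):

```python
class VideoStream:
	# ... constructor omitted; it stores the chosen video stream
	# (webcam or picamera) in self.stream ...

	def start(self):
		# start the threaded video stream
		return self.stream.start()

	def update(self):
		# grab the next frame from the stream
		self.stream.update()

	def read(self):
		# return the current frame
		return self.stream.read()

	def stop(self):
		# stop the thread and release any resources
		self.stream.stop()
```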

As we can see, the start , update , read , and stop  methods simply call the corresponding methods of the stream  which was instantiated in the constructor.

Now that we have defined the VideoStream  class, let’s put it to work in our videostream_demo.py  driver script:
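The first half of the script might look like the following sketch (it mirrors the structure described below and assumes imutils is installed; it needs an attached camera to actually run):

```python
# videostream_demo.py -- sketch of the driver script setup
from imutils.video import VideoStream
import datetime
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--picamera", type=int, default=-1,
	help="whether or not the Raspberry Pi camera should be used")
args = vars(ap.parse_args())

# initialize the video stream and allow the camera sensor to warmup
vs = VideoStream(usePiCamera=args["picamera"] > 0).start()
time.sleep(2.0)
```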

We start off by importing our required Python packages (Lines 2-7) and parsing our command line arguments (Lines 10-13). We only need a single switch here, --picamera , which is used to indicate whether the Raspberry Pi camera module or the built-in/USB webcam should be used. We’ll default to the built-in/USB webcam.

Lines 16 and 17 instantiate our VideoStream  and allow the camera sensor to warmup.

At this point, all the hard work is done! We simply need to start looping over frames from the camera sensor:
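Continuing the driver script sketch (vs, imutils, cv2, and datetime come from the setup described above):

```python
# loop over the frames from the video stream
while True:
	# grab the frame from the threaded video stream and resize it
	# to have a maximum width of 400 pixels
	frame = vs.read()
	frame = imutils.resize(frame, width=400)

	# draw the timestamp on the frame
	timestamp = datetime.datetime.now()
	ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")
	cv2.putText(frame, ts, (10, frame.shape[0] - 10),
		cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

	# show the frame and wait for a keypress
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```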

On Line 20 we start an infinite loop that continues until we press the q  key.

Line 23 calls the read  method of VideoStream  which returns the most recently read frame  from the stream (again, either a USB webcam stream or the Raspberry Pi camera module).

We then resize the frame (Line 24), draw the current timestamp on it (Lines 27-30), and finally display the frame to our screen (Lines 33 and 34).

This is obviously a trivial example of a video processing pipeline, but keep in mind the goal of this post is to simply demonstrate how we can create a unified interface to both the picamera  module and the cv2.VideoCapture  function.

Testing out our unified interface

To test out our VideoStream class, I used:

To access the built-in camera on my OSX machine, I executed the following command:
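```shell
$ python videostream_demo.py
```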

Figure 1: Accessing the built-in camera on my OSX machine with Python and OpenCV.

As you can see, frames are read from my webcam and displayed to my screen.

I then moved over to my Raspberry Pi where I executed the same command to access the USB camera:

Followed by this command to read frames from the Raspberry Pi camera module:
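```shell
$ python videostream_demo.py --picamera 1
```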

The results of executing these commands in two separate terminals can be seen below:

Figure 2: Accessing both the Raspberry Pi camera module and a USB camera on my Raspberry Pi using the exact same Python class.

As you can see, the only thing that has changed is the command line arguments where I supply --picamera 1 , indicating that I want to use the Raspberry Pi camera module — not a single line of code needed to be modified!

You can see a video demo of both the USB camera and the Raspberry Pi camera module being used simultaneously below:


Summary

This blog post was the third and final installment in our series on increasing FPS processing rate and decreasing I/O latency on both USB cameras and the Raspberry Pi camera module.

We took our implementations of the (threaded) WebcamVideoStream  and PiVideoStream  classes and unified them into a single VideoStream  class, allowing us to seamlessly access either built-in/USB cameras or the Raspberry Pi camera module.

This allows us to construct Python scripts that will run on both laptop/desktop machines and the Raspberry Pi without having to modify a single line of code — provided that we supply some method to indicate which camera we would like to use, of course. This can easily be accomplished using command line arguments, JSON configuration files, etc.

In future blog posts where video processing is performed, I’ll be using the VideoStream  class to make the code examples compatible with both your USB camera and the Raspberry Pi camera module — no longer will you have to adjust the code based on your setup!

Anyway, I hope you enjoyed this series of posts. If you found this multi-part series format (rather than one-off posts on a specific topic) beneficial, please let me know in the comments thread.

And also consider signing up for the PyImageSearch Newsletter using the form below to be notified when new blog posts are published!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


144 Responses to Unifying picamera and cv2.VideoCapture into a single class with OpenCV

  1. Kenny January 4, 2016 at 10:45 am #

    Awesome stuff, Adrian! Thanks for your continual enthusiasm in sharing your breadth of knowledge in computer vision!

  2. Harvey January 4, 2016 at 2:18 pm #

    I would be interested in what you think of this as a vision platform: https://www.kickstarter.com/projects/pine64/pine-a64-first-15-64-bit-single-board-super-comput

    • Adrian Rosebrock January 4, 2016 at 2:33 pm #

      It seems very similar to the Pi, depending on which model is used. The fact that the processor is faster is nice. But personally, the 64-bit support is what would make me excited. It will be interesting to see how the project evolves.

  3. Remy January 5, 2016 at 10:35 am #

    Phenomenal work Mr. Rosebrock and great tutorial. I went from approx. 12 FPS to 86 FPS (81 FPS displayed) using a crappy ip camera. (trendnet tv-ip572PL).

    • Adrian Rosebrock January 5, 2016 at 1:53 pm #

      Very nice! However, it’s important to keep in mind that you’re likely not getting 86 FPS from the physical camera sensor. Instead, your video processing loop is fast enough to process 86 FPS, hence why I use the term FPS processing rate in the blog post. It’s a subtle, but important nuance to keep in mind. 🙂 In any case, congrats on the improvement!

  4. Mats Önnerby January 11, 2016 at 5:00 pm #

    The latest Raspbian comes with V4L2 drivers preinstalled that make the picamera show up the same way as a webcamera. All you need to do is to add a line bcm-2835-v4l2 to /etc/modules and then reboot. I have tested and the program above works in both modes. I also tested the Real-time barcode detection and it works too, even if it’s a bit slow.

    • Adrian Rosebrock January 12, 2016 at 6:32 am #

      Awesome, thanks for the tip Mats. I didn’t realize Raspbian Jessie came with V4L2 drivers pre-installed, that’s great.

    • patrick January 18, 2016 at 9:25 pm #

      Thanks Mats, V4l2 makes things simpler.
      BTW the line should be bcm2835-v4l2 (no dash after bcm).

      • Rishabh March 13, 2016 at 9:32 am #

        Hi guys, I’m a big newbie at this. Do i write this line in the terminal or in my python code? Thanks!

    • HAJIRA April 13, 2017 at 12:24 pm #

      Mats Önnerby — Could you please provide me the the way to install the v4l2 drivers…Need it badly for my project.!

  5. Ark Nieckarz January 12, 2016 at 11:42 am #

    Great tutorials but will this work under Windows?
    Meaning using a built-in camera (like on laptops), USB or some other video input stream under Windows.

    • Adrian Rosebrock January 12, 2016 at 11:58 am #

      Yes, provided that you can access your webcam stream (either USB or otherwise) using the cv2.VideoCapture method, this code should work with Windows.

  6. Bob January 13, 2016 at 8:49 pm #


    Once you define the VideoStream class, you apparently load it from the videostream_demo.py with:

    from imutils.video import VideoStream

    What file is this new class put into or called, and where is it located?

    • Adrian Rosebrock January 14, 2016 at 6:16 am #

      I have already implemented the functionality in imutils, my open-source set of OpenCV convenience functions.

      You can install imutils using pip:

      $ pip install imutils

      And from there you can import the VideoStream class like I do in videostream_demo.py

      • Paul January 3, 2018 at 12:29 pm #

        Don’t forget that this script also requires cv2 (found in opencv)
        sudo pip install opencv

        • Adrian Rosebrock January 3, 2018 at 12:49 pm #

          I do not recommend installing OpenCV via pip just yet. There are a number of optimizations not used in the pip install. You also will not have the additional contrib packages as well. Installing OpenCV on the Raspberry Pi (for the time being) is best done when compiled from source. I have a number of tutorials on this.

  7. patrick January 17, 2016 at 7:23 pm #

    Great !! now I see double….Getting closer to stereoscopic stuff Doctor Rosebrock ?

    Always a pleasure going through your tutorials, keep on doing this great work Adrian .

    • Adrian Rosebrock January 18, 2016 at 3:22 pm #

      I honestly have never done any stereoscopic work before, although that is an avenue I would like to explore 🙂

  8. VIJAYA KUMAR February 3, 2016 at 12:01 pm #

    hello adrian i’m vijaya i want to do my B.E project in digital image processing will you please help me in how to configure the opncv and the python in windows……….thanks in advance…………….

    • Adrian Rosebrock February 4, 2016 at 9:17 am #

      Hey Vijaya, congrats on working on your BE project, that’s great. I’m sure you’re excited to graduate. But to be honest, I haven’t used a Windows system in 9+ years and have never setup OpenCV on Windows OS. If you have a question related to Raspbian, Ubuntu, or OSX, I can do my best to point you in the right direction though.

  9. CAO March 4, 2016 at 9:49 am #

    Hi Adrian,

    Do you think this method could work with the DS325 camera from SoftKinetics ?

    • Adrian Rosebrock March 6, 2016 at 9:22 am #

      I personally haven’t used that particular camera before, but from what I understand, it’s a 3D camera. I don’t have much experience with 3D sensors, although it’s something that I hope to explore in future blog posts. In short, I can’t give an honest answer to your question.

  10. Bosten April 2, 2016 at 3:04 pm #


    So I have installed imutils and am trying to run your program on my mac OS X machine with its built in webcam. For some reason I can’t get idle to work with imutils but it does work with the terminal. Also, when I run your program from python in the terminal, no window displaying the feed shows up. What am I doing wrong? I’m a beginner in all this so there is most likely something I’m doing wrong.

    I also have a question about your program. Does it continue displaying the webcam feed? All the other things that I have tried result in a crash in python after about a minute of showing the feed.

    • Adrian Rosebrock April 3, 2016 at 10:25 am #

      If you’re not getting a video to show up and the Python script is automatically exiting, then you should double check that your cameras are properly connected to the Pi. I would also start with this blog post on accessing the Raspberry Pi camera. It will give you a good starting point with less complex code.

  11. Marcus April 18, 2016 at 8:06 am #


    Hey is there a reason why when I get to the line:

    frame = vs.read()

    it tells me vs is not defined?

    • Adrian Rosebrock April 18, 2016 at 4:45 pm #

      It sounds like you may have not downloaded the source code to the blog post and are missing part of the videostream.py implementation. Make sure you use the “Downloads” form at the bottom of this post to grab all the code in the post.

  12. Lukas Vosyka May 5, 2016 at 7:08 pm #

    Hi Adrian,

    just curious – is there a reason you do not use the resolution for the VideoStream class in case of using a USB webcam. I mean the cv2.VideoCature would support it, kind of like this:

    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

    This would make the imutils in this case more versatile, no?


    • Adrian Rosebrock May 6, 2016 at 4:34 pm #

      Great suggestion. Although the problem I’ve ran into with this is that not all webcams obey/use the .set settings. Instead, I leave this to the programmer and user of the imutils library to determine if they want to use this functionality or not.

      • wally August 8, 2018 at 12:08 pm #

        Sorry for the reply to old stuff, but the:

        cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)


        AttributeError: ‘WebcamVideoStream’ object has no attribute ‘set’

        I’m using imutils-0.4.6

        The resolution= with the PiCamera module works although it usually “rounds” the height to be something mod 8, i.e. a quarter Picam video frame ends up height 544 instead of 540.

        I know this webcam honors the setting with raw cv2 captures, but then the code changes ripple as cv2 requires:

        ret, frame = vs.read()

        whereas your imutil gets a frame with:
        frame = vs.read()

        If I’m missing something obvious about python and CV2 I appologise, but Google brings me back to this blog.

        • Adrian Rosebrock August 9, 2018 at 2:52 pm #

          You cannot call the .set method on the WebcamVideoStream object. You need to call it on the cv2.VideoCapture object.

  13. MP May 7, 2016 at 11:38 am #

    Hi Adrian,
    i am new to python as per your tutorial i have installed the imultils on my raspberry pi and also created the videostream_demo.py and put all the code shown above in one file.
    i guess i am doing wrong here.

    can you guide which file need to be created what code goes in which file.

    one more question can any usb camera will work i have microsoft lifecam vx-1000 which was lying with me.


    • Adrian Rosebrock May 7, 2016 at 12:33 pm #

      If you’re just getting started learning Python, I would use the “Downloads” form on this page to grab the source code to this post. This will demonstrate how you need to structure your files and where to put the code in each file.

      As for your webcam, I have never used the Microsoft Lifecam before. Are you trying to use it on your laptop/desktop or on a Raspberry Pi?

  14. Alexandra May 13, 2016 at 10:23 am #

    Hello Adrian!

    Thanks for all the information you provide! You explain unbelievably well!

    I have a question: do you know if it’s possible to access a GigE 5 Allied Vision camera into Python( it’s an IP camera) for further image processing ?


    • Adrian Rosebrock May 13, 2016 at 11:30 am #

      I personally don’t have any experience with that camera — but I’ll try to do some IP streaming tutorials in the near future.

      • Siju February 9, 2019 at 11:30 pm #

        Do you have similar posts on Wifi IP cameras

        • Adrian Rosebrock February 14, 2019 at 1:48 pm #

          Sorry, no, I do not have any posts for WiFi IP cameras. I may cover that as a future topic but I cannot guarantee if/when that will be.

  15. Andy June 8, 2016 at 9:02 pm #

    How I can replace camera.release? with the Video Stream, because I think you didn’t define this.

    • Adrian Rosebrock June 9, 2016 at 5:18 pm #

      Great point Andy. You’ll want to call vs.camera.release() before calling cv2.destroyAllWindows.

      • Jeff Ward January 5, 2017 at 11:50 am #

        ‘PiCamera’ has no ‘release()’ method. It does have ‘close()’.

      • Jeff Ward January 7, 2017 at 1:06 am #

        When using WebcamVideoStream, would it be ‘vs.stream.release()’?

        • Adrian Rosebrock January 7, 2017 at 9:23 am #

          Correct, I meant to say vs.stream.release(). Thank you for pointing this out.

  16. Marcelo Aragao July 25, 2016 at 9:54 am #

    Congratulations Adrian! Great blog!
    How do I use other resolution?
    I have changed:
    def __init__(self, src=0, usePiCamera=False, resolution=(1024, 768), framerate=32):
    And comment:
    #frame = imutils.resize(frame, width=400)
    But still 320×240 image resolution

    • Adrian Rosebrock July 27, 2016 at 2:31 pm #

      You can change the resolution when you initialize the object, like this:

      vs = VideoStream(resolution=(1024, 768))

      • Joanacelle December 10, 2017 at 5:24 am #

        this solution does not solve the problem of resolution =( help me please

        • kwseow January 27, 2018 at 9:06 am #

          did you manage to solve this?

          • Adrian Rosebrock January 30, 2018 at 10:35 am #

            A less optimal solution, but one useful for debugging, would be to access the picamera object directly and see if you can modify the resolution during initialization.

  17. vivek September 24, 2016 at 5:55 am #

    where we create the first code? after installing imutlis what are the steps for making class? i didnt get it where it stores? explain briefly

    • Adrian Rosebrock September 27, 2016 at 8:53 am #

      Open up a text editor (whichever text editor you prefer) and start inserting the code. I would also suggest that you use the “Downloads” section of this blog post to download the source code — that way you will have a copy of the code that is working. From there you should try to code the example yourself.

  18. H December 5, 2016 at 9:34 am #

    Hi Adrian, is it possible to apply this to code that is being used to do multi-scale image template matching? I’ve had an attempt and kept getting this error

    ”The camera is already using port %d ‘ % splitter_port)
    picamera.exc.PiCameraAlreadyRecording: The camera is already using port 0 ‘

    I think the problem is because I’m trying to put every frame into an array so I can grayscale to template match better.. Do you know a way around this?

    • Adrian Rosebrock December 5, 2016 at 1:23 pm #

      It sounds like you might have another script/program that is accessing your Raspberry Pi camera module. If you want to perform template matching with the code in this blog post you’ll need to combine the two scripts together.

  19. Chris January 6, 2017 at 10:01 pm #

    Hi Adrian, thanks for your post and a few of the others I have used! I am working with a remote camera on Raspberry Pi. I was planning on sending the frame back to my mac with Pyro4. At first glance, it seems like it might be tricky to get picamera running on OS X but that Pyro4 is trying to deserialize the object I send from my Pi back to a picamera type.

    I am doing this to avoid doing heavy processing on the Pi. I eventually need to do face recognition on the frame too which I am doing on the frame I get from cv2. How good was your performance on the Pi doing face detection? Is it worthwhile for me to continue this video streaming approach?

    Thanks again!

    • Adrian Rosebrock January 7, 2017 at 9:27 am #

      Are you asking whether the Pi is suitable for face detection or face recognition? Face detection can easily be run on the Pi without a problem. Face recognition on the other hand is substantially slower. You would be lucky to get more than 1-2 FPS for face recognition using basic algorithms.

      I would consider using a message passing library like zeromq and then passing frames with detected faces to a system with more computational power if you intend on using any type of advanced face recognition algorithms (such as OpenFace).

  20. Nick January 7, 2017 at 3:58 pm #

    I just have to say a big thank you Adrian! I’m doing a project with a Pi + OpenCV and I’ve had several problems but your awesome guides have helped me greatly! Thanks again!
    Keep on rocking

    • Adrian Rosebrock January 9, 2017 at 9:15 am #

      Thanks Nick, I’m happy I could help 🙂 Have a great day.

  21. vivek January 12, 2017 at 6:49 am #

    can i get same class for raspberry pi camera with c++ for capturing video and image operations like this one?
    help me on this

    • Adrian Rosebrock January 12, 2017 at 7:53 am #

      I only offer Python + OpenCV code on this blog, not C++. Perhaps another reader can convert this implementation to C++ for you.

  22. vicky February 10, 2017 at 4:10 am #

    hi adrian , I am doing my BE project in image processing and i am new to pi can u hlp me how to detect a shape by using pi camera video stream

    • Adrian Rosebrock February 10, 2017 at 2:00 pm #

      I cover shape detection in this tutorial. You’ll need to utilize the code in this post to access the frames from the Raspberry Pi video stream, then apply the shape detector to each frame. If you’re just getting started with computer vision and OpenCV, I would suggest going through Practical Python and OpenCV.

  23. Biswajit February 13, 2017 at 2:12 am #

    Hi Adrian.what is the type of the frame in frame = vs.read()?Is it a direct image or not ? how can I encode it into base64 or any other string representation of this image ? Thanks in advance.

    • Adrian Rosebrock February 13, 2017 at 1:37 pm #

      The frame itself is a NumPy array with shape (h, w, d) where “h” is the height, “w” is the width, and “d” is the depth. You can use these functions to encode/decode base64.

  24. Christian February 14, 2017 at 4:41 am #

    Hi Adrian,

    great article, thanks for sharing-the code can get more cpu-friendly if you utilize a queue in the VideoStream class. By doing so, synchronization between main- and background-thread gets achieved and cpu load drops from 25 % to 10% on a Raspberry Pi 3.

    from Queue import Queue

    items = Queue()
    def update():
    frame = self.rawCapture ….

    def read():
    return items.get()

    • Adrian Rosebrock February 14, 2017 at 1:21 pm #

      Nice tip Christian. I also cover queueing in this post as well.

  25. Anbu March 21, 2017 at 4:11 am #

    i got the error like

    ImportError: No module named webcamvideostream

    but i have installed numpy(array)

  26. Reshal March 22, 2017 at 5:15 pm #

    Adrian fantastic work with these Raspberry Pi tutorials. Well done! It is actually amazing with what one can do with the Pi.

    I have a few questions, would it be possible to use this and then stream it to a server? The server would be setup on the Pi. And an html page is created which contains the a url to access this feed. If so how would one do it?

    The aim is to access this image processed feed through a browser on an android, or ios or pc device.

    • Adrian Rosebrock March 23, 2017 at 9:29 am #

      If you want to send the stream directly to another source I would skip OpenCV entirely and use something like gstreamer.

  27. Dustin March 24, 2017 at 10:48 pm #

    Hey Adrian,

    Thank you for posting these tutorials. They have been incredibly helpful for my senior design project. We are creating a robot that tracks swimmers moving in a pool to help monitor their form.

    I am encountering an intermittent issue while using this class with the PiCamera. When I run the demo script the first time I boot, everything works fine. I get this error if I try to close the program and re-run it:

    frame = imutils.resize(frame,width=800)
    File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py”, line 69, in resize
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    This error doesn’t occur when I run the script using a webcam, only with the PiCamera. Do you have any idea why this occurs?

    • Adrian Rosebrock March 25, 2017 at 9:16 am #

      This error could be due to a variety of reasons. To start, have you updated your instantation of VideoStream to access the Raspberry Pi camera module? Can you access your Raspberry Pi camera module via the command line?

  28. Twinkle April 1, 2017 at 3:00 am #

    Thank you for your previous post that helped in increasing the fps of my raspberry pi camera.
    However while running videostream_demo.py, I am getting the following error:

    What could be the solution?

    Thanks in advance.

    • Twinkle April 1, 2017 at 3:14 am #

      I even tried running the program through command line , still same error occurs

      • Paul January 3, 2018 at 12:27 pm #

        Make sure you see a /dev/video0 or like device. I had the same problem with my USB cam not being detected. I had to replug it to get it to run.

    • Adrian Rosebrock April 3, 2017 at 2:13 pm #

      The problem here is that your system is not able to correctly access your webcam/video stream/Raspberry Pi camera. Please see this post on NoneType errors for more information.

  29. Vince July 24, 2017 at 1:51 pm #

    Thank you Adrian for this amazing write up and for the write up on how to install opencv 3 for python. 10/10.

    I got this working, however when I try to display the image with a larger resolution, the performance drops fps wise. If I comment out the resize and setup up the resolution when declaring vs, I get what I want resolution wise, but the rates drop. Any ideas what I am doing wrong? Is opencv just this slow with the pi? If I just use pythons picam module, it can show this resolution at high fps and you mention getting high fps with your methods here.

    Thanks! (Using Pi model 2B with picam )

    • Adrian Rosebrock July 24, 2017 at 3:27 pm #

      The higher the resolution, the more data has to be read from the camera. The more data, the slower the performance. This is simply a side effect of using the picamera module.

      • Vince July 24, 2017 at 4:50 pm #

        While I agree with the statement, more data = slower performance, as I said earlier using just picamera module I get fast fps at high resoultion, but I dont have access to the frame, hence where opencv comes into play.

        picamrea script:

        from picamera import PiCamera
        import time

        camera = PiCamera()


        With this script I get full resolution at a fast fps, I would say 30-60.

        When I run the script in your post, when I go to full resolution, the system bogs down. I have to let resolution = (200,200) to get anything acceptable (low lag, decent fps).

        Im just wondering why, if the picam and picamera module are capable of fast performance, why I am not getting that with opencv? Is it my execution? What fps are expected just showing an image within opencv at full picam resoution? Just want to make sure this is what is expected using opencv on the pi or if I missed a setting.

        I tried using a usb cam, with the same results.

        • Adrian Rosebrock July 28, 2017 at 10:16 am #

          To be honest, this is to be expected when writing video processing scripts with OpenCV. The frames have to be read from the camera, converted to a NumPy array, and displayed to your screen. As far as I understand, start_preview does not have to convert to a NumPy array and can instead display the stream directly to your desktop. For more details I would suggest asking the picamera developers.

  30. Franko September 14, 2017 at 3:20 pm #

    How can I import raspi_cam_iterface into opencv on another computer

  31. Jim Liu September 18, 2017 at 8:52 pm #

    Hi Adrian,

    When I used the VideoStream class for the input video file, the output of ‘frame=vs.read()’ is None. Debugging into the function read(self) of WebcamVideoStream, I found self.frame is None. Can you help me on this? Thanks.

    • Adrian Rosebrock September 20, 2017 at 7:20 am #

      It sounds like OpenCV cannot access your webcam. Please double-check that your webcam is properly connected and you can access it via the cv2.VideoCapture function.

  32. Syed Tauseef September 24, 2017 at 4:38 am #

    This guide really helped me out with my project. Now I want to get a resolution of 1280×720 @ 25 FPS. Where should I change the code (only in VideoStream?) and how? Please explain.

    I would also like to get your opinion. My project is obstacle avoidance for quadcopters using optical flow, later adding SURF for robust detection. Right now I am concentrating purely on optical flow and object detection: I get video frames from the Pi camera on a Pi 3 and calculate the optical flow (calcOpticalFlowPyrLK) for object detection in a selected ROI. My question is, how can we reduce the computation time? Is there any way of threading, since I will be using SURF in the future?

    • Adrian Rosebrock September 24, 2017 at 7:14 am #

      Are you trying to do this with your USB webcam? Or the Raspberry Pi camera module?

      As for your project, I’m actually a little worried that 2D algorithms wouldn’t be sufficient. Quadcopters can move quite fast so you’ll need to balance speed with accuracy. The other issue here is that avoiding obstacles best works with stereo/depth so you can compute the depth of the image. I also think you should include other sensors into the copter (radar for instance). A purely CV approach would be very tricky to build and you would get better results if you incorporated multiple sensors and didn’t rely strictly on CV.

      • Syed Tauseef September 24, 2017 at 2:17 pm #

        First of Thanks for the replies !

        I'm going to work with the Raspberry Pi camera. Can the Raspberry Pi 3 handle optical flow and SURF as a hybrid algorithm without using much computation time?

        I'm planning to incorporate an ultrasonic sensor with this. I have payload constraints, so using two cameras is not possible in my quad.

        • Adrian Rosebrock September 26, 2017 at 8:36 am #

          It’s worth testing, but realistically applying optical flow and real-time keypoint matching would likely be too much for the Raspberry Pi.

          • Syed Tauseef October 5, 2017 at 11:57 am #

            Optical flow runs smoothly; I should try it along with keypoint matching. Only the VideoStream hangs and lags, even with threading, though CPU usage is only approximately 50% and I don't know why it lags. I will update you with both running as a hybrid algorithm.

  33. daniel October 22, 2017 at 12:04 pm #

    I'm using the VideoStream script. Can I rotate the video output from the RasPi cam?

    • Adrian Rosebrock October 23, 2017 at 6:13 am #

      Yes. Take a look at the cv2.warpAffine and cv2.flip functions. I would also suggest reading through my book, Practical Python and OpenCV where I discuss the basics of computer vision and image processing.
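      A minimal sketch of the rotation in practice (the 180-degree angle is only an example; imutils.rotate wraps cv2.warpAffine for arbitrary angles, while cv2.flip handles mirror flips):

```python
# Sketch: rotating frames from the threaded stream. imutils.rotate wraps
# cv2.warpAffine for arbitrary angles; cv2.flip performs mirror flips.

def flip_code(mode):
    """Map a readable mode name to the cv2.flip code."""
    return {"horizontal": 1, "vertical": 0, "both": -1}[mode]

def show_rotated(angle=180):
    # third-party imports live here so the helper above stays importable
    import cv2
    import imutils
    from imutils.video import VideoStream

    vs = VideoStream(usePiCamera=True).start()
    while True:
        frame = imutils.rotate(vs.read(), angle=angle)
        # equivalently, for 180 degrees: cv2.flip(frame, flip_code("both"))
        cv2.imshow("Rotated", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()
    vs.stop()

# On the Pi: show_rotated(180)
```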

  34. olivia November 6, 2017 at 8:53 am #

    Hello again Adrian, thanks for saving my life.
    Is your code compatible with the Logitech webcam C270, or just the C920?
    I'm using a Raspberry Pi 3 Model B.

    • Adrian Rosebrock November 6, 2017 at 10:23 am #

      Yes, the code should work with the C270. If you are getting NoneType errors please refer to this blog post.

      • olivia November 6, 2017 at 10:31 am #

        Thank you so much, Adrian.

  35. David November 28, 2017 at 7:00 am #

    Hi Adrian,
    I found a big difference between picamera and OpenCV image capture. When I'm using the VideoCapture class, I can modify the array with my own pixels (for example, I can overlay a photo on the image after converting it to an array too). But with the picamera.array class the array is defined as read-only, so my technique to overlay doesn't work.
    And I found my video refreshes faster with VideoCapture.


  36. fariborz December 5, 2017 at 7:08 am #

    Hello Mr. Adrian,
    Thank you for a good tutorial. I have a question: how can I use the picamera library's features and commands when using this method, like camera.iso = 100, the framerate setting, or the library's other commands? When I use this method to capture from the camera, I cannot use the rest of the library's commands in the program.
    Thanks if you can guide me.

    • Adrian Rosebrock December 5, 2017 at 7:22 am #

      I would suggest creating your own “VideoStream” and/or “PiVideoStream” class and then modifying either (1) the constructor to accept any relevant parameters you would need or (2) modifying the class directly.

      Additionally, before you call .start() you could also reinstantiate the self.stream object.

      I hope that helps!
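      A sketch of approach (1): a small subclass whose constructor applies extra picamera settings. The imutils PiVideoStream API (resolution/framerate keyword arguments, a .camera attribute) is assumed here; verify the names against your installed version.

```python
# Sketch: subclass PiVideoStream and apply extra picamera settings
# (iso, shutter_speed, ...) in the constructor. The imutils PiVideoStream
# signature and .camera attribute are assumptions; check your version.

def apply_camera_settings(camera, settings):
    """Set each attribute in `settings` on a PiCamera-like object and
    return the names that were actually applied."""
    applied = []
    for name, value in settings.items():
        if hasattr(camera, name):
            setattr(camera, name, value)
            applied.append(name)
    return applied

def make_tuned_stream(settings, **kwargs):
    from imutils.video.pivideostream import PiVideoStream

    class TunedPiVideoStream(PiVideoStream):
        def __init__(self, settings=None, **kw):
            super().__init__(**kw)
            apply_camera_settings(self.camera, settings or {})

    return TunedPiVideoStream(settings=settings, **kwargs)

# On the Pi: vs = make_tuned_stream({"iso": 100}, resolution=(640, 480)).start()
```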

  37. Ben December 16, 2017 at 11:04 pm #

    Hi Adrian, I am very new to this so please excuse my ignorance.

    You start by saying “get started by defining the VideoStream class”. Can you explain how to do this? Do we create a new file called VideoStream with our chosen text editor, or am I missing something here?

    • Adrian Rosebrock December 19, 2017 at 4:29 pm #

      Hey Ben, make sure you download the imutils package which includes the VideoStream class.

      You can install it via:

      $ pip install imutils

  38. Jamie February 5, 2018 at 8:01 pm #

    Hey Adrian, great work. Your articles are always well written and easy to follow.

    I’ve been running into some trouble attempting to stream video from the Raspberry Pi over a network to a pipe created on another machine. Have you ever explored this method of transmitting and consuming a video feed with OpenCV?

    Any advice on the following would be greatly appreciated. https://stackoverflow.com/questions/48611517/os-x-10-12-6-netcat-nc-cannot-use-mkfifo-named-pipe-raspberry-pi-3-camera-stre

    • Adrian Rosebrock February 6, 2018 at 10:06 am #

      Are you trying to stream all frames as fast as possible to a separate system? Or just stream a few frames such as frames that have certain objects, etc.?

      • Jamie February 6, 2018 at 9:03 pm #

        Adrian, the idea was to use a PI’s camera as an input over a network and stream the data to a computer that is capable of processing multiple feeds simultaneously. So the best quality as fast as possible.

        On the server side of things that’s where I’d want to process frames, grab faces, and store the frame plus the faces it captured.

        I think I found a recipe that could do the trick. Check out 4.9. Capturing to a network stream (http://picamera.readthedocs.io/en/release-1.9/recipes1.html#capturing-to-a-network-stream) from the Raspberry PI…

        The only problem is that they’re not pulling in the stream with openCV. I assume I can adapt the method you illustrate in this article (https://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/) to do that. Step 6, test_python.py Line 17.

        Am I on the right track, do you think that would work?
        Let me know if I’m making any incorrect assumptions.

      • Jamie February 7, 2018 at 8:32 am #

        I’m trying to stream all frames as fast as possible to a separate system. I left another comment about a possible approach but it looks like it was deleted.

        Any direction is appreciated. Thanks.

        • Adrian Rosebrock February 8, 2018 at 7:58 am #

          Hey Jamie — PyImageSearch gets a lot of comments and due to spam reasons, I need to moderate them all. I cannot spend my whole day inside the blog waiting for new comments to come in so I only go through them once every 48-72 hours. I appreciate your patience. I see you have resolved the issue in another comment. I have replied there as well.

  39. Jamie February 7, 2018 at 6:45 pm #

    Adrian, ended up successfully getting this working. https://stackoverflow.com/a/48675107/2355051

    Any advice for optimizing the stream at larger resolutions would be appreciated.

    Thanks again for showcasing your research and tests, without them it would have taken forever to find a solution to illustrate this proof of concept.

    • Adrian Rosebrock February 8, 2018 at 7:53 am #

      Hey Jamie, congrats on getting the stream working, nice job! Take a look at gstreamer and see if you can stream the raw capture directly through gstreamer to your endpoint. This will enable you to encode and compress the stream.

  40. Phil February 27, 2018 at 1:13 am #

    I’m getting this error when I try to run the code in this tutorial. Any idea what’s going on?

    Traceback (most recent call last):
    File “test.py”, line 18, in

    from picamera.array import PiRGBArray
    ImportError: No module named picamera.array

    • Phil February 27, 2018 at 1:17 am #

      Ignore this. I somehow missed installing python-picamera!

      • Adrian Rosebrock February 27, 2018 at 11:27 am #

        Congrats on resolving the issue 🙂

  41. Phil February 27, 2018 at 1:25 am #

    How can I write the captured stream to a file? With cv2, I used to use cv2.VideoWriter_fourcc and cv2.VideoWriter , but with this code, it just produces an empty .avi file.

    • Adrian Rosebrock February 27, 2018 at 11:28 am #

      If cv2.VideoWriter is producing an empty video file, it’s likely that your system does not have the proper video codecs installed. Working with OpenCV and output video can be a pain, but I do my best to detail the process in this blog post.
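      Besides missing codecs, the other classic cause of an empty file is a frame-size mismatch. A minimal sketch (the 'MJPG' codec, 20 FPS, and frame count are example choices, not requirements):

```python
# Minimal sketch of writing the threaded stream to disk. An empty .avi is
# often caused by passing cv2.VideoWriter a (width, height) that does not
# match the frames you write, in addition to missing codecs.

def writer_frame_size(frame_shape):
    """cv2.VideoWriter expects (width, height); NumPy shapes are (h, w, c)."""
    h, w = frame_shape[:2]
    return (w, h)

def record(path="output.avi", n_frames=200):
    import cv2
    from imutils.video import VideoStream

    vs = VideoStream(usePiCamera=True).start()
    frame = vs.read()
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(path, fourcc, 20.0, writer_frame_size(frame.shape))
    writer.write(frame)
    for _ in range(n_frames - 1):
        writer.write(vs.read())
    writer.release()
    vs.stop()

# On the Pi: record()
```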

  42. Daoud Ghannam April 3, 2018 at 4:28 pm #

    thank you for the tutorials you provide.

    I followed each step above, and when I run videostream_demo.py I get the following error:

    File “videostream_demo.py”, line 2, in
    from imutils.video import VideoStream
    ImportError: No module named imutils.video

    I'm not sure where the problem is, even though I installed imutils, updated it, and I'm running from the cv environment.

    What should I do?

    • Adrian Rosebrock April 4, 2018 at 12:09 pm #

      Make sure you installed “imutils” into the “cv” Python virtual environment:

      $ workon cv
      $ pip install imutils

      • Daoud Ghannam April 8, 2018 at 12:53 pm #

        I already did this before, but it didn't work :/

        • Daoud Ghannam April 8, 2018 at 12:56 pm #

          (cv) pi@raspberrypi:~ $ sudo pip install imutils
          Requirement already satisfied: imutils in /usr/local/lib/python3.5/dist-packages

          • Adrian Rosebrock April 10, 2018 at 12:23 pm #

            Leave off the “sudo”.

  43. Rafael April 18, 2018 at 4:23 pm #

    Can I use this library to connect to an RTSP stream from an IP camera?

    • Adrian Rosebrock April 20, 2018 at 10:11 am #

      Yes, update the src of the VideoStream to point to your RTSP stream. That should work.
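      A sketch of what that looks like (the URL below is a placeholder; substitute your camera's actual RTSP endpoint and credentials):

```python
# Sketch: pointing VideoStream at an RTSP IP camera. The URL used in the
# example call is a placeholder, not a real endpoint.

def is_rtsp_url(src):
    """VideoStream passes a string src straight to cv2.VideoCapture; a
    quick sanity check that the source looks like an RTSP endpoint."""
    return isinstance(src, str) and src.startswith("rtsp://")

def view_stream(url):
    import cv2
    from imutils.video import VideoStream

    vs = VideoStream(src=url).start()
    while True:
        frame = vs.read()
        if frame is None:        # stream dropped or not connected yet
            continue
        cv2.imshow("IP camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    vs.stop()

# Example (placeholder URL): view_stream("rtsp://user:pass@192.168.1.64:554/stream1")
```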

  44. Ebrahim April 24, 2018 at 4:03 pm #

    Hello Adrian
    I don't understand this line, and my program has a problem with it.

    ap.add_argument(“-p”, “–picamera”, type=int, default=-1,
    help=”whether or not the Raspberry Pi camera should be used”)

    usage: q.py [-h] -p PROTOTXT -m MODEL [-c CONFIDENCE]
    q.py: error: the following arguments are required: -p/–prototxt, -m/–model

    please help me

    • Adrian Rosebrock April 25, 2018 at 5:29 am #

      If you are new to command line arguments and how to use them, that’s okay, but make sure you read up on them before continuing.

  45. Gianni May 28, 2018 at 2:17 am #

    Hi Adrian,
    This is a very good tutorial; it fixed a lot of my problems when doing real-time tracking.
    I have a question related to your tutorial series.

    I have an issue with naming the cameras: I have 2 cameras, one pointing right and another pointing left. I define in the program which camera is which.

    The index of VideoCapture always changes when I restart my embedded system, so the program always mixes up left and right.

    I tried to find out online how to define the index and how to relate it to my COM port, but I can't find anything useful.

    Because the images from the 2 cameras are similar, I can't use image processing to distinguish between the two; the only thing I can use is the COM port.
    Do you know how to relate the COM port to the OpenCV video capture index?

    Many Many thanks, and please keep it up !!!!!!!!!!!!!!!

    • Gianni May 28, 2018 at 2:31 am #

      So my question, simplified, is: can we define the camera index in VideoCapture by COM port (USB)?

      • Adrian Rosebrock May 28, 2018 at 9:29 am #

        Hi Gianni, I’m glad you found the code useful! However, I’m not sure why your cameras may be changing indexes. I did a quick Google search for “opencv videocapture index changes” and it appears that others are encountering this issue as well. I read a few of the answers and it seems like it may be an OS issue, not OpenCV itself. I’m sorry I don’t have the answer to the question, but rest assured, you’re not the only OpenCV user with the problem. If you find out what the problem was, please come back and let us know.

  46. Prajesh Sanghvi July 30, 2018 at 8:27 am #

    Thanks a million Adrian to you and your team!!, you guys are amazing!

    • Adrian Rosebrock July 31, 2018 at 9:49 am #

      Thanks Prajesh 🙂

  47. LONG ZHAI July 31, 2018 at 6:16 pm #

    If I use a USB web camera, PiVideoStream throws an error.

    • Adrian Rosebrock August 2, 2018 at 9:39 am #

      Could you share your exact error message?

  48. abdul October 15, 2018 at 3:31 am #

    import cv2 is not working (no module found).

  49. Fulvio Mascara December 10, 2018 at 3:10 pm #

    Hi Adrian,

    First of all, congratulations on the remarkable work in your blog, opening our minds to the computer vision world with simplicity and easy explanations.

    I’m building a facial recognition with Raspberry Pi and I have a doubt about improving FPS:

    Using a picamera, do I need to implement the code from your second article to improve the FPS (opening threads for I/O), or do I just need to call PiVideoStream from your imutils lib instead of OpenCV's VideoCapture?

    Thanks in advance.

    Best Regards,

    • Adrian Rosebrock December 11, 2018 at 12:41 pm #

      You can just use the “VideoStream” class — that class will automatically call “PiVideoStream” and will use threads under the hood.

  50. Macoy December 27, 2018 at 6:10 am #

    Hi! Is it possible to run this code using a USB camera? And if so, how could I implement it?

    Help is much appreciated! Thank you!

    • Adrian Rosebrock December 27, 2018 at 10:04 am #

      All code in this blog post is compatible with both USB cameras and the Raspberry Pi camera module. Are you running into an issue with the code?

      • Macoy December 27, 2018 at 10:44 pm #

        vs = VideoStream(usePiCamera=args[“picamera”] > 0).start()

        Is this line necessary if I will use USB camera?

        • Adrian Rosebrock January 2, 2019 at 9:43 am #

          That line checks whether or not the Raspberry Pi camera is to be used. The flag is set via command line argument. You don’t need to modify the code to use a USB camera; just pass --picamera 0 (or omit the flag, since it defaults to -1) when executing the script.
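          For reference, a condensed sketch of how the flag drives camera selection (argument names follow the snippet from the post; the wrapper function is added here just for illustration):

```python
# Condensed sketch of how the --picamera flag selects the camera: any
# value > 0 picks the Pi camera module; the default of -1 (or an explicit
# 0) falls through to the USB/built-in camera.
import argparse

def use_picamera(argv):
    ap = argparse.ArgumentParser()
    ap.add_argument("-p", "--picamera", type=int, default=-1,
                    help="whether or not the Raspberry Pi camera should be used")
    args = vars(ap.parse_args(argv))
    return args["picamera"] > 0

def open_stream(argv):
    from imutils.video import VideoStream
    return VideoStream(usePiCamera=use_picamera(argv)).start()

# e.g.: python videostream_demo.py --picamera 1   (Pi camera module)
#       python videostream_demo.py                (USB/built-in camera)
```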

  51. Dayle December 27, 2018 at 10:58 pm #

    Hi Adrian,

    I can’t say thank you enough for all the work you do and highly recommend your books to anyone diving in to computer vision/deep learning.

    I find VideoStream works brilliantly on a Raspberry Pi for detecting and tracking objects with a low resolution video source. Based on movement of objects detected in my low resolution video stream, I want to grab images from a higher resolution video stream for identification with a CNN.

    My problem is I find OpenCV consumes a lot of processing power grabbing and decoding every high resolution video frame it receives rather than passively waiting to grab and decode just the one frame you want.

    Any thoughts on addressing this problem?

    Thanks Again

    • Adrian Rosebrock January 2, 2019 at 9:42 am #

      I’m glad you’re finding the VideoStream class helpful, Dayle!

      As far as decoding the high resolution frame have you tried explicitly setting the resolution via:

      vs = VideoStream(usePiCamera=True, resolution=(320, 240))

  52. Dayle February 6, 2019 at 3:32 pm #

    Hi Adrian,

    Thanks for replying so quickly. I got sidetracked down another wormhole and can finally get back to my original problem: this one.

    I use VideoStream and set the resolution in the URL. For example the low resolution video feed I do motion detection on is
    rtspurl = “rtsp://×180&fps=30”

    vs = VideoStream(src=rtspurl, usePiCamera=False).start()
    image = vs.read()

    That works brilliantly.

    My problem is I also want to take the occasional high-resolution 1280×720 image snapshot from another video feed from the same camera and then identify the objects in it using the CNN approaches you have been teaching.

    The issue, as I understand it, is that with OpenCV, or your implementation of it inside VideoStream, you cannot simply sample an arbitrary image frame on demand. Instead you must read each video frame in sequence, discard every frame, and copy the odd one you actually want. In my case, around 25% of my Pi’s CPU is being used just to read high-res video images and immediately discard them. Sadly I can’t computationally afford that.

    I’m wondering if there is a processor-efficient way I can sample a single image frame from an RTSP source. I’m also working with gstreamer and ffmpeg, if that is of any help.

    Once again,


    • Adrian Rosebrock February 7, 2019 at 7:00 am #

      I’m not sure what you mean by “sampling an arbitrary image frame on demand”. The VideoStream class is threaded and will constantly keep fetching frames from the camera. You then read the frame from the main “while” loop of your code. You’ll need to insert logic to handle saving any frames — once “vs.read” is called then a new frame is grabbed from the VideoStream class.

  53. Dayle February 8, 2019 at 1:14 pm #


    My understanding is that VideoStream() uses VideoCapture.read() to ingest every video frame and convert it to an image, which is then accessed via VideoStream.read(). That is great when you need to analyze every video frame, but computationally inefficient if you only need to access the occasional frame.

    My solution is to ingest every video frame using VideoCapture.grab() and only convert those frames I want to an image using VideoCapture.retrieve().

    For 1280×720 H.264 video at 15 fps, CPU usage on a RPi3 for waiting to capture an image goes from 19% down to 11%, while a similar 640×360 video improves from 6% down to 3%.

    Does this make sense as an improvement to VideoStream()?
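    For concreteness, a sketch of what I mean (the RTSP URL and the every-15th-frame policy are placeholders):

```python
# Sketch of the grab/retrieve split: cap.grab() ingests a frame without
# decoding it; cap.retrieve() decodes only the frames you actually want.

def should_retrieve(frame_index, every_nth=15):
    """Decode only every Nth grabbed frame."""
    return frame_index % every_nth == 0

def sample_stream(url, every_nth=15):
    import cv2

    cap = cv2.VideoCapture(url)
    i = 0
    while True:
        if not cap.grab():               # cheap: no decode, no NumPy copy
            break                        # stream ended or dropped
        if should_retrieve(i, every_nth):
            ok, frame = cap.retrieve()   # decode just this frame
            if ok:
                pass                     # run the CNN / save the snapshot here
        i += 1
    cap.release()

# e.g.: sample_stream("rtsp://camera/hires", every_nth=15)  # ~1 decode/sec at 15 fps
```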

    • Adrian Rosebrock February 14, 2019 at 2:00 pm #

      VideoCapture will grab each and every frame from the stream and will block execution until a new frame is ready. VideoStream runs in a thread behind the scenes, always grabbing the most recent frame and always having it ready for you. It’s a non-blocking operation.

  54. Joy September 25, 2019 at 4:25 am #

    Hi Adrian,
    Thanks for the tutorial, it helped a lot.
    But I had a problem while adding some codes.
    I want to capture a frame, save it, and send it by email while running OpenVINO using an NCS, but I have an error where it doesn't save any data (jpg). When I searched for solutions, I read that cv2.VideoCapture(0) does not work with the PiCamera.
    Could you help me use VideoCapture with the PiCamera instead of using “if usePiCamera”?


    • Adrian Rosebrock September 25, 2019 at 10:31 am #

      The cv2.VideoCapture function will work with the RPi camera module if you have the V4L2 drivers installed. Otherwise, use VideoStream(usePiCamera=True) to access your RPi camera module.

  55. Ityav January 8, 2020 at 4:54 pm #

    Python 2.7 is gone now.
    How can I modify this code to run in Python 3.x?

    • Adrian Rosebrock January 16, 2020 at 10:43 am #

      This code is compatible with Python 3.


  1. Multiple cameras with the Raspberry Pi and OpenCV - PyImageSearch - January 21, 2016

    […] A Raspberry Pi camera module + camera housing (optional). We can interface with the camera using the picamera  Python package or (preferably) the threaded VideoStream  class defined in a previous blog post. […]
