Unifying picamera and cv2.VideoCapture into a single class with OpenCV


Over the past two weeks on the PyImageSearch blog, we have discussed how to use threading to increase our FPS processing rate on both built-in/USB webcams and the Raspberry Pi camera module.

By utilizing threading, we learned that we can substantially reduce the effects of I/O latency, leaving the main thread to run without being blocked as it waits for I/O operations to complete (i.e., the reading of the most recent frame from the camera sensor).

Using this threading model, we can dramatically increase our frame processing rate by upwards of 200%.

While this increase in FPS processing rate is fantastic, there is still a (somewhat unrelated) problem that has been bothering me for quite a while.

You see, there are many times on the PyImageSearch blog where I write posts that are intended for use with a built-in or USB webcam, such as:

All of these posts rely on the cv2.VideoCapture method.

However, this reliance on cv2.VideoCapture becomes a problem if you want to use the code on your Raspberry Pi. Provided that you are not using a USB camera with the Pi and are in fact using the picamera module, you’ll need to modify the code to be compatible with picamera, as discussed in the accessing the Raspberry Pi Camera with Python and OpenCV post.

While there are only a few required changes to the code (i.e., instantiating the PiCamera class and swapping out the frame read loop), it can still be troublesome, especially if you are just getting started with Python and OpenCV.

Conversely, there are other posts on the PyImageSearch blog which use the picamera module instead of cv2.VideoCapture. A great example of such a post is home surveillance and motion detection with the Raspberry Pi, Python, OpenCV and Dropbox. If you do not own a Raspberry Pi (or want to use a built-in or USB webcam instead of the Raspberry Pi camera module), you would again have to swap out a few lines of code.

Thus, the goal of this post is to construct a unified interface to both picamera and cv2.VideoCapture using only a single class named VideoStream. This class will call either WebcamVideoStream or PiVideoStream based on the arguments supplied to the constructor.

Most importantly, our implementation of the VideoStream class will allow future video processing posts on the PyImageSearch blog to run on either a built-in webcam, a USB camera, or the Raspberry Pi camera module — all without changing a single line of code!

Read on to find out more.


Unifying picamera and cv2.VideoCapture into a single class with OpenCV

If you recall from two weeks ago, we have already defined our threaded WebcamVideoStream class for built-in/USB webcam access. And last week we defined the PiVideoStream class for use with the Raspberry Pi camera module and the picamera Python package.

Today we are going to unify these two classes into a single class named VideoStream.

Depending on the parameters supplied to the VideoStream constructor, the appropriate video stream class (either for the USB camera or picamera module) will be instantiated. This implementation of VideoStream will allow us to use the same set of code for all future video processing examples on the PyImageSearch blog.

Readers such as yourself will only need to supply a single command line argument (or JSON configuration, etc.) to indicate whether to use a USB camera or the Raspberry Pi camera module — the code itself will not have to change one bit!

As I’ve mentioned in the previous two blog posts in this series, the functionality detailed here is already implemented inside the imutils package.

If you do not have imutils already installed on your system, just use pip to install it:
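The command listing itself was dropped from this copy of the post, but it is quoted verbatim in the comments below:

```shell
$ pip install imutils
```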

Otherwise, you can upgrade to the latest version using:
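This upgrade command was also dropped here; the same command appears verbatim later in the comments:

```shell
$ pip install --upgrade imutils
```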

Let’s go ahead and get started by defining the VideoStream class:
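The original code listing is missing from this copy of the post. Below is a sketch consistent with the walkthrough that follows (the import on Line 2, the constructor on Line 5, the conditional picamera import on Lines 8-18, and the webcam fallback on Lines 22 and 23), modeled on the VideoStream class shipped in the imutils package — treat it as a reconstruction rather than the author's exact listing:

```python
# import the necessary packages
from webcamvideostream import WebcamVideoStream

class VideoStream:
	def __init__(self, src=0, usePiCamera=False, resolution=(320, 240),
		framerate=32):
		# check to see if the picamera module should be used
		if usePiCamera:
			# only import the picamera packages if we are explicitly
			# told to do so -- this removes the picamera dependency
			# for users on laptops/desktops
			from pivideostream import PiVideoStream

			# initialize the picamera video stream with the supplied
			# resolution and target framerate
			self.stream = PiVideoStream(resolution=resolution,
				framerate=framerate)

		# otherwise, we are using OpenCV, so initialize the
		# webcam video stream
		else:
			self.stream = WebcamVideoStream(src=src)
```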

On Line 2 we import our WebcamVideoStream class that we use for accessing built-in/USB web cameras.

Line 5 defines the constructor to our VideoStream. The src keyword argument is only for the cv2.VideoCapture function (abstracted away by the WebcamVideoStream class), while usePiCamera, resolution, and framerate are for the picamera module.

We want to take special care to not make any assumptions about the type of hardware or the Python packages installed by the end user. If a user is programming on a laptop or a desktop, then it’s extremely unlikely that they will have the picamera module installed.

Thus, we’ll only import the PiVideoStream class (which then imports dependencies from picamera) if the usePiCamera boolean indicator is explicitly set (Lines 8-18).

Otherwise, we’ll simply instantiate the WebcamVideoStream (Lines 22 and 23), which requires no dependencies other than a working OpenCV installation.

Let’s define the remainder of the VideoStream class:
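This listing is also missing here. These pass-through methods continue the VideoStream class body and simply delegate to whichever stream the constructor instantiated — again a sketch based on the imutils implementation:

```python
	def start(self):
		# start the threaded video stream
		return self.stream.start()

	def update(self):
		# grab the next frame from the stream
		self.stream.update()

	def read(self):
		# return the most recently read frame
		return self.stream.read()

	def stop(self):
		# stop the thread and release any resources
		self.stream.stop()
```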

As we can see, the start, update, read, and stop methods simply call the corresponding methods of the stream that was instantiated in the constructor.

Now that we have defined the VideoStream class, let’s put it to work in our videostream_demo.py driver script:
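The driver script listing did not survive extraction. The following sketch matches the description in the next paragraphs (imports on Lines 2-7, argument parsing on Lines 10-13, stream initialization on Lines 16 and 17); the `-p` short flag is an assumption, but the `--picamera` switch and its default behavior are stated in the text:

```python
# import the necessary packages
from imutils.video import VideoStream
import datetime
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--picamera", type=int, default=-1,
	help="whether or not the Raspberry Pi camera should be used")
args = vars(ap.parse_args())

# initialize the video stream and allow the camera sensor to warmup
vs = VideoStream(usePiCamera=args["picamera"] > 0).start()
time.sleep(2.0)
```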

We start off by importing our required Python packages (Lines 2-7) and parsing our command line arguments (Lines 10-13). We only need a single switch here, --picamera , which is used to indicate whether the Raspberry Pi camera module or the built-in/USB webcam should be used. We’ll default to the built-in/USB webcam.

Lines 16 and 17 instantiate our VideoStream  and allow the camera sensor to warmup.

At this point, all the hard work is done! We simply need to start looping over frames from the camera sensor:
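The frame loop listing is missing as well. A sketch consistent with the walkthrough that follows (the infinite loop on Line 20, the read on Line 23, the resize on Line 24, the timestamp on Lines 27-30, and the display on Lines 33 and 34) — the exact timestamp format string is an assumption:

```python
# loop over the frames from the video stream
while True:
	# grab the frame from the threaded video stream and resize it
	# to have a maximum width of 400 pixels
	frame = vs.read()
	frame = imutils.resize(frame, width=400)

	# draw the current timestamp on the frame
	timestamp = datetime.datetime.now()
	ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")
	cv2.putText(frame, ts, (10, frame.shape[0] - 10),
		cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

	# show the frame and record any keypresses
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```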

On Line 20 we start an infinite loop that continues until we press the q key.

Line 23 calls the read method of VideoStream, which returns the most recently read frame from the stream (again, either a USB webcam stream or the Raspberry Pi camera module).

We then resize the frame (Line 24), draw the current timestamp on it (Lines 27-30), and finally display the frame to our screen (Lines 33 and 34).

This is obviously a trivial example of a video processing pipeline, but keep in mind the goal of this post is to simply demonstrate how we can create a unified interface to both the picamera module and the cv2.VideoCapture function.

Testing out our unified interface

To test out our VideoStream class, I used both my OSX machine (with its built-in camera) and my Raspberry Pi (with both a USB camera and the Raspberry Pi camera module).

To access the built-in camera on my OSX machine, I executed the following command:
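The command itself is missing from this copy; assuming the driver script is named videostream_demo.py as described above, it would be:

```shell
$ python videostream_demo.py
```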

Figure 1: Accessing the built-in camera on my OSX machine with Python and OpenCV.


As you can see, frames are read from my webcam and displayed to my screen.

I then moved over to my Raspberry Pi where I executed the same command to access the USB camera:
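As the text notes, this is the same command used on the OSX machine:

```shell
$ python videostream_demo.py
```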

Followed by this command to read frames from the Raspberry Pi camera module:
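This command listing was dropped here; the `--picamera 1` switch is confirmed a few paragraphs below:

```shell
$ python videostream_demo.py --picamera 1
```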

The results of executing these commands in two separate terminals can be seen below:

Figure 2: Accessing both the Raspberry Pi camera module and a USB camera on my Raspberry Pi using the exact same Python class.


As you can see, the only thing that has changed is the command line argument, where I supply --picamera 1, indicating that I want to use the Raspberry Pi camera module — not a single line of code needed to be modified!

You can see a video demo of both the USB camera and the Raspberry Pi camera module being used simultaneously below:


Summary

This blog post was the third and final installment in our series on increasing FPS processing rate and decreasing I/O latency on both USB cameras and the Raspberry Pi camera module.

We took our implementations of the (threaded) WebcamVideoStream and PiVideoStream classes and unified them into a single VideoStream class, allowing us to seamlessly access either built-in/USB cameras or the Raspberry Pi camera module.

This allows us to construct Python scripts that will run on both laptop/desktop machines and the Raspberry Pi without having to modify a single line of code — provided that we supply some sort of method to indicate which camera we would like to use, of course. This can easily be accomplished using command line arguments, JSON configuration files, etc.

In future blog posts where video processing is performed, I’ll be using the VideoStream class to make the code examples compatible with both your USB camera and the Raspberry Pi camera module — no longer will you have to adjust the code based on your setup!

Anyway, I hope you enjoyed this series of posts. If you found this series format (rather than one-off posts on a specific topic) beneficial, please let me know in the comments thread.

And also consider signing up for the PyImageSearch Newsletter using the form below to be notified when new blog posts are published!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


67 Responses to Unifying picamera and cv2.VideoCapture into a single class with OpenCV

  1. Kenny January 4, 2016 at 10:45 am #

    Awesome stuff, Adrian! Thanks for your continual enthusiasm in sharing your breadth of knowledge in computer vision!

  2. Harvey January 4, 2016 at 2:18 pm #

    I would be interested in what you think of this as a vision platform: https://www.kickstarter.com/projects/pine64/pine-a64-first-15-64-bit-single-board-super-comput

    • Adrian Rosebrock January 4, 2016 at 2:33 pm #

      It seems very similar to the Pi, depending on which model is used. The fact that the processor is faster is nice. But personally, the 64-bit support is what would make me excited. It will be interesting to see how the project evolves.

  3. Remy January 5, 2016 at 10:35 am #

    Phenomenal work Mr. Rosebrock and great tutorial. I went from approx. 12 FPS to 86 FPS (81 FPS displayed) using a crappy ip camera. (trendnet tv-ip572PL).

    • Adrian Rosebrock January 5, 2016 at 1:53 pm #

      Very nice! However, it’s important to keep in mind that you’re likely not getting 86 FPS from the physical camera sensor. Instead, your video processing loop is fast enough to process 86 FPS, hence why I use the term FPS processing rate in the blog post. It’s a subtle, but important nuance to keep in mind. 🙂 In any case, congrats on the improvement!

  4. Mats Önnerby January 11, 2016 at 5:00 pm #

    The latest Raspbian comes with V4L2 drivers preinstalled that make the picamera show up the same way as a webcamera. All you need to do is to add a line bcm-2835-v4l2 to /etc/modules and then reboot. I have tested and the program above works in both modes. I also tested the Real-time barcode detection and it works too, even if it’s a bit slow.

    • Adrian Rosebrock January 12, 2016 at 6:32 am #

      Awesome, thanks for the tip Mats. I didn’t realize Raspbian Jessie came with V4L2 drivers pre-installed, that’s great.

    • patrick January 18, 2016 at 9:25 pm #

      Thanks Mats, V4l2 makes things simpler.
      BTW the line should be bcm2835-v4l2 (no dash after bcm).

      • Rishabh March 13, 2016 at 9:32 am #

        Hi guys, I’m a big newbie at this. Do i write this line in the terminal or in my python code? Thanks!

    • HAJIRA April 13, 2017 at 12:24 pm #

      Mats Önnerby — Could you please provide me the way to install the v4l2 drivers…Need it badly for my project.!

  5. Ark Nieckarz January 12, 2016 at 11:42 am #

    Great tutorials but will this work under Windows?
    Meaning using a built-in camera (like on laptops), USB or some other video input stream under Windows.

    • Adrian Rosebrock January 12, 2016 at 11:58 am #

      Yes, provided that you can access your webcam stream (either USB or otherwise) using the cv2.VideoCapture method, this code should work with Windows.

  6. Bob January 13, 2016 at 8:49 pm #


    Once you define the VideoStream class, you apparently load it from the videostream_demo.py with:

    from imutils.video import VideoStream

    What file is this new class put into or called, and where is it located?

    • Adrian Rosebrock January 14, 2016 at 6:16 am #

      I have already implemented the functionality in imutils, my open-source set of OpenCV convenience functions.

      You can install imutils using pip:

      $ pip install imutils

      And from there you can import the VideoStream class like I do in videostream_demo.py

  7. patrick January 17, 2016 at 7:23 pm #

    Great !! now I see double….Getting closer to stereoscopic stuff Doctor Rosebrock ?

    Always a pleasure going through your tutorials, keep on doing this great work Adrian .

    • Adrian Rosebrock January 18, 2016 at 3:22 pm #

      I honestly have never done any stereoscopic work before, although that is an avenue I would like to explore 🙂

  8. VIJAYA KUMAR February 3, 2016 at 12:01 pm #

    hello adrian i’m vijaya i want to do my B.E project in digital image processing will you please help me in how to configure the opncv and the python in windows……….thanks in advance…………….

    • Adrian Rosebrock February 4, 2016 at 9:17 am #

      Hey Vijaya, congrats on working on your BE project, that’s great. I’m sure you’re excited to graduate. But to be honest, I haven’t used a Windows system in 9+ years and have never setup OpenCV on Windows OS. If you have a question related to Raspbian, Ubuntu, or OSX, I can do my best to point you in the right direction though.

  9. CAO March 4, 2016 at 9:49 am #

    Hi Adrian,

    Do you think this method could work with the DS325 camera from SoftKinetics ?

    • Adrian Rosebrock March 6, 2016 at 9:22 am #

      I personally haven’t used that particular camera before, but from what I understand, it’s a 3D camera. I don’t have much experience with 3D sensors, although it’s something that I hope to explore in future blog posts. In short, I can’t give an honest answer to your question.

  10. Bosten April 2, 2016 at 3:04 pm #


    So I have installed imutils and am trying to run your program on my mac OS X machine with its built in webcam. For some reason I can’t get idle to work with imutils but it does work with the terminal. Also, when I run your program from python in the terminal, no window displaying the feed shows up. What am I doing wrong? I’m a beginner in all this so there is most likely something I’m doing wrong.

    I also have a question about your program. Does it continue displaying the webcam feed? All the other things that I have tried result in a crash in python after about a minute of showing the feed.

    • Adrian Rosebrock April 3, 2016 at 10:25 am #

      If you’re not getting a video to show up and the Python script is automatically exiting, then you should double check that your cameras are properly connected to the Pi. I would also start with this blog post on accessing the Raspberry Pi camera. It will give you a good starting point with less complex code.

  11. Marcus April 18, 2016 at 8:06 am #


    Hey is there a reason why when I get to the line:

    frame = vs.read()

    it tells me vs is not defined?

    • Adrian Rosebrock April 18, 2016 at 4:45 pm #

      It sounds like you may have not downloaded the source code to the blog post and are missing part of the videostream.py implementation. Make sure you use the “Downloads” form at the bottom of this post to grab all the code in the post.

  12. Lukas Vosyka May 5, 2016 at 7:08 pm #

    Hi Adrian,

    just curious – is there a reason you do not use the resolution for the VideoStream class in case of using a USB webcam. I mean the cv2.VideoCature would support it, kind of like this:

    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

    This would make the imutils in this case more versatile, no?


    • Adrian Rosebrock May 6, 2016 at 4:34 pm #

      Great suggestion. Although the problem I’ve ran into with this is that not all webcams obey/use the .set settings. Instead, I leave this to the programmer and user of the imutils library to determine if they want to use this functionality or not.

  13. MP May 7, 2016 at 11:38 am #

    Hi Adrian,
    i am new to python as per your tutorial i have installed the imultils on my raspberry pi and also created the videostream_demo.py and put all the code shown above in one file.
    i guess i am doing wrong here.

    can you guide which file need to be created what code goes in which file.

    one more question can any usb camera will work i have microsoft lifecam vx-1000 which was lying with me.


    • Adrian Rosebrock May 7, 2016 at 12:33 pm #

      If you’re just getting started learning Python, I would use the “Downloads” form on this page to grab the source code to this post. This will demonstrate how you need to structure your files and where to put the code in each file.

      As for your webcam, I have never used the Microsoft Lifecam before. Are you trying to use it on your laptop/desktop or on a Raspberry Pi?

  14. Alexandra May 13, 2016 at 10:23 am #

    Hello Adrian!

    Thanks for all the information you provide! You explain unbelievably well!

    I have a question: do you know if it’s possible to access a GigE 5 Allied Vision camera into Python( it’s an IP camera) for further image processing ?


    • Adrian Rosebrock May 13, 2016 at 11:30 am #

      I personally don’t have any experience with that camera — but I’ll try to do some IP streaming tutorials in the near future.

  15. Andy June 8, 2016 at 9:02 pm #

    How can I replace camera.release() with the VideoStream? Because I think you didn’t define this.

    • Adrian Rosebrock June 9, 2016 at 5:18 pm #

      Great point Andy. You’ll want to call vs.camera.release() before calling cv2.destroyAllWindows.

      • Jeff Ward January 5, 2017 at 11:50 am #

        ‘PiCamera’ has no ‘release()’ method. It does have ‘close()’.

      • Jeff Ward January 7, 2017 at 1:06 am #

        When using WebcamVideoStream, would it be ‘vs.stream.release()’?

        • Adrian Rosebrock January 7, 2017 at 9:23 am #

          Correct, I meant to say vs.stream.release(). Thank you for pointing this out.

  16. Marcelo Aragao July 25, 2016 at 9:54 am #

    Congratulations Adrian! Great blog!
    How do I use other resolution?
    I have changed:
    def __init__(self, src=0, usePiCamera=False, resolution=(1024, 768), framerate=32):
    And comment:
    #frame = imutils.resize(frame, width=400)
    But still 320×240 image resolution

    • Adrian Rosebrock July 27, 2016 at 2:31 pm #

      You can change the resolution when you initialize the object, like this:

      vs = VideoStream(resolution=(1024, 768))

  17. vivek September 24, 2016 at 5:55 am #

    where we create the first code? after installing imutlis what are the steps for making class? i didnt get it where it stores? explain briefly

    • Adrian Rosebrock September 27, 2016 at 8:53 am #

      Open up a text editor (whichever text editor you prefer) and start inserting the code. I would also suggest that you use the “Downloads” section of this blog post to download the source code — that way you will have a copy of the code that is working. From there you should try to code the example yourself.

  18. H December 5, 2016 at 9:34 am #

    Hi Adrian, is it possible to apply this to code that is being used to do multi-scale image template matching? I’ve had an attempt and kept getting this error

    ”The camera is already using port %d ‘ % splitter_port)
    picamera.exc.PiCameraAlreadyRecording: The camera is already using port 0 ‘

    I think the problem is because I’m trying to put every frame into an array so I can grayscale to template match better.. Do you know a way around this?

    • Adrian Rosebrock December 5, 2016 at 1:23 pm #

      It sounds like you might have another script/program that is accessing your Raspberry Pi camera module. If you want to perform template matching with the code in this blog post you’ll need to combine the two scripts together.

  19. Chris January 6, 2017 at 10:01 pm #

    Hi Adrian, thanks for your post and a few of the others I have used! I am working with a remote camera on Raspberry Pi. I was planning on sending the frame back to my mac with Pyro4. At first glance, it seems like it might be tricky to get picamera running on OS X but that Pyro4 is trying to deserialize the object I send from my Pi back to a picamera type.

    I am doing this to avoid doing heavy processing on the Pi. I eventually need to do face recognition on the frame too which I am doing on the frame I get from cv2. How good was your performance on the Pi doing face detection? Is it worthwhile for me to continue this video streaming approach?

    Thanks again!

    • Adrian Rosebrock January 7, 2017 at 9:27 am #

      Are you asking whether the Pi is suitable for face detection or face recognition? Face detection can easily be run on the Pi without a problem. Face recognition on the other hand is substantially slower. You would be lucky to get more than 1-2 FPS for face recognition using basic algorithms.

      I would consider using a message passing library like zeromq and then passing frames with detected faces to a system with more computational power if you intend on using any type of advanced face recognition algorithms (such as OpenFace).

  20. Nick January 7, 2017 at 3:58 pm #

    I just have to say a big thank you Adrian! I’m doing a project with a Pi + OpenCV and I’ve had several problems but your awesome guides have helped me greatly! Thanks again!
    Keep on rocking

    • Adrian Rosebrock January 9, 2017 at 9:15 am #

      Thanks Nick, I’m happy I could help 🙂 Have a great day.

  21. vivek January 12, 2017 at 6:49 am #

    can i get same class for raspberry pi camera with c++ for capturing video and image operations like this one?
    help me on this

    • Adrian Rosebrock January 12, 2017 at 7:53 am #

      I only offer Python + OpenCV code on this blog, not C++. Perhaps another reader can convert this implementation to C++ for you.

  22. vicky February 10, 2017 at 4:10 am #

    hi adrian , I am doing my BE project in image processing and i am new to pi can u hlp me how to detect a shape by using pi camera video stream

    • Adrian Rosebrock February 10, 2017 at 2:00 pm #

      I cover shape detection in this tutorial. You’ll need to utilize the code in this post to access the frames from the Raspberry Pi video stream, then apply the shape detector to each frame. If you’re just getting started with computer vision and OpenCV, I would suggest going through Practical Python and OpenCV.

  23. Biswajit February 13, 2017 at 2:12 am #

    Hi Adrian.what is the type of the frame in frame = vs.read()?Is it a direct image or not ? how can I encode it into base64 or any other string representation of this image ? Thanks in advance.

    • Adrian Rosebrock February 13, 2017 at 1:37 pm #

      The frame itself is a NumPy array with shape (h, w, d) where “h” is the height, “w” is the width, and “d” is the depth. You can use these functions to encode/decode base64.

  24. Christian February 14, 2017 at 4:41 am #

    Hi Adrian,

    great article, thanks for sharing-the code can get more cpu-friendly if you utilize a queue in the VideoStream class. By doing so, synchronization between main- and background-thread gets achieved and cpu load drops from 25 % to 10% on a Raspberry Pi 3.

    from Queue import Queue

    items = Queue()
    def update():
    frame = self.rawCapture ….

    def read():
    return items.get()

    • Adrian Rosebrock February 14, 2017 at 1:21 pm #

      Nice tip Christian. I also cover queueing in this post as well.

  25. Anbu March 21, 2017 at 4:11 am #

    i got the error like

    ImportError: No module named webcamvideostream

    but i have installed numpy(array)

    • Adrian Rosebrock March 21, 2017 at 7:04 am #

      The command would actually be:

      $ pip install "picamera[array]"

      Not “numpy(array)”.

      Also make sure you are running the latest version of imutils:

      $ pip install --upgrade imutils

  26. Reshal March 22, 2017 at 5:15 pm #

    Adrian fantastic work with these Raspberry Pi tutorials. Well done! It is actually amazing with what one can do with the Pi.

    I have a few questions, would it be possible to use this and then stream it to a server? The server would be setup on the Pi. And an html page is created which contains the a url to access this feed. If so how would one do it?

    The aim is to access this image processed feed through a browser on an android, or ios or pc device.

    • Adrian Rosebrock March 23, 2017 at 9:29 am #

      If you want to send the stream directly to another source I would skip OpenCV entirely and use something like gstreamer.

  27. Dustin March 24, 2017 at 10:48 pm #

    Hey Adrian,

    Thank you for posting these tutorials. They have been incredibly helpful for my senior design project. We are creating a robot that tracks swimmers moving in a pool to help monitor their form.

    I am encountering an intermittent issue while using this class with the PiCamera. When I run the demo script the first time I boot, everything works fine. I get this error if I try to close the program and re-run it:

    frame = imutils.resize(frame,width=800)
    File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py”, line 69, in resize
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    This error doesn’t occur when I run the script using a webcam, only with the PiCamera. Do you have any idea why this occurs?

    • Adrian Rosebrock March 25, 2017 at 9:16 am #

      This error could be due to a variety of reasons. To start, have you updated your instantation of VideoStream to access the Raspberry Pi camera module? Can you access your Raspberry Pi camera module via the command line?

  28. Twinkle April 1, 2017 at 3:00 am #

    Thank you for your previous post that helped in increasing the fps of my raspberry pi camera.
    However while running videostream_demo.py, I am getting the following error:

    What could be the solution?

    Thanks in advance.

    • Twinkle April 1, 2017 at 3:14 am #

      I even tried running the program through command line , still same error occurs

    • Adrian Rosebrock April 3, 2017 at 2:13 pm #

      The problem here is that your system is not able to correctly access your webcam/video stream/Raspberry Pi camera. Please see this post on NoneType errors for more information.

  29. Vince July 24, 2017 at 1:51 pm #

    Thank you Adrian for this amazing write up and for the write up on how to install opencv 3 for python. 10/10.

    I got this working, however when I try to display the image with a larger resolution, the performance drops fps wise. If I comment out the resize and setup up the resolution when declaring vs, I get what I want resolution wise, but the rates drop. Any ideas what I am doing wrong? Is opencv just this slow with the pi? If I just use pythons picam module, it can show this resolution at high fps and you mention getting high fps with your methods here.

    Thanks! (Using Pi model 2B with picam )

    • Adrian Rosebrock July 24, 2017 at 3:27 pm #

      The higher the resolution, the more data has to be read from the camera. The more data, the slower the performance. This is simply a side effect of using the picamera module.

      • Vince July 24, 2017 at 4:50 pm #

        While I agree with the statement, more data = slower performance, as I said earlier using just picamera module I get fast fps at high resoultion, but I dont have access to the frame, hence where opencv comes into play.

        picamera script:

        from picamera import PiCamera
        import time

        camera = PiCamera()


        With this script I get full resolution at a fast fps, I would say 30-60.

        When I run the script in your post, when I go to full resolution, the system bogs down. I have to let resolution = (200,200) to get anything acceptable (low lag, decent fps).

        Im just wondering why, if the picam and picamera module are capable of fast performance, why I am not getting that with opencv? Is it my execution? What fps are expected just showing an image within opencv at full picam resoution? Just want to make sure this is what is expected using opencv on the pi or if I missed a setting.

        I tried using a usb cam, with the same results.

        • Adrian Rosebrock July 28, 2017 at 10:16 am #

          To be honest, this is to be expected when writing video processing scripts with OpenCV. The frames have to be read from the camera, converted to a NumPy array, and displayed to your screen. As far as I understand, start_preview does not have to convert to a NumPy array and can instead display the stream directly to your desktop. For more details I would suggest asking the picamera developers.


  1. Multiple cameras with the Raspberry Pi and OpenCV - PyImageSearch - January 21, 2016

    […] A Raspberry Pi camera module + camera housing (optional). We can interface with the camera using the picamera  Python package or (preferably) the threaded VideoStream  class defined in a previous blog post. […]
