Increasing Raspberry Pi FPS with Python and OpenCV


Today is the second post in our three-part series on milking every last bit of performance out of your webcam or Raspberry Pi camera.

Last week we discussed how to:

  1. Increase the FPS rate of our video processing pipeline.
  2. Reduce the effects of I/O latency on standard USB and built-in webcams using threading.

This week we’ll continue to utilize threads to improve the FPS/latency of the Raspberry Pi using both the picamera module and a USB webcam.

As we’ll find out, threading can dramatically decrease our I/O latency, thus substantially increasing the FPS processing rate of our pipeline.


Note: A big thanks to PyImageSearch reader, Sean McLeod, who commented on last week’s post and mentioned that I needed to make the FPS rate and the I/O latency topic more clear.

Increasing Raspberry Pi FPS with Python and OpenCV

In last week’s blog post we learned that by using a dedicated thread (separate from the main thread) to read frames from our camera sensor, we can dramatically increase the FPS processing rate of our pipeline. This speedup is obtained by (1) reducing I/O latency and (2) ensuring the main thread is never blocked, allowing us to grab the most recent frame read by the camera at any moment in time. Since the pipeline never has to wait on I/O, its overall FPS processing rate increases.

In fact, I would argue that it’s even more important to use threading on the Raspberry Pi 2 since resources (i.e., processor and RAM) are substantially more constrained than on modern laptops/desktops.

Again, our goal here is to create a separate thread that is dedicated to polling frames from the Raspberry Pi camera module. By doing this, we can increase the FPS rate of our video processing pipeline by 246%!

In fact, this functionality is already implemented inside the imutils package. To install imutils on your system, just use pip:
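$ pip install imutils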

If you already have imutils installed, you can upgrade to the latest version using this command:
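$ pip install --upgrade imutils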

We’ll be reviewing the source code to the video sub-package of imutils to obtain a better understanding of what’s going on under the hood.

To handle reading threaded frames from the Raspberry Pi camera module, let’s define a Python class named PiVideoStream:
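The listing below is a sketch that mirrors the PiVideoStream implementation in the imutils.video sub-package at the time of writing; the line numbers referenced in the walkthrough that follows assume this exact layout:

# pivideostream.py (sketch; mirrors the imutils.video implementation)
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
    def __init__(self, resolution=(320, 240), framerate=32):
        # initialize the camera and stream
        self.camera = PiCamera()
        self.camera.resolution = resolution
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera, size=resolution)
        self.stream = self.camera.capture_continuous(self.rawCapture,
            format="bgr", use_video_port=True)

        # initialize the frame and the variable used to indicate
        # if the thread should be stopped
        self.frame = None
        self.stopped = False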

Lines 2-5 handle importing our necessary packages. We’ll import both PiCamera  and PiRGBArray  to access the Raspberry Pi camera module. If you do not have the picamera Python module already installed (or have never worked with it before), I would suggest reading this post on accessing the Raspberry Pi camera for a gentle introduction to the topic.

On Line 8 we define the constructor to the PiVideoStream class. We can optionally supply two parameters here: (1) the resolution of the frames being read from the camera stream and (2) the desired frame rate of the camera module. We’ll default these values to (320, 240) and 32, respectively.

Finally, Line 19 initializes the latest frame read from the video stream and a boolean variable used to indicate if the frame reading process should be stopped.

Next up, let’s look at how we can read frames from the Raspberry Pi camera module in a threaded manner:
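Continuing the sketch (numbering follows the full listing above, so the start method begins on Line 22):

    def start(self):
        # spawn the thread that will call the update method
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # keep looping infinitely until the thread is stopped
        for f in self.stream:
            # grab the frame from the stream and clear the stream in
            # preparation for the next frame
            self.frame = f.array
            self.rawCapture.truncate(0)

            # if the thread indicator variable is set, stop the thread
            # and release camera resources
            if self.stopped:
                self.stream.close()
                self.rawCapture.close()
                self.camera.close()
                return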

Lines 22-25 define the start method, which simply spawns a thread that calls the update method.

The update method (Lines 27-41) continuously polls the Raspberry Pi camera module, grabs the most recent frame from the video stream, and stores it in the frame variable. Again, it’s important to note that this thread is separate from our main Python script.

Finally, if we need to stop the thread, Lines 38-40 handle releasing any camera resources.

Note: If you are unfamiliar with using the Raspberry Pi camera and the picamera module, I highly suggest that you read this tutorial before continuing.

Finally, let’s define two more methods used in the PiVideoStream class:
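A sketch of those two methods:

    def read(self):
        # return the frame most recently read
        return self.frame

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True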

The read method simply returns the most recently read frame from the camera sensor to the calling function. The stop method sets the stopped boolean to indicate that the camera resources should be cleaned up and the camera polling thread stopped.

Now that the PiVideoStream class is defined, let’s create the picamera_fps_demo.py driver script:
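Here is a sketch of the top of the driver script (the line numbers cited in the next few paragraphs assume this layout):

# picamera_fps_demo.py (sketch)
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
    help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
    help="whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
    use_video_port=True)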

Lines 2-10 handle importing our necessary packages. We’ll import the FPS class from last week so we can approximate the FPS rate of our video processing pipeline.
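As a refresher, here is a sketch of that FPS class (it lives in the imutils.video sub-package and simply counts calls to update() between start() and stop()):

# fps.py (sketch; mirrors the imutils.video implementation)
import datetime

class FPS:
    def __init__(self):
        # store the start time, end time, and total number of frames
        # examined between the start and end intervals
        self._start = None
        self._end = None
        self._numFrames = 0

    def start(self):
        # start the timer
        self._start = datetime.datetime.now()
        return self

    def stop(self):
        # stop the timer
        self._end = datetime.datetime.now()

    def update(self):
        # increment the total number of frames examined
        self._numFrames += 1

    def elapsed(self):
        # return the total number of seconds between the start and
        # end interval
        return (self._end - self._start).total_seconds()

    def fps(self):
        # compute the (approximate) frames per second
        return self._numFrames / self.elapsed()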

From there, Lines 13-18 handle parsing our command line arguments. We only need two optional switches here: --num-frames, which is the number of frames we’ll use to approximate the FPS of our pipeline, followed by --display, which is used to indicate if the frames read from our Raspberry Pi camera should be displayed to our screen or not.

Finally, Lines 21-26 handle initializing the Raspberry Pi camera stream — see this post for more information.

Now we are ready to obtain results for a non-threaded approach:
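A sketch of the non-threaded benchmark (numbering continues from the listing above):

# allow the camera to warmup and start the FPS counter
print("[INFO] sampling frames from `picamera` module...")
time.sleep(2.0)
fps = FPS().start()

# loop over some frames
for (i, f) in enumerate(stream):
    # grab the frame from the stream and resize it to have a maximum
    # width of 400 pixels
    frame = f.array
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame and update
    # the FPS counter
    rawCapture.truncate(0)
    fps.update()

    # check to see if the desired number of frames have been reached
    if i == args["num_frames"]:
        break

# stop the timer and display FPS information
fps.stop()
print("[INFO] elasped time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
stream.close()
rawCapture.close()
camera.close()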

Line 31 starts the FPS counter, allowing us to approximate the number of frames our pipeline can process in a single second.

We then start looping over frames read from the Raspberry Pi camera module on Line 34.

Lines 41-43 make a check to see if the frame should be displayed to our screen or not, while Line 48 updates the FPS counter.

Finally, Lines 61-63 handle releasing any camera resources.

The code for accessing the Raspberry Pi camera in a threaded manner follows below:
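Again as a sketch, with numbering continuing so that the threaded stream is initialized on Line 68:

# create a *threaded* video stream, allow the camera sensor to warmup,
# and start the FPS counter
print("[INFO] sampling THREADED frames from `picamera` module...")
vs = PiVideoStream().start()
time.sleep(2.0)
fps = FPS().start()

# loop over some frames...this time using the threaded stream
while fps._numFrames < args["num_frames"]:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elasped time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()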

This code is very similar to the code block above, only this time we initialize and start the threaded PiVideoStream class on Line 68.

We then loop over the same number of frames as with the non-threaded approach, update the FPS counter, and finally print our results to the terminal on Lines 89 and 90.

Raspberry Pi FPS Threading Results

In this section we will review the results of using threading to increase the FPS processing rate of our pipeline by reducing the effects of I/O latency.

The results for this post were gathered on a Raspberry Pi 2:

  • Using the picamera module.
  • And a Logitech C920 camera (which is plug-and-play capable with the Raspberry Pi).

I also gathered results using the Raspberry Pi Zero. Since the Pi Zero does not have a CSI port (and thus cannot use the Raspberry Pi camera module), timings were only gathered for the Logitech USB camera.

I used the following command to gather results for the picamera module on the Raspberry Pi 2:
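Given the argument defaults in the sketch above (--display defaults to -1, i.e. off), the bare invocation benchmarks the pipeline without rendering frames:

$ python picamera_fps_demo.py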

Figure 1: Increasing the FPS processing rate of the Raspberry Pi 2.

As we can see from the screenshot above, using no threading obtained 15.46 FPS.

However, by using threading, our FPS rose to 226.67, an increase of over 1,366%!

But before we get too excited, keep in mind that this is not a true representation of the FPS of the Raspberry Pi camera module — we are certainly not reading a total of 226 frames from the camera module per second. Instead, this speedup simply demonstrates that our for loop pipeline is able to process 226 frames per second.

This increase in FPS processing rate comes from decreased I/O latency. By placing the I/O in a separate thread, our main thread runs extremely fast — faster than the I/O thread is capable of polling frames from the camera, in fact. This implies that we are actually processing the same frame multiple times.

Again, what we are actually measuring is the number of frames our video processing pipeline can process in a single second, regardless of whether the frames are “new” frames returned from the camera sensor or not.

Using the current threaded scheme, we can process approximately 226.67 FPS using our trivial pipeline. This FPS number will go down as our video processing pipeline becomes more complex.

To demonstrate this, let’s insert a cv2.imshow call and display each of the frames read from the camera sensor to our screen. The cv2.imshow function is another form of I/O, only now we are both reading a frame from the stream and then writing the frame to our display:
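The only change to the command is the --display 1 switch:

$ python picamera_fps_demo.py --display 1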

Figure 2: Reducing the I/O latency and improving the FPS processing rate of our pipeline using Python and OpenCV.

Using no threading, we reached only 14.97 FPS.

But by placing the frame I/O into a separate thread, we reached 51.83 FPS, an improvement of 246%!

It’s also worth noting that the Raspberry Pi camera module itself can reportedly get up to 90 FPS.

To summarize the results: by placing the blocking I/O call in our main thread, we obtained only a very low 14.97 FPS. But by moving the I/O to an entirely separate thread, our FPS processing rate increased (by decreasing the effects of I/O latency), bringing the FPS rate up to an estimated 51.83.

Simply put: when you are developing Python scripts on the Raspberry Pi 2 using the picamera module, move your frame reading to a separate thread to speed up your video processing pipeline.

As a matter of completeness, I’ve also run the same experiments from last week using the fps_demo.py script (see last week’s post for a review of the code) to gather FPS results from a USB camera on the Raspberry Pi 2:
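Assuming fps_demo.py exposes the same --display switch as the picamera driver above, the invocation looks like this:

$ python fps_demo.py --display 1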

Figure 3: Obtaining a 36.09 FPS processing rate using a USB camera and a Raspberry Pi 2.

With no threading, our pipeline obtained 22 FPS. But by introducing threading, we reached 36.09 FPS — an improvement of 64%!

Finally, I also ran the fps_demo.py  script on the Raspberry Pi Zero as well:

Figure 4: Since the Raspberry Pi Zero is a single core/single threaded machine, the FPS processing rate improvements are very small.

With no threading, we hit 6.62 FPS. And with threading, we only marginally improved to 6.90 FPS, an increase of only 4%.

The reason for the small performance gain is simply that the Raspberry Pi Zero processor has only one core and one thread, so all processes running on the system must share a single thread of execution at any given time.

Given the quad-core processor of the Raspberry Pi 2, suffice it to say that the Pi 2 should be used for video processing.

Summary

In this post we learned how threading can be used to increase our FPS processing rate and reduce the effects of I/O latency on the Raspberry Pi.

Using threading allowed us to increase our video processing rate by a nice 246%; however, it’s important to note that as the processing pipeline becomes more complex, the FPS processing rate will go down as well.

In next week’s post, we’ll create a Python class that incorporates last week’s WebcamVideoStream and today’s PiVideoStream into a single class, allowing new video processing blog posts on PyImageSearch to run on either a USB camera or a Raspberry Pi camera module without changing a single line of code!

Sign up for the PyImageSearch newsletter using the form below to be notified when the post goes live.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


119 Responses to Increasing Raspberry Pi FPS with Python and OpenCV

  1. Tony Waite December 28, 2015 at 2:20 pm #

    Great post!
    The code ran ‘straight out of the box’ for me, albeit I had to run it from the ‘standard’ prompt rather than the ‘cv2’ wrapper.
    Your tutorials are fantastically helpful!
    Do you have any plans to incorporate a QR reader?

    • Adrian Rosebrock December 29, 2015 at 8:00 am #

      Thanks Tony! At the present time I don’t have any plans to do any tutorials for a QR reader, although that’s something I would like to explore in the future. In the meantime, you should take a look at the zbar library.

  2. Rudolph December 28, 2015 at 3:57 pm #

    Hi Adrian,

    I absolutely love your blogs. I like how your more recent posts show how to use Python more efficiently, or use Python in a better way to complement the use of CV. Also, some of the things you do in your example code are applicable not only to image processing, but to other problem domains as well.

    Keep up the good work.

    • Adrian Rosebrock December 29, 2015 at 7:59 am #

      Thanks Rudolph! 🙂

  3. Tonyv December 29, 2015 at 11:45 am #

    Thanks Adrian, again, for all your very informative blogs. It is a great improvement over older blogs that the comments now match actual line numbers; thanks!

    So, now I have a problem which I’m unable to fathom:
    I’m running my pi2 headless, but with a HDMI display attached, over a ssh -X link. I can see the camera from this blog image on my monitor, which is in itself a miracle!

    However, presumably because of the ssh latency , I’m only getting:

    #####
    tony@pibot0:~/work/picamera$ python picamera_fps_demo.py --display 1
    [INFO] sampling frames from picamera module…
    [INFO] elasped time: 6.97
    [INFO] approx. FPS: 14.48
    [INFO] sampling THREADED frames from picamera module…
    [INFO] elasped time: 4.22
    [INFO] approx. FPS: 23.72
    #####

    A significant improvement, on which I’d like to do some further investigation, but for now I’d like the camera image displayed on the HDMI screen as well as the monitor. I can’t seem to figure out how to make that happen.

    Can you help, please?

    PS, the typo “elasped time” comes straight from your code 🙂

    • Adrian Rosebrock December 30, 2015 at 7:05 am #

      You are correct, the X11 forwarding is what is causing the slow down.

      As for your question, I’m not sure I understand. You want to display an image on your HDMI screen (that is physically connected to your Pi) along with the same image to the system you are using to SSH into the Pi?

      Unfortunately, I don’t think that’s an easy thing to do. You’ll likely need to create a producer/consumer setup where your first Python script reads frames from the camera device (i.e., the producers) and then other Python scripts (i.e., consumers) are able to take the frame and display it.

      • tonyv December 30, 2015 at 9:57 am #

        Thanks for your prompt reply, Adrian.

        Perhaps I’m envisaging too many steps at a time. I guess initially I’m really wanting to display the image on the attached HDMI display, instead of the X11 display.

        I then want to do some processing, such as shape recognition, as per your other tutorials, and display the results on the remote monitor.

        I had envisaged perhaps running a third thread, which extracts the current (or last) frame from the real-time HDMI display, and sends that over the X11 link, while allowing the data acquisition and processing to continue at full speed.

        Does that make sense?

        • Adrian Rosebrock December 30, 2015 at 2:48 pm #

          That does indeed make sense. In either case, I think the best way to accomplish this is with a messaging passing library such as RabbitMQ or pyzmq.

  4. Christoph Viehoff December 31, 2015 at 9:27 pm #

    Got the code and that wasn’t it. Also found all the files from imutils in directory
    /home/pi/.virtualenvs/cv3/lib/python3.2/site-packages/imutils/video/

    fps.py is there
    plus the __init__.py file:

    Not sure where else to look

    • Adrian Rosebrock January 1, 2016 at 7:29 am #

      Hey Christoph — make sure you have downloaded and installed the latest version of imutils:

      $ pip install --upgrade imutils --no-cache-dir

      This will ensure that the latest version is installed and that a cached version is not used.

      You can also find the imutils code on GitHub.

      • Christoph Viehoff January 4, 2016 at 1:26 pm #

        Did this but it is already installed

        • Adrian Rosebrock January 4, 2016 at 2:35 pm #

          Can you list the contents of the site-packages directory of your cv3 virtual environment?

          $ ls -l ~/.virtualenvs/cv/lib/python2.7/site-packages

          And see which version of imutils is installed? It should be v0.3.3.

  5. Nick December 31, 2015 at 10:19 pm #

    Adrian, I’m curious. I’m running this threading exercise with some boilerplate image processing (for motion) from earlier on in your posts and am only seeing rather marginal improvements on the RPi2. From PiCamera module without threading, I see ~10FPS and with threading I’m getting ~14FPS. Is this likely due to imshow?

    What I’m really curious about is piping this to a website of sorts so it can actually be apart of a responsive security system.

    Thanks, and I’ve probably spent about 10hrs on your website over the past week learning all this! Super intuitive the way you write it up, and I can’t thank you enough for commenting all your code, as a self-taught developer, the comments and tutorials you provide are invaluable.

    • Adrian Rosebrock January 1, 2016 at 7:26 am #

      Is your boilerplate pipeline doing anything other than calling cv2.imshow? It’s important to keep in mind that the more processing you do inside the loop, the longer each loop will take, and thus your overall FPS processing rate will drop.

      If you are doing more than just using cv2.imshow, I would remove it and then re-run your script. You should see another improvement.

      And thanks for the kind words regarding the blog, I’m happy you have found it so useful! 🙂

  6. foxmjay January 1, 2016 at 11:14 am #

    I struggled for a long time to get OpenCV compiled on the RPi to use it with the RaspiCam C++ library and got around 24 FPS, which is quite good. And now you are showing we can get over 50 FPS with simple threading, and with my favorite programming language :D. This is amazing. Thanks a lot for sharing it with us.

  7. Ben January 11, 2016 at 7:08 pm #

    Nice post. I like how you do different experiments with the Raspberry Pi and the camera. It reminds me of the time when I was playing with OpenCV and the Model B a few years ago. One thing I don’t understand is how you were only getting around 15 FPS without multithreading. The only processing you were doing was resizing 320×240 frames. In part one you were using a 1080p webcam and you could get around 30 FPS with the OpenCV library. Is the picamera library extremely slow, or am I missing something important? Because when I was doing heavy processing (resizing, filtering, grayscale, flip, face detection, …) on similar sized frames with the Model B I could achieve around 10 FPS. I just ordered a Raspberry Pi 2, so I was expecting better performance.

    • Adrian Rosebrock January 12, 2016 at 6:30 am #

      The picamera library is implemented in Python, so yes, it can be a bit slower than cv2.VideoCapture, which is implemented in C++ with Python bindings supplied. But also keep in mind that the Pi 2 only has a 900MHz processor. Yes, it’s quad-core and multi-threaded, but until you write code to actually take advantage of multiple threads, you won’t be able to utilize the benefits of the processor. When you run tests like this, you quickly realize how much performance is lost to I/O latency.

  8. Hunt January 14, 2016 at 2:29 pm #

    This blog is wonderful and exactly what I need for my current project. However, I noted that when running your fps_demo.py as-is with the same Logitech C920, I received significantly fewer frames than you show in your screenshot. I’m running a Raspberry Pi 2 and the script is reporting around 9 FPS single-threaded and 13 FPS multi-threaded.

    I followed your tutorial for setting up OpenCV with Python 2.7, but aside from that don’t have much else on the Pi so far.

    Could there be something I’m doing wrong/havent setup that could account for this discrepancy?

    • Adrian Rosebrock January 16, 2016 at 9:38 am #

      Hey Hunt, I’m honestly not sure why your numbers do not match mine exactly. Do you know if you have any additional drivers installed, such as V4L2? Also, are you executing your script locally on the Pi or remotely via SSH?

      • Hunt January 20, 2016 at 5:28 pm #

        I’ve executed it both remotely and locally, with the same results. I do have V4L2 installed and made sure my OpenCV was built with support on. However, running “v4l2-ctl --info” for the camera device info prints:

        “Driver Info (not using libv4l2): Driver name: uvcvideo …”

        I was under the impression that the uvcvideo driver was dependent on v4l2 (http://unix.stackexchange.com/questions/116904/understanding-webcam-s-linux-device-drivers), but it seems I’ve got a lot of searching to do to see what’s really going on.

  9. Steve January 18, 2016 at 3:30 am #

    I tried the raspicam vs VideoCapture method… but notice the frame rate seems slightly better (if not the same) using cv2.VideoCapture on RPi2.

    I think I had to ‘sudo modprobe bcm2835-v4l2’ to get that working…

  10. Trish Mapow February 20, 2016 at 11:02 pm #

    Hi, great tutorial! Do you know how I would be able to have one thread that continually displays current frames, and another one that detects faces; so when the detect face script is finished it changes the live frame?

    • Adrian Rosebrock February 22, 2016 at 4:27 pm #

      That’s absolutely possible, but I don’t have any code for that. I would suggest starting by reading through Practical Python and OpenCV where I discuss how to detect faces in images and video streams. You’ll want to use a separate thread for this code. Then, your main thread can monitor the previous thread and see if any new bounding boxes are returned.

  11. Kai Bergmann March 7, 2016 at 4:22 pm #

    Hi Adrian, thank you for all your great tutorials.

    I’ve been playing around with this one for some time and there is something I don’t quite understand.

    The read() function simply gives us the last captured frame. There is nothing that prevents it from delivering the same image more than once.

    As far as I can tell, you only use this read() function in the loop of your fps-test method. It doesn’t measure how many frames from the stream are captured; in an extreme case the same frame could be delivered in every call of read().

    In contrast the fps test for the non-threaded approach really captures a frame in every loop execution. The numbers can’t be compared in a meaningful way.

    Can you enlighten me? Maybe my understanding of “fps” just differs from yours.

    • Adrian Rosebrock March 8, 2016 at 4:19 pm #

      Hey Kai, I would suggest giving this post a read, which better explains the context of FPS. We aren’t measuring the true FPS of the physical camera sensor; instead, we are measuring the Frames Per Second processing rate of our video processing pipeline. The numbers can be compared in a meaningful way by measuring the actual throughput of our processing pipeline.

      The examples presented in this blog post are quite simple, but in reality, most video processing pipelines are more computationally expensive, so the background thread (i.e., removing a blocking I/O operation) can help speedup the pipeline — that is what the blog post is trying to demonstrate.

  12. José March 19, 2016 at 7:57 am #

    Hi Adrian, thanks for this awesome post.
    I’m learning python, and now I get the camera pi.
    I have one question.
    In all of my python’s codes I need add:
    camera.vflip = True
    because my pi camera is inverted.
    I try to add these line only in your PiVideoStream in :
    (….)
    self.camera = PiCamera()
    self.camera.resolution = resolution
    self.camera.framerate = framerate
    self.camera.vflip = True
    (….)

    When I execute the FPS test with display, the image doesn’t change when the thread starts. I don’t know what’s wrong.
    (I’m not an English speaker)

    • Adrian Rosebrock March 19, 2016 at 9:10 am #

      I personally haven’t tried the vflip flag before, so I’m honestly not sure about it. You might want to open up an “Issue” on the official GitHub for picamera.

      Also, instead of using picamera to perform the image flip, you could use OpenCV instead:

      frame = cv2.flip(frame, 0)

  13. Erik March 20, 2016 at 4:13 pm #

    Hi,

    I modified the pivideostream.py to include the horizontal and vertical flip attributes and they work fine for me. My picamera hangs upside down.

    It allows me to select whether i need to flip or not when calling PiVideoStream.

    in picamera_fps_demo.py modify
    vs = PiVideoStream(vf=True,hf=True).start()

    in pivideostream.py add the last two lines

    • Adrian Rosebrock March 20, 2016 at 4:22 pm #

      Thanks for sharing Erik! And for those who are interested, you can obtain the same effect of flipping the frame via OpenCV and the cv2.flip function.

  14. Yash March 21, 2016 at 11:21 am #

    Hi Adrian,

    Thanks for the amazing tutorial. I tried implementing as per the tutorial and it works great!

    Of late, I was reading about the Global Interpreter Lock that python has and from what I have understood, python threads don’t actually execute in parallel. So, I was wondering how is this code actually managing to get the speed boost? Generally people use multi-processing with overhead message passing but your tutorial deals with multi-threading only.

    Is it because the bottleneck over here is the I/O latency and not the CPU processing time, hence though threads are not actually running in parallel, we never have to deal with the latency arising because of I/O?

    • Adrian Rosebrock March 21, 2016 at 6:34 pm #

      You’re exactly right — the bottleneck here is the I/O latency, not the CPU, so by moving the I/O to a separate thread, our main thread is free to run unblocked, not having to wait for the I/O.

      • Emil August 28, 2016 at 11:09 pm #

        Will there be a further improvement in the FPS If the multiprocessing module is used in order to run the camera capturing thread on a separate core?

        • Adrian Rosebrock August 29, 2016 at 1:55 pm #

          In general, no. Threads are used for I/O bound tasks while processes are used for heavy computations.

  15. Sainandan Ramakrishnan March 24, 2016 at 8:46 am #

    Hey there Adrian!

    Absolutely helpful all of your tutorials.

    I tried multi-threading on Windows on my Laptop and the improvement is DRASTIC.

    But as soon as I try the very same thing on my Raspberry Pi B+(remotely accessed on my laptop via X11 port forwarding), not only do I get significantly slower results, BUT the threaded FPS happens to be even SLOWER than the VideoCapture one!! 🙁

    I understand that your multi-threading results were demonstrated on a Pi 2, but can’t it be done on a B+ as well?

    • Adrian Rosebrock March 24, 2016 at 5:11 pm #

      There are two reasons for this. The first is that the B+ has only a single core while the Pi 2 has four cores, making threading a much more efficient operation. Secondly, X11 forwarding adds yet another level of I/O overhead and will dramatically hurt your performance. Instead of executing your script via X11, either (1) turn off the cv2.imshow call by commenting it out or (2) execute your script with a keyboard + monitor connected directly to the Pi. This will improve the results on the B+.

  16. Anders Bering March 28, 2016 at 4:16 pm #

    Hi

    I’ve been trying to get a python script up and running with some streaming from my NOiR cam.

    And I’ve been successful in executing your above script. However, what I want is to capture a stream, do something with it, transcode it to H264 using the Pi’s HW acceleration, and then send it to a server. But just getting the above code to run with an acceptable FPS is beyond me.

    I am of course trying to capture a stream at 1920×1080. Is this too much to ask for in the Python script?

    I have this working using the gstreamer method, and I could use the Python script to set up gstreamer instead; it would just be nice to have it all in the same spot.

    • Adrian Rosebrock March 28, 2016 at 6:23 pm #

      Realistically, yes, I think capturing 1920×1080 is a bit too much for the simple picamera module. If you’re looking to get a reasonable FPS, your frames should be a maximum of 640 x 480 (at least that’s the rule of thumb I use from my experience). As the size of your frames increases, the number of frames per second the pipeline can process will drop dramatically. The larger the image is, the more data there is to process — thus the script starts to run slower.

      • Anders Bering March 30, 2016 at 5:16 am #

        Ok thank you.

        However, it is possible for me to just start the picamera streaming, and it runs just fine with a good FPS.
        But the problem is when I capture the image and pass it to OpenCV; then it drops to about 2 fps with threading and 1.5 fps without.

        I am looking for a way to do motion detection, and then start a high res stream to a server.

        So maybe capture a low res stream in Python and, when it detects something, call gstreamer and send it to my server (I have gstreamer working as it is now, with capture and H264 encoding).

        • Adrian Rosebrock March 30, 2016 at 12:46 pm #

          So just to clarify, what resolution are you capturing your frames at? The larger your frames are, the lower the FPS you’ll be able to process. As an aside, I currently have an open GitHub issue with picamera to see if it’s possible to capture multiple stream resolutions at the same time. It can be done using multiple resolutions + files, but I’m not sure if it’s possible to capture the raw stream.

          • Anders Bering March 31, 2016 at 11:07 am #

            My plan is to do motion detection on a low res stream of 320×200. If it then detects anything, the RasPi should start sending a 1920×1080 stream to my server.

            It is possible to use gstreamer to split a stream into two with different resolutions. This could be used to transmit a 1920×1080 stream to my server while doing motion detection on a low res stream of 320×200.

          • Adrian Rosebrock March 31, 2016 at 2:53 pm #

            Thanks for the tip on gstreamer, I’ll be sure to give this a try instead of utilizing picamera.

  17. Rock March 29, 2016 at 12:34 pm #

    The Raspicam is limited to 1080p at 30 FPS.
    In that case, threading won’t help to break that limitation. Am I right?

    • Adrian Rosebrock March 29, 2016 at 3:37 pm #

      Correct, you cannot break the limitations of the sensors themselves.

      • Rock March 30, 2016 at 7:53 am #

        Thanks for your confirmation. It’s really helpful.

    • Anders Bering March 30, 2016 at 5:51 am #

      I’m only achieving about 2 FPS.

  18. Jacob April 14, 2016 at 9:48 am #

    Hello Adrian

    I did some investigation of your threaded code and stumbled upon a problem I’m not sure you are aware of. The frame rate is actually not improving; it is just the same frame being received multiple times. I implemented a method to check if the frame had actually changed between calls and then got a better estimation of the correct frame rate, which is around 6 fps at 640×480.

    • Adrian Rosebrock April 14, 2016 at 10:37 am #

      Thanks for the comment Jacob. As I do a better job explaining in this post, the goal of this series is to increase the FPS processing rate of the pipeline. Or more simply, how many frames our while loop can process in a second. The distinction is subtle, but important. That said, I would definitely like to update the PiVideoStream class to only return a frame when a new one has been polled from the camera.

      • Jindrich May 13, 2016 at 9:55 am #

        Adrian, thank you for all the effort you’re putting into these tutorials. You helped me a lot in my adventures with OpenCV and Raspberry Pi.

        For my application I need to process as many frames as possible (who doesn’t?) while avoiding duplicate frames. I stumbled upon the video_threaded.py in OpenCV samples. I adapted the script for Raspberry Pi and I was able to get 16 fps at 320×240 on Raspberry Pi 3.

        https://github.com/Itseez/opencv/blob/master/samples/python/video_threaded.py

        Do you think this is a good approach to increase fps?

        • Adrian Rosebrock May 13, 2016 at 11:32 am #

          If your goal is to read as many (new) frames as possible and then process them, then this is a standard producer/consumer relationship. Your “producer” is the frame reader (single thread) which only grabs new frames and sends them to the consumer. The consumers should be a set of processes that look for new frames in the queue and process them. There are many, many ways to accomplish this in Python, but as long as you use a producer/consumer relationship, you’ll be okay.

  19. khosro April 23, 2016 at 10:23 am #

    hello Adrian,
    how can I change the Raspberry Pi camera settings in the PiVideoStream class?
    Note that I have OpenCV 3.0.0 without a virtual env and installed imutils with
    sudo pip install imutils
    (I want to change the shutter speed of the camera)
    regards

    • Adrian Rosebrock April 25, 2016 at 2:10 pm #

      You’ll need to modify the imutils directly. I would suggest downloading the code directly and then modifying the PiVideoStream class to your liking.

  20. Brian May 15, 2016 at 8:52 pm #

    When you define a Python class named PiVideoStream, where and how is it saved? Is it saved as a separate file in the same folder as picamera_fps_demo.py? Does it have an extension? I noticed this file wasn’t included in the downloads so I wasn’t sure.

    I’m asking because I’ve run into the following error:
    “No module named video.pivideostream”

    Thanks!

    • Adrian Rosebrock May 16, 2016 at 9:13 am #

      The reason the pivideostream.py file wasn’t included in the download of the code is because it’s already online and part of the imutils Python package. Make sure you install imutils (or upgrade to the latest version) before running the code in this blog post.

  21. Alex May 21, 2016 at 6:43 am #

    Thanks for the tutorial! I followed along on a Pi Zero with the official Pi camera and thought someone might be interested in how well it performs.

    With display:
    Not threaded – 9.16 sec = 11.03 fps
    Threaded – 6.19 sec = 16.16 fps

    Without display:
    Not threaded – 5.16 sec = 19.59 fps
    Threaded – 3.04 sec = 32.86 fps

    • Adrian Rosebrock May 21, 2016 at 8:10 am #

      Thanks for sharing Alex! Although in general, I don’t really recommend the Pi Zero for video processing since it has only one core (while the Pi 2 and Pi 3 have four cores).

  22. Chris Willing June 4, 2016 at 10:51 pm #

    Could I suggest a small change to the fps function in the FPS module? If ‘self._end = datetime.datetime.now()’ is added immediately before ‘return self._numFrames / self.elapsed()’, then the fps function may be used anywhere in a pipeline (provided _start has been set). For instance, I added a call to fps at the end of the timestamp overlay in the video window so I can see the frame rate ‘live’.

  23. Yadullah Abidi June 9, 2016 at 12:10 pm #

    Hi Adrian!
    I was wondering: how do I implement this code in my Python script for image processing?

    • Adrian Rosebrock June 9, 2016 at 5:14 pm #

      This code is already implemented in the imutils library. Just install imutils and you’ll be able to use it!

  24. Hytham June 25, 2016 at 3:56 am #

    I used the code but it doesn’t show the stream via cv2.imshow(“Frame”, frame).
    Please help me.

    • Adrian Rosebrock June 25, 2016 at 1:41 pm #

      It sounds like your Raspberry Pi is having trouble accessing your video stream. Double check that you can access your Raspberry Pi camera module. I would suggest starting with this post.

  25. Jon Lee July 15, 2016 at 7:13 pm #

    Awesome tutorial! Do you know how to get this same boost in fps with CSI cameras?

    I should’ve specified that I am using a rpi 3 and if that would change anything.

    • Adrian Rosebrock July 18, 2016 at 5:21 pm #

      So you’re using the standard Raspberry Pi camera module? That shouldn’t change anything at all. You’ll still get an increase with threading.

  26. Adam Smith August 15, 2016 at 9:52 am #

    This is an amazingly helpful post. Thank you Adrian!

    I removed the line “frame = imutils.resize(frame, width=400)” from my threaded process, making the window go to the default 320×240 buffer size instead. I was able to achieve over 350fps from my threaded process on my Raspberry Pi 3 after doing that!

    I currently get around 100fps while using multiple threads, even after applying transformations like Gaussian blur, grayscale, thresholding, and blob detection to each frame. A 10x increase from the 10fps I was getting before. Thanks again!

    • Adrian Rosebrock August 16, 2016 at 1:03 pm #

      No problem Adam, happy I could help! Just keep in mind that the 350 FPS is the number of frames per second that you can theoretically process using your loop. This code measures the actual throughput processing rate of the video pipeline. As you add more steps to your pipeline, this will start to decrease.

  27. Charles August 18, 2016 at 12:28 pm #

    Hi Adrain, when you use the pi camera, you set the frame rate to 32 in this line: def __init__(self, resolution=(320, 240), framerate=32) , what does 32 mean here? Is it the ‘true’ frame rate of the pi camera? If my frame rate is 128 after applying the multiple threading, does it mean that each image sent from pi camera will be processed 3 times in the loop?

    • Adrian Rosebrock August 22, 2016 at 1:41 pm #

      Yes, that is the intended, “true” rate of the camera. The 128 implies that you can feed a total of 128 frames per second through your video processing pipeline. Whether or not your camera is physically capable of reading 128 frames per second is dependent on the actual hardware of the camera.

  28. Islam August 27, 2016 at 3:32 pm #

    Thanks for the tutorial! I followed on Raspberry Pi2 using pi camera and USB Camera

    Picamera
    With display:
    Not threaded – 9.52 sec = 10.61 fps
    Threaded – 2.69 sec = 37.12 fps

    Without display:
    Not threaded – 4.35 sec = 23.23 fps
    Threaded – 0.91 sec = 110.08 fps

    USB Webcam
    With display:
    Not threaded – 8.42 sec = 11.96 fps
    Threaded – 5.91 sec = 16.93 fps

    Without display:
    Not threaded – 6.85 sec = 14.61 fps
    Threaded – 3.65 sec = 27.37 fps

    I found excellent performance from the Pi camera vs. the A4TECH model PX-835MU webcam.

  29. Peni August 30, 2016 at 12:54 am #

    Dear Adrian,

    I need to edit the resolution; currently the PiVideoStream class is using 320 x 240 pixels, and I need to change this.

    • Adrian Rosebrock August 30, 2016 at 12:41 pm #

      Just change Line 68 to include the resolution parameter:

      vs = PiVideoStream(resolution=(640, 480)).start()

      • Matt March 14, 2017 at 8:37 pm #

        Hi Adrian,

        Thanks for the great tutorial! Similar question to the above on changing the frame resolution:

        If I want to change the camera.awb_gains or camera.contrast of the threaded stream, do I just add it in the PiVideoStream() call? For example vs = PiVideoStream(contrast=40).start()

        Thanks,
        Matt

        • Matt March 14, 2017 at 11:17 pm #

          Sorry, one quick additional question. Is it possible to change the resolution to a different aspect ratio than 320×240? I seem to be having trouble with 16:9 video where the frame gets all scrambled.

          Thanks again!
          Matt

          • Adrian Rosebrock March 15, 2017 at 8:49 am #

            The PiVideoStream class abstracts away the internal picamera object. I would suggest modifying the class to (1) adjust the resolution from within the constructor or (2) accept a pre-configured PiCamera object. I hope that helps!

          • Matt March 15, 2017 at 5:02 pm #

            Yep, I think that makes sense! My python is….not good 🙂 but I’ll see what I can manage. Thanks!

  30. amrosik September 1, 2016 at 6:53 pm #

    What if the processing pipeline is so complex that the image processing itself is slower than the framerate the picamera is potentially delivering? Am I right in saying that this threading method only makes sense if the processing pipeline is not too complex? For example, if the processing pipeline is a Hough transform, which costs a tremendous amount of CPU time.

    If I get you right, then in this case we should put the processing and the image acquisition into one single thread and execute them serially. Because otherwise, in the threaded approach, the CPU would spend time streaming frames which are not going to be processed anyway, since the processing loop hasn’t finished yet.

    • amrosik September 1, 2016 at 7:55 pm #

      In my current HoughCircles application your threaded approach gives much better results (thanks by the way). Maybe HoughCircles is not costly enough?

      I wonder if you can get even better results by not only threading the stream, but actually making a separate process out of it, by using the multiprocessing library?

      • Adrian Rosebrock September 2, 2016 at 7:00 am #

        It is actually extremely likely that at some point your image processing pipeline will not run in real-time, or you run into a roadblock where you need to optimize the living heck out of the application.

        Does that mean that threading is actually a waste of time?

        Actually, quite the opposite.

        Keep in mind that reading the frames from our video stream is a blocking I/O operation. This would actually slow down our video processing pipeline even further since we would need to wait for the next frame to be read. By using threading, we can always grab the most recently read frame from the stream without having to wait for it.

  31. farbod September 13, 2016 at 8:37 am #

    Hello Adrian, how can I show Python code properly in a comment, like you do in your blog posts?

    I discovered a really bad thing:
    My image processing loop is able to process 10 frames (each 720×720) per second, so each loop takes about 0.1s. Setting up the PiVideoStream instance with a framerate of 40 and a resolution of 720×720 should be more than enough. Theoretically a framerate of 10 fps would give the same outcome.

    What I discovered is that apparently the REAL framerate of the camera is lower than 10! So I am grabbing and processing the same frame multiple times. Changing the framerate parameter of PiVideoStream doesn’t make a noticeable difference.

    And another discovery:

    By introducing the option camera.sensor_mode into the PiVideoStream class one is able to set the camera mode to 7, for example (see here: http://picamera.readthedocs.io/en/release-1.12/fov.html), which ensures a minimum of 40 fps at a resolution of 640×480.
    After specifying sensor mode 7, my image processing has apparently been getting slower!
    Before that, it took 0.1s to process one 720×720 frame. Now with the sensor_mode specified it takes 3 times longer to go through one loop. What the F*? This all makes no sense to me. I really need your help.

    The PiVideoStream class uses the camera.capture_continuous method.
    Is it possible to use the camera.capture_sequence method instead? According to the picamera docs the latter is faster. But I don’t know how to make a threaded stream out of it.

    • Adrian Rosebrock September 13, 2016 at 12:47 pm #

      Without having physical access to your camera, it’s really hard to diagnose what the exact issue might be. It may be unlikely, but it’s certainly possible that you might have a faulty Raspberry Pi camera module. I would suggest using the raspivid tool to capture videos directly to file and monitor the FPS there as well. Secondly, I would suggest posting on the picamera GitHub Issues to see if there are any known problems as well.

  32. amrosik September 13, 2016 at 9:53 am #

    I noticed that the real framerate is much lower than specified, even slower than the processing (which is 10 processing loops per second). How is that possible?
    Since I don’t know how to insert code blocks into this comment, I posted the full question + code on raspberry.stackexchange, see here: http://raspberrypi.stackexchange.com/questions/54886/picams-real-framerate-is-too-slow-camera-modes-are-strange

    • Adrian Rosebrock September 13, 2016 at 12:43 pm #

      Can you elaborate on what you mean by the “real framerate”? Are you talking about the limitations of the physical camera sensor?

  33. Roger Costandi October 22, 2016 at 12:03 pm #

    Hi Adrian,

    I thought I could share the results of running the test program on a Raspberry Pi 3 (Raspbian GNU/Linux 8 (jessie)) :

    • Adrian Rosebrock October 23, 2016 at 10:13 am #

      Thanks for sharing the results Roger, it’s much appreciated!

  34. Kirill October 29, 2016 at 1:18 pm #

    Adrian, thank you for this post. It inspired me to move my CV project to a Raspberry Pi + picamera and the results are very promising. However, placing the data analysis code inside the main thread drops the FPS back down. As discussed before, multiprocessing could be the key to this problem. I could not find information regarding multiprocessing in your other posts. For the Raspberry Pi it turns out to be a very important issue to keep maximum FPS. Some basic example of using multiprocessing in your code would be very useful. Hope I am not asking too much.

    • Adrian Rosebrock November 1, 2016 at 9:18 am #

      Thank you for the suggestion Kirill. I will certainly consider doing more advanced and optimized posts directly for the Raspberry Pi in the future.

  35. Dylan B November 20, 2016 at 11:33 am #

    Adrian, I am a newbie to Python and Raspberry Pi, please help! I am running a simple OpenCV program (with the picam) to draw a rectangle around a face. It is a little choppy and I want to use your imutils package to solve the issue.
    However, I don’t understand how to use the downloaded imutils package files. Can you post a clear step-by-step process of threading on the Pi? The blog just seems to explain each part of the program, but I want to know how to use threading in a program.
    What file of the imutils package do I use? How do I interface the imutils package with my code?
    Thanks!

    • Adrian Rosebrock November 21, 2016 at 12:33 pm #

      Hey Dylan — you would normally let pip install the imutils package for you:

      $ pip install imutils

      If you are new to the world of computer vision, OpenCV, and Python I would really encourage you to read through my book, Practical Python and OpenCV. This book will help you get started with computer vision easily. I also include a downloadable Raspbian .img file that has OpenCV + Python and all other necessary Python packages pre-installed. Be sure to take a look!

  36. Dylan B November 21, 2016 at 5:45 pm #

    Last question: so once I do $ pip install imutils, do I just include its package import (import imutils) at the top of my program? Is it that easy to make it threaded? I thought I would need to restructure my current program to make it work as a thread.
    Does the recommended book and/or Raspbian .img file have threading in it?

    Thanks for getting back to me so quickly, I will definitely look into the book for Christmas!
    Happy Holidays!

    • Adrian Rosebrock November 22, 2016 at 12:35 pm #

      Yes, once you run pip install imutils you would import it at the top of your Python file just like any other Python package.

      I don’t know what the code of your old project looks like, but I would suggest using the template I provided here as a starting point for the threading.

      And yes, the Raspbian .img file that comes with my book already has imutils installed with the threading component.

  37. Dylan B November 22, 2016 at 9:09 pm #

    Adrian, when I do pip install imutils, it won’t install; it says “errorno 13 Permission denied”.
    I think this is why the example code above does not work. Why is it not allowing me access?

    • Adrian Rosebrock November 23, 2016 at 8:34 am #

      It sounds like you’re trying to install imutils into your system install of Python and not a local install or Python virtual environment. In that case you need sudo permission:

      $ sudo pip install imutils

  38. Ghanendra January 15, 2017 at 8:01 am #

    Hi Adrian, how do I display the FPS on the current frame?

    • Adrian Rosebrock January 15, 2017 at 12:00 pm #

      I would suggest using the cv2.putText function. A good example of cv2.putText can be found in this post.

  39. Adam January 28, 2017 at 11:02 am #

    Hey Adrian,

    Great tutorial as usual! I very much enjoyed learning that polling from a camera stream is a heavy IO operation that can benefit from multi-threading.

    I have a question and I apologize if it’s a duplicate; I couldn’t find it in the comments thread.
    When I do not use the display (the -d 1 option) I get a serious improvement (over 1000%, as you get). When I do use the display I get very low FPS (somewhere around 1 or 2 FPS). See below:

    Multithreaded- no display:

    [INFO] elasped time: 0.09
    [INFO] approx. FPS: 1172.77

    Multithreaded- display:

    [INFO] elasped time: 42.68
    [INFO] approx. FPS: 2.34

    From other comments I read that the X11 is a serious bottleneck and it makes sense. However, I also noticed that when you use the display you get around 51 FPS. Are there any specific X11 configurations you are using?

    p.s
    I am using RPi 2

    • Adrian Rosebrock January 29, 2017 at 2:44 pm #

      Hey Adam — when you use X11 you need to transmit the frame over the network. This is a serious I/O overhead. When I gathered the results for this tutorial I was using a physical display connected to my Pi via a HDMI cable. No specific X11 configurations were used.

  40. Matt March 16, 2017 at 4:46 pm #

    Hi Adrian,

    Thanks again for this tutorial! Is it possible to simultaneously record an h264 video file while this stream is providing frames? Do I need to adjust the PiVideoStream class to give it the record attribute?

    Thanks,
    Matt

    • Adrian Rosebrock March 17, 2017 at 9:25 am #

      I don’t think a simultaneous recording + video stream access is directly possible with picamera (although I’ve heard mentions of it in the GitHub Issues), but what you could do is write the frames to file via OpenCV and cv2.VideoWriter.

  41. Oguzhan April 17, 2017 at 12:16 pm #

    Hi Adrian, firstly, thanks a lot for your helpful tutorial; we are following your amazing blog excitedly. I have these results with display mode:
    [INFO] elasped time: 4.49
    [INFO] approx. FPS: 20.22

    [INFO] elasped time: 0.79
    [INFO] approx. FPS: 126.90

    When I run the script, the display only stays up for a few seconds. I want the display to run indefinitely; how should I modify the code to do that?
    Regards

    • Adrian Rosebrock April 19, 2017 at 12:59 pm #

      Hey Oguzhan — can you elaborate more on what you mean by “display infinitely”?

  42. Oguzhan April 17, 2017 at 1:08 pm #

    Thanks for the amazing tutorial! How can I move the frame for my video processing script?
    Regards

    • Adrian Rosebrock April 19, 2017 at 12:57 pm #

      What do you mean by “move the frame”?

  43. Gaurav April 21, 2017 at 1:55 am #

    Hi Adrian,

    Thanks for the great blog. I implemented this code in one of my projects on Pi, but the code exits gracefully without any error or crash dump. My processing block includes face detection using haar cascades on a background subtracted frame. I’m not able to understand the root cause of the exit. Can you please share your thoughts?

    https://github.com/gmish27/PeoplCpounter
    When I execute ‘python main.py’ the code exits as soon as a detection occurs.

    • Adrian Rosebrock April 21, 2017 at 10:50 am #

      Hi Gaurav — are you able to process any of the frames in the script? Or does the script exit as soon as you start the Python script?

  44. Anastasios Selalmazidis April 26, 2017 at 11:52 am #

    Great article Adrian,

    there is a typo somewhere: you mention 14.46 FPS for the RPi Zero, but in the image we can see that it is 15.46 FPS

    • Adrian Rosebrock April 28, 2017 at 9:49 am #

      Thank you for pointing this out Anastasios! I have updated the blog post.

  45. maymuna April 27, 2017 at 4:42 am #

    hi Adrian, I have been following your tutorials for my FYP. I ran your video streaming code on my Pi and it worked well, but when I extend the code for face detection the frame rate becomes very slow because of the processing. Please guide me with it, as I have gotten up to facial recognition but it’s too slow.

    • Adrian Rosebrock April 28, 2017 at 9:33 am #

      To start, make sure you resize your frame before applying face detection (the less data there is to process, the faster your algorithm will run). Also keep in mind that face detection is a slow process. Every step you add to your frame processing pipeline, the slower it will become. For what it’s worth, I cover how to perform face detection + face recognition on the Raspberry Pi inside the PyImageSearch Gurus course.

  46. Rob May 9, 2017 at 1:44 pm #

    What would be the best way to go if I want to process data from 2 cameras (capture 2 cameras using Raspberry Pi multi camera adapter http://www.arducam.com/multi-camera-adapter-module-raspberry-pi/)?

    • Adrian Rosebrock May 11, 2017 at 8:53 am #

      Hey Rob — I don’t know about the multi-camera adapter, but you can use this blog post to help you access multiple video streams on your Raspberry Pi.

  47. hishaam July 20, 2017 at 6:31 am #

    hello Adrian,
    Thank you for the wonderful tutorial.
    I ran picamera_fps_demo.py; it contains the call cv2.imshow(‘frame’, frame), but even with this call I can’t see the image (the window is not opened).

    • Adrian Rosebrock July 21, 2017 at 8:54 am #

      How are you accessing your Raspberry Pi? Via an HDMI monitor? VNC? SSH?

  48. albert July 24, 2017 at 11:56 am #

    Hi Adrian, I’m using this for an outdoor project but have noticed that my video is very dark. Is there any way to increase the brightness and detail of dark areas while still maintaining a fast streaming rate?
    Thanks!

    • Adrian Rosebrock July 24, 2017 at 3:29 pm #

      You can actually adjust the brightness setting of your Raspberry Pi camera. Simply follow the documentation.

      • albert July 25, 2017 at 7:52 pm #

        Ok cool thanks!

  49. Albert July 26, 2017 at 4:50 pm #

    Hi Adrian, to ask another question: I want to do basic color recognition on a video stream from the Pi (with the v2.1 camera), but even with multithreading it’s only doing 15ish frames per second (without any processing). I only really need about a 100px horizontal strip of the image for my project. To try and speed up the image stream, is it possible to just take those pixels (1280×100 from the centre of the screen)?

    I thought about inputting that as the image resolution, but assumed it would just shrink down the vertical dimensions, when what I want is just a small section of the vertical pixels in the middle of the screen without having to crop the image after it’s been read, as this would likely not improve the speed. Is this possible, or is there anything else you can think of to improve the speed?
    Thanks Albert

    • Adrian Rosebrock July 28, 2017 at 9:56 am #

      Reading frames at 1280px is likely why your processing pipeline is so slow. Can you reduce the size of your resolution? That will dramatically increase your throughput. And yes, you can process just a specific area of a frame. Just apply basic NumPy array slicing/cropping to extract the region.

      • albert August 1, 2017 at 7:29 am #

        Hi Adrian, I could, but I would lose the distance; basically I want to be able to recognise an object like a QR code at distance, so reducing the resolution directly affects the range of the device. Is there any way to digitally zoom with the camera (so I can get a high resolution from a certain part of an image; get the range but with fewer pixels == faster)?

        Thanks Albert

        • Adrian Rosebrock August 1, 2017 at 9:33 am #

          If you want to “zoom” in on a specific ROI using a lower resolution image you would have to resize the ROI via interpolation. This could lead to the ROI looking interpolated. Because of this, it’s best that you work with the higher resolution image (even though it will be slower).

          • albert August 2, 2017 at 11:38 am #

            ok thanks!
