Increasing webcam FPS with Python and OpenCV


Over the next few weeks, I’ll be doing a series of blog posts on how to improve your frames per second (FPS) from your webcam using Python, OpenCV, and threading.

Using threading to handle I/O-heavy tasks (such as reading frames from a camera sensor) is a programming model that has existed for decades.

For example, if we were to build a web crawler to spider a series of webpages (a task that is, by definition, I/O bound), our main program would spawn multiple threads to handle downloading the set of pages in parallel instead of relying on only a single thread (our “main thread”) to download the pages in sequential order. Doing this allows us to spider the webpages substantially faster.

The same notion applies to computer vision and reading frames from a camera — we can improve our FPS simply by creating a new thread that does nothing but poll the camera for new frames while our main thread handles processing the current frame.

This is a simple concept, but it’s one that’s rarely seen in OpenCV examples since it does add a few extra lines of code (or sometimes a lot of lines, depending on your threading library) to the project. Multithreading can also make your program harder to debug, but once you get it right, you can dramatically improve your FPS.

We’ll start off this series of posts by writing a threaded Python class to access your webcam or USB camera using OpenCV.

Next week we’ll use threads to improve the FPS of your Raspberry Pi and the picamera module.

Finally, we’ll conclude this series of posts by creating a class that unifies both the threaded webcam/USB camera code and the threaded picamera code into a single class, making all webcam/video processing examples on PyImageSearch not only run faster, but also run on either your laptop/desktop or the Raspberry Pi without changing a single line of code!


Use threading to obtain higher FPS

The “secret” to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread.

You see, accessing your webcam/USB camera using the cv2.VideoCapture function and the .read() method is a blocking operation. The main thread of our Python script is completely blocked (i.e., “stalled”) until the frame is read from the camera device and returned to our script.

I/O tasks, as opposed to CPU-bound operations, tend to be quite slow. While computer vision and video processing applications are certainly quite CPU-heavy (especially if they are intended to run in real time), it turns out that camera I/O can be a huge bottleneck as well.

As we’ll see later in this post, just by adjusting the camera I/O process, we can increase our FPS by as much as 379%!

Of course, this isn’t so much a true increase in FPS as it is a dramatic reduction in latency (i.e., a frame is always available for processing; we don’t need to poll the camera device and wait for the I/O to complete). Throughout the rest of this post, I will refer to our metrics as an “FPS increase” for brevity, but also keep in mind that it’s a combination of both a decrease in latency and an increase in FPS.

In order to accomplish this FPS increase/latency decrease, our goal is to move the reading of frames from a webcam or USB device to an entirely different thread, totally separate from our main Python script. 

This will allow frames to be read continuously from the I/O thread, all while our root thread processes the current frame. Once the root thread has finished processing its frame, it simply needs to grab the current frame from the I/O thread. This is accomplished without having to wait for blocking I/O operations.

The first step in implementing our threaded video stream functionality is to define an FPS class that we can use to measure our frames per second. This class will help us obtain quantitative evidence that threading does indeed increase FPS.

We’ll then define a WebcamVideoStream class that will access our webcam or USB camera in a threaded fashion.

Finally, we’ll define our driver script, fps_demo.py, which will compare single-threaded FPS to multi-threaded FPS.

Note: Thanks to Ross Milligan, whose blog inspired me to do this blog post.

Increasing webcam FPS with Python and OpenCV

I’ve actually already implemented webcam/USB camera and picamera threading inside the imutils library. However, I think a discussion of the implementation can greatly improve our knowledge of how and why threading increases FPS.

To start, if you don’t already have imutils installed, you can install it using pip:
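$ pip install imutils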

Otherwise, you can upgrade to the latest version via:
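$ pip install --upgrade imutils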

As I mentioned above, the first step is to define an FPS class that we can use to approximate the frames per second of a given camera + computer vision processing pipeline:
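The listing below follows the FPS class essentially as it ships in imutils.video; the line numbers referenced in the walkthrough assume this exact listing.

# import the necessary packages
import datetime

class FPS:
    def __init__(self):
        # store the start time, end time, and total number of frames
        # that were examined between the start and end intervals
        self._start = None
        self._end = None
        self._numFrames = 0

    def start(self):
        # start the timer
        self._start = datetime.datetime.now()
        return self

    def stop(self):
        # stop the timer
        self._end = datetime.datetime.now()

    def update(self):
        # increment the total number of frames examined during the
        # start and end intervals
        self._numFrames += 1

    def elapsed(self):
        # return the total number of seconds between the start and
        # end interval
        return (self._end - self._start).total_seconds()

    def fps(self):
        # compute the (approximate) frames per second
        return self._numFrames / self.elapsed()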

On Lines 5-10 we define the constructor to our FPS class. We don’t require any arguments, but we do initialize three important variables:

  • _start: The starting timestamp of when we commenced measuring the frame read.
  • _end: The ending timestamp of when we stopped measuring the frame read.
  • _numFrames: The total number of frames that were read during the _start and _end interval.

Lines 12-15 define the start method, which, as the name suggests, kicks off the timer.

Similarly, Lines 17-19 define the stop method, which grabs the ending timestamp.

The update method on Lines 21-24 simply increments the number of frames that have been read during the starting and ending interval.

We can grab the total number of seconds that have elapsed between the starting and ending interval on Lines 26-29 by using the elapsed method.

And finally, we can approximate the FPS of our camera + computer vision pipeline by using the fps method on Lines 31-33. By taking the total number of frames read during the interval and dividing by the number of elapsed seconds, we can obtain our estimated FPS.

Now that we have our FPS class defined (so we can empirically compare results), let’s define the WebcamVideoStream class, which encompasses the actual threaded camera read:
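As with the FPS class, this follows the imutils.video implementation, and the line references below assume the listing.

# import the necessary packages
from threading import Thread
import cv2

class WebcamVideoStream:
    def __init__(self, src=0):
        # initialize the video camera stream and read the first frame
        # from the stream
        self.stream = cv2.VideoCapture(src)
        (self.grabbed, self.frame) = self.stream.read()

        # initialize the variable used to indicate if the thread
        # should be stopped
        self.stopped = False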

We define the constructor to our WebcamVideoStream class on Line 6, passing in an (optional) argument: the src of the stream.

If the src is an integer, then it is presumed to be the index of the webcam/USB camera on your system. For example, a value of src=0 indicates the first camera and a value of src=1 indicates the second camera hooked up to your system (provided you have a second one, of course).

If src is a string, then it is assumed to be the path to a video file (such as .mp4 or .avi) residing on disk.

Line 9 takes our src value and makes a call to cv2.VideoCapture, which returns a pointer to the camera/video file.

Now that we have our stream pointer, we can call the .read() method to poll the stream and grab the next available frame (Line 10). This is done strictly for initialization purposes so that we have an initial frame stored in the class.

We’ll also initialize stopped, a boolean indicating whether the threaded frame reading should be stopped or not.

Now, let’s move on to actually utilizing threads to read frames from our video stream using OpenCV:
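These methods complete the WebcamVideoStream class; the file’s line numbering continues from the constructor listing above.

    def start(self):
        # start the thread to read frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # keep looping infinitely until the thread is stopped
        while True:
            # if the thread indicator variable is set, stop the thread
            if self.stopped:
                return

            # otherwise, read the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # return the frame most recently read
        return self.frame

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True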

Lines 16-19 define our start method, which, as the name suggests, starts the thread to read frames from our video stream. We accomplish this by constructing a Thread object using the update method as the callable object invoked by the run() method of the thread.

Once our driver script calls the start method of the WebcamVideoStream class, the update method (Lines 21-29) will be called.

As you can see from the code above, we start an infinite loop on Line 23 that continuously reads the next available frame from the video stream via the .read() method (Line 29). If the stopped indicator variable is ever set, we break from the infinite loop (Lines 25 and 26).

Again, keep in mind that once the start method has been called, the update method is placed in a separate thread from our main Python script — this separate thread is how we obtain our increased FPS performance.

In order to access the most recently polled frame from the stream, we’ll use the read method on Lines 31-33.

Finally, the stop method (Lines 35-37) simply sets the stopped indicator variable, signifying that the thread should be terminated.

Now that we have defined both our FPS and WebcamVideoStream classes, we can put all the pieces together inside fps_demo.py:
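What follows is a sketch of the driver script, split into three blocks whose line numbering runs continuously; the default argument values, the log strings, and the “Frame” window name are assumptions here.

# import the necessary packages
from __future__ import print_function
from imutils.video import WebcamVideoStream
from imutils.video import FPS
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
    help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
    help="Whether or not frames should be displayed")
args = vars(ap.parse_args())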

We start off by importing our necessary packages on Lines 2-7. Notice how we are importing the FPS and WebcamVideoStream classes from the imutils library. If you do not have imutils installed or you need to upgrade to the latest version, please see the note at the top of this section.

Lines 10-15 handle parsing our command line arguments. We’ll require two switches here: --num-frames, which is the number of frames to loop over to obtain our FPS estimate, and --display, an indicator variable used to specify if we should use the cv2.imshow function to display the frames to our monitor or not.

The --display argument is actually really important when approximating the FPS of your video processing pipeline. Just like reading frames from a video stream is a form of I/O, so is displaying the frame to your monitor! We’ll discuss this in more detail inside the Threading results section of this post.

Let’s move on to the next code block which does no threading and uses blocking I/O when reading frames from the camera stream. This block of code will help us obtain a baseline for our FPS:
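Line numbering continues from the import and argument parsing block above.

# grab a pointer to the video stream and initialize the FPS counter
print("[INFO] sampling frames from webcam...")
stream = cv2.VideoCapture(0)
fps = FPS().start()

# loop over some frames
while fps._numFrames < args["num_frames"]:
    # grab the frame from the stream and resize it to have a
    # maximum width of 400 pixels
    (grabbed, frame) = stream.read()
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
stream.release()
cv2.destroyAllWindows()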

Lines 19 and 20 grab a pointer to our video stream and then start the FPS counter.

We then loop over the number of desired frames on Line 23, read the frame from the camera (Line 26), optionally display the frame to our monitor (Lines 30-32), and update our FPS counter (Line 35).

After we have read --num-frames frames from the stream, we stop the FPS counter and display the elapsed time, along with the approximate FPS, on Lines 38-40.

Now, let’s look at our threaded code to read frames from our video stream:
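Once more, line numbering continues from the block above.

# create a *threaded* video stream and start the FPS counter
print("[INFO] sampling THREADED frames from webcam...")
vs = WebcamVideoStream(src=0).start()
fps = FPS().start()

# loop over some frames...this time using the threaded stream
while fps._numFrames < args["num_frames"]:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()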

Overall, this code looks nearly identical to the code block above, only this time we are leveraging the WebcamVideoStream class.

We start the threaded stream on Line 49, loop over the desired number of frames on Lines 53-65 (again, keeping track of the total number of frames read), and then display our output on Lines 69 and 70.

Threading results

To see the effects of webcam I/O threading in action, just execute the following command:
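$ python fps_demo.py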

Figure 1: By using threading with Python and OpenCV, we are able to increase our FPS by over 379%!

As we can see, by using no threading and sequentially reading frames from our video stream in the main thread of our Python script, we are able to obtain a respectable 29.97 FPS.

However, once we switch over to using threaded camera I/O, we reach 143.71 FPS — an increase of over 379%!

This is clearly a huge decrease in our latency and a dramatic increase in our FPS, obtained simply by using threading.

However, as we’re about to find out, using cv2.imshow can substantially decrease our FPS. This behavior makes sense if you think about it — the cv2.imshow function is just another form of I/O, only this time instead of reading a frame from a video stream, we’re instead sending the frame to output on our display.

Note: We’re also using the cv2.waitKey(1) function here, which does add a 1ms delay to our main loop. That said, this function is necessary for keyboard interaction and to display the frame to our screen (especially once we get to the Raspberry Pi threading lessons).

To demonstrate how the cv2.imshow I/O can decrease FPS, just issue this command:
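$ python fps_demo.py --display 1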

Figure 2: Using the cv2.imshow function can reduce our FPS — it is another form of I/O, after all!

Using no threading, we reach 28.90 FPS. And with threading we hit 39.93 FPS. This is still a 38% increase in FPS, but nowhere near the 379% increase from our previous example.

Overall, I recommend using the cv2.imshow function to help debug your program — but if your final production code doesn’t need it, there is no reason to include it, since you’ll be hurting your FPS.

A great example of such a program would be developing a home surveillance motion detector that sends you a text message containing a photo of the person who just walked in the front door of your home. Realistically, you do not need the cv2.imshow function for this. By removing it, you can increase the performance of your motion detector and allow it to process frames faster.

Summary

In this blog post we learned how threading can be used to increase your webcam and USB camera FPS using Python and OpenCV.

As the examples in this post demonstrated, we were able to obtain a 379% increase in FPS simply by using threading. While this isn’t necessarily a fair comparison (since we could be processing the same frame multiple times), it does demonstrate the importance of reducing latency and always having a frame ready for processing.

In nearly all situations, using threaded access to your webcam can substantially improve your video processing pipeline.

Next week we’ll learn how to increase the FPS of our Raspberry Pi using the picamera module.

Be sure to enter your email address in the form below to be notified when the next post goes live!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


66 Responses to Increasing webcam FPS with Python and OpenCV

  1. Paul Nta December 21, 2015 at 12:44 pm #

    Thank you! Great tutorial!
    I’m wondering if (in a production app) we should use a lock or something to synchronize access to the frame, which is a shared resource, right?

    • Adrian Rosebrock December 21, 2015 at 5:32 pm #

      If it’s a shared resource, then yes, you should absolutely use a lock on the image data, otherwise you can run into a synchronization issue.

    • Luke August 8, 2016 at 2:26 am #

      Same here – I assumed you should have a thread acquire and release so it isn’t reading the image while it is being written? Apparently assigning a value to be a numpy array is atomic – or it doesn’t really matter if it was the last frame, not the very latest?

      Looks like if you have ANY processing you need to have it out of that fetching image thread, and it runs pretty fast.

  2. Jürgen December 21, 2015 at 1:00 pm #

    Hi Adrian,

    looks like the increase in fps is fake; you get a frame immediately when required, but it looks like it is still the former frame when the program body is executed faster than the physical frame rate of the camera. What do you think?

    Jürgen

    • Adrian Rosebrock December 21, 2015 at 5:34 pm #

      Very true — and at that point it depends on your physical camera. If your loop is faster than the physical frame rate of the camera, then the increase in FPS is not as realistic. This is further evidenced when we use the cv2.imshow function to simulate a more “realistic” scenario. In either case though, threading should be used since it can increase FPS.

  3. David Kadouch December 22, 2015 at 2:57 am #

    Hi Adrian

    This is great. I am myself experimenting with a multithreaded app that runs opencv and other libraries and I’m already using your video stream class.

    Special note for OSX users: I’ve run into a limitation in opencv on OSX: the command cv2.VideoCapture(0) can only be issued in the main thread, so take this into account when designing your app. See http://stackoverflow.com/questions/20445762/unable-to-start-camera-capture-in-python-opencv-cv2-using-a-non-main-thread for more info

    • Adrian Rosebrock December 22, 2015 at 6:25 am #

      Thanks for sharing David — that’s a great tip regarding cv2.VideoCapture can only be executed from the main thread.

  4. Hotte December 22, 2015 at 12:12 pm #

    Awesome, Adrian!! Can’t wait to read the tutorial about fps increase for Raspberry Pi using the picamera module!

  5. Pär December 22, 2015 at 3:36 pm #

    Great tutorial (as always)! And good timing too…
    I’m just trying to make the image gathering threaded for my raspberry pi project to improve the framerate. Without threading, but with the use of a generator type of code for the image handling, I improved the framerate by around 2 times, but hopefully threading will do more. Another thing that is interesting is how to optimize the framerate vs the opencv computational time to reach a good balance. Jurgen mentioned that several frames could be similar, and then there is no need to make calculations on that second frame (at least not in my case). On a raspberry pi 2 there are 4 cores, and distributing the collection of frame data and calculations in a good way would improve the performance. Do you have any thoughts or advice about that?

    • Adrian Rosebrock December 23, 2015 at 6:35 am #

      If you’re using the Pi 2, then distributing the frame gathering to a separate thread will definitely improve performance. In fact, that’s exactly what next week’s blog post is going to be about 😉

  6. Sean McLeod December 22, 2015 at 4:06 pm #

    Hi Adrian

    In the single threaded case you’re limited to 30fps because that is the framerate of the camera in this case and you’re not really achieving 143fps in the multi-threaded case since you’re simply processing the same frame multiple times. The 143fps is really a measure of the amount of time the imutils.resize() takes, i.e. ~6.9ms. So the comparison between 30fps and 143fps isn’t really a fair and accurate comparison.

    I recently had a project where we ended up using the same approach, i.e. grabbing the webcam frames on a secondary thread and doing the OpenCV processing on the main python thread. However this wasn’t in order to increase the fps processing rate, rather it was to minimize the latency of our image processing. We were measuring aircraft control positions by visually tracking targets on the aircraft controls to record their position during flight and needed to synchronize the recording of the control positions with another instrument we use to record the aircraft’s attitude etc.

    So we needed as little latency as possible in order to keep the control positions synchronized with the other aircraft data we were recording. We didn’t need a high sampling rate, i.e. we were happy with 10Hz as long as the latency was as small as possible.

    Our camera output at 1080@30Hz and our image processing (mainly Hough circles) took longer than the frame period of ~33ms and if we read the camera frames on the main thread the OS would buffer roughly 5 frames if we didn’t read them fast enough. So going with the multithreaded approach we could always retrieve the latest frame as soon as our image processing was complete, so at a lower rate than the camera rate but with minimizing latency.

    Cheers

    • Adrian Rosebrock December 23, 2015 at 6:34 am #

      Indeed, you’re quite right. The 143 FPS isn’t a fair comparison. I was simply trying to drive home the point (as you suggested) of the latency. Furthermore, simply looping over a set of frames (without doing any processing besides resizing) isn’t exactly fair of what a real-world video processing would look like either.

      • Sean McLeod December 24, 2015 at 3:36 pm #

        Hi Adrian

        But I think that overall you’ve made it more confusing mixing up fps and latency. If your main point that you were trying to drive home is the win in terms of latency then that should be in the title, in the examples your provide etc.

        Sort of like mixing up a disk’s transfer rate and latency.

        Cheers

        • Adrian Rosebrock December 25, 2015 at 12:33 pm #

          I’ll be sure to make the point more clear in the next post on Raspberry Pi FPS and latency.

    • dassgo July 12, 2016 at 5:47 am #

      Hi Sean,
      I have the same problem as yours. I need in my project the minimum latency as possible. Due to the opencv internal buffer I have to use threads. I am working with several 8Mp cameras, each of them with its own thread. But using threads then I face the “select timeout” problem. Did you have the same problem? By the way, did you use locks to access the variable “frame”?

  7. Ross Milligan December 23, 2015 at 4:31 am #

    Nice tutorial – thanks for the mention!

    I was experiencing a subtly different problem with webcam frames in my apps, which led me to use threading. I was not so concerned with the speed of reading frames, more that the frames were getting progressively stale (after running app for a minute or so on Raspberry Pi, the frame served up was a number of frames behind the real world). Perhaps my app loop was too slow interrogating the webcam and was being served up a cached image? By using a thread I was able to interrogate the webcam constantly and keep the frame fresh.

    • Adrian Rosebrock December 23, 2015 at 6:31 am #

      Thanks for the tip Ross — you’re definitely the inspiration behind where this post came from.

    • Sean McLeod December 24, 2015 at 3:19 pm #

      Hi Ross

      See my comment above, we saw the same issue as you on the ODROID we were using. On our system it looked like the OS/v4l/OpenCV stack was maintaining a buffer on the order of 5 frames if we didn’t retrieve frames as fast as the camera’s frame rate, which meant we ended up with an extra latency on the order of 5x33ms = 165ms.

      So we ended up pulling the usb web camera images at the camera’s frame rate on a secondary thread so that we were always processing the latest web camera image even though overall our main video processing only ran at 10fps.

      We initially tried to see if there was a way to prevent this buffering but weren’t able to find a way to disable it, so we ended up with the multi-threading approach.

      Cheers

  8. Chris Viehoff December 23, 2015 at 4:10 pm #

    I installed imutils but still get this error when I run the program:
    ImportError: No module named ‘imutils.video’
    Running python 3.4 and opencv2

    • Adrian Rosebrock December 23, 2015 at 6:41 pm #

      Make sure you have the latest version of imutils:

      $ pip install --upgrade imutils --no-cache-dir

      • Jesse June 4, 2016 at 1:35 pm #

        I have the latest version installed, but I’m still getting the error. Please help if you can. All I need is something simple that can display an image on the screen from a USB webcam, and can start automatically at boot. I am running a Raspberry Pi Zero and Raspbian Jessie. The webcam is a rather cheap GE model, with YUYV 640×480 support. I have already tried multiple programs, but only luvcview gave a usable picture, and it broke itself when I attempted an auto-start script. Any help at all would be useful! Thank you in advance!

        • Adrian Rosebrock June 5, 2016 at 11:30 am #

          I detail how to create an autostart script here. This should also take care of the error messages you’re getting since you’ll be accessing the correct Python virtual environment where ideally you have installed the imutils package.

  9. Surya February 5, 2016 at 5:17 am #

    Thank you for a great tutorial. I am working with applications like SURF, Marker detection etc. but i need to increase the FPS for the above mentioned applications. Will this approach work with OpenCV C++? If yes, how?

    • Adrian Rosebrock February 5, 2016 at 9:18 am #

      Yes, the same approach will work for C++. The exact code will vary, but you should do some research on creating and managing separate threads in C++.

  10. David February 6, 2016 at 2:34 am #

    This is a classic producer/consumer problem.
    Here we have a camera (the producer) that is delivering frames at a constant rate in real time
    and a frame reading program (the consumer) that is not processing the frames in real time.
    In this case we must have a frame queue with a frame count. If the frame count is > 0 then the consumer consumes a frame – reducing the frame count. If the frame count is zero then the consumer must wait until the frame count rises above zero before consuming a frame.
    There can be any number of consumers but the producer must serialize access to the frame queue (using locks/semaphores).
    I’m wondering if Adrian has fully covered this aspect in his book and tutes…

    • Adrian Rosebrock February 6, 2016 at 9:55 am #

      I don’t cover this in detail inside Practical Python and OpenCV, but it’s something that I do plan on covering in future blog posts.

      Unless you want to process every frame that is read by the camera, I actually wouldn’t use a producer-consumer relationship for this. Otherwise, as you suggested, if your buffer is > 0, then you’ll quickly build up a backlog of frames that need to be processed, making the approach unsuitable for real-time applications.

  11. Bram June 6, 2016 at 11:09 am #

    I don’t see a performance increase on Windows 7. Process Explorer (confirmed by the CPU usage graph in Task Manager) confirms cv2 is already divided into multiple threads??

    • Adrian Rosebrock June 7, 2016 at 3:22 pm #

      I’m not a Windows user, so I’m not entirely sure how Windows handles threading. The Python script won’t be divided across multiple process, but a new thread should be spawned. Again, I haven’t touched a Windows system in over 9+ years, so I’m probably not the right person to ask regarding this question.

  12. Stijn July 3, 2016 at 5:27 am #

    Hello,

    Thank you very much for sharing this information.
    One drawback of this method is as mentioned in the comments that you kind of lose the timestamp / counter information when a frame was shot by the camera.

    Now the funny thing is a timestamp is provided if you use the capture_continuous function and provide a string as argument.
    So diving a bit deeper in the code of this function, don’t you think we could just add a timestamp / counter in the “else” condition? It might make the code that later processes these images a bit more efficient since a mechanism can be made to avoid processing the same frame twice.
    Not sure if you have an opinion on this one / ever experimented with it 🙂 ?

    http://picamera.readthedocs.io/en/release-1.10/_modules/picamera/camera.html#PiCamera.capture_continuous

  13. Aaron July 11, 2016 at 9:10 am #

    hey is there a use for the variable “grabbed”? I can’t see it being used anywhere… I might be misunderstanding a lot though!

    • Adrian Rosebrock July 11, 2016 at 10:12 am #

      The stream.read function returns a 2-tuple consisting of the frame itself along with a boolean indicating if the frame was successfully read or not. You can use the grabbed boolean to determine if the frame was successfully read or not.

  14. Kevin July 15, 2016 at 7:13 pm #

    Hello Adrian,

    Thank you for the very useful blog post. You explain everything very clearly, especially to someone very new to python and image processing. Have you ever made a blog post regarding packages/module hierarchy? In some of your other blog posts you explicitly tell us how to name a file, but I’m confused about how the FPS and WebcamVideoStream classes are defined in the directory. More specifically, what names should those files have (is there a naming convention?) Where in the project are they typically located? How are they pulled “from imutils.video”? I know these are very basic questions, but I haven’t found a resource online that explains this clearly.

    Thanks again for your work.

    • Adrian Rosebrock July 18, 2016 at 5:22 pm #

      Hey Kevin — this is less of a computer vision question, but more of a Python question. I would suggest investing some time reading about Python project structure and taking a few online Python courses. I personally like the RealPython.com course. The course offered by CodeAcademy is also good.

  15. yair levi July 24, 2016 at 12:00 pm #

    hi!

    Thanks for the great tutorial.
    Is there any way to make the .read() method block until there is a new frame to return?
    If not, is there any efficient way to determine if the old frame is the same as the new frame so I can ignore it?

    thanks again

    • Adrian Rosebrock July 27, 2016 at 2:38 pm #

      As far as I know, you can’t block the .read() method until a new frame is read. However, a simple, efficient method to determine if a new frame is ready is to simply hash the frame and compare hashes. Here is an example of hashing NumPy arrays.

  16. Peni Jitoko August 2, 2016 at 11:48 pm #

    Hi,

    I followed up on the tutorial, after executing “python picamera_fps_demo.py” the results were:

    [INFO] sampling frames from ‘picamera’ module…
    [INFO] elapsed time: 3.57
    [INFO] approx. FPS: 28.32
    [INFO] sampling THREADED frames from ‘picamera’ module…
    [INFO] elapsed time: 0.48
    [INFO] approx. FPS: 208.95

    but when I ran “python picamera_fps_demo.py --display 1” the results were:

    [INFO] sampling frames from ‘picamera’ module…
    [INFO] elapsed time: 8.54
    [INFO] approx. FPS: 11.83
    [INFO] sampling THREADED frames from ‘picamera’ module…
    [INFO] elapsed time: 6.54
    [INFO] approx. FPS: 15.29

    So it ran slower. I’m not sure what the real issue is, but I’m using a Pi 3 fresh out of the box with a 5V 1A power source, and I connected the Pi 3 to the laptop using Xming and putty via LAN cable.

    • Adrian Rosebrock August 4, 2016 at 10:18 am #

      Looking at your results, it seems that in both cases the threaded version obtained faster FPS processing rate. 15.29 FPS is faster than 11.83 FPS and 208.95 FPS is faster than 28.32 FPS. So I’m not sure what you mean by running slower?

      • Peni Jitoko August 4, 2016 at 7:32 pm #

        Thanks for the reply Adrian. When I see the video feed from the Pi camera on my laptop, there is a large delay; when I wave my hand over the camera it takes almost 2 to 3 seconds before I see my hand in the video feed. Is this delay normal or is it an issue?

        • Adrian Rosebrock August 7, 2016 at 8:23 am #

          So you’re accessing the video feed via X11 forwarding or VNC? That’s the problem then. Your Pi is reading the frames just fine, it’s just the network overhead of sending the frames from the Pi to your VNC or X11 viewer. If you were to execute the script on your Pi with a keyboard + HDMI monitor, the frames would look much more fluid.

          • Peni Jitoko August 8, 2016 at 5:57 pm #

            Hi Adrian,

            I’m accessing the video feed via X11 forwarding, thanks for helping me identify what the problem really is. A lot of your tutorials have provided me with the basic foundation for my project; there is no other place I would recommend a beginner like me to start off learning image and video processing.

          • Adrian Rosebrock August 8, 2016 at 6:35 pm #

            Great job resolving the issue Peni, I’m glad it was a simple fix. And thank you for the kind words, I’m happy I can help out 🙂

  17. Samuel August 9, 2016 at 3:37 am #

    Hi Adrian, thank you for the post!
    I have a question that is currently bothering me a bit, that is: when we use multi threads as this post’s approach, does that mean we are using multi cores? or are we just using single core with 2 threads? cuz I’m currently up to some real-time project and trying to do it using multi threads, what surprised me is that the number of frames i can process per second actually decreased a bit, cuz i thought if i’m using a different core for reading in the video i should at least save the reading time and be able to process a bit more frames per second right?
    Thanks again for your help!

    • Adrian Rosebrock August 10, 2016 at 9:31 am #

      Hey Samuel — we are using a single core with 2 threads. Typically, when you work with I/O operations it’s advisable to utilize threads since most of your time is spent waiting for new frames to be read/written. If you’re looking to parallelize a computation, spread it across multiple cores and processors.

  18. Shirosh August 14, 2016 at 2:35 pm #

    Hi.! How to use this python code with other image processing task. should we run parallel these two codes using terminal? how to do this? please help me.
    Anyway your blog is the best

    • Adrian Rosebrock August 16, 2016 at 1:09 pm #

      Hey Shirosh — I’m not sure what you mean by “use this Python code with other image processing task”. Can you please elaborate?

  19. Cassiano Rabelo September 8, 2016 at 10:44 pm #

    Hello Adrian,
    I’ve noticed that cv2.imshow on OS X is muuuuch slower than its counterpart on windows.

    The following benchmark runs in 15 seconds on a virtualised windows inside my mac, but it takes as long as 2 minutes to run on OS X itself!
    https://db.tt/9FklKUpJ

    Do you know what could be the reason and possible fix? Thanks a lot!
    Best,
    Cassiano

    • Adrian Rosebrock September 9, 2016 at 10:54 am #

      That’s quite strange, I’m not sure why that may be. I don’t use Windows, but I’ve never noticed an issue with cv2.imshow between various operating systems.

  20. Nam Taehun November 6, 2016 at 12:54 am #

    Hi, Adrian.
    I have a question.
    Where can I save ‘fps_demo.py’ ?

    • Adrian Rosebrock November 7, 2016 at 2:50 pm #

      You can save it anywhere on your system that you would like. I would suggest using the “Downloads” section of this tutorial to download the code instead though.

  21. mazyar November 28, 2016 at 4:04 am #

    hi adrian
    thank you for this good project
    when i run this project
    after the fps increase it shows a core dump segmentation fault error. Can you help me?

    • Adrian Rosebrock November 28, 2016 at 10:17 am #

      If you’re getting a segmentation fault then the threading is likely causing an issue. My best guess is that the stream object is being destroyed while it’s in the process of reading the next frame. What OS are you using?

  22. Yakup Emre December 12, 2016 at 3:40 am #

    Hi,

    I am trying to use a Ps3 EYE Camera on ubuntu for my OpenCv project. This camera supports 640×480 up to 60fps and 320×240 resolution up to 187fps. I am sure you know what I’m talking about. I can set each one of these values on windows with the CodeLaboratories driver. But on ubuntu, I use the ov534.c driver and QT v4L2 software. Even though I’m seeing all of the configuration settings of this camera in v4L2, I can’t set over 60fps. I can set any value under 60fps. Do you have an idea about this problem? What can I do to set at least 120fps?

    • Adrian Rosebrock December 12, 2016 at 10:26 am #

      Unfortunately, I don’t have any experience with the PS3 Eye Camera, so I’m not sure what the proper parameter settings/values are for it. I hope another PyImageSearch reader can help you out!

  23. Jon R February 4, 2017 at 11:41 pm #

    Hey Adrian,
    I’m working on an image processing pipeline with live display that runs at 75FPS without cv2.imshow() and 13 FPS with no screen output (OS-X FPS #s). I need a live output to the screen, and I want to maximize framerate. I already tried using Pygame’s blit command as an imshow() replacement, but got about the same speed. Are you aware of a module/command that will get an efficient screen refresh? If I’m lucky there will be an approach that will transfer from OS X to Raspberry Pi without too many hitches.
    Thanks.

    • Adrian Rosebrock February 7, 2017 at 9:23 am #

      Hey Jon — when it comes to displaying frames to screen I’ve always used cv2.imshow. Unfortunately I don’t know of any other methods to speedup the process. Best of luck with the project!

  24. Hanifudin March 22, 2017 at 4:35 am #

    Hello adrian.. i recently mixed this code with image filtering.. but the image has quite a delay, the image is like quite frozen.. not like cv2.videocapture(-1). What happened..? I dont know this.. Can you explain it to me..?
    and if i use only serial usart communication i/o to send x y coordinates out.. where must the code be changed..? Thanks

    • Adrian Rosebrock March 22, 2017 at 8:34 am #

      Hey Hanifudin — I’m honestly not sure what you are trying to ask. Can you please try to elaborate on your question? I don’t understand.

      • Hanifudin March 22, 2017 at 10:47 pm #

        1. I send coordinate x and y target with serial pin. But when i using threading,, raspberry pi cannot send data via serial..so, how i activate serial tx rx in threading mode..?

        2. In threading mode,, the image is quite freeze,, like paused and lagging (not smooth)..it doesnt like non threading mode.. so how i make it smooth like non threading mode..?

        Im sorry i’ve bad english

        • Adrian Rosebrock March 23, 2017 at 9:25 am #

          Unfortunately, I don’t have much experience sending serial data via threading in this particular use case. I would try to debug if this is a computer vision problem by removing any computer vision code and ensure your data is being properly sent. As for the slow/freezing video stream, I’m not sure what the exact problem is. You might be overloading your hardware or there might be a bug in your code. It’s impossible for me to know without seeing your machine.

          • Hanifudin March 23, 2017 at 7:22 pm #

            Oh,, i know how why captured image is quite freeze,, im using usb hub to connect webcam,, wifi adapter and wireless mouse.. when i connect without usb hub it solved..

            And sending serial data in threading mode is waiting to solved

Trackbacks/Pingbacks

  1. Increasing Raspberry Pi FPS with Python and OpenCV - PyImageSearch - December 28, 2015

    […] Last week we discussed how to: […]

  2. Unifying picamera and cv2.VideoCapture into a single class with OpenCV - PyImageSearch - January 4, 2016

    […] blog, we have discussed how to use threading to increase our FPS processing rate on both built-in/USB webcams, along with the Raspberry Pi camera […]

  3. Real-time panorama and image stitching with OpenCV - PyImageSearch - January 25, 2016

    […] the past month and a half, we’ve learned how to increase the FPS processing rate of builtin/USB webcams and the Raspberry Pi camera module. We also learned how to unify access to both USB webcams and […]

  4. Faster video file FPS with cv2.VideoCapture and OpenCV - PyImageSearch - February 6, 2017

    […] I’ve mentioned in previous posts, the .read  method is a blocking operation — the main thread of your Python + OpenCV […]
