Faster video file FPS with cv2.VideoCapture and OpenCV


Have you ever worked with a video file via OpenCV’s cv2.VideoCapture  function and found that reading frames just felt slow and sluggish?

I’ve been there — and I know exactly how it feels.

Your entire video processing pipeline crawls along, unable to process more than one or two frames per second — even though you aren’t doing any type of computationally expensive image processing operations.

Why is that?

Why, at times, does it seem to take an eternity for cv2.VideoCapture  and the associated .read  method to poll another frame from your video file?

The answer is almost always video compression and frame decoding.

Depending on your video file type, the codecs you have installed, and the physical hardware of your machine, much of your video processing pipeline’s time can actually be spent reading and decoding the next frame from the file.

That’s just computationally wasteful — and there is a better way.

In the remainder of today’s blog post, I’ll demonstrate how to use threading and a queue data structure to improve your video file FPS rate by over 52%!


When working with video files and OpenCV you are likely using the cv2.VideoCapture  function.

First, you instantiate your cv2.VideoCapture  object by passing in the path to your input video file.

Then you start a loop, calling the .read  method of cv2.VideoCapture  to poll the next frame from the video file so you can process it in your pipeline.

The problem (and the reason why this method can feel slow and sluggish) is that you’re both reading and decoding the frame in your main processing thread!

As I’ve mentioned in previous posts, the .read  method is a blocking operation — the main thread of your Python + OpenCV application is entirely blocked (i.e., stalled) until the frame is read from the video file, decoded, and returned to the calling function.

By moving these blocking I/O operations to a separate thread and maintaining a queue of decoded frames we can actually improve our FPS processing rate by over 52%!

This increase in frame processing rate (and therefore our overall video processing pipeline) comes from dramatically reducing latency — we don’t have to wait for the .read  method to finish reading and decoding a frame; instead, there is always a pre-decoded frame ready for us to process.

To accomplish this latency decrease our goal will be to move the reading and decoding of video file frames to an entirely separate thread of the program, freeing up our main thread to handle the actual image processing.

But before we can appreciate the faster, threaded method to video frame processing, we first need to set a benchmark/baseline with the slower, non-threaded version.

The slow, naive method to reading video frames with OpenCV

The goal of this section is to obtain a baseline on our video frame processing throughput rate using OpenCV and Python.

To start, open up a new file, name it , and insert the following code:

Lines 2-6 import our required Python packages. We’ll be using my imutils library, a series of convenience functions to make image and video processing operations easier with OpenCV and Python.

If you don’t already have imutils  installed or if you are using a previous version, you can install/upgrade imutils  by using the following command:
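Assuming a standard pip setup (substitute pip3  or your virtual environment’s pip as appropriate):

```shell
# install or upgrade the imutils package from PyPI
pip install --upgrade imutils
```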

Lines 9-12 then parse our command line arguments. We only need a single switch for this script, --video , which is the path to our input video file.

Line 15 opens a pointer to the --video  file using the cv2.VideoCapture  class while Line 16 starts a timer that we can use to measure FPS, or more specifically, the throughput rate of our video processing pipeline.

With cv2.VideoCapture  instantiated, we can start reading frames from the video file and processing them one-by-one:

On Line 19 we start looping over the frames of our video file.

A call to the .read  method on Line 21 returns a 2-tuple containing:

  1. grabbed : A boolean indicating if the frame was successfully read or not.
  2. frame : The actual video frame itself.

If grabbed  is False  then we know we have reached the end of the video file and can break from the loop (Lines 25 and 26).

Otherwise, we perform some basic image processing tasks, including:

  1. Resizing the frame to have a width of 450 pixels.
  2. Converting the frame to grayscale.
  3. Drawing the text on the frame via the cv2.putText  method. We do this because we’ll be using the cv2.putText  function to display our queue size in the fast, threaded example below and want to have a fair, comparable pipeline.

Lines 40-42 display the frame to our screen and update our FPS counter.

The final code block handles computing the approximate FPS/frame rate throughput of our pipeline, releasing the video stream pointer, and closing any open windows:
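The original listing is not reproduced in this copy of the post, so here is a minimal sketch of the loop structure just described. To keep the sketch runnable anywhere, a tiny synthetic capture object stands in for cv2.VideoCapture ; with a real video you would substitute cv2.VideoCapture(args["video"])  and put the imutils/OpenCV calls where the comments indicate:

```python
import time


class SyntheticCapture:
    """Stand-in for cv2.VideoCapture: .read() returns (grabbed, frame)."""

    def __init__(self, total_frames):
        self.total = total_frames
        self.i = 0

    def read(self):
        # a real capture blocks here while the next frame is decoded
        if self.i >= self.total:
            return (False, None)  # end of the "file"
        self.i += 1
        return (True, self.i)     # a real capture returns a NumPy image


def process_video(cap):
    # loop over the frames, blocking on .read() every iteration -- this is
    # exactly the pattern that stalls the main thread on a real video file
    count = 0
    start = time.time()
    while True:
        (grabbed, frame) = cap.read()
        if not grabbed:  # reached the end of the video file
            break
        # with OpenCV/imutils installed you would process the frame here:
        #   frame = imutils.resize(frame, width=450)
        #   frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        #   cv2.putText(...); cv2.imshow("Frame", frame); cv2.waitKey(1)
        count += 1
    return count, time.time() - start


frames, secs = process_video(SyntheticCapture(100))
print("[INFO] processed %d frames in %.2f seconds" % (frames, secs))
```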

To execute this script, be sure to download the source code + example video to this blog post using the “Downloads” section at the bottom of the tutorial.

For this example we’ll be using the first 31 seconds of the Jurassic Park trailer (the .mp4 file is included in the code download):

Let’s go ahead and obtain a baseline for frame processing throughput on this example video:
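The script’s file name was stripped from this copy of the post; going by the comments below it was read_frames_slow, so the baseline run presumably looked like this (adjust the name to whatever you saved the file as):

```shell
# script name taken from the comments below; hypothetical if yours differs
python read_frames_slow.py --video videos/jurassic_park_intro.mp4
```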

Figure 1: The slow, naive method to read frames from a video file using Python and OpenCV.

As you can see, processing all the frames of the 31 second video clip takes approximately 47 seconds, an FPS processing rate of 20.21.

These results imply that it’s actually taking longer to read and decode the individual frames than the actual length of the video clip!

To see how we can speed up our frame processing throughput, take a look at the technique I describe in the next section.

Using threading to buffer frames with OpenCV

To improve the FPS processing rate of frames read from video files with OpenCV we are going to utilize threading and the queue data structure:

Figure 2: An example of the queue data structure. New data is enqueued to the back of the list while older data is dequeued from the front of the list. (source: Wikipedia)

Since the .read  method of cv2.VideoCapture  is a blocking I/O operation we can obtain a significant speedup simply by creating a separate thread from our main Python script that is solely responsible for reading frames from the video file and maintaining a queue.

Since Python’s Queue data structure is thread safe, much of the hard work is done for us already — we just need to put all the pieces together.

I’ve already implemented the FileVideoStream class in imutils but we’re going to review the code so you can understand what’s going on under the hood:

Lines 2-4 handle importing our required Python packages. The Thread  class is used to create and start threads in the Python programming language.

We need to take special care when importing the Queue  data structure as the name of the queue package is different based on which Python version you are using (Lines 7-12).
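A version-aware import along these lines handles the rename (the class is identical in both versions; only the module name changed):

```python
import sys

# the Queue class lives in the "Queue" module on Python 2 and in the
# "queue" module on Python 3, so branch on the interpreter version
if sys.version_info >= (3, 0):
    from queue import Queue
else:
    from Queue import Queue
```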

We can now define the constructor to FileVideoStream :

Our constructor takes a single required argument followed by an optional one:

  • path : The path to our input video file.
  • queueSize : The maximum number of frames to store in the queue. This value defaults to 128 frames, but depending on (1) the frame dimensions of your video and (2) the amount of memory you can spare, you may want to raise/lower this value.

Line 18 instantiates our cv2.VideoCapture  object by passing in the video path .

We then initialize a boolean to indicate if the threading process should be stopped (Line 19) along with our actual Queue  data structure (Line 23).

To kick off the thread, we’ll next define the start  method:

This method simply starts a thread separate from the main thread. This thread will call the .update  method (which we’ll define in the next code block).

The update  method is responsible for reading and decoding frames from the video file, along with maintaining the actual queue data structure:

On the surface, this code is very similar to our example in the slow, naive method detailed above.

The key takeaway here is that this code is actually running in a separate thread — this is where our actual FPS processing rate increase comes from.

On Line 34 we start looping over the frames in the video file.

If the stopped  indicator is set, we exit the thread (Lines 37 and 38).

If our queue is not full we read the next frame from the video stream, check to see if we have reached the end of the video file, and then update the queue (Lines 41-52).

The read  method will handle returning the next frame in the queue:

We’ll create a convenience function named more  that will return True  if there are still more frames in the queue (and False  otherwise):

And finally, the stop  method will be called if we want to stop the thread prematurely (i.e., before we have reached the end of the video file):
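The individual listings are not reproduced in this copy of the post, so here is a minimal sketch that puts the constructor, start , update , read , more , and stop  methods described above together. Two liberties are taken, both flagged in comments: the constructor also accepts a pre-initialized capture-style object (a modification Adrian suggests in the comments below), and a tiny sleep is added when the queue is full (the fix for the Python 3 slowdown several readers report in the comments):

```python
import time
from threading import Thread

try:  # Python 3
    from queue import Queue
except ImportError:  # Python 2
    from Queue import Queue


class FileVideoStream:
    def __init__(self, path, queueSize=128):
        # NOTE: the original constructor only takes a file path; also
        # accepting any object with a VideoCapture-style .read() method is
        # a liberty taken here, following Adrian's suggestion in the
        # comments (it also makes the class easy to test without OpenCV)
        if isinstance(path, str):
            import cv2  # imported lazily so the sketch loads without OpenCV
            self.stream = cv2.VideoCapture(path)
        else:
            self.stream = path
        self.stopped = False
        # the queue of decoded frames, ready for the main thread to consume
        self.Q = Queue(maxsize=queueSize)

    def start(self):
        # launch the daemon thread that reads and decodes frames
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()
        return self

    def update(self):
        while True:
            if self.stopped:  # stop() was called
                return
            if not self.Q.full():
                (grabbed, frame) = self.stream.read()
                if not grabbed:  # reached the end of the video file
                    self.stop()
                    return
                self.Q.put(frame)
            else:
                # NOTE: not in the original -- this tiny sleep keeps the
                # producer from busy-waiting on a full queue, the Python 3
                # slowdown fix Wim Valcke describes in the comments below
                time.sleep(0.001)

    def read(self):
        # return the next (already decoded) frame in the queue
        return self.Q.get()

    def more(self):
        # True while there are still frames left in the queue
        return self.Q.qsize() > 0

    def stop(self):
        # signal the update thread to exit
        self.stopped = True
```

With a real file, usage is fvs = FileVideoStream(args["video"]).start() , after which the main loop calls fvs.read()  instead of touching cv2.VideoCapture  directly.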

The faster, threaded method to reading video frames with OpenCV

Now that we have defined our FileVideoStream  class we can put all the pieces together and enjoy a faster, threaded video file read with OpenCV.

Open up a new file, name it , and insert the following code:

Lines 2-8 import our required Python packages. Notice how we are using the FileVideoStream  class from the imutils  library to facilitate faster frame reads with OpenCV.

Lines 11-14 parse our command line arguments. Just like the previous example, we only need a single switch, --video , the path to our input video file.

We then instantiate the FileVideoStream  object and start the frame reading thread (Line 19).

Line 23 then starts the FPS timer.

Our next section handles reading frames from the FileVideoStream , processing them, and displaying them to our screen:

We start a while  loop on Line 26 that will keep grabbing frames from the FileVideoStream  queue until the queue is empty.

For each of these frames we’ll apply the same image processing operations, including: resizing, conversion to grayscale, and displaying text on the frame (in this case, our text will be the number of frames in the queue).

The processed frame is displayed to our screen on Lines 40-42.

The last code block computes our FPS throughput rate and performs a bit of cleanup:
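Since the driver listing is likewise not preserved here, the sketch below shows the shape of its consumer loop. A small stub stream stands in for imutils.video.FileVideoStream  so the snippet runs without OpenCV, and the cv2-specific steps are marked in comments:

```python
import time
from queue import Queue
from threading import Thread


class StubVideoStream:
    """Minimal stand-in for imutils.video.FileVideoStream; with the real
    class you would write fvs = FileVideoStream(args["video"]).start()
    followed by time.sleep(1.0) to let the queue fill."""

    def __init__(self, frames):
        self.Q = Queue(maxsize=128)
        self.frames = frames

    def start(self):
        self.thread = Thread(target=self._fill)
        self.thread.daemon = True
        self.thread.start()
        return self

    def _fill(self):
        # producer: enqueue every frame, then exit
        for f in self.frames:
            self.Q.put(f)

    def more(self):
        return self.Q.qsize() > 0

    def read(self):
        return self.Q.get()


fvs = StubVideoStream(range(100)).start()
fvs.thread.join()  # in the real driver, the one-second sleep plays this role

# the consumer loop described above: keep pulling pre-decoded frames from
# the queue until it is empty
start = time.time()
processed = 0
while fvs.more():
    frame = fvs.read()
    # with OpenCV you would resize, convert to grayscale, draw the queue
    # size with cv2.putText, and cv2.imshow the frame here
    processed += 1
print("[INFO] processed %d frames in %.2f seconds" % (processed, time.time() - start))
```

One caveat, raised by Phil Birch in the comments below: if the consumer ever outpaces the producer, a queue-emptiness test can report the end of the video prematurely, which is why the original driver sleeps for a second after start()  to let the queue fill.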

To see the results of the  script, make sure you download the source code + example video using the “Downloads” section at the bottom of this tutorial.

From there, execute the following command:
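The script name was stripped from this copy of the post; the comments below refer to it as read_frames_fast, so the invocation presumably looked like this (adjust the name to whatever you saved the file as):

```shell
# script name taken from the comments below; hypothetical if yours differs
python read_frames_fast.py --video videos/jurassic_park_intro.mp4
```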

Figure 3: Utilizing threading with cv2.VideoCapture and OpenCV leads to higher FPS and a larger throughput rate.

As we can see from the results we were able to process the entire 31 second video clip in 31.09 seconds — that’s an improvement of 34% from the slow, naive method!

The actual frame throughput processing rate is much faster, clocking in at 30.75 frames per second, an improvement of 52.15%.

Threading can dramatically improve the speed of your video processing pipeline — use it whenever you can.

What about built-in webcams, USB cameras, and the Raspberry Pi? What do I do then?

This post has focused on using threading to improve the frame processing rate of video files.

If you’re instead interested in speeding up the FPS of your built-in webcam, USB camera, or Raspberry Pi camera module, please refer to these blog posts:


Summary

In today’s tutorial I demonstrated how to use threading and a queue data structure to improve the FPS throughput rate of your video processing pipeline.

By placing the call to .read  of a cv2.VideoCapture  object in a thread separate from the main Python script we can avoid blocking I/O operations that would otherwise dramatically slow down our pipeline.

Finally, I provided an example comparing threading with no threading. The results show that by using threading we can improve our processing pipeline by up to 52%.

However, keep in mind that the more steps (i.e., function calls) you make inside your while  loop, the more computation needs to be done — therefore, your actual frames per second rate will drop, but you’ll still be processing faster than the non-threaded version.





56 Responses to Faster video file FPS with cv2.VideoCapture and OpenCV

  1. Steve Goldsmith February 6, 2017 at 11:14 am #

    This didn’t work for me using a CHIP SoC. I saw exactly the same frame rate. A simpler method is to move the UVC code into a separate process (mjpg-streamer) and use a simple socket based client thus removing the need for VideoCapture. I get precise FPS and better overall performance this way. See my project which includes the Python code and performance tests.

    • Adrian Rosebrock February 7, 2017 at 9:09 am #

      Thanks for sharing Steve!

  2. ghanendra February 6, 2017 at 12:08 pm #

    Hey Adrian, another awesome tutorial, Thanks a lot.
    I have one problem: I’m working on getting frames from a camera over the network, can we use threaded frames there?
    What if we are using cv2.capture in Tkinter? How to increase fps processing in Tkinter.

    • Adrian Rosebrock February 7, 2017 at 9:08 am #

      I actually demonstrate how to use OpenCV and TKinter in this blog post. The post assumes you’re reading from a video stream, but you can swap it out for a file video easily.

      • Ghanendra February 13, 2017 at 10:11 am #

        Hi Adrian, I think you missed my first question..

        When I read frames from a video using cv2.VideoCapture, speed of reading the frames is much faster than the normal speed of the video. How to read frames from the video at constant rate?

        Hi Adrian, I’m using TCP protocol over network to receive frames, fps is very low.
        What can I do for that?

        • Adrian Rosebrock February 13, 2017 at 1:34 pm #

          OpenCV doesn’t actually “care” what the true FPS of the video is. The goal, in the case of OpenCV, is to read and process the frames as fast as possible. If you want to display frames at a constant rate you’ll need to insert time.sleep calls into the main loop. Again, OpenCV isn’t directly made for video playback.

          As for transmitting frames over a TCP protocol, that is more network overhead which will certainly reduce your overall FPS. You should look into gstreamer and FFMPEG to speed up the streaming process.

          • Ghanendra February 16, 2017 at 10:51 pm #

            Thank a lot Adrian.

  3. Luis February 6, 2017 at 1:01 pm #

    Hey Adrian, good work as always, i have implemented this in a real time application in order to achieve some LPR cameras, however, i have to stream the feed to a web server. My camera has h264 support (logitech c920) but i haven’t found how to pull the raw h264 frames from camera using opencv. Do you have any experience with this? I’m sure this would increase even more my fps since no decoding-enconding would be requiered.

  4. Florent February 7, 2017 at 4:44 am #

    Hi Adrian,
    I think there may be a problem with the python3 implementation:

    When i use python 2.7.9, i got:

    python --video videos/jurassic_park_intro.mp4
    [INFO] elasped time: 8.21
    [INFO] approx. FPS: 116.42

    python --video videos/jurassic_park_intro.mp4

    [INFO] starting video file thread…
    [INFO] elasped time: 6.59
    [INFO] approx. FPS: 145.07

    But with Python 3.4.2, i got:

    python --video videos/jurassic_park_intro.mp4
    [INFO] elasped time: 8.20
    [INFO] approx. FPS: 116.62

    python --video videos/jurassic_park_intro.mp4

    [INFO] starting video file thread…
    [INFO] elasped time: 31.33
    [INFO] approx. FPS: 30.51

    • Adrian Rosebrock February 7, 2017 at 9:38 am #

      Hi Florent — I just double-checked on my system. My Python 2.7 “fast” method does seem to be slightly faster than the Python 3.5 “fast” method. Perhaps there is a difference in the Queue data structure between Python versions that I am not aware of (or maybe the threading?).

      However, I am not able to replicate your slower method being substantially speedier than the fast, threaded method (Python 3.5):

      $ python --video videos/jurassic_park_intro.mp4
      [INFO] elasped time: 44.06
      [INFO] approx. FPS: 21.70

      $ python --video videos/jurassic_park_intro.mp4
      [INFO] starting video file thread…
      [INFO] elasped time: 38.52
      [INFO] approx. FPS: 24.82

      Regardless, it does seem like the Python 3 version is slower. This might be due to the Python 3 + OpenCV 3 bindings or a difference in the Queue data structure that I am not aware of.

      • Frederik Kratzert February 9, 2017 at 9:42 am #

        Hey Adrian and Florent,

        I tried the same on two different environments (Ubuntu 16.04 with python 3.5, Windows 10 with python 3.5) and got comparable results to Florent. The fast version needs approx 4 times as long as the slow version. The numbers are pretty comparable to the ones of Florent.

        I noticed, that once the video has reached the end and the Queue function is not loading more images, the fps rate increases dramatically (like i suppose it should be normally with the threading).

        Any other thoughts?

        • Adrian Rosebrock February 10, 2017 at 12:21 pm #

          This must be a Python 3 versioning difference then. I’ll have to do some research and see if there was a change in how Python 3 handles either the Queue data structure or threading.

    • Matheus March 28, 2017 at 11:38 am #

      Same thing here:
      Python 3.6.0
      OpenCV 3.2.0-dev

      python3 --video videos/jurassic_park_intro.mp4
      [INFO] starting video file thread…
      [INFO] elasped time: 64.48
      [INFO] approx. FPS: 14.83

      python3 --video videos/jurassic_park_intro.mp4
      [INFO] elasped time: 29.66
      [INFO] approx. FPS: 32.23

      That is very weird. The “slow” version is more than 2x faster. Quite Ironic!

      • Adrian Rosebrock March 28, 2017 at 12:47 pm #

        Indeed! It’s certainly a Python 3 specific issue.

        • Wim Valcke April 22, 2017 at 5:39 am #

          Hi Adrian,

          I have a solution to the problem with python3.
          The reason is that the stream reader in the thread is continuously trying to push something into the queue in a direct, tight while loop. On the other side, the main thread is trying to get data from the queue. Since locking is necessary, only one of the video thread or the main thread can access the queue at a time.
          As the video thread has less work than the main thread, it’s continuously trying to push data the moment a free slot appears in the queue. But this limits the main thread’s access to get data out of it.
          Just adding a small time.sleep(0.001) inside the while loop at the beginning gives a little breathing room for the main thread to get data of the queue.
          Now we have a different story. On my system the slow version

          [INFO] elasped time: 11.11
          [INFO] approx. FPS: 86.03

          fast version before the change

          [INFO] elasped time: 48.18
          [INFO] approx. FPS: 19.84

          The fast version after the change

          [INFO] elasped time: 8.13
          [INFO] approx. FPS: 102.72

          It’s a bit faster than the non thread version, reason is the overlapping time of reading a frame and processing a frame in different threads.

          Moral of the story is that tight while loops are never a good idea when using threading.
          This solution should work also for python 2.7.

          I just have another remark. The main thread assumes that if the queue is empty, the video is at the end. If we would have a faster consumer than a producer this is a problem.
          My suggestion is to add a method to FileVideoStream

          def running(self):
          # indicate that the thread is still running

          The current implementation changes self.stopped to True when the video file reaches its end.

          The main application can be changed like this. Also, the sleep of 1 second at the beginning (which was there to let the queue fill up) can be left out.
          You nicely see the video showing and you see the queue size filling up in a few secs to max.

          while fvs.running():
              # grab the frame from the threaded video file stream, resize
              # it, and convert it to grayscale (while still retaining 3
              # channels)
              frame =
              frame = imutils.resize(frame, width=450)
              frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              frame = np.dstack([frame, frame, frame])

          All the best,


          • Adrian Rosebrock April 24, 2017 at 9:46 am #

            Hi Wim — thanks for sharing. I would like to point out that the Queue data structure is thread safe in both Python 2.7 and Python 3. It’s also strange that the slowdown doesn’t happen with Python 2.7 — it only seems to happen with Python 3. Your time.sleep call seems to do the trick as it likely prevents the semaphores from constantly being polled, but again, it still seems to be Python 3 specific.

  5. Jon February 7, 2017 at 2:00 pm #

    Hi Adrian,

    Would there be a way to use this technique to process videos much quicker? Say I had a video that was 1 hour long, could I then use threading to process the video in less than an hour? What happens if I try to use the VideoWriter to save the frames from the thread? Will the resulting video file play at a much faster speed than the original input video?

    • Adrian Rosebrock February 10, 2017 at 2:18 pm #

      I would suggest using 1 thread to read the frames from a video file and store them in a queue. Then, create as many processes as you have cores on your CPU and have them process the frames in parallel. This will dramatically speed up your throughput.

      As for using cv2.VideoWriter, please refer to this blog post.

  6. Kenny February 8, 2017 at 11:26 am #

    Thanks Adrian! As always, there’s always something new to learn from you and fellow enthusiasts on your web! 🙂

    • Adrian Rosebrock February 10, 2017 at 12:21 pm #

      Thanks Kenny!

  7. TomKom February 9, 2017 at 8:52 am #

    Hi Adrian,
    Do you think it’s feasible to change queue length dynamically?
    I’ve got a script that does detection (1st frame) and then tracking (remaining 24 frames), in an endless loop, taking images from camera.
    The problem I’m facing is that during detection some frames are ‘lost’ and the tracker is not able to pick up the object.
    When using the queue it’s a bit more random as queue fills up quickly (quicker than the script takes them away from the front/bottom of the queue) so it’s constantly full. When detection happens, new frames are not added to the queue so they’re lost, though in a more random order now.
    Do you have any recommendations for this issue?
    Thank you,

    • Adrian Rosebrock February 10, 2017 at 2:04 pm #

      You can certainly change the queue length dynamically, but you would have to implement a thread-safe queue with a dynamic size yourself. Actually, I’m not sure about this, but if you instantiate Queue() without any data parameters it might allow for the queue to shrink and expand as needed.

  8. Brandon February 11, 2017 at 11:38 am #

    Great stuff as always, Adrian.
    In some of my code, after I run

    cap = cv2.VideoCapture(vidfile)

    I then get things like file fps or number of frames using various cap.get calls, example:
    fps = cap.get(cv2.CAP_PROP_FPS)

    Or, to start the read at a specific frame I would run:
    cap.set(cv2.CAP_PROP_POS_MSEC, startframe*1000/fps)

    Is there a way to use get and set on the videoCapture object which with this method would be in FileVideoStream?

    • Adrian Rosebrock February 13, 2017 at 1:49 pm #

      Great question Brandon. I personally don’t like using the cap.get and get.set calls as they can be extremely temperamental based on your OpenCV version, codecs installed, or if you’re working with a webcam versus file on disk.

      That said, the best way to modify this code would be to update the constructor to accept a pre-initialized cv2.VideoCapture object rather than having the constructor build it itself.

  9. Gilles February 12, 2017 at 5:01 pm #

    Hi Adrian,
    Good article as always ! Recently, I came up with the exact same need, separate processing and input/output for a small script.

    I guess that your example has a few drawbacks that I would like to point out :

    – Never use sleep function (line 20.) to synchronize your threads. It is always a source of confusion and misconception. Because you are using the queue emptiness as an exit condition, you have to fill it before. I would suggest another approach where reading is performed by the main thread and the processing by another thread. You could use the Queue.join() Queue.task_done() as a way to synchronize threads. Usually this pattern is best achieved with a last message enqueued to kill the processing thread.

    – Threading in python comes with some limitations. One of them is GIL (Global Interpreter Lock) which means that even if you are using many threads only one of them is running at once. This is a major drawback using threading in python. Obviously, if you are only relying on bindings (opencv here) you can overcome it (the GIL should be released in a c/c++ bindings). As an alternative, I would recommend the multiprocessing module.

    – Depending on the application, I would consider using a smaller queue but more processing threads. Obviously, you rely on a tradeoff between reading time and processing time.


    • Adrian Rosebrock February 13, 2017 at 1:40 pm #

      Hi Gilles, great points. I used time.sleep in this case as a matter of simplicity, but yes, you are right. As for threading versus multiprocessing, for I/O operations it’s common to use simple threads. Is there a particular reason you’re recommending multiprocessing in this case?

  10. Gilles February 13, 2017 at 6:22 pm #

    As a rule of thumb, I would always recommend using multiprocessing rather than threading. In Python (as well as some other interpreted languages), even if you are running multiple threads, only one of them goes at a time. This is called the GIL (Global Interpreter Lock). So, using many threads with pure Python code won’t bring you the expected parallel execution. On the other hand, multiprocessing relies on separate processes and interprocess communication. Each process runs its own python interpreter, allowing for parallel execution of pure python code.
    Obviously, in the example you provided, using threading for I/O is not a big issue. The I/O occurs in opencv, and GIL is released when going deep in opencv (reaching c++ native code).
    Last point, using multiprocessing can ease pure Python parallel execution easily and efficiently using multiprocessing.Pool and multiprocessing.Queue (same interface as standard Queue).


    • Adrian Rosebrock February 14, 2017 at 1:27 pm #

      Thanks for the tips Giles. I’ve always defaulted to threads for I/O heavy tasks and then processes for computation heavy tasks. I’ll start playing around with multi-processing for other tasks as well now.

  11. Raghav February 18, 2017 at 11:57 am #

    Hey Adrian! cool post as always. Just a typo though.. in the line “feeing up our main thread”, freeing is misspelt

  12. Kim Willem van Woensel Kooy February 20, 2017 at 9:00 am #

    Hey Adrian! I have learned much from your blog posts. I’m also looking for ways to speed up my VideoCapture-functions, so this post was excellent. But I’m wondering if it is possible to skip frames in a video file? I’m trying to detect motion with a simple pixel based matching (thresholding), and I want to make an if statement telling the program to skip the next 24 frames if no motion is detected. If motion is detected I want to process every frame until no motion is detected. See my problem?
    I’m using for looping through the frames.

    • Adrian Rosebrock February 22, 2017 at 1:48 pm #

      Instead of using .read() to read and decode each frame, you could actually use .grab which is much faster. This would allow you to seek N frames into the video without having to read and decode each of the previous N – 1 frames. I’ve heard it’s also possible to use the .set method as well, but I haven’t personally tried this.

  13. Chandramauli Kaushik February 28, 2017 at 2:09 pm #

    Something strange happened in my case:
    read_frames_slow was playing the video perfectly.
    read_frames_fast was playing the video slowly.
    But the opposite should happen, right!!!!!

    Here is the Terminal OUTPUT:

    (cv) Chandramaulis-MacBook-Pro:increaseFPS Mauli$ python --video videos/jurassic_park_intro.mp4
    [INFO] starting video file thread…
    2017-03-01 00:32:21.473 python[43433:374316] !!! BUG: The current event queue and the main event queue are not the same. Events will not be handled correctly. This is probably because _TSGetMainThread was called for the first time off the main thread.
    Terminated: 15

    (cv) Chandramaulis-MacBook-Pro:increaseFPS Mauli$ python --video videos/jurassic_park_intro.mp4
    2017-03-01 00:32:43.113 python[43481:374632] !!! BUG: The current event queue and the main event queue are not the same. Events will not be handled correctly. This is probably because _TSGetMainThread was called for the first time off the main thread.
    Terminated: 15

    Why is this happening?

    • Chandramauli Kaushik February 28, 2017 at 2:11 pm #

      BTW, I am using python 3.5.

      • Adrian Rosebrock March 2, 2017 at 6:57 am #

        As the other comments have suggested, this seems to be an issue with Python 3 only. For Python 2.7, the threaded version is much faster. I’m not sure why it’s so much slower for Python 3.

  14. Phil Birch March 6, 2017 at 10:21 am #

    I came across a problem with your code with my awfully slow hard drive. If another application has heavy use of the hard drive the queue drops to zero and your more() method will return False prematurely, ending the video reading before the end of the file. However there’s a quick fix. The get method from the Queue has a blocking option, and instead of testing the queue length in more(), test the self.stopped flag instead:

    def read(self):
        # return next frame in the queue
        return self.Q.get(block=True, timeout=2.0)

    def more(self):
        # return True if there are still frames in the queue
        return not self.stopped

    This also means you can remove the time.sleep(1.0) line from the main

    • Adrian Rosebrock March 6, 2017 at 3:35 pm #

      Thanks for sharing Phil!

      • ap April 28, 2017 at 5:41 pm #

        Python 3.5, I’m seeing same behavior as Chandramauli Kaushik, the FAST version is at least

        I used a different video, here are the speeds

        SLOW=60.29 FPS
        FAST=16.28 FPS

        • Adrian Rosebrock May 1, 2017 at 1:49 pm #

          As mentioned in previous comments, this is definitely a Python 3 specific issue. I’m not sure why it happens only for Python 3.5 + OpenCV 3, but not Python 2.7 + OpenCV 3 or Python 2.7 + OpenCV 2.4. My guess is that there was a change in how the Queue data structure works in Python 3, but again, I’m not entirely sure.

  15. Igor May 1, 2017 at 2:31 pm #

    Hi, Adrian! Any upcoming fix for Python 3? On 3.6 I see the same behavior everybody else has… The "slow" version uses 120% CPU + 290 MB RAM on my MacBook, while the "fast" version takes 175% CPU and 990 MB RAM (reading a .MOV container with H.264 FullHD video). Given the higher CPU/memory usage there should be some effect, but it plays 30-50% slower…

    • Adrian Rosebrock May 3, 2017 at 6:03 pm #

      As I mentioned in previous comments, I’m not sure what the issue is with Python 3 and why it takes so much longer than Python 2.7. I would appreciate it if another PyImageSearch reader could investigate this issue further as I’m pretty sure the issue is with the Queue data structure.
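
      One way an interested reader could start digging is to benchmark the two data structures in isolation, outside of any OpenCV code. This micro-benchmark (illustrative and single-threaded, so it only captures per-operation overhead, not contention) compares the lock-based queue.Queue against collections.deque:

```python
import time
from collections import deque
from queue import Queue

N = 100_000

# time queue.Queue: every put/get acquires a lock and signals waiters
q = Queue()
start = time.perf_counter()
for i in range(N):
    q.put(i)
for i in range(N):
    q.get()
queue_time = time.perf_counter() - start

# time collections.deque: append/popleft need no explicit lock in CPython
d = deque()
start = time.perf_counter()
for i in range(N):
    d.append(i)
for i in range(N):
    d.popleft()
deque_time = time.perf_counter() - start

print(f"queue.Queue: {queue_time:.3f}s  deque: {deque_time:.3f}s")
```

      Queue pays for its locking and signaling on every put/get; whether that fully explains the Python 3 slowdown reported in these comments is an open question.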

  16. Igor May 1, 2017 at 2:37 pm #

    Oh, and I have read all your posts, thanks! Very easy to read and understand for a noob like me 🙂
    May I ask your advice on these questions:

    1. What is the best way to detect traffic signals? (I am using color tracking through HSV thresholding and then detecting shape through HoughCircles. But I am getting a lot of false positives when a red car is crossing or someone's brake lights are on 🙂)
    2. What is the best way to detect standing/moving cars? Train a Haar cascade?

    Thank you!

    • Adrian Rosebrock May 3, 2017 at 6:02 pm #

      1. To detect traffic signals I would train a custom object detector to recognize the traffic signal. From there you could apply color thresholding to determine which light is “on”.

      2. This really depends on the data that you are working with. HOG + Linear SVM detectors might work here if the camera is fixed and non-moving, but it will be harder to generalize across datasets. In that case you might consider a more advanced approach such as deep learning.

  17. Cooper June 21, 2017 at 8:41 pm #

    I switched my script over from the usual VideoCapture to your threaded method and am only getting a 1-2% better FPS : /

    • Adrian Rosebrock June 22, 2017 at 9:27 am #

      Which version of OpenCV and Python are you using?

  18. Daniel June 23, 2017 at 6:01 am #

    I was able to replicate the performance improvements with the demo script for Python 2.7. However, not within my own custom app code for some reason.

    I ended up playing with a few file formats instead and learned the following from benchmarking via the FPS class.

    Test Files:
    101MB 1080p .mp4 (Drone raw file)
    26MB 1080p .mov (Quicktime re-export from raw file)
    18MB 1080p .m4v (Quicktime re-export from raw file)

    – The smaller .mov file processed just as slowly as the raw .mp4 file.
    – The smallest .m4v file processed 2x faster than the raw .mp4 file.

  19. Frank July 3, 2017 at 7:05 am #

    Hi, Adrian Rosebrock, very good work! I would like to ask you one question: is the read() method I/O bound or CPU bound? As far as we know, decoding video is slow, so the read() method blocks. If it is CPU bound, should we use multiprocessing?
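
    One rough way to answer Frank's question empirically is to compare wall-clock time against CPU time around the read loop: if the two are close, decoding is CPU bound; if CPU time is far lower, the reads are I/O bound. The helper below is a generic sketch; in practice you would pass it a closure that runs your cap.read() loop, but here the two workloads are simulated stand-ins:

```python
import time

def cpu_ratio(fn):
    """Run fn() and return (wall_seconds, cpu_seconds).

    cpu close to wall  -> fn is CPU bound (e.g. software video decoding)
    cpu far below wall -> fn mostly waits on I/O (disk, camera, network)
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn()
    return time.perf_counter() - wall_start, time.process_time() - cpu_start

# CPU-bound stand-in for decoding: CPU time tracks wall time
wall_c, cpu_c = cpu_ratio(lambda: sum(i * i for i in range(2_000_000)))

# I/O-bound stand-in: sleeping burns wall time but almost no CPU time
wall_io, cpu_io = cpu_ratio(lambda: time.sleep(0.3))

print(f"cpu-bound ratio: {cpu_c / wall_c:.2f}, "
      f"io-bound ratio: {cpu_io / wall_io:.2f}")
```

    If the real read() loop turns out to be CPU bound, multiprocessing can help; threading can also pay off in practice because OpenCV reportedly releases the GIL while decoding a frame.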

  20. Lee July 15, 2017 at 10:31 pm #

    Hi, Adrian! I am trying to use collections.deque to cache frames, because I don't need to check if it is full, but when it runs I always get this: "IndexError: pop from an empty deque". Maybe a deque caches objects more slowly than queue.Queue? I am not sure whether my program has a problem. (Python: 3.5.2, OpenCV: 3.2.0)

    • Adrian Rosebrock July 18, 2017 at 10:02 am #

      Hi Lee — I’m not exactly sure what the issue is there. The error message states that the deque is empty. I would suggest checking the logic in your code to ensure the frame is actually getting added to the deque.
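
      For readers hitting the same error: unlike Queue.get, deque.popleft never blocks, so the consumer has to handle the empty case itself. A minimal guard might look like this (a hypothetical consumer loop, not Lee's actual code):

```python
import time
from collections import deque

frames = deque()
stopped = False  # a reader thread would set this at end-of-file

def next_frame():
    # popleft raises IndexError on an empty deque, so retry until a
    # frame arrives or the producer signals it has finished
    while True:
        try:
            return frames.popleft()
        except IndexError:
            if stopped:
                return None
            time.sleep(0.001)  # back off briefly, then poll again

# single-threaded demo: two frames queued, then end-of-file
frames.extend(["frame0", "frame1"])
stopped = True
a, b, c = next_frame(), next_frame(), next_frame()
print(a, b, c)  # -> frame0 frame1 None
```

      queue.Queue does this waiting for you inside get(), which is one reason the post uses it despite its extra locking overhead.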

  21. Sohaib September 18, 2017 at 3:25 pm #

    I am not sure what is happening but the slow script is giving me an FPS of 105.07 while the threaded one gives 20.63.

    Did a newer version of OpenCV make a better implementation of .read()?

    • Adrian Rosebrock September 20, 2017 at 7:24 am #

      I assume you’re using Python 3? If so, check the rest of the comments on this page. It appears that Python 3 uses a different implementation of the Queue class which dramatically slows down the frame reads. I’m not sure why this is and would request PyImageSearch readers with experience in Python 2.7 and Python 3 differences as well as multi-threading to investigate this.

  22. David September 23, 2017 at 4:07 pm #

    Hi Adrian,
    I use your code with a little robot to overlay GPS position, date, time, etc. on camera video.
    It works fine.
    I just modified the class to include a VideoWriter to record the video on a USB3 stick.
    I saw the difference between the read() method running in another thread versus in the main one. Well done.
    The problem begins when I want to play the recorded video. For example, a 10s video is played back in 7s… The overlaid time runs too fast regardless of the FPS set on VideoWriter().
    Can you explain why?


    • Adrian Rosebrock September 24, 2017 at 8:47 am #

      Hi David — I actually discuss the cv2.VideoWriter class in more detail over here. While OpenCV does a really great job reading frames from video files it can be problematic writing the frames back out to disk. The short version is that I’m not really sure why this is happening even if you are adjusting your FPS parameters. I’m sorry I couldn’t be of more help here, but I would suggest looking into the video I/O libraries you have installed and see if there are any known issues with OpenCV.
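
      One likely cause of David's symptom is an FPS mismatch: if frames are produced at an effective 21 FPS but cv2.VideoWriter is told 30 FPS, every second of capture is squeezed into 21/30 of a second of playback. The arithmetic (with hypothetical numbers chosen to match the 10s-to-7s example) is simply:

```python
def playback_duration(frames_recorded, writer_fps):
    """Seconds the clip lasts on playback: the player shows
    writer_fps frames per second, regardless of capture speed."""
    return frames_recorded / writer_fps

# 10 s of capture at an effective 21 FPS puts 210 frames on disk
frames = 10 * 21

# declaring 30 FPS to VideoWriter makes those frames play in 7 s
print(playback_duration(frames, 30))  # -> 7.0

# declaring the measured effective rate restores the real duration
print(playback_duration(frames, 21))  # -> 10.0
```

      A practical mitigation is to time a short warm-up loop, compute frames / elapsed_seconds, and pass that measured rate to cv2.VideoWriter instead of a nominal value.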

  23. Reza Ghoddoosian October 13, 2017 at 4:14 pm #

    Hi Adrian
    I was wondering why you did not use a queue for reading from a webcam with the same method in one of your tutorials.
    As I understand it, using a queue makes you use every single frame while, in contrast, without a queue you may lose a couple of frames in between. Maybe in a live stream you don't care about this, but in a video file with a limited number of frames you do. Right?

    • Adrian Rosebrock October 14, 2017 at 10:38 am #

      Yes, a queue will make you read every single frame, but that's not desirable for real-time applications: if your video processing pipeline is complex you can easily lag behind the live stream. Instead, you read the most recent frame from the camera buffer and let older frames drop.
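
      The webcam behavior described here, always processing the newest frame and silently dropping stale ones, can be sketched with a one-slot buffer. Frame grabbing is simulated below; a real version would call cv2.VideoCapture(...).read() inside the thread:

```python
import threading
import time
from collections import deque

class LatestFrameStream:
    """Keeps only the newest frame; stale frames are dropped."""

    def __init__(self):
        self.buffer = deque(maxlen=1)  # one-slot buffer: append evicts
        self.stopped = False

    def start(self, source):
        threading.Thread(target=self._update, args=(source,),
                         daemon=True).start()
        return self

    def _update(self, source):
        for frame in source:            # stand-in for a cap.read() loop
            self.buffer.append(frame)   # overwrite any unread frame
            time.sleep(0.001)
        self.stopped = True

    def read(self):
        # return the most recent frame (None if nothing arrived yet)
        return self.buffer[-1] if self.buffer else None

stream = LatestFrameStream().start(range(100))
while not stream.stopped:
    time.sleep(0.01)
print(stream.read())  # -> 99, all earlier frames were overwritten
```

      For a video file you want the opposite guarantee (no dropped frames), which is why the post uses a full Queue there instead.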
