OpenCV Object Tracking

In last week’s blog post we got our feet wet by implementing a simple object tracking algorithm called “centroid tracking”.

Today, we are going to take the next step and look at eight separate object tracking algorithms built right into OpenCV!

You see, while our centroid tracker worked well, it required us to run an actual object detector on each frame of the input video. For the vast majority of circumstances, having to run the detection phase on each and every frame is undesirable and potentially computationally expensive.

Instead, we would like to apply object detection only once and then have the object tracker be able to handle every subsequent frame, leading to a faster, more efficient object tracking pipeline.

The question is — can OpenCV help us achieve such object tracking?

The answer is undoubtedly a resounding “Yes”.

To learn how to apply object tracking using OpenCV’s built-in object trackers, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV Object Tracking

In the first part of today’s blog post, we are going to briefly review the eight object tracking algorithms built-in to OpenCV.

From there I’ll demonstrate how we can use each of these object trackers in real-time.

Finally, we’ll review the results of each of OpenCV’s object trackers, noting which ones worked under what situations and which ones didn’t.

Let’s go ahead and get started tracking objects with OpenCV!

8 OpenCV Object Tracking Implementations

Figure 1: Drone footage of a car in motion being tracked with the MedianFlow tracker.

You might be surprised to know that OpenCV includes eight (yes, eight!) separate object tracking implementations that you can use in your own computer vision applications.

I’ve included a brief highlight of each object tracker below:

  1. BOOSTING Tracker: Based on the same algorithm used to power the machine learning behind Haar cascades (AdaBoost), but like Haar cascades, is over a decade old. This tracker is slow and doesn’t work very well. Interesting only for legacy reasons and for comparing against other algorithms. (minimum OpenCV 3.0.0)
  2. MIL Tracker: Better accuracy than BOOSTING tracker but does a poor job of reporting failure. (minimum OpenCV 3.0.0)
  3. KCF Tracker: Kernelized Correlation Filters. Faster than BOOSTING and MIL. Like MIL, however, KCF does not handle full occlusion well. (minimum OpenCV 3.1.0)
  4. CSRT Tracker: Discriminative Correlation Filter (with Channel and Spatial Reliability). Tends to be more accurate than KCF but slightly slower. (minimum OpenCV 3.4.2)
  5. MedianFlow Tracker: Does a nice job reporting failures; however, if there is too large a jump in motion, such as fast-moving objects or objects that change quickly in appearance, the model will fail. (minimum OpenCV 3.0.0)
  6. TLD Tracker: I’m not sure if there is a problem with the OpenCV implementation of the TLD tracker or the actual algorithm itself, but the TLD tracker was incredibly prone to false-positives. I do not recommend using this OpenCV object tracker. (minimum OpenCV 3.0.0)
  7. MOSSE Tracker: Very, very fast. Not as accurate as CSRT or KCF but a good choice if you need pure speed. (minimum OpenCV 3.4.1)
  8. GOTURN Tracker: The only deep learning-based object tracker included in OpenCV. It requires additional model files to run (and will not be covered in this post). My initial experiments showed it was a bit of a pain to use, and while it reportedly handles viewing changes well, my experiments didn’t confirm this. I’ll try to cover it in a future post, but in the meantime, take a look at Satya’s writeup. (minimum OpenCV 3.2.0)

My personal suggestion is to:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

Satya Mallick also provides some additional information on these object trackers in his article.

Object Trackers have been in active development in OpenCV 3. Here is a brief summary of which versions of OpenCV the trackers appear in:

Figure 2: OpenCV object trackers and which versions of OpenCV they appear in. I recommend OpenCV 3.4+ if you plan to use the built-in trackers.

Note: Despite following the instructions in this issue on GitHub and turning off precompiled headers, I was not able to get OpenCV 3.1 to compile.

Now that you’ve had a brief overview of each of the object trackers, let’s get down to business!

Object Tracking with OpenCV

To perform object tracking using OpenCV, open up a new file, name it , and insert the following code:

We begin by importing our required packages. Ensure that you have OpenCV installed (I recommend OpenCV 3.4+) and that you have imutils  installed:

Now that our packages are imported, let’s parse two command line arguments:

Our command line arguments include:

  • --video : The optional path to the input video file. If this argument is left off, then the script will use your webcam.
  • --tracker : The OpenCV object tracker to use. By default, it is set to kcf  (Kernelized Correlation Filters). For a full list of possible tracker code strings refer to the next code block or to the section below, “Object Tracking Results”.
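The two arguments above are typically wired up with argparse along these lines (a reconstruction of the lost code block; the empty list passed to parse_args is only there so the sketch runs standalone):

```python
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", type=str,
    help="path to input video file")
ap.add_argument("-t", "--tracker", type=str, default="kcf",
    help="OpenCV object tracker type")
args = vars(ap.parse_args([]))  # [] keeps the sketch runnable standalone
```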

Let’s handle the different types of trackers:

As denoted in Figure 2, not all trackers are present in every minor version of OpenCV 3+.

There’s also an implementation change that occurs at OpenCV 3.3. Prior to OpenCV 3.3, tracker objects were created via cv2.Tracker_create , passing an uppercase string of the tracker name (Lines 22 and 23).

For OpenCV 3.3+, each tracker can be created with its own respective function call, such as cv2.TrackerKCF_create . The dictionary, OPENCV_OBJECT_TRACKERS , contains seven of the eight built-in OpenCV object trackers (Lines 30-38). It maps each object tracker command line argument string (key) to the actual OpenCV object tracker factory function (value).

On Line 42 the tracker  object is instantiated by using the tracker command line argument as the key into OPENCV_OBJECT_TRACKERS .

Note: I am purposely leaving GOTURN out of the set of usable object trackers as it requires additional model files.

We also initialize initBB  to None  (Line 46). This variable will hold the bounding box coordinates of the object that we select with the mouse.

Next, let’s initialize our video stream and FPS counter:

Lines 49-52 handle the case in which we are accessing our webcam. Here we initialize the webcam video stream with a one-second pause for the camera sensor to “warm up”.

Otherwise, the --video  command line argument was provided so we’ll initialize a video stream from a video file (Lines 55 and 56).

Let’s begin looping over frames from the video stream:

We grab a frame  on Lines 65 and 66 as well as handle the case where there are no frames left in a video file on Lines 69 and 70.

In order for our object tracking algorithms to process the frame faster, we resize  the input frame to a width of 500 pixels (Line 74) — the less data there is to process, the faster our object tracking pipeline will run.

We then grab the width and height of the frame as we’ll need the height later (Line 75).

Now let’s handle the case where an object has already been selected:

If an object has been selected, we need to update the location of the object. To do so, we call the update  method on Line 80 passing only the frame  to the function. The update  method will locate the object’s new position and return a success  boolean and the bounding box  of the object.

If successful, we draw the new, updated bounding box location on the frame  (Lines 83-86). Keep in mind that trackers can lose objects and report failure so the success  boolean may not always be True .

Our FPS throughput estimator is updated on Lines 89 and 90.

On Lines 94-98 we construct a list of textual information to display on the frame . Subsequently, we draw the information on the frame  on Lines 101-104.

From there, let’s show the frame  and handle selecting an object with the mouse:

We’ll display a frame  and continue to loop unless a key is pressed.

When the “s” key is pressed, we’ll “select” an object ROI using cv2.selectROI . This function allows you to manually select an ROI with your mouse while the video stream is frozen on the frame:

Figure 3: Selecting an object’s ROI with the mouse and cv2.selectROI.

The user must draw the bounding box and then press “ENTER” or “SPACE” to confirm the selection. If you need to reselect the region, simply press “ESCAPE”.

Using the bounding box info reported from the selection function, our tracker  is initialized on Line 120. We also initialize our FPS counter on the subsequent Line 121.

Of course, we could also use an actual, real object detector in place of manual selection here as well.

Last week’s blog post already showed you how to combine a face detector with object tracking. For other objects, I would suggest referring to this blog post on real-time deep learning object detection to get you started. In a future blog post in this object tracking series, I’ll be showing you how to combine both the object detection and object tracking phase into a single script.

Lastly, let’s handle if the “q” key (for “quit”) is pressed or if there are no more frames in the video, thus exiting our loop:

This last block simply handles the case where we have broken out of the loop. All pointers are released and windows are closed.

Object Tracking Results

In order to follow along and apply object tracking using OpenCV to the videos in this blog post, make sure you use the “Downloads” section to grab the code + videos.

From there, open up a terminal and execute the following command:

Be sure to edit the two command line arguments as desired: --video  and --tracker .

If you have downloaded the source code + videos associated with this tutorial, the available arguments for --video  include the following video files:

  • american_pharoah.mp4
  • dashcam_boston.mp4
  • drone.mp4
  • nascar_01.mp4
  • nascar_02.mp4
  • race.mp4
  • …and feel free to experiment with your own videos or other videos you find online!

The available arguments for --tracker  include:

  • csrt
  • kcf
  • boosting
  • mil
  • tld
  • medianflow
  • mosse

Refer to the section, “8 OpenCV Object Tracking Implementations” above for more information about each tracker.

You can also use your computer’s webcam — simply leave off the video file argument:

In the following example video I have demonstrated how OpenCV’s object trackers can be used to track an object for an extended amount of time (i.e., an entire horse race) versus just short clips:

Video and Audio Credits

To create the examples for this blog post I needed to use clips from a number of different videos.

A massive “thank you” to Billy Higdon (Dashcam Boston), The New York Racing Association, Inc. (American Pharaoh), Tom Wadsworth (Multirotor UAV Tracking Cars for Aerial Filming), NASCAR (Danica Patrick leads NASCAR lap), GERrevolt (Usain Bolt 9.58 100m New World Record Berlin), Erich Schall, and Soularflair.


In today’s blog post you learned how to utilize OpenCV for object tracking. Specifically, we reviewed the eight object tracking algorithms (as of OpenCV 3.4) included in the OpenCV library:

  • CSRT
  • KCF
  • Boosting
  • MIL
  • TLD
  • MedianFlow
  • MOSSE
  • GOTURN

I recommend using either CSRT, KCF, or MOSSE for most object tracking applications:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

From there, we applied each of OpenCV’s eight object trackers to various tasks, including sprinting, horse racing, auto racing, drone/UAV tracking, and vehicle dash cam footage.

In next week’s blog post you’ll learn how to apply multi-object tracking using a special, built-in (but mostly unknown) OpenCV function.

To be notified when next week’s blog post on multi-object tracking goes live, just enter your email address in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


222 Responses to OpenCV Object Tracking

  1. Navaneeth Krishnan July 30, 2018 at 11:28 am #

    Adrain i hava seen your face detection video it was nice. But some time there were false positives occuring. then i heard about openface for face recognition. why dont you make a tutorial on openface it will be very helpful

    • Adrian Rosebrock July 30, 2018 at 1:22 pm #

Navaneeth — you requested that very topic last week. I am aware of your comment and have acknowledged it. I love taking requests regarding what readers want to learn more about, but I need to kindly ask you to please stop requesting the same topic on every post. If I can cover it in the future I certainly will.

  2. Fan July 30, 2018 at 12:22 pm #

    Hi Adam, great post as always! I wonder if you can do a separate post that use a similar approach(semi-supervised) for pixel-wise tracking?

    • Adrian Rosebrock July 30, 2018 at 1:18 pm #

      It’s Adrian actually, not Adam 😉 I’ll be doing a pixel-wise segmentation post soon.

  3. Jean-Marie July 30, 2018 at 12:55 pm #

    Hi, Very nice ! I test on RC plane video footage. Don’t know if bug or now but when bounding box fail to capture objet (rc plane), you must restart the loop from scratch (with video from timestamp zero) which is quite annoying. Is there any solution to delete the memorize bounding box with keyboard shortcut ? If I look long footage, it let better chance to successful track plane or another plane. Thanks !!! I tested goturn and performance is really poor I find (12/15 fps). Native code seems to run on CPU not GPU. Will investigate !!!

    • Adrian Rosebrock July 30, 2018 at 1:21 pm #

I haven’t done enough investigative work with GOTURN but just like any other machine learning algorithm, if the model was not trained on data representative of what it will see in the “real world” then it’s not going to perform well. I suspect GOTURN was not trained on RC plane video footage or objects that may appear in such footage.

      As far as restarting the video all you should have to do is re-instantiate the actual OpenCV object tracker inside the “if” statement that is used to catch the “c” key. Something like this should work:

  4. Singh July 30, 2018 at 1:46 pm #

    Hi Adrian,
    I am getting the following error when I am trying to execute the program!

    Traceback (most recent call last):
    File “”, line 35, in
    “csrt”: cv2.TrackerCSRT_create,
    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create’

    • Adrian Rosebrock July 30, 2018 at 2:01 pm #

      The only time I’ve seen a similar error is from this post on saliency detection. If you check out the comments section, in particular this comments thread you’ll find what should be the solution. It appears there is a problem with your OpenCV install and/or version. Reading the thread I linked you to should help resolve it.

    • KhanhPhamDinh July 30, 2018 at 2:20 pm #

      I think your version is official and some private proprietaries cause they dropout several functionality. The way to deal with was install opencv-contrib-python on this helpful link:

      • Adrian Rosebrock July 30, 2018 at 3:06 pm #

        Thanks so much for sharing the solution, KhanhPhamDinh!

        • Lu August 6, 2018 at 5:37 pm #

          You need to uninstall opencv before installing opencv-contrib
          Make sure no console is running that has imported cv2 while you execute your installing process
          Run the cmd in Administration Mode (Go to the .exe of the cmd program, right click and select ‘Run as Administrator’)

          • Vinod April 18, 2019 at 5:26 am #

            Thanks a ton!! 🙂
            uninstalled opencv-pthon and opencv-contrib-python
            installed cv and then contib. which worked.

    • Aaron July 31, 2018 at 12:44 pm #

      I’ve also run into this problem. I’m working on an Ubuntu 16.04 virtual machine – following Adrian’s build instructions for OpenCV (v3.4) I also installed opencv_contrib_python without any luck. It appears only to be CSRT. The rest of the trackers are there.

      • Aaron July 31, 2018 at 12:49 pm #

        And I’ve just discovered my issue. I’m on OpenCV 3.4.0, and the minimum as mentioned above is 3.4.2.

    • cuong September 27, 2018 at 3:15 am #

      I also encountered the same error. Have you solved it yet?

    • Vinod April 18, 2019 at 5:02 am #

      same here :/

      got a solution?

      • Vinod April 18, 2019 at 5:27 am #

        uninstalled opencv-pthon and opencv-contrib-python
        installed cv and then contib.


  5. David July 30, 2018 at 2:07 pm #

    Hi Adrian,
    Is it possible to get object position during traking operation (delta position in relation of the center of the screen for example) ? I would like to plug the camera on a motorized support to “re-centering” tracked object on real-time operation.


    • Adrian Rosebrock July 30, 2018 at 3:08 pm #

      Hey David, I’m not sure I fully understand your question. If you know the center (x, y)-coordinates of the screen along with the centroid or corners of the predicted bounding box you can always compute the distance between them to obtain the delta. But again, I may not be understanding your question properly.

  6. KhanhPhamDinh July 30, 2018 at 2:15 pm #

    Dear fellow. I ‘m window user and although i follow your manual to drag the objective region but it’s unable to track (without any blue rectangle appear). Could you help me? Best regards!

    • Adrian Rosebrock July 30, 2018 at 3:07 pm #

      If you select the bounding box and then the bounding box immediately disappears after selection then the “success” boolean is false. The object tracker cannot track your object. You should try a different object tracker, keeping in mind my pros/cons listed in the post.

  7. Anis July 30, 2018 at 2:20 pm #

    I tried the code and I got this error

    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create’

    I am using a Mac-OS Sierra + Python3 and OpenCV 3.4.1

    What could be the problem?

    • Adrian Rosebrock July 30, 2018 at 3:06 pm #

      Make sure you read my reply to “Singh” as I have addressed this question.

    • Mehmet Egemen February 1, 2019 at 4:54 am #

      uninstall opencv-python and then only install opencv-contrib-python.

      As you can see here ,
      opencv-python is the main package and opencv-contrib-python is main package and contrib modules so installing them together is wrong, just install contrib.

      • Adrian Rosebrock February 1, 2019 at 6:35 am #

        Mehmet is 100% correct. Thanks Mehmet!

      • Andrea Cassigoli November 21, 2019 at 5:37 am #

        Hi all,
        I had the same problem as Singh.
        I followed Mehmet suggestion and, after unistalling opencv-python and installing ONLY opencv-contrib-python, everything worked fine for me!!!
        Thank you very much!!!

  8. Jean-Marie July 30, 2018 at 2:53 pm #

    thanks Adrian for the tips for re-instantiate the object tracker: perfectly works !
    About Goturn , even on horse race or dashcam_boscam video, tracking is lost or become larger and larger with Goturn… Something is probably not well configured. Really cool piece of code ! love it …

    • Adrian Rosebrock July 30, 2018 at 3:08 pm #

      Awesome, I’m glad that fix worked! 😀

  9. Carlos July 30, 2018 at 5:23 pm #

    Excellent post! Do you have a recommendation for traffic lights recognition ?

    • Adrian Rosebrock July 30, 2018 at 5:34 pm #

      I would suggest training your own custom object detector. Faster R-CNNs and SSDs would be a good start. Use this post to help you get started. From there I cover how to train your own custom deep learning-based object detectors inside Deep Learning for Computer Vision with Python. Two of the chapters even have a self-driving car theme, including front/rear view vehicle recognition and traffic sign recognition.

  10. Florian July 30, 2018 at 5:36 pm #

    i had implemented something similar. using yolo once per second +object follwing from the meantime, together with a matrix to calculate the matching. depending on the algo, if you have 50+ objects it can get complex.

    • Adrian Rosebrock July 31, 2018 at 9:45 am #

      Congrats on the successful project, Florian! As you noted it can get very complex depending on how many objects you’re tracking. In a future post in this series I’m going to be demonstrating more efficient multi-object tracking by distributing the trackers across multiple cores on a processor.

  11. Hermann-Marcus Behrens July 30, 2018 at 9:04 pm #

    Very impressive. Love your work.

    • Adrian Rosebrock July 31, 2018 at 9:43 am #

      Thank you, I really appreciate it 🙂

  12. Sudharsan July 31, 2018 at 12:52 am #

    Hi Adrian,
    Is it possible to use in raspberry pi 3B+?
    You are doing a great work in computer vision
    Thanks for the tutorial.

    • Adrian Rosebrock July 31, 2018 at 9:41 am #

      Try it and see! 😉 But yes, the code will work on a Pi, just use a faster tracker like MOSSE as the Pi is slower than a normal laptop/desktop.

      • Sudharsan August 3, 2018 at 4:17 am #

        Thanks for the reply it works well with MOSSE. It also works with Picamera too.

        • Adrian Rosebrock August 7, 2018 at 7:08 am #

          Thanks for following up, Sudharsan 🙂

  13. atom July 31, 2018 at 2:49 am #

    Hi Adrian,
    (may be my question is not right place) in your posts on object detection I’ve read, the prototxt and model files are already available, attached for download. So can you make a post to show how to train a neural network using some well-known frameworks (I found you prefer caffe). Actually, I just want to see if I can experience “like-a-boss” feeling in making complete neural network: I capture (collect) images as many, throw them in the network I prepared, get a cup of coffee for waiting, export into model files and lastly apply these files to your post.
    Say I’m insane if you like!

    • Adrian Rosebrock July 31, 2018 at 9:40 am #

      Hey Atom — I actually cover how to train your own custom models inside Deep Learning for Computer Vision with Python. That is by far the most comprehensive book to training your own deep learning models. Otherwise, I think you’ll have fun playing with this tutorial.

      • atom July 31, 2018 at 10:32 pm #

        Thanks so much, Adrian, I truly appreciate that

        • Adrian Rosebrock August 1, 2018 at 6:33 am #

          Happy too, Atom 🙂

  14. Markus July 31, 2018 at 4:53 am #

    dear Adrian:
    I am a Chinese studen.I am sorry that myEnglish is not good enough.I have a question that
    how can i input my own vidio and where i should write my path to the vidio flie.
    Because of my poor English,I think too much time.if you can answer me,i am very aprriciated

    • Adrian Rosebrock July 31, 2018 at 9:39 am #

      There are no modifications needed to the code. Just update the --video switch to point to your video file when you execute the script:

      $ python --video path/to/your/video.mp4

      • Markus August 1, 2018 at 4:48 am #

        amazing!thank you so much ,i have succeed in run the code with my local video !
        i am very appreciated for your help!

  15. Ray Casler July 31, 2018 at 5:03 am #


    Would this work on a raspberry pi with pi camera?

    • Adrian Rosebrock July 31, 2018 at 9:38 am #

      Yes, but you would want to use one of the faster detectors like MOSSE as the Raspberry Pi isn’t as fast as a standard laptop/desktop.

  16. sam July 31, 2018 at 5:59 am #

    Hello Adrian,

    What is the best algorithm to distinguish head covered (scarf, Cap, etc.) from uncovered head? For detection and tracking purpose.

    • Adrian Rosebrock July 31, 2018 at 9:38 am #

      There are a few ways to approach this:

      1. Use a face detector to detect a face and then run a classifier on the face region to determine covered/uncovered
      2. Train an object detector to detect covered/uncovered

      The first method is easier to implement but will be more prone to missing faces altogether as sometimes covered heads/faces are significantly harder to detect if the model was never trained on those images.

  17. Gabe July 31, 2018 at 1:26 pm #

    Hey Adrian,

    Thanks for the tutorial, I am trying to run the code and whether I give no video argument and just use my webcam or if I give a path to a video on my computer whenever I press ‘s’ to draw the bounding box I get this message:

    Traceback (most recent call last):
    File “”, line 120, in
    TypeError: Required argument ‘boundingBox’ (pos 3) not found

    I’m not sure what I am doing wrong since I just used your source code. If you have any tips to fix this please let me know. Thanks!

    • Adrian Rosebrock August 1, 2018 at 6:30 am #

      Hi Gabe, just to confirm a few things:

      1. You used the “Downloads” section of the blog post to download the code rather than copied + pasted, right? Sometimes readers accidentally introduce errors if they try to copy and paste which is why I always recommend using the “Downloads” section.
      2. Which version of OpenCV are you using?

      • Rob August 30, 2018 at 1:30 pm #

        Hi Adrian,

        my experience is that this happens for all the trackers I’ve tried you say will compile under 3.2 (my version of OpenCV). As soon as “s” is hit, the program terminates with the above error.

        Python 3.5.2
        OpenCV 3.2
        Trackers tried: kcf, medianflow and mil

        I am using your code (downloaded) and one of the .mp4s provided


        • Adrian Rosebrock September 5, 2018 at 9:27 am #

          Hey Rob — I’ve had a lot of issues with OpenCV 3.2 and the object tracking algorithms. I would really recommend you use at least OpenCV 3.3 and ideally OpenCV 3.4+.

  18. CRay July 31, 2018 at 3:23 pm #

    This is one of the great websites for OpenCV and Python implementation. I’m thankful for Adrian. I also purchased the book package earlier.
    I’m not a software engineer, but I’m trying to learn.
    I’m also looking for someone who can integrate what Adrian is showing us in terms of Facial recognition into an existing surveillance system, like ZoneMinder, iVideon, or Kerberos.
    I prefer Zoneminder as I can control all data.
    basically, I’m looking for an integration or creating a plugin that would in realtime (or near realtime) do face recognition using OpenCV. I would also like to incorporate other types of detection such as license plate detection, object counter etc.
    I have a budget established for this. If anyone is familiar with Zomeminder and can develop something like this, pls let me know

    • Adrian Rosebrock August 1, 2018 at 6:31 am #

      Thank you for picking up a copy of my book, I hope you are enjoying it so far! Regarding your project send me an email and we can get your job listed on PyImageJobs.

  19. Byron July 31, 2018 at 6:03 pm #


    Thanks for the post. Very interesting. Came across an error on the construction of the argument parser…..

    usage: [-h] [-t TRACKER]

    • Adrian Rosebrock August 1, 2018 at 6:32 am #

      The error is related to your command line arguments not being supplied. If you’re new to command line arguments and how they work with Python, that’s okay, but make sure you read this post first so you can get up to speed.

  20. Learner August 1, 2018 at 8:15 am #

    I am unable to create a Frame and track the same on the video not sure why, am on ubuntu and installed OpenCv through Anaconda framework.. even during first run i got an Attribute Error which I resolved on reading KhanhPhamDinh (thanks).

    Any help is most welcomed.

    • Adrian Rosebrock August 2, 2018 at 9:29 am #

      When you say you are unable to “create a frame” are you saying that the Python script does not open a window? That it crashes immediately with another error? If you can be more specific I can try to provide a recommendation.

  21. farshad August 1, 2018 at 11:55 pm #

    thanks a lot Adrian. good job again. What is the differences between object tracker and particle filter in opencv?

    • Adrian Rosebrock August 2, 2018 at 9:24 am #

      You can use particle filtering as a form of object tracking; however, particle filtering methods have a number of other assumptions, including Monte Carlo-based algorithms, dynamic systems, and random perturbations.

  22. reza August 2, 2018 at 4:20 am #

    Hi Adrian.tnx for your great post.I have a question and I hope you can help me.I want to build a people counter that installed at the entrance of store.what’s your best solution for doing this?face detection?remove background of video?frames subtraction?…

    • Adrian Rosebrock August 2, 2018 at 9:19 am #

      I’ll actually be covering how to build a person counter later in this object tracking series, stay tuned!

      • reza August 2, 2018 at 5:42 pm #

        Thank you so much.Adrian how long does it take?less than one month or more?

        • Adrian Rosebrock August 7, 2018 at 7:43 am #

          The person counter post will be published this Monday.

  23. madhu dande August 2, 2018 at 8:36 pm #

    unable to track the car using any of models. While executing the program the Boston dash cam video is launching but as mentioned i couldn’t able to track. need your support

    • Adrian Rosebrock August 7, 2018 at 7:41 am #

      Make sure you click the window opened by OpenCV (not your terminal window) and then press the “s” key on your keyboard to select the ROI.

  24. Greg August 3, 2018 at 10:57 am #

    Great post! Just wondering if you have any advice about how one might go about working with trackers in OpenCV on the Android platform.

  25. SINAN OGUZ August 5, 2018 at 2:27 pm #

    Hi, Adrian, this is my first comment :), first of all thanks for the your web site, I’ve learnt many things about image processing.

    I have question automatic tracking initilization, for this post(OpenCv Object Tracking)

    You said “Of course, we could also use an actual, real object detector in place of manual selection here as well.”

    What is your suggestions instead of selecting ROI? Is there a new post about it?

    Thanks for all 🙂

    • Adrian Rosebrock August 7, 2018 at 6:47 am #

      This post shows you how an object detector can automatically detect ROIs and track them.

  26. M Tayab Khan August 6, 2018 at 3:59 am #

    hi Adrian,
    I am getting the following error while running the file

    error: the following arguments are required: -p/–prototxt, -m/–model

    • Adrian Rosebrock August 7, 2018 at 6:45 am #

      You need to supply the command line arguments to the script when you’re executing it (which you’re not doing). If you’re new to command line arguments I recommend you read this post to get up to speed. From there you’ll be able to execute the script.

      • M Tayab Khan August 11, 2018 at 8:26 am #

        Thank You v much sir,

        The mistake was a very basic one and I am great full to you that you guide me in a good way. you are doing a great job. keep it up. Most of the time I was stuck in the same problem but now it will be solved.

  27. SAM August 7, 2018 at 9:34 am #

    Hello Adrian,
    thanks for the continuous great work.
    I tried the CSRT tracker and the tracking result seems good; however, when the target disappears from the screen and then shows again, it isn't re-detected.
    Any support on handling this situation (losing the target)?

    Thanks again.

    • Adrian Rosebrock August 9, 2018 at 3:07 pm #

      Hey Sam — you can actually combine the techniques from this post with the previous post on centroid tracking. I’ll be providing an example of such an implementation in next week’s blog post on people tracking.
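      The combination described here boils down to re-associating a fresh detection with the nearest previously tracked centroid. A pure-Python sketch of just that matching step (the names and data layout are illustrative, not the centroid tracker's actual API):

```python
# Match a newly detected centroid to the closest existing object ID.
import math

def nearest_object_id(tracked, centroid):
    """tracked maps object ID -> (x, y) centroid; returns the closest ID."""
    return min(tracked, key=lambda oid: math.dist(tracked[oid], centroid))

tracked = {1: (10, 10), 2: (200, 50)}
print(nearest_object_id(tracked, (12, 9)))  # 1
```

      In a full implementation you would also cap the allowed distance and register a new ID when no existing centroid is close enough.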

      • SAM August 10, 2018 at 11:32 am #

        Thanks Adrian,

        in your post it was face detection with a pretrained model; for the tracking example, the object I am tracking has no pretrained detector, so how can I achieve it?

        • Adrian Rosebrock August 15, 2018 at 9:27 am #

          If you have not trained an object detector to detect your object you will not be able to detect it which means you will not be able to track it. What object are you trying to detect?

          • SAM August 15, 2018 at 10:07 am #

            the idea is to select any object and track it in the video; can I train the system to detect a selected object at runtime? With a Haar cascade or any other algorithm?

          • Adrian Rosebrock August 15, 2018 at 10:53 am #

            No, the actual object detector needs to be trained ahead of time. Once you’ve detected the actual object you can track it.

  28. Ching June Hao August 9, 2018 at 1:31 pm #

    Hi Adrian, thanks for such a good tutorial. But when I try to run the code, nothing happens. What could the possible error be? I'm using Python 3.6 with OpenCV version 3.4.1.

    Thanks in advance!

    • Adrian Rosebrock August 9, 2018 at 2:34 pm #

      Could you provide more details on what you mean by “nothing is happening”? Does the script automatically exit? Is an error thrown? Does the script run but no output displayed?

  29. Jindřich Soukup August 13, 2018 at 10:14 am #

    Hi Adrian,
    thanks a lot for your tutorial. I have problems with detection of objects in complicated conditions (a dark object in a dark region of the scene). I'm also trying to detect when the target leaves the scene. I've noticed that together with the detected position the tracker also returns a 'success' flag, which is unfortunately always True (even when tracking has failed). In this thread it is discussed that one solution is to modify the OpenCV function to return the raw tracker response instead of the binarized True/False value. Unfortunately, I'm not proficient enough to make such changes to the code. Is there another possible way?
    Thanks a lot.

    • Adrian Rosebrock August 15, 2018 at 8:50 am #

      Unfortunately there is not an easy solution to this problem. In fact, handling the case when object trackers lose track of an object is one of the hardest components of an object tracking algorithm — I would argue that it’s far from solved. It’s not an easy solution and unfortunately for any other level of granularity you would need to modify the OpenCV source code.

  30. Jeff August 13, 2018 at 10:02 pm #

    I was able to get it to work with the Pi camera by changing line 51 from

    vs = VideoStream(src=0).start()

    to

    vs = VideoStream(usePiCamera=1).start()

    • Adrian Rosebrock August 15, 2018 at 8:36 am #

      Yep, that’s all you need to do!

  31. Jeff August 13, 2018 at 10:13 pm #

    At the end of your post you say CSRF instead of CSRT a few times.

    • Adrian Rosebrock August 15, 2018 at 8:36 am #

      Thanks Jeff. Post updated 🙂

  32. vipul August 14, 2018 at 1:29 pm #

    Traceback (most recent call last):
    File “”, line 114, in
    TypeError: Required argument ‘boundingBox’ (pos 3) not found

    how to solve this!!

    • Adrian Rosebrock August 15, 2018 at 8:22 am #

      Hey Vipul — which version of OpenCV are you using?

      • vipul August 15, 2018 at 2:29 pm #


  33. vipul August 15, 2018 at 2:30 pm #

    “csrt”: cv2.TrackerCSRT_create,
    AttributeError: ‘module’ object has no attribute ‘TrackerCSRT_create’


    • Adrian Rosebrock August 15, 2018 at 3:38 pm #

      You should take a look at Figure 2, Vipul: you need at least OpenCV 3.4 for the CSRT tracker. Since you're on OpenCV 3.3, comment out the CSRT tracker and the MOSSE tracker and the code will work for you.
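      One defensive way to handle these version differences is to register only the tracker constructors your cv2 build actually exposes, instead of commenting lines out by hand. A sketch (the guarded import is only so the snippet runs even where OpenCV isn't installed; constructor names are the OpenCV 3.4-era ones used in the post):

```python
# Build the tracker dictionary from whatever this cv2 build provides;
# older builds (e.g. 3.3) simply skip CSRT and MOSSE.
try:
    import cv2
except ImportError:
    cv2 = None  # lets the sketch run without OpenCV installed

CANDIDATES = {
    "csrt": "TrackerCSRT_create",
    "kcf": "TrackerKCF_create",
    "boosting": "TrackerBoosting_create",
    "mil": "TrackerMIL_create",
    "tld": "TrackerTLD_create",
    "medianflow": "TrackerMedianFlow_create",
    "mosse": "TrackerMOSSE_create",
}
OPENCV_OBJECT_TRACKERS = {
    name: getattr(cv2, attr)
    for name, attr in CANDIDATES.items()
    if cv2 is not None and hasattr(cv2, attr)
}
print(sorted(OPENCV_OBJECT_TRACKERS))
```

      With this pattern an unavailable tracker name simply never appears in the dictionary, rather than raising an AttributeError at import time.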

  34. Greg August 16, 2018 at 9:25 am #

    Any advice on tracking when the background colors are similar to the object being tracked? I have been experimenting with tracking a green ball. The CSRT tracker does a decent job indoors, but when I move outside and roll it through the grass it fails. I can detect the ball, but the tracker loses it and ends up locked onto a spot on the grass.

    • Adrian Rosebrock August 17, 2018 at 7:21 am #

      How are you detecting objects in the image? Is it via just simple color thresholding? If so, you should use a more advanced method like a dedicated object detector (HOG + Linear SVM, SSDs, Faster R-CNNs, etc.). I have an example of such an object detection + object tracking pipeline here.

  35. Greg August 17, 2018 at 12:31 pm #

    Yes, I’m currently just using color thresholding to detect the green ball. To test the tracker I put the ball on a high contrast surface and the ball is detected. I then use the detection to start tracking. If I then roll the ball off the high contrast surface onto the grass, the object detector fails.

    • Greg August 17, 2018 at 12:32 pm #

      I mean the object tracker fails when the ball leaves the high contrast surface, not the detector.

      • Adrian Rosebrock August 17, 2018 at 12:43 pm #

        That’s not too terribly surprising given depending on how of a difference the high contrast surface is (comparatively) and how fast the ball is moving (the faster it moves, the harder it will be for the trackers to work properly). The solution is to use a hybrid approach that balances tracking with detection — you can find the method in this post.

  36. sathish August 21, 2018 at 3:52 am #

    Hello Adrian,

    Actually I am on Python 3, and when I execute the code below it shows me this error:

    File “”, line 8
    (major, minor) = cv2.__version__.split(“.”)[:2]
    SyntaxError: invalid syntax.

    I am new to python and when i googled i found that “the tuple is deprecated and we need to split” something… bla bla..

    Can you help me on this error?

    • Adrian Rosebrock August 22, 2018 at 9:39 am #

      Make sure you use the “Downloads” section of this blog post to download the source code rather than trying to copy and paste. My guess is that you introduced an error in the code when copying and pasting or trying to “code along” with the post.
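      For reference, the line in question is valid Python 3 on its own; it just tuple-unpacks the first two components of the version string, so a copy/paste error elsewhere in the file is the likely culprit. Sketched here with a literal string standing in for cv2.__version__:

```python
# The version-parse pattern from the script; the literal string stands
# in for cv2.__version__ so the snippet runs without OpenCV.
version = "3.4.1"
(major, minor) = version.split(".")[:2]
print(major, minor)  # 3 4
```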

  37. sathish August 22, 2018 at 8:00 am #

    Hello Adrian,

    I am getting an error when I try to run the python file with the --video argument. Can you please help me with this? I am on Windows.

  38. Walid August 27, 2018 at 3:08 pm #

    Thanks a lot Adrian
    A couple of question,

    1-How can I make the tracker return false when the object leaves the scene?

    2-how can I deal with an object that changes its dimensions in frame?

    Best Regards

    • Adrian Rosebrock August 30, 2018 at 9:13 am #

      1. The tracker itself will return “False” via the success boolean. You can also track the bounding box and once it hits the boundaries of the frame you could mark it as “leaving the scene” as well.

      2. You would want to use an object tracking method like CSRT, which handles changes in scale well.
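      A pure-Python sketch of the border heuristic from point 1 (the (x, y, w, h) box format matches what tracker.update() returns; the margin value is an assumption):

```python
# Flag a tracked bounding box once it touches the frame border.
def touches_border(box, frame_w, frame_h, margin=2):
    """box is (x, y, w, h); returns True when the box reaches an edge."""
    (x, y, w, h) = box
    return (x <= margin or y <= margin or
            x + w >= frame_w - margin or y + h >= frame_h - margin)

print(touches_border((1, 50, 40, 40), 640, 480))     # True (left edge)
print(touches_border((300, 200, 50, 50), 640, 480))  # False (well inside)
```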

  39. Eyup Ucmaz September 6, 2018 at 1:18 pm #

    Hi Adrian,
    Is it possible to track an object I have already defined? So when the camera is turned on, can it find the object I defined? If possible, how can it be done?

    • Adrian Rosebrock September 11, 2018 at 8:38 am #

      Hey Eyup — object tracking and object detection are two different types of algorithms. What type of object are you trying to detect?

  40. Yogesh September 10, 2018 at 3:22 am #

    Hi Adrian, I am using OpenCV 3.4.2 but am still getting the error below:

    tracker = cv2.TrackerBoosting_create
    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerBoosting_create’

    • Adrian Rosebrock September 11, 2018 at 8:16 am #

      Your question has been answered in this reply.

  41. Andres Gomez September 11, 2018 at 11:00 am #

    Hi Adrian, I followed your tutorial to install OpenCV on the Raspberry Pi and your people counter as well, but I made some changes to the code: I used background subtraction to detect movement in the video and findContours to enclose it in a rectangle. However, I use the CentroidTracker which you made in the previous tutorials.

    Even with those changes it is still very slow on the Raspberry Pi 3B, so I have to use MOSSE to increase the speed. I saw many implementations of the MOSSE algorithm, but they always use a manually selected ROI to track the object. My question is: what is the ROI output? Is it a set of coordinates?
    Because my plan is:

    # select the bounding box of the object we want to track (make
    # sure you press ENTER or SPACE after selecting the ROI)
    #initBB = cv2.selectROI(“Frame”, frame, fromCenter=False,
    # showCrosshair=True)
    initBB =(x,y,x+w,x+h)

    # start OpenCV object tracker using the supplied bounding box
    # coordinates, then start the FPS throughput estimator as well
    tracker.init(frame, initBB)

    But it doesn’t work very well.

    PS: initBB needs to be an array holding the coordinates of each contour. I will post my question again, but I don't know if my question has been deleted or if it just doesn't appear on the page.

  42. Andres Gomez September 11, 2018 at 11:03 am #

    I realized that I would have a problem if I use the MOSSE algorithm with findContours, because in each frame findContours will find the same contour until it disappears, then I will pass it to the MOSSE algorithm and it will count the same person many times. So I'm considering writing the people counter in C++ to be faster than Python.

  43. Fabio September 19, 2018 at 4:37 pm #

    Hi. I've been getting an error from the "import VideoStream" line (line 2): "No module named …". But I've installed imutils as you did before. Can you tell me what it means? Thanks btw. Keep up the good work!

    • Adrian Rosebrock October 8, 2018 at 1:22 pm #

      It sounds like you have not installed “imutils” correctly. Please make sure you double-check.

  44. LZQthePlane September 28, 2018 at 6:16 am #

    Hi Adrian, I am learning CV from your blogs; they are brilliant.
    However, an error came out when I ran this script. It happens on line 120, tracker.init(frame, initBB), with the error "AttributeError: 'builtin_function_or_method' object has no attribute 'init'". I am using OpenCV 3.4; could you please help me out?
    By the way, HAPPY WEDDING !

    • Adrian Rosebrock October 8, 2018 at 12:23 pm #

      Hm, I’m not sure what may be causing that error. Are you using the code from the “Downloads” section of this blog post? Or did you copy and paste? Make sure you use the “Downloads” section to avoid any accidental errors.

    • MKM January 22, 2019 at 12:56 pm #

      this actually happened to me too; it turns out I forgot to add '()' after cv2.TrackerBlaBla_create, so you might want to look over your syntax again. Hope this helps, cheers!

  45. Rob October 4, 2018 at 6:28 pm #


    Yet another awesome blog.

    Question, if the object leaves the view panel, how does one get rid of the remaining bounding box that is left on the edge?

    And let’s say, that we decide that after awhile we want to track a different object instead? The current setup allows me to open another panel and make a new selection, but it doesn’t seem to track the new selection, nor does it destroy the previous object being tracked.

    • Adrian Rosebrock October 8, 2018 at 9:56 am #

      Hey Rob, that’s partially a limitation of each object tracker algorithm. Some object tracking algorithms do a poor job of detecting if an object has left the scene of view. Instead, you should try to monitor the (x, y)-coordinates of the object and if they “hang around” the border of the frame for too long you can mark them as “disappeared”.

    • Bilal Javaid October 16, 2018 at 12:33 pm #

      Is the issue that the tracker is not reporting failure, or that the tracker reports failure but the green box is still on the frame?
      If the issue is that the CSRT tracker does not report failure, I just posted about that below.

  46. Dan Richter October 9, 2018 at 2:55 pm #

    Hi Adrian,

    I would like to program Stable Multi-Target Tracking in Real-Time see

    Any advice would be appreciated very much.

  47. lico October 12, 2018 at 3:56 am #


    I used the script and tried it; everything is OK. But if I re-select the ROI, the tracker.update() function then always returns False, so it cannot modify the ROI and re-track it.

    I also find that even when I modify the ROI, it sometimes still shows the original ROI.

    So I want to know how I can re-select the ROI and make it take effect immediately.


  48. Bilal Javaid October 16, 2018 at 12:30 pm #

    I am using Python 3.7 with OpenCV 3.4.3. Everything is working; I tried the examples. I noticed CSRT is more accurate, as was mentioned in the blog post. However, when I use CSRT the tracker never fails: even when the object is completely out of the frame, it does not report any failure.
    When I use other algorithms like KCF, it works and reports failures as expected.
    So I am not sure why CSRT is not reporting any failure.

    • Adrian Rosebrock October 20, 2018 at 8:05 am #

      It’s all in the underlying mechanics of the algorithm itself. Some algorithms are very good at reporting failure, others are not. There is no true, perfect object tracker. We pick and choose which one to use knowing there are going to be failures.

  49. Sabarish Kumar Amaravadi October 17, 2018 at 5:08 am #

    I am just curious: how does the visual tracking community handle NaNs (as FP or FN?) when calculating recall vs. success rate?

  50. Hashim Muhammed October 18, 2018 at 3:01 am #

    How can I run this code with the example video as input?

    • Adrian Rosebrock October 20, 2018 at 7:46 am #

      The blog post already shows you how, Hashim. Just supply the path to your video file via command line argument:

      $ python --video dashcam_boston.mp4 --tracker csrt

      • Saumya January 10, 2019 at 11:48 am #

        Hi Adrian,
        I am unable to pass the path through the command line. Can you please help me out? Also, the camera opens a frame but it's not capturing anything; please do help.

        • Adrian Rosebrock January 11, 2019 at 9:36 am #

          If you are new to command line arguments I would recommend reading this post first.

  51. Binh Nguyen October 22, 2018 at 11:02 am #

    Hi Adrian,

    Thanks for your useful tutorial.
    Is there any solution for tracking object via multiple cameras?

    Best regards,

    • Adrian Rosebrock October 22, 2018 at 1:00 pm #

      That might depend on your use case. Are you tracking a single object across multiple cameras with different viewing angles?

  52. Loren October 22, 2018 at 6:29 pm #

    Hello Adrian,

    Another fascinating video! Do you have instructions on using an IP PTZ camera connected to a Raspberry Pi(3) and opencv to grab video frames? Then control the PTZ for object tracking and other opencv tasks over the net. I have seen some code using urllib, but I do not know what would be the best way to get started? Thank you for any suggestions.

    • Adrian Rosebrock October 29, 2018 at 2:16 pm #

      Sorry, I do not have any tutorials for an IP PTZ camera. If I do cover that in the future I’ll be sure to let you know!

  53. tong October 26, 2018 at 3:51 am #

    thanks a lot!

  54. Joe November 2, 2018 at 3:16 pm #

    Thanks for the video!
    Is there a way to automate this? I'm thinking of using this on a portable device that would run in a loop, without user input. So how would you recommend initializing the object recognition without drawing the box manually?

    • Adrian Rosebrock November 6, 2018 at 1:34 pm #

      You would need to apply a dedicated object detector to first detect the object. From there you can track it. I provide an example of such a system in OpenCV People Counter blog post.

  55. Major November 3, 2018 at 8:33 am #

    Hi, I have a problem. I'm currently using a Raspberry Pi and I get this error while running the code: "error: the following arguments are required: -p/--prototxt, -m/--model". How do I fix it? I'm new to the Raspberry Pi, btw.

  56. Marcelo November 12, 2018 at 2:49 pm #

    Congratulations on another amazing tutorial!!!
    Adrian, why do you use frame[1] after reading from a video file? In your other tutorials you just use frame =

    Thanks again!

    • Adrian Rosebrock November 13, 2018 at 4:33 pm #

      Because I need to check if we’re working with (1) a video file stream or (2) a webcam stream. The two are handled differently.
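      The difference comes down to return types: cv2.VideoCapture.read() returns a (grabbed, frame) tuple, while imutils' VideoStream.read() returns the frame directly. A toy sketch of the pattern, with stub functions standing in for the real readers:

```python
# Stubs standing in for cv2.VideoCapture.read() and VideoStream.read().
def read_from_file():    # file stream: returns a (grabbed, frame) tuple
    return (True, "frame-data")

def read_from_webcam():  # threaded webcam stream: returns the frame only
    return "frame-data"

use_file = True
frame = read_from_file() if use_file else read_from_webcam()
# For the file stream, keep only the second element of the tuple:
frame = frame[1] if use_file else frame
print(frame)  # frame-data
```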

  57. Haris Ahmed November 24, 2018 at 10:59 am #

    Hi Adrian,
    thanks for the tutorial, it's working great, but I need to ask: it runs slowly on my PC, taking 4 to 5 seconds per frame. What should I do to increase the speed of YOLO?
    I will be very thankful for your advice.
    I will be very thankful for your advice

    • Adrian Rosebrock November 25, 2018 at 8:58 am #

      To achieve true real-time performance you would need to use your GPU. Unfortunately OpenCV’s “dnn” module does not yet work well with many GPUs, but more GPU support is coming.

  58. zultan dimitry November 25, 2018 at 2:44 pm #

    you choose the ROI by pausing the video with "s" and then drawing a window around the object you want to track

    what I need help with is (I hope you can help me):

    1- I want to choose the object without pausing the video or drawing a selection window; I mean the selection window is static and I just left-click to track

    2- every time I right-click on another object, it should delete the previous one and track the new object

    3- control the selection window size with the mouse wheel. Here is the code I'm using now

    • Adrian Rosebrock November 26, 2018 at 2:33 pm #

      That is certainly possible but keep in mind that OpenCV’s GUI functionality is extremely limited. You should use a dedicated GUI library like Tkinter, QT, or Kivy.

  59. Sivaraman Sundaram November 28, 2018 at 8:03 am #


    What mechanism would you recommend to delete a tracker once the object moves out of the frame? I tried with CSRT and the tracker is maintained and returns success whenever I update, even if the object has moved out of the frame.

    • Adrian Rosebrock November 30, 2018 at 9:11 am #

      That’s likely a limitation of the tracker itself. Once you detect an object at the end of the frame (based on the predicted coordinates and image dimensions) you can insert code logic to ignore the tracker using “if/else” statements. You could even use the del Python keyword if necessary.

  60. David December 8, 2018 at 8:13 pm #

    Hey – love the tutorial – great for newbies like me.
    A few questions:
    – is there any benefit to forcing some contrast on the video feed before the tracking is applied?
    – is there a way to save off the captured region so that it can be used again ?


    • Adrian Rosebrock December 11, 2018 at 12:55 pm #

      1. That really depends on your exact use case. Typically we try to train object detectors with varying levels of contrast to help improve the detection, not adjust contrast at test time.

      2. You mean save the ROI itself? Just use cv2.imwrite

  61. Tomo December 9, 2018 at 5:35 pm #

    Could you describe how to use tracking to obtain the orientation of an object? Thanks for the help 🙂

  62. Rimjhim January 9, 2019 at 10:18 am #

    We are neither able to fetch the local video nor access the webcam. Only the webcam frame opens, but no image is captured. Please respond as soon as possible.

  63. Rimjhim January 18, 2019 at 10:04 am #

    Can I know which method (point-based, kernel-based, or silhouette-based) we are using for object tracking?

  64. Madhava Jay January 25, 2019 at 2:58 am #

    Awesome article and as usual Adrian your website is a constant go to reference on all things CV / OpenCV and python! 🙂

    I am using the python api for tracking and mixing it with an SSD object detector to handle the initial detection of objects.

    When the slower neural network gives me a box I match it with existing boxes and do some non-max suppression and then resume tracking those boxes or add new trackers for new ones.

    However when I have a new box from the SSD which I consider a sort of new ground truth for the trackers I want to update them with this information.

    As per this post it seems this isn’t possible in Python even though the C++ API has it.

    Instead I have to delete the tracker and create / init a new one.
    While this works fairly well there is quite a bit of CPU overhead in constantly creating new trackers which is causing problematic performance.

    I am using CSRT, and I tried a cheaper tracker like MOSSE, but it seemed like the number of inits went up, possibly because the tracker is poorer so there are fewer quality matches, etc. Either way, if I were able to update the box in an existing tracker rather than re-init, I'm sure it would be faster.

    Does anyone know about this? It seems ridiculous. I would consider making the changes myself if it's as trivial as updating the Python API in OpenCV and recompiling, but surely it's not there for a reason?


    • Adrian Rosebrock January 25, 2019 at 6:46 am #

      Thanks for the kind words, Madhava. Have you seen my previous tutorial on building an OpenCV people counter? In that post I provide my implementation of balancing object detection with tracking. It should be able to help you with your project.
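      One way to cut the number of re-inits (an assumed design, not something from the post) is to re-create a tracker only when the detector's fresh box actually disagrees with the tracked box, measured by intersection-over-union:

```python
# Intersection-over-union between two (x, y, w, h) boxes; re-init a
# tracker only when IoU falls below some threshold (e.g. 0.5).
def iou(a, b):
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 (identical: keep tracker)
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))  # 0.0 (disjoint: re-init)
```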

  65. Amin Salari February 6, 2019 at 8:18 am #

    I am wondering: if we want to use the tracker on grayscale images, do we need to set some parameters in the 'create' method (which I couldn't do, or maybe I don't know how), or is just supplying grayscale images to the 'init' and 'update' methods sufficient?

    • Adrian Rosebrock February 7, 2019 at 7:07 am #

      You would need to convert the image to grayscale first. From there you can supply the frames to the tracker.

  66. Yonghun Lee February 7, 2019 at 2:44 pm #

    Thanks for this wonderful article. I've been trying to use this program with an IP camera. I can successfully connect, start the program, and draw a box around the object I want to track in real time. However, once I press enter, the live stream cannot keep up with the actual real-time video on my IP camera. If possible, can you give me advice on how to fix this issue? I'd greatly appreciate it!

    • Adrian Rosebrock February 14, 2019 at 3:04 pm #

      Sorry, I don’t have any tutorials on IP camera streaming. I may consider covering them in the future. Best of luck with the project!

  67. Philip W February 10, 2019 at 10:19 pm #

    Hey Adrian,

    Nice blog; I will definitely follow it more and try some of your implementations out.
    I have a question I would like to ask. Not sure if this problem has been answered already (if so, please point me in the right direction). My question is:

    If I trained my object tracking algorithm to recognize my face, and then deployed it on a drone to follow me from behind while I ride around: since I trained the algorithm on my face, but not on the back of my head (and to be honest, the back of anyone's head with the same hair colour and length looks highly similar), and supposing the clothing I'm wearing has different colour schemes front and back (i.e. white in front, black at back), how can the algorithm identify me from behind in real time, without needing to re-train itself to learn what I look like from behind?

    • Adrian Rosebrock February 14, 2019 at 1:42 pm #

      I personally don’t like using “pure computer vision” for these types of problems. It would be much easier for your drone to track you in other ways, mainly physical sensors emitting a signal for the drone. Computer vision can be used to facilitate better tracking, such as applying a person detector every N frames and then correlating hat with sensor data, but I really wouldn’t use “pure computer vision” here.

  68. Miranda February 17, 2019 at 1:42 pm #

    I still cannot reset the bounding box by pressing the escape key. However, I managed to reset the bounding box by adding another key: if key == "r", I clear and re-initialize the tracker object:
    tracker = OPENCV_OBJECT_TRACKERS[trackerName]()
    then take a new bounding box with cv2.selectROI. But I am still interested to know why escape doesn't reset the bounding box. Any thoughts? Thanks!

    • Adrian Rosebrock February 20, 2019 at 12:38 pm #

      I’m honestly not sure. What version of OpenCV are you using?

  69. Michael Hill March 1, 2019 at 9:11 am #

    Thanks for the great tutorial! I also used your tutorial on installing OpenCV on the Raspberry Pi. They were both excellent. I have a Raspberry Pi 3 B+. I am getting ~5-6 fps with MOSSE on these videos and the default resolution of 500×500. Aside from reducing the resolution, do you have any tips on increasing the fps? Perhaps using black and white instead of color (I'm a noob, so apologies if this is an idiotic suggestion). I have a Kingston Class 10 64 GB micro SD, and I am also wondering if my SD card could be slowing things down a bit. I find the Raspberry Pi to be a bit laggy even when typing commands in the terminal. Is this normal?

    • Adrian Rosebrock March 5, 2019 at 9:56 am #

      Using grayscale instead of color is not going to improve the FPS of the object tracking; the computation is all contained within the object tracking algorithm itself. You likely won't be able to get more performance out of the Pi without reducing the image resolution.

  70. abdullah March 4, 2019 at 11:38 pm #

    Sincerely, I want to thank you for the clear knowledge you are delivering.

    • Adrian Rosebrock March 5, 2019 at 8:31 am #

      Thanks Abdullah!

  71. Rami March 15, 2019 at 8:49 pm #

    Hi Adrian,

    Many thanks for all great tutorials!

    My question: is setting the tracker parameters available in Python?
    I am using OpenCV 4 now; I have installed it for Python and C++.
    I need to set the psr_threshold in the CSRT tracker, but I only managed to do it in C++, not in Python.

    TrackerCSRT::Params param;
    track = TrackerCSRT::create(param);

    Can I do something equivalent in python with opencv 4 or any previous version?

    • Adrian Rosebrock March 19, 2019 at 10:18 am #

      You’ll want to take a look at OpenCV’s documentation. Open up a Python shell and type help(cv2.TrackerCSRT_create) — that will give you the available methods on the object. From there you can explore the documentation.

  72. Nipuna March 20, 2019 at 3:08 am #

    I am getting an error saying "--video not recognised"… please help

    • Adrian Rosebrock March 22, 2019 at 9:47 am #

      That sounds like an issue with your command line arguments. If you are new to command line arguments you should read this tutorial.

  73. WITTI March 28, 2019 at 11:15 pm #

    Hello Adrian, I hope you are doing well, and let me thank you for sharing your knowledge. I am going to use the KCF tracker for a custom object tracking task; the detector is based on YOLO object detection. I tried OpenCV's built-in trackers, but they suffer under heavy occlusion, and my goal is to overcome high occlusion. I want to use a standalone KCF implementation instead of the OpenCV built-in tracker. Will the performance improve if I do that? Your suggestion will be helpful. Thanks in advance.

    • Adrian Rosebrock April 2, 2019 at 6:24 am #

      I would suggest trying with the KCF tracker. Without knowing more about your dataset/video/end goals it’s hard for me to definitively say. Give it a try and see, let your empirical results guide your decisions.

  74. Johan April 2, 2019 at 4:54 pm #

    Hi Adrian,

    Thanks for a great post.
    Could you please indicate how I should proceed if I want to track multiple squares in the same video?

  75. Ray April 9, 2019 at 2:27 am #

    Once again, great stuff!

    So I want to detect a person and/or an object, say a ball, in a video stream. Once detected, I'm sending an alert to a third-party platform. The issue I'm having is that if I'm running the recognition/detection at 15-20 FPS, I'm seeing about that many alerts per second, as I get alerted for every detected frame. I'm running this on a reasonably powered machine with an Nvidia Quadro P3200, so I have some CUDA power available. I'll also have up to 5 cameras on this system attempting to detect at approx. 15 FPS each.

    What is the best way to address this issue so that it alerts once and then perhaps tracks? No one will be able to draw boxes; it needs to recognize and then track. I don't necessarily want to alert on the first detection because the confidence may not be high enough and may be higher in subsequent frames. Maybe it selects the best detection/recognition in a given time frame and chooses that? I've been struggling with this issue for a while. Any thoughts?

    • Adrian Rosebrock April 12, 2019 at 12:28 pm #

      Hi Ray — have you seen this tutorial where I show you how to detect + track? That seems like it would be the best option for you.
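      For the one-alert-per-second flood specifically, a simple per-label cooldown is often enough on top of detect-then-track. A sketch (the class name and cooldown value are assumptions, not from the post):

```python
# Fire an alert for a given label at most once per `cooldown` seconds.
import time

class AlertThrottle:
    def __init__(self, cooldown=30.0):
        self.cooldown = cooldown
        self.last = {}  # label -> timestamp of the last alert sent

    def should_alert(self, label, now=None):
        now = time.time() if now is None else now
        if now - self.last.get(label, float("-inf")) >= self.cooldown:
            self.last[label] = now
            return True
        return False

t = AlertThrottle(cooldown=30.0)
print(t.should_alert("person", now=0.0))   # True  (first sighting)
print(t.should_alert("person", now=5.0))   # False (within cooldown)
print(t.should_alert("person", now=40.0))  # True  (cooldown elapsed)
```

      To alert on the "best" detection in a window, you could buffer detections per label during the cooldown and send the highest-confidence one when it expires.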

  76. Junior April 16, 2019 at 5:52 pm #

    Hi, Adrian.
    First of all, thanks for the excellent tutorials. I'm running the script on a Raspberry Pi, using MOSSE and detecting objects with a people Haar cascade. I'm not using the Raspberry Pi camera; I use a video file. As expected, even with MOSSE and cascades, tracking is very slow. In another tutorial, I saw that you use threads to boost FPS via the VideoStream class. However, on line 56 of your script, you use vs = cv2.VideoCapture(args["video"]) instead of the threaded vs = VideoStream(args["video"]).start(). I modified line 56 to use VideoStream, but then the "Invalid number of channels in input image" error occurs.
    Do you have any idea why this error occurred? And why did you use cv2.VideoCapture(args["video"]) instead of VideoStream(args["video"]).start() on line 56?
    Thank you very much.

    • Adrian Rosebrock April 18, 2019 at 6:34 am #

      If you are using a video file as an input you should be using the “cv2.VideoCapture” function or the FileVideoStream class.

      • Junior April 18, 2019 at 3:25 pm #

        Hi, Adrian.

        Thanks for your reply. But VideoStream() is threaded and cv2.VideoCapture() isn't. In your tutorial "Increasing webcam FPS with Python and OpenCV", in the threaded WebcamVideoStream class, you have the src parameter and you say "if src is a string then it is assumed to be the path to a video file." In the __init__(self, src=0) constructor of the WebcamVideoStream class you call cv2.VideoCapture and capture frames in a thread. And in your VideoStream class, on line 23, you instantiate WebcamVideoStream if a Pi camera is not used. So why is it not possible to use VideoStream(src=video file path)?
        Thanks a lot


        • Adrian Rosebrock April 25, 2019 at 9:35 am #

          The short answer is that with the threaded version it will run continuously and discard frames from the video file if they are not processed fast enough. The “FileVideoStream” class implements a queue/buffer to ensure each frame of the video file is processed.
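      The queue/buffer idea behind FileVideoStream can be sketched without OpenCV: a reader thread fills a bounded queue, so frames block rather than being dropped (here a simple range stands in for cv2.VideoCapture reads):

```python
# A reader thread feeds a bounded queue; the consumer drains it so no
# frame from the "file" is ever discarded.
import queue
import threading

def reader(frames, q):
    for f in frames:
        q.put(f)   # blocks when the buffer is full, so nothing is dropped
    q.put(None)    # sentinel marking end-of-file

q = queue.Queue(maxsize=8)
threading.Thread(target=reader, args=(range(5), q), daemon=True).start()

out = []
while True:
    f = q.get()
    if f is None:
        break
    out.append(f)
print(out)  # [0, 1, 2, 3, 4]
```

      A webcam-style threaded stream, by contrast, keeps only the latest frame, which is exactly why it discards frames from a file read faster than they are consumed.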

  77. Junior April 17, 2019 at 11:44 am #

    Hi, Adrian.

    Yesterday I left a message here. I have another question: do you think that using a people Haar cascade for person detection and MOSSE for tracking is the best option when the hardware is a Raspberry Pi?

    Thank you.

    • Adrian Rosebrock April 18, 2019 at 6:35 am #

      On computationally limited devices, such as the Pi, MOSSE is the most efficient tracker (at least in terms of computation). Haar cascades are also very fast and suitable for the Pi, provided they give you the accuracy you desire.

  78. Junior April 17, 2019 at 4:56 pm #

    Hi, Adrian.

    Sorry for another post here. The error issued when using vs = VideoStream(src=args["video"]).start() is "Insufficient Memory", and it occurs in cv2.imshow("Frame", frame). I think it's a problem with the thread when running on the Raspberry Pi. Any idea how to correct this error?
    Thanks a lot.

    • Adrian Rosebrock April 18, 2019 at 6:35 am #

      Your Pi is running out of memory. Try resizing your input images/frames before processing them.
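
For anyone following along, resizing while preserving the aspect ratio is just a ratio computation; a small sketch (the target width of 400 is an arbitrary choice of mine, and in a real script you would pass the result to cv2.resize, or simply use imutils.resize which does this for you):

```python
def resized_dims(width, height, new_width=400):
    """New (width, height) that preserves the aspect ratio of the
    original frame, suitable for passing to a resize call."""
    ratio = new_width / float(width)
    return new_width, int(height * ratio)
```

For example, a 1920x1080 frame shrinks to 400x225, cutting per-frame memory and compute by well over an order of magnitude.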

  79. San April 21, 2019 at 2:05 am #

    Hi Adrian, Thanks for the wonderful post. I’m trying the following steps,

    1) Detect the object using a detector, 2) track it using a tracker, 3) after 15 frames, repeat step 1.

    Meaning, I'm refreshing the bounding box from the detector every 15 frames. The problem is that a tracker, once initialized with a bounding box, cannot be re-initialized with a new bounding box. The Tracker::init() function (tracker.cpp from opencv_contrib) itself is preventing re-initialization.

    I'm using OpenCV 4.0.1. Is this a limitation of the OpenCV tracker itself? One idea I had was to delete the tracker instance every 15 frames, create a new instance of the tracker, and initialize it with a new bounding box from the detector. The problem is that deleting the class instance (in C++) using "delete tracker" crashes. I tried KCF, MOSSE, and CSRT, and all behave the same.

    Based on my search, this seems to be a problem reported by many.

    Any thoughts? I’d like to know if we have a solution before touching the OpenCV code.

    Thank you.

    • Adrian Rosebrock April 25, 2019 at 9:11 am #

      Hey San — I think you need to follow this tutorial which covers your exact question.
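
For readers hitting the same wall: since the built-in trackers are not designed to be re-initialized, the usual workaround is to discard the old object and construct a fresh one each time the detector fires. A rough pure-Python sketch of that loop follows; `DummyTracker` and `detect` are stand-ins of mine (in real code the factory would be e.g. cv2.TrackerCSRT_create(), and `detect` your object detector):

```python
REFRESH_EVERY = 15  # run the detector and rebuild the tracker every N frames

class DummyTracker:
    """Stand-in for an OpenCV tracker that only allows init() once."""
    def __init__(self):
        self.box = None
    def init(self, frame, box):
        assert self.box is None, "cannot re-init an already-initialized tracker"
        self.box = box
    def update(self, frame):
        return True, self.box

def detect(frame):
    return (frame, frame, 20, 20)  # dummy bounding box from the "detector"

tracker = None
boxes = []
for i in range(45):  # pretend frames
    if tracker is None or i % REFRESH_EVERY == 0:
        tracker = DummyTracker()       # fresh instance, never re-init
        tracker.init(i, detect(i))     # old object is garbage-collected
    ok, box = tracker.update(i)
    boxes.append(box)
```

In Python the discarded tracker is simply garbage-collected; there is no need to call a destructor explicitly, which sidesteps the C++ `delete` crash described above.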

  80. pranay May 13, 2019 at 4:01 am #

    Where does Optical flow methods fit in like Lucas-Kanade ? is tracking via optical flow more efficient?

    • Adrian Rosebrock May 15, 2019 at 2:55 pm #

      I’ll try to cover optical flow in a future tutorial.

  81. Jessica May 13, 2019 at 4:28 am #

    Hi Adrian, thanks for the really cool and detailed post. I had great fun following this, and I actually learned a lot! What would you recommend as a next step? I have no background in object detection and tracking.

    • Adrian Rosebrock May 15, 2019 at 2:55 pm #

      Hey Jessica — if you are brand new to the world of computer vision and image processing I would recommend reading Practical Python and OpenCV. That is the first book I’ve ever written and is designed to help you get up to speed with OpenCV, paving the way to building more advanced projects.

  82. Michael Hill June 6, 2019 at 9:44 am #

    Great tutorial, my friend! Keep up the good work! Thanks to you I was easily able to get my Raspberry Pi 3B+ up and running OpenCV and tracking objects. I was just wondering if it is possible to select a rectangular ROI from a center point and a defined box size, i.e., you use a mouse click to define the center and have a predefined height and width for the box. Many thanks!

    • Adrian Rosebrock June 12, 2019 at 2:07 pm #

      It’s possible but you would need to implement that functionality yourself. See this post on how to work with OpenCV mouse events.

      Basically you’ll want to detect when the mouse is clicked, capture the (x, y)-coordinates, and then from there you’ll be able to derive the starting and ending box coordinates based on the pre-defined width and height.
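
The derivation Adrian describes is simple arithmetic; a minimal sketch (the function name and the default 100x50 box size are mine, and in practice you would call this from inside a cv2.setMouseCallback handler):

```python
def box_from_center(cx, cy, box_w=100, box_h=50):
    """Derive an (x, y, w, h) ROI from a clicked center point and a
    pre-defined box size: offset the center by half the box dimensions."""
    return cx - box_w // 2, cy - box_h // 2, box_w, box_h
```

So a click at (200, 100) with the default size yields the ROI (150, 75, 100, 50), which can then be passed straight to tracker.init().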

  83. physics June 14, 2019 at 8:09 pm #

    Hi Adrian,

    Many thanks for these wonderful tutorials! They’ve been tremendously helpful.

    I am trying to import a ROI from ImageJ (.roi file) and use it to track a feature in a video of mine. It doesn’t seem as though Tracker can handle non-rectangular bounding boxes in the input to tracker.init(). Do you have any suggestions as to how I can handle this?

    • Adrian Rosebrock June 19, 2019 at 2:18 pm #

      You need a rectangular bounding box. If you don’t have one, compute the minimum enclosing bounding box for your coordinates.

      • physics July 4, 2019 at 5:05 pm #

        Hi Adrian,

        Thank you for your response! If my ROI is a quadrilateral and I know the four vertices, do you have any suggestions of other OpenCV functionalities that could handle this kind of input ROI?

        • Adrian Rosebrock July 10, 2019 at 10:02 am #

          The “cv2.boundingRect” function might work, but if you already know the four coordinates, you can just find the top-left, and bottom-right most coordinates and derive your bounding box.
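
The min/max approach Adrian mentions generalizes to any set of vertices, not just four; a small sketch (function name is mine):

```python
def bounding_box(points):
    """Axis-aligned (x, y, w, h) box enclosing a set of (x, y) vertices:
    the same result cv2.boundingRect would give for those points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)
```

The resulting rectangle is what you would then pass to tracker.init(); any region of the box outside the original quadrilateral is simply tracked along with it.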

  84. Kyle Winsor June 23, 2019 at 5:46 pm #

    I have been working on a school project for months now, tracking a small drone in real time, and I was curious to see what method you recommend as the best way to go about this. I have used many, many different methods, and each has its own ups and downs. Just reaching out to other sources to see what other methods I could explore. Thanks!

    • Adrian Rosebrock June 26, 2019 at 1:27 pm #

      It's really dependent on your exact project and what objects you are tracking. There is no "one-size-fits-all" solution. It sounds like you're on the right track, though; you need to experiment with each tracker and balance the pros/cons for your particular application.

      • Kyle Winsor June 26, 2019 at 2:21 pm #

        Thank you for the response! One other question: we decided to invest in a better camera and ran the camera calibration, and there is a noticeable difference in the picture. What other benefits should/could we expect from a higher quality camera and using camera calibration?

        • Adrian Rosebrock July 4, 2019 at 11:00 am #

          I'm not sure I understand the question, but generally: the higher quality the camera, the fewer artifacts in the image and the better the resulting calibration.

  85. Michael HIll June 27, 2019 at 8:17 pm #

    I have a question. Is there any way to reselect a different ROI to track without restarting the program? I have tried to implement this but whatever I do I end up tracking the original object.

  86. prashant k singh June 29, 2019 at 9:15 am #

    Hey Adrian,
    Thank you for the blogs you write on OpenCV. They help me a lot to get hands-on with OpenCV, and your way of explaining is excellent.
    I am doing a machine learning based project on congestion detection in traffic.
    Adrian, can you suggest the best tracker for vehicle detection in traffic, one on which I can perform fine-tuning according to the traffic in my city?

    • Adrian Rosebrock July 4, 2019 at 10:44 am #

      You would want to try each tracker, visually validate that it's performing correctly, and then pick the one that works best for your project.

  87. Yoichiro Nambu July 4, 2019 at 5:05 pm #

    Really cool project! It's amazing what OpenCV can do in basically 100 lines of code.

    • Adrian Rosebrock July 10, 2019 at 10:02 am #

      Thanks Yoichiro!

  88. Giorgos July 18, 2019 at 10:30 am #

    Dear Adrian, this is my second reply; the first one was lost.
    What happens when I lose tracking? I can't reinitialize the tracker. Should I create a new one? Does that affect memory? Can I discard the old one? I read a reference (for C) about tracker.release(), but it's not working here. I also read about an isInit flag, but what about Python? Thank you.

  89. Khang July 20, 2019 at 1:23 am #

    Hey Adrian, I'm stuck resetting the tracker. I want to reset the detection after 30 frames. Do I need to use any methods like clear(), release(), etc., or does the tracker have some method relevant to that, before I re-init the tracker? Sorry about my bad English.

  90. Peter July 25, 2019 at 2:31 pm #

    Are you aware of any optimized implementations of CSRT that would run on a Raspberry Pi? I'm looking for something that could run at 30 FPS on a Pi 4.

  91. Ranga priyan v September 8, 2019 at 1:28 am #

    Hi adrian,
    Can I use this code on a Raspberry Pi and interface a stepper motor to continuously track the particular selected object in real time?

  92. rahul September 9, 2019 at 2:46 am #

    Hi adrian,
    Can I combine your pan/tilt code with this code and run it on a Raspberry Pi?

  93. Elena October 1, 2019 at 5:28 pm #

    Hi Adrian!

    Many thanks for the great tutorial! As always!

    Just one question: I am trying to save output file (.avi) in output folder with the help of args constructor. Till now, I managed to save the output file with green bounding boxes in the frames. Tracker (csrt) worked quite well!

    Is there a chance to also save in the output file the manually selected initial blue bounding box, so that the output file looks like your GIF above?

    Many thanks in advance!

    • Adrian Rosebrock October 3, 2019 at 12:24 pm #

      You can use the "cv2.rectangle" function to draw the blue bounding box on the first frame, if that is what you are asking.

  94. josh November 1, 2019 at 6:03 am #

    Hi Adrian, wow, you are truly awesome.
    I have a problem: how do I run the code? Can I do it via PyCharm, or only via the terminal?

    • Adrian Rosebrock November 7, 2019 at 10:28 am #

      You can use PyCharm but you’ll want to configure your command line arguments. I personally suggest just using the command line/terminal.

  95. Josh November 1, 2019 at 7:35 am #

    How can I increase my video stream to full screen?

  96. medison November 3, 2019 at 8:46 pm #

    Amazing tutorial! Thank you very much!

    • Adrian Rosebrock November 7, 2019 at 10:24 am #

      Thanks Medison!

  97. Ruby January 13, 2020 at 2:05 pm #

    Hi Adrian, you are truly awesome.
    I have a project related to object detection and tracking. Can you help me with how to combine both the object detection and object tracking phases into a single script?
    I hope you can help me. Thank you so much!!

    • Adrian Rosebrock January 16, 2020 at 10:33 am #

      I would suggest you follow this tutorial which addresses your exact question.

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply