OpenCV Object Tracking

In last week’s blog post we got our feet wet by implementing a simple object tracking algorithm called “centroid tracking”.

Today, we are going to take the next step and look at eight separate object tracking algorithms built right into OpenCV!

You see, while our centroid tracker worked well, it required us to run an actual object detector on each frame of the input video. For the vast majority of applications, running the detection phase on each and every frame is undesirable and computationally expensive.

Instead, we would like to apply object detection only once and then have the object tracker be able to handle every subsequent frame, leading to a faster, more efficient object tracking pipeline.

The question is — can OpenCV help us achieve such object tracking?

The answer is undoubtedly a resounding “Yes”.

To learn how to apply object tracking using OpenCV’s built-in object trackers, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV Object Tracking

In the first part of today’s blog post, we are going to briefly review the eight object tracking algorithms built into OpenCV.

From there I’ll demonstrate how we can use each of these object trackers in real-time.

Finally, we’ll review the results of each of OpenCV’s object trackers, noting which ones worked well in which situations and which ones didn’t.

Let’s go ahead and get started tracking objects with OpenCV!

8 OpenCV Object Tracking Implementations

Figure 1: Drone footage of a car in motion being tracked with the MedianFlow tracker.

You might be surprised to know that OpenCV includes eight (yes, eight!) separate object tracking implementations that you can use in your own computer vision applications.

I’ve included a brief highlight of each object tracker below:

  1. BOOSTING Tracker: Based on the same algorithm (AdaBoost) that powers the machine learning behind Haar cascades, and, like Haar cascades, over a decade old. This tracker is slow and doesn’t work very well. Interesting only for legacy reasons and for comparing against other algorithms. (minimum OpenCV 3.0.0)
  2. MIL Tracker: Better accuracy than BOOSTING tracker but does a poor job of reporting failure. (minimum OpenCV 3.0.0)
  3. KCF Tracker: Kernelized Correlation Filters. Faster than BOOSTING and MIL. Like MIL, it does not handle full occlusion well. (minimum OpenCV 3.1.0)
  4. CSRT Tracker: Discriminative Correlation Filter (with Channel and Spatial Reliability). Tends to be more accurate than KCF but slightly slower. (minimum OpenCV 3.4.2)
  5. MedianFlow Tracker: Does a nice job of reporting failures; however, the model will fail if there is too large a jump in motion, such as fast-moving objects or objects that change appearance quickly. (minimum OpenCV 3.0.0)
  6. TLD Tracker: I’m not sure if the problem lies with the OpenCV implementation of the TLD tracker or the algorithm itself, but the TLD tracker was incredibly prone to false positives. I do not recommend using this OpenCV object tracker. (minimum OpenCV 3.0.0)
  7. MOSSE Tracker: Very, very fast. Not as accurate as CSRT or KCF but a good choice if you need pure speed. (minimum OpenCV 3.4.1)
  8. GOTURN Tracker: The only deep learning-based object tracker included in OpenCV. It requires additional model files to run and will not be covered in this post. My initial experiments showed it was a bit of a pain to use, even though it reportedly handles viewing changes well (my initial experiments didn’t confirm this, though). I’ll try to cover it in a future post, but in the meantime, take a look at Satya’s write-up. (minimum OpenCV 3.2.0)

My personal suggestion is to:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

Satya Mallick also provides some additional information on these object trackers in his article.

Object trackers have been under active development in OpenCV 3. Here is a brief summary of which trackers appear in which versions of OpenCV:

Figure 2: OpenCV object trackers and which versions of OpenCV they appear in. I recommend OpenCV 3.4+ if you plan to use the built-in trackers.

Note: Despite following the instructions in this issue on GitHub and turning off precompiled headers, I was not able to get OpenCV 3.1 to compile.

Now that you’ve had a brief overview of each of the object trackers, let’s get down to business!

Object Tracking with OpenCV

To perform object tracking using OpenCV, open up a new file, name it opencv_object_tracker.py , and insert the following code:
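The exact listing ships with the “Downloads”; the sketch below approximates the opening block of the script:

# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
import argparse
import imutils
import time
import cv2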

We begin by importing our required packages. Ensure that you have OpenCV installed (I recommend OpenCV 3.4+) and that you have imutils  installed:
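If imutils is missing or out of date, it is pip-installable:

$ pip install --upgrade imutils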

Now that our packages are imported, let’s parse two command line arguments:
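A sketch of the argument parsing block (the -v short flag is an assumption):

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", type=str,
    help="path to input video file")
ap.add_argument("-t", "--tracker", type=str, default="kcf",
    help="OpenCV object tracker type")
args = vars(ap.parse_args())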

Our command line arguments include:

  • --video : The optional path to the input video file. If this argument is left off, then the script will use your webcam.
  • --tracker : The OpenCV object tracker to use. By default, it is set to kcf (Kernelized Correlation Filters). For a full list of possible tracker strings, refer to the next code block or to the “Object Tracking Results” section below.

Let’s handle the different types of trackers:
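Roughly, the block looks like the following sketch (constructor names as exposed by OpenCV 3.4’s contrib modules):

# extract the OpenCV version info
(major, minor) = cv2.__version__.split(".")[:2]

# if we are using OpenCV 3.2 or before, use the special factory
# function, cv2.Tracker_create, with an uppercase tracker name
if int(major) == 3 and int(minor) < 3:
    tracker = cv2.Tracker_create(args["tracker"].upper())

# otherwise, for OpenCV 3.3+, each tracker has its own constructor
else:
    # map the tracker command line argument (key) to the actual
    # OpenCV object tracker constructor (value)
    OPENCV_OBJECT_TRACKERS = {
        "csrt": cv2.TrackerCSRT_create,
        "kcf": cv2.TrackerKCF_create,
        "boosting": cv2.TrackerBoosting_create,
        "mil": cv2.TrackerMIL_create,
        "tld": cv2.TrackerTLD_create,
        "medianflow": cv2.TrackerMedianFlow_create,
        "mosse": cv2.TrackerMOSSE_create
    }

    # instantiate the requested OpenCV object tracker
    tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()

# initialize the bounding box coordinates of the object we are
# going to track
initBB = None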

As denoted in Figure 2, not all trackers are present in every minor version of OpenCV 3+.

There’s also an implementation change that occurs at OpenCV 3.3. Prior to OpenCV 3.3, tracker objects had to be created with cv2.Tracker_create, passing an uppercase string of the tracker name (Lines 22 and 23).

For OpenCV 3.3+, each tracker is created with its own constructor, such as cv2.TrackerKCF_create. The dictionary, OPENCV_OBJECT_TRACKERS, contains seven of the eight built-in OpenCV object trackers (Lines 30-38). It maps each object tracker command line argument string (key) to the actual OpenCV object tracker constructor (value).

On Line 42, the tracker object is instantiated by looking up the tracker command line argument in OPENCV_OBJECT_TRACKERS and calling the associated constructor.

Note: I am purposely leaving GOTURN out of the set of usable object trackers as it requires additional model files.

We also initialize initBB  to None  (Line 46). This variable will hold the bounding box coordinates of the object that we select with the mouse.

Next, let’s initialize our video stream and FPS counter:
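A sketch of that block (the webcam source index is assumed to be 0):

# if a video path was not supplied, grab a reference to the webcam
if not args.get("video", False):
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(1.0)

# otherwise, grab a reference to the video file
else:
    vs = cv2.VideoCapture(args["video"])

# initialize the FPS throughput estimator
fps = None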

Lines 49-52 handle the case in which we are accessing our webcam. Here we initialize the webcam video stream with a one-second pause for the camera sensor to “warm up”.

Otherwise, the --video  command line argument was provided so we’ll initialize a video stream from a video file (Lines 55 and 56).

Let’s begin looping over frames from the video stream:
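A sketch of the top of the loop (the exact resize width is illustrative):

# loop over frames from the video stream
while True:
    # grab the current frame, handling both the VideoStream and
    # cv2.VideoCapture return types
    frame = vs.read()
    frame = frame[1] if args.get("video", False) else frame

    # if we are reading from a video file and no frame was grabbed,
    # we have reached the end of the stream
    if frame is None:
        break

    # resize the frame so the trackers can run faster, then grab
    # the frame dimensions
    frame = imutils.resize(frame, width=500)
    (H, W) = frame.shape[:2]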

We grab a frame  on Lines 65 and 66 as well as handle the case where there are no frames left in a video file on Lines 69 and 70.

In order for our object tracking algorithms to process the frame faster, we resize the input frame to a smaller, fixed width (Line 74) — the less data there is to process, the faster our object tracking pipeline will run.

We then grab the width and height of the frame as we’ll need the height later (Line 75).

Now let’s handle the case where an object has already been selected:
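A sketch of the tracking update block (colors, font, and text placement are illustrative):

    # check to see if we are currently tracking an object
    if initBB is not None:
        # grab the new bounding box coordinates of the object
        (success, box) = tracker.update(frame)

        # check to see if the tracking was a success and, if so,
        # draw the updated bounding box on the frame
        if success:
            (x, y, w, h) = [int(v) for v in box]
            cv2.rectangle(frame, (x, y), (x + w, y + h),
                (0, 255, 0), 2)

        # update the FPS throughput estimator
        fps.update()
        fps.stop()

        # build the list of status text to display on the frame
        info = [
            ("Tracker", args["tracker"]),
            ("Success", "Yes" if success else "No"),
            ("FPS", "{:.2f}".format(fps.fps())),
        ]

        # loop over the info tuples and draw them on the frame
        for (i, (k, v)) in enumerate(info):
            text = "{}: {}".format(k, v)
            cv2.putText(frame, text, (10, H - ((i * 20) + 20)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)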

If an object has been selected, we need to update the location of the object. To do so, we call the update  method on Line 80 passing only the frame  to the function. The update  method will locate the object’s new position and return a success  boolean and the bounding box  of the object.

If successful, we draw the new, updated bounding box location on the frame  (Lines 83-86). Keep in mind that trackers can lose objects and report failure so the success  boolean may not always be True .

Our FPS throughput estimator is updated on Lines 89 and 90.

On Lines 94-98 we construct a list of textual information to display on the frame . Subsequently, we draw the information on the frame  on Lines 101-104.

From there, let’s show the frame  and handle selecting an object with the mouse:
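A sketch of that block (the window name is assumed to be “Frame”):

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the "s" key is pressed, "select" a bounding box to track
    if key == ord("s"):
        # select the bounding box of the object we want to track
        # (press ENTER or SPACE after selecting the ROI)
        initBB = cv2.selectROI("Frame", frame, fromCenter=False,
            showCrosshair=True)

        # start the OpenCV object tracker using the supplied bounding
        # box coordinates, then start the FPS throughput estimator
        tracker.init(frame, initBB)
        fps = FPS().start()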

We’ll display the frame and continue looping, handling key presses as they occur.

When the “s” key is pressed, we’ll “select” an object ROI using cv2.selectROI . This function allows you to manually select an ROI with your mouse while the video stream is frozen on the frame:

Figure 3: Selecting an object’s ROI with the mouse and cv2.selectROI.

The user must draw the bounding box and then press “ENTER” or “SPACE” to confirm the selection. If you need to reselect the region, simply press “ESCAPE”.

Using the bounding box info reported from the selection function, our tracker  is initialized on Line 120. We also initialize our FPS counter on the subsequent Line 121.

Of course, we could use an actual object detector in place of manual selection here as well.

Last week’s blog post already showed you how to combine a face detector with object tracking. For other objects, I would suggest referring to this blog post on real-time deep learning object detection to get you started. In a future blog post in this object tracking series, I’ll show you how to combine the object detection and object tracking phases into a single script.

Lastly, let’s handle if the “q” key (for “quit”) is pressed or if there are no more frames in the video, thus exiting our loop:
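A sketch of the final block:

    # if the "q" key is pressed, break from the loop
    elif key == ord("q"):
        break

# if we were using a webcam, stop the video stream
if not args.get("video", False):
    vs.stop()

# otherwise, release the video file pointer
else:
    vs.release()

# close all windows
cv2.destroyAllWindows()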

This last block simply handles the case where we have broken out of the loop. All pointers are released and windows are closed.

Object Tracking Results

In order to follow along and apply object tracking using OpenCV to the videos in this blog post, make sure you use the “Downloads” section to grab the code + videos.

From there, open up a terminal and execute the following command:
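For example, to run the CSRT tracker on the Boston dashcam clip (swap in any of the videos and tracker strings listed below):

$ python opencv_object_tracker.py --video dashcam_boston.mp4 --tracker csrt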

Be sure to edit the two command line arguments as desired: --video  and --tracker .

If you have downloaded the source code + videos associated with this tutorial, the available arguments for --video  include the following video files:

  • american_pharoah.mp4
  • dashcam_boston.mp4
  • drone.mp4
  • nascar_01.mp4
  • nascar_02.mp4
  • race.mp4
  • …and feel free to experiment with your own videos or other videos you find online!

The available arguments for --tracker  include:

  • csrt
  • kcf
  • boosting
  • mil
  • tld
  • medianflow
  • mosse

Refer to the “8 OpenCV Object Tracking Implementations” section above for more information about each tracker.

You can also use your computer’s webcam — simply leave off the video file argument:
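For example:

$ python opencv_object_tracker.py --tracker kcf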

In the following example video I have demonstrated how OpenCV’s object trackers can be used to track an object for an extended amount of time (i.e., an entire horse race) versus just short clips:

Video and Audio Credits

To create the examples for this blog post I needed to use clips from a number of different videos.

A massive “thank you” to Billy Higdon (Dashcam Boston), The New York Racing Association, Inc. (American Pharaoh), Tom Wadsworth (Multirotor UAV Tracking Cars for Aerial Filming), NASCAR (Danica Patrick leads NASCAR lap), GERrevolt (Usain Bolt 9.58 100m New World Record Berlin), Erich Schall, and Soularflair.

Summary

In today’s blog post you learned how to utilize OpenCV for object tracking. Specifically, we reviewed the eight object tracking algorithms (as of OpenCV 3.4) included in the OpenCV library:

  • CSRT
  • KCF
  • Boosting
  • MIL
  • TLD
  • MedianFlow
  • MOSSE
  • GOTURN

I recommend using either CSRT, KCF, or MOSSE for most object tracking applications:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

From there, we applied each of OpenCV’s eight object trackers to various tasks, including sprinting, horse racing, auto racing, drone/UAV footage, and vehicle dashcam footage.

In next week’s blog post you’ll learn how to apply multi-object tracking using a special, built-in (but mostly unknown) OpenCV function.

To be notified when next week’s blog post on multi-object tracking goes live, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


113 Responses to OpenCV Object Tracking

  1. Navaneeth Krishnan July 30, 2018 at 11:28 am #

    Adrian, I have seen your face detection video; it was nice. But sometimes false positives were occurring. Then I heard about OpenFace for face recognition. Why don’t you make a tutorial on OpenFace? It would be very helpful.

    • Adrian Rosebrock July 30, 2018 at 1:22 pm #

      Navaneeth — you requested that very topic last week. I am aware of your comment and have acknowledged it. I love taking requests regarding what readers want to learn more about, but I need to kindly ask you to please stop requesting the same topic on every post. If I can cover it in the future I certainly will.

  2. Fan July 30, 2018 at 12:22 pm #

    Hi Adam, great post as always! I wonder if you can do a separate post that use a similar approach(semi-supervised) for pixel-wise tracking?

    • Adrian Rosebrock July 30, 2018 at 1:18 pm #

      It’s Adrian actually, not Adam 😉 I’ll be doing a pixel-wise segmentation post soon.

  3. Jean-Marie July 30, 2018 at 12:55 pm #

    Hi, Very nice ! I tested it on RC plane video footage. I don’t know if it’s a bug or not, but when the bounding box fails to capture the object (RC plane), you must restart the loop from scratch (with the video from timestamp zero), which is quite annoying. Is there any solution to delete the memorized bounding box with a keyboard shortcut? If I watch long footage, it gives a better chance of successfully tracking the plane or another plane. Thanks !!! I tested GOTURN and I find the performance really poor (12/15 fps). The native code seems to run on CPU, not GPU. Will investigate !!!

    • Adrian Rosebrock July 30, 2018 at 1:21 pm #

      I haven’t done enough investigative work with GOTURN but just like any other machine learning algorithm, if the model was not trained on data representative of what it will see in the “real world” then it’s not going to perform well. I suspect GOTURN was not trained on RC plane video footage or objects that may appear in such footage.

      As far as restarting the video all you should have to do is re-instantiate the actual OpenCV object tracker inside the “if” statement that is used to catch the “c” key. Something like this should work:
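      Roughly (a hypothetical “c” key handler added alongside the existing “s” and “q” checks):

      elif key == ord("c"):
          # clear the current selection and re-instantiate the tracker
          # so a brand new object can be selected
          initBB = None
          tracker = OPENCV_OBJECT_TRACKERS[args["tracker"]]()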

  4. Singh July 30, 2018 at 1:46 pm #

    Hi Adrian,
    I am getting the following error when I am trying to execute the program!

    Traceback (most recent call last):
    File “opencv_object_tracking.py”, line 35, in
    “csrt”: cv2.TrackerCSRT_create,
    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create’

    • Adrian Rosebrock July 30, 2018 at 2:01 pm #

      The only time I’ve seen a similar error is from this post on saliency detection. If you check out the comments section, in particular this comments thread you’ll find what should be the solution. It appears there is a problem with your OpenCV install and/or version. Reading the thread I linked you to should help resolve it.

    • KhanhPhamDinh July 30, 2018 at 2:20 pm #

      I think your OpenCV build is the official base package without the contrib modules, so several functions are dropped. The way to deal with it was to install opencv-contrib-python, per this helpful link:
      https://stackoverflow.com/questions/44633378/attributeerror-module-cv2-cv2-has-no-attribute-createlbphfacerecognizer

      • Adrian Rosebrock July 30, 2018 at 3:06 pm #

        Thanks so much for sharing the solution, KhanhPhamDinh!

        • Lu August 6, 2018 at 5:37 pm #

          You need to uninstall opencv before installing opencv-contrib
          Make sure no console is running that has imported cv2 while you execute your installing process
          Run the cmd in Administration Mode (Go to the .exe of the cmd program, right click and select ‘Run as Administrator’)

    • Aaron July 31, 2018 at 12:44 pm #

      I’ve also run into this problem. I’m working on an Ubuntu 16.04 virtual machine – following Adrian’s build instructions for OpenCV (v3.4) I also installed opencv_contrib_python without any luck. It appears only to be CSRT. The rest of the trackers are there.

      • Aaron July 31, 2018 at 12:49 pm #

        And I’ve just discovered my issue. I’m on OpenCV 3.4.0, and the minimum as mentioned above is 3.4.2.

    • cuong September 27, 2018 at 3:15 am #

      I also encountered the same error. Have you solved it yet?

  5. David July 30, 2018 at 2:07 pm #

    Hi Adrian,
    Is it possible to get object position during traking operation (delta position in relation of the center of the screen for example) ? I would like to plug the camera on a motorized support to “re-centering” tracked object on real-time operation.

    David

    • Adrian Rosebrock July 30, 2018 at 3:08 pm #

      Hey David, I’m not sure I fully understand your question. If you know the center (x, y)-coordinates of the screen along with the centroid or corners of the predicted bounding box you can always compute the distance between them to obtain the delta. But again, I may not be understanding your question properly.

  6. KhanhPhamDinh July 30, 2018 at 2:15 pm #

    Dear fellow. I’m a Windows user, and although I follow your instructions to drag the region of the object, it’s unable to track (no blue rectangle appears). Could you help me? Best regards!

    • Adrian Rosebrock July 30, 2018 at 3:07 pm #

      If you select the bounding box and then the bounding box immediately disappears after selection then the “success” boolean is false. The object tracker cannot track your object. You should try a different object tracker, keeping in mind my pros/cons listed in the post.

  7. Anis July 30, 2018 at 2:20 pm #

    I tried the code and I got this error

    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerCSRT_create’

    I am using a Mac-OS Sierra + Python3 and OpenCV 3.4.1

    What could be the problem?

    • Adrian Rosebrock July 30, 2018 at 3:06 pm #

      Make sure you read my reply to “Singh” as I have addressed this question.

  8. Jean-Marie July 30, 2018 at 2:53 pm #

    thanks Adrian for the tips for re-instantiate the object tracker: perfectly works !
    About GOTURN: even on the horse race or dashcam_boston video, tracking is lost or the box becomes larger and larger with GOTURN… Something is probably not well configured. Really cool piece of code ! love it …

    • Adrian Rosebrock July 30, 2018 at 3:08 pm #

      Awesome, I’m glad that fix worked! 😀

  9. Carlos July 30, 2018 at 5:23 pm #

    Excellent post! Do you have a recommendation for traffic lights recognition ?

    • Adrian Rosebrock July 30, 2018 at 5:34 pm #

      I would suggest training your own custom object detector. Faster R-CNNs and SSDs would be a good start. Use this post to help you get started. From there I cover how to train your own custom deep learning-based object detectors inside Deep Learning for Computer Vision with Python. Two of the chapters even have a self-driving car theme, including front/rear view vehicle recognition and traffic sign recognition.

  10. Florian July 30, 2018 at 5:36 pm #

    I had implemented something similar: using YOLO once per second plus object following from dlib in the meantime, together with a matrix to calculate the matching. Depending on the algorithm, if you have 50+ objects it can get complex.

    • Adrian Rosebrock July 31, 2018 at 9:45 am #

      Congrats on the successful project, Florian! As you noted it can get very complex depending on how many objects you’re tracking. In a future post in this series I’m going to be demonstrating more efficient multi-object tracking by distributing the trackers across multiple cores on a processor.

  11. Hermann-Marcus Behrens July 30, 2018 at 9:04 pm #

    Very impressive. Love your work.

    • Adrian Rosebrock July 31, 2018 at 9:43 am #

      Thank you, I really appreciate it 🙂

  12. Sudharsan July 31, 2018 at 12:52 am #

    Hi Adrian,
    Is it possible to use in raspberry pi 3B+?
    You are doing a great work in computer vision
    Thanks for the tutorial.

    • Adrian Rosebrock July 31, 2018 at 9:41 am #

      Try it and see! 😉 But yes, the code will work on a Pi, just use a faster tracker like MOSSE as the Pi is slower than a normal laptop/desktop.

      • Sudharsan August 3, 2018 at 4:17 am #

        Thanks for the reply it works well with MOSSE. It also works with Picamera too.

        • Adrian Rosebrock August 7, 2018 at 7:08 am #

          Thanks for following up, Sudharsan 🙂

  13. atom July 31, 2018 at 2:49 am #

    Hi Adrian,
    (may be my question is not right place) in your posts on object detection I’ve read, the prototxt and model files are already available, attached for download. So can you make a post to show how to train a neural network using some well-known frameworks (I found you prefer caffe). Actually, I just want to see if I can experience “like-a-boss” feeling in making complete neural network: I capture (collect) images as many, throw them in the network I prepared, get a cup of coffee for waiting, export into model files and lastly apply these files to your post.
    Say I’m insane if you like!

    • Adrian Rosebrock July 31, 2018 at 9:40 am #

      Hey Atom — I actually cover how to train your own custom models inside Deep Learning for Computer Vision with Python. That is by far the most comprehensive book to training your own deep learning models. Otherwise, I think you’ll have fun playing with this tutorial.

      • atom July 31, 2018 at 10:32 pm #

        Thanks so much, Adrian, I truly appreciate that

        • Adrian Rosebrock August 1, 2018 at 6:33 am #

          Happy to help, Atom 🙂

  14. Markus July 31, 2018 at 4:53 am #

    dear Adrian:
    I am a Chinese student. I am sorry that my English is not good enough. I have a question:
    how can I input my own video, and where should I write the path to the video file?
    Because of my poor English, this takes me a lot of time. If you can answer me, I would be very appreciative.
    Markus

    • Adrian Rosebrock July 31, 2018 at 9:39 am #

      There are no modifications needed to the code. Just update the --video switch to point to your video file when you execute the script:

      $ python opencv_object_tracking.py --video path/to/your/video.mp4

      • Markus August 1, 2018 at 4:48 am #

        Amazing! Thank you so much, I have succeeded in running the code with my local video!
        I really appreciate your help!

  15. Ray Casler July 31, 2018 at 5:03 am #

    Adrian;

    Would this work on a raspberry pi with pi camera?

    • Adrian Rosebrock July 31, 2018 at 9:38 am #

      Yes, but you would want to use one of the faster trackers like MOSSE as the Raspberry Pi isn’t as fast as a standard laptop/desktop.

  16. sam July 31, 2018 at 5:59 am #

    Hello Adrian,

    What is the best algorithm to distinguish head covered (scarf, Cap, etc.) from uncovered head? For detection and tracking purpose.

    • Adrian Rosebrock July 31, 2018 at 9:38 am #

      There are a few ways to approach this:

      1. Use a face detector to detect a face and then run a classifier on the face region to determine covered/uncovered
      2. Train an object detector to detect covered/uncovered

      The first method is easier to implement but will be more prone to missing faces altogether as sometimes covered heads/faces are significantly harder to detect if the model was never trained on those images.

  17. Gabe July 31, 2018 at 1:26 pm #

    Hey Adrian,

    Thanks for the tutorial, I am trying to run the code and whether I give no video argument and just use my webcam or if I give a path to a video on my computer whenever I press ‘s’ to draw the bounding box I get this message:

    Traceback (most recent call last):
    File “opencv_object_tracker.py”, line 120, in
    showCrosshair=True)
    TypeError: Required argument ‘boundingBox’ (pos 3) not found

    I’m not sure what I am doing wrong since I just used your source code. If you have any tips to fix this please let me know. Thanks!

    • Adrian Rosebrock August 1, 2018 at 6:30 am #

      Hi Gabe, just to confirm a few things:

      1. You used the “Downloads” section of the blog post to download the code rather than copied + pasted, right? Sometimes readers accidentally introduce errors if they try to copy and paste which is why I always recommend using the “Downloads” section.
      2. Which version of OpenCV are you using?

      • Rob August 30, 2018 at 1:30 pm #

        Hi Adrian,

        my experience is that this happens for all the trackers I’ve tried you say will compile under 3.2 (my version of OpenCV). As soon as “s” is hit, the program terminates with the above error.

        Python 3.5.2
        OpenCV 3.2
        Trackers tried: kcf, medianflow and mil

        I am using your code (downloaded) and one of the .mp4s provided

        Cheers,
        Rob

        • Adrian Rosebrock September 5, 2018 at 9:27 am #

          Hey Rob — I’ve had a lot of issues with OpenCV 3.2 and the object tracking algorithms. I would really recommend you use at least OpenCV 3.3 and ideally OpenCV 3.4+.

  18. CRay July 31, 2018 at 3:23 pm #

    Greetings!
    This is one of the great websites for OpenCV and Python implementation. I’m thankful for Adrian. I also purchased the book package earlier.
    I’m not a software engineer, but I’m trying to learn.
    I’m also looking for someone who can integrate what Adrian is showing us in terms of Facial recognition into an existing surveillance system, like ZoneMinder (https://zoneminder.com/), or iVideon (https://www.ivideon.com) or Kerberos (https://www.kerberos.io/).
    I prefer Zoneminder as I can control all data.
    Basically, I’m looking for an integration, or to create a plugin, that would do face recognition in real time (or near real time) using OpenCV. I would also like to incorporate other types of detection such as license plate detection, object counting, etc.
    I have a budget established for this. If anyone is familiar with ZoneMinder and can develop something like this, please let me know.

    • Adrian Rosebrock August 1, 2018 at 6:31 am #

      Thank you for picking up a copy of my book, I hope you are enjoying it so far! Regarding your project send me an email and we can get your job listed on PyImageJobs.

  19. Byron July 31, 2018 at 6:03 pm #

    Adrian

    Thanks for the post. Very interesting. Came across an error on the construction of the argument parser…..

    usage: __main__.py [-h] [-t TRACKER]

    • Adrian Rosebrock August 1, 2018 at 6:32 am #

      The error is related to your command line arguments not being supplied. If you’re new to command line arguments and how they work with Python, that’s okay, but make sure you read this post first so you can get up to speed.

  20. Learner August 1, 2018 at 8:15 am #

    I am unable to create a frame and track it on the video, not sure why. I am on Ubuntu and installed OpenCV through the Anaconda framework. Even during the first run I got an AttributeError, which I resolved after reading KhanhPhamDinh’s comment (thanks).

    Any help is most welcomed.

    • Adrian Rosebrock August 2, 2018 at 9:29 am #

      When you say you are unable to “create a frame” are you saying that the Python script does not open a window? That it crashes immediately with another error? If you can be more specific I can try to provide a recommendation.

  21. farshad August 1, 2018 at 11:55 pm #

    thanks a lot Adrian. good job again. What is the differences between object tracker and particle filter in opencv?

    • Adrian Rosebrock August 2, 2018 at 9:24 am #

      You can use particle filtering as a form of object tracking; however, particle filtering methods have a number of other assumptions, including Monte Carlo-based algorithms, dynamic systems, and random perturbations.

  22. reza August 2, 2018 at 4:20 am #

    Hi Adrian. Thanks for your great post. I have a question and I hope you can help me. I want to build a people counter installed at the entrance of a store. What’s your best solution for doing this? Face detection? Background removal? Frame subtraction? …

    • Adrian Rosebrock August 2, 2018 at 9:19 am #

      I’ll actually be covering how to build a person counter later in this object tracking series, stay tuned!

      • reza August 2, 2018 at 5:42 pm #

        Thank you so much. Adrian, how long will it take? Less than one month or more?

        • Adrian Rosebrock August 7, 2018 at 7:43 am #

          The person counter post will be published this Monday.

  23. madhu dande August 2, 2018 at 8:36 pm #

    I am unable to track the car using any of the models. While executing the program, the Boston dashcam video launches, but as mentioned I am not able to track. I need your support.

    • Adrian Rosebrock August 7, 2018 at 7:41 am #

      Make sure you click the window opened by OpenCV (not your terminal window) and then press the “s” key on your keyboard to select the ROI.

  24. Greg August 3, 2018 at 10:57 am #

    Great post! Just wondering if you have any advice about how one might go about working with trackers in OpenCV on the Android platform.

  25. SINAN OGUZ August 5, 2018 at 2:27 pm #

    Hi, Adrian, this is my first comment :), first of all thanks for your web site, I’ve learnt many things about image processing.

    I have a question about automatic tracking initialization for this post (OpenCV Object Tracking).

    You said “Of course, we could also use an actual, real object detector in place of manual selection here as well.”

    What is your suggestions instead of selecting ROI? Is there a new post about it?

    Thanks for all 🙂

    • Adrian Rosebrock August 7, 2018 at 6:47 am #

      This post shows you how an object detector can automatically detect ROIs and track them.

  26. M Tayab Khan August 6, 2018 at 3:59 am #

    hi Adrian,
    I am getting the following error while running the file object_tracker.py

    error: the following arguments are required: -p/–prototxt, -m/–model

    • Adrian Rosebrock August 7, 2018 at 6:45 am #

      You need to supply the command line arguments to the script when you’re executing it (which you’re not doing). If you’re new to command line arguments I recommend you read this post to get up to speed. From there you’ll be able to execute the script.

      • M Tayab Khan August 11, 2018 at 8:26 am #

        Thank You v much sir,

        The mistake was a very basic one and I am grateful to you for guiding me in a good way. You are doing a great job, keep it up. Most of the time I was stuck on the same problem, but now it is solved.

  27. SAM August 7, 2018 at 9:34 am #

    Hello Adrian,
    thanks for continuous nice work.
    I tried the CSRT tracker; the tracking result seems good. However, when the target disappears from the screen and then shows again, it isn’t detected.
    Any support for handling this situation (losing the target)?

    Thanks again.

    • Adrian Rosebrock August 9, 2018 at 3:07 pm #

      Hey Sam — you can actually combine the techniques from this post with the previous post on centroid tracking. I’ll be providing an example of such an implementation in next week’s blog post on people tracking.

      • SAM August 10, 2018 at 11:32 am #

        Thanks Adrian,

        in your post it was face detection with a pretrained model; for the tracking example the object I am tracking is not pretrained, so how can I achieve it?

        • Adrian Rosebrock August 15, 2018 at 9:27 am #

          If you have not trained an object detector to detect your object you will not be able to detect it which means you will not be able to track it. What object are you trying to detect?

          • SAM August 15, 2018 at 10:07 am #

            the idea is to select any object and track it in the video; can I train the system to detect a selected object at runtime? Haar cascades or any other algorithm?

          • Adrian Rosebrock August 15, 2018 at 10:53 am #

            No, the actual object detector needs to be trained ahead of time. Once you’ve detected the actual object you can track it.

  28. Ching June Hao August 9, 2018 at 1:31 pm #

    Hi Adrian, thanks for such a good tutorial. But when I try to run the code, nothing happens. What could be the possible error? I’m using Python 3.6 with OpenCV version 3.4.1.

    Thanks in advance!

    • Adrian Rosebrock August 9, 2018 at 2:34 pm #

      Could you provide more details on what you mean by “nothing is happening”? Does the script automatically exit? Is an error thrown? Does the script run but no output displayed?

  29. Jindřich Soukup August 13, 2018 at 10:14 am #

    Hi Adrian,
    thanks a lot for your tutorial. I have problems with detection of an object in complicated conditions (a dark object in a dark region of the scene). I’m also trying to detect when the target disappears from the scene. I’ve noticed that together with the detected position the tracker also gives the ‘success’ information, which is unfortunately always True (even when tracking has failed). In this thread http://answers.opencv.org/question/110756/how-to-reset-or-update-kcf-tracker-roi-when-it-lose-the-target/ it is discussed that one solution is to modify the OpenCV function to give the response of the tracker instead of the binarized True/False information. Unfortunately, I’m not proficient enough to make such changes to the code. Is there another possible way?
    Thanks a lot.
    Jindřich

    • Adrian Rosebrock August 15, 2018 at 8:50 am #

      Unfortunately there is not an easy solution to this problem. In fact, handling the case when object trackers lose track of an object is one of the hardest components of an object tracking algorithm — I would argue that it’s far from solved. It’s not an easy solution and unfortunately for any other level of granularity you would need to modify the OpenCV source code.

  30. Jeff August 13, 2018 at 10:02 pm #

    I was able to get it to work with the pi camera by changing line 51 from

    vs = VideoStream(src=0).start()

    to

    vs = VideoStream(usePiCamera=1).start()

    • Adrian Rosebrock August 15, 2018 at 8:36 am #

      Yep, that’s all you need to do!

  31. Jeff August 13, 2018 at 10:13 pm #

    At the end of your post you say CSRF instead of CSRT a few times.

    • Adrian Rosebrock August 15, 2018 at 8:36 am #

      Thanks Jeff. Post updated 🙂

  32. vipul August 14, 2018 at 1:29 pm #

    Traceback (most recent call last):
    File “deep_learning_with_opencv.py”, line 114, in
    showCrosshair=True)
    TypeError: Required argument ‘boundingBox’ (pos 3) not found

    how to solve this!!

    • Adrian Rosebrock August 15, 2018 at 8:22 am #

      Hey Vipul — which version of OpenCV are you using?

      • vipul August 15, 2018 at 2:29 pm #

        3.3.1

  33. vipul August 15, 2018 at 2:30 pm #

    “csrt”: cv2.TrackerCSRT_create,
    AttributeError: ‘module’ object has no attribute ‘TrackerCSRT_create’

    how??

    • Adrian Rosebrock August 15, 2018 at 3:38 pm #

      You should take a look at Figure 2, Vipul — you need at least OpenCV 3.4 for the CSRT tracker. Since you’re on OpenCV 3.3 you should comment out the CSRT tracker and the MOSSE tracker, then the code will work for you.

  34. Greg August 16, 2018 at 9:25 am #

    Any advice on tracking when the background colors are similar to the object being tracked? I have been experimenting with tracking a green ball. The CSRT tracker does a decent job indoors but when i move outside and roll it through the grass it fails. I can detect the ball but the tracker loses it and ends up locked onto a spot on the grass.

    • Adrian Rosebrock August 17, 2018 at 7:21 am #

      How are you detecting objects in the image? Is it via just simple color thresholding? If so, you should use a more advanced method like a dedicated object detector (HOG + Linear SVM, SSDs, Faster R-CNNs, etc.). I have an example of such an object detection + object tracking pipeline here.

  35. Greg August 17, 2018 at 12:31 pm #

    Yes, I’m currently just using color thresholding to detect the green ball. To test the tracker I put the ball on a high contrast surface and the ball is detected. I then use the detection to start tracking. If I then roll the ball off the high contrast surface onto the grass, the object detector fails.

    • Greg August 17, 2018 at 12:32 pm #

      I mean the object tracker fails when the ball leaves the high contrast surface, not the detector.

      • Adrian Rosebrock August 17, 2018 at 12:43 pm #

        That’s not too terribly surprising, given how different the high-contrast surface is (comparatively) and how fast the ball is moving (the faster it moves, the harder it will be for the trackers to work properly). The solution is to use a hybrid approach that balances tracking with detection — you can find the method in this post.

  36. sathish August 21, 2018 at 3:52 am #

    Hello Adrian,

    Actually, I am on Python 3 and when I execute the code below it shows me this error.

    File “”, line 8
    (major, minor) = cv2.__version__.split(“.”)[:2]
    ^
    SyntaxError: invalid syntax.

    I am new to python and when i googled i found that “the tuple is deprecated and we need to split” something… bla bla..

    Can you help me on this error?

    • Adrian Rosebrock August 22, 2018 at 9:39 am #

      Make sure you use the “Downloads” section of this blog post to download the source code rather than trying to copy and paste. My guess is that you introduced an error in the code when copying and pasting or trying to “code along” with the post.

  37. sathish August 22, 2018 at 8:00 am #

    Hello Adrian,

    I am getting an error when I try to run the Python file with the --video argument. Can you please help me with this? I am on Windows.

  38. Walid August 27, 2018 at 3:08 pm #

    Thanks a lot Adrian
    A couple of question,

    1-How can I make the tracker return false when the object leaves the scene?

    2-how can I deal with an object that changes its dimensions in frame?

    Best Regards

    • Adrian Rosebrock August 30, 2018 at 9:13 am #

      1. The tracker itself will return “False” via the success boolean. You can also track the bounding box and once it hits the boundaries of the frame you could mark it as “leaving the scene” as well.

      2. You would want to use an object tracking method like CSRT which handles changes in scale well.

  39. Eyup Ucmaz September 6, 2018 at 1:18 pm #

    Hi Adrian,
    Is it possible that I follow the object I already defined? So when the camera is turned on, can I find the object I defined? If possible, how can it be done?

    • Adrian Rosebrock September 11, 2018 at 8:38 am #

      Hey Eyup — object tracking and object detection are two different types of algorithms. What type of object are you trying to detect?

  40. Yogesh September 10, 2018 at 3:22 am #

    Hi Adrian, i am using Open cv 3.4.2 but still getting below error-

    tracker = cv2.TrackerBoosting_create
    AttributeError: module ‘cv2.cv2’ has no attribute ‘TrackerBoosting_create’

    • Adrian Rosebrock September 11, 2018 at 8:16 am #

      Your question has been answered in this reply.

  41. Andres Gomez September 11, 2018 at 11:00 am #

    Hi Adrian, I followed your tutorial to install OpenCV on the Raspberry Pi and your people counter as well, but I made some changes to the code: I used background subtraction to detect movement in the video and findContours to enclose it in a rectangle. However, I use the CentroidTracker which you made in the previous tutorials.

    Though I made those changes, it is still very slow on the Raspberry Pi 3B, so I have to implement MOSSE to increase the speed. I have seen many implementations of the MOSSE algorithm, but they always use an ROI to track the object. My question is: what is the ROI output? Is it a coordinate?
    Because my plan is:
    Because my plan is:

    # select the bounding box of the object we want to track (make
    # sure you press ENTER or SPACE after selecting the ROI)
    #initBB = cv2.selectROI(“Frame”, frame, fromCenter=False,
    # showCrosshair=True)
    initBB =(x,y,x+w,x+h)

    # start OpenCV object tracker using the supplied bounding box
    # coordinates, then start the FPS throughput estimator as well
    tracker.init(frame, initBB)

    But it doesn’t work very well.

    PS: initBB needs to be an array to hold the coordinates of each contour. I will post my question again, but I don’t know if my question has been deleted or it just doesn’t appear on the page.

  42. Andres Gomez September 11, 2018 at 11:03 am #

    I realize that I would have a problem if I use the MOSSE algorithm with findContours, because in each frame findContours will find the same contour until it disappears, and when I pass it to the MOSSE algorithm it will count the same person many times. So I’m considering writing the people counter in C++ to be faster than Python.

  43. Fabio September 19, 2018 at 4:37 pm #

    Hi. I’ve been getting an error from “from imutils.video import VideoStream” (line 2): No module named imutils.video. But I’ve installed imutils as you did before. Can you tell me what it means? Thx btw. Keep up the work!

    • Adrian Rosebrock October 8, 2018 at 1:22 pm #

      It sounds like you have not installed “imutils” correctly. Please make sure you double-check.

  44. LZQthePlane September 28, 2018 at 6:16 am #

    Hi Adrian, I am learning CV from your blogs; they are brilliant.
    However, an error came up when I ran this script. It happens at line 120: tracker.init(frame, initBB). The error is “AttributeError: ‘builtin_function_or_method’ object has no attribute ‘init'”. I am using OpenCV 3.4, could you please help me out?
    By the way, HAPPY WEDDING !

    • Adrian Rosebrock October 8, 2018 at 12:23 pm #

      Hm, I’m not sure what may be causing that error. Are you using the code from the “Downloads” section of this blog post? Or did you copy and paste? Make sure you use the “Downloads” section to avoid any accidental errors.

  45. Rob October 4, 2018 at 6:28 pm #

    Adrian,

    Yet another awesome blog.

    Question, if the object leaves the view panel, how does one get rid of the remaining bounding box that is left on the edge?

    And let’s say, that we decide that after awhile we want to track a different object instead? The current setup allows me to open another panel and make a new selection, but it doesn’t seem to track the new selection, nor does it destroy the previous object being tracked.

    • Adrian Rosebrock October 8, 2018 at 9:56 am #

      Hey Rob, that’s partially a limitation of each object tracker algorithm. Some object tracking algorithms do a poor job of detecting if an object has left the scene of view. Instead, you should try to monitor the (x, y)-coordinates of the object and if they “hang around” the border of the frame for too long you can mark them as “disappeared”.

  46. Dan Richter October 9, 2018 at 2:55 pm #

    Hi Adrian,

    I would like to program Stable Multi-Target Tracking in Real-Time see

    https://www.youtube.com/watch?v=InqV34BcheM

    Any advice would be appreciated very much.

  47. lico October 12, 2018 at 3:56 am #

    Hi,Adrian:

    I used the script and tried it; everything is OK, but if I re-select the ROI, the tracker.update() function always returns False, so it cannot modify the ROI and re-track it.

    I also find that even after I modify the ROI, sometimes it still shows the original ROI.

    So I want to know how I can re-select the ROI and make it take effect immediately.

    thanks.
