Ball Tracking with OpenCV


Today marks the 100th blog post on PyImageSearch.

100 posts. It’s hard to believe it, but it’s true.

When I started PyImageSearch back in January of 2014, I had no idea what the blog would turn into. I didn’t know how it would evolve and mature. And I most certainly did not know how popular it would become. After 100 blog posts, I think the answer is obvious now, although I struggled to put it into words (ironic, since I’m a writer) until I saw this tweet from @si2w:

Big thanks for @PyImageSearch, his blog is by far the best source for projects related to OpenCV.

I couldn’t agree more. And I hope the rest of the PyImageSearch readers do as well.

It’s been an incredible ride and I really have you, the PyImageSearch readers, to thank. Without you, this blog really wouldn’t have been possible.

That said, to make the 100th blog post special, I thought I would do something fun — ball tracking with OpenCV:

The goal here is fairly self-explanatory:

  • Step #1: Detect the presence of a colored ball using computer vision techniques.
  • Step #2: Track the ball as it moves around in the video frames, drawing its previous positions as it moves.

The end product should look similar to the GIF and video above.

After reading this blog post, you’ll have a good idea on how to track balls (and other objects) in video streams using Python and OpenCV.

Looking for the source code to this post?
Jump right to the downloads section.

Ball tracking with OpenCV

Let’s get this example started. Open up a new file, name it , and we’ll get coding:

Lines 2-8 handle importing our necessary packages. We’ll be using deque, a list-like data structure with super-fast appends and pops, to maintain a list of the past N (x, y)-locations of the ball in our video stream. Maintaining such a queue allows us to draw the “contrail” of the ball as it’s being tracked.
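To make the deque behavior concrete, here is a small standalone sketch: once the queue reaches its maximum length, appending a new point silently evicts the oldest one.

```python
from collections import deque

# A deque with maxlen=N keeps only the N most recent entries, which is
# exactly what we want for the last N (x, y)-locations of the ball.
pts = deque(maxlen=3)
for point in [(10, 10), (20, 15), (30, 22), (40, 30)]:
    pts.appendleft(point)

# The oldest point, (10, 10), was evicted automatically.
print(list(pts))  # → [(40, 30), (30, 22), (20, 15)]
```

The script uses the same appendleft pattern, so index 0 is always the most recent location of the ball.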

We’ll also be using imutils, my collection of OpenCV convenience functions to make a few basic tasks (like resizing) much easier. If you don’t already have imutils installed on your system, you can grab the source from GitHub or just use pip to install it:

From there, Lines 11-16 handle parsing our command line arguments. The first switch, --video, is the (optional) path to our example video file. If this switch is supplied, then OpenCV will grab a pointer to the video file and read frames from it. Otherwise, if this switch is not supplied, then OpenCV will try to access our webcam.

If this is your first time running this script, I suggest using the --video switch to start: this will demonstrate the functionality of the Python script to you, then you can modify the script, video file, and webcam access to your liking.

A second optional argument, --buffer, is the maximum size of our deque, which maintains a list of the previous (x, y)-coordinates of the ball we are tracking. This deque allows us to draw the “contrail” of the ball, detailing its past locations. A smaller queue will lead to a shorter tail whereas a larger queue will create a longer tail (since more points are being tracked):

Figure 1: An example of a short contrail (buffer=32) on the left, and a longer contrail (buffer=128) on the right. Notice that as the size of the buffer increases, so does the length of the contrail.

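The argument parsing on Lines 11-16 likely looked something like this sketch (reconstructed from the description above, not the verbatim listing):

```python
import argparse

# Two optional switches, as described above: --video and --buffer
# (--buffer defaults to 64, the deque size used later).
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64, help="max buffer size")

# The real script calls ap.parse_args() with no arguments; an empty list is
# passed here only so the snippet runs outside a terminal session.
args = vars(ap.parse_args([]))
print(args)  # → {'video': None, 'buffer': 64}
```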

Now that our command line arguments are parsed, let’s look at some more code:

Lines 21 and 22 define the lower and upper boundaries of the color green in the HSV color space (which I determined using the range-detector script in the imutils library). These color boundaries will allow us to detect the green ball in our video file. Line 23 then initializes our deque of pts using the supplied maximum buffer size (which defaults to 64).
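In code, those initializations amount to something like the following (the HSV values are the ones quoted above; the rest is a sketch):

```python
from collections import deque

# Lower and upper HSV boundaries for "green", found with the
# range-detector script; tune them for your own object and lighting.
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)

# Holds up to --buffer (default 64) past centroids of the ball.
pts = deque(maxlen=64)
```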

From there, we need to grab access to our vs pointer. If a --video switch was not supplied, then we grab a reference to our webcam (Lines 27 and 28) — we use the VideoStream threaded class for efficiency. Otherwise, if a video file path was supplied, then we open it for reading and grab a reference pointer on Lines 31 and 32 (using the built-in cv2.VideoCapture).

Line 38 starts a loop that will continue until (1) we press the q  key, indicating that we want to terminate the script or (2) our video file reaches its end and runs out of frames.

Line 40 makes a call to the read method of our vs pointer, which returns a 2-tuple. The first entry in the tuple, grabbed, is a boolean indicating whether the frame was successfully read or not. The frame is the video frame itself. Line 43 handles the difference between the VideoStream and VideoCapture implementations.

If we are reading from a video file and the frame is not successfully read, then we know we are at the end of the video and can break from the while loop (Lines 47 and 48).

Lines 52-54 preprocess our frame a bit. First, we resize the frame to have a width of 600px. Downsizing the frame allows us to process the frame faster, leading to an increase in FPS (since we have less image data to process). We’ll then blur the frame to reduce high frequency noise and allow us to focus on the structural objects inside the frame, such as the ball. Finally, we’ll convert the frame to the HSV color space.

Line 59 handles the actual localization of the green ball in the frame by making a call to cv2.inRange. We first supply the lower HSV color boundaries for the color green, followed by the upper HSV boundaries. The output of cv2.inRange is a binary mask, like this one:

Figure 2: Generating a mask for the green ball using the cv2.inRange function.


As we can see, we have successfully detected the green ball in the image. A series of erosions and dilations (Lines 60 and 61) remove any small blobs that may be left on the mask.
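For intuition, the thresholding that cv2.inRange performs can be reproduced in plain NumPy (a stand-in for the OpenCV call, not what the script actually runs): a pixel becomes 255 only if every HSV channel falls inside the boundaries.

```python
import numpy as np

def in_range(hsv, lower, upper):
    # Pixel-wise: 255 where all three channels lie in [lower, upper], else 0.
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    inside = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (inside * 255).astype(np.uint8)

# A 1x2 "image": one green pixel, one non-green pixel.
hsv = np.array([[[45, 120, 200], [10, 40, 5]]], dtype=np.uint8)
mask = in_range(hsv, (29, 86, 6), (64, 255, 255))
print(mask.tolist())  # → [[255, 0]]
```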

Alright, time to compute the contour (i.e., outline) of the green ball and draw it on our frame:

We start by computing the contours of the object(s) in the image on Lines 65 and 66. The subsequent line makes the return value of this function call compatible with all versions of OpenCV. You can read more about why this change to cv2.findContours is necessary in this blog post. We’ll also initialize the center (x, y)-coordinates of the ball to None on Line 68.

Line 71 makes a check to ensure at least one contour was found in the mask. Provided that at least one contour was found, we find the largest contour in the cnts list on Line 75, compute the minimum enclosing circle of the blob, and then compute the center (x, y)-coordinates (i.e., the “centroid”) on Lines 77 and 78.
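The centroid on Lines 77 and 78 comes from image moments (m10/m00 and m01/m00 in the actual script). In NumPy terms that is simply the mean pixel coordinate of the blob, as this small stand-in shows:

```python
import numpy as np

def centroid(mask):
    # Mean x and mean y of the non-zero pixels: equivalent to the
    # moments-based centroid (m10/m00, m01/m00) for a binary mask.
    ys, xs = np.nonzero(mask)
    return (int(xs.mean()), int(ys.mean()))

# A 3x3 blob whose center pixel sits at (x=2, y=1).
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0:3, 1:4] = 255
print(centroid(mask))  # → (2, 1)
```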

Line 81 makes a quick check to ensure that the radius of the minimum enclosing circle is sufficiently large. Provided that the radius passes the test, we then draw two circles: one surrounding the ball itself and another to indicate the centroid of the ball.

Finally, Line 89 appends the centroid to the pts list.

The last step is to draw the contrail of the ball, or simply the past N (x, y)-coordinates the ball has been detected at. This is also a straightforward process:

We start looping over each of the pts on Line 92. If either the current point or the previous point is None (indicating that the ball was not successfully detected in that given frame), then we ignore the current index and continue looping over the pts (Lines 95 and 96).

Provided that both points are valid, we compute the thickness of the contrail and then draw it on the frame (Lines 100 and 101).
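Putting the two ideas together, the drawing loop behaves roughly like this sketch; the thickness formula is my best recollection of the original script, so treat it as an approximation:

```python
import numpy as np

pts = [(40, 30), None, (20, 15), (10, 10)]  # None = ball not detected that frame
buffer_size = 64
segments = []

for i in range(1, len(pts)):
    # Skip any pair of points that includes a missed detection.
    if pts[i - 1] is None or pts[i] is None:
        continue
    # Newer points (small i) get thicker line segments, tapering with age.
    thickness = int(np.sqrt(buffer_size / float(i + 1)) * 2.5)
    # In the real script: cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
    segments.append((pts[i - 1], pts[i], thickness))

print(segments)  # → [((20, 15), (10, 10), 10)]
```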

The remainder of our script simply performs some basic housekeeping by displaying the frame to our screen, detecting any key presses, and then releasing the vs pointer.

Ball tracking in action

Now that our script has been coded up, let’s give it a try. Open up a terminal and execute the following command:

This command will kick off our script using the supplied ball_tracking_example.mp4 demo video. Below you can find a few animated GIFs of the successful ball detection and tracking using OpenCV:

Figure 3: An example of successfully performing ball tracking with OpenCV.


For the full demo, please see the video below:

Finally, if you want to execute the script using your webcam rather than the supplied video file, simply omit the --video  switch:

However, to see any results, you will need a green object with the same HSV color range as the one I used in this demo.


Summary

In this blog post we learned how to perform ball tracking with OpenCV. The Python script we developed was able to (1) detect the presence of the colored ball, followed by (2) track and draw the position of the ball as it moved around the screen.

As the results showed, our system was quite robust and able to track the ball even if it was partially occluded from view by my hand.

Our script was also able to operate at an extremely high frame rate (> 32 FPS), indicating that color-based tracking methods are very much suitable for real-time detection and tracking.

If you enjoyed this blog post, please consider subscribing to the PyImageSearch Newsletter by entering your email address in the form below — this blog (and the 99 posts preceding it) wouldn’t be possible without readers like yourself.


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


605 Responses to Ball Tracking with OpenCV

  1. Andrew September 14, 2015 at 11:17 am #

    Hello Adrian!
    As always, a very nice tutorial, very well explained 🙂
    How would you handle the situation where we have, let’s say 10 green balls in the video?

    Best regards!

    • Adrian Rosebrock September 15, 2015 at 6:06 am #

Great question Andrew, thanks for asking. If you had more than 1 ball in the image, you would simply loop over each of the contours individually, make sure they are of sufficient size, and draw their enclosing circles individually. And if you wanted to track multiple balls of different colors, you would need to define a list of lower and upper boundaries, loop over them, and then create a mask for each set of boundaries.

      • Anderson October 7, 2017 at 6:59 am #

        Hi Adrian, can you explain in detail how to loop over each of the contours individually so that I can handle more than 1 ball? Thank you!

        • Adrian Rosebrock October 9, 2017 at 12:39 pm #

          Remove the call to max on Line 66. Then just loop over the detected contours:

          The biggest problem is that you need to maintain a dequeue for each ball which involves object tracking. The simplest way to accomplish this is via centroid tracking. I will try to do a blog post on this technique in the future.

          • ANKIT SAINI December 4, 2017 at 7:29 pm #

            Hey Adrian! I need help a bit more specifically in tracking motion for multiple objects.

          • asyraf October 11, 2018 at 12:17 pm #

            hye adrian im try to do this method but not working.. can u help me please..

      • Suraj October 29, 2017 at 3:41 am #

        How to run this code on pycharm .

        • Adrian Rosebrock October 30, 2017 at 3:04 pm #

          Hi Suraj — the easiest way is from the command line. You should read this blog post on setting up an environment with PyCharm.

  2. David September 14, 2015 at 12:38 pm #

    Great post Adrian. This would be useful for tracking tennis balls! And the time to process a frame is fast! I wonder if Hawk-eye uses OpenCV

    • Adrian Rosebrock September 15, 2015 at 6:12 am #

Tracking fast moving objects in sports such as tennis balls and hockey pucks is a deceptively challenging problem. The issue arises from the objects moving so fast that the standard computer vision algorithms can’t really process them — all they see is a bunch of motion blur. I’m not sure about tennis, but I know in the case of hockey they ended up putting a chip in the puck that interfaces with other algorithms, allowing it to be more easily tracked (and thus watched on TV).

  3. Tyrone September 14, 2015 at 4:40 pm #

    If you don’t have his book I suggest you get it.
    Keep up the good work Adrian.

    • Adrian Rosebrock September 15, 2015 at 6:01 am #

      Thanks Tyrone! 🙂

  4. Nathanael Anderson September 15, 2015 at 10:55 am #

    Any chance you could put a tutorial together to track a green laser pointer dot over multiple surfaces, including moving video? I’ve been enjoying reading all the info you post. thanks for all the work you put into it. I started working with opencv because of your work.

    • Adrian Rosebrock September 15, 2015 at 4:28 pm #

      Hey Nathanael — welcome to the world of computer vision, I’m happy that I could be an inspiration. I hope you’re enjoying the blog!

Unfortunately, I personally don’t own any laser pointers. There might be a red laser pointer buried somewhere in the boxes from the last time I moved, but I’m not sure. If I can get my hands on a laser pointer I’ll try to do a tutorial on tracking it.

  5. Neeraj September 15, 2015 at 9:53 pm #

    Hi Adrian- Thanks for such detailed explanation on open cv concepts, your site is the best site for learning open cv concepts, Now these days I eagerly wait for your email for what new you have published.Thanks always!.I am right now not able to capture video from my webcamera, I am using virtual box and installed ubuntu on VB, My host operating is OSX. my frame returned are NONE and grabbed is always false. I tried changing the camera = cv2.VideoCapture(0) argument from 0 to 1,-1 . Do I need to do anything special if I need to access webcamera from virtual box.Under VB->Device->USB-> apple HD face time camera is selected.

    • Adrian Rosebrock September 16, 2015 at 7:26 am #

      Unfortunately, using VirtualBox you will not be able to access the raw webcam stream from your OSX machine. This is considered to be a security concern. Imagine if a VM could access your webcam anytime it wanted! So, because of this, accessing the raw webcam is disabled. In this case, you have two options:

1. Try VMWare which does allow for the webcam to be accessed from the VM. I personally have not tried this out, but I have heard this from others.
      2. Install OpenCV on your native OSX machine.

      I hope that helps!

      • gabrigam October 11, 2019 at 7:00 am #

        Hi Adrian and all fiends

        I’am using virtualbox 6.2 with Ubuntu 19.04 and for fix cam problem try this:

        1) install Virtualbox extension pack

        2) enable usb 2/3 from virtualbox property machine and add cam device

        3) start vm and enable cam

        I hope this Helps



  6. Adam Gibson September 16, 2015 at 12:05 am #

    Just FYI, adding the following code allows operation on Python 3 & OpenCV 3 (at the top, near line 7):

    • Adrian Rosebrock September 16, 2015 at 7:24 am #

Thanks for the comment Adam! The code will work with OpenCV 3 without a problem, but the change you suggested is required for Python 3. I’ll update this post to use NumPy’s arange instead to make the code compatible with both Python versions.

  7. Luis September 16, 2015 at 6:11 pm #

    Hi Adrian,

    Thanks for another great tutorial on OpenCV.
    I am working with a freshly compiled Python3 + OpenCV3 on a Raspberry Pi 2, installed from your tutorial on the subject and running this code I am getting the following error:

    I even added the lines suggested by Adam Gibson for compatibility with Python3 and OpenCV3 but the error persists.
    Do you have any idea of what I am missing?

    • Adrian Rosebrock September 17, 2015 at 8:15 am #

      Any time you see an error related to an image being None and not having a shape attribute, it’s because the image/frame was either (1) not loaded from disk, indicating that the path to the image is incorrect, or (2) a frame is not being read properly from your video stream. Based on your provided command, it looks like you are trying to access the webcam of your system. Try using the supplied video (in the code download of this post) and see if it works. If it does, then the issue is with your webcam.

      • Nick B April 6, 2016 at 1:07 pm #

        Hi Adrian,

        I have the same attribute error, however I tested my webcam with your video test script, so it should be working?

        Thank you

        • Adrian Rosebrock April 7, 2016 at 12:44 pm #

          Which webcam video test script did you use?

          • Namal September 21, 2016 at 1:08 pm #

            Hi Adrian,

            I also have this problem.with video track, its working properly.but not with web cam.error as above mentions.but camera is working good.what should i do?

          • Adrian Rosebrock September 21, 2016 at 2:08 pm #

            I’m sorry to hear that your OpenCV setup is having issues accessing your webcam. Which webcam are you using?

          • Santo April 26, 2017 at 5:25 pm #

            Hello Adrian,

            I want to modify this code to detect a digit using picamera.

            Could you suggest a way to do it?


          • Adrian Rosebrock April 28, 2017 at 9:45 am #

            That really depends on the types of digits you’re trying to detect as well as the environment they are in. Typical methods for object detection in general include Histogram of Oriented Gradients. I cover object detection in detail inside the PyImageSearch Gurus course, but without seeing an example image of what you’re trying to detect, I can’t point you in the right direction.

      • avtar September 21, 2017 at 3:48 am #

        i m getting the same error!! i need help! i also installed ur open cv on youtube.. i am using pi camera. i am getting a video feed of the video i supplied but i am not getting live stream.

  8. Luis September 17, 2015 at 7:06 am #


    Just figured it out.
    To use this code on a Raspberry Pi with Python3 OpenCV3 and the RaspiCAM I needed to load the v4l2 driver:

    sudo modprobe bcm2835-v4l2

    To load the driver every time the RPi boots up, just add the following line to /etc/modules


    Thanks for the tutorial

    • Adrian Rosebrock September 17, 2015 at 8:11 am #

      Thanks for sharing Luis! Another alternative is just to modify the frame reading loop to use the picamera module as detailed in this post.

      • John November 27, 2015 at 1:16 am #

        Hello Adrian,

        I got the same as error as Luis. Would you explain detail how to modify frame reading loop?

        • Adrian Rosebrock November 27, 2015 at 7:35 am #

          Hey John — take a look at Luis’ other comment on this post, he mentioned how he resolved the error.

  9. ancientkittens September 17, 2015 at 5:48 pm #

    I love this article – nice work. I even noted that it got picked up in python weekly!!

    • Adrian Rosebrock September 18, 2015 at 7:33 am #

      Thanks! 😀 I’m glad you enjoyed it!

  10. Yuke September 17, 2015 at 9:21 pm #

    Hi Adrian,

    Thanks for sharing this project.

    I have a question regarding to recover the ball when it appears in the scene again, do you using detector to do it? Or only use HSV color boundaries?

    Another is that, I find out your application is robust for illumination changes, do you using other feature for tracking? Because I think only HSV could not handle it…

    • Adrian Rosebrock September 18, 2015 at 7:33 am #

When the ball drops out of the frame, the HSV boundaries are simply used to pick it back up when it re-enters the scene. To answer your second question, since this is a basic demonstration of how to perform object detection, I’m only using color-based methods. In future posts I’ll show more robust approaches using features.

  11. Vlad September 18, 2015 at 4:28 pm #


  12. Luis Jose September 25, 2015 at 2:42 am #

    Hi Adrian!
    Amazing work, as always! I wonder, how difficult do you think is to extend this code and follow the position of more balls of different colors?

    Thanks for sharing all this knowledge with the world!


    • Adrian Rosebrock September 25, 2015 at 6:38 am #

      Not too challenging at all. Just define a list of lower and upper color boundaries you want to track, loop over them for each frame, and generate a mask for each color. I actually detail exactly how to do this in the PyImageSearch Gurus course.

      • HienTran September 30, 2019 at 3:51 am #

        thanks for amazing project. I have a question that if I want to caculate or estimate the speed of the ball so what I have to do. this is the first time I learn about the tracking

        • Adrian Rosebrock October 3, 2019 at 12:33 pm #

          Take a look at Raspberry Pi for Computer Vision where we do speed calculation of vehicles. The same method can be applied to ball tracking as well.

  13. Ali October 1, 2015 at 2:39 pm #

    Hi Adrian,

    Beautiful tutorial. I am motivated to try this sort of tracking on a squash ball. Do you think it might work? Some of the challenges that come to mind:

    1) ball is black
    2) ball absolute diameter is small, and the perceived ball size becomes even smaller as the distance between the camera sensor and the ball increases
    3) very high speed of ball

    • Adrian Rosebrock October 2, 2015 at 7:10 am #

      Hey Ali — great questions, thanks for asking. If the ball is black, that could cause some issues when using color based detection, but that’s actually not too much of an issue provided that there is enough contrast between the black color and the rest of the image scene. What’s actually really concerning is the very high speed of the ball. Motion blur can really, really hurt the performance of computer vision algorithms. I think you would need a very high FPS camera, incorporate color tracking (if at all possible), and might want to use a bit of machine learning to build a custom squash ball tracker.

  14. Willem Jongman October 3, 2015 at 8:50 pm #

    Hi Adrian,

    When there is initially no contour in the mask, and then the green object is moved into view, it will generate a “deque index out of range” exception on line 94.

    I Modified line 94 to:

    if counter >= 10 and i == 1 and len(pts) >= 10 and pts[-10] is not None:

    That seems to have solved it.

    Thank you very much for sharing your image-processing knowledge, I learned some neat tricks from it and I hope you will be keeping up this good work.


    • Adrian Rosebrock October 4, 2015 at 7:01 am #

      Thanks for sharing Willem! 🙂

  15. Pedro October 14, 2015 at 10:45 am #

    Hi Adrian,

    Awesome tutorial, as always.
    Best website to source OpenCv and computer vision 🙂

    • Adrian Rosebrock October 14, 2015 at 11:25 am #

      Thanks for the kind words Pedro! 😀

  16. Prasanna K Routray December 27, 2015 at 4:06 pm #

    I tried to run this but it’s giving me this error:

    • Adrian Rosebrock December 28, 2015 at 8:24 am #

      You need to install the imutils package:

      $ pip install imutils

      • deshario September 30, 2017 at 7:32 am #

        Can we check that which position is ball coming from ?
        For example :: if our ball is in right position and we move it into left.
        the output that i need is :: print(“Ball is coming from right to left”)
        How can i do it ?

        • Adrian Rosebrock October 2, 2017 at 9:58 am #

          Absolutely. Please see this post.

  17. Sharad Patel January 5, 2016 at 9:26 am #


    I am planning to do a deep dive into your tutorial for a project of mine. I am new to motion tracking and I have a question (it may be answered in the code – if so please can you point it out). Is it possible to set regions on the image such that when the ball enters it, the code can do something (e.g. output a message)? Thanks.

    • Adrian Rosebrock January 5, 2016 at 1:55 pm #

      All you really need are some if statements and the bounding box of the contour. For example, if I wanted to see if the ball entered the top-left corner of my frame, I would do something like:

      The code in the if statement will only fire if the center of the ball is within the upper-left corner of the frame (within 50 pixels). You can of course modify the code to suit your needs.
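The snippet Adrian refers to is not reproduced above, but a hypothetical reconstruction of the check he describes might look like:

```python
def entered_top_left(center, limit=50):
    # Fires only when the ball's centroid is within `limit` pixels of the
    # frame's top-left corner. Hypothetical helper, not from the original code.
    (x, y) = center
    return x <= limit and y <= limit

print(entered_top_left((30, 20)))   # → True
print(entered_top_left((120, 20)))  # → False
```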

      • Sharad Patel January 7, 2016 at 6:02 pm #

        Great! Thank you. When I came across this post I wasn’t aware of your Quickstart package. Just downloaded it and I am working my way through the tutorials – enjoying it all so far!

        • Adrian Rosebrock January 8, 2016 at 6:29 am #

          Thanks for picking up a copy of the Quickstart Bundle Sharad, enjoy! 🙂

          • Sharad Patel January 8, 2016 at 7:03 am #

            Sorry – one more question. I have a video similar to yours but I have a red ball. Do you have any tips / tools that you can recommend in establishing the colour bounds for my object (I have tried guesstimating with a web based color-picker). Thanks.

          • Adrian Rosebrock January 8, 2016 at 9:18 am #

            Take a look at the range-detector script I link to in the body of the blog post. You can use this to help determine the appropriate color threshold values.

      • Amirul Izwan February 25, 2016 at 12:23 pm #

        Hello Adrian,
        good job on your tutorials, significant big help to my school project. I tried experimenting with the ‘if’ statement as you suggest here, the problem is it only work once for the first time, the second (and later) time I run the code it throws me error: name ‘x’ is not defined. Is there any way to fix this? Thanks!

        • Adrian Rosebrock February 25, 2016 at 4:39 pm #

          If you’re getting an error that the variable x is undefined, then you’ll want to double check your code and ensure that x is being properly calculated during each iteration of the while loop. It sounds like a logic error in the code that has been introduced after modifying it.

  18. Jessie January 12, 2016 at 6:08 pm #

    Thanks for sharing!

    I’m wondering what is the longest distance between the ball and the camera can be to guarantee the accuracy?

    • Adrian Rosebrock January 13, 2016 at 6:39 am #

As long as the ball is in the field of view of the camera and the radius doesn’t fall below the minimum radius of 10 pixels (which is a tunable parameter), this will work. You might also be interested in measuring the distance from the camera to an object.

  19. Hilman January 17, 2016 at 4:11 am #

    Hey Adrian, I have a question.
    I can’t help it but to notice that you didn’t change the lowerGreen and upperGreen boundary in the line

    mask = cv2.inRange(hsv, greenLower, greenUpper)

    into NumPy array when in your OpenCV and Python Color Detection post, you said that the OpenCV will expect the colour limit will be in form of NumPy’s array. Why is that?

    • Adrian Rosebrock January 17, 2016 at 5:22 pm #

      That’s a good point! I thought it did need to be a NumPy array, but it seems a tuple of integers will work as well. Thanks for pointing this out Hilman.

  20. mathivanan January 17, 2016 at 8:22 am #

    some one help me how can i print the coordinates of the ball on the terminal …..

    • Adrian Rosebrock January 17, 2016 at 5:20 pm #

      After Line 72, simply do: print((x, y))

  21. Bart January 21, 2016 at 12:21 am #

    This is a nice tutorial, well explained, I was wondering how to add a pan/tilt servo to the project so that an external camera (USB) can move like the contrails

    • Adrian Rosebrock January 21, 2016 at 8:51 am #

      I honestly haven’t worked with a pan/tilt servo before, although that is something I will try to cover in a future blog post — be sure to keep an eye out!

  22. Guru January 31, 2016 at 1:58 pm #

    Extremely Great Post Man. I would like to request you to demonstrate shape based tracking instead of color based tracking in this context. It would help me greatly to be frank.

    • Adrian Rosebrock February 2, 2016 at 10:36 am #

      Have you tried looking into HOG + Linear SVM (commonly called object detectors)? It’s a great way to perform shape based detection followed by tracking.

  23. david February 2, 2016 at 8:47 pm #

    Hi, great post. Just curious about lines 44-46.

    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    Should: hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    Be: hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    Otherwise, the “blurred” frame isn’t used that I can see.

    • Adrian Rosebrock March 13, 2016 at 10:31 am #

      Hey David, thanks for pointing this out. I didn’t mean for the blurring to be included in the code, I have commented it out. Sorry for any confusion!

  24. Tain February 3, 2016 at 1:51 pm #

    Evening Adrian

    i am absolutely new to Python and openCV, however have some programming experience.

    But i am struggling to get the demo running.

    My guess is that for some reason frames arent being grabbed from camera or video and so I end with an error, “noneType” doesnt have an attribute called Shape, when the code calls for resize of an object.

    Any thoughts on where I am going wrong?

    Windows 10, Python 2.7

    Thanks for your help

    • Adrian Rosebrock February 4, 2016 at 9:15 am #

Any time you see an image or frame being “NoneType” it’s almost 100% due to the fact that (1) the image is not correctly read from disk or (2) the frame could not be read from the video stream. I would double check that you can properly access your webcam via OpenCV since that is likely where the issue lies.

  25. Bob February 4, 2016 at 4:03 pm #

    Hi Great Tuto, got it to work well, the thing is that I am hopeless with color spaces and don’t understand anything else than RGB. I’ve tried using rgb2hsv() conversion function to try and track a red ball or a blue ball, but … didn’t get it to work. I searched python documentation but the functions they propose (like colorsys.rgb_to_hsv()) don’t give results in the same ranges.I also tried different wikipedia functions and online functions but I don’t seem to get it to work with anything else than the green.
    Any help welcome.

    • Adrian Rosebrock February 5, 2016 at 9:23 am #

      Take a look at the range-detector script that I link to in this post. You can use this script to help you determine appropriate color threshold values.

  26. Hilman February 5, 2016 at 7:01 am #

    Hey Adrian. I got one question.

    On line 96, the code ‘key = cv2.waitKey(1) & 0xFF’, why the ampersand sign and the ‘0xFF’ is needed? I’ve googled it, the best explanation I have found is something about if the computer is 64-bit (if I remember correctly).

    • Adrian Rosebrock February 5, 2016 at 9:17 am #

This takes the bitwise AND between the return value of cv2.waitKey and 0xFF, which gives the least significant byte. That byte can then be compared against ord("q") to get the actual value of the key press.
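A quick simulation of why the mask matters (the raw value below is made up for illustration; on some platforms cv2.waitKey sets flag bits above the low byte):

```python
# Hypothetical waitKey return value with high flag bits set.
raw = 0x100000 | ord("q")

# Masking with 0xFF keeps only the least significant byte, so the
# comparison against ord("q") succeeds regardless of the high bits.
key = raw & 0xFF
print(key == ord("q"))  # → True
print(raw == ord("q"))  # → False without the mask
```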

  27. David Kadouch February 8, 2016 at 6:43 am #

    Hey Adrian
    Quick question: I want to track a different color than the green/yellow ball. What’s the formula you used to transform the RGB values to HSV values? (In your code sample the values are greenLower = (29, 86, 6) and greenUpper = (64, 255, 255).)
    I’m struggling with that and I can’t make it work. I want to track a blue object.


    • Adrian Rosebrock February 8, 2016 at 3:46 pm #

      To transform the RGB values to HSV values, it’s best to use the cv2.cvtColor function. You can find the formula for the conversion on this page. However, if you’re trying to detect a different color object, I suggest using the range-detector script I mention in this post.
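      For a single color triplet, a rough stand-in for the cv2.cvtColor conversion can be built from the standard library’s colorsys module, rescaled to OpenCV’s convention (H in [0, 180), S and V in [0, 255] for 8-bit images). The helper name is mine, not from the post:

```python
import colorsys

def rgb_to_opencv_hsv(r, g, b):
    # colorsys works on [0, 1] floats; OpenCV stores H in [0, 180),
    # S and V in [0, 255] for 8-bit images
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (round(h * 180), round(s * 255), round(v * 255))

# pure blue in RGB lands near H=120 in OpenCV's HSV scale
print(rgb_to_opencv_hsv(0, 0, 255))  # (120, 255, 255)
```

      In the real pipeline you would instead convert a 1×1 uint8 array with cv2.cvtColor and read off the result, which avoids any rounding differences.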

  28. ghanendra February 16, 2016 at 11:51 am #

    Hey Adrian! I am not able to do it on the Raspberry Pi 2. It’s showing a NoneType error, and for ball_tracking_example.mp4 the FPS is very low.
    Please help me out.

    • Adrian Rosebrock February 16, 2016 at 3:36 pm #

      Anytime you see a NoneType error it’s 99% of the time due to an image not being read properly from disk or a frame not being read from the video stream. The issue here is that you’re using cv2.VideoCapture when you should instead be using the picamera Python package to access the Raspberry Pi camera module. You can read more on how to access the Raspberry Pi camera module here.

      You could also swap out the cv2.VideoCapture for the VideoStream that works with both the Raspberry Pi camera module and USB webcams. Find out more here.

      • Ghanendra February 21, 2016 at 10:54 am #

        Thanks a lot Adrian.
        I was able to do live stream ball tracking with pi.
        I want to detect front head light of a vehicle during night time. Still I am just a beginner. Can you help me out on this?

        • Adrian Rosebrock February 22, 2016 at 4:25 pm #

          That’s definitely a bit more of a challenge. To start, you’ll want to find the brightest spots in an image. Then, you’ll need to filter these regions and apply a heuristic of some sort. A first try would be finding two spots in an image that lie approximately on the same horizontal line. You might also want to try training an object detector to detect the front of the car prior to processing the ROI for headlights.

          • Ghanendra March 10, 2016 at 10:20 pm #

            Hey Adrian!!
            Can you help me out with the code for detecting two spots horizontally in an image.??
            I need to determine multiple bright objects in a live video stream.
            Just like finding multiple balls with same colors.
            Thanks in advance.

          • Adrian Rosebrock March 13, 2016 at 10:28 am #

            The same code can be applied. Just define the color ranges for each object you want to detect, then create a mask for each of the color ranges. From there, you can find and track the objects. If you don’t want to use color ranges, then I suggest reading this post on finding bright spots in images.

          • ghanendra March 22, 2016 at 10:03 am #

            Hey Adrian, thanks for the help.
            I tried blue by creating a different mask and setting the color range; it was getting tracked simultaneously.
            I was able to track green and blue.
            1. How do I track two balls on the same horizontal line?
            2. In your tutorial we find the largest contour in the mask; instead of that, how do I find all the contours and track them separately?
            3. How do I track multiple objects of the same color, e.g. if I have 5-10 green balls, how do I track them?

          • Adrian Rosebrock March 22, 2016 at 4:15 pm #

            The most important aspect of tracking multiple colors is to use multiple masks, one for each color you want to track. You then apply the same steps to each mask: color thresholding, finding the largest contour(s), and then tracking them. But again, you need to create a mask for each color range that you want to track.
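            As an illustrative sketch, with NumPy standing in for cv2.inRange (which does the same per-channel range test on a real frame). Only the green bounds come from the post; the blue range is a guess:

```python
import numpy as np

# one (lower, upper) HSV range per color; the blue range is hypothetical
COLOR_RANGES = {
    "green": ((29, 86, 6), (64, 255, 255)),
    "blue": ((100, 86, 6), (130, 255, 255)),
}

def in_range(hsv, lower, upper):
    # NumPy equivalent of cv2.inRange: 255 where all three channels
    # fall inside [lower, upper], 0 elsewhere
    hit = np.logical_and(hsv >= np.array(lower), hsv <= np.array(upper)).all(axis=-1)
    return hit.astype(np.uint8) * 255

# toy 1x2 HSV "frame": one green-ish pixel, one blue-ish pixel
hsv = np.array([[[45, 200, 200], [110, 200, 200]]], dtype=np.uint8)
masks = {name: in_range(hsv, lo, hi) for name, (lo, hi) in COLOR_RANGES.items()}
print(masks["green"][0].tolist())  # [255, 0]
print(masks["blue"][0].tolist())   # [0, 255]
```

            Each mask then goes through the same erode/dilate and cv2.findContours steps as the single-color version.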

          • ghanendra March 22, 2016 at 11:37 pm #

            Haha… Adrian, you misunderstood me. I was asking about tracking the same color: tracking multiple objects of the “SAME COLOR”.

          • Adrian Rosebrock March 24, 2016 at 5:21 pm #

            Got it, I understand now. See my reply to “Maikal” in this comments section. I detail a procedure that can be used to handle objects that are the same color.

          • ghanendra March 26, 2016 at 11:04 am #

            Hey Adrian, really, thanks a lot for answering my questions. I just love these tutorials, and every day I will come with a new question; I hope you won’t mind answering them. Haha!!
            One more:

            I need to indicate the detected green ball using an LED, so how can I use RPi.GPIO with this code? I tried importing it, but it shows an error.
            How do I use the GPIO pins with this code?

          • Adrian Rosebrock March 27, 2016 at 9:09 am #

            I’ll be covering this soon on the PyImageSearch blog, keep an eye out 🙂

  29. amin February 18, 2016 at 8:38 am #

    Hi Adrian,
    thanks for your GREAT tutorials,
    I want to merge this ball tracking code with “unifying-picamera-and-cv2…” to get the best result in tracking the green ball.
    First I installed the latest Jessie update and installed OpenCV 3.1.0 with Python 3, the same as your post “how-to-install-opencv-3-on-raspbian-jessie”.
    For a simple imshow (no tracking, max width = 400) I can reach 39 FPS with the picamera and about 27 FPS with the webcam,
    but when I add the ball-tracking code the FPS decreases to 7.8 with the picamera and 7 with the webcam 😐
    Why does the webcam have a speed so close to the picamera once the tracking code is added?
    Is it possible to reach a better FPS (without changing the size)?
    I tried several ways of increasing the FPS, but they are not good enough,
    e.g. increasing the priority by changing the nice value of the Python process (“renice -n -20 PID of process”),
    but it was not so good, increasing the FPS by maybe just 0.1.
    thanks a lot

    • Adrian Rosebrock February 18, 2016 at 9:30 am #

      So keep in mind that the FPS is not measuring the physical FPS of your camera sensor. Instead, it’s measuring the total number of frames you can process in a single second. The more steps you add to your video processing pipeline, the slower it will run.

      Your results reflect this as well. When using just cv2.imshow you were able to process 39 frames per second. However, once you included smoothing, color thresholding, and contour detection, your processing rate dropped to 7 frames per second. Again, this makes sense — you are adding more steps to your processing pipeline, therefore you cannot process as many frames per second.

      Think of your video processing pipeline as a flight of stairs. The fewer functions you have to call inside your pipeline (i.e., the “while” loop), the faster you can go down the stairs. The more functions you have, the longer your staircase becomes, and therefore the longer it takes you to descend the stairs.

  30. kazem March 6, 2016 at 8:31 am #

    Hi Adrian, great tutorial. You mentioned you have used range-detector to determine the boundaries. Would you mind telling me how did you do that? I ran it and I can see I can use the sliders to make sure that My object stands out as black from the white background. But there is nowhere I can see any value?

    • Adrian Rosebrock March 6, 2016 at 9:13 am #

      Indeed, the sliders control the values. The easiest way to get the actual RGB or HSV color thresholds is to insert a print statement right after you press the q key to exit the script. I’ll be doing a more detailed tutorial on how to use the range-detector in the future.

      • Hojo October 17, 2016 at 4:15 am #

        I have just started learning python about a week ago and I am still trying to wrap my head around the language.

        So while this question sounds dumb, How do you run range-detector in python? Is it already in imutils?

        I am trying to detect and track multiple moving black balls in the same frame, print out the respective positions and calculate the distance traveled, velocity, etc.

        I have written code before to do this, but in MATLAB (I split an image into R-G-B, performed a background subtraction on each channel, inverted the resulting images, took the similar and binarized); however, when reading up on object tracking, I noticed that many use HSV instead of RGB. After reading more I can see why HSV is preferred over RGB, but because of this, I need to be able to define the color ranges. The range-detector script looked perfect to use, but… (back to my question above).

        • Adrian Rosebrock October 17, 2016 at 4:03 pm #

          There are many ways to execute the range-detector script but most are based on how your Python PATH is defined. Where do you have the imutils package installed on your system? The script itself is already in imutils. The easiest method would be to change directory to it and execute using your input image/video stream as the source.

  31. Selim M. March 8, 2016 at 3:53 pm #

    Hello Adrian,
    Thanks for the tutorials, I learned a lot from them. I have a problem with the camera, though. It does not capture the frames. I didn’t have a problem when taking photos, but it seems that the video is a bit problematic. I run the code and it doesn’t capture the frames. Do you have an idea about why this happens?

    Have a nice day!

    • Adrian Rosebrock March 8, 2016 at 4:10 pm #

      What type of camera are you using? Additionally, you might want to try another piece of software (such as OSX’s PhotoBooth or the like) to ensure that your camera can be accessed by your OS.

  32. giulio mignemi March 23, 2016 at 9:35 am #

    Hello, I need to set the color to identify a ball covered with aluminum foil. Could you help me?

    • Adrian Rosebrock March 24, 2016 at 5:19 pm #

      I would recommend against this. Trying to detect and recognize objects that are reflective is very challenging due to the fact that reflective materials (by definition) reflect light back into the camera. Thus, it becomes very hard to define a color range for reflective materials. Instead, if at all possible, change the color of the object you are tracking.

  33. maikal March 24, 2016 at 12:40 am #

    Can anyone tell me how to detect two green balls simultaneously?

    • Adrian Rosebrock March 24, 2016 at 5:13 pm #

      Change Line 66 to be a for loop and loop over the contours individually (rather than picking out the largest one). You can get rid of the max call and then process each of the contours individually. I would insert a bit of logic to help prune false-positive contours based on their size, but that should get you started!

      • maikal March 25, 2016 at 12:16 pm #

        Yeah, Adrian, thanks a lot. I changed it to a for loop; multiple contours are getting detected, but they are overlapping each other. I tried to change the radius size but still don’t get a proper result.
        Waiting for your logic.

        • Adrian Rosebrock March 27, 2016 at 9:15 am #

          If the contours are overlapping, then that will cause an issue with the tracking — this is also why you might want to consider using different color objects for tracking. In the case of overlapping objects, you should consider applying the watershed algorithm for segmentation.

        • Wallace Bartholomeu May 27, 2017 at 12:10 am #

          Can you please share this part of your code?
          I’m trying to do it, but unsuccessfully.

  34. Alan April 6, 2016 at 1:22 am #

    Hi Adrian,
    If we are tracking multiple balls, you said to loop over the contours in the earlier post. However, how do you identify the contours so that when you draw the lines, they belong to the correct ball?

    • Adrian Rosebrock April 6, 2016 at 9:08 am #

      There are many ways to accomplish this, some easy, some complicated. The quickest solution is to compute the centroid of each object in the frame. Then, find the objects in the next frame. Compute the centroids again. Take the Euclidean distance between the centroids. The pairs of objects that have the smallest distances are thus the “same” object. This would make for a great blog post in the future, so I’ll make sure I cover that.
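      A bare-bones sketch of that pairing step (plain Python; a real tracker, such as a full centroid-tracking algorithm, also has to handle objects entering and leaving the frame, which this skips):

```python
from math import dist

def match_centroids(prev, curr):
    # greedily pair each previous centroid with its nearest unclaimed
    # current centroid by Euclidean distance
    pairs, unclaimed = [], list(range(len(curr)))
    for i, p in enumerate(prev):
        if not unclaimed:
            break
        j = min(unclaimed, key=lambda k: dist(p, curr[k]))
        pairs.append((i, j))
        unclaimed.remove(j)
    return pairs

prev = [(10, 10), (100, 100)]
curr = [(98, 103), (12, 11)]  # both balls moved slightly; detection order swapped
print(match_centroids(prev, curr))  # [(0, 1), (1, 0)]
```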

      • yaswanth kumar May 5, 2016 at 7:23 am #

        Hey Adrian, don’t we have any Python library or algorithm to do this? If yes, can you please suggest some! Thanks

        • Adrian Rosebrock May 5, 2016 at 7:31 am #

          No, there isn’t a library that you can pull off the shelf for this. It’s not too hard to code though. I’ll try to do a blog post on it in the future, but my queue/idea list is quite large at the moment.

  35. Matt April 7, 2016 at 8:28 am #

    Hi Adrian,

    Thanks for your great tutorial. Does your script run on a BeagleBone Black board?

    Thanks in advance

    • Adrian Rosebrock April 7, 2016 at 12:33 pm #

      I don’t own a BeagleBone Black, so I honestly can’t say.

      • Matt April 8, 2016 at 2:57 am #

        Ok, thanks. But in your project, what kind of board did you use?

        • Adrian Rosebrock April 8, 2016 at 12:53 pm #

          I either use a laptop or desktop running Ubuntu or OSX, or I use a Raspberry Pi.

      • Matt April 13, 2016 at 9:36 am #

        Ok, but from where do you run your script? The Raspberry Pi or a laptop? Thanks.

        • Adrian Rosebrock April 13, 2016 at 6:52 pm #

          I run it on both. For this particular script, I executed it on my laptop. But it can also be run on the Raspberry Pi by modifying the code to access the Raspberry Pi camera.

          • Matt April 14, 2016 at 8:41 am #

            Thanks. According to your video, you seem to track the ball in the (x, y) plane. What happens if you move the ball along the z-axis?

          • Adrian Rosebrock April 14, 2016 at 4:42 pm #

            This code doesn’t take the z-axis into account. But you can certainly combine the code in this blog post with the code from measuring the distance from your camera to an object to obtain the measurement along the z-axis as well.

          • Matt April 15, 2016 at 3:12 am #

            Yes, I read it. But in your code, I do not understand how you compute the coordinates of the ball in the world frame. Did you compute the coordinates by changing frames (e.g., world frame -> camera frame)?

          • Adrian Rosebrock April 15, 2016 at 12:11 pm #

            The (x, y)-coordinates of the ball are obtained from the image itself. They are found by thresholding the image, finding the contour corresponding to the ball, and then computing its center. These coordinates are then stored in a queue (i.e., the actual “tracking” part). If you would like to add in tracking along the z-axis, you’ll need to see the blog post I linked you to above. The trick is to apply an initial calibration step that can be used to measure the perceived distance in pixels and convert the pixels to actual measurable units.

          • Matt April 19, 2016 at 7:07 am #

            Thanks for your answer. But in your tutorial, you do not measure the distance from the camera to an object, only the distance between different objects. Moreover, in your algorithm, I do not see the use of intrinsic parameters (e.g., the focal length of the camera). Could you help me please? Thanks

          • Adrian Rosebrock April 19, 2016 at 7:24 am #

            Hey Matt, as I mentioned in a previous reply to one of your comments, you need to see this blog post for measuring the distance from an object to camera. This requires you to combine the source code to both blog posts to achieve your goal. I’ll see about doing such a blog post in the future, but if you would like to build a system that measures distance + direction, you’ll need to study the posts and combine them together.

          • Matt April 19, 2016 at 2:50 pm #

            Yes, I see, but I asked myself if a simple webcam works for 3D tracking… I did a state-of-the-art review, and I read that a special 3D camera sensor is required. That’s why I asked you 🙂

          • Adrian Rosebrock April 19, 2016 at 3:01 pm #

            For 3D tracking, you’ll likely want to explore other avenues. If you want to use a 2D camera (which would be a bit challenging), then camera calibration via intrinsic parameters would be required. Otherwise, you might want to look into stereo/depth cameras for more advanced tracking methods. Hopefully I’ll be able to cover both of these techniques in future blog posts 🙂

          • Matt April 20, 2016 at 3:53 am #

            I hope for you 🙂

            Otherwise, I thought of using a simple way of computing the z-coordinate, which consists of using the size of the object to determine the z-coordinate roughly. When it appears larger on the camera, it must be closer, and inversely, if it’s smaller, it’s farther away. But I don’t know if this method is robust, because if I use a small object, it would be difficult.

  36. Jon April 8, 2016 at 5:54 pm #

    This uses a USB camera. I have your code for the picamera working from another module and would like to use the picamera. What is the correct way to do this?

    Can I replace line 26:

    camera = cv2.VideoCapture(0)

    with:
    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.framerate = 32


    • Adrian Rosebrock April 13, 2016 at 7:16 pm #

      Instead of replacing the code using picamera directly, I would instead use the “unified” approach detailed in this post.

      • Dan June 4, 2016 at 10:36 pm #

        Just to get this particular tutorial working with picamera or the unified approach, would you detail (or post) the specific changes to get ball tracking working with the picamera?

        • Adrian Rosebrock June 5, 2016 at 11:24 am #

          To be totally honest, I’m not likely going to write a separate blog post detailing each and every code change required. If you go through the accessing the Raspberry Pi camera post and the unifying access post, I’m more than confident that you can update the code to work with the PiCamera module.

          Start with the template I detail inside the “accessing the picamera module” tutorial. Then, start to insert the code inside the while loop into the video frame processing pipeline. It’s better to learn by doing.

  37. WouterH April 14, 2016 at 3:48 pm #

    Why are you calculating the center? The cv2.minEnclosingCircle function already returns the center + radius, or am I missing something?

    Regards and thanks for the nice example.

    • Adrian Rosebrock April 14, 2016 at 4:36 pm #

      The cv2.minEnclosingCircle does indeed return the center (x, y)-coordinates of the circle. However, it presumes that the shape is a perfect circle (which is not always the case during the segmentation). So instead, you can compute the moments of the object and obtain a “weighted” center. This is a more accurate representation of the center coordinates.
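      The moments in question are just weighted sums over the mask, so the idea can be sketched with NumPy (cv2.moments computes the same m00, m10, and m01 terms on a real contour or mask):

```python
import numpy as np

def weighted_center(mask):
    # m00 = total "mass", m10 = sum of x * weight, m01 = sum of y * weight;
    # the weighted center is (m10 / m00, m01 / m00)
    ys, xs = np.nonzero(mask)
    w = mask[ys, xs].astype(float)
    m00 = w.sum()
    return (int((xs * w).sum() / m00), int((ys * w).sum() / m00))

mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 255  # a 3x3 blob whose centroid is (4, 4)
print(weighted_center(mask))  # (4, 4)
```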

      • michael March 4, 2019 at 1:00 pm #

        hi Adrian!
        Yes, we have two centers here:
        1) the center of the object, which is not a perfect circle
        2) the center of the estimated ball

        I think it is more correct to draw the center of the estimated ball.
        Additionally, we can smooth the path to reduce some of the noise in the curve.

  38. Om April 22, 2016 at 3:39 am #

    Help me, I am not able to install imutils on my RPi. How can I do it?

    • Adrian Rosebrock April 22, 2016 at 11:42 am #

      You should be able to install it via pip:

      $ pip install imutils

  39. anirban April 23, 2016 at 1:57 pm #

    Hi – Excellent blog, but when running I get the below error. Can someone help?

    File “”, line 39, in
    image, contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: need more than 2 values to unpack

    • Adrian Rosebrock April 25, 2016 at 2:09 pm #

      It sounds like you’re using OpenCV 3; however, this blog post utilizes OpenCV 2.4. To fix this error, you simply need to change the cv2.findContours line to:

      (_, cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      I detail the difference in cv2.findContours between OpenCV 2.4 and OpenCV 3 in this blog post.
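      A small version-agnostic helper handles both return signatures by checking the tuple length (this is essentially what the later imutils.grab_contours convenience function does; the name here mirrors it):

```python
def grab_contours(result):
    # OpenCV 2.4 / 4.x: (contours, hierarchy)
    # OpenCV 3.x:       (image, contours, hierarchy)
    if len(result) == 2:
        return result[0]
    if len(result) == 3:
        return result[1]
    raise ValueError("unexpected cv2.findContours return value")

# dummy tuples standing in for the two real return shapes
print(grab_contours((["cnt"], None)))        # ['cnt']
print(grab_contours((None, ["cnt"], None)))  # ['cnt']
```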

      • anirban April 25, 2016 at 3:35 pm #

        Hi Adrian – So kind of you to reply in such a short time; I appreciate your help for starters like me. I am not running OpenCV 3 but 2.4.9, which I got by running cv2.__version__ in the Python terminal.

        Can you suggest anything else?

        • Adrian Rosebrock April 26, 2016 at 5:17 pm #

          My mistake — I read your original comment too fast. I should have been able to tell that you were using OpenCV 2.4. In that case, you just need to modify the code to be:

          (cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

          I discuss the changes in the cv2.findContours function between OpenCV 2.4 and OpenCV 3 in this blog post.

  40. Shubham Batra April 25, 2016 at 6:45 am #

    @Adrian Hey! I am tracking a table tennis ball using the color segmentation and Hough circle method, but this only works fine when the ball is moving slowly.
    When the ball is moving very fast, tracking is lost.
    I am using the Kinect for Windows V2 sensor, which gives at most 30 FPS.
    Do I need a better high-speed camera, or can another algorithm do the trick with the same 30 FPS camera?

    • Adrian Rosebrock April 25, 2016 at 1:59 pm #

      I wouldn’t recommend using Hough Circles for this. Not only are the parameters a bit tricky to get just right, but the other issue is at high speeds, the ball will become “motion blurred” and not really resemble a circle. Instead, I would suggest using a simple contour method like I detailed in this blog post. Otherwise, if you really want to use Hough Circles, you’ll want to get a much faster camera and have the hardware that can process > 60 FPS.

      • Shubham Batra April 27, 2016 at 3:16 am #

        @Adrian I’ll try out the simpler contour method and see if that works just fine,
        else I’ll have to get a better camera.
        Anyways thanks for the help!

  41. reza aulia April 29, 2016 at 8:14 am #

    If I want to change the color, where can I find the type of color?

    • Adrian Rosebrock April 30, 2016 at 3:58 pm #

      Please see the range-detector script that I mention in this blog post.

  42. Suraj May 12, 2016 at 4:33 am #

    Hello Adrian,

    I want to blur the rest of the video while the specified colour region stays normal.
    Any leads on how I can get to it?

    • Adrian Rosebrock May 12, 2016 at 3:35 pm #

      I would suggest using transparent overlays. You can blur the entire image using a smoothing method of your choice. This becomes your “background”. And then you can overlay the original, non-blurred object. This will require you to extract the ROI/mask of the object.

  43. Matt May 25, 2016 at 8:33 am #

    Hi adrian,

    I have another question, please. I do not understand how you computed the coordinates of the ball without considering the focal length of your camera in your algorithm.

    Could you explain to me the difference between your work and the case where you would use the focal length?



  44. Oscar June 16, 2016 at 5:42 am #

    Hi Adrian.

    great tutorial.
    One question: is it possible to create an executable of this script and a shortcut? This way, the program runs by double-clicking on the shortcut. I have Ubuntu.

    • Adrian Rosebrock June 18, 2016 at 8:22 am #

      Thanks Oscar. And regarding your question, I don’t know about that. I’ve never tried to create a Python script that runs via shortcut.

  45. Ihtasham June 17, 2016 at 9:50 am #

    Hi, I want to know how we can track people and get the track direction. In your tutorial, the direction just starts from where the body left the camera view; also, how can we return the window to a neutral state again?

    • Adrian Rosebrock June 18, 2016 at 8:18 am #

      In this tutorial I demonstrate how to compute direction and track direction. You can apply the same methodology to other objects (such as people) as well.

      • Yasaman June 7, 2018 at 5:19 am #

        Dear Adrian,
        Could we apply this method to detect a bird that flies against different backgrounds?

        • Adrian Rosebrock June 7, 2018 at 3:00 pm #

          Birds can be various shapes and colors so I don’t think I would recommend this method. You might want to consider background subtraction if your camera is fixed and not moving. Another option may be to train your own custom object detector as well.

  46. Arkya June 18, 2016 at 2:58 pm #

    Hey, thanks for the awesome tutorial.
    What if I need to track any other colored ball (say, black), how would I get the HSV range of that color?

    • Adrian Rosebrock June 20, 2016 at 5:34 pm #

      Please see the range-detector script that I linked to in the blog post. This script will help you define the HSV color range for a particular object.

      • Arkya June 22, 2016 at 2:53 pm #

        thanks, got it

  47. yaswanth kumar June 23, 2016 at 11:17 am #

    Hi Adrian,
    Can’t we use the RGB color space and RGB color boundaries to detect a color?

    • Adrian Rosebrock June 23, 2016 at 1:05 pm #

      Absolutely. Please check this blog post as an example.

  48. Amin July 5, 2016 at 11:35 am #

    Hi Adrian,
    I made a robot (see that-> )
    It works based on your code to find the green ball.
    Now I’m trying to optimize the code and have some questions about the erode & dilate functions. Why don’t you use something like this?
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, None, iterations=2)
    Also, center is initialized to None, but I deleted that and nothing happened :/ While searching for object detection methods to detect the ball for my robot, I found these algorithms:

    * transfer the color space to HSV & find contours, then find the circle in it, as you explain in this post
    ** Hough Circle Transform -> as you said, it needs a camera with high FPS
    *** HOG
    **** Cascade Classification
    ***** CAMShift

    Now I want to know: which algorithm is fastest and has the best performance?
    What other algorithms can I use to find the ball in video frames?


    • Adrian Rosebrock July 5, 2016 at 1:40 pm #

      You could certainly use a closing operation as well. In this case, I simply used a series of erosions and dilations. As for your second question, I’m not sure what you mean.

      The fastest tracking algorithm will undoubtedly be CAMShift. CAMShift is extremely fast since it only relies on the color histogram of an image. The Hough circle transform can be a bit slow, and worse, it’s a pain to tune the parameters. Haar cascades are very fast, but prone to false-positive detections. HOG is slower than Haar, but tends to be more accurate. If all you want to track is a green ball, then I would suggest using either the cv2.inRange function or CAMShift.

      • Amin July 6, 2016 at 12:44 am #

        thanks a lot
        Just one thing I had forgotten to ask:
        sometimes when I take the ball out of the camera view
        I get this error:
        ZeroDivisionError: float division by zero
        It’s related to this line:
        center = (int(M[“m10”] / M[“m00”]), int(M[“m01”] / M[“m00”]))
        How do I fix it?

        • Adrian Rosebrock July 6, 2016 at 4:14 pm #

          If the ball goes out of view and you are trying to compute the center, then you can run into a divide-by-zero bug. Changing the code to:

          Should resolve the issue.
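          A guard along these lines avoids the division by zero (a sketch using plain dicts; M stands for the dict returned by cv2.moments, and the exact fix in the post may differ):

```python
def safe_center(M):
    # m00 is the contour area; it is zero when no ball region was found,
    # so skip the center computation in that case
    if M["m00"] == 0:
        return None
    return (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

print(safe_center({"m00": 0.0, "m10": 0.0, "m01": 0.0}))    # None
print(safe_center({"m00": 9.0, "m10": 36.0, "m01": 18.0}))  # (4, 2)
```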

  49. Ewerton Lopes July 8, 2016 at 6:03 am #

    Hey Adrian!

    First of all, thanks for the blog! It is amazing! 😀
    Right now I am doing my PhD research, and somehow I need to track a person using a robot base. Not all people in the scene, but just one given person of interest, let’s say! Well, I am thinking of tracking her based on a given color the person is wearing (let’s say green!). I am wondering, however, whether it is possible, for instance, to use a kind of AR or QR code tag on the person instead of the color, just to avoid getting noise from other colors around. Do you have any idea on this matter? I would love to hear your feedback!

    Thanks, man!

    • Adrian Rosebrock July 8, 2016 at 9:47 am #

      Sure, this is absolutely possible. It all comes down to how well you can detect the “object/marker”. If it’s easier to detect the person via color, do that. If the QR code gives you better detection accuracy, then go with that. I would suggest running a few tests and seeing what works best in your situation.

  50. Jarno Virta July 15, 2016 at 4:00 pm #

    Hi! Thanks for the tutorial! I have been learning OpenCV for a while now and I must say it is fascinating! I have an Arduino robot that I can control from my phone via bluetooth and it can also move around randomly while avoiding obstacles using a sonar range finder. I’m in the process of adding a Raspberry Pi to the robot, which will detect a ball and instruct the Arduino to move toward it. Your tutorial has been very useful!

    I was thinking of using Hough circles to check that the object is in fact a ball, but this proved a bit too difficult because of other stuff being picked up; and if I set the color range for the mask and the diameter for the circle too restrictively, the ball is not found because of, among other things, variations in the tone of color of the ball. The robot should be able to detect the ball at some distance, so that brings certain requirements as well. I must say, I don’t fully understand the Hough circle detection either; I often get a huge number of circles. Maybe just detecting contours is enough for now.

    Is it possible to detect the ball without resorting to color filtering?

    • Adrian Rosebrock July 18, 2016 at 5:25 pm #

      The parameters to Hough Circles can be a real pain to tune, so in general, I don’t recommend using it. I would suggest instead filtering on your contour properties.

      As for detecting a ball in an image without color filtering, that’s absolutely possible. In general, you would need to train your own custom object detector.
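      One illustrative contour filter combines a minimum area with a circularity test, 4πA / P², which equals 1.0 for a perfect circle and falls toward 0 for elongated or ragged shapes. The thresholds below are made up for the sketch, not tuned values from the post:

```python
from math import pi

def is_ball_like(area, perimeter, min_area=100.0, min_circularity=0.7):
    # circularity = 4 * pi * A / P^2: exactly 1.0 for an ideal circle,
    # much smaller for thin or irregular blobs
    if area < min_area or perimeter <= 0:
        return False
    return (4 * pi * area) / (perimeter ** 2) >= min_circularity

r = 20.0
print(is_ball_like(pi * r * r, 2 * pi * r))  # True: an ideal circle of radius 20
print(is_ball_like(200.0, 204.0))            # False: a thin 100x2 rectangle
```

      With OpenCV contours, area and perimeter come from cv2.contourArea(c) and cv2.arcLength(c, True).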

  51. Ed July 21, 2016 at 2:04 pm #

    Hi Adrian,

    I’ve been following a few of your tutorials and have OpenCV set up on my Pi, but I cannot get this tutorial to work! (even running your source exactly)

    Whenever I type the command to run it, I simply end up back at the prompt. Here’s my output:

    (cv) pi@pi:~/Documents $ python --video ball_tracking_example.mp4
    (cv) pi@pi:~/Documents $

    Any idea why it’s doing this?

    • Adrian Rosebrock July 22, 2016 at 10:57 am #

      If you end up back at the prompt, then OpenCV cannot open your .mp4 file. Make sure you compiled OpenCV on your Pi with video support.

      • Ed July 22, 2016 at 3:17 pm #


        I’m not sure that I have. It’s installed on a Raspberry Pi as per your tutorial to install OpenCV 3.0 and Python 2.7.

        Is video support required for accessing the video feed from the picamera?

        • Adrian Rosebrock July 27, 2016 at 2:53 pm #

          Video support is not required for accessing the Raspberry Pi camera module provided that you are using the Python picamera package. However, if you are reading frames from a video file, then yes, video support would be required.

  52. Yao Lingjie July 28, 2016 at 2:56 am #

    Hi Adrian,

    Can I know how do I find out the object’s lower and upper boundaries by using the imutils range_detector?

    • Adrian Rosebrock July 29, 2016 at 8:37 am #

      My favorite way would be to add a “print” statement at the bottom of the range_detector script that prints out the values when you press a key on your keyboard or exit the script. I’m currently looking at overhauling the script to make it a little more user friendly.

  53. Olivier Supplien August 9, 2016 at 7:01 am #


    I am looking for a way to track several objects at the same time, like colored sticky labels.
    Do you have any idea?

    By the way, your code was very helpful and very well-commented, thank you.

    • Adrian Rosebrock August 10, 2016 at 9:30 am #

      You can certainly track multiple objects at the same time. You just need to define the lower and upper color boundaries for each object you want to track. Then, generate a mask for each colored object and use the cv2.findContours function to find each of the objects.

  54. Aris August 16, 2016 at 9:26 am #

    Hey Adrian

    On lines 19 and 20, is that an HSV or RGB color code?

    • Adrian Rosebrock August 16, 2016 at 12:56 pm #

      That is in the HSV color space.

  55. Marcel August 18, 2016 at 4:20 pm #

    Hello Adrian,

    Excellent tutorial on tracking ball with OpenCV.

    I am starting studies in computer vision.

    I have some questions..

    The first is: How can I change the color of the ball’s trail and make it permanent in the image?

    And the second question: How would the code look to find another color and apply a square mask?


    • Adrian Rosebrock August 22, 2016 at 1:39 pm #

      To track the movement of the ball, you can use this post.

      To track a different color object, be sure to use the range-detector script that I mention in the blog post. You can apply a square mask using cv2.rectangle

      • Marcel August 24, 2016 at 2:22 pm #

        Many thanks for the reply,

        I managed to create a square mask for the color red and would like to create a condition to check whether the center of the green ball went over the red region. How could I write this condition?

        • Adrian Rosebrock August 24, 2016 at 3:51 pm #

          Basically, all you need to do is create two masks — one for the red and one for the green ball. Then, once you have these masks take the bitwise AND between them. This will give you any regions where the two colors overlap. You can find these regions using cv2.findContours or more simply cv2.countNonZero
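          A quick sketch of the overlap check described above, using NumPy’s bitwise AND as a stand-in for cv2.bitwise_and and cv2.countNonZero (the function name is illustrative):

```python
import numpy as np

def overlap_area(mask_a, mask_b):
    """Count pixels where two binary masks (values 0 or 255) overlap,
    equivalent to cv2.countNonZero(cv2.bitwise_and(mask_a, mask_b))."""
    overlap = np.bitwise_and(mask_a, mask_b)
    return int(np.count_nonzero(overlap))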

  56. Mark August 23, 2016 at 7:36 am #

    Hi Adrian,

    thanks for your great job sharing all this knowledge!
    But I have a question. Have you ever tried to use a higher-FPS camera with a Raspberry Pi? For instance, an action camera like a GoPro. I’m wondering if it’s even possible. It would be connected via USB, so I think that could be a bottleneck. Considering that CSI is the best option here, the Raspberry Pi camera is the only way to capture HD video at ~60 FPS in real time, right? I’ve read about a tricky HDMI-to-CSI adapter so said GoPro could act like a Raspberry Pi camera, but it’s about 2 times the price of an RPi 3 and the availability leaves much to be desired… What do you think?

    Have a nice day!

    • Adrian Rosebrock August 24, 2016 at 12:17 pm #

      I personally haven’t tried using a GoPro before, regardless of processing the frames on a Pi or standard hardware. In general, I think the Pi will be strained to process 60 FPS unless you are only doing extremely basic operations on each frame.

  57. Nilesh September 1, 2016 at 10:22 am #


    Wonderful post. I have a question along similar lines: how about tracking two or more same-colored objects in the video? Let’s say, for instance, we have 2 red, 2 green, and 1 blue ball in the scene. How would you recommend tracking them with unique identifiers?

    I am expecting 5 different trajectories (similar to one in ball tracking example), one for each ball. Thank you for your help.

    • Adrian Rosebrock September 1, 2016 at 10:55 am #

      For multiple objects, I would suggest constructing a mask for each color. Once you have the masked regions for each color range, you can apply centroid tracking. Compute the Euclidean distance between your centroid objects between subsequent frames. Objects that have minimum distance should be “associated” and “uniquely tracked” together.
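      A rough sketch of the centroid-association step described above, in plain Python. This greedy nearest-neighbor matching is a simplification; a production tracker would also handle objects appearing, disappearing, and swapping:

```python
from math import dist  # Python 3.8+

def associate(prev_centroids, curr_centroids):
    """Greedily match each previous centroid to the nearest unclaimed
    current centroid, returning {prev_index: curr_index}."""
    matches, claimed = {}, set()
    for i, p in enumerate(prev_centroids):
        best_j, best_d = None, float("inf")
        for j, c in enumerate(curr_centroids):
            if j in claimed:
                continue
            d = dist(p, c)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            claimed.add(best_j)
    return matches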

  58. John September 9, 2016 at 12:21 pm #

    Hello ! I actually have a few questions to ask
    What I’m trying to do is run this program on a Raspberry Pi 3 using the PiCamera, but I keep getting this error: ‘NoneType’ object has no attribute ‘shape’
    I tried to modify your code a little by adding these lines

    from picamera import PiCamera (at the very top)
    camera = PiCamera() (line 26)
    But then I get the error: ‘PiCamera’ object has no attribute ‘read’

    I looked at the tutorial here but still couldn’t quite understand..
    Not sure what to do about this..Would really appreciate some help, thanks!

    • Adrian Rosebrock September 12, 2016 at 12:58 pm #

      Anytime you see an error related to “NoneType”, it’s because an image was not read properly from disk or (in this case) a frame was not read from a video stream. I would suggest going back to the Accessing the Raspberry Pi camera post and ensure that you can get the video stream working without any frame processing. From there, start to add in the code from this post in small bits.

      Finally, if you need a jumpstart, I would suggest going through Practical Python and OpenCV to help you learn the basics of computer vision and OpenCV. All code examples in the book are compatible with the Raspberry Pi.

      • John October 3, 2016 at 12:08 pm #

        I managed to bring up a live video feed but still can’t get it to work. I tried inserting with picamera.PiCamera() as camera: before the image processing part but still received the same error

        • Adrian Rosebrock October 4, 2016 at 7:02 am #

          It’s really hard to say what the exact issue is without being in front of your physical setup. I’m not sure how much it would help, but I would suggest going through my post on common camera errors on the Raspberry Pi and seeing if any of them relate to your particular situation.

  59. Mostafa Sabry September 11, 2016 at 9:10 am #

    Hi Adrian,
    I really appreciate your effort on these blogs, which we benefit a lot from.
    I am trying to run the code on Python 2.7 with OpenCV 2 and I keep getting the same error of ‘NoneType’ object has no attribute ‘shape’.
    I am working on a computer, NOT a Raspberry Pi, as I checked in the comments above.
    I would be grateful if you can help me handle this issue.
    I am using a webcam built into the laptop and I checked that it is working using the command
    in a separate Python file.
    I am new to Python. I traced the code to try to run it on the video instead, but I failed to understand the “argparse” library.

    • Adrian Rosebrock September 12, 2016 at 12:50 pm #

      Hi Mostafa, I think you might have missed my reply to Nick above. My reply to him is true for you as well. Anytime you see a frame as “NoneType”, it’s because the frame was not read properly from the video file or video stream. Given your inexperience with argparse, I think this is likely the issue though. Be sure to download the source code to this post using the “Downloads” form and then use my example command found at the top of the file to help you execute it.

      • Mostafa Sabry September 17, 2016 at 6:57 am #

        Thank you, Adrian, for your reply.
        After searching a little bit online for the issue, I found a suggestion that DID WORK, which is to add a timed delay (from the imported “time” library) just after the command “cv2.VideoCapture(0)” to give the webcam time to load.
        The code did work, and thank you very much for your coordination and the incredible stuff you are providing.
        I might need your help soon 🙂 as I want to adjust the code a little bit to fit my problem; I will address you then.

        • Adrian Rosebrock September 19, 2016 at 1:18 pm #

          Great job resolving the issue Mostafa!

  60. Ranjani Raja September 19, 2016 at 1:25 am #

    I installed the imutils package but there is still an error: “no module named imutils”.

    • Adrian Rosebrock September 19, 2016 at 1:03 pm #

      If you are using a Python virtual environment, make sure you have installed imutils into the Python virtual environment:

      I would also read the first half of this blog post to learn more about how to use Python virtual environments.

  61. Andre Brown September 19, 2016 at 11:08 pm #

    Hi Adrian
    I would like to know if it is possible for the contrail to be drawn based on the size of the detected contour or circle, so as you move the ball closer the thickness of the contrail increases, and further away it decreases.
    Also, is it possible to not have the contrails disappear with time? I have tried setting the buffer to 1280 but they still eventually disappear: they start thick, then thin to nothing with time. I would like to keep all contrails in the buffer; I am currently writing these to an image file on exit.

    • Adrian Rosebrock September 21, 2016 at 2:16 pm #

      It’s certainly possible to make the contrail larger or smaller based on the size of the ball. The larger the ball is, the closer we can assume it is to the camera. Similarly, the smaller the ball is, the farther away it is from the camera. Using this relationship you can adjust the size of contrail. The radius of the minimum enclosing circle at any given time will give you this information.

      As for keeping all points of the contrail, simply swap out the deque for a standard Python list.
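      To illustrate the difference, here is a small sketch contrasting the post’s fixed-length deque with a plain list; the variable names are illustrative, not from the post’s code:

```python
from collections import deque

# Fixed-length buffer: the oldest points fall off, so the contrail fades
fading = deque(maxlen=3)
# Plain list: every point is kept, so the contrail never disappears
permanent = []

for point in [(0, 0), (1, 1), (2, 2), (3, 3)]:
    fading.appendleft(point)       # newest point at the front, as in the post
    permanent.insert(0, point)     # same ordering, but unbounded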

  62. Mohamed October 1, 2016 at 5:47 pm #

    I’d like to thank you for your efforts. The code is well explained.
    I have a question that might be naive as I am not a vision guy. Does the same/similar code work on non-circular objects? For example, rectangular ones?

    Thanks again.

    • Adrian Rosebrock October 2, 2016 at 8:58 am #

      Yes, the code certainly works for non-circular objects — this code assumes the largest region in the mask is the object you want to track. This could be circular or non-circular. If your object is non-circular you may want to compute the minimum bounding (rotated) rectangle versus the minimum enclosing circle. Other than that, nothing has to be changed.

  63. Daniel October 6, 2016 at 6:17 pm #

    Hi Adrian!

    When I run your code it works pretty well with my green ball, but when there is no ball on the screen the red contrail goes crazy and doesn’t disappear as in your video. What could be the problem?

    P.S.: Awesome website! Thanks for sharing your work.

    • Adrian Rosebrock October 7, 2016 at 7:25 am #

      The red contrail tracks the last position(s) of the ball. If the red contrail is doing “crazy” things then check the mask. There is likely another green object in your video stream.

  64. huang October 16, 2016 at 6:55 am #

    How to display the data in the form of text in the deque

    • Adrian Rosebrock October 17, 2016 at 4:09 pm #

      Can you elaborate? I’m not sure what you are trying to accomplish.

  65. lokesh p October 19, 2016 at 12:42 pm #

    Can we track the ball in the outfield? Is that possible?

    • Adrian Rosebrock October 20, 2016 at 8:42 am #

      You can, but it’s not easy. You would require a high FPS camera since baseballs move extremely fast. If the camera is not a high FPS then you’ll have a lot of motion blur to deal with. In fact, even with a high FPS camera there will still be a lot of motion blur. Tracking motion of fast moving objects normally is a combination of image processing techniques + machine learning to actually predict where the ball is traveling.

      I would suggest starting with this paper that reviews a ball tracking technique for pitches.

  66. Ejjelthi November 3, 2016 at 6:46 pm #

    Hi Adrian,

    I want to track a player and a ball (like in a football setting).

    Could you tell me what I should change in the code?

    Thanks in advance.

    • Adrian Rosebrock November 4, 2016 at 10:02 am #

      Tracking a player and a ball is a much more advanced project. I wouldn’t recommend simple color thresholding for that. Instead, you should investigate correlation-based filters. These are much more advanced, and even then being able to track a player across the pitch for an entire game is unrealistic. We can do it for short clips, but not for entire matches.

  67. Ranim November 8, 2016 at 9:50 am #

    Thank you so much for your efforts. I am really enjoying and benefiting from this blog. I have questions regarding the number of frames.

    Is it possible to know how many frames per second we are processing for the video?

    Can we customize it to process a specific number of frames per second?

    • Adrian Rosebrock November 10, 2016 at 8:47 am #

      You can use this blog post to help you determine how many frames per second you are processing. Calls to time.sleep will allow you to properly set the number of frames per second that you want to process.
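      A minimal sketch of rate-limiting a processing loop with time.sleep, as suggested above; process_frame is a placeholder for your per-frame pipeline, and any leftover frame budget is slept off:

```python
import time

def run_at_fps(process_frame, frames, target_fps):
    """Process frames at (at most) target_fps by sleeping off leftover time."""
    frame_budget = 1.0 / target_fps
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)
        leftover = frame_budget - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)
```

      Note this only caps the rate; if processing a frame takes longer than the budget, the loop simply runs as fast as it can.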

      • Ranim November 14, 2016 at 4:53 am #

        Thanks a lot.

        • Adrian Rosebrock November 14, 2016 at 12:03 pm #

          No problem, happy I could help 🙂

  68. mandy November 8, 2016 at 3:26 pm #

    I am actually doing a similar project.
    A small question:

    After obtaining the centroid (x, y), I:
    (1) store it in the buffer (buffer size = 128 is better for my project)
    (2) draw the line using OpenCV

    How do I convert your code to Java?

    Thanks if you can help

    • Adrian Rosebrock November 10, 2016 at 8:43 am #

      Hey Mandy, while I’m happy to provide free tutorials to help you learn about computer vision and image processing, I only support the Python code that I write. I do not provide support in converting code to other languages. I hope you understand.

  69. Marlin November 13, 2016 at 7:57 pm #

    I would like to transform this example to track multiple objects of different colors. However, how can I define a long list of colors and then define the upper and lower boundaries of each color given its RGB (or HSV) value?

    For example: I want to detect a silver ball. The RGB for silver is (192, 192, 192). The HSV for silver is (0, 0, 75).

    How can I get the upper and lower limits of the color silver without actually using the script and detecting an object?

    • Adrian Rosebrock November 14, 2016 at 12:04 pm #

      Hey Marlin — I would suggest using the HSV or L*a*b* color space as they are easier to define color ranges in. The problem will be lighting conditions. Consider a “white” object for instance. Under blue light the object will have a blue-ish tinge to it. Under direct light the white object will actually reflect the light. This makes it challenging to use color-based detection in varying lighting conditions.

      In short, you should play around with varying HSV and L*a*b* values in your lighting conditions to determine what appropriate values are.

  70. Alex Johansson November 14, 2016 at 8:26 am #


    What would be the simplest ready-to-use (free or cheap) software for tracking a tennis player’s movement on the court in order to create a visual trace or heat map of that movement?
    Thank you for any help/direction.

    • Adrian Rosebrock November 14, 2016 at 12:01 pm #

      That really depends on the type of camera feed you are using. If the camera is fixed then simple background subtraction would suffice. If you are trying to work with moving cameras with lots of different lighting conditions then the problem becomes much harder. In general you will not find an out-of-the-box solution for this — you’ll likely need to code the solution yourself.
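      For the fixed-camera case, a toy frame-differencing sketch along these lines may help convey the idea; real background subtraction (e.g. OpenCV’s MOG2) is far more robust, and the function name here is illustrative:

```python
import numpy as np

def moving_mask(background, frame, thresh=25):
    """Toy background subtraction: flag pixels that differ from a fixed
    background by more than `thresh` (grayscale frames as 2-D arrays)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255
```

      Accumulating the centroids of the nonzero pixels over many frames would give the raw data for a heat map.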

      • Alex November 17, 2016 at 6:06 am #

        Thank you so much Adrian. It would be a fixed camera.

  71. Shervin Aslani November 14, 2016 at 3:41 pm #

    Hi Adrian, awesome work. I’m new to Python but I was able to learn quite a bit by going over your tutorial and code. I’m currently working on a school project where we are trying to track the path of a barbell while someone performs weighted squats so we can assess and correct the technique being performed. We have painted the end of the barbell a bright yellow, which allows us to track the bar path using the contrails you designed. In order to properly assess squatting technique we need to measure velocity and position and track these kinematic relationships. Is it possible to save the trail or path positions with timestamps to an Excel file or something similar?

    Also, we were wondering if it would be possible to record the video stream so we can review it in the future?

    Thanks for all your help and support.

    • Adrian Rosebrock November 16, 2016 at 1:59 pm #

      Very cool, as a fellow lifter I would certainly be interested in such a barbell tracking project. Regarding measuring position (and therefore velocity), you can derive both by extending the code from this post.

      I then demonstrate how to record the video stream to video here.

  72. Carlos November 16, 2016 at 5:44 pm #

    Hi Adrian

    I was wondering how I can implement this to identify several balls at the same time; I don’t really need to draw the connecting lines

    Thanks for your help

    • Adrian Rosebrock November 18, 2016 at 9:03 am #

      You need to define color thresholds for each of the balls. Loop over each of boundaries, color threshold the image, and compute contours. Instead of keeping only the largest contour, keep the contours that are sufficiently large.

  73. ANIL November 17, 2016 at 4:30 am #

    Hi Adrian,

    Thanks for the well explained tutorial. I want to use your code to detect eye pupils using 2 cameras simultaneously. Then, I want to use serial communication between Python and an Arduino (possibly using pyserial), which will drive servo motors according to the location of the eye pupils in real time. I’m fairly new to both Python and OpenCV. How should I proceed to run the code for 2 cameras simultaneously?

    Thanks in advance for any support.

  74. kanta November 24, 2016 at 8:22 am #

    How do I see the tracking video for this code? Please help me; the code executed successfully but I don’t know how to see the output.

    • Adrian Rosebrock November 28, 2016 at 10:46 am #

      Hey Kanta — are you using my example video included in the “Downloads” section of this post? If so (and you’re not seeing an output video) then it sounds like your OpenCV installation was not compiled with video support. I would suggest following one of my tutorials on installing OpenCV on your system or using my pre-configured Ubuntu VirtualBox virtual machine included in Practical Python and OpenCV.

  75. Juekun November 25, 2016 at 10:07 pm #

    Thanks for the awesome and well-explained tutorial!

    • Adrian Rosebrock November 28, 2016 at 10:35 am #

      Thanks Juekun, I’m happy it helped you 🙂

  76. ivandrew December 6, 2016 at 3:07 am #

    How do I eliminate the red line on the detection of the ball? Which parts must be replaced or removed? Thank you in advance

    • Adrian Rosebrock December 7, 2016 at 9:47 am #

      Comment out the call to cv2.line to remove the red line.

  77. syukron December 6, 2016 at 3:10 am #

    How do I track the color of clothes, so that a robot can follow the color, using OpenCV 3.0.0 on a Raspberry Pi with the raspicam?

    • Adrian Rosebrock December 7, 2016 at 9:46 am #

      There are many ways to track an object in a video stream. For color-based methods you should consider CamShift.

  78. Sam December 12, 2016 at 3:01 am #

    You may have answered this already, but what if you want to track a red ball or a blue ball?
    The crux of the matter is knowing what to pass for the color filter in the inRange function.
    Where did you get the values you passed in?

    • Adrian Rosebrock December 12, 2016 at 10:27 am #

      I’ll write a blog post to demonstrate exactly how to do this since many readers are asking, but the gist is that you need to use the range-detector script to manually tune the color threshold values.

      • saransh October 9, 2018 at 8:07 am #

        eagerly waiting for your tutorial on range-detector.

  79. Apiz December 19, 2016 at 11:39 pm #

    Hi, Adrian. Why is my video from the Raspberry Pi camera flipped?

    • Adrian Rosebrock December 21, 2016 at 10:28 am #

      It’s hard to say. Perhaps you installed your Raspberry Pi camera module upside down?

  80. Himani January 1, 2017 at 9:52 am #

    Hi Adrain,

    Happy new year Adrain….

    I want to identify the white line in the image and the coordinates (x, y) of the line. Can you help me?

    Thank you.

    • Adrian Rosebrock January 4, 2017 at 11:09 am #

      The technique to do this really depends on how complex your image is. For basic images, thresholding and contour extraction is all you need. For more noisy images, you may need to apply a Hough Lines transform. For complex images, it’s entirely domain dependent.

  81. Gabriel Rech January 8, 2017 at 9:32 am #

    Hi Adrian,

    Thanks for the tutorial!

    I’m having some difficulties detecting balls robustly.
    Basically I’m using your range-detector script to identify the mask, and it works, but when I change the position of the ball, like when I move it backwards, the parameters I used to detect the ball in the first position don’t detect the entire ball; sometimes they don’t detect the ball at all. In other words, my code isn’t robust enough. How can I make it more robust?

    Another question: I didn’t quite understand the function of the HSV transform. I know what it does, but I don’t understand why you are using it.

    Thanks for your attention

    • Adrian Rosebrock January 9, 2017 at 9:10 am #

      It sounds like you’re in an environment where there are dramatic changes in lighting conditions. For example, the area of the room may be “more bright” close to your monitor and then as you pull the ball back the ball shifts into a darker region of the room.

      Ideally, you should have uniform (or as close to uniform as possible) lighting conditions to ensure your color thresholds work regardless of where the ball is. It’s easier to write code that works for well-controlled environments than for unconstrained ones.

      If you can’t change your lighting conditions, consider trying the L*a*b* color space or using multiple color thresholds. We use HSV/L*a*b* over RGB since it tends to be more intuitive to define colors in these color spaces and more representative of how humans perceive color.

  82. Luca Mastrostefano January 9, 2017 at 2:52 pm #

    Hello Adrian,

    First of all, I would like to congratulate with you for this amazing blog!

    I’ve just installed OpenCV on my laptop and copy-pasted your code.

    It works perfectly as I run it!

    But… it runs at 12 FPS instead of reaching the 32 FPS you referred to.
    How can I speed up this algorithm? Is the 32 FPS version of your code different compared to the one published in this blog post?

    Currently, I’m working on a Macbook Pro (2,4 GHz Intel Core i5, 8GB Ram) with OpenCV 3.2.0 and Python 2.7.

    Thank you again for your help!

    • Adrian Rosebrock January 10, 2017 at 1:07 pm #

      My suggestion would be to apply video stream threading to help improve your frame rate throughput.

      • Luca Mastrostefano January 12, 2017 at 5:24 pm #

        Thank you for your fast response!

        I have to correct my first post:
        12 FPS was the speed with the stream from the camera.
        If I switch to a pre-saved video the same algorithm goes to 63 FPS!

        So, it is really fast as it is.

        But as you suggest I’m testing the video stream threading and now it is really super fast! The sampling from the camera goes up to hundreds of FPS.

        I’m eager to test the tracker with this technique as well!

        Thank you again!

  83. James January 16, 2017 at 3:31 am #

    Very useful information Adrian,
    I’m curious: is there a simple line of code that would show the speed/velocity of the ball?

    • Adrian Rosebrock January 16, 2017 at 8:05 am #

      You can certainly derive the speed and velocity of the ball using the tracked coordinates; however, the numbers may be slightly off if your camera isn’t calibrated. It would serve as a simple estimation though.
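      As a rough sketch, pixel-space speed can be estimated from consecutive tracked centroids and the frame rate; converting to real-world units requires a calibrated pixels-to-meters factor, which this ignores:

```python
from math import dist  # Python 3.8+

def pixel_speed(p_prev, p_curr, fps):
    """Approximate speed in pixels/second between two consecutive centroids.
    Multiply by a calibrated pixels-to-meters factor for real units."""
    return dist(p_prev, p_curr) * fps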

  84. Chris January 18, 2017 at 12:22 pm #


    I’m looking to run this code with a live feed from a Pi camera. Can that be done?

  85. Jaspreeth January 22, 2017 at 2:45 am #

    Hi, great work!
    I’m planning to design a quadcopter which can track objects and move along with the object.
    How can I use this code for trajectory planning? Can you help me, please?

    thanks in advance 🙂

    • Adrian Rosebrock January 22, 2017 at 10:11 am #

      Hey Jaspreeth — this certainly sounds like a neat project! However, I don’t have any tutorials on trajectory planning. I will certainly consider it for a future blog post.

  86. Rhitik January 22, 2017 at 6:18 pm #

    in which directory should I install imutils?

    • Adrian Rosebrock January 24, 2017 at 2:31 pm #

      You should install “imutils” via the “pip” command:

      $ pip install imutils

      This will automatically install imutils for you.

  87. Dan Price January 23, 2017 at 8:20 pm #


    I thoroughly enjoy your book and tutorials; they’re really helping a newbie like me understand the concepts. Is this the best method to track an IR LED using a Pi NoIR camera? I have the code thresholding for the “white” light, but it is highly dependent on a compatible background. Would you help please?

    • Adrian Rosebrock January 24, 2017 at 2:23 pm #

      I admittedly have only used the NoIR camera once so I regrettably don’t have much insight here. The problem here is that trying to detect and track light itself should be avoided. Think of how a camera captures an image — it’s actually capturing the light that reflects off the surfaces around it. This makes detecting light itself not advisable.

  88. Wanderson February 11, 2017 at 11:37 am #

    Hi Adrian,

    Have you worked with the kalman filter? Do you have a link to indicate?

    Thank you.

  89. Wilbur Bacalso February 15, 2017 at 5:02 pm #

    Hi Adrian,

    Thanks for all your posts. I’m super new at all this and have been learning a lot from your blog. I downloaded the code and video file for the ball tracker and I’m getting an error of: NameError: name ‘xrange’ is not defined. I’m obviously missing something and I can’t seem to figure out what. Any help would be appreciated. Thanks in advance!

    • Adrian Rosebrock February 16, 2017 at 9:51 am #

      It sounds like you’re using Python 3 where the xrange function is simply named range (no “x” at the beginning). Update the function call and it will work.
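      For old scripts that must run under either interpreter, a small compatibility shim along these lines also works (in new Python 3 code, simply use range):

```python
# Python 2's xrange was renamed to range in Python 3; this shim keeps
# legacy scripts running under either interpreter
try:
    xrange
except NameError:
    xrange = range

total = sum(xrange(5))  # works the same on Python 2 and 3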

      • Wilbur Bacalso February 16, 2017 at 2:21 pm #

        That did it! Thanks for your quick response, Adrian. I look forward to buying your lessons and learning more when I get some cash together.

        • Adrian Rosebrock February 20, 2017 at 8:06 am #

          Awesome, I’m happy we could clear up the error 🙂

  90. Glenn Holland February 19, 2017 at 4:30 pm #

    Hi Adrian.

    Great Tutorial.

    You are getting upwards of 32 FPS with colour detection tracking; do you think you could get a similar rate using brightness detection, like you demoed in your tutorial on finding the bright spot of the optic nerve in a retinal scan?

    • Adrian Rosebrock February 20, 2017 at 7:42 am #

      Since the tutorial you are referring to relies on only a Gaussian blur followed by a max operation, yes, you should be able to obtain comparable FPS.

  91. aslan February 21, 2017 at 10:54 am #

    Hi Adrian,

    Your code works perfectly once the exact HSV range for an object is arranged, thanks for this. But under varying lighting conditions the HSV values of an object may change significantly, especially the S and V values, so it may detect other colors in different lighting conditions. For example, I arranged the HSV values for my blue object, but in some conditions it detected gray things or black things. You mentioned this problem above and said one needs to find a solution oneself.

    Can you suggest any technique, algorithm, or document for this problem?

    • Adrian Rosebrock February 22, 2017 at 1:36 pm #

      If your lighting conditions are changing dramatically you may want to try the L*a*b* color space. It might also help if you can color calibrate your camera ahead of time. If that’s not possible, you might want to consider machine learning based object detection. Depending on your objects, HOG + Linear SVM would be a good start.

  92. peter February 23, 2017 at 7:58 pm #

    Hi Adrian, Thank you for you amazing posts first!

    I’m new to OpenCV. Following this post, I can now detect a moving object within a certain HSV range via my webcam. Nonetheless, I have encountered some problems when I tried to detect only multiple round tennis balls.

    Here are my concerns:
    (i) I can’t detect multiple balls. I tried a for-loop and I also tried to follow one of your posts. I also tried the “watershed algorithm”, but my program’s result is extremely unstable. (The circle jumps around, and there are lots of unnecessary tiny circles.)

    (ii) I can’t detect only round objects. I tried the HoughCircles function; however, it seems to detect perfect circles only. Then I tried the circularity parameter via the SimpleBlobDetector, using the HSV picture after some thresholding; I’m sure that only the contour of the tennis ball is left, but the SimpleBlobDetector always ignores my tennis ball contour.

    (iii) When there is another object with a similar HSV range, my program outputs a false result.

    Any help would be appreciated. Thanks in advance!

    • Adrian Rosebrock February 24, 2017 at 11:27 am #

      If you are getting a lot of tiny circles you might be detecting “noise”. Try applying a series of erosions and dilations to clean up your mask.

      You are also correct — if there are multiple objects with similar HSV ranges, they will be detected (this is a limitation of color-based object detection). In that case, you should try more structural object detection such as HOG + Linear SVM. I discuss this method in detail inside the PyImageSearch Gurus course.

  93. Louay February 26, 2017 at 8:08 pm #

    Hi Adrian,

    Thanks a lot for the tutorial! I managed to replicate it in C# to integrate into a project I’m working on.

    Now I want to change the color of the tracked object. I read all the comments and your answer was to use the range-detector, which I really can’t use because I’m a Python noob.

    It would be great if you could guide me towards another way to find the upper and lower bounds of a color.
    I’m particularly confused because your green upper is (64, 255, 255), which seems like an RGB value! As far as I know, in HSV, S and V only go up to 100. But also, the lower bound (29, 86, 6) actually corresponds to green in RGB.
    If you could please explain a little more how you got those values, it would help a lot in finding the ones I’m looking for (orange, for a ping pong ball)

    Thanks again and keep up the good work!

    • Adrian Rosebrock February 27, 2017 at 11:09 am #

      Thank you for the suggestion Louay. I’m actually in the process of overhauling the range-detector script to make it easier to use. Once it’s finished I’ll post a tutorial on how to use it.

      • Louay February 27, 2017 at 12:41 pm #

        great news! thanks.

        In the meantime, could you explain how you got your values and how they correspond to HSV?

        I’m just trying to mimic that so I get my values for other colors (using a simple color picker tool)

        • Adrian Rosebrock March 2, 2017 at 7:01 am #

          The values I determined using the range-detector script which used the HSV color space when processing the video/frame. I’m not sure what you mean by how they correspond to HSV? Are you asking how to convert RGB to HSV?

          • Greg January 11, 2019 at 12:49 pm #

            I’m still trying to figure out the answer to Louay’s question:

            “I’m particularly confused because in your green upper you have (64, 255, 255) which seems like an RGB value! As far as I know in hsv s and v only go up to 100. But also, the lower bound (29, 86, 6) actually corresponds to green in RGB.”

            It seems like you have set greenLower and greenUpper using RGB values but then you use them to mask an image in HSV format. In HSV, (29, 86, 6) is black, not green. In RGB format, (29, 86, 6) is a nice shade of green.

            The question is, are you setting lowerGreen and upperGreen in RGB or HSV?

          • Adrian Rosebrock January 16, 2019 at 10:21 am #

            In OpenCV HSV values are in the range:

            – H: [0, 180]
            – S: [0, 255]
            – V: [0, 255]

            In this tutorial we are using HSV for the color threshold.
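To make that scaling concrete, here is a small sketch using only Python's standard-library colorsys module (as a stand-in for cv2.cvtColor) that converts an 8-bit RGB color into OpenCV-style HSV values:

```python
import colorsys

def rgb_to_opencv_hsv(r, g, b):
    """Convert 8-bit RGB to OpenCV-style HSV:
    H in [0, 180], S and V in [0, 255]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(round(h * 180)), int(round(s * 255)), int(round(v * 255))

# Pure green in RGB maps to hue 60 on OpenCV's half-degree scale
print(rgb_to_opencv_hsv(0, 255, 0))  # → (60, 255, 255)
```

Note how S and V saturate at 255, not 100, which is why the post's upper bound of (64, 255, 255) is a valid OpenCV HSV triple rather than an RGB value.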

  94. Vijay February 27, 2017 at 6:04 pm #

    Hi Adrian,

    I need to extract key frames from the given video to do certain machine learning algorithms.

    If you have any idea about it, can you share some details?

    Converting videos into frames and extracting key frames from those frames (using Python – OpenCV and PIL)

    Videos –> Frame extraction from videos (using Python) –> Frames (DB) –> keyframe extractor (Using Python) –> Keyframes (DB)

    Thanks in advance!

    • Adrian Rosebrock February 27, 2017 at 6:43 pm #

      I think this all depends on what you call a “key frame”. I discuss how to detect, extract, and create short video clips from longer clips inside this post. If you’re instead interested in how to efficiently extract and store features from a dataset and store in a persistent (efficient) storage system, take a look at the PyImageSearch Gurus course.
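As a starting point, and only a sketch (real key-frame extraction compares frame content, e.g. histogram differences, rather than sampling blindly), a hypothetical helper that picks every Nth frame index could look like this; you would then seek to those indices with cv2.VideoCapture:

```python
def keyframe_indices(total_frames, every_n=30):
    """Naive key-frame selection: sample every Nth frame index.
    A content-aware extractor would compare histograms or frame
    differences instead of sampling at a fixed interval."""
    if every_n <= 0:
        raise ValueError("every_n must be positive")
    return list(range(0, total_frames, every_n))

# For a 100-frame clip sampled every 30 frames:
print(keyframe_indices(100, 30))  # → [0, 30, 60, 90]
```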

  95. Annie Dobbyn March 8, 2017 at 11:17 am #

    Right this might be a dumb question but how did you find your FPS?

    • Adrian Rosebrock March 8, 2017 at 12:57 pm #

      The FPS of your physical camera sensor? Or the FPS processing rate of your pipeline? Typically we are concerned with how many frames we can process in a single second.

  96. sinjon March 13, 2017 at 6:25 pm #

    Hi Adrian,

    Is there a way to check all modules have been downloaded?

    My OpenCV wouldn’t bind with Python in the virtual environment, so I’m currently working outside it.

    I’m getting error messages that the mask from mask.copy() is not defined, making me think something is missing.

    Thanks in advance

    • Adrian Rosebrock March 15, 2017 at 9:02 am #

      I’m not sure what you mean by “all modules have been downloaded”. You can run pip freeze to see which Python packages have been installed on your system, but this won’t include the file in the output. You can also ls the contents of your Python’s site-packages directory.
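A quick way to check from Python itself, using the stdlib importlib module (the package names below are just examples):

```python
import importlib.util

def is_installed(package_name):
    """Return True if the named package can be imported
    in the currently active environment."""
    return importlib.util.find_spec(package_name) is not None

# 'json' ships with Python, so this is always True:
print(is_installed("json"))  # → True
# For this post's dependency you would check: is_installed("imutils")
print(is_installed("definitely_not_a_real_package"))  # → False
```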

  97. sinjon March 14, 2017 at 7:31 am #

    Hello Adrian,

    I’m getting an error that the ‘mask’ from mask.copy() is not defined.

    I was unable to bind my OpenCV to Python for my virtual environment, so I’m building outside it. I feel like this could be causing the problem.

    Thanks in advance

  98. Arati March 20, 2017 at 1:22 pm #

    Sir, can I use a Kinect sensor for accessing video? Is it possible? Please explain how.

  99. Umang March 21, 2017 at 11:24 am #

    hello Adrian

    I am doing a similar kind of project, but I want to track vehicles from a CCTV camera to detect each vehicle’s speed. Can you suggest a method?

    • Adrian Rosebrock March 22, 2017 at 8:39 am #

      Determine the frames per second rate of your camera/video processing pipeline and use this as a rough estimate of the speed.

  100. Jim March 21, 2017 at 7:04 pm #


    This is great. I’m essentially wanting to make an extension of this application, but have the ball (or tracking marker) fixed to a person, and measure how quickly (in real-world speed) they can shuffle from side to side.
    Assuming they are moving in a straight line perpendicular to the camera, could this application be extended to calibrate pixels in the frame to a real world distance, and somewhat accurately measure the subject’s side to side motion (velocity, acceleration)?

    • Adrian Rosebrock March 22, 2017 at 8:37 am #

      Yes, provided that you know the approximate frames per second rate of the camera you can use this information to approximate the velocity.
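A minimal sketch of that calculation. The 20 px/cm calibration factor below is a made-up example; in practice you would measure it from a reference object of known size in the frame:

```python
def real_world_velocity(pixel_displacement, pixels_per_cm, fps):
    """Convert a per-frame pixel displacement into cm/s.

    pixel_displacement: how far the tracked centroid moved between
                        two consecutive frames, in pixels
    pixels_per_cm:      calibration factor measured from an object
                        of known real-world size
    fps:                frames per second of the camera/pipeline
    """
    cm_per_frame = pixel_displacement / pixels_per_cm
    return cm_per_frame * fps

# 40 px of movement between frames, 20 px == 1 cm, 30 FPS → 60 cm/s
print(real_world_velocity(40, 20, 30))
```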

  101. Mehdi March 25, 2017 at 2:28 am #

    Just Great

  102. Arun April 2, 2017 at 5:27 am #

    • Arun April 2, 2017 at 5:43 am #

      its over

    • Adrian Rosebrock April 3, 2017 at 2:03 pm #

      If you are getting a NoneType error, it’s likely because your system is not properly decoding the frame. See this blog post for more information on NoneType errors.

  103. dharu April 10, 2017 at 5:46 am #

    I want the report of this project

    • Adrian Rosebrock April 12, 2017 at 1:17 pm #

      I don’t know what you mean by “report”.

  104. Ghani Putra April 12, 2017 at 11:42 am #

    I installed imutils in a virtual environment but I still get an error saying “No module named imutils”, even though checking in the console shows me the directory of the folder (so it has been installed). What should I do?

    • Adrian Rosebrock April 12, 2017 at 12:54 pm #

      Double-check that imutils was correctly installed into your virtual environment by (1) accessing the environment via the workon command and then (2) running pip freeze.

  105. Shraddha April 16, 2017 at 3:38 pm #

    Hi Adrian,
    This code is amazing! It works perfectly with a tennis ball, but when I try to implement it with a white table tennis ball, it doesn’t track it. I used the range-detector script to get the min/max threshold values, whiteLower = [158, 136, 155] and whiteUpper = [255, 255, 255], and just replaced greenLower and greenUpper with those values, which are in BGR. I’m using .mp4 video files, one with the table tennis ball with a brown background (which it tracks) and the same background with the white ball (no luck here). The issue seems to be that cnts = 0, so maybe it’s not finding the contour?

    • Shraddha April 16, 2017 at 3:41 pm #

      I meant “green tennis ball with a brown background(which it tracks) and same background with the white ball(no luck here). “

      • Adrian Rosebrock April 19, 2017 at 1:05 pm #

        It sounds like your mask does not contain the object you are looking for. Try displaying the mask to your screen to help debug the script. It might be that your color thresholds are incorrect as well.

        • Tony Du July 17, 2017 at 6:08 am #

          Hi Adrian:

          When I run your code on the sample video it returns an image on screen, but when I use my webcam it returns a NoneType error. I’ve already used VLC to test my webcam and it works! Could you help me with this?

          • Adrian Rosebrock July 18, 2017 at 9:58 am #

            Hi Tony — I cover NoneType errors and why they happen when working with images/video streams in this blog post. I would suggest starting there.

  106. Yusron April 17, 2017 at 11:45 am #

    Hi Adrian, I have some questions for you. I have a project doing motion detection based on a colored object:
    1. The objects I use do not have to be circles; they may be square or formless, because I just want to focus on color.
    2. How can I detect whether the object is moving or not?

    • Adrian Rosebrock April 19, 2017 at 1:00 pm #

      If you want to use color to detect an object, then you would use the color thresholding technique mentioned in this blog post. Compute the centroid of object after color thresholding, then monitor the (x, y)-coordinates. If they change, you’ll know the object is moving.
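A sketch of that monitoring logic in pure Python. The centroid values here are hypothetical; in the real script they would come from the moments of the color-thresholded contour:

```python
from collections import deque

def is_moving(centroids, min_shift=2.0):
    """Report whether the last two centroids are farther apart than
    min_shift pixels (a small threshold absorbs detection jitter)."""
    if len(centroids) < 2:
        return False
    (x0, y0), (x1, y1) = centroids[-2], centroids[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > min_shift

pts = deque(maxlen=32)
pts.append((100, 100))
pts.append((101, 100))   # 1 px of jitter
print(is_moving(pts))    # → False
pts.append((110, 108))   # a real shift
print(is_moving(pts))    # → True
```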

  107. sinjon April 21, 2017 at 4:20 pm #

    Hello Adrian,

    Is there a way to set the video / image as an array, so that when the buffer reaches the highest of its journey before returning, it’ll stop tracking?

    Many thanks

    • Adrian Rosebrock April 24, 2017 at 9:55 am #

      Hi Sinjon — the deque data structure can store any object you want (including a NumPy array). If you would like to maintain a queue of the past N frames until some stopping criterion is met, just update the deque with the latest frames read from the camera.
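The key property is deque's maxlen argument: once the queue is full, appending a new item silently evicts the oldest one. A tiny sketch with placeholder strings standing in for frame arrays:

```python
from collections import deque

frames = deque(maxlen=5)  # keep only the 5 most recent frames
for i in range(8):
    frames.append(f"frame-{i}")  # in practice: frames.append(frame_array)

print(list(frames))  # only the 5 most recent frames remain
```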

  108. Marcos Idaho April 23, 2017 at 5:20 pm #

    Hi Adrian, great job. I am very new to OpenCV and Python. When I give the path of the default video, the video is not getting loaded: the switch fails, my webcam turns on, and green objects can be tracked. Can you tell me how to add the path of the video file in the arguments?

    • Adrian Rosebrock April 24, 2017 at 9:36 am #

      Hey Marcos — you supply the video file path via command line argument when you start the script:

      $ python --video ball_tracking_example.mp4

      Notice how the --video switch points to a video file residing on disk.

      • Marcos Idaho April 24, 2017 at 9:45 am #

        Hi Adrian, thank you! What if i am using pycharm interface?

        • Adrian Rosebrock April 24, 2017 at 10:04 am #

          If you are using PyCharm you would want to set the script parameters before executing. Alternatively you could comment out the command line argument parsing code and just hardcode paths to your video file.
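A minimal sketch of the hardcoding approach. The path and values below are placeholders, so substitute your own:

```python
# Instead of:
#   ap = argparse.ArgumentParser()
#   args = vars(ap.parse_args())
# hardcode the same dictionary the rest of the script expects:
args = {
    "video": "path/to/your_video.mp4",  # hypothetical path -- change it
    "buffer": 64,                       # max size of the contrail deque
}
print(args["buffer"])
```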

          • Hari April 26, 2017 at 2:51 am #

            Great work Adrian. I am getting one error while running the code: at pts.appendleft(center) it says pts is not defined. Can you help me with this?

          • Adrian Rosebrock April 28, 2017 at 9:54 am #

            Hi Hari — make sure you use the “Downloads” section of this post to download the code. The pts variable is instantiated on Line 21.

          • Marcos Idaho April 30, 2017 at 5:48 pm #

            Thank you Adrian! I was trying to track two ants moving in a video file. I wrote sample code inspired by yours, but I am not able to track the ants. Can you help me in this regard?

            youtube link to video is attached

          • Adrian Rosebrock May 1, 2017 at 1:21 pm #

            If color thresholding isn’t working to track the individual ants, have you tried background subtraction/motion detection? Of course, this implies that the ants are moving.

  109. Marcos Idaho May 7, 2017 at 2:04 pm #

    @Adrian. Thank you very much for the reply. I was able to track them through background subtraction. I have one more question: I tried to save the processed video, but it’s not getting saved. It’s giving an error.

    frame = imutils.resize(frame,width=600)
    (h, w) = image.shape[:2]

    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock May 8, 2017 at 12:22 pm #

      Hi Marcos — please see this blog post where I discuss common causes of NoneType errors and how to resolve them.

      • kiran May 15, 2017 at 11:02 am #

        @Adrian. Thank you for such a nice tutorial. What should I do when there are multiple balls in the video and they are all green? I tried this, but the tracks get messed up when the balls cross each other or when one ball goes near another.

        • Adrian Rosebrock May 17, 2017 at 10:08 am #

          This will become extremely challenging if the balls are all the same color. I would suggest looking into correlation trackers and particle filters.

  110. terance May 8, 2017 at 11:39 am #

    Hello. Is there a way to just output the red line that is following the movement?

    • Adrian Rosebrock May 8, 2017 at 12:13 pm #

      Hi Terance — what do you mean by “output”? Do you mean just draw the red line? Print the coordinates to your terminal? Save them to disk?

  111. Marcos Idaho May 16, 2017 at 1:43 pm #

    Hi Adrian, if there are multiple objects, then how do I track them? If the objects are moving, is it better to use image subtraction methods?

    • Adrian Rosebrock May 17, 2017 at 9:53 am #

      Hey Marcos — please see my reply to “Ghanendra” above related to tracking multiple objects. The gist is that you define color ranges for each type of object you want to detect and construct masks for each of them. If the objects are moving and there is a fixed, non-moving background, background subtraction would be a better bet.
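A sketch of the "one range per color" idea. The green bounds come from this post; the blue pair is a hypothetical example you would need to tune with the range-detector script, and in_range is a single-pixel, pure-Python stand-in for cv2.inRange:

```python
# One (lower, upper) HSV range per object color
COLOR_RANGES = {
    "green": ((29, 86, 6), (64, 255, 255)),     # from this post
    "blue":  ((100, 150, 0), (140, 255, 255)),  # hypothetical example
}

def in_range(hsv_pixel, lower, upper):
    """Pure-Python stand-in for cv2.inRange applied to one pixel:
    True when every channel lies within its [lower, upper] bound."""
    return all(lo <= p <= hi for p, lo, hi in zip(hsv_pixel, lower, upper))

# An HSV pixel of (45, 200, 120) falls inside the green range
print(in_range((45, 200, 120), *COLOR_RANGES["green"]))  # → True
print(in_range((45, 200, 120), *COLOR_RANGES["blue"]))   # → False
```

In the real script you would build one mask per range with cv2.inRange and find contours in each mask separately.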

  112. Ouma May 22, 2017 at 7:52 am #

    Is there a C++ version? Can this algorithm deal with industrial object motion?

    • Adrian Rosebrock May 25, 2017 at 4:39 am #

      Sorry, I only provide Python + OpenCV implementations on the PyImageSearch blog.

  113. Wallace Bartholomeu May 26, 2017 at 11:52 pm #

    Hi Adrian..
    I’m trying to change line 66 to use a for loop to track multiple balls, like you explained a while ago, but I can’t do it. Can you please give an example?
    Thanks a lot !

    • Adrian Rosebrock May 28, 2017 at 1:03 am #

      Hi Wallace — basically, you need to loop over each of the individual contours detected instead of taking the max-area contour.

      This will loop over each of the individual contours instead of taking the largest one.
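That filtering step can be sketched without OpenCV. Here a contour is just a list of (x, y) points, a shoelace-formula helper stands in for cv2.contourArea, and the min_area value is an assumption you would tune for your frame size:

```python
def contour_area(pts):
    """Shoelace formula -- a stand-in for cv2.contourArea in this sketch."""
    area = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

def sizable_contours(cnts, min_area=50.0):
    """Keep every contour above a minimum area, instead of only the largest."""
    return [c for c in cnts if contour_area(c) >= min_area]

small = [(0, 0), (2, 0), (2, 2), (0, 2)]      # area 4 -> discarded
big = [(0, 0), (20, 0), (20, 20), (0, 20)]    # area 400 -> kept
print(len(sizable_contours([small, big])))    # → 1
```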

  114. Alex Ronnebaum June 5, 2017 at 1:11 pm #


    I am working on designing a drone summer camp where the students in the camp build and program drones to perform search and rescue missions. They will be tracking different colored targets and I am having trouble changing the color that the camera is tracking. Could you please give me some tips.

    • Adrian Rosebrock June 6, 2017 at 11:59 am #

      Hey Alex — this sounds like a great project. I would suggest using the range-detector script in imutils to help you define the color ranges. I’ll also be releasing an updated, easier to use range-detector in the future as well.

  115. Boris June 8, 2017 at 7:20 pm #


    I have followed your tutorial to install OpenCV and python on macOS Sierra, however when I run this .py file on my mac, the camera LED lights up, but no camera window opens. Could you help me?

    Thank you.

    • Adrian Rosebrock June 9, 2017 at 1:36 pm #

      Hi Boris — that is indeed very strange. Can you confirm that frames are being read from the stream by inserting a print(frame.shape) statement? That will at least tell you if frames are being read from the camera sensor.

      • Bryce July 22, 2018 at 12:59 am #


        When I run the script with either the test video or trying to use my raspberry pi camera, no window pops up. The terminal just seems to run the code and move on to the next line. How do I get a window to pop up to show the object tracking?


        • Adrian Rosebrock July 25, 2018 at 8:26 am #

          When you say the terminal just runs the code, do you mean that the script keeps running with no window displaying? Or the terminal just exits. If it’s the latter you need to update Line 28 to be:

          vs = VideoStream(usePiCamera=True).start()

  116. Fahim June 11, 2017 at 11:14 am #

    Hi Adrian,
    I see that you referred to the imutils documentation for the range-detector to automatically determine the upper and lower range for the object to detect, but would you please show me an example of exactly how you used it?
    I just can’t get it.

    • Adrian Rosebrock June 13, 2017 at 11:06 am #

      Hi Fahim — I will add writing a tutorial on how to use the range-detector script to my to-do list.

  117. Murali Vikas Reddy June 21, 2017 at 6:48 am #

    Your tutorial was nice.
    Can you explain in detail how you track those x and y points, so that I can track the radius of the ball and print a message indicating whether the ball is moving forward or backward?

    • Adrian Rosebrock June 22, 2017 at 9:29 am #

      The (x, y)-coordinates are stored in a deque object, as the code explains. To determine if a ball is moving forward, I would actually suggest monitoring the radius. As the radius increases, the ball is getting closer to the camera. As the radius decreases, the ball is moving farther away.
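A hedged sketch of that radius-monitoring idea. The radii below are made-up values; in the real script each one would come from cv2.minEnclosingCircle:

```python
def depth_trend(radii, tolerance=1.0):
    """Classify ball motion from the detected radius over recent frames.
    A growing radius means the ball is approaching the camera; a
    shrinking one means it is receding. tolerance absorbs jitter."""
    if len(radii) < 2:
        return "unknown"
    delta = radii[-1] - radii[0]
    if delta > tolerance:
        return "approaching"
    if delta < -tolerance:
        return "receding"
    return "steady"

print(depth_trend([12.0, 14.5, 18.0]))  # radius growing → approaching
print(depth_trend([18.0, 14.5, 12.0]))  # radius shrinking → receding
```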

  118. Dibakar Saha June 24, 2017 at 1:53 pm #

    Hey Adrian, nice blog and a great post. I was wondering: if I designed a Haar cascade for a ball and used it to track its motion instead of using a mask as you did, how would I do it?

  119. Daniel June 26, 2017 at 12:21 pm #

    Hello Adrian,

    I have the same problem changing the HSV colour. I’m trying to find a pink colour and it’s not working … I checked every tutorial available on the web, nothing works. If you could create a kind of colour picker which gives you the range straight away, that would be cool. The one you specified gives me a range, but not the right one, and I have no idea which of the three to use.

    • Adrian Rosebrock June 27, 2017 at 6:22 am #

      Hi Daniel — I will certainly do a color picker tutorial in the future.

      • DAniel June 27, 2017 at 7:03 am #

        Do you have an email address where I can chat with you? I will put mine here, and it would be cool if you could reply. I have an interesting project and I need some help on the image processing side.
        Thank you!

  120. Daniel June 27, 2017 at 8:27 am #

    Hello Adrian,
    I need to find the ball on a roulette table and tell exactly where it is placed on the table (like number and colour) using image processing.
    Any suggestions?
    Thank you very much

    • Adrian Rosebrock June 30, 2017 at 8:31 am #

      That sounds quite tricky as there are a number of variables here. I would start by using a fixed camera in controlled lighting conditions. Color thresholding can be used to reveal the red vs. green color. As for ball tracking, that really depends on the type of ball. If the ball is colored, color thresholding could work. If the ball is metallic silver, then color thresholding will not work due to reflections. You might need to train a custom object detector in that case.

  121. vishnu July 9, 2017 at 6:51 am #

    I’m getting: name ‘xrange’ is not defined

    • Adrian Rosebrock July 11, 2017 at 6:39 am #

      This blog post was designed for OpenCV 2.4 and Python 2.7 (as there were no Python 3 bindings back then). I’m assuming you are using Python 3 in which case you can change xrange to range and the code will work.
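If you would rather keep one script working on both interpreters, a small compatibility shim at the top of the file also does the trick:

```python
# On Python 3 there is no xrange, so alias it to range; on Python 2
# this leaves the original (lazier) xrange untouched.
try:
    xrange
except NameError:
    xrange = range

print(list(xrange(3)))  # → [0, 1, 2]
```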

  122. Rouhollah July 13, 2017 at 7:26 am #

    Dear Adrian,
    Thank you for your awesome tutorials! I have tried your code on my Raspberry Pi 2 (Jessie, Python 3 + OpenCV 3.2), but the result is far slower than what we can see in your clips; is there anything I can do about it?
    For now I have set the graphics memory to 128.

    Also, I was wondering: is it better to use a tracking algorithm instead of detection, even for simple tasks like this post’s? My goal is to detect a color-specific circle in real time. I’ve also implemented the MIL algorithm, but the result was still not satisfactory at all (very slow). Should I change my hardware?

    Thank you again! 🙂

    • Adrian Rosebrock July 14, 2017 at 7:28 am #

      To start, I would suggest using threading to increase your FPS processing rate.

      As for tracking, it depends on which algorithm you choose. Some tracking algorithms are fast, some very slow. If you’re just getting started with computer vision you might want to use more standard laptop/desktop hardware to get a feel for various algorithms and compare performance on the Pi.
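The threading idea (the same one imutils' VideoStream is built on) can be sketched in pure Python: a daemon thread keeps grabbing the latest frame so the main loop never blocks on camera I/O. The frame source here is simulated with strings; a real reader would call camera.read() in the update loop:

```python
import threading
import time

class ThreadedReader:
    """Minimal sketch of a threaded frame reader: a background thread
    continuously refreshes self.frame so read() never blocks."""

    def __init__(self):
        self.frame = None
        self.stopped = False

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        i = 0
        while not self.stopped:
            self.frame = f"frame-{i}"  # stand-in for camera.read()
            i += 1
            time.sleep(0.001)

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True

vs = ThreadedReader().start()
time.sleep(0.05)                 # give the thread a moment to run
print(vs.read() is not None)     # frames are being produced in the background
vs.stop()
```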

  123. seth July 18, 2017 at 7:53 am #

    Hi Adrian, great work on the OpenCV code; it’s been useful while trying to learn computer and machine vision.

    One question though: I’m trying to devise a way of using OpenCV to track one of two objects and their coordinates in space, i.e. whether an object is on the left-hand side of a central divider or the right-hand side, then translate this information to my RPi 3 and make a robot arm move appropriately left or right before kinematically actuating the arm motors to grab the objects.

    Is it possible to use the method you’ve posted to achieve this?


    • Adrian Rosebrock July 18, 2017 at 9:42 am #

      Hi Seth — yes, that is certainly possible. This method finds the largest object and tracks it; however, you could modify it to loop over all contours and simply discard those that are too small. Once you have detected the contours of the objects you can monitor their (x, y)-coordinates and check to see if they pass over a central line. From there, you would need to write code to communicate with your robot arm which is obviously device specific.

      • seth July 20, 2017 at 5:25 pm #

        Hi Adrian, the method I was planning on using is to draw a central point on the video feed, much like your coordinate-tracking sequel to this post, and then have code determine how far the object is from that line. However, I can’t seem to get OpenCV to actually (1) draw a line, and (2) use it as a reference point in the capture.

        To measure the distance, I was going to try to get it to form a box around the colour-filtered object with the dimensions of the object within a cuboid container, the object always being the same distance from the camera.

        I just purchased your ebook but have not had a chance to read it. Is there a technique covered in it that would aid me?

        thanks again.

        • Adrian Rosebrock July 21, 2017 at 8:52 am #

          Hi Seth — thank you for picking up a copy of Practical Python and OpenCV, I’m sure you’ll enjoy it.

          As for your question, I think it would be helpful if you could share a screenshot of what you are working with (ideally over email). From there I can give you better suggestions on what to try.

  124. David July 19, 2017 at 1:42 pm #

    Hi Adrian,

    First off, sweet website. Second, there must be at least two of you because you are just so on top of responding to all of the comments on this website. Anyway, I just wanted to make a comment regarding the range detector.

    At first it was not working out for me. More specifically, I was not able to move the sliders that were controlling the threshold values or they would reset themselves and it was very difficult to terminate the script by pressing ‘q.’ It seemed like python kept going through the while loop, which in turn was resetting my trackbars. Only pressing ‘q’ at the right moment would allow me to stop the script.

    I just added two more lines of code and now it works wonderfully. First, as you had suggested I inserted print(v1_min, v2_min, v3_min, v1_max, v2_max, v3_max) on line 103 within the ‘if’ statement so the script would spit out some values. Secondly, I inserted cv.waitKey(0) as the final line of the while loop.

    I’m a total novice, so I don’t know why it didn’t work initially. Maybe it had something to do with my system? My operating system is macOS Sierra (version 10.12.5) and I ran the script with Python 3.6.1 and the latest version of OpenCV.

    • Adrian Rosebrock July 21, 2017 at 8:56 am #

      Hi David — thanks for the comment, it’s much appreciated. I’m actually planning on re-coding the entire range-detector script (making it much easier to use) and doing a blog post on it. The new script will help resolve these types of frustrating issues.

  125. Jazz August 2, 2017 at 6:22 am #

    Your work is really helpful.
    Can you please guide me? I want to plot the x and y coordinates of the ball in MATLAB.
    Which parameters shall I save to a .txt file in order to plot the green ball’s x and y coordinates?


    • Adrian Rosebrock August 4, 2017 at 7:02 am #

      Hi Jazz — at each iteration of the while loop you would want to save the center tuple computed on Line 69 to disk. From there you can ingest the .txt file into MATLAB and plot (or better yet, just plot using matplotlib).
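A sketch of the saving step. The coordinates here are invented for illustration; in the script you would append center inside the while loop and either write a line per iteration or dump the list once at the end:

```python
import os
import tempfile

# Hypothetical centers collected during tracking
centers = [(120, 340), (125, 338), (131, 335)]

path = os.path.join(tempfile.gettempdir(), "ball_centers.txt")
with open(path, "w") as f:
    for (x, y) in centers:
        f.write(f"{x},{y}\n")  # one "x,y" row per frame

# MATLAB can read this with readmatrix/csvread; in Python:
#   xs, ys = zip(*centers); plt.plot(xs, ys)
with open(path) as f:
    print(f.read().strip())
```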

  126. John K August 4, 2017 at 3:48 am #

    Hey Adrian, I want to ask something. I want to change colorLower and colorUpper to white. What numbers should I change? By the way, I’m new to image processing.

    • Adrian Rosebrock August 4, 2017 at 6:47 am #

      Hi John — I would suggest you use the range_detector script I’ve mentioned in previous comments to help you tune the color thresholds. The exact values will vary depending on your environment and lighting conditions.

      Also, if you’re new to image processing, I would highly recommend that you work through Practical Python and OpenCV — this will help you learn the fundamentals of computer vision and image processing.

      • John K August 7, 2017 at 10:26 am #

        Thank you for your advice. I have another question: how do I connect this code to an RTSP stream (IP camera)? Which part should I change?

        • Adrian Rosebrock August 10, 2017 at 9:02 am #

          I do not have any tutorials on IP cameras, but I will try to cover this topic in the future.

  127. Abhranil August 5, 2017 at 7:11 pm #

    When I try to run this Python code on videos that I downloaded, it is not accurate enough.
    What should I do now?

    • Adrian Rosebrock August 10, 2017 at 9:06 am #

      You will need to use the range-detector script mentioned in the blog post/comments to manually tune your thresholds.

  128. Jyoti August 6, 2017 at 6:14 pm #

    Adrian, how can I detect hand movement instead of a ball?

  129. Manggala August 7, 2017 at 9:30 am #

    Hi Adrian,

    Thank you for your tutorial, it’s great! I will build my project using it. But I have a problem: I want to access video via the RTSP protocol (I’m using a Yi Ant camera). I have put the RTSP link in VideoCapture(“my_rtsp_link”) but it doesn’t work. Do you have any idea why? Thank you.

  130. Madhu Oruganti August 10, 2017 at 3:39 am #

    Dear Adrian,
    How can I save this file after running the code?

    • Adrian Rosebrock August 10, 2017 at 8:41 am #

      Save the video to file? Or a specific frame?

      • Madhu August 10, 2017 at 10:06 am #

        As a video to the same location.

  131. satinder September 17, 2017 at 7:20 am #

    Sir, your tutorial is by far the best I have found on the internet, but the problem is with lighting: whenever lighting conditions change, I am unable to detect my ball. Also, can you tell me whether there is any technique by which I can click on the object to track? (It is very difficult to set the HSV values for a specific color; I had to tune them for hours before I got the best ones.) Thank you.
    It would be very nice of you if you could tell me what to do, because I can’t afford the course.

    • Adrian Rosebrock September 18, 2017 at 2:08 pm #

      Correct, when your lighting conditions change you cannot apply color thresholding-based techniques (as the color threshold values will change). Instead you should consider applying a different object detection technique such as HOG + Linear SVM.

  132. Vignesh September 29, 2017 at 4:56 am #

    Adrian, You are the Boss!! Keep up the good Work

    • Adrian Rosebrock October 2, 2017 at 10:19 am #

      Thank you Vignesh, I appreciate that 🙂 Have a great day.

  133. umbnich September 30, 2017 at 11:37 am #

    Hi Adrian!
    I want to integrate this script with the “Unifying picamera …”, so I write:

    if not args.get("video", False):
    camera = VideoStream(usePiCamera=1).start()
    camera = cv2.VideoCapture(args["video"])

    while True
    frame =

    but when I execute it, a NoneType error appears

    • Adrian Rosebrock October 2, 2017 at 9:56 am #

      To start, your code is incorrect. You should be using VideoStream for both your USB camera and your Raspberry Pi camera module. For video files, use FileVideoStream.

      Secondly, it sounds like Python cannot access your Raspberry Pi camera module. You can read up on common reasons for this in this post.

  134. Pawan October 13, 2017 at 9:47 am #

    Hey Adrian, thanks for the tutorial.
    I have one question,
    the coordinates you are getting are live coordinates; are you storing them anywhere?
    If not, where exactly can I store them (in the code)?
    Thanks in advance

    • Adrian Rosebrock October 14, 2017 at 10:42 am #

      The coordinates are stored in the deque data structure.

  135. Manuel Alejandro Diaz Zapata October 17, 2017 at 1:48 pm #

    Hello Adrian. Loved this tutorial.

    As a final project for my Image Processing class (engineering undergrad) we chose to implement a program that tracks people given a video feed. So researching online I’ve found that two of your tutorials could help us greatly: Pedestrian Detection OpenCV and this one.

    What we want to make is something that fuses these two together, but thinking about it, the way I have in mind to make it work is rather sketchy.

    This is an oversimplified step-by-step:

    Since the Pedestrian Detector draws a hollow rectangle on the person, we could make it solid, then do the BGR-to-HSV color space conversion to apply this code. But a problem comes to mind: when two people come near each other, the result is a bigger rectangle, and we would perhaps lose track of one of the subjects. Maybe this can be avoided if the rectangles drawn on the image per person are small.

    If you can give me your take on this approach, it would be much appreciated.

    • Adrian Rosebrock October 19, 2017 at 5:01 pm #

      Hi Manuel — I’m a little bit confused by the project here. Your goal is to detect a pedestrian in a video stream and then track the (x, y)-coordinates? Instead of bothering with color filling, why not just track the (x, y)-coordinates directly? Is there a particular reason you do not want to do this?

      • Manuel Alejandro Díaz Zapata October 22, 2017 at 12:35 am #

        Well, thinking about it, it does make a lot more sense, because the pedestrian detection already has the centroid of the person. I’ll try it and report back with results.

        Some friends are working on this problem but using blobs. Which do you think would be more effective for tracking multiple pedestrians simultaneously?

        Thanks again.

        • Adrian Rosebrock October 22, 2017 at 8:23 am #

          There are various algorithms you can use for multi-object tracking. Centroid-based tracking would be the easiest. Correlation filters are more advanced but would provide better accuracy.

  136. sanup s babu October 23, 2017 at 4:52 am #

    Hi Adrian,
    I am doing a project similar to a human-following robot, using Python 2.7 and OpenCV 3.1 with a Raspberry Pi 3 Model B.
    In my project, a human wears a jacket printed with 3 circles of the same color (let’s say red) on the back. The camera detects these circles and measures each circle’s width.
    When the human moves left, the left circle’s width increases relative to the other two, and likewise when the human moves right. My problem: in a frame there will be 3 circles, and I have to find which of the 3 circles has the maximum area and indicate whether it is LEFT or RIGHT.

    please help..

    • Adrian Rosebrock October 23, 2017 at 6:11 am #

      I would suggest you create a mask using cv2.inRange for each color you want to detect. You can then sort your contours to determine the left, right, etc.
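The sorting step reduces to ordering bounding boxes by their left edge. A sketch where boxes are (x, y, w, h) tuples, as returned by cv2.boundingRect; the values are invented:

```python
def sort_left_to_right(boxes):
    """Sort (x, y, w, h) bounding boxes by their left x-coordinate,
    so index 0 is the leftmost circle and index -1 the rightmost."""
    return sorted(boxes, key=lambda b: b[0])

boxes = [(200, 50, 30, 30), (10, 60, 30, 30), (100, 55, 30, 30)]
ordered = sort_left_to_right(boxes)
print([b[0] for b in ordered])  # → [10, 100, 200]
```

Once ordered, comparing the widths (the w values) of the first and last boxes tells you whether the left or right circle appears larger.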

      • sanup s babu October 23, 2017 at 12:07 pm #

        In this project, how do I calculate the sum of the horizontal widths of the 3 circles discussed above after computing a bounding box around each circle?
        (After creating a bounding box we get x, y, w, h; I choose w as the horizontal distance.)
        Since the 3 circles are in the same frame, it is difficult to calculate each contour’s width.

  137. Gaudon Florian October 24, 2017 at 4:42 pm #

    Hi Adrian,

    First of all, big thanks to you for all your tutorials.
    I’m running the project on my Raspberry Pi, but I have a little question for you: why isn’t my Raspberry Pi using 100% of the processor while processing the frames, and why don’t I get my 30 FPS (about 5-6 FPS, I guess)?

    • Adrian Rosebrock October 25, 2017 at 1:13 pm #

      Hi Gaudon — sometimes algorithms simply take time to execute but do not use all of the processor’s capabilities.

  138. Jesus De Jesus October 27, 2017 at 6:17 pm #

    Hello Adrian,

    I’m wondering if there is a way to do this kind of tracking but using RGB instead of HSV?

    • Adrian Rosebrock October 30, 2017 at 2:09 pm #

      Hi Jesus — It’s of course possible. You’ll find that the Hue Saturation Value color space is easier to work with in this case. For a detailed read up on color spaces, be sure to check out PyImageSearch Gurus Lesson 1.8.

  139. sanup s babu October 29, 2017 at 7:46 am #

    Hi adrian,
    How are the contours numbered in a frame?
    If 5 is the length of the contour list, does it go left to right, right to left, or randomly?
    C0, C1, C2, C3, C4 – left to right, or
    C4, C3, C2, C1, C0 – right to left, or
    C3, C2, C4, C0, C1 – random

    • Adrian Rosebrock October 30, 2017 at 3:08 pm #

      Hi Sanup. In PyImageSearch Gurus Lesson 1.11.5: Sorting contours, I detail how to sort contours and provide code you can use in your projects. You can also find the code on GitHub.

  140. Kiel October 31, 2017 at 4:23 pm #

    Hi Adrian,

    First off, you are doing some incredible things. In this ball tracking code I would like to do a few things to accommodate my goal.

    1. Track multiple circles simultaneously (even better if different colored dots could be used)
    2. Not erase the lines drawn (and make them not so thick)
    3. Write a new video file with the lines.

    The goal here is to look at objects and their movement from a time lapse video to create a spaghetti diagram of where the objects traveled.

    • Adrian Rosebrock November 2, 2017 at 2:42 pm #

      You can track multiple objects by modifying my code to find all “reasonably sized” contours instead of simply taking the largest one. You can also create masks for different colors as well.

      To accomplish the second goal use a list data structure to store all points.

      To save the resulting output as a video, follow this tutorial.

  141. Goh Zhi Wen November 2, 2017 at 4:54 am #


    I am a beginner on this topic, but I need help with a project I am working on for my university report. If I need to track the trajectory of a billiard ball on the table surface, is it possible to use your code? The most important things I need are, firstly, the coordinates of the initial position of the ball, and secondly, the coordinates of the ball at a certain time during the video. Is it possible to do this with this code?

    • Adrian Rosebrock November 2, 2017 at 2:13 pm #

      For tracking the actual movement and location of the ball I would recommend this tutorial instead.

  142. Maram Qurban November 6, 2017 at 12:01 am #

    Can I contact you? I have some questions about the tutorial!

    • Dhara November 7, 2017 at 1:23 pm #

      Click the Contact Tab at the top of the page

  143. Dhara November 7, 2017 at 1:20 pm #

    Hey Adrian,
    I was wondering why you calculated the center of the object in Lines 68-69 when cv2.minEnclosingCircle() gives you the (x,y) for the center. Great tutorial btw. It helped me so much.

    • Adrian Rosebrock November 9, 2017 at 6:49 am #

      It’s a bit of a (slightly) redundant calculation, but the minimum enclosing circle may not be exact versus the centroid of the mask which would be more exact.

  144. Alex Sev November 18, 2017 at 5:48 am #

    Is it easy to make a robot using a Raspberry Pi go toward a desired color? I mean, when the ball with this color is on the left, go left; the same for the center and the right part. I don't know how to determine the coordinates in the frames :'(

    • Adrian Rosebrock November 18, 2017 at 8:06 am #

      If your goal is to determine the direction, and then have the direction inform the robot on where to go, you can track the object movement. And from there control the servos of your robot.

  145. amber November 18, 2017 at 12:15 pm #

    can we use this (after some modifications) for tracking eye pupil movement?

    • Adrian Rosebrock November 20, 2017 at 4:13 pm #

      Not really, no. This method relies on color thresholding. Pupil detection and tracking is much more challenging. I haven’t personally tried it, but I know other PyImageSearch readers have had good luck with this tutorial.

  146. TJ December 2, 2017 at 11:21 pm #

    Hi Jonathan,

    I am trying to use the colour picker Python script that you created, but when I run the file I get this error:

    usage: [-h] -i IMAGE [-l LOWER] [-u UPPER] error: argument -i/--image is required

    How do I define an image in this script? Thanks for helping.

  147. Suhas December 17, 2017 at 12:51 pm #

    Hey Adrian,
    What if the ball changed color in its trajectory? Like, suppose the ball was a color-changing LED bulb. Is there any way to track objects not just based on color? In fact, this happens in real scenarios too, like differential lighting.

    • Adrian Rosebrock December 19, 2017 at 4:28 pm #

      Yes, if you are looking for structural descriptors take a look at HOG + Linear SVM.

  148. anzar January 11, 2018 at 6:20 am #

    With the Raspberry Pi camera, the following lines show the error IndentationError: unexpected indent

    if not args.get(“video”, False):
    camera = cv2.VideoCapture(0)
    camera = cv2.VideoCapture(args[“video”])


    if len(cnts) > 0:

    • Adrian Rosebrock January 11, 2018 at 7:12 am #

      Make sure you use the “Downloads” section of this blog post to download the code + example video. It looks like an indentation error occurred when you tried to copy and paste the code. Using the “Downloads” section will ensure this does not happen.

  149. Lantao January 11, 2018 at 7:14 am #

    Hi Adrian,

    Thank you for your incredible tutorial, it helps me a lot with my project! May I ask a question about object tracking? First, I want to build hand gesture recognition with a webcam. The method in your tutorial seems difficult to use for tracking hand gestures, because our faces will also be detected if we set a threshold for our skin. Secondly, if I want to detect the movement of each finger, what methods would you recommend? Your previous tutorial talked about changing the value of dx and dy to detect tiny movements of objects; is it possible to detect the movement of fingers that way? If you could help me answer these questions it would help me a lot with my project.

    Have a good day!

    Kind Regards,

    • Adrian Rosebrock January 11, 2018 at 7:33 am #

      1. There are a variety of ways to perform hand gesture recognition. For controlled environments simple thresholding/background subtraction and contour properties will work. For more advanced hand gesture recognition you may need a stereo vision camera to compute the depth map and then recognize the gesture. If this is for a school project, I recommend the former.

      2. I’m not sure I understand the question here. The code in this post demonstrates how to track movement. Monitoring the dX and dY values to determine direction can be found here.

      • Lantao January 12, 2018 at 4:55 am #

        Hi Adrian,

        Thank you for your reply! The second question is actually about how to track the movement of fingers. In other words, I want to detect the movement of each finger to zoom in and out of an image. So, is there any way you would recommend? Thank you.

        Have a good day!

        Kind Regards,

        • Adrian Rosebrock January 12, 2018 at 5:25 am #

          Once you've found the bounding box of the object you want to track, you would want to apply a dedicated tracking algorithm to it. You could apply centroid tracking or correlation tracking. As for "zooming in and out", that sounds like an additional post-processing effect. You can achieve this by cropping the ROI out and resizing it.

  150. Owais January 18, 2018 at 6:02 pm #

    c = max(cnts, key=cv2.contourArea) Adrian, sorry for my English, but will you please explain what is happening in this code?

    • Adrian Rosebrock January 19, 2018 at 6:46 am #

      The variable cnts is a list of contours. We are looking for the largest contour, so we call max, which will find the contour with the largest cv2.contourArea. Basically, we are testing each individual contour in cnts using cv2.contourArea and returning the contour that maximizes this value.

      • owais January 21, 2018 at 4:04 pm #

        Thank you for the reply, Adrian. Could you explain the difference between a contour and a contour area? I know a contour is the boundary of an object; I am stuck on contour area. I'll be thankful to you.

  151. Fexyler January 19, 2018 at 9:35 am #

    Hi Adrian, how can I track white objects with HSV or something? Like an egg? Please answer this, thank you!!

    • Adrian Rosebrock January 22, 2018 at 6:45 pm #

      Detecting white objects is pretty challenging as white will reflect and appear lighter or darker (or varying shades of a color, depending on proximity and lighting conditions). If you’re specifically working with eggs it might be better to take a look at structural descriptors and object detection such as HOG + Linear SVM.

  152. Fexyler January 19, 2018 at 9:43 am #

    And how can I count contoured eggs? How can I count detected eggs?

  153. Ayesha January 25, 2018 at 5:59 pm #

    Hey Adrian, great tutorial as always. Can you please tell me how we can modify this code for human tracking instead of ball tracking? I need to implement that in my project. I would be grateful for your help.

    • Adrian Rosebrock January 26, 2018 at 10:15 am #

      I would suggest using a dedicated human detector such as this Haar cascade or modifying my deep learning object detection code. From there you would want to pass the ROI into a dedicated tracking algorithm, such as correlation tracking. I hope that helps get you started!

  154. hardik singh shekhawat February 11, 2018 at 12:15 pm #

    hello Adrian ,

    I really like this tutorial. I have a question: suppose I have green, red, and yellow balls; on detection of the green ball, GPIO 18 should be set to output mode, else remain off. Basically, I want to assign three different GPIO pins to the detection of three different colors. Can you please help me with it?

    • Adrian Rosebrock February 12, 2018 at 6:22 pm #

      I cover how to use GPIO pins + OpenCV in this post. Use a Python dictionary to map a color ID (such as an integer or string) to a GPIO pin. Use color thresholding as we do in this post to detect each color. Then lookup the color in the dictionary and access the GPIO pin. I hope that helps.
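The dictionary lookup Adrian describes might look something like the sketch below. The pin numbers and the set_pin helper are hypothetical placeholders; on a real Raspberry Pi, set_pin would wrap an actual GPIO call (e.g. GPIO.output) as covered in the GPIO tutorial:

```python
# map each detected color ID to an assumed GPIO pin number
COLOR_TO_PIN = {"green": 18, "red": 23, "yellow": 24}

activated = []  # records (pin, state) calls for this sketch

def set_pin(pin, state):
    # placeholder for a real GPIO call on the Pi
    activated.append((pin, state))

def on_color_detected(color):
    # look up the color in the dictionary and drive its pin, if any
    pin = COLOR_TO_PIN.get(color)
    if pin is not None:
        set_pin(pin, True)

on_color_detected("green")
```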

  155. Radhika Kamal Agrawal March 3, 2018 at 6:53 am #

    I could find out the direction of the ball, whether it is up/down or left/right. But I am interested in finding out if the ball is in circular motion. Please help me track this. Kindly help me with the Python code.

    • Adrian Rosebrock March 3, 2018 at 7:34 am #

      So you're trying to detect if the ball is moving in a circle as it travels across the frame?

  156. Arpit Shukla March 8, 2018 at 1:19 pm #

    Hey Adrian, I have a problem when I am running the code in Ubuntu that the tracker is moving here and there even when the ball is there in the frame. Why is the tracker moving even when there is no ball? I am using my webcam for this. Please help me out!

    • Adrian Rosebrock March 9, 2018 at 9:04 am #

      Use “cv2.imshow” to visualize the “mask” generated by color thresholding. It sounds like there is a lot of “green” in your background which is causing the detector to falsely fire. You may need to tune the color threshold parameters or choose a different color altogether.

      • Arpit Shukla March 9, 2018 at 9:42 pm #

        Thank you Adrian!
        It worked 🙂

        • Adrian Rosebrock March 14, 2018 at 1:29 pm #

          Awesome, congrats on resolving the issue, Arpit!

  157. muratcan March 19, 2018 at 7:16 pm #

    Hello Adrian
    First of all, your project is gorgeous, congratulations! I'm also working on a project like yours. The thing that I want to do is: in addition to your code, an object-tracking robot under motor control. For that: if the object is on the right side of the center of the frame, it moves forward with the servo motor on the right. If the object is on the left, it moves forward with the left servo motor. In short, I want the robot to follow the object wirelessly.
    But I couldn't create this algorithm. Could you help me with that? If you can, I would really appreciate it, because it's my final project and I need to finish in 10 days.

    • Adrian Rosebrock March 20, 2018 at 8:27 am #

      Congrats on working on your final year project, that’s awesome. I don’t have any tutorials on controlling a servo based on object movement. And even if I did, the code wouldn’t likely work out of the box. You would need to modify it to work with your own hardware. I wish you the best of luck with your graduation project.

  158. mo1878 March 20, 2018 at 10:58 am #

    Hello Adrian,

    Firstly, I'd like to thank you for this tutorial. Secondly, I am playing around with the code right now, but I am wondering if it is possible to just output the (x, y) coordinates of the centroid, rather than the change in x and y (dX, dY)?

    • Adrian Rosebrock March 20, 2018 at 11:05 am #

      I think you might be confusing this blog post with this one. We don’t compute dX and dY in this post.

      • mo1878 March 20, 2018 at 11:07 am #

        Apologies, yes, I got my tabs mixed up. In that case, should I post the question on the other page rather than here?

  159. Lisa April 14, 2018 at 6:14 am #

    I am trying to follow this tutorial. I tried to install imutils using pip. It says it's installed, but whenever I try to run the code, there is an error saying there is no module named imutils. Please help.

    • Adrian Rosebrock April 16, 2018 at 2:31 pm #

      Hey Lisa — are you using Python virtual environments? If so, I get the impression you are either (1) not installing “imutils” into the Python virtual environment or (2) you’re using “sudo” when trying to pip-install the package (you can’t use sudo when pip-installing a package into a Python virtual environment). Your commands should look something like this:
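Something along these lines, assuming a virtual environment created with virtualenvwrapper and named "cv" (substitute your own environment name):

```shell
# activate the Python virtual environment first, then pip-install
# without sudo so the package lands inside the environment
workon cv
pip install imutils
```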

      • vishal May 21, 2018 at 9:48 pm #

        Hello, hope you are fine.
        I want to know how to show the mask in the frame and detect only the tennis ball, so that the red color does not pass through it.

        • Adrian Rosebrock May 22, 2018 at 5:55 am #

          If you want to detect the color "red" you'll need to tune the color threshold values used to generate the mask. Once you have the mask you can use a bitwise operation to apply it to the frame.

  160. Ferishta May 5, 2018 at 10:35 pm #

    Hi Adrian, for some reason it is not identifying the word "xrange" as a keyword; it outputs the following error: name 'xrange' is not defined

    HELP please:(

    thank you so much Adrian, you’re awesome.

  161. Ferishta May 6, 2018 at 12:37 am #

    OK, my previous error was fixed. I changed xrange to range. Now when I run the code it says picamera.exc.PiCameraValueError: Incorrect buffer length for resolution 640X480. I am using the PiCamera.

    • Adrian Rosebrock May 9, 2018 at 10:23 am #

      Make sure you are clearing the buffer at the end of every loop. See this blog post for a template on using the Raspberry Pi camera module.

  162. sara May 6, 2018 at 9:04 pm #

    I am using this tutorial to track a yellow line, and I want to use a rectangle instead of a circle. Any suggestions on how to do that? I'm building a line-following robot.

  163. park May 15, 2018 at 7:51 am #

    Hello Adrian!! This was a great help for me.
    Actually, I have one question:
    I want to keep the red line in my frame (without it disappearing), and after a certain period of time, have it cleared all at once. How can I build that?

    • Adrian Rosebrock May 17, 2018 at 7:09 am #

      Whenever the “certain period of time” criteria is met you can simply re-initialize the deque data structure as an empty queue.

  164. chrisw May 17, 2018 at 12:33 pm #

    Hi Adrian,
    Your tutorials are fantastic. Loved this one. Just wondering, how would one go about dumping the cvCircle shape as x, y, z coordinates in a simple CSV or XML file? I've followed some of your other tutorials on 3D camera reconstruction, but I guess I need a little more of a pointer on what and where the matrix values are that I need to reference for the text file output.

    awesome. Thanks.

    • Adrian Rosebrock May 22, 2018 at 6:53 am #

      It sounds like you may be new to writing Python code (which is perfectly okay of course!) but I would suggest reading up on file I/O. Python makes it dead simple. Python also includes a csv module for reading/writing CSV files but I would suggest using simple Python file I/O operations.
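As a quick illustrative sketch using the built-in csv module (the file name and sample points are made up; with a single camera you would typically only have x and y, not z):

```python
import csv

# hypothetical tracked positions, one (x, y, z) tuple per frame
points = [(120, 80, 0.0), (125, 82, 0.0), (131, 85, 0.0)]

# write a header row followed by one row per tracked point
with open("track.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z"])
    writer.writerows(points)
```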

  165. Colton May 20, 2018 at 1:42 am #

    Great tutorial. One question though… I want to do ball tracking where color may not be reliable for a variety of reasons (no guarantee of ball color or lighting conditions). Any suggestions on other filters I can implement prior to finding the ball that may be more robust to lighting conditions? In brainstorming this, optical flow and background subtraction emerged as options, but they could be made difficult by a non-stationary background. Any other options I should be considering?

    • Adrian Rosebrock May 22, 2018 at 6:18 am #

      If your lighting conditions are that uncontrolled and there is no guarantee on ball color, you should consider training your own custom object detector. HOG + Linear SVM would be a good starting point, but you may need to train your own custom deep learning object detector.

  166. Sweets May 22, 2018 at 6:50 am #

    Hi Adrian,

    I want to detect a silver-colored tool, which is continuously moving and changing its position and orientation. The area around the tool is white and grey. I tried using color-based detection, but I am unable to get an exact threshold; grey, white, and silver all look the same (pinkish). Which algorithm should I use for this application?


    • Adrian Rosebrock May 22, 2018 at 6:58 am #

      Do you have an example of the image/video you are working with? If you cannot create a color range for the tool then you’ll want to look into more advanced object detection methods. Since the tool is changing in orientation I would not recommend HOG + Linear SVM. If you have enough training data a deep learning object detector may be your best bet.

      • Sweets May 22, 2018 at 8:46 am #

        I have a few sample videos. If I have enough sample videos and I know 80% of the position and orientation of the tools, can I use HOG + SVM?

        • Adrian Rosebrock May 23, 2018 at 7:23 am #

          You can use HOG + Linear SVM but keep in mind that HOG feature vectors are not rotation invariant. You would need to train a HOG + Linear SVM model for each orientation, normally in 10-25 degree increments.

  167. jonathan_g May 27, 2018 at 3:34 pm #

    Hi Adrian. I have a question about finding the center of the ball. I am most interested in getting the center of the circle enclosing the ball. I am getting an error on line 78: "ZeroDivisionError: float division". Could I change the code to run line 78 only if m00 > 0?

    Do you have any idea why I am getting the error?


    • Adrian Rosebrock May 28, 2018 at 9:35 am #

      To prevent the error you can either:

      1. Check if m00 is indeed greater than zero

      2. Add