Ball Tracking with OpenCV

Today marks the 100th blog post on PyImageSearch.

100 posts. It’s hard to believe it, but it’s true.

When I started PyImageSearch back in January of 2014, I had no idea what the blog would turn into. I didn’t know how it would evolve and mature. And I most certainly did not know how popular it would become. After 100 blog posts, I think the answer is obvious now, although I struggled to put it into words (ironic, since I’m a writer) until I saw this tweet from @si2w:

Big thanks for @PyImageSearch, his blog is by far the best source for projects related to OpenCV.

I couldn’t agree more. And I hope the rest of the PyImageSearch readers do as well.

It’s been an incredible ride and I really have you, the PyImageSearch readers, to thank. Without you, this blog really wouldn’t have been possible.

That said, to make the 100th blog post special, I thought I would do something fun — ball tracking with OpenCV:

The goal here is fairly self-explanatory:

  • Step #1: Detect the presence of a colored ball using computer vision techniques.
  • Step #2: Track the ball as it moves around in the video frames, drawing its previous positions as it moves.

The end product should look similar to the GIF and video above.

After reading this blog post, you’ll have a good idea of how to track balls (and other objects) in video streams using Python and OpenCV.

Ball tracking with OpenCV

Let’s get this example started. Open up a new file, name it ball_tracking.py, and we’ll get coding:
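
The original listing isn’t reproduced here, so below is a sketch of the file’s opening, reconstructed to line up with the line numbers referenced in the walkthrough:

# import the necessary packages
from collections import deque
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
    help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64,
    help="max buffer size")
args = vars(ap.parse_args())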

Lines 2-6 handle importing our necessary packages. We’ll be using deque, a list-like data structure with super fast appends and pops, to maintain a list of the past N (x, y)-locations of the ball in our video stream. Maintaining such a queue allows us to draw the “contrail” of the ball as it’s being tracked.

We’ll also be using imutils, my collection of OpenCV convenience functions that make a few basic tasks (like resizing) much easier. If you don’t already have imutils installed on your system, you can grab the source from GitHub or just use pip to install it:
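
$ pip install imutils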

From there, Lines 9-14 handle parsing our command line arguments. The first switch, --video, is the (optional) path to our example video file. If this switch is supplied, then OpenCV will grab a pointer to the video file and read frames from it. Otherwise, if this switch is not supplied, then OpenCV will try to access our webcam.

If this is your first time running this script, I suggest using the --video switch to start: this will demonstrate the functionality of the Python script to you. Then you can modify the script, video file, and webcam access to your liking.

A second optional argument, --buffer, is the maximum size of our deque, which maintains a list of the previous (x, y)-coordinates of the ball we are tracking. This deque allows us to draw the “contrail” of the ball, detailing its past locations. A smaller queue will lead to a shorter tail, whereas a larger queue will create a longer tail (since more points are being tracked):

Figure 1: An example of a short contrail (buffer=32) on the left, and a longer contrail (buffer=128) on the right. Notice that as the size of the buffer increases, so does the length of the contrail.

Now that our command line arguments are parsed, let’s look at some more code:
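
Again, this is a reconstructed sketch matching the referenced line numbers (the HSV values are the ones quoted in the comments below; note the blurred frame, which is also discussed there):

# define the lower and upper boundaries of the "green"
# ball in the HSV color space, then initialize the
# list of tracked points
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
pts = deque(maxlen=args["buffer"])

# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False):
    camera = cv2.VideoCapture(0)

# otherwise, grab a reference to the video file
else:
    camera = cv2.VideoCapture(args["video"])

# keep looping
while True:
    # grab the current frame
    (grabbed, frame) = camera.read()

    # if we are viewing a video and we did not grab a
    # frame, then we have reached the end of the video
    if args.get("video") and not grabbed:
        break

    # resize the frame, blur it, and convert it to the
    # HSV color space
    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # construct a mask for the color "green", then perform
    # a series of dilations and erosions to remove any small
    # blobs left in the mask
    mask = cv2.inRange(hsv, greenLower, greenUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)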

Lines 19 and 20 define the lower and upper boundaries of the color green in the HSV color space (which I determined using the range-detector script in the imutils library). These color boundaries will allow us to detect the green ball in our video file. Line 21 then initializes our deque of pts using the supplied maximum buffer size (which defaults to 64).

From there, we need to grab access to our camera pointer. If a --video switch was not supplied, then we grab a reference to our webcam (Lines 25 and 26). Otherwise, if a video file path was supplied, then we open it for reading and grab a reference pointer on Lines 29 and 30.

Line 33 starts a loop that will continue until (1) we press the q  key, indicating that we want to terminate the script or (2) our video file reaches its end and runs out of frames.

Line 35 makes a call to the read method of our camera pointer, which returns a 2-tuple. The first entry in the tuple, grabbed, is a boolean indicating whether the frame was successfully read or not. The frame is the video frame itself.

If we are reading from a video file and the frame is not successfully read, then we know we are at the end of the video and can break from the while loop (Lines 39 and 40).

Lines 44-46 preprocess our frame  a bit. First, we resize the frame to have a width of 600px. Downsizing the frame  allows us to process the frame faster, leading to an increase in FPS (since we have less image data to process). We’ll then blur the frame  to reduce high frequency noise and allow us to focus on the structural objects inside the frame , such as the ball. Finally, we’ll convert the frame  to the HSV color space.

Line 51 handles the actual localization of the green ball in the frame by making a call to cv2.inRange. We first supply the lower HSV color boundaries for the color green, followed by the upper HSV boundaries. The output of cv2.inRange is a binary mask, like this one:

Figure 2: Generating a mask for the green ball using the cv2.inRange function.

As we can see, we have successfully detected the green ball in the image. A series of erosions and dilations (Lines 52 and 53) remove any small blobs that may be left on the mask.

Alright, time to compute the contour (i.e., outline) of the green ball and draw it on our frame:
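
A reconstructed sketch of the contour-processing block (the [-2] slice and the 10-pixel radius threshold are the ones described in the text):

    # find contours in the mask and initialize the current
    # (x, y) center of the ball
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)[-2]
    center = None

    # only proceed if at least one contour was found
    if len(cnts) > 0:
        # find the largest contour in the mask, then use
        # it to compute the minimum enclosing circle and
        # centroid
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        # only proceed if the radius meets a minimum size
        if radius > 10:
            # draw the circle and centroid on the frame,
            # then update the list of tracked points
            cv2.circle(frame, (int(x), int(y)), int(radius),
                (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # update the points queue
    pts.appendleft(center)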

We start by computing the contours of the object(s) in the image on Line 57. We specify an array slice of -2 to make the cv2.findContours  function compatible with both OpenCV 2.4 and OpenCV 3. You can read more about why this change to cv2.findContours  is necessary in this blog post. We’ll also initialize the center  (x, y)-coordinates of the ball to None  on Line 59.

Line 62 makes a check to ensure at least one contour was found in the mask. Provided that at least one contour was found, we find the largest contour in the cnts list on Line 66, compute the minimum enclosing circle of the blob, and then compute the center (x, y)-coordinates (i.e., the “centroid”) on Lines 68 and 69.

Line 72 makes a quick check to ensure that the radius  of the minimum enclosing circle is sufficiently large. Provided that the radius  passes the test, we then draw two circles: one surrounding the ball itself and another to indicate the centroid of the ball.

Finally, Line 80 appends the centroid to the pts  list.

The last step is to draw the contrail of the ball, or simply the past N (x, y)-coordinates the ball has been detected at. This is also a straightforward process:
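
A reconstructed sketch of the drawing loop and housekeeping (note the Python 2 xrange, which comes up in the comments below):

    # loop over the set of tracked points and draw the
    # connecting lines between them
    for i in xrange(1, len(pts)):
        # if either of the tracked points are None, ignore them
        if pts[i - 1] is None or pts[i] is None:
            continue

        # otherwise, compute the thickness of the line and
        # draw the connecting lines
        thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
        cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

    # show the frame to our screen
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 'q' key is pressed, stop the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()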

We start looping over each of the pts on Line 84. If either the current point or the previous point is None (indicating that the ball was not successfully detected in that given frame), then we ignore the current index and continue looping over the pts (Lines 86 and 87).

Provided that both points are valid, we compute the thickness  of the contrail and then draw it on the frame  (Lines 91 and 92).

The remainder of our ball_tracking.py  script simply performs some basic housekeeping by displaying the frame  to our screen, detecting any key presses, and then releasing the camera  pointer.

Ball tracking in action

Now that our script has been coded up, let’s give it a try. Open up a terminal and execute the following command:
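
$ python ball_tracking.py --video ball_tracking_example.mp4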

This command will kick off our script using the supplied ball_tracking_example.mp4  demo video. Below you can find a few animated GIFs of the successful ball detection and tracking using OpenCV:

Figure 3: An example of successfully performing ball tracking with OpenCV.

For the full demo, please see the video below:

Finally, if you want to execute the script using your webcam rather than the supplied video file, simply omit the --video  switch:
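
$ python ball_tracking.py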

However, to see any results, you will need a green object with the same HSV color range as the one I used in this demo.

Summary

In this blog post we learned how to perform ball tracking with OpenCV. The Python script we developed was able to (1) detect the presence of the colored ball, followed by (2) track and draw the position of the ball as it moved around the screen.

As the results showed, our system was quite robust and able to track the ball even if it was partially occluded from view by my hand.

Our script was also able to operate at an extremely high frame rate (> 32 FPS), indicating that color-based tracking methods are very much suitable for real-time detection and tracking.

If you enjoyed this blog post, please consider subscribing to the PyImageSearch Newsletter by entering your email address in the form below — this blog (and the 99 posts preceding it) wouldn’t be possible without readers like yourself.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

294 Responses to Ball Tracking with OpenCV

  1. Andrew September 14, 2015 at 11:17 am #

    Hello Adrian!
    As always, a very nice tutorial, very well explained 🙂
    How would you handle the situation where we have, let’s say 10 green balls in the video?

    Best regards!

    • Adrian Rosebrock September 15, 2015 at 6:06 am #

      Great question Andrew, thanks for asking. If you had more than 1 ball in the image, you would simply loop over each of the contours individually, make sure they are of sufficient size, and draw their enclosing circles individually. And if you wanted to track multiple balls of different colors, you would need to define a list of lower and upper boundaries, loop over them, and then create a mask for each set of boundaries.
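
      Something along these lines (a sketch only; the blue range here is a made-up placeholder, not from the post):

      # one (lower, upper) HSV pair per color, one mask per pair
      colorRanges = [
          ((29, 86, 6), (64, 255, 255)),      # green (from the post)
          ((100, 100, 100), (130, 255, 255)), # blue (hypothetical values)
      ]
      for (lower, upper) in colorRanges:
          mask = cv2.inRange(hsv, lower, upper)
          # then erode/dilate and find contours for each mask, as in the post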

  2. David September 14, 2015 at 12:38 pm #

    Great post Adrian. This would be useful for tracking tennis balls! And the time to process a frame is fast! I wonder if Hawk-eye uses OpenCV https://en.wikipedia.org/wiki/Hawk-Eye

    • Adrian Rosebrock September 15, 2015 at 6:12 am #

      Tracking fast moving objects in sports such as tennis balls and hockey pucks is a deceptively challenging problem. The issue arises from the objects moving so fast that standard computer vision algorithms can’t really process them — all they see is a bunch of motion blur. I’m not sure about tennis, but I know in the case of hockey they ended up putting a chip in the puck that interfaces with other algorithms, allowing it to be more easily tracked (and thus watched on TV).

  3. Tyrone September 14, 2015 at 4:40 pm #

    Awesome.
    If you don’t have his book I suggest you get it.
    Keep up the good work Adrian.

    • Adrian Rosebrock September 15, 2015 at 6:01 am #

      Thanks Tyrone! 🙂

  4. Nathanael Anderson September 15, 2015 at 10:55 am #

    Any chance you could put a tutorial together to track a green laser pointer dot over multiple surfaces, including moving video? I’ve been enjoying reading all the info you post. thanks for all the work you put into it. I started working with opencv because of your work.

    • Adrian Rosebrock September 15, 2015 at 4:28 pm #

      Hey Nathanael — welcome to the world of computer vision, I’m happy that I could be an inspiration. I hope you’re enjoying the blog!

      Unfortunately, I personally don’t own any laser pointers. There might be a red laser pointer buried somewhere in the boxes from the last time I moved, but I’m not sure. If I can get my hands on a laser pointer I’ll try to do a tutorial on tracking it.

  5. Neeraj September 15, 2015 at 9:53 pm #

    Hi Adrian, thanks for such detailed explanations of OpenCV concepts; your site is the best for learning them, and these days I eagerly wait for your email about what you’ve newly published. Thanks always! I am right now not able to capture video from my web camera. I am using VirtualBox with Ubuntu installed on it; my host OS is OSX. The frames returned are None and grabbed is always False. I tried changing the camera = cv2.VideoCapture(0) argument from 0 to 1 and -1. Do I need to do anything special to access the web camera from VirtualBox? Under VB -> Devices -> USB, the Apple HD FaceTime camera is selected.

    • Adrian Rosebrock September 16, 2015 at 7:26 am #

      Unfortunately, using VirtualBox you will not be able to access the raw webcam stream from your OSX machine. This is considered to be a security concern. Imagine if a VM could access your webcam anytime it wanted! So, because of this, accessing the raw webcam is disabled. In this case, you have two options:

      1. Try VMWare, which does allow the webcam to be accessed from the VM. I personally have not tried this out, but I have heard it from others.
      2. Install OpenCV on your native OSX machine.

      I hope that helps!

  6. Adam Gibson September 16, 2015 at 12:05 am #

    Just FYI, adding the following code allows operation on Python 3 & OpenCV 3 (at the top, near line 7):
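
    Adam’s exact snippet wasn’t preserved here; a plausible version, given the Python 3 xrange issue discussed in the reply, is:

    # guess at the compatibility shim: alias xrange on Python 3
    import sys
    if sys.version_info[0] == 3:
        xrange = range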

    • Adrian Rosebrock September 16, 2015 at 7:24 am #

      Thanks for the comment Adam! The code will work with OpenCV 3 without a problem, but the change you suggested is required for Python 3. I’ll update this post to use NumPy’s arange instead to make the code compatible with both Python versions.

  7. Luis September 16, 2015 at 6:11 pm #

    Hi Adrian,

    Thanks for another great tutorial on OpenCV.
    I am working with a freshly compiled Python3 + OpenCV3 on a Raspberry Pi 2, installed from your tutorial on the subject and running this code I am getting the following error:

    I even added the lines suggested by Adam Gibson for Python3 and OpenCV3 compatibility, but the error persists.
    Do you have any idea of what I am missing?
    Thanks,

    • Adrian Rosebrock September 17, 2015 at 8:15 am #

      Any time you see an error related to an image being None and not having a shape attribute, it’s because the image/frame was either (1) not loaded from disk, indicating that the path to the image is incorrect, or (2) a frame is not being read properly from your video stream. Based on your provided command, it looks like you are trying to access the webcam of your system. Try using the supplied video (in the code download of this post) and see if it works. If it does, then the issue is with your webcam.

      • Nick B April 6, 2016 at 1:07 pm #

        Hi Adrian,

        I have the same attribute error, however I tested my webcam with your video test script, so it should be working?

        Thank you

        • Adrian Rosebrock April 7, 2016 at 12:44 pm #

          Which webcam video test script did you use?

          • Namal September 21, 2016 at 1:08 pm #

            Hi Adrian,

            I also have this problem. With the video it tracks properly, but not with the webcam; I get the error mentioned above, even though the camera itself works fine. What should I do?

          • Adrian Rosebrock September 21, 2016 at 2:08 pm #

            I’m sorry to hear that your OpenCV setup is having issues accessing your webcam. Which webcam are you using?

          • Santo April 26, 2017 at 5:25 pm #

            Hello Adrian,

            I want to modify this code to detect a digit using picamera.

            Could you suggest a way to do it?

            Thanks,
            Santo

          • Adrian Rosebrock April 28, 2017 at 9:45 am #

            That really depends on the types of digits you’re trying to detect as well as the environment they are in. Typical methods for object detection in general include Histogram of Oriented Gradients. I cover object detection in detail inside the PyImageSearch Gurus course, but without seeing an example image of what you’re trying to detect, I can’t point you in the right direction.

  8. Luis September 17, 2015 at 7:06 am #

    Hi,

    Just figured it out.
    To use this code on a Raspberry Pi with Python3 OpenCV3 and the RaspiCAM I needed to load the v4l2 driver:

    sudo modprobe bcm2835-v4l2

    To load the driver every time the RPi boots up, just add the following line to /etc/modules

    bcm2835-v4l2

    Thanks for the tutorial

    • Adrian Rosebrock September 17, 2015 at 8:11 am #

      Thanks for sharing Luis! Another alternative is just to modify the frame reading loop to use the picamera module as detailed in this post.

      • John November 27, 2015 at 1:16 am #

        Hello Adrian,

        I got the same error as Luis. Would you explain in detail how to modify the frame reading loop?

        • Adrian Rosebrock November 27, 2015 at 7:35 am #

          Hey John — take a look at Luis’ other comment on this post, he mentioned how he resolved the error.

  9. ancientkittens September 17, 2015 at 5:48 pm #

    I love this article – nice work. I even noted that it got picked up in python weekly!!

    • Adrian Rosebrock September 18, 2015 at 7:33 am #

      Thanks! 😀 I’m glad you enjoyed it!

  10. Yuke September 17, 2015 at 9:21 pm #

    Hi Adrian,

    Thanks for sharing this project.

    I have a question regarding recovering the ball when it appears in the scene again: do you use a detector to do it, or only the HSV color boundaries?

    Another thing: I find your application is robust to illumination changes. Do you use other features for tracking? Because I think HSV alone could not handle it…

    • Adrian Rosebrock September 18, 2015 at 7:33 am #

      When the ball drops out of the frame, the HSV boundaries are simply used to pick it back up when it re-enters the scene. To answer your second question, since this is a basic demonstration of how to perform object detection, I’m only using color-based methods. In future posts I’ll show more robust approaches using features.

  11. Vlad September 18, 2015 at 4:28 pm #

    Great!

  12. Luis Jose September 25, 2015 at 2:42 am #

    Hi Adrian!
    Amazing work, as always! I wonder, how difficult do you think is to extend this code and follow the position of more balls of different colors?

    Thanks for sharing all this knowledge with the world!

    Luis

    • Adrian Rosebrock September 25, 2015 at 6:38 am #

      Not too challenging at all. Just define a list of lower and upper color boundaries you want to track, loop over them for each frame, and generate a mask for each color. I actually detail exactly how to do this in the PyImageSearch Gurus course.

  13. Ali October 1, 2015 at 2:39 pm #

    Hi Adrian,

    Beautiful tutorial. I am motivated to try this sort of tracking on a squash ball. Do you think it might work? Some of the challenges that come to mind:

    1) ball is black
    2) ball absolute diameter is small, and the perceived ball size becomes even smaller as the distance between the camera sensor and the ball increases
    3) very high speed of ball

    • Adrian Rosebrock October 2, 2015 at 7:10 am #

      Hey Ali — great questions, thanks for asking. If the ball is black, that could cause some issues when using color based detection, but that’s actually not too much of an issue provided that there is enough contrast between the black color and the rest of the image scene. What’s actually really concerning is the very high speed of the ball. Motion blur can really, really hurt the performance of computer vision algorithms. I think you would need a very high FPS camera, incorporate color tracking (if at all possible), and might want to use a bit of machine learning to build a custom squash ball tracker.

  14. Willem Jongman October 3, 2015 at 8:50 pm #

    Hi Adrian,

    When there is initially no contour in the mask, and then the green object is moved into view, it will generate a “deque index out of range” exception on line 94.

    I modified line 94 to:

    if counter >= 10 and i == 1 and len(pts) >= 10 and pts[-10] is not None:

    That seems to have solved it.

    Thank you very much for sharing your image-processing knowledge, I learned some neat tricks from it and I hope you will be keeping up this good work.

    Cheers,
    Willem.

    • Adrian Rosebrock October 4, 2015 at 7:01 am #

      Thanks for sharing Willem! 🙂

  15. Pedro October 14, 2015 at 10:45 am #

    Hi Adrian,

    Awesome tutorial, as always.
    Best website to source OpenCv and computer vision 🙂

    • Adrian Rosebrock October 14, 2015 at 11:25 am #

      Thanks for the kind words Pedro! 😀

  16. Prasanna K Routray December 27, 2015 at 4:06 pm #

    Hello,
    I tried to run this but it’s giving me this error:

    • Adrian Rosebrock December 28, 2015 at 8:24 am #

      You need to install the imutils package:

      $ pip install imutils

  17. Sharad Patel January 5, 2016 at 9:26 am #

    Adrian,

    I am planning to do a deep dive into your tutorial for a project of mine. I am new to motion tracking and I have a question (it may be answered in the code – if so please can you point it out). Is it possible to set regions on the image such that when the ball enters it, the code can do something (e.g. output a message)? Thanks.

    • Adrian Rosebrock January 5, 2016 at 1:55 pm #

      All you really need are some if statements and the bounding box of the contour. For example, if I wanted to see if the ball entered the top-left corner of my frame, I would do something like:
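
      A sketch of that check (hypothetical 50-pixel thresholds; center is the ball centroid computed earlier in the loop):

      # fire only when the centroid falls inside the top-left corner region
      if center is not None and center[0] < 50 and center[1] < 50:
          print("Ball entered the top-left corner!")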

      The code in the if statement will only fire if the center of the ball is within the upper-left corner of the frame (within 50 pixels). You can of course modify the code to suit your needs.

      • Sharad Patel January 7, 2016 at 6:02 pm #

        Great! Thank you. When I came across this post I wasn’t aware of your Quickstart package. Just downloaded it and I am working my way through the tutorials – enjoying it all so far!

        • Adrian Rosebrock January 8, 2016 at 6:29 am #

          Thanks for picking up a copy of the Quickstart Bundle Sharad, enjoy! 🙂

          • Sharad Patel January 8, 2016 at 7:03 am #

            Sorry – one more question. I have a video similar to yours but I have a red ball. Do you have any tips / tools that you can recommend in establishing the colour bounds for my object (I have tried guesstimating with a web based color-picker). Thanks.

          • Adrian Rosebrock January 8, 2016 at 9:18 am #

            Take a look at the range-detector script I link to in the body of the blog post. You can use this to help determine the appropriate color threshold values.

      • Amirul Izwan February 25, 2016 at 12:23 pm #

        Hello Adrian,
        good job on your tutorials, they’re a big help with my school project. I tried experimenting with the ‘if’ statement as you suggest here; the problem is it only works the first time. The second (and later) times I run the code it throws the error: name ‘x’ is not defined. Is there any way to fix this? Thanks!

        • Adrian Rosebrock February 25, 2016 at 4:39 pm #

          If you’re getting an error that the variable x is undefined, then you’ll want to double check your code and ensure that x is being properly calculated during each iteration of the while loop. It sounds like a logic error in the code that has been introduced after modifying it.

  18. Jessie January 12, 2016 at 6:08 pm #

    Thanks for sharing!

    I’m wondering what the longest distance between the ball and the camera can be while still guaranteeing accuracy?

    • Adrian Rosebrock January 13, 2016 at 6:39 am #

      As long as the ball is in the field of view of the camera and the radius doesn’t fall below the minimum radius of 10 pixels (which is a tunable parameter), this will work. You might also be interested in measuring the distance from the camera to an object.

  19. Hilman January 17, 2016 at 4:11 am #

    Hey Adrian, I have a question.
    I can’t help but notice that you didn’t change the lowerGreen and upperGreen boundaries in the line

    mask = cv2.inRange(hsv, greenLower, greenUpper)

    into a NumPy array, when in your OpenCV and Python Color Detection post you said that OpenCV expects the colour limits to be in the form of a NumPy array. Why is that?

    • Adrian Rosebrock January 17, 2016 at 5:22 pm #

      That’s a good point! I thought it did need to be a NumPy array, but it seems a tuple of integers will work as well. Thanks for pointing this out Hilman.

  20. mathivanan January 17, 2016 at 8:22 am #

    Can someone help me: how can I print the coordinates of the ball on the terminal?

    • Adrian Rosebrock January 17, 2016 at 5:20 pm #

      After Line 72, simply do: print((x, y))

  21. Bart January 21, 2016 at 12:21 am #

    This is a nice tutorial, well explained. I was wondering how to add a pan/tilt servo to the project so that an external (USB) camera can follow the ball the way the contrail does.

    • Adrian Rosebrock January 21, 2016 at 8:51 am #

      I honestly haven’t worked with a pan/tilt servo before, although that is something I will try to cover in a future blog post — be sure to keep an eye out!

  22. Guru January 31, 2016 at 1:58 pm #

    Extremely Great Post Man. I would like to request you to demonstrate shape based tracking instead of color based tracking in this context. It would help me greatly to be frank.
    Thanks.

    • Adrian Rosebrock February 2, 2016 at 10:36 am #

      Have you tried looking into HOG + Linear SVM (commonly called object detectors)? It’s a great way to perform shape based detection followed by tracking.

  23. david February 2, 2016 at 8:47 pm #

    Hi, great post. Just curious about lines 44-46.

    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    Should: hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    Be: hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    ?

    Otherwise, the “blurred” frame isn’t used that I can see.

    • Adrian Rosebrock March 13, 2016 at 10:31 am #

      Hey David, thanks for pointing this out. I didn’t mean for the blurring to be included in the code, I have commented it out. Sorry for any confusion!

  24. Tain February 3, 2016 at 1:51 pm #

    Evening Adrian

    i am absolutely new to Python and openCV, however have some programming experience.

    But i am struggling to get the demo running.

    My guess is that for some reason frames aren’t being grabbed from the camera or video, and so I end up with an error (“NoneType” doesn’t have an attribute called shape) when the code calls for a resize of an object.

    Any thoughts on where I am going wrong?

    Windows 10, Python 2.7

    Thanks for your help
    Tain

    • Adrian Rosebrock February 4, 2016 at 9:15 am #

      Any time you see an image or frame being “NoneType” it’s almost 100% due to the fact that (1) the image is not correctly read from disk or (2) the frame could not be read from the video stream. I would double check that you can properly access your webcam via OpenCV since that is likely where the issue lies.

  25. Bob February 4, 2016 at 4:03 pm #

    Hi, great tutorial; got it to work well. The thing is that I am hopeless with color spaces and don’t understand anything other than RGB. I’ve tried using an rgb2hsv() conversion function to try and track a red ball or a blue ball, but didn’t get it to work. I searched the Python documentation but the functions they propose (like colorsys.rgb_to_hsv()) don’t give results in the same ranges. I also tried different Wikipedia functions and online functions, but I don’t seem to get it to work with anything other than the green.
    Any help welcome.
    Cheers

    • Adrian Rosebrock February 5, 2016 at 9:23 am #

      Take a look at the range-detector script that I link to in this post. You can use this script to help you determine appropriate color threshold values.

  26. Hilman February 5, 2016 at 7:01 am #

    Hey Adrian. I got one question.

    On line 96, in the code ‘key = cv2.waitKey(1) & 0xFF’, why are the ampersand and the ‘0xFF’ needed? I’ve googled it; the best explanation I found was something about whether the computer is 64-bit (if I remember correctly).

    • Adrian Rosebrock February 5, 2016 at 9:17 am #

      This takes the bitwise AND of the return value of cv2.waitKey with 0xFF, which gives the least significant byte. This byte is then compared against the output of the ord function so we can check the actual value of the key press.

  27. David Kadouch February 8, 2016 at 6:43 am #

    Hey Adrian
    Quick question: I want to track a different color than the green/yellow ball. What’s the formula you used to transform the RGB values to HSV values? (In your code sample the values are greenLower = (29, 86, 6) and greenUpper = (64, 255, 255).)
    I’m struggling with that and I can’t make it work. I want to track a blue object.

    thanks
    David

    • Adrian Rosebrock February 8, 2016 at 3:46 pm #

      To transform the RGB values to HSV values, it’s best to use the cv2.cvtColor function. You can find the formula for the conversion on this page. However, if you’re trying to detect a different color object, I suggest using the range-detector script I mention in this post.

  28. ghanendra February 16, 2016 at 11:51 am #

    hey Adrian! I am not able to do it on my Raspberry Pi 2. It’s showing a NoneType error, and for ball_tracking_example.mp4 the FPS is very low.
    Please help me out.

    • Adrian Rosebrock February 16, 2016 at 3:36 pm #

      Anytime you see a NoneType error it’s 99% of the time due to an image not being read properly from disk or a frame not being read from the video stream. The issue here is that you’re using cv2.VideoCapture when you should instead be using the picamera Python package to access the Raspberry Pi camera module. You can read more on how to access the Raspberry Pi camera module here.

      You could also swap out the cv2.VideoCapture for the VideoStream that works with both the Raspberry Pi camera module and USB webcams. Find out more here.

      • Ghanendra February 21, 2016 at 10:54 am #

        Thanks a lot Adrian.
        I was able to do live stream ball tracking with pi.
        I want to detect front head light of a vehicle during night time. Still I am just a beginner. Can you help me out on this?

        • Adrian Rosebrock February 22, 2016 at 4:25 pm #

          That’s definitely a bit more of a challenge. To start, you’ll want to find the brightest spots in an image. Then, you’ll need to filter these regions and apply a heuristic of some sort. A first try would be finding two spots in an image that lie approximately on the same horizontal line. You might also want to try training an object detector to detect the front of the car prior to processing the ROI for headlights.

          • Ghanendra March 10, 2016 at 10:20 pm #

            Hey Adrian!!
            Can you help me out with the code for detecting two spots on the same horizontal line in an image?
            I need to determine multiple bright objects in a live video stream,
            just like finding multiple balls of the same color.
            Thanks in advance.

          • Adrian Rosebrock March 13, 2016 at 10:28 am #

            The same code can be applied. Just define the color ranges for each object you want to detect, then create a mask for each of the color ranges. From there, you can find and track the objects. If you don’t want to use color ranges, then I suggest reading this post on finding bright spots in images.

          • ghanendra March 22, 2016 at 10:03 am #

            hey Adrian, thanks for the help.
            I tried blue by creating a different mask and setting the color range; it was tracked simultaneously.
            I was able to track green and blue.
            1. How do I track two balls on the same horizontal line?
            2. In your tutorial we find the largest contour in the mask; instead of that, how do I find all the contours and track them separately?
            3. How do I track multiple objects of the same color, say if I have 5-10 green balls?

          • Adrian Rosebrock March 22, 2016 at 4:15 pm #

            The most important aspect of getting multiple colors is to use multiple masks — one for each color you want to track. You then apply the same techniques of color thresholding, finding the largest contour(s), and then tracking them. But again, you need to create a mask for each color range that you want to track.

          • ghanendra March 22, 2016 at 11:37 pm #

            Haha… Adrian, you misunderstood me. I was asking about tracking the same color. Tracking multiple objects of the “SAME COLOR”.

          • Adrian Rosebrock March 24, 2016 at 5:21 pm #

            Got it, I understand now. See my reply to “Maikal” in this comments section. I detail a procedure that can be used to handle objects that are the same color.

          • ghanendra March 26, 2016 at 11:04 am #

            Hey Adrian, really, thanks a lot for answering my questions. I just love these tutorials, and every day I will come with a new question; I hope you won’t mind answering them. haha!!
            One more

            I need to indicate the detected green ball using an LED, so how can I use RPi.GPIO with this code? I tried importing it but it shows an error.
            How do I use the GPIO pins with this code?

          • Adrian Rosebrock March 27, 2016 at 9:09 am #

            I’ll be covering this soon on the PyImageSearch blog, keep an eye out 🙂

  29. amin February 18, 2016 at 8:38 am #

    Hi Adrian,
    thanks for your GREAT tutorials,
    I want to merge this ball tracking code with “unifying-picamera-and-cv2…” to get the best result tracking the green ball.
    First I installed the latest Jessie update and installed OpenCV 3.1.0 with Python 3, the same as your post “how-to-install-opencv-3-on-raspbian-jessie”.
    For a simple imshow (no tracking & max width = 400) I can reach 39 FPS with the picamera & about 27 FPS with the webcam,
    but when I add the ball-tracking code the FPS decreases to 7.8 with the picamera and 7 with the webcam 😐
    Why is the webcam so close in speed to the picamera once the tracking code is added?
    Is it possible to reach a better FPS (without changing the size)?
    I tried several ways of increasing the FPS but they are not good enough,
    e.g. increasing priority by changing the nice value of the python process (“renice -n -20 PID of process”),
    but that’s not so good; it maybe increases the FPS by just 0.1.
    thanks a lot

    • Adrian Rosebrock February 18, 2016 at 9:30 am #

      So keep in mind that the FPS is not measuring the physical FPS of your camera sensor. Instead, it’s measuring the total number of frames you can process in a single second. The more steps you add to your video processing pipeline, the slower it will run.

      Your results reflect this as well. When using just cv2.imshow you were able to process 39 frames per second. However, once you included smoothing, color thresholding, and contour detection, your processing rate dropped to 7 frames per second. Again, this makes sense — you are adding more steps to your processing pipeline, therefore you cannot process as many frames per second.

      Think of your video processing pipeline as a flight of stairs. The fewer functions you have to call inside your pipeline (i.e., the “while” loop), the faster you can go down the stairs. The more functions you have, the longer your staircase becomes — and therefore the longer it takes you to descend the stairs.

  30. kazem March 6, 2016 at 8:31 am #

    Hi Adrian, great tutorial. You mentioned you used range-detector to determine the boundaries. Would you mind telling me how you did that? I ran it and I can use the sliders to make sure that my object stands out as black from the white background, but there is nowhere I can see any values.

    • Adrian Rosebrock March 6, 2016 at 9:13 am #

      Indeed, the sliders control the values. The easiest way to get the actual RGB or HSV color thresholds is to insert a print statement right after you press the q key to exit the script. I’ll be doing a more detailed tutorial on how to use the range-detector in the future.

      • Hojo October 17, 2016 at 4:15 am #

        I have just started learning python about a week ago and I am still trying to wrap my head around the language.

        So while this question sounds dumb: how do you run range-detector in Python? Is it already in imutils?

        I am trying to detect and track multiple moving black balls in the same frame, print out the respective positions and calculate the distance traveled, velocity, etc.

        I have written code before to do this but in matlab, (I split an image into R-G-B, performed a background subtraction on each channel, inverted the resulting images, took the similar and binarized) however when reading up on object tracking; I noticed that many use HSV instead of RGB. After reading more I can see why HSV is preferred over RGB but because of this, I need to be able to define the color ranges. The range-detector looked liked perfect to use, but… (back to my question above).

        • Adrian Rosebrock October 17, 2016 at 4:03 pm #

          There are many ways to execute the range-detector script but most are based on how your Python PATH is defined. Where do you have the imutils package installed on your system? The script itself is already in imutils. The easiest method would be to change directory to it and execute using your input image/video stream as the source.

  31. Selim M. March 8, 2016 at 3:53 pm #

    Hello Adrian,
    Thanks for the tutorials, I learned a lot from them. I have a problem with the camera though: it does not capture the frames. I didn’t have a problem when taking photos, but it seems that video is a bit problematic. I run the code and it doesn’t capture the frames. Do you have an idea about why this happens?

    Have a nice day!

    • Adrian Rosebrock March 8, 2016 at 4:10 pm #

      What type of camera are you using? Additionally, you might want to try another piece of software (such as OSX’s PhotoBooth or the like) to ensure that your camera can be accessed by your OS.

  32. giulio mignemi March 23, 2016 at 9:35 am #

    Hello, I need to set the color to identify a ball covered with aluminum foil. Could you help me?

    • Adrian Rosebrock March 24, 2016 at 5:19 pm #

      I would recommend against this. Trying to detect and recognize objects that are reflective is very challenging due to the fact that reflective materials (by definition) reflect light back into the camera. Thus, it becomes very hard to define a color range for reflective materials. Instead, if at all possible, change the color of the object you are tracking.

  33. maikal March 24, 2016 at 12:40 am #

    Can anyone tell me how to detect two green balls simultaneously?

    • Adrian Rosebrock March 24, 2016 at 5:13 pm #

      Change Line 66 to be a for loop and loop over the contours individually (rather than picking out the largest one). You can get rid of the max call and then process each of the contours individually. I would insert a bit of logic to help prune false-positive contours based on their size, but that should get you started!
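
      A rough sketch of that idea (not from the original post; it reuses the cnts and frame variables from the tutorial):

      # process every sufficiently large contour instead of only the largest
      for c in cnts:
          ((x, y), radius) = cv2.minEnclosingCircle(c)
          if radius > 10:
              cv2.circle(frame, (int(x), int(y)), int(radius),
                  (0, 255, 255), 2)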

      • maikal March 25, 2016 at 12:16 pm #

        Yeah Adrian, thanks a lot. I changed it to a for loop; multiple contours are getting detected, but they are overlapping each other. I tried to change the radius size but still am not getting a proper result.
        Waiting for your logic.

        • Adrian Rosebrock March 27, 2016 at 9:15 am #

          If the contours are overlapping, then that will cause an issue with the tracking — this is also why you might want to consider using different color objects for tracking. In the case of overlapping objects, you should consider applying the watershed algorithm for segmentation.

  34. Alan April 6, 2016 at 1:22 am #

    Hi Adrian,
    If we are tracking multiple balls, you said to loop over the contours in the earlier post. However, how do you identify the contours so that when you draw the lines, they belong to the correct ball?

    • Adrian Rosebrock April 6, 2016 at 9:08 am #

      There are many ways to accomplish this, some easy, some complicated. The quickest solution is to compute the centroid of each object in the frame. Then, find the objects in the next frame. Compute the centroids again. Take the Euclidean distance between the centroids. The pairs of objects that have the smallest distances are thus the “same” object. This would make for a great blog post in the future, so I’ll make sure I cover that.

      • yaswanth kumar May 5, 2016 at 7:23 am #

        Hey Adrian, don’t we have any python library or any algorithm to do the same? if yes, can you please suggest some! Thanks

        • Adrian Rosebrock May 5, 2016 at 7:31 am #

          No, there isn’t a library that you can pull off the shelf for this. It’s not too hard to code though. I’ll try to do a blog post on it in the future, but my queue/idea list is quite large at the moment.

  35. Matt April 7, 2016 at 8:28 am #

    Hi Adrian,

    Thanks for your great tutorial. Does your script run on a BeagleBone Black board?

    Thanks in advance

    • Adrian Rosebrock April 7, 2016 at 12:33 pm #

      I don’t own a BeagleBone Black, so I honestly can’t say.

      • Matt April 8, 2016 at 2:57 am #

        Ok thanks. But in your project, what kind of board did you use?

        • Adrian Rosebrock April 8, 2016 at 12:53 pm #

          I either use a laptop or desktop running Ubuntu or OSX, or I use a Raspberry Pi.

      • Matt April 13, 2016 at 9:36 am #

        Ok but from where do you run your script ? Raspberry Pi or Laptop ? Thanks.

        • Adrian Rosebrock April 13, 2016 at 6:52 pm #

          I run it on both. For this particular script, I executed it on my laptop. But it can also be run on the Raspberry Pi by modifying the code to access the Raspberry Pi camera.

          • Matt April 14, 2016 at 8:41 am #

            Thanks. According to your video, you seem to track the ball in the (x, y) plane. What happens if you move the ball along the z-axis?

          • Adrian Rosebrock April 14, 2016 at 4:42 pm #

            This code doesn’t take the z-axis into account. But you can certainly combine the code in this blog post with the code from measuring the distance from your camera to an object to obtain measurements along the z-axis as well.

          • Matt April 15, 2016 at 3:12 am #

            Yes, I read it. But in your code, I do not understand how you compute the coordinates of the ball in the world frame. Did you compute the coordinates by changing frames (e.g., world frame -> camera frame)?

          • Adrian Rosebrock April 15, 2016 at 12:11 pm #

            The (x, y)-coordinates of the ball are obtained from the image itself. They are found by thresholding the image, finding the contour corresponding to the ball, and then computing its center. These coordinates are then stored in a queue (i.e., the actual “tracking” part). If you would like to add in tracking along the z-axis, you’ll need to see the blog post I linked you to above. The trick is to apply an initial calibration step that can be used to measure the perceived distance in pixels and convert the pixels to actual measurable units.

          • Matt April 19, 2016 at 7:07 am #

            Thanks for your answer. But in your tutorial, you do not measure the distance from the camera to an object, only the distance between different objects. Moreover, in your algorithm, I do not see the use of intrinsic parameters (e.g., the focal length of the camera). Could you help me please? Thanks

          • Adrian Rosebrock April 19, 2016 at 7:24 am #

            Hey Matt, as I mentioned in a previous reply to one of your comments, you need to see this blog post for measuring the distance from an object to camera. This requires you to combine the source code to both blog posts to achieve your goal. I’ll see about doing such a blog post in the future, but if you would like to build a system that measures distance + direction, you’ll need to study the posts and combine them together.

          • Matt April 19, 2016 at 2:50 pm #

            Yes, I see, but I asked myself if a simple webcam would work for 3D tracking… I did a state-of-the-art review, and I read that a special 3D camera sensor is required. That’s why I asked you 🙂

          • Adrian Rosebrock April 19, 2016 at 3:01 pm #

            For 3D tracking, you’ll likely want to explore other avenues. If you want to use a 2D camera (which would be a bit challenging), then camera calibration via intrinsic parameters would be required. Otherwise, you might want to look into stereo/depth cameras for more advanced tracking methods. Hopefully I’ll be able to cover both of these techniques in future blog posts 🙂

          • Matt April 20, 2016 at 3:53 am #

            I hope for you 🙂

            Otherwise, I thought of using a simple way of computing the z-coordinate, which consists of using the size of the object to determine the z-coordinate roughly. When it appears larger on the camera, it must be closer, and inversely, if it’s smaller, it’s farther away. But I don’t know if this method is robust, because if I use a small object, it would be difficult.

  36. Jon April 8, 2016 at 5:54 pm #

    This uses a USB camera. I have your code for the picamera working from another module and would like to use the picamera. What is the correct way to do this?

    Can I replace line 26:

    camera = cv2.VideoCapture(0)

    with:

    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.framerate = 32

    Thanks.

    • Adrian Rosebrock April 13, 2016 at 7:16 pm #

      Instead of replacing the code using picamera directly, I would instead use the “unified” approach detailed in this post.

      • Dan June 4, 2016 at 10:36 pm #

        Just to get this particular tutorial working with picamera or the unified approach, would you detail (or post) the specific changes to get ball tracking working with the picamera?

        • Adrian Rosebrock June 5, 2016 at 11:24 am #

          To be totally honest, I’m likely not going to write a separate blog post detailing each and every code change required. If you go through the accessing the Raspberry Pi camera post and the unifying access post, I’m more than confident that you can update the code to work with the PiCamera module.

          Start with the template I detail inside the “accessing the picamera module” tutorial. Then, start to insert the code inside the while loop into the video frame processing pipeline. It’s better to learn by doing.

  37. WouterH April 14, 2016 at 3:48 pm #

    Why are you calculating the center? The function cv2.minEnclosingCircle already returns the center + radius, or am I missing something?

    Regards and thanks for the nice example.

    • Adrian Rosebrock April 14, 2016 at 4:36 pm #

      The cv2.minEnclosingCircle does indeed return the center (x, y)-coordinates of the circle. However, it presumes that the shape is a perfect circle (which is not always the case during the segmentation). So instead, you can compute the moments of the object and obtain a “weighted” center. This is a more accurate representation of the center coordinates.

  38. Om April 22, 2016 at 3:39 am #

    Help me, I am not able to install imutils on my RPi. How can I do it?

    • Adrian Rosebrock April 22, 2016 at 11:42 am #

      You should be able to install it via pip:

      $ pip install imutils

  39. anirban April 23, 2016 at 1:57 pm #

    Hi – Excellent blog, but when running i get the below error. Can someone help?

    File “ball_tracking.py”, line 39, in
    image, contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
    ValueError: need more than 2 values to unpack

    • Adrian Rosebrock April 25, 2016 at 2:09 pm #

      It sounds like you’re using OpenCV 3; however, this blog post utilizes OpenCV 2.4. To fix this error, you simply need to change the cv2.findContours line to:

      (_, cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      I detail the difference in cv2.findContours between OpenCV 2.4 and OpenCV 3 in this blog post.

      • anirban April 25, 2016 at 3:35 pm #

        Hi Adrian – so kind of you to reply in such a short time; I appreciate your help to starters like me. I am not running OpenCV 3 but am on 2.4.9, which I confirmed by running cv2.__version__ in the Python terminal.

        Can you suggest anything else?

        • Adrian Rosebrock April 26, 2016 at 5:17 pm #

          My mistake — I read your original comment too fast. I should have been able to tell that you were using OpenCV 2.4. In that case, you just need to modify the code to be:

          (cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

          I discuss the changes in the cv2.findContours function between OpenCV 2.4 and OpenCV 3 in this blog post.

  40. Shubham Batra April 25, 2016 at 6:45 am #

    @Adrian Hey!, I am tracking a table tennis ball using the color segmentation and hough circle method, but this only works fine when the ball is moving slowly.
    When the ball is moving very fast then tracking is lost.
    I am using the Kinect for Windows V2 Sensor which gives at most 30fps.
    Do I need a better high speed camera or any other algorithms can do the trick with the same 30fps camera ?

    • Adrian Rosebrock April 25, 2016 at 1:59 pm #

      I wouldn’t recommend using Hough Circles for this. Not only are the parameters a bit tricky to get just right, but the other issue is at high speeds, the ball will become “motion blurred” and not really resemble a circle. Instead, I would suggest using a simple contour method like I detailed in this blog post. Otherwise, if you really want to use Hough Circles, you’ll want to get a much faster camera and have the hardware that can process > 60 FPS.

      • Shubham Batra April 27, 2016 at 3:16 am #

        @Adrian I’ll try out the simpler contour method and see if that works just fine,
        else I’ll have to get a better camera.
        Anyways thanks for the help!

  41. reza aulia April 29, 2016 at 8:14 am #

    If I want to change the colour, where can I find the type of color?

    • Adrian Rosebrock April 30, 2016 at 3:58 pm #

      Please see the range-detector script that I mention in this blog post.

  42. Suraj May 12, 2016 at 4:33 am #

    Hello Adrian,

    I want to blur the rest of the video while the specified colour region stays normal.
    Any leads on how I can get to it?

    • Adrian Rosebrock May 12, 2016 at 3:35 pm #

      I would suggest using transparent overlays. You can blur the entire image using a smoothing method of your choice. This becomes your “background”. And then you can overlay the original, non-blurred object. This will require you to extract the ROI/mask of the object.

  43. Matt May 25, 2016 at 8:33 am #

    Hi adrian,

    I have another question, please. I do not understand how you computed the coordinates of the ball without considering the focal length of your camera in your algorithm.

    Could you explain to me the difference between your work and the case where you would use the focal length?

    Thanks

    Matt

  44. Oscar June 16, 2016 at 5:42 am #

    Hi Adrian.

    great tutorial.
    One question: is it possible to create an executable of this script and a shortcut? This way, the program runs by double-clicking the shortcut. I have Ubuntu.

    • Adrian Rosebrock June 18, 2016 at 8:22 am #

      Thanks Oscar. And regarding your question, I don’t know about that. I’ve never tried to create a Python script that runs via shortcut.

  45. Ihtasham June 17, 2016 at 9:50 am #

    Hi, I want to know how we can track people and get the direction of the track. In your tutorial, the direction only starts from where the body left the camera view; how can we return the window to a neutral state again?

    • Adrian Rosebrock June 18, 2016 at 8:18 am #

      In this tutorial I demonstrate how to compute direction and track direction. You can apply the same methodology to other objects (such as people) as well.

  46. Arkya June 18, 2016 at 2:58 pm #

    Hey, thanks for the awesome tutorial.
    What if I need to track any other colored ball (say, black), how would I get the HSV range of that color?

    • Adrian Rosebrock June 20, 2016 at 5:34 pm #

      Please see the range-detector script that I linked to in the blog post. This script will help you define the HSV color range for a particular object.

      • Arkya June 22, 2016 at 2:53 pm #

        thanks, got it

  47. yaswanth kumar June 23, 2016 at 11:17 am #

    Hi Adrian,
    Can’t we use RGB color space and RGB colour boundaries to detect a colour?

    • Adrian Rosebrock June 23, 2016 at 1:05 pm #

      Absolutely. Please check this blog post as an example.

  48. Amin July 5, 2016 at 11:35 am #

    Hi Adrian,
    i make a robot (see that-> http://www.dropbox.com/s/c7ctgyzjhepxqc7/Raspberry_Robot.jpg?dl=0 )
    it works based on your code to find the green ball.
    Now I’m trying to optimize the code and have some questions:

    1. You use the erode & dilate functions. Why don’t you use something like this?:
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, None, iterations=2)

    2. You initialize center to None, but I deleted it & nothing happened :/

    3. I’m searching for object detection methods to detect the ball for my robot and found these algorithms:

    * transfer color space to HSV & find contours then find circle in it as you explain in this post
    ** Hough Circle Transform -> as you said its need camera with high FPS
    *** HOG
    **** Cascade Classification
    ***** CAMshift

    Now I want to know which algorithm is fastest and has the best performance.
    What other algorithms can I use to find the ball in video frames?

    thanks

    • Adrian Rosebrock July 5, 2016 at 1:40 pm #

      You could certainly use a closing operation as well. In this case, I simply used a series of erosions and dilations. As for your second question, I’m not sure what you mean.

      The fastest tracking algorithm will undoubtedly be CAMShift. CAMShift is extremely fast since it only relies on the color histogram of an image. The Hough circle transform can be a bit slow, and worse, it’s a pain to tune the parameters. Haar cascades are very fast, but prone to false-positive detections. HOG is slower than Haar, but tends to be more accurate. If all you want to track is a green ball, then I would suggest using either the cv2.inRange function or CAMShift.

      • Amin July 6, 2016 at 12:44 am #

        Thanks a lot.
        Just one thing I had forgotten to ask:
        sometimes when I take the ball out of the camera view
        I get this error:
        Zero Division Error : float division by zero
        It’s related to this line:
        center = (int(M[“m10”] / M[“m00”]), int(M[“m01”] / M[“m00”]))
        How do I fix it?

        • Adrian Rosebrock July 6, 2016 at 4:14 pm #

          If the ball goes out of view and you are trying to compute the center, then you can run into a divide-by-zero bug. Changing the code to:
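
          # hypothetical fix (the exact snippet wasn't preserved): skip the
          # centroid computation when the contour area is zero
          if M["m00"] > 0:
              center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))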

          Should resolve the issue.

  49. Ewerton Lopes July 8, 2016 at 6:03 am #

    Hey Adrian!

    First of all, thanks for the blog! It is amazing! 😀
    Right now I am doing my PhD research, and somehow I need to track a person using a robot base; not all people in the scene, but just one given person of interest, let’s say! Well, I am thinking of tracking her based on a given color the person is wearing (let’s say green!). I am wondering, however, whether it is possible, for instance, to use a kind of AR or QR code tag on the person instead of the color, just to avoid getting noise from other colors around. Do you have any idea on this matter? I would love to hear your feedback!

    Thanks, man!

    • Adrian Rosebrock July 8, 2016 at 9:47 am #

      Sure, this is absolutely possible. It all comes down to how well you can detect the “object/marker”. If it’s easier to detect the person via color, do that. If the QR code gives you better detection accuracy, then go with that. I would suggest running a few tests and seeing what works best in your situation.

  50. Jarno Virta July 15, 2016 at 4:00 pm #

    Hi! Thanks for the tutorial! I have been learning OpenCV for a while now and I must say it is fascinating! I have an Arduino robot that I can control from my phone via bluetooth and it can also move around randomly while avoiding obstacles using a sonar range finder. I’m in the process of adding a Raspberry Pi to the robot, which will detect a ball and instruct the Arduino to move toward it. Your tutorial has been very useful!

    I was thinking of using Hough circles to check that the object is in fact a ball, but this proved a bit too difficult because of other stuff being picked up; and if I set the color range for the mask and the diameter for the circle too restrictively, the ball is not found because of, among other things, variations in the tone of the ball’s color… The robot should be able to detect the ball at some distance, which brings certain requirements as well. I must say, I don’t fully understand Hough circle detection either; I often get a huge number of circles… Maybe just detecting contours is enough for now.

    Is it possible to detect the ball without resorting to color filtering?

    • Adrian Rosebrock July 18, 2016 at 5:25 pm #

      The parameters to Hough Circles can be a real pain to tune, so in general, I don’t recommend using it. I would suggest instead filtering on your contour properties.

      As for detecting a ball in an image without color filtering, that’s absolutely possible. In general, you would need to train your own custom object detector.
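
      To illustrate the contour-property idea, here is a rough sketch that keeps only large, roughly circular contours (the area and circularity thresholds are assumptions to tune):

      import cv2
      import numpy as np

      # a synthetic mask with one white blob, standing in for cv2.inRange output
      mask = np.zeros((300, 300), dtype="uint8")
      cv2.circle(mask, (150, 150), 40, 255, -1)

      # [-2] grabs the contour list across OpenCV 2.4/3.x return signatures
      cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
          cv2.CHAIN_APPROX_SIMPLE)[-2]

      for c in cnts:
          area = cv2.contourArea(c)
          peri = cv2.arcLength(c, True)
          if area < 100 or peri == 0:
              continue
          # circularity is 1.0 for a perfect circle and lower for other shapes
          circularity = (4 * np.pi * area) / (peri ** 2)
          if circularity > 0.8:
              ((x, y), radius) = cv2.minEnclosingCircle(c)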

  51. Ed July 21, 2016 at 2:04 pm #

    Hi Adrian,

    I’ve been following a few of your tutorials and have OpenCV set up on my Pi, but I cannot get this tutorial to work! (even running your source exactly)

    Whenever I type the command to run it, I simply end up back at the prompt. Here’s my output:

    (cv) pi@pi:~/Documents $ python ball_tracking.py --video ball_tracking_example.mp4
    (cv) pi@pi:~/Documents $

    Any idea why it’s doing this?

    • Adrian Rosebrock July 22, 2016 at 10:57 am #

      If you end up back at the prompt, then OpenCV cannot open your .mp4 file. Make sure you compiled OpenCV on your Pi with video support.

      • Ed July 22, 2016 at 3:17 pm #

        Hi,

        I’m not sure that I have. It’s installed on a Raspberry Pi as per your tutorial on installing OpenCV 3.0 and Python 2.7.

        Is video support required for accessing the video feed from the picamera?

        • Adrian Rosebrock July 27, 2016 at 2:53 pm #

          Video support is not required for accessing the Raspberry Pi camera module provided that you are using the Python picamera package. However, if you are reading frames from a video file, then yes, video support would be required.

  52. Yao Lingjie July 28, 2016 at 2:56 am #

    Hi Adrian,

    Can you tell me how to find out an object’s lower and upper boundaries using the imutils range_detector script?

    • Adrian Rosebrock July 29, 2016 at 8:37 am #

      My favorite way would be to add a “print” statement at the bottom of the range_detector script that prints out the values when you press a key on your keyboard or exit the script. I’m currently looking at overhauling the script to make it a little more user friendly.

  53. Olivier Supplien August 9, 2016 at 7:01 am #

    Hi,

    I am looking for a way to track several objects at the same time, like colored sticky labels.
    Do you have any idea?

    By the way, your code was very helpful and very well-commented. Thank you.

    • Adrian Rosebrock August 10, 2016 at 9:30 am #

      You can certainly track multiple objects at the same time. You just need to define the lower and upper color boundaries for each object you want to track. Then, generate a mask for each colored object and use the cv2.findContours function to find each of the objects.
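
      A minimal sketch of that loop (the green range comes from this post; the blue range is a placeholder assumption to tune with the range-detector script):

      import cv2
      import numpy as np

      frame = np.zeros((240, 320, 3), dtype="uint8")  # stands in for a video frame

      colorRanges = [
          ("green", (29, 86, 6), (64, 255, 255)),     # values from this post
          ("blue", (100, 120, 50), (130, 255, 255)),  # assumed -- tune this
      ]

      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      for (name, lower, upper) in colorRanges:
          # build a mask per color, then find that color's contours
          mask = cv2.inRange(hsv, lower, upper)
          mask = cv2.erode(mask, None, iterations=2)
          mask = cv2.dilate(mask, None, iterations=2)
          cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
              cv2.CHAIN_APPROX_SIMPLE)[-2]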

  54. Aris August 16, 2016 at 9:26 am #

    Hey Adrian

    Regarding lines 19 and 20: is that an HSV or RGB color code?

    • Adrian Rosebrock August 16, 2016 at 12:56 pm #

      That is in the HSV color space.

  55. Marcel August 18, 2016 at 4:20 pm #

    Hello Adrian,

    Excellent tutorial on tracking a ball with OpenCV.

    I am starting studies in computer vision.

    I have some questions..

    The first is: how can I change the color of the ball’s trace and make it permanent in the image?

    And the second question: how would the code find another color and apply a square mask?

    Thanks,

    • Adrian Rosebrock August 22, 2016 at 1:39 pm #

      To track the movement of the ball, you can use this post.

      To track a different color object, be sure to use the range-detector script that I mention in the blog post. You can apply a square mask using cv2.rectangle.

      • Marcel August 24, 2016 at 2:22 pm #

        Many thanks for the reply,

        I managed to create a square mask for the color red and would like to create a condition to check whether the center of the green ball went over the red region. How could I write this condition?

        • Adrian Rosebrock August 24, 2016 at 3:51 pm #

          Basically, all you need to do is create two masks — one for the red and one for the green ball. Then, once you have these masks take the bitwise AND between them. This will give you any regions where the two colors overlap. You can find these regions using cv2.findContours or more simply cv2.countNonZero
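
          A quick sketch of that overlap check (the two masks are stand-ins for the cv2.inRange output for each color):

          import cv2
          import numpy as np

          redMask = np.zeros((240, 320), dtype="uint8")
          greenMask = np.zeros((240, 320), dtype="uint8")

          # regions where *both* masks are "on"
          overlap = cv2.bitwise_and(redMask, greenMask)
          if cv2.countNonZero(overlap) > 0:
              print("the green ball is over the red region")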

  56. Mark August 23, 2016 at 7:36 am #

    Hi Adrian,

    thanks for your great job sharing all this knowledge!
    But I have a question. Have you ever tried using a higher-FPS camera with the Raspberry Pi? For instance, an action camera like a GoPro. I’m wondering if it’s even possible. It would have to be connected via USB, so I think that could be a bottleneck. Considering that CSI is the best option here, the Raspberry Pi camera is the only way to capture HD video at ~60 FPS in real time, right? I’ve read about a tricky HDMI-input-to-CSI adapter, so a GoPro could act like the Raspberry Pi camera, but it’s about twice the price of an RPi3 and the availability leaves much to be desired… What do you think?

    Have a nice day!

    • Adrian Rosebrock August 24, 2016 at 12:17 pm #

      I personally haven’t tried using a GoPro before, regardless of processing the frames on a Pi or standard hardware. In general, I think the Pi will be strained to process 60 FPS unless you are only doing extremely basic operations on each frame.

  57. Nilesh September 1, 2016 at 10:22 am #

    Hello,

    Wonderful post. I have a question along similar lines: how about tracking two or more same-colored objects in the video? Let’s say, for instance, we have 2 red, 2 green, and 1 blue ball in the scene. How would you recommend tracking them with unique identifiers?

    I am expecting 5 different trajectories (similar to the one in the ball tracking example), one for each ball. Thank you for your help.

    • Adrian Rosebrock September 1, 2016 at 10:55 am #

      For multiple objects, I would suggest constructing a mask for each color. Once you have the masked regions for each color range, you can apply centroid tracking. Compute the Euclidean distance between your centroid objects between subsequent frames. Objects that have minimum distance should be “associated” and “uniquely tracked” together.
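
      A simplified sketch of that association step (a greedy nearest-neighbor match; a full centroid tracker would also handle objects entering and leaving the frame):

      import numpy as np

      def associate(prev_centroids, new_centroids):
          # map each previous centroid to its nearest centroid in the new frame
          matches = {}
          for (i, p) in enumerate(prev_centroids):
              dists = [np.linalg.norm(np.array(p) - np.array(c))
                  for c in new_centroids]
              matches[i] = int(np.argmin(dists))
          return matches

      # associate([(10, 10), (50, 80)], [(12, 11), (48, 83)]) -> {0: 0, 1: 1}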

  58. John September 9, 2016 at 12:21 pm #

    Hello! I actually have a few questions to ask.
    What I’m trying to do is run this program on a Raspberry Pi 3 using the PiCamera, but I keep getting this error: ‘NoneType’ object has no attribute ‘shape’
    I tried to modify your code a little by adding these lines:

    from picamera import PiCamera (at the very top)
    camera = PiCamera() (line 26)
    But then I get the error: ‘PiCamera’ object has no attribute ‘read’

    I looked at the tutorial here http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/ but still couldn’t quite understand.
    Not sure what to do about this… Would really appreciate some help, thanks!

    • Adrian Rosebrock September 12, 2016 at 12:58 pm #

      Anytime you see an error related to “NoneType”, it’s because an image was not read properly from disk or (in this case) a frame was not read from a video stream. I would suggest going back to the Accessing the Raspberry Pi camera post and ensure that you can get the video stream working without any frame processing. From there, start to add in the code from this post in small bits.

      Finally, if you need a jumpstart, I would suggest going through Practical Python and OpenCV to help you learn the basics of computer vision and OpenCV. All code examples in the book are compatible with the Raspberry Pi.

      • John October 3, 2016 at 12:08 pm #

        I managed to bring up a live video feed but still can’t get ball_tracking.py to work. I tried inserting with picamera.PiCamera() as camera: before the image processing part but still received the same error.

        • Adrian Rosebrock October 4, 2016 at 7:02 am #

          It’s really hard to say what the exact issue is without being in front of your physical setup. I’m not sure how much it would help, but I would suggest going through my post on common camera errors on the Raspberry Pi and seeing if any of them relate to your particular situation.

  59. Mostafa Sabry September 11, 2016 at 9:10 am #

    Hi Adrian,
    I really appreciate your effort on these blog posts; we benefit from them a lot.
    I am trying to run the code on Python 2.7 with OpenCV 2, and I keep getting the same error of ‘NoneType’ object has no attribute ‘shape’.
    I am working on a computer, NOT a Raspberry Pi, as I checked the comments above.
    I would be grateful if you could help me handle this issue.
    I am using a webcam built into the laptop, and I checked that it is working using the command
    cv2.VideoCapture(0)
    in a separate Python file.
    I am new to Python. I traced the code to try to run it on the video instead, but I failed to understand the “argparse” library.

    • Adrian Rosebrock September 12, 2016 at 12:50 pm #

      Hi Mostafa, I think you might have missed my reply to Nick above. My reply to him is true for you as well. Anytime you see a frame as “NoneType”, it’s because the frame was not read properly from the video file or video stream. Given your inexperience with argparse, I think this is likely the issue. Be sure to download the source code to this post using the “Downloads” form, then use my example command found at the top of the file to help you execute it.

      • Mostafa Sabry September 17, 2016 at 6:57 am #

        Thank you, Adrian, for your reply.
        After searching online for the issue, I found a suggestion that DID WORK: add a timed delay (from the imported “time” library) just after the command “cv2.VideoCapture(0)” to give the webcam time to load.
        The code did work, and thank you very much for your help and the incredible stuff you are providing.
        I might need your help soon 🙂 as I want to adjust the code a little bit to fit my problem; I will reach out.
        THANKS

        • Adrian Rosebrock September 19, 2016 at 1:18 pm #

          Great job resolving the issue Mostafa!

  60. Ranjani Raja September 19, 2016 at 1:25 am #

    I installed the imutils package, but there is still an error: “no module named imutils”.

    • Adrian Rosebrock September 19, 2016 at 1:03 pm #

      If you are using a Python virtual environment, make sure you have installed imutils into the Python virtual environment:
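
      $ workon cv  # replace "cv" with the name of your environment
      $ pip install imutils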

      I would also read the first half of this blog post to learn more about how to use Python virtual environments.

  61. Andre Brown September 19, 2016 at 11:08 pm #

    Hi Adrian
    I would like to know if it is possible for the contrail to be drawn based on the size of the detected contour or the circle drawn, so that as you move the ball closer, the thickness of the contrail increases, and as it moves further away, it decreases.
    Also, is it possible to not have the contrails disappear with time? I have tried setting the buffer to 1280, but they still eventually disappear. It seems they start thick, then thin to nothing over time. I would like to keep all contrails in the buffer; I am currently writing these to an image file on exiting.
    thanks
    Andre

    • Adrian Rosebrock September 21, 2016 at 2:16 pm #

      It’s certainly possible to make the contrail larger or smaller based on the size of the ball. The larger the ball is, the closer we can assume it is to the camera. Similarly, the smaller the ball is, the farther away it is from the camera. Using this relationship you can adjust the size of the contrail. The radius of the minimum enclosing circle at any given time will give you this information.

      As for keeping all points of the contrail, simply swap out the deque for a standard Python list.

  62. Mohamed October 1, 2016 at 5:47 pm #

    I’d like to thank you for your efforts. The code is well explained.
    I have a question that might be naive as I am not a vision guy. Does the same/similar code work on non-circular objects? For example, rectangular ones?

    Thanks again.

    • Adrian Rosebrock October 2, 2016 at 8:58 am #

      Yes, the code certainly works for non-circular objects — this code assumes the largest region in the mask is the object you want to track. This could be circular or non-circular. If your object is non-circular you may want to compute the minimum bounding (rotated) rectangle versus the minimum enclosing circle. Other than that, nothing has to be changed.

  63. Daniel October 6, 2016 at 6:17 pm #

    Hi Adrian!

    When I run your code it works pretty well with my green ball, but when there is no ball on the screen, the red contrail goes crazy and doesn’t disappear as in your video. What could be the problem?

    P.S.: Awesome website! Thanks for sharing your work.

    • Adrian Rosebrock October 7, 2016 at 7:25 am #

      The red contrail tracks the last position(s) of the ball. If the red contrail is doing “crazy” things then check the mask. There is likely another green object in your video stream.

  64. huang October 16, 2016 at 6:55 am #

    How can I display the data in the deque in the form of text?

    • Adrian Rosebrock October 17, 2016 at 4:09 pm #

      Can you elaborate? I’m not sure what you are trying to accomplish.

  65. lokesh p October 19, 2016 at 12:42 pm #

    Can we track the ball in the outfield? Is that possible?

    • Adrian Rosebrock October 20, 2016 at 8:42 am #

      You can, but it’s not easy. You would require a high FPS camera since baseballs move extremely fast. If the camera is not a high FPS then you’ll have a lot of motion blur to deal with. In fact, even with a high FPS camera there will still be a lot of motion blur. Tracking motion of fast moving objects normally is a combination of image processing techniques + machine learning to actually predict where the ball is traveling.

      I would suggest starting with this paper that reviews a ball tracking technique for pitches.

  66. Ejjelthi November 3, 2016 at 6:46 pm #

    Hi Adrian,

    I want to track a player and a ball (as in a football setting).

    Could you tell me what I can change in the code?

    Thanks in advance.

    • Adrian Rosebrock November 4, 2016 at 10:02 am #

      Tracking a player and a ball is a much more advanced project. I wouldn’t recommend simple color thresholding for that. Instead, you should investigate correlation-based filters. These are much more advanced, and even then, being able to track a player across the pitch for an entire game is unrealistic. We can do it for short clips, but not for entire matches.

  67. Ranim November 8, 2016 at 9:50 am #

    Thank you so much for your efforts. I am really enjoying and benefiting from this blog. I have questions regarding the number of frames.

    Is it possible to know how many frames per second we are processing for the video?

    Can we customize it to process a specific number of frames per second?

    • Adrian Rosebrock November 10, 2016 at 8:47 am #

      You can use this blog post to help you determine how many frames per second you are processing. Calls to time.sleep will allow you to properly set the number of frames per second that you want to process.
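
      For example, a rough sketch of capping the processing rate (the 10 FPS target is an arbitrary assumption):

      import time

      TARGET_FPS = 10.0

      while True:
          start = time.time()
          # ... grab and process one frame here ...
          elapsed = time.time() - start
          # sleep off the remainder of this frame's time budget
          if elapsed < 1.0 / TARGET_FPS:
              time.sleep(1.0 / TARGET_FPS - elapsed)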

      • Ranim November 14, 2016 at 4:53 am #

        Thanks a lot.

        • Adrian Rosebrock November 14, 2016 at 12:03 pm #

          No problem, happy I could help 🙂

  68. mandy November 8, 2016 at 3:26 pm #

    I am actually doing a similar project,
    and have a small question.

    After obtaining the centroid (x, y), I:
    (1) store it in the buffer (buffer size = 128 is better for my project)
    (2) draw the line using OpenCV

    How do I convert your code to Java?

    Thanks if you can help

    • Adrian Rosebrock November 10, 2016 at 8:43 am #

      Hey Mandy, while I’m happy to provide free tutorials to help you learn about computer vision and image processing, I only support the Python code that I write. I do not provide support in converting code to other languages. I hope you understand.

  69. Marlin November 13, 2016 at 7:57 pm #

    I would like to transform this example to track multiple objects of different colors. However, how can I define a long list of colors and then define the upper and lower boundaries of each color given its RGB (or HSV) value?

    For example: I want to detect a silver ball; the RGB for silver is 192, 192, 192. The HSV for silver is 0 0 75.

    How can I get the upper and lower limits of the color silver without actually using the script and detecting an object?

    • Adrian Rosebrock November 14, 2016 at 12:04 pm #

      Hey Marlin — I would suggest using the HSV or L*a*b* color space as they are easier to define color ranges in. The problem will be lighting conditions. Consider a “white” object for instance. Under blue light the object will have a blue-ish tinge to it. Under direct light the white object will actually reflect the light. This makes it challenging to use color-based detection in varying lighting conditions.

      In short, you should play around with varying HSV and L*a*b* values in your lighting conditions to determine what appropriate values are.

  70. Alex Johansson November 14, 2016 at 8:26 am #

    HI,

    What would be the simplest ready-to-use (free or cheap) software for tracking a tennis player’s movement on the court in order to create a visual trace or heat map of that movement?
    Thank you for any help/direction.

    • Adrian Rosebrock November 14, 2016 at 12:01 pm #

      That really depends on the type of camera feed you are using. If the camera is fixed then simple background subtraction would suffice. If you are trying to work with moving cameras with lots of different lighting conditions then the problem becomes much harder. In general you will not find an out-of-the-box solution for this — you’ll likely need to code the solution yourself.

      • Alex November 17, 2016 at 6:06 am #

        Thank you so much Adrian. It would be a fixed camera.

  71. Shervin Aslani November 14, 2016 at 3:41 pm #

    Hi Adrian, awesome work. I’m new to Python, but I was able to learn quite a bit by going over your tutorial and code. I’m currently working on a school project where we are trying to track the path of a barbell while someone performs weighted squats so we can assess and correct the technique being used. We have painted the end of the barbell bright yellow, which allows us to track the bar path using the contrails you designed. In order for us to properly assess squatting technique, we need to measure velocity and position and be able to track these kinematic relationships. Is it possible to save the trail or path positions over time to an Excel file or something similar?

    Also, we were wondering if it would be possible to record the video stream so we can review it in the future?

    Thanks for all your help and support.

    • Adrian Rosebrock November 16, 2016 at 1:59 pm #

      Very cool, as a fellow lifter I would certainly be interested in such a barbell tracking project. Regarding measuring position (and therefore velocity), you can derive both by extending the code from this post.

      I then demonstrate how to record the video stream to video here.

  72. Carlos November 16, 2016 at 5:44 pm #

    Hi Adrian

    I was wondering how I can implement this to identify several balls at the same time; I don’t really need to draw the connecting lines.

    Thanks for your help

    • Adrian Rosebrock November 18, 2016 at 9:03 am #

      You need to define color thresholds for each of the balls. Loop over each of boundaries, color threshold the image, and compute contours. Instead of keeping only the largest contour, keep the contours that are sufficiently large.
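
      Picking up from the cnts list in the post’s code, the last step might look like this (the 200-pixel minimum area is an assumed threshold):

      # instead of max(cnts, key=cv2.contourArea), keep every large contour
      MIN_AREA = 200
      balls = [c for c in cnts if cv2.contourArea(c) > MIN_AREA]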

  73. ANIL November 17, 2016 at 4:30 am #

    Hi Adrian,

    Thanks for the well-explained tutorial. I want to use your code to detect eye pupils using 2 cameras simultaneously. Then, I want to use serial communication between Python and Arduino (possibly using pyserial) to drive servo motors according to the location of the eye pupils in real time. I’m fairly new to both Python and OpenCV. How should I proceed to run the code for 2 cameras simultaneously?

    Thanks in advance for any support.

  74. kanta November 24, 2016 at 8:22 am #

    How do I see the tracking video for this code? Please help me. The code executed successfully, but I don’t know how to see the output.

    • Adrian Rosebrock November 28, 2016 at 10:46 am #

      Hey Kanta — are you using my example video included in the “Downloads” section of this post? If so (and you’re not seeing an output video) then it sounds like your OpenCV installation was not compiled with video support. I would suggest following one of my tutorials on installing OpenCV on your system or using my pre-configured Ubuntu VirtualBox virtual machine included in Practical Python and OpenCV.

  75. Juekun November 25, 2016 at 10:07 pm #

    Thanks for the awesome and well-explained tutorial!

    • Adrian Rosebrock November 28, 2016 at 10:35 am #

      Thanks Juekun, I’m happy it helped you 🙂

  76. ivandrew December 6, 2016 at 3:07 am #

    How do I eliminate the red line on the detection of the ball? Which parts must be replaced or removed? Thank you in advance.

    • Adrian Rosebrock December 7, 2016 at 9:47 am #

      Comment out the call to cv2.line to remove the red line.

  77. syukron December 6, 2016 at 3:10 am #

    How can I track the color of clothes, so that the robot can follow the color, using OpenCV 3.0.0 on a Raspberry Pi with the raspicam?

    • Adrian Rosebrock December 7, 2016 at 9:46 am #

      There are many ways to track an object in a video stream. For color-based methods you should consider CamShift.

  78. Sam December 12, 2016 at 3:01 am #

    You may have answered this already, but what if you want to track a red ball or a blue ball?
    The crux of the matter is knowing what to pass for the color filter in the inRange function.
    Where did you get the values you passed in?

    • Adrian Rosebrock December 12, 2016 at 10:27 am #

      I’ll write a blog post to demonstrate exactly how to do this since many readers are asking, but the gist is that you need to use the range-detector script to manually tune the color threshold values.

  79. Apiz December 19, 2016 at 11:39 pm #

    Hi, Adrian. Why is my video from the Raspberry Pi camera flipped?

    • Adrian Rosebrock December 21, 2016 at 10:28 am #

      It’s hard to say. Perhaps you installed your Raspberry Pi camera module upside down?

  80. Himani January 1, 2017 at 9:52 am #

    Hi Adrian,

    Happy new year, Adrian….

    I want to identify the white line in the image and the (x, y)-coordinates of the line. Can you help me?

    Thank you.

    • Adrian Rosebrock January 4, 2017 at 11:09 am #

      The technique to do this really depends on how complex your image is. For basic images, thresholding and contour extraction is all you need. For more noisy images, you may need to apply a Hough Lines transform. For complex images, it’s entirely domain dependent.

  81. Gabriel Rech January 8, 2017 at 9:32 am #

    Hi Adrian,

    Thanks for the tutorial!

    I’m having some difficulty detecting balls robustly.
    Basically, I’m using your range-detector script to identify the mask, and it works, but when I change the position of the ball, like when I move it farther back, the parameters I used to detect the ball in the first position don’t detect the entire ball; sometimes they don’t detect the ball at all. In other words, my code isn’t robust enough. How can I make it more robust?

    Another question: I didn’t quite understand the purpose of the HSV transform. I know what it does, but I don’t understand why you are using it.

    Thanks for your attention

    • Adrian Rosebrock January 9, 2017 at 9:10 am #

      It sounds like you’re in an environment where there are dramatic changes in lighting conditions. For example, the area of the room may be “more bright” close to your monitor and then as you pull the ball back the ball shifts into a darker region of the room.

      Ideally, you should have uniform (or as close to uniform as possible) lighting conditions to ensure your color thresholds work regardless of where the ball is. It’s easier to write code that works for well-controlled environments than for unconstrained ones.

      If you can’t change your lighting conditions, consider trying the L*a*b* color space or using multiple color thresholds. We use HSV/L*a*b* over RGB since it tends to be more intuitive to define colors in these color spaces and more representative of how humans perceive color.

  82. Luca Mastrostefano January 9, 2017 at 2:52 pm #

    Hello Adrian,

    First of all, I would like to congratulate you on this amazing blog!

    I’ve just installed OpenCV on my laptop (http://www.pyimagesearch.com/2016/11/28/macos-install-opencv-3-and-python-2-7/) and copied-pasted your code.

    It works perfectly as I run it!

    But… it runs at 12 FPS instead of reaching the 32 FPS you mentioned.
    How can I speed up this algorithm? Is the 32 FPS version of your code different from the one published in this blog post?

    Currently, I’m working on a Macbook Pro (2,4 GHz Intel Core i5, 8GB Ram) with OpenCV 3.2.0 and Python 2.7.

    Thank you again for your help!

    • Adrian Rosebrock January 10, 2017 at 1:07 pm #

      My suggestion would be to apply video stream threading to help improve your frame rate throughput.
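
      A minimal sketch of the threaded approach using the WebcamVideoStream class from imutils (src=0 assumes the default webcam):

      import cv2
      from imutils.video import WebcamVideoStream

      # frames are polled in a background thread, so read() never blocks
      vs = WebcamVideoStream(src=0).start()

      while True:
          frame = vs.read()
          # ... ball tracking logic here ...
          cv2.imshow("Frame", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      vs.stop()
      cv2.destroyAllWindows()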

      • Luca Mastrostefano January 12, 2017 at 5:24 pm #

        Thank you for your fast response!

        I have to correct my first post:
        12 FPS was the speed with the stream from the camera.
        If I switch to a pre-saved video, the same algorithm goes up to 63 FPS!

        So it is really fast as it is.

        But as you suggested, I’m testing the video stream threading, and now it is really super fast! The sampling from the camera goes up to hundreds of FPS.

        I’m eager to test the tracker with this technique as well!

        Thank you again!

  83. James January 16, 2017 at 3:31 am #

    Very useful information, Adrian.
    I’m curious: is there a simple line of code that could be added to show the speed/velocity of the ball?

    • Adrian Rosebrock January 16, 2017 at 8:05 am #

      You can certainly derive the speed and velocity of the ball using the tracked coordinates; however, the numbers may be slightly off if your camera isn’t calibrated. It would serve as a simple estimation though.
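
      As a back-of-the-envelope sketch (the centroid coordinates and frame rate below are made-up example values):

      import math

      (x1, y1), (x2, y2) = (120, 80), (128, 86)  # consecutive ball centroids
      fps = 32.0                                 # assumed camera frame rate

      # displacement per frame, scaled to pixels per second
      pixels_per_second = math.hypot(x2 - x1, y2 - y1) * fps
      # converting to real-world units requires camera calibration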

  84. Chris January 18, 2017 at 12:22 pm #

    Hi

    I’m looking to run this code with a live feed from a Pi camera. Can that be done?

  85. Jaspreeth January 22, 2017 at 2:45 am #

    Hi, great work!!!
    I’m planning to design a quadcopter which can track objects and move along with the object.
    How can I use this code for trajectory planning? Can you help me, please?

    thanks in advance 🙂

    • Adrian Rosebrock January 22, 2017 at 10:11 am #

      Hey Jaspreeth — this certainly sounds like a neat project! However, I don’t have any tutorials on trajectory planning. I will certainly consider it for a future blog post.

  86. Rhitik January 22, 2017 at 6:18 pm #

    in which directory should I install imutils?

    • Adrian Rosebrock January 24, 2017 at 2:31 pm #

      You should install “imutils” via the “pip” command:

      $ pip install imutils

      This will automatically install imutils for you.

  87. Dan Price January 23, 2017 at 8:20 pm #

    Adrian,

    I thoroughly enjoy your book and tutorials; they are really helping a newbie like me understand the concepts. Is this the best method to track an IR LED using a Pi NoIR camera? I have the code thresholding for the “white” light, but it is highly dependent on a compatible background. Would you please help?

    • Adrian Rosebrock January 24, 2017 at 2:23 pm #

      I admittedly have only used the NoIR camera once, so I regrettably don’t have much insight here. The problem is that trying to detect and track light itself should be avoided. Think of how a camera captures an image: it’s actually capturing the light that reflects off the surfaces around it, which makes detecting the light source itself inadvisable.

  88. Wanderson February 11, 2017 at 11:37 am #

    Hi Adrian,

    Have you worked with the kalman filter? Do you have a link to indicate?

    Thank you.

  89. Wilbur Bacalso February 15, 2017 at 5:02 pm #

    Hi Adrian,

    Thanks for all your posts. I’m super new at all this and have been learning a lot from your blog. I downloaded the code and video file for the ball tracker, and I’m getting an error of: NameError: name ‘xrange’ is not defined. I’m obviously missing something, and I can’t seem to figure out what. Any help would be appreciated. Thanks in advance!

    • Adrian Rosebrock February 16, 2017 at 9:51 am #

      It sounds like you’re using Python 3 where the xrange function is simply named range (no “x” at the beginning). Update the function call and it will work.
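
      Alternatively, a small compatibility shim at the top of the script lets the same code run on both versions:

      # Python 3 renamed xrange to range; alias it so the loop works on both
      try:
          xrange
      except NameError:
          xrange = range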

      • Wilbur Bacalso February 16, 2017 at 2:21 pm #

        That did it! Thanks for your quick response, Adrian. I look forward to buying your lessons and learning more when I get some cash together.

        • Adrian Rosebrock February 20, 2017 at 8:06 am #

          Awesome, I’m happy we could clear up the error 🙂

  90. Glenn Holland February 19, 2017 at 4:30 pm #

    Hi Adrian.

    Great Tutorial.

    You are getting upwards of 32 FPS with colour-detection tracking; do you think you could get a similar rate using brightness detection, like you demoed in your tutorial on finding the bright spot of the optic nerve in a retinal scan?

    • Adrian Rosebrock February 20, 2017 at 7:42 am #

      Since the tutorial you are referring to relies only on a Gaussian blur followed by a max operation, yes, you should be able to obtain comparable FPS.

  91. aslan February 21, 2017 at 10:54 am #

    Hi Adrian,

    Your code works perfectly once the exact HSV range for an object is set (thanks for this), but under varying lighting conditions the HSV values of an object may change significantly, especially the S and V values, so it may detect other colors in different lighting conditions. For example, I set the HSV values for my blue object, but in some conditions it detected gray or black things. You mentioned this problem above and said one needs to find a solution oneself.

    Can you suggest any technique, algorithm, or document for this problem?

    • Adrian Rosebrock February 22, 2017 at 1:36 pm #

      If your lighting conditions are changing dramatically you may want to try the L*a*b* color space. It might also help if you can color calibrate your camera ahead of time. If that’s not possible, you might want to consider machine learning based object detection. Depending on your objects, HOG + Linear SVM would be a good start.

  92. peter February 23, 2017 at 7:58 pm #

    Hi Adrian, Thank you for you amazing posts first!

    I’m new to OpenCV. Following this post, I can now detect a moving object within a certain HSV range via my webcam. Nonetheless, I have encountered some problems when I tried to detect only multiple round tennis balls.

    Here are my concerns:
    (i) I can’t detect multiple balls. I tried a for-loop and I also tried to follow one of your posts (http://www.pyimagesearch.com/2016/10/31/detecting-multiple-bright-spots-in-an-image-with-python-and-opencv/). I also tried the watershed algorithm, but my program’s result is extremely unstable (the circle jumps around, and there are lots of unnecessary tiny circles).

    (ii) I can’t detect only round objects. I tried the HoughCircles function; however, it seems to detect perfect circles only. Then I tried the circularity parameter via the SimpleBlobDetector, using the HSV picture after some thresholding; I’m sure that only the contour of the tennis ball is left in the HSV image, but the SimpleBlobDetector always ignores my tennis ball contour.

    (iii) When there is another object with a similar HSV range, my program outputs a false result. http://imgur.com/q8xverE http://imgur.com/DAuUz1g

    Any help would be appreciated. Thanks in advance!

    • Adrian Rosebrock February 24, 2017 at 11:27 am #

      If you are getting a lot of tiny circles you might be detecting “noise”. Try applying a series of erosions and dilations to clean up your mask.

      You are also correct — if there are multiple objects with similar HSV ranges, they will be detected (this is a limitation of color-based object detection). In that case, you should try more structural object detection such as HOG + Linear SVM. I discuss this method in detail inside the PyImageSearch Gurus course.

  93. Louay February 26, 2017 at 8:08 pm #

    Hi Adrian,

    Thanks a lot for the tutorial! I managed to replicate it in C# to integrate into a project I’m working on.

    Now I want to change the color of the tracked object. I read all the comments, and your answer was to use the range-detector script, which I really can’t use because I’m a Python noob.

    It would be great if you could guide me toward another way to find the upper and lower bounds of a color.
    I’m particularly confused because your green upper bound (64, 255, 255) seems like an RGB value! As far as I know, in HSV, S and V only go up to 100. But also, the lower bound (29, 86, 6) actually corresponds to green in RGB.
    If you could please explain a little more how you got those values, it would help a lot in finding the ones I’m looking for (orange, for a ping pong ball).

    Thanks again and keep up the good work!

    • Adrian Rosebrock February 27, 2017 at 11:09 am #

      Thank you for the suggestion Louay. I’m actually in the process of overhauling the range-detector script to make it easier to use. Once it’s finished I’ll post a tutorial on how to use it.

      • Louay February 27, 2017 at 12:41 pm #

        great news! thanks.

        In the meantime, could you explain how you got your values and how they correspond to HSV?

        I’m just trying to mimic that so I can get values for other colors (using a simple color picker tool).

        • Adrian Rosebrock March 2, 2017 at 7:01 am #

          I determined the values using the range-detector script, which used the HSV color space when processing the video/frame. I’m not sure what you mean by how they correspond to HSV. Are you asking how to convert RGB to HSV?

  94. Vijay February 27, 2017 at 6:04 pm #

    Hi Adrian,

    I need to extract key frames from a given video to run certain machine learning algorithms.

    If you have any ideas about this, can you share some details? I need to use OpenCV and PIL for this purpose.

    Converting videos into frames and extracting key frames from those frames (using Python, OpenCV, and PIL):

    Videos –> Frame extraction from videos (using Python) –> Frames (DB) –> keyframe extractor (Using Python) –> Keyframes (DB)

    Thanks in advance!

    • Adrian Rosebrock February 27, 2017 at 6:43 pm #

      I think this all depends on what you call a “key frame”. I discuss how to detect, extract, and create short video clips from longer clips inside this post. If you’re instead interested in how to efficiently extract and store features from a dataset and store in a persistent (efficient) storage system, take a look at the PyImageSearch Gurus course.

  95. Annie Dobbyn March 8, 2017 at 11:17 am #

    Right, this might be a dumb question, but how did you find your FPS?

    • Adrian Rosebrock March 8, 2017 at 12:57 pm #

      The FPS of your physical camera sensor? Or the FPS processing rate of your pipeline? Typically we are concerned with how many frames we can process in a single second.

  96. sinjon March 13, 2017 at 6:25 pm #

    Hi Adrian,

    Is there a way to check that all modules have been downloaded?

    My OpenCV wouldn’t bind with Python in the virtual environment, so I’m currently building outside it.

    I’m getting error messages that the mask from mask.copy() is not defined, making me think something is missing.

    Thanks in advance

    • Adrian Rosebrock March 15, 2017 at 9:02 am #

      I’m not sure what you mean by “all modules have been downloaded”. You can run pip freeze to see which Python packages have been installed on your system, but this won’t include the cv2.so file in the output. You can also ls the contents of your Python’s site-packages directory.

  97. sinjon March 14, 2017 at 7:31 am #

    Hello Adrian,

    I’m getting an error that the ‘mask’ from mask.copy() is not defined.

    I was unable to bind my OpenCV to Python in my virtual environment, so I’m building outside it. I feel like this could be causing problems.

    Thanks in advance

  98. Arati March 20, 2017 at 1:22 pm #

    Sir, can I use a Kinect sensor for accessing video? Is it possible? Please explain how.

  99. Umang March 21, 2017 at 11:24 am #

    Hello Adrian,

    I am doing a similar kind of project, but I want to track vehicles from a CCTV camera to detect each vehicle’s speed. Can you suggest a method?

    • Adrian Rosebrock March 22, 2017 at 8:39 am #

      Determine the frames per second rate of your camera/video processing pipeline and use this as a rough estimate to the speed.

  100. Jim March 21, 2017 at 7:04 pm #

    Adrian,

    This is great. I’m essentially wanting to make an extension of this application, but have the ball (or tracking marker) fixed to a person, and measure how quickly (in real-world speed) they can shuffle from side to side.
    Assuming they are moving in a straight line perpendicular to the camera, could this application be extended to calibrate pixels in the frame to a real world distance, and somewhat accurately measure the subject’s side to side motion (velocity, acceleration)?

    • Adrian Rosebrock March 22, 2017 at 8:37 am #

      Yes, provided that you know the approximate frames per second rate of the camera you can use this information to approximate the velocity.

  101. Mehdi March 25, 2017 at 2:28 am #

    Just Great

  102. Arun April 2, 2017 at 5:27 am #

    • Arun April 2, 2017 at 5:43 am #

      it’s over

    • Adrian Rosebrock April 3, 2017 at 2:03 pm #

      If you are getting a NoneType error, it’s likely because your system is not properly decoding the frame. See this blog post for more information on NoneType errors.

  103. dharu April 10, 2017 at 5:46 am #

    I want the report of this project

    • Adrian Rosebrock April 12, 2017 at 1:17 pm #

      I don’t know what you mean by “report”.

  104. Ghani Putra April 12, 2017 at 11:42 am #

    I installed imutils in a virtual environment, but I still get an error saying “No module named imutils”, even though when I checked in the console it showed me the directory of the folder (so it has already been installed). What should I do?

    • Adrian Rosebrock April 12, 2017 at 12:54 pm #

      Double-check that imutils was correctly installed into your virtual environment by accessing the environment via the workon command and then running pip freeze.

  105. Shraddha April 16, 2017 at 3:38 pm #

    Hi Adrian,
    This code is amazing! It works perfectly with a tennis ball, but when I try to implement it with a white table tennis ball, it doesn’t track it. I used the range-detector script to get the min/max threshold values, whiteLower = [158, 136, 155] and whiteUpper = [255, 255, 255], and just replaced greenLower and greenUpper with those values, which are in BGR. I’m using .mp4 video files, one with the table tennis ball against a brown background (which it tracks) and the same background with the white ball (no luck here). The issue seems to be that cnts is empty, so maybe it’s not finding the contour?

    • Shraddha April 16, 2017 at 3:41 pm #

      I meant “green tennis ball with a brown background (which it tracks) and the same background with the white ball (no luck here).”

      • Adrian Rosebrock April 19, 2017 at 1:05 pm #

        It sounds like your mask does not contain the object you are looking for. Try displaying the mask to your screen to help debug the script. It might be that your color thresholds are incorrect as well.

  106. Yusron April 17, 2017 at 11:45 am #

    Hi Adrian, I have some questions for you. I have a project involving motion detection based on a colored object.
    1. The objects I use do not have to be circular; they may be square or formless, because I just want to focus on color.
    2. How can I detect whether the object is moving or not?

    • Adrian Rosebrock April 19, 2017 at 1:00 pm #

      If you want to use color to detect an object, then you would use the color thresholding technique mentioned in this blog post. Compute the centroid of object after color thresholding, then monitor the (x, y)-coordinates. If they change, you’ll know the object is moving.
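
      A sketch of that movement check (the 5-pixel tolerance is an assumption to absorb centroid jitter):

      import numpy as np

      MOTION_THRESH = 5  # pixels
      prev_center = None

      # inside the frame loop, after computing "center" from the moments:
      center = (150, 120)  # stand-in for the computed centroid
      if prev_center is not None:
          dist = np.linalg.norm(np.array(center) - np.array(prev_center))
          if dist > MOTION_THRESH:
              print("object is moving")
      prev_center = center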

  107. sinjon April 21, 2017 at 4:20 pm #

    Hello Adrian,

    Is there a way to store the video/images as an array, so that when the buffer reaches the highest point of its journey before returning, it’ll stop tracking?

    Many thanks
    Sinjon

    • Adrian Rosebrock April 24, 2017 at 9:55 am #

      Hi Sinjon — the deque data structure can store any object you want (including a NumPy array). If you would like to maintain a queue of the past N frames until some stopping criterion is met, just update the deque with the latest frames read from the camera.

  108. Marcos Idaho April 23, 2017 at 5:20 pm #

    Hi Adrian, great job. I am very new to OpenCV and Python. When I give the path of the default video, the video does not load; the switch fails, my webcam turns on, and green objects can be tracked. Can you tell me how to add the path of the video file in the arguments?

    • Adrian Rosebrock April 24, 2017 at 9:36 am #

      Hey Marcos — you supply the video file path via command line argument when you start the script:

      $ python ball_tracking.py --video ball_tracking_example.mp4

      Notice how the --video switch points to a video file residing on disk.

      • Marcos Idaho April 24, 2017 at 9:45 am #

        Hi Adrian, thank you! What if I am using the PyCharm interface?

        • Adrian Rosebrock April 24, 2017 at 10:04 am #

          If you are using PyCharm you would want to set the script parameters before executing. Alternatively you could comment out the command line argument parsing code and just hardcode paths to your video file.
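
          For example, replacing the argparse block with a hardcoded dictionary (the keys match the ones the script already uses):

          # instead of args = vars(ap.parse_args()), hardcode the arguments
          args = {"video": "ball_tracking_example.mp4", "buffer": 64}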

          • Hari April 26, 2017 at 2:51 am #

            Great work, Adrian. I am getting an error while running the code: in pts.appendleft(center), pts is not defined. Can you help me with this?

          • Adrian Rosebrock April 28, 2017 at 9:54 am #

            Hi Hari — make sure you use the “Downloads” section of this post to download the code. The pts variable is instantiated on Line 21.

          • Marcos Idaho April 30, 2017 at 5:48 pm #

            Thank you, Adrian! I was trying to track two ants moving in a video file. I wrote sample code inspired by your code, but I am not able to track the ants. Can you help me in this regard?

            A YouTube link to the video is attached:
            https://www.youtube.com/watch?v=bc_OdLgGrPQ&feature=youtu.be%22

          • Adrian Rosebrock May 1, 2017 at 1:21 pm #

            If color thresholding isn’t working to track the individual ants, have you tried background subtraction/motion detection? Of course, this implies that the ants are moving.

  109. Marcos Idaho May 7, 2017 at 2:04 pm #

    @Adrian. Thank you very much for the reply. I was able to track them through background subtraction. I have one more question: I tried to save the processed video, but it’s not getting saved. It’s giving an error.


    frame = imutils.resize(frame,width=600)
    (h, w) = image.shape[:2]

    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock May 8, 2017 at 12:22 pm #

      Hi Marcos — please see this blog post where I discuss common causes of NoneType errors and how to resolve them.

      • kiran May 15, 2017 at 11:02 am #

        @Adrian. Thank you for such a nice tutorial. What should I do when there are multiple balls in the video and they are all green? I tried doing this, but the tracks get mixed up when the balls cross each other or one ball goes near another.

        • Adrian Rosebrock May 17, 2017 at 10:08 am #

          This will become extremely challenging if the balls are all the same color. I would suggest looking into correlation trackers and particle filters.

  110. terance May 8, 2017 at 11:39 am #

    Hello. Is there a way to output just the red line that follows the movement?

    • Adrian Rosebrock May 8, 2017 at 12:13 pm #

      Hi Terance — what do you mean by “output”? Do you mean just draw the red line? Print the coordinates to your terminal? Save them to disk?

  111. Marcos Idaho May 16, 2017 at 1:43 pm #

    Hi Adrian, if there are multiple objects, how do I track them? If the objects are moving, is it better to use image subtraction methods?

    • Adrian Rosebrock May 17, 2017 at 9:53 am #

      Hey Marcos — please see my reply to “Ghanendra” above related to tracking multiple objects. The gist is that you define color ranges for each type of object you want to detect and construct masks for each of them. If the objects are moving and there is a fixed, non-moving background, background subtraction would be a better bet.

Trackbacks/Pingbacks

  1. OpenCV Track Object Movement - PyImageSearch - September 21, 2015

    […] week’s blog post is an extension to last week’s tutorial on ball tracking with OpenCV. We won’t be learning how to build the next generation, groundbreaking video game controller […]
