OpenCV Track Object Movement

This past Saturday, I was caught in the grips of childhood nostalgia, so I busted out my PlayStation 1 and my original copy of Final Fantasy VII. As a kid in late middle school/early high school, I logged 70+ hours playing through this heartbreaking, inspirational, absolute masterpiece of an RPG.

As a kid in middle school (when I had a lot more free time), this game was almost like a security blanket, a best friend, a make-believe world encoded in 1’s and 0’s where I could escape to, far away from the daily teenage angst, anxiety, and apprehension.

I spent so much time inside this alternate world that I completed nearly every single side quest. Ultimate and Ruby weapon? No problem. Omnislash? Done. Knights of the Round? Master level.

It probably goes without saying that Final Fantasy VII is my favorite RPG of all time — and it feels absolutely awesome to be playing it again.

But as I sat on my couch a couple nights ago, sipping a seasonal Sam Adams Octoberfest while entertaining my old friends Cloud, Tifa, Barret, and the rest of the gang, it got me thinking: “Not only have video games evolved dramatically over the past 10 years, but the controllers have as well.”

Think about it. While it was a bit gimmicky, the Wii Remote was a major paradigm shift in user/game interaction. Over on the PlayStation side, we had the PlayStation Move, essentially a wand with both (1) internal motion sensors and (2) an external motion tracking component via a webcam hooked up to the PlayStation 3 itself. Of course, then there is the Xbox Kinect (one of the largest modern-day computer vision success stories, especially within the gaming area) that required no extra remote or wand — using a stereo camera and a regression forest for pose classification, the Kinect allowed you to become the controller.

This week’s blog post is an extension to last week’s tutorial on ball tracking with OpenCV. We won’t be learning how to build the next generation, groundbreaking video game controller — but I will show you how to track object movement in images, allowing you to determine the direction an object is moving:

Read on to learn more.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV Track Object Movement

Note: The code for this post is heavily based on last week’s tutorial on ball tracking with OpenCV, so I’ll be shortening a few of the code reviews. If you want more detail for a given code snippet, please refer to the original blog post on ball tracking.

Let’s go ahead and get started. Open up a new file, name it object_movement.py , and we’ll get to work:
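
Here is a sketch of how this first block of object_movement.py looks (see the Downloads section for the exact listing; the short flag names are illustrative):

# import the necessary packages
from collections import deque
from imutils.video import VideoStream
import numpy as np
import argparse
import cv2
import imutils
import time

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
    help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=32,
    help="max buffer size")
args = vars(ap.parse_args())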

We start off by importing our necessary packages on Lines 2-8. We need Python’s built-in deque datatype to efficiently store the past N points the object has been detected and tracked at. We’ll also need imutils, my collection of OpenCV and Python convenience functions. If you’re a follower of this blog, you likely already have this package installed. If you don’t have imutils installed/upgraded yet, let pip take care of the installation process:
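
$ pip install --upgrade imutils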

Lines 11-16 handle parsing our two (optional) command line arguments. If you want to use a video file with this example script, just pass the path to the video file to the object_movement.py script using the --video switch. If the --video switch is omitted, your webcam will be used instead.

We also have a second command line argument, --buffer , which controls the maximum size of the deque  of points. The larger the deque  the more (x, y)-coordinates of the object are tracked, essentially giving you a larger “history” of where the object has been in the video stream. We’ll default the --buffer  to be 32, indicating that we’ll maintain a buffer of (x, y)-coordinates of our object for only the previous 32 frames.

Now that we have our packages imported and our command line arguments parsed, let’s continue on:
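
A sketch of this initialization code (again, the exact listing is in the code download):

# define the lower and upper boundaries of the "green"
# ball in the HSV color space
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)

# initialize the deque of tracked points, the frame counter,
# the coordinate deltas, and the direction text
pts = deque(maxlen=args["buffer"])
counter = 0
(dX, dY) = (0, 0)
direction = ""

# grab a pointer to either the webcam (threaded VideoStream)
# or the video file (cv2.VideoCapture)
if not args.get("video", False):
    vs = VideoStream(src=0).start()
else:
    vs = cv2.VideoCapture(args["video"])

# give the camera sensor a moment to warm up (a common, optional step)
time.sleep(2.0)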

Lines 20 and 21 define the lower and upper boundaries of the color green in the HSV color space (since we will be tracking the location of a green ball in our video stream). Let’s also initialize our pts  variable to be a deque  with a maximum size of buffer  (Line 25).

From there, Lines 25-28 initialize a few bookkeeping variables we’ll utilize to compute and display the actual direction the ball is moving in the video stream.

Lastly, Lines 32-37 handle grabbing a pointer, vs , to either our webcam or video file. We take advantage of the imutils.video  VideoStream  class to handle the webcam frames in a threaded fashion, while cv2.VideoCapture  handles reading from a video file.

Now that we have a pointer to our video stream we can start looping over the individual frames and processing them one-by-one:
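
A sketch of this loop (imutils.grab_contours is used here to normalize the cv2.findContours return value across OpenCV versions; the original listing unpacked the tuple directly):

# keep looping over the frames
while True:
    # grab the current frame
    frame = vs.read()

    # cv2.VideoCapture returns a (grabbed, frame) tuple, while
    # VideoStream returns the frame directly
    frame = frame[1] if args.get("video", False) else frame

    # if we are viewing a video and did not grab a frame, we have
    # reached the end of the video
    if frame is None:
        break

    # resize the frame, blur it, and convert it to the HSV color space
    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # construct a mask for the color "green", then perform a series
    # of erosions and dilations to remove any small blobs in the mask
    mask = cv2.inRange(hsv, greenLower, greenUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # find contours (i.e., outlines) of the objects in the mask
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)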

This snippet of code is identical to last week’s post on ball tracking so please refer to that post for more detail, but the gist is:

  • Line 43: Start looping over the frames from the vs  pointer (whether that’s a video file or a webcam stream).
  • Line 45: Grab the next frame  from the video stream. Line 48 takes care of unpacking that result, whether it came back as a tuple (from cv2.VideoCapture ) or as the frame directly (from VideoStream ).
  • Lines 52 and 53: If a frame  could not be read, break from the loop.
  • Lines 57-59: Pre-process the frame  by resizing it, applying a Gaussian blur to smooth the image and reduce high frequency noise, and finally convert the frame  to the HSV color space.
  • Lines 64-66: Here is where the “green color detection” takes place. A call to cv2.inRange  using the greenLower  and greenUpper  boundaries in the HSV color space leaves us with a binary mask  representing where in the image the color “green” is found. A series of erosions and dilations are applied to remove small blobs in the mask .

You can see an example of the binary mask below:

Figure 1: Generating a mask for the green ball, allowing us to segment the ball from the other contents of the image.

On the left we have our original frame and on the right we can clearly see that only the green ball has been detected, while all other background and foreground objects are filtered out.

Finally, we use the cv2.findContours  function to find the contours (i.e. “outlines”) of the objects in the binary mask (Lines 70-72).

Let’s find the ball contour:
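
A sketch of the contour processing, continuing inside the while loop above:

    # only proceed if at least one contour was found
    if len(cnts) > 0:
        # find the largest contour in the mask, then use it to compute
        # the minimum enclosing circle and centroid
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        # only proceed if the radius meets a minimum size
        if radius > 10:
            # draw the enclosing circle and centroid on the frame,
            # then update the list of tracked points
            cv2.circle(frame, (int(x), int(y)), int(radius),
                (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)
            pts.appendleft(center)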

This code is also near-identical to the previous post on ball tracking, but I’ll give a quick rundown of the code to ensure you understand what is going on:

  • Line 76: Here we just make a quick check to ensure at least one object was found in our frame .
  • Lines 80-83: Provided that at least one object (in this case, our green ball) was found, we find the largest contour (based on its area), and compute the minimum enclosing circle and the centroid of the object. The centroid is simply the center (x, y)-coordinates of the object.
  • Lines 86-92: We’ll require that our object have at least a 10-pixel radius to track it — if it does, we’ll draw the minimum enclosing circle surrounding the object, draw the centroid, and finally update the list of pts  containing the center (x, y)-coordinates of the object.

Unlike last week’s example, which simply drew the contrail of the object as it moved around the frame, let’s see how we can actually track the object movement, and then use that movement to compute the direction the object is moving using only the (x, y)-coordinates of the object:
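
A sketch of the direction logic, still inside the while loop:

    # loop over the set of tracked points
    for i in np.arange(1, len(pts)):
        # if either of the tracked points are None, ignore them
        if pts[i - 1] is None or pts[i] is None:
            continue

        # check to see if enough points have been accumulated in
        # the buffer
        if counter >= 10 and i == 1 and pts[-10] is not None:
            # compute the difference between the x and y coordinates
            # and re-initialize the direction text variables
            dX = pts[-10][0] - pts[i][0]
            dY = pts[-10][1] - pts[i][1]
            (dirX, dirY) = ("", "")

            # ensure there is significant movement in the x-direction
            if np.abs(dX) > 20:
                dirX = "East" if np.sign(dX) == 1 else "West"

            # ensure there is significant movement in the y-direction
            if np.abs(dY) > 20:
                dirY = "North" if np.sign(dY) == 1 else "South"

            # handle when both directions are non-empty (diagonal)
            if dirX != "" and dirY != "":
                direction = "{}-{}".format(dirY, dirX)
            # otherwise, only one direction is non-empty
            else:
                direction = dirX if dirX != "" else dirY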

On Line 95 we start to loop over the (x, y)-coordinates of the object we are tracking. If either of the points is None  (Lines 98 and 99), we simply ignore them and keep looping.

Otherwise, we can actually compute the direction the object is moving by investigating two previous (x, y)-coordinates.

Computing the directional movement (if any) is handled on Lines 107 and 108 where we compute dX  and dY , the deltas (differences) between the x and y coordinates of the current frame and a frame towards the end of the buffer, respectively.

However, it’s important to note that there is a bit of a catch to performing this computation. An obvious first solution would be to compute the direction of the object between the current frame and the previous frame. However, using the current frame and the previous frame is a bit of an unstable solution. Unless the object is moving very quickly, the deltas between the (x, y)-coordinates will be very small. If we were to use these deltas to report direction, then our results would be extremely noisy, implying that even small, minuscule changes in trajectory would be considered a direction change. In fact, these changes could be so small that they would be near invisible to the human eye (or at the very least, trivial) — we are most likely not that interested in reporting and tracking such small movements.

Instead, it’s much more likely that we are interested in the larger object movements and reporting the direction in which the object is moving — hence we compute the deltas between the coordinates of the current frame and a frame farther back in the queue. Performing this operation helps reduce noise and false reports of direction change.

On Line 113 we check the magnitude of the x-delta to see if there is a significant difference in direction along the x-axis. In this case, if there is more than a 20-pixel difference between the x-coordinates, we need to figure out in which direction the object is moving. If the sign of dX  is positive, then we know the object is moving to the right (east). Otherwise, if the sign of dX  is negative, then we are moving to the left (west).

Note: You can make the direction detection code more sensitive by decreasing the threshold. In this case, a 20-pixel difference obtains good results. However, if you want to detect tiny movements, simply decrease this value. On the other hand, if you want to only report large object movements, all you need to do is increase this threshold.

Lines 118 and 119 handle dY  in a similar fashion. First, we must ensure there is a significant change in movement (at least 20 pixels). If so, we can check the sign of dY . If the sign of dY  is positive, then we’re moving up (north), otherwise the sign is negative and we’re moving down (south).

However, it could be the case that both dX  and dY  have substantial directional movements (indicating diagonal movement, such as “South-East” or “North-West”). Lines 122 and 123 handle the case our object is moving along a diagonal and update the direction  variable as such.

At this point, our script is pretty much done! We just need to wrap a few more things up:
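
A sketch of the remaining drawing, display, and cleanup code:

        # compute the thickness of the contrail and draw the
        # connecting lines (still inside the loop over pts)
        thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
        cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

    # show the direction and the dX/dY deltas on the frame
    cv2.putText(frame, direction, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
        0.65, (0, 0, 255), 3)
    cv2.putText(frame, "dx: {}, dy: {}".format(dX, dY),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX,
        0.35, (0, 0, 255), 1)

    # show the frame to our screen, increment the frame counter,
    # and check for a quit keypress
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    counter += 1

    if key == ord("q"):
        break

# stop the threaded video stream or release the video file pointer
if not args.get("video", False):
    vs.stop()
else:
    vs.release()

# close all windows
cv2.destroyAllWindows()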

Again, this code is essentially identical to the previous post on ball tracking, so I’ll just give a quick rundown:

  • Lines 131 and 132: Here we compute the thickness of the contrail of the object and draw it on our frame .
  • Lines 138-140: This code handles drawing some diagnostic information to our frame , such as the direction  in which the object is moving along with the dX  and dY  deltas used to derive the direction , respectively.
  • Lines 143-149: Display the frame  to our screen and wait for a keypress. If the q  key is pressed, we’ll break from the while  loop on Line 149.
  • Lines 152-160: Cleanup our vs  pointer and close any open windows.

Testing out our object movement tracker

Now that we have coded up a Python and OpenCV script to track object movement, let’s give it a try. Fire up a shell and execute the following command:
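
$ python object_movement.py --video object_tracking_example.mp4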

Below we can see an animation of the OpenCV tracking object movement script:

Figure 2: Successfully tracking the green ball as it’s moving north.

However, let’s take a second to examine a few of the individual frames.

Figure 3: Tracking object movement as the balls move north.

From the above figure we can see that the green ball has been successfully detected and is moving north. The “north” direction was determined by examining the dX  and dY  values (which are displayed at the bottom-left of the frame). Since |dY| > 20 we were able to determine there was a significant change in y-coordinates. The sign of dY  is also positive, allowing us to determine the direction of movement is north.

Figure 4: Using OpenCV to track object movement

Again, |dY| > 20, but this time the sign is negative, so we must be moving south.

Figure 5: Tracking object movement.

In the above image we can see that the ball is moving east. It may appear that the ball is moving west (to the left); however, keep in mind that our viewpoint is reversed, so my right is actually your left.

Figure 6: Tracking the object using OpenCV.

Just as we can track movements to the east, we can also track movements to the west.

Figure 7: Diagonal object detection and tracking.

Moving across a diagonal is also not an issue. When both |dX| > 20 and |dY| > 20, we know that the ball is moving across a diagonal.

You can see the full demo video here:

If you want the object_movement.py  script to access your webcam stream rather than the object_tracking_example.mp4  video supplied in the code download of this post, simply omit the --video  switch:
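
$ python object_movement.py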

Summary

In this blog post you learned about tracking object direction (not to mention, my childhood obsession with Final Fantasy VII).

This tutorial started as an extension to our previous article on ball tracking. While the ball tracking tutorial showed us the basics of object detection and tracking, we were unable to compute the actual direction the ball was moving. By simply computing the deltas between (x, y)-coordinates of the object in two separate frames, we were able to correctly track object movement and even report the direction it was moving.

We could make this object movement tracker even more precise by reporting the actual angle of movement, obtained by simply taking the arctangent of dY  over dX  — but I’ll leave that as an exercise to you, the reader.
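
As a hint for that exercise, a minimal sketch using the dX and dY values from the main loop might look like this (np.arctan2 handles the dX == 0 case that a plain arctangent would not):

# angle of movement in degrees, measured from the positive x-axis
angle = np.degrees(np.arctan2(dY, dX))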

Be sure to download the code to this post and give it a try!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


211 Responses to OpenCV Track Object Movement

  1. Pedro September 21, 2015 at 6:33 pm #

    Hey Adrian,

    And what about the z-dimension? Could you determine the direction just by checking if the ball’s radius is increasing or decreasing?

    Awesome tutorial!!

    Pedro

    • Adrian Rosebrock September 22, 2015 at 12:02 am #

      Hey Pedro, great question. Yes, you can certainly do that as well, although for good accurate readings I would suggest performing some sort of camera calibration. See this post for more info.

  2. Jeffrey Batista September 25, 2015 at 9:52 am #

    Hi, I’m getting the following error:
    AttributeError: 'NoneType' object has no attribute 'shape'

    • Adrian Rosebrock September 26, 2015 at 6:33 am #

      If you are getting an error related to the NumPy array having no shape, then your system is not reading frames from the video stream correctly. If you are trying to read frames from a video file, then OpenCV was likely not compiled and installed with video support. If you are trying to read frames from a webcam, then OpenCV is having problems accessing the webcam.

    • Fabs September 28, 2015 at 2:44 pm #

      Hi, in my case I solved that problem by typing:

      $ sudo modprobe bcm2835-v4l2

      found it on some other PyImageSearch post. But I wonder if it’s possible to get that into some kind of autostart so we don’t have to type it in manually every time we want to run cv.

      • Adrian Rosebrock September 29, 2015 at 6:08 am #

        This will only work if you have the V4L2 driver already installed (which it sounds like you do). But I imagine most PyImageSearch readers don’t have V4L2 configured and installed.

      • Mats Önnerby November 15, 2015 at 5:24 pm #

        Hi

        To make it load at boot, you need to add this line to /etc/modules:

        bcm2835-v4l2

        You need to start your editor with sudo, like this:

        sudo vi /etc/modules

        • Adrian Rosebrock November 16, 2015 at 6:37 am #

          Thanks for sharing Mats!

  3. Jeffrey Batista September 28, 2015 at 12:17 pm #

    That has been my entire headache. When I use the command cap = cv2.VideoCapture(0), nothing happens. How can I recompile OpenCV with video support?

    • Adrian Rosebrock September 29, 2015 at 6:04 am #

      That really depends on your operating system. I have a bunch of tutorials related to installing OpenCV and Python together available here — I would definitely start there and follow the instructions specific to your OS and Python version.

  4. Fabs September 29, 2015 at 8:23 am #

    Hi Adrian, thanks for your awesome tutorial. I got your code running as long as I don’t run it with “sudo”. Since I want to work with some stepper motors as well, I’ve found that those need to be run with “sudo” in front.
    When I try to run the Python file in the (cv) environment with
    $ sudo python filename.py
    it prints “ImportError: No module named imutils”, even though I re-installed imutils twice today. The same command without sudo
    $ python filename.py
    works great. But when importing the GPIO module (import RPi.GPIO) I need sudo to get the motors working.
    Do you have any idea why there is this difference with imutils in sudo?

    • Adrian Rosebrock September 30, 2015 at 6:37 am #

      If you want to use sudo, I would actually suggest launching a root shell and then installing imutils

      Remember, the sudo command is running your script as root — it doesn’t care about what you have installed in your user account. Also, if you have been using a virtual environment for your user account, you’ll need to create one for your root account as well and make sure you access it before executing the script.

  5. Scott Routley October 3, 2015 at 5:05 pm #

    I tried to create a similar demo a few months ago. (Applying what I had learnt from you)

    But one of my biggest challenges was determining the best upper and lower colour boundaries.

    Do you have any techniques that you could share?

    • Adrian Rosebrock October 4, 2015 at 7:02 am #

      Take a look at this post where I briefly discuss and link off to the range-detector script. This can be used to help determine good color ranges.

  6. Fabs October 8, 2015 at 4:32 am #

    Is there a way to get the stream faster? My video stream is very laggy, with a three-second delay, so I won’t get smooth curves like you do; instead I just see straight lines from point to point. It looks like I get around 2 FPS. I’m working with the RPi B+ and the CPU always shows 100% when the video stream is shown.

    • Adrian Rosebrock October 8, 2015 at 5:53 am #

      For most image processing tasks I would really suggest using the Pi 2. It’s much faster and better suited for video processing.

  7. Dan October 12, 2015 at 1:14 pm #

    Hi Adrian,

    what sort of fps and resolutions did you achieve with the RPi 2 performing the colour tracking processes?

    Thank you

    • Adrian Rosebrock October 13, 2015 at 7:14 am #

      I haven’t had a chance to run this script on the Pi 2 yet, but once I do, I’ll be sure to come back and let you know. My intuition says it should be approximately 8-16 FPS.

  8. Anh October 14, 2015 at 1:40 am #

    Great tutorial! If I want to detect an arrow (for left, right), how can I do that? Thank you

    • Adrian Rosebrock October 14, 2015 at 6:17 am #

      The actual shape of the object (whether it’s an arrow or a ball) doesn’t really matter for this technique. What matters is the color of the object. You can use the range-detector script mentioned in this post to help you find the color ranges to detect your object.

  9. Oren October 16, 2015 at 7:39 am #

    Would you recommend using the same technique to detect and track the feet of a runner or an animal (e.g., a dog) to analyse gait cycle events, i.e., foot ground contact time and foot off time? What is your experience using MeanShift and CamShift in OpenCV?

    • Adrian Rosebrock October 17, 2015 at 6:48 am #

      I probably would not recommend this approach for feet detection unless the shoes/paws were easily identifiable by color. The cornerstone of this approach is that you can define the color range of the object you want to track. Shoes and animal paws can vary dramatically in color — and often times they may blend into their environment making it hard to define a color range that can segment them from the background.

      Finally, you can read about my experience with CamShift here.

  10. PABLITO October 19, 2015 at 12:46 pm #

    Hello Adrian, I’m working with the HSV color space too, so I want to ask how you choose the saturation and brightness values. Please help me with that information.

    Thank you very much

    • Adrian Rosebrock October 20, 2015 at 6:15 am #

      Please see my reply to Scott above.

  11. Jeffrey Batista November 3, 2015 at 3:46 pm #

    Hi Adrian, are you using the Pi camera or a USB camera for this demo? After following the tutorial I never managed to get OpenCV to detect my Raspberry Pi camera. I can take pictures using camera = picamera.PiCamera(), but the camera = cv2.VideoCapture(0) method doesn’t work. Could you please explain the part that I missed? I’m really trying to get into computer vision but feel like I hit a dead end road. Thank you, I really appreciate your time!

    • Adrian Rosebrock November 4, 2015 at 6:34 am #

      This demo was actually done with my OSX machine which has a built in webcam.

      However, I’ve used both picamera and cv2.VideoCapture to capture images/video using the Raspberry Pi camera module. If you want to use picamera take a look at this post. If you want to use cv2.VideoCapture with a USB camera, then you’ll need to look into installing the uv4l drivers and re-compiling and re-installing OpenCV.

      • Jamie January 19, 2017 at 1:19 pm #

        I am trying to follow this tutorial with a picamera. I followed your post on accessing the picamera and tried to implement that code into this tutorial. However, I cannot get past this error: “Incorrect buffer length for resolution”

        • Adrian Rosebrock January 20, 2017 at 11:01 am #

          Hey Jamie — it sounds like you’re forgetting the call to .truncate on the Raspberry Pi camera object. Please see this tutorial for more information.

  12. Deven Patel November 3, 2015 at 11:55 pm #

    Thanks a ton for all the inspiration. Inspired by your post, I ended up with this: https://github.com/devenpatel2/Backprojection_tracking. I used backprojection for the tracking. Still have to add the queue though. Your posts really give good ideas for starting a new project in CV.

    • Adrian Rosebrock November 4, 2015 at 6:30 am #

      Nice work Deven! 😀

  13. Gintautas December 5, 2015 at 5:05 am #

    Does this tutorial work on Python 2.7 and OpenCV 3.0? I’m using the Raspberry Pi 2 camera module.
    https://cdn.sparkfun.com//assets/parts/8/2/7/8/11868-00a.jpg

    • Adrian Rosebrock December 5, 2015 at 6:19 am #

      This tutorial can run on the Raspberry Pi if you are using a USB camera. But since you’re using the camera module, you’ll need to update the main loop that grabs frames to use the picamera module instead. You can read more about accessing the Raspberry Pi camera module here.

      • Gintautas December 5, 2015 at 8:23 am #

        I tried using a USB camera; it films for a few seconds and then crashes.

        • Adrian Rosebrock December 6, 2015 at 7:20 am #

          The script makes the assumption that the deque buffer is filled before the object to track has entered the screen. You can either modify this behavior, or wait until the buffer is filled to avoid this error.

          • Vyspace March 14, 2016 at 6:47 am #

            Hi Adrian,

            How can I change the script to make it wait until the buffer is filled?

            Thanks in advance,

            Vyspace

          • Adrian Rosebrock March 14, 2016 at 3:21 pm #

            What do you mean by “make it wait until the buffer is filled”? You do not want the “tail” of the tracked object to be drawn until the buffer is filled? If so, you’ll want to modify Line 94 to look like:

            if counter >= 10 and i == 1 and len(pts) == args["buffer"]:

        • Sergio March 23, 2016 at 11:45 pm #

          Hi all. I had the same issue. I fixed it by modifying the code a little:

          I understand that counter is a flag to wait until we have at least 10 points in the buffer; we check that it is true and that i is equal to 1, and finally check that point number 0 (i-10) isn’t None. Inside the loop we calculate the delta between 10 points. Hope it can help you.

          Thanks a lot, Adrian, for your great job. Without your contribution, working with RPi+PiCamera+OpenCV+Python would be impossible for me.

  14. Gintautas December 5, 2015 at 9:00 am #

    And last question is about:

    # define the lower and upper boundaries of the “green”
    # ball in the HSV color space
    greenLower = (29, 86, 6)
    greenUpper = (64, 255, 255)

    If I want to detect red instead of green, where do those numbers come from? Converting from RGB to HSV, or somewhere else?

    • Adrian Rosebrock December 6, 2015 at 7:19 am #

      Please see my reply to Scott Routley above.

  15. Joshua December 15, 2015 at 9:24 am #

    I implemented this code exactly, but I can’t run it faster than 2 FPS. CPU usage also never exceeds 35%. Using Python 3 and OpenCV 3.

  16. kihong February 17, 2016 at 7:33 am #

    Thank you Adrian!

    I installed OpenCV 3.0 with Python 2.7 following your guide.

    I want to control GPIO in the OpenCV environment to drive motors.
    Specifically, I’m trying a “track moving objects” project using your object_movement.py program.

    However, the command window shows the error message “no module rpi . gpio as gpio”,
    even though I inserted “import RPi.GPIO as GPIO” in the Python code.
    What should I do?

    Sorry, I hope you understand; I can’t speak English well.

    • Adrian Rosebrock February 17, 2016 at 12:34 pm #

      I haven’t written a blog post detailing the process yet (although I will in the future), but in the meantime, head to this post and search for “GPIO” in the comments. I detail how to address the issue there.

  17. Sai Krishna February 18, 2016 at 7:56 am #

    Hi Adrian,

    Can you please explain why you converted the image to the HSV color space? Why can’t we use the BGR space?
    Thanks for the great tutorial.

    • Adrian Rosebrock February 18, 2016 at 9:32 am #

      In some situations you can use the RGB color space for color thresholding and tracking. The problem is that RGB isn’t very representative of how humans perceive and interpret color. HSV (and L*a*b*) are much more intuitive of how we see and define color — it is also much easier to define color threshold ranges in HSV. I would suggest reading up on color spaces to better familiarize yourself with this idea.

  18. Manez February 26, 2016 at 2:40 pm #

    Great work Adrian !

    How could you identify and track multiple objects?

    Is it possible to compute the time for which an object is detected?

    • Adrian Rosebrock February 27, 2016 at 10:18 am #

      Tracking multiple objects is substantially harder, but can be done. In this case, you need to create lower and upper boundaries of color thresholds for each object you want to detect (assuming they are different colors). You’ll also want to check the size of each contour to ensure it’s sufficiently large (to avoid noise). At that point, you’ll have multiple contours that you can use for tracking.

      As for the time for which an object is detected, yes, just use the time Python module and you can take the timestamp of when the object enters the view of the camera and the time that it leaves the field of view.

  19. Mathilda April 29, 2016 at 9:53 am #

    Hi Adrian,
    how can I print the direction of the ball?
    I thought I could easily print np.abs(dX) and np.abs(dY),
    but I was wrong… because it only prints 0…

    • Adrian Rosebrock April 30, 2016 at 3:58 pm #

      Take a look at Lines 98-118; specifically, how the direction variable is determined. That code is used to display the direction variable on the screen. You can also print it to your terminal.

  20. Alex May 9, 2016 at 8:51 am #

    Hi Adrian!

    First of all thanks for all your great tutorials, it really helped me a lot for my projects. Without them, it would have been impossible for me to obtain any results in Python.

    I would like to kindly ask you if there is a possibility to plot the trajectory which is made by the tracked object with respect to the time stamp ( I added this info because I need to synchronize this movement with the result given by another sensor). I tried to plot dX, dY but I only get an empty graph.

    Thanks in advance!

    • Adrian Rosebrock May 9, 2016 at 6:51 pm #

      Hi Alex — you can maintain a list of dX and dY on a per-frame basis. I would suggest using the deque data structure to maintain this list. From there, you should be able to plot these values, but again, make sure you have multiple values inside the list, that way you can plot the trajectory over time.

  21. Aditya May 16, 2016 at 7:03 am #

    Hi Adrian,

    Firstly, thank you for all your great work. I am new to Python and looking into your tutorials helps a lot.
    In my project I am trying to do somewhat the same using your code: I am detecting an object (a beam), drawing contours around it, and calculating its position. What I want to know is how much the object has moved from its position (I want to get the new coordinates of that position; how do I get them?). I also see in your program that if the object moves, remains still for a bit, and then starts moving again, you consider the intermediate state as your starting point, or zeroth point. Could you tell me how to retain that position (pointing out its coordinates at that point and saying it’s at north-east) and then calculate the new direction and coordinates if it moves from that position, rather than setting it to zero?

    Thank you in advance.

    • Adrian Rosebrock May 16, 2016 at 9:12 am #

      So if I understand your question correctly: if an object pauses along its trajectory for N frames, you want this position to be logged and recorded? And then all new subsequent movements tracked based on where you previously logged the object?

      • Aditya May 17, 2016 at 5:00 am #

        Yes, Adrian. And is there any way to get the coordinates at that point, or all along the motion?

        • Adrian Rosebrock May 17, 2016 at 11:32 am #

          This will take some non-trivial edits to the code. The first is to detect when the ball has “stopped”. To accomplish this, you’ll want to monitor the (x, y)-coordinates of the ball. If it stays (approximately) in the same area for N frames (where you can define N to be whatever value you want — I suggest using 32-64 consecutive frames), then you can mark the ball as “stopped”. Store the (x, y)-coordinates of this “stopped” location. And then once the ball starts moving again, derive your changes in direction based on the stored, saved location.

          Again, this is a non-trivial edit to the code, so you’ll have to work at it.

  22. ahmed May 23, 2016 at 3:21 am #

    Adrian,

    I tried this and it worked for me. I would like to extend this tracking to two (similar but not equal sized) contours. Is that possible, and what changes would be necessary in the algorithm?

    • Adrian Rosebrock May 23, 2016 at 7:20 pm #

      Absolutely, you can certainly extend this to track multiple objects. You’ll want to change the code that finds the max of the contour area and instead have it return the top N largest objects in the images. You can then loop over them and prune them to find your two objects.

  23. Duncan June 2, 2016 at 8:08 am #

    Hey, what if you are using a red ball? Can you change the (64, 255, 255) values?

    • Adrian Rosebrock June 3, 2016 at 3:10 pm #

      You would need to manually set these parameters for your object. I would suggest using the range-detector script.

  24. Israel July 9, 2016 at 4:17 pm #

    Hey Adrian, thanks for the tutorial. I’m trying to use your code for automated navigation in my robotics project. When I run your program it opens up, but as soon as I try to track a green object, it exits saying:

    At first I thought it was because of the radius restriction you had, but even after changing it, it doesn’t make a difference. Do you have any clue as to why it’s exiting?

    • Adrian Rosebrock July 11, 2016 at 10:14 am #

      Please see my reply to “Gintautas” above where I discuss this error and how to resolve it.

  25. Supra July 26, 2016 at 8:39 am #

    Worked perfectly using OpenCV 3.1.0 and Python 3.4.2 on a Raspberry Pi 3, without using “import imutils”; I used cv2.resize(frame, (800, 480)) instead.
    Thanks!

    • Adrian Rosebrock July 27, 2016 at 2:00 pm #

      Nice job Supra! 🙂 Congrats on getting the example working.

  26. Julian August 4, 2016 at 9:18 am #

    Hi Adrian,
    I’m working on Windows 7, using Python 2.7.10 and OpenCV 3.1.0.
    But when I try to launch the following command:
    python object_movement.py --video object_tracking_example.mp4
    absolutely nothing happens :'(
    Not even an error message…
    Did I forget something?
    Thank you 🙂

    • Adrian Rosebrock August 4, 2016 at 10:06 am #

      Please see my reply to “Jeffrey Batista” above — it’s likely that your version of OpenCV was not compiled with video support.

  27. LinuxCircle September 7, 2016 at 12:16 am #

    How do you deal with different lighting in the room? A yellow ball may appear brown in a darker room.

    • Adrian Rosebrock September 8, 2016 at 1:25 pm #

      At the most basic level, you would need to tune the color parameters on a per-room basis if you are expecting dramatic changes in lighting conditions.

  28. Ravi October 3, 2016 at 12:59 pm #

    Thank you for sharing your knowledge. Does this algorithm work well for soccer ball detection and tracking? As soccer balls can be any color, what algorithm would you suggest? I just want to detect soccer balls on the field and not anything else.

    • Adrian Rosebrock October 4, 2016 at 6:59 am #

      The answer, like the answer to most problems in advanced computer science, is “it depends”. It depends on the quality of your camera. It depends on lighting conditions. It depends on occlusion. Tracking a soccer ball on a pitch based on color alone would be somewhat problematic due to varying lighting conditions (shadowing, clouds overhead, glare from lights). I think correlation filters would work better here, or feature based approaches.

  29. Randy October 9, 2016 at 7:42 pm #

    Hi Adrian.
    I am 12 years old and I was doing a project related to this one. I was wondering what type of resources/installations I need to compile and execute this code?
    By the way, this project seems really cool!!

    Thanks,
    Randy

    • Adrian Rosebrock October 11, 2016 at 1:01 pm #

      You don’t need a super powerful system to run this code. In fact, a simple Raspberry Pi can easily run this code. If you have a laptop or desktop from the past 5 years you shouldn’t have any issue running this example.

  30. Arun October 26, 2016 at 2:47 pm #

    How do I get the co-ordinates of a particular tracked object, which will be required as an input for another device?

    • Adrian Rosebrock November 1, 2016 at 9:32 am #

      Line 72 already gives you the (x, y)-coordinates of the tracked object.

      • Amy November 10, 2016 at 4:26 am #

        How do I save the coordinates of the tracked object?

        • Adrian Rosebrock November 10, 2016 at 8:36 am #

          I assume you mean “save” the coordinates to disk? I would use cPickle and write the pts object to file:
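
          A minimal sketch of what that could look like (the filename here is just an example; on Python 3 you would use the built-in pickle module instead):

          import cPickle

          # serialize the deque of tracked (x, y)-coordinates to disk
          with open("pts.pickle", "wb") as f:
              cPickle.dump(pts, f)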

  31. Ned Reilly November 26, 2016 at 8:19 pm #

    Hello Adrian,
    I am searching for how to track objects with the KLT tracker. I got really confused about it. Do you have an idea of which object detector I should use, and then how I can track it?

    • Adrian Rosebrock November 28, 2016 at 10:28 am #

      It really depends on what type of object you are trying to track. Are you tracking based on color? Shape? Texture? A combination of all?

      • Reza January 1, 2017 at 10:20 am #

        Hi Adrian. Fantastic code. Runs like a charm.
        What if I want to track a vehicle based on shape, color, texture, or a combination? Could you please guide me in more specific detail?
        Thank you very much 😉

  32. Mosa December 6, 2016 at 5:13 am #

    Hi Adrian, thank you for the incredible tutorial, I’m following your tutorials and doing it myself.

    However, in this particular one, I’m getting this error :

    ” if counter >= 10 and i == 1 and pts[-10] is not None:
    IndexError: deque index out of range”

    What could possibly be the error in my case? Thanks in advance.

    • Adrian Rosebrock December 7, 2016 at 9:45 am #

      Take a look at the comment from “Gintautas” exactly a year ago 🙂

      • Mosa December 11, 2016 at 1:50 am #

        Yeah it works now ! thank you so much 🙂

  33. Apiz December 19, 2016 at 6:33 pm #

    Hi Adrian. Following your OpenCV track object movement tutorial, how can I transfer the coordinates I get from the Raspberry Pi camera into pulse width modulation (PWM)? I want to send the signal over GPIO to an MD10C motor driver to control the current. I hope you can help me. Thanks

    • Adrian Rosebrock December 21, 2016 at 10:29 am #

      I don’t have any tutorials for operating a motor via a Python script, but you can combine OpenCV with GPIO via this tutorial.

  34. onur December 23, 2016 at 12:20 pm #

    Hi Adrian,
    thanks for the great share.
    I installed OpenCV 2.4.10 and ran the code under Python 2.7.9 on my Raspberry Pi 3B. The code works on my Windows computer but it doesn’t work on my Pi. I get no error; I use a USB webcam. When I run the module, no video window appears; there is only a restart in the shell window.
    sincerely…

    • Adrian Rosebrock December 31, 2016 at 1:54 pm #

      Are you using a Raspberry Pi camera module? Or a USB camera? If you’re using a USB camera you might have forgotten to compile OpenCV with video support. I would suggest following one of my tutorials to help you get OpenCV + Python installed and configured correctly.

      • onur January 2, 2017 at 2:24 pm #

        I am using a USB camera. While installing OpenCV I had to skip the MPEG part of the video compile; without skipping this part I get an error during the ‘make’ procedure. Could that be the problem?

        I have another question; if you answer it I will be happy. After the OpenCV installation my WiFi scan interface does not work, so I cannot find any wireless connection for internet. Unless I find the problem, I will reinstall Raspbian.

        Thanks…

        • Adrian Rosebrock January 4, 2017 at 10:55 am #

          Your WiFi issue doesn’t sound OpenCV related. That could be a problem with either (1) your Raspberry Pi or (2) your network.

          As for the OpenCV compile, it’s hard to know what the problem is without seeing the error message is from “make”. For what it’s worth, I offer a pre-configured Raspbian .img file as part of Practical Python and OpenCV. You should also check and make sure your USB camera is compatible with the Pi.

          • onur January 29, 2017 at 9:22 am #

            Adrian, I have another question:

            this code is compatible with OpenCV 2.7.10 and I have some errors with OpenCV 3. Do you suggest OpenCV 3 or 2.4.10? If you suggest OpenCV 3, can you share your modified code that works with OpenCV 3?

            sincerely…

          • Adrian Rosebrock January 29, 2017 at 2:41 pm #

            Hey Onur — the only line of code that should have to change for OpenCV 3 compatibility is the cv2.findContours call. Here is the update to make it work with OpenCV 3:
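
            (_, cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)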

          • onur January 30, 2017 at 6:19 am #

            Now I have OpenCV 2.4.13 on my Pi. When I run the code, the shell window shows two ‘RESTART’ lines and no camera window opens. My camera works with the command ‘sudo fswebcam -S 20 image.jpeg’, which gives a correct image. What could the problem be? I cannot solve it; please help me.

            sorry for my too many questions,

            sincerely…

          • onur January 31, 2017 at 7:38 am #

            I have finally solved my problem.

            We cannot use the camera display while using SSH. The code works fine. The only problem is the effect of lighting conditions.

            Thanks Adrian…

          • Adrian Rosebrock February 1, 2017 at 12:56 pm #

            If you’re trying to view the results of the camera when SSH’ing just enable X11 forwarding:

            $ ssh -X user@ip_address

  35. Roshan January 8, 2017 at 2:47 am #

    Why is frame = imutils.resize(frame, width=600) giving me a NoneType error for shape?

  36. Arvind Mohan January 11, 2017 at 4:45 am #

    What should be done while tracking a pentagon or any polygon instead of the circle?

    • Adrian Rosebrock January 11, 2017 at 10:31 am #

      I would suggest using contour approximation and contour properties, like I do in this blog post on tracking targets in drone video streams.

  37. Westley January 19, 2017 at 11:24 am #

    Hi there,

    Great tutorial! However, whenever I bring the ball in front of the camera, it is tracked for about a quarter of a second and then the script crashes, leaving the error:

    line 95, in
    if counter >= 10 and i == 1 and pts[-10] is not None:
    IndexError: deque index out of range

    Any idea on how to fix this?

    Thanks!

    • Adrian Rosebrock January 20, 2017 at 11:01 am #

      Please see the comment thread started by “Gintautas” above for the full solution to your error.

  38. George January 23, 2017 at 11:52 pm #

    Hi Adrian, thanks for your awesome tutorial. It really helped me with my project.

    Each time, this program displays the direction sequentially even after the object moves out of the frame. For example: when we move the object from left to right, it displays west; when it moves out of the frame and we again move an object from left to right, it displays east then west (only west then west is required). In my project, I don’t want to connect the point that was present just before the object moved out of the frame with the point that appears just after the object re-enters the frame. Can you please help?

    Thanks in advance.

    • Adrian Rosebrock January 24, 2017 at 2:22 pm #

      Hey George — if you want to stop the trail of points from displaying after the object has moved out of the screen, just count the number of pixels in the mask via cv2.countNonZero(mask). If the value is zero, then the object is not in view. Place this in an if statement to avoid any extra drawing.

      • George February 1, 2017 at 12:41 am #

        Thanks Adrian. Is it possible to reset the pts array to empty each time the object is not in view, so that the buffer (array) updates freshly as the object moves in and out of view?

        • Adrian Rosebrock February 1, 2017 at 12:49 pm #

          Sure. Just include some sentinel value that represents “not in view” to the pts list. You’ll want to modify the for loop logic that loops over the pts and then ignores this sentinel value when drawing.

  39. dreamline January 25, 2017 at 10:22 am #

    Thank you.
    If you want the view to look normal, mirror or flip the video:

    best regards

    I hope you add a project for counting objects with the webcam, like counting the moving cars in the street.

  40. Joe January 31, 2017 at 12:48 pm #

    Hello Dude,

    Thank you so much for this great tutorial! I was trying to test multiple balls together, but I got the texts drawn over each other. Can you please tell me how to separate each text?

    Thank you so much,

    Best regards,

    Joe

    • Adrian Rosebrock February 1, 2017 at 12:53 pm #

      What do you mean by “separate each text”? Can you please elaborate?

  41. Louis February 2, 2017 at 10:04 am #

    Hi Adrian,
    Nice work, Thanks for your tutorial!

    I am trying to track ants and analyse their directions (whether they are going into the hole or out of it). I have now drawn the contour of each ant, but I am stuck on tracking their directions. If I want to use a ‘deque’ for each ant, I need to detect each ant and find out which ‘deque’ it belongs to. Any good ideas?

    Thanks in advance.

    • Adrian Rosebrock February 3, 2017 at 11:12 am #

      There are many ways to approach this problem. One way would be correlation trackers, but I think that is overkill. Simply compute the centroid of each ant and then compute the distance between centroids in subsequent frames. Minimum distances between centroids are very likely to be the same ant.
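
      As a minimal sketch of that idea (match_centroids is a hypothetical helper; prev and curr are lists of (x, y) centroids from consecutive frames):

      import numpy as np

      def match_centroids(prev, curr):
          # for each centroid in the previous frame, pick the closest
          # centroid in the current frame (very likely the same ant)
          matches = {}
          for i, p in enumerate(prev):
              dists = [np.linalg.norm(np.array(p) - np.array(c)) for c in curr]
              matches[i] = int(np.argmin(dists))
          return matches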

  42. Conni February 3, 2017 at 10:28 am #

    Dear Adrian,
    I have just started to use the imutils tools. Thank you for your awesome tutorial. Is it possible to change the field of view of the video stream?
    I tried: vs = VideoStream(usePiCamera=args["picamera"] > 0, resolution=(640, 480)).start()
    However, this seems to change the resolution but not the field of view.

    • Adrian Rosebrock February 3, 2017 at 11:02 am #

      Hi Conni — what do you mean by the “field of view”? Are you looking for a specific area of the frame? If so you would accomplish that via image cropping with NumPy array slicing. If you’re new to OpenCV/computer vision I would suggest working through Practical Python and OpenCV to help you get started.

  43. Rob February 12, 2017 at 5:37 pm #

    Hi Adrian,

    Thanks for the tutorial! It’s been super helpful. Is it possible to write the video from a webcam to a saved file in a way that saves the object detection marker and trail of center points? I have been able to save a video of what my webcam sees, but I haven’t been able to do it in a way where I also record the object detection or center trails.

    Thanks again for the tutorials. I’ve only been using Python for a few days now and you’ve been a huge help.

    • Adrian Rosebrock February 13, 2017 at 1:38 pm #

      Hey Rob — I would start by using this tutorial to help you write the video to file. You can then use cPickle to serialize the deque object to file as well.

  44. spospider February 24, 2017 at 11:50 am #

    Is there a version of this for Python 3.4?

    • Adrian Rosebrock February 27, 2017 at 11:25 am #

      Change the cv2.findContours call to:

      (_, cnts, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      And it will work for OpenCV 3.

      • spospider March 3, 2017 at 2:33 am #

        I tried it with Python 3 on an RPi 2 Model B, and the terminal always tells me “select timeout” every 10 seconds; the frame window is black with dX and dY = 0.
        Is there a way to solve this?

        • spospider March 3, 2017 at 2:35 am #

          CPU usage is below 7%.
          The program doesn’t quit unless it is terminated once it becomes unresponsive.

          • Adrian Rosebrock March 4, 2017 at 9:40 am #

            If your frame window is black, I think you might have an issue with your Raspberry Pi firmware. Please see this post for more information.

  45. Josh Lovell April 29, 2017 at 10:02 am #

    How would I use this to track a ball on my screen as opposed to on my webcam?

    • Josh Lovell April 29, 2017 at 10:03 am #

      As in, a live video, not a saved one. Thanks!

      • Adrian Rosebrock May 1, 2017 at 1:37 pm #

        You would simply need to access your webcam rather than the video file. I describe the basics of accessing a webcam and video file inside Practical Python and OpenCV. This would likely be an excellent starting point for you (and help you master the fundamentals of OpenCV).

  46. Samyak Jain May 1, 2017 at 8:25 am #

    Hey Adrian, do you know of any object tracking methods/implementations which consider the problem of objects with similar appearances?

    • Adrian Rosebrock May 1, 2017 at 1:13 pm #

      Hi Samyak — I think that depends on how you define “similar appearance”. What does “similar appearance” mean in context of your problem? Are the objects too similar in terms of color such that you cannot define color thresholds to segment each of them?

      • Samyak Jain May 1, 2017 at 3:33 pm #

        Yes, by “similar appearance” I mean objects like multiple same colored balls in a video sequence. In that case, if I want to track one particular ball, without being mistaken for another ball, how would I go about doing it?

        • Adrian Rosebrock May 3, 2017 at 6:00 pm #

          In that case I would consider exactly how you are identifying the original ball in the first place. If you can identify it via color thresholding (using some heuristics to discount the others), then correlation tracking or centroid tracking is a good start. Otherwise, consider training a custom object detector that is not based on color and use that to detect your object. From there you can also apply correlation tracking and centroid tracking.

  47. TAMG May 7, 2017 at 6:02 pm #

    Hi Adrian, I have a strange question: how does it work? I mean, when I run the code it runs successfully, but nothing happens; even the video doesn’t open. What are the possibilities for this 🙁 ?

    • Adrian Rosebrock May 8, 2017 at 12:20 pm #

      It sounds like your OpenCV install wasn’t compiled with video support enabled. I would suggest using one of my tutorials to compile OpenCV with video support.

  48. Austin Bashaw May 7, 2017 at 10:33 pm #

    Hi Adrian,

    I am working on pedestrian tracking and detection for a class of mine. I am not very good with the coding lingo, so I apologize in advance.

    Based on your tutorial using HOG descriptors, I am able to find pedestrians extremely well. I am attempting to place a unique colored rectangle around each person and am able to do so. The problem comes in with the tracking portion. I am struggling with how to make my code realize that ‘Person A’ is still ‘Person A’ in a consecutive frame. Currently, a different colored box is drawn around the same person between consecutive frames.

    The HOG detector returns ‘rects’, the points of a rectangle around each person. How can I use this ‘rects’ array to track across several frames if the HOG might skip detection for a frame or two?

    I would definitely have an F in my class if it weren’t for your site.
    Thanks

    • Adrian Rosebrock May 8, 2017 at 12:20 pm #

      Hi Austin — I would suggest using the centroid tracking method. Compute the centroids for each bounding box and then determine the Euclidean distance between each centroid in successive frames. Bounding boxes with the smallest distance between the frames (very likely) refer to the same person/object.

  49. Kiki May 10, 2017 at 12:35 am #

    Hi Adrian,
    Thanks for your awesome tutorial !
    Best website for OpenCV and computer vision! I learned a lot from it. It will be useful for my upcoming graduation project.
    But I’m not sure what kind of object detection and tracking methods you used in this tutorial?

    Thanks in advance

    • Adrian Rosebrock May 11, 2017 at 8:49 am #

      Thank you Kiki, I’m happy to hear you are enjoying the PyImageSearch blog! I’d also like to wish you the best of luck on your graduation project. As far as object detection/tracking in this blog post, we used simple color thresholding to detect the ball in each frame. The (x, y)-coordinates of the ball were then monitored and stored in a list.

  50. Fufu May 10, 2017 at 1:25 am #

    Hi Adrian,
    Thanks for your awesome tutorial !
    Best website for OpenCV and computer vision! It will be useful for my graduation project.
    But I am not sure what kind of object detection and tracking methods you used in this tutorial?

  51. Tira May 22, 2017 at 11:07 am #

    Hi Adrian,
    Thanks for your great tutorial!
    But I have a question: can this method be used to keep the object in the center, with the camera automatically moving to follow the direction of the object’s movement?

    I have assembled a pi camera with two servo motors in order to move vertically and horizontally for 180 degree.

    Sorry if my english is not good. Thanks.

    • Adrian Rosebrock May 25, 2017 at 4:38 am #

      Provided you can detect and track the object in each frame, then yes, you could theoretically use the servos to follow the object.

  52. Suraj June 18, 2017 at 12:38 am #

    Hey Adrian,
    Nice tutorial, by the way! Do you know any websites from which I can download OpenCV, Portable Python, and NumPy? Can you tell me how to install them?

  53. selvam July 3, 2017 at 11:20 am #

    Hi Adrian
    First, thanks a ton for your contribution.

    Is it possible to take a reference point for the ball in the frame and calculate the distance and angle with respect to that reference point as the ball moves?

    And of course, if the ball comes to the reference point, it should report the distance as zero cm/mm.

    • Adrian Rosebrock July 5, 2017 at 6:10 am #

      Yes, but I would suggest reading up on how to perform camera calibration first. This will give you a much better estimation to the angle and distance.

  54. Dibakar Saha July 3, 2017 at 4:09 pm #

    Hi Adrian,

    I am Dibakar Saha. I found your blog searching on google for “object motion tracking opencv”.

    Here is my question-
    In this blog post you used the HSV color space to identify a green ball, right? What if, instead of using the HSV color space, we use a Haar cascade to identify an object? Can we do that? You see, understanding the HSV color space is a bit hard for me, and that is why I ask this question. Moreover, I have used range_picker.py and I find it highly difficult to find the right lower and upper limits of a particular color.

    I am a newbie to OpenCV and I have just started to learn it. But I have a good experience in Python, C, C++ and Java.

    Thanks,
    Dibakar.

    • Adrian Rosebrock July 5, 2017 at 6:05 am #

      You can certainly train your own object detector to detect ball-like objects. Haar cascades are one option, but I prefer HOG + Linear SVM. I discuss the HOG + Linear SVM object detector framework (with lots of Python code), inside the PyImageSearch Gurus course.

  55. Varghese Mathai July 9, 2017 at 1:08 pm #

    Bro, I made a tiny application using your tutorial and some .NET code. All OpenCV wrappers for .NET seemed to be kind of expensive, so I used sockets to communicate within localhost.

    Not a big deal. The delay in recognising the direction is noticeable, but it was interesting.

    Thanks

    The demo can be found here:
    https://www.youtube.com/watch?v=SyC9LNrI3BA

    • Adrian Rosebrock July 11, 2017 at 6:37 am #

      Congratulations Varghese, this is AWESOME! Great work.

  56. Chunni August 15, 2017 at 11:34 am #

    Hi Adrian,
    I have a query regarding the object approaching. How do you find out if the object is approaching the camera?
    Thank you in advance

    • Adrian Rosebrock August 17, 2017 at 9:18 am #

      The object will appear bigger (i.e., larger radius/bounding box) the closer it gets to the camera.

  57. Mafuyu August 18, 2017 at 5:00 am #

    Hi Adrian,

    Thanks for making this guide, I’ve been tweaking it on my own for my school project.

    Is it possible for the script to print the last known coordinates onto the terminal? Like once I quit the script, the next line on the terminal is the last known coordinates of the ball.

    I’ll be connecting the raspberry pi through putty, so I’ll need to see results on that!

    Thank you

    • Adrian Rosebrock August 21, 2017 at 3:51 pm #

      Sure, just place the following code at the end of the script after you quit:

      print(cX, cY)

  58. Navaneeth Krishnan September 2, 2017 at 4:10 am #

    Hi Adrian, I am a big fan of yours and I want to ask a favor: how do you find the color range for the HSV color space? I want to find the optimal color range for red. Is there an easy way to find the color range?

    • Adrian Rosebrock September 5, 2017 at 9:36 am #

      The “optimal range” is going to depend highly on your environment unless you are doing some sort of color balancing. For a given environment you would want to use the “range-detector” script I mentioned in the post.

  59. ALOK MISHRA September 10, 2017 at 1:33 pm #

    Sir, I want to use this code to control mouse cursor movement. Is that possible?

    • Adrian Rosebrock September 11, 2017 at 9:05 am #

      I’ve never tried to control the mouse via Python before. For Windows you might want to try the win32api/ctypes libraries. I also believe that PyAutoGUI is multi-platform, but I’ve never tried either of these.

  60. deshario September 25, 2017 at 9:26 am #

    Hi sir! If I have two balls, one red and one green, how do I implement it? It would be very helpful to me. Thanks!

    • Adrian Rosebrock September 26, 2017 at 8:25 am #

      You define two different color thresholds, one for red and one for green. You then create a mask for each color and check the contours.
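
      A rough sketch of the two-mask idea, reusing the `hsv` frame from this post’s loop (the green bounds are the post’s; the red bounds are assumptions to tune, and note that red hue wraps around 0/180 in OpenCV, so it may need two ranges):

      maskGreen = cv2.inRange(hsv, (29, 86, 6), (64, 255, 255))
      maskRed = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))

      for mask in (maskGreen, maskRed):
          mask = cv2.erode(mask, None, iterations=2)
          mask = cv2.dilate(mask, None, iterations=2)
          cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
              cv2.CHAIN_APPROX_SIMPLE)[-2]
          # ...then take the largest contour per mask, exactly as in the post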

  61. Jimmer October 23, 2017 at 1:34 am #

    If I wanted to track a tennis ball bouncing at speed, what would be a good approach to detect the point of impact?

    I was thinking that I would try to measure the change in the slope of the ball’s trajectory as it approaches the ground and after it bounces. Do you have any suggestions on how to do this?

    • Adrian Rosebrock October 23, 2017 at 6:16 am #

      In general this can be a pretty tough problem. If you do not have a fixed, non-moving camera it could be near impossible. How are you capturing frames?

  62. Jimmer October 23, 2017 at 10:33 am #

    I will be using a fixed camera to capture the frames.

    • Adrian Rosebrock October 23, 2017 at 12:25 pm #

      It’s still pretty hard to provide concrete suggestions without first seeing some example images from your input camera, so I’m more-or-less shooting in the dark here, but I would consider training a custom object detector to detect the racket. You could do the same for a tennis ball, but color thresholding might also be reliable enough. Once you have the bounding boxes for both you can monitor them and determine when the (x, y)-coordinates overlap/are sufficiently close.

  63. Karen November 6, 2017 at 4:15 pm #

    Hi Adrian,

    Rather than tracking a green ball, is there a way to modify this code to track eye movement? I’m currently using Haar cascades to detect eyes on a face, and I’m curious whether I can replace the green ball with the cascade detections to track their movement.

    • Adrian Rosebrock November 6, 2017 at 4:27 pm #

      For eye movement take a look at facial landmarks. They will help you get started.

  64. kaber November 8, 2017 at 4:28 pm #

    Hi Adrian,

    I am running the code on Ubuntu. When I enter the command, nothing happens to the video. Can you explain how to run it?

    • Adrian Rosebrock November 9, 2017 at 6:19 am #

      Can you elaborate on what you mean by “nothing happens”? Are you trying to use the example video file? Or are you trying to use your webcam on your Ubuntu machine?

      • kaber November 17, 2017 at 3:39 pm #

        Yes, I am trying to use the example video file. By “nothing happens” I mean it does not execute anything; the command line just returns a new prompt.

        • Adrian Rosebrock November 18, 2017 at 8:09 am #

          So the script automatically exits? If that’s the case it sounds like OpenCV was compiled without video I/O support. Please use one of my tutorials to install OpenCV (and make sure you don’t skip any steps). Again, this is most likely due to not having video I/O support compiled with OpenCV.

  65. Terry December 14, 2017 at 3:22 am #

    Greetings.
    I have been trying to make this track multiple objects, but I fail at keeping the deque pairs “pts” in some array. I have managed to get multiple contours in the np.array, so I can have multiple centers. I believe I have to iterate over my array to check whether there is already a tuple with its (x, y) close to the new one, so that I can append the new tuple at that position in the array, and do that for all the centers I get. I am failing at initializing an array of tuples and iterating through them :/

    • Adrian Rosebrock December 15, 2017 at 8:26 am #

      Hey Terry — I would suggest creating a deque for each object you want to track. You can use either a correlation tracking algorithm or centroid-based tracking to associate a deque with an object. Compute the Euclidean distance between the centroids of objects in subsequent frames. The centroids with the smallest distances are (presumably) the same object.
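
      A small sketch of that greedy centroid matching, where each tracked object keeps its own deque (the 50-pixel matching radius is an assumption):

      import math
      from collections import deque

      tracks = {}   # object ID -> deque of centroids
      next_id = 0

      def update_tracks(centroids, max_dist=50):
          # greedily assign each new centroid to the closest existing track
          global next_id
          for c in centroids:
              (best_id, best_d) = (None, max_dist)
              for (oid, pts) in tracks.items():
                  d = math.hypot(c[0] - pts[0][0], c[1] - pts[0][1])
                  if d < best_d:
                      (best_id, best_d) = (oid, d)
              if best_id is None:   # nothing close enough: start a new track
                  tracks[next_id] = deque([c], maxlen=32)
                  next_id += 1
              else:
                  tracks[best_id].appendleft(c)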

  66. Mustafa December 26, 2017 at 11:38 am #

    I have done object detection using a DNN, and now I have the bounding box drawn on the particular object. How can I have that object tracked exactly as is done in this post?

    • Adrian Rosebrock December 26, 2017 at 3:44 pm #

      I would suggest passing the bounding box into a dedicated object tracking algorithm. Take a look at correlation tracking.
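
      For example, with dlib’s implementation (a sketch, not this post’s code; `rgb` is assumed to be the frame converted from BGR, and x, y, w, h come from your detector):

      import dlib

      tracker = dlib.correlation_tracker()
      tracker.start_track(rgb, dlib.rectangle(x, y, x + w, y + h))

      # then, for every subsequent frame:
      tracker.update(rgb)
      pos = tracker.get_position()
      (startX, startY) = (int(pos.left()), int(pos.top()))
      (endX, endY) = (int(pos.right()), int(pos.bottom()))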

  67. Spencer December 30, 2017 at 3:24 pm #

    Hi Adrian,
    In your code you write:

    # resize the frame, blur it, and convert it to the HSV
    # color space
    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    but you do not use the “blurred” variable when setting hsv (instead the code uses “frame”). Is there a reason you do not use “blurred” to set hsv? And if so, why is “blurred” calculated at all? Thanks!

    • Adrian Rosebrock December 31, 2017 at 9:35 am #

      I used the blurred image when I was doing some debugging but forgot to remove it from the code after I was done. It can be safely ignored/removed.
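
      If you do want the smoothing (it can suppress high-frequency noise before thresholding), the one-line change is:

      hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)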

  68. owais January 23, 2018 at 4:22 pm #

    hi adrian

    dX = pts[-10][0] - pts[i][0]
    dY = pts[-10][1] - pts[i][1]

    Adrian, could you explain this code? I know we are computing our coordinates here; I want to understand how it works. Thanks in advance!

    • Adrian Rosebrock January 24, 2018 at 5:02 pm #

      We are computing the difference in (x, y)-coordinates between the current point (index 0) and the point added 10 frames ago (index -10).
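
      For example, with hypothetical values pts[-10] = (300, 200) and pts[i] = (250, 240), you get dX = 300 - 250 = 50 and dY = 200 - 240 = -40; the magnitudes tell you how far the centroid has shifted over those frames, and the signs give the direction of movement.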

  69. faraz January 27, 2018 at 10:50 am #

    Can you please show me how to change this code to track a very small ball?

  70. Amare Mahtsentu February 4, 2018 at 6:23 pm #

    Hi Adrian,
    Thank you for your time.

    Can I use this motion tracking together with deep learning object detectors like Faster R-CNN? How?
    If you get my idea: after generating the coordinates of the bounding boxes, how can I use those points to track the motion of the object and follow it until it is lost? Thank you.

    • Adrian Rosebrock February 6, 2018 at 10:25 am #

      I would suggest you look into dedicated object tracking algorithms such as “correlation trackers”. These trackers allow you to pass in the bounding box coordinates of a detected object and then track the object in subsequent frames.

      • Amare Mahtsentu February 7, 2018 at 1:25 am #

        Thank you
        Does your Practitioner Bundle cover such things (detection with deep learning plus tracking with OpenCV)?

        • Adrian Rosebrock February 8, 2018 at 8:36 am #

          The Practitioner Bundle covers the fundamentals of object detection. The ImageNet Bundle covers more advanced object detection algorithms such as Faster R-CNNs and Single Shot Detectors (SSDs). I do not cover object tracking with OpenCV inside the book as the book focuses on deep learning rather than OpenCV. I’ll be doing more object detection tutorials here on the PyImageSearch blog though!

  71. Rajnish Kumar February 12, 2018 at 2:08 am #

    Hello sir.

    I am Rajnish Kumar. I am working on a project where I will choose different options with the help of eye motion, but I am having trouble with the eye motion itself: how can my pupil and its midpoint be detected? Please help me with this issue. I have gone through all your tutorials, such as object tracking, face detection, etc.

    • Adrian Rosebrock February 12, 2018 at 6:16 pm #

      I don’t have any tutorials on eye tracking and pupil localization but I know a few PyImageSearch readers have had good luck with this algorithm.

  72. Nithin February 17, 2018 at 3:11 am #

    Hi Adrian,
    Can you suggest any method for displaying the video output of an OpenCV Python program on a webpage? Thanks in advance 🙂

    • Adrian Rosebrock February 18, 2018 at 9:45 am #

      I don’t have any tutorials to cover that, but I will certainly try to cover it in the future! Thank you for the suggestion.

  73. Praneeth February 17, 2018 at 10:55 pm #

    Hello Adrian,

    My project is a pupil-controlled wheelchair: I need to detect the motion of the pupil in real time and actuate the DC motors to move in the corresponding direction.
    Could you please help me track the black pupil in real time?
    It would be great if you could send me source code that can support my project.
    I will be using Python with OpenCV for image processing and an Arduino for actuating the motors.
    Thank you in advance!

    • Adrian Rosebrock February 18, 2018 at 9:40 am #

      I don’t have any tutorials on pupil detection but I know other PyImageSearch readers have had good luck with this tutorial. I would suggest starting there.

  74. SHREY MAHESHWARI February 22, 2018 at 8:38 am #

    Hello Adrian,

    How can I get the dX and dY values for two differently colored objects?

    • Adrian Rosebrock February 22, 2018 at 8:55 am #

      You would need to maintain two deque data structures, one for each of the differently colored objects. This would also require you to define two sets of color thresholds for the cv2.inRange function.

      • SHREY MAHESHWARI February 24, 2018 at 1:19 am #

        Could you show how to do it in the Python code above? I mean, how to maintain two deque data structures and what changes are needed around the cv2.inRange function?
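
        A minimal sketch of the two-deque version, assuming the `hsv` frame and args["buffer"] from the post (the red thresholds are assumptions to tune with the range-detector script):

        ptsGreen = deque(maxlen=args["buffer"])
        ptsRed = deque(maxlen=args["buffer"])

        # inside the frame loop, one (lower, upper, deque) triple per color
        for (lower, upper, pts) in (
                ((29, 86, 6), (64, 255, 255), ptsGreen),
                ((0, 100, 100), (10, 255, 255), ptsRed)):
            mask = cv2.inRange(hsv, lower, upper)
            mask = cv2.erode(mask, None, iterations=2)
            mask = cv2.dilate(mask, None, iterations=2)
            # find the largest contour, compute its center, then
            # pts.appendleft(center) -- exactly as the post does for one ball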

  75. Rakshanda March 21, 2018 at 3:31 am #

    Hi Adrian,

    I need to track moving people and count the number of people. Can you please suggest how I can do this?

    • Adrian Rosebrock March 22, 2018 at 10:03 am #

      I don’t have a tutorial on people counting at the moment but I will try to do one soon.

      • Anand July 27, 2018 at 8:10 am #

        It would be very well appreciated if you did a tutorial on this one. I have been googling for over a week to find a good tutorial on people counting. Also, your tutorials are excellent, simple, and informative for newbies like me.
        Thank you

  76. mo1878 March 21, 2018 at 2:50 pm #

    Hello Adrian,

    Firstly, I’d like to thank you for this tutorial. Secondly, I am playing around with the code right now, but I am wondering if it is possible to output just the (x, y)-coordinates of the centroid, rather than the change in x and y (dX, dY)?

    • Adrian Rosebrock March 22, 2018 at 9:49 am #

      What does “output” in this context mean?

  77. diego March 26, 2018 at 2:57 pm #

    Hello Adrian!

    Thanks for sharing your knowledge. How can I determine the movement of more than one object at the same time?

    • Adrian Rosebrock March 27, 2018 at 6:16 am #

      Hey Diego, take a look at my replies to the following PyImageSearch readers:

      – Manez on February 26, 2016
      – Alex on May 9, 2016
      – ahmed on May 23, 2016
      – Samyak Jain on May 1, 2017
      – Terry on December 14, 2017

      I hope that helps point you in the right direction!

  78. amare March 30, 2018 at 3:06 pm #

    hi Adrian

    Last time, on February 6, 2018, you suggested I use correlation trackers along with object detectors like Faster R-CNN. I tried that, and it detects and tracks the objects well, but there is no way of telling the direction of movement. Do you have any ideas?
    Thanks

    • Adrian Rosebrock April 4, 2018 at 12:49 pm #

      Yes, there is a way. If you know the bounding box of the object you can compute the center/centroid. Pass the centroid into a deque data structure, like I do in this post, and you can determine the object’s direction.
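
      Sketched out, assuming (startX, startY, endX, endY) comes from your detector or tracker:

      from collections import deque

      pts = deque(maxlen=32)

      # per frame: bounding box -> centroid -> deque
      cX = int((startX + endX) / 2.0)
      cY = int((startY + endY) / 2.0)
      pts.appendleft((cX, cY))

      # once enough history exists, the deltas give the direction
      if len(pts) >= 10:
          dX = pts[-10][0] - pts[0][0]
          dY = pts[-10][1] - pts[0][1]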

  79. John April 25, 2018 at 6:37 am #

    How can I track the velocity of the moving object?

    • Johnny April 27, 2018 at 8:11 pm #

      Dear Adrian,

      I am new to OpenCV coding. I love this tutorial, but I was wondering if it is possible to add velocity tracking to this code and convert to real-world coordinates? I want to track the speed of a moving object accurately, with a 1 in/sec margin of error. Do you have any ideas or know of any tutorials I can refer to?

      • Adrian Rosebrock April 28, 2018 at 6:06 am #

        If you’re new to OpenCV coding then trying to compute velocity is a bit aggressive. I would start with learning the fundamentals first; you will need a strong foundation for this project. From there you’ll want to learn about the intrinsic/extrinsic parameters of a camera and how to properly calibrate it. A calibration is required for real-world coordinates and an accurate measurement. You can skip this more complicated calibration if you’re willing to sacrifice accuracy, but the error will most likely be worse than a 1in margin, depending on your project. I do not have any tutorials on velocity tracking but I may consider it for the future. In the meantime, consider working through Practical Python and OpenCV to help you learn the fundamentals. I hope that at the very least this points you in the right direction and gives you some terms to research.
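
        For a flavor of the uncalibrated shortcut (a toy sketch only: it assumes a fixed camera, motion parallel to the image plane, and a pixels-per-inch ratio you measured yourself with a reference object):

        import math

        PIXELS_PER_INCH = 12.0  # assumption: measure with a reference object
        FPS = 30.0              # frames per second of the stream

        def speed_in_per_sec(p1, p2, frames_elapsed):
            # approximate speed between two centroids, in inches/second
            d_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
            return (d_px / PIXELS_PER_INCH) / (frames_elapsed / FPS)

        print(speed_in_per_sec((100, 100), (160, 100), 15))  # -> 10.0 in/s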

  80. Steve June 12, 2018 at 4:50 am #

    Hey Adrian,

    I was wondering if it’s possible to track multiple objects?

    • Adrian Rosebrock June 13, 2018 at 5:40 am #

      Totally. See my reply to “Manez” on February 26, 2016.

  81. Jose June 28, 2018 at 4:30 am #

    Hi !

    Sorry for my stupid question, but where is your Haar cascade to detect the ball?

    Keep up the amazing work!

    • Adrian Rosebrock June 28, 2018 at 7:57 am #

      We aren’t using a Haar cascade here, we are using object detection and tracking via color thresholding.

  82. John July 22, 2018 at 10:06 pm #

    Adrian,

    I am attempting to predict the path of the ball and draw the tail for that set of predicted points. I am storing the predicted (x, y)-coordinates for each frame in a queue, then I attempt to draw the tail the same way you have, but the issue I am having is that the tail is never cleared off the screen. Do you have any suggestions on how I could fix that?

    • Adrian Rosebrock July 25, 2018 at 8:21 am #

      Hey John — there’s probably a logic error in your code somewhere. Are you using the same deque data structure as I am? That will ensure the older (x, y)-coordinates are removed from the queue and the newer coordinates are kept. You may also be using a simple Python list which will never delete older items.
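
      The difference in one toy example (the deque silently drops the oldest points once it is full; a plain list never does):

      from collections import deque

      pts = deque(maxlen=32)
      for i in range(100):
          pts.appendleft((i, i))
      print(len(pts))  # 32 -- a plain Python list would still hold all 100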

  83. Jay August 23, 2018 at 10:35 am #

    Hi Adrian! First, thank you for this great tutorial. Recently some questions have been bothering me: if I want to design an object tracking model, especially one that draws the movement of the objects, why can’t I do it based on deep learning, as in your recent post on real-time detection with deep learning? Thanks for your time!

    • Adrian Rosebrock August 24, 2018 at 8:38 am #

      Hey Jay, I’m not sure I understand the question here. You can certainly use deep learning to perform object detection. Once you have the object detected you can track it as well. Here is a good example of both in action.

  84. Naveen August 24, 2018 at 6:31 am #

    Dear Adrian,

    Thank you for your tutorial. I am new to OpenCV coding and this tutorial is really interesting.
    I installed OpenCV and ran your source code; it works fine. How do I set the object boundaries and colors? I want to track a pen that is green or red, but most of the time it shows red lines without the green color. How can I track it accurately? Please provide me with a solution.

  85. kumar August 29, 2018 at 3:32 am #

    Dear Adrian,

    I installed OpenCV and ran your source code; it works fine. I started a small project using your code: I created nine small circles (c1, c2 …c9) on white paper. How do I get the exact location accurately when the object is moving? Please provide me with a solution.

    • Adrian Rosebrock August 30, 2018 at 9:03 am #

      Can you clarify what you mean by “exact location”? You can compute the bounding box of the contour region of each circle by using the “cv2.boundingRect” function. Furthermore, this tutorial shows you how to compute the centroid of the contour as well. Instead of finding the largest contour via the “max” function you would just loop over them individually.
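
      A short sketch of that per-contour loop, assuming `mask` was built with cv2.inRange as in the post:

      cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
          cv2.CHAIN_APPROX_SIMPLE)[-2]

      for c in cnts:
          (x, y, w, h) = cv2.boundingRect(c)
          M = cv2.moments(c)
          if M["m00"] > 0:
              center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
              print("circle at", center, "box:", (x, y, w, h))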

  86. MJ October 10, 2018 at 5:35 pm #

    Hi. Thank you for your astonishing work. I’m somewhat of a noob in Python. I’m using Spyder to run this code, but it doesn’t grab any frames from the example video.

    • Adrian Rosebrock October 12, 2018 at 9:09 am #

      Try executing the code from your command line/prompt rather than the Spyder IDE.

  87. Sun November 4, 2018 at 9:32 pm #

    Hi Adrian,

    Thank you so much for your great tutorial.
    I’m working on a similar project: I’m provided a video shot of a vehicle control panel, and the task is to find the locations of the meters (speed, rotation speed, etc.) on that panel and track them. All the meters are on the same panel (which is a flat surface).
    I have tried feature matching and object tracking but cannot get satisfying results in either speed or accuracy. I’m wondering, do you have suggestions for this task?

    Thanks again

    • Adrian Rosebrock November 6, 2018 at 1:23 pm #

      Hey Sun — I’m not entirely sure I understand the project. Do you have any images or example video?

  88. Kevin December 4, 2018 at 7:16 am #

    Hi Adrian,

    Thank you for these amazing tutorials! One quick question, though: what if both dirX and dirY are empty? The if-else statement starting on line 123 does not seem to handle this case. But I guess it really doesn’t matter, since it has no impact on the tracking?

    • Adrian Rosebrock December 4, 2018 at 9:33 am #

      If both are empty then you are correct, you wouldn’t be able to derive the tracking information since there are no “deltas”.
