Find distance from camera to object/marker using Python and OpenCV

A couple of days ago, Cameron, a PyImageSearch reader, emailed in and asked about methods to find the distance from a camera to an object/marker in an image. He had spent some time researching, but hadn’t found an implementation.

I knew exactly how Cameron felt. Years ago I was working on a small project to analyze the movement of a baseball as it left the pitcher’s hand and headed for home plate.

Using motion analysis and trajectory-based tracking I was able to find/estimate the ball location in the frame of the video. And since a baseball has a known size, I was also able to estimate the distance to home plate.

It was an interesting project to work on, although the system was not as accurate as I wanted it to be — the “motion blur” of the ball moving so fast made it hard to obtain highly accurate estimates.

My project was definitely an “outlier” situation, but in general, determining the distance from a camera to a marker is actually a very well studied problem in the computer vision/image processing space. You can find techniques that are very straightforward and succinct, like the triangle similarity. And you can find methods that are complex (albeit more accurate) using the intrinsic parameters of the camera model.

In this blog post I’ll show you how Cameron and I came up with a solution to compute the distance from our camera to a known object or marker.

Definitely give this post a read — you won’t want to miss it!

Triangle Similarity for Object/Marker to Camera Distance

In order to determine the distance from our camera to a known object or marker, we are going to utilize triangle similarity.

The triangle similarity goes something like this: Let’s say we have a marker or object with a known width W. We then place this marker some distance D from our camera. We take a picture of our object using our camera and then measure the apparent width in pixels P. This allows us to derive the perceived focal length F of our camera:

F = (P x D) / W

For example, let’s say I place a standard 8.5 x 11 inch piece of paper (horizontally; W = 11) at a distance D = 24 inches in front of my camera and take a photo. When I measure the width of the piece of paper in the image, I notice that the perceived width of the paper is P = 248 pixels.

My focal length F is then:

F = (248px x 24in) / 11in = 543.45

As I continue to move my camera both closer and farther away from the object/marker, I can apply the triangle similarity to determine the distance of the object to the camera:

D’ = (W x F) / P

Again, to make this more concrete, let’s say I move my camera 3 ft (or 36 inches) away from my marker and take a photo of the same piece of paper. Through automatic image processing I am able to determine that the perceived width of the piece of paper is now 170 pixels. Plugging this into the equation we now get:

D’ = (11in x 543.45) / 170 = 35in

Or roughly 36 inches, which is 3 feet.

Note: When I captured the photos for this example my tape measure had a bit of slack in it and thus the results are off by roughly 1 inch. Furthermore, I also captured the photos hastily and not 100% on top of the feet markers on the tape measure, which added to the 1 inch error. That all said, the triangle similarity still holds and you can use this method to compute the distance from an object or marker to your camera quite easily.

Make sense now?

Awesome. Let’s move into some code to see how finding the distance from your camera to an object or marker is done using Python, OpenCV, and image processing and computer vision techniques.

Finding the distance from your camera to object/marker using Python and OpenCV

Let’s go ahead and get this project started. Open up a new file, name it distance_to_camera.py, and we’ll get to work:
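The line numbers referenced throughout this section correspond to the full script included with this post’s downloads. As a minimal sketch (assuming imutils is installed; the (5, 5) blur kernel and the Canny thresholds of 35 and 125 are representative values you may need to tune), the imports and the find_marker function look something like this:

# import the necessary packages
from imutils import paths
import numpy as np
import imutils
import cv2

def find_marker(image):
    # convert the image to grayscale, blur it slightly to remove
    # high frequency noise, and apply edge detection
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 35, 125)

    # find the contours in the edged image and keep the largest one;
    # we assume this contour corresponds to our piece of paper
    # (imutils.grab_contours papers over the differing return
    # signatures of cv2.findContours across OpenCV versions)
    cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)

    # compute and return the rotated bounding box of the paper region
    return cv2.minAreaRect(c)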

The first thing we’ll do is import our necessary packages (Lines 2-5). We’ll use paths from imutils to load the available images in a directory. We’ll use NumPy for numerical processing and cv2 for our OpenCV bindings.

From there we define our find_marker function. This function accepts a single argument, image, and is meant to be utilized to find the object we want to compute the distance to.

In this case we are using a standard 8.5 x 11 inch piece of paper as our marker.

Our first task is to find this piece of paper in the image.

To do this, we’ll convert the image to grayscale, blur it slightly to remove high frequency noise, and apply edge detection on Lines 9-11.

After applying these steps our image should look something like this:

Figure 1: Applying edge detection to find our marker, which in this case is a piece of paper.

As you can see, the edges of our marker (the piece of paper) have clearly been revealed. Now all we need to do is find the contour (i.e. outline) that represents the piece of paper.

We find our marker on Lines 15 and 16 by using the cv2.findContours function (taking care to handle OpenCV 2.4 and OpenCV 3+) and then determining the contour with the largest area on Line 17.

We are making the assumption that the contour with the largest area is our piece of paper. This assumption works for this particular example, but in reality finding the marker in an image is highly application specific.

In our example, simple edge detection and finding the largest contour works well. We could also make this example more robust by applying contour approximation, discarding any contours that do not have 4 points (since a piece of paper is a rectangle and thus has 4 points), and then finding the largest 4-point contour.
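As a rough illustration of that more robust variant (this sketch reuses the cnts list from find_marker above and is not part of the script walked through in this post):

# approximate each contour and keep only rectangle-like candidates
# with exactly 4 vertices before choosing the largest one
candidates = []
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        candidates.append(approx)

# fall back to the original behavior if no 4-point contour is found
c = max(candidates, key=cv2.contourArea) if candidates else max(cnts, key=cv2.contourArea)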

Note: More on this methodology can be found in this post on building a kick-ass mobile document scanner.

Another alternative for finding markers in images is to utilize color, such that the color of the marker is substantially different from the rest of the scene in the image. You could also apply methods like keypoint detection, local invariant descriptors, and keypoint matching to find markers; however, these approaches are outside the scope of this article and are, again, highly application specific.

Anyway, now that we have the contour that corresponds to our marker, we return the bounding box which contains the (x, y)-coordinates and width and height of the box (in pixels) to the calling function on Line 20.

Let’s also quickly define a function that computes the distance to an object using the triangle similarity detailed above:
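In sketch form, the function is little more than the D' = (W x F) / P equation from above:

def distance_to_camera(knownWidth, focalLength, perWidth):
    # compute and return the distance from the marker to the camera
    # via the triangle similarity: D' = (W x F) / P
    return (knownWidth * focalLength) / perWidth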

This function takes a knownWidth of the marker, a computed focalLength, and the perceived width of an object in an image (measured in pixels), and applies the triangle similarity detailed above to compute the actual distance to the object.

To see how we utilize these functions, continue reading:
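Here is a sketch of the calibration and setup code that the next few paragraphs walk through. The images/2ft.png path assumes the example images from the downloads sit in an images directory, and marker[1][0] is the box width (in pixels) returned by cv2.minAreaRect:

# initialize the known distance from the camera to the object,
# which in this case is 24 inches
KNOWN_DISTANCE = 24.0

# initialize the known object width, which in this case is the 11 inch
# width of a piece of paper laid out horizontally
KNOWN_WIDTH = 11.0

# load the image that is KNOWN to be 2 feet from our camera, find the
# paper marker in it, and derive the focal length via F = (P x D) / W
image = cv2.imread("images/2ft.png")
marker = find_marker(image)
focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH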

The first step to finding the distance to an object or marker in an image is to calibrate and compute the focal length. To do this, we need to know:

  • The distance of the camera from an object.
  • The width (in units such as inches, meters, etc.) of this object. Note: The height could also be utilized, but this example simply uses the width.

Let’s also take a second and mention that what we are doing is not true camera calibration. True camera calibration involves the intrinsic parameters of the camera, which you can read more on here.

On Line 28 we initialize our KNOWN_DISTANCE from the camera to our object to be 24 inches. And on Line 32 we initialize the KNOWN_WIDTH of the object to be 11 inches (i.e. a standard 8.5 x 11 inch piece of paper laid out horizontally).

The next step is important: it’s our simple calibration step.

We load the first image off disk on Line 37 — we’ll be using this image as our calibration image.

Once the image is loaded, we find the piece of paper in the image on Line 38, and then compute our focalLength on Line 39 using the triangle similarity.

Now that we have “calibrated” our system and have the focalLength, we can compute the distance from our camera to our marker in subsequent images quite easily.

Let’s see how this is done:
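A sketch of that loop, including the cv2.cv.BoxPoints vs. cv2.boxPoints rename between OpenCV 2.4 and OpenCV 3+ (a change that comes up several times in the comments below):

# loop over the example images
for imagePath in sorted(paths.list_images("images")):
    # load the image, find the marker in the image, then compute the
    # distance from the camera to the marker
    image = cv2.imread(imagePath)
    marker = find_marker(image)
    inches = distance_to_camera(KNOWN_WIDTH, focalLength, marker[1][0])

    # draw a bounding box around the marker and display the distance,
    # handling the boxPoints rename across OpenCV versions
    box = cv2.cv.BoxPoints(marker) if imutils.is_cv2() else cv2.boxPoints(marker)
    box = np.int0(box)
    cv2.drawContours(image, [box], -1, (0, 255, 0), 2)
    cv2.putText(image, "%.2fft" % (inches / 12),
        (image.shape[1] - 200, image.shape[0] - 20),
        cv2.FONT_HERSHEY_SIMPLEX, 2.0, (0, 255, 0), 3)
    cv2.imshow("image", image)
    cv2.waitKey(0)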

We start looping over our image paths on Line 42.

Then, for each image in the list, we load the image off disk on Line 45, find the marker in the image on Line 46, and then compute the distance of the object to the camera on Line 47.

From there, we simply draw the bounding box around our marker and display the distance on Lines 50-57 (the boxPoints are calculated on Line 50, taking care to handle OpenCV 2.4 and OpenCV 3+ versions).

Results

To see our script in action, open up a terminal, navigate to your code directory, and execute the following command:
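Assuming the script and the images directory live in the same folder, that command is simply:

$ python distance_to_camera.py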

If all goes well you should first see the results of 2ft.png, which is the image we use to “calibrate” our system and compute our initial focalLength:

Figure 2: This image is used to compute the initial focal length of the system. We start by utilizing the known width of the object/marker in the image and the known distance to the object.

From the above image we can see that our focal length is properly determined and the distance to the piece of paper is 2 feet, per the KNOWN_DISTANCE and KNOWN_WIDTH variables in the code.

Now that we have our focal length, we can compute the distance to our marker in subsequent images:

Figure 3: Utilizing the focal length to determine that our piece of paper marker is roughly 3 feet from our camera.

In the above example our camera is now approximately 3 feet from the marker.

Let’s try moving back another foot:

Figure 4: Utilizing the computed focal length to determine our camera is roughly 4 feet from our marker.

Again, it’s important to note that when I captured the photos for this example I did so hastily and left too much slack in the tape measure. Furthermore, I also did not ensure my camera was 100% lined up on the foot markers, so again, there is roughly 1 inch error in these examples.

That all said, the triangle similarity approach detailed in this article will still work and allow you to find the distance from an object or marker in an image to your camera.

Summary

In this blog post we learned how to determine the distance from a known object in an image to our camera.

To accomplish this task we utilized the triangle similarity, which requires us to know two important parameters prior to applying our algorithm:

  1. The width (or height) in some distance measure, such as inches or meters, of the object we are using as a marker.
  2. The distance (in inches or meters) of the camera to the marker in step 1.

Computer vision and image processing algorithms can then be used to automatically determine the perceived width/height of the object in pixels, completing the triangle similarity and giving us our focal length.

Then, in subsequent images we simply need to find our marker/object and utilize the computed focal length to determine the distance to the object from the camera.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

221 Responses to Find distance from camera to object/marker using Python and OpenCV

  1. joe January 22, 2015 at 12:11 pm #

    How could you apply these techniques to sports events photos taken with a telephoto lens?

    • Adrian Rosebrock January 22, 2015 at 12:43 pm #

      You would have to estimate the intrinsic parameters of the camera which requires calibration with a “chessboard”.

  2. André January 27, 2015 at 7:14 am #

    HI Adrian,

    Nice post, I will try to implement the idea to determine the size of an object after the calibration.

  3. Hajar February 6, 2015 at 11:38 am #

    Great article Adrian, it was really helpful! Thank you so much!

    • Adrian Rosebrock February 6, 2015 at 1:15 pm #

      I’m glad you enjoyed it Hajar! 🙂

  4. JD April 11, 2015 at 2:01 am #

    hey, Adrian

    it’s a great piece of work you have done here and i used this technique working for my android device , and i want to take it to next level by measuring multiple object distances..but getting same results if two objects are on same position. any suggestions would be appreciated

    thnx

    • Adrian Rosebrock April 11, 2015 at 8:54 am #

      Hi JD, if you are looking to measure multiple objects, you just need to examine multiple contours from the cv2.findContours function. In this example, I’m simply taking the largest one, but in your case, you should loop over each of the contours individually and process them and see if they correspond to your marker.

    • Aminah September 9, 2017 at 2:17 pm #

      I want to use this technique with android device too. 🙂

  5. Anton April 12, 2015 at 6:05 am #

In the above example, we know the width of the object. What if we don’t know the width of the object? For example, in a real life situation where a robot needs to navigate: it meets unknown objects and needs to find the distance even though it has no knowledge of each object’s actual width. Any other examples?

    • Adrian Rosebrock April 12, 2015 at 7:18 pm #

      In general, you’ll need to know the dimension of the object in order to get an accurate reading on the width/height. However, in unconstrained environments you might be able to use a stereo camera so you can compute the “depth” of an image and do a better job avoiding obstacles.

  6. Abu April 25, 2015 at 12:32 pm #

    Is it possible to do this using the back end of a car in realtime? Maybe a HOG classifier could detect it and than the program? Any insight would be helpful.

    Thanks

  7. Dries May 2, 2015 at 6:20 am #

    Hi there Andrian!

First of all i’m very thankful for this and other tutorials out there! This is by far the best series of tutorials online! Perfectly described step by step and explained why to perform every step 🙂 I can’t thank you enough!

    I’m trying to do the same thing as described in the above tutorial (finding an object and determine the distance between obj and camera) but i wonder how i should do it when using constant streaming video instead of loading images?

    thanks!

    • Adrian Rosebrock May 2, 2015 at 7:10 am #

      Hi Dries, thanks for the great comment, it definitely put a smile on my face 🙂 As for when using a constant video stream versus loading images, there is no real difference, you just need to update the code to use a video stream. It’s actually a lot easier than it sounds and I cover it both in Practical Python and OpenCV and this post.

  8. lady kenna June 7, 2015 at 2:11 am #

    what if i wanted to display cm instead of ft?

    • Adrian Rosebrock June 7, 2015 at 6:44 am #

      Your final metric is completely arbitrary — you can use feet, meters, centimeters, whatever you want. Take a look at Lines 25 and 29 and redefine them using the metric you want. And then you’ll need to modify Line 52 to output your metric instead of feet.

  9. raj August 18, 2015 at 11:41 am #

    Hi there,
    when i try the same code using 2 feet and an image of 1 inch ,the focal length is around 1260 . Is this ok ? coz im getting unacceptable distances of around 3.4 feet for 6 feet… Are there any limits for this method . I find this method really interesting , i am thinking forward to do this in my project .
    One more thing , will this work for webcam from a laptop .
    Thanks in advance

    • Adrian Rosebrock August 19, 2015 at 6:50 am #

      The main limitation of this method is that you need to have a straight-on view of the object you are detecting. As the viewpoint becomes angled, it distorts the calculation of the bounding box and thus the overall returned distance. And yes, this method will work with your laptop webcam, you just need to update the code to grab frames using the cv2.VideoCapture function. See this post for an example of grabbing frames from the webcam stream.

  10. David September 4, 2015 at 2:15 pm #

    Hi Adrian, Excelent tutorial i have a question for you, i hope you can help me.

    i need the position (X,Y,Z) in mm, with your tutorial i could get the point z, my problem is when i calibrate the camera to get the points X,Y my Z is wrong.

    do you know what happens?

    Thanks.

    • Adrian Rosebrock September 5, 2015 at 5:28 am #

      Are you doing work in 3D? If so, this method is not suited for 3D and you’ll need to use a different calibration method to examine the intrinsic parameters.

      • David September 6, 2015 at 11:50 pm #

        Hi Adrian, Thanks for answer me.

I’ll tell you what my project is. I detect a small object in real time with a webcam; with the position of the object, I will point a laser mounted on two servos, one for the X axis and one for the Y axis.

I have the detection and tracking ready, but I’m having trouble getting the positions X, Y, Z. I researched the transformation from 3D to 2D but there are certain points I do not understand.

        do you can help me?

        Thanks a lot

        • Matt April 20, 2016 at 8:39 am #

          Hi David !

Currently, I’m working on a similar project, and I have a problem with the relationship between coordinates in 3D and the servos. First, I would like to compute and track my object’s 3D coordinates but it does not work.

          Did you find an idea ?

          Thanks

  11. murugan October 16, 2015 at 10:35 am #

    awesome program.but how to use it for a real streaming purpose.

    • Adrian Rosebrock October 17, 2015 at 6:46 am #

      This code can be easily adapted to work in real-time video streams. I would start by looking up the cv2.VideoCapture function. I use it multiple times on the PyImageSearch blog — I think this post can help get you started.

  12. murugan October 20, 2015 at 1:17 am #

    hi sir
    i tried cv2.videocapture but ended with errors so i request you to modify the program

    • Adrian Rosebrock October 20, 2015 at 6:11 am #

      If you are getting errors related to cv2.VideoCapture you should ensure that your installation of OpenCV has been compiled with video support.

  13. hyshan October 21, 2015 at 9:43 pm #

    Hi Adrian, how do you define the marker?

    • Adrian Rosebrock October 22, 2015 at 6:19 am #

      In the case of this blog post, I defined a marker as a large rectangle. Rectangle-like regions have the benefit of being easy to find in an image. Markers can be made more robust by adding (1) color or (2) any type of special design on the marker themselves.

  14. li October 26, 2015 at 11:26 am #

    excuse me, I want to know how to realise it in real-time camera?

    • Adrian Rosebrock October 27, 2015 at 4:55 am #

      In order to perform real-time distance detection, you’ll need to leverage the cv2.VideoCapture function to access the video stream of your camera. I have an example of accessing the video stream in this post. I also cover accessing the video stream more thoroughly inside Practical Python and OpenCV.

  15. Shatha Omar November 1, 2015 at 11:07 pm #

    Hi , this is a great helpful job
please i’m working on an application that takes an image then finds the object to calculate its dimensions, so i need to know the distance, or if there is another blog/article/reference you can help me with

    • Adrian Rosebrock November 3, 2015 at 10:18 am #

In order to compute the distance to an object in an image, you need to be able to identify what the object is. For example, you could maintain a known database of objects and their dimensions, so when you find them, just pull out the dimensions and run the distance calculation.

  16. khalil November 19, 2015 at 1:29 pm #

    Thanks for your nice post.. it is really a good job.. dear, is there any additional procedure for which i can use this procedure for the unknown object distance measurement from camera….

    • Adrian Rosebrock November 20, 2015 at 6:30 am #

      You’ll need to know the size of some object in the image to perform camera calibration. In this example, we have a piece of paper. But you could have used a coin. A coffee cup. A book. But the point is that you need to know the size of object(s) you’ll be using to perform the camera calibration using triangle similarity.

  17. Minjae November 24, 2015 at 2:39 am #

    Hi, Thank you for this post.

    I’m looking for like Johnny Chung Lee’s Wii headtracking in VR Juggler through VRPN projects.

    Anyway,
when I tried this code, there were some issues.
    My pi2 connected with noir-picam and ms-webcam.

    1. X-window
    (image:1297): GdkGLExt-WARNING **: Window system doesn’t support OpenGL.

    2. X-wondow with cv virtualenv
    Traceback (most recent call last):
    File “distance_to_camera.py”, line 41, in
    marker = find_marker(image)
    File “distance_to_camera.py”, line 16, in find_marker
    (cnts, _) = cv2.findCountours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack (expected 2)

    3. console
    (image:867): Gtk-WARNING **: cannot open display:

    4. cv virtualenv
    same with 2.

    Please let me know the reason…

    • Adrian Rosebrock November 24, 2015 at 6:35 am #

      Hey Minjae:

1. I’m not a Windows user, so I’m not sure about this particular problem. Perhaps another PyImageSearch reader can help you out here.

2. The code associated with this post was built for OpenCV 2.4. You are using OpenCV 3. You need to update the cv2.findContours function call here to work with OpenCV 3.

      3. Are you SSH’ing into your Pi? If so, make sure you pass in the -X flag for X11 forwarding.

  18. nami November 25, 2015 at 9:14 am #

    Hi adrian,

    Thanks for sharing.
    I have tried your code and its work.
    But is it possible to calculate the object distance if the object size in real world ( width / height ) unknown? Just using internal camera parameter and object size in pixels..

    • Adrian Rosebrock November 25, 2015 at 1:54 pm #

      Hey Nami, you might want to look into camera calibration and the intrinsic camera parameters.

      • nami November 26, 2015 at 12:11 am #

        Hi adrian,

        So after we do camera calibration and get focal length ( fx, fy), principal point, etc. How can i measure the object distance? with object size in real world undefine..
        Thanks..

        • Adrian Rosebrock November 26, 2015 at 6:52 am #

          You basically need to convert the “pixel points” to “real world” points, from there the distance between objects can be measured. This is something I’ll try to cover on PyImageSearch in the future, but in the meantime, this is a really good MATLAB tutorial that demonstrates the basics.

  19. mert January 7, 2016 at 7:54 am #

Hello Adrian. First of all, this is a great website. I have a question: can this distance estimation be done in real time processing applications? I mean I want to measure the distance from an object (circular and colorful) to my robot while the robot moves on a straight line. I use a moderate camera with low resolution (usb cam). How should I do that?

    • Adrian Rosebrock January 7, 2016 at 12:36 pm #

      Yes, this can absolutely be done in real-time processing, that’s not an issue. As long as you perform the calibration ahead of time, you can certainly perform the distance computation in your video pipeline.

      • mert January 8, 2016 at 7:28 am #

        Thanks for the help and your fast reply man. Camera Calibration looks like complicated though. Cause I wiil look at an object from an angled position say 30 degree(initially) while I m moving on a straigt line the angle will increase. In each time. So I have to make sure that the object is almost middle in the frame to use above code?

        • Adrian Rosebrock January 8, 2016 at 9:21 am #

          Camera calibration (at least the calibration discussed in this post) is actually pretty straightforward. This post assumes you have a 90-degree straight-on view of the object. For angled positioning this approach won’t work well unless you can apply a perspective transform. You should look into more advanced camera calibration methods.

          • mert January 8, 2016 at 1:52 pm #

            Got it. I’ll get right on it. Hope to meet you

  20. JolyDroneSP January 18, 2016 at 9:17 am #

    Hi Adrian. Thanks for the good work, it’s the most concise guide to this topic that I have found!

    By the way, what camera did you use for this project?

    I am trying to implement it through raspberry pi board camera and it doesn’t seem to work…

    • Adrian Rosebrock January 18, 2016 at 3:19 pm #

      I simply took photos using my iPhone for this post, but the code can work with either a built-in/USB webcam or the Raspberry Pi camera. If you’re having trouble getting your Raspberry Pi camera + code working, I suggest reading this post on accessing the Raspberry Pi camera.

  21. Tejas January 22, 2016 at 1:40 pm #

    Hello Adrian!

    First of all, thank you for a wonderful tutorial as always. I am working on a similar project and need some help.

    1. Is there a literature review available regarding all methods of depth estimation without using a stereo camera i.e. using only a normal single lens camera? If not, can you point me to resources – papers and hopefully implementations – about the state-of-the-art on this problem? I should mention that I am mainly interested in understanding the physics of the scene and not reconstructing per se. I haven’t been able to find any decent open implementations of these and that’s kind of sad.

    2. I am investigating the importance of head movements in animals(humans included) for depth perception and came across few decade old papers on the topic. Can you provide me references on how motion of camera affects detection of edges, depth estimation etc from a computer vision perspective? Aligning with point 1, I am looking for something on the lines of how one can estimate depth accurately by moving the single lens camera and detect edges and/or object boundaries by virtue of this movement. Methods like triangle similarity aren’t really helpful since they need an estimate of the original size of object/marker in question.

    Please help me out with this and keep up the good work! 🙂

    • Adrian Rosebrock January 22, 2016 at 4:43 pm #

      For both questions, my suggestion would be start off with this paper and follow the references. This paper is heavily cited in the CV literature and links to previous works that should also help you out.

  22. Tyrone Robinson February 5, 2016 at 2:43 pm #

    I may be coming in a little for this post but Im having trouble with the code.
    When I run it I get the following error. Can you please assist me with this.
    By the wayI think you are doing an awesome job.

    • Adrian Rosebrock February 6, 2016 at 9:56 am #

      Please see my reply to Minjae above and read this post for more information.

      • Swan August 22, 2016 at 7:52 pm #

        Thanks for that clarification!

  23. azizul February 17, 2016 at 3:24 am #

    hello Adrian. it a very good work.
    but can i ask question, how to implement stereo vision in this project?

    • Adrian Rosebrock February 17, 2016 at 12:36 pm #

I honestly don’t do much work in stereo vision, although that’s something I would like to cover in 2016. Once I get a tutorial going for stereo vision, I’ll be sure to let you know!

      • azizul February 18, 2016 at 12:26 am #

        i will be really please to hear the news from u…thnk u very much 🙂

  24. Jon February 21, 2016 at 5:52 pm #

    Hi Adrian!

    First off, kudos on making such a complex system seem so intuitive! It makes the process look so much less intimidating to us newbies (whether I’m able to follow along once my picam gets here is another story!)

    My question for ya is this: assuming I can manage to follow along and get distance readings for my marker, how difficult would it be to add the code required to trigger an event on a device that the marker is mounted to? In my case, I am looking to vibrate a small motor when the tracked object is more than 10 feet away. Ideally, it would increase in intensity based on how much further the object gets from the camera. I know this would require some investment in more hardware, but does it sound like a plausible idea?

    Thanks for posting this, and being so generous with helping out in the comments!

  25. Randy April 7, 2016 at 6:38 am #

    In OpenCV 3, we must use cv2.boxPoints(marker) instead of cv2.cv.BoxPoints(marker).

    • Adrian Rosebrock April 7, 2016 at 12:33 pm #

      You’re absolutely right Randy!

      • Kenton May 26, 2017 at 1:15 pm #

        Hey,

        I’m messing with this stuff now and it’s not working out too well for this part.
        My error is cv2 has no attribute boxPoints.
        Is there a way around this you can think of?

        Thanks

        • Adrian Rosebrock May 28, 2017 at 1:07 am #

          Hi Kenton — can you check which version of OpenCV that you are using? The cv2.boxPoints function is named differently depending on your OpenCV version.

  26. Kevin April 13, 2016 at 7:50 pm #

    Hey Adrian, great work! This was very informative and well done. I have a quick question regarding the limitations of such an approach when using larger distances. For example, 30 feet. At this distance, a relatively small object may be represented by very few pixels right? I’m assuming the better the camera (with better resolution) the farther the range in which this approach can still be accurate. My question is whether my assumption is indeed correct?

    Furthermore, I find that when I utilize this approach, the distance calculation sometimes fluctuates as the perceived width in pixels fluctuates. Could this be due to noise? And if so, what are some good techniques to reduce said noise? I’ve looked into the blur function in OpenCV, but I haven’t had much luck with that.

    Thanks again for the website, it’s super helpful. Look forward to hearing from you. Thank you!

    • Adrian Rosebrock April 14, 2016 at 4:51 pm #

Indeed, that is correct. The farther away the object is and the smaller the resolution of the camera is, the less accurate the approximation will be. As for reducing noise, that is entirely dependent on the image data you are working with. Blurring is one way to reduce noise. You might also want to look into the segmentation process and ensure you are obtaining an accurate segmentation of the background from the foreground. This can be improved by tweaking the Canny edge detection parameters, threshold values, etc.

  27. mahdi April 16, 2016 at 8:17 am #

    hello, please hellp me can i run this with opencv 3.1.0 ?

    • Adrian Rosebrock April 17, 2016 at 3:31 pm #

      Please see my response to “Tyrone” above — the only change needed for this code to run with OpenCV 3 is to modify the cv2.findContours.

  28. Abdul Javed May 8, 2016 at 2:59 pm #

    Hello Adrian… i just wanted to know that how can i use this distance recognition technique to make a 2D map of a vertical wall(whose photo can be taken easily) to precisely know the position of doors windows and other stuffs on the wall and the distances between each other and their dimensions with certain accuracy……???

    • Adrian Rosebrock May 9, 2016 at 6:54 pm #

Hi Abdul — I’m not quite sure what you mean by a 2D map of a vertical wall, but if you want super precise measurements between doors, windows, etc., then I would suggest using a 3D/stereo camera instead of a 2D camera. This will give you much better results.

  29. Matt May 10, 2016 at 8:07 am #

    Hi Adrian,

    Thanks for your tuto again. I have a question. What happened if you tilt the paper with an angle, with respect to the camera ? Does your algo consider it ?

    Thanks.

    • Adrian Rosebrock May 10, 2016 at 8:12 am #

Yes, even with the paper tilted, this algorithm will still find the paper, provided that the paper is the largest contour area in the image. For a more robust algorithm for finding rectangular regions (and verifying that they indeed have 4 vertices), please see this post.

      • Matt May 10, 2016 at 9:31 am #

        Ok, but for example, if you tilted your paper with an angle of 90 degrees, you do not detect a rectangle, so you do not know the distance between the object and the camera no?

        • Adrian Rosebrock May 10, 2016 at 6:24 pm #

          Hey Matt, I’m not sure I understand your question — a rectangle has 4 vertices, no matter how you rotate it. An easy way to detect rectangles in an image is to simply use contour approximation, which I mentioned in my previous comment.

          • Matt May 12, 2016 at 5:27 am #

            If you consider z the axis on which you compute the distance object-camera, x and y the additional axes, and if you rotate with an angle 90 degrees around x-axis or y-axis, your camera do not detect a rectangle but a straight line. So what happens in this case ?

            Thanks.

          • Adrian Rosebrock May 12, 2016 at 3:33 pm #

Oh, you were referring to the z-axis, that was my mistake. In that case, you would need to utilize a more advanced algorithm to detect your object. This blog post is primarily geared towards easily detectable objects and computing the distance.

  30. Frane May 30, 2016 at 4:49 pm #

    Hi Adrian, i need to find the distance and cordinates of the red marker. I made the filter to see red color only but i have problem considering distance. I lose the “seeing” the red color after 10cm (the markers are 3 red circle diameter 10cm each in triangle formation). Im using raspberry pi 2 b+ and pi camera

    • Adrian Rosebrock May 31, 2016 at 3:47 pm #

How are you looking for the red color? Via color thresholding? If so, investigate the mask that is being generated from cv2.inRange and see if the red color region exists in the mask. If it does, then you’ll likely want to look at the contours being detected and see if the red masked region is being returned.

  31. yousaf shah June 5, 2016 at 7:59 am #

    how to work when difference size of same object?

    • Adrian Rosebrock June 5, 2016 at 11:20 am #

      You normally would use a single reference object to calibrate your camera. Once you have the camera calibrated, you can detect the distances/object sizes of varying sizes. See this blog post for more information.

  32. Farzaneh Golkhoo June 5, 2016 at 2:25 pm #

    Hi Adrian
    Thanks for your great information, just I have a question.
    if you take a picture of a special object for example in the ceiling and you are required to know the distance of that object from the camera but not the perpendicular distance, what should we do?
    In fact I know the exact location of the camera (x,y,z) and also the location of the object in the ceiling in terms of (x,y) but I do not know the z of the object. I want to measure z.
    As you said I can measure the perpendicular distance between the camera an the object by taking a picture of the object, but I have to know the direct distance (not perpendicular) between the camera and the object.

    Thank you so much

  33. Mr.E June 8, 2016 at 7:36 am #

Hi dear Adrian. Thanks for all of your good and useful information. I had very good results with this algorithm on mobile photography and the raspberry camera, but I have a big problem with digital cameras which have lenses (DSLR or compact). Can you help me with this? If I know the F of the camera, for example 3.2, how should I put it in my calculation?
What I calculate for (F) is about 3600~3700, and so it gives me the wrong answer.

    • Adrian Rosebrock June 9, 2016 at 5:27 pm #

      There is a difference between the focal length of the physical lens and the perceived focal length from the image. You’ll need to calibrate your DSLR camera in the same way that you performed the calibrations on the Pi and mobile phone.

  34. Chandrama June 13, 2016 at 4:28 am #

    Hello i am trying to use your code but not able to get output ,

    getting error like..

    Please help to resolve

    • Adrian Rosebrock June 15, 2016 at 12:51 pm #

      Please read the comments to this post before you post as I’ve already addressed this issue multiple times. Since you’re using OpenCV 3, you need to change the cv2.findContours call to:

      (_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

      You can read more about the change to cv2.findContours between OpenCV 2.4 and OpenCV 3 in this blog post.

  35. Talgat July 12, 2016 at 4:34 am #

    Thank you!

Excellent article! But one little mistake that can confuse beginners: you wrote “perceived width of the paper is P = 249 pixels” but in the calculations you used 248. Hope to see a tutorial on finding the distance to a randomly chosen object by using a stereo pair of cameras.

    • Adrian Rosebrock July 12, 2016 at 4:32 pm #

      Thanks for pointing out the typo! I have corrected it now.

  36. Amigo July 24, 2016 at 8:11 am #

    Please comment on how accurate can this method be, i.e mm cm or in inches etc. also if I want to find distance between two objects, how should I modify this code?

    • Adrian Rosebrock July 27, 2016 at 2:40 pm #

The level of accuracy depends on the resolution of your camera. The smaller the resolution, the less accurate. The further the objects are away, the less accurate. This script does not perform radial distortion correction, which is something else to consider. As for finding the distance between two objects, please see this post.

  37. Shiva September 6, 2016 at 2:19 am #

    Hello Adrain,
    Great Article to start with the distance estimation.
    I have downloaded your code and trying to validate with images, but i am not getting the distance as expected. Could you please send me some reference images.

    • Adrian Rosebrock September 6, 2016 at 3:38 pm #

      Hey Shiva — the downloads to this blog post also include the example images I included in this post. You can use these images to validate your distances.

  38. Thommy September 8, 2016 at 2:36 am #

    Hai Adrian, when i try run the code, i got error :

    Please, hope the resolve

    • Adrian Rosebrock September 8, 2016 at 1:15 pm #

      The code for this blog post was intended for OpenCV 2.4; however, you are using OpenCV 3. You can resolve the issue by changing the code to:

      box = cv2.boxPoints(marker)

      • Su November 1, 2017 at 2:09 am #

        Hi Adrian,
        Can you please help me here?

        Traceback (most recent call last):
        File “distance_to_camera.py”, line 53, in
        box = np.int0(cv2.BoxPoints(marker))
        AttributeError: ‘module’ object has no attribute ‘BoxPoints’

        __________________
        >>> cv2.__version__
        ‘3.3.0’

        • Adrian Rosebrock November 2, 2017 at 2:31 pm #

          See my reply to “Thommy” above.

  39. Ankit September 17, 2016 at 9:24 am #

    Hello Adrian Rosebrock,
    Your explanation helped me understand the concept very well. I am still perplexed by another problem. For example, I have a calibrated camera, i.e. I know the focal lengths and the optical offsets of the lens and sensor. Then, the relation between pixel and the actual height/width of an object is true only if the object is placed at the focal length. If I place an object of unknown dimensions at an unknown distance from the camera lens, then there is no way to estimate the distance between them. Am I thinking right or is something missing? Can you please help.

    Thanks

  40. Ondrej September 17, 2016 at 4:48 pm #

Hi. I was thinking about making a mobile app which will measure the width and height of some objects by using dual cameras like this: www.theverge.com/2016/4/6/11377202/huawei-p9-dual-camera-system-how-it-works. Do you think that it will work? Thank you for your answer.

    • Adrian Rosebrock September 19, 2016 at 1:10 pm #

      Using two cameras you can measure the “depth” of an image. I don’t cover this on the PyImageSearch blog, but it is absolutely possible.

  41. Erwin October 5, 2016 at 1:52 am #

    Hi Adrien,
    Thanks for the information. I have one question: Is it possible to incorporate the distance estimation with the ball tracking code you have? I am currently working on a project whereby I am trying to detect an object with the color green and find the distance between the camera and the object.

    • Adrian Rosebrock October 6, 2016 at 6:55 am #

      Absolutely, but you need to calibrate your system first as I did in this blog post only this time using the green ball. From there you will be able to estimate distance.

      • Erwin October 7, 2016 at 1:26 am #

Sorry, I seem to have phrased my question wrongly. What I meant was: is it possible to find the distance from the camera to the green ball in real time? So instead of making it detect edges, I modified it to detect green?

        • Adrian Rosebrock October 7, 2016 at 7:21 am #

          Yes, it’s absolutely possible. As long as you perform the calibration step before you try to find the distances you can use the same technique to determine the distance to the ball in real-time.

          • Erwin October 10, 2016 at 3:53 am #

            Your help is greatly appreciated. Thank you.

  42. Josue Godinez October 8, 2016 at 1:38 pm #

    Hello Adrian, nice post.

    Just a simple question. How you determine the “focalLength”?

    • Adrian Rosebrock October 11, 2016 at 1:09 pm #

      Take a look at the example in the “Triangle Similarity for Object/Marker to Camera Distance” section to see how focal length is computed. Otherwise, Lines 38 and 39 compute the focalLength variable.

  43. John October 9, 2016 at 6:09 am #

    Hello . I want to use this code to detect images real time using the PiCamera. I read your replies and honestly have no idea how to ” Use the cv2.VideoCapture function to access the stream of your camera ” . Playing with the code results in all sorts of errors. Could you help me by giving a detailed explanation ? Thanks

  44. Mostafa Sabry October 17, 2016 at 3:00 pm #

Hi Adrian,
I am a big fan of your posts; I was impressed with all of what I have seen. I have a problem in my thesis that I guess you might help me with, related to localization. My thesis project is automating the process of a lawn mower. The mower will be given the size of a rectangle to cover, then it will move back and forth starting from one of the corners. It will track its distance using wheel encoders; however, these are subject to error due to slippage and drifting. Therefore, we need a reliable method through which we can know the absolute location to correct for the relative localization error of the wheel encoders. We have tried several methods and all had some problems:
1- Tracking successive features in frames to estimate the rotational and translational matrices; however, for sharp turns it loses track of everything.
2- Triangulation by placing three balls of different colors and identifying the angle of each through a camera on a motor; however, using colors outdoors is unreliable due to different lighting at different times of day. When you decrease the HSV range it becomes more accurate but increases the probability of losing the balls, and increasing the range catches noise from the environment.
3- Using homography instead of colored balls for triangulation, but it is computationally slow.
4- Using April Tags or Aruco tags, but as mechanical engineers we are finding it hard to develop our algorithm, and we still didn’t find a starting point to continue from by finding a code and understanding it.
Hope you can help us, and sorry for the long post

    • Adrian Rosebrock October 17, 2016 at 3:54 pm #

      To start, it’s always helpful to have actual real-world images of what you’re working with. This helps me and other readers visually see and appreciate what you are trying to accomplish. Myself, as I imagine many other readers, don’t know much about the intricacies of lawn mowers, wheel encoders, or slippage/drifting.

      That said, based on your comment it seems that the homography is producing the correct results, correct? And if that’s the case why not focus your efforts on speeding up the homography estimation? Try to reduce the number of keypoints? Utilize binary rather than real-valued descriptors? Implement this part of the algorithm in C/C++ for an extra speed gain?

  45. Eren November 30, 2016 at 8:55 am #

hi Adrian. Firstly, thank you for sharing. Nowadays I am working on similar projects,
so I have a question: there is an object with a known width W and I don’t know the distance D from my camera. In fact, what I will do is put a rectangular object (a box) with a known size under the camera and measure the distance to the object, but I only know the distance between the camera and the ground. How to do this?

    • Adrian Rosebrock December 1, 2016 at 7:34 am #

If you know the distance from the ground to the camera and know the size of the object in both units (inches, millimeters, etc.) then you should be able to apply some trigonometry to work out the triangle property. I haven’t actually tried this, so I’m just thinking off the top of my head. It might not work, but it’s worth a shot.

  46. Alex December 6, 2016 at 10:06 am #

    Hi Adrian, first of all, excellent article!
    I have a question, regarding this code. I’m currently carrying out research for my dissertation which requires using stereo vision to calculate distance from the camera to a chosen object/area. Will this code be applicable for stereo vision as well ?

    Thanks!

    • Adrian Rosebrock December 7, 2016 at 9:42 am #

No, I would not use this code for stereo vision. The reason is because stereo cameras (by definition) can give you a much more accurate depth map. I don’t do much work in stereo vision, but this short tutorial on computing a depth map should help you out.

  47. Alpha January 5, 2017 at 11:10 am #

    i am doing a project in which i need to get the exact location of a human at a distance. Exact location refers to the EXACT location as used by a gun to aim at an enemy. Here, i dont have any marker object or any known distances. Any way out?

    • Adrian Rosebrock January 7, 2017 at 9:38 am #

      You would need to compute the intrinsic properties of the camera first. I would suggest starting here.

  48. Tanya January 13, 2017 at 12:37 am #

    Hey Adrian,

    This is super nice tutorial ever!!
    I am working on the project about detecting the distance changing of array sphere. (changing like mm in unit)

    I gotta put the camera at the same axis of the movement so its very hard to see the difference.( the sphere is very small like 2 mm)

    Do you think is it possible to detect using your algorithm?

    • Adrian Rosebrock January 13, 2017 at 8:32 am #

      It really depends on the quality of the images themselves. How high is the resolution of your image capture?

      • Tanya January 15, 2017 at 7:36 pm #

        around 640 × 480 pixels/each image
        for area of 1 x 2 cm

        • Adrian Rosebrock January 16, 2017 at 8:09 am #

          Hmm, that’s a pretty small resolution for that accurate of results. The first step would be to calibrate your camera and account for barrel distortion. If your images are really noisy and you can’t accurately segment the object from every image, then you’re in for some real trouble. But if you can get a nice segmentation I would give it a try and see what results you come up with. It’s best to experiment with projects like these and note what does and does not work.

  49. Dasaradh S K January 14, 2017 at 9:33 am #

    Hi Adrian. i would like to calculate the distance between a drone flying at a good height (with a cam & MCUs) and people at the ground. Please help me Adrian. 😀

  50. Rahmat Hardian Putra January 15, 2017 at 7:15 am #

    Hi Adrian.. your article is a really great tutorial 😀
    btw i’m working on a project that measure distance of a fire, but fire tend to change it shape, whether it smaller or bigger, so the marker also get bigger or smaller, therefore I can’t define the width and height of the marker.

    the question is, can I modify your algorithm so that I can measure distance with my condition ? or do you have another method that suitable to measure distance of a fire ?
    Thanks in advance

    Regards
    Rahmat Hardian

    • Adrian Rosebrock January 15, 2017 at 12:01 pm #

It sounds like you might need a more advanced calibration technique. I personally haven’t done/read any research related to fire detection techniques with computer vision, but I would suggest reading up on intrinsic camera properties for calibration.

  51. kartik pandey February 27, 2017 at 8:09 am #

    can you please describe the hardware part of this amazing project of yours as i need it for a small project of mine too like how did u integrate the phones camera or did u use raspberry pi with a camera module please let me know

    • Adrian Rosebrock February 27, 2017 at 11:02 am #

I used my iPhone to capture the example photos. The photos were moved to my laptop. And then I processed the photos using my laptop. The actual method used to capture the photos doesn’t matter as long as (1) it’s consistent and (2) you are calibrating your camera.

  52. Musa April 17, 2017 at 3:44 pm #

    Hi Adrian,
    I’m trying to copy this method for a USB camera and have used your previous posts to modify it to work with a camera using a while loop. The problem I’m having is the max function in find_marker is constantly coming out as None hence resulting in the distance_to_camera function throwing an error. Do you have any idea why this could be?

  53. Horus May 19, 2017 at 5:18 am #

    Hi,

    I have an CT 2D image with two projection. in the image there are spherical objects, free or occluded. How can I get the depth of these spheres and their centre. An idea or any help would be great.

  54. Israa June 10, 2017 at 6:24 pm #

    Hi adrian,
    please help me
    when I use the same code and the same images, I get these results .

    inches:
    24.0
    201.873157429
    17.7140389174

    what is error?

  55. youssef June 13, 2017 at 4:21 am #

Hey adrian, this website is the bestest reference for beginners, but may I ask: if for example the camera is not looking straight at the object, the distance in pixels would change, so it wouldn’t work, right?
    if my logic was correct how to overcome this ?
    thank you

    • Adrian Rosebrock June 13, 2017 at 10:54 am #

      If there is variation of viewpoint then you would definitely want to calibrate your camera by computing the intrinsic properties of the camera. This will give you more reliable results.

  56. Randy June 15, 2017 at 8:45 am #

    Hi, excellent tutorial, i got the following error, maybe you or someone can help me:
    OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor,

    • Adrian Rosebrock June 16, 2017 at 11:17 am #

      Hi Randy — double check your image paths to cv2.imread. It looks like your image path was invalid, causing cv2.imread to return None. The cv2.imread function does not throw an error when an invalid image path is supplied.

  57. Sid June 29, 2017 at 2:51 am #

    Hi Adrian,
    Amazing tutorial to get me started with the marker detection. It was really easy to understand with all the explanation that you’ve given! Cheers to that 🙂

Let’s say I wish to detect an object which is totally black with a size of (2 in x 2 in). How would I be able to do that with your code?

    Thanks

    • Adrian Rosebrock June 30, 2017 at 8:13 am #

      I would suggest using color thresholding (rather than contour properties) to find your black object.

  58. Vaibhav Jha July 12, 2017 at 7:57 am #

    Hey Adrian,

    In order to calculate you used L = W*F/P
    L= length of the object from cam
    W=Width of object
    F=Focal length
    P= Pixel width of the object

    But in the code for the pixel width you supplied the value marker[1][0].

    my question:
    1) What is the meaning of marker[1][0]?
    2) Also, shouldn’t you just convert the value of knownwidth into pixel and hardwire into the code? Instead of marker[1][0]

    Thanks !

    P.S. I am applying you code(modifying it a bit) for faces. I am also very new to programming so sorry if my questions are too silly.

  59. Alex August 11, 2017 at 1:30 am #

    Awesome! Could this work with height instead of width?

    • Adrian Rosebrock August 14, 2017 at 1:22 pm #

      Yes, absolutely. As long as either the width or height is known you can use the same algorithm.

  60. Shehroz August 22, 2017 at 8:03 am #

    Hi Adrian

Can I use a Kinect sensor as my camera? If so, do I have to change something?

    • Adrian Rosebrock August 22, 2017 at 10:42 am #

      It’s been a long time since I’ve used the Kinect camera, but I would likely recommend something like PyKinect.

  61. nahid August 29, 2017 at 5:54 am #

Hi Adrian, excellent tutorial. I’m working on a project in which I measure how tall people are. I have to use a cellphone as the camera.
Can I mix this project with machine learning, so that I train a system with body sizes in cm and in pixels as experiences?
Then I use the body size as my reference to calculate other properties of the body,
for example: leg and arm length. At the end, can I train another system to estimate the weight?

Please tell me your recommendations and experiences. This is my master’s thesis and I don’t have much time.

    Thanks

    • Adrian Rosebrock August 31, 2017 at 8:38 am #

      If you need to measure the size of a person you don’t actually need any machine learning outside any you might want to apply to detect a person in a photo/video stream. Simply compute the intrinsic/extrinsic parameters of the camera and calibrate.

      Computing weight is much more challenging. If you’re using a 2D camera I doubt you’ll be able to obtain a reliable weight measurement. Perhaps machine learning could be used here with enough training data but I would be very skeptical.

  62. Rahul Tripathy September 1, 2017 at 11:50 am #

    Hi Adrian,

    I definitely liked the approach but i do have a few questions. It would be great if you answered it!
    First, the image width will vary with the angle the picture is taken from. Then how to find the distance in such case?
    Second, the depth here is for a known object. What if I want to get the depth of objects in an image? All blogs stop after getting the disparity map! I need to know the use of the disparity map like you do with all the other concepts in your blogs 🙂

    It would be great if you could help! And to be mentioned…. Not just I but many love your blogs and it may be the only place where we got to know that computer vision is not hard to begin 🙂 So, from all of us…. Thank You.

    • Adrian Rosebrock September 1, 2017 at 11:58 am #

      This is a basic form of distance measuring. For varying viewpoints and more advanced distance measuring you would definitely want to calibrate your camera by computing the intrinsic/extrinsic parameters. If you have multiple cameras or a stereo camera you can compute the depth map.

  63. Ram September 13, 2017 at 3:20 pm #

    Im getting this error on run of this program:

    (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    plzz solve my doubt

    • Adrian Rosebrock September 13, 2017 at 3:21 pm #

      Please read the other comments. I’ve already addressed this question multiple times. See my reply to “Tyrone Robinson”.

  64. Dawn Rabor September 28, 2017 at 4:58 am #

    Hi adrian! I would just like to ask if there is a possible way how to compute the distances between lines in an image? Thank you!

    • Adrian Rosebrock September 28, 2017 at 8:57 am #

      Please see this post.

  65. Carlos October 3, 2017 at 9:50 am #

    Hey Adrian, I have learned a lot from your tutorials, thank you!

    I am currently working on a very challenging project where I need to determine the X and Y coordinates of an object in relation to the room it is in. I know the real height of the object and how the camera perceives its height in pixels; my difficulty is how to come up with the X and Y coordinates. The linear distance from the camera to the object can be calculated using the triangle similarity.

    Best

    Carlos

    • Adrian Rosebrock October 4, 2017 at 12:40 pm #

      Hi Carlos — so if I understand your question correctly you need to transform the (x, y)-coordinates of your object to the “real world” coordinates? If so, you would need to perform a more robust camera calibration by computing the intrinsic/extrinsic parameters.
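
      If you want to experiment with that, OpenCV’s chessboard calibration is the standard starting point. A minimal sketch (the 9x6 pattern size and image filenames are placeholders you would replace):

      import cv2
      import numpy as np

      pattern = (9, 6)  # inner corners of the chessboard
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      objpoints, imgpoints = [], []
      for path in ["calib_01.png", "calib_02.png"]:  # hypothetical paths
          gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              objpoints.append(objp)
              imgpoints.append(corners)

      # K is the intrinsic matrix; rvecs/tvecs are the per-view extrinsics
      ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          objpoints, imgpoints, gray.shape[::-1], None, None)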

  66. Dulce October 19, 2017 at 1:05 pm #

    Hello, where can I find theory about the perceived focal length? I’ve looked into triangle similarity and couldn’t find a relation between them.

  67. Manh Nguyen October 24, 2017 at 5:13 am #

    Hi Adrian, this post is very interesting; every week I read your website to study computer vision. I have a question. I measure two times: the first time, distance and width are constant, so I calculate the focal length F. The second time, I measure the distance with an ultrasonic sensor, reuse the focal length F from the first time, and measure the width. If I do this, will the result be correct? Please help me. Wish you a nice day, thank you!

    • Adrian Rosebrock October 24, 2017 at 7:09 am #

      Hi Manh — I haven’t used ultrasonic sensors for distance measurement so I would do a bit more research into this.

  68. berk November 5, 2017 at 5:47 pm #

    Hello Adrian;
    Vaibhav Jha asked the same question, but I think you missed it. Why do you index into the marker the way you do? Doesn’t it lead to a usage like marker[0][0]?

  69. Mary November 6, 2017 at 4:49 am #

    Thank you. Can I make it work with a video instead of images, where I need to calculate the distance between the camera and a person’s face?

  70. Amanda Joy Panell November 13, 2017 at 7:21 pm #

    Do you think this technique could be tweaked to accurately find the distance of something REALLY far away (3-5 miles)?

    • Adrian Rosebrock November 15, 2017 at 1:13 pm #

      No, not using standard cameras. Measuring objects from that far away would require the intrinsic/extrinsic parameters of the camera, a high definition capture, and an object that could be easily detected/isolated.

  71. YJ November 15, 2017 at 1:10 am #

    Traceback (most recent call last):

    (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    Hi Adrian, I ran into this problem when I tried to run the program. Why does this error come up? Can you tell me why? Thank you.

    • Adrian Rosebrock November 15, 2017 at 12:54 pm #

      Please take a look at the comments as this question has already been addressed. In particular, I address this question in my replies to “Minjae”, “Chandrama”, and others. The quick and dirty solution is:

      (_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

      Since you are using OpenCV 3.
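
      If you want the script to run under both OpenCV 2.4 and 3.x without editing, one version-agnostic pattern (a sketch, not code from the original post) is:

      # findContours returns 2 values in OpenCV 2.4 and 3 values in 3.x
      ret = cv2.findContours(edged.copy(), cv2.RETR_LIST,
          cv2.CHAIN_APPROX_SIMPLE)
      cnts = ret[0] if len(ret) == 2 else ret[1]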

  72. YJ November 15, 2017 at 1:58 am #

    Hi Adrian, is it possible to modify your code and use it in real time? For example, I’m going to use the laptop camera to detect the distance of an object from the camera.

    • Adrian Rosebrock November 15, 2017 at 12:53 pm #

      Yes. Please see my replies to “raj” and “Jon”.

  73. TrAyZeN November 21, 2017 at 12:15 pm #

    Hello, in what unit is the focal length expressed?

    • Adrian Rosebrock November 21, 2017 at 1:15 pm #

      The distances in this tutorial are measured in inches, though you can convert the code to use another unit. The perceived focal length F itself comes out in pixels, since the length units cancel in F = (P x D) / W.

  74. Srinivasan November 24, 2017 at 4:20 am #

    Hey Adrian,
    This code is good for distance calibration. I have a small question: with a fixed frame of paper, the distance is calculated easily. But when calculating the distance of a moving object, how do you calibrate the distance when you don’t know the exact size of the object (both physical size and pixel size) that appears in the frame?

  75. joseph November 27, 2017 at 8:12 am #

    Hello Adrian, how do I measure the width of the piece of paper in an image I have captured with my phone?

    • Adrian Rosebrock November 27, 2017 at 12:57 pm #

      Hi Joseph — I would suggest referring to this blog post on measuring the size of objects in an image.

  76. Kevin Roy November 30, 2017 at 9:34 am #

    Hi Adrian,
    I’ve recently started with OpenCV and have been following your tutorials. So far, it’s been the best series of tutorials I’ve ever found, online and otherwise. Currently, I am working on a little side project which requires me to crop a square part of an image. The part to be cropped is random, so I cannot hard-code the X, Y coordinates. Therefore I tried to use the find_marker function in this post to find the square and then crop it. But as soon as I run the program, I get the following error:
    (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    ValueError: too many values to unpack

    Can you please help me to sort it all out?

    • Adrian Rosebrock November 30, 2017 at 3:33 pm #

      Hi Kevin — I’ve actually discussed this error a few times in the comments. Please see my replies to “Tyrone”, “Chandrama”, “YJ”, etc. Thank you.

  77. Thanos December 5, 2017 at 12:46 pm #

    Hi Adrian,
    I am an MSc student at the National Technical University of Athens, and my master’s thesis is real-time position tracking of a robotic fish inside a tank full of water. I use your color tracking algorithm to extract the (x, y) position of the fish. The robotic fish moves in an xy plane that is perpendicular to my camera; at the moment that plane is 135 cm from the camera. I want to add a block of code that calculates the distance of the moving plane from the camera, to update the distance if the fish moves deeper in the water.
    I want to combine your color tracking algorithm with this distance-finding algorithm in real time.
    I have three questions:
    1. According to your code (distance from camera), is the focal length in mm? Your focal length result is 543.45. My focal length results are in the range of 200-800, testing a variety of available cameras. If the focal length units are mm, is there any logical explanation for such high values? From what I know, a focal length should be small (3-10 mm). I use simple webcams.
    2. Is the distance you calculate the distance from the camera to the perpendicular plane, or the distance from the camera to the object at coordinates (x, y), rather than to the center of the frame?
    3. Your algorithm is calibrated in air; my robotic fish is underwater. Will the result change if the color tracking happens underwater? Due to refractive effects I think I will face a big problem.

  78. Benni December 6, 2017 at 10:30 am #

    It is showing me an error on line 54, “cv2.drawContours(image, [box], -1, (0, 255, 0), 2)”. It throws cv2.error: (-215) npoints > 0 in function cv::drawContours.
    Please help me soon. I am stuck.

    • Benni December 6, 2017 at 10:49 am #

      Actually, I figured it out. Thank you anyway. It is because I should have typed the code as cv2.drawContours(image, [box.astype("int")], -1, (0, 255, 0), 2).
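
      For anyone hitting the same error: in OpenCV 3+, cv2.boxPoints returns floating-point corner coordinates while cv2.drawContours expects integer points, hence the cast. A minimal sketch of the fix Benni describes:

      box = cv2.boxPoints(marker)  # float32 corners of the rotated rect
      cv2.drawContours(image, [box.astype("int")], -1, (0, 255, 0), 2)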

  79. Saurabh Thawali December 24, 2017 at 4:13 am #

    That’s awesome and exactly what I was looking for to implement in my personal project 🙂
    Lucky to find such a detailed resource. I appreciate your great work and advice/comments, Adrian Rosebrock… you rock \m/ \m/

    Now I am implementing this using my laptop webcam the way you showed in your other post here https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/

    but I am not sure how your Room Status (occupied/unoccupied) changes based on the calculations… maybe I need to work on it more.
    Thanks once again… much appreciated!!

    • Adrian Rosebrock December 26, 2017 at 4:15 pm #

      Thanks Saurabh 🙂 I’m glad you found the post useful! The room status is changed based on background subtraction. For a more advanced background subtraction algorithm, take a look at this blog post.

  80. Nachiket December 29, 2017 at 9:55 am #

    What is the difference between the functions cv2.boundingRect() and cv2.minAreaRect()? When should I use each of them? Don’t both of them return x, y, w, h?

    • Adrian Rosebrock December 31, 2017 at 9:49 am #

      The cv2.minAreaRect function returns a bounding box that can be rotated, hence the name: the minimum-area rectangle that the region fits into. The cv2.boundingRect function returns a non-rotated (axis-aligned) bounding box.

      • Nachiket January 2, 2018 at 2:38 am #

        OK, got it. Here in this example, we can use either of them, right? Can you give a particular example where we ‘should’ use minAreaRect and one where boundingRect should be used?
        Thank you

        • Adrian Rosebrock January 3, 2018 at 1:04 pm #

          As an example, you should use the minAreaRect when you need a rotated bounding box and then later need to apply a perspective transform, such as in the document scanner post. If you used boundingRect the four points of the rectangle would not match the vertices of the document we are “scanning”.
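
          A quick side-by-side sketch of the two calls (the contour c is assumed to be detected already):

          (x, y, w, h) = cv2.boundingRect(c)  # axis-aligned box
          rect = cv2.minAreaRect(c)           # ((cx, cy), (w, h), angle)
          box = cv2.boxPoints(rect)           # 4 rotated corners, float32
          # boundingRect gives x/y/w/h directly; minAreaRect's corners
          # follow the object's rotation, so they can feed a perspective
          # transform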

  81. SR January 5, 2018 at 10:14 am #

    My code runs without any errors, but the distance value is not displayed on the picture after the code runs. Any help would be much appreciated.

    • Adrian Rosebrock January 5, 2018 at 1:23 pm #

      It sounds like either the reference object or the object you want to compute the distance to was not detected. Check the contours list and ensure they were detected.

  82. Karan January 8, 2018 at 2:28 pm #

    Good evening sir, I want to know how I can detect the height at which an object is placed above the ground when using a webcam as the feed.
    Thanks in advance.
    Please do reply.

    • Adrian Rosebrock January 8, 2018 at 2:30 pm #

      Hey Karan — take a look at this blog post where I discuss how to measure the size of objects in images. I hope that helps!

  83. SNR January 16, 2018 at 11:23 am #

    Thank you very much for this informative tutorial. I am currently doing my final year project and I have to find the distance to tennis balls. I have studied your real-time ball tracking tutorial and tried to combine it with this tutorial for a real-time situation; however, I am not getting it right. Any tips on how to combine the two codes, where to break the code, etc. would be much appreciated and a great support in completing my project. Again, thank you very much for the tutorials.

    • Adrian Rosebrock January 16, 2018 at 12:45 pm #

      Congrats on working on your final year project, that’s great! It’s hard to give generic tips so could you please elaborate on what specific issues/errors you are encountering when trying to combine the code from the two posts?

      • SNR January 18, 2018 at 10:05 am #

        Thank you very much for replying. I tried to add the ‘finding distance’ code to the ‘ball tracking’ code. I’m having trouble finding the exact position where the ‘finding distance’ code should start in the original ‘ball tracking’ code.

        Also, I should not be using the ‘find marker’ statement at the beginning, right? Since I’m already tracking the ball and contouring it in the ‘ball tracking’ code.

        And I changed the term ‘image’ to camera, using the camera = cv2.VideoCapture(0) command.

        I also removed the IMAGE_PATHS = [“images/2ft.png”, “images/3ft.png”, “images/4ft.png”] and image = cv2.imread(IMAGE_PATHS[0]) commands. I’m not sure whether that’s the correct thing to do.

        The commands “for imagePath in IMAGE_PATHS:” and “marker = find_marker(image)” I think I replaced with the wrong terms; therefore I’m not getting anything.

        I can see that the camera is active, and that’s all. Sorry for asking so many questions; this is my first time doing image processing. Any help would be much appreciated as I’m struggling with this big time. Thank you very much for your time!

        • Adrian Rosebrock January 19, 2018 at 6:53 am #

          I would start by ensuring you have performed the calibration and computed the triangle similarity. If I understand your project correctly, there isn’t a point in trying to track an object if you haven’t computed the distance, correct? Thus, you need to calibrate first.

          Once you have your focal measure you can move on to the actual tracking. Here you’ll be looping over frames from your video stream and looking for your object. If you’re using the ball tracking code, here you will be performing color thresholding. Once you find the ball in the mask you can pass this area into the distance_to_camera function which will give you your distance.

          It can be tricky putting together code from multiple posts, especially if you’re new to image processing and computer vision, but it is doable. I would recommend that you work through Practical Python and OpenCV. I designed this book to help beginners with zero experience in computer vision/image processing get up to speed quickly. It’s a quick read and if you pick up a copy, you’ll be done by the end of the weekend and be better prepared to tackle your ball tracking + distance measurement project.

          I hope that helps!

          • SNR January 31, 2018 at 10:38 pm #

            Thank you very much for all the tips. I have calibrated and found the focal length and also the color threshold. I don’t understand the part where you say to “pass this area into the distance_to_camera function”. Any help would be appreciated.

          • Adrian Rosebrock February 3, 2018 at 11:16 am #

            Once you have the focal length and the object detected you can call the “distance_to_camera” function (take a look at the parameters required for the function; that is what I meant by “pass these values”).
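
            To make that concrete, here is a minimal real-time sketch; the ball diameter, HSV bounds, and focal length are placeholder values you would measure and calibrate yourself:

            import cv2

            KNOWN_WIDTH = 2.7     # ball diameter in inches (assumed)
            FOCAL_LENGTH = 600.0  # from your one-time calibration

            def distance_to_camera(knownWidth, focalLength, perWidth):
                # triangle similarity: D' = (W x F) / P
                return (knownWidth * focalLength) / perWidth

            camera = cv2.VideoCapture(0)
            while True:
                (grabbed, frame) = camera.read()
                if not grabbed:
                    break
                # color threshold the frame (placeholder green bounds)
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                mask = cv2.inRange(hsv, (29, 86, 6), (64, 255, 255))
                ret = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                    cv2.CHAIN_APPROX_SIMPLE)
                cnts = ret[0] if len(ret) == 2 else ret[1]
                if len(cnts) > 0:
                    c = max(cnts, key=cv2.contourArea)
                    ((x, y), radius) = cv2.minEnclosingCircle(c)
                    if radius > 10:
                        # perceived width = circle diameter in pixels
                        d = distance_to_camera(KNOWN_WIDTH, FOCAL_LENGTH,
                            radius * 2)
                        cv2.putText(frame, "%.1fin" % d, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
                cv2.imshow("Frame", frame)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break
            camera.release()
            cv2.destroyAllWindows()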

  84. mithilesh February 23, 2018 at 4:30 am #

    Hi Adrian, you did an awesome job there… I have a question regarding finding the depth of an object in a single shot of the camera. Is this possible?

    • Adrian Rosebrock February 26, 2018 at 2:10 pm #

      This method will give you the distance to the object. Are you referring to computing a depth map for the entire image?

      • mithilesh February 27, 2018 at 12:16 am #

        yes

        • Adrian Rosebrock February 27, 2018 at 11:29 am #

          Computing the depth map is best done using a stereo/depth camera. Are you trying to build the depth map from a standard 2D camera?

  85. amir February 23, 2018 at 2:46 pm #

    Hey Adrian,

    How can I implement this if I want to measure the distance from a Pi camera to an object which is detected in real time by its color? I’m struggling to adapt this code to use my Raspberry Pi camera for distance tracking…

    Thanks,

    Amir

    • Adrian Rosebrock February 26, 2018 at 2:05 pm #

      Hey Amir — this tutorial demonstrates how to detect and track an object based on color. You can combine the code for both of these tutorials.

      • ami February 27, 2018 at 10:29 am #

        Hey adrian, really appreciate your response.

        I couldn’t delete/modify my original reply, apologies. When merging this code with color detection in a stream, I’m not sure if you still use your images for calibration and calculating the focal length, because I am thinking that you have to somehow leverage the color (in my case blue) as a reference point to measure from..? I’ve spent a few days on this now and I’m struggling to figure out how to combine the two functions so that the distance to the blue object from the picamera can be measured in a live stream.

        Thanks in advance again!

        • Adrian Rosebrock February 27, 2018 at 11:22 am #

          You can use whatever object you would like for the initial calibration provided you know:

          1. The size of the object (in measurable units)
          2. The initial distance to the object (again, in measurable units)

          Once you perform the initial calibration you no longer need the reference object.

          Try to nail down the code used to compute the focal length before you try incorporating the actual tracking of the object and measuring the distance.

          It can be a tedious process, but this is how we learn. Keep it up, you’re doing great 🙂

  86. Tsaqif Alfatan Nugraha February 25, 2018 at 10:59 pm #

    Hi Adrian, this post is awesome!
    I would like to ask you two questions:
    First, will the focal length be different if the object we measure is different? For example, first I use a piece of paper and get the focal length. Then I want to measure the distance from a car to the camera. Can I use the first focal length to measure the distance from the car to the camera?

    Second, how can we measure an object that is not parallel to the camera? For example, I want to measure the horizontal distance between the camera and a traffic light located above the camera. Can that be handled with this measurement approach?

    • Adrian Rosebrock February 26, 2018 at 1:49 pm #

      1. You just need to calibrate your system by computing the focal length once per run. Once the script is up and running and you have performed the calibration you can compute the distance to any object, provided you can localize the object of course.

      2. For an object that is not parallel to the camera you will need a more advanced method. Take a look at computing the intrinsic and extrinsic parameters of your camera.

  87. LOKESH AMARA March 4, 2018 at 10:43 am #

    Sir, your work is awesome and very helpful, but I have 2 questions:

    1) Sir, how did you know the image is only 270 pixels wide? If I take an image with my Android camera I get a width of 3024 pixels.

    2) And later you mentioned that by moving some distance back you got a 180-pixel width of the object through image processing. How did you get that?

    Sir, I am doing my final year project on this topic. Could you please help me out?
    Thank you in advance

    • Adrian Rosebrock March 7, 2018 at 9:35 am #

      I did not know the sizes in pixels. I knew (1) the distance from the camera to the object and (2) the width of the object. Once I detected the object in the image I could determine any pixel dimensions.

  88. ADAM March 10, 2018 at 4:55 pm #

    Is there any way to make this work in Android Studio? Can we integrate this with TensorFlow?

    • Adrian Rosebrock March 14, 2018 at 1:20 pm #

      Java provides OpenCV bindings, I would suggest you start there.

  89. SKR March 29, 2018 at 3:12 pm #

    As usual, AR rocks with his technical yet easy-to-follow articles!
    I would like to request some 1) suggestions, 2) pointers, and 3) insights regarding a problem I have been working on for the past few days. Instead of having one image, how would you compute depth (the Z-coordinate) if you have two images? Let’s say, for example, you have two time-aligned videos, the first showing the front view and the second showing the left-side view. We can extract RGB frames from both videos, so we have two images. Without any knowledge of the camera intrinsics, object width, or focal length, how can you compute the depth and thereby the real X, Y, Z coordinates using these two images? Please suggest methods/techniques or provide pointers to resources, perhaps your own article on this problem, and if possible give some insights. The video contains a person sitting and changing his gaze, so the problem relates to gaze estimation.

    Lastly, by any logic or technique, is it possible to measure depth from a single video if we have multiple frames extracted at different times without the camera or object moving?

    • Adrian Rosebrock March 30, 2018 at 6:52 am #

      This method is a very simplistic form of camera calibration. If you want to be working with depth you should compute the extrinsic/intrinsic parameters of the camera and perform a full-blown calibration. I wouldn’t suggest any approximations (they won’t work). I haven’t worked with depth from a single video/camera so I’m not sure about the answer to that. This video from Microsoft research might be similar to what you are trying to obtain.

      • SKR March 31, 2018 at 4:40 pm #

        Thanks for your reply, AR. Actually what I wanted to know is whether it is possible to measure depth to create a 3D point cloud WITHOUT computing the extrinsic/intrinsic parameters of the camera and a full calibration. I have no knowledge of the cameras as I do not possess them. What I have are two time-aligned RGB videos, one giving a front pose and the other giving a side pose. I can extract multiple frames from each video and start working.

        Source, Wikipedia: it is possible to perform rectification without having the camera parameters. All that is required is a set of seven or more image-to-image correspondences to compute the fundamental matrices and epipoles. Ref: Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.

        So if we can perform rectification using seven or more correspondences from the extracted frames, is it possible to arrive at depth somehow? Thanks so much for your pointers and insights.

        • Adrian Rosebrock April 4, 2018 at 12:40 pm #

          I believe the OpenCV docs cover something similar to what you are referring to, so I would suggest starting there. You need seven or more keypoint correspondences (typically gathered with the “chessboard” calibration pattern) to compute the fundamental matrix. The OpenCV docs cover this.
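
          A rough sketch of estimating the fundamental matrix from matched keypoints (feature matching across very different viewpoints is fragile, so treat this as a starting point; the filenames are placeholders):

          import cv2
          import numpy as np

          img1 = cv2.imread("front.png", 0)  # hypothetical front-view frame
          img2 = cv2.imread("side.png", 0)   # hypothetical side-view frame

          # detect and match ORB keypoints between the two views
          orb = cv2.ORB_create()
          kp1, des1 = orb.detectAndCompute(img1, None)
          kp2, des2 = orb.detectAndCompute(img2, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des1, des2)

          # estimate the fundamental matrix from the correspondences
          pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
          pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
          F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)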

          • SKR May 1, 2018 at 11:39 pm #

            Hey Adrian, just a quick question: do you have any idea what volume blending means and how one can achieve it, so that 3D reconstructed images look like a filled-up object instead of a plain image? I was unable to find good resources to read and understand it.

            Also, can multiple consecutive video frames act as image slices for volume rendering during 3D reconstruction?
            Thanks in anticipation.

          • Adrian Rosebrock May 3, 2018 at 9:37 am #

            Sorry, I do not have any experience with volume blending.

  90. lulu March 31, 2018 at 2:33 am #

    Hi Adrian, I want to use your distance formula in my thesis. Could you give me a reference for where you got the formula so I can cite it in my thesis?

    Thank you

    • Adrian Rosebrock April 4, 2018 at 12:45 pm #

      Triangle similarity is a basic Geometry topic. Most Geometry textbooks will cover it. If you want to include a reference to this PyImageSearch blog post, please feel free to do so, but I don’t think there is a “singular source/reference” for using triangle similarity.

  91. ayesha sh April 15, 2018 at 6:30 pm #

    Hi Adrian, great tutorial as always. I have implemented it and got awesome results. The next thing I want to do is calculate the distance of a moving object from the camera. Since you already have a tutorial on motion detection and tracking, how can we merge the two so that we can find the distance of a moving object from the camera continuously rather than feeding in images?
    Thanks in advance.

    • Adrian Rosebrock April 16, 2018 at 2:23 pm #

      The first step would be to access your system’s camera. I would suggest starting with this blog post and then merging the code together.

  92. AliK April 18, 2018 at 6:37 pm #

    Hi Adrian, thanks for your as-always great tutorial. I have one question: imagine that after edge detection (Figure 1) you needed to choose the contour of your measuring tape (instead of the piece of paper). Do you have any idea how that could be done? (Consider that the image is busy, so the contour cannot simply be defined as the one on the far right or far left of the image.) For example, is there a way to use two parallel lines (a property of a measuring tape) to detect its contour in a busy image? Thanks for any suggestions you might have!

    • Adrian Rosebrock April 20, 2018 at 10:10 am #

      Yes, you can do this. Take a look at the “cv2.HoughLines” function in OpenCV (I do not have any tutorials on this method, unfortunately).
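
      A minimal HoughLinesP sketch (all thresholds are assumptions you would tune; detected segments with nearly equal angles are your parallel-line candidates):

      import cv2
      import numpy as np

      image = cv2.imread("scene.jpg")  # hypothetical input
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      edged = cv2.Canny(gray, 35, 125)
      lines = cv2.HoughLinesP(edged, 1, np.pi / 180, threshold=80,
          minLineLength=100, maxLineGap=10)
      angles = []
      for line in (lines if lines is not None else []):
          (x1, y1, x2, y2) = line[0]
          angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
      # segments whose angles agree within a few degrees are candidate
      # parallel pairs, e.g. the two edges of a measuring tape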

      • AliK April 20, 2018 at 11:46 pm #

        Thank you Adrian!

  93. Sam May 15, 2018 at 9:54 pm #

    As always, great tutorial sir.
    Where do you get the bounding box rectangle width in pixels?
    Which function/line returns its value in the above code?

    • Adrian Rosebrock May 17, 2018 at 7:04 am #

      The “find_marker” function is responsible for finding the marker (in this case, the piece of paper) in the image. The “cv2.minAreaRect” returns the (rotated) bounding box.
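
      Roughly, such a function looks like this (a sketch; the Canny thresholds are illustrative). The rotated box is ((cx, cy), (w, h), angle), so marker[1][0] is the width in pixels:

      import cv2

      def find_marker(image):
          # convert to grayscale, blur, and detect edges
          gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
          gray = cv2.GaussianBlur(gray, (5, 5), 0)
          edged = cv2.Canny(gray, 35, 125)
          # assume the marker is the largest contour in the edge map
          ret = cv2.findContours(edged.copy(), cv2.RETR_LIST,
              cv2.CHAIN_APPROX_SIMPLE)
          cnts = ret[0] if len(ret) == 2 else ret[1]
          c = max(cnts, key=cv2.contourArea)
          return cv2.minAreaRect(c)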

  94. dheeraj June 26, 2018 at 12:07 pm #

    Which camera would be preferred for this project?

    • Adrian Rosebrock June 28, 2018 at 8:16 am #

      I used my iPhone to gather the example images. If you would like a USB camera I really like the Logitech C920.

  95. ahsan July 7, 2018 at 2:53 am #

    Hello sir,
    what code displays the pixel value in the image?

    • Adrian Rosebrock July 10, 2018 at 8:48 am #

      I’m not sure I understand your question properly but the “cv2.imshow” function is used to display an image to your screen. I’m not sure what else you may be looking for.

  96. ahsan July 8, 2018 at 7:31 am #

    Hello sir,
    can you help me, please?

    Is the code that displays the pixel value “marker[1][0]”?
    But why does the pixel value in my program have a decimal point instead of being an integer?

    Thanks

  97. Tux July 10, 2018 at 5:42 am #

    Hi Adrian and thank you for all your brillant work.

    Can you explain why you work on 800×600 images instead of the originals, please?

    Thanks very much !

    • Adrian Rosebrock July 10, 2018 at 8:10 am #

      The less data there is to process, the faster your algorithms will run. Secondly, resizing images can be considered “noise reduction”. High resolution images may be visually appealing for us to look at but they can actually hurt computer vision algorithm performance.
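
      A minimal sketch of a fixed-width, aspect-preserving resize (the 800px target is just an example):

      import cv2

      image = cv2.imread("example.jpg")  # hypothetical path
      r = 800.0 / image.shape[1]
      dim = (800, int(image.shape[0] * r))
      small = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)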

Trackbacks/Pingbacks

  1. Sorting Contours using Python and OpenCV - PyImageSearch - April 20, 2015

    […] And we even leveraged the power of contours to find the distance from a camera to object or marker. […]
