Finding extreme points in contours with OpenCV


A few weeks ago, I demonstrated how to order the (x, y)-coordinates of a rotated bounding box in a clockwise fashion — an extremely useful skill that is critical in many computer vision applications, including (but not limited to) perspective transforms and computing the dimensions of an object in an image.

One PyImageSearch reader emailed in, curious about this clockwise ordering, and posed a similar question:

Is it possible to find the extreme north, south, east, and west coordinates from a raw contour?

“Of course it is!”, I replied.

Today, I’m going to share my solution to find extreme points along a contour with OpenCV and Python.


Finding extreme points in contours with OpenCV

In the remainder of this blog post, I am going to demonstrate how to find the extreme north, south, east, and west (x, y)-coordinates along a contour, like in the image at the top of this blog post.

While this skill isn’t inherently useful by itself, it’s often used as a pre-processing step to more advanced computer vision applications. A great example of such an application is hand gesture recognition:

Figure 1: Computing the extreme coordinates along a hand contour

In the figure above, we have segmented the skin/hand from the image, computed the convex hull (outlined in blue) of the hand contour, and then found the extreme points along the convex hull (red circles).

By computing the extreme points along the hand, we can better approximate the palm region (highlighted as a blue circle):

Figure 2: Using extreme points along the hand allows us to approximate the center of the palm.

Which in turn allows us to recognize gestures, such as the number of fingers we are holding up:

Figure 3: Finding extreme points along a contour with OpenCV plays a pivotal role in hand gesture recognition.

Note: I cover how to recognize hand gestures inside the PyImageSearch Gurus course, so if you’re interested in learning more, be sure to claim your spot in line for the next open enrollment!

Implementing such a hand gesture recognition system is outside the scope of this blog post, so we’ll instead utilize the following image:

Figure 4: Our example image containing a hand. We are going to compute the extreme north, south, east, and west (x, y)-coordinates along the hand contour.

Our goal is to compute the extreme points along the contour of the hand in this image.

Let’s go ahead and get started. Open up a new file and let’s get coding:

Lines 2 and 3 import our required packages. We then load our example image from disk, convert it to grayscale, and blur it slightly.

Line 12 performs thresholding, allowing us to segment the hand region from the rest of the image. After thresholding, our binary image looks like this:

Figure 5: Our image after thresholding. The outlines of the hand are now revealed.

In order to detect the outlines of the hand, we make a call to cv2.findContours, followed by sorting the contours to find the largest one, which we presume to be the hand itself (Lines 18-21).

Before we can find extreme points along a contour, it’s important to understand that a contour is simply a NumPy array of (x, y)-coordinates. Therefore, we can leverage NumPy functions to help us find the extreme coordinates.

For example, Line 24 finds the smallest x-coordinate (i.e., the “west” value) in the entire contour array c by calling argmin() on the x-values and grabbing the entire (x, y)-coordinate associated with the index returned by argmin().

Similarly, Line 25 finds the largest x-coordinate (i.e., the “east” value) in the contour array using the argmax() function.

Lines 26 and 27 perform the same operation, only for the y-coordinate, giving us the “north” and “south” coordinates, respectively.
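Those four argmin()/argmax() operations can be sketched as follows, using a small synthetic contour in OpenCV’s (N, 1, 2) layout (the variable names extLeft, extRight, extTop, and extBot are my own choice):

```python
import numpy as np

# a small synthetic contour in OpenCV's (N, 1, 2) layout: each row is
# one (x, y)-coordinate along the outline
c = np.array([[[10, 50]], [[90, 40]], [[55, 5]], [[60, 95]], [[30, 30]]],
             dtype=np.int32)

# smallest/largest x-coordinate -> the western/eastern-most points
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])

# smallest/largest y-coordinate -> the northern/southern-most points
# (in image coordinates, y increases downward)
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
```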

Now that we have our extreme north, south, east, and west coordinates, we can draw them on our image:

Line 32 draws the outline of the hand in yellow, while Lines 33-36 draw circles for each of the extreme points, detailed below:

  • West: Red
  • East: Green
  • North: Blue
  • South: Teal

Finally, Lines 39 and 40 display the results to our screen.

To execute our script, make sure you download the code and images associated with this post (using the “Downloads” form found at the bottom of this tutorial), navigate to your code directory, and then execute the following command:

You should then see the following output image:

Figure 6: Detecting extreme points in contours with OpenCV and Python.

As you can see, we have successfully labeled each of the extreme points along the hand. The western-most point is labeled in red, the northern-most point in blue, the eastern-most point in green, and finally the southern-most point in teal.

Below we can see a second example of labeling the extreme points along a hand:

Figure 7: Labeling extreme points along a hand contour using OpenCV and Python.

Let’s examine one final instance:

Figure 8: Again, we are able to accurately compute the extreme points along the contour.

And that’s all there is to it!

Just keep in mind that each contour returned by cv2.findContours is simply a NumPy array of (x, y)-coordinates. By calling argmin() and argmax() on this array, we can extract the extreme (x, y)-coordinates.


In this blog post, I detailed how to find the extreme north, south, east, and west (x, y)-coordinates along a given contour. This method can be used on both raw contours and rotated bounding boxes.

While finding the extreme points along a contour may not seem interesting on its own, it’s actually a very useful skill to have, especially as a preprocessing step to more advanced computer vision and image processing algorithms, such as hand gesture recognition.

To learn more about hand gesture recognition, and how finding extreme points along a contour is useful in recognizing gestures, be sure to sign up for the next open enrollment in the PyImageSearch Gurus course!

See you inside!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, but I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


55 Responses to Finding extreme points in contours with OpenCV

  1. Keith Prisbrey April 11, 2016 at 11:39 am #

    Thank you, thank you, for this and all your blogs! They are all very helpful in our ancient brush-stroke kanji OCR projects.

    Best Regards, Keith

    • Adrian Rosebrock April 13, 2016 at 7:02 pm #

      No problem, I’m happy to help Keith! 🙂

  2. leena April 16, 2016 at 6:35 am #

    Thanks for the useful code. In case of triangle, will it be possible to get the direction (left/right, up/down) of triangle if I have extreme points and center points.
    Can you help to find the direction of arrow (exactly a triangle)?


    • Adrian Rosebrock April 17, 2016 at 3:32 pm #

If the triangle is a perfect triangle as you described, then each side of the triangle will have the same length (an equilateral triangle). And if that’s the case, then the triangle is “pointing” in all three directions (or no direction, depending on how you look at it).

  3. Tey November 5, 2016 at 1:16 pm #

    Thanks for the tutorial ~
    if I want to find all the extreme points or fingertips how can i do it in opencv for android?

    • Adrian Rosebrock November 7, 2016 at 2:51 pm #

      Hey Tey — I only cover OpenCV + Python on this blog post. I did not cover Android/Java.

  4. Kevin February 14, 2017 at 3:33 pm #

    Hi Adrian,

    Great post, it works flawlessly. But can you help provide hints/reasoning for my questions?
    1. What is the purpose of GaussianBlur here?
    2. I’ve extended this into a live video stream and when my hand rotates back and forth there are times when there are a lot of blotches that don’t properly represent the shape’s outline.

    Is this where adaptive thresholding might come into play?


    • Kevin February 14, 2017 at 3:37 pm #

      Also, all my searches are showing erode/dilate being called with some kind of ‘kernel’. Can you explain why you have None here?

      • Adrian Rosebrock February 15, 2017 at 9:08 am #

        If you supply a value of “None” then a 3×3 kernel is used by default.

    • Adrian Rosebrock February 15, 2017 at 9:07 am #

1. The Gaussian blur helps reduce high frequency noise. It blurs regions of the image we are uninterested in, allowing us to focus on the underlying “structure” of the image, in this case the outline of the hand.

      2. Basic thresholding is best used under controlled lighting conditions. Adaptive thresholding can help with this, but isn’t a sure-fire solution for each problem.

  5. Kapil March 14, 2017 at 1:21 am #

    Hi Adrian,

    Cool stuff. I had some issues with some of my implementation. I think you can help. Here it goes the question.

I have a NumPy array for a detected contour from which I have extracted extreme points in all four directions. Now I want to extract 12 points. Let’s say I start from a reference point (extreme-top); after every 30 degree angle I want to get the coordinates of a point. After all the traversing is done, I’d have an array of 12 points which could be given to the next image processing algorithm.

    I hope I’m clear with my question.

    Please share your thoughts on the same.

    • Adrian Rosebrock March 15, 2017 at 8:59 am #

      If you have the 4 extreme coordinates, compute a circle that corresponds to the area of these points (i.e., a minimum enclosing circle). Compute the (x, y)-coordinates in 30 degree increments along this circle. Then find the closest point in the contours list to this (x, y)-coordinate. This will take a bit of knowledge of trigonometry to complete, but it’s absolutely doable.

  6. Chan May 20, 2017 at 12:29 am #

    If I take hibiscus and have just 2 petals which are perpendicular to each other and need to inject the nectar part(centre part)! Can i use this technique? Or do I have a better option?

    • Adrian Rosebrock May 21, 2017 at 5:15 am #

      Hi Chan — it would be easier to understand your question if you could provide an example image of what you’re working with.

  7. Joachim November 15, 2017 at 7:54 am #

    How can I retrieve the part of the contours that is above a certain point efficiently (Without checking the points one by one) ?

    • Adrian Rosebrock November 15, 2017 at 12:47 pm #

      Hi Joachim — have you tried using NumPy array indexing and slicing? The vector operation of checking the coordinates would be significantly faster than trying to check the points one-by-one.
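The vectorized check described above might look like this (the synthetic contour and the y-threshold of 20 are arbitrary examples of mine):

```python
import numpy as np

# a synthetic contour in OpenCV's (N, 1, 2) layout
c = np.array([[[10, 5]], [[20, 30]], [[15, 12]], [[40, 50]]],
             dtype=np.int32)

# keep only the points whose y-coordinate is above (i.e., smaller than)
# 20, in one vectorized operation -- no per-point Python loop
above = c[c[:, 0, 1] < 20]
```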

  8. Dilpreet kaur December 1, 2017 at 12:49 am #

    Sir i am working on hand gesture recognition in opencv using c++ but i am not able to separate my hand from other skin colour objects sir please help me on this.

  9. Gayaa February 14, 2018 at 11:29 pm #

    Hi Adrian,

    This article is really useful for me as same as your other articles. Thank you very for it.
    Also I have a problem.
    Can we use this method to find extreme points of a human body? I meant top of the head and bottom of the feet.
    Because I want to calculate the distance (height) between both points.

    Please give me a suggestion…

    • Adrian Rosebrock February 18, 2018 at 10:05 am #

      You would need to have the contour extraction of the human body. We were able to threshold and localize the hand in this example. Given the contours, we computed the extreme points. You would need to do the same for the human body. That said, this post on measuring object dimensions would be a better start for you.

  10. Mithun February 27, 2018 at 9:35 am #

    Hi adrian
    its very useful
    Thank u

  11. Mithun February 27, 2018 at 9:36 am #

    how i find the contour points

    • Adrian Rosebrock February 27, 2018 at 11:23 am #

      The blog post demonstrates how to find the contour points — via the cv2.findContours function. Perhaps I am misunderstanding your question?

  12. Maham Khan March 20, 2018 at 2:17 pm #

    Hi Adrian!

    Excellent post.
    I was wondering how to get the tips of fingers? More like local maxima. But in the contour array, how to do it?

    • Adrian Rosebrock March 22, 2018 at 10:12 am #

      There’s a few ways to do this, but you’ll want to look up “convexity defects” as a starting point.

  13. Angga May 26, 2018 at 5:10 am #

    Hi Adrian, this is great tutorial.

    I would to ask you, how do we crop just the contour and remove the outside the image contours/crop ?


  14. Hamed June 1, 2018 at 5:06 pm #

    Thanks Adrian.
    I have a question.
I’ve noticed that a contour is a 3D NumPy array. But why? What’s the problem with a 2D NumPy array? We know that a contour is simply points in an image, and each point is represented by 2 elements. So why is a contour in OpenCV a 3D NumPy array?
    Thank you very much.

    • Adrian Rosebrock June 5, 2018 at 8:15 am #

      The shape and how you interpret the results of “cv2.findContours” in OpenCV is highly dependent on the flags you pass into the function (i.e., normal list of contours, hierarchy, etc.). Make sure you read the docs for the function.

  15. Vikram November 7, 2018 at 6:29 am #

    Hi Adrian .. your posts have saved my life a couple of times! now you can be a hero again 🙂 ..sorry .. i am struggling with detecting objects with a lighter background ..most examples i find online generally have black as a background / backgrounds darker than the objects being detected. I am working on identification of govt issued ID cards and their pics are taken via users cell phones. Hence detecting that in a varied backdrop is becoming a huge challenge for me .. could u please check the images i loaded in the SO query here ?

    afaik , for any sort of feature detection , the object of interest must appear light while the b/g should be totally dark right ? but try as i might, i am just unable to do anything with this image. Any pointers will be like manna from the heavens

    • Adrian Rosebrock November 10, 2018 at 10:17 am #

      There are a few ways to approach this problem but if your overall goal is to detect the government IDs regardless of background you won’t be able to rely on traditional image processing — you’ll need to use a bit of machine learning instead. I would recommend training your own custom object detector on the IDs themselves.

  16. Shrouti Gangopadhyay November 19, 2018 at 5:33 am #

    i want to determine the coordinate of the point residing on convex hull which is farthest from the centre of mass. How to do? please help

    • Adrian Rosebrock November 19, 2018 at 12:24 pm #

      Do you already have the convex hull coordinates? If so, compute the center of mass (i.e., centroid) coordinates. Then compute the Euclidean distance between the centroid and all points along the hull. The point with the largest distance will be the coordinates you are looking for.
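A minimal sketch of this answer, approximating the center of mass as the mean of the hull points (the hull coordinates here are synthetic):

```python
import numpy as np

# convex hull points in (N, 2) form (in practice via cv2.convexHull)
hull = np.array([[0, 0], [10, 0], [12, 9], [5, 14], [0, 10]], dtype=float)

# approximate the center of mass as the mean of the hull points
centroid = hull.mean(axis=0)

# Euclidean distance from the centroid to every hull point; argmax
# gives the index of the farthest point
dists = np.linalg.norm(hull - centroid, axis=1)
farthest = hull[dists.argmax()]
```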

  17. ali December 17, 2018 at 9:01 am #

    Thanks Adrian.
when I run the program, this problem occurs:
    AttributeError: module ‘imutils’ has no attribute ‘grab_contours’

    • Adrian Rosebrock December 18, 2018 at 9:00 am #

      Upgrade your install of the “imutils” library:

      $ pip install --upgrade imutils

  18. Ganesh Deshmukh January 6, 2019 at 7:36 am #

    thanks for free tutorials, please explain me how can I find contours for continuous video using this code?

    • Adrian Rosebrock January 8, 2019 at 7:02 am #

      You mean apply contour detection to every frame in a video? See this tutorial as an example.

  19. MUDASAR ALI ARSHAD March 6, 2019 at 6:05 am #

    Hey excellent work I really like it. I want to detect each fingertip using your code please help me out.

  20. hussam March 10, 2019 at 8:31 am #

    How i can use the co-ordinates that found with the contours and save it to the .CSV file?

    • Adrian Rosebrock March 13, 2019 at 3:43 pm #

      That’s not really a computer vision question, that’s a basic programming question. It’s okay if you are new to programming and Python but I would highly suggest taking some time to read basic Python tutorials, specifically ones that focus on file I/O.

  21. Chacha El Bacha March 28, 2019 at 5:19 am #

    Hi Adrian, your posts are very helpfull!
    I am trying this code but instead of getting the contour of the object, i am getting the contour of the screen itself. For example, my image is 500×281 so the contour is drawn on this surface not on the object itself.
    Is there any reason for that? What am i doing wrong?
    Thank you 🙂

    • Chacha El Bacha March 28, 2019 at 5:29 am #

      I tried to change the index of cv2.drawContours and put it 0 instead of -1.. Still getting same result

    • Adrian Rosebrock April 2, 2019 at 6:33 am #

      How is your image binarized? Is the foreground white on a black background? It sounds like you may have your binary mask inverted.

  22. younes April 19, 2019 at 5:27 am #

    Hello Mr Adrian,
    thanks for this amazing article, so fabulous.
    i’ve a question, i want to comput the width of every finger! i search over internet i didn’t found anything, i was thinking if there is a relation between finger’s width and palm’s width, but it doesn’t exist!.
    so, any idea how to do it?
    thanks in advance.

    • Adrian Rosebrock April 25, 2019 at 9:28 am #

      Have you tried using this tutorial on measuring object size? You could adapt it to work with computing finger/hand size.

      • younes April 29, 2019 at 9:16 am #

Yeah, I already tried it, but it doesn’t work efficiently, because the fingers are part of the hand, and it just detects the hand as the object!
I thought also about convexity defects, just to compute the width of the proximal phalanx, but the problem is the points of the convexity defects change every time I change the image; they don’t stay at the same position on the finger.

  23. Arpita Gupta July 8, 2019 at 4:03 am #

Hello Sir, could you please tell me if there is a method to get the coordinates of all the points of the contour? I mean, how can we get the coordinates of all the points (locus) of the contour? I shall be grateful to you. Please help!

    • Adrian Rosebrock July 10, 2019 at 9:47 am #

      The contour is a list of (x, y)-points along the contour.

  24. Kapil sarwat August 3, 2019 at 1:54 am #

    Hi adrain
    I am working on fingerprints. I want to segment finger from a fingerphoto and then zoom-in the distal phalanx area? Is there any possible way?

  25. pydev August 10, 2019 at 11:10 pm #

    A godsend! Thank you so much for this! It’s hard to figure out what functions or methods to use as a beginner and tutorials like yours help us get a firm grip on the subject matter.

    • Adrian Rosebrock August 16, 2019 at 5:51 am #

      Thanks, I’m glad you enjoyed it!

  26. sajid December 4, 2019 at 12:53 pm #

    how to get the extreme point of human body using real time web cam?

  27. Amarjeet December 31, 2019 at 5:28 am #

    Suppose I have a svg image,and I want all the x,y co-ordinates of the path that draws that image.How can I achieve this with open cv.

    • Adrian Rosebrock January 2, 2020 at 8:51 am #

      OpenCV cannot read SVG images. You would need to convert it to a PNG, JPEG, etc. and then read the image.
