Finding extreme points in contours with OpenCV

A few weeks ago, I demonstrated how to order the (x, y)-coordinates of a rotated bounding box in a clockwise fashion — an extremely useful skill that is critical in many computer vision applications, including (but not limited to) perspective transforms and computing the dimensions of an object in an image.

One PyImageSearch reader emailed in, curious about this clockwise ordering, and posed a similar question:

Is it possible to find the extreme north, south, east, and west coordinates from a raw contour?

“Of course it is!” I replied.

Today, I’m going to share my solution to find extreme points along a contour with OpenCV and Python.

Finding extreme points in contours with OpenCV

In the remainder of this blog post, I am going to demonstrate how to find the extreme north, south, east, and west (x, y)-coordinates along a contour, like in the image at the top of this blog post.

While this skill isn’t inherently useful by itself, it’s often used as a pre-processing step to more advanced computer vision applications. A great example of such an application is hand gesture recognition:

Figure 1: Computing the extreme coordinates along a hand contour

In the figure above, we have segmented the skin/hand from the image, computed the convex hull (outlined in blue) of the hand contour, and then found the extreme points along the convex hull (red circles).

By computing the extreme points along the hand, we can better approximate the palm region (highlighted as a blue circle):

Figure 2: Using extreme points along the hand allows us to approximate the center of the palm.

Which in turn allows us to recognize gestures, such as the number of fingers we are holding up:

Figure 3: Finding extreme points along a contour with OpenCV plays a pivotal role in hand gesture recognition.

Note: I cover how to recognize hand gestures inside the PyImageSearch Gurus course, so if you’re interested in learning more, be sure to claim your spot in line for the next open enrollment!

Implementing such a hand gesture recognition system is outside the scope of this blog post, so we’ll instead utilize the following image:

Figure 4: Our example image containing a hand. We are going to compute the extreme north, south, east, and west (x, y)-coordinates along the hand contour.

Our goal is to compute the extreme points along the contour of the hand in this image.

Let’s go ahead and get started. Open up a new file, name it extreme_points.py, and let’s get coding:
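Here is a minimal sketch of extreme_points.py from top to bottom. The input filename hand_01.png, the 5×5 blur kernel, and the threshold value of 45 are placeholder choices you may need to tune for your own images; the line numbers referenced in the discussion below follow this listing.

# import the necessary packages
import imutils
import cv2

# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread("hand_01.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# threshold the image, then perform a series of erosions and
# dilations to remove any small regions of noise
thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=2)

# find contours in the thresholded image, then grab the
# largest one, which we presume to be the hand
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)

# determine the most extreme points along the contour
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])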

Lines 2 and 3 import our required packages. We then load our example image from disk, convert it to grayscale, and blur it slightly.

Line 12 performs thresholding, allowing us to segment the hand region from the rest of the image. After thresholding, our binary image looks like this:

Figure 5: Our image after thresholding. The outlines of the hand are now revealed.

In order to detect the outlines of the hand, we make a call to cv2.findContours, followed by sorting the contours to find the largest one, which we presume to be the hand itself (Lines 18-21).

Before we can find extreme points along a contour, it’s important to understand that a contour is simply a NumPy array of (x, y)-coordinates. Therefore, we can leverage NumPy functions to help us find the extreme coordinates.

For example, Line 24 finds the smallest x-coordinate (i.e., the “west” value) in the entire contour array c by calling argmin() on the x-values and grabbing the entire (x, y)-coordinate associated with the index returned by argmin().

Similarly, Line 25 finds the largest x-coordinate (i.e., the “east” value) in the contour array using the argmax() function.

Lines 26 and 27 perform the same operation, only for the y-coordinate, giving us the “north” and “south” coordinates, respectively.
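To make that indexing concrete, here is a tiny made-up contour and the values the argmin()/argmax() trick pulls out of it:

import numpy as np

# OpenCV stores a contour as an array of shape (num_points, 1, 2),
# where each row holds a single (x, y)-coordinate
c = np.array([[[10, 50]], [[40, 5]], [[90, 60]], [[35, 95]]])

west = tuple(c[c[:, :, 0].argmin()][0])   # (10, 50) -- smallest x
east = tuple(c[c[:, :, 0].argmax()][0])   # (90, 60) -- largest x
north = tuple(c[c[:, :, 1].argmin()][0])  # (40, 5)  -- smallest y
south = tuple(c[c[:, :, 1].argmax()][0])  # (35, 95) -- largest y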

Now that we have our extreme north, south, east, and west coordinates, we can draw them on our image:
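Continuing the extreme_points.py sketch, the drawing code is a short series of OpenCV calls (the circle radius of 8 pixels and the 2-pixel contour thickness are just sensible defaults):

# draw the outline of the hand in yellow, then draw each of the
# extreme points: west in red, east in green, north in blue,
# and south in teal
cv2.drawContours(image, [c], -1, (0, 255, 255), 2)
cv2.circle(image, extLeft, 8, (0, 0, 255), -1)
cv2.circle(image, extRight, 8, (0, 255, 0), -1)
cv2.circle(image, extTop, 8, (255, 0, 0), -1)
cv2.circle(image, extBot, 8, (255, 255, 0), -1)

# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)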

Line 32 draws the outline of the hand in yellow, while Lines 33-36 draw circles for each of the extreme points, detailed below:

  • West: Red
  • East: Green
  • North: Blue
  • South: Teal

Finally, Lines 39 and 40 display the results to our screen.

To execute our script, make sure you download the code and images associated with this post (using the “Downloads” form found at the bottom of this tutorial), navigate to your code directory, and then execute the following command:
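$ python extreme_points.py

(The sketch above hard-codes the image path, so no command line arguments are needed.)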

You should then see the following output image:

Figure 6: Detecting extreme points in contours with OpenCV and Python.

As you can see, we have successfully labeled each of the extreme points along the hand. The western-most point is labeled in red, the northern-most point in blue, the eastern-most point in green, and finally the southern-most point in teal.

Below we can see a second example of labeling the extreme points along a hand:

Figure 7: Labeling extreme points along a hand contour using OpenCV and Python.

Let’s examine one final instance:

Figure 8: Again, we are able to accurately compute the extreme points along the contour.

And that’s all there is to it!

Just keep in mind that each contour returned by cv2.findContours is simply a NumPy array of (x, y)-coordinates. By calling argmin() and argmax() on this array, we can extract the extreme (x, y)-coordinates.

Summary

In this blog post, I detailed how to find the extreme north, south, east, and west (x, y)-coordinates along a given contour. This method can be used on both raw contours and rotated bounding boxes.

While finding the extreme points along a contour may not seem interesting on its own, it’s actually a very useful skill to have, especially as a preprocessing step to more advanced computer vision and image processing algorithms, such as hand gesture recognition.

To learn more about hand gesture recognition, and how finding extreme points along a contour is useful in recognizing gestures, be sure to sign up for the next open enrollment in the PyImageSearch Gurus course!

See you inside!


16 Responses to Finding extreme points in contours with OpenCV

  1. Keith Prisbrey April 11, 2016 at 11:39 am #

    Thank you, thank you, for this and all your blogs! They are all very helpful in our ancient brush-stroke kanji OCR projects.

    Best Regards, Keith

    • Adrian Rosebrock April 13, 2016 at 7:02 pm #

      No problem, I’m happy to help Keith! 🙂

  2. leena April 16, 2016 at 6:35 am #

    Thanks for the useful code. In the case of a triangle, would it be possible to get the direction (left/right, up/down) of the triangle if I have the extreme points and the center point?
    Can you help me find the direction of an arrow (exactly a triangle)?

    regards

    • Adrian Rosebrock April 17, 2016 at 3:32 pm #

      If the triangle is a perfect triangle as you described, then each side of the triangle will have the same length (an equilateral triangle). And if that’s the case, then the triangle is “pointing” in all three directions (or no direction, depending on how you look at it).

  3. Tey November 5, 2016 at 1:16 pm #

    Thanks for the tutorial ~
    If I want to find all the extreme points or fingertips, how can I do it in OpenCV for Android?

    • Adrian Rosebrock November 7, 2016 at 2:51 pm #

      Hey Tey — I only cover OpenCV + Python in this blog post. I did not cover Android/Java.

  4. Kevin February 14, 2017 at 3:33 pm #

    Hi Adrian,

    Great post, it works flawlessly. But can you help provide hints/reasoning for my questions?
    1. What is the purpose of GaussianBlur here?
    2. I’ve extended this into a live video stream, and when my hand rotates back and forth there are times when there are a lot of blotches that don’t properly represent the shape’s outline.

    Is this where adaptive thresholding might come into play?

    Thanks,
    Kevin

    • Kevin February 14, 2017 at 3:37 pm #

      Also, all my searches are showing erode/dilate being called with some kind of ‘kernel’. Can you explain why you have None here?

      • Adrian Rosebrock February 15, 2017 at 9:08 am #

        If you supply a value of “None” then a 3×3 kernel is used by default.

    • Adrian Rosebrock February 15, 2017 at 9:07 am #

      1. The Gaussian blur helps reduce high frequency noise. It blurs regions of the image we are uninterested in, allowing us to focus on the underlying “structure” of the image — in this case, the outline of the hand.

      2. Basic thresholding is best used under controlled lighting conditions. Adaptive thresholding can help with this, but isn’t a sure-fire solution for each problem.
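      If you want to give adaptive thresholding a try, the call looks something like this (the block size of 21 and the constant of 5 are values you would need to tune for your own lighting):

      thresh = cv2.adaptiveThreshold(gray, 255,
          cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 21, 5)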

  5. Kapil March 14, 2017 at 1:21 am #

    Hi Adrian,

    Cool stuff. I had some issues with my implementation. I think you can help. Here goes the question.

    I have a numpy array for a detected contour from which I have extracted the extreme points in all four directions. Now I want to extract 12 points. Let’s say I start from a reference point (the extreme top); after every 30 degree step I want to get the coordinates of a point. After all the traversing is done, I’d have an array of 12 points which could be given to the next image processing algorithm.

    I hope I’m clear with my question.

    Please share your thoughts on the same.

    • Adrian Rosebrock March 15, 2017 at 8:59 am #

      If you have the 4 extreme coordinates, compute a circle that corresponds to the area of these points (i.e., a minimum enclosing circle). Compute the (x, y)-coordinates in 30 degree increments along this circle. Then find the closest point in the contours list to this (x, y)-coordinate. This will take a bit of knowledge of trigonometry to complete, but it’s absolutely doable.
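      A rough sketch of that idea might look like the following (the helper name is made up, and c is assumed to be the hand contour from this post):

      import numpy as np
      import cv2

      def sample_every_30_degrees(c):
          # fit the minimum enclosing circle around the contour
          ((cx, cy), radius) = cv2.minEnclosingCircle(c)
          pts = c.reshape(-1, 2).astype("float")
          sampled = []

          # step around the circle in 30 degree increments and grab the
          # contour point closest to each position on the circle
          for angle in range(0, 360, 30):
              theta = np.deg2rad(angle)
              target = np.array([cx + radius * np.cos(theta),
                                 cy + radius * np.sin(theta)])
              dists = np.linalg.norm(pts - target, axis=1)
              sampled.append(tuple(pts[dists.argmin()].astype("int")))

          return sampled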

  6. Chan May 20, 2017 at 12:29 am #

    If I take a hibiscus that has just 2 petals which are perpendicular to each other, and I need to inject the nectar part (centre part), can I use this technique? Or do I have a better option?

    • Adrian Rosebrock May 21, 2017 at 5:15 am #

      Hi Chan — it would be easier to understand your question if you could provide an example image of what you’re working with.

  7. Joachim November 15, 2017 at 7:54 am #

    How can I retrieve the part of the contour that is above a certain point efficiently (without checking the points one by one)?

    • Adrian Rosebrock November 15, 2017 at 12:47 pm #

      Hi Joachim — have you tried using NumPy array indexing and slicing? The vector operation of checking the coordinates would be significantly faster than trying to check the points one-by-one.
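      For example, something along these lines, where c is a contour as in the post and y_ref is a made-up cut-off (keep in mind that “above” in image coordinates means a smaller y-value):

      # keep only the contour points whose y-coordinate is smaller than
      # (i.e., above) the reference value, in a single vectorized step
      y_ref = 150
      above = c[c[:, 0, 1] < y_ref]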
