OpenCV Shape Descriptor: Hu Moments Example

OpenCV Shape Descriptors

So what type of shape descriptors does OpenCV provide?

The most notable are Hu Moments, which can be used to describe, characterize, and quantify the shape of an object in an image.

Hu Moments are normally extracted from the silhouette or outline of an object in an image. By describing the silhouette or outline of an object, we are able to extract a shape feature vector (i.e. a list of numbers) to represent the shape of the object.

We can then compare two feature vectors using a similarity metric or distance function to determine how “similar” the shapes are.

In this blog post I’ll show you how to extract the Hu Moments shape descriptor using Python and OpenCV.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
This example will run on Python 2.7/Python 3.4+ and OpenCV 2.4.X/OpenCV 3.0+.

OpenCV Shape Descriptor: Hu Moments Example

As I mentioned, Hu Moments are used to characterize the outline or “silhouette” of an object in an image.

Normally, we obtain this shape after applying some sort of segmentation (i.e. setting the background pixels to black and the foreground pixels to white). Thresholding is the most common approach to obtain our segmentation.
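For example, a simple binary threshold might look something like the sketch below (the filename and the threshold value of 50 are illustrative, not taken from this post):

import cv2

# Load an image and convert it to grayscale (hypothetical filename)
image = cv2.imread("example.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Pixels with intensity > 50 become white (foreground); all others black
(T, thresh) = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)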

After we have performed thresholding we have the silhouette of the object in the image.

We could also find the contours of the silhouette and draw them, thus creating an outline of the object.
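A rough sketch of that approach, continuing from the thresholding example above (note that the return signature of cv2.findContours differs between OpenCV versions):

import cv2
import numpy as np

# OpenCV 2.4 returns (contours, hierarchy); OpenCV 3 returns
# (image, contours, hierarchy) -- adjust the unpacking for your version
(cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
	cv2.CHAIN_APPROX_SIMPLE)

# Draw the contours in white on a blank canvas, forming the outline
outline = np.zeros(gray.shape, dtype="uint8")
cv2.drawContours(outline, cnts, -1, 255, 1)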

Regardless of which method we choose, we can still apply the Hu Moments shape descriptors provided that we obtain consistent representations across all images.

For example, it wouldn’t make sense to extract Hu Moments shape features from the silhouette of one set of images and then extract Hu Moments shape descriptors from the outline of another set of images if our intention is to compare the shape features in some way.

Anyway, let’s get started and extract our OpenCV shape descriptors.

First, we’ll need an image, diamond.png:

Figure 1: Extracting OpenCV shape descriptors from our image

This image is of a diamond, where the black pixels correspond to the background of the image and the white pixels correspond to the foreground. This is an example of a silhouette of an object in an image. If we had just the border of the diamond, it would be the outline of the object.

Regardless, it is important to note that our Hu Moments shape descriptor will only be computed over the white pixels.

Now, let’s extract our shape descriptors:
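The snippet below is a minimal sketch of the steps described next:

# Import our OpenCV bindings
import cv2

# Load the diamond image off disk and convert it to grayscale
image = cv2.imread("diamond.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)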

The first thing we need to do is import our cv2 package, which provides us with our OpenCV bindings.

Then, we load our diamond image off disk using the cv2.imread method and convert it to grayscale.

We convert our image to grayscale because Hu Moments requires a single-channel image; the shape quantification is carried out only over the white (foreground) pixels.

From here, we can compute our Hu Moments shape descriptor using OpenCV:
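A sketch of this step, continuing from the loading code above:

# Compute the 24 raw moments of the image, derive Hu's seven invariant
# moments from them, and flatten the resulting (7, 1) array into a
# 7-dimensional shape feature vector
moments = cv2.moments(image)
huMoments = cv2.HuMoments(moments).flatten()
print(huMoments)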

In order to compute our Hu Moments, we first need to compute the 24 raw moments (spatial, central, and normalized central) associated with the image using cv2.moments.

From there, we pass these moments into cv2.HuMoments, which calculates Hu’s seven invariant moments.

Finally, we flatten our array to form our shape feature vector.

This feature vector can be used to quantify and represent the shape of an object in an image.

Summary

In this blog post I showed you how to use the Hu Moments OpenCV shape descriptor.

In future blog posts I will show you how to compare Hu Moments feature vectors for similarity.

Be sure to enter your email address in the form below to be informed when I post new awesome content!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, but I'll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I'll send you the code immediately!


33 Responses to OpenCV Shape Descriptor: Hu Moments Example

  1. Marcio Carvalho February 10, 2015 at 9:03 am #

    Hi Adrian,

    Thank you for this great post.

    I have one question: shouldn't Hu's moment invariants be log-transformed?

    I've seen in some forums that this is better for shape feature extraction and pattern classification, but I couldn't confirm it.

    hu = cv2.HuMoments(cv2.moments(image)).flatten()

    print(-np.sign(hu) * np.log10(np.abs(hu)))

    • Adrian Rosebrock February 10, 2015 at 9:32 am #

      Hi Marcio, that’s a really, really great question, thanks for asking! So yes, when comparing Hu moments it can be helpful to log transform the Hu moments features. This helps prevent noise and “spikiness” of features. Is it absolutely necessary? It depends on your application. I always start with the normal Hu moments and then log transform if necessary.

  2. Bustomi Raharjo May 7, 2015 at 1:20 pm #

    Hi Adrian,

    My name's Bustomi. I've been developing an app to translate ASL to text on Android using OpenCV. I've finished the background segmentation, separated the foreground, and found the contours. I want to compute the Hu Moments of the object and then train on those values using LibSVM in OpenCV for Android. Could you explain how to do that, or even better, give me an example, please? Thanks in advance.

    Best,

    Bustomi

    • Adrian Rosebrock May 7, 2015 at 1:30 pm #

      Hi Bustomi, I don’t have any experience using OpenCV on Android. But the example in this post demonstrates how to extract Hu Moments. From there all you need to do is pass them into your favorite machine learning library.

  3. Raul Ferreira December 19, 2015 at 11:08 am #

    Hello, I'm working on text recognition using OpenCV 2.4.11 and Python 2.7.
    My teacher told me that Hu Moments were a good basis for my future code to work from, and I find this code really interesting.

    I was using PNG pictures with Arial text, and I think this code is capable of finding the Hu Moments of each letter.

    My questions are:
    – Can I use this code to recognize more than one letter at a time?
    – Can this code be used along with webcams to recognize handwritten letters?
    – Do you have any idea how I could continue my work using this code?

    Any help would be appreciated. Waiting for your reply.

    Thank you

    • Adrian Rosebrock December 20, 2015 at 9:48 am #

      You can certainly use Hu Moments and Zernike moments for character recognition; however, the Histogram of Oriented Gradients descriptor is much better suited for this type of problem.

      In order to recognize letters, you’ll need to extract features from characters then train a machine learning classifier. From there, you can use the classifier to recognize new characters.
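      A rough sketch of that pipeline (the scikit-learn classifier and the random stand-in data below are illustrative assumptions, not from this post):

      from sklearn.svm import LinearSVC
      import numpy as np

      # Stand-in data: 100 feature vectors of 7 Hu Moments each, 10 classes
      X = np.random.rand(100, 7)
      y = np.random.randint(0, 10, 100)

      # Train a Linear SVM on the features, then classify new characters
      model = LinearSVC()
      model.fit(X, y)
      print(model.predict(X[:5]))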

      I would suggest taking a look at Practical Python and OpenCV. Inside the book I have an entire chapter dedicated to building a handwritten digit recognizer using HOG and Linear SVM. This code can be easily adjusted to use Hu Moments if you would like.

  4. Sumukha April 25, 2016 at 6:49 am #

    Can I use HuMoments to find a descriptor for every pixel of the image (considering a neighbourhood)?

    • Adrian Rosebrock April 25, 2016 at 1:58 pm #

      I’m not sure I understand your question, but the Hu Moments descriptor takes into account every pixel in the input image (or ROI) when you compute it.

  5. Mayank June 3, 2016 at 1:14 am #

    Hi Adrian, out of these two (Hu Moments and Zernike Moments), which is better for describing object shapes? For example, I want to detect guns, so out of these two, which will give better results and why? I am using HOG, color, and shape as features.

    • Adrian Rosebrock June 3, 2016 at 3:01 pm #

      Out of the two, Zernike Moments tends to give better results for 2D shape recognition. But if you’re looking to detect objects (such as guns) in various lighting conditions, poses, etc., you would be better off training your own custom HOG + Linear SVM detector. And depending on the complexity of your images, you may even need deep learning.

  6. Lahiru November 6, 2016 at 3:51 am #

    Hi, can you add the Hu Moments comparison part as well? Thanks.

  7. siyer December 9, 2016 at 2:38 am #

    Hi Adrian

    You mentioned in your other post that Zernike Moments are invariant to rotation, which enables us to apply a perspective transform and still compare shapes safely. But if we are looking for a more precise match between two contour sets, should we consider Hu Moments in that case? Your thoughts, please...

    Regards
    Shankar

    • Adrian Rosebrock December 10, 2016 at 7:13 am #

      Simple. I would test both and compare the results, then go with the method that provided the best results. Hu Moments will work just fine in very controlled conditions but you’ll often get more mileage from Zernike Moments.

  8. Koushik February 5, 2017 at 6:33 am #

    Hi, is it possible to uniquely recognize the shapes of different cars using OpenCV?

    • Adrian Rosebrock February 7, 2017 at 9:21 am #

      Hi Koushik — you wouldn’t want to use basic image descriptors to recognize cars. Instead, I would recommend a machine learning or deep learning approach. I’ll be demonstrating how to recognize the make and model of cars in my upcoming book.

  9. Geeth September 6, 2017 at 6:53 am #

    Hi Adrian,
    Thanks for all your code! I use many of your examples.
    I have a question: which method do you think is best for shape matching of hand gestures?
    I have a large vocabulary of 10,000 different hand gestures.
    I have tried many methods, like HOG, SIFT, SURF, BRIEF, moments, and convex hulls, but none is giving me good accuracy.

    • Adrian Rosebrock September 7, 2017 at 7:04 am #

      It really depends. If your method doesn’t need to be invariant to rotation, HOG is a good choice. SIFT/SURF/etc. require an additional step of keypoint detection. In low contrast situations you may not be able to detect enough keypoints to build a reliable model. Instead you might want to consider looking at stereo/depth cameras for this project.

  10. ashutosh gupta April 19, 2018 at 3:40 pm #

    Hi Adrian.
    Great post. I applied the Hu Moments descriptor to images in my dataset, which consists of images of elephants, horses, buildings, buses, etc. (the Wang dataset), with some background in them. I am trying to classify the images on the basis of shape features, so the key objects in the images should match via the Hu Moments shape descriptor. I am comparing the Hu Moments of two images to calculate the similarity between them. In the case of the dinosaurs (which have no background) I am getting almost 100% accuracy with this descriptor, but for the others I am getting nearly 10% accuracy, so I think the background is making things worse. What type of segmentation can I apply to an image before passing it to the Hu Moments descriptor to minimize the effect of the background? I have tried applying adaptive thresholding and auto_canny edge detection, but they are not improving the results. What kind of preprocessing can I do on the images in such a case?

    • Adrian Rosebrock April 20, 2018 at 9:54 am #

      You need an extremely good segmentation of your object to apply Hu moments or Zernike moments. If you cannot segment the foreground from the background the resulting feature vector will not be very discriminative. You may want to consider using an object detection method such as HOG + Linear SVM.

  11. jaini August 4, 2018 at 1:27 am #

    How do I compare two images of shapes to find the similarity between them?

    • jaini August 4, 2018 at 1:28 am #

      Can we use Hu Moments for it? And if so, how do we compare the values?

      • Adrian Rosebrock August 7, 2018 at 7:00 am #

        You can use Hu moments to compare shapes. You would:

        1. Extract Hu moments from the shapes
        2. Compute the Euclidean distance between them

        Shapes with smaller Euclidean distances are more similar.
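
        A minimal sketch of those two steps (the helper function and filenames are hypothetical):

        import cv2
        import numpy as np

        def describe(path):
            # Hypothetical helper: load an image, convert it to grayscale,
            # and return its flattened Hu Moments feature vector
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            return cv2.HuMoments(cv2.moments(gray)).flatten()

        # Smaller Euclidean distance = more similar shapes
        d = np.linalg.norm(describe("shape_a.png") - describe("shape_b.png"))
        print(d)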

        If you're interested in learning more about quantifying images/objects and comparing them, I would suggest you take a look at the PyImageSearch Gurus course, where I have 60+ lessons on feature extraction and comparing images.

  12. Ayu September 14, 2018 at 8:19 am #

    Hi, I'm working on research using Hu Moment Invariants. I want to ask: do you know the meaning of each of the seven Hu Moment Invariants? Please provide an explanation and a reference. Thank you very much.

  13. Yonten February 1, 2019 at 12:44 am #

    I am extracting features using Hu Moments from segmented license plate characters. The seven moments are calculated, and I see almost identical values for all of the numbers and characters. Is it possible to detect the desired characters from these extracted features?

    • Adrian Rosebrock February 1, 2019 at 6:38 am #

      Technically yes, but the character recognition won't be that accurate, especially in real-world images. I would suggest referring to the PyImageSearch Gurus course where I cover ANPR in detail.

  14. Chiranjibi Sitaula March 23, 2019 at 11:13 pm #

    Hi, I am trying to extract features of an image based on the objects in an indoor scene. I am trying to slice the image into several sub-images, apply adaptive thresholding to them, and finally apply moments (Hu and Zernike). Is this a good option for object shape feature extraction, which will then be used to extract the features of the whole image? Thanks

    • Adrian Rosebrock March 27, 2019 at 9:13 am #

      That can work but the problem is that you’ll need to reliably segment the objects from the image. Secondly, these descriptors are not viewpoint invariant. You may need a stronger feature extraction method and model. I would suggest going through the PyImageSearch Gurus course to learn more about these advanced methods.

  15. Chiranjibi Sitaula March 24, 2019 at 7:01 am #

    Can I use these features for image recognition involving multiple shapes?

    • Adrian Rosebrock March 27, 2019 at 9:07 am #

      Yes, but that really depends on your actual shapes themselves. If you can describe what you are trying to achieve I can try to provide a suggestion to you.

  16. Deepesh Khanal June 27, 2019 at 5:09 am #

    I am trying to extract features from fruits for fruit recognition. The dataset I use is Fruits-360:
    https://github.com/Horea94/Fruit-Images-Dataset

    Is it better to use moment invariants (Hu, Zernike) or HOG, and why? And which works better for recognizing multiple fruits in a single image?
