Non-Maximum Suppression for Object Detection in Python

Applying non-maximum suppression to an image

Connecticut is cold. Very cold. Sometimes it’s hard to even get out of bed in the morning. And honestly, without the aid of copious amounts of pumpkin spice lattes and the beautiful sunrise over the crisp autumn leaves, I don’t think I would leave my cozy bed.

But I have work to do. And today that work includes writing a blog post about the Felzenszwalb et al. method for non-maximum suppression.

If you remember, last week we discussed Histogram of Oriented Gradients for Object Detection.

This method can be broken into a 6-step process, including:

  1. Sampling positive images
  2. Sampling negative images
  3. Training a Linear SVM
  4. Performing hard-negative mining
  5. Re-training your Linear SVM using the hard-negative samples
  6. Evaluating your classifier on your test dataset, utilizing non-maximum suppression to ignore redundant, overlapping bounding boxes

After applying these steps you’ll have an object detector that is as smooth as, well, John Coltrane:

Figure 1: My Python object detection framework applied to face detection. Even in low contrast images, faces can be easily detected.

(Note: Images utilized in this post were taken from the MIT + CMU Frontal Face Images dataset)

These are the bare minimum steps required to build an object classifier using Histogram of Oriented Gradients. Extensions to this method exist including Felzenszwalb et al.’s deformable parts model and Malisiewicz et al.’s Exemplar SVM.

However, no matter which HOG + Linear SVM method you choose, you will (with almost 100% certainty) detect multiple bounding boxes surrounding the object in the image.

For example, take a look at the image of Audrey Hepburn at the top of this post. I forked my Python framework for object detection using HOG and a Linear SVM and trained it to detect faces. Clearly, it has found Ms. Hepburn’s face in the image — but the detection fired a total of six times!

While each detection may in fact be valid, I certainly don’t want my classifier to report back to me that it found six faces when there is clearly only one face. Like I said, this is a common “problem” when utilizing object detection methods.

In fact, I don’t even want to call it a “problem” at all! It’s a good problem to have. It indicates that your detector is working as expected. It would be far worse if your detector either (1) reported a false positive (i.e. detected a face where one wasn’t) or (2) failed to detect a face.

To fix this situation we’ll need to apply Non-Maximum Suppression (NMS), also called Non-Maxima Suppression.

When I first implemented my Python object detection framework I was unaware of a good Python implementation for Non-Maximum Suppression, so I reached out to my friend Dr. Tomasz Malisiewicz, whom I consider to be the “go to” person on the topic of object detection and HOG.

Tomasz, being the all-knowing authority on the topic, referred me to two implementations in MATLAB, which I have since implemented in Python. We’re going to review the first method by Felzenszwalb et al. Then, next week, we’ll review the (faster) non-maximum suppression method implemented by Tomasz himself.

So without further delay, let’s get our hands dirty.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
This example will run on Python 2.7/Python 3.4+ and OpenCV 2.4.X/OpenCV 3.0+.

Non-Maximum Suppression for Object Detection in Python

Open up a file, name it nms.py, and let’s get started implementing the Felzenszwalb et al. method for non-maximum suppression in Python:
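
Here is a sketch of the full nms.py listing, laid out so that the Line references in the walkthrough below correspond to it. The exact formatting may differ slightly from the code in the downloads, but the logic is the greedy Felzenszwalb et al. suppression described in this post:

```python
# import the necessary packages
import numpy as np

# Felzenszwalb et al. method for non-maximum suppression (slow version)
def non_max_suppression_slow(boxes, overlapThresh):
    # if there are no boxes, return an empty list
    if len(boxes) == 0:
        return []

    # initialize the list of picked indexes
    pick = []

    # grab the coordinates of the bounding boxes
    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2]
    y2 = boxes[:, 3]

    # compute the area of the bounding boxes and sort the bounding
    # boxes by the bottom-right y-coordinate of the bounding box
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idx = np.argsort(y2)

    # keep looping while some indexes still remain in the indexes
    # list
    while len(idx) > 0:
        # grab the last index in the indexes list, add the index
        # value to the list of picked indexes, then initialize
        # the suppression list (i.e. indexes that will be deleted)
        # using the last index
        last = len(idx) - 1
        i = idx[last]
        pick.append(i)
        suppress = [last]

        # loop over all indexes in the indexes list
        for pos in range(0, last):
            # grab the current index
            j = idx[pos]

            # find the largest (x, y) coordinates for the start of
            # the bounding box and the smallest (x, y) coordinates
            # for the end of the bounding box
            xx1 = max(x1[i], x1[j])
            yy1 = max(y1[i], y1[j])
            xx2 = min(x2[i], x2[j])
            yy2 = min(y2[i], y2[j])

            # compute the width and height of the overlap region
            w = max(0, xx2 - xx1 + 1)
            h = max(0, yy2 - yy1 + 1)

            # compute the ratio of overlap between the computed
            # region and the area of the current bounding box
            overlap = float(w * h) / area[j]

            # if there is sufficient overlap, suppress the
            # current bounding box
            if overlap > overlapThresh:
                suppress.append(pos)

        # delete all indexes from the index list that are in the
        # suppression list
        idx = np.delete(idx, suppress)

    # return only the bounding boxes that were picked
    return boxes[pick]
```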

We’ll start on Line 2 by importing a single package, NumPy, which we’ll utilize for numerical processing.

From there we define our non_max_suppression_slow function on Line 5. This function accepts two arguments: the first is our set of bounding boxes in the form of (startX, startY, endX, endY), and the second is our overlap threshold. I’ll discuss the overlap threshold a little later on in this post.

Lines 7 and 8 make a quick check on the bounding boxes. If there are no bounding boxes in the list, simply return an empty list back to the caller.

From there, we initialize our list of picked bounding boxes (i.e. the bounding boxes that we would like to keep, discarding the rest) on Line 11.

Let’s go ahead and unpack the (x, y) coordinates for each corner of the bounding box on Lines 14-17 — this is done using simple NumPy array slicing.

Then we compute the area of each of the bounding boxes on Line 21 using our sliced (x, y) coordinates.

Be sure to pay close attention to Line 22. We apply np.argsort to grab the indexes that would sort the bounding boxes by their bottom-right y-coordinate. It is absolutely critical that we sort according to the bottom-right corner as we’ll need to compute the overlap ratio of other bounding boxes later in this function.

Now, let’s get into the meat of the non-maxima suppression function.

We start looping over our indexes on Line 26, where we will keep looping until we run out of indexes to examine.

From there we’ll grab the length of the idx list on Line 31, grab the value of the last entry in the idx list on Line 32, append the index i to our list of bounding boxes to keep on Line 33, and finally initialize our suppress list (the list of boxes we want to ignore) with the index of the last entry of the index list on Line 34.

That was a mouthful. And since we’re dealing with indexes into an index list, it’s not exactly an easy thing to explain. But definitely pause here and examine this code, as it’s important to understand.

Time to compute the overlap ratios and determine which bounding boxes we can ignore.

Here we start looping over the (remaining) indexes in the idx  list on Line 37, grabbing the value of the current index on Line 39.

Using the last entry in the idx list from Line 32 and the current entry in the idx list from Line 39, we find the largest (x, y) coordinates for the start of the bounding box and the smallest (x, y) coordinates for the end of the bounding box on Lines 44-47.

Doing this allows us to find the current smallest region inside the larger bounding boxes (and hence why it’s so important that we initially sort our idx  list according to the bottom-right y-coordinate). From there, we compute the width and height of the region on Lines 50 and 51.

So now we are at the point where the overlap threshold comes into play. On Line 55 we compute the overlap, which is a ratio defined by the area of the current smallest region divided by the area of the current bounding box, where “current” is defined by the index j on Line 39.

If the overlap  ratio is greater than the threshold on Line 59, then we know that the two bounding boxes sufficiently overlap and we can thus suppress the current bounding box. Common values for overlapThresh  are normally between 0.3 and 0.5.
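
To make the ratio concrete, here is a quick toy computation with made-up coordinates (these boxes are purely illustrative and are not detections from the images in this post):

```python
import numpy as np

# two made-up boxes in (startX, startY, endX, endY) form
boxI = np.array([12, 84, 140, 212])   # the box we just picked (last in the sorted list)
boxJ = np.array([24, 96, 152, 224])   # the box currently being examined

# intersection rectangle between the two boxes
xx1, yy1 = max(boxI[0], boxJ[0]), max(boxI[1], boxJ[1])
xx2, yy2 = min(boxI[2], boxJ[2]), min(boxI[3], boxJ[3])
w = max(0, xx2 - xx1 + 1)
h = max(0, yy2 - yy1 + 1)

# ratio of the intersection area to the area of boxJ
areaJ = (boxJ[2] - boxJ[0] + 1) * (boxJ[3] - boxJ[1] + 1)
overlap = float(w * h) / areaJ
print(overlap)  # roughly 0.82, well above a 0.3 threshold, so boxJ would be suppressed
```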

Line 64 then deletes the suppressed bounding boxes from the idx  list and we continue looping until the idx  list is empty.

Finally, we return the set of picked bounding boxes (the ones that were not suppressed) on Line 67.

Let’s go ahead and create a driver so we can execute this code and see it in action. Open up a new file, name it nms_slow.py, and add the following code:
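
Below is a sketch of that driver. The image paths and bounding box coordinates are only illustrative placeholders standing in for the detections produced by the HOG + Linear SVM classifier, and the import assumes the nms.py file from above sits inside a pyimagesearch package (more on that below). As before, the layout keeps the Line references in the walkthrough aligned with the listing:

```python
# import the necessary packages
from pyimagesearch.nms import non_max_suppression_slow
import numpy as np
import cv2

# construct a list containing the images that will be examined
# along with their respective (illustrative) bounding boxes
images = [
    ("images/audrey.jpg", np.array([
    (12, 84, 140, 212),
    (24, 84, 152, 212),
    (36, 84, 164, 212),
    (12, 96, 140, 224),
    (24, 96, 152, 224),
    (24, 108, 152, 236)])),
    ("images/face_single.jpg", np.array([
    (114, 60, 178, 124),
    (120, 60, 184, 124),
    (114, 66, 178, 130)])),
    ("images/faces_multiple.jpg", np.array([
    (12, 30, 76, 94),
    (12, 36, 76, 100),
    (72, 36, 200, 164),
    (84, 48, 212, 176)]))]

# loop over the images
for (imagePath, boundingBoxes) in images:
    # load the image and clone it
    print("[x] %d initial bounding boxes" % (len(boundingBoxes)))
    image = cv2.imread(imagePath)
    orig = image.copy()

    # loop over the bounding boxes for each image and draw them
    for (startX, startY, endX, endY) in boundingBoxes:
        cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 0, 255), 2)

    # perform non-maximum suppression on the bounding boxes
    pick = non_max_suppression_slow(boundingBoxes, 0.3)
    print("[x] after applying non-maximum suppression, %d bounding boxes" % (len(pick)))

    # loop over the picked bounding boxes and draw them
    for (startX, startY, endX, endY) in pick:
        cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)

    # display the images
    cv2.imshow("Original", orig)
    cv2.imshow("After NMS", image)
    cv2.waitKey(0)
```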

We start by importing our non_max_suppression_slow  function on Line 2. I put this function in the pyimagesearch  package for organizational purposes, but you can put the function wherever you see fit. From there, we import NumPy for numerical processing and cv2  for our OpenCV bindings on Lines 3-4.

Then, we define a list of images  on Line 8. This list consists of 2-tuples, where the first entry in the tuple is a path to an image and the second entry is the list of bounding boxes. These bounding boxes were obtained from my HOG + Linear SVM classifier detecting potential “faces” at varying locations and scales. Our goal is to take the set of bounding boxes for each image and apply non-maximum suppression.

We start by looping over the image path and bounding boxes on Line 27 and load the image on Line 30.

To visualize the results of non-maximum suppression in action, we first draw the original (non-suppressed) bounding boxes on Lines 34 and 35.

We then apply non-maximum suppression on Line 38 and draw the picked bounding boxes on Lines 42-43.

The resulting images are finally displayed on Lines 46-48.

Non-Maximum Suppression in Action

To see the Felzenszwalb et al. non-maximum suppression method in action, download the source code and accompanying images for this post from the bottom of this page, navigate to the source code directory, and issue the following command:
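
Assuming you saved the driver as nms_slow.py as described above, the command looks like this:

```
$ python nms_slow.py
```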

First, you’ll see the Audrey Hepburn image:

Figure 2: Our classifier initially detected six bounding boxes, but by applying non-maximum suppression, we are left with only one (the correct) bounding box.

Notice how six bounding boxes were detected, but by applying non-maximum suppression, we are able to prune this number down to one.

The same is true for the second image:

Figure 3: Initially detecting three bounding boxes, but by applying non-maximum suppression we can prune the number of overlapping bounding boxes down to one.

Here we have found three bounding boxes corresponding to the same face, but non-maximum suppression is able to reduce this number down to one bounding box.

So far we have only examined images that contain one face. But what about images that contain multiple faces? Let’s take a look:

Figure 4: Non-maximum suppression correctly handles when there are multiple faces, suppressing the smaller overlapping bounding boxes, but retaining the boxes that do not overlap.

Even for images that contain multiple objects, non-maximum suppression is able to ignore the smaller overlapping bounding boxes and return only the larger ones. Non-maximum suppression returns two bounding boxes here because the bounding boxes for each face do not overlap at all. And even if they did overlap slightly, the overlap ratio would not exceed the supplied threshold of 0.3.

Summary

In this blog post I showed you how to apply the Felzenszwalb et al. method for non-maximum suppression.

When using the Histogram of Oriented Gradients descriptor and a Linear Support Vector Machine for object classification you almost always detect multiple bounding boxes surrounding the object you want to detect.

Instead of returning all of the found bounding boxes you should first apply non-maximum suppression to ignore bounding boxes that significantly overlap each other.

However, there are improvements to be made to the Felzenszwalb et al. method for non-maximum suppression.

In my next post I’ll implement the method suggested by my friend Dr. Tomasz Malisiewicz which is reported to be over 100x faster!

Be sure to download the code to this post using the form below! You’ll definitely want to have it handy when we examine Tomasz’s non-maximum suppression algorithm next week!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


27 Responses to Non-Maximum Suppression for Object Detection in Python

  1. rajacheers November 18, 2014 at 11:19 am #

    Hi

    Thanks for the great post.

    I noticed that if the first coordinate is bigger, it can’t suppress them.

    Example boxes like:
    [[348 98 234 234]
    [345 102 233 233]
    [338 100 238 238]],

    [[175 80 32 32]
    [276 93 30 30]
    [176 81 30 30]
    [276 93 30 30]
    [177 81 29 29]
    [277 93 29 29]].

    What can be improved to make this work?

    • Adrian Rosebrock November 18, 2014 at 12:01 pm #

      Please note my comment in the post:

      It is absolutely critical that we sort according to the bottom-right corner as we’ll need to compute the overlap ratio of other bounding boxes later in this function.

      The algorithm assumes that the coordinates are in the following order: (x-coordinate of the top-left, y-coordinate of the top-left, x-coordinate of the bottom-right, and y-coordinate of the bottom right). If for some reason you are worried about the order of the coordinates, simply perform a check and re-order as necessary. I wouldn’t call this an “improvement”, just an assumption of the non-maximum suppression algorithm that can be easily guaranteed by checking the points beforehand.

      • Morné January 26, 2015 at 12:37 am #

        I’m also suspicious about the rationale for “picking” the bottom box (irrespective of horizontal position), and not the biggest box. Would you mind clarifying?

        • Adrian Rosebrock January 26, 2015 at 6:44 am #

          You are picking the largest bounding box since they are sorted by area prior to the actual suppression algorithm being run.

          • Artem August 4, 2015 at 8:20 am #

            But you are using the variable ‘area’ only on line 55; no sorting by area is performed. Could you clarify, please?

          • Adrian Rosebrock August 4, 2015 at 1:16 pm #

            Sorting is actually performed on Line 22. The area is used to determine the overlap, and if the overlap exceeds the threshold, then the indexes can be removed. Line 22 can also be replaced to sort via some other value, such as the score or probability of the detection returned by the SVM.

  2. Juan January 16, 2015 at 5:42 am #

    Hi Adrian,

    Are you going to publish a tutorial on Malisiewicz et al.’s Exemplar SVM? In the HoG tutorial you commented that you had already implemented their algorithm in python.

    • Adrian Rosebrock January 16, 2015 at 7:31 am #

      Hi Juan, you are correct. I am publishing a tutorial on the Malisiewicz et al. method for non-maximum suppression (not the entire Exemplar SVM framework). However, due to poor blog post planning on my part, I had to push the post back to accommodate a few other posts that needed to go out ASAP. That said, I will definitely get the faster version of non-maximum suppression posted later in January or early February.

  3. Mikael March 17, 2015 at 9:33 am #

    Hi Adrian,

    Thanks for this post. There may be a difference between your implementation and the original one.
    In the matlab code, it looks like the initial sort is done on detection scores, not the y coordinate: https://github.com/rbgirshick/voc-dpm/blob/master/test/nms.m#L40-L43
    The documentation is not very clear about what “boxes” is (“Detection bounding boxes (see pascal_test.m)”) but the method is meant to select “high-scoring detections” https://github.com/rbgirshick/voc-dpm/blob/master/test/nms.m#L5-L6.
    I guess your version ends up selecting the detection with the highest “y”.

    • Adrian Rosebrock March 17, 2015 at 11:29 am #

      Hi Mikael, thanks for pointing this out to me! In the future I’ll issue an update to this post that includes the scoring of each bounding box.

      • Ricardo February 14, 2017 at 4:30 pm #

        Hi Adrian is there an update containing the scores of the boxes?
        nice post by the way! regards from Mexico!!

        • Adrian Rosebrock February 15, 2017 at 9:05 am #

          Yes, you can find the update in the imutils library.

  4. Alex August 12, 2015 at 11:05 am #

    Thanks for an awesome post as always!

    You mention in one of your comments above:
    “Line 22 can also be replaced to sort via some other value, such as the score or probability of the detection returned by the SVM.”

    That’s exactly what I’m looking for. I want to be able to keep the highest scoring boxes returned by my neural network (those with the higher prob of being in the positive class), and smartly merge all nearby bounding boxes with that highest scoring box. Should changing line 22 to sort by my neural network score do this? I tried that, and it’s not working for me. I’m wondering if you have suggestions of other things that might need changing.

    • Adrian Rosebrock August 13, 2015 at 7:10 am #

      Hey Alex — yes, you are correct. Change Line 22 to sort by the score of the bounding box returned by your neural network instead of the y-coordinate. Make sure that your bounding box scores correspond to the (x, y)-coordinates of the bounding boxes that you pass into the function. I would also suggest creating a separate scores parameter for the function.

  5. Nico October 7, 2015 at 1:13 pm #

    Is there a specific reason to bias the area by +1 in each dimension on line 21?

    • Adrian Rosebrock October 7, 2015 at 3:57 pm #

      It simply avoids any type of “divide by zero” errors.

      • Nico October 8, 2015 at 11:35 am #

        Thanks, makes sense!

  6. Mayank May 19, 2016 at 1:34 pm #

    Hi Adrian,
    How do I add features while doing hard-negative mining? I have extracted HOG features for both the positive and negative classes and have trained my model. Now if I do a sliding window over the negative training images, I will get features for each window. The problem is that the feature matrix of the complete image is larger than that of a small window, so how would I add these new features to my existing feature list? Also, since the images are the same, there will be redundant features too, so how do I avoid redundant data here?

    • Adrian Rosebrock May 19, 2016 at 5:55 pm #

      I would save your hard-negative feature vectors to a separate file (such as CSV, HDF5, whatever file format you are using to store your current feature vectors from the original training data). Once you have your hard-negatives saved to disk, load both the hard-negatives and original data and stack them together. For this, I like using the np.hstack function. I cover this in more detail inside PyImageSearch Gurus.

  7. Attila September 27, 2016 at 9:20 am #

    Hi Adrian, thanks for the great tutorial!

    I’m using nms for person detection with OpenCV’s detectMultiScale function, which returns a foundWeights list containing a value for every detection prior to applying nms. Do you have any suggestions on how to use that information to calculate a certain confidence level for each detection after applying nms?

    • Adrian Rosebrock September 28, 2016 at 10:46 am #

      The weights returned by detectMultiScale are your actual confidence levels from the SVM. You would normally hardcode a confidence threshold for good vs. bad predictions. This is normally done on a dataset-to-dataset basis.

  8. Iancu March 15, 2017 at 2:26 pm #

    Hello. Regarding this topic, can you tell me if something like this exists in C++, and if the answer is yes, what is it called (or what should I search for)? I see that in Python there is a library which does all the work; is there something like this in C++? Thank you.

  9. Brian March 19, 2017 at 7:35 pm #

    Hi Adrian,

    When running non-maximum suppression it seems that a single one of the original boxes in a cluster is chosen. Is there a good algorithm that, instead of using one of the input boxes, returns a box created from the other boxes? Preferably there would be a way to weight the boxes as you merge them using the classifier confidence. I see that you can sort them by their classification accuracy rather than by y2, but that would still result in only one of the original boxes being chosen.

    Also, how does non-maximum suppression work with multiple scales? Like when you do a sliding window with an image pyramid, the same object might be detected at different scales.

    • Adrian Rosebrock March 21, 2017 at 7:28 am #

      You apply non-maxima suppression after applying a sliding window across multiple scales. The returned bounding boxes are always in terms of the original image dimensions (this simply involves multiplying the bounding boxes by the ratio of the pyramid layer to the original image dimensions). The bounding boxes are collected from all sliding windows + scales and then NMS is applied.

      As for weighting the bounding boxes, you could do that, assuming your SVM returns a score to represent how “good” the prediction is (i.e., a probability). However, I’m not sure this would give you any better results since higher scores will ultimately dominate the weighted average. I doubt that you would see big improvement in Intersection over Union, the metric we use for measuring the quality of a detection.

      If you’re interested in learning more about object detection, be sure to take a look at the PyImageSearch Gurus course.

  10. Walid Ahmed April 25, 2017 at 4:03 pm #

    Hi
    I found it strange that in the example one of the bounding boxes is returned and not a new bounding box,
    is this normal?

    • Adrian Rosebrock April 25, 2017 at 9:04 pm #

      Correct, that is how non-maxima suppression works. It “suppresses” all the other bounding boxes (i.e., the non-maxima ones). It does not take an average over the others (or some other metric).

Trackbacks/Pingbacks

  1. (Faster) Non-Maximum Suppression in Python - PyImageSearch - February 16, 2015

    […] Last week I showed you how to implement the Felzenszwalb et al. method. And this week I am going to show you the Malisiewicz et al. method which is over 100x faster. […]
