Non-Maximum Suppression for Object Detection in Python

Applying non-maximum suppression to an image

Connecticut is cold. Very cold. Sometimes it’s hard to even get out of bed in the morning. And honestly, without the aid of copious amounts of pumpkin spice lattes and the beautiful sunrise over the crisp autumn leaves, I don’t think I would leave my cozy bed.

But I have work to do. And today that work includes writing a blog post about Felzenszwalb et al.’s method for non-maximum suppression.

If you remember, last week we discussed Histogram of Oriented Gradients for Object Detection.

This method can be broken into a 6-step process, including:

  1. Sampling positive images
  2. Sampling negative images
  3. Training a Linear SVM
  4. Performing hard-negative mining
  5. Re-training your Linear SVM using the hard-negative samples
  6. Evaluating your classifier on your test dataset, utilizing non-maximum suppression to ignore redundant, overlapping bounding boxes

After applying these steps you’ll have an object detector that is as smooth as, well, John Coltrane:

Figure 1: My Python object detection framework applied to face detection. Even in low contrast images, faces can be easily detected.


(Note: Images utilized in this post were taken from the MIT + CMU Frontal Face Images dataset)

These are the bare minimum steps required to build an object classifier using Histogram of Oriented Gradients. Extensions to this method exist including Felzenszwalb et al.’s deformable parts model and Malisiewicz et al.’s Exemplar SVM.

However, no matter which HOG + Linear SVM method you choose, you will (with almost 100% certainty) detect multiple bounding boxes surrounding the object in the image.

For example, take a look at the image of Audrey Hepburn at the top of this post. I forked my Python framework for object detection using HOG and a Linear SVM and trained it to detect faces. Clearly, it has found Ms. Hepburn’s face in the image — but the detection fired a total of six times!

While each detection may in fact be valid, I certainly don’t want my classifier to report back to me saying that it found six faces when there is clearly only one face. Like I said, this is a common “problem” when utilizing object detection methods.

In fact, I don’t even want to call it a “problem” at all! It’s a good problem to have. It indicates that your detector is working as expected. It would be far worse if your detector either (1) reported a false positive (i.e. detected a face where one wasn’t) or (2) failed to detect a face.

To fix this situation we’ll need to apply Non-Maximum Suppression (NMS), also called Non-Maxima Suppression.

When I first implemented my Python object detection framework I was unaware of a good Python implementation for Non-Maximum Suppression, so I reached out to my friend Dr. Tomasz Malisiewicz, whom I consider to be the “go to” person on the topic of object detection and HOG.

Tomasz, being the all-knowing authority on the topic, referred me to two implementations in MATLAB, which I have since implemented in Python. We’re going to review the first method by Felzenszwalb et al. Then, next week, we’ll review the (faster) non-maximum suppression method implemented by Tomasz himself.

So without further delay, let’s get our hands dirty.


OpenCV and Python versions:
This example will run on Python 2.7/Python 3.4+ and OpenCV 2.4.X/OpenCV 3.0+.

Non-Maximum Suppression for Object Detection in Python

Open up a file, name it , and let’s get started implementing the Felzenszwalb et al. method for non-maximum suppression in Python:
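The code listing that originally accompanied this section is missing from this copy of the post. The following is a reconstruction of the Felzenszwalb et al. implementation as the walkthrough below describes it, laid out so the Line references (Line 2, Line 5, and so on) line up, and using range rather than xrange so it also runs on Python 3:

```python
# import the necessary packages
import numpy as np

# Felzenszwalb et al.
def non_max_suppression_slow(boxes, overlapThresh):
    # if there are no boxes, return an empty list
    if len(boxes) == 0:
        return []

    # initialize the list of picked indexes
    pick = []

    # grab the coordinates of the bounding boxes
    x1 = boxes[:, 0]
    y1 = boxes[:, 1]
    x2 = boxes[:, 2]
    y2 = boxes[:, 3]

    # compute the area of the bounding boxes and sort the bounding
    # boxes by the bottom-right y-coordinate of the bounding box
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = np.argsort(y2)

    # keep looping while some indexes still remain in the indexes
    # list
    while len(idxs) > 0:
        # grab the last index in the indexes list, add the index
        # value to the list of picked indexes, then initialize
        # the suppression list (i.e. indexes that will be deleted)
        # using the last index
        last = len(idxs) - 1
        i = idxs[last]
        pick.append(i)
        suppress = [last]

        # loop over all indexes in the indexes list
        for pos in range(0, last):
            # grab the current index
            j = idxs[pos]

            # find the largest (x, y) coordinates for the start of
            # the bounding box and the smallest (x, y) coordinates
            # for the end of the bounding box
            xx1 = max(x1[i], x1[j])
            yy1 = max(y1[i], y1[j])
            xx2 = min(x2[i], x2[j])
            yy2 = min(y2[i], y2[j])

            # compute the width and height of the bounding box
            w = max(0, xx2 - xx1 + 1)
            h = max(0, yy2 - yy1 + 1)

            # compute the ratio of overlap between the computed
            # bounding box and the bounding box in the area list
            overlap = float(w * h) / area[j]

            # if there is sufficient overlap, suppress the
            # current bounding box
            if overlap > overlapThresh:
                suppress.append(pos)

        # delete all indexes from the index list that are in the
        # suppression list
        idxs = np.delete(idxs, suppress)

    # return only the bounding boxes that were picked
    return boxes[pick]
```

Note the float cast on Line 55: it keeps the division from silently truncating to an integer ratio under Python 2.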

We’ll start on Line 2 by importing a single package, NumPy, which we’ll utilize for numerical processing.

From there we define our non_max_suppression_slow function on Line 5. This function accepts two arguments: the first is our set of bounding boxes in the form of (startX, startY, endX, endY), and the second is our overlap threshold. I’ll discuss the overlap threshold a little later on in this post.

Lines 7 and 8 make a quick check on the bounding boxes. If there are no bounding boxes in the list, simply return an empty list back to the caller.

From there, we initialize our list of picked bounding boxes (i.e. the bounding boxes that we would like to keep, discarding the rest) on Line 11.

Let’s go ahead and unpack the (x, y) coordinates for each corner of the bounding box on Lines 14-17 — this is done using simple NumPy array slicing.

Then we compute the area of each of the bounding boxes on Line 21 using our sliced (x, y) coordinates.

Be sure to pay close attention to Line 22. We apply np.argsort to grab the indexes that would sort the bounding boxes by their bottom-right y-coordinate. It is absolutely critical that we sort according to the bottom-right corner as we’ll need to compute the overlap ratio of other bounding boxes later in this function.
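To make the sorting step concrete, here is a tiny standalone sketch (the three boxes are made up) of what np.argsort gives us:

```python
import numpy as np

# three hypothetical bounding boxes in (startX, startY, endX, endY) form
boxes = np.array([
    [24, 108, 152, 236],
    [12, 84, 140, 212],
    [24, 96, 152, 224]])

# np.argsort returns the *indexes* that would sort the bottom-right
# y-coordinates in ascending order, so the last entry in idxs always
# points at the box with the largest bottom-right y-coordinate
y2 = boxes[:, 3]
idxs = np.argsort(y2)
print(idxs)   # [1 2 0]: box 0 has the largest y2, so its index comes last
```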

Now, let’s get into the meat of the non-maximum suppression function.

We start looping over our indexes on Line 26, where we will keep looping until we run out of indexes to examine.

From there we’ll grab the length of the idx list on Line 31, grab the value of the last entry in the idx list on Line 32, append the index i to our list of bounding boxes to keep on Line 33, and finally initialize our suppress list (the list of boxes we want to ignore) with the index of the last entry of the index list on Line 34.

That was a mouthful. And since we’re dealing with indexes into an index list it’s not exactly an easy thing to explain. But definitely pause here and examine this code as it’s important to understand.

Time to compute the overlap ratios and determine which bounding boxes we can ignore.

Here we start looping over the (remaining) indexes in the idx  list on Line 37, grabbing the value of the current index on Line 39.

Using the last entry in the idx list from Line 32 and the current entry in the idx list from Line 39, we find the largest (x, y) coordinates for the start of the bounding box and the smallest (x, y) coordinates for the end of the bounding box on Lines 44-47.

Doing this allows us to find the current smallest region inside the larger bounding boxes (and hence why it’s so important that we initially sort our idx  list according to the bottom-right y-coordinate). From there, we compute the width and height of the region on Lines 50 and 51.

So now we are at the point where the overlap threshold comes into play. On Line 55 we compute the overlap, which is a ratio defined by the area of the current smallest region divided by the area of the current bounding box, where “current” is defined by the index j on Line 39.

If the overlap  ratio is greater than the threshold on Line 59, then we know that the two bounding boxes sufficiently overlap and we can thus suppress the current bounding box. Common values for overlapThresh  are normally between 0.3 and 0.5.
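As a worked example of this ratio (the two boxes here are made up):

```python
# two made-up boxes in (startX, startY, endX, endY) form
boxA = (12, 84, 140, 212)   # the "current" box, i.e. index j
boxB = (24, 96, 152, 224)   # the box grabbed from the end of the idx list

# largest (x, y) start coordinates, smallest (x, y) end coordinates
xx1 = max(boxA[0], boxB[0])   # 24
yy1 = max(boxA[1], boxB[1])   # 96
xx2 = min(boxA[2], boxB[2])   # 140
yy2 = min(boxA[3], boxB[3])   # 212

# width/height of the shared region and area of the current box
w = max(0, xx2 - xx1 + 1)                                   # 117
h = max(0, yy2 - yy1 + 1)                                   # 117
areaA = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)   # 129 * 129

# ratio of the shared region to the current box's area
overlap = float(w * h) / areaA
print(round(overlap, 3))   # 0.823, well above a typical 0.3 threshold
```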

Line 64 then deletes the suppressed bounding boxes from the idx  list and we continue looping until the idx  list is empty.

Finally, we return the set of picked bounding boxes (the ones that were not suppressed) on Line 67.

Let’s go ahead and create a driver so we can execute this code and see it in action. Open up a new file, name it , and add the following code:
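The driver listing is likewise missing from this copy of the post. Below is a runnable sketch of its structure; the image path and bounding boxes are illustrative stand-ins, the NMS function is inlined rather than imported from the pyimagesearch package, and the cv2 loading/drawing calls described in the walkthrough are indicated by comments so the sketch runs without OpenCV or the image files:

```python
# sketch of the driver script; the image path and bounding boxes are
# illustrative stand-ins for the ones used in the post
import numpy as np

def non_max_suppression_slow(boxes, overlapThresh):
    # (same Felzenszwalb et al. method walked through above,
    # compressed for brevity)
    if len(boxes) == 0:
        return []
    pick = []
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = np.argsort(y2)
    while len(idxs) > 0:
        last = len(idxs) - 1
        i = idxs[last]
        pick.append(i)
        suppress = [last]
        for pos in range(0, last):
            j = idxs[pos]
            w = max(0, min(x2[i], x2[j]) - max(x1[i], x1[j]) + 1)
            h = max(0, min(y2[i], y2[j]) - max(y1[i], y1[j]) + 1)
            if float(w * h) / area[j] > overlapThresh:
                suppress.append(pos)
        idxs = np.delete(idxs, suppress)
    return boxes[pick]

# list of (image path, detected bounding boxes) 2-tuples
images = [
    ("images/audrey.jpg", np.array([
        (12, 84, 140, 212), (24, 84, 152, 212), (36, 84, 164, 212),
        (12, 96, 140, 224), (24, 96, 152, 224), (24, 108, 152, 236)]))]

# loop over the images and their bounding boxes
for (imagePath, boundingBoxes) in images:
    print("[x] %d initial bounding boxes" % (len(boundingBoxes)))
    # original: image = cv2.imread(imagePath), then draw each box with
    # cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)

    # apply non-maximum suppression to the bounding boxes
    pick = non_max_suppression_slow(boundingBoxes, 0.3)
    print("[x] after applying non-maximum, %d bounding boxes" % (len(pick)))
    # original: draw the picked boxes and display both images with
    # cv2.imshow / cv2.waitKey
```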

We start by importing our non_max_suppression_slow  function on Line 2. I put this function in the pyimagesearch  package for organizational purposes, but you can put the function wherever you see fit. From there, we import NumPy for numerical processing and cv2  for our OpenCV bindings on Lines 3-4.

Then, we define a list of images  on Line 8. This list consists of 2-tuples, where the first entry in the tuple is a path to an image and the second entry is the list of bounding boxes. These bounding boxes were obtained from my HOG + Linear SVM classifier detecting potential “faces” at varying locations and scales. Our goal is to take the set of bounding boxes for each image and apply non-maximum suppression.

We start by looping over the image path and bounding boxes on Line 27 and load the image on Line 30.

To visualize the results of non-maximum suppression in action, we first draw the original (non-suppressed) bounding boxes on Lines 34 and 35.

We then apply non-maximum suppression on Line 38 and draw the picked bounding boxes on Lines 42-43.

The resulting images are finally displayed on Lines 46-48.

Non-Maximum Suppression in Action

To see the Felzenszwalb et al. non-maximum suppression method in action, download the source code and accompanying images for this post from the bottom of this page, navigate to the source code directory, and issue the following command:

First, you’ll see the Audrey Hepburn image:


Figure 2: Our classifier initially detected six bounding boxes, but by applying non-maximum suppression, we are left with only one (the correct) bounding box.

Notice how six bounding boxes were detected, but by applying non-maximum suppression, we are able to prune this number down to one.

The same is true for the second image:


Figure 3: Initially detecting three bounding boxes, but by applying non-maximum suppression we can prune the number of overlapping bounding boxes down to one.

Here we have found three bounding boxes corresponding to the same face, but non-maximum suppression is able to reduce this number to one bounding box.

So far we have only examined images that contain one face. But what about images that contain multiple faces? Let’s take a look:


Figure 4: Non-maximum suppression correctly handles when there are multiple faces, suppressing the smaller overlapping bounding boxes, but retaining the boxes that do not overlap.

Even for images that contain multiple objects, non-maximum suppression is able to ignore the smaller overlapping bounding boxes and return only the larger ones. Non-maximum suppression returns two bounding boxes here because the bounding boxes for each face do not overlap at all. And even if they did, the overlap ratio would not exceed the supplied threshold of 0.3.


Summary

In this blog post I showed you how to apply the Felzenszwalb et al. method for non-maximum suppression.

When using the Histogram of Oriented Gradients descriptor and a Linear Support Vector Machine for object classification you almost always detect multiple bounding boxes surrounding the object you want to detect.

Instead of returning all of the found bounding boxes you should first apply non-maximum suppression to ignore bounding boxes that significantly overlap each other.

However, there are improvements to be made to the Felzenszwalb et al. method for non-maximum suppression.

In my next post I’ll implement the method suggested by my friend Dr. Tomasz Malisiewicz which is reported to be over 100x faster!

Be sure to download the code to this post using the form below! You’ll definitely want to have it handy when we examine Tomasz’s non-maximum suppression algorithm next week!




54 Responses to Non-Maximum Suppression for Object Detection in Python

  1. rajacheers November 18, 2014 at 11:19 am #


    Thanks for the great post.

    I noticed that if the first coordinate is bigger, it can’t suppress them.

    Examples boxes like
    [[348 98 234 234]
    [345 102 233 233]
    [338 100 238 238]],

    [[175 80 32 32]
    [276 93 30 30]
    [176 81 30 30]
    [276 93 30 30]
    [177 81 29 29]
    [277 93 29 29]].

    What can be improved to make this work?

    • Adrian Rosebrock November 18, 2014 at 12:01 pm #

      Please note my comment in the post:

      It is absolutely critical that we sort according to the bottom-right corner as we’ll need to compute the overlap ratio of other bounding boxes later in this function.

      The algorithm assumes that the coordinates are in the following order: (x-coordinate of the top-left, y-coordinate of the top-left, x-coordinate of the bottom-right, and y-coordinate of the bottom right). If for some reason you are worried about the order of the coordinates, simply perform a check and re-order as necessary. I wouldn’t call this an “improvement”, just an assumption of the non-maximum suppression algorithm that can be easily guaranteed by checking the points beforehand.
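      A minimal sketch of that kind of coordinate check (the helper name and the sample box are illustrative, assuming the values really are corner points that ended up out of order):

```python
import numpy as np

def normalize_boxes(boxes):
    # ensure each box is (startX, startY, endX, endY) with the
    # top-left coordinate <= the bottom-right coordinate, swapping
    # each pair where necessary
    boxes = boxes.copy()
    x = np.sort(boxes[:, [0, 2]], axis=1)   # sorted (x1, x2) pairs
    y = np.sort(boxes[:, [1, 3]], axis=1)   # sorted (y1, y2) pairs
    boxes[:, 0], boxes[:, 2] = x[:, 0], x[:, 1]
    boxes[:, 1], boxes[:, 3] = y[:, 0], y[:, 1]
    return boxes

# one of the problem boxes from the comment above: x1 > x2
print(normalize_boxes(np.array([[348, 98, 234, 234]])))
# -> [[234  98 348 234]]
```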

      • Morné January 26, 2015 at 12:37 am #

        I’m also suspicious about the rationale for “picking” the bottom box (irrespective of horizontal position), and not the biggest box. Would you mind clarifying?

        • Adrian Rosebrock January 26, 2015 at 6:44 am #

          You are picking the largest bounding box since they are sorted by area prior to the actual suppression algorithm being run.

          • Artem August 4, 2015 at 8:20 am #

            But you are using the variable ‘area’ only in line 55; no sorting by the area is performed. Clarify please.

          • Adrian Rosebrock August 4, 2015 at 1:16 pm #

            Sorting is actually performed on Line 22. The area is used to determine the overlap, and if the overlap exceeds the threshold, then the indexes can be removed. Line 22 can also be replaced to sort via some other value, such as the score or probability of the detection returned by the SVM.

          • Nik January 1, 2018 at 4:27 pm #

            So I guess in the code you are computing the areas using array slices but sorting only on bottom right indices. And that’s how we always choose the bottom right box of a particular object.
            area = (x2 – x1 + 1) * (y2 – y1 + 1)

            This is only a sort of the indices.
            idxs = np.argsort(y2)
            I get the largest score rationale and probably also the largest box logic. But I am unable to understand why the bottom right box. This is also the case in the faster NMS case.

          • Adrian Rosebrock January 3, 2018 at 1:10 pm #

            I have updated code that optionally uses the probabilities available here. Please take a look and see if that helps clear up your doubts.

          • sdfsfasf April 24, 2018 at 2:43 am #

            The updated codes:

            are much clearer. Thanks.

  2. Juan January 16, 2015 at 5:42 am #

    Hi Adrian,

    Are you going to publish a tutorial on Malisiewicz et al.’s Exemplar SVM? In the HoG tutorial you commented that you had already implemented their algorithm in python.

    • Adrian Rosebrock January 16, 2015 at 7:31 am #

      Hi Juan, you are correct. I am publishing a tutorial on the Malisiewicz et al. method for non-maximum suppression (not the entire Exemplar SVM framework). However, due to poor blog post planning on my part, I had to push the post back to accommodate a few other posts that needed to get out ASAP. However, I will definitely get the faster version of non-maximum suppression posted later in January or early February.

  3. Mikael March 17, 2015 at 9:33 am #

    Hi Adrian,

    Thanks for this post. There may be a difference between your implementation and the original one.
    In the matlab code, it looks like the initial sort is done on detection scores, not the y coordinate:
    The documentation is not very clear about what “boxes” is (“Detection bounding boxes (see pascal_test.m)”) but the method is meant to select “high-scoring detections”
    I guess your version ends up selecting the detection with the highest “y”.

    • Adrian Rosebrock March 17, 2015 at 11:29 am #

      Hi Mikael, thanks for pointing this out to me! In the future I’ll issue an update to this post that includes the scoring of each bounding box.

      • Ricardo February 14, 2017 at 4:30 pm #

        Hi Adrian is there an update containing the scores of the boxes?
        nice post by the way! regards from Mexico!!

        • Adrian Rosebrock February 15, 2017 at 9:05 am #

          Yes, you can find the update in the imutils library.

          • Zhang December 23, 2017 at 10:31 pm #

            Hi Adrian,
            I have two questions. First if we have already known the highest-scoring bounding box, why we still have to use NMS to suppress others? Why not directly choose the highest-scoring one? Secondly I have some trouble in understanding line 33 “pick.append(i)”, This piece is within the loop of “while”. Will “pick” have length of 1 at the end of this program in line 67 “return boxes[pick]”?(i.e, will the program return only one bounding box?)
            Thank you, best regards from China!

          • Adrian Rosebrock December 26, 2017 at 4:23 pm #

            Consider the event where there are two objects that are spatially close to each other. Our algorithm detects multiple bounding boxes around both. If we took the one with the largest probability we could potentially suppress bounding boxes that have some partial overlap as well.

  4. Alex August 12, 2015 at 11:05 am #

    Thanks for an awesome post as always!

    You mention in one of your comments above:
    “Line 22 can also be replaced to sort via some other value, such as the score or probability of the detection returned by the SVM.”

    That’s exactly what I’m looking for. I want to be able to keep the highest scoring boxes returned by my neural network (those with the higher prob of being in the positive class), and smartly merge all nearby bounding boxes with that highest scoring box. Should changing line 22 to sort by my neural network score do this? I tried that, and it’s not working for me. I’m wondering if you have suggestions of other things that might need changing.

    • Adrian Rosebrock August 13, 2015 at 7:10 am #

      Hey Alex — yes, you are correct. Change Line 22 to sort by the score of the bounding box returned by your neural network instead of the y-coordinate. Make sure that your bounding box scores correspond to the (x, y)-coordinates of the bounding boxes that you pass into the function. I would also suggest creating a separate scores parameter to the function.

      • Sam March 13, 2018 at 7:16 am #

        Great post!
        I have a question. Since the sorting criteria can be transferred to the score of classifier like svm, does it mean that if there is no score(just like your post), I can sort by left-bottom value?or other corner values?
        Thank you!

        • Adrian Rosebrock March 14, 2018 at 12:45 pm #

          If there is no probability/confidence score to sort on you can arbitrarily sort on a corner coordinate.

  5. Nico October 7, 2015 at 1:13 pm #

    Is there a specific reason to bias the area by +1 in each dimension on line 21?

    • Adrian Rosebrock October 7, 2015 at 3:57 pm #

      It simply avoids any type of “divide by zero” errors.

      • Nico October 8, 2015 at 11:35 am #

        Thanks, makes sense!

  6. Mayank May 19, 2016 at 1:34 pm #

    Hi adrian,
    How do I add features while doing hard negative mining. I have extracted HOG features for both positive and negative class and have trained my model. Now if i do sliding window on training set of negative class i will get features of a window. Now the problem is feature matrix of complete image is larger than that of a small window so how would i add these new features to my existing features list. Also since the images are same there will be redundant features too, so how to avoid redundant data here??

    • Adrian Rosebrock May 19, 2016 at 5:55 pm #

      I would save your hard-negative feature vectors to a separate file (such as CSV, HDF5, whatever file format you are using to store your current feature vectors from the original training data). Once you have your hard-negatives saved to disk, load both the hard-negatives and original data and stack them together. For this, I like using the np.hstack function. I cover this in more detail inside PyImageSearch Gurus.

  7. Attila September 27, 2016 at 9:20 am #

    Hi Adrian, thanks for the great tutorial!

    I’m using nms for person detection with OpenCV’s detectMultiScale function, which returns a foundWeights list containing a value for every detection prior to applying nms. Do you have any suggestions on how to use that information to calculate a certain confidence level for each detection after applying nms?

    • Adrian Rosebrock September 28, 2016 at 10:46 am #

      The weights returned by detectMultiScale are your actual confidence levels from the SVM. You would normally hardcode a confidence threshold for good vs. bad predictions. This is normally done on a dataset-to-dataset basis.

  8. Iancu March 15, 2017 at 2:26 pm #

    Hello. Regarding this topic, can you tell me if this exists in C++, and if the answer is yes, what is its name? I see that in Python there is a library which does all the work. Is there something like this in C++? Thank you.

  9. Brian March 19, 2017 at 7:35 pm #

    Hi Adrian,

    When running non-maximum suppression it seems that a single one of the original boxes in a cluster is chosen. Is there a good algorithm that, instead of using one of the input boxes, returns a box created from the other boxes? Preferably there would be a way to weight the boxes as you merged them using the classifier confidence. I see that you can sort them by their classification accuracy rather than by y2, but that would still result in only one of the original boxes being chosen.

    Also how does maximum suppression work with multiple scales? Like when you do a sliding window with an image pyramid the same object might be detected at different scales.

    • Adrian Rosebrock March 21, 2017 at 7:28 am #

      You apply non-maxima suppression after applying a sliding window across multiple scales. The returned bounding boxes are always in terms of the original image dimensions (this simply involves multiplying the bounding boxes by the ratio of the pyramid layer to the original image dimensions). The bounding boxes are collected from all sliding windows + scales and then NMS is applied.

      As for weighting the bounding boxes, you could do that, assuming your SVM returns a score to represent how “good” the prediction is (i.e., a probability). However, I’m not sure this would give you any better results since higher scores will ultimately dominate the weighted average. I doubt that you would see big improvement in Intersection over Union, the metric we use for measuring the quality of a detection.

      If you’re interested in learning more about object detection, be sure to take a look at the PyImageSearch Gurus course.

      • Nir December 8, 2018 at 3:44 am #


        A quick question regarding the different scales:

        if you find a box of size 50×50 for example and inside it another box of size 25×25 (using a 0.5 pyramid), NMS won’t remove it as the intersection is small. How can we deal with it?


  10. Walid Ahmed April 25, 2017 at 4:03 pm #

    I found it strange that in the example one of the bounding boxes is returned and not a new bounding box,
    is this normal?

    • Adrian Rosebrock April 25, 2017 at 9:04 pm #

      Correct, that is how non-maxima suppression works. It “suppresses” all the other bounding boxes (i.e., the non-maxima ones). It does not take an average over the others (or some other metric).

  11. Oliver June 25, 2018 at 7:09 am #

    Hello Adrian,

    thanks for your post, very helpful!
    I’d just like to change one detail and I can’t figure out how to do it properly.
    In case of overlap>overlapThresh, I’d like to combine the two overlapping boxes. This means that I want to delete both original boxes and replace them by a box that is as big as the two overlapping boxes. How can I change the code to get this done?

    • Adrian Rosebrock June 25, 2018 at 1:41 pm #

      You would want to compute the maximum and minimum (x, y)-coordinates to find the size of the new bounding box. The suppress list will have all indexes of the bounding boxes that should be removed. Loop over them and delete them as we normally would but add in a new bounding box with a new index as well.

  12. Matthew Hockley August 30, 2018 at 9:19 am #

    Thank you so much for your post, it was very useful learning the basics of non-maximum suppression.

    • Adrian Rosebrock August 30, 2018 at 9:45 am #

      Awesome, I’m glad it helped you!

  13. Roger December 7, 2018 at 4:00 am #

    Hy Adrian!! First of all, amazing tutorial !

    I have a question about NMS and real time video with haarcascades. Can I use this code to fix multiple detection box around my object in real time video?

    For my detection I need this line:

    Cars= car.detectMultiScale(…)

    Can i replace ur line 8 “images” for “cars”?
    Could it fix my problem with multiple boxes ?

    Thanks !!!

    • Adrian Rosebrock December 11, 2018 at 1:06 pm #

      Hey Roger, I’m not sure if the code will work as a drop-in for your exact application, but as long as your algorithm produces bounding box coordinates, then yes, you can apply NMS and it can potentially help suppress weak, overlapping bounding boxes.

  14. Borys January 7, 2019 at 1:11 pm #

    Hi Adrian, may I ask you, is cv2.dnn.NMSBoxes better than your solution or worse, and why? I’ve noticed that you used it in .

    • Adrian Rosebrock January 7, 2019 at 2:23 pm #

      The cv2.dnn.NMSBoxes function didn’t exist when I wrote this blog post. It also appeared to be buggy in previous releases of OpenCV (I believe it’s okay now though).

      • Borys January 8, 2019 at 6:29 am #

        Thank you for answer! )

  15. Borys January 7, 2019 at 1:12 pm #

    Sorry, link have been truncated. I’ve noticed it in

  16. Jean Pablo Mello January 9, 2019 at 2:08 pm #

    Hi, Adrian,

    Very nice post! One beginner question: after applying NMS, the number of bounding boxes over the same object will always be reduced to one as ideal, or is it possible for more than one to remain?

    • Adrian Rosebrock January 11, 2019 at 9:45 am #

      That is entirely dependent on the level of overlap between the bounding boxes and the overlap threshold. Typically it’s desirable for overlapping bounding boxes to be reduced to just one bounding box.

  17. Anh Minh February 1, 2019 at 11:57 am #

    Hi Adrian,

    why did you +1 on lines 50-51? and when computing the overlap ratio, should the denominator be area[j] + area[i] – float(w * h) as it is supposed to compute the iou ratio?


    • Adrian Rosebrock February 5, 2019 at 9:44 am #

      You “+1” to avoid any division by zero errors.

      • Anand March 6, 2019 at 12:45 pm #

        Hi Adrian,
        @Anh Minh means to ask why you have not considered the union area, i.e. area[i] + area[j] minus the intersection, when calculating IoU (intersection area / union area), so please explain the reasoning behind this.

  18. Vishal July 3, 2019 at 6:15 pm #

    Hi Adrian,

    Thank you for the post. How can I update this code to suppress objects based on class? For example: If I want to suppress boxes pertaining to Dog, Cats, People, Cars etc separately. So if cat and dog boxes are overlapping, I do not want one to be suppressed.


    • Adrian Rosebrock July 4, 2019 at 10:12 am #

      1. Perform object detection
      2. Loop over all unique sets of classes
      3. For each unique set, select all objects of that class
      4. Run NMS on only those objects
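      Those four steps can be sketched roughly as follows (the function name, labels, and boxes are made up; any NMS function with the (boxes, overlapThresh) signature, such as the one from this post, can be passed in):

```python
import numpy as np

def nms_per_class(boxes, labels, nms, overlapThresh=0.3):
    # run NMS separately per class so, e.g., an overlapping cat box
    # never suppresses a dog box; `nms` is any NMS function with the
    # (boxes, overlapThresh) signature
    picked_boxes, picked_labels = [], []
    # loop over every unique class label (step 2)
    for label in np.unique(labels):
        # select only the boxes belonging to this class (step 3)
        class_boxes = boxes[labels == label]
        # run NMS on just those boxes (step 4)
        for box in nms(class_boxes, overlapThresh):
            picked_boxes.append(box)
            picked_labels.append(label)
    return np.array(picked_boxes), np.array(picked_labels)

# demo with a trivial stand-in NMS that keeps the last box of each
# class; substitute a real NMS implementation in practice
boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [11, 11, 61, 61]])
labels = np.array(["cat", "dog", "cat"])
keep_last = lambda b, t: b[-1:]
pickedBoxes, pickedLabels = nms_per_class(boxes, labels, keep_last)
print(pickedLabels)   # one surviving box per class
```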

  19. quang nhat tran January 15, 2020 at 11:04 pm #

    if using python 3, we have to use range not xrange anymore right ?

    • Adrian Rosebrock January 16, 2020 at 10:26 am #

      Correct, you should use the “range” function for Python 3.


  1. (Faster) Non-Maximum Suppression in Python - PyImageSearch - February 16, 2015

    […] Last week I showed you how to implement the Felzenszwalb et al. method. And this week I am going to show you the Malisiewicz et al. method which is over 100x faster. […]
