Implementing RootSIFT in Python and OpenCV


Still using the original, plain ole’ implementation of SIFT by David Lowe?

Well, according to Arandjelovic and Zisserman in their 2012 paper, Three things everyone should know to improve object retrieval, you’re selling yourself (and your accuracy) short by using the original implementation.

Instead, you should be utilizing a simple extension to SIFT, called RootSIFT, that can dramatically improve object recognition, quantization, and retrieval accuracy.

Whether you’re matching descriptors of regions surrounding keypoints, clustering SIFT descriptors using k-means, or building a bag of visual words model, the RootSIFT extension can be used to improve your results.

Best of all, the RootSIFT extension sits on top of the original SIFT implementation and does not require changes to the original SIFT source code.

You do not have to recompile or modify your favorite SIFT implementation to utilize the benefits of RootSIFT.

So if you’re using SIFT regularly in your computer vision applications, but have yet to level-up to RootSIFT, read on.

This blog post will show you how to implement RootSIFT in Python and OpenCV — (1) without having to change a single line of code in the original OpenCV SIFT implementation and (2) without having to recompile the entire library.

Sound interesting? Check out the rest of this blog post to learn how to implement RootSIFT in Python and OpenCV.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
In order to run this example, you’ll need Python 2.7 and OpenCV 2.4.X.

Why RootSIFT?

It is well known that when comparing histograms the Euclidean distance often yields inferior performance than when using the chi-squared distance or the Hellinger kernel [Arandjelovic et al. 2012].

And if this is the case why do we often use the Euclidean distance to compare SIFT descriptors when matching keypoints? Or clustering SIFT descriptors to form a codebook? Or quantizing SIFT descriptors to form a bag of visual words?

Remember, while the original SIFT papers discuss comparing descriptors using the Euclidean distance, SIFT is still a histogram itself — and wouldn’t other distance metrics offer greater accuracy?

It turns out, the answer is yes. And instead of comparing SIFT descriptors using a different metric we can instead modify the 128-dim descriptor returned from SIFT directly.

You see, Arandjelovic et al. suggest a simple algebraic extension to the SIFT descriptor itself, called RootSIFT, that allows SIFT descriptors to be “compared” using a Hellinger kernel — but still utilizing the Euclidean distance.
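To make that concrete, here is a quick numerical sanity check (pure NumPy, illustrative only — the random vectors simply stand in for SIFT descriptors): for two L1-normalized histograms x and y, the squared Euclidean distance between their element-wise square roots equals 2 − 2H(x, y), where H(x, y) = Σ √(xᵢyᵢ) is the Hellinger kernel.

```python
import numpy as np

# two random L1-normalized "histograms" standing in for SIFT descriptors
rng = np.random.RandomState(0)
x = np.abs(rng.randn(128))
x /= x.sum()
y = np.abs(rng.randn(128))
y /= y.sum()

# Hellinger kernel between the original histograms
hellinger = np.sum(np.sqrt(x * y))

# squared Euclidean distance between the square-rooted histograms
sq_euclidean = np.sum((np.sqrt(x) - np.sqrt(y)) ** 2)

# ||sqrt(x) - sqrt(y)||^2 = sum(x) + sum(y) - 2*H(x, y) = 2 - 2*H(x, y)
print(np.isclose(sq_euclidean, 2 - 2 * hellinger))  # True
```

So ranking matches by Euclidean distance on square-rooted, L1-normalized descriptors is the same as ranking them by the Hellinger kernel on the originals.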

Here is the simple algorithm to extend SIFT to RootSIFT:

  • Step 1: Compute SIFT descriptors using your favorite SIFT library.
  • Step 2: L1-normalize each SIFT vector.
  • Step 3: Take the square root of each element in the SIFT vector. (The resulting vector is then automatically L2-normalized.)
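In code, assuming the SIFT descriptors are stacked as rows of a NumPy array (the output of Step 1), Steps 2 and 3 might look like this minimal sketch (the function name and the eps guard against division by zero are my own):

```python
import numpy as np

def root_sift(descs, eps=1e-7):
    # Step 2: L1-normalize each SIFT descriptor (each row of the array);
    # eps guards against division by zero for an all-zero descriptor
    descs = descs / (descs.sum(axis=1, keepdims=True) + eps)

    # Step 3: take the element-wise square root; since the squared elements
    # now sum to (approximately) 1, the result is already L2-normalized
    return np.sqrt(descs)
```

Each returned row has (approximately) unit L2 norm, so off-the-shelf Euclidean matchers can be used unchanged.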

That’s it!

It’s a simple extension. But this little modification can dramatically improve results. Whether you’re matching keypoints, clustering SIFT descriptors, or quantizing to form a bag of visual words, Arandjelovic et al. have shown that RootSIFT can easily be used in all the scenarios SIFT is, while improving results.

In the rest of this blog post, I’ll show you how to implement RootSIFT using Python and OpenCV. Using this implementation, you’ll be able to incorporate RootSIFT into your own applications — and improve your results!

Implementing RootSIFT in Python and OpenCV

Open up your favorite editor, create a new file, name it rootsift.py, and let’s get started:
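The listing that the walkthrough below refers to is along these lines — a reconstruction sketch, not the author's exact file, so the Line N references in the text are approximate here. The optional extractor argument and the lazy cv2 import are my own additions so the Hellinger step can be exercised even without OpenCV installed:

```python
# rootsift.py (sketch)
import numpy as np

class RootSIFT:
    def __init__(self, extractor=None):
        # initialize the OpenCV SIFT descriptor extractor; any object with a
        # compute(image, kps) method may be injected instead (e.g. for tests)
        if extractor is None:
            import cv2  # OpenCV 2.4.x API below; see the comments for 3.x+
            extractor = cv2.DescriptorExtractor_create("SIFT")
        self.extractor = extractor

    def compute(self, image, kps, eps=1e-7):
        # compute the plain SIFT descriptors for the supplied keypoints
        (kps, descs) = self.extractor.compute(image, kps)

        # if there are no keypoints or descriptors, return an empty tuple
        if len(kps) == 0:
            return ([], None)

        # apply the Hellinger kernel by first L1-normalizing each descriptor
        # (row), then taking the element-wise square root
        descs /= (descs.sum(axis=1, keepdims=True) + eps)
        descs = np.sqrt(descs)

        # return a tuple of the keypoints and the RootSIFT descriptors
        return (kps, descs)
```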

The first thing we’ll do is import our necessary packages. We’ll use NumPy for numerical processing and cv2  for our OpenCV bindings.

We then define our RootSIFT class on Line 5 and the constructor on Lines 6-8. The constructor simply initializes the OpenCV SIFT descriptor extractor.

The compute  function on Line 10 then handles the computation of the RootSIFT descriptor. This function requires two arguments and an optional third argument.

The first argument to the  compute  function is the image  that we want to extract RootSIFT descriptors from. The second argument is the list of keypoints, or local regions, from where the RootSIFT descriptors will be extracted. And finally, an epsilon variable, eps , is supplied to prevent any divide-by-zero errors.

From there, we extract the original SIFT descriptors on Line 12.

We make a check on Lines 15 and 16 — if there are no keypoints or descriptors, we simply return an empty tuple.

Converting the original SIFT descriptors to RootSIFT descriptors takes place on Lines 20-22.

We first L1-normalize each vector in the descs  array (Line 20).

From there, we take the square root of each element in the SIFT vector (Line 21).

Lastly, all we have to do is return the tuple of keypoints and RootSIFT descriptors to the calling function on Line 25.

Running RootSIFT

To actually see RootSIFT in action, open up a new file for a driver script, and we’ll explore how to extract SIFT and RootSIFT descriptors from images:

On Lines 1 and 2 we import our RootSIFT  descriptor along with our OpenCV bindings.

We then load our example image, convert it to grayscale, and detect Difference of Gaussian keypoints on Lines 7-12.

From there, we extract the original SIFT descriptors on Lines 15-17.

And we extract the RootSIFT descriptors on Lines 20-22.

To execute our script, simply issue the following command:

Your output should look like this:
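The terminal output is not preserved in this copy of the post; based on the counts discussed below, it should look something like:

```
SIFT: kps=1006, descriptors=(1006, 128)
RootSIFT: kps=1006, descriptors=(1006, 128)
```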


As you can see, we have extracted 1,006 DoG keypoints. And for each keypoint we have extracted 128-dim SIFT and RootSIFT descriptors.

From here, you can take this RootSIFT implementation and apply it to your own applications, including keypoint and descriptor matching, clustering descriptors to form centroids, and quantizing to create a bag of visual words model — all of which we will cover in future posts.


Summary

In this blog post, I showed you how to extend OpenCV’s implementation of David Lowe’s original SIFT to create the RootSIFT descriptor, a simple extension suggested by Arandjelovic and Zisserman in their 2012 paper, Three things everyone should know to improve object retrieval.

The RootSIFT extension does not require you to modify the source of your favorite SIFT implementation — it simply sits on top of the original implementation.

The simple 3-step process to compute RootSIFT is:

  • Step 1: Compute SIFT descriptors using your favorite SIFT library.
  • Step 2: L1-normalize each SIFT vector.
  • Step 3: Take the square root of each element in the SIFT vector. (The resulting vector is then automatically L2-normalized.)

No matter if you are using SIFT to match keypoints, form cluster centers using k-means, or quantize SIFT descriptors to form a bag of visual words, you should definitely consider utilizing RootSIFT rather than the original SIFT to improve your object retrieval accuracy.


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


70 Responses to Implementing RootSIFT in Python and OpenCV

  1. Johnny Johnson April 14, 2015 at 12:52 pm #

    Great one Adrian! A lot of new terms and functional phrases here too. Thanks again

    • Adrian Rosebrock April 14, 2015 at 1:38 pm #

      Glad to hear you enjoyed the article! 🙂

    • ham October 9, 2015 at 9:49 am #

      I m not able to implement in opencv 3.0. please help

  2. Ben April 15, 2015 at 10:15 am #

    Nice article! Thanks for sharing.

    • Adrian Rosebrock April 15, 2015 at 10:32 am #

      Thanks Ben, I’m glad you enjoyed it! 🙂

  3. Johnny Johnson April 17, 2015 at 9:52 am #

    No luck for me yet on this one. Segmentation dump going on. Probably has something to do with the opencv version. Any advice on what to check?

    • Adrian Rosebrock April 17, 2015 at 11:56 am #

      If it’s segfaulting then it’s probably an issue with the way OpenCV was installed and compiled. Check your OpenCV version and ensure you are running something of the 2.4.X flavor. Also try to remove one line at a time and re-run until you can pinpoint the line of code that is causing the segfault.

      • Johnny Johnson April 24, 2015 at 11:01 am #

        Alright, from the python interpreter I managed to dump out the version of both python and opencv which are: Python 2.7.6 (default, Mar 22 2014, 22:59:56) and
        >>> from cv2 import __version__; __version__

        I went ahead and requested your original scripts and its still giving me an error: segmentation fault (core dumped)

        trying to execute line: 12 kps = detector.detect(image)

        I am still digging just had some other priorities come up.

        Thanks again

        • Adrian Rosebrock April 24, 2015 at 11:45 am #

          Thanks for the added info Johnny! I have tried the code with OpenCV 2.4.9, 2.4.10, and 2.4.11 and I’m not getting a segfault. That’s definitely quite strange. Keep me posted and if you find anything!

          • Bikram Hanzra April 24, 2015 at 7:25 pm #

            Hi Adrian and Johnny,
            The segmentation fault is because OpenCV was built without using the nonfree module. SIFT is in the nonfree module and hence the segmentation fault.
            I faced this problem some weeks back when I had built OpenCV from source code downloaded from the OpenCV GitHub repository.
            Hope it helps!!

            Thank You

          • Adrian Rosebrock May 1, 2015 at 7:11 pm #

            Nice, thanks for the tip Bikram! 🙂 But to my understanding, only OpenCV 3 (which is in beta) does not compile the nonfree module by default. The previous versions of OpenCV 2.4.X still compiled all nonfree modules during installation time.

          • Johnny April 29, 2015 at 5:42 pm #

            Somehow I managed to get 2.4.11 build by following this:

            Now when I kick off python I get the output just as you demonstrated but no image pops up.

            NOTE: back on the build steps when I try to execute line 8 it gave me this:
            Package ‘ffmpeg’ has no installation candidate

            So I removed it and everything else went some what smooth.

            Any ideas?

          • Adrian Rosebrock May 1, 2015 at 6:57 pm #

            Wow, that’s really strange. I’ll admit that I’m stumped on that one.

  4. Christian April 17, 2015 at 12:21 pm #

    What is the point of converting the image to grayscale after reading it in the driver file?

    • Adrian Rosebrock April 17, 2015 at 12:31 pm #

      The color information (i.e. the Red, Green, and Blue channels individually) are not needed to detect keypoints. Furthermore, most keypoint detectors expect a grayscale image so we convert from RGB to grayscale and discard the color information.

      • Yong Yuan April 24, 2015 at 4:04 am #

Hi, Adrian, in the code, you apply the Hellinger kernel by first L1-normalizing, taking the square-root, and then L2-normalizing. I have tested by first L2-normalizing, taking the square-root, and then L1-normalizing, just as the paper says: 1) L1-normalize the SIFT vector (originally it has unit L2 norm); 2) square root each element. I find the order of L2-normalizing and L1-normalizing doesn’t effect the values of rootSIFT. But for understanding, I thought it’s better to follow the order as the paper says.

        • Adrian Rosebrock April 24, 2015 at 6:42 am #

          Hey Yong, take a look at Slide 10 of the presentation done by Arandjelovic. This slide details their implementation of RootSIFT where the first step is L1 normalization, the second is element-wise square root, and the final step is L2 normalization. It’s interesting that the values were not effected though.

          • Yong Yuan April 25, 2015 at 9:07 am #

            Yeah, it’s truth. The slide at 38th page shows the mAP they combine all the improvements reaches 92.9%, It’s really amazing. I only get 83.35% perfermance with 500k visual words.

  5. Yong Yuan April 23, 2015 at 3:58 am #

    It’s very interesting and useful, and I’m looking forward to reading your future posts. By the way, I find some of your posts have been translated to Chinese.

  6. Chris May 12, 2015 at 5:27 pm #

    Very nice article, but I had a few questions about your python code:

    1) When you do L1 normalization with axis=0, aren’t you normalizing all columns of the set of descriptors? I would think you would use ‘desc /= (desc.sum(axis=1, keepdims=True) + eps)’ if you wanted to normalize each SIFT descriptor…

    2) Are you sure you are supposed to divide the normalized and square-rooted descriptor vectors by the L2 norm? If you read the paper by Arandjelovic and Zisserman, they do not do this. I feel like you did this because you saw that as a step in their presentation, but I think that in that slide they were saying that the descriptor is L2 normalized as a result of L1 normalizing and taking the square root.

    Let me know if I’m off my rocker, and thanks for introducing me to this cool trick!

    • Adrian Rosebrock May 12, 2015 at 8:55 pm #

      Hey Chris, to answer your questions:

      1. Thanks for pointing this out! It looks like I have accidentally pushed a previous version of the RootSIFT code online from my git repo. This was certainly not my intention. Thanks a million for pointing this out. The code and blog post have been updated.

      2. The original SIFT descriptor is L2 normalized, so while the paper does not explicitly state that square-rooted descriptor should be L2 normalized, I think it’s applied. Perhaps I am wrong, but that’s how I interpreted it.

      • Chris May 18, 2015 at 2:01 pm #

        Thanks for clearing that up, Adrian!

        Here’s my take on the L2 normalization. When I just do the math on a SIFT descriptor, this is what happens.

        Step 1: L1 normalize SIFT vector
        Step 2: Take square root of each element.
        Step 3: Calculate L2 norm of transformed SIFT vector and divide each element by this value.

        Now what’s happening is that the L2 norm is always 1.0 (or near to it as 0.999999). So it seems that this step is just unnecessary because the vector is already L2 normalized.

        • Adrian Rosebrock May 18, 2015 at 2:33 pm #

          Good point. It does seem like this step is unnecessary. I am going to run some benchmarks related to image retrieval accuracy on my system and see if anything changes. Technically, it shouldn’t. But either way I’ll be posting an update on this article mentioning that the final L2 normalization is not necessary.

        • Adrian Rosebrock May 19, 2015 at 8:28 am #

          Hey Chris, I just wanted to let you know that I have updated the post to reflect your notes. Thanks again!

  7. Sawyer July 3, 2015 at 1:29 pm #

    Hey Adrian,

    What does the radius of each green circle tell us about that particular keypoint?

    Would u mind posting the code that generates the “detect” image?


    • Adrian Rosebrock July 3, 2015 at 1:33 pm #

      Hey Sawyer, sure thing. I have lots of plans to cover features and descriptors in future posts, so stay tuned!

  8. Sawyer July 7, 2015 at 1:23 pm #

    Hey Adrian,

    I was recently reading through a paper, “Food 101 – Mining Discriminative Components with Random Forests”.

    In section 5.1, Implementation Details, the author mentions transforming SURFs using signed square rooting, and then references the RootSIFT paper:
    “…two feature types are extracted: Dense SURFS, which are transformed using signed square-rooting.”

    Is this essentially a “RootSURF”, or am I oversimplifying it? And can the Hellinger Kernel be used [effectively] with other feature extractors, like AKAZE?

    • Adrian Rosebrock July 7, 2015 at 1:36 pm #

      Feature vectors generated from AKAZE and KAZE are binary feature vectors so they are compared using a Hamming distance. The chi-squared distance doesn’t make much sense here, unless you have constructed a bag-of-visual-words and are comparing the bag-of-visual-words histograms using the chi-squared distance.

      As for SURF, yes, that is essentially RootSURF.

      • Sawyer July 7, 2015 at 2:11 pm #

        Thanks for clarifying. pyimagesearch has the best customer service on the net

        • Adrian Rosebrock July 7, 2015 at 2:14 pm #

          PyImageSearch customer service = Adrian on his laptop 😉

      • Karido December 3, 2019 at 6:31 pm #

        Only AKAZE uses a binary descriptor and therefore the hamming distance. KAZE uses the euclidean distances like SIFT!

  9. Rushi November 6, 2015 at 2:25 am #

    Thanks Man ! Every paper writer should have writing skills like you !

    You just made it so simple.

    • Adrian Rosebrock November 6, 2015 at 6:19 am #

      Thanks for the kind words Rushi 😀

  10. Brian November 8, 2015 at 8:54 pm #

    Two questions:
    1) You convert the image to grayscale on line 8, but on line 12, 16, and 21 it appears you are using the original full color image. Or… am I missing something?

    2) Do you have a version of this code that plays nice with OpenCV 3.0.0? I’m getting a “module object has no attribute” error for the DescriptorExtractor.


    • Adrian Rosebrock November 9, 2015 at 6:29 am #

      Hey Brian — thanks for pointing that out. You can convert DoG keypoints in either color or gray images. I’ll update the code to make sure it’s using the grayscale image though.

      As for working with OpenCV 3.0 and SIFT, you should give this post a read.

      • Brian November 9, 2015 at 6:24 pm #

        Hi Adrian,
        I used your post on “Where did SIFT and SURF go” just a few days ago to get OpenCV 3.0.0 installed on my fresh install of Jessie (RPI 2). I admire your method as it parallels my own “How To” instructions; replete with expected times for each step! The install went great, but for some reason it’s not playing well with the code above.

        For example, on line 11 you have:
        detector = cv2.FeatureDetector_create(“SIFT”)
        … which results in the following error:
        ‘module’ object has no attribute FeatureDetector_create’
        …so I changed it to:
        detector = cv2.xfeatures2d.SIFT_create()

        On line 15 you have:
        extractor = cv2.DescriptorExtractor_create(“SIFT”)
        … which results in the same type of error (module has no attribute)

        I tried playing around with an “xfeatures2d” version of that line of code without any luck. Documentation from OpenCV is also not up-to-date regarding 3.0.0 as several sites have pointed out.

        Any ideas for a fix are greatly appreciated!

        • Adrian Rosebrock November 10, 2015 at 6:24 am #

          This post was written well before OpenCV 3 was released — it’s intended for OpenCV 2.4.X.

          However, you can easily update the code to run with OpenCV 3 by using something like this:

  11. Rijul Paul February 16, 2016 at 11:41 am #

    Hey Adrian, could you please help me out by giving me any urls from where I can get sift executable binary for Mac??? Thanks in advance 🙂

    • Adrian Rosebrock February 16, 2016 at 3:36 pm #

      Unfortunately it’s not that simple. You’ll need to compile OpenCV with the extra modules support enabled.

      • Rijul Paul February 19, 2016 at 2:33 am #

        You mean I need to compile the code to create sift executable binary on Mac environment???

        • Adrian Rosebrock February 19, 2016 at 6:47 am #

          Yes, that is correct.

          • Rijul Paul February 23, 2016 at 1:32 am #

            Thanks man. 🙂

  12. Bhaskar April 8, 2016 at 12:12 pm #

    Traceback (most recent call last):
    File “”, line 97, in
    print “RootSIFT: kps=%d, descriptors=%s ” % (len(kps), descs.shape)
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    Can you help me with this?

    • Adrian Rosebrock April 8, 2016 at 12:50 pm #

      Try investigating the len(kps) — the only reason descs would be None is that no keypoints were initially detected in the input image.

  13. Abder-Rahman May 23, 2016 at 4:29 pm #

    Thanks for the nice tutorial. I just want to ask:

    – What do you mean by “detect Difference of Gaussian keypoints in the image”? (line 10)

    – Where can I find the documentation for the detect( ) method? (line 12)


    • Adrian Rosebrock May 23, 2016 at 7:18 pm #

      The “Difference of Gaussian”, or more commonly DoG, is the default keypoint detector that SIFT utilizes. As for documentation for the detect method, see the OpenCV documentation.

  14. Shivang June 6, 2016 at 4:01 am #

    hey man! just wanted to ask how would I compute a matrix for all the descriptors if i am taking a large dataset of images,here you have taken one image in consideration.
    I could run a loop to read all the images but how should i store all the descriptors in a matrix?

    • Adrian Rosebrock June 7, 2016 at 3:27 pm #

      There are multiple ways to do this. The easiest would be to create a list, loop over your images, extract features from each image, append them to the list, and then convert the list to a NumPy array.

      Another option, if you know the number of images you are going to process ahead of time, is to allocate memory for the NumPy array before feature extraction. Either option will work.

  15. Arvind June 13, 2016 at 8:51 am #

    Hello Adrian

    i am getting the same result as you shown, but i am not getting any image

    please help me out

    • Adrian Rosebrock June 15, 2016 at 12:48 pm #

      How are you accessing your system? In a headless manner via SSH or VNC? Or are you “wired in” with a keyboard, mouse, and monitor?

  16. Anne October 24, 2016 at 9:59 am #

    Hey Adrian,
    Really useful article. Thank you! 🙂 But I have a conceptual based doubt regarding SIFT. Since I am a beginner, this question might come out to be a bit absurd.

    I understand that cv2.KeyPoints Class offers an attribute ‘pt’ for the x and y coordinates of each keypoint (DoG). But can I extract the pixel position after applying SIFT? I have searched long and hard but haven’t gotten an answer. I am really hoping I can find the answer here.

    Thank you in advance!

    • Adrian Rosebrock October 24, 2016 at 10:14 am #

      The DoG keypoint detector (confusingly called “SIFT” in OpenCV which is also the name of the local invariant descriptor) does indeed return a keypoint object with a .pt attribute. The SIFT descriptor takes this object and then describes the region surrounding it. I'm not sure what you mean by "extract the pixel position after applying SIFT" because the pixel position hasn't changed at all. Applying the SIFT descriptor does not change the (x, y)-coordinates of the keypoint.

      • Anne October 24, 2016 at 10:45 am #

        Thanks for the quick reply!

        I admit I might be even more confused with the concept than I thought.

        I am currently working on something where I am required to apply SLIC and then SIFT on an input image. I am the trying to calculate the number of keypoints for each of the superpixels I obtain after SLIC. Upon implementing SLIC, I get a 2D numpy array, let’s say ‘segments.’ Segments has the same dimensions as the image. So I was hoping upon extracting the coordinates of the keypoints after SIFT, I can apply a simple conditional statement in the iteration of ‘segments’ and use a “count” variable to calculate the total number of keypoints in that superpixel?

        What my question really means is : Are the coordinates returned using ‘pt’ the pixel positions of the keypoint (so as I can use them as stated above)? But I have noticed that the ‘pt’ attribute returns float-like (x,y) value. This is where my real confusion arises.

        Thanks once again!

        • Adrian Rosebrock November 1, 2016 at 9:55 am #

          I would suggest detecting keypoints on the image first. Then, apply SLIC and obtain your “segments”. Loop over each of these segments and then check to see if the (x, y)-coordinates of the keypoint .pt object falls inside the segment. This will allow you to assign each of the keypoints to a specific superpixel.

          At the end of the day, the coordinates returned by .pt are the (x, y)-coordinate pixel positions of the keypoint in the original image.

          • Anne November 5, 2016 at 9:39 am #

            Tried it and worked! Thanks a ton once again 🙂

  17. Naseer November 15, 2016 at 5:58 am #

    Please post detailed implementation of SIFT in python not just how to use library.

    • Adrian Rosebrock November 15, 2016 at 6:43 am #

      I have provided a detailed explanation of SIFT (along with many other keypoint detectors and local invariant descriptors) Inside the PyImageSearch Gurus course.

  18. Fredrik January 8, 2017 at 2:11 pm #


    Very nice exampel, I will try to impement the same in Java

  19. Walid February 8, 2017 at 10:43 am #

    Complex topic and yet simple to understand as always.
    My question is
    What is the significance of color and size of Keypoints?

    • Adrian Rosebrock February 10, 2017 at 2:11 pm #

      The “size” is the radius of the keypoint area. The “color” has no significance — it’s just used to display the actual keypoint on the screen.

  20. sheikha March 16, 2017 at 3:02 pm #

    While running this program i get an erroe ** ImportError: No module named rootsift**

    please help.. mine is opecv 2.4.9

    • Adrian Rosebrock March 17, 2017 at 9:26 am #

      Make sure you use the “Downloads” section of this blog post to download the source code + project structure for the tutorial. You likely do not have the project structure setup correctly, hence the import error.

  21. Adel September 19, 2017 at 9:32 pm #

    Hi Adrian .. How to compute sift descriptor for None key point?

    • Adrian Rosebrock September 20, 2017 at 6:59 am #

      You cannot. A KeyPoint object cannot be None.

  22. Kapil January 15, 2018 at 8:02 am #

    Hi Adrian,

    kps = detector.detect(gray)
    error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 127844356 bytes in function cv::OutOfMemoryError

    I get the above error, what should I do?

    • Adrian Rosebrock January 15, 2018 at 9:09 am #

      Based on the error message, it looks like your system is running out of memory during the keypoint detection. This is likely because your input image is too large (in terms of width and height). Resize your image to have a maximum size of 600 to 1000px along its maximum dimension and everything should work fine.

  23. ali January 30, 2018 at 7:20 am #

    how to object recognition (identification) for different objects?

  24. venkat October 8, 2019 at 11:40 pm #

    sir, i am using python 3.7.1 and opencv 4.1.1 but i cant use sift or surf in it. How can use sift and surf whether i have to use a older version or is there any other methods.

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer’s code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply