Ordering coordinates clockwise with Python and OpenCV

Today we are going to kick off a three-part series on calculating the size of objects in images, along with measuring the distances between them.

These tutorials have been some of the most heavily requested lessons on the PyImageSearch blog. I’m super excited to get them underway — and I’m sure you are too.

However, before we start learning how to measure the size (and not to mention, the distance between) objects in images, we first need to talk about something…

A little over a year ago, I wrote one of my favorite tutorials on the PyImageSearch blog: How to build a kick-ass mobile document scanner in just 5 minutes. Even though this tutorial is over a year old, it’s still one of the most popular blog posts on PyImageSearch.

Building our mobile document scanner was predicated on our ability to apply a 4 point cv2.getPerspectiveTransform with OpenCV, enabling us to obtain a top-down, bird’s-eye view of our document.

However, our perspective transform has a deadly flaw that makes it unsuitable for use in production environments.

You see, there are cases where the pre-processing step of arranging our four points in top-left, top-right, bottom-right, and bottom-left order can return incorrect results!

To learn more about this bug, and how to squash it, keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Ordering coordinates clockwise with Python and OpenCV

The goal of this blog post is two-fold:

  1. The primary purpose is to learn how to arrange the (x, y)-coordinates associated with a rotated bounding box in top-left, top-right, bottom-right, and bottom-left order. Organizing bounding box coordinates in such an order is a prerequisite to performing operations such as perspective transforms or matching corners of objects (such as when we compute the distance between objects).
  2. The secondary purpose is to address a subtle, hard-to-find bug in the order_points  method of the imutils package. By resolving this bug, our order_points  function will no longer be susceptible to these incorrect orderings.

All that said, let’s get this blog post started by reviewing the original, flawed method of ordering our bounding box coordinates in clockwise order.

The original (flawed) method

Before we can learn how to arrange a set of bounding box coordinates in (1) clockwise order and more specifically, (2) a top-left, top-right, bottom-right, and bottom-left order, we should first review the order_points  method detailed in the original 4 point getPerspectiveTransform blog post.

I have renamed the (flawed) order_points  method to order_points_old  so we can compare our original and updated methods. To get started, open up a new file and name it order_coordinates.py :
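The listing itself is not reproduced in this excerpt, but based on the narration that follows (the sum picks the top-left and bottom-right points, the difference picks the top-right and bottom-left), the flawed function likely looked close to this sketch. Note that the line numbers referenced in the text refer to the original listing, not this snippet:

```python
import numpy as np

def order_points_old(pts):
	# initialize a (4, 2) array to hold the ordered coordinates:
	# top-left, top-right, bottom-right, bottom-left
	rect = np.zeros((4, 2), dtype="float32")

	# the top-left point should have the smallest x + y sum,
	# the bottom-right the largest
	s = pts.sum(axis=1)
	rect[0] = pts[np.argmin(s)]
	rect[2] = pts[np.argmax(s)]

	# the top-right point should have the smallest y - x
	# difference, the bottom-left the largest
	diff = np.diff(pts, axis=1)
	rect[1] = pts[np.argmin(diff)]
	rect[3] = pts[np.argmax(diff)]

	# return the ordered coordinates
	return rect
```

On an axis-aligned square this works exactly as described; as we'll see shortly, ties in the sum or difference arrays are where it falls apart.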

Lines 2-8 handle importing our required Python packages for this example. We’ll be using the imutils  package later in this blog post, so if you don’t already have it installed, be sure to install it via pip :
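Assuming a standard pip setup, the install command is simply:

```shell
pip install imutils
```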

Otherwise, if you do have imutils  installed, you should upgrade to the latest version (which has the updated order_points  implementation):
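The upgrade likewise goes through pip:

```shell
pip install --upgrade imutils
```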

Line 10 defines our order_points_old  function. This method requires only a single argument, the set of points that we are going to arrange in top-left, top-right, bottom-right, and bottom-left order; although, as we’ll see, this method has some flaws.

We start on Line 15 by defining a NumPy array with shape (4, 2)  which will be used to store our set of four (x, y)-coordinates.

Given these pts , we add the x and y values together, followed by finding the smallest and largest sums (Lines 19-21). These values give us our top-left and bottom-right coordinates, respectively.

We then take the difference between the x and y values, where the top-right point will have the smallest difference and the bottom-left will have the largest difference (Lines 26-28).

Finally, Line 31 returns our ordered (x, y)-coordinates to our calling function.

So all that said, can you spot the flaw in our logic?

I’ll give you a hint:

What happens when the sum or difference of the two points is the same?

In short, tragedy.

If either the sum array s  or the difference array diff  contains duplicate values, we are at risk of choosing the incorrect index, which causes a cascading effect on our ordering.

Selecting the wrong index implies that we chose the incorrect point from our pts  list. And if we take the incorrect point from pts , then our clockwise top-left, top-right, bottom-right, bottom-left ordering will be destroyed.

So how can we address this problem and ensure that it doesn’t happen?

To handle this problem, we need to devise a better order_points  function using more sound mathematical principles. And that’s exactly what we’ll cover in the next section.

A better method to order coordinates clockwise with OpenCV and Python

Now that we have looked at a flawed version of our order_points  function, let’s review an updated, correct implementation.

The implementation of the order_points  function we are about to review can be found in the imutils package; specifically, in the perspective.py file. I’ve included the exact implementation in this blog post as a matter of completeness:
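The function body is omitted from this excerpt; the sketch below follows the description in the paragraphs that follow. One caveat: the imutils implementation computes the distances with scipy.spatial.distance.cdist, while np.linalg.norm is substituted here so the snippet depends only on NumPy. As before, the line numbers in the narration refer to the original listing:

```python
import numpy as np

def order_points(pts):
	# sort the points by their x-coordinate
	xSorted = pts[np.argsort(pts[:, 0]), :]

	# grab the two left-most and two right-most points
	leftMost = xSorted[:2, :]
	rightMost = xSorted[2:, :]

	# sort the left-most pair by y-coordinate to obtain the
	# top-left and bottom-left points
	leftMost = leftMost[np.argsort(leftMost[:, 1]), :]
	(tl, bl) = leftMost

	# using the top-left point as an anchor, the right-most point
	# with the largest Euclidean distance is the bottom-right;
	# the other one is the top-right
	D = np.linalg.norm(rightMost - tl, axis=1)
	(br, tr) = rightMost[np.argsort(D)[::-1], :]

	# return the coordinates in top-left, top-right,
	# bottom-right, bottom-left order
	return np.array([tl, tr, br, bl], dtype="float32")
```

Applied to the diamond-shaped Object #6 coordinates discussed later in this post, this version produces the correct ordering where the sum/difference heuristic fails.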

Again, we start off on Lines 2-4 by importing our required Python packages. We then define our order_points  function on Line 6 which requires only a single parameter — the list of pts  that we want to order.

Line 8 then sorts these pts  based on their x-values. Given the sorted xSorted  list, we apply array slicing to grab the two left-most points along with the two right-most points (Lines 12 and 13).

The leftMost  points will thus correspond to the top-left and bottom-left points while rightMost  will be our top-right and bottom-right points — the trick is to figure out which is which.

Luckily, this isn’t too challenging.

If we sort our leftMost  points according to their y-value, we can derive the top-left and bottom-left points, respectively (Lines 18 and 19).

Then, to determine the bottom-right and top-right points, we can apply a bit of geometry.

Using the top-left point as an anchor, we can apply the Pythagorean theorem and compute the Euclidean distance between the top-left and rightMost  points. In a right triangle, the hypotenuse is always the longest side.

Thus, by taking the top-left point as our anchor, the bottom-right point will have the largest Euclidean distance, allowing us to extract the bottom-right and top-right points (Lines 26 and 27).

Finally, Line 31 returns a NumPy array representing our ordered bounding box coordinates in top-left, top-right, bottom-right, and bottom-left order.

Testing our coordinate ordering implementations

Now that we have both the original and updated versions of order_points , let’s continue the implementation of our order_coordinates.py  script and give them both a try:

Lines 33-37 handle parsing our command line arguments. We only need a single argument, --new , which indicates whether the new or the original order_points  function should be used. We’ll default to using the original implementation.
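The argument parsing code isn't reproduced in this excerpt; a sketch consistent with the description might look like the following (the -n short flag and the -1 default are assumptions):

```python
import argparse

# construct the argument parser: --new selects the updated
# order_points implementation; the default (-1) keeps the original
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--new", type=int, default=-1,
	help="whether or not the new order points function should be used")

# in the actual script this would be ap.parse_args(), reading
# sys.argv; an explicit list is used here so the snippet runs anywhere
args = vars(ap.parse_args(["--new", "1"]))
```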

From there, we load example.png  from disk and perform a bit of pre-processing by converting the image to grayscale and smoothing it with a Gaussian filter.

We continue to process our image by applying the Canny edge detector, followed by a dilation + erosion to close any gaps between outlines in the edge map.

After performing the edge detection process, our image should look like this:

Figure 1: Computing the edge map of the input image.


As you can see, we have been able to determine the outlines/contours of the objects in the image.

Now that we have the outlines of the edge map, we can apply the cv2.findContours  function to actually extract the outlines of the objects:

We then sort the object contours from left-to-right, which isn’t a requirement, but makes it easier to view the output of our script.

The next step is to loop over each of the contours individually:

Line 61 starts looping over our contours. If a contour is not sufficiently large (due to “noise” in the edge detection process), we discard the contour region (Lines 63 and 64).

Otherwise, Lines 68-71 handle computing the rotated bounding box of the contour (taking care to use cv2.cv.BoxPoints  [if we are using OpenCV 2.4] or cv2.boxPoints  [if we are using OpenCV 3]) and drawing the contour on the image .

We’ll also print the original rotated bounding box  so we can compare the results after we order the coordinates.

We are now ready to order our bounding box coordinates in a clockwise arrangement:

Line 81 applies the original (i.e., flawed) order_points_old  function to arrange our bounding box coordinates in top-left, top-right, bottom-right, and bottom-left order.

If the --new 1  flag has been passed to our script, then we’ll apply our updated order_points  function (Lines 85 and 86).

Just like we printed the original bounding box to our console, we’ll also print the ordered points so we can ensure our function is working properly.

Finally, we can visualize our results:

We start looping over our (hopefully) ordered coordinates on Line 93 and draw them on our image .

According to the colors  list, the top-left point should be red, the top-right point purple, the bottom-right point blue, and finally, the bottom-left point teal.

Lastly, Lines 97-103 draw the object number on our image  and display the output result.

To execute our script using the original, flawed implementation, just issue the following command:
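The exact command isn't shown in this excerpt; based on the script name given earlier, it is presumably along these lines (run from the directory containing example.png):

```shell
python order_coordinates.py
```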

Figure 2: Arranging our rotated bounding box coordinates in top-left, top-right, bottom-right, and bottom-left order...but with a major flaw (take a look at Object #6).


As we can see, our output is as anticipated, with the points ordered clockwise in a top-left, top-right, bottom-right, and bottom-left arrangement — except for Object #6!

Note: Take a look at the output circles — notice how there isn’t a blue one?

Looking at our terminal output for Object #6, we can see why:

Figure 3: Take a look at the bounding box coordinates for Object #6. And then see what happens when we take their sums and differences.


Taking the sum of these coordinates we end up with:

  • 520 + 255 = 775
  • 491 + 226 = 717
  • 520 + 197 = 717
  • 549 + 226 = 775

While the difference gives us:

  • 520 - 255 = 265
  • 491 - 226 = 265
  • 520 - 197 = 323
  • 549 - 226 = 323

As you can see, we end up with duplicate values!

And since there are duplicate values, the argmin()  and argmax()  functions don’t work as we expect them to, giving us an incorrect set of “ordered” coordinates.
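To make the failure concrete, here is a small reproduction using Object #6's coordinates with the sum/difference logic from the flawed function. NumPy's argmin and argmax return the first occurrence on ties, so one corner gets picked twice:

```python
import numpy as np

# the four rotated bounding box points for Object #6
pts = np.array([[520, 255], [491, 226], [520, 197], [549, 226]])

s = pts.sum(axis=1)          # [775, 717, 717, 775] -- two ties
diff = np.diff(pts, axis=1)  # y - x for each point -- two ties again

rect = np.zeros((4, 2))
rect[0] = pts[np.argmin(s)]     # top-left
rect[2] = pts[np.argmax(s)]     # bottom-right
rect[1] = pts[np.argmin(diff)]  # top-right
rect[3] = pts[np.argmax(diff)]  # bottom-left

# (520, 255) is chosen as BOTH bottom-right and bottom-left,
# while (549, 226) is never selected at all
print(rect)
```

The bottom-right and bottom-left slots both receive (520, 255), which is why Figure 2 shows no blue (bottom-right) circle for Object #6: the teal bottom-left marker is drawn directly on top of it.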

To resolve this issue, we can use our updated order_points  function in the imutils package. We can verify that our updated function is working properly by issuing the following command:
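Again, the command itself isn't shown in this excerpt, but given the --new flag described earlier it is presumably:

```shell
python order_coordinates.py --new 1
```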

This time, all of our points are ordered correctly, including Object #6:

Figure 4: Correctly ordering coordinates clockwise with Python and OpenCV.


When utilizing perspective transforms (or any other project that requires ordered coordinates), make sure you use our updated implementation!


Summary

In this blog post, we started a three-part series on calculating the size of objects in images and measuring the distance between objects. To accomplish these goals, we’ll need to order the 4 points associated with the rotated bounding box of each object.

We’ve already implemented such a function in a previous blog post; however, as we discovered, this implementation has a fatal flaw — it can return the wrong coordinates under very specific situations.

To resolve this problem, we defined a new, updated order_points  function and placed it in the imutils package. This implementation ensures that our points are always ordered correctly.

Now that we can order our (x, y)-coordinates in a reliable manner, we can move on to measuring the size of objects in an image, which is exactly what I’ll be discussing in our next blog post.

Be sure to sign up for the PyImageSearch Newsletter by entering your email address in the form below — you won’t want to miss this series of posts!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


54 Responses to Ordering coordinates clockwise with Python and OpenCV

  1. David Hoffman March 21, 2016 at 1:55 pm #

    This is a great solution to the problem. I have run into this problem before and never was able to devise a reliable solution.

    • Adrian Rosebrock March 21, 2016 at 6:31 pm #

      I’m glad the blog post helped, David! 🙂

  2. Mahed March 21, 2016 at 2:33 pm #

    Oh, many thanks Adrian!! I knew you would help me out.
    I will try this and see if my results are better this time

    Many Thanks again Adrian !!

    • Adrian Rosebrock March 21, 2016 at 6:31 pm #

      No problem Mahed!

  3. Neville March 21, 2016 at 9:39 pm #

    Thanks for this post Adrian, but I’m a little confused about one aspect of the new algorithm.

    In the new order_points function, lines 18 & 19 sort the left most coordinates in order of the y-value, which gives us the top-left & bottom-left points. That makes perfect sense to me.

    My question is, why was it not just as simple for the right side points?
    Rather than the Euclidean distance calculation forming the basis of the sort in lines 26 & 27, why could the right most points not have also just been sorted by their y-value to determine the top-right & bottom-right points?

    I expect there is a scenario where that wont work (which is the reason for the more complicated solution), but I just cant think of what that scenario would be.

    • Adrian Rosebrock March 22, 2016 at 4:40 pm #

      Yes, you are correct. However, after the previous bug arose from what looked like an innocent heuristic approach, I decided to go with the Euclidean distance. That way I always knew the bottom-right corner of the rectangle would be chosen based on the properties of triangles (and therefore the correct corner), rather than running into another scenario where the order_points function broke (and being left again to figure out which heuristic broke it). Consider this more of a “preemptive strike” against any bugs that could arise.

      • Jacky March 27, 2018 at 6:45 am #

        I think the y-coordinate ordering method is more stable. The Euclidean distance method will get the wrong order when the object is a trapezoid. For example, if the points are (10,10), (30,10), (20,20), (10,20), the output is incorrect when using the Euclidean distance method, but correct when using the y-coordinate ordering method.

        ordered by Euclidean distance
        [[10 10]
        [20 20]
        [30 10]
        [10 20]]
        ordered by y-coordinates
        [[10 10]
        [30 10]
        [20 20]
        [10 20]]

        For the test code, please go to gist: https://gist.github.com/flashlib/e8261539915426866ae910d55a3f9959

  4. Arif March 23, 2016 at 10:35 am #

    Fantastic as always 🙂

    • Adrian Rosebrock March 24, 2016 at 5:18 pm #

      Thanks Arif! 🙂

  5. Mahed March 26, 2016 at 10:57 am #

    Hi Adrian, and everyone,

    I used the code above on a video feed on a RasPi in order to zoom in on an image containing
    white text embedded on a red square, mounted on a drone as part of a project.

    The algorithm works perfectly and the zoomed text appears perfectly upright
    even when the image is slightly skewed. However, when the square was turned
    completely to the left-hand side or right-hand side, the text appeared sideways too.

    The same thing happened when the text was facing the bottom.

    Unfortunately, my text recognition software (pytesseract) couldn’t read the text sideways or upside down.
    There are other recognition engines that can deal with this, but they are not free.

    Is there a way I could modify the code so that my embedded text is always upright?
    I gave it some thought but couldn’t get very far. I figured that in the case where
    the image is completely sideways, I could say that the distance between the top-left
    and top-right corners is less than it was before and hence rotate by 90°, but that
    won’t work for both situations (i.e., completely left-hand side and right-hand side),
    and I am absolutely clueless on how to solve the case when the text is facing the bottom.

    • Mahed March 26, 2016 at 11:00 am #

      Oh, crisis!! I just realized my first solution method won’t work either, because my
      target is a red square, not a rectangle… *facepalms*

  6. jack April 4, 2016 at 12:06 pm #

    is it possible to distinguish between real objects and floor lines ? I am trying to detect objects on a floor that has lines all over and I don’t know how to separate the real objects from rectangles/squares on the floor.

    • Adrian Rosebrock April 6, 2016 at 9:18 am #

      In most cases, yes, this should be possible. Using the Canny edge detector, you can determine lines that run most of the width/height of the image. Furthermore, these lines should also be equally spaced and in many cases intersecting. You can use this information to help prune out the floor lines that you are not interested in.

  7. leena April 5, 2016 at 12:21 am #

    Useful post as always.

  8. Chris September 13, 2016 at 9:09 am #

    If I am reading this right, another special case that may need to be accounted for is skewed four-sided objects where the order of the x values may not reliably give you lefts and rights. Take for example an object with the points {(0,0), (2,0), (3,4), (5,4)}. The top right x is smaller than the bottom left x and the sort by x’s will result in top left and top right being identified as top left and bottom left respectively.

  9. solarflare January 4, 2017 at 9:14 am #

    Hi Adrian,

    Question on the circular items in the picture. Why is it that the bounding rectangle is upright (that is, not angled) for the two quarters, but is angled for the nickel?

    • Adrian Rosebrock January 4, 2017 at 10:38 am #

      It’s simply due to how the left-most and right-most coordinates are sorted. You can also see results like these if there is noise due to shadowing, lighting spaces, etc.

      • solarflare January 6, 2017 at 10:32 am #

        It almost seemed that the bounding rectangle detected the rotation angle of the coin (relative to the head and neck of the President being upright). Just wanted to make sure this was coincidence and not a desired feature of the algorithm.

        • Adrian Rosebrock January 7, 2017 at 9:30 am #

          Yep, that’s a total coincidence.

  10. David Killen January 24, 2017 at 6:26 am #

    You write

    # now that we have the top-left coordinate, use it as an
    # anchor to calculate the Euclidean distance between the
    # top-left and right-most points; …
    # … the point with the largest distance will be
    # our bottom-right point

    This may be true for images of quadrilaterals but it’s not generally true. Imagine a square oriented with its edges horizontal and vertical, and now deform it by sliding the right-hand vertical edge straight upwards so that we get a series of parallelograms. At some point it consists of two equilateral triangles stuck together, and now the two right-hand points are equidistant from the top-left corner. From then on, the upper-right corner is further from the anchor than is the lower-right corner.

    I discovered this the hard way while trying to find the grid-lines on a go board. There were a lot of false lines from diagonals and they created some very skewed parallelograms.

    • David Killen January 24, 2017 at 10:07 am #

      I think I should have written ‘images of rectangles’ instead of ‘images of quadrilaterals’

    • David Killen January 24, 2017 at 10:13 am #


      I’m now using code that finds the top-left and bottom-left points by your method but then calculates the angle bl->tl->r for r in the rightMost points and assigns tr and br accordingly. It seems to work.

      • Adrian Rosebrock January 24, 2017 at 2:19 pm #

        Thank you for sharing your insights David, I appreciate it.

  11. Varsha April 27, 2017 at 6:01 am #

    This approach is not working for video frames: an object from the first frame appearing in the second frame is counted as a second object. Kindly help…

    • Adrian Rosebrock April 28, 2017 at 9:31 am #

      Hi Varsha — I’m not sure what you mean by “counting as second object”. Can you please elaborate?

  12. Diyanpure July 22, 2017 at 3:55 am #

    I need to solve my problem like this, but I need it in a C++ program. Can you give me a suggestion? Please, and thanks.

    • Adrian Rosebrock July 24, 2017 at 3:44 pm #

      Hello — I only provide Python on this blog. Best of luck with the project!

  13. Umesh August 26, 2017 at 9:39 am #

    Your article is very good, but
    I can’t understand what line number 86, perspective.order_points(pts), does
    in the perspective sense.
    Please guide me.
    Thanks for the article.

    • Adrian Rosebrock August 27, 2017 at 10:35 am #

      Hi Umesh — I’m not sure what your question is. What specifically are you trying to understand with the order_points function?

  14. Hai November 6, 2017 at 11:33 pm #

    Hi, Adrian!
    The blog post is very useful for me, many thanks for sharing!
    There is a little problem: I am processing a short video with this method. When there are objects in the video, it perfectly draws rectangles on the objects, but when it comes to a blank (simple black color) part of the video, it gives me an error like:
    in line: (cnts, _) = contours.sort_contours(cnts)
    not enough values to unpack (expected 2, got 0)

    The error seems to be because it cannot find any contours in the blank part of the video. How do I solve this? … I am a beginner in CV, thank you again!

    • Adrian Rosebrock November 9, 2017 at 7:10 am #

      I would need to see the full traceback of the error to determine the exact issue; however, I would suggest checking if len(cnts) == 0. If this is the case you can skip the frame since no contours can be found. Since you’re new to OpenCV and image processing I would definitely recommend working through Practical Python and OpenCV where I discuss the fundamentals of OpenCV in detail. By the time you work through the book you’ll be able to work through the majority of tutorials here on PyImageSearch with ease.

  15. Waqar Ahmed November 26, 2017 at 7:52 am #

    I got an assignment to measure the length and height of a rice grain. I am new to this language and can’t understand everything, but I am reading this site and it is really helpful. Can you tell me how to do that?

    • Adrian Rosebrock November 27, 2017 at 1:08 pm #

      Hey Waqar — take a look at this blog post where I share more information on measuring object sizes in images.

  16. nwpuxhld April 11, 2018 at 9:47 pm #

    I find it doesn’t work in some cases. The target is not a rectangle; it is a quadrilateral. For example, the four points are: [[ 96 263] [ 98 380] [100 382] [ 97 263]]. Do you know any solution for this case?

  17. wellington castro April 15, 2018 at 11:25 am #

    Adrian thanks for this solution, really! But I was left with a question: Why can’t you reorder the rightmost points in respect to y-axis to get TR and BR just as you did for TL and BL ?

    • Adrian Rosebrock April 16, 2018 at 2:26 pm #

      It creates a few edge cases, unfortunately. Refer to the comments section of the previous post.

  18. Artemii May 9, 2018 at 1:51 am #

    My appreciation for such a detailed explanation, a great introduction to the CV.

    The code works well when detecting the borders, but there is an issue with the counting of objects for some reason. Some of the numbers get skipped, like #3, #5, #6 and #7 instead of 1, 2, 3, 4.

    Any ideas why it could happen?

    • Adrian Rosebrock May 9, 2018 at 9:30 am #

      We’re not performing digit recognition in this tutorial so I’m not sure what you are referring to the digits being skipped. Could you please clarify?

  19. Francesco Vergentini June 7, 2018 at 1:05 pm #

    Thank you for your crystal clear solutions.
    I am using your new method for an application where my box is narrow and quite long. When this is rotated approximately 45° clockwise, it happens that the two left-most points are the two bottom ones: the new algorithm will then give the bottom-left point as the top-left one.

    For this special condition the old algorithm works better, but it fails in the cases you already mentioned.

    I can’t figure out how we could fix it in order to be always reliable.

    Any suggestion would be very much appreciated.

  20. Hola November 3, 2018 at 7:08 am #


    I have some objects on a wooden textured background. I took a photo of it and tried out this algorithm, but it breaks at finding the contours: the contours include the background texture as well, producing tiny boxes. I tried tweaking the contourArea threshold, but some of the objects are a similar size to some of the texture.

    How can I completely remove the textures? Any ideas?

    • Adrian Rosebrock November 6, 2018 at 1:32 pm #

      Hey Hola — do you have any example images of what you’re working with? That would likely help me provide suggestions.

  21. Shershon February 15, 2019 at 4:07 am #

    Hey Adrian! When I try to execute the program the following error occurs.

    ipykernel_launcher.py: error: the following arguments are required: -i/--image, -w/--width

    I am using jupyter notebook.

    • Adrian Rosebrock February 15, 2019 at 6:12 am #

      First make sure you read this tutorial on command line arguments so you can get up to speed on what they are and how they work.

      From there you have two options:

      1. Execute the script via command line instead of Jupyter notebook.
      2. Update the “argparse” code per the recommendations in the argparse tutorial I linked you to.

  22. rosangela March 31, 2019 at 9:04 am #

    Hi Adrian, how can I measure objects (height and width) in an image, knowing the distance from the picamera to the object?

    • Adrian Rosebrock April 2, 2019 at 5:59 am #

      Make sure you follow this tutorial. Once you have the triangle similarity you can compute the width and height.

  23. Fawzi Sdudah May 15, 2019 at 10:53 am #

    I’m not sure the Euclidean method, as used, is fool-proof.

    After the anchor point, let’s denote the two ordered, right-most points a and b. a will always be closer to the anchor than b, as they have been ordered (a_x is smaller than b_x).

    For a fixed value of y_a, there are two cases for y_b:

    1- When y_b is smaller, point a will be both below point b and the bottom right-most, even though it does not make the longest line from the anchor. Point b will make the longest line from the anchor.

    2- When y_b is larger, point b will be below a, at the bottom, and will make the longest line from the anchor.

    Fawzi Sdudah

  24. Shruti November 6, 2019 at 1:00 am #

    which IDE you are using ?

    • Adrian Rosebrock November 7, 2019 at 10:15 am #

      I prefer Sublime Text 2 or PyCharm for most projects.

  25. Mahmoud December 6, 2019 at 10:46 am #

    Dear Adrian,
    Thanks for your effort.
    This blog is working well with me.
    The problem is when I use my own photo: it draws coordinates in each cell of the photo, without the real object that I need.
    I do not know if this is due to the resolution of the image or something else.
    I need to know the coordinates of the object only, without the background.

    Thanks for your support

  26. Amine January 4, 2020 at 10:20 am #

    Hope you are fine. I need your help: I used your code, but it doesn’t generate ordered pixels. I mean, within the same piece of contour, the pixels are not ordered, due to the function findContours. Your function returns a rect, not ordered pixels, if I’m not wrong.

    Thank you very much


  1. Measuring size of objects in an image with OpenCV - PyImageSearch - March 28, 2016

    […] Last week, we learned an important technique: how to reliably order a set of rotated bounding box coordinates in a top-left, top-right, bottom-right, and bottom-left arrangement. […]

  2. Measuring distance between objects in an image with OpenCV - PyImageSearch - April 4, 2016

    […] weeks ago, we started this round of tutorials by learning how to (correctly) order coordinates in a clockwise manner using Python and OpenCV. Then, last week, we discussed how to measure the size of objects in an image using a reference […]

  3. Finding extreme points in contours with OpenCV - PyImageSearch - April 11, 2016

    […] few weeks ago, I demonstrated how to order the (x, y)-coordinates of a rotated bounding box in a clockwise fashion — an extremely useful skill that is critical in many computer vision applications, including […]
