Text skew correction with OpenCV and Python

Today’s tutorial is a Python implementation of my favorite blog post by Félix Abecassis on the process of text skew correction (i.e., “deskewing text”) using OpenCV and image processing functions.

Given an image containing a rotated block of text at an unknown angle, we need to correct the text skew by:

  1. Detecting the block of text in the image.
  2. Computing the angle of the rotated text.
  3. Rotating the image to correct for the skew.

We typically apply text skew correction algorithms in the field of automatic document analysis, but the process itself can be applied to other domains as well.

To learn more about text skew correction, just keep reading.


Text skew correction with OpenCV and Python

The remainder of this blog post will demonstrate how to deskew text using basic image processing operations with Python and OpenCV.

We’ll start by creating a simple dataset that we can use to evaluate our text skew corrector.

We’ll then write Python and OpenCV code to automatically detect and correct the text skew angle in our images.

Creating a simple dataset

Similar to Félix’s example, I have prepared a small dataset of four images that have been rotated by a given number of degrees:

Figure 1: Our four example images that we’ll be applying text skew correction to with OpenCV and Python.

The text block itself is from Chapter 11 of my book, Practical Python and OpenCV, where I’m discussing contours and how to utilize them for image processing and computer vision.

The filenames of the four files follow a simple convention:

The first part of the filename specifies whether our image has been rotated counter-clockwise (negative) or clockwise (positive).

The second component of the filename is the actual number of degrees the image has been rotated by.
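For instance (neg_4.png appears in the results section below; pos_41.png is inferred from the 41-degree clockwise example and this naming convention):

neg_4.png   ->  rotated 4 degrees counter-clockwise
pos_41.png  ->  rotated 41 degrees clockwise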

The goal of our text skew correction algorithm is to correctly determine the direction and angle of the rotation, then correct for it.

To see how our text skew correction algorithm is implemented with OpenCV and Python, be sure to read the next section.

Deskewing text with OpenCV and Python

To get started, open up a new file and name it correct_skew.py.

From there, insert the following code:
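The complete correct_skew.py script is available via the “Downloads” section; the line numbers referenced in this walkthrough refer to that script. The snippet below is a sketch of this first step, reconstructed from the walkthrough (the -i short flag and the help text are assumptions):

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())

# load the input image from disk
image = cv2.imread(args["image"])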

Lines 2-4 import our required Python packages. We’ll be using OpenCV via our cv2 bindings, so if you don’t already have OpenCV installed on your system, please refer to my list of OpenCV install tutorials to help you get your system set up and configured.

We then parse our command line arguments on Lines 7-10. We only need a single argument here, --image, which is the path to our input image.

The image is then loaded from disk on Line 13.

Our next step is to isolate the text in the image:
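The sketch below converts the image to grayscale, flips the foreground and background with cv2.bitwise_not, and then binarizes the result. Using Otsu’s method to choose the threshold value automatically is an assumption; any thresholding that leaves the text white on a black background will work:

# convert the image to grayscale and flip the foreground
# and background so the text becomes light on a dark background
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.bitwise_not(gray)

# threshold the image, setting all foreground (text) pixels to 255
# and all background pixels to 0 (Otsu's method is assumed here)
thresh = cv2.threshold(gray, 0, 255,
    cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]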

Our input images contain text that is dark on a light background; however, to apply our text skew correction process, we first need to invert the image so that the text is light on a dark background.

When applying computer vision and image processing operations, it’s common for the foreground to be represented as light while the background (the part of the image we are not interested in) is dark.

A thresholding operation (Lines 23 and 24) is then applied to binarize the image:

Figure 2: Applying a thresholding operation to binarize our image. Our text is now white on a black background.

Given this thresholded image, we can now compute the minimum rotated bounding box that contains the text regions:
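Here is a sketch of this step, consistent with the discussion below (the coords and angle variable names are taken from the walkthrough and the comments on this post):

# grab the coordinates of all foreground (text) pixels, then use
# them to compute the minimum rotated bounding box of the text
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]

# cv2.minAreaRect returns angles in the range [-90, 0); when the
# angle is less than -45 degrees, add 90 degrees and negate it
if angle < -45:
    angle = -(90 + angle)

# otherwise, simply negate the angle
else:
    angle = -angle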

Line 30 finds all (x, y)-coordinates in the thresh image that are part of the foreground.

We pass these coordinates into cv2.minAreaRect, which then computes the minimum rotated rectangle that contains the entire text region.

The cv2.minAreaRect function returns angle values in the range [-90, 0). As the rectangle is rotated clockwise, the angle value increases towards zero. When zero is reached, the angle wraps back to -90 degrees and the process repeats.

Note: For more information on cv2.minAreaRect, please see this excellent explanation by Adam Goodwin.

Lines 37 and 38 handle the case where the angle is less than -45 degrees, in which case we add 90 degrees to the angle and then negate it.

Otherwise, Lines 42 and 43 simply negate the angle.

Now that we have determined the text skew angle, we need to apply an affine transformation to correct for the skew:
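A sketch of the rotation step follows (the walkthrough specifies the center computation, cv2.getRotationMatrix2D, and an affine warp; the interpolation flag and border mode used here are assumptions):

# rotate the image about its center to deskew it
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h),
    flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)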

Lines 46 and 47 determine the center (x, y)-coordinate of the image. We pass the center coordinates and rotation angle into the cv2.getRotationMatrix2D function (Line 48). The resulting rotation matrix M is then used to perform the actual transformation on Lines 49 and 50.

Finally, we display the results to our screen:
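A sketch of this final step (the font, text position, colors, and window names are assumptions):

# draw the correction angle on the image so we can validate it
cv2.putText(rotated, "Angle: {:.2f} degrees".format(angle),
    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

# show the input and output images
print("[INFO] angle: {:.3f}".format(angle))
cv2.imshow("Input", image)
cv2.imshow("Rotated", rotated)
cv2.waitKey(0)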

Line 53 draws the angle on our image so we can verify that the output image matches the rotation angle (you would obviously want to remove this line in a document processing pipeline).

Lines 57-60 handle displaying the output image.

Skew correction results

To grab the code + example images used inside this blog post, be sure to use the “Downloads” section at the bottom of this post.

From there, execute the following command to correct the skew for our neg_4.png  image:
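For example (this assumes the example images live in an images/ directory inside the download):

$ python correct_skew.py --image images/neg_4.png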

Figure 3: Applying skew correction using OpenCV and Python.

Here we can see that the input image has a counter-clockwise skew of 4 degrees. Applying our skew correction with OpenCV detects this 4-degree skew and corrects for it.

Here is another example, this time with a counter-clockwise skew of 28 degrees:

Figure 4: Deskewing images using OpenCV and Python.

Again, our skew correction algorithm is able to correct the input image.

This time, let’s try a clockwise skew:

Figure 5: Correcting for skew in text regions with computer vision.

And finally a more extreme clockwise skew of 41 degrees:

Figure 6: Deskewing text with OpenCV.

Regardless of skew angle, our algorithm is able to correct for skew in images using OpenCV and Python.

Interested in learning more about computer vision and OpenCV?

If you’re interested in learning more about the fundamentals of computer vision and image processing, be sure to take a look at my book, Practical Python and OpenCV:

Inside the book you’ll learn the basics of computer vision and OpenCV, working your way up to more advanced topics such as face detection, object tracking in video, and handwriting recognition, all with lots of examples, code, and detailed walkthroughs.

If you’re interested in learning more (and how my book can teach you these algorithms in less than a single weekend), just click the button below:

Summary

In today’s blog post I provided a Python implementation of Félix Abecassis’ approach to skew correction.

The algorithm itself is quite straightforward, relying on only basic image processing techniques such as thresholding, computing the minimum area rotated rectangle, and then applying an affine transformation to correct the skew.

We would commonly use this type of text skew correction in an automatic document analysis pipeline where our goal is to digitize a set of documents, correct for text skew, and then apply OCR to convert the text in the image to machine-encoded text.

I hope you enjoyed today’s tutorial!

To be notified when future blog posts are published, be sure to enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


19 Responses to Text skew correction with OpenCV and Python

  1. Doug February 20, 2017 at 12:29 pm #

    I would be very interested in how to extend this technique for 3 dimensions. e.g.: The common case where someone has taken a picture of some text using a camera phone, but from an off angle.

    • Adrian Rosebrock February 20, 2017 at 1:10 pm #

      This thread on Twitter was just brought to my attention and would likely be helpful for you.

      • Doug February 21, 2017 at 9:05 am #

        That looks perfect. Thank you.

  2. Adam February 20, 2017 at 1:50 pm #

    This method works nicely for perfect scans (without noise) of justified text (or at least left- or right-aligned text with many lines). A better approach would be detecting the blank space between lines and finding the mean angle of lines fitting in this space. Is there a way to do this efficiently?

    • Adrian Rosebrock February 22, 2017 at 1:44 pm #

      For a more robust approach, take a look at the link in my reply to “Doug” above.

  3. Carlos February 20, 2017 at 3:38 pm #

    I implemented this technique in an application some time ago. It is simple and fast, but it is not very suitable when you have more complex layouts such as forms. The technique that worked more accurately was to rotate the image through several angles from negative to positive and count which angle produced more white pixels (on the binarized image) in every row.
    But this is too slow. Could you point me to an approach that is faster than this, please?

    • Adrian Rosebrock February 22, 2017 at 1:44 pm #

      If you are working with form images I think it would be best to match areas of the forms to a template form rather than applying skew correction.

  4. sumant February 21, 2017 at 10:02 am #

    I get the right bounding boxes after thresholding only if I swap the indexes for np.where on line 30. This would necessitate taking the inverse of the angle on line 48.

  5. Rohit March 21, 2017 at 3:49 am #

    Thanks Adrian for this informative post. It took me time to figure out how cv2.minAreaRect works. If the angle returned by cv2.minAreaRect is bordering -45 (let’s say -50 or -55), will this code not lead to the text being skewed in a perpendicular direction rather than a horizontal direction? Just asking out of curiosity.

    Thanks again

    • Adrian Rosebrock March 21, 2017 at 7:05 am #

      I’m not sure what you mean, but Lines 37-43 handle this case. I would test out the code on an example image.

  6. Filozof50 March 21, 2017 at 10:42 am #

    YOU ARE A GOD!!! Thanks a lot! You did an amazing job here. 🙂

  7. zara March 27, 2017 at 9:00 am #

    Hi Adrian.

    What does coords = np.column_stack(np.where(thresh > 0)) do? Can we replace it with coords = cv2.findNonZero(thresh)? And what about an image in portrait orientation?

    • Adrian Rosebrock March 28, 2017 at 1:00 pm #

      The np.where function returns the indexes of the thresh array that have a pixel value greater than zero. Calling np.column_stack turns them into (x, y)-coordinates.

  8. Mathew Orman April 4, 2017 at 8:12 am #

    This is not useful; it rotates the rectangle, not the text. If the image contains rotated and cropped text, this code returns angle = 0.0 and does not correct the skew…

    • zara April 20, 2017 at 4:01 am #

      Yeah, in my case it’s behaving like that too.

  9. OpenCV Learner June 7, 2017 at 9:42 am #

    Dear Adrian,

    I have used your codes and help so many times but never gave thanks.
    Currently I am working on a summer project (just for fun) and I needed something like this to process rotated musical sheets. Works perfectly!

    So many thanks to you and keep up the good work!

    • Adrian Rosebrock June 9, 2017 at 1:48 pm #

      Thank you for the comment, I’m happy to hear the tutorial helped you! 🙂 Best of luck with the rest of your summer projects. Projects like yours are the best way to learn. Keep it up!

  10. Gabriel June 10, 2017 at 10:12 pm #

    Hi Adrian,
    I’m using your code but getting the coords using cv2.findContours.
    When doing that, my object was being rotated in the wrong direction. To fix that I needed to change “angle = -(90 + angle)” to “angle = (90 + angle)”, i.e., I removed the minus sign.
    What is the difference between the way you got the coords and cv2.findContours?

    • Adrian Rosebrock June 13, 2017 at 11:09 am #

      The method I used takes all thresholded (x, y)-coordinates and uses them to estimate the rotation. The cv2.findContours function will only find the outline of the region, provided you are computing the bounding box.
