Rotate images (correctly) with OpenCV and Python


Let me tell you an embarrassing story of how I wasted three weeks of research time during graduate school six years ago.

It was the end of my second semester of coursework.

I had taken all of my exams early and all my projects for the semester had been submitted.

Since my school obligations were essentially nil, I started experimenting with (automatically) identifying prescription pills in images, something I know a thing or two about (but back then I was just getting started with my research).

At the time, my research goal was to find and identify methods to reliably quantify pills in a rotation invariant manner. Regardless of how the pill was rotated, I wanted the output feature vector to be (approximately) the same (the feature vectors will never be completely identical in a real-world application due to lighting conditions, camera sensors, floating point errors, etc.).

After the first week I was making fantastic progress.

I was able to extract features from my dataset of pills, index them, and then identify my test set of pills regardless of how they were oriented…

…however, there was a problem:

My method was only working with round, circular pills — I was getting completely nonsensical results for oblong pills.

How could that be?

I racked my brain for the explanation.

Was there a flaw in the logic of my feature extraction algorithm?

Was I not matching the features correctly?

Or was it something else entirely… like a problem with my image preprocessing?

While I might have been ashamed to admit this as a graduate student, the problem was the latter:

I goofed up.

It turns out that during the image preprocessing phase, I was rotating my images incorrectly.

Since round pills are approximately square in their aspect ratio, the rotation bug wasn’t a problem for them. Here you can see a round pill being rotated a full 360 degrees without an issue:

Figure 1: Rotating a circular pill doesn’t reveal any obvious problems.

But for oblong pills, they would be “cut off” in the rotation process, like this:

Figure 2: However, rotating oblong pills using OpenCV’s standard cv2.getRotationMatrix2D and cv2.warpAffine functions caused me some problems that weren’t immediately obvious.

In essence, I was only quantifying part of the rotated, oblong pills; hence my strange results.

I spent three weeks and part of my Christmas vacation banging my head against the wall trying to diagnose the bug — only to feel quite embarrassed when I realized it was due to me being negligent with OpenCV’s rotation functions, cv2.getRotationMatrix2D and cv2.warpAffine.

You see, the size of the output image needs to be adjusted; otherwise, the corners of my image would be cut off.

How did I accomplish this and squash the bug for good?

To learn how to rotate images with OpenCV such that the entire image is included and none of the image is cut off, just keep reading.


Rotate images (correctly) with OpenCV and Python

In the remainder of this blog post I’ll discuss common issues that you may run into when rotating images with OpenCV and Python.

Specifically, we’ll be examining the problem of what happens when the corners of an image are “cut off” during the rotation process.

To make sure we all understand this rotation issue with OpenCV and Python I will:

  • Start with a simple example demonstrating the rotation problem.
  • Provide a rotation function that ensures images are not cut off in the rotation process.
  • Discuss how I resolved my pill identification issue using this method.

A simple rotation problem with OpenCV

Let’s get this blog post started with an example script.

Open up a new file, name it rotate_simple.py , and insert the following code:

Lines 2-5 start by importing our required Python packages.

If you don’t already have imutils, my series of OpenCV convenience functions, installed, you’ll want to do that now:

If you already have imutils  installed, make sure you have upgraded to the latest version:

From there, Lines 8-10 parse our command line arguments. We only need a single switch here, --image , which is the path to where our image resides on disk.

Let’s move on to actually rotating our image:

Line 14 loads the image we want to rotate from disk.

We then loop over various angles in the range [0, 360] in 15 degree increments (Line 17).

For each of these angles we call imutils.rotate, which rotates our image the specified number of angle degrees about the center of the image. We then display the rotated image on our screen.

Lines 24-27 perform an identical process, but this time we call imutils.rotate_bound  (I’ll provide the implementation of this function in the next section).

As the name of this method suggests, we are going to ensure the entire image is bound inside the window and none is cut off.

To see this script in action, be sure to download the source code using the “Downloads” section of this blog post, followed by executing the command below:

The output of using the imutils.rotate  function on a non-square image can be seen below:

Figure 3: An example of corners being cut off when rotating an image using OpenCV and Python.

As you can see, the image is “cut off” when it’s rotated — the entire image is not kept in the field of view.

But if we use imutils.rotate_bound  we can resolve this issue:

Figure 4: We can ensure the entire image is kept in the field of view by modifying the matrix returned by cv2.getRotationMatrix2D.

Awesome, we fixed the problem!

So does this mean that we should always use .rotate_bound  over the .rotate  method?

What makes it so special?

And what’s going on under the hood?

I’ll answer these questions in the next section.

Implementing a rotation function that doesn’t cut off your images

Let me start off by saying there is nothing wrong with the cv2.getRotationMatrix2D  and cv2.warpAffine  functions that are used to rotate images inside OpenCV.

In reality, these functions give us more freedom than perhaps we are comfortable with (sort of like comparing manual memory management with C versus automatic garbage collection with Java).

The cv2.getRotationMatrix2D function doesn’t care if we would like the entire rotated image to be kept.

It doesn’t care if the image is cut off.

And it won’t help you if you shoot yourself in the foot when using this function (I found this out the hard way and it took 3 weeks to stop the bleeding).

Instead, what you need to do is understand what the rotation matrix is and how it’s constructed.

You see, when you rotate an image with OpenCV you call cv2.getRotationMatrix2D  which returns a matrix M that looks something like this:

Figure 5: The structure of the matrix M returned by cv2.getRotationMatrix2D.

This matrix looks scary, but I promise you: it’s not.

To understand it, let’s assume we want to rotate our image θ degrees about some center (c_x, c_y) coordinates at some scale (i.e., smaller or larger).

We can then plug in values for α and β:

α = scale · cos θ and β = scale · sin θ

That’s all fine and good for simple rotation — but it doesn’t take into account what happens if an image is cut off along the borders. How do we remedy this?

The answer is inside the rotate_bound  function in convenience.py of imutils:

On Line 41 we define our rotate_bound  function.

This method accepts an input image  and an angle  to rotate it by.

We assume we’ll be rotating our image about its center (x, y)-coordinates, so we determine these values on Lines 44 and 45.

Given these coordinates, we can call cv2.getRotationMatrix2D  to obtain our rotation matrix M (Line 50).

However, to adjust for any image border cut off issues, we need to apply some manual calculations of our own.

We start by grabbing the cosine and sine values from our rotation matrix M (Lines 51 and 52).

This enables us to compute the new width and height of the rotated image, ensuring no part of the image is cut off.

Once we know the new width and height, we can adjust for translation on Lines 59 and 60 by modifying our rotation matrix once again.

Finally, cv2.warpAffine  is called on Line 63 to rotate the actual image using OpenCV while ensuring none of the image is cut off.

For some other interesting solutions (some better than others) to the rotation cut off problem when using OpenCV, be sure to refer to this StackOverflow thread and this one too.

Fixing the rotated image “cut off” problem with OpenCV and Python

Let’s get back to my original problem of rotating oblong pills and how I used .rotate_bound  to solve the issue (although back then I had not created the imutils  Python package — it was simply a utility function in a helper file).

We’ll be using the following pill as our example image:

Figure 6: The example oblong pill we will be rotating with OpenCV.

To start, open up a new file and name it rotate_pills.py . Then, insert the following code:

Lines 2-5 import our required Python packages. Again, make sure you have installed and/or upgraded the imutils Python package before continuing.

We then parse our command line arguments on Lines 8-11. Just like in the example at the beginning of the blog post, we only need one switch: --image , the path to our input image.

Next, we load our pill image from disk and preprocess it by converting it to grayscale, blurring it, and detecting edges:

After executing these preprocessing functions our pill image now looks like this:

Figure 7: Detecting edges in the pill.

The outline of the pill is clearly visible, so let’s apply contour detection to find the outline of the pill:

We are now ready to extract the pill ROI from the image:

First, we ensure that at least one contour was found in the edge map (Line 26).

Provided we have at least one contour, we construct a mask  for the largest contour region on Lines 29 and 30.

Our mask  looks like this:

Figure 8: The mask representing the entire pill region in the image.

Given the contour region, we can compute the (x, y)-coordinates of the bounding box of the region (Line 34).

Using both the bounding box and mask , we can extract the actual pill region ROI (Lines 35-38).

Now, let’s go ahead and apply both the imutils.rotate and imutils.rotate_bound functions to imageROI, just like we did in the simple examples above:

After downloading the source code to this tutorial using the “Downloads” section below, you can execute the following command to see the output:

The output of imutils.rotate  will look like:

Figure 9: Incorrectly rotating an image with OpenCV causes parts of the image to be cut off.

Notice how the pill is cut off during the rotation process — we need to explicitly compute the new dimensions of the rotated image to ensure the borders are not cut off.

By using imutils.rotate_bound , we can ensure that no part of the image is cut off when using OpenCV:

Figure 10: By modifying OpenCV’s rotation matrix we can resolve the issue and ensure the entire image is visible.

Using this function I was finally able to finish my research for the winter break — but not before I felt quite embarrassed about my rookie mistake.

Summary

In today’s blog post I discussed how image borders can be cut off when rotating images with OpenCV and cv2.warpAffine .

The fact that image borders can be cut off is not a bug in OpenCV — in fact, it’s how cv2.getRotationMatrix2D  and cv2.warpAffine  are designed.

While it may seem frustrating and cumbersome to compute new image dimensions to ensure you don’t lose your borders, it’s actually a blessing in disguise.

OpenCV gives us so much control that we can modify our rotation matrix to make it do exactly what we want.

Of course, this requires us to know how our rotation matrix M is formed and what each of its components represents (discussed earlier in this tutorial). Provided we understand this, the math falls out naturally.

To learn more about image processing and computer vision, be sure to take a look at the PyImageSearch Gurus course where I discuss these topics in more detail.

Otherwise, I encourage you to enter your email address in the form below to be notified when future blog posts are published.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


9 Responses to Rotate images (correctly) with OpenCV and Python

  1. stylianos iordanis January 2, 2017 at 1:23 pm #

    amazing article as always.
    do you know any ml/ deep learning NN architectures that are rotation invariant inherently, without image preprocessing and thus creating multiple train exemplars leading to the same classification (as I suppose you do) ?
    I have googled the topic without success.
    thanks for everything

    • Adrian Rosebrock January 4, 2017 at 10:59 am #

This is still an active area of research. Focus your search on “steerable filters + convolutional neural networks” and you’ll come across some of the more recent publications. The problem here is that rotation invariance can vary based on what type of dataset you’re working with. For example, rotation invariance for natural scene images (broad classification and therefore easier) is much easier to obtain than, say, rotation invariance for fine-grained classification (such as pill identification).

  2. David January 2, 2017 at 2:52 pm #

    This is very useful. Thanks, Adrian, and Happy New Year!

    • Adrian Rosebrock January 4, 2017 at 10:53 am #

      Happy New Year to you as well David!

  3. Javier de la Rosa January 2, 2017 at 5:26 pm #

    Excellent! Any easy to way to return a mask for the portions of the rotated image that are not part of the image itself?

  4. solarflare January 2, 2017 at 5:43 pm #

    Hi Adrian,

    Thanks for posting this. I have a question for you. What if you were interested in the opposite scenario? That is, if you were doing object tracking and you wanted to calculate the rotation angle as the object is rotating. For example, you may take a reference image of an object, and then track the object realtime using the webcam while the object is rotating back and forth. I am interested in the angle of the object in the current frame relative to the angle of the object in the reference frame. For simplicity, let’s for now assume that the object only rotates along the axis of the camera, and does not change size. Could you point me to the right direction for this?

    Thanks in advance.

    • Adrian Rosebrock January 4, 2017 at 10:51 am #

      There are multiple ways to accomplish this, each of them based on the actual shape of the object you are working with. The best solution would be to determine the center of the object and a second identifiable point of some sort on the object that can be detected across many frames. Exactly which method you use is really dependent on the project. Once you have these points you can measure how much the object has rotated between frames.

      For irregular objects you could simply compute the mask + bounding box and then compute the minimum-enclosing rectangle which will also give you the angle of rotation.

      • solarflare January 6, 2017 at 8:31 am #

        Hi Adrian,

        Thanks for the reply. Let’s say that we are trying to create a more general algorithm under the following scenario: we would like to detect the rotation of different objects, but in all cases the object is circular and has a detectable pattern to it that’s not symmetric (therefore it would be possible to tell the angle). However, let’s say the pattern itself is not always the same. Consider for instance company logos that are circular. Usually the pattern here is an annulus and the detectable features are not necessarily the same from one logo to another, except for that the features are located in an annulus around the center. I was thinking about taking a reference on the annulus and then tracking the rotational angle. However, I’m not sure if there is a better approach, and how to make this approach computationally efficient. If SIFT or SURF algorithms are used, I fear they would not be efficient so I was hoping there would be a better method.

        • Adrian Rosebrock January 7, 2017 at 9:32 am #

          The standard approach here would be to use SIFT/SURF, keypoint matching, and RANSAC. You mentioned wanting to create a “general algorithm”, but in reality I don’t think this is advisable. Most successful computer vision applications focus on a specific problem and attempt to solve it. Take logo recognition for example — we’ve become better at logo recognition but it’s not solved. I would suggest you start with SIFT/SURF and see how far it gets you in your particular problem, but try to stay away from solving “general” problems.
