Image Difference with OpenCV and Python

In a previous PyImageSearch blog post, I detailed how to compare two images with Python using the Structural Similarity Index (SSIM).

Using this method, we were able to easily determine if two images were identical or had differences due to slight image manipulations, compression artifacts, or purposeful tampering.

Today we are going to extend the SSIM approach so that we can visualize the differences between images using OpenCV and Python. Specifically, we’ll be drawing bounding boxes around regions in the two input images that differ.

To learn more about computing and visualizing image differences with Python and OpenCV, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Image Difference with OpenCV and Python

In order to compute the difference between two images we’ll be utilizing the Structural Similarity Index, first introduced by Wang et al. in their 2004 paper, Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing.

The trick is to learn how we can determine exactly where, in terms of (x, y)-coordinate location, the image differences are.

To accomplish this, we’ll first need to make sure our system has Python, OpenCV, scikit-image, and imutils.

You can learn how to configure and install Python and OpenCV on your system using one of my OpenCV install tutorials.

If you don’t already have scikit-image installed (or it needs upgrading), upgrade via:
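The command itself was dropped from this copy; given the upgrade command shown later in the comments, it was presumably:

```shell
$ pip install --upgrade scikit-image
```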

While you’re at it, go ahead and install/upgrade imutils as well:
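Likely via the analogous pip command:

```shell
$ pip install --upgrade imutils
```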

Now that our system is ready with the prerequisites, let’s continue.

Computing image difference

Can you spot the difference between these two images?

Figure 1: Manually inspecting the difference between two input images (source).

If you take a second to study the two credit cards, you’ll notice that the MasterCard logo is present on the left image but has been Photoshopped out from the right image.

You may have noticed this difference immediately, or it may have taken you a few seconds. Either way, this demonstrates an important aspect of comparing image differences — sometimes image differences are subtle — so subtle that the naked eye struggles to immediately comprehend the difference (we’ll see an example of such an image later in this blog post).

So why is computing image differences so important?

One example is phishing. Attackers can manipulate images ever-so-slightly to trick unsuspecting users who don’t validate the URL into thinking they are logging into their banking website — only to later find out that it was a scam.

Comparing logos and known User Interface (UI) elements on a webpage to an existing dataset could help reduce phishing attacks (a big thanks to Chris Cleveland for passing along PhishZoo: Detecting Phishing Websites By Looking at Them as an example of applying computer vision to prevent phishing).

Developing a phishing detection system is obviously much more complicated than simple image differences, but we can still apply these techniques to determine if a given image has been manipulated.

Now, let’s compute the difference between two images, and view the differences side by side using OpenCV, scikit-image, and Python.

Open up a new file and insert the following code:

Lines 2-5 show our imports. We’ll be using compare_ssim (from scikit-image), argparse, imutils, and cv2 (OpenCV).

We establish two command line arguments, --first and --second, which are the paths to the two respective input images we wish to compare (Lines 8-13).

Next we’ll load each image from disk and convert them to grayscale:

We load our first and second images, --first and --second, on Lines 16 and 17, storing them as imageA and imageB, respectively.

Figure 2: Our two input images that we are going to apply image difference to.

Then we convert each to grayscale on Lines 20 and 21.

Figure 3: Converting the two input images to grayscale.

Next, let’s compute the Structural Similarity Index (SSIM) between our two grayscale images.

Using the compare_ssim function from scikit-image, we calculate a score and difference image, diff (Line 25).

The score represents the structural similarity index between the two input images. This value can fall into the range [-1, 1], with a value of one being a “perfect match”.

The diff image contains the actual image differences between the two input images that we wish to visualize. The difference image is currently represented as a floating-point data type in the range [0, 1], so we first convert the array to 8-bit unsigned integers in the range [0, 255] (Line 26) before we can further process it using OpenCV.

Now, let’s find the contours so that we can place rectangles around the regions identified as “different”:

On Lines 31 and 32 we threshold our diff image using both cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU — both of these settings are applied at the same time using the vertical bar ‘or’ symbol, |. For details on Otsu’s bimodal thresholding method, see the OpenCV documentation.

Subsequently, we find the contours of thresh on Lines 33-35. The ternary operator on Line 35 simply accommodates the difference between the cv2.findContours return signatures in OpenCV 2.4 and OpenCV 3.

The image in Figure 4 below clearly reveals the ROIs of the image that have been manipulated:

Figure 4: Using thresholding to highlight the image differences using OpenCV and Python.

Now that we have the contours stored in a list, let’s draw rectangles around the different regions on each image:

Beginning on Line 38, we loop over our contours, cnts. First, we compute the bounding box around the contour using the cv2.boundingRect function. We store the relevant (x, y)-coordinates as x and y as well as the width/height of the rectangle as w and h.

Then we use the values to draw a red rectangle on each image with cv2.rectangle (Lines 43 and 44).

Finally, we show the comparison images with boxes around differences, the difference image, and the thresholded image (Lines 47-50).

We make a call to cv2.waitKey on Line 50, which makes the program wait until a key is pressed (at which point the script will exit).

Figure 5: Visualizing image differences using Python and OpenCV.

Next, let’s run the script and visualize a few more image differences.

Visualizing image differences

Using this script and the following command, we can quickly and easily highlight differences between two images:
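The command itself was stripped from this copy; it presumably took this shape (the script and image filenames here are hypothetical):

```shell
$ python image_diff.py --first images/original_01.png --second images/modified_01.png
```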

As you can see in Figure 6, the security chip and name of the account holder have both been removed:

Figure 6: Comparing and visualizing image differences using computer vision (source).

Let’s try another example of computing image differences, this time of a check written by President Gerald R. Ford (source).

By running the command below and supplying the relevant images, we can see that the differences here are more subtle:
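Again the command was stripped; it would have followed the same pattern, with the check images substituted (filenames hypothetical):

```shell
$ python image_diff.py --first images/original_02.png --second images/modified_02.png
```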

Figure 7: Computing image differences and highlighting the regions that are different.

Notice the following changes in Figure 7:

  • Betty Ford’s name is removed.
  • The check number is removed.
  • The symbol next to the date is removed.
  • The last name is removed.

On a complex image like a check it is often difficult to find all the differences with the naked eye. Luckily for us, we can now easily compute the differences and visualize the results with this handy script made with Python, OpenCV, and scikit-image.


Summary

In today’s blog post, we learned how to compute image differences using OpenCV, Python, and scikit-image’s Structural Similarity Index (SSIM). Based on the image difference, we also learned how to mark and visualize the differing regions in two images.

To learn more about SSIM, be sure to refer to this post and the scikit-image documentation.

I hope you enjoyed today’s blog post!

And before you go, be sure to enter your email address in the form below to be notified when future PyImageSearch blog posts are published!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


59 Responses to Image Difference with OpenCV and Python

  1. Linus June 19, 2017 at 11:40 am #

    This is so cool (even though I have no specific use-case for now…) 😀
    Thanks for writing this post!

    • Adrian Rosebrock June 19, 2017 at 2:09 pm #

      Thanks Linus, I’m glad you enjoyed the post 🙂

  2. Mourad June 19, 2017 at 5:50 pm #

    Nice job! Thanks again Adrian, you are helping and inspiring me every day.

    Can you please make a tutorial on how to detect soccer players on a pitch? This technique is used to compute statistics of player performance using Computer Vision.

    • Adrian Rosebrock June 20, 2017 at 10:54 am #

      Hi Mourad — detecting (and tracking) soccer players on a pitch is a non-trivial problem. It’s not “easy” by any means as it involves both object detection and tracking (due to players being occluded). This is still an open-ended area of research.

  3. Luis Loja June 19, 2017 at 7:40 pm #

    Is it python 2 or 3?

    • Adrian Rosebrock June 20, 2017 at 10:51 am #

      This project is compatible with both Python 2.7 and Python 3.

  4. RITESH PATEL June 19, 2017 at 9:07 pm #

    thanks for this I will use this in my recent project fault finding on production batch

  5. John Cohen June 19, 2017 at 11:18 pm #

    this is very important for me—PI!

    • Adrian Rosebrock June 20, 2017 at 10:50 am #

      Glad to hear it John!

  6. Steven Barnes June 20, 2017 at 2:01 am #

    Nice post but what about changes of colour that are intensity neutral, e.g. if a colour image has been changed to a grey scale image the above approach will see no difference, likewise if, in a part of the image r & g values have been swapped.

    • Adrian Rosebrock June 20, 2017 at 10:50 am #

      Image difference algorithms do exactly that — detect image differences. If you convert an image to grayscale or swap RGB values you’ll by definition be changing the image. Perhaps I’m misunderstanding your question?

      • Steven Barnes June 22, 2017 at 11:42 am #

        Let us take your credit card example, if rather than deleting the logo you were to replace the red circle with an equal intensity of blue, then the yellow with an equal intensity of red and finally the blue with an equal intensity of yellow you would, of course, have changed the image __but__ the method shown in your code above would state that there was no difference between the two images, (as they are both reduced to grey scale before comparing.

        • Adrian Rosebrock June 27, 2017 at 6:48 am #

          Well, to start, when converting RGB images to grayscale, each of the channels are not weighted equally. Secondly, if you are concerned with per-channel differences, simply run the image difference algorithm on each channel of your input images.

  7. Simon June 20, 2017 at 2:20 am #

    Nice writeup would any of these work on two identical images but different scales? Trying to find a reliable method to compare two images that may be a different scale.

    Thanks for sharing this.

    • Adrian Rosebrock June 20, 2017 at 10:49 am #

      For images that have different scales, but the same aspect ratio, simply resize both image so that they are the same dimensions. From there, this method will work.

  8. pavan June 20, 2017 at 4:50 am #

    I’m tired of trying to install cv2 on my MacBook, please help out. I’m from India and I always follow your site and projects. Thanks for helping everyone.

    • Adrian Rosebrock June 20, 2017 at 10:45 am #

      Are you having trouble compiling and installing OpenCV? If so, I would suggest talking a look at the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. Both of these bundles include a pre-configured Ubuntu VirtualBox virtual machine that has OpenCV + Python pre-installed. Best of all, this VM will run on Linux, macOS, and Windows. It’s by far the fastest way to get up and running with OpenCV.

  9. Andreas June 20, 2017 at 4:53 am #

    I really like your walk-through of the code with examples. It makes it easy to follow and understand. Great work.

    • Adrian Rosebrock June 20, 2017 at 10:43 am #

      Thanks Andreas, I’m glad you found the tutorial helpful! 🙂

  10. Bill June 20, 2017 at 12:41 pm #

    I was having an issue differentiating between two very similar images (with my own eyes), and wanted to write a little program to do it for me. Thanks for explaining this!

  11. Bill June 20, 2017 at 7:47 pm #


    When trying to install scikit-image I ran into a memory error when pip was installing matplotlib. I think it’s because pip’s caching mechanism is “trying to read the entire file into memory before caching it… which poses a problem in a limited memory environment”.

    The way to avoid caching is the following:
    pip --no-cache-dir install scikit-image

    Hopefully this saves some people from scratching their heads.

    • Adrian Rosebrock June 22, 2017 at 7:26 am #

      Thanks for sharing Bill!

  12. Vijeta June 20, 2017 at 11:25 pm #

    Thanks for the post..very useful..

    • Adrian Rosebrock June 22, 2017 at 7:26 am #

      I’m glad you found it useful, Vijeta!

  13. Marc June 25, 2017 at 10:25 am #

    Hi Adrian,

    would you use this technique to identify if an object popped up in an image?
    Specifically, I have a wildcam that takes pictures as soon as movement is detected. Most of the time it is birds and squirrels. What I would like to do is check if a deer or a wild boar is in the image. This helps avoid flipping through 700 images just to find the few with the relevant animal. Of course, deer and boar may show up during day and night ;-).

    • Adrian Rosebrock June 27, 2017 at 6:32 am #

      For your particular project, I would treat this as a motion detection problem. The tutorial I linked to will help you build your wildlife camera surveillance system. From there, you’ll want to train a custom image classifier to recognize any animals you are interested in. Exactly which machine learning method you should use depends on your application, but I would recommend starting with PyImageSearch Gurus which has over 40+ lessons on image classification and object detection.

  14. Harrison June 26, 2017 at 10:26 pm #

    Hey Adrian, thanks so much for the tutorial! Are there other techniques to do this process for the same object but taken in two different photos. For example, if I wanted to compare a stop sign in two different photos, even if the photo is cropped the images will differ slightly by nature (due to lighting and other variables). Thanks!

    • Adrian Rosebrock June 27, 2017 at 6:09 am #

      To start, you would want to detect and extract the stop signs from the two input images. Exactly how you do this depends on your image processing pipeline. Once you have the stop signs there are a number of ways to compare them. For this method to work best, you would need to align the stop signs, which likely isn’t ideal. I would instead treat this is as an image similarity problem and apply image descriptors and feature extraction to quantify the stop signs.

      • Harrison June 27, 2017 at 2:34 pm #

        Thanks for the response! So would better techniques be things like zernike moments and color histogram comparisons?

        • Adrian Rosebrock June 30, 2017 at 8:28 am #

          Again, I think this depends on how you define “similarity”. What specifically are you trying to detect that is difference between road signs?

          My primary suggestion would be to take a look at the PyImageSearch Gurus course where I discuss object detection, feature extraction, and image similarity in detail.

          • Harrison July 3, 2017 at 10:31 am #

            I am trying to detect differences in color and shape. Essentially trying to determine if a street sign is misprinted by comparing it to a correctly printed one.

          • Adrian Rosebrock July 5, 2017 at 6:12 am #

            It’s hard to give an exact suggestion without seeing example images you are working with. Again, I would refer you to the PyImageSearch Gurus course I mentioned in the previous comment for more details on image similarity and comparison.

  15. y0c0 July 2, 2017 at 2:38 pm #

    I solved that, now I have another error:

    error: the following arguments are required: -f/--first, -s/--second

    usage: [-h] -f FIRST -s SECOND
    error: the following arguments are required: -f/--first, -s/--second

  16. Steven Leach July 2, 2017 at 8:28 pm #

    I was able to follow these instructions only by using sudo on my Linux Mint system.

  17. LianMing July 12, 2017 at 5:41 am #

    Nice work there, much appreciated! Is there a way to compare the images using the SSIM approach without converting them to grayscale? I noticed some slight color changes in my images when viewing them with the naked eye, but there was no difference after converting them to grayscale.

    • Adrian Rosebrock July 12, 2017 at 2:42 pm #

      If you wanted to compute SSIM for an RGB image you would simply separate the image into its respective Red, Green, and Blue components, compute SSIM for each channel, and average the values together.

  18. Andrew July 22, 2017 at 6:56 am #

    If 2 images differ in point of view, contrast, noise, etc., how can I know whether they are the same or not?

    • Adrian Rosebrock July 24, 2017 at 3:44 pm #

      Typically if you have objects that are captured at different viewing angles you would detect keypoints, extract local invariant descriptors, and then apply keypoint matching using RANSAC. If too few keypoints match, then the objects are not the same. If there is a reasonable percentage of overlap in the match, then the objects can be considered the same.

      I teach this in detail (with code) inside Practical Python and OpenCV where we learn how to recognize the covers of books.

  19. Prabhakar Srinivasan August 5, 2017 at 12:46 am #

    brilliant! thank you for sharing

    • Adrian Rosebrock August 10, 2017 at 9:10 am #

      Thanks Prabhakar 🙂

  20. anwar August 9, 2017 at 12:02 am #

    Hi Adrian, how to make output in terminal like boolean, 1 for no different and 0 for different?

    • Adrian Rosebrock August 10, 2017 at 8:50 am #

      For that I would also read this blog post as well. Basically, you need to supply a threshold on the SSIM or MSE value. If the threshold is small enough, there is no difference.

  21. Yitzhak September 6, 2017 at 7:31 am #

    thanks, great job !!

  22. Ambika September 11, 2017 at 6:32 am #

    Hi Adrian!

    Great Tutorial for image difference detection. I just wanted to know if instead of drawing rectangle around the difference region, can we mask that area out? So that the whole image is visible and the part which is different is white. How can we do it? Can you please help.

    • Adrian Rosebrock September 11, 2017 at 9:00 am #

      You could mask the area out as well. You could draw a (filled in) rectangle around the contour area. You could compute the bounding box and then draw the rectangle. You could use the cv2.bitwise_and function. There are a number of different ways to accomplish this task.

  23. Erika September 21, 2017 at 5:33 pm #

    It is great! Thank you.
    But I wonder about its value in practice, because we always have to obtain the original image for comparison. Usually that is not an easy job.

    • Adrian Rosebrock September 23, 2017 at 10:13 am #

      If you are trying to compare two images for similarity you must have an original image of some sort in your dataset — otherwise you would not have anything to compare against.

  24. manu September 23, 2017 at 4:20 am #

    hi Adrian, I found this error:
    pi@raspberrypi:~ $ python --first 3.jpg --second 4.jpg
    Traceback (most recent call last):
    File “”, line 23, in
    (score, diff) = structural_similarity (grayA, grayB,full= True)
    TypeError: structural_similarity() got an unexpected keyword argument ‘full’

    • Adrian Rosebrock September 23, 2017 at 9:59 am #

      Can you run pip freeze and confirm which version of scikit-image you are using?

  25. Vin September 23, 2017 at 7:11 pm #

    Hi I am improving my knowledge from your tutorials.
    I have a question: at a road intersection we can have a combination of different signs, e.g. a traffic light showing green together with a priority sign. How can I recognize this combination and display text like “vehicle has priority to move”? Likewise, if a priority sign and a straight-ahead sign appear together, how can I recognize the combination of the 2 signs, indicating that the vehicle has priority on this road?

    • Adrian Rosebrock September 24, 2017 at 8:45 am #

      I would need to see an example image of what you’re working with, but if I understand your question correctly you would need to train a multi-class object detector that can recognize each of the traffic signs + priority indicators.

  26. manu September 25, 2017 at 12:29 am #

    hi Adrian, I am having this version
    how can I solve this problem? Please help.
    Actually we want this for detecting errors on a PCB board. We are using an 8 MP camera which gives a low-clarity image. Can you suggest a better option for this?

    • Adrian Rosebrock September 26, 2017 at 8:30 am #

      That is a pretty old version of scikit-image so that’s likely the issue. Make sure you upgrade it:

      $ pip install --upgrade scikit-image

  27. pochao October 24, 2017 at 3:39 am #

    Thank you for your share

    Very useful for me

  28. pochao October 24, 2017 at 9:37 pm #

    It works on my Raspberry Pi.

    But this message appears; what does it mean?

    * (Original:10126): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files

    • Adrian Rosebrock October 25, 2017 at 12:28 pm #

      Hi Pochao — try this package to get rid of that warning: sudo apt-get install libcanberra-gtk*

  29. Preethi K November 3, 2017 at 12:41 am #

    Hi Adrian,

    Nice article about the comparison. Since I don’t want to compare the complete image, will it be possible to compare a part of the reference image with the current image and then decide on the correctness? Is there an option for this?

    • Adrian Rosebrock November 6, 2017 at 10:52 am #

      There are multiple ways to accomplish this, most are dependent on the exact images you are trying to compare. If you know the region you want to extract I would suggest using NumPy array slices to extract the ROI and then compare it. If the region location can vary in the images, use keypoint detection + local invariant descriptors + keypoint matching as I do in the “Book cover recognition” chapter of Practical Python and OpenCV.
