Image Difference with OpenCV and Python

In a previous PyImageSearch blog post, I detailed how to compare two images with Python using the Structural Similarity Index (SSIM).

Using this method, we were able to easily determine if two images were identical or had differences due to slight image manipulations, compression artifacts, or purposeful tampering.

Today we are going to extend the SSIM approach so that we can visualize the differences between images using OpenCV and Python. Specifically, we’ll be drawing bounding boxes around regions in the two input images that differ.

To learn more about computing and visualizing image differences with Python and OpenCV, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

In order to compute the difference between two images we’ll be utilizing the Structural Similarity Index, first introduced by Wang et al. in their 2004 paper, Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing.

The trick is to learn how we can determine exactly where, in terms of (x, y)-coordinate location, the image differences are.

To accomplish this, we’ll first need to make sure our system has Python, OpenCV, scikit-image, and imutils.

You can learn how to configure and install Python and OpenCV on your system using one of my OpenCV install tutorials.

If you don’t already have scikit-image installed, install or upgrade it via:
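```shell
$ pip install --upgrade scikit-image
```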

While you’re at it, go ahead and install/upgrade imutils as well:
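```shell
$ pip install --upgrade imutils
```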

Now that our system is ready with the prerequisites, let’s continue.

Computing image difference

Can you spot the difference between these two images?

Figure 1: Manually inspecting the difference between two input images (source).

If you take a second to study the two credit cards, you’ll notice that the MasterCard logo is present on the left image but has been Photoshopped out from the right image.

You may have noticed this difference immediately, or it may have taken you a few seconds. Either way, this demonstrates an important aspect of comparing image differences — sometimes image differences are subtle — so subtle that the naked eye struggles to immediately comprehend the difference (we’ll see an example of such an image later in this blog post).

So why is computing image differences so important?

One example is phishing. Attackers can manipulate images ever-so-slightly to trick unsuspecting users who don’t validate the URL into thinking they are logging into their banking website — only to later find out that it was a scam.

Comparing logos and known User Interface (UI) elements on a webpage to an existing dataset could help reduce phishing attacks (a big thanks to Chris Cleveland for passing along PhishZoo: Detecting Phishing Websites By Looking at Them as an example of applying computer vision to prevent phishing).

Developing a phishing detection system is obviously much more complicated than simple image differences, but we can still apply these techniques to determine if a given image has been manipulated.

Now, let’s compute the difference between two images, and view the differences side by side using OpenCV, scikit-image, and Python.

Open up a new file, name it image_diff.py, and insert the following code:

Lines 2-5 show our imports. We’ll be using compare_ssim (from scikit-image), argparse, imutils, and cv2 (OpenCV).

We establish two command line arguments, --first and --second, which are the paths to the two respective input images we wish to compare (Lines 8-13).

Next we’ll load each image from disk and convert them to grayscale:

We load our first and second images, --first and  --second , on Lines 16 and 17, storing them as imageA  and imageB , respectively.

Figure 2: Our two input images that we are going to apply image difference to.

Then we convert each to grayscale on Lines 20 and 21.

Figure 3: Converting the two input images to grayscale.

Next, let’s compute the Structural Similarity Index (SSIM) between our two grayscale images.
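A sketch of the SSIM computation, again using synthetic grayscale stand-ins so the snippet is self-contained (the try/except covers the rename of compare_ssim in scikit-image >= 0.18):

```python
import numpy as np
try:
    from skimage.metrics import structural_similarity as compare_ssim
except ImportError:
    from skimage.measure import compare_ssim

# stand-ins for the grayscale images from the previous step: a white
# image and a copy with a dark "manipulated" region
grayA = np.full((200, 320), 255, dtype=np.uint8)
grayB = grayA.copy()
grayB[60:120, 60:140] = 0

# compute the Structural Similarity Index (SSIM) between the two images,
# ensuring that the full difference image is returned
(score, diff) = compare_ssim(grayA, grayB, full=True)
# diff is a floating point image; scale it to 8-bit unsigned integers
# so OpenCV can process it
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
```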

Using the compare_ssim function from scikit-image, we calculate a score and a difference image, diff (Line 25).

The score represents the structural similarity index between the two input images. This value falls in the range [-1, 1], with a value of 1 indicating a “perfect match”.

The diff image contains the actual image differences between the two input images that we wish to visualize. The difference image is currently represented as a floating point data type in the range [0, 1], so we first convert the array to 8-bit unsigned integers in the range [0, 255] (Line 26) before we can further process it using OpenCV.

Now, let’s find the contours so that we can place rectangles around the regions identified as “different”:

On Lines 31 and 32 we threshold our diff image using both cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU — both of these settings are applied at the same time using the vertical bar ‘or’ symbol, |. For details on Otsu’s bimodal thresholding setting, see this OpenCV documentation.

Subsequently, we find the contours of thresh on Lines 33-35. The ternary operator on Line 35 simply accommodates the different cv2.findContours return signatures across OpenCV versions.

The image in Figure 4 below clearly reveals the ROIs of the image that have been manipulated:

Figure 4: Using thresholding to highlight the image differences using OpenCV and Python.

Now that we have the contours stored in a list, let’s draw rectangles around the different regions on each image:

Beginning on Line 38, we loop over our contours, cnts. First, we compute the bounding box around the contour using the cv2.boundingRect function. We store the relevant (x, y)-coordinates as x and y along with the width and height of the rectangle as w and h.

Then we use the values to draw a red rectangle on each image with cv2.rectangle (Lines 43 and 44).

Finally, we show the comparison images with boxes around differences, the difference image, and the thresholded image (Lines 47-50).

We make a call to cv2.waitKey on Line 50, which makes the program wait until a key is pressed (at which point the script will exit).

Figure 5: Visualizing image differences using Python and OpenCV.

Next, let’s run the script and visualize a few more image differences.

Visualizing image differences

Using this script and the following command, we can quickly and easily highlight differences between two images:
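```shell
$ python image_diff.py --first images/original_02.png --second images/modified_02.png
```

(The filenames above are placeholders; substitute the paths to the credit card image pair from the downloads.)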

As you can see in Figure 6, the security chip and name of the account holder have both been removed:

Figure 6: Comparing and visualizing image differences using computer vision (source).

Let’s try another example of computing image differences, this time of a check written by President Gerald R. Ford (source).

By running the command below and supplying the relevant images, we can see that the differences here are more subtle:
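```shell
$ python image_diff.py --first images/original_03.png --second images/modified_03.png
```

(Again, the filenames are placeholders; point --first and --second at the original and modified check images.)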

Figure 7: Computing image differences and highlighting the regions that are different.

Notice the following changes in Figure 7:

  • Betty Ford’s name is removed.
  • The check number is removed.
  • The symbol next to the date is removed.
  • The last name is removed.

On a complex image like a check it is often difficult to find all the differences with the naked eye. Luckily for us, we can now easily compute the differences and visualize the results with this handy script made with Python, OpenCV, and scikit-image.

Summary

In today’s blog post, we learned how to compute image differences using OpenCV, Python, and scikit-image’s Structural Similarity Index (SSIM). Based on the image difference we also learned how to mark and visualize the different regions in two images.

To learn more about SSIM, be sure to refer to this post and the scikit-image documentation.

I hope you enjoyed today’s blog post!

And before you go, be sure to enter your email address in the form below to be notified when future PyImageSearch blog posts are published!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!

146 Responses to Image Difference with OpenCV and Python

  1. Linus June 19, 2017 at 11:40 am #

    This is so cool (even though I have no specific use-case for now…) 😀
    Thanks for writing this post!

    • Adrian Rosebrock June 19, 2017 at 2:09 pm #

      Thanks Linus, I’m glad you enjoyed the post 🙂

      • kushi March 7, 2018 at 6:01 am #

        can we get two persons image comparision source code..and result should be in percentage format….if two images are same it should be show 100%,,, in case images are different then 0%

        • Adrian Rosebrock March 7, 2018 at 9:07 am #

          What parts of the person are you trying to compare? Are you doing facial recognition?

          • giri March 11, 2018 at 7:03 pm #

            im try facial recognition and live traacking system plz help me

          • Adrian Rosebrock March 14, 2018 at 1:13 pm #

            If you’re interested in facial recognition definitely take a look at the PyImageSearch Gurus course where I cover facial recognition in detail.

  2. Mourad June 19, 2017 at 5:50 pm #

    Nice job! Thanks again Adrian, you are helping and inspiring me every day.

    Can you please make a tutorial on how to detect soccer players on a pitch? This technique is used to compute statistics of player performances using Computer Vision.

    • Adrian Rosebrock June 20, 2017 at 10:54 am #

      Hi Mourad — detecting (and tracking) soccer players on a pitch is a non-trivial problem. It’s not “easy” by any means as it involves both object detection and tracking (due to players being occluded). This is still an open-ended area of research.

  3. Luis Loja June 19, 2017 at 7:40 pm #

    Is it python 2 or 3?

    • Adrian Rosebrock June 20, 2017 at 10:51 am #

      This project is compatible with both Python 2.7 and Python 3.

  4. RITESH PATEL June 19, 2017 at 9:07 pm #

    thanks for this I will use this in my recent project fault finding on production batch

  5. John Cohen June 19, 2017 at 11:18 pm #

    this is very important for me—PI!

    • Adrian Rosebrock June 20, 2017 at 10:50 am #

      Glad to hear it John!

  6. Steven Barnes June 20, 2017 at 2:01 am #

    Nice post but what about changes of colour that are intensity neutral, e.g. if a colour image has been changed to a grey scale image the above approach will see no difference, likewise if, in a part of the image r & g values have been swapped.

    • Adrian Rosebrock June 20, 2017 at 10:50 am #

      Image difference algorithms do exactly that — detect image differences. If you convert an image to grayscale or swap RGB values you’ll by definition be changing the image. Perhaps I’m misunderstanding your question?

      • Steven Barnes June 22, 2017 at 11:42 am #

        Let us take your credit card example: if rather than deleting the logo you were to replace the red circle with an equal intensity of blue, then the yellow with an equal intensity of red, and finally the blue with an equal intensity of yellow, you would, of course, have changed the image __but__ the method shown in your code above would state that there was no difference between the two images (as they are both reduced to grey scale before comparing).

        • Adrian Rosebrock June 27, 2017 at 6:48 am #

          Well, to start, when converting RGB images to grayscale, each of the channels are not weighted equally. Secondly, if you are concerned with per-channel differences, simply run the image difference algorithm on each channel of your input images.

  7. Simon June 20, 2017 at 2:20 am #

    Nice writeup would any of these work on two identical images but different scales? Trying to find a reliable method to compare two images that may be a different scale.

    Thanks for sharing this.

    • Adrian Rosebrock June 20, 2017 at 10:49 am #

      For images that have different scales, but the same aspect ratio, simply resize both image so that they are the same dimensions. From there, this method will work.

  8. pavan June 20, 2017 at 4:50 am #

    i tired of install import cv2 in my mac book plz help out. im from india,im always follow ur web, projects . thanks for helping for all.

    • Adrian Rosebrock June 20, 2017 at 10:45 am #

      Are you having trouble compiling and installing OpenCV? If so, I would suggest talking a look at the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. Both of these bundles include a pre-configured Ubuntu VirtualBox virtual machine that has OpenCV + Python pre-installed. Best of all, this VM will run on Linux, macOS, and Windows. It’s by far the fastest way to get up and running with OpenCV.

  9. Andreas June 20, 2017 at 4:53 am #

    I really like your walk-through of the code with examples. It makes it is easy to follow and understand. Great work.

    • Adrian Rosebrock June 20, 2017 at 10:43 am #

      Thanks Andreas, I’m glad you found the tutorial helpful! 🙂

  10. Bill June 20, 2017 at 12:41 pm #

    I was having an issue differentiating between two very similar images (with my own eyes), and wanted to write a little program to do it for me. Thanks for explaining this!

  11. Bill June 20, 2017 at 7:47 pm #

    Adrian,

    When trying to install scikit-image I ran into a memory error when pip was installing matplotlib. I think it’s because pip’s caching mechanism is “trying to read the entire file into memory before caching it…which poses a problem in a limited memory environment” (https://stackoverflow.com/questions/29466663/memory-error-while-using-pip-install-matplotlib).

    The way to avoid caching is the following:
    pip --no-cache-dir install scikit-image

    Hopefully this saves some people from scratching their heads.

    • Adrian Rosebrock June 22, 2017 at 7:26 am #

      Thanks for sharing Bill!

  12. Vijeta June 20, 2017 at 11:25 pm #

    Thanks for the post..very useful..

    • Adrian Rosebrock June 22, 2017 at 7:26 am #

      I’m glad you found it useful, Vijeta!

  13. Marc June 25, 2017 at 10:25 am #

    Hi Adrian,

    would you use this technique to identify if an object popped up in an image?
    Specifically, I have a wildcam that takes pictures a soon as movement is detected. Most of the time it is birds and squirrels. What i would like to do, is check if a deer or a wild boar is in the image. This helps reduce flipping 700 images just to find those few with the relevant animal. Of course deer and boar may show up during day and night ;-).

    • Adrian Rosebrock June 27, 2017 at 6:32 am #

      For your particular project, I would treat this as a motion detection problem. The tutorial I linked to will help you build your wildlife camera surveillance system. From there, you’ll want to train a custom image classifier to recognize any animals you are interested in. Exactly which machine learning method you should use depends on your application, but I would recommend starting with PyImageSearch Gurus which has over 40+ lessons on image classification and object detection.

  14. Harrison June 26, 2017 at 10:26 pm #

    Hey Adrian, thanks so much for the tutorial! Are there other techniques to do this process for the same object but taken in two different photos. For example, if I wanted to compare a stop sign in two different photos, even if the photo is cropped the images will differ slightly by nature (due to lighting and other variables). Thanks!

    • Adrian Rosebrock June 27, 2017 at 6:09 am #

      To start, you would want to detect and extract the stop signs from the two input images. Exactly how you do this depends on your image processing pipeline. Once you have the stop signs there are a number of ways to compare them. For this method to work best, you would need to align the stop signs, which likely isn’t ideal. I would instead treat this is as an image similarity problem and apply image descriptors and feature extraction to quantify the stop signs.

      • Harrison June 27, 2017 at 2:34 pm #

        Thanks for the response! So would better techniques be things like zernike moments and color histogram comparisons?

        • Adrian Rosebrock June 30, 2017 at 8:28 am #

          Again, I think this depends on how you define “similarity”. What specifically are you trying to detect that is difference between road signs?

          My primary suggestion would be to take a look at the PyImageSearch Gurus course where I discuss object detection, feature extraction, and image similarity in detail.

          • Harrison July 3, 2017 at 10:31 am #

            I am trying to detect differences in color and shape. Essentially trying to determine if a street sign is misprinted by comparing it to a correctly printed one.

          • Adrian Rosebrock July 5, 2017 at 6:12 am #

            It’s hard to give an exact suggestion without seeing example images you are working with. Again, I would refer you to the PyImageSearch Gurus course I mentioned in the previous comment for more details on image similarity and comparison.

  15. y0c0 July 2, 2017 at 2:38 pm #

    i solved that, now i have another error:

    usage: image_diff.py [-h] -f FIRST -s SECOND
    image_diff.py: error: the following arguments are required: -f/--first, -s/--second

  16. Steven Leach July 2, 2017 at 8:28 pm #

    I was able to follow these instructions only by using sudo on my Linux mint system

  17. LianMing July 12, 2017 at 5:41 am #

    Nice work there, much appreciate it ! Is there a way to compare the images using the SSIM approach without converting it to greyscale? I realised there are some slight color changes in my images when viewing it with naked eyes but there were no difference after converting it to greyscale.

    • Adrian Rosebrock July 12, 2017 at 2:42 pm #

      If you wanted to compute SSIM for an RGB image you would simply separate the image into its respective Red, Green, and Blue components, compute SSIM for each channel, and average the values together.

  18. Andrew July 22, 2017 at 6:56 am #

    If 2 images are in different point of view, contrast, noise.. ? How can I know they are the same or not?

    • Adrian Rosebrock July 24, 2017 at 3:44 pm #

      Typically if you have objects that are captured at different viewing angles you would detect keypoints, extract local invariant descriptors, and then apply keypoint matching using RANSAC. If too few keypoints match, then the objects are not the same. If there is a reasonable percentage of overlap in the match, then the objects can be considered the same.

      I teach this in detail (with code) inside Practical Python and OpenCV where we learn how to recognize the covers of books.

  19. Prabhakar Srinivasan August 5, 2017 at 12:46 am #

    brilliant! thank you for sharing

    • Adrian Rosebrock August 10, 2017 at 9:10 am #

      Thanks Prabhakar 🙂

  20. anwar August 9, 2017 at 12:02 am #

    Hi Adrian, how to make output in terminal like boolean, 1 for no different and 0 for different?

    • Adrian Rosebrock August 10, 2017 at 8:50 am #

      For that I would also read this blog post as well. Basically, you need to supply a threshold on the SSIM or MSE value. If the threshold is small enough, there is no difference.

  21. Yitzhak September 6, 2017 at 7:31 am #

    thanks, great job !!

  22. Ambika September 11, 2017 at 6:32 am #

    Hi Adrian!

    Great Tutorial for image difference detection. I just wanted to know if instead of drawing rectangle around the difference region, can we mask that area out? So that the whole image is visible and the part which is different is white. How can we do it? Can you please help.

    • Adrian Rosebrock September 11, 2017 at 9:00 am #

      You could mask the area out as well. You could draw a (filled in) rectangle around the contour area. You could compute the bounding box and then draw the rectangle. You could use the cv2.bitwise_and function. There are a number of different ways to accomplish this task.

  23. Erika September 21, 2017 at 5:33 pm #

    It is great! Thank you.
    But I wondered the value of it in reality because we always have to get the original image for comparison. Usually, it is not that kind of easy job.

    • Adrian Rosebrock September 23, 2017 at 10:13 am #

      If you are trying to compare two images for similarity you must have an original image of some sort in your dataset — otherwise you would not have anything to compare against.

  24. manu September 23, 2017 at 4:20 am #

    hi adrian i found this error
    pi@raspberrypi:~ $ python diffrence.py --first 3.jpg --second 4.jpg
    Traceback (most recent call last):
      File "diffrence.py", line 23, in <module>
        (score, diff) = structural_similarity(grayA, grayB, full=True)
    TypeError: structural_similarity() got an unexpected keyword argument 'full'

    • Adrian Rosebrock September 23, 2017 at 9:59 am #

      Can you run pip freeze and confirm which version of scikit-image you are using?

  25. Vin September 23, 2017 at 7:11 pm #

    Hi I am improving my knowledge from your tutorials.
    I have question like at road intersection we have combination of different signs like traffic light is with green colour and priority sign how can recognition this combination . Display a text like vehicle has priority to move?? Like if I detect both signs together . Or priority and straight ahead sign is there how can recognize the combination of 2 signs saying that vehicle having priority in this road??

    • Adrian Rosebrock September 24, 2017 at 8:45 am #

      I would need to see an example image of what you’re working with, but if I understand your question correctly you would need to train a multi-class object detector that can recognize each of the traffic signs + priority indicators.

  26. manu September 25, 2017 at 12:29 am #

    hi adryan, i am having this version
    scikit-image==0.10.1
    how can i solve this problem. please help.
    actually we want this for detecting the errors in PCB board. we are using a 8 mp camera which gives less clarity image. can you suggest a better option for this

    • Adrian Rosebrock September 26, 2017 at 8:30 am #

      That is a pretty old version of scikit-image so that’s likely the issue. Make sure you upgrade it:

      $ pip install --upgrade scikit-image

  27. pochao October 24, 2017 at 3:39 am #

    Thank you for your share

    Very useful for me

  28. pochao October 24, 2017 at 9:37 pm #

    It works on my Raspberry Pi.

    But this message appears, what’s mean?

    * (Original:10126): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files

    • Adrian Rosebrock October 25, 2017 at 12:28 pm #

      Hi Pochao — try this package to get rid of that warning: sudo apt-get install libcanberra-gtk*

  29. Preethi K November 3, 2017 at 12:41 am #

    Hi Adrian,

    Nice article about the comparison since I don’t want to compare the complete image will it be possible to compare a part of the reference image with the current image and then wanted to decide on the correctness. Is there an option for this.

    • Adrian Rosebrock November 6, 2017 at 10:52 am #

      There are multiple ways to accomplish this, most are dependent on the exact images you are trying to compare. If you know the region you want to extract I would suggest using NumPy array slices to extract the ROI and then compare it. If the region location can vary in the images, use keypoint detection + local invariant descriptors + keypoint matching as I do in the “Book cover recognition” chapter of Practical Python and OpenCV.

  30. manu prasad November 27, 2017 at 3:34 am #

    hi adryan
    can i use this method for detecting the errors in a pcb board (for example soldering, improper connection etc.).i have a 8mp pi cam. but when i used this method it showing lot and lot of errors which is actually not needed. how can i fix it. please help.why it is showing lot of errors when comparing two images taken using pi camera?.please help me to fix it. waiting for your suggestions.

    • Adrian Rosebrock November 27, 2017 at 12:59 pm #

      If you are able to align both images, then yes, this method would work for simple image differences.

  31. Ilja December 7, 2017 at 7:37 am #

    Hi Adrian, thanks for the code!
    I just can’t manage to run it properly, I keep getting the error:

    image_diff.py: error: argument -f/--first is required

    I can’t really trace where this comes from, I am quite new to this. Do you know what I have to change or install for this error to disappear? Would be great!! Thanks in advance, cheers!

    • Adrian Rosebrock December 8, 2017 at 4:50 pm #

      Hi Ilja — please read up on command line arguments. You need to open up a terminal and execute the script via the command line, exactly as I do in this blog post.

  32. Bilal December 14, 2017 at 11:40 am #

    Hi Adrian,

    Thanks for the tutorial.
    I will send you two images which are almost the same. If you calculate the difference via ImageJ, you will see a black image but by using you algorithm it just cause chaos.
    I think cv2.THRESH_OTSU is the culprit here.

    Thanks and Regards,

    Bilal

    • Adrian Rosebrock December 15, 2017 at 8:22 am #

      It’s hard to say what the exact issue is without seeing the images. ImageJ might be using a different implementation or it could be nuance of your two particular images.

      • Bilal January 26, 2018 at 5:26 am #

        Sorry for the delayed response. How can I send the images to you? I have put my email address as well.

        • Adrian Rosebrock January 26, 2018 at 10:03 am #

          Please send me a message and from there we can chat over email. You can send the images as attachments or upload to a cloud-based service such as Dropbox.

  33. Anh January 22, 2018 at 2:13 pm #

    Hi Adrian,

    I’m trying to solve a problem of checking if 2 given handwritten signatures are taken from the same person. Can you show me how to do it?

  34. Gandhirajan February 5, 2018 at 1:14 pm #

    Hi Adrian,

    Thanks for the very useful post. In one of my use case, I gotta compare two images to figure out the dimension differences between the two as one of the image is a reference master image. Any pointer on how to approach this use case?

    • Adrian Rosebrock February 6, 2018 at 10:10 am #

      Hey Gandhirajan — what do you mean by “dimension differences”? Are you referring to computing the width or height of an object?

  35. Pranali February 27, 2018 at 4:47 am #

    Hey ..
    Thank u for this code !
    I installed dependencies , scikit image ..still there is error showing can’t import compare.ssim
    Please help with this !
    Thank u !

    • Adrian Rosebrock February 27, 2018 at 11:25 am #

      Hey Pranali — just to confirm, which version of scikit-image do you have installed? You can verify via “pip freeze”.

  36. Vinay March 6, 2018 at 9:26 am #

    Hello Adrian,

    Is there any way to automatically decide which filter will be applied on image by analyzing image quality.
    While searching for the above query i reach to your this post, so I ask here.

    Thanks a lot..!!

    • Adrian Rosebrock March 7, 2018 at 9:12 am #

      I’m not sure what you mean by “automatically decide which filter”. Could you elaborate?

  37. Rinsha V.K March 19, 2018 at 5:08 am #

    This guide was very helpful. I could understand everything.Thankyou for putting this. But when I run the program i get this :
    ImportError: cannot import name compare_ssim

    • Adrian Rosebrock March 19, 2018 at 4:54 pm #

      Hey Rinsha — what version of scikit-image are you using?

  38. Ravikumar March 27, 2018 at 3:55 pm #

    Hey Adrian,
    This was a very nice tutorial. Although i wanted to know if there is a way to show any difference in intensity of the photos. Eg. Green and DarkGreen.

    • Adrian Rosebrock March 30, 2018 at 7:29 am #

      Once you know where in the images the difference occurs you can extract the ROI and then do a simple subtraction on the arrays to obtain the difference in intensities.

  39. Nihel March 28, 2018 at 6:38 am #

    Hello Adrian,
    Thank u for this code
    still there is error showing can’t import compare.ssim
    version of scikit-image:

    Python 3.5.2
    >>> import skimage
    >>> print(skimage.__version__)
    0.10.1

    • Adrian Rosebrock March 30, 2018 at 7:19 am #

      You need to upgrade your scikit-image install:

      $ pip install --upgrade scikit-image

  40. ju April 8, 2018 at 7:00 am #

    I want to know how the data_range parameter of compare_ssim works. As I understand it, the function compares the SSIM score of every pixel. Two of my pictures are very similar, but their score is very low.
    If I change the data_range, the score improves, but I don’t know how it works. Hope you can give me an answer.

    • Adrian Rosebrock April 10, 2018 at 12:29 pm #

      The data_range here would be the distance between the minimal pixel value (0) and max pixel value (255). This value is estimated from the data point (unsigned 8-bit integers).

  41. Esteban April 9, 2018 at 9:08 am #

    Hi! Great code, but I encountered an issue. It works fine when there is a difference, finding and drawing the contours. However, when there is no evident difference between the two images, it draws thousands of contours across the image, and that affects the code I’m using this for. What causes this and how can I fix it?

    • Adrian Rosebrock April 10, 2018 at 12:08 pm #

      I haven’t encountered this problem before. To be honest I’m not sure what would cause it. A quick fix would be to check the returned SSIM score and if it’s above a (user defined) threshold you can simply mark the images as “similar enough/identical” and continue on with the program.

  42. Sudheendra April 22, 2018 at 1:14 pm #

    Hi Adrian,

    I want to compare 2 images that have a subtle difference :
    1st image has a nut and bolt without grease.
    2nd image has a nut and bolt with grease.

    I want the “grease” difference to be the output.

    Could you please let me know which would be the most accurate way to achieve this?

    Would the code mentioned for the above example be useful or is there a better way to handle this?

    Also, please let me know is Open CV is the only way to achieve this? I checked the AWS, Azure APIs but could not find any service that would solve this.

    I need this for handling a Business use case, so please let me know the best option.

    Thanks in advance.

    Regards,
    Sudheendra

    • Adrian Rosebrock April 25, 2018 at 6:10 am #

      It’s very likely that you will need to implement this algorithm by hand, again, most likely using OpenCV. There will not be an “off the shelf” solution for you to solve this project.

      Do you have any example images of what the two images you need to compare look like? It would be best to see them before providing any advice.

      • Sudheendra April 29, 2018 at 1:10 am #

        Hi Adrian,

        Thanks for the reply.

        I have uploaded 2 sample images in pasteboard:

        Without grease image – https://pasteboard.co/HiObAUb.jpg

        With grease image – https://pasteboard.co/HiObPBj.jpg

        When these 2 images are compared , I want the difference to be highlighted – main difference being the grease. This is a use case from an automobile industry.

        Could you please let me know how to implement this and get very good accuracy?
        I want this to be implemented on AWS. I saw another post of yours in which you had done all the pre-setup on AWS for Python and OpenCV.

        Please guide me. Eagerly waiting for your reply.

        Regards,
        Sudheendra

        • Adrian Rosebrock May 3, 2018 at 10:24 am #

          The first step here would be to detect the grease. You could try color thresholding but that wouldn’t work well with varying lighting conditions. Instead, I would suggest trying to train a deep learning-based segmentation network, such as Mask R-CNN or UNet. It will be a very challenging project to say the least (just wanted to give you a warning).

  43. Sudheendra R May 7, 2018 at 9:49 am #

    Hi Adrian,

    Thanks for the reply. I can sense that it will not be easy. 🙂 Nevertheless, thanks for the advice. I will try it out. I have another immediate business requirement and I need your guidance for sure 🙂

    We need a customized image processing algorithm that would provide a specific set of output parameters after an image is processed.

    If the image is of a manufacturing part such as a fastener, then after processing the image we need output parameters like dent, line, scratch, ring, etc. and their accuracy percentages. This would be a quality check of the part. It would enhance the visual inspection and result in a better quality end product by identifying defective parts at an early phase, before assembly.

    For this, we need an algorithm built on a specific image classifier and a trained model. Please let me know which platform should I use and how should I proceed.

    Regards,
    Sudheendra

    • Adrian Rosebrock May 9, 2018 at 10:04 am #

      While you could use basic image processing here I do not think it would work well for a robust solution. Deep learning-based methods would likely achieve the highest accuracy provided you have enough training data for each defect/problem you’re trying to detect. I personally prefer Keras for training deep neural networks. I discuss how to work with Keras and train your own networks inside my book, Deep Learning for Computer Vision with Python.

      • Sudheendra R May 13, 2018 at 11:00 am #

        Thank you for your reply. I will check the details and get back to you.

        Regards,
        Sudheendra

  44. Ali.K May 18, 2018 at 4:09 pm #

    Hi Adrian, many thanks for the tutorial. A question regarding thresholding: “On Lines 31 and 32 we threshold our diff image using both cv2.THRESH_BINARY_INV and cv2.THRESH_OTSU; both of these settings are applied at the same time using the vertical bar”. I applied the same technique in my project (where I am detecting/following a black line in front of the robot) and it improves the number of frames in the video where the line is detected correctly, which is good! But I do not understand how it works! I read the documentation you referred to, and it says “the OTSU algorithm finds the optimal threshold value automatically”. So when using the following line of code for thresholding, which threshold value is finally used? The one I define or the one determined by the OTSU algorithm? I am just trying to understand why it is improving the performance of my code. Thanks 🙂
    cv2.threshold(blurred, 100, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

    • Adrian Rosebrock May 22, 2018 at 6:34 am #

      The “cv2.threshold” function will return two values: the threshold value T and the thresholded image itself. When the THRESH_OTSU flag is set, the value you pass in (100 in your example) is ignored; the returned value “T” is the one the Otsu method automatically computed, and that is the threshold actually applied. Using an optimal, image-specific threshold instead of a fixed one is likely why your detection improves across frames.

  45. Nut June 12, 2018 at 5:48 am #

    Hi,

    I really love your article, but I have a few questions. I’m thinking about developing a testing framework for my company’s website to detect bugs between the new version and the old one. The test cases include color matching, positioning, etc. My first question is: what is the main reason you use grayscale for your comparison? I see that in compare_ssim we can add multichannel=True so that color images can be compared. Second, if that works, could I use it for the testing framework, or do you have any other suggestion?

    Thanks,

    • Adrian Rosebrock June 13, 2018 at 5:38 am #

      For this particular example we were examining the “structural components” of the input object. The actual color did not matter. If you are monitoring a website then color would certainly matter as a bug in the CSS could cause a different color text, background, etc. You would want to experiment with both. You should also take a look at template matching to help you locate specific components on the website.

  46. Coby June 13, 2018 at 6:05 pm #

    Question: Does this library perform well when there are differences in rotation, translation, and scaling? I deal with scanned images where there can be a slight skew when scanning that introduces an angle of rotation. When comparing two scanned images they also might not have the same X-Y scan origin and hence have a translation. And the pixel densities and related scaling may also differ due to scanner and output differences.

    • Adrian Rosebrock June 15, 2018 at 12:36 pm #

      No, this method does not work for differences in rotation, translation, scaling, etc. For that you would need to use keypoint detectors, local invariant descriptors, and keypoint matching to locate objects in your two images.

  47. Sarah Kunkel July 6, 2018 at 2:50 pm #

    Hi! I am having trouble running this program. I have installed and updated scikit-image on my computer, but it is still saying that there is no module named skimage. Here is the exact error:

    from skimage.measure import compare_ssim as ssim
    ModuleNotFoundError: No module named ‘skimage’

    I am confused why it is not recognizing skimage even though I have downloaded it on my computer.

    Thanks.

    • Adrian Rosebrock July 10, 2018 at 8:50 am #

      Are you using Python virtual environments? My guess is that you may have installed scikit-image globally but not into your Python virtual environment.
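      One quick way to diagnose this: run the snippet below both inside and outside your virtual environment and compare the interpreter paths it prints.

```python
import sys

# which interpreter is actually running this script?
print(sys.executable)

# can that interpreter see scikit-image?
try:
    import skimage
    print("scikit-image", skimage.__version__, "is visible to this interpreter")
except ImportError:
    print("scikit-image is NOT installed for", sys.executable)
```

      If the import fails inside the environment, activate it first (e.g. with workon, using whatever your environment is named) and run pip install scikit-image there.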

  48. David July 12, 2018 at 9:53 am #

    Hi Adrian,
    Great article. I am new to python. I am trying to capture characteristics of 2 different image shapes. Characteristics such as color, shape, size, location. I need to compare the differences between the two images. For example imageA may consist of a small circle and imageB may have a larger circle. The difference between imageA and imageB in this case would be that ImageB’s circle grew in size.

    May I ask what python library(s) would be the best to handle this type of task? Thanks

    • Adrian Rosebrock July 13, 2018 at 5:04 am #

      OpenCV would be ideal for this task along with (potentially) a machine learning library such as scikit-learn. I would suggest you work through Practical Python and OpenCV where I teach the fundamentals of the OpenCV library and even include some case studies on how to compare images. One chapter even covers how to recognize the covers of books in images via keypoint matching — this algorithm could be adapted for your own problems.

      A more in-depth study of feature extraction and object detection/recognition can be found inside the PyImageSearch Gurus course which includes 160+ lessons and approximately 70 lessons specifically related to your problem.

      I hope that helps point you in the right direction!

  49. Deepali agarwal July 12, 2018 at 3:09 pm #

    This is perfect! Thanks so much. It works perfectly for my image comparison automation.

    • Adrian Rosebrock July 13, 2018 at 4:55 am #

      Awesome, I’m glad to hear it Deepali!

  50. Viktor July 23, 2018 at 10:14 am #

    Hello Adrian. Thank you for the really good work. Could you tell me: can I supply the arguments (the paths to the images) not as switches in the console or parameters in the IDE, but simply as parameters?
    I have some Selenium tests that produce screenshots, and I want to compare those screenshots with
    reference images.

    • Adrian Rosebrock July 25, 2018 at 8:11 am #

      Hey Viktor — have you considered creating a configuration file, perhaps in JSON format, and then updating the script to load the configuration/parameters when you execute the script? That would probably be the easiest.
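      A sketch of that approach, using a hypothetical config.json holding the same --first/--second values the script currently takes on the command line (written to a temporary file here only to keep the example self-contained):

```python
import json
import os
import tempfile

# hypothetical settings replacing the --first/--second CLI arguments
settings = {"first": "images/original_02.png",
            "second": "images/modified_02.png"}

# in practice you would ship a config.json next to the script;
# we write a temporary file just to demonstrate the round trip
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(settings, f, indent=2)

with open(path) as f:
    config = json.load(f)

# the script can now read config["first"] / config["second"]
# instead of parsing command line arguments with argparse
```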

  51. IndhraG July 25, 2018 at 5:30 am #

    Hi Adrian,

    This works perfectly. But how can I apply this same concept to two images with different dimensions? Is that possible? Thank you.

    • Adrian Rosebrock July 25, 2018 at 7:54 am #

      If the images are similar in aspect ratio I would suggest resizing them so that each image has the same dimensions. If they do not have similar dimensions you should apply more advanced techniques, such as keypoint matching.

  52. Manbodh August 1, 2018 at 3:30 am #

    Sir, while using vars (line 13) I am getting an exception from the system. What should I do?

    • Adrian Rosebrock August 2, 2018 at 9:32 am #

      What is the exact exception? My guess is that you did not provide the command line arguments properly. I would suggest you read this post on command line arguments to see if that resolves your error.

  53. heetak Chung August 2, 2018 at 5:24 pm #

    Hi Adrian,
    I read your article very well. After then, I have a question.
    How can I measure the similarity percentage between two versions of the same image with a different ratio?
    For example, suppose I make a 10% enlarged version of the original image and then, instead of resizing it back, I cut off the sides so it has the same dimensions as the original. How can I build an algorithm that still measures a high similarity percentage for that image?

    • Adrian Rosebrock August 7, 2018 at 6:49 am #

      Building an algorithm that can still measure image differences even after distortions can be incredibly challenging. You might want to take a look at “perceptual hashing” papers for inspiration, including the work from TinEye. I do not have a solution offhand for this project.

  54. parisa August 13, 2018 at 12:51 pm #

    Hi dear Adrian,
    Could you please help me and tell me how I can filter an image so that only the pixels close to a certain color are left?

  55. Kaustav Mukherjee August 14, 2018 at 8:01 pm #

    Hi Adrian,

    With the RANSAC algorithm as you commented earlier, can we look for similarities in images with slightly differing features? For example, two images of an American Express Gold Card taken from slightly different angles, the cards having different numbers, names, and expiry dates, but with the rest of the card exactly the same?

    • Adrian Rosebrock August 15, 2018 at 8:19 am #

      I wouldn’t recommend using keypoint matching to help detect differences. You would normally use keypoints and keypoint matching to verify correspondences. If you’re interested in the actual text on the card (name, expiration date, etc.) you should just OCR the image.

      • Kaustav Mukherjee August 15, 2018 at 4:06 pm #

        Thanks Adrian. What I meant is, I am not interested in the card details, but just in verifying whether two cards are of the same type (say both are American Express Gold cards, but belonging to different people). Will this algorithm help to detect that the two cards are of the same type even though they are taken from slightly different angles and some content differs (names, card numbers, expiry dates, etc.)?

        • Adrian Rosebrock August 17, 2018 at 8:13 am #

          The best way to detect differences in name, number, expiration dates, and card types would be to OCR the card and compare directly. You could use this method to a degree but it wouldn’t be as accurate as OCR’ing directly.

  56. David Mike August 15, 2018 at 5:26 am #

    Thanks for sharing these top-class tips with us. I love this post so much. Your image differences with OpenCV and Python helped me a lot. This post just amazes me.

    • Adrian Rosebrock August 15, 2018 at 8:13 am #

      Thanks David, I’m glad you enjoyed the post 🙂

  57. Midun August 19, 2018 at 3:34 am #

    Hi Adrian,

    thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    What does this [1] refer to? Without [1] I get an error telling me _’tuple’ object has no attribute ‘copy’_. Can you please explain it?

    —– Your tutorials are awesome and makes me learn more. And I really appreciate you for helping out even for older posts —-

    • Adrian Rosebrock August 22, 2018 at 10:08 am #

      The “1” is the index of the returned value. The cv2.threshold function will return two values: The threshold value T and the thresholded image itself. Without knowing your exact full error I’m not sure what the error may be.

  58. Quinn September 3, 2018 at 11:08 am #

    Hi Adrian,

    Is there any way to detect changes in gradient and colour, as well as the visual differences used in the credit card example above together?

    I’m having a hard time identifying these differences in grayscale as the threshold algorithm used to draw the bounding box isn’t picking these up.

    • Adrian Rosebrock September 5, 2018 at 8:56 am #

      Absolutely. I actually cover how to detect changes in gradient for barcode detection in this post. You can adapt it for your own purposes 🙂

  59. Alex September 27, 2018 at 5:02 pm #

    Hi Adrian
    Can you give me some input on applying this method on video files?
    I have a photo of the garage from inside looking out and video from the same angle filming car attempting to park in. My idea was to make some sort of subtraction to remove everything but the car and then draw contours of result. In perfect world this would mean that only car would be that contour and I would draw rectangle around it and show that rectangle on original video frame.
    I managed to do it almost using Gaussian blur, addWeighted and adaptiveThreshold on both frames and then subtracting them, but the problem is that the car’s contour is too small when the car is outside the garage, and it is not detected until it is too close.

    Your method gives me better results when the car is far away, but a problem occurs when the car gets closer and the car’s lights hit the wall, so a “difference” between frames is detected. Also, because the car moves, everywhere the car was is also detected as a difference.

    My attempts to filter out those lights and reflections were in vain because compare_ssim works even worse then.

    My question is: is there any way to apply some threshold to the frames before comparing them with compare_ssim so I can avoid shadows and reflections? Also, can it work to show only the differences from the video file (like cv2.subtract) so I can avoid false difference detections behind the car?

    Sorry for long post 🙂

  60. kiran October 26, 2018 at 9:08 am #

    Hi, if the text difference were recognized and printed as well, it would be even better.

  61. Ibn Ahmad November 2, 2018 at 6:52 am #

    Thanks Adrian for the post. I am looking forward to using your tutorial(s) as a springboard into computer vision.
    Sorry for my noob question: how can I get the compared images?

    • Adrian Rosebrock November 2, 2018 at 7:10 am #

      I’m not sure what you mean by “get the compared images”? Could you clarify?

      • Ibn Ahmad November 3, 2018 at 9:15 am #

        What I meant is the sample of the two images used for the comparison. The pictures of the “Credit Card”.

        • Ibn Ahmad November 3, 2018 at 9:31 am #

          Nevertheless, Dr. Adrian, thanks for your response. I have run the code with different images and it ran successfully.

          • Adrian Rosebrock November 6, 2018 at 1:29 pm #

            Awesome, I’m glad it worked!

  62. Rohit November 14, 2018 at 12:53 am #

    Is this approach also applicable for motion detection using a surveillance camera?

  63. Ashley November 27, 2018 at 1:28 pm #

    Hi.. I was wondering if you might have a suggestion for looking at the same image but with a different illumination. Say a picture that is of the exact same area just at a different time of day?

    Thanks.

    • Adrian Rosebrock November 30, 2018 at 9:26 am #

      What specifically would you be trying to compare in those two images?

  64. Shreyans Bhansali December 5, 2018 at 3:08 pm #

    Awesome tutorial by the way!! Right now, the contours are based on mean structural similarity but what difference function should I use for contouring based on color? Thanks

    • Adrian Rosebrock December 6, 2018 at 9:33 am #

      Hey there, Shreyans. I’m not sure what you mean by “contouring based on color” — could you elaborate?

  65. Shreyans Bhansali December 10, 2018 at 11:19 am #

    Hey Adrian, I meant two scenes, same object, different color. I only consider the contour with the maximum area, so I want to look for the difference in the scene based on color and not structure.

    • Adrian Rosebrock December 11, 2018 at 12:43 pm #

      There are a few ways to approach that but I think color histograms would be the easiest approach. Extract the object, compute color histograms, and then compare.

  66. Nang Pham December 11, 2018 at 2:51 am #

    Have a good day Mr. Author,

    My English is not very good so I can not say much.
    Thank you for writing this tutorial. Please take me as your student.

    Thank you very much.

    • Adrian Rosebrock December 11, 2018 at 12:36 pm #

      Thank you Nang, I appreciate that!
