How-To: Python Compare Two Images

Would you have guessed that I’m a stamp collector?

Just kidding. I’m not.

But let’s play a little game of pretend.

Let’s pretend that we have a huge dataset of stamp images. And we want to take two arbitrary stamp images and compare them to determine if they are identical, or near identical in some way.

In general, we can accomplish this in two ways.

The first method is to use locality sensitive hashing, which I’ll cover in a later blog post.

The second method is to use algorithms such as Mean Squared Error (MSE) or the Structural Similarity Index (SSIM).

In this blog post I’ll show you how to use Python to compare two images using Mean Squared Error and Structural Similarity Index.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
This example will run on Python 2.7/Python 3.4+ and OpenCV 2.4.X/OpenCV 3.0+.

Our Example Dataset

Let’s start off by taking a look at our example dataset:

Figure 1: Our example image dataset. Left: The original image. Middle: The original image with contrast adjustments. Right: The original image with Photoshopped overlay.

Here you can see that we have three images: (left) our original image of our friends from Jurassic Park going on their first (and only) tour, (middle) the original image with contrast adjustments applied to it, and (right), the original image with the Jurassic Park logo overlaid on top of it via Photoshop manipulation.

Now, it’s clear to us that the left and the middle images are more “similar” to each other — the one in the middle is just like the first one, only it is “darker”.

But as we’ll find out, Mean Squared Error will actually say the Photoshopped image is more similar to the original than the middle image with contrast adjustments. Pretty weird, right?

Mean Squared Error vs. Structural Similarity Measure

Let’s take a look at the Mean Squared error equation:

Equation 1: Mean Squared Error

While this equation may look complex, I promise you it’s not.
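Written out, for two m x n grayscale images I and K, the MSE is just the average of the squared pixel-wise differences:

MSE = \frac{1}{m \, n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i, j) - K(i, j) \right]^2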

And to demonstrate this to you, I'm going to convert this equation into a Python function:
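Here is a minimal sketch of that function. Note that the line numbers cited in the walkthrough below refer to the original script included in the downloads (which also contains header comments and blank lines), not to this sketch:

import numpy as np

def mse(imageA, imageB):
    # convert to float so the subtraction does not wrap around,
    # then sum the squared pixel-wise differences
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)

    # divide by the total number of pixels to obtain the mean
    err /= float(imageA.shape[0] * imageA.shape[1])

    # return the MSE; the lower the error, the more "similar" the images are
    return err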

So there you have it — Mean Squared Error in only four lines of Python code once you take out the comments.

Let’s tear it apart and see what’s going on:

  • On Line 7 we define our mse function, which takes two arguments: imageA and imageB (i.e. the images we want to compare for similarity).
  • All the real work is handled on Line 11. First we convert the images from unsigned 8-bit integers to floating point, that way we don’t run into any problems with modulus operations “wrapping around”. We then take the difference between the images by subtracting the pixel intensities. Next up, we square these differences (hence mean squared error), and finally sum them up.
  • Line 12 handles the mean of the Mean Squared Error. All we are doing is dividing our sum of squares by the total number of pixels in the image.
  • Finally, we return our MSE to the caller on Line 16.

MSE is dead simple to implement — but when using it for similarity, we can run into problems. The main one being that large distances between pixel intensities do not necessarily mean the contents of the images are dramatically different. I’ll provide some proof for that statement later in this post, but in the meantime, take my word for it.

It’s important to note that a value of 0 for MSE indicates perfect similarity. A value greater than zero implies less similarity, and the MSE will continue to grow as the average difference between pixel intensities increases.

In order to remedy some of the issues associated with MSE for image comparison, we have the Structural Similarity Index, developed by Wang et al.:

Equation 2: Structural Similarity Index

The SSIM method is clearly more involved than the MSE method, but the gist is that SSIM attempts to model the perceived change in the structural information of the image, whereas MSE is actually estimating the perceived errors. There is a subtle difference between the two, but the results are dramatic.

Furthermore, the equation in Equation 2 is used to compare two windows (i.e. small sub-samples) rather than the entire image as in MSE. Doing this leads to a more robust approach that is able to account for changes in the structure of the image, rather than just the perceived change.

The parameters to Equation 2 include the (x, y) location of the N x N window in each image, the mean of the pixel intensities in windows x and y, the variance of the intensities in windows x and y, along with the covariance between the two.
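For reference, the standard form of the equation from Wang et al. for two windows x and y is:

\text{SSIM}(x, y) = \frac{(2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

where \mu_x and \mu_y are the window means, \sigma_x^2 and \sigma_y^2 the variances, \sigma_{xy} the covariance, and c_1, c_2 are small constants that stabilize the division.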

Unlike MSE, the SSIM value can vary between -1 and 1, where 1 indicates perfect similarity.

Luckily, as you’ll see, we don’t have to implement this method by hand since scikit-image already has an implementation ready for us.

Let’s go ahead and jump into some code.

How-To: Compare Two Images Using Python

We start by importing the packages we’ll need — matplotlib for plotting, NumPy for numerical processing, and cv2 for our OpenCV bindings. Our Structural Similarity Index method is already implemented for us by scikit-image, so we’ll just use their implementation.

Lines 7-16 define our mse method, which you are already familiar with.

We then define the compare_images function on Line 18 which we’ll use to compare two images using both MSE and SSIM. The compare_images function takes three arguments: imageA and imageB, which are the two images we are going to compare, and then the title of our figure.

We then compute the MSE and SSIM between the two images on Lines 21 and 22.

Lines 25-39 handle some simple matplotlib plotting. We simply display the MSE and SSIM associated with the two images we are comparing.

Lines 43-45 handle loading our images off disk using OpenCV. We’ll be using our original image (Line 43), our contrast adjusted image (Line 44), and our Photoshopped image with the Jurassic Park logo overlaid (Line 45).

We then convert our images to grayscale on Lines 48-50.

Now that our images are loaded off disk, let’s show them. On Lines 52-65 we simply generate a matplotlib figure, loop over our images one-by-one, and add them to our plot. Our plot is then displayed to us on Line 65.

Finally, we can compare our images together using the compare_images function on Lines 68-70.
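For reference, here is a minimal sketch of the full script reconstructed from the walkthrough above. The line numbers cited in the preceding paragraphs refer to the original script from the downloads, and the image filenames used here are placeholders:

# import the necessary packages
# (newer scikit-image versions renamed this function; there you would use
#  "from skimage.metrics import structural_similarity as ssim" instead)
from skimage.measure import structural_similarity as ssim
import matplotlib.pyplot as plt
import numpy as np
import cv2

def mse(imageA, imageB):
    # sum of the squared pixel-wise differences, averaged over all pixels
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    return err

def compare_images(imageA, imageB, title):
    # compute the mean squared error and structural similarity index
    m = mse(imageA, imageB)
    s = ssim(imageA, imageB)

    # set up the figure and show both images side by side with the scores
    fig = plt.figure(title)
    plt.suptitle("MSE: %.2f, SSIM: %.2f" % (m, s))
    fig.add_subplot(1, 2, 1)
    plt.imshow(imageA, cmap=plt.cm.gray)
    plt.axis("off")
    fig.add_subplot(1, 2, 2)
    plt.imshow(imageB, cmap=plt.cm.gray)
    plt.axis("off")
    plt.show()

# load the three example images off disk (filenames are placeholders)
original = cv2.imread("images/jp_original.png")
contrast = cv2.imread("images/jp_contrast.png")
shopped = cv2.imread("images/jp_photoshopped.png")

# convert the images to grayscale
original = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
contrast = cv2.cvtColor(contrast, cv2.COLOR_BGR2GRAY)
shopped = cv2.cvtColor(shopped, cv2.COLOR_BGR2GRAY)

# generate a figure and add each image to the plot
fig = plt.figure("Images")
images = ("Original", original), ("Contrast", contrast), ("Photoshopped", shopped)
for (i, (name, image)) in enumerate(images):
    ax = fig.add_subplot(1, 3, i + 1)
    ax.set_title(name)
    plt.imshow(image, cmap=plt.cm.gray)
    plt.axis("off")
plt.show()

# compare the images pair by pair
compare_images(original, original, "Original vs. Original")
compare_images(original, contrast, "Original vs. Contrast")
compare_images(original, shopped, "Original vs. Photoshopped")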

We can execute our script by issuing the following command:
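Assuming the script is saved as compare.py (the filename that appears in a reader's traceback in the comments below) and the images sit alongside it, the command would look something like:

$ python compare.py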

Results

Once our script has executed, we should first see our test case — comparing the original image to itself:

Figure 2: Comparing the two original images together.

Not surprisingly, the original image is identical to itself, with a value of 0.0 for MSE and 1.0 for SSIM. Remember, as the MSE increases the images are less similar, as opposed to the SSIM, where smaller values indicate less similarity.

Now, take a look at comparing the original to the contrast adjusted image:

Figure 3: Comparing the original and the contrast adjusted image.

In this case, the MSE has increased and the SSIM decreased, implying that the images are less similar. This is indeed true — adjusting the contrast has definitely “damaged” the representation of the image.

But things don’t get interesting until we compare the original image to the Photoshopped overlay:

Figure 4: Comparing the original and the Photoshopped overlay image.

Comparing the original image to the Photoshop overlay yields an MSE of 1076 and an SSIM of 0.69.

Wait a second.

An MSE of 1076 is smaller than the previous value of 1401. But clearly the Photoshopped overlay differs from the original far more dramatically than the simple contrast adjustment does! But again, this is a limitation we must accept when utilizing raw pixel intensities globally.

On the other end, SSIM returns a value of 0.69, which is indeed less than the 0.78 obtained when comparing the original image to the contrast adjusted image.

Summary

In this blog post I showed you how to compare two images using Python.

To perform our comparison, we made use of the Mean Squared Error (MSE) and the Structural Similarity Index (SSIM) functions.

While the MSE is substantially faster to compute, it has the major drawback of (1) being applied globally and (2) only estimating the perceived errors of the image.

On the other hand, SSIM, while slower, is able to perceive the change in structural information of the image by comparing local regions of the image instead of globally.

So which method should you use?

It depends.

In general, SSIM will give you better results, but you’ll lose a bit of performance.

But in my opinion, the gain in accuracy is well worth it.

Definitely give both MSE and SSIM a shot and see for yourself!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


78 Responses to How-To: Python Compare Two Images

  1. Xavier Paul November 26, 2014 at 4:53 am #

    Good day Adrian, I am trying to do a program that will search for an Image B within an Image A. I’m able to do it with C#, but it takes about 6 seconds to detect image B in A and report its coordinates.

    Please can you help me?

    • Adrian Rosebrock November 26, 2014 at 7:14 am #

      Hi Xavier. I think you might want to take a look at template matching to do this. I did a guest post over at Machine Learning Mastery on how to do this.

  2. Mark December 4, 2014 at 11:33 pm #

    Marvellous! It’s in a very good way to describe and teach. Thanks for the great work.

    Next step, would it be possible to mark the difference between the 2 pictures?

    Below is a simple way, but I am much looking forward to see an advance one. Thanks.

    from PIL import Image
    from PIL import ImageChops
    from PIL import ImageDraw

    imageA = Image.open("Original.jpg")
    imageB = Image.open("Editted.jpg")

    dif = ImageChops.difference(imageB, imageA).getbbox()
    draw = ImageDraw.Draw(imageB)
    draw.rectangle(dif)
    imageB.show()

    • Adrian Rosebrock December 5, 2014 at 7:15 am #

      Hi Mark, a very simple way to visualize the difference between two images is to simply subtract one from the other. OpenCV has a built in function for this called cv2.subtract.

  3. Mark December 9, 2014 at 2:10 am #

    Hello dear Doctor Rosebrock,

    Many thanks for your reply, and guidance.

    I googled and I can only find some examples involved cv2.subtract for other purposes but not marking differences between 2 pictures.

    You have November and December posts using cv2.subtract but newbie like me just don’t get how to make it work to mark difference.

    Would you mind give an example, if you have time? Thanks.

    • Adrian Rosebrock December 9, 2014 at 7:30 am #

      Hi Mark, if I understand correctly, you are trying to visualize the difference between two images after applying the cv2.subtract function? If so, all you need to do is apply cv2.imshow to the output of cv2.subtract. Then you’ll be able to see your output.
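      A minimal sketch of what that might look like (the filenames are placeholders):

      import cv2

      # load the two images and subtract one from the other
      imageA = cv2.imread("original.png")
      imageB = cv2.imread("edited.png")
      diff = cv2.subtract(imageA, imageB)

      # display the difference image until a key is pressed
      cv2.imshow("Difference", diff)
      cv2.waitKey(0)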

  4. Mridula February 17, 2015 at 12:47 am #

    Hi Adrian,

    I need to compare 2 images under a masked region. Can you help me with that? I mean how to i extend this code to work for a subregion of the images. Also i do a lot of video processing, like comparing whether 2 videos are equal or whether the videos have any artifacts. I would like to make it automated. Any posts on that?

    • Adrian Rosebrock February 17, 2015 at 6:52 am #

      Hi Mridula. If you’re comparing 2 masked regions, you’re probably better off using a feature based approach by extracting features from the 2 masked regions. However, if your 2 masked regions have the same dimensions or aspect ratios, you might be able to get away with SSIM or MSE. And if you’re interested in comparing two images/frames to see if they are identical, I would utilize image hashing.

    • Umesh February 22, 2016 at 5:34 pm #

      Hi Mridula,
      I am looking for something similar to what you are doing on automating the comparison of 2 videos.
      Any input on what you are using and how you went ahead?
      Thanks in advance

  5. bhavesh March 19, 2015 at 2:51 am #

    can you guide regarding how to compare two card , one image (card) is stored in disk and second image(card ) to be compare has been taken from camera

    • Adrian Rosebrock March 19, 2015 at 7:05 am #

      Hi Bhavesh, if you are looking to capture an image from your webcam, take a look a this post to get you started. It shows an example on how to access your webcam using Python + OpenCV. From there, you can take the code and modify it to your needs!

  6. Weston Renou March 19, 2015 at 4:13 pm #

    Thanks for this. I’ve inadvertently duplicated some of my personal photos and I wanted a quick way to de-duplicate my photos *and* a good entry project to start playing with computer vision concepts and techniques. And this is a perfect little project.

    • Adrian Rosebrock March 19, 2015 at 4:42 pm #

      Hi Weston, I’m glad the article helped. You should also take a look at my post on image hashing.

  7. Wookyung An April 4, 2015 at 3:13 am #

    Hi,

    I am trying to evaluate the segmentation performance between segmented image and ground truth in binary image. In this case, which metric is suitable to compare?

    Thank you.

    • Adrian Rosebrock April 4, 2015 at 7:13 am #

      That’s a great question. In reality, there are a lot of different methods that you could use to evaluate your segmentation. However, I would let your overall choice be defined by what others are using in the literature. Personally, I have not had to evaluate segmentation algorithms analytically before, so I would start by reading segmentation survey papers such as these and seeing what metrics others are using.

  8. Timothy Clemans April 18, 2015 at 1:06 pm #

    How do I compare images of different sizes?

    • Adrian Rosebrock April 18, 2015 at 1:27 pm #

      Hey Timothy, if you want to compare images of different sizes using MSE and SSIM, just resize them to the same size, ignoring the aspect ratio. Otherwise, you may want to look at some more advanced techniques to compare images, like using color histograms.
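      For example, a minimal sketch of that resizing step (filenames are placeholders, and mse/ssim are the functions from the post):

      import cv2

      # load both images in grayscale and force the second image to the
      # first image's dimensions, ignoring the aspect ratio
      imageA = cv2.imread("first.png", cv2.IMREAD_GRAYSCALE)
      imageB = cv2.imread("second.png", cv2.IMREAD_GRAYSCALE)
      imageB = cv2.resize(imageB, (imageA.shape[1], imageA.shape[0]))

      # mse(imageA, imageB) and ssim(imageA, imageB) can now be applied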

  9. Ciaran April 21, 2015 at 4:23 am #

    Hi Adrian,

    That was a very informative post and well explained. I have it working with png images, do you know if it’s possible to compare dicom images using the same method?

    I have tried using the pydicom package but have not had any success.

    Any help or advice would be greatly appreciated!

    • Adrian Rosebrock April 21, 2015 at 6:29 am #

      Hi Ciaran, I actually have not worked with DICOM images or the pydicom package before. But in general, if you can get your image into a NumPy format, then you’ll be able to apply OpenCV and scikit-image functions to it.

      • Ciaran April 23, 2015 at 6:54 am #

        Thanks for the response, I have worked it out using NumPy and it’s now working for me. Thanks!

        • Mohit November 24, 2016 at 5:37 pm #

          Hey Ciaran, glad to hear you got working with DICOM images in python and PIL. I’ve been having some trouble. Would you be able to send over some starter code on using numpy to bridge the two packages? Thanks!

  10. Primoz July 3, 2015 at 7:48 am #

    Thank you for this great post. I am wondering how post about locality sensitive hashing is advancing?

    • Adrian Rosebrock July 3, 2015 at 10:27 am #

      Hey Primoz, thanks for the comment. Locality Sensitive Hashing is a great topic, I’ll add it to my queue of ideas to write about. My favorite LSH methods use random projections to construct randomly separating hyperplanes. Then, for each hyperplane, a hash is constructed based on which “side” the feature lies on. These binary tests can be compiled into a hash. It’s a neat, super efficient trick that tends to perform well in the real-world. I’ll be sure to do a post about it in the future!

  11. Arun July 14, 2015 at 8:10 am #

    Thank you for this great post. I am working on it . I would like to know how to convert the MSE to the percentage difference of the two images.

  12. budy August 9, 2015 at 10:37 pm #

    nice explanation….thanks

  13. Ninja November 22, 2015 at 6:07 pm #

    Hi Adrian
    Is there a way or a method exposed by scikit-image to write the diff between two images used to compare to another file?
    Also,Is there any way to ignore some elements/text in image to be compared?
    thanks

    • Adrian Rosebrock November 23, 2015 at 6:34 am #

      To write the difference between two images to file, you could just use normal subtraction and subtract the two images from each other, followed by writing them to file. As for ignoring certain elements in the image, no, that cannot be done without heavily modifying the SSIM or MSE function.

  14. Saurav Mondal January 13, 2016 at 4:51 am #

    Hi Adrian, I have tried a lot to install skimage library for python 2.7. but it seems there is a problem with the installations. am not able to get any help. is there anyother possible package that could help regarding the same? I am actually trying to implement GLCM and Haralick features for finding out texture parameters. Also, is there any other site that can help regarding the Skimage library problem??

    • Adrian Rosebrock January 13, 2016 at 6:31 am #

      Installing scikit-image can be done using pip:

      $ pip install -U scikit-image

      You can read about the installation process on this page. You weren’t specific about what error message you were getting, so I would suggest starting by following their detailed instructions. From there, you should consider opening a “GitHub Issue” if you think your error message is unique to your setup.

      Finally, if you want to extract Haralick features, I would suggest using mahotas.

      • saurav January 13, 2016 at 11:31 pm #

        Thanks a lot adrian

  15. Umesh February 22, 2016 at 6:03 pm #

    Hi Adrian,
    I am working on a project in which I need to compare two videos and give an output with the difference between the reference video and the actual converted video. And this whole process needs to be automated.
    Any input on this.
    Thanks in advance

    • Adrian Rosebrock February 23, 2016 at 3:25 pm #

      If you’re looking for the simple difference between the two images, then the cv2.absdiff function will accomplish this for you. I demonstrate how to use it in this post on motion detection.

  16. vj March 14, 2016 at 12:09 pm #

    hi adrian…..I am working on a project in which i need to compare the image already stored with the person in the live streaming and i want to check whether those persons are same.
    Thanks in advance

    • Adrian Rosebrock March 14, 2016 at 3:17 pm #

      That’s a pretty challenging, but doable problem. How are you comparing the people? By their faces? Their profile? Their entire body? Face identification would be the most reliable form of identification. In the case that people are allowed to enter and leave the frame (and change their clothing), you’re going to have an extremely hard time solving this problem.

      • vj March 15, 2016 at 12:21 am #

        i want to compare by their faces

        • Adrian Rosebrock March 15, 2016 at 4:33 pm #

          Got it, so you’re looking for face identification algorithms. There are many popular face identification algorithms, including LBPs and Eigenfaces. I cover both inside the PyImageSearch Gurus course.

  17. anu March 15, 2016 at 12:20 am #

    I want to compare an object captured from live streaming video with already stored image of the object.But i cant find the correct code for executing this.please help me

    • Adrian Rosebrock March 15, 2016 at 4:34 pm #

      Hey Anu, you might want to take a look at Practical Python and OpenCV. Inside the book I detail how to build a system that can recognize the covers of books using keypoint detection, local invariant descriptors, and keypoint matching. This would be a good start for your project.

      Other approaches you should look into include HOG + Linear SVM and template matching.

  18. Mohit April 7, 2016 at 1:42 pm #

    Hi Adrian, read your article and is quite helpful in what I am trying to achieve. Actually I am implementing algorithm for converting grayscale image to colored one based on the given grayscale image and an almost similar colored image. I have implemented it and now want to see how close is the resulting image to the given colored image. I have gone through your article and implemented what you have given here.
    1. Is there any other method to do so for colored images or will the same methods (MSE, SSIM and Locality Sensitive Hashing) work fine?
    2. Also, I read the paper related to SSIM in which it was written that SSIM works for grayscale images only. Is it really so?

    • Adrian Rosebrock April 8, 2016 at 12:57 pm #

      SSIM is normally only applied to a single channel at a time. Traditionally, this means grayscale images. However, in the case of both MSE and SSIM, just split the image into its respective Red, Green, and Blue channels, apply the metric to each channel, and then sum the per-channel errors/accuracies. This can be a proxy accuracy for the colorization of your image.
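      A minimal sketch of that idea, assuming imageA and imageB are color (BGR) images already loaded with cv2.imread and ssim is the SSIM function imported earlier in the post:

      import cv2

      # split each image into its Blue, Green, and Red channels
      (bA, gA, rA) = cv2.split(imageA)
      (bB, gB, rB) = cv2.split(imageB)

      # apply SSIM per channel and combine the scores
      scores = [ssim(bA, bB), ssim(gA, gB), ssim(rA, rB)]
      print(sum(scores) / 3.0)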

  19. sakir mistry April 26, 2016 at 3:19 am #

    How can I compare stored image and capturing image as per the pixel to pixel comparison for open CV python for the Raspberry Pi

  20. Naveed May 7, 2016 at 1:32 pm #

    Hi,
    Very useful article, for a beginner like me.
    I want to compare two “JPG” images captured pi cam, and in result give a bit to GPIO
    images are stored in Pi SD card.
    please help
    thanks.

    • Adrian Rosebrock May 8, 2016 at 8:05 am #

      There are various algorithms you can use to compare two images. I’ve detailed MSE and SSIM in this blog post. You could also compare images based on their color (histograms, moments), texture (LBPs, textons, Haralick), or even shape (Hu moments, Zernike moments). There is also keypoint matching methods which I discuss inside Practical Python and OpenCV. As for passing the result bit to the GPIO, be sure to read this blog post where I demonstrate how to use GPIO + OpenCV together. Next week’s blog post will also discuss this same topic.

  21. Rohan June 24, 2016 at 8:02 am #

    Hi Adrian,

    I am working in photogrammetry and 3D reconstruction. When the user clicks a point in the first image, I want that point to be automatically detected in the second image without the user selecting it, since manual selection leads to large errors. How can this be done? I have tried cropping the portion around the point and trying to match it with the brute force matcher and ORB. However, it detects no points.
    Please suggest a technique!!
    I can solve for the point mathematically but i want to use image processing to get the point.

    • Adrian Rosebrock June 25, 2016 at 1:33 pm #

      Solving for the point mathematically will always be more reliable than feature based matching methods. Do yourself a favor and do that instead.

  22. Anand Palanisamy July 20, 2016 at 1:04 am #

    Hey Adrian,
    I have a project where I have to use image comparison to identify whether two components are similar. For example, there will be images of several screws from various angles imported from a database. I will then have to compare the image of a particular screw against all these images and find the correct match and identify the type of screw. The program would have to take into account the length, width and other dimensions. I’m not quite sure as to how I would go about this.

    • Adrian Rosebrock July 20, 2016 at 2:37 pm #

      I would suggest treating this like an image search engine problem. I detail how to build a simple color-based image search engine in this post. However, color won’t be too helpful for identifying screws. So you’ll want to consider using a shape descriptor instead. For what it’s worth, I have another 30+ lessons on image descriptors and 20+ lessons on image search engines inside the PyImageSearch Gurus course.

      • Anand Palanisamy July 21, 2016 at 1:53 am #

        Thanks,
        I will go through it. Also, this post was very interesting to read, even though it was completely irrelevant to my project. Great stuff.

  23. Subash Thakuri July 27, 2016 at 2:59 pm #

    does this same concept work for handwritten signature matching?

    • Adrian Rosebrock July 29, 2016 at 8:43 am #

      It can, but only for signatures that are very aligned. Instead, I would recommend a different approach. There is a ton of research on handwritten signature matching. I would suggest starting with the research here and then expanding.

  24. JC July 28, 2016 at 12:08 am #

    I’m new to python but very interested to learn about this. Download the sample files but I’m getting this error.
    Traceback (most recent call last):
    File “/home/pi/Downloads/python-compare-two-images/compare.py”, line 5, in
    from skimage.measure import structural_similarity as ssim
    ImportError: No module named skimage.measure
    >>>

    • Adrian Rosebrock July 29, 2016 at 8:39 am #

      Make sure you have installed scikit-image on your system — based on your error message, it seems that scikit-image is not installed.

  25. Sam August 3, 2016 at 9:35 pm #

    Thanks for the post Adrian!
    Just a very quick ask: If i am to compare a set of images with my reference image using MSE to see which one fits the most, taking the mean wouldn’t be necessary right? Since I can already compare them?
    Just wanna make sure, thanks!

    • Adrian Rosebrock August 4, 2016 at 10:10 am #

      Correct, you would not need to take the mean — the squared error would still work.

  26. Chris August 18, 2016 at 3:08 am #

    Hey Adrian!
    thanks for your tut. i have compared in real time and i use raspberry pi to run my program. i use ssim to compare two frame but i have a problem that ssim algorithm use CPU to process, so it take me more than 10 sec to process two frame. And i know raspberry have GPU, and GPU can support to reduce time processing. do you know any algorithm in opencv to compare images? thank you so much 😀

    • Adrian Rosebrock August 18, 2016 at 9:27 am #

      If it’s taking a long time to compare two images, I’m willing to bet that you are comparing large images (in terms of width and height). Keep in mind that we rarely process images larger than 400-600 pixels along the largest spatial dimension. The more data there is to process, the slower our algorithms run! And while the added detail of high resolution images is visually appealing to the human eye, those extra details actually hurt computer vision algorithm performance. For your situation, simply resize your images and SSIM will run faster.

      • Chris August 18, 2016 at 10:09 am #

        you are right. my program is processing images with 640x480px and i get every frame from camera raspi capture. I test it on my computer with windows OS, it just take less than 1 sec to process 2 images, but when i use and convert that script to run on raspberry, it take more than 10 sec. It make me feel boring. I know raspberry can’t run fast as computer, so i want it process 2 image less than 5 sec, that’s great 😀 i use raspberry pi 2 and camera pi module.

        • Adrian Rosebrock August 22, 2016 at 1:44 pm #

          If you’re running the script on the Pi, make sure you use threading to improve the FPS rate of your pipeline.

  27. Jason Cameron September 14, 2016 at 7:58 pm #

    Hi Adrian.
    Thank you for your nice tutorial.
    I am just a beginner in image processing and it would be great if you answer my questions.

    I have a bunch of photos of clothes (some of them are clothes themselves and the rest of them are human wearing them).

    I’d like to get similar photos from them with an input image.

    1. Is this image search or image compare?
    2. To do this, what methods do you recommend?

    Thank you again.

    • Adrian Rosebrock September 15, 2016 at 9:31 am #

      There are many, many different ways to build an image search engine. In general, you should try to localize the clothing in the images before quantifying them and comparing them. I would also suggest utilizing the bag of visual words model, followed by spatial verification and keypoint matching. I cover building image search engines in-depth inside the PyImageSearch Gurus course.

  28. Mike October 6, 2016 at 10:38 am #

    Hello Adrian,

    Thank for the awesome work you’ve done here. Your tutorial works like a charm, but I’ve been playing around some large satellite images and I get a MemoryError at the mse calculation. 5000*5000 works fine but 7500*6200 throws it out of memory.
    Is it only dependent on my system’s RAM?
    Is there a way to split the array to smaller ones and still have the same result?

    Thanks in advance!

    • Adrian Rosebrock October 7, 2016 at 7:35 am #

      Please see my reply to “Chris” above. You would rarely process an image larger than 600px along its maximum dimension. In short, try resizing your images — there won’t be any memory issue. And depending on the contents of your satellite imagery, you shouldn’t see any loss in accuracy either.

  29. Goran October 14, 2016 at 5:47 am #

    I was looking for a way to use android camera as “liveview device” for old analog SLR (saw some YT, not my original idea) and possibility of taking a digital photo with all the exif data after analog camera snaps it’s photo. So this method could find similar photos between analog scans and digital batch (with all the viewfinder analog data – focusing tools, various numbers, dials etc.) and embed all the relevant data into scans. Am I on the right track here?

    • Adrian Rosebrock October 15, 2016 at 9:56 am #

      It depends on the quality of the images and how much they vary. Do you have any examples of your images from the two different sources?

      • Goran October 17, 2016 at 4:12 am #

        Hi,

        not yet, I am currently working on a rig that would secure the smartphone behind the SLR. I was trying with a sticky mat but the phone keeps falling off. Actually, to reduce costs I will be testing the setup with a DSLR.
        After some thinking I found some obstacles, ex: viewfinder does not represent 100% of image and there are black stripes where data in viewfinder is displayed + so the matching script would first have to crop smartphone pictures (crop parameters are unique from camera to camera, and in case of freely positioned smartphone + from session to session) and then try to compare the images. will inform you when/if it goes alive. thanks

  30. Sakshi Shreya November 20, 2016 at 7:38 am #

    I was trying to compare an image with a part of another image. The method was not working when I used if condition. Then after a lot of search, I ended up here. Finally, your mean square method worked. Thank you.

    • Adrian Rosebrock November 21, 2016 at 12:34 pm #

      I’m glad the post helped Sakshi!

  31. Anonym helper December 3, 2016 at 7:01 am #

    You shouldn’t import
    from skimage.measure import structural_similarity as ssim
    because structural_similarity is deprecated.

    Use
    from skimage.measure import compare_ssim as ssim
    instead

    • Diane December 12, 2016 at 2:47 pm #

      Thanks! For others who have asked questions, the compare_ssim function (as opposed to the deprecated structural_similarity) accepts multi-channel (such as RGB) images (and averages the SSIM of each channel). It may be faster on large images due to a more optimized algorithm. Also, it will optionally return an image of the SSIM patches, so that you can see which regions of the image match.
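      A minimal sketch of that usage (for the scikit-image versions where compare_ssim is available; imageA and imageB are assumed to be RGB arrays of the same shape):

      from skimage.measure import compare_ssim

      # score averages the per-channel SSIM; ssim_map is the full SSIM image,
      # useful for seeing which regions of the two images match
      (score, ssim_map) = compare_ssim(imageA, imageB, multichannel=True, full=True)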

      • Adrian Rosebrock December 14, 2016 at 8:46 am #

        Thank you for sharing Diane! I will play around with the updated compare_ssim function and consider writing an updated blog post.

  32. Justice January 19, 2017 at 7:07 am #

    Hello Adrian, and thank you very much for all that you do.
    I am having an issue with SSIM whilst comparing a saved image (50x40px, .png) and a frame grabbed using my USB cam. I have sliced the np.array to the corresponding size before the comparison (I used the same method to acquire and save the .png) and both report the same size when I use the np.size(img, 0/1) methods. Is there some outlying issue I am overlooking? Because I always get the error message saying that the two images must be the same dimensions. Thank you

    • Adrian Rosebrock January 20, 2017 at 11:03 am #

      Hey Justice — be sure to check the image.shape, not the .size. It could be that your slicing isn’t correct.

  33. Kush January 25, 2017 at 3:25 pm #

    unable to print values of mse and ssim……please help….

  34. Kenil March 14, 2017 at 12:38 am #

    Hey
    Can the above described algorithms be used for comparing human faces with decent accuracy?
    If not can you suggest what I can do?
    Thanks!

    • Adrian Rosebrock March 15, 2017 at 9:00 am #

      For recognizing faces in images I would recommend Eigenfaces, Fisherfaces, LBPs for face recognition, or embeddings using deep learning. I discuss these more inside the PyImageSearch Gurus course.

  35. Jayamathan March 22, 2017 at 2:09 am #

    Hey thks fr the tutorial. I am working on hand palm images to extract patterns of hand of different individuals and store it in a database and again comparing it so it can use for authentication purpose.

    Can u suggest how can i achieve it?

    • Adrian Rosebrock March 22, 2017 at 8:42 am #

      There are many different methods of comparing images to a pre-existing database, this is especially true for hand gesture recognition. In some cases you don’t even need a database of existing images to recognize the gesture.

      In any case, I would suggest working through the PyImageSearch Gurus course where I demonstrate hand gesture recognition methods. I also cover object detection methods (such as HOG + Linear SVM) in great detail.
