Blur detection with OpenCV


Between my father and me, Jemma, the super-sweet, hyper-active, extra-loving family beagle, may be the most photographed dog of all time. Since we got her as an 8-week-old puppy to now, just under three years later, we have accumulated over 6,000 photos of the dog.

Excessive?

Perhaps. But I love dogs. A lot. Especially beagles. So it should come as no surprise that as a dog owner, I spend a lot of time playing tug-of-war with Jemma’s favorite toys, rolling around on the kitchen floor with her as we roughhouse, and yes, snapping tons of photos of her with my iPhone.

Over this past weekend I sat down and tried to organize the massive amount of photos in iPhoto. Not only was it a huge undertaking, I started to notice a pattern fairly quickly — there were lots of photos with excessive amounts of blurring.

Whether due to sub-par photography skills, trying to keep up with super-active Jemma as she ran around the room, or her spazzing out right as I was about to take the perfect shot, many photos contained a decent amount of blurring.

Now, I suppose the average person would have just deleted these blurry photos (or at least moved them to a separate folder) — but as a computer vision scientist, that wasn’t going to happen.

Instead, I opened up an editor and coded up a quick Python script to perform blur detection with OpenCV.

In the rest of this blog post, I’ll show you how to compute the amount of blur in an image using OpenCV, Python, and the Laplacian operator. By the end of this post, you’ll be able to apply the variance of the Laplacian method to your own photos to detect the amount of blurring.


Variance of the Laplacian

Figure 1: Convolving the input image with the Laplacian operator.

My first stop when figuring out how to detect the amount of blur in an image was to read through the excellent survey work, Analysis of focus measure operators for shape-from-focus (Pertuz et al., 2013). Inside their paper, Pertuz et al. review nearly 36 different methods to estimate the focus measure of an image.

If you have any background in signal processing, the first method to consider would be computing the Fast Fourier Transform of the image and then examining the distribution of low and high frequencies — if there are few high frequencies, then the image can be considered blurry. However, defining what counts as a low number of high frequencies versus a high number can be quite problematic, often leading to sub-par results.
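If you want to experiment with the frequency-domain idea anyway, a rough sketch might look like the following. Note that the keep fraction (0.6) is an arbitrary illustrative choice, not a value from this post, and you would still have to pick a threshold on the resulting ratio:

```python
import numpy as np

def high_freq_ratio(gray, keep=0.6):
    # shift the zero-frequency (DC) component to the center of the spectrum
    fft = np.fft.fftshift(np.fft.fft2(gray))
    magnitude = np.abs(fft)
    h, w = gray.shape
    # mask out the central (low-frequency) region of the spectrum
    ch, cw = int(h * keep / 2), int(w * keep / 2)
    mask = np.ones((h, w), dtype=bool)
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = False
    # fraction of spectral energy living in the high frequencies;
    # blurrier images should score lower
    return magnitude[mask].sum() / magnitude.sum()
```

A sharp, textured image yields a noticeably higher ratio than a flat or heavily blurred one, but as noted above, where to draw the line between the two is the hard part.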

Instead, wouldn’t it be nice if we could just compute a single floating point value to represent how blurry a given image is?

Pertuz et al. review many methods to compute this “blurriness” metric, some of them simple and straightforward, using just basic grayscale pixel intensity statistics, others more advanced and feature-based, evaluating the Local Binary Patterns of an image.

After a quick scan of the paper, I came to the implementation that I was looking for: variance of the Laplacian by Pech-Pacheco et al. in their 2000 ICPR paper, Diatom autofocusing in brightfield microscopy: a comparative study.

The method is simple. Straightforward. Has sound reasoning. And can be implemented in only a single line of code:

You simply take a single channel of an image (presumably grayscale) and convolve it with the following 3 x 3 kernel:

Figure 2: The Laplacian kernel.

And then take the variance (i.e. standard deviation squared) of the response.

If the variance falls below a pre-defined threshold, then the image is considered blurry; otherwise, the image is not blurry.

The reason this method works is due to the definition of the Laplacian operator itself, which is used to measure the second derivative of an image. The Laplacian highlights regions of an image containing rapid intensity changes, much like the Sobel and Scharr operators. And, just like these operators, the Laplacian is often used for edge detection. The assumption here is that if an image contains high variance, then there is a wide spread of responses, both edge-like and non-edge-like, representative of a normal, in-focus image. But if there is very low variance, then there is a tiny spread of responses, indicating there are very few edges in the image. And as we know, the more an image is blurred, the fewer edges there are.

Obviously the trick here is setting the correct threshold, which can be quite domain dependent. Too low of a threshold and you’ll incorrectly mark images as blurry when they are not. Too high of a threshold and images that are actually blurry will not be marked as blurry. This method tends to work best in environments where you can compute an acceptable focus measure range and then detect outliers.
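To make that last point concrete: if you can compute focus measures for a batch of images from your domain, you can flag outliers relative to that distribution instead of hard-coding a threshold. The sketch below uses a robust z-score; the cutoff of 2.0 is an illustrative choice, not a value from this post:

```python
import numpy as np

def flag_blurry_outliers(scores, cutoff=2.0):
    # scores: focus measures (variance of the Laplacian), one per image.
    # Use the median and MAD so a few very blurry shots don't drag
    # the baseline down with them.
    scores = np.asarray(scores, dtype=np.float64)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median)) or 1e-9
    # flag images whose focus measure sits far *below* the typical value
    return (median - scores) / mad > cutoff
```

Only images whose focus measure sits well below the batch’s typical value get flagged, so one in-focus batch can calibrate itself.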

Detecting the amount of blur in an image

So now that we’ve reviewed the method we are going to use to compute a single metric to represent how “blurry” a given image is, let’s take a look at our dataset of the following 12 images:

Figure 3: Our dataset of images. Some are blurry, some are not. Our goal is to perform blur detection with OpenCV and mark the images as such.

As you can see, some images are blurry, some images are not. Our goal here is to correctly mark each image as blurry or non-blurry.

With that said, open up a new file, name it detect_blur.py , and let’s get coding:

We start off by importing our necessary packages on Lines 2-4. If you don’t already have my imutils package on your machine, you’ll want to install it now:

$ pip install imutils

From there, we’ll define our variance_of_laplacian function on Line 6. This method takes only a single argument, the image (presumed to be a single channel, such as a grayscale image), that we want to compute the focus measure for. From there, Line 9 simply convolves the image with the 3 x 3 Laplacian operator and returns the variance.

Lines 12-17 handle parsing our command line arguments. The first switch we’ll need is --images, the path to the directory containing the dataset of images we want to test for blurriness.

We’ll also define an optional argument --threshold, which is the threshold we’ll use for the blurry test. If the focus measure for a given image falls below this threshold, we’ll mark the image as blurry. It’s important to note that you’ll likely have to tune this value for your own dataset of images. A value of 100 seemed to work well for my dataset, but this value is quite subjective to the contents of the image(s), so you’ll need to play with this value yourself to obtain optimal results.

Believe it or not, the hard part is done! We just need to write a bit of code to load the image from disk, compute the variance of the Laplacian, and then mark the image as blurry or non-blurry:

We start looping over our directory of images on Line 20. For each of these images we’ll load it from disk, convert it to grayscale, and then apply blur detection using OpenCV (Lines 24-27).

In the case that the focus measure falls below the threshold supplied as a command line argument, we’ll mark the image as “blurry”.

Finally, Lines 35-38 write the text  and computed focus measure to the image and display the result to our screen.

Applying blur detection with OpenCV

Now that we have the detect_blur.py script coded up, let’s give it a shot. Open up a shell and issue the following command (assuming your dataset lives in a directory named images):

$ python detect_blur.py --images images

Figure 4: Correctly marking the image as "blurry".

The focus measure of this image is 83.17, falling below our threshold of 100; thus, we correctly mark this image as blurry.

Figure 5: Performing blur detection with OpenCV. This image is marked as "blurry".

This image has a focus measure of 64.25, also causing us to mark it as “blurry”.

Figure 6: Marking an image as "non-blurry".

Figure 6 has a very high focus measure score at 1004.14 — orders of magnitude higher than the previous two figures. This image is clearly non-blurry and in-focus.

Figure 7: Applying blur detection with OpenCV and Python.

The only amount of blur in this image comes from Jemma wagging her tail.

Figure 8: Basic blur detection with OpenCV and Python.

The reported focus measure is lower than Figure 7, but we are still able to correctly classify the image as “non-blurry”.

Figure 9: Computing the focus measure of an image.

However, we can clearly see the above image is blurred.

Figure 10: An example of computing the amount of blur in an image.

The large focus measure score indicates that the image is non-blurry.

Figure 11: The subsequent image in the dataset is marked as "blurry".

However, this image contains dramatic amounts of blur.

Figure 12: Detecting the amount of blur in an image using the variance of Laplacian.

Figure 13: Compared to Figure 12 above, the amount of blur in this image is substantially reduced.

Figure 14: Again, this image is correctly marked as not being "blurred".

Figure 15: Lastly, we end our example by using blur detection in OpenCV to mark this image as "blurry".

Summary

In this blog post we learned how to perform blur detection using OpenCV and Python.

We implemented the variance of the Laplacian method to give us a single floating point value to represent the “blurriness” of an image. This method is fast, simple, and easy to apply — we simply convolve our input image with the Laplacian operator and compute the variance. If the variance falls below a predefined threshold, we mark the image as “blurry”.

It’s important to note that the threshold is a critical parameter to tune correctly, and you’ll often need to tune it on a per-dataset basis. Too small of a value, and you’ll accidentally mark images as blurry when they are not. With too large of a threshold, you’ll mark images as non-blurry when in fact they are.

Be sure to download the code using the form at the bottom of this post and give it a try!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


128 Responses to Blur detection with OpenCV

  1. TBS September 7, 2015 at 6:51 pm #

    Maybe you are going to expand this topic, but what would be really fascinating is if there is a way to take two very similar images and “repair” the blurred one with the unblurred one (I think this feature is in the new Photoshop).

    Thanks for sharing this.

  2. Neil V September 8, 2015 at 11:49 am #

    Since you have so many pictures, you could use deconvolution deblurring to fix the blur.

  3. Chris September 8, 2015 at 1:25 pm #

    As usual, great post Adrian!!!

    • Adrian Rosebrock September 9, 2015 at 6:47 am #

      Thanks Chris! 🙂

      • afshan September 19, 2018 at 4:02 am #

        Hey ! I just can’t get any output. No error nothing. I don’t understand why. Please help

  4. Imre Kerr September 8, 2015 at 6:02 pm #

    I noticed all the pictures in your data set were either completely sharp or completely blurry. How did your program fare on pictures with a sharp background, but blurry subject? (i.e., the “Jemma spazzing out” ones)

    • Adrian Rosebrock September 9, 2015 at 6:46 am #

      They still did fairly well, but the results weren’t as good. To handle situations where only the subject of the image is blurry, I would suggest performing saliency detection (which I’ll do a blog post on soon) and then only computing the focus measure for the subject ROI of the image.

      • Bogdan April 18, 2016 at 8:13 am #

        Great tutorial Adrian! any news on the saliency detection?

        As reddit user 5225225 suggests, another trick would be to “process the image in blocks, and sort it and compare based on the average of the top 10%. That way, if there’s some non-blurry parts and some blurry parts, you can effectively ignore the blurry parts.
        That seems like it would work, because the actually blurry images look like every part of them is blurry, not just most of it as seen in the macro image.”

        Source: https://www.reddit.com/comments/3k3fjr

        • Adrian Rosebrock April 18, 2016 at 4:45 pm #

          Unfortunately, it’s not quite as easy to partition the image into blocks because you run the risk of the blurry region being partitioned between overlapping blocks. If you also take the top 10%, you could also run into false-positives. As for the saliency detection, I haven’t had a chance to investigate it more, I’ve been too busy with other obligations.

          • Jens-Petter Salvesen December 8, 2017 at 10:14 am #

            I think we have several different cases to cover:

            Blur from motion of the subject. This is typically when your dog was moving.
            The rest of the image can be sharp, but when the subject is moving – well…

            Blur from moving the camera too much. Typically, nothing will be in sharp focus – unless you’re successfully performing motion tracking.

            Blur from selective focus. This is something we strive to achieve from macro shots and the like.

            Blur from “incorrect” focus. Nothing is sharp…

            So – if everything is out of focus, I think we can safely declare the image “out of focus” and leave it up to the photographer if that is desirable or not.

            But if we find some sharp and some out-of-focus elements, then it becomes difficult to tell them apart, since we must somehow determine whether the in-focus elements are the subject (selective focus) or the surroundings (motion blur due to the subject moving too much).

            Hmh. Interesting problem!

  5. Brian D September 9, 2015 at 3:49 pm #

    A nice feature to add, would be to rename the original file name and append “-blurry” to it, so that users can go to the folder after and quickly filter the files.

  6. Nick Isaacs September 23, 2015 at 2:17 am #

    This is a very helpful blog. Thanks!!!
    I don’t use python, but I port the ones I need to Java/Android.

    • Adrian Rosebrock September 23, 2015 at 6:41 am #

      Very nice Nick!

    • Sarang Sawant April 15, 2017 at 6:03 pm #

      Hey Nick,
      Can you share the converted code from python to java/android??

      • vigneswaran February 19, 2018 at 7:56 am #

        Hi @Sarang, Did you get converted Android code?

        • Sanjh November 23, 2018 at 5:11 am #

          @vigneshwaran: Any updates on your progress for the same ?

  7. nguyen duy cuong November 10, 2015 at 11:45 pm #

    i follow your instruction and realize that Laplace filter is very sensitive with noise. So i wonder how do you choose threshold and what can we do in case of noisy image?

    • Adrian Rosebrock November 11, 2015 at 6:34 am #

      Indeed, the Laplacian can be sensitive to noise. You normally choose your threshold on a dataset-to-dataset basis by trial and error.

    • Noah Spurrier September 21, 2016 at 10:23 pm #

    It may sound counterintuitive, but you could apply a blur filter to each image before calculating the blur score. Since all images would have the same blur factor applied, it should still be fair to compare their blur scores after. This could also be a way to segment into three categories (clear, blurry, noisy) based on the change in score before blurring and after blurring… or the rate of change if you test at different levels of deliberate blurring. The rate of change of the blur score may have a different slope depending on the amount of noise or other factors in the original image.

  8. hychanbb December 19, 2015 at 5:17 am #

    Oh my Jesus, I am recently working on a project related to blur detection, and I have tried an edge-width approach, but I got stuck at finding the length of all the edges. Now I have found this method and I am wondering how to get the Laplacian’s variance of an image in Java. Hope someone can help me indeed.

    • Adrian Rosebrock December 19, 2015 at 7:41 am #

      I haven’t coded Java in a long, long time, but computing the Laplacian by hand is actually pretty simple. It’s even easier if you have the OpenCV + Java bindings installed. Here is some more information on computing the Laplacian by hand.

  9. Vlad Gorbenko February 18, 2016 at 7:54 am #

    What does .var() mean?
    I get that this is the “variance of the Laplacian”, but how do I calculate it without OpenCV? What is inside this function?
    Thank you

    • Adrian Rosebrock February 18, 2016 at 9:34 am #

      The cv2.Laplacian function, as the name suggests, computes the Laplacian of the input image. This function is built-in to OpenCV. The var() method is a NumPy function. It computes the statistical variance of a set of data, which in this case is the Laplacian — hence the name “Variance of the Laplacian”.

      If you want to implement the Laplacian by hand, I suggest reading up on convolutions and kernels.

  10. bhaarat April 27, 2016 at 8:08 pm #

    Awesome post Adrian! Wondering if there is a way to fix the blur? What if there was a single image of a book taken with a cell phone that was blurry. Would there be a way to apply some method to make it slightly non-blurry?

    • Adrian Rosebrock April 28, 2016 at 2:31 pm #

      There are methods to “deblur” images; however, the results are less than satisfactory at the moment. If you’ve ever used Photoshop before, you’ve likely played around with these filters. They don’t work all that great, and “deblurring” is still a very active area of research.

  11. Marta April 28, 2016 at 12:49 pm #

    Great work! Is there a way to automatically check if an image is blurred or not without using some threshold? 🙂

    • Adrian Rosebrock April 28, 2016 at 2:21 pm #

      There are certainly other methods that can be used for blur detection (such as investigating the coefficients in the Fourier domain), but in general, most methods require a threshold of some sort. I’ll try to do a followup blog post to this one that contains a more “easier to control” threshold.

      • Noah Spurrier September 21, 2016 at 10:13 pm #

        You could add an option to allow the user to input the file names of reference images that the user knows are “in focus” and “out of focus”. The program can use these to set a threshold. Perhaps a “minimally acceptable” reference image makes more sense. Then the user needs to input just one image file name. Pick the worst image that you would still find acceptable to keep then the program sets the score of that image as the threshold.

      • Prakruti August 31, 2017 at 2:41 am #

        Hi Adrian ,

        Did you get a chance to write a blog with more “easier to control” threshold ? The threshold for variance of Laplacian does not work on all the mobile phone that I test on. An HDR camera with gives blurred images even when variance value is about 5000 while a normal camera of old mobile phones gives blurred image when variance is around 72 ! The image scene captured by both camera is same. How do we generalize the algorithm for different camera ? Becuse even the clear images from old camera is classified as blurry at times. Any leads ? How can the camera properties be utilized to decide on threshold ?

        • Adrian Rosebrock August 31, 2017 at 8:28 am #

          I haven’t written a second blog post on blur detection yet (very busy with deep learning tutorials at the moment). I will try to write a “parameter free” version of blur detection in the future.

  12. Ana April 28, 2016 at 8:08 pm #

    Brilliant! Could you help me to convert this code to C++? Thank you in advance 🙂

    • nanhai ke July 7, 2016 at 3:33 am #

      do you have the C++ code? i really need it

  13. YU May 26, 2016 at 10:14 pm #

    Very helpful, thanks. Is it possible to find blur directions? Can we get two floating point values representing x-blur and y-blur?

    • Adrian Rosebrock May 27, 2016 at 1:28 pm #

      Sure, absolutely. But for this, I wouldn’t use the Laplacian. I would compute the Sobel kernel in both the x and y directions. See this blog post for more information on the Sobel operator.

  14. Rish May 27, 2016 at 1:43 am #

    Well done Adrian!! Another good post really helpfull

    • Adrian Rosebrock May 27, 2016 at 1:27 pm #

      Thanks Rish! 🙂

  15. LGG June 29, 2016 at 11:15 pm #

    hello Adrian, I am a graduate student from Wuhan University. I am recently working on a project related to blur detection too, and this post is helpful to me. But I find the variance is also small when the image is a pure color (or close to a pure color), even when the image is clear. The reason is that a pure-color image’s variance of the Laplacian is close to zero too. So, how to deal with this problem? Really looking forward to your reply! thank you!

    • Adrian Rosebrock June 30, 2016 at 12:21 pm #

      I addressed this question in an email, but I’ll respond here as well. This blur detection method (as well as other blur detection methods) tend to examine the gradient of the image. If there is no gradient in the image (meaning pure color with no “texture”), there the variance will clearly be low. There are non-gradient based methods for blur detection and they are detailed in this survey paper on blur detection — I would suggest giving that a read.

  16. Cheng LI June 30, 2016 at 6:53 am #

    where is the source code of python? I can’t find in the pdf which i donwload

    • Adrian Rosebrock June 30, 2016 at 12:18 pm #

      You can download the source code to this post using the “Downloads” section of this tutorial.

  17. Wim August 31, 2016 at 3:37 pm #

    Hi,

    We try to capture high resolution frames with PI3 and 5Mpix and 8Mpix camera.

    As long as nothing moves the photo is fine.

    Since we like to use it inside a PIR MW motion sensor there will be always a person or object moving in front of the camera.
    We noticed that the moving person/object is always blurred in the captured frame.
    Is this normal? Can we capture frames with moving objects that are not blurred?
    What are the limitations?

    • Adrian Rosebrock September 1, 2016 at 11:05 am #

      Motion blur can be a bit of a problem, especially because when humans watch a video stream we tend to not “see” motion blur — we just see a fluid set of frames moving in front of us. However, if you were to hit “pause” in the middle of the stream, you would likely see motion blur. Ways to get around motion blur can involve using a very high FPS camera or using methods to detect the amount of blur and then choose a frame with minimal blur.

  18. Arvind Mohan September 28, 2016 at 3:37 am #

    Hi Adrian,

    Thanks for this great post. You have done some good work here in explaining the Variance of Laplacian method.
    I have a use case where the input image is going to be somewhat black and white only. Will the variance of Laplacian method play well there as well?
    What do you think of the Sum Modified Laplacian method, in general?

    • Adrian Rosebrock September 28, 2016 at 10:35 am #

      By the very definition of the variance statistic, the variance is only useful if your input images can actually “vary”. However, in this particular example you might still be able to use the variance of the Laplacian. If your images are captured under controlled conditions, you can simply define a threshold that determines what “is blurry” versus what “is not blurry”.

      • Arvind Mohan September 28, 2016 at 12:27 pm #

        Yes, but which approach would you suggest in this case? There are too many approaches suggested by Pertuz et al.

        • Adrian Rosebrock September 28, 2016 at 1:21 pm #

          Without seeing your dataset it’s honestly hard to give concrete advice. Spot-check a few of the algorithms that you think will perform reasonably and go from there.

          • Arvind Mohan September 28, 2016 at 2:40 pm #

            Yes, thanks. Will try to take this discussion over the email.
            While writing this comment, I thought of an idea about de-blurring an image upto an extent. What do you think of unsharp-masking?

  19. Riccardo November 8, 2016 at 9:22 am #

    Hey Adrian,

    thanks a lot for the tutorial, I found it really useful and seems to work well for the problem I am considering! However I have two questions:

    – the variance function is applied to the pixel data in “vector format” instead “matrix format”? I ask this because I used R and after the convolution I still have pixel data in “matrix format” and applying var() function to such dataset of course brings to a covariance matrix, not just a number.

    – isn’t the “variance approach” sensitive to the “number of objects” captured in the photo? I mean, a “good focus photo” with just one object on a flat background (i.e. the classical iPhone headphones image) could have a variance no higher than a “not so good focus photo” with lots of objects? If the answer is yes, how would you deal with it?

    Thanks in advance for the reply and kind regards,
    Riccardo

    • Adrian Rosebrock November 10, 2016 at 8:48 am #

      The variance function is computed across the entire matrix. It should return a single floating point value, at least that is the NumPy implementation (I’m not sure about R).

      The variance method is sensitive to a good number of things, but where you’ll really struggle is with objects with a flat background or no texture. The variance will naturally be very low in this case.

  20. Gilberto Borges December 2, 2016 at 8:39 pm #

    Great article!!
    Can you or someone develop a Lightroom plugin based on this knowledge, in order to browse a cataloging mark all the photos with a Blurry or Non-Blurry tag?

    • Adrian Rosebrock December 5, 2016 at 1:35 pm #

      I haven’t used Lightroom and I don’t know anything about their development ecosystem, but yes, I would assume that it’s possible.

  21. Gal December 11, 2016 at 7:49 am #

    Awesome article Adrian!
    What do you think about solving this problem using convolutional neural network?

    • Adrian Rosebrock December 12, 2016 at 10:39 am #

      You can train a CNN to classify just about anything, so provided you have enough examples of blurry vs non-blurry images (in the order of tens of thousands of examples, ideally) then yes, a CNN could likely do well here.

  22. Jose Soto December 29, 2016 at 9:22 pm #

    Hi Adrian, thanks for the tutorial, it really helped me a lot.
    I have one question, instead classify an image as blurred or not blurred, how can I isolate the non blurred area in the image? For example, in your figure 7, where the blur comes from Jemma’ tail, how can I extract this area from the photo?

    Any advice would be great
    Jose Soto

  23. Bharath February 26, 2017 at 6:56 am #

    Hi Adrian, your tutorials are great .But i want to deblur the image .Please suggest any solution

    • Adrian Rosebrock February 27, 2017 at 11:12 am #

      I don’t have any tutorials on deblurring, but I will consider this for a future blog post.

      • Prakruti March 9, 2017 at 6:25 am #

        Hi,

        Thank you for such a well explained tutorial.
        Have you got a chance to work out on/go through good methods for setting up an optimal threshold for blurr image classification ?

  24. Jake March 13, 2017 at 10:01 am #

    I really appreciate your blog posts. I’ve bought your book and supported the kick starter for the new one.

    How would I got about implementing blur detection on a region of interest? (I assume I just slice the array) But what if that ROI is not a rectangle, but something made with a mask? For example, I want to focus on a face?

    Is the best way to just find the ROI, then find the largest rectangle I can slice out of it?

    • Adrian Rosebrock March 13, 2017 at 12:08 pm #

      How you find and determine your ROI is up to you and is highly dependent on what you are trying to accomplish. Edge detection, thresholding, contour extraction, object detection, etc. can all be used for determining what your ROI is. Once you have your ROI, you would need to compute the bounding box for it, extract the ROI via array slicing, and then compute the blur value.

      • Jake March 13, 2017 at 1:14 pm #

        Thanks!

  25. Mubashira April 5, 2017 at 10:24 am #

    This code failed on my dataset :’D The blurred one said not blurry and vice versa for some images.

    • Adrian Rosebrock April 5, 2017 at 11:46 am #

      You may need to tune the threshold of what is considered “blurry” versus “not blurry” for your own dataset.

  26. Ben April 12, 2017 at 8:16 am #

    Presumably this could be skewed by large in-focus areas with little variation - for instance, a large area of sky or carpet would likely lead to a low value being returned even if the image was in focus.

    Although this would only work if such areas were at the edge(s) of images, perhaps auto-cropping the image before analysis would help here, based on some threshold for colour variation?

  27. Osama Almoayed May 28, 2017 at 9:29 pm #

    great post would love to see this implemented in an over the shelf product or a google photos extension. I have a lot of photos that I have acquired over the years and I have adopted a new strategy this year to keep only the best, as opposed to keeping everything. an easy way to eliminate is to find blurred photos.

  28. Steve Cookson June 3, 2017 at 6:30 am #

    Hi Adrian, what a great post. I have an electronic focuser for my telescope that drives my telescope focus tube. I’ve added an Arduino in front of it powered by a python link and I’ve been using the standard deviation like this:

    np.concatenate([cv2.meanStdDev(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))]).flatten()

    to test my focus. Honestly it’s been quite hard, you’ve put a Laplacian in front of it, does that make any difference? Anyhow I’ll try your formula.

    My real problem is that the number is very sensitive and changes for all sorts of reasons, a cloud moves in the sky, neighbour turns a light on, car goes past, none of which alter the focus but they do affect the contrast.

    What do you think?

    Regards

    • Adrian Rosebrock June 4, 2017 at 5:36 am #

      I would suggest breaking out your code a little more to make it more readable with less composite functions (that way you can reuse function outputs). The cv2.meanStdDev will return both the mean and standard deviation. You only need the variance for the Laplacian.

      As for applying this technique for examining skies, I’m not sure it’s the best approach. Provided a cloudless night, there will be very little texture and therefore the variance will be low. As you noted, if a cloud moves in, the variance increases from the baseline. I would suggest looking at other methods, including the distribution of Fourier transform coefficients.

      • Steve Cookson June 5, 2017 at 12:13 pm #

        Hi Adrian, thanks for your answer. I agree with you, I should break out the code some more. I think I might also just highlight the object I’m interested in and manage the focus in a sub-section of the image. It should be faster as well as being potentially more accurate.

  29. Marius Ionescu June 17, 2017 at 2:50 am #

    Wonderful tutorial.

    Please fix this:

    pip intall imutils -> pip install imutils

    • Adrian Rosebrock June 20, 2017 at 11:14 am #

      Thank you for pointing out the typo, Marius!

  30. Gayathri Menath June 19, 2017 at 4:45 am #

    Hi Adrian, It was a great post.

    I am facing a problem with blur detection on highly focused images. The Laplacian result returns a low value for some highly focused images. Is there any method to identify whether an image is blurred or not, especially for focused images?

    Image – http://rhdwalls.com/wp-content/uploads/2016/08/Emma-Watson-15.jpg

    • Adrian Rosebrock June 20, 2017 at 11:01 am #

      If there is little “texture” in the image, then the variance of the Laplacian will by definition be low. In that case, I would compute a histogram of the Fourier transform coefficients and inspect the slope. I will try to do a more advanced blog post on blur detection in the future.

      • Gayathri Menath June 22, 2017 at 8:19 am #

        Thanks for the update Adrian.
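A rough sketch of the Fourier-transform idea mentioned above (the function name and cutoff are illustrative choices, not from the post): zero out the low frequencies and measure how much high-frequency energy remains; low-texture or blurry images score lower:

```python
import numpy as np


def high_freq_score(gray, cutoff=30):
    # FFT the grayscale image and shift the DC term to the center.
    f = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    # Zero out a (2*cutoff x 2*cutoff) window of low frequencies.
    f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    # Reconstruct and measure the mean log-magnitude of what is left.
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
    return float(np.mean(20 * np.log(recon + 1e-8)))
```

The threshold separating "blurry" from "not blurry" would still need to be tuned per dataset, just like the Laplacian variance.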

  31. amit June 27, 2017 at 3:18 am #

    I am running this code from the command prompt, but I am getting this message:
    usage: detect.py [-h] -i IMAGES [-t THRESHOLD]
    detect.py: error: argument -i/--images is required

    • Adrian Rosebrock June 27, 2017 at 6:06 am #

      Please read up on command line arguments, how they work, and how to supply them to your Python script. This will resolve your error.

  32. Mallesh June 29, 2017 at 5:07 pm #

    Thanks Adrian! Nice post. There are plenty of discussions on how to improve/detect blurriness, but no one has come up with a full-fledged implementation. This is a wonderful blog you're maintaining. Thanks again.

    • Adrian Rosebrock June 30, 2017 at 8:05 am #

      Thank you Mallesh, I appreciate it 🙂

  33. Abder-Rahman Ali July 13, 2017 at 5:59 am #

    Thanks a lot Adrian for the nice post. Just a small question. For the variance of Laplacian, After we convolve the image with the 3 x 3 kernel (Laplacian), the pixel values will apparently change. Do we apply the variance on “all” the pixels? In other words, would the variance be calculated for the “whole” image, and based on that result we can determine whether the image is blurry or not when compared against a threshold?

    • Adrian Rosebrock July 14, 2017 at 7:29 am #

      The variance is computed over the entire output of the Laplacian operation, not just the 3×3 region.

  34. dog lover August 18, 2017 at 11:41 am #

    He is a good boy/girl (your dog I mean)… good job man!

    • Adrian Rosebrock August 21, 2017 at 3:47 pm #

      Jemma is a girl and yes, she’s certainly good 🙂 Thanks for the comment.

  35. Dorin-Mirel Popescu August 30, 2017 at 4:05 pm #

    Is it possible to use logistic regression to determine the threshold between blurred and not blurred in a huge dataset? One could label a few images from the dataset as blurred or not blurred, then calculate their corresponding Laplacian variances. This is followed by logistic regression. The model can then be used to classify the rest of the images in the dataset.

    • Adrian Rosebrock August 31, 2017 at 8:30 am #

      Yes, you can use machine learning to help improve the blur detection, but keep in mind that the model will only perform well if the input data matches what it was trained on. As I mentioned to “Prakruti” in the comment below, I will try to do an updated blog post on “parameter free” blur detection in the future.
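To make the logistic-regression idea from this exchange concrete, here is a toy sketch (all names and numbers are hypothetical, and a real dataset would need many more labels) that fits a 1-D logistic regression on hand-labeled Laplacian variances:

```python
import numpy as np


def fit_blur_classifier(variances, labels, lr=0.1, epochs=2000):
    # 1-D logistic regression by gradient descent: learn w, b such
    # that sigmoid(w * log(variance) + b) approximates P(not blurry).
    # The log scale tames the large dynamic range of the variances.
    x = np.log(np.asarray(variances, dtype=float))
    y = np.asarray(labels, dtype=float)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)  # gradient of the log loss w.r.t. w
        b -= lr * np.mean(p - y)        # gradient w.r.t. b
    return w, b


def is_sharp(variance, w, b):
    # Classify a new image by its variance-of-Laplacian score.
    return 1.0 / (1.0 + np.exp(-(w * np.log(variance) + b))) > 0.5
```

This effectively learns the threshold from the labels instead of hand-tuning it, which is the appeal of the approach.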

  36. Abi October 23, 2017 at 1:53 pm #

    Thank you for the amazing tutorial and trying to answer all our follow-up questions!

    Is there a similar and decently performing method to detect ‘glare/reflection/spots/’ in an image? Example image: https://i.ebayimg.com/images/i/370396881797-0-1/s-l1000.jpg

    • Adrian Rosebrock October 23, 2017 at 5:16 pm #

      Glare and reflection are a nightmare to deal with but you might consider applying a Gaussian blur followed by binarization at the 75-80% level of a grayscale image (or maybe even the Value component of HSV). This will highlight regions of an image that are very bright. It’s not perfect, but it would be a start.

  37. Majid Einian November 8, 2017 at 9:17 am #

    It would be great if you provided a stand-alone executable (or installable) app to do the task. We non-programmers also need these tools, and there are none out there.

    • Adrian Rosebrock November 9, 2017 at 6:28 am #

      Hi Majid — while it’s not impossible to create a Python + OpenCV stand-alone executable that can be executed across all machines, it’s very close to it. I can certainly understand non-programmers wanting to use these tools and lessons, but please do keep in mind that PyImageSearch is a programming-based blog (in the Python programming language) and programming experience is more-or-less required to read this blog. Furthermore, a standalone executable with Python is again, near impossible when using OpenCV.

  38. Tom Martin November 22, 2017 at 2:42 pm #

    I’m wondering if this method can be adapted by Google Images to help them determine which images have been upsampled (and by how much). Do you know of any work being done in that area? I see nothing on the web but it seems important for anyone doing serious image searches.

    • Adrian Rosebrock November 25, 2017 at 12:44 pm #

      Hi Tom — my upcoming blog post on Monday, November 27th 2017 (image hashing) will address part of this question (i.e., determining the same images).

  39. Fernando taffoya December 1, 2017 at 12:58 pm #

    Hi Adrian, I've just discovered your site and it looks very interesting. I was wondering: with this method, how do you account for photos with bokeh (i.e., photos with the background intentionally blurred, as in portraits)?

    • Adrian Rosebrock December 2, 2017 at 7:22 am #

      The algorithm cannot tell the difference between unintentional and intentional blur. I’m not sure what your question is in this context. Is your goal to see if the subject is blurred but ignore the rest of the blur in the background?

      • Fernando Taffoya December 12, 2017 at 12:38 am #

        Whenever I go on vacation, I love to take pictures. I recently got a mirrorless camera so I could dive a little deeper into photography. However, I usually take a lot of pictures (more than 1,000 last vacation) and I thought of quickly discarding some of them using code (for example, those which are too blurry). I know the algorithm cannot tell the difference between intentional and unintentional blur. I was just wondering if this algorithm could be used with a different threshold, so that blurry images could be discarded (or marked as blurry) but images with intentional blur could still be marked as “not blurry”. Or would I need a different approach?

        • Adrian Rosebrock December 12, 2017 at 9:05 am #

          If there were such a threshold you would need to manually tune and determine it yourself. Or perhaps you would need to leverage a machine learning classifier to learn what blurry, not blurry, and intentionally blurry images look like.

  40. Wei-Hsiang Lin January 29, 2018 at 11:09 pm #

    Hi Adrian,

    If I want to compare the blurriness of images acquired in different environments, do I need to normalize the intensities of these images to the same scale?

    • Adrian Rosebrock January 30, 2018 at 10:07 am #

      This method is only meant to detect blurriness, not compare blurriness. You might want to instead treat this as an image search engine/CBIR problem by computing features from the FFT or even wavelet transformation of the image.

      • Wei-Hsiang Lin January 30, 2018 at 8:54 pm #

        I got it! Thanks for your suggestion. I will try the method you mentioned.

  41. Duncan Martin March 2, 2018 at 6:31 am #

    Does anyone know why the images do not get parsed in order by filename? I’m running this on a folder with about 2000 files in it. The images are being called up in seemingly random order.

    • Adrian Rosebrock March 2, 2018 at 10:24 am #

      There is no order imposed on the image paths. You would need to explicitly sort them:

      for imagePath in sorted(paths.list_images(args["images"])):

      • Duncan Martin March 2, 2018 at 4:42 pm #

        Perfect. Thanks!

  42. Abed March 14, 2018 at 9:09 pm #

    Hi, I have a problem on line 16:
    ap.add_argument("-i", "--images", required=True,
    After I run this line, it shows an error:

    ap.add_argument("-i", "C:\\Users\\Bunkbed\\Downloads\\Compressed\\detecting-blur\\detecting-blur\\images\\image_001.png", required=True,

    File "", line 1
    ap.add_argument("-i", "C:\\Users\\Bunkbed\\Downloads\\Compressed\\detecting-blur\\detecting-blur\\images\\image_001.png", required=True,
    ^
    SyntaxError: unexpected EOF while parsing

    Please help me.

  43. Abed March 29, 2018 at 12:04 pm #

    Hi,

    I have implemented the tutorial linked above to detect blurry images. I want to ask: what is the value shown on the image? Is it the blur level? Also, if I feed in images of different sizes, the larger the image, the larger the value that comes out.

    Oh yes, here is a screenshot of my experiment. I am having trouble because the displayed image does not adjust its size; the picture is enlarged and cannot be zoomed out. Do you have a tutorial or suggestion on how to resize the output so that the whole picture appears?

    One more thing: I want to use the value resulting from the blur detection as a parameter. Is there a way or tutorial to normalize it into a range of 1-100? And how could I map the values to categories like good, somewhat poor (can be enhanced), and cannot be fixed?

    • Adrian Rosebrock March 30, 2018 at 6:57 am #

      Typically we do not process images larger than 400-600px along their maximum dimension. You should resize your input images to this approximate size before trying to compute the blur measure. The value being computed is the “variance of the Laplacian”; I discuss it in the post. As for your screenshot, perhaps you forgot to include it?

  44. Renrob April 25, 2018 at 2:45 am #

    Hi, Adrian.
    I want to create a blur detection on Identity Card, which is different to your sample images.
    Is there any “universal threshold” for detecting blur images?

    • Adrian Rosebrock April 25, 2018 at 5:20 am #

      Using this method, unfortunately no, there is no such thing as a “universal threshold”. If you’re building an identity card recognition system I would suggest gathering images of the cards, both blurry and not, and seeing if you can find a reasonable threshold.

      • Renrob April 26, 2018 at 12:52 am #

        Ouch, I'm sorry for my question.

        I mean, the case is simply taking a photo of an identity card and determining whether it is blurry or not.

        Is there any such thing as a “universal threshold” for that?

        • Adrian Rosebrock April 28, 2018 at 6:19 am #

          See my previous comment: no, there is no such thing as “universal threshold”.

  45. Rendy Robert April 30, 2018 at 4:09 am #

    Hi Adrian,
    I have tried the algorithm on images from different contexts and compared them to your sample images (e.g., cards, cars, etc.). After some experiments, I had to set different thresholds. Why does this happen? Are there any factors that affect the threshold?

    • Adrian Rosebrock April 30, 2018 at 12:30 pm #

      Hi Rendy — the “focus measure” on Line 26 is affected by the blurriness of the image. Your supplied command line argument --threshold is compared to this value. The only factor that affects the threshold is you changing it at runtime. See the command line arguments blog post as needed.

      • Rendy Robert May 2, 2018 at 3:08 am #

        I mean, what is the scientific or quantitative reason behind the threshold setting?

      • Rendy Robert May 2, 2018 at 4:49 am #

        Sorry, I think I should rephrase my question.

        Why is the threshold different between your sample images and my sample images (e.g., cards)? Is there any quantitative reason?

        • Adrian Rosebrock May 3, 2018 at 9:36 am #

          The threshold is normally set manually. You try various thresholds experimentally and manually until you find one that works well for your dataset.

  46. Robert Luhut May 30, 2018 at 1:42 am #

    Hi Adrian, I want to do blur detection on a card image. The card has light blue and dark blue colors. I want to ask: how do the light blue and dark blue affect the blur measure? Thank you.

    • Adrian Rosebrock May 31, 2018 at 5:09 am #

      Hey Robert, I'm not sure what you mean by how the light blue vs. dark blue would affect the blurring. Could you elaborate?

  47. Arthur Estrella June 11, 2018 at 6:02 pm #

    Hi Adrian, would you have any tutorial on hand for performing deblurring in a real-time video application? Or maybe any other tricks involving deblurring with OpenCV? I really appreciate your help.

    • Adrian Rosebrock June 13, 2018 at 5:47 am #

      Sorry, I do not have any tutorials or content related to deblurring. I will certainly consider it for a future topic, thank you for the suggestion!

  48. Andy Hong July 30, 2018 at 10:41 pm #

    Hi Adrian, I admire your work!
    While processing, I faced some problems. When I feed a bigger image in to detect blur, the model only captures part of the image, and only detects blur on that part. Do you have any suggestions?

    • Adrian Rosebrock July 31, 2018 at 9:43 am #

      Hey Andy — I’m not sure I’m fully understanding your question. Are you asking how to detect blur in only a single part of an image?

      • Andy Hong July 31, 2018 at 10:22 pm #

        Hey Adrian, I mean when I feed an image that is 1280×720, the model functions correctly.
        original image: https://imgur.com/aEDNWJW
        result: https://imgur.com/oJdTaBo

        When I feed an image that is 5152×3864, the model captures only part of the image and detects blur on that part.
        original image: https://imgur.com/290Ct2e
        result: https://imgur.com/aZCJ2D2
        (It seems like only the upper-left of the image was captured)

        • Adrian Rosebrock August 2, 2018 at 9:38 am #

          Hey Andy — I'm not sure why that would be; that doesn't make much sense. I think your script may have skipped over an image or something. Secondly, I'll add that we rarely process images that are greater than 1000px across the largest dimension. I would suggest you resize your images, as it will (1) speed up your pipeline and (2) provide more accurate results.

  49. vipul Tiwari August 21, 2018 at 10:40 am #

    Hi. I have a variety of images with different resolutions (heights and widths). Is the focus or threshold value you chose also dependent on the resolution of the image? What I mean is: can a high-resolution image, say 2160×1280, be blurry, and similarly can a 640×480 image have a high focus value?

    And if it does depend on resolution, do we need to set different threshold values for different image resolutions?

    • Adrian Rosebrock August 22, 2018 at 9:29 am #

      Yes, you would need to set different thresholds for each of the resolutions/image types.

  50. Alan Lifni J August 24, 2018 at 12:17 pm #

    Hi Adrian
    Thanks a lot. I used your algorithm to detect condensation on a glass-door refrigerator, along with a Raspberry Pi and Pi camera. Honestly, this algorithm rocks; it achieved a 99% successful detection rate.

    • Adrian Rosebrock August 30, 2018 at 9:41 am #

      Awesome! Congrats on a successful project, Alan! 😀

  51. Harshini December 3, 2018 at 4:59 am #

    Hi Adrian,
    I have used the above algorithm (the Laplacian method) for blur detection on a dataset of around 4,000 images containing both blurry and non-blurry images.
    The blur is due to human motion, but the results are not consistent for me. Can you please help me sort out this problem?

Leave a Reply