Detecting multiple bright spots in an image with Python and OpenCV

Figure 7: Detecting multiple bright regions in an image with Python and OpenCV.

Today’s blog post is a follow-up to a tutorial I did a couple of years ago on finding the brightest spot in an image.

My previous tutorial assumed there was only one bright spot in the image that you wanted to detect…

…but what if there were multiple bright spots?

If you want to detect more than one bright spot in an image, the code gets slightly more complicated, but not by much. No worries though: I’ll explain each of the steps in detail.

To learn how to detect multiple bright spots in an image, keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Detecting multiple bright spots in an image with Python and OpenCV

Normally when I do code-based tutorials on the PyImageSearch blog I follow a pretty standard template of:

  1. Explaining what the problem is and how we are going to solve it.
  2. Providing code to solve the project.
  3. Demonstrating the results of executing the code.

This template tends to work well for 95% of the PyImageSearch blog posts, but for this one, I’m going to squash the template together into a single step.

I feel that the problem of detecting the brightest regions of an image is pretty self-explanatory so I don’t need to dedicate an entire section to detailing the problem.

I also think that explaining each block of code followed by immediately showing the output of executing that respective block of code will help you better understand what’s going on.

So, with that said, take a look at the following image:

Figure 1: The example image that we are detecting multiple bright objects in using computer vision and image processing techniques (source image).

In this image we have five lightbulbs.

Our goal is to detect these five lightbulbs in the image and uniquely label them.

To get started, open up a new file and name it detect_bright_spots.py. From there, insert the following code:
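
Here is a sketch of that opening block, imports plus command line argument parsing (the line numbers referenced below correspond to the full source in the “Downloads” section; the -i shorthand for the switch is just a convenience):

# import the necessary packages
from imutils import contours
from skimage import measure
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())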

Lines 2-7 import our required Python packages. We’ll be using scikit-image in this tutorial, so if you don’t already have it installed on your system be sure to follow these install instructions.

We’ll also be using imutils, my set of convenience functions used to make applying image processing operations easier.

If you don’t already have imutils installed on your system, you can use pip to install it for you:
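
$ pip install --upgrade imutils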

From there, Lines 10-13 parse our command line arguments. We only need a single switch here, --image, which is the path to our input image.

To start detecting the brightest regions in an image, we first need to load our image from disk followed by converting it to grayscale and smoothing (i.e., blurring) it to reduce high frequency noise:
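
A sketch of this step looks something like the following (the 11x11 Gaussian kernel is simply a reasonable default for this image, so feel free to tune it):

# load the image, convert it to grayscale, and blur it
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (11, 11), 0)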

The output of these operations can be seen below:

Figure 2: Converting our image to grayscale and blurring it.

Notice how our image is now (1) grayscale and (2) blurred.

To reveal the brightest regions in the blurred image we need to apply thresholding:
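
In code, that amounts to a simple binary threshold on the blurred image (pixels above the threshold become white, everything else black):

# threshold the image to reveal light regions in the blurred image
thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)[1]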

This operation takes any pixel value p >= 200 and sets it to 255 (white). Pixel values < 200 are set to 0 (black).

After thresholding we are left with the following image:

Figure 3: Applying thresholding to reveal the brighter regions of the image.

Note how the bright areas of the image are now all white while the rest of the image is set to black.

However, there is a bit of noise in this image (i.e., small blobs), so let’s clean it up by performing a series of erosions and dilations:
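
A sketch of the clean-up step (the iteration counts are just values that work well here, so adjust them for your own images):

# perform a series of erosions and dilations to remove
# small blobs of noise from the thresholded image
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)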

After applying these operations you can see that our thresh image is much “cleaner”, although we do still have a few leftover blobs that we’d like to exclude (we’ll handle that in our next step):

Figure 4: Utilizing a series of erosions and dilations to help “clean up” the thresholded image by removing small blobs and then regrowing the remaining regions.

The critical step in this project is to label each of the regions in the above figure; however, even after applying our erosions and dilations we’d still like to filter out any leftover “noisy” regions.

An excellent way to do this is to perform a connected-component analysis:
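
Here is a sketch of that analysis (again, the line numbers discussed below refer to the downloadable source; connectivity=2 gives 8-connectivity in recent versions of scikit-image):

# perform a connected-component analysis on the thresholded image,
# then initialize a mask to store only the "large" components
labels = measure.label(thresh, connectivity=2, background=0)
mask = np.zeros(thresh.shape, dtype="uint8")

# loop over the unique components
for label in np.unique(labels):
    # if this is the background label, ignore it
    if label == 0:
        continue

    # otherwise, construct the mask for the current label and count
    # the number of non-zero pixels it contains
    labelMask = np.zeros(thresh.shape, dtype="uint8")
    labelMask[labels == label] = 255
    numPixels = cv2.countNonZero(labelMask)

    # if the component is sufficiently large, add it to our mask
    if numPixels > 300:
        mask = cv2.add(mask, labelMask)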

Line 32 performs the actual connected-component analysis using the scikit-image library. The labels variable returned from measure.label has the exact same dimensions as our thresh image — the only difference is that labels stores a unique integer for each blob in thresh.

We then initialize a mask on Line 33 to store only the large blobs.

On Line 36 we start looping over each of the unique labels. If the label is zero then we know we are examining the background region and can safely ignore it (Lines 38 and 39).

Otherwise, we construct a mask for just the current label on Lines 43 and 44.

I have provided a GIF animation below that visualizes the construction of the labelMask for each label. Use this animation to help yourself understand how each of the individual components is accessed and displayed:

Figure 5: A visual animation of applying a connected-component analysis to our thresholded image.

Line 45 then counts the number of non-zero pixels in the labelMask. If numPixels exceeds a pre-defined threshold (in this case, a total of 300 pixels), then we consider the blob “large enough” and add it to our mask.

The output mask can be seen below:

Figure 6: After applying a connected-component analysis we are left with only the larger blobs in the image (which are also bright).

Notice how any small blobs have been filtered out and only the large blobs have been retained.

The last step is to draw the labeled blobs on our image:
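
A sketch of this final block (imutils.grab_contours papers over the differing cv2.findContours return values between OpenCV versions; the circle color and label format are just reasonable choices):

# find the contours in the mask, then sort them from left to right
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = contours.sort_contours(cnts)[0]

# loop over the contours
for (i, c) in enumerate(cnts):
    # compute the minimum enclosing circle of the bright region,
    # then draw and label it on the output image
    ((cX, cY), radius) = cv2.minEnclosingCircle(c)
    cv2.circle(image, (int(cX), int(cY)), int(radius), (0, 0, 255), 3)
    cv2.putText(image, "#{}".format(i + 1), (int(cX) - 10, int(cY) - 10),
        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)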

First, we need to detect the contours in the mask image and then sort them from left to right (Lines 54-57).

Once our contours have been sorted we can loop over them individually (Line 60).

For each of these contours we’ll compute the minimum enclosing circle (Line 63) which represents the area that the bright region encompasses.

We then uniquely label the region and draw it on our image (Lines 64-67).

Finally, Lines 70 and 71 display our output results.

To visualize the output for the lightbulb image be sure to download the source code + example images to this blog post using the “Downloads” section found at the bottom of this tutorial.

From there, just execute the following command:
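
Assuming the example image is included in the download as images/lights_01.png (adjust the path to match your copy of the code):

$ python detect_bright_spots.py --image images/lights_01.png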

You should then see the following output image:

Figure 7: Detecting multiple bright regions in an image with Python and OpenCV.

Notice how each of the lightbulbs has been uniquely labeled with a circle drawn to encompass each of the individual bright regions.

You can visualize a second example by executing this command:
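
Again, the filename below assumes the second example image ships as images/lights_02.png:

$ python detect_bright_spots.py --image images/lights_02.png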

Figure 8: A second example of detecting multiple bright regions using computer vision and image processing techniques (source image).

This time there are many lightbulbs in the input image! However, even with many bright regions in the image our method is still able to correctly (and uniquely) label each of them.

Summary

In this blog post I extended my previous tutorial on detecting the brightest spot in an image to work with multiple bright regions. I was able to accomplish this by applying thresholding to reveal the brightest regions in an image.

The key here is the thresholding step — if your thresh map is extremely noisy and cannot be filtered using either contour properties or a connected-component analysis, then you won’t be able to localize each of the bright regions in the image.

Thus, you should take care to assess your input images by applying various thresholding techniques (simple thresholding, Otsu’s thresholding, adaptive thresholding, perhaps even GrabCut) and visualizing your results.

This step should be performed before you even bother applying a connected-component analysis or contour filtering.
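
For example, a quick way to sanity-check Otsu’s automatically computed threshold against the fixed value of 200 used above (a minimal sketch, reusing the blurred image from earlier):

# Otsu's method automatically computes the threshold value T
(T, threshOtsu) = cv2.threshold(blurred, 0, 255,
    cv2.THRESH_BINARY | cv2.THRESH_OTSU)
print("Otsu's threshold: {}".format(T))
cv2.imshow("Otsu", threshOtsu)
cv2.waitKey(0)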

Provided that you can reasonably segment the light regions from the darker, irrelevant regions of your image then the method outlined in this blog post should work quite well for you.

Anyway, I hope you enjoyed this blog post!

Before you go, be sure to enter your email address in the form below to be notified when future tutorials are published on the PyImageSearch blog.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


28 Responses to Detecting multiple bright spots in an image with Python and OpenCV

  1. Rajeev Ratan October 31, 2016 at 11:09 am #

    Another excellent tutorial!

    • Adrian Rosebrock November 1, 2016 at 8:57 am #

      Thanks Rajeev! Have an awesome day 🙂

  2. Chris October 31, 2016 at 12:25 pm #

    Hi Adrian,
    Thanks for yet another great tutorial. The code runs fine with no errors but only displays the original images without the red circles or numbers. Any thoughts as to why?

    Thanks!

    • Adrian Rosebrock November 1, 2016 at 8:56 am #

      Hey Chris — are you using the code downloaded via the “Downloads” section of the blog post? Or are you copying and pasting the code as you go along? The reason I ask is because it sounds like contours are not being detected in your image for whatever reason. Try inserting a few debug statements, such as print(len(cnts)), to ensure at least some of the contours are being detected.

  3. Alvin November 1, 2016 at 2:57 am #

    Do you have a repo on github?

    • Adrian Rosebrock November 1, 2016 at 8:51 am #

      Here is a link to my GitHub account where I maintain libraries such as imutils and color-transfer:

      https://github.com/jrosebr1

      The code for this particular blog post can be obtained by using the “Downloads” section of the tutorial.

      • Robin Kinge November 2, 2016 at 11:14 am #

        Broken Link?

        Another excellent tutorial.

        • Adrian Rosebrock November 3, 2016 at 9:41 am #

          The link should be working now, please give it another try.

  4. Lenny Lemor November 1, 2016 at 1:32 pm #

    Hi, how fast is it? Can I use this for tracking some laser spots?

    Regards

    • Adrian Rosebrock November 3, 2016 at 9:51 am #

      This method is very fast since it’s based on thresholding for segmentation followed by optimized connected-component analysis and contour filtering. It can certainly be used in real-time or semi-real-time environments for reasonably sized images.

  5. Tamilmaran November 2, 2016 at 4:47 am #

    Hi Adrian,
    Are there any other ways to segment the bright spots from an RGB image, based on the wavelength range of the lights?

  6. Osman November 3, 2016 at 5:59 pm #

    Hey Adrian! Nice tutorial. Can this be used (if altered) with a WebCam to detect fire? Thanks

    • Adrian Rosebrock November 4, 2016 at 10:05 am #

      Detecting smoke and fire is an active area of research in computer vision and image processing. We typically use machine learning methods combined with feature extraction methods (or deep learning) to make an approach like this work across a variety of lighting conditions, environments, etc. I would not recommend using this method directly to detect fire as you would likely obtain many false-positives.

  7. John November 3, 2016 at 11:33 pm #

    Hi Adrian,
    Thanks so much for sharing your knowledge.
    Question, how can I make it so that I can detect which light is turned off. In other word, can the label be static? 1, 2, 3, 4, 5 => 1, 2, 5 meaning bulb 3 and 4 are off. Assuming the lights are stationary.

    • Adrian Rosebrock November 4, 2016 at 10:00 am #

      You can do this, but you would have to start with the lights in a fixed position and all of them “on”. Once you’ve determined the ROI for each light just loop over each of the ROIs (no need to detect them each time) and compute the mean of the grayscale region. If the mean is low (close to black) then the light is off. If the mean is high (close to white) then the light is on.

  8. Joel December 3, 2016 at 5:54 pm #

    Thank you for sharing this tutorial. The result was great using a satellite image of the U.S. at night.

    • Adrian Rosebrock December 5, 2016 at 1:33 pm #

      Fantastic, I’m glad to hear it Joel! 🙂

  9. Izru December 14, 2016 at 1:45 am #

    Hello Adrian, it’s really great, but I face a problem. I can’t install scikit-image on my Raspberry Pi 3, so I can’t use measure for anything. Please help me.
    Thank you in advance.

    • Adrian Rosebrock December 14, 2016 at 8:23 am #

      You can install scikit-image via:

      $ pip install -U scikit-image

      Is there a particular error message you are running into?

      • Bartosz Bartoszewski February 7, 2017 at 8:42 am #

        Dear Adrian, I face the same problem as Izru.
        I got all the steps done for the installation of OpenCV. When I try to install scikit-image, my Pi 3B gets to the point “Running setup.py bdist_wheel for scipy …” and then, after an hour or two, it hangs.
        I can tell it’s hanging, as I left it overnight yesterday; when I turned the screen back on, the system clock had stopped at about an hour after I left it to finish, and the mouse/keyboard were not responding. So I had to pull the plug.
        I’ve followed all the steps for installing OpenCV on my version of the Pi 3B, and all packages are up to date.

        • Adrian Rosebrock February 7, 2017 at 8:56 am #

          It sounds like the system might be locking up for some strange reason. I would suggest trying this command and seeing if it helps:

          $ pip install scikit-image --no-cache-dir

          Also be sure to check the power settings on the Pi to ensure that it’s not accidentally going into sleep mode.

          • Bartosz Bartoszewski February 7, 2017 at 3:18 pm #

            Hey Adrian,

            Many thanks for taking interest in my problem and a blazing fast reply. I’ve actually solved it myself. While the install was running for the nth time I noticed that the system got very unresponsive even though no significant CPU load was present, so I checked the available memory and voila… The system was running out of swap-file space, I’ve had the default setting of 100MB out of the box. After changing this to 1024MB the next run was done within 40-50 minutes.

          • Adrian Rosebrock February 10, 2017 at 2:16 pm #

            Thanks for sharing your solution Bartosz!

  10. Jeff Ward December 21, 2016 at 12:58 pm #

    I had to change line 38 from if label == 0: to if label < 0:
    I didn't dig further than http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label to try to find the cause for differing starting indexes despite the thresh array starting at zero.

  11. Alec edwards February 5, 2017 at 2:26 am #

    Hey, is there any way you could use this to find rocks in the sand that are whiter than the sand?! I’m trying to use this code, but it’s not working.

    • Adrian Rosebrock February 7, 2017 at 9:22 am #

      If the rocks are whiter than the sand itself you might want to try simple thresholding.

  12. Célia February 15, 2017 at 4:28 am #

    Hi Adrian, thanks for this great tutorial.
    Have you ever encountered problems with the skimage module not having measure.label?

    I guess maybe I am using the wrong version of skimage? But I can’t find how to solve it…
    Do you have any advice?
    Thanks in advance

    • Adrian Rosebrock February 15, 2017 at 8:59 am #

      Hey Célia — can you run pip freeze and let us know which version of scikit-image you are running?
