Watershed OpenCV


The watershed algorithm is a classic algorithm used for segmentation and is especially useful when extracting touching or overlapping objects in images, such as the coins in the figure above.

Using traditional image processing methods such as thresholding and contour detection, we would be unable to extract each individual coin from the image — but by leveraging the watershed algorithm, we are able to detect and extract each coin without a problem.

When utilizing the watershed algorithm we must start with user-defined markers. These markers can be either manually defined via point-and-click, or we can automatically or heuristically define them using methods such as thresholding and/or morphological operations.

Based on these markers, the watershed algorithm treats pixels in our input image as local elevation (called a topography) — the method “floods” valleys, starting from the markers and moving outwards, until the valleys of different markers meet each other. In order to obtain an accurate watershed segmentation, the markers must be correctly placed.

In the remainder of this post, I’ll show you how to use the watershed algorithm to segment and extract objects in images that are both touching and overlapping. To accomplish this, we’ll be using a variety of Python packages including SciPy, scikit-image, and OpenCV.



Figure 1: An example image containing touching objects. Our goal is to detect and extract each of these coins individually.

In the above image you can see examples of objects that would be impossible to extract using simple thresholding and contour detection. Since these objects are touching, overlapping, or both, the contour extraction process would treat each group of touching objects as a single object rather than multiple objects.

The problem with basic thresholding and contour extraction

Let’s go ahead and demonstrate a limitation of simple thresholding and contour detection. Open up a new file, name it contour_only.py, and let’s get coding:

We start off on Lines 2-8 by importing our necessary packages. Lines 11-14 then parse our command line arguments. We’ll only need a single switch here, --image, which is the path to the image that we want to process.

From there, we’ll load our image from disk on Line 18, apply pyramid mean shift filtering (Line 19) to help the accuracy of our thresholding step, and finally display our image to our screen. An example of our output thus far can be seen below:

Figure 2: Output from the pyramid mean shift filtering step.

Now, let’s threshold the mean shifted image:

Given our input image, we then convert it to grayscale and apply Otsu’s thresholding to segment the background from the foreground:

Figure 3: Applying Otsu’s automatic thresholding to segment the foreground coins from the background.

Finally, the last step is to detect contours in the thresholded image and draw each individual contour:

Below we can see the output of our image processing pipeline:

Figure 4: The output of our simple image processing pipeline. Unfortunately, our results are pretty poor — we are not able to detect each individual coin.

As you can see, our results are pretty terrible. Using simple thresholding and contour detection, our Python script reports that there are only two coins in the image, even though there are clearly nine of them.

This problem arises because the coin borders are touching each other in the image — thus, the cv2.findContours function sees each group of touching coins as a single object when in fact they are multiple, separate coins.

Note: A series of morphological operations (specifically, erosions) would help us for this particular image. However, for objects that are overlapping these erosions would not be sufficient. For the sake of this example, let’s pretend that morphological operations are not a viable option so that we may explore the watershed algorithm.

Using the watershed algorithm for segmentation

Now that we understand the limitations of simple thresholding and contour detection, let’s move on to the watershed algorithm. Open up a new file, name it watershed.py, and insert the following code:

Again, we’ll start on Lines 2-8 by importing our required packages. We’ll be using functions from SciPy, scikit-image, imutils, and OpenCV. If you don’t already have SciPy and scikit-image installed on your system, you can use pip to install them for you:
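For example (package names only; adjust to your own environment and virtual environment setup):

```shell
pip install --upgrade scipy scikit-image imutils
```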

Lines 11-14 handle parsing our command line arguments. Just like in the previous example, we only need a single switch, --image, the path to the image we are going to apply the watershed algorithm to.

From there, Lines 18 and 19 load our image from disk and apply pyramid mean shift filtering. Lines 24-26 perform grayscale conversion and thresholding.

Given our thresholded image, we can now apply the watershed algorithm:
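Putting those steps together might look like the sketch below. Note that recent versions of scikit-image’s peak_local_max return peak coordinates rather than a boolean mask, so the sketch rebuilds the mask itself; the original listing may have differed:

```python
from scipy import ndimage
from skimage.feature import peak_local_max
try:
    from skimage.segmentation import watershed  # scikit-image >= 0.19
except ImportError:
    from skimage.morphology import watershed    # older releases
import numpy as np

def watershed_labels(thresh, min_distance=20):
    # compute the Euclidean distance from every foreground pixel to the
    # nearest background (zero) pixel
    D = ndimage.distance_transform_edt(thresh)
    # find peaks (local maxima) in the distance map, keeping peaks at
    # least `min_distance` pixels apart, then convert the returned
    # coordinates into a boolean peak mask
    coords = peak_local_max(D, min_distance=min_distance, labels=thresh)
    localMax = np.zeros(D.shape, dtype=bool)
    localMax[tuple(coords.T)] = True
    # run a connected component analysis on the peaks using
    # 8-connectivity, then apply the watershed; negating D turns the
    # peaks into the "valleys" the algorithm floods
    markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
    labels = watershed(-D, markers, mask=thresh)
    print("[INFO] {} unique segments found".format(
        len(np.unique(labels)) - 1))
    return labels
```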

The first step in applying the watershed algorithm for segmentation is to compute the Euclidean Distance Transform (EDT) via the distance_transform_edt function (Line 32). As the name suggests, this function computes the Euclidean distance to the closest zero (i.e., background pixel) for each of the foreground pixels. We can visualize the EDT in the figure below:

Figure 5: Visualizing the Euclidean Distance Transform.

On Line 33 we take D, our distance map, and find peaks (i.e., local maxima) in the map. We’ll ensure that there is at least a 20-pixel distance between each peak.

Line 38 takes the output of the peak_local_max function and applies a connected-component analysis using 8-connectivity. The output of this function gives us our markers, which we then feed into the watershed function on Line 39. Since the watershed algorithm assumes our markers represent local minima (i.e., valleys) in our distance map, we take the negative value of D.

The watershed function returns a matrix of labels, a NumPy array with the same width and height as our input image. Each pixel has a label value, and pixels that have the same label value belong to the same object.

The last step is to simply loop over the unique label values and extract each of the unique objects:

On Line 44 we start looping over each of the unique labels. If the label is zero, then we are examining the “background component”, so we simply ignore it.

Otherwise, Lines 52 and 53 allocate memory for our mask and set the pixels belonging to the current label to 255 (white). We can see an example of such a mask below:

Figure 6: An example mask where we are detecting and extracting only a single object from the image.

On Lines 56-59 we detect contours in the mask and extract the largest one — this contour will represent the outline/boundary of a given object in the image.

Finally, given the contour of the object, all we need to do is draw the enclosing circle boundary surrounding the object on Lines 62-65. We could also compute the bounding box of the object, apply a bitwise operation, and extract each individual object as well.

Finally, Lines 68 and 69 display the output image to our screen:

Figure 7: The final output of our watershed algorithm — we have been able to cleanly detect and draw the boundaries of each coin in the image, even though their edges are touching.

As you can see, we have successfully detected all nine coins in the image. Furthermore, we have been able to cleanly draw the boundaries surrounding each coin as well. This is in stark contrast to the previous example using simple thresholding and contour detection where only two objects were (incorrectly) detected.

Applying the watershed algorithm to images

Now that our watershed.py script is finished up, let’s apply it to a few more images and investigate the results:

Figure 8: Again, we are able to cleanly segment each of the coins in the image.

Let’s try another image, this time with overlapping coins:

Figure 9: The watershed algorithm is able to segment the overlapping coins from each other.

In the following image, I decided to apply the watershed algorithm to the task of pill counting:

Figure 10: We are able to correctly count the number of pills in the image.

The same is true for this image as well:

Figure 11: Applying the watershed algorithm with OpenCV to count the number of pills in an image.


In this blog post we learned how to apply the watershed algorithm, a classic segmentation algorithm used to detect and extract objects in images that are touching and/or overlapping.

To apply the watershed algorithm we need to define markers which correspond to the objects in our image. These markers can be either user-defined or we can apply image processing techniques (such as thresholding) to find the markers for us. When applying the watershed algorithm, it’s absolutely critical that we obtain accurate markers.

Given our markers, we can compute the Euclidean Distance Transform and pass the distance map to the watershed function itself, which “floods” valleys in the distance map, starting from the initial markers and moving outwards. Where the “pools” of water meet can be considered boundary lines in the segmentation process.

The output of the watershed algorithm is a set of labels, where each label corresponds to a unique object in the image. From there, all we need to do is loop over each of the labels individually and extract each object.

Anyway, I hope you enjoyed this post! Be sure to download the code and give it a try. Try playing with various parameters, specifically the min_distance argument to the peak_local_max function. Note how varying the value of this parameter can change the output image.


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, but I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


92 Responses to Watershed OpenCV

  1. Pranav November 2, 2015 at 1:58 pm #

    Hi Adrian,

    (Particularly for detecting circles say for example red blood cells) How does watershed algorithm compare to hough_circles?


    • Adrian Rosebrock November 3, 2015 at 10:06 am #

      For detecting red blood cells, this method will likely perform better than Hough circles. The parameters to Hough circles can be tricky to tune and even if you get them right, overlapping red blood cells can still be missed.

  2. JetC November 2, 2015 at 5:00 pm #

    Does this only work with round objects, or will it also work with squarish/oblong shapes? Thanks

    • Adrian Rosebrock November 3, 2015 at 10:05 am #

      It will work with square/oblong objects as well.

  3. C.W. Predovic November 6, 2015 at 10:43 am #

    Does this accurately work for 3-D images?

    • Adrian Rosebrock November 7, 2015 at 6:20 am #

      Yes, the watershed algorithm is intended to work with both 2D and 3D images. However, I’ve never tried using watershed with 3D images within OpenCV, only the ImageJ implementation.

  4. Alexandre de Siqueira November 12, 2015 at 6:44 am #

    Awesome post, Adrian! Simple and killing! Learned a couple things on this one!
    I’d like to ask you two questions.
    1) Do you know if there is a relation between “pyramid mean shift filtering” (PMSF) and “discrete wavelet transforms” (Mallat cascade algorithm)?
    2) Could you tell what paper originated PMSF?
    Thank you very much!

    • Adrian Rosebrock November 12, 2015 at 12:26 pm #

      Hey Alexandre — I’m glad you enjoyed the blog post, that’s great! To answer your questions:

      1. Pyramid mean-shift filtering is not related to wavelet transforms. Perhaps you are thinking about Haar cascades for object detection?
      2. As for the original paper, you’ll want to look up Comanicu and Meer’s 2002 paper, Mean shift: A robust approach toward feature space analysis

      • Alexandre de Siqueira December 9, 2015 at 10:05 pm #

        Hey Adrian,
        thank you for that references! I downloaded them and will check when time is available 🙂
        Thanks again! Have a nice one!

  5. Taufiq December 7, 2015 at 12:45 pm #

    Hi, can I try this source code on Android, sir?

    • Adrian Rosebrock December 8, 2015 at 6:32 am #

      If you need to use this code for your Android device, you’ll need to convert it from Python to Java (or another suitable language for Android). This is mainly a Python blog and I don’t do much Java development.

  6. ghanendra March 28, 2016 at 10:42 am #

    Hi Adrian
    while installing scipy it’s showing this:
    It gets stuck at “Running setup.py install for scipy”. What to do??

    • Adrian Rosebrock March 28, 2016 at 1:29 pm #

      What platform are you installing SciPy on? If it’s a Raspberry Pi, it can take up to 45 minutes to 1 hour to compile and install SciPy. Be patient with the install.

  7. Jaime Lopez May 21, 2016 at 9:29 am #

    Hi Adrian,

    How could I use the watershed algorithm on a remote sensing image to detect objects? I have too many different objects, so I can not apply simple thresholding.

    Thanks, Jaime

    • Adrian Rosebrock May 23, 2016 at 7:32 pm #

      It really depends on what your image contents are. Normally, you apply watershed on an image that you have already thresholded. If you cannot apply thresholding, you might want to consider applying a more advanced segmentation algorithm such as GrabCut. Otherwise, you could look into training a custom object detector.

  8. Jon May 27, 2016 at 1:35 pm #

    Hi Adrian,

    Great tutorial! I’m using watershed to segment touching objects so that I can track them frame by frame using nearest neighbor distances. Everything works pretty good except that sometimes there are too many new contours formed after watershed and I know that I can decrease this by increasing the min_distance parameter in peak_local_max but I need to have a low value because the objects are really small and I start losing contours if I increase the parameter.

    The problem is that the labels (for tracking) for the objects get switched up because I’m comparing the current object’s centroid to contour centroid’s that aren’t part of the same object. Do you have any advice for combining contours on a single object and getting an average centroid to compare to? Any help is much appreciated!

    • Adrian Rosebrock May 27, 2016 at 1:38 pm #

      That is quite the problem to have! Merging contours together is normally done by heuristics. You can compare adjacent watershed regions and compare them based on their appearance, such as texture or color. Regions with similar appearances can be merged together. In this case, you would generate a new mask for the merged objects and compute their corresponding centroid. Alternatively, if you have both contour variables handy, you should be able to compute the weighted (x, y) spatial coordinates to form the new centroid.

      • Josimar Amilcar Fernandes Andrade March 27, 2019 at 9:46 pm #

        Hi Rosebrock, great work, I have been following you for a long time.
        I have a problem with creating a color gradient-weighted distance, can you help me?

        • Josimar Amilcar Fernandes Andrade March 27, 2019 at 9:49 pm #

          my objective is to get the separation lines

          • Adrian Rosebrock April 2, 2019 at 6:36 am #

            Can you elaborate more on what you mean by “color gradient-weighted distance”?

  9. Wanderson September 15, 2016 at 2:33 pm #

    Hi Adrian,

    Can I use the watershed algorithm to segment a group of people walking together? The images must be captured by a video camera installed on the ceiling. I performed tests with GMM and KNN, but I got no success.

    Thanks, Wanderson

    • Adrian Rosebrock September 16, 2016 at 8:24 am #

      If you have a mask that represents the foreground (the people) versus the background, then yes, I would give the watershed algorithm a try. However, you might need a more powerful approach depending on your scene. I could foresee utilizing a custom object detector to detect each of the people individually instead of background subtraction/motion detection.

      • Wanderson September 16, 2016 at 1:53 pm #

        I appreciate your reply. I will direct my research from here.

        Thank you!

  10. Tim Brooks January 4, 2017 at 1:29 pm #

    Great article, Adrian
    I am getting sometimes wrong results and would like to debug. What was used to visualize the Euclidean Distance Transform (fig. 5).



    • Adrian Rosebrock January 7, 2017 at 9:44 am #

      I actually used matplotlib for that visualization.

  11. David February 21, 2017 at 2:01 pm #

    Been reading your tutorials and will be purchasing the OpenCV book, really good stuff. I have one question:

    The watershed works by specifying a starting point to the algorithm. In your case this is done by an Euclidean distance from the background color (which is very dark) compared to the objects of interest (coins, pills).

    I would like to use the watershed, but have a somewhat uneven specular background (clear plastic) which goes from almost white to very dark (even in the best diffuse lighting).

    Any suggestions as to segmenting pills, coins or candy in such a scenario?


    • Adrian Rosebrock February 22, 2017 at 1:33 pm #

      Hey David — it’s great to hear you are enjoying the PyImageSearch blog! Regarding your question, do you have any example images of what you’re working with? That might be easier to provide a solution on techniques to try.

  12. Philip Hahn March 17, 2017 at 7:31 pm #

    David – How did you generate the distance map in “Figure 5: Visualizing the Euclidean Distance Transform.”? An imshow of D looks identical to thresh. Thanks!

    • Adrian Rosebrock March 21, 2017 at 7:39 am #

      Are you asking me or David? Figure 5 was generated using matplotlib and a plot of the distance map.

  13. Miguel March 22, 2017 at 5:04 pm #

    Hi Adrian,

    Great article. I’m trying to segment touching bean seeds using the code that you posted;
    in some cases the seeds are segmented well, but in others the beans are split.
    I was decreasing and increasing the min_distance parameter, but I could not segment the beans. Please, can you suggest what I can do in those cases?

    these are my images:


    • Adrian Rosebrock March 23, 2017 at 9:30 am #

      Hey Miguel — I can clearly see the beans touching in the second image. But what is the first image supposed to represent? The beans after segmentation?

      • Miguel March 23, 2017 at 11:05 am #

        Hi Adrian

        Yes, the first image represents the beans after segmentation. After obtaining the contours, i draw the segmented beans one by one. As I told you before, in some cases the beans are segmented correctly.


        • Adrian Rosebrock March 25, 2017 at 9:36 am #

          You’ll likely have to continue to fiddle with the thresholding parameters along with the Watershed parameters. There isn’t a one-size-fits-all solution when using these parameters. More advanced solutions would include using machine learning to do a pixel-wise segmentation of the image, but that’s a bit of a pain and I would try to avoid that.

  14. Nada March 27, 2017 at 8:48 am #

    Hi Adrian,
    Hi i’m a beginner in opencv with python, I’m trying to use the code that you posted but i get this error :
    error: argument -i/--image is required
    Please, can you tell me what can i do



  15. Ian V. May 30, 2017 at 3:48 pm #

    Hi Adrien,

    I have to write documentation for a program that I have written in Python. For clean documentation, I would like to know how you displayed code fragments in a box with line numbering?


    • Adrian Rosebrock May 31, 2017 at 1:08 pm #

      Hi Ian — the code fragments displayed in this blog post are handled by a WordPress plugin I use.

  16. shyam June 2, 2017 at 5:05 am #

    hi adrian,
    is there any solution for objects( irregular shape) other than coins

  17. Akash Kumar June 26, 2017 at 6:04 am #

    Hi Adrian,

    It’s a great and perfect tutorial. I would like to know what does that [1] mean and even in the contours [-2]? I am new to opencv.

    thresh = cv2.threshold(gray, 0, 255,cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)[-2]

    What is the difference between cv2.THRESH_BINARY|cv2.THRESH_OTSU and cv2.THRESH_BINARY+cv2.THRESH_OTSU?
    Thanks for the help.

    • Adrian Rosebrock June 27, 2017 at 6:26 am #

      The cv2.threshold function returns a 2-tuple of the threshold value T used (computed via Otsu’s method) and the actual thresh image. Since we are only interested in the thresh image, we grab the value via [1]. This is called Python array indexing. I would suggest reading up on it.

      Also, my recommended way to extract contours via OpenCV 3 and OpenCV 2.4 is now:

      This will make it compatible with both OpenCV 2.4 and OpenCV 3.

      As for your last question, the vertical pipe “|” is a bitwise OR.

  18. max September 28, 2017 at 3:03 pm #

    This site is an invaluable resource. Thanks for the thorough and lucid explanation of the watershed algorithm. I’m wondering if you can help me filter the set of contours returned by cv2.findContours(). Essentially, what I want is the set of contours that _do not_ share a boundary with other contours. I know this sounds contrary to the problem watershed is meant to solve, but my requirement is similar to the following problem: Given a picture with a number of coins (as in your example) some touching and some completely isolated, return the set of contours for the isolated coins only and exclude the return of any contours that are touching each other. Thanks for your help!

    • Adrian Rosebrock October 2, 2017 at 10:30 am #

      I’m happy to hear you are enjoying the PyImageSearch blog, Max!

      My suggestion here is to take the output contours, draw them, and then apply a connected component analysis. This will help you determine which contours touch.

      I don’t like OpenCV’s connected component analysis function as much as the scikit-image one, so I would suggest starting there.

  19. Carl.C October 6, 2017 at 5:32 am #

    Hi, Adrian.

    Great tutorial. I would like to ask why two identical pictures get different result?

    I downloaded the coins picture straight from this website (Figure1, jpg format), and ran on it. It turned out to be 10 coins instead of 9, and #3 is missing, also said “[INFO] 10 unique segments found”.

    Then I downloaded your source code and ran on the original picture(png format), it’s 9 coins.

    I can’t figure it out because they really look identical.

    Is it normal for random mistakes? Or does the result somehow rely on the picture format/picture quality?

    • Adrian Rosebrock October 6, 2017 at 4:53 pm #

      Which version of OpenCV are you using? There are minor differences between the versions that can cause slight differences in results. Furthermore, keep in mind that OpenCV is heavily dependent on a number of pre-req libraries, such as optimization packages, libraries used to load various image file formats, etc. Unless explicitly configured, no two computer vision development environments are 100% exact, so these differences can compound and sometimes lead to different results.

      • John Goodman November 14, 2017 at 3:33 pm #

        Hi Adrian,

        I’m running Python 3.6.1 and OpenCV 3.2.0 and I’m seeing the same results. What’s happening is that the top left nickel is being counted twice as (#2 and #3).

        Is there a way to tune for this by tweaking the filtering, thresholding or something else?

        example: https://i.imgur.com/vHlh4mU.png

        Anyway, Thanks for the great blog and book!

        • Adrian Rosebrock November 15, 2017 at 12:58 pm #

          Thanks for sharing the screenshot, John. I appreciate it. You would need to do some tweaking to the parameters here, I would have to play with the code to determine what actually needs to be changed. I’ll try to check this out and get back to you.

  20. Luana de Oliveira October 10, 2017 at 5:01 pm #

    Hello Adrian,
    Thank you very much for the tutorial. It’s great and it has helped me a lot up here.
    I added at the end of the code a simple function to get the coordinates (x, y, and r) of the centroid of the circles
    My current problem is that I’m trying to use this code in georeferenced images (tiff), to get the UTM coordinates (x, y and radius in meters) of each centroid at the end. I tried the gdal, but I could not. Would you have any tips?
    Thank you!

    • Adrian Rosebrock October 13, 2017 at 9:02 am #

      Hi Luana — unfortunately my experience with georeferenced images is pretty minimal, so I’m not sure what the best solution to the problem is. Sorry I couldn’t be of more help here!

  21. shubham November 16, 2017 at 11:15 am #

    hello Adrian! i have also tried this code but after running the first segment of code its giving output but no image is showing at all.only a window with gray background.
    Please help me…

    • Adrian Rosebrock November 18, 2017 at 8:20 am #

      That is indeed strange behavior! What version of OpenCV are you using? And how did you install OpenCV on your system?

  22. vinayak December 1, 2017 at 7:01 am #

    Hello Adrian,

    i want to automatically segment some specific object if it is present in an image, for example dress, shoes,etc. Will this algorithm work for such a use case. I have implemented a pipeline in deep learning using fcn-image segmentation. It works well. Just wanted to check if watershed algorithm can be used in such a use case also? here images are unknown.

    • Adrian Rosebrock December 2, 2017 at 7:24 am #

      If your deep learning-based segmentation pipeline can output masks for the objects in the image then I would give watershed a try. But again, it really depends on how heavy the overlap is. Really good deep learning segmentation algorithms can actually perform the overlap segmentation.

  23. Garnies Hafitma January 4, 2018 at 8:12 am #

    how can fix it??

    usage: contour_only.py [-h] -i IMAGE
    contour_only.py: error: argument -i/--image is required

    just trying it out. I’m from senior high school, I didn’t have any information about this hehe

    • Adrian Rosebrock January 5, 2018 at 1:32 pm #

      It’s great to hear you are getting involved with programming and OpenCV in high school. The issue you are running into is due to command line arguments. You need to supply them when executing the script via the command line. Please take some time to educate yourself on command line arguments before continuing.

  24. Meg January 25, 2018 at 11:11 pm #

    Hi Adrian,

    In your code, you use “labels = watershed(-D, markers, mask=thresh)”

    When I look at the OpenCV documentation, I only see two parameters, the input and the markers. Can you tell me where you get the third parameter from?


    Thanks much!

    • Adrian Rosebrock January 26, 2018 at 10:09 am #

      We’re using the scikit-image implementation of Watershed, not the OpenCV implementation.

      • Meg January 26, 2018 at 1:12 pm #

        Thank you! One other question I had – what if my background changes? E.g. I want to use this code if the background is white and my coins are darker. If I manually change the threshold to Binary Inverse it works, but do you have any suggestions on how to automatically detect this?

        • Adrian Rosebrock January 30, 2018 at 10:44 am #

          I would suggest using Otsu’s method for thresholding. Additionally you could manually do this yourself and compute a histogram of pixel intensities. Assuming there are more background pixels than foreground you can check the count of darker vs. lighter pixels and determine the correct threshold flag.

      • Jared Turpin February 23, 2018 at 1:52 pm #

        It is unclear to me why there are two separate implementations of the watershed algorithm. What rationale did you use to select the scikit version over the OpenCV version?

        • Adrian Rosebrock February 26, 2018 at 2:05 pm #

          When this blog post was published, OpenCV did not have an easily accessible watershed function with Python bindings. With the OpenCV 3 release, however, the watershed function became more accessible. Because of this, I used the scikit-image version when writing this post.

  25. Tony Holdroyd February 28, 2018 at 9:21 am #

    Hello Adrian, my project involves recognising , and segmenting,tumours in brain scans where there is quite a bit of noise in the image, including a skull outline. We want to detect a whitish patch against a darkerish background. We have tried a DL approach, but with limited success, and I was wondering if you could advise us, please, if we should put our efforts into the watershed function, or some other OpenCV, or indeed sci-kit, technique. Thanks, Best, Tony

    • Adrian Rosebrock March 2, 2018 at 11:00 am #

      Hey Tony — do you have any example images that I could take a look at? Additionally, what deep learning approach did you use?

  26. Ram April 7, 2018 at 1:25 pm #

    hey, i am a beginner. can you suggest how to find the performance of different image segmentation algorithms. I have the output of watershed, kmeans, thresholding. how to find which algorithm is best ?

    • Adrian Rosebrock April 10, 2018 at 12:42 pm #

      Typically you would need the ground-truth of what the correct segmentation looks like. From there you would compute the Intersection over Union for the resulting masks.

  27. trya sovi April 29, 2018 at 1:05 am #

    Hello adrian, can you explain why I get the “usage: contour_only.py [-h] -i IMAGE
    contour_only.py: error: the following arguments are required: -i/--image” error message? thank you.

    • Adrian Rosebrock May 3, 2018 at 10:25 am #

      If you are new to command line arguments that’s okay but you will need to read this blog post first.

  28. Ata May 10, 2018 at 1:40 am #

    Dear Adrian,
    Thank you for your great projects that you are sharing,
    I’m new with python.
    I followed the procedure as you mentioned here.
    There is a problem related to skimage.
    I cannot use it in Python even though I have installed it in several ways, such as:
    sudo apt-get install python-skimage
    but I still receive the same error:
    ImportError: No module named skimage
    But when I run Python in the LXTerminal I can import it without any error.
    Can you help me with this?

    • Adrian Rosebrock May 14, 2018 at 12:23 pm #

      Are you using a Python virtual environment to install scikit-image? If so, make sure you access your Python virtual environment before you install it.

      The “apt-get” command will install your Python packages into the system Python, which you likely do not want.
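To make that concrete, here is a sketch of the install inside a virtual environment (the environment name "cv" is an assumption, substitute your own; this also assumes virtualenvwrapper's `workon` command is available):

```shell
# Activate the virtual environment first ("cv" is just an example name)
workon cv

# pip now installs into the environment, not the system Python
pip install scikit-image

# Verify the import resolves inside the environment
python -c "import skimage; print(skimage.__version__)"
```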

  29. Roshan May 17, 2018 at 5:01 pm #

    Hi Adrian,

    I am trying to count the number of seeds from the image but the background is gray instead of black as in your example and thus I am not able to detect unique segments. I would like to know the best way to deal with this.


    • Adrian Rosebrock May 22, 2018 at 6:50 am #

      You may need to change and manually tune the threshold values you are using for your input image.

      • Bobby January 24, 2019 at 6:55 pm #

        Are you using a light box for this or what material is being used for the table-top black background color? Have a product name or link?

        • Adrian Rosebrock January 25, 2019 at 6:49 am #

          It was actually just my coffee table (my coffee table is a dark espresso color).

  30. Kundan August 21, 2018 at 7:12 am #

    Hi Adrian,
    Thanks for the wonderful tutorial.

    I need to calculate the particle size distribution by calculating the sizes of fragments/pieces in a given image. I hope you can point me in the right direction.


  31. Raftaar October 21, 2018 at 12:46 am #

    Hello Adrian,
    What is the benefit of performing a bitwise OR versus just adding (+) the Otsu thresholding flag to THRESH_BINARY?

    • Adrian Rosebrock October 22, 2018 at 8:07 am #

      A bitwise OR is different from an add in general, but because the OpenCV thresholding flags are distinct bit values, both forms produce the same combined flag here, and both will perform Otsu thresholding.

  32. Raftaar October 21, 2018 at 7:22 pm #

    Hello Adrian,
    If I remove the square brackets around c here on Line 39:
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)

    The contours have a lot of gaps in them.

    Can you please let me know why that is the case?

    • Adrian Rosebrock October 22, 2018 at 7:54 am #

      I’m not sure what you mean by “gaps” here. Why are you trying to remove the brackets?

  33. shahbaz sharif November 9, 2018 at 5:14 am #

    Hey bro, is there any way to get this to work with the topographic results of an eye (from a topographer)?

    Will be very happy for the answer 🙂

  34. harry December 31, 2018 at 2:23 am #

    How do I plot the distance map?
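One way to do this (a sketch using SciPy and matplotlib on a synthetic mask; the output filename is arbitrary):

```python
import numpy as np
from scipy import ndimage
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Toy binary mask standing in for the thresholded image
mask = np.zeros((60, 60), dtype=np.uint8)
mask[10:50, 10:50] = 1

# Euclidean distance from each foreground pixel to the nearest
# background pixel -- the "distance map" the watershed runs on
D = ndimage.distance_transform_edt(mask)

# Render it as a heat map: bright regions are the peaks the
# markers are placed on
plt.imshow(D, cmap="jet")
plt.colorbar()
plt.savefig("distance_map.png")
```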

  35. yaya March 26, 2019 at 6:17 am #

    Hi Adrian,
    Thank you for the great projects that you are sharing.
    I'm new to Python.

    I followed the procedure as you mentioned here.
    My project involves studying image processing techniques to improve the identification of tree species, focusing on commercial species, using drone imagery. We want to detect the species in a tropical forest. Any suggestions?

    Thank you so much for your answer.

    • Adrian Rosebrock March 27, 2019 at 8:41 am #

      It’s hard to say without seeing example images/video of what you’re working with. If you can share some example images I can try to take a look.

  36. Hassan April 5, 2019 at 8:11 am #

    Hi Adrian, thanks for the great tutorial. I have images with black overlapping circles on a white background and would like to detect them as you did here, but I don't know what changes I should make to get the code working.

    Any help would be appreciated

    • Adrian Rosebrock April 12, 2019 at 1:02 pm #

      Just invert the image to make the foreground white pixels:

      image = cv2.bitwise_not(image)

  37. Omar May 1, 2019 at 4:06 pm #

    Hi there Adrian,
    We have an image dataset for brain tumors and we also have some of the data segmented, so we can start using a deep learning architecture for our model. My question is: is this algorithm capable of extracting contours and mapping them onto the original images? If so, do you have a resource or tutorial for that, or would refactoring the code snippets here be enough?
    Many Thanks

    • Adrian Rosebrock May 8, 2019 at 1:47 pm #

      Hey Omar — if your goal is to apply deep learning + segmentation then you should utilize instance segmentation. Deep Learning for Computer Vision with Python covers instance segmentation via Mask R-CNNs. The book takes a medical focus as well, showing you how to train a Mask R-CNN for skin lesion/cancer segmentation as well as prescription pill segmentation. Give it a look, I believe it would really help you with your project.

  38. Henrique May 7, 2019 at 8:52 pm #

    Hi Adrian, is there a way to compute the area of each coin in order to be able to classify each coin with its respective value?

    • Adrian Rosebrock May 8, 2019 at 12:50 pm #

      Yes, but the first step would be to recognize the coin itself. Coins of different values typically have different sizes, so a good method would simply be to measure the coin size.

  39. Simone June 18, 2019 at 11:09 am #

    Hi Adrian,
    Thank you very much for your great tutorials! This example is particularly interesting as it works much better than the one in the openCV tutorial, at least for my dataset.
    Just a tip for anyone interested in improved performance, above all when you are dealing with thousands of objects: setting the parameter watershed_line to True in the watershed function will mark the basins’ borders with the label 0 (background). Setting thresh[labels == 0] = 0, you can directly call findContours on thresh without the expensive for loop.
    Hope that helps, and thank you again.

    • Adrian Rosebrock June 19, 2019 at 1:46 pm #

      Thanks for sharing, Simone!

  40. sohini goswami August 20, 2019 at 6:38 am #

    Hi Adrian,
    I have some grains [wheat]. Can this algorithm segment grains that are touching each other, or do I have to use the Mask R-CNN approach?

    • Adrian Rosebrock September 5, 2019 at 11:00 am #

      Mask R-CNN may be overkill but it’s hard to say without seeing your images first. Give both a try and then let your empirical results guide you further.

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time-consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.
