Determining object color with OpenCV

This is the final post in our three-part series on shape detection and analysis.

Previously, we learned how to:

  1. Compute the center of a contour
  2. Perform shape detection & identification

Today we are going to perform both shape detection and color labeling on objects in images.

At this point, we understand that regions of an image can be characterized by both color histograms and by basic color channel statistics such as mean and standard deviation.

But while we can compute these various statistics, they cannot give us an actual label such as “red”, “green”, “blue”, or “black” that tags a region as containing a specific color.

…or can they?

In this blog post, I’ll detail how we can leverage the L*a*b* color space along with the Euclidean distance to tag, label, and determine the color of objects in images using Python and OpenCV.

Determining object color with OpenCV

Before we dive into any code, let’s briefly review our project structure:
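
Here is the layout I'll be assuming for the rest of this post (the pyimagesearch module name is simply the package I keep my helper classes in; adjust it to match the imports in your own download):

|--- pyimagesearch
|    |--- __init__.py
|    |--- shapedetector.py
|    |--- colorlabeler.py
|--- detect_color.py
|--- example_shapes.png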

Notice how we are reusing the shapedetector.py and ShapeDetector class from our previous blog post. We'll also create a new file, colorlabeler.py, that will be used to tag image regions with a text label of a color.

Finally, the detect_color.py driver script will be used to glue all the pieces together.

Before you continue working through this post, make sure that you have the imutils Python package installed on your system:
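
If you don't already have it installed (or need to upgrade to the latest version), pip will take care of it:

$ pip install --upgrade imutils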

We'll be using various functions from this library throughout the remainder of the lesson.

Labeling colors in images

The first step in this project is to create a Python class that can be used to label shapes in an image with their associated color.

To do this, let’s define a class named ColorLabeler  in the colorlabeler.py  file:
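
Here is a sketch of what that file can look like, assuming a minimal palette of pure red, green, and blue (extend the dictionary with whatever colors you need):

# import the necessary packages
from scipy.spatial import distance as dist
from collections import OrderedDict
import numpy as np
import cv2

class ColorLabeler:
    def __init__(self):
        # initialize the colors dictionary, containing the color
        # name as the key and the RGB tuple as the value
        colors = OrderedDict({
            "red": (255, 0, 0),
            "green": (0, 255, 0),
            "blue": (0, 0, 255)})

        # allocate memory for the L*a*b* image, then initialize
        # the color names list
        self.lab = np.zeros((len(colors), 1, 3), dtype="uint8")
        self.colorNames = []

        # loop over the colors dictionary
        for (i, (name, rgb)) in enumerate(colors.items()):
            # update the L*a*b* array and the color names list
            self.lab[i] = rgb
            self.colorNames.append(name)

        # convert the array of known colors from the RGB color
        # space to L*a*b*
        self.lab = cv2.cvtColor(self.lab, cv2.COLOR_RGB2LAB)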

Lines 2-5 import our required Python packages, while Line 7 defines the ColorLabeler class.

We then dive into the constructor on Line 8. To start, we need to initialize a colors dictionary (Lines 11-14) that specifies the mapping of the color name (the key to the dictionary) to the RGB tuple (the value of the dictionary).

From there, we allocate memory for a NumPy array to store these colors, followed by initializing the list of color names (Lines 18 and 19).

The next step is to loop over the colors dictionary, updating the NumPy array and the colorNames list, respectively (Lines 22-25).

Finally, we convert the NumPy “image” from the RGB color space to the L*a*b* color space.

So why are we using the L*a*b* color space rather than RGB or HSV?

Well, in order to actually label and tag regions of an image as containing a certain color, we’ll be computing the Euclidean distance between our dataset of known colors (i.e., the lab  array) and the averages of a particular image region.

The known color that minimizes the Euclidean distance will be chosen as the color identification.

And unlike HSV and RGB color spaces, the Euclidean distance between L*a*b* colors has actual perceptual meaning — hence we’ll be using it in the remainder of this post.

The next step is to define the label method:
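
A sketch of the method, continuing inside the ColorLabeler class (the two erosion iterations are a reasonable default rather than a magic number):

    def label(self, image, c):
        # construct a mask for the contour, then compute the
        # average L*a*b* value for the masked region
        mask = np.zeros(image.shape[:2], dtype="uint8")
        cv2.drawContours(mask, [c], -1, 255, -1)
        mask = cv2.erode(mask, None, iterations=2)
        mean = cv2.mean(image, mask=mask)[:3]

        # initialize the minimum distance found thus far
        minDist = (np.inf, None)

        # loop over the known L*a*b* color values
        for (i, row) in enumerate(self.lab):
            # compute the distance between the current L*a*b*
            # color value and the mean of the image region
            d = dist.euclidean(row[0], mean)

            # if the distance is smaller than the current smallest
            # distance, update the bookkeeping variable
            if d < minDist[0]:
                minDist = (d, i)

        # return the name of the color with the smallest distance
        return self.colorNames[minDist[1]]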

The label method requires two arguments: the L*a*b* image containing the shape we want to compute color channel statistics for, followed by c, the contour region of the image we are interested in.

Lines 34 and 35 construct a mask for the contour region, an example of which we can see below:

Figure 1: (Right) The original image. (Left) The mask image for the blue pentagon at the bottom of the image, indicating that we will only perform computations in the “white” region of the image, ignoring the black background.

Notice how the foreground region of the mask is set to white, while the background is set to black. We'll only perform computations within the masked (white) region of the image.

We also erode the mask slightly to ensure statistics are only being computed for the masked region and that no background is accidentally included (due to a non-perfect segmentation of the shape from the original image, for instance).

Line 37 computes the mean (i.e., average) for each of the L*, a*, and b* channels of the image, but only for the masked region.

Finally, Lines 43-51 handle looping over each row of the lab array, computing the Euclidean distance between each known color and the average color of the region, and then returning the name of the color with the smallest Euclidean distance.

Defining the color labeling and shape detection process

Now that we have defined our ColorLabeler, let's create the detect_color.py driver script. Inside this script we'll be combining both our ShapeDetector class from last week and the ColorLabeler from today's post.

Let’s go ahead and get started:
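
A sketch of the top of the driver script, assuming the two classes live in a pyimagesearch module as shown in the project structure above:

# import the necessary packages
from pyimagesearch.shapedetector import ShapeDetector
from pyimagesearch.colorlabeler import ColorLabeler
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())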

Lines 2-6 import our required Python packages — notice how we are importing both our ShapeDetector and ColorLabeler.

Lines 9-12 then parse our command line arguments. Like the other two posts in this series, we only need a single argument: --image, the path to where the image we want to process lives on disk.

Next up, we can load the image and process it:
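
Here is a sketch of this step. The resize width of 300 pixels and the threshold value of 60 are values that work well for the example image rather than universal constants, and I'm using imutils.grab_contours to handle the differing return signatures across OpenCV versions. I also initialize the ShapeDetector and ColorLabeler here so they are ready for the loop below:

# load the image and resize it to a smaller factor so that
# the shapes can be approximated better
image = cv2.imread(args["image"])
resized = imutils.resize(image, width=300)
ratio = image.shape[0] / float(resized.shape[0])

# blur the resized image slightly, then convert it to both
# grayscale and the L*a*b* color spaces, and threshold it
blurred = cv2.GaussianBlur(resized, (5, 5), 0)
gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB)
thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)[1]

# find contours in the thresholded image, handling the
# different return signatures across OpenCV versions
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

# initialize the shape detector and color labeler
sd = ShapeDetector()
cl = ColorLabeler()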

Lines 16-18 handle loading the image from disk and then creating a resized version of it, keeping track of the ratio of the original height to the resized height. We resize the image so that our contour approximation is more accurate for shape identification. Furthermore, the smaller the image is, the less data there is to process, and thus our code will execute faster.

Lines 22-25 apply Gaussian smoothing to our resized image, convert it to grayscale and the L*a*b* color space, and finally threshold it to reveal the shapes in the image:

Figure 2: Thresholding is applied to segment the background from the foreground shapes.

We find the contours (i.e., outlines) of the shapes on Lines 29 and 30, taking care to grab the appropriate tuple value of cnts based on our OpenCV version.

We are now ready to detect both the shape and color of each object in the image:
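
Here is a sketch of the main loop:

# loop over the contours
for c in cnts:
    # compute the center of the contour, scaled back up to the
    # original image dimensions
    M = cv2.moments(c)
    cX = int((M["m10"] / M["m00"]) * ratio)
    cY = int((M["m01"] / M["m00"]) * ratio)

    # detect the shape of the contour, then label its color
    shape = sd.detect(c)
    color = cl.label(lab, c)

    # multiply the contour (x, y)-coordinates by the resize ratio,
    # then draw the contour outline and the color + shape text
    # on the output image
    c = c.astype("float")
    c *= ratio
    c = c.astype("int")
    text = "{} {}".format(color, shape)
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.putText(image, text, (cX, cY),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)

    # show the output image
    cv2.imshow("Image", image)
    cv2.waitKey(0)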

We start looping over each of the contours on Line 38, while Lines 40-42 compute the center of the shape.

Using the contour, we can then detect the shape of the object, followed by determining its color on Lines 45 and 46.

Finally, Lines 51-57 handle drawing the outline of the current shape, followed by the color + text label on the output image.

Lines 60 and 61 display the results to our screen.

Color labeling results

To run our shape detector + color labeler, just download the source code to the post using the form at the bottom of this tutorial and execute the following command:
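
$ python detect_color.py --image example_shapes.png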

Figure 3: Detecting the shape and labeling the color of objects in an image.

As you can see from the GIF above, each object has been correctly identified both in terms of shape and in terms of color.

Limitations

One of the primary drawbacks of using the method presented in this post to label colors is that, due to lighting conditions along with varying hues and saturations, colors rarely look like pure reds, greens, and blues.

You can often identify small sets of colors using the L*a*b* color space and the Euclidean distance, but for larger color palettes, this method will likely return incorrect results depending on the complexity of your images.

So, that being said, how can we more reliably label colors in images?

Perhaps there is a way to “learn” what colors “look like” in the real-world.

Indeed, there is.

And that’s exactly what I’ll be discussing in a future blog post.

Summary

Today is the final post in our three-part series on shape detection and analysis.

We started by learning how to compute the center of a contour using OpenCV. Last week we learned how to utilize contour approximation to detect shapes in images. And finally, today we combined our shape detection algorithm with a color labeler, used to tag shapes with a specific color name.

While this method works for small color sets in semi-controlled lighting conditions, it will likely not work for larger color palettes in less controlled environments. As I hinted at in the “Limitations” section of this post, there is actually a way for us to “learn” what colors “look like” in the real-world. I’ll save the discussion of this method for a future blog post.

So, what did you think of this series of blog posts? Be sure to let me know in the comments section.

And be sure to sign up for the PyImageSearch Newsletter using the form below to be notified when new posts go live!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

31 Responses to Determining object color with OpenCV

  1. David McDuffee February 15, 2016 at 7:24 pm #

    I’ve worked with RGB and HSV quite a bit, but I never used LAB. Knowing now that it makes Euclidean distance meaningful in perceptual space means that I’ll stop trying to shoehorn RGB or HSV Euclidean distance into tasks for which they’re ill suited. Thanks!

    • Adrian Rosebrock February 16, 2016 at 3:42 pm #

      I’m glad the post helped David! 🙂

  2. Marc April 1, 2016 at 9:26 am #

    Hi Adrian,
    As always nice work,

    but i have a question will this be applicable to other colors eg. (orange) and what if the color is a bit vague orange, will it still detect it?

    Regards,

    • Adrian Rosebrock April 1, 2016 at 3:12 pm #

      You’ll need to define what exactly “orange” is in terms of the L*a*b* color space, but yes, this approach can still work for detecting orange colors.

  3. Denny April 28, 2016 at 9:57 am #

    Hey Adrian,

    Is is possible to run this script (python + openCV) on a web server?
    The idea is to upload an image through web browser and get the result image as response.

    I’ve been able to run python on it, but didn’t get openCV up.

    Cheers D

  4. Jason July 21, 2016 at 12:34 am #

    Hi Adrian,

    I’ve been working for detecting seven colors and I want to do it in less controlled environments. I’ve tried my best, but I can’t find a solution. Then I found your blog post. I’m very looking forward to your method to solve this problem. Thanks for sharing!

  5. Zhi Yong Tey November 2, 2016 at 6:14 am #

    Hi Adrian,

    I have been working on my school project on detecting object and color concurrently using live feed video, would that be fine for you to guide me through to get my code done? Your help will be much appreciated.

    • Adrian Rosebrock November 3, 2016 at 9:43 am #

      I actually cover how to detect and track an object based on its color inside Practical Python and OpenCV. I would suggest starting there.

  6. marlin November 13, 2016 at 7:34 pm #

    Is it still beneficiary to use L*A*B* colors space, as opposed to HSV, for detecting objects in the real world? (Where lighting and shadows play a huge role)

    Also, if I should use HSV how can I approximate the Euclidean distance from HSV? Should i just focus on the Hue and Saturation Value and try to find the shortest distance between them??

    Thanks

    • Adrian Rosebrock November 14, 2016 at 12:06 pm #

      In general, yes, the L*a*b* color space is better at handling lighting condition variations. As for HSV and the Euclidean distance, that’s entirely based on what you are trying to accomplish. Since I don’t know that I would suggest experimenting with and without the Value component in your Euclidean distance and look at the results.

  7. Speedmachine December 17, 2016 at 4:08 pm #

    hey, i just had a minor doubt. You used cvtColor command to convert BGR and RGB to LAB. But, as mentioned in the official documentation, it actually returns 2.55*L, a+128, b+128 and not Lab. Does the euclidean distance have meaning in this space, or else, shouldnt the values returned by this command be converted to get the actual Lab values?

    • Adrian Rosebrock December 18, 2016 at 8:37 am #

      Yes, these values (and the associated Euclidean distance) still have perceptual meaning.

  8. RAVIVARMAN RAJENDIRAN January 20, 2017 at 3:10 am #

    Hi,
    Thanks for the code.
    I just trying to see the leaf color like whether is green or brown or yellow.
    I tried this code i am getting green color as blue.
    how can i correct it.
    Image of output – http://dl.dropboxusercontent.com/u/12382973/leaf_detection_error.png

    • Adrian Rosebrock January 20, 2017 at 10:54 am #

      The color ranges you should use are dependent on your lighting conditions. I would suggest using the range-detector script to help you narrow down on the proper color thresholding values. You should also read this blog post as well.

      • RAVIVARMAN RAJENDIRAN January 25, 2017 at 2:37 am #

        thanks. i will try it out.

  9. Akilesh January 22, 2017 at 1:20 am #

    Hello,
    This is a great post and really helped a lot. Is the post on how to learn colors out? Really waiting to see how that would work!

    • Adrian Rosebrock January 22, 2017 at 10:12 am #

      Hey Akilesh — I have not written the tutorial on learning colors. It is still in my idea queue. I’ll be sure to let you know when I write it.

  10. onur February 1, 2017 at 3:15 am #

    Hi Adrian,

    Thansk for great works,

    How can i use terminal command ‘$ python detect_color.py --image example_shapes.png’ in a python code. I will use this code in a raspberry pi and my main code will be in pyhton?

    Can i arrange upper and lower baundaries for the colors like your Object Track Movement codes.

    Thanks…

    sincerely…

    • Adrian Rosebrock February 1, 2017 at 12:47 pm #

      Hi Onur — I’m not sure what you mean by use the terminal command inside the Python code?

  11. onur February 1, 2017 at 2:50 pm #

    I will use this code as a subroutine in my main python code. How can I call this code in my main python code? For example, in main python code, I want to say like ‘detect_color(example_shapes) . I am beginner in python and raspberry sorry for this.

    sincerely…

    • Adrian Rosebrock February 3, 2017 at 11:22 am #

      You would need to define your own function that encapsulates this code. I would highly recommend that you spend a little time brushing up on Python and how to define your own functions before continuing, otherwise you may find yourself running into many errors and unsure how to solve them. Let me know if you need any good resources to learn the Python programming language.

  12. Julien March 7, 2017 at 10:37 am #

    It is a very useful example, but how to get the “color name” of a ROI in the image, I don’t know how to convert the RIO to a “contour” that I can pass to label() ?
    Could you help with ?

    • Adrian Rosebrock March 8, 2017 at 1:06 pm #

      Hey Julien — can you elaborate more on what you mean by “color name of an ROI”? This blog post demonstrates how to take a region of an image and determine the color name. I’m not sure how this is different from what you want to accomplish?

  13. dhamini March 28, 2017 at 9:12 am #

    hello Adrian –
    can i get a code for how to detect the different colors of an different objects?

    • Adrian Rosebrock March 28, 2017 at 12:48 pm #

      That really depends on your project. What types of objects are you trying to recognize/detect?

  14. dhamini March 30, 2017 at 9:48 am #

    my project is to recognize the color of clothes.

  15. Wilton Oliveira April 21, 2017 at 10:13 am #

    Hello Adrian!

    First of all, amazing work!

    I just started to learn python programming + OpenCV, aiming an Engineering Final project for my university, and your examples and tutorials posted here are helping a lot!
    I was wondering where can i download the image you utilized here in this tutorial?

    Thanks!

    • Adrian Rosebrock April 21, 2017 at 10:39 am #

      Hey Wilton — use the “Downloads” section of this post to download the source code + example image. Cheers.

  16. Fiona June 19, 2017 at 11:06 pm #

    Hi, Adrian,

    I am using windows 7 and could not figure out a way to install scipy, as it is not supported by any windows system, is there any other way to do so or can you help me install scipy please.

    • Adrian Rosebrock June 20, 2017 at 10:51 am #

      Hi Fiona — I have not used Windows in over 10+ years and do not officially support it here on the PyImageSearch blog. I hope another PyImageSearch reader can help you out, otherwise I highly recommend that you use a Unix-based development environment such as Linux or macOS for computer vision development.
