Computing image “colorfulness” with OpenCV and Python

Today’s blog post is inspired by a question I received from a PyImageSearch reader on Twitter, @makingyouthink.

Paraphrasing the tweets myself and @makingyouthink exchanged, the question was:

Have you ever seen a Python implementation of Measuring colourfulness in natural images (Hasler and Süsstrunk, 2003)?

I would like to use it as an image/product search engine. By giving each image a “colorfulness” amount, I can sort my images according to their color.

There are many practical uses for image colorfulness, including evaluating compression algorithms, assessing a given camera sensor module’s sensitivity to color, computing the “aesthetic qualities” of an image, or simply creating a bulk image visualization tool to show a spectrum of images in a dataset arranged by colorfulness.

Today we are going to learn how to calculate the colorfulness of an image as described in Hasler and Süsstrunk’s 2003 paper, Measuring colorfulness in natural images. We will then implement our colorfulness metric using OpenCV and Python.

After implementing the colorfulness metric, we’ll sort a given dataset according to color and display the results using the image montage tool that we created last week.

To learn about computing image “colorfulness” with OpenCV, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Computing image “colorfulness” with OpenCV and Python

There are three core parts to today’s blog post.

First, we will walk through the colorfulness metric methodology described in the Hasler and Süsstrunk paper.

We’ll then implement the image colorfulness calculations in Python and OpenCV.

Finally, I’ll demonstrate how we can apply the colorfulness metric to a set of images and sort the images according to how “colorful” they are. We will make use of our handy image montage routine for visualization.

To download the source code + example images to this blog post, be sure to use the “Downloads” section below.

Measuring colorfulness in an image

In their paper, Hasler and Süsstrunk first asked 20 non-expert participants to rate images on a 1-7 scale of colorfulness. This survey was conducted on a set of 84 images. The scale values were:

  1. Not colorful
  2. Slightly colorful
  3. Moderately colorful
  4. Averagely colorful
  5. Quite colorful
  6. Highly colorful
  7. Extremely colorful

In order to set a baseline, the authors provided the participants with 4 example images and their corresponding colorfulness value from 1-7.

Through a series of experimental calculations, they derived a simple metric that correlated with the results of the viewers.

They found through these experiments that a simple opponent color space representation, together with the mean and standard deviation of these values, achieves a 95.3% correlation with the survey data.

We can now derive their image colorfulness metric:

rg = R - G

yb = \frac{1}{2}(R + G) - B

The above two equations show the opponent color space representation, where R is Red, G is Green, and B is Blue. In the first equation, rg is the difference between the Red channel and the Green channel. In the second equation, yb represents half of the sum of the Red and Green channels, minus the Blue channel.

Next, the standard deviation (\sigma_{rgyb}) and mean (\mu_{rgyb}) are computed before calculating the final colorfulness metric, C.

\sigma_{rgyb} = \sqrt{\sigma_{rg}^2 + \sigma_{yb}^2}

\mu_{rgyb} = \sqrt{\mu_{rg}^2 + \mu_{yb}^2}

C = \sigma_{rgyb} + 0.3 * \mu_{rgyb}

As we’ll find out, this is an extremely efficient and practical way to compute image colorfulness.

In the next section, we will implement this algorithm with Python and OpenCV code.

Implementing an image colorfulness metric in OpenCV

Now that we have a basic understanding of the colorfulness metric, let’s calculate it with OpenCV and NumPy.

In this section we will:

  • Import our necessary Python packages.
  • Parse our command line arguments.
  • Loop through all images in our dataset and compute the corresponding colorfulness metric.
  • Sort the images based on their colorfulness.
  • Display the “most colorful” and “least colorful” images in a montage.

To get started, open up your favorite text editor or IDE, create a new file, and insert the following code:

Lines 2-7 import our required Python packages.

If you do not have imutils installed on your system (v0.4.3 as of this writing), then make sure you install/upgrade it via pip:
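From your terminal:

```shell
pip install --upgrade imutils
```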

Note: If you are using Python virtual environments (as all of my OpenCV install tutorials do), make sure you use the workon command to access your virtual environment first, and then install/upgrade imutils.

Next, we will define a new function, image_colorfulness:
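Here is a sketch of that function. Plain NumPy channel slicing is used in place of the cv2.split call the walkthrough describes (the behavior is identical for an OpenCV BGR image), and the np.absolute calls follow the convention discussed in the comments below; the line numbers referenced in the prose refer to the original listing:

```python
import numpy as np

def image_colorfulness(image):
    # split the image into its Blue, Green, and Red channels;
    # OpenCV stores images in BGR order, so this is equivalent
    # to (B, G, R) = cv2.split(image.astype("float"))
    (B, G, R) = (image[..., 0].astype("float"),
                 image[..., 1].astype("float"),
                 image[..., 2].astype("float"))

    # compute the Red-Green opponent, rg = R - G
    rg = np.absolute(R - G)

    # compute the Yellow-Blue opponent, yb = 0.5 * (R + G) - B
    yb = np.absolute(0.5 * (R + G) - B)

    # compute the mean and standard deviation of both opponents
    (rgMean, rgStd) = (np.mean(rg), np.std(rg))
    (ybMean, ybStd) = (np.mean(yb), np.std(yb))

    # combine the standard deviations and the means
    stdRoot = np.sqrt((rgStd ** 2) + (ybStd ** 2))
    meanRoot = np.sqrt((rgMean ** 2) + (ybMean ** 2))

    # derive the "colorfulness" metric, C = sigma + 0.3 * mu
    return stdRoot + (0.3 * meanRoot)
```

As a quick sanity check: any grayscale image has R = G = B everywhere, so both opponents are zero and C = 0, while a fully saturated red image scores highly.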

Line 9 defines the image_colorfulness function, which takes an image as the only argument and returns the colorfulness metric as described in the section above.

Note: Line 11, Line 14, and Line 17 make use of color spaces which are beyond the scope of this blog post. If you are interested in learning more about color spaces, be sure to refer to Practical Python and OpenCV and the PyImageSearch Gurus course.

To break the image into its Red, Green, and Blue (RGB) channels, we make a call to cv2.split on Line 11. The function returns the channels as a tuple in BGR order, as this is how images are represented in OpenCV.

Next we use a very simple opponent color space.

As in the referenced paper, we compute the Red-Green opponent, rg, on Line 14. This is a simple difference of the Red channel minus the Green channel.

Similarly, we compute the Yellow-Blue opponent on Line 17. In this calculation, we take half of the Red+Green channel sum and then subtract the Blue channel. This produces our desired opponent, yb.

From there, on Lines 20 and 21 we compute the mean and standard deviation of both rg and yb, and store them in respective tuples.

Next, we combine rgStd (the Red-Green standard deviation) with ybStd (the Yellow-Blue standard deviation) on Line 24. We add the square of each and then take the square root, storing the result as stdRoot.

Similarly, we combine rgMean with ybMean by squaring each, adding them, and taking the square root on Line 25. We store this value as meanRoot.

The last step in computing image colorfulness is to add stdRoot and 0.3 times meanRoot, then return the result to the calling function.

Now that our image_colorfulness metric is defined, we can parse our command line arguments:
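A sketch of the argument parsing step; a literal argument list is passed to parse_args here so the snippet runs as-is (in the actual script this would simply be ap.parse_args()), and "ukbench_sample" is just a placeholder directory name:

```python
import argparse

# construct the argument parser and parse the arguments; in the
# real script this would be ap.parse_args() on the command line,
# and "ukbench_sample" is a placeholder directory name
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--images", required=True,
    help="path to input directory of images")
args = vars(ap.parse_args(["--images", "ukbench_sample"]))
```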

We only need one command line argument here, --images, which is the path to a directory of images residing on your machine.

Now let’s loop through each image in the dataset and compute the corresponding colorfulness metric:

Line 38 initializes a list, results, which will hold 2-tuples containing each image path and its corresponding colorfulness.

We begin looping over the images in our dataset, specified by the --images command line argument, on Line 41.

In the loop, we first load the image on Line 44, then resize it to a width of 250 pixels on Line 45, maintaining the aspect ratio.

Our image_colorfulness function call is made on Line 46 where we provide the only argument, image, storing the corresponding colorfulness metric in C.

On Lines 49 and 50, we draw the colorfulness metric on the image using cv2.putText. To read more about the parameters to this function, see the OpenCV Documentation (2.4, 3.0).

On the last line of the for loop, we append the tuple (imagePath, C) to the results list (Line 53).

Note: Typically, you would not want to store each image in memory for a large dataset. We do this here for convenience. In practice you would load the image, compute the colorfulness metric, and then maintain a list of the image ID/filename and corresponding colorfulness metric. This is a much more efficient approach; however, for the sake of this example we are going to store the images in memory so we can easily build our montage of “most colorful” and “least colorful” images later in the tutorial.

At this point, we have answered our PyImageSearch reader’s question. The colorfulness metric has been calculated for all images.

If you’re using this for an image search engine as @makingyouthinkcom is, you probably want to display your results.

And that is exactly what we will do next, where we will:

  • Sort the images according to their corresponding colorfulness metric.
  • Determine the 25 most colorful and 25 least colorful images.
  • Display our results in a montage.

Let’s go ahead and tackle these three tasks now:
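Here is a sketch of the sorting and slicing step; the small hand-written results list is only there so the snippet runs on its own — in the script, results is the list built in the loop above:

```python
# `results` is the (imagePath, C) list built in the loop above; a
# tiny hand-written list is used here so the snippet runs on its own
results = [("ukbench00001.jpg", 25.1), ("ukbench00002.jpg", 88.7),
           ("ukbench00003.jpg", 4.2)]

# sort the results in descending order of their colorfulness metric
results = sorted(results, key=lambda x: x[1], reverse=True)

# grab the 25 most colorful images, then the 25 least colorful ones,
# reversing the latter so they read in ascending order
mostColor = [r[0] for r in results[:25]]
leastColor = [r[0] for r in results[-25:]][::-1]
```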

On Line 59 we sort the results in descending order of their colorfulness metric, making use of a Python lambda expression.

Then on Line 60, we store the 25 most colorful images in a list, mostColor.

Similarly, on Line 61, we grab the least colorful images, which are the last 25 images in our results list. We reverse this list so that the images are displayed in ascending order. We store these images as leastColor.

Now, we can visualize the mostColor and leastColor images using the build_montages function we learned about last week.

A most-colorful and a least-colorful montage are built on Lines 64 and 65. Here we indicate that all images in each montage will be resized to 128 x 128 pixels and arranged in 5 columns by 5 rows.

Now that we have assembled the montages, we will display each on the screen.

On Lines 68 and 69 we display each montage in a separate window.

The cv2.waitKey call on Line 70 pauses execution of our script until a key is pressed while one of the windows is in focus. Once a key is pressed, the windows close and the script exits.

Image colorfulness results

Now let’s put this script to work and see the results. Today we will use a sample (1,000 images) of the popular UKBench dataset, a collection of images containing everyday objects.

Our goal is to sort the images by most colorful and least colorful.

To run the script, fire up a terminal and execute the following command:
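Here, colorfulness.py is a placeholder for whatever you named the script earlier, and ukbench_sample is the directory containing the sample images:

```shell
python colorfulness.py --images ukbench_sample
```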

Figure 1: (Left) Least colorful images. (Right) Most colorful images.

Notice how our image colorfulness metric has done a good job separating non-colorful images (left) that are essentially black and white from “colorful” images that are vibrant (right).


Summary

In today’s blog post we learned how to compute the “colorfulness” of an image using the approach detailed by Hasler and Süsstrunk in their 2003 paper, Measuring colorfulness in natural images.

Their method is based on the mean and standard deviation of pixel intensity values in an opponent color space. This metric was derived by examining correlations between experimental metrics and the colorfulness values assigned to images by participants in their study.

We then implemented the image colorfulness metric and applied it to the UKBench dataset. As our results demonstrated, the Hasler and Süsstrunk method is a quick and easy way to quantify the colorfulness of an image.

Have fun using this method to experiment with the image colorfulness in your own datasets!

And before you go, be sure to enter your email address in the form below to be notified when new tutorials are published here on the PyImageSearch blog.


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


19 Responses to Computing image “colorfulness” with OpenCV and Python

  1. Kenny June 5, 2017 at 10:37 am #

    Thanks Adrian for another interesting and refreshing read! I love vibrant colours!

    • Adrian Rosebrock June 6, 2017 at 8:36 am #

      Thank you Kenny!

  2. JBeale June 5, 2017 at 12:11 pm #

    Interesting idea. If someone asked me to make a “colorful” ranking, I might first convert RGB to the HSV representation (hue, saturation, value) and then make an index based on the standard deviation or other variance measurement of hue (total variation in color) as well as the average value of saturation (0 = pure greyscale, 1.0 = most vivid colors). Looks like this Hasler index is just about saturation and contrast, so images that are almost entirely one color tone (eg. red) rank highly so long as they also have high contrast on the black-to-white axis.

  3. Douae June 8, 2017 at 6:29 am #

    Thank you Adrian for this magnificent tutorial. It has been a while I’m thinking about a safe way to determine if an image is colorful or not, you’re solution is even better than my needs.

    • Adrian Rosebrock June 9, 2017 at 1:41 pm #

      Thanks Douae, I’m happy you found the tutorial helpful! 🙂

  4. Bhargav June 8, 2017 at 7:03 am #

    Nice, interesting article! I enjoy your posts! Just one question: will the “colorfulness” indicator be lighting invariant?

  5. stephan schulz June 8, 2017 at 8:02 pm #

    thanks for this great article.

    i am wondering how i could apply technique to finding the most colourful section in just one image but large image. dividing the image in to many small section would work and i could use your complete approach. but this might miss sections that lie right on the intersection of two sections.
    can you think of options other then sliding a window across the large image?

    thanks a lot.

    • Adrian Rosebrock June 9, 2017 at 1:35 pm #

Sliding windows could be a good approach here, especially if you combine them with image pyramids. I would likely recommend using superpixels and then computing the colorfulness metric for each superpixel.

  6. Stefano Tommesani June 15, 2017 at 7:57 am #

    Thanks Adrian!
    Here is a C# version of the code in this article:

    • Adrian Rosebrock June 16, 2017 at 11:18 am #

      Thank you for sharing Stefano!

  7. Arnaud June 30, 2017 at 12:50 pm #

    I’m not sure why you’re using the absolute value of the channel differences. I couldn’t find it in the paper.

  8. Ethan July 7, 2017 at 12:34 pm #

    Hey Adrian, thanks for sharing this! I have a similar question. Is there a reason you are using absolute values here since the original paper did not include that?

    • Adrian Rosebrock July 11, 2017 at 6:55 am #

      While the authors did not include the absolute value in their paper, their MATLAB implementation did, hence why I used it.

    • reborn April 18, 2019 at 9:39 am #

      thank u so much this help me a lot

  9. Ramaswamy October 12, 2017 at 5:26 am #

    Nice article. Thanks for sharing us Adrian.

    • Adrian Rosebrock October 13, 2017 at 8:46 am #

      It’s my pleasure, Ramaswamy!

  10. Wenmin Wu March 30, 2018 at 4:03 am #

    why use np.absolute ? The paper has never mentioned absolute difference between R and G.

  11. esha mehra June 20, 2019 at 5:23 am #

    hello sir
    how to find the value of color variance of an image

    • Adrian Rosebrock June 26, 2019 at 1:57 pm #

      What do you mean by “color variance”?

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply