Hobbits and Histograms – A How-To Guide to Building Your First Image Search Engine in Python

One Ring to rule them all, One Ring to find them; One Ring to bring them all
and in the darkness bind them.

The image search engine we are about to build is going to be so awesome, it could have destroyed The One Ring itself, without the help of the fires of Mt. Doom.

Okay, I’ve obviously been watching a lot of The Hobbit and the Lord of the Rings over this past week.

And I thought to myself, you know what would be awesome?

Building a simple image search engine using screenshots from the movies. And that’s exactly what I did.

Here’s a quick overview:

  • What we’re going to do: Build an image search engine, from start to finish, using The Hobbit and Lord of the Rings screenshots.
  • What you’ll learn: The 4 steps required to build an image search engine, with code examples included. From these examples, you’ll be able to build image search engines of your own.
  • What you need: Python, NumPy, and OpenCV. A little knowledge of basic image concepts, such as pixels and histograms, would help, but is absolutely not required. This blog post is meant to be a How-To, hands on guide to building an image search engine.

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV and Python versions:
This example will run on Python 2.7 and OpenCV 2.4.X.

I’ve never seen a “How-To” guide on building a simple image search engine before. But that’s exactly what this post is. We’re going to use (arguably) one of the most basic image descriptors to quantify and describe these screenshots — the color histogram.

I discussed the color histogram in my previous post, a guide to utilizing color histograms for computer vision and image search engines. If you haven’t read it, no worries, but I would suggest that you go back and read it after checking out this blog post to further understand color histograms.

But before I dive into the details of building an image search engine, let’s check out our dataset of The Hobbit and Lord of the Rings screenshots:

Figure 1: Our dataset of The Hobbit and Lord of the Rings screenshots. We have 25 total images of 5 different categories including Dol Guldur, the Goblin Town, Mordor/The Black Gate, Rivendell, and The Shire.

So as you can see, we have a total of 25 different images in our dataset, five per category. Our categories include:

  • Dol Guldur: “The dungeons of the Necromancer”, Sauron’s stronghold in Mirkwood.
  • Goblin Town: An Orc town in the Misty Mountains, home of The Goblin King.
  • Mordor/The Black Gate: Sauron’s fortress, surrounded by mountain ranges and volcanic plains.
  • Rivendell: The Elven outpost in Middle-earth.
  • The Shire: Homeland of the Hobbits.

The images from Dol Guldur, the Goblin Town, and Rivendell are from The Hobbit: An Unexpected Journey. Our Shire images are from The Lord of the Rings: The Fellowship of the Ring. And finally, our Mordor/Black Gate screenshots are from The Lord of the Rings: The Return of the King.

The Goal:

The first thing we are going to do is index the 25 images in our dataset. Indexing is the process of quantifying our dataset by using an image descriptor to extract features from each image and storing the resulting features for later use, such as performing a search.

An image descriptor defines how we are quantifying an image, hence extracting features from an image is called describing an image. The output of an image descriptor is a feature vector, an abstraction of the image itself. Simply put, it is a list of numbers used to represent an image.

Two feature vectors can be compared using a distance metric. A distance metric is used to determine how “similar” two images are by examining the distance between the two feature vectors. In the case of an image search engine, we give our script a query image and ask it to rank the images in our index based on how relevant they are to our query.
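
To make the idea of a distance metric concrete, here’s a quick toy illustration (the vectors and the choice of Euclidean distance here are mine, purely for demonstration):

import numpy as np

# two toy, normalized feature vectors with three bins each
a = np.array([0.25, 0.50, 0.25])
b = np.array([0.30, 0.40, 0.30])

# Euclidean distance -- the smaller the value, the more
# similar the two feature vectors (and thus the images)
d = np.linalg.norm(a - b)
print d  # prints ~0.1225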

Think about it this way. When you go to Google and type “Lord of the Rings” into the search box, you expect Google to return pages that are relevant to Tolkien’s books and the movie franchise. Similarly, if we present an image search engine with a query image, we expect it to return images that are relevant to the content of that image — hence image search engines are more commonly known in academic circles as Content-Based Image Retrieval (CBIR) systems.

So what’s the overall goal of our Lord of the Rings image search engine?

The goal, given a query image from one of our five different categories, is to return the category’s corresponding images in the top 10 results.

That was a mouthful. Let’s use an example to make it more clear.

If I submitted a query image of The Shire to our system, I would expect it to give me all 5 Shire images in our dataset back in the first 10 results. And again, if I submitted a query image of Rivendell, I would expect our system to give me all 5 Rivendell images in the first 10 results.

Make sense? Good. Let’s talk about the four steps to building our image search engine.

The 4 Steps to Building an Image Search Engine

On the most basic level, there are four steps to building an image search engine:

  1. Define your descriptor: What type of descriptor are you going to use? Are you describing color? Texture? Shape?
  2. Index your dataset: Apply your descriptor to each image in your dataset, extracting a set of features.
  3. Define your similarity metric: How are you going to define how “similar” two images are? You’ll likely be using some sort of distance metric. Common choices include Euclidean, Cityblock (Manhattan), Cosine, and chi-squared to name a few.
  4. Searching: To perform a search, apply your descriptor to your query image, then ask your distance metric to rank how similar the images in your index are to your query image. Sort your results by similarity and examine them.

Step #1: The Descriptor – A 3D RGB Color Histogram

Our image descriptor is a 3D color histogram in the RGB color space with 8 bins per red, green, and blue channel.

The best way to explain a 3D histogram is to use the conjunctive AND. This image descriptor will ask a given image: how many pixels have a Red value that falls into bin #1 AND a Green value that falls into bin #2 AND a Blue value that falls into bin #1? This process is repeated for each combination of bins; however, it is done in a computationally efficient manner.

When computing a 3D histogram with 8 bins per channel, OpenCV will store the feature vector as an (8, 8, 8) array. We’ll simply flatten it to a shape of (512,). Once it’s flattened, we can easily compare feature vectors for similarity.

Ready to see some code? Okay, here we go:
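
What follows is a minimal sketch of the class — the line numbers in the breakdown below refer to the original listing:

# rgbhistogram.py
import cv2

class RGBHistogram:
    def __init__(self, bins):
        # store the number of bins for each channel of the histogram
        self.bins = bins

    def describe(self, image):
        # compute a 3D histogram over all three channels
        hist = cv2.calcHist([image], [0, 1, 2],
            None, self.bins, [0, 256, 0, 256, 0, 256])

        # normalize the histogram so that images with the same content,
        # but different sizes, have (roughly) the same representation
        # (on OpenCV 3+, pass a destination: cv2.normalize(hist, hist))
        hist = cv2.normalize(hist)

        # flatten the 3D histogram into a 1D feature vector
        return hist.flatten()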

As you can see, I have defined an RGBHistogram class. I tend to define my image descriptors as classes rather than functions. The reason is that you rarely extract features from a single image alone; you instead extract features from an entire dataset of images. Furthermore, you expect that the features extracted from all images utilize the same parameters — in this case, the number of bins for the histogram. It wouldn’t make much sense to extract a histogram using 32 bins from one image and then 128 bins from another image if you intend to compare them for similarity.

Let’s take the code apart and understand what’s going on:

  • Lines 6-8: Here I am defining the constructor for the RGBHistogram. The only parameter we need is the number of bins for each channel in the histogram. Again, this is why I prefer using classes instead of functions for image descriptors — by putting the relevant parameters in the constructor, you ensure that the same parameters are utilized for each image.
  • Line 10: You guessed it. The describe method is used to “describe” the image and return a feature vector.
  • Line 15: Here we extract the actual 3D RGB Histogram (or actually, BGR since OpenCV stores the image as a NumPy array, but with the channels in reverse order).  We assume self.bins is a list of three integers, designating the number of bins for each channel.
  • Line 16: It’s important that we normalize the histogram in terms of pixel counts. If we used the raw (integer) pixel counts of an image, then shrunk it by 50% and described it again, we would have two different feature vectors for identical images. In most cases, you want to avoid this scenario. We obtain scale invariance by converting the raw integer pixel counts into real-valued percentages. For example, instead of saying bin #1 has 120 pixels in it, we would say bin #1 has 20% of all pixels in it. Again, by using the percentages of pixel counts rather than raw, integer pixel counts, we can assure that two identical images, differing only in size, will have (roughly) identical feature vectors.
  • Line 20: When computing a 3D histogram, the histogram will be represented as a NumPy array of shape (N, N, N). In order to more easily compute the distance between histograms, we simply flatten it to a shape of (N ** 3,). Example: When we instantiate our RGBHistogram, we will use 8 bins per channel. Without flattening our histogram, the shape would be (8, 8, 8). But by flattening it, the shape becomes (512,).

Now that we have defined our image descriptor, we can move on to the process of indexing our dataset.

Step #2: Indexing our Dataset

Okay, so we’ve decided that our image descriptor is a 3D RGB histogram. The next step is to apply our image descriptor to each image in the dataset.

This simply means that we are going to loop over our 25 image dataset, extract a 3D RGB histogram from each image, store the features in a dictionary, and write the dictionary to file.

Yep, that’s it.

In reality, you can make indexing as simple or complex as you want. Indexing is a task that is easily made parallel: if we had a four-core machine, we could divide the work up between the cores and speed up the indexing process. But since we only have 25 images, that’s pretty silly, especially given how fast it is to compute a histogram.

Let’s dive into some code:
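
Below is a minimal sketch of index.py — the pyimagesearch module layout and the *.png glob pattern are my assumptions, and the line numbers in the breakdown below refer to the original listing:

# index.py
from pyimagesearch.rgbhistogram import RGBHistogram
import argparse
import cPickle
import glob
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="Path to the directory that contains the images to be indexed")
ap.add_argument("-i", "--index", required=True,
    help="Path to where the computed index will be stored")
args = vars(ap.parse_args())

# initialize the index dictionary -- the key is the image filename
# and the value is the computed histogram
index = {}

# initialize our image descriptor: a 3D RGB histogram with
# 8 bins per channel
desc = RGBHistogram([8, 8, 8])

# loop over the images in our dataset
for imagePath in glob.glob(args["dataset"] + "/*.png"):
    # extract the unique image filename to serve as the key
    k = imagePath[imagePath.rfind("/") + 1:]

    # load the image, describe it using our RGB histogram
    # descriptor, and update the index
    image = cv2.imread(imagePath)
    features = desc.describe(image)
    index[k] = features

# write the index to disk
f = open(args["index"], "w")
f.write(cPickle.dumps(index))
f.close()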

Alright, the first thing we are going to do is import the packages we need. I’ve decided to store the RGBHistogram class in a module called pyimagesearch. I mean, it only makes sense, right? We’ll use cPickle to dump our index to disk. And we’ll use glob to get the paths of the images we are going to index.

The --dataset argument is the path to where our images are stored on disk and the --index option is the path to where we will store our index once it has been computed.

Finally, we’ll initialize our index — a builtin Python dictionary type. The key for the dictionary will be the image filename. We’ve made the assumption that all filenames are unique, and in fact, for this dataset, they are. The value for the dictionary will be the computed histogram for the image.

Using a dictionary for this example makes the most sense, especially for explanation purposes. Given a key, the dictionary points to some other object. When we use an image filename as a key and the histogram as the value, we are implying that a given histogram H is used to quantify and represent the image with filename K.

Again, you can make this process as simple or as complicated as you want. More complex image descriptors make use of term frequency-inverse document frequency (tf-idf) weighting and an inverted index, but we are going to steer clear of that for now. Don’t worry though, I’ll have plenty of blog posts discussing how we can leverage more complicated techniques; for the time being, let’s keep it simple.

Here we instantiate our RGBHistogram. Again, we will be using 8 bins for each of the red, green, and blue channels.

Here is where the actual indexing takes place. Let’s break it down:

  • Line 2: We use glob to grab the image paths and start to loop over our dataset.
  • Line 4: We extract the “key” for our dictionary. All filenames are unique in this sample dataset, so the filename itself will be enough to serve as the key.
  • Lines 8-10: The image is loaded off disk, and we then use our RGBHistogram to extract a histogram from the image. The histogram is then stored in the index.

Now that our index has been computed, let’s write it to disk so we can use it for searching later on.

Step #3: The Search

We now have our index sitting on disk, ready to be searched.

The problem is, we need some code to perform the actual search. How are we going to compare two feature vectors and how are we going to determine how similar they are?

This question is better addressed first with some code, then I’ll break it down.
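
Below is a condensed sketch of the Searcher class — the line numbers in the breakdown that follows refer to the original, more heavily commented listing:

# searcher.py
import numpy as np

class Searcher:
    def __init__(self, index):
        # store our index of image filename -> feature vector
        self.index = index

    def search(self, queryFeatures):
        # initialize our dictionary of results
        results = {}

        # loop over the images in our index
        for (k, features) in self.index.items():
            # compute the chi-squared distance between the query
            # features and the indexed features, then update results
            d = self.chi2_distance(features, queryFeatures)
            results[k] = d

        # sort our results so that smaller distances (i.e. more
        # relevant images) are at the front of the list
        results = sorted([(v, k) for (k, v) in results.items()])

        # return our results
        return results

    def chi2_distance(self, histA, histB, eps=1e-10):
        # compute the chi-squared distance, using an epsilon value
        # to avoid those pesky divide-by-zero errors
        d = 0.5 * np.sum([((a - b) ** 2) / (a + b + eps)
            for (a, b) in zip(histA, histB)])

        # return the chi-squared distance
        return d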

First off, most of the original listing is just comments — don’t be scared that it’s 41 lines. If you haven’t already guessed, I like well-commented code. Let’s investigate what’s going on:

  • Lines 4-7: The first thing I do is define a Searcher class and a constructor with a single parameter — the index. This index is assumed to be the index dictionary that we wrote to file during the indexing step.
  • Line 11: We define a dictionary to store our results. The key is the image filename (from the index) and the value is how similar the given image is to the query image.
  • Lines 14-26: Here is the part where the actual searching takes place. We loop over the image filenames and corresponding features in our index. We then use the chi-squared distance to compare our color histograms. The computed distance is then stored in the results dictionary, indicating how similar the two images are to each other.
  • Lines 30-33: The results are sorted in terms of relevancy (the smaller the chi-squared distance, the more relevant/similar) and returned.
  • Lines 35-41: Here we define the chi-squared distance function used to compare the two histograms. In general, differences in large bins matter less than differences in small bins and should be weighted as such — this is exactly what the chi-squared distance does. We provide an epsilon dummy value to avoid those pesky “divide by zero” errors. Images will be considered identical if their feature vectors have a chi-squared distance of zero. The larger the distance, the less similar they are.

So there you have it, a Python class that can take an index and perform a search. Now it’s time to put this searcher to work.

Note: For those who are more academically inclined, you might want to check out The Quadratic-Chi Histogram Distance Family from the ECCV ’10 conference if you are interested in histogram distance metrics.

Step #4: Performing a Search

Finally. We are closing in on a functioning image search engine.

But we’re not quite there yet. We need a little extra code to handle loading the images off disk and performing the search:
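
Below is a condensed sketch of the driver script — for brevity it prints the top 10 results instead of building the two montage images discussed below, so the line numbers in the breakdown refer to the original listing:

# search.py
from pyimagesearch.searcher import Searcher
import argparse
import cPickle
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="Path to the directory that contains the images we just indexed")
ap.add_argument("-i", "--index", required=True,
    help="Path to where we stored our index")
args = vars(ap.parse_args())

# load the index off disk and initialize our searcher
index = cPickle.loads(open(args["index"]).read())
searcher = Searcher(index)

# treat every image in the index as a query and see what we get back
for (query, queryFeatures) in index.items():
    # perform the search using the current query
    results = searcher.search(queryFeatures)

    # load and display the query image
    queryImage = cv2.imread(args["dataset"] + "/" + query)
    cv2.imshow("Query", queryImage)
    print "query: %s" % (query)

    # print the top 10 results (smallest chi-squared distance first)
    for (score, imageName) in results[:10]:
        print "\t%s : %.3f" % (imageName, score)

    cv2.waitKey(0)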

First things first. Import the packages that we will need. As you can see, I’ve stored our Searcher class in the pyimagesearch module. We then define our arguments in the same manner that we did during the indexing step. Finally, we use cPickle to load our index off disk and initialize our Searcher.

Most of this code handles displaying the results. The actual “search” is done in a single line (#31). Regardless, let’s examine what’s going on:

  • Line 3: We are going to treat each image in our index as a query and see what results we get back. Normally, queries are external and not part of the dataset, but before we get to that, let’s just perform some example searches.
  • Line 5: Here is where the actual search takes place. We treat the current image as our query and perform the search.
  • Lines 8-11: Load and display our query image.
  • Lines 17-35: In order to display the top 10 results, I have decided to use two montage images. The first montage shows results 1-5 and the second montage results 6-10. The name of the image and its distance are provided on Line 27.
  • Lines 38-40: Finally, we display our search results to the user.

So there you have it. An entire image search engine in Python. Let’s see how this thing performs:

Figure 2: Search Results using Mordor-002.png as a query. Our image search engine is able to return images from Mordor and the Black Gate.

Let’s start at the ending of The Return of the King using Frodo and Sam’s ascent into the volcano as our query image. As you can see, our top 5 results are from the “Mordor” category.

Perhaps you are wondering why the query image of Frodo and Sam is also the image in the #1 result position? Well, let’s think back to our chi-squared distance. We said that an image would be considered “identical” if the distance between the two feature vectors is zero. Since we are using images we have already indexed as queries, they are in fact identical and will have a distance of zero. Since a value of zero indicates perfect similarity, the query image appears in the #1 result position.

Now, let’s try another image, this time using The Goblin King in Goblin Town:

Figure 3: Search Results using Goblin-004.png as a query. The top 5 images returned are from Goblin Town.

The Goblin King doesn’t look very happy. But we sure are happy that all five images from Goblin Town are in the top 10 results.

Finally, here are three more example searches for Dol-Guldur, Rivendell, and The Shire. Again, we can clearly see that all five images from their respective categories are in the top 10 results.

Figure 4: Using images from Dol-Guldur (Dol-Guldur-004.png), Rivendell (Rivendell-003.png), and The Shire (Shire-002.png) as queries.

Bonus: External Queries

As of right now, I’ve only shown you how to perform a search using images that are already in your index. But clearly, this is not how all image search engines work. Google allows you to upload an image of your own. TinEye allows you to upload an image of your own. Why can’t we? Let’s see how we can perform a search using an image that we haven’t already indexed:
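
Below is a minimal sketch of the external query script — again, the line numbers in the breakdown that follows refer to the original listing:

# search_external.py
from pyimagesearch.rgbhistogram import RGBHistogram
from pyimagesearch.searcher import Searcher
import argparse
import cPickle
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="Path to the directory that contains the images we just indexed")
ap.add_argument("-i", "--index", required=True,
    help="Path to where we stored our index")
ap.add_argument("-q", "--query", required=True,
    help="Path to the query image")
args = vars(ap.parse_args())

# load and display the query image
queryImage = cv2.imread(args["query"])
cv2.imshow("Query", queryImage)

# describe the query using the SAME number of bins per channel
# that we used during the indexing step
desc = RGBHistogram([8, 8, 8])
queryFeatures = desc.describe(queryImage)

# load the index off disk, initialize the searcher, and search
index = cPickle.loads(open(args["index"]).read())
searcher = Searcher(index)
results = searcher.search(queryFeatures)

# print the top 10 results
for (score, imageName) in results[:10]:
    print "\t%s : %.3f" % (imageName, score)

cv2.waitKey(0)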

  • Lines 2-17: This should feel like pretty standard stuff by now. We are importing our packages and setting up our argument parser, although you should note the new argument --query. This is the path to our query image.
  • Lines 20-21: We’re going to load your query image and show it to you, just in case you forgot what your query image is.
  • Lines 27-28: Instantiate our RGBHistogram with the exact same number of bins as during our indexing step. I put that in bold and italics just to drive home how important it is to use the same parameters. We then extract features from our query image.
  • Lines 31-33: Load our index off disk using cPickle and perform the search.
  • Lines 39-62: Just as in the code above to perform a search, this code just shows us our results.

Before writing this blog post, I went on Google and downloaded two images not present in our index. One of Rivendell and one of The Shire. These two images will be our queries. Check out the results below:

Figure 5: Using external Rivendell (Left) and the Shire (Right) query images. For both cases, we find the top 5 search results are from the same category.

In this case, we searched using two images that we haven’t seen previously. The one on the left is of Rivendell. We can see from our results that the 5 Rivendell images in our index were returned, demonstrating that our image search engine is working properly.

On the right, we have a query image from The Shire. Again, this image is not present in our index. But when we look at the search results, we can see that the 5 Shire images in our index were returned, once again demonstrating that our image search engine is returning semantically similar images.

Summary

In this blog post we’ve explored how to create an image search engine from start to finish. The first step was to choose an image descriptor — we used a 3D RGB histogram to characterize the color of our images. We then indexed each image in our dataset using our descriptor by extracting feature vectors (i.e. the histograms). From there, we used the chi-squared distance to define “similarity” between two images. Finally, we glued all the pieces together and created a Lord of the Rings image search engine.

So, what’s the next step?

We’re just getting started. Right now we’re just scratching the surface of image search engines. The techniques in this blog post are quite elementary. There is a lot to build on. For example, we focused on describing color using just histograms. But how do we describe texture? Or shape? And what is this mystical SIFT descriptor?

All those questions and more, answered in the coming months.

If you liked this blog post, please consider sharing it with others.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


90 Responses to Hobbits and Histograms – A How-To Guide to Building Your First Image Search Engine in Python

  1. kerosene January 27, 2014 at 9:04 pm #

    How robust is this approach?

    Add in random images from the web. Your corpus of images are obviously selected from clusters of different, strong colors. (the rows you organized them in).

    I realize it’s only one feature, and it’s an early start, but tiny datasets do more harm than good when making broad claims to the image or search communities.

    • Adrian Rosebrock January 27, 2014 at 11:04 pm #

      Hi kerosene, thanks for leaving the reply.

      Perhaps you didn’t see my previous post, Clever Girl: A Guide to Utilizing Color Histograms for Computer Vision and Image Search Engines. Specifically, check out the “Drawbacks” section.

      Color histograms, by definition, ignore the texture and shape of objects in an image. Semantically, they cannot tell the difference between a red shoe and a red shirt if the color distribution is the only aspect being examined. Thus, more robust methods include local invariant descriptors, such as SIFT, SURF, etc. However, these methods are considerably more complex and very confusing if you are just getting started in computer vision and image search engines.

      This blog post simply served as an example of an image search engine. It shows readers that it’s cool and what’s possible with an imagination and an eagerness to explore a new field. I am certainly by no means making broad claims regarding image or search (and I don’t think this blog post did so either).

      Remember the title of this blog post places emphasis on your first search engine. We all learned basic algebra before we learned about systems of linear equations and linear algebra. Similarly, we must examine simple methods such as color histograms and understand why they work (along with their drawbacks) before we explore more complex solutions.

    • Dinesh Vadhia January 31, 2014 at 4:37 am #

      As long as broad claims are not made then starting with a tiny data set is fine. There are many aspects to building an image search capability, and these blog posts are doing a wonderful job of getting the key areas across to people – technical and non-technical – of what is involved. It might help if each introductory post contained a visible disclaimer of sorts. Otherwise, a terrific set of posts so far.

      • Adrian Rosebrock January 31, 2014 at 7:55 am #

        I’ll definitely keep in mind the disclaimer idea, thanks!

    • Viktor December 5, 2014 at 8:14 am #

      Hello! Just found your article! Are you using python to build Iphone app? and again, how can i connect my images to bring out the information of the thing or person on google? Pls, contact me through my mail

      • Adrian Rosebrock December 5, 2014 at 9:02 am #

        I am not using Python to build the iPhone app, just to demonstrate the algorithm. The code can easily be ported over to iOS though.

  2. Andrei Blaj January 29, 2014 at 6:08 am #

    Great article Adrian 😉

    • Adrian Rosebrock January 29, 2014 at 7:21 am #

      Thank you! I’m glad you enjoyed it!

  3. Tommy February 11, 2014 at 3:03 pm #

    Amazing post! Thanks! Even a grumpy old man like me can now figure out how to do image search….! 🙂

  4. Jose Luis de la Rosa June 13, 2014 at 7:46 am #

    Hello Adrian,

    Congratulations for this post, it’s amazing how easy you show it.

    What about images in black and white? Is it different? What’s the best way to clusterize and do an accurate search on them?

    • Adrian Rosebrock June 13, 2014 at 7:38 pm #

      Hi Jose, shoot me an email and we can discuss clustering images.

  5. Harjot Singh Parmar July 10, 2014 at 3:13 am #

    It’s more like collaborative filtering I must say, but how scalable is this? Consider you have 100 thousand images to churn through.
    A nicely compiled post nonetheless.

    • Adrian Rosebrock July 10, 2014 at 7:28 am #

      This current approach isn’t that scalable. In order to make it scale to 100 thousand images, you would either have to put this behind a MapReduce cluster or (a better approach) generate clusters of similar color distributions and then only search within the relevant clusters.

      • Shelly November 15, 2015 at 1:57 pm #

        I’ll be giving this a try shortly. Any suggestions of how to go about it? What should I try first?

        • Adrian Rosebrock November 15, 2015 at 2:03 pm #

          The easiest method is to apply k-means clustering on the color histograms for the images. Then at search time, compute the distance from the color histogram to all centroids. Once you find the closest centroid, you can rank all images that belong to the cluster.

          Another approach is to use keypoint detection, local invariant descriptors, and the bag of visual words model to build a scalable image search engine with an inverted index. I cover exactly how to do this inside the PyImageSearch Gurus course.

  6. Ankit Dixit September 12, 2014 at 8:02 am #

    Great Work, Awesome entertaining way to describe Image algorithms…. Keep it up Adrian… 🙂

  7. PassCode September 16, 2014 at 7:32 am #

    hello Adrian,
    Thanks for a great Blog ,i previously had no knowledge in computer vision , and i find your contribution here fun and interesting …. thirsty for more !
    keep it up adrian , this is very good .

    • Adrian Rosebrock September 16, 2014 at 7:34 am #

      Thanks for the comment! I’m glad you’re enjoying the content — and trust me, I won’t be slowing down any time soon!

  8. dmitry November 5, 2014 at 6:09 pm #

    >how many Blue pixels fall into bin #1
    Looks like a typo to me.

    • Adrian Rosebrock November 6, 2014 at 7:24 am #

      Hi Dmitry, where are you seeing the typo?

      • Marcin March 26, 2016 at 8:38 am #

        Hi Adrian,
        I think Dmitry refers to the sentence in the Histogram section.

        “The best way to explain a 3D histogram is to use the conjunctive AND. This image descriptor will ask a given image how many pixels have a Red value that falls into bin #1 AND a Green value that falls into bin #2 AND how many Blue pixels fall into bin #1.”

        • Adrian Rosebrock March 27, 2016 at 9:10 am #

          Thanks for pointing this out!

  9. wantph279 March 3, 2015 at 6:06 am #

    Hi, Adrian.

    The post is great and easy to understand.

    One question, I tried to dump the index to my disc like this:

    f = open("D:\\test", "w")
    f.write(cPickle.dumps(index))

    however, it didn’t work with the error message ” IOError: [Errno 13] Permission denied”

    I have tried to modify the permission setting of the folder, but still the same error.

    could you help to tell me where is the problem?

    • Adrian Rosebrock March 3, 2015 at 6:32 am #

      Is test a directory? If so, that is likely why you are getting the permission error since you are trying to write an existing directory as a file. If test is indeed a directory, I would change your code to f = open("D:\\test\\index.cpickle", "w") and that should take care of the problem.

      • wantph279 March 3, 2015 at 6:44 am #

        Thanks a lot! That’s the point!!

        I’m a beginner, and hope to learn more from you !!!

        u are soooooo great!

        • Adrian Rosebrock March 3, 2015 at 7:59 am #

          Fantastic, glad to help! And if you’re just getting started learning computer vision and OpenCV, definitely take a look at Practical Python and OpenCV. It will definitely help jumpstart your education!

      • Michael September 24, 2015 at 11:05 am #

        Hi adrian,
        thank you for a great post

        I’ve also tried to dump the index to my disc:

        f = open(args.index + "\\index.pickle", "w")
        f.write(pickle.dumps(lst))
        f.close()

        i’m writing on python 3, and I get the following: “must be str, not bytes”, regarding to the writing line.

        can you help me out here?

        • Adrian Rosebrock September 25, 2015 at 6:43 am #

          Thanks for letting me know Michael, I’ll have to look into this. This blog post is meant to be run with OpenCV 2.4 + Python 2.7, not OpenCV 3 + Python 3 — it was written well before OpenCV 3 was released. In the meantime, change the file mode from “w” to “wb” and that should work.

      • Gerard December 14, 2015 at 2:23 am #

        Thanks for this!

        I also encountered the same problem….

  10. Michael March 11, 2015 at 8:19 am #

    Hi,
    how can i change the query image to another image. And where exactly is the query image picked from

    • Adrian Rosebrock March 11, 2015 at 11:14 am #

      To change the query image, you’ll need to modify this line of code: queryImage = cv2.imread(path) and change the path variable to be the path to where your image resides on disk. An example could be path = "/path/to/my/image.jpg".

      • JetC October 7, 2015 at 7:45 pm #

        hi,
        i’m trying to run the code, and i get the error “invalid load key #”

        • Adrian Rosebrock October 8, 2015 at 5:56 am #

          What command are you using to execute the code?

  11. Pedro Ferreira October 8, 2015 at 6:12 am #

    Great tutorial!
    If you’re using python on windows for any reason 😉 you’ll have an issue with index.py because of the direct use of /, you need to do the following change:

    The problem is that you don’t end up with just the filename on the index and then search.py will fail.

    • Adrian Rosebrock October 9, 2015 at 6:49 am #

      Thanks for passing along the update Pedro.

    • Bob December 12, 2016 at 4:06 pm #

      ^ and of course… import os 🙂

  12. Jet October 8, 2015 at 4:09 pm #

    Hi, I made this program, and it worked with the hobbit pics. But, when I use my own pics, it doesn’t work? Can you help?

    • Adrian Rosebrock October 9, 2015 at 6:44 am #

      For external query images (i.e. images not inside the Hobbits dataset), you’ll need to manually supply the path to your query image.

      • Tantravahi Aditya October 13, 2015 at 10:55 pm #

        Because I am getting this error

        In [1]: runfile('C:/pythonpkg/IndexingImages.py', wdir='C:/pythonpkg')
        usage: IndexingImages.py [-h] -d DATASET -i INDEX
        IndexingImages.py: error: argument -d/--dataset is required
        An exception has occurred, use %tb to see the full traceback

        • Adrian Rosebrock October 14, 2015 at 6:18 am #

          You need to execute this script via command line and provide the appropriate command line arguments. I would suggest downloading the code to this post (using the download form at the bottom of the article). At the top of index.py and search.py you’ll find example commands that you can copy and paste into your terminal.

  13. Shelly November 15, 2015 at 3:19 pm #

    One more thing. Plotting multiple images is much easier if you just use MatPlotLib subplots – no need to know the image size or have all your images the same size 😉
    Personally, I did a 2×3 subplot to show the original query and the top five results.

  14. Johannes Brodwall December 25, 2015 at 11:26 am #

    In order to get this to work with OpenCV 3.1.0, I had to change

    hist = cv2.normalize(hist)

    to

    hist = cv2.normalize(hist, hist)

    The rest seems to work fine. 🙂

    Thanks!

    • Adrian Rosebrock December 25, 2015 at 12:29 pm #

      The same is true for OpenCV 3.0.0 as well. There is another important change associated with the cv2.findContours method for future reference as well.

    • Vishnu Teja January 3, 2016 at 1:40 am #

      Your comment helped me!! 😀

  15. Melrick Nicolas December 26, 2015 at 3:32 am #

    nice article .. do you have code for ear recognition ? all i can do is an ear detection i dont know how to save the data of a detected ear and how to query the new image of detected ear .. tnx for this tutorial ..i bought your 1st and second edition of your book in bundles with this email addres: team_microtech@yahoo.com.ph .. tnx once again .. cheers!

    • Adrian Rosebrock December 26, 2015 at 8:56 am #

      I personally have never done ear recognition, but I can try to do a tutorial on it in the future.

      • Melrick Nicolas December 27, 2015 at 3:58 am #

        i just want to know how to save and retrieve data just like face recognition .. i can detect ears using haarcascades but i dont know how to recognize it ..any help will be appreciated tnx in advance and happy holidays .

        • Adrian Rosebrock December 27, 2015 at 7:58 am #

          Have you tried using template matching or SSIM? These are simple approaches that don’t work well in complex cases, but they will at least get you started.

  16. Saeid January 7, 2016 at 7:34 am #

    Hi

    Thanks for your amazing guide
    however when i run Search_external.py or index.py i get this error

    Again Thank you

    • Adrian Rosebrock January 7, 2016 at 12:37 pm #

      Please see the comment from Johannes Brodwall above. The issue is you are using OpenCV 3, but this post was written for OpenCV 2.4. Changing the offending line to cv2.normalize(hist, hist) will resolve the problem.

  17. Sahil February 9, 2016 at 8:12 pm #

    Can anyone please suggest how can i cluster these images, once i have retrieved the feature vectors?

    • Adrian Rosebrock February 10, 2016 at 4:37 pm #

      Hey Sahil — as I mentioned in an email to you, I cover image clustering inside the PyImageSearch Gurus course. In the meantime, you should read up on the scikit-learn implementation of KMeans. This post also demonstrates how to use KMeans, just swap out the raw pixel intensities for your feature vectors.

  18. SINDHU June 22, 2016 at 9:21 am #

    Hello Adrian,

    Could you please send me the dataset of 25 images which you have used in this?

    Thanks in advance!!!

    • Adrian Rosebrock June 23, 2016 at 1:17 pm #

      You can download the dataset of images (and the source code) using the “Downloads” form in this blog post.

  19. Hoang June 22, 2016 at 10:27 pm #

    How to change:

    if I use opencv 3+ and python 3.0

    • Adrian Rosebrock June 23, 2016 at 1:06 pm #

      Just change the open method to use the “write” + “binary” flag:

      f = open(args["index"], "wb")

  20. Samuel Lee July 18, 2016 at 8:55 am #

    Hey, I tried this on Python 3.4 and OpenCV 3.1. I changed the following as stated in the comments, things like ‘wb’ and from cPickle to pickle. I managed to run the files without error till the last search part, where when I ran the command prompt I got this error: UnicodeDecodeError: ‘charmap’ codec can’t decode byte 0x8d in position 187: character maps to

    # load the index and initialize our searcher
    index = pickle.loads(open(args["index"]).read())
    searcher = Searcher(index)

    this was where it said it went wrong

    • Adrian Rosebrock July 18, 2016 at 5:10 pm #

      You mentioned using Python 3.4, so to resolve the issue change the write mode from “w” to “wb” to indicate that the index should be serialized as a binary file. This should fix the problem.

  21. Samuel Lee July 19, 2016 at 10:46 pm #

    Hey Adrian,
    Basically, you telling me to change the paths from / to \ was right. Just watch out guys, when using Windows some paths need \\ to run. If not you’ll get an OpenCV error. Thanks

  22. vani July 21, 2016 at 2:33 am #

    hi,
    I had some problems on my laptop. How do I update Python on Windows?
    Which OS is best? Running on LINUX or WINDOWS?

    • Adrian Rosebrock July 21, 2016 at 12:39 pm #

      If you’re interested in studying computer vision and OpenCV, I would encourage you to use a Unix environment. Linux (specifically Ubuntu) and OSX are good choices.

  23. Carlos Arroyave August 17, 2016 at 3:32 pm #

    Thanks Adam,

    very clear, I Iearned a lot from this tutorial.

    • Adrian Rosebrock August 18, 2016 at 9:29 am #

      Fantastic, I’m glad to hear it Carlos! 🙂

  24. Rahul September 11, 2016 at 2:29 pm #

    Where do I save these files.?I am always getting ImportError: No module named pyimagesearch.rgbhistogram ?

    • Adrian Rosebrock September 12, 2016 at 12:46 pm #

      If this is your first time creating a Python project, make sure you download the source code to this post using the “Downloads” form. Then, compare your directory structure with the correct directory structure of the downloaded project. One of the best ways to learn how to organize a Python project is to learn by example.

  25. john September 23, 2016 at 7:10 pm #

    Really liked this beginning intro. I’m looking for ways to make computer science more practical and hands on. Image searching might be too “high bar” for intro computer science, but the kiddos need to see code in action, not just in endless repetitive examples of index array foo…. 🙂 (my opinion)

    Really like the 21 day program so far.

    -many blessings

    • Adrian Rosebrock September 27, 2016 at 8:55 am #

      Thanks John, I’m happy to hear that you’ve enjoyed the tutorial! I try to make all tutorials on the PyImageSearch blog super practical and hands-on. If you have any suggestions for future tutorials feel free to let me know.

  26. Megha November 14, 2016 at 5:16 pm #

    hi Adrian. Thanks for the awesome tutorial. Can you please tell me how to run the codes?

    • Adrian Rosebrock November 16, 2016 at 1:56 pm #

      Download the source code to this post using the “Downloads” section. At the top of every file I’ve included a line that says “USAGE”. Directly below that line is the example command you can use to execute the script.

  27. Suleyman December 9, 2016 at 6:10 am #

    Hi Adrian, Great article and very nice explanation, thank you! I have a question for you. I am not a newbie to OpenCV, but I’m struggling to choose a proper search engine algorithm these days. I’m trying to find and count some specific products on the shelf, which is very complicated. Which algorithm would you choose for that work? Thank You.

    • Adrian Rosebrock December 10, 2016 at 7:11 am #

      It is highly dependent on what your products are, how large your database is, etc. I would suggest utilizing keypoints, local invariant descriptors, and keypoint matching as this will allow you to localize the objects in your image.

  28. Youcef December 17, 2016 at 8:15 am #

    Hello,
    I have just found your Blog, and it’s a great blog
    Thank you for your effort giving help to others.
    I would like to ask you a question: after working through your guide, could I make an application like the one in this link?
    https://www.youtube.com/watch?v=2Zj9o1KsUT4&feature=share

    Like a barcode scanner but with a webcam, with the particularity that it is faster than OCR and performant.

    • Adrian Rosebrock December 18, 2016 at 8:40 am #

      I took a look at the video, but I’m not exactly sure what your question is. Is your goal to build a barcode scanner using a webcam that also performs OCR?

  29. Lugia February 12, 2017 at 6:08 pm #

    Hi Adrian,

    Where can I download the dataset for this tutorial?

    Thanks

    • Adrian Rosebrock February 13, 2017 at 1:37 pm #

      Hi Lugia — the dataset + source code for this tutorial are included in the “Downloads” section.

  30. zizo February 21, 2017 at 9:42 pm #

    thanks very much, but can you give me a code to how I detect cars by python with opencv <3

    • Adrian Rosebrock February 22, 2017 at 1:31 pm #

      I would suggest taking a look at the PyImageSearch Gurus course where I discuss and provide code to detect cars in images.

  31. Anusha March 31, 2017 at 6:22 am #

    Traceback (most recent call last):
    File "search.py", line 24, in
    index = cPickle.loads(open(args["index"]).read())
    TypeError: a bytes-like object is required, not ‘str’
    how to solve this error..

    • Adrian Rosebrock March 31, 2017 at 1:47 pm #

      Which version of Python are you using? Python 2.7? Or Python 3?

    • Dzmitry April 1, 2017 at 8:30 am #

      I’ve changed the interpreter from Python27 to Python36 and after several minor fixes obtained the same error:

      TypeError: a bytes-like object is required, not ‘str’

      Just switch to Python 2.7 version. It is really easy:
      1) In your IDE (if you have IDE) change project interpreter to Python 2.7
      2) In your terminal window call path with python27. For example:

      c:\Python27\python.exe search.py -d "d:\path\to\the\hobbit-lotr-image-search-engine\images" -i index.cpickle

      If you’re using Windows, add this line of code before

      k = imagePath[imagePath.rfind("/") + 1:]

      so it becomes:

      imagePath = imagePath.replace("\\", "/") # replace backslashes in Windows os
      k = imagePath[imagePath.rfind("/") + 1:]

      If you’re using Windows, do not include trailing backslash “\” to your dataset path (“…\images\”), otherwise it’ll not work.

      If you’re using OpenCV3 instead of OpenCV2, change the line of code in rgbhistogram.py file.

      hist = cv2.normalize(hist)
      to
      hist = cv2.normalize(hist, hist)

      • Adrian Rosebrock April 3, 2017 at 2:10 pm #

        Thank you for sharing Dzmitry!

      • DEEPAK KOCHHAR October 14, 2017 at 4:04 am #

        can you please guide me how to use the source code, like I am using Pycharm with python 2.7 and OpenCv 2.4.9 (on windows). How do we run the file?
        Do we need to set the dataset images path, query image path etc anywhere, if so, where and how it is to be done ?

  32. Wade Johnson May 13, 2017 at 4:58 pm #

    Wow!

    I am a complete neophyte but I absolutely loved this article. Great presentation style because it is so easy to understand and to follow with my own code while appreciating the fact that we are just scratching the surface.

    I can’t wait to continue my journey with the other posts.

    You rock, Adrian!

    Live long and prosper, my friend.

    • Adrian Rosebrock May 15, 2017 at 8:43 am #

      Thanks Wade 🙂

  33. Fan July 23, 2017 at 5:06 am #

    Dear Professor, I run the program under Windows. I want to know how to give the path of the image library? I tried to type the command in the CMD window, but there was something wrong. Would you please help me.

  34. ron333 August 20, 2017 at 1:42 pm #

    I changed the downloaded files to run on Python 3.x and OpenCV 3.3
    Here are the changed statements.

    index.py
    import _pickle as cPickle
    f = open(args["index"], "wb")

    search.py
    import _pickle as cPickle
    index = cPickle.loads(open(args["index"], "rb").read())
    for j in range(0, 10):

    search_external.py
    import _pickle as cPickle
    index = cPickle.loads(open(args["index"], "rb").read())
    for j in range(0, 10):

Trackbacks/Pingbacks

  1. Building an Image Search Engine: Defining Your Image Descriptor (Step 1 of 4) - PyImageSearch - February 3, 2014

    […] Monday, I showed you how to build an awesome Lord of the Rings image search engine, from start to finish. It was a lot of fun and we learned a lot. More importantly, we got to look […]

  2. How To Describe and Quantify an Image Using Feature Vectors - April 28, 2014

    […] Clever Girl: A Guide to Utilizing Color Histograms for Computer Vision and Image Search Engines and Hobbits and Histograms, we could also use a 3D color histogram to describe our […]

  3. Building a Pokedex in Python: Indexing our Sprites using Shape Descriptors (Step 3 of 6) - PyImageSearch - April 28, 2014

    […] you may know from the Hobbits and Histograms post, I tend to like to define my image descriptors as classes rather than functions. The reason […]

  4. Accessing the Raspberry Pi Camera with OpenCV and Python - PyImageSearch - May 29, 2015

    […] been the most popular PyImageSearch article for months. And the first (big) tutorial I ever wrote, Hobbits and Histograms, an article on building a simple image search engine, still gets a lot of hits […]
