How to (quickly) build a deep learning image dataset

An example of a Pokedex (thank you to Game Trader USA for the Pokedex template!)

When I was a kid, I was a huge Pokemon nerd. I collected the trading cards, played the Game Boy games, and watched the TV show. If it involved Pokemon, I was probably interested in it.

Pokemon made a lasting impression on me — and looking back, Pokemon may have even inspired me to study computer vision.

You see, in the very first episode of the show (and in the first few minutes of the game), the protagonist, Ash Ketchum, was given a special electronic device called a Pokedex.

A Pokedex is used to catalogue and provide information about the species of Pokemon Ash encounters along his travels. You can think of the Pokedex as a “Pokemon Encyclopedia” of sorts.

When stumbling upon a species of Pokemon he had not seen before, Ash would hold the Pokedex up to the Pokemon, and the Pokedex would automatically identify it for him, presumably via some sort of camera sensor (similar to the image at the top of this post).

In essence, the Pokedex was acting like a smartphone app that utilized computer vision!

We can imagine a similar app on our iPhone or Android today, where:

  1. We open the “Pokedex” app on our phone
  2. The app accesses our camera
  3. We snap a photo of the Pokemon
  4. And then the app automatically identifies the Pokemon

As a kid, I always thought the Pokedex was so cool…

…and now I’m going to build one.

In this three-part blog post series we’re going to build our very own Pokedex:

  1. We’ll start today by using the Bing Image Search API to (easily) build our image dataset of Pokemon.
  2. Next week, I’ll demonstrate how to implement and train a CNN using Keras to recognize each Pokemon.
  3. And finally, we’ll use our trained Keras model and deploy it to an iPhone app (or at the very least a Raspberry Pi — I’m still working out the kinks in the iPhone deployment).

By the end of the series we’ll have a fully functioning Pokedex!

To get started using the Bing Image Search API to build an image dataset for deep learning, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

How to (quickly) build a deep learning image dataset

In order to build our deep learning image dataset, we are going to utilize Microsoft’s Bing Image Search API, part of Microsoft’s Cognitive Services, a suite used to bring AI-powered vision, speech, text, and more to apps and software.

In a previous blog post, you’ll remember that I demonstrated how you can scrape Google Images to build your own dataset — the problem here is that it’s a tedious, manual process.

Instead, I was looking for a solution that would enable me to programmatically download images via a query.

I did not want to have to open my browser or utilize browser extensions to download the image files from my search.

Many years ago Google deprecated its own image search API (which is the reason we need to scrape Google Images in the first place).

A few months ago I decided to give Microsoft’s Bing Image Search API a try. I was incredibly pleased.

The results were relevant and the API was easy to consume.

The API includes a free 30-day trial, after which it seems reasonably priced (I haven’t converted to a paying customer yet, but I probably will given the pleasant experience).

In the remainder of today’s blog post, I’ll be demonstrating how we can leverage the Bing Image Search API to quickly build an image dataset suitable for deep learning.

Creating your Cognitive Services account

In this section, I’ll provide a short walkthrough of how to get your (free) Bing Image Search API account.

The actual registration process is straightforward; however, finding the actual page that kicks off the registration process is a bit confusing — it’s my primary critique of the service.

To get started, head to the Bing Image Search API page:

Figure 1: We can use the Microsoft Bing Search API to download images for a deep learning dataset.

As we can see from the screenshot, the trial includes all of Bing’s search APIs with a total of 3,000 transactions per month — this will be more than sufficient to play around and build our first image-based deep learning dataset.

To register for the Bing Image Search API, click the “Get API Key” button.

From there you’ll be able to register by logging in with your Microsoft, Facebook, LinkedIn, or GitHub account (I went with GitHub for simplicity).

After you finish the registration process you’ll end up on the Your APIs page which should look similar to my browser below:

Figure 2: The Microsoft Bing API endpoints along with my API keys which I need in order to use the API.

Here you can see my list of Bing search endpoints, including my two API keys (blurred out for obvious reasons).

Make note of your API key as you’ll need it in the next section.

Building a deep learning dataset with Python

Now that we have registered for the Bing Image Search API, we are ready to build our deep learning dataset.

Read the docs

Before continuing, I would recommend opening up the following two Bing Image Search API documentation pages in your browser:

You should reference these two pages if you have any questions on either (1) how the API works or (2) how we are consuming the API after making a search request.

Install the requests package

If you do not already have requests installed on your system, you can install it via:
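A single pip command does it:

```sh
$ pip install requests
```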

The requests package makes it super easy for us to make HTTP requests and not get bogged down in fighting with Python to gracefully handle requests.

Additionally, if you are using Python virtual environments, make sure you use the workon command to access the environment before installing requests:
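For example, assuming your environment is named cv (swap in your own environment’s name):

```sh
$ workon cv
$ pip install requests
```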

Create your Python script to download images

Let’s go ahead and get started coding.

Open up a new file, name it search_bing_api.py, and insert the following code:
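A minimal sketch of this opening block, covering the imports and the two command line arguments described below (the -q/-o short flags and the help strings are my own choices):

```python
# import the necessary packages
from requests import exceptions
import argparse
import requests
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-q", "--query", required=True,
    help="search query to search Bing Image API for")
ap.add_argument("-o", "--output", required=True,
    help="path to output directory of images")
args = vars(ap.parse_args())
```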

Lines 2-6 handle importing the packages necessary for this script. You’ll need OpenCV and requests installed in your virtual environment. To set up OpenCV, just follow the relevant installation guide for your system here.

Next, we parse two command line arguments:

  • --query: The image search query you’re using, which could be anything such as “pikachu”, “santa”, or “jurassic park”.
  • --output: The output directory for your images. My personal preference (for the sake of organization and sanity) is to separate your images into separate class subdirectories, so be sure to specify the correct folder that you’d like your images to go into (shown below in the “Downloading images for training a deep neural network” section).

You do not need to modify the command line arguments section of this script (Lines 9-14). These are inputs you give the script at runtime. To learn how to properly use command line arguments, see my recent blog post.

From there, let’s configure some global variables:
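A sketch of those globals, assuming the v7.0 Image Search endpoint; you paste your own key into API_KEY:

```python
# set your Microsoft Cognitive Services API key along with (1) the
# maximum number of results for a given search and (2) the group size
# for results (maximum of 50 per request)
API_KEY = "INSERT_YOUR_API_KEY_HERE"
MAX_RESULTS = 250
GROUP_SIZE = 50

# set the endpoint API URL
URL = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"
```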

The one part of this script that you must modify is API_KEY. You can grab an API key by logging into Microsoft Cognitive Services and selecting the service you’d like to use (as shown above where you need to click the “Get API Key” button). From there, simply paste the API key within the quotes for this variable.

You can also modify MAX_RESULTS and GROUP_SIZE for your search. Here, I’m limiting my results to the first 250 images and requesting the maximum number of images allowed per Bing API request (50 images per request).

You can think of the GROUP_SIZE parameter as the number of search results to return “per page”. Therefore, if we would like a total of 250 images, we would need to go through 5 “pages” with 50 images “per page”.

When training a Convolutional Neural Network, I would ideally like to have ~1,000 images per class, but this is just an example. Feel free to download as many images as you would like, just be mindful:

  1. That all images you download should still be relevant to the query.
  2. You don’t bump up against the limits of Bing’s free API tier (otherwise you’ll need to start paying for the service).

From there, let’s make sure that we are prepared to handle all (edit: most) of the possible exceptions that can arise when trying to fetch an image by first making a list of the exceptions we may encounter:
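A sketch of that list, assuming the built-in file errors plus the usual requests exceptions (on Python 2.7, drop FileNotFoundError since IOError covers it, as noted in the comments below):

```python
# when attempting to download images from the web both the Python
# programming language and the requests library have a number of
# exceptions that can be thrown, so build a list of them now so we
# can filter on them later
EXCEPTIONS = set([IOError, FileNotFoundError,
    exceptions.RequestException, exceptions.HTTPError,
    exceptions.ConnectionError, exceptions.Timeout])
```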

When working with network requests there are a number of exceptions that can be thrown, so we list them on Lines 30-32. We’ll try to catch them and handle them gracefully later.

From there, let’s initialize our search parameters and make the search:
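A sketch of that block; the Ocp-Apim-Subscription-Key header is how the Bing API expects the key, totalEstimatedMatches is the field holding the estimated result count, and the exact print statements are my own:

```python
# store the search term in a convenience variable, then set the
# headers and search parameters
term = args["query"]
headers = {"Ocp-Apim-Subscription-Key": API_KEY}
params = {"q": term, "offset": 0, "count": GROUP_SIZE}

# make the search
print("[INFO] searching Bing API for '{}'".format(term))
search = requests.get(URL, headers=headers, params=params)
search.raise_for_status()

# grab the results from the search, including the total number of
# estimated results returned by the Bing API
results = search.json()
estNumResults = min(results["totalEstimatedMatches"], MAX_RESULTS)
print("[INFO] {} total results for '{}'".format(estNumResults, term))

# initialize the total number of images downloaded thus far
total = 0
```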

On Lines 36-38, we initialize the search parameters. Be sure to review the API documentation as needed.

From there, we perform the search (Lines 42-43) and grab the results in JSON format (Line 47).

We calculate and print the estimated number of results to the terminal next (Lines 48-50).

We’ll be keeping a counter of the images downloaded as we go, so I initialize total on Line 53.

Now it’s time to loop over the results in GROUP_SIZE chunks:
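A sketch of the outer loop, stepping through the results one “page” (offset) at a time:

```python
# loop over the estimated number of results in GROUP_SIZE groups
for offset in range(0, estNumResults, GROUP_SIZE):
    # update the search parameters using the current offset, then
    # make the request to fetch that page of results
    print("[INFO] making request for group {}-{} of {}...".format(
        offset, offset + GROUP_SIZE, estNumResults))
    params["offset"] = offset
    search = requests.get(URL, headers=headers, params=params)
    search.raise_for_status()
    results = search.json()
```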

Here we are looping over the estimated number of results in GROUP_SIZE batches as that is what the API allows (Line 56).

The current offset is passed as a parameter when we call requests.get to grab the JSON blob (Line 62).

From there, let’s try to save the images in the current batch:
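A sketch of that inner loop (it sits inside the offset loop above); contentUrl is the field where the Bing API exposes each image URL, while the zero-padded filename scheme and the 30-second timeout are my assumptions:

```python
    # loop over the results for this page
    for v in results["value"]:
        # try to download the image
        try:
            # make a request to download the image itself
            print("[INFO] fetching: {}".format(v["contentUrl"]))
            r = requests.get(v["contentUrl"], timeout=30)

            # build the path to the output image: a zero-padded
            # counter plus the original file extension
            ext = v["contentUrl"][v["contentUrl"].rfind("."):]
            p = os.path.sep.join([args["output"],
                "{}{}".format(str(total).zfill(8), ext)])

            # write the image to disk
            f = open(p, "wb")
            f.write(r.content)
            f.close()

        # catch any errors that would prevent us from downloading
        # the image
        except Exception as e:
            # if the exception is in our list, skip this image
            if type(e) in EXCEPTIONS:
                print("[INFO] skipping: {}".format(v["contentUrl"]))
                continue

            # anything else is unexpected, so re-raise it
            raise
```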

Here we’re going to loop over the current batch of images and attempt to download each individual image to our output folder.

We establish a try/except block so that we can catch the possible EXCEPTIONS which we defined earlier in the script. If we encounter an exception, we’ll skip that particular image and move forward (Line 71 and Lines 88-93).

Inside of the try block, we attempt to fetch the image by URL (Line 74) and build a path + filename for it (Lines 77-79).

We then try to open and write the file to disk (Lines 82-84). It’s worth noting here that we’re creating a binary file object denoted by the b in "wb". We access the binary data via r.content.

Next, let’s see if the image can actually be loaded by OpenCV which would imply (1) that the image file was downloaded successfully and (2) the image path is valid:
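A sketch of that check, still inside the inner loop and placed after the try/except block above:

```python
        # try to load the image from disk
        image = cv2.imread(p)

        # if the image is None we could not properly load it, so
        # delete it and move on without updating the counter
        if image is None:
            print("[INFO] deleting: {}".format(p))
            os.remove(p)
            continue

        # otherwise the image is valid, so update the counter
        total += 1
```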

In this block, we load the image file on Line 96.

As long as the image data is not None, we update our total counter and loop back to the top.

Otherwise, we call os.remove to delete the invalid image and we continue back to the top of the loop without updating our counter. The if-statement on Line 100 could trigger due to network errors when downloading the file, not having the proper image I/O libraries installed, etc. If you’re interested in learning more about NoneType errors in OpenCV and Python, refer to this blog post.

Downloading images for training a deep neural network

Figure 3: The Bing Image Search API is so easy to use that I love it as much as I love Pikachu!

Now that we have our script coded up, let’s download images for our deep learning dataset using Bing’s Image Search API.

Make sure you use the “Downloads” section of this guide to download the code and example directory structure.

In my case, I am creating a dataset directory:
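From the project directory, that is simply:

```sh
$ mkdir dataset
```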

All images downloaded will be stored in dataset. From there, execute the following commands to make a subdirectory and run the search for “charmander”:
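Assuming search_bing_api.py sits in your current working directory, the pattern looks like this:

```sh
$ mkdir dataset/charmander
$ python search_bing_api.py --query "charmander" --output dataset/charmander
```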

As I mentioned in the introduction of this post, we are downloading images of Pokemon to be used when building a Pokedex (a special device to recognize Pokemon in real-time).

In the above command I am downloading images of Charmander, a popular Pokemon. Most of the 250 images will download successfully, but as you’ll see in the script’s terminal output, a few won’t be able to be opened by OpenCV and will be deleted.

I do the same for Pikachu, Squirtle, Bulbasaur, and finally Mewtwo:
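A sketch of those commands, following the same pattern as the charmander example above (create each class subdirectory first):

```sh
$ mkdir dataset/pikachu
$ python search_bing_api.py --query "pikachu" --output dataset/pikachu
$ mkdir dataset/squirtle
$ python search_bing_api.py --query "squirtle" --output dataset/squirtle
$ mkdir dataset/bulbasaur
$ python search_bing_api.py --query "bulbasaur" --output dataset/bulbasaur
$ mkdir dataset/mewtwo
$ python search_bing_api.py --query "mewtwo" --output dataset/mewtwo
```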

We can count the total number of images downloaded per query by using a bit of find magic (thank you to Glenn Jackman on StackOverflow for this great command hack):
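A bash one-liner along those lines (a sketch; it prints the number of entries in each subdirectory of dataset, and the top-level dataset directory itself will show up in the listing as well):

```bash
$ find dataset -type d -print0 | while read -d '' -r dir; do files=("$dir"/*); printf "%5d files in directory %s\n" "${#files[@]}" "$dir"; done
```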

Here we can see we have approximately 230-245 images per class. Ideally, I would like to have ~1,000 images per class, but for the sake of simplicity in this example and network overhead (for users without a fast/stable internet connection), I only downloaded 250.

Note: If you use that ugly find command often, it would be worth making an alias in your ~/.bashrc!

Pruning our deep learning image dataset

However, not every single image we downloaded will be relevant to the query — most will be, but not all of them.

Unfortunately this is the manual intervention step where you need to go through your directories and prune irrelevant images.

On macOS this is actually a pretty quick process.

My workflow involves opening up Finder and then browsing all images in the “Cover Flow” view:

Figure 4: I’m using the macOS “Cover Flow” view in order to quickly flip through images and filter out those that I don’t want in my deep learning dataset.

If an image is not relevant I can move it to the Trash via cmd + delete on my keyboard. Similar shortcuts and tools exist on other operating systems as well.

After pruning the irrelevant images, let’s do another image count:

As you can see, I only had to delete a handful of images per class — the Bing Image Search API worked quite well!

Note: You should also consider removing duplicate images. I didn’t take this step as there weren’t too many duplicates (except for the “squirtle” class; I have no idea why there were so many duplicates there), but if you’re interested in learning more about how to find duplicates, see this blog post on image hashing.

Summary

In today’s blog post you learned how to quickly build a deep learning image dataset using Microsoft’s Bing Image Search API.

Using the API we were able to programmatically download images for training a deep neural network, a huge step up from having to manually scrape images using Google Images.

The Bing Image Search API is free to use for 30 days which is perfect if you want to follow along with this series of posts.

I’m still in my trial period, but given the positive experience thus far I would likely pay for the API in the future (especially since it will help me quickly create datasets for fun, hands-on deep learning PyImageSearch tutorials).

In next week’s blog post I’ll be demonstrating how to train a Convolutional Neural Network with Keras on top of the deep learning images we downloaded today. And in the final post in the series (coming in two weeks), I’ll show you how to deploy your Keras model to your smartphone (if possible — I’m still working out the kinks in the Keras + iOS integration).

This is a can’t miss series of posts, so don’t miss out! To be notified when the next post in the series goes live, just enter your email address in the form below.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


83 Responses to How to (quickly) build a deep learning image dataset

  1. Marcos Gomes-Borges April 9, 2018 at 12:45 pm #

    Thank you very much for this useful and inspirational tutorial! Your posts are always creative and helpful. This post about Pokemon gave me lots of insights and reminded me of good moments 🙂

    • Adrian Rosebrock April 9, 2018 at 1:53 pm #

      Thanks Marcos!

  2. Oliver R April 9, 2018 at 1:36 pm #

    I am incredibly grateful that you are still posting free content for your visitors. I am currently learning all about machine learning (have been for a few months) and your tutorials are incredibly well done and lead me to a quite good understanding of it all – without the math bits that you usually get on other tutorials.

    You present it far better than any other website I’ve used so far.

    • Adrian Rosebrock April 9, 2018 at 1:53 pm #

      Thank you so much for the kind words, Oliver. I really appreciate that 🙂

    • Paul Zikopoulos April 10, 2018 at 12:22 pm #

      You should try the paid stuff … it’s my go to … made me a different kind of employee

      • Adrian Rosebrock April 10, 2018 at 12:49 pm #

        Thanks Paul 🙂

  3. Steve Cox April 9, 2018 at 4:33 pm #

    Keep up the great work Adrian, looking forward to seeing how this example works out.

    I just finished up developing an iOS / Android mobile app for Texas A&M and it can be “interesting” to say the least with regards to deploying to app stores.

    I’m saving up for your deep learning book bundle !!!

    • Adrian Rosebrock April 9, 2018 at 5:30 pm #

      I think “interesting” is the exact way to phrase it 😉 Congrats on the app for Texas A&M, that’s great. Let me know if you have any questions about Deep Learning for Computer Vision with Python.

  4. Gidi April 9, 2018 at 4:33 pm #

    Hi Adrian, Great Post!

    I am currently working on a similar thing!

    Did you try this package – https://github.com/hardikvasa/google-images-download? It’s a simple CLI utility, good for scraping Google Images. I’m curious how it compares to the Yahoo API.

    Additionally, I thought about how I could replace the manual deletion of images (extremely painful when working on a remote server, but on a PC as well), perhaps by running some kind of anomaly detection in image feature space?

    Looking forward for your next posts, specifically for the deployment (though I prefer Android 🙂 )

    • Adrian Rosebrock April 9, 2018 at 5:33 pm #

      It’s on my list of tools to try (so little time) but I haven’t actually used it yet. If the tool works as promised, that’s nice, but in this particular case I’d tend to stick with an API that has consistent endpoints/results versus trying to scrape. Google Images can and will update without warning.

      As far as replacing the manual deletion of images, you need to be careful. While it’s a tedious, arduous process, you still need to keep in mind “garbage in, garbage out”. You can’t expect a machine learning or deep learning model to perform well if it doesn’t have data that is labeled mostly correct. I say “mostly” here because some mistakes are not only possible but also encouraged — no real-world dataset is 100% cleanly labeled.

      • David Bonn April 9, 2018 at 8:54 pm #

        I’ve found in my limited experience that you can’t replace the manual curation of images. But you can make it a lot more efficient.

        You can often use selective search or tiling plus a simple color filter to identify interesting candidate images. This technique is also helpful for making efficient use of a very-high resolution image and/or taking an image that is somehow cluttered and automagically acquiring a much less cluttered image. Selective search and tiling are also often (but not always) great ways to produce many more images, and more images rarely hurt.

        You do need to be careful with selective search because a lot of the images you generate will be very similar and when you downscale them they might end up being identical or very nearly identical, which wouldn’t be a good thing.

    • kaisar khatak July 5, 2018 at 3:46 am #

      https://github.com/hardikvasa/google-images-download works pretty well (for now)…

  5. Marcos April 9, 2018 at 9:10 pm #

    Very good article, I’m looking forward to the next part. I am studying a lot about computer vision, your articles have been great allies.

    • Adrian Rosebrock April 10, 2018 at 11:59 am #

      Thank you for the kind words, Marcos. I’m happy you have found the articles useful. The next two posts in the series are going to be even better!

  6. David Bendell April 9, 2018 at 9:31 pm #

    This code is incredible. Thank you so much for sharing! In 4 minutes I have more and higher quality images than I was able to achieve over the weekend in many painful hours (manually on Bing)!

    • Adrian Rosebrock April 10, 2018 at 11:59 am #

      Fantastic news, David! I’m glad you found the post helpful 🙂

  7. Gilad April 10, 2018 at 4:22 am #

    Adrian – Thx,
    I’m trying to do the same for actresses (Pikachu is less my type).
    Can’t wait for the next parts.
    G

  8. Paul Zikopoulos April 10, 2018 at 4:07 pm #

    Nice work — I’m ready to go and expand as we build out the model, perhaps to do some logo recognition … like a Stanley Cup bound hockey team such as the Toronto Maple Leafs. And Pikachu and his mates are all ready to go for Part 2 next week!

    • Adrian Rosebrock April 11, 2018 at 9:09 am #

      Logo recognition and logo detection is absolutely possible. That will also be a subject of a future PyImageSearch blog post 😉

  9. Dave Xanatos April 11, 2018 at 7:34 am #

    Your timing here was perfect! I had just completed installing Anaconda, Tensorflow and Keras on my laptop when your post arrived. I got my API key, plugged it into your great code, and by the end of the evening I had downloaded my first three datasets (pinecones in grass, firewood, loose cordwood) I’ll be using for my yard utility bot. You saved me many, many neck-crunching long hours of manual image saving! Looking forward to the next steps (which I am guessing will include resizing/cropping to a standard size, etc?) Thanks again!!

    • Adrian Rosebrock April 11, 2018 at 8:55 am #

      Congrats on getting your deep learning system configured — and a second congrats on getting your dataset downloaded. The next tutorial will cover how to load the image dataset into memory and prepare it for training. We’ll then train a Convolutional Neural Network on the dataset. Enjoy it and feel free to reach out if you have any questions.

      • Dave Xanatos April 11, 2018 at 8:00 pm #

        Looking forward to the next installments enormously. Your excellent materials got me started with OpenCV on the R Pi and I’ve done some amazing stuff since then. Learning how to “feed” the net I will train up on these datasets to my robotic systems will be a huge step forward for me. Thanks again, very much.

        • Adrian Rosebrock April 13, 2018 at 6:54 am #

          Thanks Dave. I’m so happy to hear my tutorials and posts have helped you with OpenCV and the Raspberry Pi. Comments like these really make my day 🙂

      • Clinton April 12, 2018 at 1:42 pm #

        Looking forward to training period. This post was one of your best Adrian!

        • Adrian Rosebrock April 13, 2018 at 6:44 am #

          Thanks Clinton, I really appreciate that! 🙂

  10. Akbar Hidayatuloh April 13, 2018 at 7:18 am #

    Really waiting for the next part, this will be great tutorial series.

    Thank you so much!

    • Adrian Rosebrock April 16, 2018 at 2:41 pm #

      Thanks Akbar! The latest blog post is live. You can find it here.

  11. Duhai April 13, 2018 at 7:37 am #

    Thanks, Adrian

    I don’t know how you make such a complicated task so simple. I just made my first Cats vs. Dogs dataset. I am looking forward to the next tutorial.

    Duhai

    • Adrian Rosebrock April 16, 2018 at 2:40 pm #

      Thank you for the kind words, Duhai. I really appreciate that 🙂 Congrats on creating your first image dataset. You can find the latest post here.

  12. Chad April 18, 2018 at 4:46 pm #

    Adrian, this is super helpful! Quick question: I’m finding that a lot of the images that Bing is returning have watermarks on them. Do you think there’s a way to prevent that? If there’s not a simple fix, do you think the watermarks will adversely affect the training (intuition says yes, but I’d like to hear your opinion). Thanks!

    • Adrian Rosebrock April 20, 2018 at 10:12 am #

      Keep in mind that the Bing Image Search API doesn’t really care if the images have watermarks in them — it’s just returning images that are most relevant to your query. Whatever you are searching for just appears to have a lot of watermarked results. This may hurt training slightly but you would need to run experiments of your own to determine this. It’s really impossible to know without knowing more about the project, what images you are using, and how/where you intend on deploying the end model.

  13. Avinash Patel April 20, 2018 at 5:01 am #

    Nice tutorial

    • Adrian Rosebrock April 20, 2018 at 9:37 am #

      Thanks Avinash, I’m glad you enjoyed it! 🙂

  14. Chaos April 20, 2018 at 7:33 am #

    I wonder what I did wrong, but the script skips every single image it fetches, doesn’t matter what I search for. This happens even if I switch to the second API key on my account. 🙁

    • Adrian Rosebrock April 20, 2018 at 8:01 am #

      My guess is that you did not create the correct directory structure. Before you run the script you should ensure you have the “dataset” and whatever subdirectory for the class inside the “dataset” directory:

      Double-check your directory structure. I’m confident this is the issue.

      • Chaos April 20, 2018 at 8:45 am #

        Thank you. The directory was correct; it seems it was a permissions issue 🙂

        • Adrian Rosebrock April 20, 2018 at 8:59 am #

          Nice, congrats on resolving the issue 🙂

  15. Buse Onek April 21, 2018 at 8:28 am #

    Thank you so much, I’m following these steps to make an app. I wish next week you would run this model on Android :/ I need information about this app on Android

    • Adrian Rosebrock April 23, 2018 at 12:02 pm #

      Hey Buse — I do not have an Android device. If someone wants to lend me an Android device so I can figure out how to do it I will consider it.

      • MImranKhan May 1, 2018 at 2:21 am #

        please make a tutorial on keras and android please

        • Adrian Rosebrock May 1, 2018 at 1:21 pm #

          MImranKhan — stay tuned for a potential future blog post on using Keras on Android!

  16. Simeon Trieu April 22, 2018 at 9:56 am #

    Adrian, I’ve run your example code on a Google Cloud Platform Compute instance, and I’m getting an error:

    Traceback (most recent call last):

    requests.exceptions.HTTPError: 401 Client Error: Access Denied for url: https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=squirtle&offset=0&count=50

    Any idea what I’m missing? Thanks.

    • Adrian Rosebrock April 23, 2018 at 12:02 pm #

      Since you’re getting an “Access Denied” error my guess would be that your API key is incorrect.

      • Merwan August 22, 2018 at 12:56 am #

        Hi Adrian

        First thank you for this great tutorial as always,
        I am getting the same error as Simeon Trieu (i.e. Access Denied for url: …)

        Any idea how this can be solved ?

        About the “API key”: when I registered I also received two keys, the same as you showed in the picture above. Which one should be used in the Python script, or does it not matter?
        and should it be used with the link URL = “https://api.cognitive.microsoft.com/bing/v7.0/images/search” ?

        Merwan
        and thank you again

        • Adrian Rosebrock August 22, 2018 at 9:24 am #

          You can use either API key. Keep in mind that your API keys are valid for 30 days, so my guess is that your API keys have now expired. In that case you’ll need to actually start paying for the API.

  17. Danny April 30, 2018 at 11:02 am #

    I am using Python 2.7. I just deleted “FileNotFoundError” and it works very well.

    • Adrian Rosebrock April 30, 2018 at 12:10 pm #

      Hi Danny — in Python 2.7 IOError is equivalent to FileNotFoundError. You did the right thing!

  18. Mao May 1, 2018 at 11:05 pm #

    Thank you very much Adrian!!! for the nice tutorial!!! As a starter, I really enjoy your tutorial with kind explanation for every line of code.

    I have a question that may sound stupid but I was told there’s no stupid question, so I’m going to ask 🙂

    In lines 40 to 50, you made one search and I guess the goal is to get estNumResults?

    But since you set params[“count”] as 50 (the GROUP_SIZE) on line 38, won’t the search return 50 results at maximum? Then estNumResults cannot be greater than 50 and the for loop will only execute one time.

    I read the API document but I’m still confused about the count params used here.

    Thanks again for the great (maybe the best I can find) tutorial!!!

    • Mao May 1, 2018 at 11:31 pm #

      Oh! I was wrong about the return from search.

      I only check the API for count but not totalEstimatedMatches. Now I know that totalEstimatedMatches returns the total number relative to the query regardless of the count…

  19. Dong May 5, 2018 at 2:24 am #

    Thank you very much for this excellent tutorial!

    • Adrian Rosebrock May 5, 2018 at 8:18 am #

      I’m glad you enjoyed it! 🙂

  20. Jaylen May 11, 2018 at 4:38 pm #

    Hey,

    for some reason when it fetches the images, the images overwrite each other instead of saving separately. Right now I only have 3 images, but when i run it again it just replaces those images instead of saving each one that is fetched

    • Adrian Rosebrock May 12, 2018 at 6:13 am #

      It sounds like the counter is not incrementing correctly. Did you use the “Downloads” section of this page to download the code? Or did you copy and paste? I would recommend using the “Downloads” section to download my code so you execute it exactly (and don’t run into any issues that could have happened during copying and pasting).

  21. Hermes May 12, 2018 at 2:58 pm #

    Thanks a lot for this tutorial, Adrian. It’s so simple and elegant. Would you know if there is a cap on the totalEstimatedMatches? Made a lot of searches and none of them returned more than 900 results. Someone online couldn’t retrieve more than 1000 results, even for simple terms like “cat”. Could this be due to the free tier?

    • Adrian Rosebrock May 14, 2018 at 6:56 am #

      I’m not sure why you would not be able to retrieve more than 1,000 results. I would suggest contacting the Bing API team.

  22. Nick May 24, 2018 at 11:45 am #

    Hi, I want to ask you about creating an image dataset.
    Can I use image transformations like changing the lighting (increasing the brightness of the image), or adding some shadow via the OpenCV library, and how efficient would it be?
    I ask this because I want to create a regression model based on image input but I only have a few dozen images.

    I know that it depends on the situation, but I didn’t see anything about using this.
    What do you think about it?

    • Adrian Rosebrock May 25, 2018 at 5:58 am #

      What you are referring to is “data augmentation”, or more specifically in this case “image augmentation”. The Keras library has built-in support for image augmentation via the ImageDataGenerator class. You should also take a look at the imgaug library.

      That said with only a few dozen images I would recommend gathering more data. Even with data augmentation you could easily be at risk of overfitting even small models.

  23. Nikita Voloshenko May 27, 2018 at 6:29 pm #

    Hey!

    I made a bit quicker solution using gevent based on your code, check it out here https://gist.github.com/stivens13/5fc95ea2585fdfa3897f45a2d478b06f.

    I’m sure it can be faster, so comment any improvements if you’d like. In another project of mine, it gave me x20 speed (time) acceleration, much slower in here though for some reason.

    • Adrian Rosebrock May 28, 2018 at 9:32 am #

      Thanks for sharing, Nikita!

  24. Nilesh June 19, 2018 at 1:12 am #

    Hi Adrian

    Fantastic post. Is there a way one can make a dataset of individuals for face recognition? For example if I want to make a dataset of my face, how would i do that?

    Thanks.

  25. Galiya July 4, 2018 at 4:13 am #

    Hi Adrian!
    I love your blog, it is educating and entertaining at one time. I have a question regarding the training dataset preparation. If our task is classification, is it required to cut image so that it contains only our object? Are there any tools to make an automatic cropping of images so that it contains the object only? Also when there are several objects in one image and we want to crop them to have several images what tools are best to use?

    Thank you!

    • Adrian Rosebrock July 5, 2018 at 6:36 am #

      If your task is classification you shouldn’t have to crop out just the object provided the object is the dominant object in the image. For example, if you had an image of an elephant that took up 95% of the image but you wanted to classify a small bird sitting on top of that elephant that takes up 0.5% of the image you may want to consider cropping.

      As far as “automatic” cropping goes, you could see if there is a pre-trained object detector for the objects you want to crop, but that partially defeats the purpose. If you already have an object detector for the class, why are you training it in the first place?

      For tools, take a look at your standard image processing apps like Photoshop and GIMP, but for pure annotation take a look at LabelMe and imglab.

  26. Ari Singh July 4, 2018 at 4:59 pm #

    Hey Adrian!

    The instructions in the post worked very well for 5 of my 50 image collections, but on the 6th, decided to fail. The program skips every one of the images, leaving me with nothing to train. How could this be resolved?

    Thanks,
    Ari

    • Adrian Rosebrock July 5, 2018 at 6:21 am #

      It could be the case that all of the remaining images are invalid and the script is removing them from your machine. Could you try with a different query to confirm that is the case?

  27. kaisar khatak July 5, 2018 at 2:41 am #

    Great Post! The BING API worked pretty well. I ran for “owen grady” but did notice there were a lot of duplicates/cosplay imitators and some images did not even have a face/person in them. Maybe add a face detector check during the pruning process???

    • Adrian Rosebrock July 5, 2018 at 6:12 am #

      You could certainly do that as well 🙂

  28. Ario July 11, 2018 at 12:57 pm #

    Hi and thanks for this useful post.
    Is there any problem if the difference in the number of samples of the two classes is too high?
    For example class A has 900 images and class B has 2000…

    • Adrian Rosebrock July 13, 2018 at 5:13 am #

      That could potentially create a problem depending on which algorithm you use for face recognition. Some machine learning algorithms can help correct for class imbalance but in an ideal world you should have a very similar number of examples for each class.

  29. HandsomeJoe July 13, 2018 at 3:33 pm #

    Hello:

    I continue to learn a lot from your Blogs, I hope I get smart enough to make similar contributions to the user community.

    I am following your example to build a deep learning data set and I have a question:

    If your Bing searches for your learning dataset only found images that had multiple Pikachus in each of the images, would your Pokedex app still be able to use them to recognize a “single” Pikachu in subsequent images?

    Thanks

    • Adrian Rosebrock July 17, 2018 at 8:09 am #

      Yes, the CNN would very likely be able to recognize single Pikachus even if some or even most of the training data contained multiple Pikachus.

  30. Kelvin July 31, 2018 at 6:04 pm #

    For some odd reason, after doing the ‘python search_bing_api.py --query "pikachu" --output dataset/pikachu’ part, I only get about 50 downloaded images. Is there some way to download more?

    • Adrian Rosebrock August 2, 2018 at 9:40 am #

      Does the script immediately error out? Are you receiving any type of warning/notification? Does the script exit gracefully? If you can provide more information that would be helpful.

  31. Mudair Hussain August 10, 2018 at 8:09 am #

    Hello Sir, how are you!
    First of all, I am very thankful to you for everything! Dear Sir, I am new here. Will you please guide me on noise addition, filters, morphological operations, and thresholding? Also, I faced many difficulties during this article. If you could share a video it would be very helpful.

    Thanks a Lot!

    • Adrian Rosebrock August 10, 2018 at 8:37 am #

      Hey Mudair — it sounds like you’re interested in studying the fundamentals of computer vision and image processing using OpenCV. I would suggest reading through Practical Python and OpenCV to help you get up to speed. I hope that points you in the right direction!

  32. vishal August 22, 2018 at 10:40 pm #

    Hey, can we install Keras and OpenCV on Windows? If yes, can you give any tutorial or reference?
    I wanted to do all this on my Windows machine and then transfer it to the Raspberry Pi.

  33. SOORAJ September 17, 2018 at 10:22 am #

    Hello Adrian, first of all, the post is very helpful even for novices. I followed every step to the end, but the folder I selected to download images into is empty. Why is that?

    • Adrian Rosebrock September 17, 2018 at 2:05 pm #

      Most likely your path to your output directory/subdirectory is incorrect. Double-check those paths and try again.

  34. Saurabh October 4, 2018 at 2:20 pm #

    Thanks a lot!
    Going through your blogs has made me so interested in computer vision that I can’t wait to dig deeper.

    • Adrian Rosebrock October 8, 2018 at 9:56 am #

      Thanks Saurabh, I’m glad you’re enjoying the blog 🙂

Trackbacks/Pingbacks

  1. Keras and Convolutional Neural Networks (CNNs) - PyImageSearch - April 16, 2018

    […] Part 1: How to (quickly) build a deep learning image dataset […]

  2. Running Keras models on iOS with CoreML - PyImageSearch - April 23, 2018

    […] (Quickly) create a deep learning image dataset […]
