Building a Pokedex in Python: Finding the Game Boy Screen (Step 4 of 6)


Figure 1: Finding a Game Boy screen in an image using Python and OpenCV.

Quick question.

How does a Pokedex work?

Well, you simply point it at a Pokemon, the Pokedex examines its physical characteristics, and the Pokemon is identified instantly.


In this case, our smartphone camera is our “Pokedex”. We point our smartphone at our Game Boy, snap a photo of it, and our rival Pokemon is identified (if you don’t believe me, you can see my Pokedex in action by watching this YouTube clip).

However, there is a lot of information in our image that we don’t need.

We don’t need the shell of the Game Boy case. We don’t need the A, B, up, down, left, right, start, or select buttons. And we certainly don’t care about the background our image was photographed on.

All we care about is the Game Boy screen.

Because once we find that Game Boy screen, we can crop out the Pokemon, and perform the identification.

In this post I’ll show you how to automatically find a Game Boy screen in an image using nothing but Python and OpenCV. Specifically, we’ll be using the OpenCV contours functionality and the findContours function in the cv2 package.


Here we go.

OpenCV and Python versions:
This example will run on Python 2.7 and OpenCV 2.4.X.

Previous Posts

This post is part of an on-going series of blog posts on how to build a real-life Pokedex using Python, OpenCV, and computer vision and image processing techniques. If this is the first post in the series that you are reading, definitely take the time to go back and check out the earlier posts.

Being able to find a Game Boy screen in an image isn’t just cool, it’s super practical. I can think of 10-15 different ways to build a small mobile app business using nothing but Game Boy screenshots and mobile technology, such as smartphones.

Sound interesting? Don’t be shy. Send me a message and we can chat some more.

Anyway, after you read this post, go back to the previous posts in this series for some added context and information.

Building a Pokedex in Python: Finding the Game Boy Screen

Before we can find the Game Boy screen in an image, we first need an image of a Game Boy:


Figure 2: Our original Game Boy query image. Our goal is to find the screen in this image.

By the way, if you want the raw, original image, be sure to download the source code at the bottom of this post. I’ve thrown in my FREE 11-page Image Search Engine Resource Guide PDF just to say thanks for downloading the code.

Okay, so now that we have our image, our goal is to find the screen of our Game Boy and highlight it, just as we did in the middle screenshot of Figure 1 at the top of this post.

Fire up your favorite text editor and create a new file for our screen-finding script. We’re about to get our hands dirty:
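Here’s a minimal sketch of the opening of the script, based on the description below (the query path is illustrative, and the non-stdlib imports from the original are noted in a comment so the sketch stays self-contained):

```python
import argparse

# the full script also imports numpy, cv2, the pyimagesearch imutils
# helpers, and skimage's exposure module (skimage is only used in the
# next post of the series)

# construct the argument parser; --query is the path to our query image
ap = argparse.ArgumentParser()
ap.add_argument("-q", "--query", required=True,
    help="Path to the query image")

# the real script calls ap.parse_args() with no arguments; an explicit
# list is passed here so the sketch runs on its own
args = vars(ap.parse_args(["--query", "queries/query_marowak.jpg"]))
```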

Lines 2-6 just handle importing our packages. We’ll make use of skimage, but I won’t go over that until the next blog post in the series, so don’t worry about that for now. We’ll use NumPy like we always do, argparse to parse our command line arguments, and cv2 contains our OpenCV bindings.

We only need one command line argument: --query points to the path to where our query image is stored on disk.

If you’re wondering about imutils on Line 2, refer back to my post on Basic Image Manipulations in Python and OpenCV, where I go over resizing, rotating, and translating. The imutils file in the pyimagesearch module simply contains convenience methods to handle basic image processing techniques.

Next up, let’s load our query image and start processing the image:

On Line 16 we load our query image off disk. We supplied the path to the query image using the --query command line argument.

In order to make our processing steps faster, we need to resize the image. The smaller the image is, the faster it is to process. The tradeoff is that if you make your image too small, then you miss out on valuable details in the image.

In this case, we want our new image height to be 300 pixels. On Line 17 we compute the ratio of the old height to the new height, then we make a clone of the original image on Line 18. Finally, Line 19 handles resizing the image to a height of 300 pixels.

From there, we convert our image to grayscale on Line 23. We then blur the image slightly by using the cv2.bilateralFilter function. Bilateral filtering has the nice property of removing noise in the image while still preserving the actual edges. The edges are important since we will need them to find the screen in the Game Boy image.

Finally, we apply Canny edge detection on Line 25.

As the name suggests, the Canny edge detector finds edge-like regions in our image. Check out the image below to see what I mean:

Figure 3: Applying edge detection to our Game Boy image. Notice how we can clearly see the outline of the screen.


Clearly we can see that there is a rectangular edge region that corresponds to the screen of our Game Boy. But how do we find it? Let me show you:

In order to find the Game Boy screen in our edged image, we need to find the contours in the image. A contour refers to the outline or silhouette of an object — in this case, the outline of the Game Boy screen.

To find contours in an image, we use the OpenCV cv2.findContours function on Line 29. This method requires three parameters. The first is the image we want to find contours in. We pass in our edged image, making sure to clone it first. The cv2.findContours method is destructive (meaning it manipulates the image you pass in), so if you plan on using that image again later, be sure to clone it. The second parameter, cv2.RETR_TREE, tells OpenCV to compute the hierarchy (relationship) between contours. We could have also used the cv2.RETR_LIST option. Finally, we tell OpenCV to compress the contours to save space using cv2.CHAIN_APPROX_SIMPLE.

In return, the cv2.findContours function gives us a list of contours that it has found.

Now that we have our contours, how are we going to determine which one corresponds to the Game Boy screen?


Well, the first thing we should do is prune down the number of contours we need to process. We know the area of our Game Boy screen is quite large relative to the other regions in the image. Line 30 sorts our contours from largest to smallest by computing their areas with cv2.contourArea, keeping only the 10 largest. Finally, we initialize screenCnt, the contour that corresponds to our Game Boy screen, on Line 31.

We are now ready to determine which contour is the Game Boy screen:

On Line 34 we start looping over our 10 largest contours in the query image. Then, we approximate the contour using cv2.arcLength and cv2.approxPolyDP. These methods are used to approximate the polygonal curves of a contour. In order to approximate a contour, you need to supply your level of approximation precision. In this case, we use 2% of the perimeter of the contour. The precision is an important value to consider. If you intend on applying this code to your own projects, you’ll likely have to play around with the precision value.

Let’s stop and think about the shape of our Game Boy screen.

We know that a Game Boy screen is a rectangle.

And we know that a rectangle has four sides, thus has four vertices.

On Line 41 we check to see how many points our approximated contour has. If the contour has four points, it is (likely) our Game Boy screen. Provided that the contour has four points, we store our approximated contour on Line 43.

The reason I was able to do this four point check was because I had only a very small number of contours to investigate. I kept only the 10 largest contours and threw the others out. The likelihood of another contour being that large with a four-point approximation is quite low.

Drawing our screen contours, we can clearly see that we have found the Game Boy screen:

Figure 4: We have successfully found our Game Boy screen and highlighted it with a green rectangle.


If you want to draw the contours yourself, just use the following code:

So there you have it, the first part of finding the Game Boy screen.

In the next step of this series, I’ll show you how to perform a perspective transform on the Game Boy screen, as if you were “looking down” at your Game Boy from above. Then, we’ll crop out the actual Pokemon. Take a look at the screenshot below to see what I mean:

Figure 5: Performing a perspective transformation on the Game Boy screen and cropping out the Pokemon.



In this post I showed you how to find a Game Boy screen in an image using Python, OpenCV, and computer vision and image processing techniques.

We performed edge detection on our image, found the largest contours in the image using OpenCV and the cv2.findContours function, and approximated them to find their rudimentary shape. The largest contour with four points corresponds to our Game Boy screen.

Being able to find a Game Boy screen in an image isn’t just cool, it’s super practical. I can think of 10-15 different ways to build a small business using nothing but Game Boy screenshots and mobile technology, such as smartphones.

Sound interesting? Don’t be shy. Send me a message and we can chat some more.

In the next post, I’ll show you how to apply a perspective transformation to our Game Boy screen so that we have a birds-eye view of the image. From there, we can easily crop out the Pokemon.


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


48 Responses to Building a Pokedex in Python: Finding the Game Boy Screen (Step 4 of 6)

  1. zhahir May 1, 2014 at 5:09 am #

    Great guidance…thanks.

  2. Meng Xipeng June 28, 2014 at 8:29 pm #

    Your website is awesome. I’m working in the computer vision field, focused on OCR.
    I find OpenCV + Python is a quick way to prototype computer vision ideas.
    You’ve shared many great and fun things on your website. Thank you for your work.

    • Adrian Rosebrock June 29, 2014 at 6:40 am #

      I’m glad you are finding the content useful! :-)

  3. Erkki Nyfors July 1, 2014 at 9:59 pm #

    How do you suggest finding a license plate in a car picture? :)

    • Adrian Rosebrock July 2, 2014 at 5:38 am #

      Sure, we can chat about that. Send me a message and we can talk.

      • Sasa November 2, 2014 at 4:21 am #

        I am also interested in this; maybe you could make a blog post about it.

        • Adrian Rosebrock November 4, 2014 at 6:54 am #

          Maybe something along the lines of this? The code is still rough around the edges, I need to clean it up first.

  4. Ashwin November 10, 2014 at 8:15 pm #

    I followed your code, but at the end, when you draw the contours on the source image, I am not getting the green highlighted region.

    • Adrian Rosebrock November 11, 2014 at 6:59 am #

      Hi Ashwin, the code below can be used to draw the green highlighted region:

      cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 3)
      cv2.imshow("Game Boy Screen", image)

  5. Praveen February 5, 2015 at 12:33 am #

    Sir, You have used the following module in one of your programs ” from skimage import exposure”. I am unable to find this skimage module anywhere. Please send me this skimage module

    • Adrian Rosebrock February 5, 2015 at 6:38 am #

      You need to install skimage first.

  6. Shelly February 12, 2015 at 3:13 pm #

    Hey, just a heads up that this page seems to be a little broken layout-wise – all the others on your site are working for me. I also couldn’t find it using tags… Just wanted to let you know!

    • Shelly February 12, 2015 at 3:26 pm #

      Of course, it seems to look fine on the comment-submitted page… And the only tag it’s missing is ‘pokedex’, I think. 😉

    • Adrian Rosebrock February 12, 2015 at 7:47 pm #

      Hi Shelly. I’ve updated the post to include the ‘pokedex’ tag. Can you send me a screenshot of the formatting that’s messed up? It looks normal on my web browser (Chrome on OSX).

  7. Patrick April 14, 2015 at 4:29 pm #

    Thanks for the guide! This is an awesome post and blog. Very helpful.

    • Adrian Rosebrock April 14, 2015 at 5:27 pm #

      I’m glad you are enjoying it Patrick! :-)

      • sai pattnaik May 30, 2016 at 9:22 am #

        Hello Adrian, it’s a very helpful post, but I have one problem. I need to get cropped cheques from a scanned JPEG in which the background color is white (A4) and the edges of the cheques are also white, so it’s not able to pick up the whole cheque; rather, it’s picking a portion of the cheque. I think it’s unable to identify the cheque because of the white border and background. Can you help me with this?

        • Adrian Rosebrock May 31, 2016 at 3:53 pm #

          Keep in mind that it’s easier to write computer vision code for good environment conditions than bad ones. Simply put — don’t use a white background 😉 Select a background that contrasts your cheque color and this will make the project much easier to solve. If you don’t, you’ll simply make life harder for yourself and may have to resort to utilizing a bit of machine learning to find the cheque borders. This is especially wasteful if you can change your background.

  8. Tham June 9, 2015 at 10:20 pm #

    Thanks for your interesting post

    I have two questions about this:

    1: What if the four-vertex contour is not the rectangle we need?
    I noticed the shape of the Game Boy is a rectangle too. If the image contains all
    of the Game Boy, not just part of it, the largest rectangle will be the Game Boy itself.

    2: How could you detect the rotation that needs to be adjusted?
    The original image is not at 90 degrees. I need to rotate the original picture “query_marowak.jpg”
    to crop the image of the Pokemon. How could you let the computer know the image needs some rotation?

    • Adrian Rosebrock June 10, 2015 at 7:08 am #

      Hey Tham, great questions, thanks for asking.

      1. If the four vertices do not correspond to the Game Boy, then indeed, the script will not work. Here we make the assumption that the largest 4-corner region is the Game Boy screen.

      2. You can determine the rotation of the bounding box by using the cv2.minAreaRect function. But a better method is to simply apply a perspective transform and obtain the view of the screen, like we do in this post.

  9. Mohammed Ali November 12, 2015 at 1:07 pm #

    hello my friend,
    thanks for your post,
    but I have a problem: when I am running the example
    with the same arguments on the same image, it gives me this error

    can you help me please?

    • Adrian Rosebrock November 13, 2015 at 5:43 am #

      Hey Mohammed — I’ve referenced this problem many times on the PyImageSearch blog. Please note that this blog post is intended for OpenCV 2.4 (which is stated at the top of this post). You are running OpenCV 3. There is a difference in the return signature of cv2.findContours between the two versions. You can read more about this change here. To make the cv2.findContours compatible with both versions, change the line to:

      cnts = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]

    • Say Hong January 28, 2016 at 4:43 am #

      the return values are now 3 instead of 2 (as shown on the opencv docs pages).

      the return values are image, contours, hierarchy

      • Adrian Rosebrock January 28, 2016 at 3:40 pm #

        Just to follow up on this comment, for anyone interested in reading how the cv2.findContours method has changed between OpenCV 2.4 (which this tutorial uses) and OpenCV 3, be sure to read this post.

  10. Sachin November 13, 2015 at 10:26 pm #

    I want to find rectangular shapes in an image, but if the rectangle is incomplete, how do I know which side is missing (i.e., top, bottom, left, or right)?

    • Adrian Rosebrock November 14, 2015 at 6:19 am #

      Hey Sachin — contour approximation can help with finding shapes, but if you do not have a complete side, then your shape is not a rectangle. In that case, you might want to look into morphological operations such as dilation and closing to help close the gaps between the sides.

  11. Lakshmi January 20, 2016 at 6:50 am #

    If it’s not a rectangle but an ellipse, how can I find the points?

    • Adrian Rosebrock January 20, 2016 at 1:43 pm #

      I demonstrate how to perform circle detection in this post. For ellipse detection, take a look at scikit-image.

      • Lakshmi January 22, 2016 at 12:33 am #

        thanks a lot.. its very useful to me.. this blog is helpful to learners.. thanks :-)

  12. Manoj February 11, 2016 at 11:53 pm #

    Hey, I actually want to use like specific points on a part of the image. How can I use contours?

    • Adrian Rosebrock February 12, 2016 at 3:18 pm #

      I’m not sure what you mean by “use like specific points on part of an image”. Can you elaborate?

  13. Jaime Lopez February 23, 2016 at 4:04 pm #


    I think you have to update your function call cv2.findContours, because it now returns a tuple of 3, this way:

    (cnts, _ , _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    • Adrian Rosebrock February 24, 2016 at 4:37 pm #

      Hey Jaime, your code is actually incorrect. It should be:

      (_, cnts , _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      You can read more about how the cv2.findContours function changed in between OpenCV 2.4 and OpenCV 3 in this blog post.

  14. Jaime Lopez April 25, 2016 at 2:45 pm #

    Hi Adrian,

    Do you know how can I find the maximum square (not rotated rectangular) inside a specific contour, giving as parameter the square’s centroid?
    I need all square area fall inside the contour. Any advice?

    Thanks in advance, Jaime

    • Adrian Rosebrock April 26, 2016 at 5:18 pm #

      You can compute a normal bounding rectangle (not rotated) by using cv2.boundingRect(c) where c is the current contour you are examining. Converting the bounding rectangle to a square can be accomplished by finding the side with the largest length and adjusting all other sides to be equal to that length.

  15. Cat June 22, 2016 at 2:29 pm #

    Hi Adrian!!!

    This is perfect for my project. Do you know how to extract the coordinates of multiple bar codes?

    I need to extract 3 bar codes per image. Is there a way to change the command that sorts the contours to do this for me? Thank you!

    • Adrian Rosebrock June 23, 2016 at 1:16 pm #

      Hm, I think you might be posting this comment on the wrong blog post? This post doesn’t deal with barcodes. In any case, if you want to filter a set of contours (rather than simply grabbing the largest one), simply loop over them and ensure they meet a minimum width, height, or area requirement.

  16. sajad July 28, 2016 at 2:28 pm #


  17. Megha November 15, 2016 at 2:18 am #

    I have an image that I am fiddling with to detect the outer contours. How can we chat? I tried using contour sorting to get the largest areas and then using that to get only the outer edges, but since my image is complex, I am having issues.

  18. Megha November 15, 2016 at 2:20 am #

    I am also a bit confused by your example, because even the outermost edge is a rectangle as well. Why wasn’t that detected?

    • Adrian Rosebrock November 16, 2016 at 1:50 pm #

      The outermost edge of what? The GameBoy? That’s actually not a rectangle — there is no bottom line to the edge.

  19. Akkumaru December 7, 2016 at 2:15 pm #

    Hi Adrian,

    Thank you for making this blog!

    I have a peculiar question: is it possible to perform contour detection on one image only to “apply” the detected contours to a different image, given that the dimensions of the two images are identical?

    This is because I have an image that contains collages of photographs, and I aim to isolate those photos. Since it is hard to do contour detection on the original image itself, I’m thinking of doing the detection on another image (which is much more simplified), then ‘paste’ the resulting contours on the original to get the original photographs.

    Do you think this is possible? Or would you suggest any other approach?

    Thank you!

    • Adrian Rosebrock December 10, 2016 at 7:40 am #

      You mentioned that the dimensions of the two images are identical — does that mean they line up perfectly? If so, sure. You can absolutely detect contours in one image and then extract ROIs from another image. But you should make sure that the dimensions are indeed correct first.

      However, I get the impression that these images may not line up perfectly. In that case, you should apply keypoint detection, local invariant descriptors, and keypoint matching. Inside Practical Python and OpenCV I demonstrate how to build a computer vision system that can match and recognize book covers in two separate images. The same technique could be used for your photograph collage project as well.


Trackbacks/Pingbacks

  1. Python and OpenCV Example: Warp Perspective and Transform - May 5, 2014

    […] In my previous blog post, I showed you how to find a Game Boy screen in an image using Python and OpenCV. […]

  2. Comparing Shape Descriptors for Similarity using Python and OpenCV - May 30, 2014

    […] sprites using Zernike moments. We’ve analyzed query images and found our Game Boy screen using edge detection and contour finding techniques. And we’ve performed perspective warping and transformations using the cv2.warpPerspective […]

  3. Detecting Circles in Images using OpenCV and Hough Circles - PyImageSearch - July 21, 2014

    […] your blog. I saw your post on detecting rectangles/squares in images, but I was wondering, how do you detect circles in images using […]

  4. Target acquired: Finding targets in drone and quadcopter video streams using Python and OpenCV - PyImageSearch - June 7, 2015

    […] image. We’ve used it in building a kick-ass mobile document scanner. We’ve used it to find the Game Boy screen in an image. And we’ve even used it on a higher level to actually filter shapes from an […]

  5. Zero-parameter, automatic Canny edge detection with Python and OpenCV - PyImageSearch - June 14, 2015

    […] times. We’ve used it to build a kick-ass mobile document scanner and we’ve used it to find a Game Boy screen in a photo, just to name a couple […]
