Bubble sheet multiple choice scanner and test grader using OMR, Python and OpenCV

Over the past few months I’ve received quite a number of requests in my inbox to build a bubble sheet/Scantron-like test reader using computer vision and image processing techniques.

And while I’ve been having a lot of fun doing this series on machine learning and deep learning, I’d be lying if I said this little mini-project wasn’t a short, welcome break. One of my favorite parts of running the PyImageSearch blog is demonstrating how to build actual solutions to problems using computer vision.

In fact, what makes this project so special is that we are going to combine the techniques from many previous blog posts, including building a document scanner, contour sorting, and perspective transforms. Using the knowledge gained from these previous posts, we’ll be able to make quick work of this bubble sheet scanner and test grader.

You see, last Friday afternoon I quickly Photoshopped an example bubble test paper, printed out a few copies, and then set to work on coding up the actual implementation.

Overall, I am quite pleased with this implementation and I think you’ll absolutely be able to use this bubble sheet grader/OMR system as a starting point for your own projects.

To learn more about utilizing computer vision, image processing, and OpenCV to automatically grade bubble test sheets, keep reading.

Bubble sheet scanner and test grader using OMR, Python, and OpenCV

In the remainder of this blog post, I’ll discuss what exactly Optical Mark Recognition (OMR) is. I’ll then demonstrate how to implement a bubble sheet test scanner and grader using strictly computer vision and image processing techniques, along with the OpenCV library.

Once we have our OMR system implemented, I’ll provide sample results of our test grader on a few example exams, including ones that were filled out with nefarious intent.

Finally, I’ll discuss some of the shortcomings of this current bubble sheet scanner system and how we can improve it in future iterations.

What is Optical Mark Recognition (OMR)?

Optical Mark Recognition, or OMR for short, is the process of automatically analyzing human-marked documents and interpreting their results.

Arguably, the most famous, easily recognizable form of OMR is the bubble sheet multiple choice test, not unlike the ones you took in elementary school, middle school, or even high school.

If you’re unfamiliar with “bubble sheet tests” or the trademark/corporate name of “Scantron tests”, they are simply multiple-choice tests that you take as a student. Each question on the exam is multiple choice — and you use a #2 pencil to mark the “bubble” that corresponds to the correct answer.

The most notable bubble sheet test you likely experienced (at least in the United States) was the SAT, taken during high school prior to filling out college admission applications.

I believe that the SATs use the software provided by Scantron to perform OMR and grade student exams, but I could easily be wrong there. I only make note of this because Scantron is used in over 98% of all US school districts.

In short, what I’m trying to say is that there is a massive market for Optical Mark Recognition and the ability to grade and interpret human-marked forms and exams.

Implementing a bubble sheet scanner and grader using OMR, Python, and OpenCV

Now that we understand the basics of OMR, let’s build a computer vision system using Python and OpenCV that can read and grade bubble sheet tests.

Of course, I’ll be providing lots of visual example images along the way so you can understand exactly what techniques I’m applying and why I’m using them.

Below I have included an example filled in bubble sheet exam that I have put together for this project:

Figure 1: The example, filled in bubble sheet we are going to use when developing our test scanner software.

We’ll be using this as our example image as we work through the steps of building our test grader. Later in this lesson, you’ll also find additional sample exams.

I have also included a blank exam template as a .PSD (Photoshop) file so you can modify it as you see fit. You can use the “Downloads” section at the bottom of this post to download the code, example images, and template file.

The 7 steps to build a bubble sheet scanner and grader

The goal of this blog post is to build a bubble sheet scanner and test grader using Python and OpenCV.

To accomplish this, our implementation will need to satisfy the following 7 steps:

  • Step #1: Detect the exam in an image.
  • Step #2: Apply a perspective transform to extract the top-down, birds-eye-view of the exam.
  • Step #3: Extract the set of bubbles (i.e., the possible answer choices) from the perspective transformed exam.
  • Step #4: Sort the questions/bubbles into rows.
  • Step #5: Determine the marked (i.e., “bubbled in”) answer for each row.
  • Step #6: Look up the correct answer in our answer key to determine if the user was correct in their choice.
  • Step #7: Repeat for all questions in the exam.

The next section of this tutorial will cover the actual implementation of our algorithm.

The bubble sheet scanner implementation with Python and OpenCV

To get started, open up a new file, name it test_grader.py, and let’s get to work:
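Here is a minimal sketch of this opening block, covering the imports, the single command line argument, and the answer key discussed below:

# import the necessary packages
from imutils.perspective import four_point_transform
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())

# define the answer key which maps the question number
# to the index of the correct bubble
ANSWER_KEY = {0: 1, 1: 4, 2: 0, 3: 3, 4: 1}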

On Lines 2-7 we import our required Python packages.

You should already have OpenCV and NumPy installed on your system, but you might not have the most recent version of imutils, my set of convenience functions to make performing basic image processing operations easier. To install imutils (or upgrade to the latest version), just execute the following command:
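$ pip install --upgrade imutils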

Lines 10-12 parse our command line arguments. We only need a single switch here, --image, which is the path to the input bubble sheet test image that we are going to grade for correctness.

Line 17 then defines our ANSWER_KEY.

As the name of the variable suggests, the ANSWER_KEY provides integer mappings of the question numbers to the index of the correct bubble.

In this case, a key of 0 indicates the first question, while a value of 1 signifies “B” as the correct answer (since “B” is index 1 in the string “ABCDE”). As a second example, consider a key of 1 that maps to a value of 4 — this would indicate that the answer to the second question is “E”.

As a matter of convenience, I have written the entire answer key in plain English here:

  • Question #1: B
  • Question #2: E
  • Question #3: A
  • Question #4: D
  • Question #5: B

Next, let’s preprocess our input image:
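A sketch of this preprocessing step; the 5×5 Gaussian kernel and the Canny thresholds of 75/200 are sensible defaults rather than the only values that will work:

# load the image, convert it to grayscale, blur it slightly,
# then find edges
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 75, 200)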

On Line 21 we load our image from disk, followed by converting it to grayscale (Line 22), and blurring it to reduce high frequency noise (Line 23).

We then apply the Canny edge detector on Line 24 to find the edges/outlines of the exam.

Below I have included a screenshot of our exam after applying edge detection:

Figure 2: Applying edge detection to our exam neatly reveals the outlines of the paper.

Notice how the edges of the document are clearly defined, with all four vertices of the exam being present in the image.

Obtaining this silhouette of the document is extremely important in our next step as we will use it as a marker to apply a perspective transform to the exam, obtaining a top-down, birds-eye-view of the document:
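One way to write this document-finding step (imutils.grab_contours smooths over the differing return signatures of cv2.findContours across OpenCV versions, and the 0.02 approximation factor is a typical choice):

# find contours in the edge map, then initialize the contour
# that will correspond to the document
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
docCnt = None

# ensure that at least one contour was found
if len(cnts) > 0:
    # sort the contours according to their size in
    # descending order
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

    # loop over the sorted contours
    for c in cnts:
        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)

        # if our approximated contour has four points, then we
        # can assume we have found the paper
        if len(approx) == 4:
            docCnt = approx
            break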

Now that we have the outline of our exam, we apply the cv2.findContours function to find the contours that correspond to the exam itself.

We do this by sorting our contours by their area (from largest to smallest) on Line 37 (after making sure at least one contour was found on Line 34, of course). This implies that larger contours will be placed at the front of the list, while smaller contours will appear farther back in the list.

We make the assumption that our exam will be the main focal point of the image, and thus be larger than other objects in the image. This assumption allows us to “filter” our contours, simply by investigating their area and knowing that the contour that corresponds to the exam should be near the front of the list.

However, contour area and size alone are not enough — we should also check the number of vertices on the contour.

To do this, we loop over each of our (sorted) contours on Line 40. For each of them, we approximate the contour, which in essence means we simplify the number of points in the contour, making it a “more basic” geometric shape. You can read more about contour approximation in this post on building a mobile document scanner.

On Line 47 we make a check to see if our approximated contour has four points, and if it does, we assume that we have found the exam.

Below I have included an example image that demonstrates the docCnt variable being drawn on the original image:

Figure 3: An example of drawing the contour associated with the exam on our original image, indicating that we have successfully found the exam.

Sure enough, this area corresponds to the outline of the exam.

Now that we have used contours to find the outline of the exam, we can apply a perspective transform to obtain a top-down, birds-eye-view of the document:
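A sketch of the transform step, warping both the original image (for drawing results later) and the grayscale version (for thresholding):

# apply a four point perspective transform to both the original
# image and the grayscale image to obtain a top-down, birds-eye
# view of the paper
paper = four_point_transform(image, docCnt.reshape(4, 2))
warped = four_point_transform(gray, docCnt.reshape(4, 2))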

In this case, we’ll be using my implementation of the four_point_transform function which:

  1. Orders the (x, y)-coordinates of our contours in a specific, reproducible manner.
  2. Applies a perspective transform to the region.

You can learn more about the perspective transform in this post as well as this updated one on coordinate ordering, but for the time being, simply understand that this function handles taking the “skewed” exam and transforms it, returning a top-down view of the document:

Figure 4: Obtaining a top-down, birds-eye view of both the original image (left) along with the grayscale version (right).

Alright, so now we’re getting somewhere.

We found our exam in the original image.

We applied a perspective transform to obtain a 90 degree viewing angle of the document.

But how do we go about actually grading the document?

This step starts with binarization, or the process of thresholding/segmenting the foreground from the background of the image:
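Otsu’s method picks the threshold value automatically, so passing 0 as the threshold works; inverting the binary output gives us white foreground on a black background:

# apply Otsu's thresholding method to binarize the warped
# piece of paper
thresh = cv2.threshold(warped, 0, 255,
    cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]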

After applying Otsu’s thresholding method, our exam is now a binary image:

Figure 5: Using Otsu’s thresholding allows us to segment the foreground from the background of the image.

Notice how the background of the image is black, while the foreground is white.

This binarization will allow us to once again apply contour extraction techniques to find each of the bubbles in the exam:
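A sketch of this bubble-finding step; the 20 pixel minimum and the 0.9-1.1 aspect ratio band are one reasonable way to implement the two checks listed below:

# find contours in the thresholded image, then initialize the
# list of contours that correspond to questions
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
questionCnts = []

# loop over the contours
for c in cnts:
    # compute the bounding box of the contour, then use it to
    # derive the aspect ratio
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)

    # the region should be sufficiently wide and tall, with an
    # aspect ratio of approximately 1, to count as a bubble
    if w >= 20 and h >= 20 and ar >= 0.9 and ar <= 1.1:
        questionCnts.append(c)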

Lines 64-67 handle finding contours on our thresh binary image, followed by initializing questionCnts, a list of contours that correspond to the questions/bubbles on the exam.

To determine which regions of the image are bubbles, we first loop over each of the individual contours (Line 70).

For each of these contours, we compute the bounding box (Line 73), which also allows us to compute the aspect ratio, or more simply, the ratio of the width to the height (Line 74).

In order for a contour area to be considered a bubble, the region should:

  1. Be sufficiently wide and tall (in this case, at least 20 pixels in both dimensions).
  2. Have an aspect ratio that is approximately equal to 1.

As long as these checks hold, we can update our questionCnts  list and mark the region as a bubble.

Below I have included a screenshot with the output of questionCnts drawn on our image:

Figure 6: Using contour filtering allows us to find all the question bubbles in our bubble sheet exam recognition software.

Notice how only the question regions of the exam are highlighted and nothing else.

We can now move on to the “grading” portion of our OMR system:
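A sketch of the sorting and loop set-up; contours.sort_contours comes from imutils and accepts a method argument:

# sort the question contours top-to-bottom, then initialize the
# total number of correct answers
questionCnts = contours.sort_contours(questionCnts,
    method="top-to-bottom")[0]
correct = 0

# each question has 5 possible answers, so loop over the
# questions in batches of 5
for (q, i) in enumerate(np.arange(0, len(questionCnts), 5)):
    # sort the contours for the current question from left to
    # right, then initialize the index of the bubbled answer
    cnts = contours.sort_contours(questionCnts[i:i + 5])[0]
    bubbled = None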

First, we must sort our questionCnts from top-to-bottom. This will ensure that rows of questions that are closer to the top of the exam will appear first in the sorted list.

We also initialize a bookkeeper variable to keep track of the number of correct answers.

On Line 90 we start looping over our questions. Since each question has 5 possible answers, we’ll apply NumPy array slicing and contour sorting to sort the current set of contours from left to right.

The reason this methodology works is because we have already sorted our contours from top-to-bottom. We know that the 5 bubbles for each question will appear sequentially in our list — but we do not know whether these bubbles will be sorted from left-to-right. The sort contour call on Line 94 takes care of this issue and ensures that each row of contours is sorted from left-to-right.

To visualize this concept, I have included a screenshot below that depicts each row of questions as a separate color:

Figure 7: By sorting our contours from top-to-bottom, followed by left-to-right, we can extract each row of bubbles. Therefore, each row is equal to the bubbles for one question.

Given a row of bubbles, the next step is to determine which bubble is filled in.

We can accomplish this by using our thresh image and counting the number of non-zero pixels (i.e., foreground pixels) in each bubble region:
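Continuing inside the question loop from above, a sketch of the mask-and-count step:

    # loop over the sorted contours
    for (j, c) in enumerate(cnts):
        # construct a mask that reveals only the current
        # "bubble" for the question
        mask = np.zeros(thresh.shape, dtype="uint8")
        cv2.drawContours(mask, [c], -1, 255, -1)

        # apply the mask to the thresholded image, then count
        # the number of non-zero pixels in the bubble area
        mask = cv2.bitwise_and(thresh, thresh, mask=mask)
        total = cv2.countNonZero(mask)

        # if the current total has a larger number of non-zero
        # pixels, we are examining the bubbled-in answer
        if bubbled is None or total > bubbled[0]:
            bubbled = (total, j)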

Line 98 handles looping over each of the sorted bubbles in the row.

We then construct a mask for the current bubble on Line 101 and then count the number of non-zero pixels in the masked region (Lines 107 and 108). The more non-zero pixels we count, the more foreground pixels there are, and therefore the bubble with the maximum non-zero count is the one the test taker has bubbled in (Lines 113 and 114).

Below I have included an example of creating and applying a mask to each bubble associated with a question:

Figure 8: An example of constructing a mask for each bubble in a row.

Clearly, the bubble associated with “B” has the most thresholded pixels, and is therefore the bubble that the user has marked on their exam.

This next code block handles looking up the correct answer in the ANSWER_KEY, updating any relevant bookkeeper variables, and finally drawing the marked bubble on our image:
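Still inside the question loop, a sketch of the lookup-and-draw step (green for a correct response, red for an incorrect one):

    # initialize the contour color and the index of the
    # *correct* answer
    color = (0, 0, 255)
    k = ANSWER_KEY[q]

    # check to see if the bubbled answer is correct
    if k == bubbled[1]:
        color = (0, 255, 0)
        correct += 1

    # draw the outline of the correct answer on the test
    cv2.drawContours(paper, [cnts[k]], -1, color, 3)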

Whether the test taker was correct or incorrect determines which color is drawn on the exam. If the test taker is correct, we’ll highlight their answer in green. However, if the test taker made a mistake and marked an incorrect answer, we’ll let them know by highlighting the correct answer in red:

Figure 9: Drawing a “green” circle to mark “correct” or a “red” circle to mark “incorrect”.

Finally, our last code block handles scoring the exam and displaying the results to our screen:
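A sketch of the final scoring block, assuming the 5-question exam used throughout this post:

# grab the test taker's score and display the results
score = (correct / 5.0) * 100
print("[INFO] score: {:.2f}%".format(score))
cv2.putText(paper, "{:.2f}%".format(score), (10, 30),
    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
cv2.imshow("Original", image)
cv2.imshow("Exam", paper)
cv2.waitKey(0)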

Below you can see the output of our fully graded example image:

Figure 10: Finishing our OMR system for grading human-taken exams.

In this case, the reader obtained an 80% on the exam. The only question they missed was #4 where they incorrectly marked “C” as the correct answer (“D” was the correct choice).

Why not use circle detection?

After going through this tutorial, you might be wondering:

“Hey Adrian, an answer bubble is a circle. So why did you extract contours instead of applying Hough circles to find the circles in the image?”

Great question.

To start, tuning the parameters to Hough circles on an image-to-image basis can be a real pain. But that’s only a minor reason.

The real reason is:

User error.

How many times, whether purposely or not, have you filled in outside the lines on your bubble sheet? I’m no expert, but I’d have to guess that at least 1 in every 20 marks a test taker fills in is “slightly” outside the lines.

And guess what?

Hough circles don’t handle deformations in their outlines very well — your circle detection would totally fail in that case.

Because of this, I instead recommend using contours and contour properties to help you filter the bubbles and answers. The cv2.findContours function doesn’t care if the bubble is “round”, “perfectly round”, or “oh my god, what the hell is that?”.

Instead, the cv2.findContours function will return a set of blobs to you, which will be the foreground regions in your image. You can then take these regions, process and filter them to find your questions (as we did in this tutorial), and go about your way.

Our bubble sheet test scanner and grader results

To see our bubble sheet test grader in action, be sure to download the source code and example images to this post using the “Downloads” section at the bottom of the tutorial.

We’ve already seen test_01.png as our example earlier in this post, so let’s try test_02.png:

Here we can see that a particularly nefarious user took our exam. They were not happy with the test, writing “#yourtestsux” across the front of it along with an anarchy-inspiring “#breakthesystem”. They also marked “A” for all answers.

Perhaps it comes as no surprise that the user scored a pitiful 20% on the exam, based entirely on luck:

Figure 11: By using contour filtering, we are able to ignore the regions of the exam that would have otherwise compromised its integrity.

Let’s try another image:

This time the reader did a little better, scoring a 60%:

Figure 12: Building a bubble sheet scanner and test grader using Python and OpenCV.

In this particular example, the reader simply marked all answers along a diagonal:

Figure 13: Optical Mark Recognition for test scoring using Python and OpenCV.

Unfortunately for the test taker, this strategy didn’t pay off very well.

Let’s look at one final example:

Figure 14: Recognizing bubble sheet exams using computer vision.

This student clearly studied ahead of time, earning a perfect 100% on the exam.

Extending the OMR and test scanner

Admittedly, this past summer/early autumn has been one of the busiest periods of my life, so I needed to timebox the development of the OMR and test scanner software into a single, shortened afternoon last Friday.

While I was able to get the barebones of a working bubble sheet test scanner implemented, there are certainly a few areas that need improvement. The most obvious area for improvement is the logic to handle non-filled in bubbles.

In the current implementation, we (naively) assume that a reader has filled in one and only one bubble per question row.

However, since we determine if a particular bubble is “filled in” simply by counting the number of thresholded pixels in a row and then sorting in descending order, this can lead to two problems:

  1. What happens if a user does not bubble in an answer for a particular question?
  2. What if the user is nefarious and marks multiple bubbles as “correct” in the same row?

Luckily, detecting and handling these issues isn’t terribly challenging; we just need to insert a bit of logic.

For issue #1, if a reader chooses not to bubble in an answer for a particular row, then we can place a minimum threshold on Line 108 where we compute cv2.countNonZero:

Figure 15: Detecting if a user has marked zero bubbles on the exam.

If this value is sufficiently large, then we can mark the bubble as “filled in”. Conversely, if total is too small, then we can skip that particular bubble. If at the end of the row there are no bubbles with sufficiently large threshold counts, we can mark the question as “skipped” by the test taker.

A similar set of steps can be applied to issue #2, where a user marks multiple bubbles as correct for a single question:

Figure 16: Detecting if a user has marked multiple bubbles for a given question.

Again, all we need to do is apply our thresholding and count step, this time keeping track of whether there are multiple bubbles whose total exceeds some pre-defined value. If so, we can invalidate the question and mark it as incorrect.
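As a rough sketch of both checks (reusing the variables from the grading loop above, with MIN_FILLED as a hypothetical tuning constant you would adjust for your scan resolution):

# hypothetical tuning constant: the minimum number of thresholded
# pixels for a bubble to be considered "filled in"
MIN_FILLED = 250

# inside the grading loop: record every bubble in the current row
# whose non-zero pixel count exceeds the threshold
marked = []
for (j, c) in enumerate(cnts):
    mask = np.zeros(thresh.shape, dtype="uint8")
    cv2.drawContours(mask, [c], -1, 255, -1)
    total = cv2.countNonZero(cv2.bitwise_and(thresh, thresh, mask=mask))

    if total > MIN_FILLED:
        marked.append(j)

# zero marked bubbles means the question was skipped; more than one
# means the question is invalid -- either way, no credit is given
if len(marked) != 1:
    print("[INFO] question #{}: skipped or invalid".format(q + 1))
elif marked[0] == ANSWER_KEY[q]:
    correct += 1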

Summary

In this blog post, I demonstrated how to build a bubble sheet scanner and test grader using computer vision and image processing techniques.

Specifically, we implemented Optical Mark Recognition (OMR) methods that facilitated our ability to capture human-marked documents and automatically analyze the results.

Finally, I provided a Python and OpenCV implementation that you can use for building your own bubble sheet test grading systems.

If you have any questions, please feel free to leave a comment in the comments section!

But before you go, be sure to enter your email address in the form below to be notified when future tutorials are published on the PyImageSearch blog!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, but I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

54 Responses to Bubble sheet multiple choice scanner and test grader using OMR, Python and OpenCV

  1. evandrix October 3, 2016 at 12:13 pm #

    what if the candidate marked one bubble, realised it’s wrong, crossed it out, and marked another? will this system still work?

    • Adrian Rosebrock October 4, 2016 at 7:00 am #

      When taking a “bubble sheet” exam like this you wouldn’t “cross out” your previous answer — you would erase it. The assumption is that you always use pencils for these types of exams.

  2. Jurriaan Schreuder October 3, 2016 at 2:25 pm #

    Made this a long time ago for android, when I used to give a cocktail party on every Friday the 13th. We always had a quiz, which took more than an hour to grade, so I made an app for it!

    Has long since been taken offline because I was banned from the Google Play Store.

    Was pure Java, no libraries. Could do max 40 questions, on 2 A4 papers (detected if first or second sheet)

    http://multiplechoicescanner.com/

    • Adrian Rosebrock October 4, 2016 at 6:58 am #

      Very nice, thanks for sharing Jurriaan!

    • Chacrit October 20, 2016 at 2:18 pm #

      Can I get your source code?

    • Johnny March 10, 2017 at 11:13 am #

      Do you have the Java code? Could you help me with it? Thanks

  3. hgeorge October 3, 2016 at 6:19 pm #

    Great article!

    One question though. Ideally (assuming the input image was already a birds-eye view), won’t the loop in lines 26-49 be sufficient to detect the circle contours too?

    • Adrian Rosebrock October 4, 2016 at 6:55 am #

      If the image is already a birds-eye-view, then yes, you can use the same contours that were extracted previously — but again, you would have to make the assumption that you already have a birds-eye-view of the image.

  4. Madhup October 4, 2016 at 3:24 am #

    Hi Adrian,

    I am trying to run this code and am getting the following error:

    from imutils.perspective import four_point_transform

    ImportError: No module named scipy.spatial

    I have installed imutils successfully and am not sure why I am getting this error. It would be great if you could help me here

    Thanks,
    Madhup

    • Adrian Rosebrock October 4, 2016 at 6:48 am #

      Make sure you install NumPy and SciPy:
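      $ pip install numpy scipy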

  5. King October 5, 2016 at 9:57 am #

    Wonderful Tut!
    I was wondering how to handle such OMR sheets.
    http://i.stack.imgur.com/r9GEx.jpg

    any idea or algorithm please?
    Thanks!!

    • Adrian Rosebrock October 6, 2016 at 6:53 am #

      I would suggest using more contour filtering. You can use contours to find each of the “boxes” in the sheet. Sort the contours from left-to-right and top-to-bottom. Then extract each of the boxes and process the bubbles in each box.

      • King October 6, 2016 at 10:07 am #

        Thanks!
        What can I do to detect the four anchor points and transform the paper in case it rotates?

        • Adrian Rosebrock October 7, 2016 at 7:37 am #

          As long as you can detect the border of the paper, it doesn’t matter how the paper is oriented. The four_point_transform function will take care of the point ordering and transformation for you.

          • King October 8, 2016 at 10:16 am #

            I understand, but what if the paper is rotated and cropped so that the border of the paper is missing?
            What technique shall I use to detect the four anchor points, please?

          • Adrian Rosebrock October 11, 2016 at 1:09 pm #

            If you do not have the four corners of the paper (such as the corners being cropped out) then you cannot apply this perspective transform.

    • Johannes Brodwall November 15, 2016 at 5:25 pm #

      @King

      It looks like the marks on the right side of the paper are aligned with the target areas. You could threshold the image, findContours and filter contours in the leftmost 10% of the image to find the rows and sort them by y-position.

      Then you could look for contours in the rest of the area. The index of the closest alignment mark for y-direction gives row, the x position as percentage of the page width gives column.

      Once you have the column and row of each mark, you just need “normal code” to interpret which question and answer this represents.

      Watch out for smudges, though! 😉

    • silver January 18, 2017 at 11:04 am #

      you see this project:
      project.auto-multiple-choice.net
      it’s free and open source.

      you can design any form in the world.
      example
      http://2.bp.blogspot.com/-tSleoOLzh3o/WChq5qIQj1I/AAAAAAAAA-g/ZL7JlLNxRTUkPBlM_fnCK_giXb2JULgtgCLcB/s1600/last.png

  6. Edwin October 8, 2016 at 5:33 pm #

    Nice to see your implementation of this. I started a similar project earlier this year but I ended up parking it for now.
    My main concern was the amount of work that goes into making one work right without errors, and the demand didn’t seem to be there.
    Seems like Scantron has a monopoly on this.
    What are your thoughts on that?

    • Adrian Rosebrock October 11, 2016 at 1:08 pm #

      There are a lot of companies in this space actually. I would suggest reading this thread on reddit to learn more about the companies involved and what they are doing for OMR.

  7. Linus October 14, 2016 at 12:35 pm #

    This is indeed a very cool post! Well explained 🙂

    • Adrian Rosebrock October 15, 2016 at 9:55 am #

      Thank you Linus, I’m glad you enjoyed it 🙂

  8. simon October 17, 2016 at 4:32 am #

    please, send me your code!

    • Adrian Rosebrock October 17, 2016 at 4:01 pm #

      You can download the code + example images to this post by using the “Downloads” form above.

  9. bhanu prakash December 13, 2016 at 10:58 am #

    Hi, thank you very much.

    But

    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)

    is giving me only one contour as a result; only one question is being identified.

    Could you pls help

    • Adrian Rosebrock December 14, 2016 at 8:30 am #

      I’m not sure what you mean by “giving me only one contour as a result”. Can you please elaborate?

  10. Ruhi December 27, 2016 at 5:15 am #

    Hi adrian,

    In my case, the image contains 4 reference rectangles which are the base for image deskewing. Assume the image contains some other information like text, circles, and rectangles. Now, I want to write a script to straighten the image based on the four rectangles; my resultant image should be straightened so I can extract some information after deskewing. How can this be done? When I apply my perspective transformation, it only detects the largest rectangle contour.
    my image is like http://i.stack.imgur.com/46rsL.png
    output image must be like http://i.stack.imgur.com/rqgsY.png

    • Adrian Rosebrock December 31, 2016 at 1:36 pm #

      So your question is on deskewing? I don’t have any tutorials on how to deskew an image, but I’ll certainly add it to my queue.

      • Ruhi January 7, 2017 at 12:19 am #

        I am waiting for that tutorial. I am not getting a proper reference for deskewing an image in my scenario. In the image there is a barcode as well as those 4 small rectangles. I am not able to deskew it because of the barcode inside it. As I am building commercial software, I cannot provide real images here. Inside the image, I have to extract the person’s unique id, age, and DOB, which are marked as optical marks. Once I scan the form, which is based on OMR, I need to extract that information. Is there any approach which can help me achieve this goal?
        I am very thankful for your guidance.

        • Adrian Rosebrock January 7, 2017 at 9:25 am #

          As I mentioned, I’ve added it to my queue. I’ll try to bump it up, but please keep in mind that I am very busy and cannot accommodate every single tutorial request in a timely manner. Thank you for your patience.

  11. silver January 18, 2017 at 10:39 am #

    Dear Adrian
    thank you
    I upgraded the code:
    – the code now captures the image from the laptop camera.
    – added a dropdown to select the answer key.
    – added the date in the name of the result (result+date+.png).
    Can I send the code to you? And is this code open source and free?
    best regards

    • Adrian Rosebrock January 18, 2017 at 12:18 pm #

      Hi Silver — feel free to send me the code or you can release it on your own GitHub page. If you don’t mind, I would appreciate a link back to the PyImageSearch site, but that’s not necessary if you don’t want to.

  12. silver January 21, 2017 at 12:15 pm #

    Hi adrian
    I made a GUI for this project and added the program on sourceforge.net:
    https://sourceforge.net/projects/ohodo/
    best regards

  13. Sanna Khan February 27, 2017 at 4:43 am #

    Hi adrian,

    I am facing the issues below while making bounding rectangles on bubbles:
    1. In the image, bubbles are close to the rectangle where students manually write their roll number; after thresholding, the bubbles touch the rectangle, so the circle can’t be found.
    2. If a bubble is filled outside the boundary, again it can’t be detected.
    3. False detection of circles because of similar height and width.

    Best Regards,
    Sanna

    • Adrian Rosebrock February 27, 2017 at 11:06 am #

      If you’re running into issues where the bubbles are touching other important parts of the image, try applying an “opening” morphological operation to disconnect them.

      • Sanna Khan February 28, 2017 at 1:46 am #

        What about the second and third issues? Is there any rough idea which can help me sort them out?

        • Adrian Rosebrock February 28, 2017 at 6:55 am #

          It’s hard to say without seeing examples of what you’re working with. I’m not sure what you mean by the bubble being impossible to detect when it’s filled in outside the circle; the code in this post actually helps prevent that by alleviating the need for Hough circles, which can be hard to tune the parameters for. Again, I get the impression that you’re using Hough circles instead of following the techniques in this post.

  14. Usman March 10, 2017 at 12:38 pm #

    Dear sir, I have installed imutils but I am still facing “ImportError: No module named ‘imutils'”. Kindly guide me.

    • Adrian Rosebrock March 10, 2017 at 3:43 pm #

      You can install imutils using pip:

      $ pip install imutils

      If you are using a Python virtual environment, access it first and then install imutils via pip.

  15. Hoang Ngoc March 10, 2017 at 9:26 pm #

    How to convert py to android

  16. Nyx March 11, 2017 at 4:28 am #

    Can this also work with many items in the exam like 50 or 100?

    • Adrian Rosebrock March 13, 2017 at 12:19 pm #

      Yes. As long as you can detect and extract the rows of bubbles this approach can work.

  17. Dana March 11, 2017 at 12:00 pm #

    Adrian, do you have the android version of this application?

    • Adrian Rosebrock March 13, 2017 at 12:17 pm #

      You will need to port the Python code to Java if you would like to use it as an Android application.

  18. Nic March 13, 2017 at 1:08 am #

    Hello Adrian,

    Do you have the code for this in Java? I am planning a project similar to this one, and I am having problems, especially since this program was created in Python and uses many plugin modules which are not available in Java.

    I hope you can consider my request since this is related to my school work. Thank you

    • Adrian Rosebrock March 13, 2017 at 12:12 pm #

      Hey Nic — I only provide Python and OpenCV code on this blog. If you are doing this for a school project I would really suggest you struggle and fight your way through the Python to Java conversion. You will learn a lot more that way.

  19. Nic March 14, 2017 at 6:11 am #

    Hi again adrian, thanks for the reply on my previous comments.

    Can you provide code that allows this program to run directly in a Python IDE rather than from the command line? I would like to focus on Python for developing a project like this one. I’ve asked many experts and Python was the first thing they recommended, since it can create many projects and provides support on many platforms, unlike Java.

    • Adrian Rosebrock March 15, 2017 at 8:57 am #

      Hey Nic — while I’m happy to help point readers like yourself in the right direction, I cannot write code for you. I would suggest taking the time to learn and study the language. If you need help learning OpenCV and computer vision, take a look at Practical Python and OpenCV.

      • Nic March 24, 2017 at 5:22 am #

        Can this process of computation be done on a mobile device alone using OpenCV and Python? If yes, in what way can it be done?

        • Adrian Rosebrock March 25, 2017 at 9:27 am #

          Most mobile devices won’t run native Python + OpenCV code. If you’re building for iOS, you would want to use Swift/Objective-C + OpenCV. For Android, Java + OpenCV.

  20. pawan March 18, 2017 at 2:14 am #

    hi Adrian

    Could you please share the code for how you drew the output of questionCnts on the image?

    • Adrian Rosebrock March 21, 2017 at 7:37 am #

      I’m not sure what you mean Pawan. Can you please elaborate?

  21. Ckj April 7, 2017 at 3:29 pm #

    I’m capturing an image through a USB web camera and executing this program, but the image is not giving any answers and it shows multiple errors.

    • Adrian Rosebrock April 8, 2017 at 12:41 pm #

      Without knowing what your errors are, it’s impossible to point you in the right direction.

Leave a Reply