# Recognizing digits with OpenCV and Python

Today’s tutorial is inspired by a post I saw a few weeks back on /r/computervision asking how to recognize digits in an image containing a thermostat identical to the one at the top of this post.

As Reddit users were quick to point out, utilizing computer vision to recognize digits on a thermostat tends to overcomplicate the problem — a simple data logging thermometer would give much more reliable results with a fraction of the effort.

On the other hand, applying computer vision to projects such as these is really good practice.

Whether you are just getting started with computer vision/OpenCV, or you’re already writing computer vision code on a daily basis, taking the time to hone your skills on mini-projects is paramount to mastering your trade — in fact, I find it so important that I do exercises like this one twice a month.

Every other Friday afternoon I block off two hours on my calendar and practice my basic image processing and computer vision skills on computer vision/OpenCV questions I’ve found on Reddit or StackOverflow.

Doing this exercise helps me keep my skills sharp — it also has the added benefit of making great blog post content.

In the remainder of today’s blog post, I’ll demonstrate how to recognize digits in images using OpenCV and Python.


## Recognizing digits with OpenCV and Python

In the first part of this tutorial, we’ll discuss what a seven-segment display is and how we can apply computer vision and image processing operations to recognize these types of digits (no machine learning required!).

From there I’ll provide actual Python and OpenCV code that can be used to recognize these digits in images.

### The seven-segment display

You’re likely already familiar with a seven-segment display, even if you don’t recognize the particular term.

A great example of such a display is your classic digital alarm clock:

Figure 1: A classic digital alarm clock that contains four seven-segment displays to represent the time of day.

Each digit on the alarm clock is represented by a seven-segment component just like the one below:

Figure 2: An example of a single seven-segment display. Each segment can be turned “on” or “off” to represent a particular digit (source: Wikipedia).

Seven-segment displays can take on a total of 128 possible states:

Figure 3: A seven-segment display is capable of 128 possible states (source: Wikipedia).

Luckily for us, we are only interested in ten of them — the digits zero to nine:

Figure 4: For the task of digit recognition we only need to recognize ten of these states.

Our goal is to write OpenCV and Python code to recognize each of these ten digit states in an image.

### Planning the OpenCV digit recognizer

Just like in the original post on /r/computervision, we’ll be using the thermostat image as input:

Figure 5: Our example input image. Our goal is to recognize the digits on the thermostat using OpenCV and Python.

Whenever I am trying to recognize/identify object(s) in an image I first take a few minutes to assess the problem. Given that my end goal is to recognize the digits on the LCD display I know I need to:

• Step #1: Localize the LCD on the thermostat. This can be done using edge detection since there is enough contrast between the plastic shell and the LCD.
• Step #2: Extract the LCD. Given an input edge map I can find contours and look for outlines with a rectangular shape — the largest rectangular region should correspond to the LCD. A perspective transform will give me a nice extraction of the LCD.
• Step #3: Extract the digit regions. Once I have the LCD itself I can focus on extracting the digits. Since there seems to be contrast between the digit regions and the background of the LCD I’m confident that thresholding and morphological operations can accomplish this.
• Step #4: Identify the digits. Recognizing the actual digits with OpenCV will involve dividing the digit ROI into seven segments. From there I can apply pixel counting on the thresholded image to determine if a given segment is “on” or “off”.

To see how we can accomplish this four-step digit recognition process with OpenCV and Python, keep reading.

### Recognizing digits with computer vision and OpenCV

Let’s go ahead and get this example started.

Open up a new file, name it recognize_digits.py, and insert the following code:

Lines 2-5 import our required Python packages. We’ll be using imutils, my series of convenience functions to make working with OpenCV + Python easier. If you don’t already have imutils installed, you should take a second now to install the package on your system using pip:

Lines 9-20 define a Python dictionary named DIGITS_LOOKUP. Inspired by the approach of /u/Jonno_FTW in the Reddit thread, we can easily define this lookup table where:

1. The key to the table is the seven-segment array. A one in the array indicates that the given segment is on and a zero indicates that the segment is off.
2. The value is the actual numerical digit itself: 0-9.

Once we identify the segments in the thermostat display we can pass the array into our DIGITS_LOOKUP table and obtain the digit value.

For reference, this dictionary uses the same segment ordering as in Figure 2 above.
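Concretely, such a lookup table might look like the following sketch. The segment order assumed here is top, top-left, top-right, center, bottom-left, bottom-right, bottom; if your segment coordinates are enumerated in a different order, the tuples must be permuted to match (see the discussion of digits 1, 2, and 7 in the comments below):

```python
# seven-segment state -> digit lookup table; each tuple lists the
# segments in the order: top, top-left, top-right, center,
# bottom-left, bottom-right, bottom (1 = on, 0 = off)
DIGITS_LOOKUP = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 1, 0): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}
```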

Let’s continue with our example:

Line 23 loads our image from disk.

We then pre-process the image on Lines 27-30 by:

• Resizing it.
• Converting the image to grayscale.
• Applying Gaussian blurring with a 5×5 kernel to reduce high-frequency noise.
• Computing the edge map via the Canny edge detector.

After applying these pre-processing steps our edge map looks like this:

Figure 6: Applying image processing steps to compute the edge map of our input image.

Notice how the outlines of the LCD are clearly visible — this accomplishes Step #1.

We can now move on to Step #2, extracting the LCD itself:

In order to find the LCD regions, we need to extract the contours (i.e., outlines) of the regions in the edge map (Lines 34-36).

We then sort the contours by their area, ensuring that contours with a larger area are placed at the front of the list (Line 37).

Given our sorted contours list, we loop over them individually on Line 41 and apply contour approximation.

If our approximated contour has four vertices then we assume we have found the thermostat display (Lines 48-50). This is a reasonable assumption since the largest rectangular region in our input image should be the LCD itself.

After obtaining the four vertices we can extract the LCD via a four point perspective transform:

Applying this perspective transform gives us a top-down, birds-eye-view of the LCD:

Figure 7: Applying a perspective transform to our image to obtain the LCD region.

Obtaining this view of the LCD satisfies Step #2 — we are now ready to extract the digits from the LCD:

To obtain the digits themselves we need to threshold the warped image (Lines 59 and 60) to reveal the dark regions (i.e., digits) against the lighter background (i.e., the background of the LCD display):

Figure 8: Thresholding the LCD allows us to segment the dark regions (digits/symbols) from the lighter background (the LCD display itself).

We then apply a series of morphological operations to clean up the thresholded image (Lines 61 and 62):

Figure 9: Applying a series of morphological operations cleans up our thresholded LCD and will allow us to segment out each of the digits.

Now that we have a nice segmented image we once again need to apply contour filtering, only this time we are looking for the actual digits:

To accomplish this we find contours in our thresholded image (Lines 66 and 67). We also initialize the digitsCnts list on Line 69 — this list will store the contours of the digits themselves.

Line 72 starts looping over each of the contours.

For each contour, we compute the bounding box (Line 74), ensure the width and height are of an acceptable size, and if so, update the digitsCnts list (Lines 77 and 78).

Note: Determining the appropriate width and height constraints requires a few rounds of trial and error. I would suggest looping over each of the contours, drawing them individually, and inspecting their dimensions. Doing this process ensures you can find commonalities across digit contour properties.

If we were to loop over the contours inside digitsCnts and draw the bounding box on our image, the result would look like this:

Figure 10: Drawing the bounding box of each of the digits on the LCD.

Sure enough, we have found the digits on the LCD!

The final step is to actually identify each of the digits:

Here we are simply sorting our digit contours from left-to-right based on their (x, y)-coordinates.

This sorting step is necessary as there are no guarantees that the contours are already sorted from left-to-right (the same direction in which we would read the digits).

Next comes the actual digit recognition process:

We start looping over each of the digit contours on Line 87.

For each of these regions, we compute the bounding box and extract the digit ROI (Lines 89 and 90).

I have included a GIF animation of each of these digit ROIs below:

Figure 11: Extracting each individual digit ROI by computing the bounding box and applying NumPy array slicing.

Given the digit ROI we now need to localize and extract the seven segments of the digit display.

Lines 94-96 compute the approximate width and height of each segment based on the ROI dimensions.

We then define a list of (x, y)-coordinates that correspond to the seven segments on Lines 99-107. This list follows the same order of segments as Figure 2 above.

Here is an example GIF animation that draws a green box over the current segment being investigated:

Figure 12: An example of drawing the segment ROI for each of the seven segments of the digit.

Finally, Line 108 initializes our on list — a value of one inside this list indicates that a given segment is turned “on” while a value of zero indicates the segment is “off”.

Given the (x, y)-coordinates of the seven display segments, identifying whether a segment is on or off is fairly easy:
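Here is a self-contained sketch of that segment test, applied to a synthetic ROI shaped like a “7” (a lit top band and a lit right-hand column). The lookup entry is the subset of the table such a pattern should hit, and np.count_nonzero stands in for the cv2.countNonZero call in the original:

```python
import numpy as np

# subset of the lookup table, in the segment order: top, top-left,
# top-right, center, bottom-left, bottom-right, bottom
DIGITS_LOOKUP = {(1, 0, 1, 0, 0, 1, 0): 7}

# synthetic thresholded digit ROI shaped like a "7"
(w, h) = (40, 80)
(dW, dH), dHC = (10, 12), 4
roi = np.zeros((h, w), dtype="uint8")
roi[0:dH, :] = 255       # top band lit
roi[:, w - dW:] = 255    # right-hand column lit

segments = [
    ((0, 0), (w, dH)),                            # top
    ((0, 0), (dW, h // 2)),                       # top-left
    ((w - dW, 0), (w, h // 2)),                   # top-right
    ((0, (h // 2) - dHC), (w, (h // 2) + dHC)),   # center
    ((0, h // 2), (dW, h)),                       # bottom-left
    ((w - dW, h // 2), (w, h)),                   # bottom-right
    ((0, h - dH), (w, h)),                        # bottom
]

# mark each segment "on" when more than 50% of its area is lit
on = [0] * len(segments)
for (i, ((xA, yA), (xB, yB))) in enumerate(segments):
    segROI = roi[yA:yB, xA:xB]
    filled = np.count_nonzero(segROI) / float((xB - xA) * (yB - yA))
    if filled > 0.5:
        on[i] = 1

# the on/off pattern maps straight to the digit
digit = DIGITS_LOOKUP[tuple(on)]
```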

We start looping over the (x, y)-coordinates of each segment on Line 111.

We extract the segment ROI on Line 115, followed by computing the number of non-zero pixels on Line 116 (i.e., the number of pixels in the segment that are “on”).

If the ratio of non-zero pixels to the total area of the segment is greater than 50% then we can assume the segment is “on” and update our on list accordingly (Lines 121 and 122).

After looping over the seven segments we can pass the on list to DIGITS_LOOKUP to obtain the digit itself.

We then draw a bounding box around the digit and display the digit on the output  image.

Finally, our last code block prints the digit to our screen and displays the output image:
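That final block boils down to formatting the collected digits, along the lines of this sketch (the values shown are the thermostat reading from the example image):

```python
# digits collected by the recognition loop
digits = [3, 4, 5]

# print the reading with an inserted decimal point and degree sign
print(u"{}{}.{} \u00b0C".format(*digits))
```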

Notice how we have been able to correctly recognize the digits on the LCD screen using Python and OpenCV:

Figure 13: Correctly recognizing digits in images with OpenCV and Python.

## Summary

In today’s blog post I demonstrated how to utilize OpenCV and Python to recognize digits in images.

This approach is specifically intended for seven-segment displays (i.e., the digit displays you would typically see on a digital alarm clock).

By extracting each of the seven segments and applying basic thresholding and morphological operations we can determine which segments are “on” and which are “off”.

From there, we can look up the on/off segments in a Python dictionary data structure to quickly determine the actual digit — no machine learning required!

As I mentioned at the top of this blog post, applying computer vision to recognizing digits in a thermostat image tends to overcomplicate the problem itself — utilizing a data logging thermometer would be more reliable and require substantially less effort.

However, in the case that (1) you do not have access to a data logging sensor or (2) you simply want to hone and practice your computer vision/OpenCV skills, it’s often helpful to see a solution such as this one demonstrating how to solve the project.

I hope you enjoyed today’s post!

### 150 Responses to Recognizing digits with OpenCV and Python

1. Andrei February 13, 2017 at 11:18 am #

Hi,

in Figure 4 the image showing a 2 is missing 🙂
Perhaps you can add it to complete this good tutorial 😀

• Adrian Rosebrock February 13, 2017 at 1:32 pm #

You’re correct Andrei, thank you for pointing this out. I’ll get an updated image uploaded 🙂

2. James-Hung February 13, 2017 at 11:23 am #

Do you have the similar implementation in C++ ?

regards,

James-Hung

• Adrian Rosebrock February 13, 2017 at 1:31 pm #

Hi James-Hung — I only cover Python implementations on this blog.

• Tham February 13, 2017 at 4:08 pm #

I think it is quite easy to convert it to modern c++ implementation. One of the best things of learning c++ is, after you get familiar with it, you will find out you can pick up lot of languages in short time, especially a language with nice syntax and libs like python.

Thanks for the tutorial, this is a nice solution, especially step 4, I believe I would use machine learning(trained by mnist or other datasets) to recognize the digits rather than this creative, simple solution.

• Tham February 13, 2017 at 4:13 pm #

Sorry, I think I did not express my thought clearly. What I meant was I did not know there were such creative solutions before I studied this post, so I would have preferred machine learning for character recognition; although ML may be a more robust solution, it also takes more time and is much more expensive than this solution.

• Adrian Rosebrock February 14, 2017 at 1:25 pm #

Using machine learning to solve this problem is 100% acceptable; however, there are times when a clever use of image processing can achieve just as high accuracy with less effort. Of course, this is a bit of a contrived example and for more robustness machine learning should absolutely be considered, especially if there are changes in reflection, lighting conditions, etc.

3. Preetinder Singh February 13, 2017 at 11:28 am #

Interesting. really liked the post. Thanks for sharing. In case the scene illumination changes, the algorithm usually breaks or becomes less accurate. Please suggest all the different computer vision techniques in practice in order to remove or minimize the effects of illumination/brightness/contrast changes of the image for the algorithm to still work correctly OR at least with high accuracy ?

• Adrian Rosebrock February 13, 2017 at 1:31 pm #

If you need a more robust solution you should consider using machine learning to both localize the screen followed by recognize the actual digits.

4. Douglas Jones February 13, 2017 at 11:28 am #

A most excellent post and your timing is impeccable! I happen to have a need for just such 7-segment digit recognizer. Leaving the data logging sensor aside (where’s the fun in that) obviously this is just one way of using computer vision to recognize these digits. In your bag of goodies do you happen to have some thoughts on how one would do this WITH machine learning? I am guessing that KNN might be a good approach. Thoughts?

Thanks!

• Adrian Rosebrock February 13, 2017 at 1:30 pm #

Hey Douglas — I’m glad the post was timed well! As for doing this with machine learning, yes, it’s absolutely possible. I demonstrate how to recognize (handwritten) digits inside Practical Python and OpenCV and then discuss more advanced solutions inside the PyImageSearch Gurus course, but a good starting point would be the HOG descriptor + SVM.

• Douglas Jones February 13, 2017 at 8:07 pm #

Thanks Adrian! I have the books and associated video and have gone through them quite a lot (seems transatlantic/transpacific flights leave LOTs of reading time!)

I have not had the chance to try HOG and SVM. Since I am under the gun, so to speak, I will try and get a comparison of the two once converted to C#. I mentioned KNN because it is a lazy learning method and might be a touch faster. I am having to do all this in real time based on 60fps, so speed is always a worry. Especially when a single frame might contain several indicators with varying numbers of digits.

• Adrian Rosebrock February 14, 2017 at 1:23 pm #

k-NN is faster to train (since there is no training process) but is slower at test time, depending on how large your dataset is. You can apply approximate nearest neighbors to speed it up, but in the long run I think you’ll find better accuracy with HOG + SVM. Only quantify the digit region of the image via HOG and then pass the feature vector into the SVM.

• Douglas Jones February 15, 2017 at 2:50 pm #

Thanks. I will give that a try. As I have discovered from this blog’s code, translating to C# EmguCV/OpenCV is not straightforward at all. You have numpy and imutils plus some home grown routines which I do not have in C#. One thing I thought odd is doing the exact same steps up to performing the canny edge detection gave me an image that looked different from yours. I guess I find that odd because at the bottom of the code it is all OpenCV. You would think converting to gray, Gaussian blurring and doing Canny should give me the same image.

I shall look into the HOG + SVM in your book and will continue to see if I can translate this blog’s code into C#.

5. sinhue February 13, 2017 at 12:43 pm #

Hi, Adrian. I read the same post on reddit a few weeks ago, so when I checked my email it was a great surprise for me. I will check your solution. Thanks for sharing!

• Adrian Rosebrock February 13, 2017 at 1:28 pm #

I’m glad you saw the same post as well Sinhue! /r/computervision is a great resource for keeping up with computer vision news.

6. Manisha February 13, 2017 at 10:39 pm #

I get an error at print(u”{}{}.{} \u00b0C”.format(*digits))

“IndexError: tuple index out of range”

if I comment out print stmt I see bounding box for last digit only

• Adrian Rosebrock February 15, 2017 at 9:13 am #

Hi Manisha — make sure you use the “Downloads” section of this tutorial to download the source code + example image. It sounds like you might have copied and pasted code and accidentally introduced an error where not all digits are detected.

• Dwait June 29, 2017 at 5:15 pm #

Hey, Adrian. I’m getting the same error and I’m using the downloaded code and image. What’s going wrong here?

• Adrian Rosebrock June 30, 2017 at 8:05 am #

It’s hard to say what the exact issue is without physical access to your machine. What versions of Python + OpenCV are you running?

• Dwait July 13, 2017 at 12:48 am #

I’m using python 3.4.2 and OpenCV 3.2.0

• Dwait June 29, 2017 at 5:26 pm #

So I just tried running the code after commenting the print line out. Now the output image detects two digits – the 3 and the 4. And among those also it detects the 3 as a 9.

• Nishant January 20, 2018 at 1:59 am #

Am facing same error, did you resolve the issue

• Guilherme June 20, 2018 at 2:16 pm #

This is happening exactly because the tuple digits isn’t complete… failing to detect any digits causes this… check your code again, you’ve probably mis-copied something.

7. Leena February 13, 2017 at 11:17 pm #

Thanks for really useful post for many application.

8. Cristobal February 14, 2017 at 6:07 am #

Hi Adrian! In Figure 4 the number 1 it would be on segments 2 and 5 like in the dictionary. Thanks for sharing!!

9. sun February 14, 2017 at 11:30 am #

i want to used this code in your project(bubbles sheet omr), to read students numbers.

regards

• sun February 14, 2017 at 11:32 am #

sorry i mean, can i used it in bubbles sheet project to read students numbers.
regards

• Adrian Rosebrock February 14, 2017 at 1:20 pm #

Can you elaborate more on “read students numbers”? The numbers on what? The bubble sheet itself?

• sun February 14, 2017 at 7:39 pm #

i mean “student id” in the top of any omr paper like this image:

we can used your project to getting student id by
print row containt 10 “seven-segment” on the top of the paper. then the student shadded his id before he answering the questions (shadded the bubbles).

• Adrian Rosebrock February 15, 2017 at 9:04 am #

If you’re doing OMR and bubble sheet recognition, why not follow the approach detailed here? Or is your goal to validate that what the user bubbles in matches what they wrote? If it’s the latter, one of the best methods to determine these digits would be to train a simple digit detector using machine learning. I demonstrate how to train your own digit detectors inside Practical Python and OpenCV.

10. Steve February 14, 2017 at 12:37 pm #

Hi Adrian, very interesting. I have a note which is beside the point of the image-recognition, but may be useful: you have the “1” as represented by two vertical segments on the left, but it may be two vertical segments on the right (take a look at the alarm clock picture on this very page). I imagine it would be simple to add a second entry to your lookup table to account for this. Cheers.

• Steve February 14, 2017 at 12:38 pm #

Or rather: your image of the ten digits has it on the left, but the lookup table seems to have it on the right (2, 5). Either way, a second entry would help to make this work across different displays.

• Adrian Rosebrock February 14, 2017 at 1:19 pm #

Great point, thanks for sharing Steve!

11. Shuvam Ghosh February 14, 2017 at 3:35 pm #

Awesome post.
After the canny edge detection and countour analysis, we assume that the largest rectangle with four vertices is the LCD. But in fact it is the whole outline of the thermostat(I.e. the output after canny edge detection as shown) and not the LCD. I found this part confusing. Can you please explain me this. The largest rectangle with 4 vertices for me is the thermostat outline not the LCD.

• Adrian Rosebrock February 15, 2017 at 9:09 am #

After contour approximation the thermostat box does not have 4 vertices. Looking at the edge map you can also see that the thermostat box does not form a rectangle — there are disconnects along its path. Therefore, it’s not a rectangle and our algorithm does not consider it.

12. delta February 14, 2017 at 10:33 pm #

in python3, opencv3, i got this msg:
object has no attribute ‘reshape’.

13. Mark February 15, 2017 at 3:08 pm #

Hello Adrian, excellent post. Visiting your blog is always satisfying for those who work in image processing.

• Adrian Rosebrock February 16, 2017 at 9:51 am #

Thank you Mark, I really appreciate that 🙂

14. Harsh February 15, 2017 at 3:34 pm #

I am trying to run code under Ubuntu with python 3.5 and opencv 3.0 and getting an import error

File “recognize_digits.py”, line 5, in
from imutils.perspective import four_point_transform
ImportError: No module named imutils.perspective

• Adrian Rosebrock February 16, 2017 at 9:51 am #

Make sure you install the `imutils` library on your system:

`\$ pip install --upgrade imutils`

• Harsh February 16, 2017 at 4:04 pm #

Thanks for your quick reply. I already run upgrade imutils but it still shows the same error. Previously I followed steps from [https://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/] to setup my environment

• Adrian Rosebrock February 20, 2017 at 8:04 am #

If you are using a Python virtual environment for your code, make sure you upgrade `imutils` there as well. The imutils library published to PyPI is indeed the latest version, so you likely have an old version either in (1) a Python virtual environment or (2) your global system Python install and are accidentally using the older version.

15. Daly February 27, 2017 at 12:49 pm #

the code works fine while tested on the image that comes along with it, when I try using other image, error showed, using an image with same type but not the same dimensions the error is ‘NoneType’ object has no attribute ‘reshape’
and when using an image with same type and dimensions, this is showed “key=lambda b: b[1][i], reverse=reverse))
ValueError: need more than 0 values to unpack

• Adrian Rosebrock March 2, 2017 at 7:00 am #

This blog post was built around a specific example (the thermostat image at the top of the post). It’s unlikely to work out-of-the-box with other images. You will likely need to debug the script and ensure that you can find the LCD screen followed by the digits on the screen.

• TT May 9, 2017 at 7:49 pm #

Could you give instructions on how to do digit recognition in a real-time webcam? I’ve met the same problems, and it’s hard to do

• Adrian Rosebrock May 11, 2017 at 8:50 am #

The same techniques you apply to a single image can be applied to a video stream as well. Remember that a video stream is just a collection of single images.

16. Antonios Kats March 6, 2017 at 3:12 am #

I would like to ask you if it’s possible, instead of cv2.something(), to type straight something().
Is there a namespace command, so as to not need to type cv2. every single time?

It’s an amazing example. It works perfect. Hope for more in the future 😉
K.R.
Antonios

• Adrian Rosebrock March 6, 2017 at 3:37 pm #

I DO NOT recommend you do this as namespacing is extremely important to tidy, maintainable code. That said, this will get you what you want:

`from cv2 import *`

17. suhas March 24, 2017 at 7:59 am #

Can you post a code on Recognizing alphabets with OpenCV and python?

18. mohsen April 6, 2017 at 12:46 pm #

hi
i have one error :

The function is not implemented. Rebuild the library with Windows, GTK+ 2.x

• Adrian Rosebrock April 8, 2017 at 12:55 pm #

This sounds like an error related to the “highgui” module of OpenCV. Re-compile and re-install OpenCV following one of my tutorials.

19. Ravenjam April 25, 2017 at 2:43 pm #

Hey man I really like your posts. Can you do one for opencv 3 and python on face recognition (not detection)? I’m having trouble finding a workable example online. Thanks!

20. hgfav May 27, 2017 at 9:13 pm #

I get this error while trying to compile : “ImportError: No module named cv2”. Then I try to install this module using pip, it doesn’t exist.

21. Phuong June 1, 2017 at 1:46 am #

Hi, Adrian I have problems in using open cv and python. Because it is the identity on the lcd if it is identified on a numeric table is how. When running your program all the things I did not break is due to code I just break it when using open cv with python. What are you using python and opencv? love you
and you give code because i not dowload guide of you ?

• Adrian Rosebrock June 4, 2017 at 6:27 am #

I used OpenCV 3 and Python 2.7 for this blog post. It also works with OpenCV 2.4 and Python 2.7 along with OpenCV 3 and Python 3. If you are getting an error with the code, please share it. Without the error myself and other readers cannot help you.

22. shruthi June 14, 2017 at 6:22 pm #

Thanks for one more awesome tutorial.
I would like to know whether this can be used for speed limit board recognition. I mean to read numbers on road side speed limit.

Thanks

• Adrian Rosebrock June 16, 2017 at 11:25 am #

Are you referring to the LED segment display boards? Or the actual signs? For the LED segment boards, this approach would likely work. If you want to use this approach actual signs I would train a custom object detector to first detect the sign, then extract the digits, followed by classifying them.

23. jmbwell June 28, 2017 at 2:57 pm #

This would be useful for reading the display of a window unit that has IR remote control but lacks a two-way control protocol for querying status.

You keep saying a simple data-logging thermometer would be simpler, but that doesn’t help if what you want is not just to know the temperature of the room, but to know what a machine with only a visual display is currently reporting.

Thanks for the inspiring write-up.

24. vishal gupta June 30, 2017 at 7:32 am #

I am looking for something which can take input image having character strings in tabular format and give me output in text file containing those strings in same order as input have

• Adrian Rosebrock June 30, 2017 at 8:01 am #

Hi Vishal — next week we will be starting a series on Optical Character Recognition (OCR). These upcoming tutorials should help you with your project.

25. Suhas Sreenivas July 26, 2017 at 6:28 am #

Hi Adrian! Great tutorial! Learnt a lot of from your posts to effectively leverage openCV APIs for a given problem. Especially a fan of the pokedex project, the screen recognition and extraction part of which is integral part of my project. Many thanks! It’s humbling to see you take the time to give back to the community. Coming back to the tutorial at hand, wouldn’t the digit ‘1’ have a smaller ROI than other digits, specifically only 2 of the 7 segments, namely ‘1’ and ‘4’ would be a part of the ROI and wouldn’t be possible to check if the other segments are on or off. Sorry if it’s a noob question or if i missed something. Let me know! Thank you! Appreciate your time!

• Adrian Rosebrock July 28, 2017 at 10:02 am #

Hi Suhas — it’s great to hear that you are enjoying the PyImageSearch blog, that’s great! As for your question, yes, a “1” would have a smaller ROI. In that case you would want to pre-define what the (approximate) width and height of the ROI are. Otherwise, you can extract all ROIs and then pad them until they are the same size.


26. Aditya September 22, 2017 at 4:43 pm #

So I was curious and wanted to try this OCR on my NVIDIA Jetson TX2.

I am using OpenCV3.3.0 on my board as well as my PC. And both have ubuntu16.04 running.

What I found surprising is that this program gives the wrong output in case of the same image you provided on Jetson while it works fine on my PC.

On Jetson I have the output as 9, 4 and it does not detect the 5 at all in the image. (Note that, 3 is detected as 9.)

While in my machine, it is able to detect (3, 4, 5) all correctly.

Do you have any idea why is this happens?
It will be great if you can suggest me some solution.

• Adrian Rosebrock September 23, 2017 at 10:08 am #

Can you confirm which version of OpenCV you have installed on the Jetson? This seems like an OpenCV version discrepancy issue, likely a problem with OpenCV 2.4 and OpenCV 3.

• Aditya September 25, 2017 at 11:34 am #

It is openCV3.3.0, With Ubuntu16.04

• Adrian Rosebrock September 26, 2017 at 8:22 am #

I used OpenCV 3 for this blog post, so I’m surprised that there is a different output. I’m honestly not sure what the problem is. I would compare the output threshold maps from the Jetson output to the output on your local system.

27. Matt October 2, 2017 at 3:42 pm #

When I try to identify my digit, I am getting “Key Error: (0, 0, 1, 1, 1, 0, 1)”

I added this key to DIGITS_LOOKUP to try to get an output and I am getting the following image: https://ibb.co/bJooyw

Is there anything you can suggest to debug this odd behavior?

Thank you.

• Adrian Rosebrock October 4, 2017 at 12:43 pm #

This algorithm assumes you can nicely segment the digit from the background. It then does a test on each of the 7-segments. The reason you are getting a key error is because the “on/off” test failed for one or more of the segments. Go back to the segmentation and “on/off” test — you’ll likely need to turn parameters here to make it work on your particular image.

• Matt October 13, 2017 at 11:50 am #

What ended up making the difference was blurring the image 3 more times so that the segments bled together. It does not see the segments as rectangles now and the warped image includes the whole digit. Thanks again for your post.

• Adrian Rosebrock October 14, 2017 at 10:39 am #

Nice, congrats on resolving the issue, Matt!

28. Aditya C. October 28, 2017 at 5:30 pm #

Hello Mr. Rosebrock,

I am developing a number detection system for my Raspberry Pi.

This guide is about a 7-segment system and the numbers I am trying to read are on an analog dial.

How should I set up the code for this?

Thanks

• Adrian Rosebrock October 31, 2017 at 8:08 am #

Hey Aditya — do you have any example images you are working with? It’s hard to provide a suggestion without seeing an example of your analog dial.

29. aagontuk December 14, 2017 at 3:59 am #

I think the DIGITS_LOOKUP mappings for 2 and 7 are wrong!

For digit 2 it should be (1, 0, 1, 1, 1, 0, 1) rather than (1, 0, 1, 1, 1, 1, 0)
and for 7 it should be (1, 1, 1, 0, 0, 1, 0) rather than (1, 0, 1, 0, 0, 1, 0)

I was doing a similar project and this post was a great help to me. Actually, the whole of PyImageSearch is. Great articles! Really a good site for beginners. Thanks!

30. mk December 20, 2017 at 11:47 am #

This is awesome! Thank you so much for sharing this. If we use machine learning for better accuracy, how can we prepare the training dataset? As in the handwriting example, do we need to prepare a same-sized digit image dataset for each number?

• Adrian Rosebrock December 22, 2017 at 7:08 am #

That really depends on your particular project and what method you are using; I would need to know more details on the project. If you’re working with raw images and their pixels, each image should be the same size before passing them into a model. If you’re applying feature extraction, the output feature vectors should be the same length for all images.

For what it’s worth, I have over 40 lessons on machine learning and preparing your dataset inside the PyImageSearch Gurus course. Be sure to take a look as it will help you learn how to build your datasets for machine learning.

31. Jonathan M January 16, 2018 at 8:23 pm #

I only just came across this now and noticed you used my method from the reddit thread for converting pixel areas to a digit! Cool

• Adrian Rosebrock January 17, 2018 at 10:15 am #

I did! Thank you for recommending the lookup dictionary, Jonathan.

32. tom February 9, 2018 at 12:07 pm #

Am I the only one having a problem detecting ones? The program creates a square around the 1 digit and then scans for all the segments in that area, which is why a one always gives me a different answer… I mean, is it working for you?

• Sally September 26, 2018 at 2:59 pm #

Hi, have you figured this out? I’m running into the same issue where it detects the 1 as an 8.

33. ayse February 12, 2018 at 6:50 am #

hello.
How can I recognize digits with OpenCV and Python?

• ayse February 12, 2018 at 6:51 am #

Sorry, I meant recognizing text.

• Adrian Rosebrock February 12, 2018 at 6:11 pm #

Try taking a look at OCR algorithms.

34. March 23, 2018 at 9:58 am #

I am a Chinese student and thank you very much for sharing.

• Adrian Rosebrock March 27, 2018 at 6:35 am #

I’m happy to share!

35. Robert Poor March 30, 2018 at 9:46 am #

Super helpful, but what is going on with line 36?

cnts = cnts[0] if imutils.is_cv2() else cnts[1]

And how do you know that cnts[0] (or cnts[1]) is the one you want to keep?

• Robert Poor March 30, 2018 at 10:00 am #

Ah, figured it out. The return values of findContours() were previously:

contours, hierarchy = cv.findContours(…)

and is now

image, contours, hierarchy = cv.findContours(…)

so line 36 is accommodating both versions.
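That version check can be wrapped in a small helper; note that newer versions of imutils ship an equivalent `grab_contours` function, so this sketch just shows the idea:

```python
def grab_contours(cnts):
    """Return the contour list regardless of OpenCV version.

    cv2.findContours returns (contours, hierarchy) in OpenCV 2.4 and 4.x,
    but (image, contours, hierarchy) in OpenCV 3.x, so pick the element
    based on the tuple length instead of the version number.
    """
    if len(cnts) == 2:
        return cnts[0]
    elif len(cnts) == 3:
        return cnts[1]
    raise ValueError("unexpected cv2.findContours return value")

# usage (assuming cv2 and a binary image `thresh`):
# cnts = grab_contours(cv2.findContours(thresh.copy(),
#                      cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
```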

• Adrian Rosebrock March 30, 2018 at 10:49 am #

Yep, you got it Robert! 🙂

36. tinkeringengr April 11, 2018 at 1:51 am #

Awesome, thanks for sharing!

37. Rucha Mewada April 16, 2018 at 5:04 am #

Hello frenzz
I’ve tried this code but it’s not working for any other downloaded image …

38. Sukhmeet April 24, 2018 at 12:26 pm #

Hi Adrian, I am really impressed as usual by your blog posts and examples. I am working on a similar project; the only thing is that the data I am working with is handwritten (numbers from 0 to 9), and I was looking for a way to recognize these digits and put them in an output format, maybe JSON or text.

• Adrian Rosebrock April 25, 2018 at 5:39 am #

If you would like to recognize handwritten digits you would need a machine learning approach for any type of reasonable accuracy. I have three suggestions:

1. Work through Practical Python and OpenCV which includes a chapter on how to recognize handwritten digits

2. If you would like a more advanced treatment of computer vision in general (but still includes handwritten digit recognition) then the PyImageSearch Gurus course would be your best bet

3. If you want to study deep learning and perform handwritten digit recognition then you should go through Deep Learning for Computer Vision with Python

I hope that helps point you in the right direction!

39. vimal aditya April 30, 2018 at 2:55 am #

Hii sir,
Nice article. I am trying to do this on an Android phone; any suggestions?

• Adrian Rosebrock April 30, 2018 at 12:34 pm #

I don’t have any experience developing Android applications. There are Java + OpenCV bindings, but you’ll need to come up with your own Java implementation for Android. The other alternative would be to use a framework such as PhoneGap/Cordova or React Native and send the image (with either of those frameworks) to a server to process the image where you could easily run Python.

40. Sohrab May 8, 2018 at 7:23 pm #

Thanks Adrian, great articles, great job man,

but your algorithm will always detect 1 as 8!

• Luiz July 9, 2019 at 1:03 pm #

Exactly! It detects the edges, so the bounding box fits around the whole number 1, making every segment test pass and turning the 1 into an 8.

41. vinit May 22, 2018 at 8:08 am #

How well would the approach of using OpenCV to extract the LCD area and then detecting the numbers with the Tesseract OCR library work?

• Adrian Rosebrock May 23, 2018 at 7:23 am #

Are you asking specifically on how to use Tesseract for this project? The gist is that you would want to obtain a very cleanly segmented image of the digits. From there, allow Tesseract to OCR them.

42. Prathmesh Dudhe June 26, 2018 at 12:24 pm #

I changed the input file and now it is not showing the output.
What should I do?
Now what should I do ?

• Adrian Rosebrock June 28, 2018 at 8:16 am #

I’m not sure what you mean. Could you clarify?

43. Imran Shafiq July 27, 2018 at 3:10 am #

I get an “invalid syntax” error for cnts = cnts[0] if imutils.is_cv2() else cnts[1]

• Adrian Rosebrock July 31, 2018 at 12:04 pm #

Hey Imran, make sure you are using the “Downloads” section of the blog post to download the code rather than trying to copy and paste. Using the downloads will ensure there are no errors due to copying and pasting.

44. Nguyen August 1, 2018 at 6:26 am #

I ran the code with a different image and get Key error: (0,1,1,1,1,0). I checked step 1 and found that in my image the boundary between the LCD and its surroundings is not distinct, which means that when running Canny the edge of the LCD is not complete. Is that the main problem?

• Adrian Rosebrock August 2, 2018 at 9:31 am #

The error is that the key does not exist in the “DIGITS_LOOKUP” dictionary. To resolve the error you can:

1. Adjust the preprocessing steps by experimentation, including more/less blur, different Canny parameter values, etc.
2. Or apply a more advanced OCR approach, such as HOG features with a machine learning model, the Google Vision API, or potentially even Tesseract

45. Anjar August 1, 2018 at 9:56 pm #

Hey Adrian, I’ve changed the input picture to another seven-segment image, but the output does not recognize the horizontal segments. What should I do?

46. uri August 2, 2018 at 2:11 am #

Great article!

I have a mission to recognize digits in the Street View House Numbers format, without using a machine learning algorithm…

Would your code above work for that, or is it only for the seven-segment format?

Uri

• Adrian Rosebrock August 2, 2018 at 9:21 am #

To be honest I don’t think trying to recognize digits from the Street View House dataset without machine learning is a good investment of time. Is there a particular reason you would want to do this?

47. Michael September 3, 2018 at 4:07 pm #

Hello,
I’ve found a little bug in the code. When trying to detect the digit ‘2’ it just gives errors due to the DIGITS_LOOKUP table not being correct. Just change the line:

(1, 0, 1, 1, 1, 1, 0): 2,

to

(1, 0, 1, 1, 1, 0, 1): 2,

since segment number 6 is on.

Thanks for the tutorial BTW
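For reference, here is the lookup table with only the ‘2’ entry changed as described above; the segment ordering (top, top-left, top-right, center, bottom-left, bottom-right, bottom) is assumed from the tutorial:

```python
# Segment order (assumed): 0 top, 1 top-left, 2 top-right, 3 center,
# 4 bottom-left, 5 bottom-right, 6 bottom
DIGITS_LOOKUP = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,  # corrected: bottom-right off, bottom on
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}
```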

• Johannes January 4, 2019 at 4:11 am #

Yeah, came to the same conclusion. Digit 2 should be off at bottom right and on at bottom.

48. Felipe Freitas October 10, 2018 at 9:51 am #

Hi Adrian, thanks for that. I’m having an issue. When I run the entire code that comes in the download section, everything works fine, but when I run it step by step, I never get to the picture shown in Figure 6.
Do you have any idea of what might be happening?

• Felipe Freitas October 10, 2018 at 4:14 pm #

Nevermind, solved.

• Adrian Rosebrock October 12, 2018 at 9:12 am #

Congrats on resolving the issue, Felipe! 🙂

49. Alvin December 19, 2018 at 4:45 am #

Hi Adrian, I was trying to run the first part of the tutorial within the virtual environment but I ran into an error: “Import error: No module named ‘scipy’”. I tried looking online for how to install SciPy into a virtual environment on the RPi, but whenever I run “pip install scipy”, the Pi hangs. Do you have any idea how to install SciPy in a virtual environment?

• Adrian Rosebrock December 19, 2018 at 1:49 pm #

Try installing SciPy (inside your Python virtual environment) via:

`$ pip install scipy --no-cache-dir`

Let your Pi sit overnight; the compile will take a few hours.

50. Warner Losh December 25, 2018 at 11:28 pm #

I have a pump on my in-floor radiant heat system. It doesn’t have a data port. I’d like to keep track of when it is running (my boiler turns it on and off when necessary). So I need to know (a) if it’s on, and (b) if so, what the values are. It’s a little more complicated than your example. The display toggles between Watts and GPM (lighting two different indicators), and I’d like to collect both values. Second, there are 7 different pump speed settings, again with different LEDs that light up behind different graphics. Finally, the camera I have pointing at this LED display isn’t completely fixed, so I have to cope with both rotation and scale issues. I think I’ll start with this program and see how far I can get with this method… If that works, I’ll move on to a radon sensor I have which only has LEDs for output… 🙂 Oh, and the room it’s in has variable lighting. With the lights out, only the LEDs are lit, while when I turn the lights on to do something in the room, the base image changes somewhat…

• Adrian Rosebrock December 27, 2018 at 10:20 am #

What a great project, Warner! Be sure to let us know how it goes.

51. BEEZEE January 7, 2019 at 6:39 am #

Hi, I am using the exact same image and code, but my image after edge detection is different from the one you got. I do not get a sharp rectangle for the LCD, so at the end I get no result.

• Adrian Rosebrock January 8, 2019 at 6:51 am #

What version of OpenCV and Python are you using?

52. Bruno February 27, 2019 at 10:27 am #

The algorithm does not recognize the ‘1’ and ‘7’ digits; it ignores them.
Any suggestion on how I can solve this?

• Luiz July 9, 2019 at 1:08 pm #

I have no idea either. I thought of a vertical dilation to join the pieces, because when I return the detected edges of 1 and 7 they split in the middle since segment 4 is off. I also have doubts about how it would recognize the number 1, since its edges fill its entire bounding box, which causes 100% detection and makes the algorithm recognize it as an 8.

• Luiz August 1, 2019 at 1:38 pm #

I was able to solve it; I modified the code and now it meets all the requirements.

I simply stretched the found regions 20% downward and dilated them so the pieces join up.

I changed the validation method to use only the mean fill of each segment (because the dilation can change the result).

As for identifying the number 1, I simply assume that a contour occupying about 10% of the analyzed image’s width (this depends on each test) is narrow enough to be a 1. Of course, the display must not contain any unwanted blob of the same proportions.
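A rough sketch of that narrow-contour heuristic: the helper name and the 0.25 width-to-height cutoff are illustrative (Luiz uses a fraction of the image width instead), so tune the value for your display:

```python
def classify_narrow_as_one(w, h, max_aspect=0.25):
    """Treat an unusually narrow digit contour as a '1'.

    The segment test fails on '1' because its bounding box contains only
    the two right-hand segments, so every sampled region looks lit and
    the digit reads as an '8'. Checking the width/height ratio of the
    contour's bounding box catches the '1' before the segment test runs.

    w, h: bounding-box size of the digit contour
    max_aspect: below this w/h ratio the contour is assumed to be a '1'
    """
    return (w / float(h)) < max_aspect
```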

• Adrian Rosebrock August 7, 2019 at 12:38 pm #

Congrats on resolving the issue!

53. Sid Razdan March 8, 2019 at 12:18 pm #

Adrian, I’ve just come across this article of yours even though I’ve been following you for months. Great work!

Just a small question, will this work with 4 or 5 digits?

Please tell me what to do for 4,5,6,7,8 or even 9 digit detection.
Regards.

• Adrian Rosebrock March 13, 2019 at 4:02 pm #

Four or five total digits? Yes, it will work, but you might need to tune the thresholding parameters so that each digit is nicely segmented.

54. Gary Zheng March 21, 2019 at 3:08 pm #

Hi! I enjoy your articles a lot. Is there any tutorial aimed at non-7-segment digit displays? I am currently working on printed digit recognition and neither the handwritten nor the 7-segment approach seems to work.

55. Dries De Kegel April 8, 2019 at 9:22 am #

I’ve just come across this article while searching for a tool to read out LCD displays from a movie (or sequential images).
However in my case, I have 2 LCD screens in my image frame.

How would you tackle that problem?

• Adrian Rosebrock April 12, 2019 at 12:35 pm #

You would find the contours of the screens and loop over each of them versus just trying to find the largest screen.

56. Benz April 9, 2019 at 7:02 am #

Thanks for this Tuto,

I have a question: what if we take another image that has more contrast or noise?
The preprocessing (thresholding) done in this tutorial is specific to this image of the thermostat, but in a real case we will take different images in different positions, so this threshold can affect the image badly.
How can we make a preprocessing step that generalizes to all images?

• Adrian Rosebrock April 12, 2019 at 12:26 pm #

You would want to preprocess/clean the images as best as you could and then apply OpenCV OCR to it.

57. Laxman Hanamant Jeergal April 17, 2019 at 3:28 am #

What about handwritten digit recognition? Please help.

• Adrian Rosebrock April 18, 2019 at 6:43 am #

I cover the basics of handwriting recognition inside Practical Python and OpenCV. I would suggest starting there.

58. jaychiou April 18, 2019 at 5:45 am #

Thanks for this post,

I had some bugs when I was running the program. Here is how it went:

I didn’t get a complete edge map as you did; that is, the lines of my preprocessed edge map were not continuous, so I couldn’t detect the four vertices of the LCD.

What is the problem?

• Adrian Rosebrock April 18, 2019 at 6:24 am #

Were you using the example images in this post or your own custom images?

• jaychiou April 19, 2019 at 2:45 am #

To correct my question:
I showed the edge map by using
“plt.imshow(imutils.opencv2matplotlib(edged))”

It turned out to be a few discontinuous lines rather than what you showed in the post, but it can still find the vertices and plot the LCD.

I was just curious why it happened like this.
Thanks

59. Karthick April 27, 2019 at 8:28 pm #

hi Adrian, many thanks for this blog. do we have similar implementation for Fourteen-segment display?

thanks
Karthick

• Adrian Rosebrock May 1, 2019 at 11:53 am #

Sorry, I don’t have an implementation for 14-segment display. You should be able to use this code as a starting point though.

60. artur May 14, 2019 at 10:26 am #

Can I use this digit recognition code to recognize the numbers on a plate, for an ANPR system?

Artur

61. William Waplington May 15, 2019 at 11:35 am #

Hi.

I really love this solution and it really helps me out. I just wondered if you could provide some help in adapting it to read a seven-segment multimeter.

If you could help me out I would really appreciate.

Thanks

• Adrian Rosebrock May 15, 2019 at 2:27 pm #

Hey William — adapting code to a project is a common question I get asked on the PyImageSearch blog. Please see the FAQ — doing custom adjustments is not something I can do.

62. Asad June 17, 2019 at 11:34 pm #

Hi sir, I am working on the same problem, but my image contains too much information (the picture was taken in a room using a mobile phone camera and the screen in it is small), so this program is not detecting the screen. What changes can I make? How can I send you the image?

• Asad June 17, 2019 at 11:35 pm #

I just want to detect and separate the screen from the other contents of the image; recognizing the digits is not required.

• Adrian Rosebrock June 19, 2019 at 1:57 pm #

Can you segment the screen from the image using thresholding or edge detection? What have you tried so far?

63. Pragash August 2, 2019 at 2:35 am #

Great tutorial. Can you suggest what we can do if the digits are the same color as the background? How can I differentiate and threshold them out?

• Luiz August 10, 2019 at 12:11 pm #

Show us the image you are considering analyzing.
I’m new to OpenCV, but I believe thresholding is possible if you analyze the minimum color difference between the pixels; otherwise, try applying edge-detection methods instead of thresholding and filter by contour area. For example: if the area has 400 pixels and each of your digits occupies a considerable fraction of it, say 25%, then filter for contours of roughly that size and color to see if they are being caught correctly. I believe that if you play around like this you will get the expected digits.
(Sorry if the English went wrong; I’m Brazilian and had to use Google Translate to facilitate my communication. :D)
