Local Binary Patterns with Python & OpenCV

Well. I’ll just come right out and say it. Today is my 27th birthday.

As a kid I was always super excited about my birthday. It was another year closer to being able to drive a car. Go to R rated movies. Or buy alcohol.

But now as an adult, I don’t care too much for my birthday — I suppose it’s just another reminder of the passage of time and how it can’t be stopped. And to be totally honest with you, I guess I’m a bit nervous about turning the “Big 3-0” in a few short years.

In order to rekindle some of that “little kid excitement”, I want to do something special with today’s post. Since today is both a Monday (when new PyImageSearch blog posts are published) and my birthday (two events that will not coincide again until 2020), I’ve decided to put together a really great tutorial on texture and pattern recognition in images.

In the remainder of this blog post I’ll show you how to use the Local Binary Patterns image descriptor (along with a bit of machine learning) to automatically classify and identify textures and patterns in images (such as the texture/pattern of wrapping paper, cake icing, or candles, for instance).

Read on to find out more about Local Binary Patterns and how they can be used for texture classification.

PyImageSearch Gurus

The majority of this blog post on texture and pattern recognition is based on the Local Binary Patterns lesson inside the PyImageSearch Gurus course.

While the lesson in PyImageSearch Gurus goes into a lot more detail than what this tutorial does, I still wanted to give you a taste of what PyImageSearch Gurus — my magnum opus on computer vision — has in store for you.

If you like this tutorial, there are over 29 lessons spanning 324 pages covering image descriptors (HOG, Haralick, Zernike, etc.), keypoint detectors (FAST, DoG, GFTT, etc.), and local invariant descriptors (SIFT, SURF, RootSIFT, etc.), inside the course.

At the time of this writing, the PyImageSearch Gurus course also covers an additional 166 lessons and 1,291 pages including computer vision topics such as face recognition, deep learning, automatic license plate recognition, and training your own custom object detectors, just to name a few.

If this sounds interesting to you, be sure to take a look and consider signing up for the next open enrollment!

What are Local Binary Patterns?

Local Binary Patterns, or LBPs for short, are a texture descriptor made popular by the work of Ojala et al. in their 2002 paper, Multiresolution Grayscale and Rotation Invariant Texture Classification with Local Binary Patterns (although the concept of LBPs was introduced as early as 1993).

Unlike Haralick texture features that compute a global representation of texture based on the Gray Level Co-occurrence Matrix, LBPs instead compute a local representation of texture. This local representation is constructed by comparing each pixel with its surrounding neighborhood of pixels.

The first step in constructing the LBP texture descriptor is to convert the image to grayscale. For each pixel in the grayscale image, we select a neighborhood of size r surrounding the center pixel. A LBP value is then calculated for this center pixel and stored in the output 2D array with the same width and height as the input image.

For example, let’s take a look at the original LBP descriptor which operates on a fixed 3 x 3 neighborhood of pixels just like this:

Figure 1: The first step in constructing a LBP is to take the 8 pixel neighborhood surrounding a center pixel and threshold it to construct a set of 8 binary digits.

In the above figure we take the center pixel (highlighted in red) and threshold it against its neighborhood of 8 pixels. If the intensity of the center pixel is greater-than-or-equal to its neighbor, then we set the value to 1; otherwise, we set it to 0. With 8 surrounding pixels, we have a total of 2 ^ 8 = 256 possible combinations of LBP codes.

From there, we need to calculate the LBP value for the center pixel. We can start from any neighboring pixel and work our way clockwise or counter-clockwise, but our ordering must be kept consistent for all pixels in our image and all images in our dataset. Given a 3 x 3 neighborhood, we thus have 8 neighbors that we must perform a binary test on. The results of this binary test are stored in an 8-bit array, which we then convert to decimal, like this:

Figure 2: Taking the 8-bit binary neighborhood of the center pixel and converting it into a decimal representation. (Thanks to Bikramjot of Hanzra Tech for the inspiration on this visualization!)

In this example we start at the top-right point and work our way clockwise accumulating the binary string as we go along. We can then convert this binary string to decimal, yielding a value of 23.
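To make this concrete, here is the thresholding and binary-to-decimal conversion in plain Python. The pixel values below are hypothetical (they are not the exact values from Figure 2), but they were chosen so the resulting LBP code works out to the same 23:

import numpy as np

# a hypothetical 3 x 3 grayscale neighborhood (center pixel = 5)
neighborhood = np.array([
    [2, 1, 8],
    [3, 5, 9],
    [6, 4, 7]])
center = neighborhood[1, 1]

# visit the 8 neighbors clockwise, starting from the top-right corner
clockwise = [(0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0), (0, 1)]

# threshold: 1 if the center is greater-than-or-equal to the neighbor
bits = "".join("1" if center >= neighborhood[r, c] else "0"
    for (r, c) in clockwise)

print(bits)          # 00010111
print(int(bits, 2))  # 23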

This value is stored in the output LBP 2D array, which we can then visualize below:

Figure 3: The calculated LBP value is then stored in an output array with the same width and height as the original image.

This process of thresholding, accumulating binary strings, and storing the output decimal value in the LBP array is then repeated for each pixel in the input image.

Here is an example of computing and visualizing a full LBP 2D array:

Figure 4: An example of computing the LBP representation (right) from the original input image (left).

The last step is to compute a histogram over the output LBP array. Since a 3 x 3 neighborhood has 2 ^ 8 = 256 possible patterns, our LBP 2D array thus has a minimum value of 0 and a maximum value of 255, allowing us to construct a 256-bin histogram of LBP codes as our final feature vector:

Figure 5: Finally, we can compute a histogram that tabulates the number of times each LBP pattern occurs. We can treat this histogram as our feature vector.

A primary benefit of this original LBP implementation is that we can capture extremely fine-grained details in the image. However, being able to capture details at such a small scale is also the biggest drawback to the algorithm — we cannot capture details at varying scales, only the fixed 3 x 3 scale!

To handle this, an extension to the original LBP implementation was proposed by Ojala et al. to handle variable neighborhood sizes. To account for variable neighborhood sizes, two parameters were introduced:

  1. The number of points p in a circularly symmetric neighborhood to consider (thus removing the reliance on a square neighborhood).
  2. The radius of the circle r, which allows us to account for different scales.

Below follows a visualization of these parameters:

Figure 6: Three neighborhood examples with varying p and r used to construct Local Binary Patterns.

Lastly, it’s important that we consider the concept of LBP uniformity. A LBP is considered to be uniform if it has at most two 0-1 or 1-0 transitions. For example, the patterns 00001000 (2 transitions) and 10000000 (1 transition) are both considered to be uniform patterns since they contain at most two such transitions. The pattern 01010010, on the other hand, is not considered a uniform pattern since it has six 0-1 or 1-0 transitions.
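We can check this definition with a couple of lines of Python, counting transitions along the bit string just as in the examples above:

def count_transitions(pattern):
    # number of 0-1 and 1-0 transitions in the bit string
    return sum(a != b for (a, b) in zip(pattern, pattern[1:]))

for pattern in ("00001000", "10000000", "01010010"):
    transitions = count_transitions(pattern)
    print(pattern, transitions,
        "uniform" if transitions <= 2 else "not uniform")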

The number of uniform prototypes in a Local Binary Pattern is completely dependent on the number of points p. As the value of p increases, so will the dimensionality of your resulting histogram. Please refer to the original Ojala et al. paper for the full explanation on deriving the number of patterns and uniform patterns based on this value. However, for the time being simply keep in mind that given the number of points p in the LBP there are p + 1 uniform patterns. The final dimensionality of the histogram is thus p + 2, where the added entry tabulates all patterns that are not uniform. For p = 24, for example, that means 25 uniform prototypes plus the one catch-all bin, yielding a 26-dimensional histogram.

So why are uniform LBP patterns so interesting? Simply put: they add an extra level of rotation and grayscale invariance, hence they are commonly used when extracting LBP feature vectors from images.

Local Binary Patterns with Python and OpenCV

Local Binary Pattern implementations can be found in both the scikit-image and mahotas packages. OpenCV also implements LBPs, but strictly in the context of face recognition — the underlying LBP extractor is not exposed for raw LBP histogram computation.

In general, I recommend using the scikit-image implementation of LBPs as they offer more control of the types of LBP histograms you want to generate. Furthermore, the scikit-image implementation also includes variants of LBPs that improve rotation and grayscale invariance.

Before we get started extracting Local Binary Patterns from images and using them for classification, we first need to create a dataset of textures. To form this dataset, earlier today I took a walk through my apartment and collected 20 photos of various textures and patterns, including an area rug:

Figure 7: Example images of the area rug texture and pattern.

Notice how the area rug images have a geometric design to them.

I also gathered a few examples of carpet:

Figure 8: Four examples of the carpet texture.

Notice how the carpet has a distinct pattern with a coarse texture.

I then snapped a few photos of the keyboard sitting on my desk:

Figure 9: Example images of my keyboard.

Notice how the keyboard has little texture — but it does demonstrate a repeatable pattern of white keys and silver metal spacing in between them.

Finally, I gathered a few final examples of wrapping paper (since it is my birthday after all):

Figure 10: Our final texture we are going to classify — wrapping paper.

The wrapping paper has a very smooth texture to it, but also demonstrates a unique pattern.

Given this dataset of area rug, carpet, keyboard, and wrapping paper, our goal is to extract Local Binary Patterns from these images and apply machine learning to automatically recognize and categorize these texture images.

Let’s go ahead and get this demonstration started by defining the directory structure for our project:
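One way to lay the project out looks like this (the texture directory names below are just a reasonable choice; as we’ll see later, the label for each image is read straight from its path, so the exact names are up to you):

local-binary-patterns/
|--- pyimagesearch/
|    |--- __init__.py
|    |--- localbinarypatterns.py
|--- recognize.py
|--- images/
     |--- training/
     |    |--- area_rug/        (4 images)
     |    |--- carpet/          (4 images)
     |    |--- keyboard/        (4 images)
     |    |--- wrapping_paper/  (4 images)
     |--- testing/              (1 image per texture)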

We’ll be creating a pyimagesearch module to keep our code organized. And within the pyimagesearch module we’ll create localbinarypatterns.py, which, as the name suggests, is where our Local Binary Patterns implementation will be stored.

Speaking of Local Binary Patterns, let’s go ahead and create the descriptor class now:
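The listing below is a sketch of localbinarypatterns.py that matches the line references in the paragraphs that follow. The eps smoothing term in the normalization step is a small implementation choice to guard against dividing by zero, not part of the LBP method itself:

# import the necessary packages
from skimage import feature
import numpy as np

class LocalBinaryPatterns:
    def __init__(self, numPoints, radius):
        # store the number of points and radius
        self.numPoints = numPoints
        self.radius = radius

    def describe(self, image, eps=1e-7):
        # compute the Local Binary Pattern representation
        # of the image, and then use the LBP representation
        # to build the histogram of patterns
        lbp = feature.local_binary_pattern(image, self.numPoints,
            self.radius, method="uniform")
        (hist, _) = np.histogram(lbp.ravel(),
            bins=np.arange(0, self.numPoints + 3),
            range=(0, self.numPoints + 2))

        # normalize the histogram so that it sums to one
        hist = hist.astype("float")
        hist /= (hist.sum() + eps)

        # return the histogram of Local Binary Patterns
        return hist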

We start off by importing the feature sub-module of scikit-image, which contains the implementation of the Local Binary Patterns descriptor.

Line 5 defines our constructor for our LocalBinaryPatterns class. As mentioned in the section above, we know that LBPs require two parameters: the radius of the pattern surrounding the central pixel, along with the number of points along the outer radius. We’ll store both of these values on Lines 8 and 9.

From there, we define our describe method on Line 11, which accepts a single required argument — the image we want to extract LBPs from.

The actual LBP computation is handled on Lines 15 and 16 using our supplied radius and number of points. The uniform method indicates that we are computing the rotation and grayscale invariant form of LBPs.

However, the lbp variable returned by the local_binary_pattern function is not directly usable as a feature vector. Instead, lbp is a 2D array with the same width and height as our input image; each of the values inside lbp falls in the range [0, numPoints + 1], with one value for each of the numPoints + 1 possible rotation invariant prototypes (see the discussion of uniform patterns at the top of this post for more information), along with an extra value for all patterns that are not uniform, yielding a total of numPoints + 2 unique possible values.

Thus, to construct the actual feature vector, we need to make a call to np.histogram, which counts the number of times each of the LBP prototypes appears. The returned histogram is (numPoints + 2)-dimensional, an integer count for each of the prototypes. We then take this histogram and normalize it such that it sums to 1, and then return it to the calling function.

Now that our LocalBinaryPatterns descriptor is defined, let’s see how we can use it to recognize textures and patterns. Create a new file named recognize.py, and let’s get coding:
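Here is a sketch of the first part of recognize.py, again lined up with the line references below. The paths.list_images helper from the imutils package is one convenient way to enumerate the image paths; any equivalent would do:

# import the necessary packages
from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
from sklearn.svm import LinearSVC
from imutils import paths
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--training", required=True,
    help="path to the training images")
ap.add_argument("-e", "--testing", required=True,
    help="path to the testing images")
args = vars(ap.parse_args())

# initialize the local binary patterns descriptor along with
# the data and label lists
desc = LocalBinaryPatterns(24, 8)
data = []
labels = []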

We start off on Lines 2-6 by importing our necessary packages. Notice how we are importing the LocalBinaryPatterns descriptor from the pyimagesearch sub-module that we defined above.

From there, Lines 9-14 handle parsing our command line arguments. We’ll only need two switches here: the path to the --training data and the path to the --testing data.

In this example, we have partitioned our textures into two sets: a training set of 4 images per texture (4 textures x 4 images per texture = 16 total images), and a testing set of one image per texture (4 textures x 1 image per texture = 4 images). The training set of 16 images will be used to “teach” our classifier — and then we’ll evaluate performance on our testing set of 4 images.

On Line 18 we initialize our LocalBinaryPatterns descriptor using numPoints=24 and radius=8.

In order to store the LBP feature vectors and the label names associated with each of the texture classes, we’ll initialize two lists: data to store the feature vectors and labels to store the names of each texture (Lines 19 and 20).

Now it’s time to extract LBP features from our set of training images:
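A sketch of that training loop, continuing the listing above. The C=100.0 and random_state=42 arguments to LinearSVC are reasonable defaults rather than anything the method requires:

# loop over the training images
for imagePath in paths.list_images(args["training"]):
    # load the image, convert it to grayscale, and describe it
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)

    # extract the label from the image path, then update the
    # label and data lists
    labels.append(imagePath.split("/")[-2])
    data.append(hist)

# train a Linear SVM on the data
model = LinearSVC(C=100.0, random_state=42)
model.fit(data, labels)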

We start looping over our training images on Line 23. For each of these images, we load them from disk, convert them to grayscale, and extract Local Binary Pattern features. The label (i.e., texture name) is then extracted from the image path and both our labels and data lists are updated, respectively.

Once we have our features and labels extracted, we can train our Linear Support Vector Machine on Lines 35 and 36 to learn the difference between the various texture classes.

Once our Linear SVM is trained, we can use it to classify subsequent texture images:
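Continuing the sketch, here is the testing loop. The putText position, font, and color are purely cosmetic choices:

# loop over the testing images
for imagePath in paths.list_images(args["testing"]):
    # load the image, convert it to grayscale, describe it,
    # and then classify it
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)
    prediction = model.predict(hist.reshape(1, -1))[0]

    # display the image and the prediction
    cv2.putText(image, prediction, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
        1.0, (0, 0, 255), 3)
    cv2.imshow("Image", image)
    cv2.waitKey(0)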

Just as we looped over the training images on Line 22 to gather data to train our classifier, we now loop over the testing images on Line 39 to test the performance and accuracy of our classifier.

Again, all we need to do is load our image from disk, convert it to grayscale, extract Local Binary Patterns from the grayscale image, and then pass the features onto our Linear SVM for classification (Lines 42-45).

I’d like to draw your attention to hist.reshape(1, -1) on Line 45. This reshapes our histogram from a 1D array to a 2D array, since scikit-learn expects a 2D array of samples when making predictions, even if we are classifying only a single feature vector.

Lines 48-51 show the output classification to our screen.

Results

Let’s go ahead and give our texture classification system a try by executing the following command:
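$ python recognize.py --training images/training --testing images/testing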

And here’s the first output image from our classification:

Figure 11: Our Linear SVM + Local Binary Pattern combination is able to correctly classify the area rug pattern.

Sure enough, the image is correctly classified as “area rug”.

Let’s try another one:

Figure 12: We are also able to recognize the carpet pattern.

Once again, our classifier correctly identifies the texture/pattern of the image.

Here’s an example of the keyboard pattern being correctly labeled:

Figure 13: Classifying the keyboard pattern is also easy for our method.

Finally, we are able to recognize the texture and pattern of the wrapping paper as well:

Figure 14: Using Local Binary Patterns to classify the texture of an image.

While this example was quite small and simple, it was still able to demonstrate that by using Local Binary Pattern features and a bit of machine learning, we are able to correctly classify the texture and pattern of an image.

Summary

In this blog post we learned how to extract Local Binary Patterns from images and use them (along with a bit of machine learning) to perform texture and pattern recognition.

If you enjoyed this blog post, be sure to take a look at the PyImageSearch Gurus course where the majority of this lesson was derived from.

Inside the course you’ll find 166+ lessons covering 1,291 pages of computer vision topics such as:

  • Face recognition.
  • Deep learning.
  • Automatic license plate recognition.
  • Training your own custom object detectors.
  • Building image search engines.
  • …and much more!

If this sounds interesting to you, be sure to take a look and consider signing up for the next open enrollment!

See you next week!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!

211 Responses to Local Binary Patterns with Python & OpenCV

  1. Neeraj December 7, 2015 at 11:18 am #

    Happy Birthday Adrian!, God bless you with happiness,peace in your life. Thanks always for sharing knowledge with us and teaching us so many new things. Always love your all posts.

    • Adrian Rosebrock December 8, 2015 at 6:33 am #

      Thanks so much Neeraj!

  2. Charles December 7, 2015 at 2:03 pm #

    Hi Adrian.

    Thanks for this material. Great text, like always. And btw, happy birthday to you! 🙂

    • Adrian Rosebrock December 8, 2015 at 6:29 am #

      Thanks Charles! 🙂

  3. michael December 8, 2015 at 10:19 am #

    happy birthday adrian i hope this can classify all your presents !!!

    • Adrian Rosebrock December 8, 2015 at 2:38 pm #

      Thanks Michael!

  4. Sveder December 10, 2015 at 1:42 pm #

    Great article, thank you!
    Any chance to get the images you used to train this and the test images?
    Thanks!

    Nevermind, found it, kinda looks like every other generic “sign up here” so ignored it.

    • Adrian Rosebrock December 10, 2015 at 2:22 pm #

      Thanks for the feedback Sveder, I’ll see if I can make the form stand out more in the future.

      • Sveder December 10, 2015 at 2:58 pm #

        No problem, I got it to work, great job. I was especially curious how well it would do with different types of keyboards (or carpets etc) and it worked amazingly.

One thing though – you should use the os.path module instead of splitting by “/” (specifically this line: labels.append(imagePath.split("/")[-2])), as it doesn’t work like you expect on Windows, which uses \ as the path delimiter.

        (for completeness, I changed the above line to be:
        labels.append(os.path.split(os.path.dirname(imagePath))[-1])
        )

        Once again – thank you for the awesome post!

        • Adrian Rosebrock December 11, 2015 at 6:34 am #

          Thanks for the tip Sveder. I honestly haven’t used a Windows system in over 9 years so I usually miss those Windows nuances.

  5. Abu December 27, 2015 at 1:31 am #

    Thanks for the awesome tutorial! Love reading your blog.

    • Adrian Rosebrock December 27, 2015 at 7:54 am #

      Thanks Abu! 🙂

  6. suyuancheng January 12, 2016 at 8:06 am #

hi, Adrian. i have some problems. as i know, an SVM can just classify two kinds of data, so i think a neural network is better to prove the LBP

    • Adrian Rosebrock January 12, 2016 at 11:59 am #

      The original implementation of SVMs were only intended for binary classification (i.e., two classes); however, modern implementations of SVMs (such as the one used in this course) can handle multi-class data without a problem.

      • suyuancheng January 12, 2016 at 7:58 pm #

ooh, i learned something new, thanks

      • abu January 25, 2016 at 12:43 pm #

        i have problem to create the pyimagesearch module.. can u show steps on how u create the module??

        • Adrian Rosebrock January 25, 2016 at 4:05 pm #

          I would suggest downloading the source code under the “Downloads” section of this post. The source code download will properly explain how to create the PyImageSearch module. All you need to do is create a pyimagesearch directory and then place a __init__.py file inside of it.

          • abu February 2, 2016 at 12:02 am #

            is it possible to separate the coding for the training and testing?? for example i run the training first and after that i run the testing

          • Adrian Rosebrock February 2, 2016 at 10:28 am #

            Sure, that’s not an issue at all. Once the model has been trained, dump it to file using pickle or cPickle:

            You can then load it from disk in a separate Python script:

  7. Pradeep February 12, 2016 at 11:28 am #

    Adrian, can you please also share how to use LBP to train an object detector? I googled but don’t see any simple, concrete example

    • Adrian Rosebrock February 12, 2016 at 3:17 pm #

You normally wouldn’t use LBPs strictly for object detection. Normally HOG + Linear SVM is better suited for object detection. Is there a reason why you would like to use LBPs for object detection?

  8. Zheng Rui February 29, 2016 at 2:03 am #

    Thanks Adrian, very nice tutorial as usual, one thing i found is the histogram returned from ‘class LocalBinaryPatterns’, if set ‘bins=np.arange(0, self.numPoints + 2) in np.histogram()’, the number of bins returned will be only ‘self.numPoints+1’ rather than ‘self.numPoints+2’, as ‘np.arange(0, self.numPoints+2)’ will generate ‘[0, 1, …, self.numPoints+1]’, which generates bins ‘[0,1), [1,2), …, [self.numPoints, self.numPoints+1]’ for ‘np.histogram()’.

    Either just use ‘bins=self.numPoints + 2’ or use ‘bins=np.arange(0, self.numPoints+3)’ will return self.numPoints+2 bins

    • Adrian Rosebrock March 13, 2016 at 10:38 am #

Thanks for pointing this out! The code was correct in the “Downloads”, but not in the actual blog post itself. The actual code should read:

(hist, _) = np.histogram(lbp.ravel(), bins=np.arange(0, self.numPoints + 3), range=(0, self.numPoints + 2))

  9. Wanderson April 12, 2016 at 11:31 am #

    Wow, you’re the best!
    Adrian, what would be the best solution to work with the counting crowd in small place. For example, count for passengers crossing the train door (camera on the ceiling and out of the train). The input data would be the overhead and / or shoulders.

    • Adrian Rosebrock April 13, 2016 at 6:59 pm #

      For simple counting with a fixed camera, basic motion detection and background subtraction will work well. I would recommend starting with this post.

  10. Wanderson April 12, 2016 at 11:40 pm #

    Hi Adrian,
    It all worked!
    Nice work!
    But instead of using static images, how do you do to train, test and classify with video cameras?

    • Adrian Rosebrock April 13, 2016 at 6:56 pm #

      You’ll want to train your classifier using static images/frames. But classifying with a video can easily be accomplished by modifying the code to access the video stream. I recommend using this blog post as a starting point.

  11. danieal April 19, 2016 at 12:09 pm #

    hi Adrian really great work, but the question is if I want to compute lbp for the first pixel
    in your first example 5 how to do this ? do u mind explaining it plz ?

    • Adrian Rosebrock April 19, 2016 at 3:03 pm #

Are you referring to pixels being on the “border” of the image and therefore not having a true neighborhood? If so, just pad the image with zeros such that you have pixels to fit the neighborhood. Other types of padding include replication, where you “replicate” the pixels along the border to create the neighborhood. You can also “wrap around” and use the pixel values from the opposite side of the image. But in general, zero padding is normally used.

  12. danieal April 19, 2016 at 4:22 pm #

    thank u very much 🙂

  13. mahmod May 5, 2016 at 4:10 am #

    ImportError: No module named scipy.ndimage

    Has anyone faced this issue (OS X 10.11) ??

    • Adrian Rosebrock May 5, 2016 at 6:42 am #

      You need to install scipy and likely scikit-image:

      $ pip install scipy
      $ pip install -U scikit-image
      • mahmod May 7, 2016 at 4:59 am #

        Thanks man, you rock!
        I needed to install the following:
        scipy matplotlib scikit-learn imutils

  14. mahmod May 7, 2016 at 5:55 am #

    Hi again,
    The code is working as expected; however, the following warning is thrown:

    python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

    so in the future, it will start throwing exceptions!? Any idea how to avoid that?

    • Adrian Rosebrock May 7, 2016 at 12:35 pm #

      There are two ways to avoid this issue. The first is to re-shape the feature vector:

      prediction = model.predict(hist.reshape(1, -1))[0]

      The second is to simply wrap hist as a list:

      prediction = model.predict([hist])[0]

      Both options will give the desired result.

  15. Aquib May 10, 2016 at 4:24 am #

    Hey, Adrian how can I use LBP for face recognition?

  16. Paul G. May 30, 2016 at 6:00 pm #

    Adrian, how can I output the picture representation of the LBP Histogram?

  17. Srinidhi Bhat June 22, 2016 at 7:18 am #

    Hi Adrian great set of tutorials keep continuing please
    just one doubt , i have applied LBP on a set of faces and have extracted a histogram but how do you get the face, as you have shown before the tutorial above. Kindly guide

    • Adrian Rosebrock June 23, 2016 at 1:20 pm #

      If you’re using simple LBPs with 2^8 = 256 possible combinations of LBP codes, then you just take the output of the LBP operation, change it to an unsigned 8-bit integer data type, and display it. See the PyImageSearch Gurus course for more details.

  18. Nisha July 25, 2016 at 2:03 am #

    Hello Adrian, is it possible to include camshift and LBP to track the object efficiently for a live video in python.

    • Adrian Rosebrock July 27, 2016 at 2:34 pm #

      CamShift is typically used for color histograms. For objects that cannot be tracked based on color, I would instead use something like correlation tracking.

  19. sonic August 10, 2016 at 11:03 am #

    Thanks for this tutorial. Actually thanks for the whole website 🙂

    I have one additional question. I don’t want to classify pictures, but extract small area with texture, calculate lbp histogram and then try to match histograms and find similar textures in the entire image. Something similar to Opencv Back Projection for color histograms. And actually I am trying to play with calcBackProject() function, but I have trouble with data types and can’t make it work.

    Other solution on my mind is to calculate the lbp histogram on the template image, and then manually iterate through picture (like we do for convolution), calculate lbp histogram for every region, compare that with template histogram using compareHist() and Chi-Square, and declare similarity. But that would be pretty coarse. Any other option?

    • Adrian Rosebrock August 11, 2016 at 10:43 am #

      This certainly sounds like a texture matching problem, which I admittedly don’t have much experience in. Using a combination of image pyramids and sliding windows you can get a scale- and location-independent method for matching textures, but this by definition is very computationally expensive. I would suggest starting with this method and seeing how far it gets you. I would also suggest treating the problem as a texture connected-component labeling problem.

      • sonic August 12, 2016 at 7:51 am #

        Thanks for the answer.

        One more complication is the fact that I want to do this for a live video feed, on an ARM processor 🙂
        I implemented naive method: getting lbp hist for template region, and then manually iterating through patches of the image, calculating lbp hist for them, comparing histograms, and then setting whole region to 0 or 255 depending on the Chi Square distance. Result is not great: 1) manual iteration through image is slow, just few fps (but there has to be some way to vectorize that operation); 2) result is coarse (I am using 10×10 blocks on a 240×320 image) and kinda looks like an edge detector

        Oh well. I’ll try to play with it a bit more before discarding the idea.

        • Adrian Rosebrock August 12, 2016 at 10:45 am #

          Correct, this method will be painfully slow due to having to loop over the entire image. In some cases, you might be able to speed this up by implementing the function in C/C++ and then calling the method from Python.

    • David May 8, 2018 at 5:48 pm #

      Hi, this is exactly what I want to do as well. Did you get far with this approach? Any tips for texture matching and/or texture segmentation in OpenCV?

  20. Sarang August 16, 2016 at 1:37 am #

    Hi Adrian,

    Can we print the prediction/accuracy % of the training samples.
    If yes, how?

    • Adrian Rosebrock August 16, 2016 at 12:58 pm #

      You can apply the .score function of the model. For example:

      print(model.score(trainData))

      • ezza June 15, 2018 at 3:20 pm #

        Hi,

        what is this input “trainData” ?? As this is not in the code.

        • Adrian Rosebrock June 19, 2018 at 9:03 am #

          I meant to say print(model.score(data)). I use “trainData” as a variable name in other blog posts.

  21. Abdul Baset October 16, 2016 at 9:26 am #

    Hi,Adrian
    Thanks for the lovely post. I downloaded your code and ran it . i can only see the area_rug classified in the output. The rest images are not showing . I also get a warning after i run the scripts which is same as someone else pointed out that is :
    “DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.”

    Is this the reason for getting only one kind of class classified?

    • Adrian Rosebrock October 17, 2016 at 4:09 pm #

      That is very strange regarding only the “area rug” class being utilized — that should not happen. I haven’t heard of that happening before either. As for the DeprecationWarning that can be resolved by wrapping the LBP hist as a list before prediction:

      prediction = model.predict([hist])[0]

      Or by reshaping the array via NumPy:

      prediction = model.predict(hist.reshape(1, -1))[0]

      • Abdul Baset October 18, 2016 at 8:34 am #

        ok im sorry …it was my bad …its working fine now !
        One more question , so im working on this college project and i need to extract only eyes and lips using Local binary patterns. Can you give me a lead as to how i can do that and in what format are they stored after extraction ?

        • Adrian Rosebrock October 20, 2016 at 8:55 am #

          LBP features are simply NumPy arrays. You can write them to file using cPickle, HDF5, or simple CSV. For what it’s worth, I demonstrate how to train custom object detectors inside the PyImageSearch Gurus course.

  22. Romanzo January 13, 2017 at 12:11 am #

    Hey Adrian,
    Tanks for the great post.
    Regarding the numpy histogram, i am not sure about your code. Shouldn’t the number of bins be equal to p + 2 (not p + 3). There are p + 1 bins for uniform patterns and 1 bin for non uniform patterns (total p + 2 bins). And why is range equals to [0, p + 2] and not the number of pixels in the image?

    Also do you get number of uniform patterns equals p + 1 because it’s rotation invariant? Otherwise it will be p * (p – 1) + 2 (equals 58 for p=8)

    Thanks.

    • Adrian Rosebrock January 13, 2017 at 8:35 am #

      The code can be a bit confusing due to subtleties in the range and np.histogram function.

      To start, the range function is not inclusive on the upper bound, therefore we have to use p + 3 instead of p + 2 to get the proper number of bins. Open up a Python shell and play with the range function to confirm this for yourself.

      The range parameter to np.histogram is p + 2 because there are p + 1 uniform patterns. We then need to add an extra bin to the histogram for all non-uniform patterns.

      For more information on LBPs, please see the PyImageSearch Gurus course.

  23. Suresh January 17, 2017 at 1:22 am #

    I have Problem with error please solve this

    from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
    ImportError: No module named pyimagesearch.localbinarypatterns

    • Adrian Rosebrock January 17, 2017 at 8:43 am #

      Hey Suresh — make sure you download the source code to this blog post using the “Downloads” section in this post. The .zip archive of the code includes the exact directory and project structure to ensure the code works out-of-the-box. My guess is that your project structure is incorrect/does not include a __init__.py file in the pyimagesearch directory.

  24. Suresh January 17, 2017 at 9:28 am #

    Thank you Adrian, It is working fine.
    that is my mistake only.

    I am using python recognize.py this only

    actually we use this only at command prompt (# or $) python recognize.py --training images/training --testing images/testing

  25. Suresh January 17, 2017 at 6:23 pm #

    Which formula your using for SVM ? (Classification)

    • Adrian Rosebrock January 18, 2017 at 7:12 am #

      I’m not sure what you mean by “formula”, but this is just an SVM with a linear kernel.

  26. Sarvesh January 18, 2017 at 5:05 pm #

    Hey, Thanks for the excellent tutorial!!! I downloaded the source code and ran it, However, it gives me following error :

    ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: ‘images’

    Any idea how should I go about solving this???
    Any help is appreciated 🙂

    Thank you 🙂

    • Adrian Rosebrock January 20, 2017 at 11:08 am #

      Hey Sarvesh — it sounds like the paths to your input images may be incorrect. What is the command that you are running to execute the script?

      • Aurora Guerra March 29, 2017 at 5:41 pm #

        Hello Adrian
        Your tutorials are very good, I’m learning a lot
        I have the same problem
        how do I solve it?

        command: python recognize.py --training images/training --testing images/testing
        OS: Windows
        Thank you

  27. Romanzo January 29, 2017 at 11:19 pm #

    Hi Adrian,
    Just a note, If you are using local_binary_pattern from skimage, the value assigned to the centre pixel is the opposite of what you are describing in the blog.
    In skimage it is: “If the intensity of the center pixel is greater-than-or-equal to its neighbor, then we set the value to 0; otherwise, we set it to 1”. You might want to keep everything uniform.

    • Adrian Rosebrock January 30, 2017 at 4:22 pm #

      Hey Romanzo — thanks for pointing this out.

  28. Aritro Saha February 8, 2017 at 8:40 pm #

    Since I couldn’t find the comment I posted, this is the error I got:

    DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
    DeprecationWarning)

    model.fit(data, labels)
    ValueError: Found array with 0 feature(s) (shape=(1, 0)) while a minimum of 1 is required.

    • Adrian Rosebrock February 10, 2017 at 2:20 pm #

      First, it’s important to note that this isn’t an error (yet) — it’s a warning. Secondly, to resolve the issue, just follow the instructions in the warning:

      prediction = model.predict(hist.reshape(1, -1))[0]

      You can also just wrap the hist as a list:

      prediction = model.predict([hist])[0]

  29. Maheswari February 12, 2017 at 1:39 am #

    What is the computation time for this program.How to calculate the time .

    • Adrian Rosebrock February 13, 2017 at 1:45 pm #

      A quick, easy method to determine approximate computation time is to simply use the time command:

      $ time python recognize.py --training images/training --testing images/testing

  30. Milap Jhumkhawala March 15, 2017 at 8:07 am #

    Hey Adrian, first of all great post, really useful.
    I tried implementing the code and I am stuck with this error: “ValueError: Found array with 0 feature(s) (shape=(1,0)) while a minimum of 1 is required”. How do I get rid of this error?

    • Milap Jhumkhawala March 15, 2017 at 8:25 am #

      Never mind, solved it. Image path was incorrect.

      • Adrian Rosebrock March 15, 2017 at 8:45 am #

        Congrats on resolving the issue Milap!

  31. sarra March 22, 2017 at 1:39 pm #

    Hay,I’am working on my code and I use the function “lbp = feature.local_binary_pattern(b, npoints,radius, method=”Uniform”)” to display just image lbp, so how should I put the numPoints and radius settings

  32. Ameer March 28, 2017 at 7:05 am #

    Hey Adrian
    i downloaded your code and when i tried to run it i had some import error i googled them and downloaded them but i still have issues

    (cv) ameerherbawi@clienta:~/Desktop/local-binary-patterns$ python3 recognize.py --training images/training --testing images/testing

    ImportError: No module named ‘scipy’

    am sure i downloaded scipy and installed it since i get this when i tried again

    Requirement already satisfied: scipy in /usr/local/lib/python2.7/dist-packages
    Requirement already satisfied: numpy>=1.8.2 in /usr/local/lib/python2.7/dist-packages (from scipy)

    just to help you a bit i followed your tut. Ubuntu 16.04: How to install OpenCV and then i downloaded the code and it didn’t run i installed opencv 3.1 as you guided in addition to python 3
    thanks for your time

    • Adrian Rosebrock March 28, 2017 at 12:50 pm #

      Did you install SciPy into your “cv” virtual environment? You can use pip freeze to check:

      • Ameer March 29, 2017 at 3:45 am #

        don’t worry i found that my bad i didn’t read the rest of the comments, one more thing, i tried to add a new testing image (baby face) and it recognized it as keyboard !

        where should i go to do face recognition ?

  33. Ameer March 30, 2017 at 6:03 am #

    Hey Adrian

    I followed your code and upgraded it a bit, but i have noticed that if i add another image the code will miss identify it, i searched and saw that cheeking confidence value will help to print unknown on the low confidence values images, and also found that OpenCV3 isn’t supporting confidence any more, what can i do to be able to find the confidence to build a further work on its value ?

    i used this and it didn’t work, whenever i remove conf it works so what can i do then ???
    Thanks again

    id,conf = recognizer.predict(objectNp[y: y + h, x: x + w])

    • Adrian Rosebrock March 31, 2017 at 1:52 pm #

      I’m not sure what you mean by OpenCV 3 not supporting confidence anymore. This blog post uses scikit-learn for the machine learning functionality.

  34. JKCS April 11, 2017 at 10:06 pm #

    Hi Adrian,
    Thanks for the excellent tutorial!!. I downloaded the source code and ran it, However, it gives me following error :

    usage: recognize.py [-h] -t TRAINING -e TESTING
    recognize.py: error: the following arguments are required: -t/–training, -e/–testing

    Any idea how should I go about solving this issue?
    Your help is appreciated.

    Thank you.

    • Adrian Rosebrock April 12, 2017 at 1:03 pm #

      I suggest you read up on command line arguments before continuing.

    • Rafflesia July 4, 2017 at 8:28 am #

      Hi JKCS ,
      I am having the same problem while trying to run this code,
      Did you solved this problem ?
      If yes, could you please tell me how you solved it ,
      Your help is appreciated.

      Thank you.

      • Vhulenda November 22, 2017 at 3:13 am #

        Hi Rafflesia.

        Use this to run the code from the terminal:

        $ python recognize.py --training images/training --testing images/testing

    • subhiran July 7, 2018 at 6:55 am #

      i have also got same error . how to fix it. pls help me.. your help will be appreciated.

  35. Gaby April 25, 2017 at 5:02 am #

    Hello Adrian,
    I’m working on a Rasberry Pi using Python. I want to use LBP for face recognition, I read your earlier comment that it was in your Guru book, how would I go about accessing that specific module?

    Also, I just tried running the first code on this post, but I get the error that the module skimage doesnt exits. I have already install scikit-image and matplots succesfully. Can you think of any other reason why that would be the case?

    Thanks in advance. I hope you can get back to me soon!

    • Adrian Rosebrock April 25, 2017 at 11:48 am #

      The LBP for face recognition is part of the Face Recognition Module inside PyImageSearch Gurus. Computer vision topics tend to overlap and intertwine (you would need to understand feature extractors and a bit of machine learning first before applying face recognition) so I would suggest working through the entire course.

      As for the scikit-image not existing, did you use a Python virtual environment when installing them? Perhaps you installed them outside of the Python virtual environment where you normally access OpenCV.

  36. syamsul May 11, 2017 at 5:36 am #

    please lbp for delphi 7..I find it difficult

    • Adrian Rosebrock May 11, 2017 at 8:42 am #

      Hi Syamsul — this blog primarily covers OpenCV and Python, not the Delphi programming language. I am unaware of an LBP implementation for the Delphi programming language.

  37. Shravani May 31, 2017 at 7:09 am #

    Hi Adrian,

    I am getting an error No module named sklearn.svm while executing this code. Can you plz tell me how to solve this.

    • Adrian Rosebrock May 31, 2017 at 1:00 pm #

      Make sure you install scikit-learn:

      $ pip install scikit-learn

  38. sapikum June 12, 2017 at 5:19 pm #

    Hi Adrian

    Can we use a 10 fold cross validation on your images folder?

    if yes , how?

    • Adrian Rosebrock June 13, 2017 at 10:56 am #

      The dataset used in this blog post really isn’t large enough to apply 10 fold cross validation. I would suggest using a larger dataset, extracting features from each image, and then use scikit-learn’s cross-validation methods to help you accomplish this.

  39. Rafflesia July 4, 2017 at 8:32 am #

    Hi Adrian,
    You are great,
    I am learning python recently and i follow some of your tutorials,
    All of them are really great and helpful ,
    Thank you .

    • Adrian Rosebrock July 5, 2017 at 5:57 am #

      Thank you for the kind words, Rafflesia.

  40. Ethan July 5, 2017 at 6:45 pm #

    You commented in the post that LBP implementations can be found in scikit-image and mahotas packages (or in OpenCV more specifically in the context of facial recognition). Is there any other package that contains LBP implementations or just those same ones?

    • Adrian Rosebrock July 7, 2017 at 10:02 am #

      Those are the primary ones, at least in terms of Python bindings. I highly recommend you use the scikit-image implementation.

  41. Ethan July 11, 2017 at 8:53 pm #

    Instead of using SVM, is it possible to use CNN? Or is it unnecessary to use LBP to extract features since CNN does basically the same thing? Correct me if I’m wrong.

    • Adrian Rosebrock July 12, 2017 at 2:45 pm #

      A CNN will learn various filters to discriminate amongst object classes. These filters could learn color blobs, edges, contours, and eventually higher-level, more abstract features. CNNs and LBPs are not the same. If you’re interested in learning more about feature extraction and CNNs, take a look at the PyImageSearch Gurus course and Deep Learning for Computer Vision with Python.

      • Ethan July 13, 2017 at 11:18 am #

        So is it possible to use it together with LBP? right?

        • Adrian Rosebrock July 14, 2017 at 7:25 am #

          No, a CNN will learn its own filters. An LBP is a feature extraction algorithm. You would then feed these features into a standard machine learning classifier like an SVM, Random Forest, etc. A CNN is an end-to-end classifier. An image comes in as input and classifications at the output. You wouldn’t use LBPs as an input to a CNN.

  42. Robert August 24, 2017 at 12:19 pm #

    Hi Adrian,

    I have just started programming in Python and I found your website is a great source to learn from. Your tutorials also helped me a lot. Thank you so much.

    Now, I am trying to do exactly what you have done above, however, instead of using the LBP features, I want to use the BRIEF (Binary Robust Independent Elementary Features) as the texture features. Therefore, I would appreciate if you could provide me with any tips on how to do this. Thanks in advance.

    Cheers,
    Robert

    • Adrian Rosebrock August 24, 2017 at 3:27 pm #

      Hi Robert — it’s great to hear that you’ve enjoyed PyImageSearch! In order to use BRIEF you actually need to build what’s called a “bag of visual words” model (BOVW). From there you can apply machine learning, image search engines, etc. I provide a detailed guide on the BOVW and the applications to image classifiers and scalable image search engines inside the PyImageSearch Gurus course. I would definitely suggest that you take a look.

  43. Robert September 7, 2017 at 6:40 am #

    Hi Adrian,

    Thanks for your reply. That’s helped me a lot, but I am a bit confused since I am new to Python.

    How about if I use only BRIEF descriptor for image classification without using the keypoint detector such as (StarDetector)? Would this be possible? So, it will be exactly as same as what you have done above.

    If I used the BOVW model, I would have to use the keypoint detector to compute the BRIEF features and then perform k-means clustering and calculate the histogram of features.

    To be clear, I would like to compute the BRIEF features of the images, and then use the BRIEF features to build a histogram of features without using the keypoint detector for conducting image classification.

    Cheers,
    Robert

    • Adrian Rosebrock September 7, 2017 at 6:53 am #

      You need both a keypoint detector and local invariant descriptor. You could skip the actual keypoint detection and use a Dense detector instead, but again, you need to mark certain pixels in an image as “keypoints” (whether via an explicit algorithm or the Dense detector) before you extract the features and apply k-means to build your BOVW.

      Again, I would highly recommend that you work through the PyImageSearch Gurus course as I explain keypoint detectors, feature extractors, and the BOVW model in tons of detail with lots of code.

      • Robert September 8, 2017 at 2:51 pm #

        Hi Adrian,

        Sorry for asking you so many questions.

        I hope you got what I was trying to explain and I am so sorry again for not being clear in the first question.

        Cheers,
        Robert

      • Robert September 10, 2017 at 11:42 am #

        Hi Adrian,

        Thanks for your reply and of course I will have a look at PyImageSearch Gurus course. I am very excited to join this course.

        This is the last question 🙂

        Basically, I need to extract the BRIEF features from a region surrounding each pixel in an image, not certain pixels. So, I need to compute the BRIEF over all the pixels in the image and then build a histogram of BRIEF features and perform image classification based on the histogram. Thus, I don’t want to use keypoints detector since the BRIEF features will be extracted from each pixel in the image.

        Would this be possible? If yes, could you provide me with any tips on how to do that, please?

        I am so sorry again for any inconvenience.

        Cheers,
        Robert

        • Adrian Rosebrock September 11, 2017 at 9:08 am #

          By the very definition of BRIEF and how all local invariant descriptors work you need to examine the pixels surrounding a given center pixel to build the feature vector. If you want to extract BRIEF features from every single pixel in the image simply create a cv2.KeyPoint object for every (x, y)-coordinate and then pass the keypoints list into the extractor.

          • Robert September 13, 2017 at 3:29 pm #

            Got it worked. That’s helped me a lot. Many thanks, Adrian.

            Cheers,
            Robert

  44. Jabr September 13, 2017 at 3:26 pm #

    Hello Adrian,

    Is it possible to set the parameters for the BRIEF descriptor (cv2.xfeatures2d.BriefDescriptorExtractor_create()) in python, namely (the sample pairs and the patch size)?

    For example, set the patch size to 25 instead of the default one, which is 48.

    Thank you.

    • Adrian Rosebrock September 13, 2017 at 3:31 pm #

      As far as I understand from the documentation you can set the number of bytes used but not the patch size.

  45. Ian Maynard September 23, 2017 at 9:37 am #

    Hello Adrian,

    I tried to follow your instructions, got the scikit-image installed(following instruction on the link), then i tried to run the localbinarypatterns.py in python3.4. It always give me this error

    Traceback (most recent call last):
    File “/home/pi/pythonpy/videofacedet/craft/localbinarypatterns.py”, line 1, in
    from skimage import feature
    ImportError: No module named ‘skimage’

    I tried to search the internet for answers, I even tried to do my own solution, but none works for it. Then i tried to run that program in python2.7, and it did not give any error statement so i assume that it works in python2.7. How can I make it work for python 3.4? because I read somewhere that scikit only works in python3 and newer version.

    • Adrian Rosebrock September 23, 2017 at 9:58 am #

      Hi Ian — it sounds like scikit-image is not installed on your system. Either (1) scikit-image failed to install or (2) you did not install scikit-image into the Python virtual environment where you have OpenCV installed.

      Also scikit-image will work with BOTH Python 2.7 and Python 3.

  46. Sagar Patil September 23, 2017 at 12:31 pm #

    I really want to know how you did what you did in figure 4. Can you please give me the function? I am actually doing a project in which I can open my garage using my face. Also, I am 13 years old.

    • Adrian Rosebrock September 23, 2017 at 2:36 pm #

      It’s great to hear that you are getting involved with computer vision at such a young age, awesome!

      To generate the figure I computed the LBP image pixel-by-pixel. From there I took the output LBP image and scaled it to the range [0, 255].

      • Sagar Patil September 23, 2017 at 10:36 pm #

        Thank You, but can I please see the code, because, I don’t really know how to compute the LBP of an image in code. Also, I don’t know how to scale a matrix. I am really new to machine learning and computer vision.

        • Sagar Patil September 24, 2017 at 10:03 am #

          I want to do what you did in figure 4. Can you please provide the function to do that?

          • Adrian Rosebrock September 24, 2017 at 10:07 am #

            Hi Sagar — unfortunately I do not think I have that code anymore. I just checked the repo but I couldn’t find it. I’ll check again, but please understand that while I’m happy to help and point you in the right direction it’s extremely rare that I can provide or even write code for you outside what is covered in the freely available tutorials. Between keeping PyImageSearch updated, writing new content, and releasing a new book I’m simply too busy. I do hope you understand.

  47. Sagar Patil September 24, 2017 at 10:41 am #

    Thank you for trying your best. I really do appreciate it. I am doing that because I don’t want the image affected by lighting. If there is any other way to do it, please mention it.

    • Adrian Rosebrock September 24, 2017 at 12:27 pm #

      In general, illumination invariance is extremely challenging and highly dependent on your application. LBPs are theoretically robust to illumination, the key word being “theoretically”. In practice you might get varying results.

  48. isd October 13, 2017 at 7:22 pm #

    thank you for tutorial plz can you tell me how I can use LBP to extract facial expressions from an image

    • Adrian Rosebrock October 14, 2017 at 10:35 am #

      Depending on the facial expressions you want to recognize, LBPs may not be the best choice. I actually cover facial expression recognition inside my new book, Deep Learning for Computer Vision with Python.

  49. Karl Sonnen October 26, 2017 at 3:48 pm #

    Hello,

    I’ve been trying to implement this for days now.

    I’ve tried it on a fresh machine with Ubuntu and tried installing open cv but constant errors, so I stopped that.

    I’ve tried it on a pre-compiled ubuntu vmware machine that has pycharm and working opencv examples but getting errors with the sklearn module not being found even though I have done the pip install scikit-learn.

    The closest I have come to getting this to work is using WinPython. However, I have encountered this error:

    Traceback (most recent call last):
    File “recognize.py”, line 49, in
    model.fit(data_shuf, labels_shuf)
    File “C:\RED\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\sklearn\svm\classes.py”, line 235, in fit
    self.loss, sample_weight=sample_weight)
    File “C:\RED\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\sklearn\svm\base.py”, line 853, in _fit_liblinear
    ” class: %r” % classes_[0])
    ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: ‘images’

    This is the command I used to run the program:
    python recognize.py --training images/training --testing images/testing

    any tips on how to fix it?

    I’m tried running it on both the Python 2.7 and 3.x versions.

    • Adrian Rosebrock October 27, 2017 at 10:59 am #

      Hi Karl — I don’t recommend using Windows for image processing. Windows support is usually an afterthought for the open source libraries required for image processing. Line 31 of the script calls for a ‘/’ which is different for the Windows system. Try ‘\\’ (you need two backslashes because one is an escape character). Also, see Sveder’s comment.

  50. Sheyna October 27, 2017 at 4:13 pm #

    Hi Adrian,

    Is there any simple(r) version to implement LBP to a 3D data for pattern/feature classification? I saw a paper, but I am not sure if I want to implement through that route for a basic test as I need to check many other feature extraction methods.

    Thanks in advance.

    • Adrian Rosebrock October 31, 2017 at 8:11 am #

      Hi Sheyna — I have never used LBPs for 3D data so I’m unfortunately not sure here.

  51. supraja November 10, 2017 at 3:54 am #

    hai sir,
    i want help regarding image processing which takes a plant leaf image as input and shows whether it is healthy or has any disease. it will specify which disease it has. i’m confused with so many methods and techniques to use
    can u suggest which method to use and how to code
    thank you

    • Adrian Rosebrock November 13, 2017 at 2:16 pm #

      It sounds like you should be using a bit of machine learning instead. I would suggest working through Practical Python and OpenCV where I include a few examples of classifying images (such as predicting a species of flower in an image). If you have any examples of diseased leaf images I can take a look and suggest a specific algorithm to apply.

  52. Marina Castro November 12, 2017 at 6:01 pm #

    This was so nice! One of the best tutorials and overall explanations I’ve ever read!

    I really hope you had a fantastic birthday back in 2015 because it was well-deserved!

    Best wishes from Portugal!

    • Adrian Rosebrock November 13, 2017 at 1:58 pm #

      Thank you Marina, I really appreciate that 🙂

  53. doug November 13, 2017 at 5:24 pm #

    Hello Adrian, a very good tutorial; it helped me a lot.
    I don't know if you could help me: I want to make a classifier that, based on an image of a face, tells me the person's mood. I am new to Python and it would be very helpful. Thank you very much.

    • Adrian Rosebrock November 15, 2017 at 1:15 pm #

      Hey Doug, thanks for the comment. Are you referring to emotion/facial expression recognition? LBPs are a good start. You could perform face detection, extract the LBPs, and then classify the emotion via a machine learning algorithm (I would recommend Logistic Regression or a Linear SVM). I cover all of these techniques inside the PyImageSearch Gurus course.
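
      For a rough idea, here is a minimal sketch of that pipeline (the cascade file and input image are assumptions for illustration, and `model` is assumed to be a classifier already trained on LBP histograms of labeled face crops, just as recognize.py trains on texture histograms):

      import cv2
      from pyimagesearch.localbinarypatterns import LocalBinaryPatterns

      desc = LocalBinaryPatterns(24, 8)
      detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

      image = cv2.imread("face.jpg")
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

      # detect each face and describe its texture with an LBP histogram
      for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
          hist = desc.describe(gray[y:y + h, x:x + w])
          # classify the expression with the pre-trained model, e.g.:
          # print(model.predict(hist.reshape(1, -1))[0])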

      Otherwise, I think a better solution would be to use deep learning. I actually cover exactly how to build an emotion recognition system inside my new book, Deep Learning for Computer Vision with Python. Be sure to take a look!

  54. Vhulenda November 22, 2017 at 7:11 am #

    Hi

    Thanks for an awesome tutorial.

    I have a question though. How can I use video for testing instead of images? I saw one comment in which Adrian said you can implement the function in C++ and then call the method from Python; if that's it, can you elaborate? I'm new to both computer vision and Python/C++.

    Thanks in advance.

    • Adrian Rosebrock November 22, 2017 at 9:49 am #

      Hey Vhulenda — I think you are asking two separate questions. To use this code on video you need to access your video stream, such as your USB camera or built-in webcam. If you're new to Python and OpenCV I would recommend reading through Practical Python and OpenCV where I discuss this quite extensively for beginners.

      Secondly, there are a number of ways to implement a function in C++ and call it from Python. The easiest method is to use Cython.
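
      As a starting point, here is a minimal sketch of running the classifier on webcam frames instead of the testing images (it assumes `desc` and `model` were already built exactly as in recognize.py):

      import cv2

      camera = cv2.VideoCapture(0)  # 0 = the default webcam

      while True:
          (grabbed, frame) = camera.read()
          if not grabbed:
              break

          # describe the frame's texture and classify it
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          hist = desc.describe(gray)
          prediction = model.predict(hist.reshape(1, -1))[0]

          # draw the prediction and show the frame; press "q" to quit
          cv2.putText(frame, prediction, (10, 30),
              cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 3)
          cv2.imshow("Frame", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      camera.release()
      cv2.destroyAllWindows()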

      • Vhulenda November 23, 2017 at 8:01 am #

        Thank you

  55. Steve November 25, 2017 at 11:42 am #

    I would like it to show me the training accuracy, and then the test accuracy, instead of opening all the images … for example: test accuracy: 100% (100/100).

    • Adrian Rosebrock November 25, 2017 at 12:01 pm #

      Hi Steve, take a look at the classification_report function inside scikit-learn. You can make predictions on your training and testing sets and then view the output.
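
      Something along these lines, reusing the variable names from recognize.py (swap in your held-out test histograms and labels to report test accuracy):

      from sklearn.metrics import classification_report

      # compare the model's predictions against the ground-truth labels
      predictions = model.predict(data)
      print(classification_report(labels, predictions))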

  56. Steve November 25, 2017 at 11:54 am #

    And I would like to run a test with the traditional LBP. How do I do that?

  57. Norm December 1, 2017 at 4:59 am #

    This looks amazing — great job, though a bit complex for the newbie. I purchased some of your material & am working through it, albeit at a slow pace. Is LBP well-suited for locating a small object of a known pattern in a "noisy" background, say a small leaf in a lawn? You can easily "see" there is a leaf there (on top or slightly buried), but the lawn represents quite a bit of "noisy coverage". It seems like a difficult challenge to separate the two, since they may share similar colors & lighting can vary. Only the edges or patterns seem viable.

    • Adrian Rosebrock December 2, 2017 at 7:26 am #

      Hi Norm — if only the edges (and perhaps the veins of the leaf themselves) are viable, I think a more structural image descriptor such as Histogram of Oriented Gradients would work better.

  58. Norm December 1, 2017 at 11:17 am #

    This is awesome! I am wondering a few things

  59. yousef December 5, 2017 at 10:28 am #

    Hi Adrian, it was so useful to me. Thank you for supporting and helping me.

    • Adrian Rosebrock December 8, 2017 at 5:16 pm #

      Thanks Yousef 🙂

  60. Ratih December 29, 2017 at 1:31 am #

    Hi Adrian, how can I apply cross-validation to the LBP model? Thank you

  61. Son Vo January 22, 2018 at 12:46 am #

    Hi Adrian,

    Thanks for the useful post.

    I just wonder one thing: after you compute the LBP, you calculate the histogram for the whole LBP image using NumPy. Is there any way we can calculate the histogram of only a particular ROI of the LBP image, based on a given mask?

    If we want to compare objects in images, we should only care about the histogram of the object rather than the whole image. I know that OpenCV supports calculating a histogram based on a mask, but when I try to apply that function to the LBP image, it shows an error. Please give me some advice. Thanks Adrian.

    • Adrian Rosebrock January 22, 2018 at 6:16 pm #

      Great question. Technically yes, you can compute an LBP based only on a mask, but there are a lot of problems with implementation. To start, consider how the LBP algorithm works — you need to access the pixels surrounding a center one. What would you do with pixels that lie along the border of a mask? If these pixels are treated as center pixels then their neighborhoods would fall outside the mask. You would need to make a decision on how to handle this. I would suggest looking into “NumPy masked arrays” if you’re interested in doing this, then applying the masked array to the LBP array generated from scikit-image before computing your histogram. I hope that helps!
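
      A minimal sketch of that idea (the random image and the rectangular ROI mask here are stand-ins for your own data):

      import numpy as np
      from skimage import feature

      # hypothetical inputs: a grayscale image and a boolean ROI mask
      gray = np.random.randint(0, 256, (100, 100)).astype("uint8")
      roi_mask = np.zeros((100, 100), dtype=bool)
      roi_mask[25:75, 25:75] = True  # keep only the center region

      numPoints, radius = 24, 8
      lbp = feature.local_binary_pattern(gray, numPoints, radius,
          method="uniform")

      # masked arrays treat True as "masked out", so invert the ROI mask,
      # then histogram only the unmasked (in-ROI) LBP values
      masked = np.ma.masked_array(lbp, mask=~roi_mask)
      (hist, _) = np.histogram(masked.compressed(),
          bins=np.arange(0, numPoints + 3),
          range=(0, numPoints + 2))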

      • Son Vo January 23, 2018 at 6:26 am #

        Thanks Adrian for the advice. I’ll try that way.

  62. hassan January 30, 2018 at 2:48 pm #

    Hi Adrian. I am quite a beginner and trying my best to follow your blog.
    I have some basic queries about the LocalBinaryPatterns class. What is meant by eps=1e-7, and why are we using it?
    And second, please briefly explain this code snippet:

    (hist, _) = np.histogram(lbp.ravel(),
    bins=np.arange(0, self.numPoints + 3),
    range=(0, self.numPoints + 2))

    What are you doing here?
    (Sorry for the quite basic question, but I am quite a beginner and am not getting it at all.)

    • Adrian Rosebrock January 31, 2018 at 6:46 am #

      1. The "eps" value prevents a "division by zero" error on Line 23. If the histogram had no entries, its sum would be zero, and without eps we would divide by zero.

      2. The snippet you are referring to constructs a histogram of each unique LBP prototype. Please see the section that starts with “Thus, to construct the actual feature vector…” for a detailed explanation.
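
      Putting the two together, the tail end of the describe method counts the prototypes and then normalizes the counts, with eps guarding the division (this mirrors the class defined earlier in the post):

      # count how many LBP codes fall into each uniform-prototype bin
      (hist, _) = np.histogram(lbp.ravel(),
          bins=np.arange(0, self.numPoints + 3),
          range=(0, self.numPoints + 2))

      # normalize the histogram so it sums to 1; adding eps (1e-7)
      # ensures we never divide by zero on an all-zero histogram
      hist = hist.astype("float")
      hist /= (hist.sum() + eps)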

      If you’re new to OpenCV and computer vision/image processing, I would recommend working through Practical Python and OpenCV where I teach the fundamentals. Be sure to take a look, I’m confident that after going through it you’ll get up to speed quickly 🙂

  63. hassan February 3, 2018 at 1:59 pm #

    Hy Adrian,
    I need some explanation.
    1) I have read that an SVM is used to classify images as positive and negative, i.e., a decision boundary is drawn between two classes of samples. But in your case there are four classes (carpet, paper, area_rug, keyboard), so how will the classification be done?

    2) In your case the input to the classifier is the histogram, while you could also pass the raw LBP features. Why is that?

    3) In your code you used desc = LocalBinaryPatterns(24, 8). How do you choose these parameters?

    Please help me with these questions.

    [Suggestion: you should also add an option to attach a screenshot in the comments/replies; it would be more useful.]

    (Sorry for basic questions).

    • Adrian Rosebrock February 6, 2018 at 10:37 am #

      1. You are thinking of a 2-class SVM. Multi-class SVMs can be created via "one versus rest" (also called "one versus all") or "one versus one" schemes. See the LinearSVC documentation in scikit-learn for more information.

      2. You wouldn't want to pass in the raw LBP matrix, as it can differ significantly for every input image (its dimensions even depend on the image size). The matrix encodes local LBP information; we need to summarize it in a histogram to obtain a fixed-length, more robust representation.

      3. You normally perform hyperparameter tuning experiments to determine the parameters.

      I cover all of this and more inside the PyImageSearch Gurus course. Take a look, I think it would really help you on your computer vision and machine learning journey! 🙂

  64. hassan February 6, 2018 at 2:51 pm #

    Thanks Adrian for your reply. I need some explanation that :
    1) You mean that in sklearn.svm.LinearSVC there is already a built-in implementation of the one-versus-rest scheme, is that right?

    2) In your case you are not passing any "multi_class" argument to LinearSVC, so how is it performing multi-class classification?

    Thanks in anticipation of your kind reply.

    • Adrian Rosebrock February 8, 2018 at 8:43 am #

      The scikit-learn implementation can automatically infer whether it’s binary (two class) or multi-class (more than two classes) based on the number of unique class labels.
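
      A quick sketch of that behavior on toy data (the feature vectors below are made up purely for illustration):

      from sklearn.svm import LinearSVC

      X = [[0, 0], [1, 1], [2, 2]]         # three toy feature vectors
      y = ["carpet", "paper", "keyboard"]  # three unique class labels

      # with more than two labels, one-vs-rest is applied automatically
      model = LinearSVC(C=100.0, random_state=42)
      model.fit(X, y)
      print(model.predict([[2, 2]]))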

  65. hassan February 8, 2018 at 3:15 pm #

    Hi Adrian, thanks for your helpful replies. I have some queries:

    1) What is C, and how did you choose the values of C and random_state in the LinearSVC arguments? Is it trial and error? I have a dataset with 3 types of labels; how can I select the best values of C and random_state?

    2) I need your suggestion:
    I am trying to classify human facial expressions. What would be the best algorithm for extracting facial features from the face to train the model (as the LBP histogram is not giving satisfactory results)? What would be your suggestion in that regard?

    Thanks in anticipation of your reply.

    • Adrian Rosebrock February 12, 2018 at 6:43 pm #

      1. C is a hyperparameter you tune. It controls the “strictness” of the SVM. You normally tune it via cross-validation.
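
      As a sketch, cross-validated tuning of C might look like this (the grid of candidate values is arbitrary; data and labels are the training histograms and class names from recognize.py):

      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import LinearSVC

      # try a few orders of magnitude for C and keep the best performer
      params = {"C": [0.01, 0.1, 1.0, 10.0, 100.0]}
      grid = GridSearchCV(LinearSVC(random_state=42), params, cv=3)
      grid.fit(data, labels)
      print(grid.best_params_)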

      2. There are many algorithms for facial expression recognition. LBPs are actually quite good depending on the number of facial expressions you want to recognize.

      I actually cover facial expression recognition inside my book, Deep Learning for Computer Vision with Python. This book would also help address many of your machine learning questions. Be sure to take a look!

  66. Barathy February 13, 2018 at 11:21 am #

    Hi Adrian,
    Your tutorial is very helpful to me. I am doing my research project on detecting bloody textures. Is LBP suitable for detecting bloody textures in an image?

    • Adrian Rosebrock February 18, 2018 at 10:21 am #

      I’m not sure what “bloody texture” means in this context. Could you elaborate?

      • Barathy March 1, 2018 at 11:53 am #

        It means an image that contains a blood region. I want to detect whether an image contains blood regions or not.

        • Adrian Rosebrock March 2, 2018 at 10:42 am #

          You'll have to be more specific. Are you working with blood cultures/cells? Are you trying to detect blood in trauma photos? Keep in mind that I can only help you if you are more specific and put effort into describing exactly what you are trying to accomplish.

          • Barathy March 7, 2018 at 11:28 pm #

            Thank you Adrian. I am working with trauma photos, i.e., images depicting violence: for example, an accident or blood on the floor.

          • Adrian Rosebrock March 9, 2018 at 9:20 am #

            LBPs likely wouldn't be a good fit here. You could give them a try, but I think CNNs would give you better accuracy; you would need a lot of data first, though.

  67. Amy March 8, 2018 at 11:27 pm #

    Hi Adrian!

    I plan to plot the histogram as you have shown in Figure 5. Do you know what to add to the code?

    Thanks!

    • Adrian Rosebrock March 9, 2018 at 8:51 am #

      I do not have the code handy, but you would use matplotlib to construct a plot with the LBP prototype (the bin of the histogram) on the x-axis and the number of prototypes assigned to that bin on the y-axis. If you're new to matplotlib I would suggest playing with it and getting used to it before trying to create this plot.
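
      As a rough sketch (hist is assumed to be the normalized LBP histogram returned by desc.describe):

      import numpy as np
      import matplotlib.pyplot as plt

      # one bar per uniform LBP prototype (i.e., per histogram bin)
      plt.bar(np.arange(len(hist)), hist)
      plt.xlabel("LBP prototype (histogram bin)")
      plt.ylabel("Fraction of pixels assigned to bin")
      plt.show()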

  68. Ravi March 14, 2018 at 1:50 am #

    I executed the above code in a Jupyter notebook and got the following error:

    usage: ipykernel_launcher.py [-h] -t TRAINING -e TESTING
    ipykernel_launcher.py: error: the following arguments are required: -t/--training, -e/--testing

    I read the above comments and tried to execute accordingly, but the error still persists.
    Please help.

  69. walter figueiredo March 20, 2018 at 1:37 pm #

    Hi Adrian, first I want to thank you for this well-explained tutorial. As a beginner in a Windows environment, I could follow everything and even solved a small problem thanks to your response rate in the comments.

    My thesis topic is natural scene classification, where the program has to tell whether a picture was taken in an indoor or outdoor environment. Could LBP do the job?

    Thanks in advance and I’m looking forward to buying the Hardcopy Bundle.

    ThankYou…

    • Adrian Rosebrock March 22, 2018 at 10:14 am #

      Hey Walter, LBPs could be used here but I would recommend using the "Gist Descriptor". Take a look at the spatial envelope page (http://people.csail.mit.edu/torralba/code/spatialenvelope/) for more information. Best of luck with your thesis!

  70. Danilo Borges March 28, 2018 at 6:47 pm #

    Hello Adrian, thank you for your incredible work. Can LBP be used for hand recognition?

    • Adrian Rosebrock March 30, 2018 at 7:37 am #

      Do you mean hand gesture recognition? Recognizing if a hand is in the field of view of the camera? Recognizing someone’s specific hand? Could you elaborate a bit?

  71. Amy April 13, 2018 at 10:40 pm #

    Hi Adrian, I'm a newbie to programming.

    Can I use the printed information to proceed to the next step? For example, if "wrapping_paper" then …….

    • Adrian Rosebrock April 16, 2018 at 2:32 pm #

      Hey Amy — I’m not sure what you mean by “proceed to the next level”? Could you clarify?

  72. Amyraah April 16, 2018 at 3:37 am #

    Hi Adrian, may I know how you chose the values on line 18 below:

    desc = LocalBinaryPatterns(24, 8)

    because when I change the values here, it gives me different prediction results.
    Could you please explain why? 🙂

    Thanks Adrian!

    • Adrian Rosebrock April 16, 2018 at 2:20 pm #

      The values here are the number of points and radius to the Local Binary Patterns descriptor. Be sure to refer to the scikit-image documentation on the local_binary_pattern function.
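
      For reference, those two values map directly onto scikit-image's function (the input image here is hypothetical):

      import cv2
      from skimage import feature

      gray = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)

      # 24 points sampled on a circle of radius 8, using the same
      # "uniform" method as the LocalBinaryPatterns(24, 8) class above
      lbp = feature.local_binary_pattern(gray, P=24, R=8, method="uniform")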

      • Ashutosh Gupta April 18, 2018 at 2:55 am #

        Hi Adrian

        Great article, thank you. I am looking for a feature vector for image texture description that can be used to compare images directly using distance measures. As described in the article, can we use the LBP histogram feature vector directly to compare images using Euclidean, chi-squared, etc. distances instead of training on a dataset? And if not, which texture descriptor could I use for such direct image comparisons?

  73. ashutosh gupta April 18, 2018 at 4:21 am #

    Hi Adrian,

    I want to apply a texture descriptor in my project. Can I use the histogram feature vector obtained from LBP directly to compare the texture of two images, instead of training/testing on images in a dataset?

    Thanks.

    • Adrian Rosebrock April 18, 2018 at 2:53 pm #

      Yep! You can simply compare the LBP histograms using some similarity metric/distance function.
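
      For example, a chi-squared distance between two LBP histograms (histA and histB are assumed to be normalized histograms from the describe method; smaller values mean more similar textures):

      import numpy as np

      def chi2_distance(histA, histB, eps=1e-10):
          # chi-squared distance; eps avoids dividing by zero in empty bins
          return 0.5 * np.sum(((histA - histB) ** 2) / (histA + histB + eps))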

      • ashutosh gupta April 24, 2018 at 3:13 am #

        Thanks Adrian. As with color histograms, will dividing the image into parts and comparing the individual LBP histograms of those parts between two images improve the efficiency of the LBP descriptor?

        • Adrian Rosebrock April 25, 2018 at 5:47 am #

          I think you meant to say “accuracy” rather than “efficiency”. In terms of efficiency it will actually be slower since you are computing a LBP histogram for each cell in the image.

          In terms of accuracy, that’s highly dependent. If your images are spatially arranged, then yes, dividing the image into parts will help improve accuracy.

  74. Jean April 19, 2018 at 9:51 pm #

    Hi Adrian,

    after calculating the LBP of a given image, one typically takes the histograms of 16×16 blocks from the original image. How would you treat the case where a 16×16 block near the image boundary doesn't fit? Suppose you have a 100×50 image and you want to split it into 16×16 blocks; obviously there will be a region near the right and bottom borders where a 16×16 block won't fit exactly within the image.

    Regards

    • Adrian Rosebrock April 20, 2018 at 9:49 am #

      There are a few ways to handle this but the two most popular ways include:

      1. Zero-padding, where we fill the boundary pixels with zero to ensure a 16×16 region
      2. Replicate padding where we use the border pixel values themselves to pad to a 16×16 region

      Zero-padding is often used in deep learning and machine learning for efficiency. Replicate padding is also used quite a bit. You would need to refer to the documentation of a given library to see exactly which method is used.
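
      A minimal sketch of both options with OpenCV (the 8-pixel border size is an arbitrary example):

      import cv2

      image = cv2.imread("example.png")  # hypothetical input image

      # 1. zero-padding: fill an 8-pixel border with zeros
      padded_zero = cv2.copyMakeBorder(image, 8, 8, 8, 8,
          cv2.BORDER_CONSTANT, value=0)

      # 2. replicate padding: repeat the outermost pixel values
      padded_rep = cv2.copyMakeBorder(image, 8, 8, 8, 8,
          cv2.BORDER_REPLICATE)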

  75. Sushil April 26, 2018 at 5:55 pm #

    This is such a brilliant piece of an algorithm: simple, neat, yet effective. But Adrian, could you please suggest any ideas, blogs, or posts of yours on how to train the model on a texture (or background ONLY) and later predict where in a test image the learned textures might be? I appreciate your attention, thanks. Keep posting. I love your blog.

    • Adrian Rosebrock April 28, 2018 at 6:10 am #

      You can use LBPs for texture classification, in fact, that was a primary motivation behind why they were developed. It sounds like your problem here is segmenting the background from the foreground. Is that the case?

  76. Joey Dela Cruz April 26, 2018 at 11:29 pm #

    Good day Adrian, may we ask what the use of the following declarations is:
    desc = LocalBinaryPatterns (24, 8)
    data = [ ]
    labels = [ ]

    Thank you

    • Adrian Rosebrock April 28, 2018 at 6:09 am #

      The “desc” is the instantiation of our LocalBinaryPattern feature extractor object. The “data” list will hold the extracted LBP histograms and the “labels” list holds their corresponding class labels.

      • Joey Dela Cruz April 29, 2018 at 10:02 am #

        Can this run on windows 10 OS?

        • Adrian Rosebrock April 30, 2018 at 1:00 pm #

          It can, but I haven’t used Windows in over 10 years. Once you are able to install OpenCV and relevant packages on Windows, you shouldn’t have a problem though.

  77. Rishabh May 17, 2018 at 2:29 am #

    Could you please share the code for the face example that you have shown? I did try it your way, but I think it is not working out for me.

  78. ezza June 15, 2018 at 3:23 pm #

    Hi,

    What is the input "trainData" in the following line of code you recommended for checking the accuracy? It does not appear anywhere in the full code.
    What should I pass here if I have the same code?

    print(model.score(trainData))

    • Adrian Rosebrock June 19, 2018 at 9:02 am #

      Sorry, could you clarify exactly which line of code or paragraph you are referring to?

  79. Supratim June 22, 2018 at 1:08 pm #

    Hi Adrian,

    thanks a lot for the post. I have a question on how to extract Geometric Texton Histograms and combine them with LBPs.

    Thanks in advance.

    • Adrian Rosebrock June 25, 2018 at 2:00 pm #

      I have experience with both “textons” and “LBPs” but I’m not sure what you mean by “Geometric Texton Histograms” — are you referring to some particular paper?

  80. fromCN July 25, 2018 at 4:39 am #

    Adrian you are so cool!!

    • Adrian Rosebrock July 25, 2018 at 7:54 am #

      Thank you, you are too kind 🙂

  81. Dicky R. August 6, 2018 at 8:12 am #

    Adrian, can you give directions how to plotting the decision function of svm classifier of this LBP project. many thanks before.

    • Adrian Rosebrock August 7, 2018 at 6:41 am #

      The scikit-learn library has a few examples of plotting an SVM decision boundary. I would suggest starting there.

  82. Venu August 7, 2018 at 12:51 am #

    Hello Adrian, great explanation, but I still have a question about the LBP: what about the corner pixels? E.g., for the bottom-left corner, does it still use a 3×3 neighborhood, or only the three neighbors around it? Thank you.

    • Adrian Rosebrock August 7, 2018 at 6:31 am #

      Hey Venu — are you referring to a particular figure/image in the post? I’m not sure which 3×3 region you’re referring to.

      • Venu August 9, 2018 at 2:32 pm #

        Thank you for answering, Adrian. I'm referring to Figure 3 in this post: how do we calculate the pixel at the bottom, since it only has 3 neighbors around it? Thank you for always helping, Adrian. Have a great day.

        • Adrian Rosebrock August 9, 2018 at 2:34 pm #

          You would apply either:

          1. Zero-padding to pad the border of the image with zero
          2. Or replicate padding to pad the border of the image with its corresponding pixel value

          • Venu August 10, 2018 at 4:36 pm #

            Thank you, Adrian.

  83. ri August 14, 2018 at 7:12 pm #

    Hi Adrian, I am quite a beginner in machine learning. Could you help me determine the mean accuracy of this model? I tried acc = model.score(data, labels) but it gives me nothing.

    • Adrian Rosebrock August 15, 2018 at 8:20 am #

      "acc" should hold the mean accuracy of the model. How have you verified that it contains nothing?

  84. xiaoyang cui August 28, 2018 at 4:36 am #

    Hello Adrian. I was trying to run this demo, but when I typed 'python recognize.py --training images/training --testing images/testing' in the terminal, it raised the following error: "ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 'images'". Could you please tell me the reason? Thank you very much!

    • Adrian Rosebrock August 28, 2018 at 3:12 pm #

      This sounds like a path issue of some sort. Double-check your input paths. Additionally, are you on a Unix machine or Windows?

      • xiaoyang cui August 28, 2018 at 11:40 pm #

        Thank you very much. It was indeed a path issue.

        • Adrian Rosebrock August 30, 2018 at 9:05 am #

          Awesome, nice job resolving the issue!

Trackbacks/Pingbacks

  1. Fast, optimized 'for' pixel loops with OpenCV and Python - PyImageSearch - August 28, 2017

    […] implement that will require you to perform these manual for  loops. Whether you need to implement Local Binary Patterns from scratch, create a custom convolution algorithm, or simply cannot rely on vectorized […]

Quick Note on Comments

Please note that all comments on the PyImageSearch blog are hand-moderated by me. By moderating each comment on the blog I can ensure (1) I interact with and respond to as many readers as possible and (2) the PyImageSearch blog is kept free of spam.

Typically, I only moderate comments every 48-72 hours; however, I just got married and am currently on my honeymoon with my wife until early October. Please feel free to submit comments of course! Just keep in mind that I will be unavailable to respond until then. For faster interaction and response times, you should join the PyImageSearch Gurus course which includes private community forums.

I appreciate your patience and thank you for being a PyImageSearch reader! I will see you when I get back.
