OpenCV Face Recognition

In this tutorial, you will learn how to use OpenCV to perform face recognition. To build our face recognition system, we’ll first perform face detection, extract face embeddings from each face using deep learning, train a face recognition model on the embeddings, and then finally recognize faces in both images and video streams with OpenCV.

Today’s tutorial is also a special gift for my fiancée, Trisha (who is now officially my wife). Our wedding was over the weekend, and by the time you’re reading this blog post, we’ll be at the airport preparing to board our flight for the honeymoon.

To celebrate the occasion, and show her how much her support of myself, the PyImageSearch blog, and the PyImageSearch community means to me, I decided to use OpenCV to perform face recognition on a dataset of our faces.

You can, of course, swap in your own dataset of faces! All you need to do is follow my directory structure and insert your own face images.

As a bonus, I’ve also included how to label “unknown” faces that cannot be classified with sufficient confidence.

To learn how to perform OpenCV face recognition, just keep reading!

OpenCV Face Recognition

In today’s tutorial, you will learn how to perform face recognition using the OpenCV library.

You might be wondering how this tutorial differs from the one I wrote a few months back on face recognition with dlib.

Well, keep in mind that the dlib face recognition post relied on two important external libraries:

  1. dlib (obviously)
  2. face_recognition (which is an easy-to-use set of face recognition utilities that wraps around dlib)

While we used OpenCV to facilitate face recognition, OpenCV itself was not responsible for identifying faces.

In today’s tutorial, we’ll learn how we can apply deep learning and OpenCV together (with no libraries other than scikit-learn) to:

  1. Detect faces
  2. Compute 128-d face embeddings to quantify a face
  3. Train a Support Vector Machine (SVM) on top of the embeddings
  4. Recognize faces in images and video streams

All of these tasks will be accomplished with OpenCV, enabling us to obtain a “pure” OpenCV face recognition pipeline.

How OpenCV’s face recognition works

Figure 1: An overview of the OpenCV face recognition pipeline. The key step is a CNN feature extractor that generates 128-d facial embeddings. (source)

In order to build our OpenCV face recognition pipeline, we’ll be applying deep learning in two key steps:

  1. To apply face detection, which detects the presence and location of a face in an image, but does not identify it
  2. To extract the 128-d feature vectors (called “embeddings”) that quantify each face in an image

I’ve discussed how OpenCV’s face detection works previously, so please refer to that post if you have not detected faces before.

The model responsible for actually quantifying each face in an image is from the OpenFace project, a Python and Torch implementation of face recognition with deep learning. This implementation comes from Schroff et al.’s 2015 CVPR publication, FaceNet: A Unified Embedding for Face Recognition and Clustering.

Reviewing the entire FaceNet implementation is outside the scope of this tutorial, but the gist of the pipeline can be seen in Figure 1 above.

First, we input an image or video frame to our face recognition pipeline. Given the input image, we apply face detection to detect the location of a face in the image.

Optionally we can compute facial landmarks, enabling us to preprocess and align the face.

Face alignment, as the name suggests, is the process of (1) identifying the geometric structure of the faces and (2) attempting to obtain a canonical alignment of the face based on translation, rotation, and scale.

While optional, face alignment has been demonstrated to increase face recognition accuracy in some pipelines.

After we’ve (optionally) applied face alignment and cropping, we pass the input face through our deep neural network:

Figure 2: How the deep learning face recognition model computes the face embedding.

The FaceNet deep learning model computes a 128-d embedding that quantifies the face itself.

But how does the network actually compute the face embedding?

The answer lies in the training process itself, including:

  1. The input data to the network
  2. The triplet loss function

To train a face recognition model with deep learning, each input batch of data includes three images:

  1. The anchor
  2. The positive image
  3. The negative image

The anchor is our current face and has identity A.

The second image is our positive image — this image also contains a face of person A.

The negative image, on the other hand, does not have the same identity, and could belong to person B, C, or even Y!

The point is that the anchor and positive image both belong to the same person/face while the negative image does not contain the same face.

The neural network computes the 128-d embeddings for each face and then tweaks the weights of the network (via the triplet loss function) such that:

  1. The 128-d embeddings of the anchor and positive image lie closer together
  2. While at the same time, pushing the embeddings for the negative image farther away

In this manner, the network is able to learn to quantify faces and return highly robust and discriminating embeddings suitable for face recognition.

And furthermore, we can actually reuse the OpenFace model for our own applications without having to explicitly train it!

Even though the deep learning model we’re using today has (very likely) never seen the faces we’re about to pass through it, the model will still be able to compute embeddings for each face. Ideally, these face embeddings will be sufficiently different such that we can train a “standard” machine learning classifier (SVM, SGD classifier, Random Forest, etc.) on top of the face embeddings, and therefore obtain our OpenCV face recognition pipeline.

If you are interested in learning more about the details surrounding triplet loss and how it can be used to train a face embedding model, be sure to refer to my previous blog post as well as the Schroff et al. publication.

Our face recognition dataset

Figure 3: A small example face dataset for face recognition with OpenCV.

The dataset we are using today contains three people:

  • Myself
  • Trisha (my wife)
  • “Unknown”, which is used to represent faces of people we do not know and wish to label as such (here I just sampled faces from the movie Jurassic Park which I used in a previous post — you may want to insert your own “unknown” dataset).

As I mentioned in the introduction to today’s face recognition post, I was just married over the weekend, so this post is a “gift” to my new wife.

Each class contains a total of six images.

If you are building your own face recognition dataset, ideally, I would suggest having 10-20 images per person you wish to recognize — be sure to refer to the “Drawbacks, limitations, and how to obtain higher face recognition accuracy” section of this blog post for more details.

Project structure

Once you’ve grabbed the zip from the “Downloads” section of this post, go ahead and unzip the archive and navigate into the directory.

From there, you may use the tree command to have the directory structure printed in your terminal:
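
My layout looks something like the following sketch (reconstructed from the descriptions below; the two face detector filenames are the standard OpenCV ones and are an assumption on my part):

    $ tree --dirsfirst
    .
    ├── dataset
    │   ├── adrian
    │   ├── trisha
    │   └── unknown
    ├── face_detection_model
    │   ├── deploy.prototxt
    │   └── res10_300x300_ssd_iter_140000.caffemodel
    ├── images
    ├── output
    │   ├── embeddings.pickle
    │   ├── le.pickle
    │   └── recognizer.pickle
    ├── extract_embeddings.py
    ├── openface_nn4.small2.v1.t7
    ├── train_model.py
    ├── recognize.py
    └── recognize_video.py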

There are quite a few moving parts for this project — take the time now to carefully read this section so you become familiar with all the files in today’s project.

Our project has four directories in the root folder:

  • dataset/ : Contains our face images organized into subfolders by name.
  • images/ : Contains three test images that we’ll use to verify the operation of our model.
  • face_detection_model/ : Contains a pre-trained Caffe deep learning model provided by OpenCV to detect faces. This model detects and localizes faces in an image.
  • output/ : Contains my output pickle files. If you’re working with your own dataset, you can store your output files here as well. The output files include:
    • embeddings.pickle : A serialized facial embeddings file. Embeddings have been computed for every face in the dataset and are stored in this file.
    • le.pickle : Our label encoder. Contains the name labels for the people that our model can recognize.
    • recognizer.pickle : Our Linear Support Vector Machine (SVM) model. This is a machine learning model rather than a deep learning model and it is responsible for actually recognizing faces.

Let’s summarize the five files in the root directory:

  • extract_embeddings.py : We’ll review this file in Step #1 which is responsible for using a deep learning feature extractor to generate a 128-D vector describing a face. All faces in our dataset will be passed through the neural network to generate embeddings.
  • openface_nn4.small2.v1.t7 : A Torch deep learning model which produces the 128-D facial embeddings. We’ll be using this deep learning model in Steps #1, #2, and #3 as well as the Bonus section.
  • train_model.py : Our Linear SVM model will be trained by this script in Step #2. We’ll detect faces, extract embeddings, and fit our SVM model to the embeddings data.
  • recognize.py : In Step #3, we’ll recognize faces in images. We’ll detect faces, extract embeddings, and query our SVM model to determine who is in an image. We’ll draw boxes around faces and annotate each box with a name.
  • recognize_video.py : Our Bonus section describes how to recognize who is in frames of a video stream just as we did in Step #3 on static images.

Let’s move on to the first step!

Step #1: Extract embeddings from face dataset

Now that we understand how face recognition works and reviewed our project structure, let’s get started building our OpenCV face recognition pipeline.

Open up the extract_embeddings.py  file and insert the following code:
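
If you don’t have the download handy, here is a minimal sketch of the top of the script (line numbers cited in this section refer to the full file from the “Downloads”, so they may not line up exactly with these sketches):

    # import the necessary packages
    from imutils import paths
    import numpy as np
    import argparse
    import imutils
    import pickle
    import cv2
    import os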

We import our required packages on Lines 2-8. You’ll need to have OpenCV and imutils  installed. To install OpenCV, simply follow one of my guides (I recommend OpenCV 3.4.2, so be sure to download the right version while you follow along). My imutils package can be installed with pip:
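
    $ pip install --upgrade imutils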

Next, we process our command line arguments:
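
A sketch of the parsing code follows; the long flag names come straight from the list below, while the short flags are my own guesses:

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--dataset", required=True,
        help="path to input directory of face images")
    ap.add_argument("-e", "--embeddings", required=True,
        help="path to output serialized db of facial embeddings")
    ap.add_argument("-d", "--detector", required=True,
        help="path to OpenCV's deep learning face detector")
    ap.add_argument("-m", "--embedding-model", required=True,
        help="path to OpenCV's deep learning face embedding model")
    ap.add_argument("-c", "--confidence", type=float, default=0.5,
        help="minimum probability to filter weak detections")
    args = vars(ap.parse_args())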

  • --dataset : The path to our input dataset of face images.
  • --embeddings : The path to our output embeddings file. Our script will compute face embeddings which we’ll serialize to disk.
  • --detector : Path to OpenCV’s Caffe-based deep learning face detector used to actually localize the faces in the images.
  • --embedding-model : Path to the OpenCV deep learning Torch embedding model. This model will allow us to extract a 128-D facial embedding vector.
  • --confidence : Optional threshold for filtering weak face detections.

Now that we’ve imported our packages and parsed command line arguments, let’s load the face detector and embedder from disk:
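
As a sketch, assuming the detector directory contains the standard deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel files that ship with OpenCV:

    # load our serialized face detector from disk
    print("[INFO] loading face detector...")
    protoPath = os.path.sep.join([args["detector"], "deploy.prototxt"])
    modelPath = os.path.sep.join([args["detector"],
        "res10_300x300_ssd_iter_140000.caffemodel"])
    detector = cv2.dnn.readNetFromCaffe(protoPath, modelPath)

    # load our serialized face embedding model from disk
    print("[INFO] loading face recognizer...")
    embedder = cv2.dnn.readNetFromTorch(args["embedding_model"])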

Here we load the face detector and embedder:

  • detector : Loaded via Lines 26-29. We’re using a Caffe based DL face detector to localize faces in an image.
  • embedder : Loaded on Line 33. This model is Torch-based and is responsible for extracting facial embeddings via deep learning feature extraction.

Notice that we’re using the respective cv2.dnn functions to load the two separate models. The dnn module wasn’t made available like this until OpenCV 3.3, but I recommend using OpenCV 3.4.2 or higher for this blog post.

Moving forward, let’s grab our image paths and perform initializations:
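
Something along these lines:

    # grab the paths to the input images in our dataset
    print("[INFO] quantifying faces...")
    imagePaths = list(paths.list_images(args["dataset"]))

    # initialize our lists of extracted facial embeddings and
    # corresponding people names
    knownEmbeddings = []
    knownNames = []

    # initialize the total number of faces processed
    total = 0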

The imagePaths  list, built on Line 37, contains the path to each image in the dataset. I’ve made this easy via my imutils  function, paths.list_images .

Our embeddings and corresponding names will be held in two lists: knownEmbeddings  and knownNames  (Lines 41 and 42).

We’ll also be keeping track of how many faces we’ve processed via a variable called total  (Line 45).

Let’s begin looping over the image paths — this loop will be responsible for extracting embeddings from faces found in each image:
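
The top of the loop might look like this (a sketch; the 600-pixel width is an assumption, any known width will do):

    # loop over the image paths
    for (i, imagePath) in enumerate(imagePaths):
        # extract the person name from the image path
        print("[INFO] processing image {}/{}".format(i + 1,
            len(imagePaths)))
        name = imagePath.split(os.path.sep)[-2]

        # load the image, resize it to a known width (maintaining the
        # aspect ratio), and then grab the image dimensions
        image = cv2.imread(imagePath)
        image = imutils.resize(image, width=600)
        (h, w) = image.shape[:2]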

We begin looping over imagePaths  on Line 48.

First, we extract the name  of the person from the path (Line 52). To explain how this works, consider the following example in my Python shell:
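
For a hypothetical path such as dataset/adrian/00004.jpg :

    >>> import os
    >>> imagePath = "dataset/adrian/00004.jpg"
    >>> imagePath.split(os.path.sep)
    ['dataset', 'adrian', '00004.jpg']
    >>> imagePath.split(os.path.sep)[-2]
    'adrian'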

Notice how by using imagePath.split and providing the split character (the OS path separator — “/” on Unix and “\” on Windows), the function produces a list of folder/file names (strings) which walk down the directory tree. We grab the second-to-last index, the person’s name, which in this case is 'adrian'.

Finally, we wrap up the above code block by loading the image and resizing it to a known width (Lines 57 and 58).

Let’s detect and localize faces:
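
A sketch of the detection code; the 300x300 size and the mean-subtraction values are the standard ones for this Caffe face detector:

    # construct a blob from the image
    imageBlob = cv2.dnn.blobFromImage(
        cv2.resize(image, (300, 300)), 1.0, (300, 300),
        (104.0, 177.0, 123.0), swapRB=False, crop=False)

    # apply OpenCV's deep learning-based face detector to localize
    # faces in the input image
    detector.setInput(imageBlob)
    detections = detector.forward()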

On Lines 62-64, we construct a blob. To learn more about this process, please read Deep learning: How OpenCV’s blobFromImage works.

From there we detect faces in the image by passing the imageBlob  through the detector  network (Lines 68 and 69).

Let’s process the detections :
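
A sketch of this logic (it sits inside the loop over imagePaths ; the 20-pixel minimum is an assumption):

    # ensure at least one face was found
    if len(detections) > 0:
        # we're assuming each image has only ONE face, so find the
        # bounding box with the largest probability
        i = np.argmax(detections[0, 0, :, 2])
        confidence = detections[0, 0, i, 2]

        # ensure the detection with the largest probability also
        # passes our minimum probability test (filtering out weak
        # detections)
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the face's bounding
            # box, then extract the face ROI and grab its dimensions
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            face = image[startY:endY, startX:endX]
            (fH, fW) = face.shape[:2]

            # ensure the face width and height are sufficiently large
            if fW < 20 or fH < 20:
                continue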

The detections  list contains probabilities and coordinates to localize faces in an image.

Assuming we have at least one detection, we’ll proceed into the body of the if-statement (Line 72).

We make the assumption that there is only one face in the image, so we extract the detection with the highest confidence  and check to make sure that the confidence meets the minimum probability threshold used to filter out weak detections (Lines 75-81).

Assuming we’ve met that threshold, we extract the face  ROI and grab/check dimensions to make sure the face  ROI is sufficiently large (Lines 84-93).

From there, we’ll take advantage of our embedder  CNN and extract the face embeddings:
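
A sketch; the 96x96 input size is what the OpenFace model expects:

    # construct a blob for the face ROI, then pass the blob through
    # our face embedding model to obtain the 128-d quantification of
    # the face
    faceBlob = cv2.dnn.blobFromImage(face, 1.0 / 255, (96, 96),
        (0, 0, 0), swapRB=True, crop=False)
    embedder.setInput(faceBlob)
    vec = embedder.forward()

    # add the name of the person + corresponding face embedding to
    # their respective lists
    knownNames.append(name)
    knownEmbeddings.append(vec.flatten())
    total += 1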

We construct another blob, this time from the face ROI (not the whole image as we did before) on Lines 98 and 99.

Subsequently, we pass the faceBlob  through the embedder CNN (Lines 100 and 101). This generates a 128-D vector ( vec ) which describes the face. We’ll leverage this data to recognize new faces via machine learning.

And then we simply add the name  and embedding vec  to knownNames  and knownEmbeddings , respectively (Lines 105 and 106).

We also can’t forget about the variable we set to track the total number of faces — we go ahead and increment its value on Line 107.

We continue this process of looping over images, detecting faces, and extracting face embeddings for each and every image in our dataset.

All that’s left when the loop finishes is to dump the data to disk:
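
A sketch of the serialization:

    # dump the facial embeddings + names to disk
    print("[INFO] serializing {} encodings...".format(total))
    data = {"embeddings": knownEmbeddings, "names": knownNames}
    f = open(args["embeddings"], "wb")
    f.write(pickle.dumps(data))
    f.close()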

We add the name and embedding data to a dictionary and then serialize the data  in a pickle file on Lines 110-114.

At this point we’re ready to extract embeddings by running our script.

To follow along with this face recognition tutorial, use the “Downloads” section of the post to download the source code, OpenCV models, and example face recognition dataset.

From there, open up a terminal and execute the following command to compute the face embeddings with OpenCV:
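
The command will look something like this (the paths assume the project structure above):

    $ python extract_embeddings.py --dataset dataset \
        --embeddings output/embeddings.pickle \
        --detector face_detection_model \
        --embedding-model openface_nn4.small2.v1.t7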

Here you can see that we have extracted 18 face embeddings, one for each of the images (6 per class) in our input face dataset.

Step #2: Train face recognition model

At this point we have extracted 128-d embeddings for each face — but how do we actually recognize a person based on these embeddings? The answer is that we need to train a “standard” machine learning model (such as an SVM, k-NN classifier, Random Forest, etc.) on top of the embeddings.

In my previous face recognition tutorial we discovered how a modified version of k-NN can be used for face recognition on 128-d embeddings created via the dlib and face_recognition libraries.

Today, I want to share how we can build a more powerful classifier on top of the embeddings — you’ll be able to use this same method in your dlib-based face recognition pipelines as well if you are so inclined.

Open up the train_model.py  file and insert the following code:
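
A minimal sketch of the imports:

    # import the necessary packages
    from sklearn.preprocessing import LabelEncoder
    from sklearn.svm import SVC
    import argparse
    import pickle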

We’ll need scikit-learn, a machine learning library, installed in our environment prior to running this script. You can install it via pip:
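
    $ pip install scikit-learn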

We import our packages and modules on Lines 2-5. We’ll be using scikit-learn’s implementation of Support Vector Machines (SVM), a common machine learning model.

From there we parse our command line arguments:
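
Again as a sketch (the short flags are my guesses):

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-e", "--embeddings", required=True,
        help="path to serialized db of facial embeddings")
    ap.add_argument("-r", "--recognizer", required=True,
        help="path to output model trained to recognize faces")
    ap.add_argument("-l", "--le", required=True,
        help="path to output label encoder")
    args = vars(ap.parse_args())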

  • --embeddings : The path to the serialized embeddings (we exported it by running the previous extract_embeddings.py  script).
  • --recognizer : This will be our output model that recognizes faces. It is based on SVM. We’ll be saving it so we can use it in the next two recognition scripts.
  • --le : Our label encoder output file path. We’ll serialize our label encoder to disk so that we can use it and the recognizer model in our image/video face recognition scripts.

Each of these arguments is required.

Let’s load our facial embeddings and encode our labels:
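
A sketch:

    # load the face embeddings
    print("[INFO] loading face embeddings...")
    data = pickle.loads(open(args["embeddings"], "rb").read())

    # encode the string name labels as integers
    print("[INFO] encoding labels...")
    le = LabelEncoder()
    labels = le.fit_transform(data["names"])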

Here we load our embeddings from Step #1 on Line 19. We won’t be generating any embeddings in this model training script — we’ll use the embeddings previously generated and serialized.

Then we initialize our scikit-learn LabelEncoder  and encode our name labels  (Lines 23 and 24).

Now it’s time to train our SVM model for recognizing faces:
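
A sketch; note probability=True , which lets the model report class probabilities at prediction time:

    # train the model that accepts the 128-d embeddings of the face
    # and then produces the actual face recognition
    print("[INFO] training model...")
    recognizer = SVC(C=1.0, kernel="linear", probability=True)
    recognizer.fit(data["embeddings"], labels)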

On Line 29 we initialize our SVM model, and on Line 30 we fit  the model (also known as “training the model”).

Here we are using a Linear Support Vector Machine (SVM) but you can try experimenting with other machine learning models if you so wish.

After training the model we output the model and label encoder to disk as pickle files.

We write two pickle files to disk in this block — the face recognizer model and the label encoder.

At this point, be sure you executed the code from Step #1 first. You can grab the zip containing the code and data from the “Downloads” section.

Now that we have finished coding train_model.py  as well, let’s apply it to our extracted face embeddings:
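
The training command will look something like this:

    $ python train_model.py --embeddings output/embeddings.pickle \
        --recognizer output/recognizer.pickle \
        --le output/le.pickle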

Here you can see that our SVM has been trained on the embeddings and both the (1) SVM itself and (2) the label encoding have been written to disk, enabling us to apply them to input images and video.

Step #3: Recognize faces with OpenCV

We are now ready to perform face recognition with OpenCV!

We’ll start with recognizing faces in images in this section and then move on to recognizing faces in video streams in the following section.

Open up the recognize.py  file in your project and insert the following code:
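
A sketch of the imports:

    # import the necessary packages
    import numpy as np
    import argparse
    import imutils
    import pickle
    import cv2
    import os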

We import  our required packages on Lines 2-7. At this point, you should have each of these packages installed.

Our six command line arguments are parsed on Lines 10-23:

  • --image : The path to the input image. We will attempt to recognize the faces in this image.
  • --detector : The path to OpenCV’s deep learning face detector. We’ll use this model to detect where in the image the face ROIs are.
  • --embedding-model : The path to OpenCV’s deep learning face embedding model. We’ll use this model to extract the 128-D face embedding from the face ROI — we’ll feed the data into the recognizer.
  • --recognizer : The path to our recognizer model. We trained our SVM recognizer in Step #2. This is what will actually determine who a face is.
  • --le : The path to our label encoder. This contains our face labels such as 'adrian'  or 'trisha' .
  • --confidence : The optional threshold to filter weak face detections.

Be sure to study these command line arguments — it is important to know the difference between the two deep learning models and the SVM model. If you find yourself confused later in this script, you should refer back to here.

Now that we’ve handled our imports and command line arguments, let’s load the three models from disk into memory:

We load three models in this block. At the risk of being redundant, I want to explicitly remind you of the differences among the models:

  1. detector : A pre-trained Caffe DL model to detect where in the image the faces are (Lines 27-30).
  2. embedder : A pre-trained Torch DL model to calculate our 128-D face embeddings (Line 34).
  3. recognizer : Our Linear SVM face recognition model (Line 37). We trained this model in Step 2.

Both 1 & 2 are pre-trained meaning that they are provided to you as-is by OpenCV. They are buried in the OpenCV project on GitHub, but I’ve included them for your convenience in the “Downloads” section of today’s post. I’ve also numbered the models in the order that we’ll apply them to recognize faces with OpenCV.

We also load our label encoder which holds the names of the people our model can recognize (Line 38).

Now let’s load our image and detect faces:

Here we:

  • Load the image into memory and construct a blob (Lines 42-49). Learn about  cv2.dnn.blobFromImage  here.
  • Localize faces in the image via our detector  (Lines 53 and 54).

Given our new detections , let’s recognize faces in the image. But first we need to filter weak detections  and extract the face  ROI:

You’ll recognize this block from Step #1. I’ll explain it here once more:

  • We loop over the detections  on Line 57 and extract the confidence  of each on Line 60.
  • Then we compare the confidence  to the minimum probability detection threshold contained in our command line args  dictionary, ensuring that the computed probability is larger than the minimum probability (Line 63).
  • From there, we extract the face ROI (Lines 66-70) as well as ensure its spatial dimensions are sufficiently large (Lines 74 and 75).

Recognizing the name of the face  ROI requires just a few steps:
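
A sketch of those steps:

    # construct a blob for the face ROI, then pass the blob through
    # our face embedding model to obtain the 128-d quantification of
    # the face
    faceBlob = cv2.dnn.blobFromImage(face, 1.0 / 255, (96, 96),
        (0, 0, 0), swapRB=True, crop=False)
    embedder.setInput(faceBlob)
    vec = embedder.forward()

    # perform classification to recognize the face
    preds = recognizer.predict_proba(vec)[0]
    j = np.argmax(preds)
    proba = preds[j]
    name = le.classes_[j]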

First, we construct a faceBlob (from the face ROI) and pass it through the embedder to generate a 128-D vector which describes the face (Lines 80-83).

Then, we pass the vec  through our SVM recognizer model (Line 86), the result of which is our predictions for who is in the face ROI.

We take the highest probability index (Line 87) and query our label encoder to find the name  (Line 89). In between, I extract the probability on Line 88.

Note: You can further filter out weak face recognitions by applying an additional threshold test on the probability. For example, inserting if proba < T (where T is a variable you define) can provide an additional layer of filtering to ensure there are fewer false-positive face recognitions.

Now, let’s display OpenCV face recognition results:
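
A sketch of the drawing and display logic; the color and font settings are my own choices:

    # draw the bounding box of the face along with the associated
    # name and probability
    text = "{}: {:.2f}%".format(name, proba * 100)
    y = startY - 10 if startY - 10 > 10 else startY + 10
    cv2.rectangle(image, (startX, startY), (endX, endY),
        (0, 0, 255), 2)
    cv2.putText(image, text, (startX, y),
        cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)

    # (after the loop) show the output image until a key is pressed
    cv2.imshow("Image", image)
    cv2.waitKey(0)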

For every face we recognize in the loop (including “unknown” people):

  • We construct a text  string containing the name  and probability on Line 93.
  • And then we draw a rectangle around the face and place the text above the box (Lines 94-98).

And then finally we visualize the results on the screen until a key is pressed (Lines 101 and 102).

It is time to recognize faces in images with OpenCV!

To apply our OpenCV face recognition pipeline to my provided images (or your own dataset + test images), make sure you use the “Downloads” section of the blog post to download the code, trained models, and example images.

From there, open up a terminal and execute the following command:
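
For example, to test the recognizer on one of the images (the image filename here is illustrative):

    $ python recognize.py --detector face_detection_model \
        --embedding-model openface_nn4.small2.v1.t7 \
        --recognizer output/recognizer.pickle \
        --le output/le.pickle \
        --image images/adrian.jpg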

Figure 4: OpenCV face recognition has recognized me at the Jurassic World: Fallen Kingdom movie showing.

Here you can see me sipping on a beer and sporting one of my favorite Jurassic Park shirts, along with a special Jurassic World pint glass and commemorative book. My face prediction only has 47.15% confidence; however, that confidence is higher than the “Unknown” class.

Let’s try another OpenCV face recognition example:

Figure 5: My wife, Trisha, and I are recognized in a selfie picture on an airplane with OpenCV + deep learning facial recognition.

Here are Trisha and I, ready to start our vacation!

In a final example, let’s look at what happens when our model is unable to recognize the actual face:

Figure 6: Facial recognition with OpenCV has determined that this person is “unknown”.

The third image is an example of an “unknown” person who is actually Patrick Bateman from American Psycho — believe me, this is not a person you would want to see show up in your images or video streams!

BONUS: Recognize faces in video streams

As a bonus, I decided to include a section dedicated to OpenCV face recognition in video streams!

The actual pipeline itself is near identical to recognizing faces in images, with only a few updates which we’ll review along the way.

Open up the recognize_video.py  file and let’s get started:

Our imports are the same as the Step #3 section above, except for Lines 2 and 3 where we use the imutils.video  module. We’ll use VideoStream  to capture frames from our camera and FPS  to calculate frames per second statistics.

The command line arguments are also the same except we aren’t passing a path to a static image via the command line. Rather, we’ll grab a reference to our webcam and then process the video. Refer to Step #3 if you need to review the arguments.

Our three models and label encoder are loaded here:

Here we load the face detector, face embedder model, face recognizer model (Linear SVM), and label encoder.

Again, be sure to refer to Step #3 if you are confused about the three models or label encoder.

Let’s initialize our video stream and begin processing frames:
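
A sketch of the stream initialization and the top of the frame-processing loop (the 2-second warm-up is a typical value):

    # initialize the video stream, then allow the camera sensor to
    # warm up
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(2.0)

    # start the FPS throughput estimator
    fps = FPS().start()

    # loop over frames from the video stream
    while True:
        # grab the frame from the threaded video stream
        frame = vs.read()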

Our VideoStream  object is initialized and started on Line 43. We wait for the camera sensor to warm up on Line 44.

We also initialize our frames per second counter (Line 47) and begin looping over frames on Line 50. We grab a frame  from the webcam on Line 52.

From here everything is the same as Step 3. We resize  the frame (Line 57) and then we construct a blob from the frame + detect where the faces are (Lines 61-68).

Now let’s process the detections:

Just as in the previous section, we begin looping over detections and filter out weak ones (Lines 71-77). Then we extract the face ROI as well as ensure its spatial dimensions are sufficiently large for the next steps (Lines 84-89).

Now it’s time to perform OpenCV face recognition:

Here we:

  • Construct the faceBlob (Lines 94 and 95) and calculate the facial embeddings via deep learning (Lines 96 and 97).
  • Recognize the most-likely name of the face while calculating the probability (Lines 100-103).
  • Draw a bounding box around the face along with the person’s name + probability (Lines 107-112).

Our fps  counter is updated on Line 115.

Let’s display the results and clean up:
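
A sketch; the first block sits inside the while loop, the rest runs after we break out of it:

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # stop the timer and display FPS information
    fps.stop()
    print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
    print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

    # do a bit of cleanup
    cv2.destroyAllWindows()
    vs.stop()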

To close out the script, we:

  • Display the annotated frame  (Line 118) and wait for the “q” key to be pressed at which point we break out of the loop (Lines 119-123).
  • Stop our fps  counter and print statistics in the terminal (Lines 126-128).
  • Cleanup by closing windows and releasing pointers (Lines 131 and 132).

To execute our OpenCV face recognition pipeline on a video stream, open up a terminal and execute the following command:
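
The command will look something like this:

    $ python recognize_video.py --detector face_detection_model \
        --embedding-model openface_nn4.small2.v1.t7 \
        --recognizer output/recognizer.pickle \
        --le output/le.pickle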

Figure 7: Face recognition in video with OpenCV.

As you can see, both Trisha’s face and my own are correctly identified! Our OpenCV face recognition pipeline is also obtaining ~16 FPS on my iMac. On my MacBook Pro I was getting a ~14 FPS throughput rate.

Drawbacks, limitations, and how to obtain higher face recognition accuracy

Figure 8: All face recognition systems are error-prone. There will never be a 100% accurate face recognition system.

Inevitably, you’ll run into a situation where OpenCV does not recognize a face correctly.

What do you do in those situations?

And how do you improve your OpenCV face recognition accuracy? In this section, I’ll detail a few of the suggested methods to increase the accuracy of your face recognition pipeline.

You may need more data

Figure 9: Most people aren’t training their OpenCV face recognition models with enough data. (image source)

My first suggestion is likely the most obvious one, but it’s worth sharing.

In my previous tutorial on face recognition, a handful of PyImageSearch readers asked why their face recognition accuracy was low and faces were being misclassified — the conversation went something like this (paraphrased):

Them: Hey Adrian, I am trying to perform face recognition on a dataset of my classmates’ faces, but the accuracy is really low. What can I do to increase face recognition accuracy?

Me: How many face images do you have per person?

Them: Only one or two.

Me: Gather more data.

I get the impression that most readers already know they need more face images when they only have one or two example faces per person, but I suspect they are hoping for me to pull a computer vision technique out of my bag of tips and tricks to solve the problem.

It doesn’t work like that.

If you find yourself with low face recognition accuracy and only have a few example faces per person, gather more data — there are no “computer vision tricks” that will save you from the data gathering process.

Invest in your data and you’ll have a better OpenCV face recognition pipeline. In general, I would recommend a minimum of 10-20 faces per person.

Note: You may be thinking, “But Adrian, you only gathered 6 images per person in today’s post!” Yes, you are right — and I did that to prove a point. The OpenCV face recognition system we discussed here today worked but can always be improved. There are times when smaller datasets will give you your desired results, and there’s nothing wrong with trying a small dataset — but when you don’t achieve your desired accuracy you’ll want to gather more data.

Perform face alignment

Figure 10: Performing face alignment for OpenCV facial recognition can dramatically improve face recognition performance.

The face recognition model OpenCV uses to compute the 128-d face embeddings comes from the OpenFace project.

The OpenFace model will perform better on faces that have been aligned.

Face alignment is the process of:

  1. Identifying the geometric structure of faces in images.
  2. Attempting to obtain a canonical alignment of the face based on translation, rotation, and scale.

As you can see from Figure 10 at the top of this section, I have:

  1. Detected faces in the image and extracted their ROIs (based on the bounding box coordinates).
  2. Applied facial landmark detection to extract the coordinates of the eyes.
  3. Computed the centroid for each respective eye along with the midpoint between the eyes.
  4. And based on these points, applied an affine transform to resize the face to a fixed size and dimension.

If we apply face alignment to every face in our dataset, then in the output coordinate space, all faces should:

  1. Be centered in the image.
  2. Be rotated such that the eyes lie on a horizontal line (i.e., the face is rotated such that the eyes lie along the same y-coordinates).
  3. Be scaled such that the size of the faces is approximately identical.

Applying face alignment to our OpenCV face recognition pipeline was outside the scope of today’s tutorial, but if you would like to further increase your face recognition accuracy using OpenCV and OpenFace, I would recommend you apply face alignment.

Check out my blog post, Face Alignment with OpenCV and Python.

Tune your hyperparameters

My second suggestion is for you to attempt to tune your hyperparameters on whatever machine learning model you are using (i.e., the model trained on top of the extracted face embeddings).

For this tutorial, we used a Linear SVM; however, we did not tune the C  value, which is typically the most important value of an SVM to tune.

The C value is a “strictness” parameter and controls how much you want to avoid misclassifying each data point in the training set.

Larger values of C will be more strict and try harder to classify every input data point correctly, even at the risk of overfitting.

Smaller values of C will be more “soft”, allowing some misclassifications in the training data, but ideally generalizing better to testing data.

It’s interesting to note that according to one of the classification examples in the OpenFace GitHub, they actually recommend not tuning the hyperparameters, as, from their experience, they found that setting C=1 obtains satisfactory face recognition results in most settings.

Still, if your face recognition accuracy is not sufficient, it may be worth the extra effort and computational cost of tuning your hyperparameters via either a grid search or random search.
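
If you want to try it, here is a minimal sketch of a grid search that could drop into train_model.py, reusing the data and labels variables from Step #2:

    # tune the SVM's C value via a cross-validated grid search
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    params = {"C": [0.1, 1.0, 10.0, 100.0, 1000.0]}
    model = GridSearchCV(SVC(kernel="linear", probability=True),
        params, cv=3)
    model.fit(data["embeddings"], labels)
    print("[INFO] best hyperparameters: {}".format(model.best_params_))
    recognizer = model.best_estimator_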

Use dlib’s embedding model (but not its k-NN for face recognition)

In my experience using both OpenCV’s face recognition model along with dlib’s face recognition model, I’ve found that dlib’s face embeddings are more discriminative, especially for smaller datasets.

Furthermore, I’ve found that dlib’s model is less dependent on:

  1. Preprocessing such as face alignment
  2. Using a more powerful machine learning model on top of extracted face embeddings

If you take a look at my original face recognition tutorial, you’ll notice that we utilized a simple k-NN algorithm for face recognition (with a small modification to throw out nearest neighbor votes whose distance was above a threshold).

The k-NN model worked extremely well, but as we know, more powerful machine learning models exist.

To improve accuracy further, you may want to use dlib’s embedding model, and then instead of applying k-NN, follow Step #2 from today’s post and train a more powerful classifier on the face embeddings.

Did you encounter a “USAGE” error running today’s Python face recognition scripts?

Each week I receive emails that (paraphrased) go something like this:

Hi Adrian, I can’t run the code from the blog post.

My error looks like this:

Or this:

I’m using Spyder IDE to run the code. It isn’t running as I encounter a “usage” message in the command box.

There are three separate Python scripts in this tutorial, and furthermore, each of them requires that you (correctly) supply the respective command line arguments.

If you’re new to command line arguments, that’s fine, but you need to read up on how Python, argparse, and command line arguments work before you try to run these scripts!

I’ll be honest with you — face recognition is an advanced technique. Command line arguments are a very beginner/novice concept. Make sure you walk before you run, otherwise you will trip up. Take the time now to educate yourself on how command line arguments work.

Secondly, I always include the exact command you can copy and paste into your terminal or command line and run the script. You might want to modify the command line arguments to accommodate your own image or video data, but essentially I’ve done the work for you. With a knowledge of command line arguments you can update the arguments to point to your own data without having to modify a single line of code.

For the readers that want to use an IDE like Spyder or PyCharm my recommendation is that you learn how to use command line arguments in the command line/terminal first. Program in the IDE, but use the command line to execute your scripts.

I also recommend that you don’t bother trying to configure your IDE for command line arguments until you understand how they work by typing them in first. In fact, you’ll probably learn to love the command line as it is faster than clicking through a GUI menu to input the arguments each time you want to change them. Once you have a good handle on how command line arguments work, you can then configure them separately in your IDE.

From a quick search through my inbox, I see that I’ve answered somewhere between 500 and 1,000 command line argument-related questions. I’d estimate that I’ve answered another 1,000+ such questions replying to comments on the blog.

Don’t let me discourage you from commenting on a post or emailing me for assistance — please do. But if you are new to programming, I urge you to read and try the concepts discussed in my command line arguments blog post as that will be the tutorial I’ll link you to if you need help.

Summary

In today’s blog post we used OpenCV to perform face recognition.

Our OpenCV face recognition pipeline was created using a four-stage process:

  1. Create your dataset of face images
  2. Extract face embeddings for each face in the image (again, using OpenCV)
  3. Train a model on top of the face embeddings
  4. Utilize OpenCV to recognize faces in images and video streams

Since I was married over this past weekend, I used photos of myself and Trisha (my now wife) to keep the tutorial fun and festive.

You can, of course, swap in your own face dataset provided you follow the directory structure of the project detailed above.

If you need help gathering your own face dataset, be sure to refer to this post on building a face recognition dataset.

I hope you enjoyed today’s tutorial on OpenCV face recognition!

To download the source code, models, and example dataset for this post (and be notified when future blog posts are published here on PyImageSearch), just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!

119 Responses to OpenCV Face Recognition

  1. Harald Vaessin September 24, 2018 at 10:53 am #

    My heartfelt congratulations and best wishes for your future together.
    And thank you for your wonderful tutorials!
    HV

  2. Jesudas September 24, 2018 at 10:56 am #

    Congratulations Adrian on your marriage. Wishing you and Trisha the Very Best in Life !

  3. Tosho Futami September 24, 2018 at 11:09 am #

    I am very appreciated for your weekly new code support. Conglaturation your marriage, please enjoy your forepufule future…

    • raj shah September 25, 2018 at 3:01 am #

      hey can u help me to figure out this module (Opencv) ,i m getting an error i know its command line argument can u tell me the configuration parts of ur file.

  4. Ayush September 24, 2018 at 11:22 am #

    Can this be used for detecting and recognising faces in a classroom with many students?

    • David Hoffman September 24, 2018 at 2:23 pm #

      Hi Ayush, potentially it can be used for a classroom. There are several considerations to make:

      1. Due to the camera angle, some students’ faces may be obscured if the camera is positioned at the front of the classroom.
      2. Scaling of faces especially for low resolution cameras (depends on camera placement).
      3. Privacy concerns — especially since students/children are involved.
  5. falahgs September 24, 2018 at 11:38 am #

    congratulations Adrian ..
    i like all you are posts in geat blog
    u really great prof.
    thanks for this post

  6. Nika September 24, 2018 at 12:06 pm #

    Congratulations Adrian and thanks for the tutorial!

  7. mohamed September 24, 2018 at 12:17 pm #

    Congratulations Adrian
    happy Days

  8. Jesus Hdz Soberon September 24, 2018 at 1:28 pm #

    Congratulations Adrian for you and now for your wife. My best wishes in this new stage of your lives.
    Best regards from México.

  9. Gary September 24, 2018 at 2:04 pm #

    Hello Adrian,
    I got married in February this year and it feels very good and right 🙂 Nerds like us need great women on our side. Take good care of them and congratulation.

  10. Cyprian September 24, 2018 at 2:07 pm #

    Congrats on getting married!
    Thank you again for this great tutorial on face recognition!
    Have a nice honeymoon.

  11. Yinon Bloch September 24, 2018 at 3:14 pm #

    Congratulations Adrian,
    I wish you both a happy life together!
    I read your blog from time to time and enjoy it a lot, I gain a lot of knowledge and ideas from your posts. thank you very much!
    Regarding your comments about improving the accuracy of the identity, I would like to share with you that I also play a lot with the various libraries of facial identification.
    I’v tried the code I found in Martin Krasser’s post: http://krasserm.github.io/2018/02/07/deep-face-recognition/
    Which is very similar to what you’ve shown in this post. I would like to know if there are any significant differences between the two.
    After a lot of poking around and testing I also came to conclusion that the dlib library gives the best results (at least for my needs), but without GPU – we get very slow performance.

    I wanted to know if you tried to use the facenet library, which uses a vector of 512D, from my experiments it seems to have the same accuracy as nn4 (more or less), but maybe I’m doing something wrong here.

    I would appreciate a response from your experience ,

    Great appreciation,
    Yinon Bloch

  12. Horelvis September 24, 2018 at 4:23 pm #

    Congratulations Adrian!
    But now you will don’t have more free time! 😉
    Enjoy with your wife for all life!

  13. Hossein September 24, 2018 at 4:39 pm #

    Congratulations
    I wish a green life for you.
    great Thanks

  14. Nico September 24, 2018 at 4:41 pm #

    Hi Adrian,
    first of all congrats.

    Regarding the code, I tend to agree with Yinon about the fact that the version that uses dlib seems to work better. In particular this version sometimes finds inexistent faces.
    What is your opinion ?
    Thanks
    Nico

  15. Naser September 24, 2018 at 5:33 pm #

    Congratulations Adrian and thank you for good tutorial!

  16. Hugues September 24, 2018 at 6:10 pm #

    Very nice postings, and congratulations on your wedding.

  17. Prateek Xaxa September 24, 2018 at 6:50 pm #

    Thanks for the great contents

    Wishing Happy Life Together!

  18. Sinh Huynh September 24, 2018 at 8:27 pm #

    Wishing you and Trisha all the best in your marriage.
    Many thanks for your tutorials, they are really great, easy to understand for beginner like me.

  19. kus September 24, 2018 at 8:28 pm #

    Congratulations!

  20. brett September 24, 2018 at 9:23 pm #

    Congratulations, wish you both the best! Thank-you for this post,ill be attempting it in the next few days, great tutorials always worth a read.

  21. Guanghui Yang September 24, 2018 at 9:25 pm #

    Congratulations Adrian!

  22. Tran Tuan Vu September 24, 2018 at 10:41 pm #

    Hi Adrian,
    I have tried on my big dataset (250 persons with ~30 image/person). But when I run recogization scripts, I got very low accuracy? So I think I should not use Linear-SVM for training on the big dataset.

    • David Hoffman September 25, 2018 at 11:25 am #

      Hi Tran, I believe that you need more training data. Thirty images per class isn’t likely enough.

  23. Keesh September 24, 2018 at 11:52 pm #

    Congrats Adrian and Trisha!
    I hope you have a wonderful Honeymoon and life together.

  24. Emmanuel Girard September 25, 2018 at 12:02 am #

    Félicitations. Nous vous souhaitons du bonheur, de la joie, de l’amour et beaucoup de souvenirs. / Congratulations. We wish you happiness, joy, love and many memories.

  25. Namdev September 25, 2018 at 12:23 am #

    Many congratulations, Adrian and Trisha

  26. andreas September 25, 2018 at 1:06 am #

    Hi Adrian,
    Thank you for your tutorial. Could you please point out where non max suppression is solved in this pipeline?
    Thanks,
    Andreas

    • Abhishek Thanki September 26, 2018 at 1:23 pm #

      Hi Andreas,

      There was no non-maxima suppression applied explicitly in the pipeline. Instead, it’s applied by the deep learning based face detector used (which uses a SSD model).

  27. Waheed September 25, 2018 at 1:08 am #

    Congratulation Adrian. You deserve it! Thanks for all your posts. I really enjoy them

  28. Evgeny September 25, 2018 at 2:31 am #

    Congratulations Adrian! Thanks for your great post. Wish you a happy life together!

  29. Pardis September 25, 2018 at 3:31 am #

    Wishing you both a lifetime of love and happiness. And thank you for this great tutorial.

  30. Chunan September 25, 2018 at 3:53 am #

    Congratulations! Happy wedding.

  31. MD Khan September 25, 2018 at 5:05 am #

    Congratulations Dr!

  32. siavash September 25, 2018 at 6:56 am #

    <3

  33. Srinivasan Ramachandran September 26, 2018 at 8:00 am #

    Hello Adrian,

    Hearty congratulations and best wishes to you and your wife.

    Regards,

    #0K

  34. Devkar September 26, 2018 at 8:42 am #

    Congratulations….

  35. Zak Zebrowski September 26, 2018 at 1:38 pm #

    Congrats!

  36. Murthy Udupa September 26, 2018 at 11:08 pm #

    Congratulations Adrian and Trisha. Wish you a wonderful life ahead.

  37. PFC September 27, 2018 at 8:33 am #

    If I want to add a person’s face model, do I just need to add that person’s face data set to the dataset folder?

    • David Hoffman September 27, 2018 at 8:58 am #

      Hi Peng — you’ll need a folder of face pictures for each person in the dataset directory. Then you’ll need to extract embeddings for the dataset and continue with the next steps.

      • noura November 28, 2018 at 3:01 pm #

        how do the extract embedding ?

  38. wayne September 30, 2018 at 5:16 am #

    Thanks for your course and congrats!

    • Adrian Rosebrock October 8, 2018 at 12:07 pm #

      Thanks Wayne, I’m glad you’re enjoying the course 🙂

  39. Hariprasad October 1, 2018 at 2:13 am #

    Happy Married Life

    • Adrian Rosebrock October 8, 2018 at 10:48 am #

      Thanks Hariprasad!

  40. Cara Manual October 2, 2018 at 3:12 am #

    Thank you, this really helped me …

    • Adrian Rosebrock October 8, 2018 at 10:39 am #

      Thanks Cara, I’m happy the tutorial has helped you 🙂

  41. Jasa Print Kain Jakarta October 2, 2018 at 3:13 am #

    Congratulations Adrian and thanks for the tutorial, this is ver usefull…

    • Adrian Rosebrock October 8, 2018 at 10:38 am #

      Thank you 🙂

  42. Hermy Cruz October 2, 2018 at 10:43 am #

    Hi Adrian! First of all Congratulations!!

    I have a question, how can I run this at startup if it has command line arguments(crontab).
    Thank you in advance!!

    • Adrian Rosebrock October 8, 2018 at 10:33 am #

      I would suggest creating a shell script that calls your Python script. Then call the shell script from the crontab.

  43. Stephen Fischer October 2, 2018 at 5:11 pm #

    Congratulations to you and Trisha! Many of your readers got a chance to meet both of you at PyImageConf, and you make a great couple! Here’s to many happy years ahead!

    One quick suggestion – I had been receiving an error as follows in the sample code:
    [INFO] loading face detector…
    [INFO] loading face recognizer…
    [INFO] starting video stream…
    [INFO] elasped time: 8.33
    [INFO] approx. FPS: 22.09
    FATAL: exception not rethrown
    Aborted (core dumped)

    I’m wondering if this is related to imutils Bug #86? Anyways, I put a sleep command in and it addressed the “waiting producer/stream issue”:
    # do a bit of cleanup
    cv2.destroyAllWindows()
    time.sleep(1.0)
    vs.stop()

    • Adrian Rosebrock October 8, 2018 at 10:32 am #

      Thanks Stephen 🙂 And yes, I believe the error is due to the threading bug.

    • tommy November 4, 2018 at 10:00 pm #

      Dear Stephen,

      How about trying to chage code excution order as below?

      vs.stop()
      time.sleep(0.5)
      cv2.destroyAllWindows()

      It worked for me.

  44. Luis M October 2, 2018 at 5:12 pm #

    Congratulations, Adrian! 😀

    • Adrian Rosebrock October 8, 2018 at 10:32 am #

      Thank you Luis!

  45. Ravindran October 3, 2018 at 1:20 am #

    Congratulations Adrian and Trisha! Happy wedding!

    • Adrian Rosebrock October 8, 2018 at 10:28 am #

      Thanks so much Ravindran! 🙂

  46. Francisco Rodriguez October 3, 2018 at 12:28 pm #

    Hello Adrian, excellent post I want to ask you a question if I follow your course pyimagesearch-gurus or buy the most extensive version of ImageNet Bundle. I could have support and the necessary information to start a project of face-recognition at a distance for example more than 8 meters

    • Adrian Rosebrock October 8, 2018 at 10:22 am #

      Hi Francisco, I always do my best to help readers and certainly prioritize customers. I provide the best support I possibly can but do keep in mind that I expect you to put in the hard work, read the books/courses, and run your own experiments. I’m more than happy to keep you going in the right direction but do keep in mind that I cannot do the hard work for you. Keep up the great work! 🙂

      • Francisco Rodriguez November 18, 2018 at 5:32 pm #

        Thanks Adrian, I know that the effort should be mine, the important thing is to have good bibliography and information, thank you I am very motivated and tis post are of great help especially to developing countries like in which I live

  47. Chintan October 4, 2018 at 1:55 am #

    Congratulations to both of you!!

    I want to use this face recognition method in form of a mobile application. Currently I have used https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#0 article for developing mobile application from tensorflow for face detection.

    Can you suggest me a direction?

    Thanks

  48. Kalicharan October 4, 2018 at 12:01 pm #

    I dont have 30+ pictures for each person, can i use the data augmentation tool to create many pictures of the pictures i have by blur, shifting etc

    • Adrian Rosebrock October 8, 2018 at 10:11 am #

      Yes, but make sure your data augmentation is realistic of how a face would look. For example, don’t use too much shearing or you’ll overly distort the face.

  49. Neleesh October 5, 2018 at 3:01 pm #

    Congratulations Adrian, thank you for the tutorial. I am starting to follow you more regularly. I am amazed with the detail in your blogs. I am just curious how long each of these tutorial takes you to plan and author.

    • Adrian Rosebrock October 8, 2018 at 9:50 am #

      Thanks Neleesh. As far as how long it takes to create each tutorial, it really depends. Some tutorials take less than half a day. Others are larger, on-going projects that can span days to weeks.

  50. Huy Ngo October 6, 2018 at 11:58 am #

    Hi Adrian.
    How to apply this model on my own dataset?
    Thank you in advance.

    • Adrian Rosebrock October 8, 2018 at 9:41 am #

      This tutorial actually covers how to build your own face recognition system on your own dataset. Just refer to the directory structure I provided and insert your own images.

  51. dadiouf October 6, 2018 at 2:51 pm #

    You both make a lovely couple

    • Adrian Rosebrock October 8, 2018 at 9:39 am #

      Thank you 🙂

  52. Q October 6, 2018 at 6:52 pm #

    Adrian,
    Congratulations on your marriage!
    Take some time off for your honeymoon and enjoy the best time of your life!

    • Adrian Rosebrock October 8, 2018 at 9:39 am #

      Thank you so much! 🙂

  53. Rayomond October 8, 2018 at 10:59 pm #

    Hearty Congratulations! Wish you both the very best

    • Adrian Rosebrock October 9, 2018 at 6:05 am #

      Thanks Rayomond 🙂

  54. dauey October 10, 2018 at 4:21 am #

    have you liveness detection for face recognition systems?its necessary for face recognition systems.

    • Adrian Rosebrock October 12, 2018 at 9:17 am #

      I do not have any liveliness detection tutorials but I will try to cover the topic in the future.

  55. Nguyen Anh Tuan October 16, 2018 at 11:34 am #

    Congratulation man

    • Adrian Rosebrock October 20, 2018 at 8:07 am #

      Thank you!

  56. Eric October 17, 2018 at 11:17 pm #

    Hi Adrian, Congratulations on the marriage!

    Thank you for all the interesting posts!

    I wonder if Adrian or anyone else has actually combined the dlib landmarks with the training described in this post? It seems to require additional steps which are not that easy to infer.

    I have successfully created embeddings/encodings from the older posts dlib instructions but when I combine them with this posts training 100% of the faces get recognized as the same face with very high accurace despite my dataset containing several different faces. When I changed up the model I saw that it basically only recognized the first name in the dict that is created and then matches every found face to that name (in one case it even matched a backpack).

    I spotted a difference between the dicts that get pickled. The one from this post has a text: dtype=float32 at the end of every array but the dlib dict does not have this text. Maybe this is a problem cause? In any case I can’t spot anything else I could change. But I also don’t know how to change that. (Another small difference is that this post uses embeddings in its code and the previous one calls them encodings).

    Also, in the text above, shouldn’t it be proba > T?

  57. Varun October 22, 2018 at 9:47 pm #

    Thanks a lot man

    • Adrian Rosebrock October 29, 2018 at 2:15 pm #

      You are welcome, Varun 🙂

  58. Arvand Homer October 24, 2018 at 12:14 pm #

    Hey Adrian, thanks for the tutorial.

    We are trying to run the code off an Nvidia Jetson TX2 with a 2.1 mm fisheye lens camera, but the frame rate of our video stream is very low and there is significant lag. Is there any way to resolve these problems?

    Best wishes.

  59. Praveen October 28, 2018 at 4:52 am #

    hi adrian, will this algo is useful for faceliveliness detection..

    Thanq

    with regards,
    praveen

    • Adrian Rosebrock October 29, 2018 at 1:25 pm #

      No, face recognition and liveliness detection are two separate subjects. You would need a dedicated liveliness detector.

  60. Somo October 30, 2018 at 5:48 am #

    Hi Adrian,

    First of all thanks for the tutorial.
    If I were going to use the dlib’s embedding model, but wanting to change from k-NN to SVM how do I do that.

    Thanks,
    Somo

    • Adrian Rosebrock November 2, 2018 at 8:27 am #

      You would replace use the model from dlib face recognition tutorial instead of the OpenCV face embedder. Just swap out the models and relevant code. Give it a try!

  61. akhil alexander October 31, 2018 at 6:21 am #

    Hi Andrian, your posts are always inspiring.Congratulations and wishing you a Happy married life… I invite both of you to my state, you should visit Kerala at least once in your lifetime https://www.youtube.com/watch?v=gpTMhLWUZCQ

    • Adrian Rosebrock November 2, 2018 at 7:38 am #

      Thank you Akhil, I really appreciate your kind words 🙂

  62. Zong October 31, 2018 at 10:17 pm #

    hi Adrian,thanks for your tutorial!
    I’m trying to replace the resnet caffemodel with squeezenet caffemodel. Simply replace the caffemodel file seems not work. How should I rewrite the code?
    PS: Congratulations on your marriage!
    Thanks again
    Zong

    • Adrian Rosebrock November 2, 2018 at 7:22 am #

      Hey Zong — which SqueezeNet model are you using? Keep in mind that OpenCV doesn’t support all Caffe models.

  63. M O Leong November 4, 2018 at 5:19 am #

    Hi Adrian.

    Having attempted the 1st few sections of your post (recognize.py), surprisingly, when I run patrack_bateman.jpg it appears to recognise the photo as “adrian”. Did you actually add more photos to your dataset so that “patrick bateman” doesn’t get recognised wrongly?

    Yes, I read further down the post that more datasets will eventually lead to much-needed accuracy. But I was just wondering how u got to the part to achieve “patrick bateman’ being ‘unknown’ or unrecognized in your tutorial example. Look forward to your feedback.

    Many thanks!

    • Adrian Rosebrock November 6, 2018 at 1:26 pm #

      That is quite strange. What version of OpenCV, dlib, and scikit-learn are you using?

  64. Harshpal November 4, 2018 at 7:58 am #

    Hi Adrian, Thanks for the informative article on Face Recognition. Loved it!!!

    I have a question on this. What if, I already have pre-trained model for face recognition (say FaceNet) and on top of it I want to train the same model for a few more faces. Is it possible to retrain the same model by updating the weights file.

    Or how can this be done. Please suggest ideas.

    Regards,
    Harshpal

    • Adrian Rosebrock November 6, 2018 at 1:25 pm #

      Yes. What you are referring to is called “fine-tuning” the model and can be used to take a model trained on one dataset and ideally tune the weights to work on another dataset as well.

  65. tommy November 4, 2018 at 10:31 pm #

    Hi, Adrian.
    Always thanks for your wonderful article.

    I have tested your code for a week.
    It was working for small dataset(1~2 people face).
    But when I increased number of people(upto 10), it looked unstable sometims.

    1.In my test, sometimes, face naming was too fluctuated, I mean,
    real name and other name was switched too frequently.
    sometimes it worked a bit stable, but sometimes looked very unstable
    or gave wrong face-name.
    So as you said before, I added more pictures(more than 30 pieces)
    to each person’s directory to increase accuracy.
    After that, face naming seemed to get more stable, but there are
    still fluctuated output or wrong naming output frequenty.
    Is there any method to increase accuracy?

    2. Is there possibility on a relation-formula of between face landmark points to distinguish each face more accurately? ( I tried ti find ,but I still failed.)

    Thanks in advance for your advice.

    • Adrian Rosebrock November 6, 2018 at 1:21 pm #

      1. Once you start getting more and more people in your dataset this method will start to fail. Keep in mind that we’re leveraging a pre-trained network here to compute the 128-d facial embeddings. Try instead fine-tuning the network itself on the people you want to recognize to increase accuracy.

      2. 2D facial landmarks in some cases can be used for face recognition but realistically they aren’t good for face recognition. The models covered in this post will give you better accuracy.

  66. Vijay November 5, 2018 at 11:30 am #

    What happend if any person other than the one in data set entered in to the frame….

    • Adrian Rosebrock November 6, 2018 at 1:14 pm #

      The person would be marked as “unknown”.

  67. Ankita November 13, 2018 at 1:32 pm #

    Hi Adrian,

    Firstly, I would like to Congratulate you on your wedding though it’s pretty late!
    I wish to know do you follow any algorithms, kindly mention, if any?

    • Adrian Rosebrock November 13, 2018 at 4:10 pm #

      I’m not sure what you mean by “follow any algorithms” — could you clarify?

  68. Vijay November 15, 2018 at 6:24 am #

    “Try instead fine-tuning the network itself on the people you want to recognize to increase accuracy”

    Can u plz tell me how to do that ☺️

  69. Ray November 20, 2018 at 2:37 am #

    Hi Adrian,
    Thanks for the info.
    I have 2 questions related to this:

    First, How would I use an RTSP stream instead of the webcam as input. My rtsp source is in the following format:
    rtsp://username:password@IP:port/videoMain

    I can see this stream in vlc on any computer on my network, so i should be able to use that as the source in your script

    Second, instead of viewing the results on my screen, how can I can Output it in a format so I can watch it from another computer. Example, How can I create a stream that I can feed into a vlc server, so I can watch it from another computer on my network.

    Thanks for your guidance

    • Adrian Rosebrock November 20, 2018 at 9:07 am #

      Hey Ray — I don’t have any tutorials on how to display or read an RTSP stream on the Pi but I will be covering it in my upcoming Raspberry Pi + computer vision book.

  70. anu November 22, 2018 at 3:32 am #

    thanks a lot for this page…
    how do we include our own pictures into this to recognize?

    • Adrian Rosebrock November 25, 2018 at 9:31 am #

      Refer to the “Project structure” section of the tutorial where I describe the directory structure for adding your images. If you need help actually building the face dataset itself, refer to this tutorial.

  71. Teresa DiMeola November 22, 2018 at 1:27 pm #

    Hi Adrian,

    You are so kind and generous…you must be an amazing human being. Thank you for this tutorial. I cannot wait to use it (I’m still learning some python basics…so not quite ready yet).

    But I do have a general question for you, which is – well – not off topic entirely, but also something which you may not know of the top of your head, but anyway here goes: Can you guess at or estimate at what camera resolution/focal length one would go from being a “resolved image” to a “low resolution image?” Let’s assume for the sake of the question/answer that it is a cooperative subject.

    Thanks again, for all you do!

    • Adrian Rosebrock November 25, 2018 at 9:22 am #

      Hi Teresa — each camera will have it’s own specific resolution and focal length so I don’t think there is “one true” resolution that will achieve the best results. The results are entirely dependent on the algorithm and the camera itself.

  72. hendrick November 22, 2018 at 5:49 pm #

    hi adrian. i got this error “File “extract_embeddings.py”, line 62, in
    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape'”. This code i ran in ubuntu. But in my Mac everything was fine. I used the same version python and opencv.
    Thank you

    • Adrian Rosebrock November 25, 2018 at 9:19 am #

      It’s not an issue with Python and OpenCV, it’s an issue with your input path of images. The path to your input images does not exist on disk. Double-check your images and paths.

  73. Tim November 24, 2018 at 2:43 am #

    Hi Adrian.
    Great job u have done~
    Here is my question.
    How can I plot the decision boundaries for each class after train_model?

    • Adrian Rosebrock November 25, 2018 at 9:04 am #

      The scikit-learn documentation has an excellent example of plotting the decision boundaries from the SVM.

  74. S M Yu November 25, 2018 at 4:54 am #

    What should I do if the camera recognizes a person who is not being trained, does not appear as ‘unknown’, but appears in the name of another person?

  75. Dorra November 28, 2018 at 12:47 pm #

    Hi Doctor Adrian
    Great job
    I don’t understand this error ” ValueError: unsupported pickle protocol: 3 ” ?

    • Adrian Rosebrock November 30, 2018 at 9:06 am #

      Re-train your face recognition model and serialize it to disk. You are trying to use my pre-trained model and we’re using two different versions of Python, hence the error.

  76. Rico November 29, 2018 at 3:38 am #

    LabelEncoder seems to be reversing the labels. If you try to print knownNames and le.classes_, the results are reversed. So when you call le.classes_[j], incorrect mapping is done. It seems to be causing misidentification on my datasets.

    • Rico December 3, 2018 at 10:21 pm #

      This happens when the list of images are not sorted. After adding sorting of the list of dataset images, it works without problem.

      By the way, linear SVM seems to perform bad with few dataset images per person. Using other classification algorithms such as Naive Bayes are better suited few datasets.

      • Adrian Rosebrock December 4, 2018 at 9:50 am #

        Thank you for sharing your experience, Rico!

Leave a Reply