Raspberry Pi Face Recognition

In last week’s blog post you learned how to perform face recognition with Python, OpenCV, and deep learning.

But as I hinted at in the post, in order to perform face recognition on the Raspberry Pi you first need to consider a few optimizations — otherwise, the face recognition pipeline would fall flat on its face.

Namely, when performing face recognition on the Raspberry Pi you should consider:

  • On which machine you are computing your face recognition embeddings for your training set (i.e., onboard the Raspberry Pi, on a laptop/desktop, on a machine with a GPU)
  • The method you are using for face detection (Haar cascades, HOG + Linear SVM, or CNNs)
  • How you are polling for frames from your camera sensor (threaded vs. non-threaded)

All of these considerations and associated assumptions are critical when performing accurate face recognition on the Raspberry Pi — and I’ll be right here to guide you through the trenches.

To learn more about using the Raspberry Pi for face recognition, just follow along.

Looking for the source code to this post?
Jump right to the downloads section.

This post assumes you have read through last week’s post on face recognition with OpenCV — if you have not read it, go back to the post and read it before proceeding.

In the first part of today’s blog post, we are going to discuss considerations you should think through when computing facial embeddings on your training set of images.

From there we’ll review source code that can be used to perform face recognition on the Raspberry Pi, including a number of different optimizations.

Finally, I’ll provide a demo of using my Raspberry Pi to recognize faces (including my own) in a video stream.

Configuring your Raspberry Pi for face recognition

Let’s configure our Raspberry Pi for today’s blog post.

First, go ahead and install OpenCV if you haven’t done so already. You can follow the guides on my OpenCV install tutorials page for the most up-to-date instructions.

Next, let’s install Davis King’s dlib toolkit software into the same Python virtual environment (provided you are using one) that you installed OpenCV into:
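The install itself is a single pip command (shown here as run inside a virtual environment named cv  — substitute your own environment name):

```shell
$ workon cv        # skip this line if you are not using a virtual environment
$ pip install dlib
```

Note that compiling dlib on the Pi can take an hour or more — that is normal, so let it run.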

If you’re wondering who Davis King is, check out my 2017 interview with Davis!

From there, simply use pip to install Adam Geitgey’s face_recognition module:
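Installing the module is likewise one command:

```shell
$ pip install face_recognition
```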

And don’t forget to install my imutils package of convenience functions:
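And the same for imutils :

```shell
$ pip install imutils
```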

PyImageConf 2018, a PyImageSearch conference

Would you like to receive live, in-person training from myself, Davis King, Adam Geitgey, and others at PyImageSearch’s very own conference in San Francisco, CA?

Both Davis King (creator of dlib) and Adam Geitgey (author of the Machine Learning is Fun! series) will be teaching at PyImageConf 2018 and you don’t want to miss it! You’ll also be able to learn from other prominent computer vision and deep learning industry speakers, including me!

You’ll meet others in the industry that you can learn from and collaborate with. You’ll even be able to socialize with attendees during evening events.

There are only a handful of tickets remaining, and once I’ve sold a total of 200 I won’t have space for you. Don’t delay!

I want to attend PyImageConf 2018! 

Project structure

If you want to perform facial recognition on your Raspberry Pi today, head to the “Downloads” section of this blog post and grab the code. From there, copy the zip to your Raspberry Pi (I use SCP) and let’s begin.

On your Pi, you should unzip the archive, change working directory, and take a look at the project structure just as I have done below:
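Assuming the archive unzipped cleanly, the layout should look roughly like this (a trimmed tree  listing; your dataset image filenames will differ):

```
pi-face-recognition/
├── dataset/
│   ├── adrian/
│   └── ian_malcolm/
├── encode_faces.py
├── encodings.pickle
├── haarcascade_frontalface_default.xml
└── pi_face_recognition.py
```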

Our project has one directory with two sub-directories:

  • dataset/ : This directory should contain sub-directories for each person you would like your facial recognition system to recognize.
    • adrian/ : This sub-directory contains pictures of me. You’ll want to replace it with pictures of yourself 😁.
    • ian_malcolm/ : Pictures of Jurassic Park’s character, Ian Malcolm, are in this folder, but again you’ll likely replace this directory with additional directories of people you’d like to recognize.

From there, we have four files inside of pi-face-recognition/ :

  • encode_faces.py : This file will find faces in our dataset and encode them into 128-d vectors.
  • encodings.pickle : Our face encodings (128-d vectors, one for each face) are stored in this pickle file.
  • haarcascade_frontalface_default.xml : In order to detect and localize faces in frames we rely on OpenCV’s pre-trained Haar cascade file.
  • pi_face_recognition.py : This is our main execution script. We’re going to review it later in this post so you understand the code and what’s going on under the hood. From there feel free to hack it up for your own project purposes.

Now that we’re familiar with the project files and directories, let’s discuss the first step to building a face recognition system for your Raspberry Pi.

Step #1: Gather your faces dataset

Figure 1: A face recognition dataset is necessary for building a face encodings file to use with our Python + OpenCV + Raspberry Pi face recognition method.

Before we can apply face recognition we first need to gather our dataset of example images we want to recognize.

There are a number of ways we can gather such images, including:

  1. Performing face enrollment by using a camera + face detection to gather example faces
  2. Using various APIs (e.g., Google, Facebook, Twitter) to automatically download example faces
  3. Manually collecting the images

This post assumes you already have a dataset of faces gathered, but if you haven’t yet, or are in the process of gathering a faces dataset, make sure you read my blog post on How to create a custom face recognition dataset to help get you started.

For the sake of this blog post, I have gathered images of two people:

Using only this small number of images I’ll be demonstrating how to create an accurate face recognition application capable of being deployed to the Raspberry Pi.

Step #2: Compute your face recognition embeddings

Figure 2: Beginning with capturing input frames from our Raspberry Pi, our workflow consists of detecting faces, computing embeddings, and comparing the vector to the database via a voting method. OpenCV, dlib, and face_recognition are required for this face recognition method.

We will be using a deep neural network to compute a 128-d vector (i.e., a list of 128 floating point values) that will quantify each face in the dataset. We’ve already reviewed both (1) how our deep neural network performs face recognition and (2) the associated source code in last week’s blog post, but as a matter of completeness, we’ll review the code here as well.

Let’s open up encode_faces.py  from the “Downloads” associated with this blog post and review:

First, we need to import required packages. Notably, this script requires imutils , face_recognition , and OpenCV to be installed. Scroll up to the “Configuring your Raspberry Pi for face recognition” section to install the necessary software.

From there, we handle our command line arguments with argparse :

  • --dataset : The path to our dataset (we created a dataset using method #2 of last week’s blog post).
  • --encodings : Our face encodings are written to the file that this argument points to.
  • --detection-method : Before we can encode faces in images we first need to detect them. The two supported values are hog  and cnn  — these are the only flags that will work for --detection-method .

Note: The Raspberry Pi is not capable of running the CNN detection method. If you want to run the CNN detection method, you should use a more capable computer, ideally one with a GPU if you’re working with a large dataset. Otherwise, use the hog  face detection method.
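A minimal, runnable sketch of this argument parsing (the long flag names come from the post; the short flags and help strings are my assumptions):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--dataset", required=True,
    help="path to input directory of faces + images")
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
ap.add_argument("-d", "--detection-method", type=str, default="hog",
    help="face detection model to use: either `hog` or `cnn`")

# parse_args() would normally read sys.argv; a sample list is passed here
args = vars(ap.parse_args(["--dataset", "dataset",
    "--encodings", "encodings.pickle"]))
```

Since --detection-method  defaults to hog , omitting the flag is already the Pi-safe choice.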

Now that we’ve defined our arguments, let’s grab the paths to the image files in our dataset (as well as perform two initializations):

From there we’ll proceed to loop over each face in the dataset:

Inside of the loop, we:

  • Extract the person’s name  from the path (Line 32).
  • Load and convert the image  to rgb  (Lines 36 and 37).
  • Localize faces in the image (Lines 41 and 42).
  • Compute face embeddings and add them to knownEncodings  along with their name  added to a corresponding list element in knownNames  (Lines 45-52).
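The name-extraction step relies purely on the directory layout — the second-to-last path component is the person’s name. Here it is in isolation (the sample path is hypothetical):

```python
import os

# a path of the form dataset/<name>/<image>, built portably for illustration
imagePath = os.path.sep.join(["dataset", "adrian", "00001.jpg"])

# the parent directory of the image is the person's name
name = imagePath.split(os.path.sep)[-2]
print(name)  # prints "adrian"
```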

Let’s export the facial encodings to disk so they can be used in our facial recognition script:

Line 56 constructs a dictionary with two keys — "encodings"  and "names" . The values associated with the keys contain the encodings and names themselves.

The data  dictionary is then written to disk on Lines 57-59.
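The serialization itself is plain pickle . Here is the same idea in miniature, using a stand-in 128-d vector instead of a real embedding:

```python
import pickle

knownEncodings = [[0.0] * 128]   # one stand-in 128-d embedding
knownNames = ["adrian"]          # one name per encoding

# same two-key structure as the script's data dictionary
data = {"encodings": knownEncodings, "names": knownNames}
blob = pickle.dumps(data)        # these bytes are what get written to disk

# pi_face_recognition.py later loads the file back the same way
restored = pickle.loads(blob)
```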

To create our facial embeddings open up a terminal and execute the following command:
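The invocation looks like this (flags as defined in the argparse section; cnn  assumes you are encoding on a capable machine — on the Pi, use hog  instead):

```shell
$ python encode_faces.py --dataset dataset --encodings encodings.pickle \
    --detection-method cnn
```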

After running the script, you’ll have a pickle file at your disposal. Mine is named encodings.pickle  — this file contains the 128-d face embeddings for each face in our dataset.

Wait! Are you running this script on a Raspberry Pi?

No problem, just use the --detection-method hog  command line argument. The --detection-method cnn  will not work on a Raspberry Pi, but certainly can be used if you’re encoding your faces with a capable machine. If you aren’t familiar with command line arguments, just be sure to give this post a quick read and you’ll be a pro in no time!

Step #3: Recognize faces in video streams on your Raspberry Pi

Figure 3: Face recognition on the Raspberry Pi using OpenCV and Python.

Our pi_face_recognition.py  script is very similar to last week’s  recognize_faces_video.py  script with one notable change. In this script we will use OpenCV’s Haar cascade to detect and localize the face. From there, we’ll continue on with the same method to actually recognize the face.

Without further ado, let’s get to coding pi_face_recognition.py :

First, let’s import packages and parse command line arguments. We’re importing two modules ( VideoStream  and FPS ) from imutils  as well as imutils  itself. We also import face_recognition  and cv2  (OpenCV). The rest of the modules listed are part of your Python installation. Refer to “Configuring your Raspberry Pi for face recognition” to install the software.

We then parse two command line arguments:

  • --cascade : The path to OpenCV’s Haar cascade (included in the source code download for this post).
  • --encodings : The path to our serialized database of facial encodings. We just built encodings in the previous section.

From there, let’s instantiate several objects before we begin looping over frames from our camera:

In this block we:

  • Load the facial encodings data  (Line 22).
  • Instantiate our face detector  using the Haar cascade method (Line 23).
  • Initialize our VideoStream — we’re going to use a USB camera, but if you want to use a PiCamera with your Pi, just comment Line 27 and uncomment Line 28.
  • Wait for the camera to warm up (Line 29).
  • Start our frames per second, fps , counter (Line 32).

From there, let’s begin capturing frames from the camera and recognizing faces:

We proceed to grab a frame  and preprocess it. The preprocessing steps include resizing followed by converting to grayscale and rgb  (Lines 38-44).

In the words of Ian Malcolm:

Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.

Well, he was referring to growing dinosaurs. As far as face recognition goes, we can and we should detect and recognize faces with our Raspberry Pi. We’ve just got to be careful not to overload the Pi’s limited memory with a complex deep learning method. Therefore, we’re going to use a slightly dated but very prominent approach to face detection — Haar cascades!

Haar cascades, also known as the Viola-Jones detector, take their name from Viola and Jones’ 2001 paper.

The highly cited paper proposed a method for detecting objects in images at multiple scales in real time. For 2001 it was a huge breakthrough, and Haar cascades are still well known today.

We’re going to make use of OpenCV’s trained face Haar cascade which may require a little bit of parameter tuning (as compared to a deep learning method for face detection).

Parameters to the detectMultiScale  method include:

  • gray : A grayscale image.
  • scaleFactor : Parameter specifying how much the image size is reduced at each image scale.
  • minNeighbors : Parameter specifying how many neighbors each candidate rectangle should have to retain it.
  • minSize : Minimum possible object (face) size. Objects smaller than that are ignored.

For more information on these parameters and how to tune them, be sure to refer to my book, Practical Python and OpenCV as well as the PyImageSearch Gurus course.

The result of our face detection is rects , a list of face bounding box rectangles which correspond to the face locations in the frame (Lines 47 and 48). We convert and reorder the coordinates of this list on Line 53.

We then compute the 128-d  encodings  for each face on Line 56, thus quantifying the face.

Now let’s loop over the face encodings and check for matches:

The purpose of the code block above is to identify faces. Here we:

  1. Check for matches  (Lines 63 and 64).
  2. If matches are found we’ll use a voting system to determine whose face it most likely is (Lines 68-87). This method works by checking which person in the dataset has the most matches (in the event of a tie, the first entry in the dictionary is selected).
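The voting itself needs nothing beyond plain Python. A sketch under stated assumptions: matches  stands in for the list of booleans returned by face_recognition.compare_faces , and names_db  for data["names"] , one name per known encoding:

```python
# hypothetical match results for four known encodings
matches = [True, False, True, True]
names_db = ["adrian", "ian_malcolm", "adrian", "adrian"]

# tally one vote per matched encoding, keyed by person name
counts = {}
for (i, matched) in enumerate(matches):
    if matched:
        voted = names_db[i]
        counts[voted] = counts.get(voted, 0) + 1

# max() scans the dict in insertion order, so a tie goes to the
# first name entered -- the same tie-breaking behavior noted above
name = max(counts, key=counts.get) if counts else "Unknown"
print(name)  # prints "adrian"
```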

From there, we simply draw rectangles surrounding each face along with the predicted name of the person:

After drawing the boxes and text, we display the image and check if the quit (“q”) key is pressed. We also update our fps  counter.

And lastly, let’s clean up and write performance diagnostics to the terminal:

Face recognition results

Be sure to use the “Downloads” section to grab the source code and example dataset for this blog post.

From there, open up your Raspberry Pi terminal and execute the following command:
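The command looks like this (flag names as defined in the script’s argparse section):

```shell
$ python pi_face_recognition.py --cascade haarcascade_frontalface_default.xml \
    --encodings encodings.pickle
```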

I’ve included a demo video, along with additional commentary below, so be sure to take a look:

Our face recognition pipeline is running at approximately 1-2 FPS. The vast majority of the computation is happening when a face is being recognized, not when it is being detected. Furthermore, the more faces in the dataset, the more comparisons are made for the voting process, resulting in slower facial recognition.

Therefore, you should consider computing the full face recognition (i.e., extracting the 128-d facial embedding) only once every N frames (where N is a user-defined variable) and then applying a simple tracking algorithm (such as centroid tracking) to track the detected faces in between. Such a process will enable you to reach 8-10 FPS on the Raspberry Pi for face recognition.
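The skip-frame scheduling is easy to sketch; N = 30  below is an arbitrary choice, and the tracker update is left as a comment:

```python
N = 30   # run the expensive detect + embed step once every N frames

full_recognition_frames = []
for frame_idx in range(90):
    if frame_idx % N == 0:
        # detect faces + compute 128-d embeddings here (expensive)
        full_recognition_frames.append(frame_idx)
    else:
        # update a cheap tracker (e.g., centroid tracking) instead
        pass

print(full_recognition_frames)  # prints [0, 30, 60]
```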

We will be covering object tracking algorithms, including centroid tracking, in a future blog post.


Summary

In today’s blog post we learned how to perform face recognition using the Raspberry Pi, OpenCV, and deep learning.

Using this method we can obtain highly accurate face recognition, but unfortunately we could not obtain more than 1-2 FPS.

Realistically, there isn’t a whole lot we can do about speeding up the algorithm — the Raspberry Pi, while powerful for such a small device, is naturally limited in terms of computation power and memory (especially without a GPU).

If you would like to speed up face recognition on the Raspberry Pi, I would suggest that you:

  1. Take a look at the PyImageSearch Gurus course where we use algorithms such as Eigenfaces and LBPs to obtain faster frame rates of ~13 FPS.
  2. Train your own, shallower deep learning network for facial embedding. The downside here is that training your own facial embedding network is more of an advanced deep learning technique, to say the least. If you’re interested in learning the fundamentals of deep learning applied to computer vision tasks, be sure to refer to my book, Deep Learning for Computer Vision with Python.

I hope you enjoyed today’s post on face recognition!

To be notified when future blog posts are published here on PyImageSearch, just enter your email address in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


141 Responses to Raspberry Pi Face Recognition

  1. Zubair Ahmed June 25, 2018 at 10:46 am #

    Loved to hear your voice for the first time and your accent 🙂

    Before you said it while going through the post I was also thinking what would it be like to run this on Intel Movidius NCS, would love to see a post on it in the future


    • Adrian Rosebrock June 25, 2018 at 1:39 pm #

      I’m not sure if you can call a Maryland/Baltimore accent a true “accent” but people do pick up on it. I’ve actually started taking speech therapy lessons to help not speak the way I do 😉

      • Zubair June 26, 2018 at 12:31 am #

        You’re kidding me, are you serious right now?

        I think this is a fine accent and you don’t need to change it, what would you sound like after this therapy or rather who would you sound like?

        • Adrian Rosebrock June 26, 2018 at 8:10 am #

          Hah! Yes, I am serious. It’s a long, boring story but basically I talk with a low register of my voice, common for Marylanders. It’s sometimes called “vocal fry”. Just fixing that, that’s all 🙂

          • Zubair Ahmed July 2, 2018 at 8:27 am #

            Oh wow I googled ‘vocal fry’ right now, sounds like something you should definitely do if you have this, I’m wiser now.

            Its interesting to know that having a deeper voice is correlated with making more money (not bad) and attracting opposite gender (think you’re set over here, hello T 🙂

            Good luck

          • Adrian Rosebrock July 3, 2018 at 7:26 am #

            Googling for vocal fry can lead you to a lot of really, really bad cases of what it is. Mine is nowhere near as bad — I just talk in a low voice 😉

          • Zubair Ahmed July 5, 2018 at 8:52 am #

            Happy to hear that its not that bad

      • claudio September 26, 2018 at 11:44 am #

        hello, is it possible have youre email, i have some questions for you

    • Vijay June 26, 2018 at 5:57 am #

      me too looking in this direction. Would be good idea to try this in small “toy” experiments at home.

  2. Francisco Rodriguez June 25, 2018 at 1:07 pm #

    Hello Adrian Rosebrock, I want to congratulate you for all your contribution in this field, I have a question and that is that I have mounted the topic of facial recognition, but the same program that I run on my laptop recognizes a distance of up to 5 meters but in the Raspberry device does not do it at 1 max and sometimes at 2 meters away, is there any way to overcome this problem?

    • Adrian Rosebrock June 25, 2018 at 1:37 pm #

      That sounds like a difference in your camera sensors — your Raspberry Pi camera is not good enough to detect the faces from your distance. You can either:

      1. Use a better camera sensor
      2. Upsample the image prior to applying face detection — the problem here will be speed. The more data, the longer it will take to process.

  3. Shan June 25, 2018 at 1:19 pm #

    Thanks for this tutorial Adrian. I was somewhere waiting to see how Adrian would run Deep Learning on SMB’s like RBP.

    Very informative post and I learned a lot.

    Next I will keep my eyes open for Centroid Tracking that interests me more than anything.


    • Adrian Rosebrock June 25, 2018 at 1:36 pm #

      Thanks Shan, I’m glad you enjoyed the post.

  4. Mansoor June 25, 2018 at 3:23 pm #

    Great tutorial Adrian!!

    Can Intel Movidius NCS improve the FPS? and by how much?

    Thank you.

    • Adrian Rosebrock June 26, 2018 at 8:20 am #

      This model (dlib) cannot be directly used by the Movidius NCS so a comparison cannot really be done. Some work has been done with OpenFace and FaceNet to run on the NCS, such as this repo but I haven’t been able to run it on the NCS.

  5. Damir June 25, 2018 at 3:28 pm #

    Hi Adrian,

    Love your work, I’ve been learning about neural networks and machine learning in the last couple of months and your blog has been of HUGE help for me, so wanted to thank you for that 🙂

    Regarding this topic, have you considered converting some Tensorflow model for face recognition, such as those provided with facenet by David Sandberg, to Movidius graph in order to increase FPS for face recognition on RPi platform?

    • Adrian Rosebrock June 26, 2018 at 8:21 am #

      See my reply to Mansoor.

  6. Gus June 25, 2018 at 6:29 pm #

    Hi Adrian! I recently discovered your site and I love your tutorials. I have a question about implementing the face detection technique you described in your post “Face detection with OpenCV and deep learning” with the face recognition technique described here on a RPI3. The caffe model from the previous post achieves between 1 and 0.25 fps on my PI (running a few other real-time things). I’ve yet to implement the face recognition technique described in this post but it sounds like this method will slow my face detection/recognition pipeline down to about 0.1 fps or worse. I’m really impressed by the accuracy of the caffe model vs the haar cascades so I’d like to continue using them if possible. Do you have any suggestions for using these two models together on a RPI? I don’t expect to achieve anywhere near real-time performance but a frame rate of ~0.5fps would be nice if possible.

    • Adrian Rosebrock June 26, 2018 at 8:24 am #

      The Pi just isn’t fast enough to run both the Caffe face detector along with dlib’s facial embedding network. There aren’t really any “tricks” here, unfortunately. You’ll likely get less than 1 FPS if you try to combine both of them on the Pi. There is some work being done with the Movidius NCS (see other comments on this post) to help speed up the pipeline but all the pieces aren’t quite there yet.

  7. Xue Wen June 25, 2018 at 7:29 pm #

    Thank you for the wonderful post! Always wait for your post to learn new things. Are you planning to write a blog about running face recognition on Intel Movidius NCS?

    • Adrian Rosebrock June 26, 2018 at 8:19 am #

      I’m considering it but I do not have any definite plans yet.

  8. naitik June 25, 2018 at 10:42 pm #

    Thanks for creating this level of informative posts which anyone can learn, This post is also very informative and useful too..
    Can i ask you for some more updated posts on OCR from image let’s say my driving license with current advancements in the field will be really helpful for many.

  9. Ian Carr-de Avelon June 26, 2018 at 12:14 am #

    Dear Adrian,
    In your post on face recognition on the Raspberry Pi you say:
    “is naturally limited in terms of computation power and memory (especially without a GPU)”

    I can’t imagine that you are unaware that different information is out there:
    ” and on-chip graphics processing unit (GPU).”

    apparently the most openly documented GPU:

    and other video hardware they will uncripple for a price:

    What are you saying? Is this all “fake news”? or the Pi’s GPU is some kind of joke you shouldn’t really call GPU? or you just mean it’s not supported by your favourite software?


    • Adrian Rosebrock June 26, 2018 at 8:18 am #

      Indeed, the Pi does have a GPU. The problem is pushing the computation to the GPU using existing libraries — it’s not an easy task. Secondly, I would suggest you read through Pete Warden’s post again. Notice how the inference on a single image took 3.3 seconds (even while using the GPU).

      The Raspberry Pi GPU is not a “joke” but when people think of GPUs they are normally thinking of more powerful ones, such as NVIDIA’s line. Keep in mind that the Raspberry Pi, no matter what, is still limited by its power draw and processing power. It’s not a powerful GPU.

      Furthermore, while OpenCL is making things easier, we’ve still got a long way to go.

  10. Anthony The Koala June 26, 2018 at 2:51 am #

    Dear Dr Adrian,
    Thank you for this tutorial. My particular question is about increasing the frame rate. You informed us about using eigenfaces and local binary patterns (LPB) as a method of increasing the processing rate.

    You also have a tutorial at https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/ which talks about faster video file FPS with cv2.VideoCapture and OpenCV, and another tutorial at https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/.

    Would these tutorials help with speeding up the processing?

    Thank you,
    Anthony of Sydney

    • Adrian Rosebrock June 26, 2018 at 8:14 am #

      Both do. It just depends if you’re using a USB camera or the Raspberry Pi camera module. I actually implemented the VideoStream class in this blog post to combine the two blog posts you are referring to into an easy to use class. The code used in this post is already taking advantage of threaded video stream.

  11. Amit Roy June 26, 2018 at 5:00 am #

    Adrain, check below blog.


    They claimed to achieve 18FPS on Pi-Zero-W with ResNet18 trained on CIFAR-10 with their technology. And they claim that they were your students 🙂

    • Adrian Rosebrock June 26, 2018 at 8:11 am #

      This is awesome, thanks so much for sharing Amit! 😀

  12. Adrian Rosebrock June 26, 2018 at 8:18 am #

    I tested with two cameras:

    1. The Raspberry Pi camera module
    2. A Logitech C920 USB camera

    • priyanka August 16, 2018 at 1:15 pm #

      i only have Raspberry pi camera module but it doesn’t work with that and it shows error no module named ‘PiCamera’

  13. Lotfi June 27, 2018 at 5:03 am #

    Hello Adrian,

    Firstly Thanks for this tutorial Adrian.

    i have a RaspberryPi and i want do the same thing that you do, but instead o do detection and the recongition in the raspberryPi , i want to stream the camera feed to the cloud and do all the proccesing their because raspberryPi is very slow.

    could you suggest me a way to do that, especially the stream step.


    • Adrian Rosebrock June 27, 2018 at 2:48 pm #

      Thanks for reaching out. I don’t have any tutorials on taking a Raspberry Pi stream and piping it to the cloud for processing, but if I do cover it in the future, I’ll certainly let you know.

      However, I will say that if your goal is true real-time processing that this likely isn’t a good idea. The I/O latency introduced by network overhead will be slower than just processing the frame locally.

  14. Daniel Lopez June 28, 2018 at 6:20 am #

    Hello Adrian,
    First of all thanks for this tutorial.
    I’m having problems when trying to install the dlib libraries on my Raspberry Pi 3 Model B.
    I’m using your Raspbian.img on 32GB SD card, updated and upgraded the system (as suggested in some post) and using this command to get into the Python3 + OpenCV environment:
    source start_py3cv3.sh

    Once I got the py3cv3 shell I have tried: pip install dlib
    and the libraries downloaded fine but the installation procedure never finish ( it was running for almost one hour) and the command cc1plus is using almost the 100% of the CPU.

    Any help will be appreciate.

    • Daniel Lopez June 28, 2018 at 7:04 am #

      Hi again,

      please discard this post, finally the library was installed (it took almost two hours to complete)


      • Adrian Rosebrock June 28, 2018 at 7:55 am #

        Indeed, it can take awhile for dlib and the face_recognition libraries to compile and install. Congrats on configuring your Pi for face recognition, Daniel!

  15. Julian June 28, 2018 at 6:32 pm #

    Hello Adrian,
    i really appreciate your work !
    But i have a problem right now. If i want to intall the dlib toolkit, the installation stucks at “Running setup.py bdist_wheel for dlib…” this also happens if i try to install the face_recognition module.
    I tried to install dlib by your guide :https://www.pyimagesearch.com/2018/01/22/install-dlib-easy-complete-guide/ but if i want to check at the end if its installed it doenst show up in the terminal. I dont know why it is not working. Any idea?

    • Adrian Rosebrock June 29, 2018 at 5:33 am #

      It’s probably not “stuck” — it’s more likely compiling and installing. Check your CPU usage and let the Pi sit overnight just to make sure.

  16. Lucian June 29, 2018 at 11:11 am #

    Hi Adrian

    Will you ever make a tutorial for object detection based on HOG/SVM which not includes face detection ?

    I am asking because, using Haar cascades, this task seems to be “too simple” compared to detecting, for example, an apple / a car / a pen.


    • Adrian Rosebrock July 1, 2018 at 6:38 am #

      Hey Lucian — I actually cover HOG + Linear SVM detectors for non-face detection inside the PyImageSearch Gurus course. One of the examples in the course is training a car detector with HOG + Linear SVM.

  17. pursotam niraula July 4, 2018 at 2:21 am #

    cant we detect particular object in the similar way??

    • Adrian Rosebrock July 5, 2018 at 6:36 am #

      You would need a model trained to recognize an object. If you’re new to object detection give this post a read.

  18. Michael July 4, 2018 at 6:45 am #

    Hi Adrian. First of all, I’m sure that you haven’t heard it before :-): you articles rocks. Very informative and interesting, but also pedagogical.

    I’m building an architecture of different classifications of live video from multiple Raspberry Pi’s (Zero’s preferred) where I need to classify:
    1. different objects (people, cars, animals)
    2. states in specific locations in in the image (door open/door closed
    3. face detection/recognition

    I lean towards 3 different models, but would like to hear your take on this architecture?

    I’m satisfied with 1-2 FPS, so with the architecture of 3 models in mind (3 * 1-2 FPS = 3-6 FPS), I believe the Pi will come to short. I’m therefore thinking of a low powered centralized unit that handles image processing from 3-4 livestreams (3-4 Pi’s * 3-6 FPS = 9-24 FPS)
    What low powered unit do you recommend to handle this processing or do you recommend another overall architecture?

    • Adrian Rosebrock July 5, 2018 at 6:31 am #

      The Pi Zero is far too underpowered — I would immediately exclude it unless you wanted to play around with something like a Movidius NCS or Google’s AIY kit, but then you need to be concerned with power consumption as I assume you are. You could have a centralized system for processing all frames but keep in mind network overhead — while the central machine is technically faster you also need to account for the time it takes for the frame to be transmitted and the results returned. You might want to run some experiments to determine if that is viable. Otherwise, you might be able to replace the entire Pi architecture with a Jetson TX1 or TX2.

      • Michael July 11, 2018 at 1:53 am #

        Hi Adrian,
        Thank you for the quick answer. Yeah, need to do some testing with network latency.

        Have a wonderful summer

  19. Chris July 6, 2018 at 5:26 pm #

    Hi David,

    Which generation Raspberry Pie did you use for this case?

    • Chris July 6, 2018 at 5:29 pm #

      Also, will the 1st generation Raspberry Pie work for this case, if performance is not a concern at this moment? It is said to be an ARM11 running at 700MHz

      • Adrian Rosebrock July 10, 2018 at 8:49 am #

        I used a Pi 3B for this example. I would not use a Pi 2 or earlier.

  20. Patrik July 6, 2018 at 9:41 pm #

    Hi Adrian!
    Is it possible that the face recognition omits photographs?


    • Adrian Rosebrock July 10, 2018 at 8:48 am #

      Hey Patrik — could you be a bit more specific in what you mean by saying omitting a photograph?

  21. Paul Christian July 9, 2018 at 11:14 am #

    Adrian, thanks for your efforts in developing this demo. Can you please tell me the OS version on the RPi that you used? I have been having a difficult time just getting the Python packages installed! Also, did you develop and test the detector on a PC or Mac then transfer to the RPi? If not, what editor did you use on the RPi? The default RPi editor is unable to find the required Python libraries, though the Python 3 command line can identify them.

    Thanks for the help

    • Adrian Rosebrock July 10, 2018 at 8:23 am #

      I used Raspbian Stretch for the example. I normally use either Sublime Text or PyCharm with the SFTP plugin to code on my Mac but the code itself is actually stored on the Pi. Sublime Text will run on the Pi though, that’s another good option.

      If you are having trouble getting your Pi configured make sure you take a look at my Raspbian .img file included in the Quickstart Bundle and Hardcopy Bundle of my book, Practical Python and OpenCV. The .img file comes with OpenCV, Python, and the face recognition modules pre-installed. Just flash the .img file to your Pi, boot, and you’re good to go. It will save you a lot of time and hassle.

  22. Antony Smith July 11, 2018 at 6:49 am #

    Hey Adrian, will try a couple ideas on this one but it seems I’m the only one to get the:

    ValueError: unsupported pickle protocol: 3

    when it comes to line 22: data = pickle.loads(open(args["encodings"], "rb").read())?

    Any idea what could be causing this? I get the same error if I just run the recognition on a single still image.
    Same ‘pickle’ error, though no error on importing any of the libraries.

    • Adrian Rosebrock July 13, 2018 at 5:17 am #

      Which version of Python are you using? I would suggest you re-run the script to extract the facial embeddings (which generates the pickle file). Then try to execute the facial recognition scripts.

  23. Khaw Oat July 12, 2018 at 1:21 pm #

    Is this deep learning?

    • Adrian Rosebrock July 13, 2018 at 5:01 am #

      It’s using deep learning under the hood. See this post for more details.

      • Khaw Oat July 13, 2018 at 6:40 am #

        I don’t understand this word.

        • Adrian Rosebrock July 13, 2018 at 7:30 am #

          Hood as in “under the hood of a car”. The blog post I linked you to will show you how deep learning object detection works similar to how if I opened the hood of a car you would see how the engine works.

          • Khaw Oat July 13, 2018 at 10:37 am #

            Thank You.
            I’m working on a deep learning project.

  24. Vincent Kok July 15, 2018 at 10:04 am #

    Hi Adrian,

    Very cool tutorial! I am doing some research on how a small or large database would affect the performance of the face recognition. Is there any way I could measure the time from when the input image is given to when it is recognized as a person ID? I would like to try a database with 100 people vs. 50 people to see if there is a speed difference.
    Hope you could help me on this.


    • Adrian Rosebrock July 17, 2018 at 7:26 am #

      All you would need is a simple call to the “time” function in Python:

      From there you can perform your evaluation.
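      A minimal sketch of that timing pattern (the recognition step itself is left as a placeholder):

```python
import time

# time a single pass of whatever step you want to benchmark
start = time.time()
# ... run face detection + encoding + matching for one input image here ...
elapsed = time.time() - start

print("[INFO] recognition took {:.4f} seconds".format(elapsed))
```

      Averaging `elapsed` over many inputs gives a fairer comparison between the 50-person and 100-person databases than a single run.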

  25. Kasidet Pea July 16, 2018 at 9:44 pm #

    Hi Adrian! Thanks for this tutorial. Would you recommend a camera I could use to do face recognition with the Raspberry Pi?

  26. Sahil July 17, 2018 at 4:21 am #

    Instead of only the name, is it possible to make a user interface that can display all of a person’s information?

    • Adrian Rosebrock July 17, 2018 at 7:08 am #

      Sure, that’s totally possible. You’ll want to look into dedicated Python GUI libraries such as Tkinter, Qt, and Kivy.

  27. Pegah July 22, 2018 at 9:27 am #

    Hi Adrian,

    Very cool tutorial, but I’m trying to run the code on a Raspberry Pi, and every time I run the code, after about 1 minute I get a segmentation fault.

    do you have any suggestion for me ?

    • Adrian Rosebrock July 25, 2018 at 8:21 am #

      Which script is generating the segmentation fault?

      • Sarah October 6, 2018 at 12:23 pm #

        I also got a segmentation fault when running encode_faces.py

  28. Kai July 23, 2018 at 4:18 am #

    Hi, adrian

    when I want to run python encode_faces.py --dataset dataset --encodings encodings.pickle --detection-method hog

    I get an error saying: ImportError: No module named face_recognition.
    Does the face_recognition module have to be installed in the environment in order to run?

    PS: I have installed the face_recognition module, but not in the environment.

    • Adrian Rosebrock July 25, 2018 at 8:17 am #

      Are you using a Python virtual environment when you execute the script? If so, you need to install face_recognition into the Python virtual environment as well. Keep in mind that Python virtual environments are independent of your system install.

  29. Jenson July 24, 2018 at 6:05 am #

    Could anyone help me with a possible fix for this, please?
    [INFO] loading encodings + face detector…
    [INFO] starting video stream…
    Traceback (most recent call last):
    File "pi_face_recognition.py", line 42, in
    frame = imutils.resize(frame, width=500)
    File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]
    AttributeError: 'NoneType' object has no attribute 'shape'

    • Adrian Rosebrock July 25, 2018 at 8:04 am #

      OpenCV is unable to access your Raspberry Pi camera module or your USB webcam. Which are you using? USB camera or Raspberry Pi camera module? Keep in mind that depending on which one you are using you’ll need to update either Line 27 or Line 28.

  30. Andrew August 2, 2018 at 1:39 pm #

    Hi Adrian! Thanks for this guide!

    The face_recognition.face_encodings() method is causing a segmentation fault in the “encode_faces.py” file on my Raspberry Pi 3B with a fresh install of Raspbian Stretch. Any idea on how to fix this?


    • Adrian Rosebrock August 7, 2018 at 7:44 am #

      Hey Andrew — it sounds like there may be a problem with your dlib install but it’s hard to pinpoint what the exact error is. I would start by posting the problem on the official face_recognition GitHub page.

    • Milenko August 23, 2018 at 5:05 am #

      Hi Andrew,

      I have the same problem. Did you manage to fix it?


  31. Anuj August 6, 2018 at 2:09 am #

    Hi Adrian,

    I have a dataset containing 6 faces each of 3 people. I ran this code and it works fine when detecting my face and my friend’s face. It has trouble with the third person’s face, detecting it as my face. Does this algorithm work only for binary classification?

    • Adrian Rosebrock August 7, 2018 at 6:46 am #

      This algorithm can work for multi-person classification. Keep in mind that we are using a simple k-NN classifier here. For improved accuracy try taking the embeddings and training a non-linear model on them.

  32. Vamshi August 7, 2018 at 12:35 pm #

    Hi Adrian, thanks for the dlib installation process. I installed dlib successfully but can’t install face_recognition; it shows a memory error after downloading 99%. Please help me.

    • Adrian Rosebrock August 9, 2018 at 3:05 pm #

      Hey Vamshi, I’m not sure I understand the error. Has the download stalled or are you actually getting an error message?

    • pedroprates September 23, 2018 at 10:45 pm #

      The installation is probably too big for your available pip cache. Try running pip --no-cache-dir install face_recognition to avoid this issue.

  33. arulraj August 8, 2018 at 5:06 am #

    Hi Adrian,

    Instead of video streaming, can I give an image directly to identify the face? If yes, what function should I use? Please assist.

    frame =

    • Adrian Rosebrock August 9, 2018 at 2:56 pm #

      You should follow this post instead.

  34. Ahmed August 9, 2018 at 6:16 am #

    Hi Adrian,

    I am trying to save the data from previous training so I can re-use it later on, without having to retrain on that person, then train on another person and still be able to recognize the first person.

    Any ideas?

  35. Vamshi August 9, 2018 at 7:34 am #

    Hi Adrian, when I show any face to the camera it is showing segment.. Until then the video stream was working perfectly. When I show any face the window closes, showing a segmentation fault. Please help me.

    • Vamshi August 9, 2018 at 7:35 am #

      I mean segmentation fault

      • Adrian Rosebrock August 9, 2018 at 2:41 pm #

        I would insert “print” statements to determine which line is causing the segfault. Off the top of my head it’s likely a problem with your dlib install.

  36. Ankit Kumar Singh August 11, 2018 at 1:04 pm #

    Nice Tutorial Adrian!!

    I would like to know whether this method will detect and recognize faces when we don’t look straight into the camera i.e. the face is tilted by at least 45 degrees in either direction?


    • Adrian Rosebrock August 15, 2018 at 9:08 am #

      Hey Ankit — have you tried it with your own data yet? Be sure to give it a try first. Secondly, you might want to look into face alignment.

  37. hami August 12, 2018 at 4:19 am #

    Hello Adrian, I want to use two cameras for face recognition but I have no idea how to do this. How do I add another camera to this project?

  38. Alian August 14, 2018 at 5:27 am #

    Hi Adrian,
    first of all, thanks for your great tutorial.
    I am a beginner learning step by step from you, and now I want to do this project on my Pi 3 B.
    The last step I have done was installing imutils.
    I don’t have a camera. Is it possible to do this project on a video file? What are the differences in the steps?
    I read the https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/ tutorial too, but you prefer not to use it on the Raspberry Pi.
    What should I do now? Could you help me please?

    • Adrian Rosebrock August 15, 2018 at 8:33 am #

      If you’ve already read the previous tutorial then you’ll notice we use the “cv2.VideoWriter” function to write frames to disk. You can use that method with this code as well.

  39. aliyan August 15, 2018 at 3:59 am #

    hi Adrian,
    I’m a beginner on the Pi 3 B and am learning from you.
    I want to do this project. Is it possible to do it without a camera, on a video file?
    What are the different steps?
    Could you help me please?

    • Adrian Rosebrock August 15, 2018 at 8:16 am #

      Yes, you can absolutely perform face recognition on a video file. You should refer to this post for more information.

  40. Tommy August 16, 2018 at 12:23 am #

    Hello Adrian,
    When I use:
    vs = VideoStream(usePiCamera=True).start()

    I have below error:

    (cv) $ python pi_face_recognition.py --cascade haarcascade_frontalface_default.xml --encodings encodings.pickle

    ImportError: No module named picamera.array

    What should I do?
    Thanks a lot.

    • Adrian Rosebrock August 17, 2018 at 7:42 am #

      Hey Tommy — you need to install the “picamera” module into your “cv” Python virtual environment:
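      The command the reply is pointing at would look something like this (assuming the virtualenvwrapper setup from the install tutorials and a virtual environment named cv):

```shell
# activate the "cv" virtual environment, then install picamera
# with its NumPy array submodule (needed by imutils' VideoStream)
workon cv
pip install "picamera[array]"
```

      The [array] extra matters: without it, picamera installs without the picamera.array submodule that triggers this exact ImportError.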

  41. Steven Veenma August 22, 2018 at 5:44 am #

    First of all, thanks for your wonderful contributions offering image processing to a broad public. Recently I concentrated on this tutorial to use it as a building block for a smart drone we intend to make.

    I had some problems running the pi_face_recognition.py script. Below is the message I got.
    [INFO] loading encodings + face detector…
    [INFO] starting video stream…
    Unable to init server: Could not connect: Connection refused

    (Frame:905): Gtk-WARNING **: cannot open display:

    Browsing these errors I realized that I had chosen to use the Raspbian Stretch Lite image, which has no graphical interface. Without a GUI the image could not be shown. To avoid the hassle of a new installation I found some sources to repair this.

    Then I had to solve some additional problems:
    – Automatic login didn’t accept the credentials, so I chose B3 in raspi-config
    – The profile file appeared not to be loaded automatically, so I loaded it manually

    But fortunately, when I ran the script from this very basic graphical environment it did very well. Surprisingly, I got FPS rates between 5 and 6, so it apparently outperforms the other RPi 3-based solutions, whose reported FPS rates are considerably lower. Perhaps their performance is limited by the burden of the complete graphical environment. I think avoiding the graphical environment is an angle to improve performance; in many use cases the graphical environment is not needed.

    • Adrian Rosebrock August 22, 2018 at 9:19 am #

      Thanks so much for sharing, Steven!

      • Steven Veenma August 24, 2018 at 2:11 pm #

        I just tested the SD card in an RPi 2 without problems: FPS 2.5-3.0.

  42. KKaisern August 27, 2018 at 12:04 am #

    Hi, Adrian

    Currently I am working on a Raspberry Pi project, biometric authentication for a smart mirror, and I plan to implement face recognition in the Magic Mirror as a third-party module. Can this Pi face recognition work well with Magic Mirror? I would appreciate it if you could reply. Thanks.

    • Adrian Rosebrock August 30, 2018 at 9:21 am #

      I haven’t built a magic mirror myself, but yes, I imagine it would. As long as the camera can easily detect the face it shouldn’t be a problem.

  43. Huang-Yi Li September 2, 2018 at 8:35 pm #

    Hi, Adrian
    I tried to construct a dataset consisting of 5 people, but I found the results had low accuracy. How can I improve the accuracy?

    • Adrian Rosebrock September 5, 2018 at 9:01 am #

      You may want to play with the confidence and threshold parameters of the actual face_recognition method (see the documentation for more details). I’m also not sure how many images you have per person — you may need to gather more.

  44. Benedict September 4, 2018 at 11:08 am #

    Hi Adrian,

    I’m currently working on my Raspberry Pi project with OpenCV that should detect vehicles and work just like your OpenCV people counter blog post. I need a detector, like a Haar cascade, that detects only vehicles. Also, would it be slow running on the Raspberry Pi? Thanks

    • Adrian Rosebrock September 5, 2018 at 8:37 am #

      I would start by reading this post on object detection so you can understand the concept better. The problem is super accurate methods like deep learning object detectors will run super slow on the Pi. You should also look at Haar cascades and HOG + Linear SVM detectors. You may need to train your own model.

  45. claude September 7, 2018 at 8:21 am #

    Hi Adrian,

    Thank you for your post.

    Is it possible to use this face recognition method with a Movidius stick plugged into an RPi 3B?

    If yes, do you have a solution ?

    Thanks in advance.


    • Adrian Rosebrock September 11, 2018 at 8:29 am #

      I’m sure it’s possible to some degree, but I do not have a tutorial dedicated to Movidius face recognition. There is a thread on the Movidius forums that may interest you.

  46. Huang-Yi Li September 10, 2018 at 11:23 am #

    Hi Adrian,
    I noticed a problem with the accuracy. Using your method and code, I tried many faces and I think it has good accuracy for recognizing Westerners. But when I tried it with Asian faces, it had very low accuracy. Do you know the reason(s)?

    • Adrian Rosebrock September 11, 2018 at 8:10 am #

      Hi Huang-Yi — that is strange behavior, but I will say that the dataset was trained on images of popular celebrities (actors, musicians, etc.), many of which are of western descent. I imagine there is some unconscious bias in the dataset itself. That said, if you have a dataset of Asian faces you could perform transfer learning via fine-tuning to make the model more accurate on your own dataset.

      • Jason September 13, 2018 at 10:06 pm #

        Hi Adrian,

        Thank you for your code. The same case as what Huang-Yi said: it can hardly recognize my friends who are from East Asia.

        • Adrian Rosebrock September 14, 2018 at 9:25 am #

          Unfortunately I think it’s a bias of the dataset the model was trained on. I highly doubt that anyone “purposely” excluded East Asians from the dataset, but it unfortunately looks like East Asians may have been underrepresented in the dataset — this is a problem that we all need to be careful and mindful of now that machine learning is becoming more prevalent in our daily lives. In your case I would suggest training a model on an East Asian dataset if you are specifically interested in recognizing East Asian friends.

    • Jason September 13, 2018 at 10:09 pm #

      Hi Huang-Yi,

      Have you found any method to improve the accuracy for Asian people?

      • Huang-Yi Li September 14, 2018 at 2:06 pm #

        Hi Jason,
        In order to solve this problem, I am trying to find a dataset of Asian faces. For now I am using another approach to recognize East Asians: I use the model from https://github.com/davidsandberg/facenet and an SVM to classify faces.

  47. Huang-Yi Li September 12, 2018 at 10:30 pm #

    Thanks for your reply. If I want to perform transfer learning, what should I study? I only know that you use dlib and the face_recognition module in this post, but I don’t have any idea about tuning their parameters (assuming I have a dataset of Asian faces).

    • Adrian Rosebrock September 14, 2018 at 9:44 am #

      Actually, Deep Learning for Computer Vision with Python covers how to perform transfer learning, including how to perform transfer learning for object detection. I would suggest starting there.

      • Huang-Yi Li September 14, 2018 at 10:40 am #

        Could you tell me which bundle(s) contain these contents?

        • Adrian Rosebrock September 17, 2018 at 3:01 pm #

          Both the Practitioner Bundle and ImageNet Bundle discuss both transfer learning and object detection. The ImageNet Bundle includes more information on object detection and more transfer learning examples as well.

  48. Sachin September 14, 2018 at 8:50 am #

    Hi Adrian, many thanks for the tutorial!

    I was wondering if there is a way to return the percentage match acquired in recognizing the detected face? Or does the algorithm give a binary match/no-match type of answer?

    Many Thanks

    • Adrian Rosebrock September 14, 2018 at 9:15 am #

      Hey Sachin — the algorithm used here is a modified k-NN algorithm. You could further modify it to return a percentage but it doesn’t mean much when using k-NN. In a couple of weeks I’ll be showing a different face recognition method that can return actual probabilities. Be sure to stay tuned for the post!

  49. Salih September 18, 2018 at 10:20 pm #

    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

    I didn’t quite understand this line. How do we reorder those coordinates?
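    For context: OpenCV’s detectMultiScale returns each box as (x, y, w, h), while the face_recognition library expects (top, right, bottom, left), so the list comprehension just rearranges the corner coordinates:

```python
# OpenCV Haar detections come back as (x, y, w, h); face_recognition
# wants (top, right, bottom, left), i.e.
#   top = y, right = x + w, bottom = y + h, left = x
rects = [(10, 20, 30, 40)]  # example box: x=10, y=20, w=30, h=40
boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]
print(boxes)  # → [(20, 40, 60, 10)]
```

    No new information is computed; the same four corner values are simply expressed in the other library’s convention.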

  50. rubentxo September 19, 2018 at 10:04 am #

    Hi Adrian,

    Running the code in a Raspberry Pi and coding with Thonny IDE, I get this error in the command line:

    $ sudo python encode_faces.py --dataset /home/pi/PruebasPython/pi-face-recognition/dataset --encodings encodings.pickle --detection-method hog

    Traceback (most recent call last):
    File "encode_faces.py", line 1, in
    from imutils import paths
    ImportError: No module named imutils

    The imutils module is installed (I’m not using the virtual environment).

    If I create a example.py code with:

    from imutils import paths

    I don’t get any error. So, I suppose imutils is installed correctly; it also appears in the
    Manage packages window of the Thonny IDE.

    I’m stuck! 🙁

    Regards! And thanks for your awesome lessons!

    • rubentxo September 20, 2018 at 3:23 am #

      The problem is SOLVED!
      Running the script with Python 3 from the command line made it work.

      Sorry for the inconvenience.

      Thanks for your lessons!


      • Adrian Rosebrock October 8, 2018 at 1:20 pm #

        Congrats on resolving the issue!

  51. Haziq Sabtu September 24, 2018 at 10:57 am #

    Hey Adrian Rosebrock,
    I’m still new to the world of programming and the Raspberry Pi. I am really hyped about this project. What code do I need to run in order to make the output of the face recognition interact with another program (for example, turning on the light in my room when it detects my face)? I am not asking for a complete guide, but it would be very much appreciated if you could give some keywords or links for this kind of thing. Basically I just want things to interact.

    Many thanks =D

    • Adrian Rosebrock October 8, 2018 at 12:52 pm #

      Hey Haziq, it sounds like you are building an IoT application. Exactly what code you write is heavily dependent on your application (i.e., opening a lock, turning on a light, etc.). You should first decide on what action you want performed and then research what libraries you can use to achieve that goal. From there, you can link the two together, but only once you know how to programmatically perform your “action”, whatever that may be.

  52. adnan September 24, 2018 at 11:08 am #

    hi Adrian,

    Is there an alternative way to do face recognition? Is there any tutorial for face recognition other than OpenCV?

  53. Sonam September 30, 2018 at 1:51 pm #


    I am Sonam from Bhutan, a small landlocked country between India and China. Currently, I am doing a computer vision project on implementation of face recognition in surveillance systems using OpenCV with Python. I found this post important for my project and informative too.

    I look forward to taking this blog as a guide for my project.

    With Regards.

    • Adrian Rosebrock October 8, 2018 at 10:53 am #

      Thanks Sonam and best of luck with your project!

  54. Prema October 1, 2018 at 5:27 am #

    Hi Adrian,

    I ran encode_faces and it successfully created encodings.pickle, but when I ran pi_face_recognition.py with either the Pi camera or a USB camera I received this error:
    ** (Frame:1626): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files

    It manages to detect the person, but the above error is displayed. I Googled to check what is wrong but couldn’t find an answer. I wonder if you could determine what is causing the above error. Thanks

    • Adrian Rosebrock October 8, 2018 at 10:47 am #

      It’s a warning message from the GTK library. It can be safely ignored and it has no impact on the ability of your code to successfully and correctly run.

  55. Hasan October 14, 2018 at 8:21 pm #

    Hello Adrian,
    I have a question: will the commands or the syntax in general be different if I’m using Windows?
    I tried some of your code but it doesn’t work properly.

    • Adrian Rosebrock October 16, 2018 at 8:36 am #

      The only thing that may be different in Windows would be the path separator “\” versus the standard Unix path separator “/”. Otherwise there should be no other differences.

  56. abc October 22, 2018 at 5:55 am #

    picamera.exc.PiCameraMMALError: Failed to enable connection: Out of resources

    I am getting this error. Please help.

    • Adrian Rosebrock October 22, 2018 at 7:47 am #

      It sounds like your Raspberry Pi camera module is (1) already in use by another application or (2) is not properly connected to the Pi. Make sure you double-check and try again.

  57. Alessandro Marques Gentil October 23, 2018 at 11:57 pm #

    Can I run this code in an Android smartphone?
    I want to use it for a project at my university; if I can do this it will be perfect xD
    Thanks in advance!

    • Adrian Rosebrock October 29, 2018 at 2:04 pm #

      No, this code is not portable to Android. You could build a simple REST interface, though; that would likely be the fastest solution if it’s a university project.

  58. Wilfred November 9, 2018 at 12:14 am #

    I cannot find the zip folder to copy.

    • Adrian Rosebrock November 10, 2018 at 10:03 am #

      Hey Wilfred — which .zip file are you referring to?
