Raspberry Pi and Movidius NCS Face Recognition

In this tutorial you will learn how to use the Movidius NCS to speed up face detection and face recognition on the Raspberry Pi by 243%!

If you’ve ever tried to perform deep learning-based face recognition on a Raspberry Pi, you may have noticed significant lag.

Is there a problem with the face detection or face recognition models themselves?

No, absolutely not.

The problem is that your Raspberry Pi CPU simply can’t process the frames quickly enough. You need more computational horsepower.

As the title of this tutorial suggests, we’re going to pair our Raspberry Pi with the Intel Movidius Neural Compute Stick coprocessor. The NCS Myriad processor will handle both face detection and extracting face embeddings. The RPi CPU will handle the final machine learning classification using the results from the face embeddings.

The process of offloading deep learning tasks to the Movidius NCS frees up the Raspberry Pi CPU to handle the non-deep learning tasks. Each processor is then doing what it is designed for. We are certainly pushing our Raspberry Pi to the limit, but we don’t have much choice short of using a completely different single board computer such as an NVIDIA Jetson Nano.

By the end of this tutorial, you’ll have a fully functioning face recognition script running at 6.29 FPS on the RPi and Movidius NCS, a 243% speedup compared to using the RPi alone!

Note: This tutorial includes reposted content from my new Raspberry Pi for Computer Vision book (Chapter 14 of the Hacker Bundle). You can learn more and pick up your copy here.

To learn how to perform face recognition using the Raspberry Pi and Movidius Neural Compute Stick, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Raspberry Pi and Movidius NCS Face Recognition

In this tutorial, we will learn how to work with the Movidius NCS for face recognition.

First, you’ll need an understanding of deep learning face recognition using deep metric learning and how to create a face recognition dataset. Without understanding these two concepts, you may feel lost reading this tutorial.

Prior to reading this tutorial, you should read any of the following:

  1. Face Recognition with OpenCV, Python, and deep learning, my first blog post on deep learning face recognition.
  2. OpenCV Face Recognition, my second blog post on deep learning face recognition using a model that comes with OpenCV. This article also includes a section entitled “Drawbacks, limitations, and how to obtain higher face recognition accuracy” that I highly recommend reading.
  3. Raspberry Pi for Computer Vision‘s “Face Recognition on the Raspberry Pi” (Chapter 5 of the Hacker Bundle).

Additionally, you must read either of the following:

  1. How to build a custom face recognition dataset, a tutorial explaining three methods to build your face recognition dataset.
  2. Raspberry Pi for Computer Vision‘s “Step #1: Gather your dataset” (Chapter 5, Section 5.4.2 of the Hacker Bundle).

Upon successfully reading and understanding those resources, you will be prepared for Raspberry Pi and Movidius NCS face recognition.

In the remainder of this tutorial, we’ll begin by setting up our Raspberry Pi with OpenVINO, including installing the necessary software.

From there, we’ll review our project structure ensuring we are familiar with the layout of today’s downloadable zip.

We’ll then review the process of extracting embeddings for/with the NCS. We’ll train a machine learning model on top of the embeddings data.

Finally, we’ll develop a quick demo script to ensure that our faces are being recognized properly.

Let’s dive in.

Configuring your Raspberry Pi + OpenVINO environment

Figure 1: Configuring OpenVINO on your Raspberry Pi for face recognition with the Movidius NCS.

This tutorial requires a Raspberry Pi (3B+ or 4B is recommended) and a Movidius NCS2 (or a faster version once one is released). Less powerful Raspberry Pi and NCS models may struggle to keep up. Another option is to use a capable laptop/desktop, skipping OpenVINO altogether.

Configuring your Raspberry Pi with the Intel Movidius NCS for this project is admittedly challenging.

I suggest you (1) pick up a copy of Raspberry Pi for Computer Vision, and (2) flash the included pre-configured .img to your microSD. The .img that comes included with the book is worth its weight in gold as it will save you countless hours of toiling and frustration.

For the stubborn few who wish to configure their Raspberry Pi + OpenVINO on their own, here is a brief guide:

  1. Head to my BusterOS install guide and follow all instructions to create an environment named cv. The Raspberry Pi 4B model (either 1GB, 2GB, or 4GB) is recommended.
  2. Head to my OpenVINO installation guide and create a 2nd environment named openvino. Be sure to download the latest OpenVINO and not an older version.

At this point, your RPi will have both a normal OpenCV environment as well as an OpenVINO-OpenCV environment. You will use the openvino environment for this tutorial.

Now, simply plug your NCS2 into a blue USB 3.0 port (the RPi 4B has USB 3.0 for maximum speed) and start your environment using either of the following methods:

Option A: Use the shell script on my Pre-configured Raspbian .img (the same shell script is described in the “Recommended: Create a shell script for starting your OpenVINO environment” section of my OpenVINO installation guide).

From here on, you can activate your OpenVINO environment with one simple command (as opposed to the two commands required by Option B below):
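If you are using the pre-configured .img or followed the shell-script recommendation in my OpenVINO install guide, that single command looks something like this (the script name and home-directory location come from that guide; adjust if you saved yours elsewhere):

```shell
# Activate the OpenVINO virtual environment and load the NCS runtime
# in one step (script name/location per my OpenVINO install guide --
# adjust if you saved yours elsewhere)
source ~/start_openvino.sh
```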

Option B: One-two punch method.

Open a terminal and perform the following:
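The two commands described below follow the standard virtualenvwrapper + OpenVINO pattern; the setupvars.sh path assumes a default OpenVINO toolkit install, so treat this as a sketch rather than the exact listing:

```shell
# Step 1: activate the OpenVINO virtual environment
workon openvino

# Step 2: load Intel's OpenVINO variables so the NCS is usable
# (path assumes a default toolkit install)
source /opt/intel/openvino/bin/setupvars.sh

# Step 3: fire up Python and verify OpenCV imports cleanly
python -c "import cv2; print(cv2.__version__)"
```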

The first command activates our OpenVINO virtual environment. The second command sets up the Movidius NCS with OpenVINO (and is very important). From there we fire up the Python 3 binary in the environment and import OpenCV.

Both Option A and Option B assume that you are either using my pre-configured Raspbian .img or that you followed my OpenVINO installation guide and installed OpenVINO on your Raspberry Pi on your own.

A couple of caveats to be aware of:
  • Some versions of OpenVINO struggle to read .mp4 videos. This is a known bug that PyImageSearch has reported to the Intel team. Our preconfigured .img includes a fix — Abhishek Thanki edited the source code and compiled OpenVINO from source. This blog post is long enough as is, so I cannot include the compile-from-source instructions. If you encounter this issue please encourage Intel to fix the problem, and either (A) compile from source using our customer portal instructions, or (B) pick up a copy of Raspberry Pi for Computer Vision and use the pre-configured .img.
  • We will add to this list if we discover other caveats.

Project Structure

Go ahead and grab today’s .zip from the “Downloads” section of this blog post and extract the files.

Our project is organized in the following manner:
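The original directory listing is not reproduced here, but based on the files described below, the layout looks roughly like this (the person subdirectory names will match your own dataset; only adrian is confirmed in this post):

```
.
├── dataset/
│   ├── adrian/               [20 images]
│   └── ...                   [4 more people, 20 images each]
├── face_detection_model/     (pre-trained Caffe face detector files)
├── face_embedding_model/
│   └── openface_nn4.small2.v1.t7
├── output/
│   ├── embeddings.pickle
│   ├── le.pickle
│   └── recognizer.pickle
├── extract_embeddings.py
├── train_model.py
├── recognize_video.py
└── setup.sh
```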

An example 5-person dataset/ is included. Each subdirectory contains 20 images for the respective person.

Our face detector will detect/localize a face in the image to be recognized. The pre-trained Caffe face detector files (provided by OpenCV) are included inside the face_detection_model/ directory. Be sure to refer to this deep learning face detection blog post to learn more about the detector and how it can be put to use.

We will extract face embeddings with a pre-trained OpenFace PyTorch model included in the face_embedding_model/ directory. The openface_nn4.small2.v1.t7 file was trained by the team at Carnegie Mellon University as part of the OpenFace project.

When we execute extract_embeddings.py, the embeddings.pickle file will be generated and stored inside the output/ directory (the train_model.py script, covered below, produces le.pickle and recognizer.pickle there as well). The embeddings consist of a 128-d vector for each face in the dataset.

We’ll then train a Support Vector Machine (SVM) model on top of the embeddings by executing the train_model.py script. The result of training our SVM will be serialized to recognizer.pickle in the output/ directory.

Note: If you choose to use your own dataset (instead of the one I have supplied with the downloads), you should delete the files included in the output/ directory and generate new files associated with your own face dataset.

The recognize_video.py script simply activates your camera and detects + recognizes faces in each frame.

Our Environment Setup Script

Our Movidius face recognition system will not work properly unless an additional system environment variable, OPENCV_DNN_IE_VPU_TYPE, is set.

Be sure to set this environment variable in addition to starting your virtual environment.

This may change in future revisions of OpenVINO, but for now, a shell script is provided in the project associated with this tutorial.

Open up setup.sh and inspect the script:
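Based on the description that follows (a shebang on Line 1 and an export on Line 3), the script looks like this. Note that the value Myriad2 is my assumption; OpenCV also recognizes MyriadX, so check the setup.sh included in the downloads for the exact value used in your copy:

```shell
#!/bin/sh

# Tell OpenCV's dnn module which Myriad VPU type is attached
# (value is an assumption -- verify against the downloaded setup.sh)
export OPENCV_DNN_IE_VPU_TYPE=Myriad2
```

Be sure to run it with `source setup.sh` so the variable is exported into your current shell; executing it as `./setup.sh` would only set the variable in a subshell.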

The “shebang” (#!) on Line 1 tells the operating system which interpreter to use when the script is executed.

Line 3 sets the environment variable using the export command. You could, of course, manually type the command in your terminal, but this shell script saves you from having to memorize the variable name and value.

Let’s go ahead and execute the shell script:

Provided that you have executed this script, you shouldn’t see any strange OpenVINO-related errors with the rest of the project.

If you encounter strange OpenVINO-related errors in the next section, be sure to execute setup.sh first.

Extracting Facial Embeddings with Movidius NCS

Figure 2: Raspberry Pi facial recognition with the Movidius NCS uses deep metric learning, a process that involves a “triplet training step.” The triplet consists of 3 unique face images — 2 of the 3 are the same person. The NN generates a 128-d vector for each of the 3 face images. For the 2 face images of the same person, we tweak the neural network weights to make the vector closer via distance metric. (image credit: Adam Geitgey)

In order to perform deep learning face recognition, we need real-valued feature vectors to train a model upon. The script in this section serves the purpose of extracting 128-d feature vectors for all faces in your dataset.

Again, if you are unfamiliar with facial embeddings/encodings, refer to one of the three aforementioned resources.

Let’s open extract_embeddings.py and review:
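The full listing ships with the downloadable zip. As a sketch of the argument parsing described below (the real script also imports cv2, numpy, pickle, os, and imutils.paths; the 0.5 default for --confidence is my assumption):

```python
import argparse

# construct the argument parser (flags match the five described below)
ap = argparse.ArgumentParser()
ap.add_argument("--dataset", required=True,
	help="path to input directory of face images")
ap.add_argument("--embeddings", required=True,
	help="path to output serialized db of facial embeddings")
ap.add_argument("--detector", required=True,
	help="path to OpenCV's deep learning face detector")
ap.add_argument("--embedding-model", required=True,
	help="path to OpenCV's deep learning face embedding model")
ap.add_argument("--confidence", type=float, default=0.5,
	help="minimum probability to filter weak detections")

# the real script calls ap.parse_args() on sys.argv; here we parse a
# sample command line for illustration
args = vars(ap.parse_args([
	"--dataset", "dataset",
	"--embeddings", "output/embeddings.pickle",
	"--detector", "face_detection_model",
	"--embedding-model", "face_embedding_model/openface_nn4.small2.v1.t7"]))
print(args["confidence"])  # → 0.5
```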

Lines 2-8 import the necessary packages for extracting face embeddings.

Lines 11-22 parse five command line arguments:

  • --dataset: The path to our input dataset of face images.
  • --embeddings: The path to our output embeddings file. Our script will compute face embeddings which we’ll serialize to disk.
  • --detector: Path to OpenCV’s Caffe-based deep learning face detector used to actually localize the faces in the images.
  • --embedding-model: Path to the OpenCV deep learning Torch embedding model. This model will allow us to extract a 128-D facial embedding vector.
  • --confidence: Optional threshold for filtering weak face detections.

We’re now ready to load our face detector and face embedder:

Here we load the face detector and embedder:

  • detector: Loaded via Lines 26-29. We’re using a Caffe-based DL face detector to localize faces in an image.
  • embedder: Loaded on Line 33. This model is Torch-based and is responsible for extracting facial embeddings via deep learning feature extraction.

Notice that we’re using the respective cv2.dnn functions to load the two separate models. The dnn module is optimized by the Intel OpenVINO developers.

As you can see on Lines 30 and 36, we call setPreferableTarget and pass the Myriad constant. These calls ensure that the Movidius Neural Compute Stick will conduct the deep learning heavy lifting for us.

Moving forward, let’s grab our image paths and perform initializations:

The imagePaths list, built on Line 40, contains the path to each image in the dataset. The imutils function paths.list_images automatically traverses the directory tree to find all image paths.

Our embeddings and corresponding names will be held in two lists: (1) knownEmbeddings, and (2) knownNames (Lines 44 and 45).

We’ll also keep track of how many faces we’ve processed via the total variable (Line 48).

Let’s begin looping over the imagePaths — this loop will be responsible for extracting embeddings from faces found in each image:

We begin looping over imagePaths on Line 51.

First, we extract the name of the person from the path (Line 55). To explain how this works, consider the following example in a Python shell:
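Here is that experiment as runnable Python (the sample path mirrors the dataset layout; only adrian appears in this post):

```python
import os

# a sample dataset path, e.g. dataset/adrian/00004.jpg
imagePath = os.path.join("dataset", "adrian", "00004.jpg")

# split on the OS path separator and grab the second-to-last
# component -- the person's name
name = imagePath.split(os.path.sep)[-2]
print(name)  # → adrian
```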

Notice how by using imagePath.split and providing the split character (the OS path separator, “/” on Unix and “\” on Windows), the function produces a list of folder/file names (strings) which walk down the directory tree. We grab the second-to-last element, the person’s name, which in this case is adrian.

Finally, we wrap up the above code block by loading the image and resizing it to a known width (Lines 60 and 61).

Let’s detect and localize faces:

On Lines 65-67, we construct a blob. A blob packages an image into a data structure compatible with OpenCV’s dnn module. To learn more about this process, read Deep learning: How OpenCV’s blobFromImage works.

From there we detect faces in the image by passing the imageBlob through the detector network (Lines 71 and 72).

And now, let’s process the detections:

The detections list contains probabilities and bounding box coordinates to localize faces in an image. Assuming we have at least one detection, we’ll proceed into the body of the if-statement (Line 75).

We make the assumption that there is only one face in the image, so we extract the detection with the highest confidence and check to make sure that the confidence meets the minimum probability threshold used to filter out weak detections (Lines 78-84).

When we’ve met that threshold, we extract the face ROI and grab/check dimensions to make sure the face ROI is sufficiently large (Lines 87-96).
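The logic of that block can be demonstrated on a synthetic detections array. The (1, 1, N, 7) shape, with the confidence at index 2 and the relative box at indices 3-6, is how OpenCV's SSD-style detectors return results; the 20 px ROI threshold is the one used in my earlier OpenCV Face Recognition post and is illustrative here:

```python
import numpy as np

(h, w) = (225, 300)   # frame dimensions
minConfidence = 0.5   # the --confidence threshold

# two fake detections in OpenCV's SSD output layout:
# [_, _, confidence, startX, startY, endX, endY] (coords are relative)
detections = np.array([[[
	[0, 0, 0.30, 0.10, 0.10, 0.30, 0.40],
	[0, 0, 0.97, 0.40, 0.20, 0.80, 0.90]]]])

if len(detections) > 0:
	# assume one face per image: keep only the highest-confidence one
	i = np.argmax(detections[0, 0, :, 2])
	confidence = detections[0, 0, i, 2]

	# filter out weak detections
	if confidence > minConfidence:
		# scale the relative box back to pixel coordinates
		box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
		(startX, startY, endX, endY) = box.astype("int")

		# ensure the face ROI is sufficiently large before embedding
		(fW, fH) = (endX - startX, endY - startY)
		assert fW >= 20 and fH >= 20
		# box is now (120, 45, 240, 202) in pixels
```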

From there, we’ll take advantage of our embedder CNN and extract the face embeddings:

We construct another blob, this time from the face ROI (not the whole image as we did before) on Lines 101 and 102.

Subsequently, we pass the faceBlob through the embedder CNN (Lines 103 and 104). This generates a 128-D vector (vec) which quantifies the face. We’ll leverage this data to recognize new faces via machine learning.

And then we simply add the name and embedding vec to knownNames and knownEmbeddings, respectively (Lines 108 and 109).

We also can’t forget the variable tracking the total number of faces; we increment its value on Line 110.

We continue this process of looping over images, detecting faces, and extracting face embeddings for each and every image in our dataset.

All that’s left when the loop finishes is to dump the data to disk:

We add the name and embedding data to a dictionary and then serialize it into a pickle file on Lines 113-117.
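A minimal, self-contained version of that serialization step looks like this (with toy values standing in for the real 128-d embeddings built in the loop):

```python
import pickle

# toy stand-ins for the lists built while looping over the dataset
knownNames = ["adrian", "adrian"]
knownEmbeddings = [[0.1] * 128, [0.2] * 128]

# dump the facial embeddings + names to disk as a single dictionary
data = {"embeddings": knownEmbeddings, "names": knownNames}
with open("embeddings.pickle", "wb") as f:
	f.write(pickle.dumps(data))
```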

At this point we’re ready to extract embeddings by executing our script. Prior to running the embeddings script, be sure your openvino environment is active and the additional environment variable is set if you did not do so in the previous section. Here is the quickest way to do it as a reminder:

From there, open up a terminal and execute the following command to compute the face embeddings with OpenCV and Movidius:

This process completed in 57s on an RPi 4B with an NCS2 plugged into the USB 3.0 port. You may notice a delay at the beginning as the model is loaded. From there, each image is processed very quickly.

Note: Typically I don’t recommend using the Raspberry Pi for extracting embeddings as the process can require significant time (a full-size, more-powerful computer is recommended for large datasets). Due to our relatively small dataset (120 images) and the extra “oomph” of the Movidius NCS, this process completed in a reasonable amount of time.

As you can see, we’ve extracted 120 embeddings, one for each of the 120 face photos in our dataset. The embeddings.pickle file is now available in the output/ folder as well:

The serialized embeddings file is 66KB; embeddings files grow linearly with the size of your dataset. Be sure to review the “How to obtain higher face recognition accuracy” section later in this tutorial about the importance of an adequately large dataset for achieving high accuracy.

Training an SVM model on Top of Facial Embeddings

Figure 3: Python machine learning practitioners will often apply Support Vector Machines (SVMs) to their problems (such as deep learning face recognition with the Raspberry Pi and Movidius NCS). SVMs are based on the concept of a hyperplane and the perpendicular distance to it as shown in 2-dimensions (the hyperplane concept applies to higher dimensions as well). For more details, refer to my Machine Learning in Python blog post.

At this point we have extracted 128-d embeddings for each face — but how do we actually recognize a person based on these embeddings?

The answer is that we need to train a “standard” machine learning model (such as an SVM, k-NN classifier, Random Forest, etc.) on top of the embeddings.

For small datasets a k-Nearest Neighbor (k-NN) approach can be used for face recognition on 128-d embeddings created via the dlib (Davis King) and face_recognition (Adam Geitgey) libraries.

However, in this tutorial, we will build a more powerful classifier (Support Vector Machines) on top of the embeddings — you’ll be able to use this same method in your dlib-based face recognition pipelines as well if you are so inclined.

Open up the train_model.py file and insert the following code:

We import our packages and modules on Lines 2-6. We’ll be using scikit-learn’s implementation of Support Vector Machines (SVM), a common machine learning model.

Lines 9-16 parse three required command line arguments:

  • --embeddings: The path to the serialized embeddings (we saved them to disk by running the previous extract_embeddings.py script).
  • --recognizer: This will be our output model that recognizes faces. We’ll be saving it to disk so we can use it in the next two recognition scripts.
  • --le: Our label encoder output file path. We’ll serialize our label encoder to disk so that we can use it and the recognizer model in our image/video face recognition scripts.

Let’s load our facial embeddings and encode our labels:

Here we load our embeddings from our previous section on Line 20. We won’t be generating any embeddings in this model training script — we’ll use the embeddings previously generated and serialized.

Then we initialize our scikit-learn LabelEncoder and encode our name labels (Lines 24 and 25).

Now it’s time to train our SVM model for recognizing faces:

We are using a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel, which is typically harder to tune than a linear kernel. Therefore, we will perform a “grid search”, a method for finding the optimal hyperparameters for a model.

Lines 30-33 set our gridsearch parameters and perform the process. Notice that n_jobs=1. If you were utilizing a more powerful system, you could run more than one job to perform gridsearching in parallel. We are on a Raspberry Pi, so we will use a single worker.

Line 34 handles training our face recognition model on the face embeddings vectors.
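The grid search on Lines 30-34 follows the standard scikit-learn pattern. Here is a runnable sketch on random stand-in embeddings; the exact C/gamma grids in the real script may differ:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# stand-in data: 40 random "128-d embeddings" for two people
rng = np.random.RandomState(42)
embeddings = np.vstack([rng.randn(20, 128) + 1.0,
	rng.randn(20, 128) - 1.0])
names = ["adrian"] * 20 + ["unknown"] * 20

# encode the string labels as integers
le = LabelEncoder()
labels = le.fit_transform(names)

# grid search C and gamma for an RBF-kernel SVM;
# n_jobs=1 keeps the Raspberry Pi to a single worker
params = {"C": [0.1, 1.0, 10.0, 100.0],
	"gamma": [1e-1, 1e-2, 1e-3]}
model = GridSearchCV(SVC(kernel="rbf", probability=True),
	params, cv=3, n_jobs=1)
model.fit(embeddings, labels)
print(model.best_params_)
```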

Note: You can and should experiment with alternative machine learning classifiers. The PyImageSearch Gurus course covers popular machine learning algorithms in depth.

From here we’ll serialize our face recognizer model and label encoder to disk:

To execute our training script, enter the following command in your terminal:

Let’s check the output/ folder now:

With our serialized face recognition model and label encoder, we’re ready to recognize faces in images or video streams.

Real-Time Face Recognition in Video Streams with Movidius NCS

In this section we will code a quick demo script to recognize faces using your PiCamera or USB webcam. Go ahead and open recognize_video.py and insert the following code:

Our imports should be familiar at this point.

Our five command line arguments are parsed on Lines 12-24:

  • --detector: The path to OpenCV’s deep learning face detector. We’ll use this model to detect where in the image the face ROIs are.
  • --embedding-model: The path to OpenCV’s deep learning face embedding model. We’ll use this model to extract the 128-D face embedding from the face ROI — we’ll feed the data into the recognizer.
  • --recognizer: The path to our recognizer model. We trained our SVM recognizer in the previous section. This model will actually determine who a face is.
  • --le: The path to our label encoder. This contains our face labels such as adrian or unknown.
  • --confidence: The optional threshold to filter weak face detections.

Be sure to study these command line arguments — it is critical that you know the difference between the two deep learning models and the SVM model. If you find yourself confused later in this script, you should refer back to here.

Now that we’ve handled our imports and command line arguments, let’s load the three models from disk into memory:

We load three models in this block. At the risk of being redundant, here is a brief summary of the differences among the models:

  1. detector: A pre-trained Caffe DL model to detect where in the image the faces are (Lines 28-32).
  2. embedder: A pre-trained Torch DL model to calculate our 128-D face embeddings (Lines 37 and 38).
  3. recognizer: Our SVM face recognition model (Line 41).

The first two are pre-trained deep learning models, meaning that they are provided to you as-is by OpenCV. The Movidius NCS will perform inference using each of these models.

The third recognizer model is not a form of deep learning. Rather, it is our SVM machine learning face recognition model. The RPi CPU will have to handle making face recognition predictions using it.

We also load our label encoder which holds the names of the people our model can recognize (Line 42).

Let’s initialize our video stream:

Line 47 initializes and starts our VideoStream object. We wait for the camera sensor to warm up on Line 48.

Line 51 initializes our FPS counter for benchmarking purposes.

Frame processing begins with our while loop:

We grab a frame from the webcam on Line 56. We resize the frame (Line 61) and then construct a blob prior to detecting where the faces are (Lines 65-72).

Given our new detections, let’s recognize faces in the frame. But first, we need to filter weak detections and extract the face ROI:

Here we loop over the detections on Line 75 and extract the confidence of each on Line 78.

Then we compare the confidence to the minimum probability detection threshold contained in our command line args dictionary, ensuring that the computed probability is larger than the minimum probability (Line 81).

From there, we extract the face ROI (Lines 84-89) and ensure its spatial dimensions are sufficiently large (Lines 92 and 93).

Recognizing the name of the face ROI requires just a few steps:

First, we construct a faceBlob (from the face ROI) and pass it through the embedder to generate a 128-D vector which quantifies the face (Lines 98-102).

Then, we pass the vec through our SVM recognizer model (Line 105), the result of which is our predictions for who is in the face ROI.

We take the highest probability index and query our label encoder to find the name (Lines 106-108).

Note: You can further filter out weak face recognitions by applying an additional threshold test on the probability. For example, inserting if proba < T (where T is a variable you define) can provide an additional layer of filtering to ensure there are fewer false-positive face recognitions.
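That extra filtering step might look like this, using a stand-in probability vector (T = 0.5 is an arbitrary choice; in the real script, preds comes from the SVM's predict_proba and classes comes from the label encoder):

```python
# stand-in for: preds = recognizer.predict_proba(vec)[0]
preds = [0.15, 0.65, 0.20]
classes = ["abhishek", "adrian", "unknown"]  # stand-in label encoder classes

# grab the highest-probability class and its name
j = max(range(len(preds)), key=lambda i: preds[i])
proba = preds[j]
name = classes[j]

# additional threshold test to reduce false-positive recognitions
T = 0.5
if proba < T:
	name = "unknown"

print(name, proba)  # → adrian 0.65
```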

Now, let’s display face recognition results for this particular frame:

To close out the script, we:

  • Draw a bounding box around the face and the person’s name and corresponding predicted probability (Lines 112-117).
  • Update our fps counter (Line 120).
  • Display the annotated frame (Line 123) and wait for the q key to be pressed at which point we break out of the loop (Lines 124-128).
  • Stop our fps counter and print statistics in the terminal (Lines 131-133).
  • Cleanup by closing windows and releasing pointers (Lines 136 and 137).

Face Recognition with Movidius NCS Results

Now that we have (1) extracted face embeddings, (2) trained a machine learning model on the embeddings, and (3) written our face recognition in video streams driver script, let’s see the final result.

Ensure that you have followed the following steps:

  1. Step #1: Gather your face recognition dataset.
  2. Step #2: Extract facial embeddings (via the extract_embeddings.py script).
  3. Step #3: Train a machine learning model on the set of embeddings (such as Support Vector Machines per today’s example) using train_model.py.

From there, set up your Raspberry Pi and Movidius NCS for face recognition:

  • Connect your PiCamera or USB camera and configure either Line 46 or Line 47 of the real-time face recognition script (but not both) to start your video stream.
  • Plug in your Intel Movidius NCS2 (the NCS1 is also compatible).
  • Start your openvino virtual environment and set the key environment variable as shown below:

From there, open up a terminal and execute the following command:

As you can see, faces have correctly been identified. What’s more, we are achieving 6.29 FPS using the Movidius NCS in comparison to 2.59 FPS using strictly the CPU. This comes out to a speedup of 243% using the RPi 4B and Movidius NCS2.

I asked PyImageSearch team member, Abhishek Thanki, to record a demo of our Movidius NCS face recognition in action. Below you can find the demo:

As you can see the combination of the Raspberry Pi and Movidius NCS is able to recognize Abhishek’s face in near real-time — using just the Raspberry Pi CPU alone would not be enough to obtain such speed.

My face recognition system isn’t recognizing faces correctly

Figure 4: Misclassified faces occur for a variety of reasons when performing Raspberry Pi and Movidius NCS face recognition.

As a reminder, be sure to refer to the following two resources:

  1. OpenCV Face Recognition includes a section entitled “Drawbacks, limitations, and how to obtain higher face recognition accuracy”.
  2. “How to obtain higher face recognition accuracy”, a section of Chapter 14, Face Recognition on the Raspberry Pi (Raspberry Pi for Computer Vision).

Both resources help you in situations where OpenCV does not recognize a face correctly.

In short, you may need:

  • More data. This is the number one reason face recognition systems fail. I recommend 20-50 face images per person in your dataset as a general rule.
  • To perform face alignment as each face ROI undergoes the embeddings process.
  • To tune your machine learning classifier hyperparameters.

Again, if your face recognition system is mismatching faces or marking faces as “Unknown” be sure to spend time improving your face recognition system.

Where can I learn more?

If you’re interested in learning more about applying Computer Vision, Deep Learning, and OpenCV to embedded devices such as the:

  • Raspberry Pi
  • Intel Movidius NCS
  • Google Coral
  • NVIDIA Jetson Nano

…then you should definitely take a look at my brand new book, Raspberry Pi for Computer Vision.

This book has over 40 projects (including 60+ chapters) on embedded Computer Vision and Deep Learning. You can build upon the projects in the book to solve problems around your home, business, and even for your clients.

Each and every project in the book has an emphasis on:

  • Learning by doing.
  • Rolling up your sleeves.
  • Getting your hands dirty in code and implementation.
  • Building actual, real-world projects using the Raspberry Pi.

A handful of the highlighted projects include:

  • Traffic counting and vehicle speed detection
  • Classroom attendance
  • Hand gesture recognition
  • Daytime and nighttime wildlife monitoring
  • Security applications
  • Deep Learning classification, object detection, and instance segmentation on resource-constrained devices
  • …and many more!

The book also covers deep learning using the Google Coral and Intel Movidius NCS coprocessors (Hacker + Complete Bundles). We’ll also bring in the NVIDIA Jetson Nano to the rescue when more deep learning horsepower is needed (Complete Bundle).

Are you ready to join me and learn how to apply Computer Vision and Deep Learning to embedded devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano?

If so, check out the book and grab your free table of contents!



Summary

In this tutorial, we used OpenVINO and our Movidius NCS to perform face recognition.

Our face recognition pipeline was created using a four-stage process:

  1. Step #1: Create your dataset of face images. You can, of course, swap in your own face dataset provided you follow the same dataset directory structure of today’s project.
  2. Step #2: Extract face embeddings for each face in the dataset.
  3. Step #3: Train a machine learning model (Support Vector Machines) on top of the face embeddings.
  4. Step #4: Utilize OpenCV and our Movidius NCS to recognize faces in video streams.

We put our Movidius NCS to work for the following deep learning tasks:

  • Face detection: Localizing faces in an image
  • Extracting face embeddings: Generating 128-D vectors which quantify a face numerically

We then used the Raspberry Pi CPU to handle the non-DL machine learning classifier used to make predictions on the 128-D embeddings.

This process of separating responsibilities allowed the CPU to call the shots, while employing the NCS for the heavy lifting. We achieved a speedup of 243% using the Movidius NCS for face recognition in video streams.

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just drop your email in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


24 Responses to Raspberry Pi and Movidius NCS Face Recognition

  1. Simon Platten January 6, 2020 at 11:13 am #

    How would you rate the Raspberry Pi #4 with the Movidius module compared to the mentioned NVIDIA Jetson Nano ?

    Can you give any performance figures for either of them? Looking at the low cost of the Pi plus the price of a single Movidius module, the NVIDIA Jetson Nano is a comparable price.

    • Adrian Rosebrock January 16, 2020 at 10:20 am #

      I prefer the Pi 4 with the NCS over the Nano, but what’s even faster is the Pi 4 with the Coral USB Accelerator. The USB 3 on the RPi 4 with the NCS makes it super fast.

  2. Simon Platten January 7, 2020 at 3:21 am #

    Will you be replacing the Intel Movidius stick with the Intel® Neural Compute Stick 2 ?

    As there is a notice on the Intel Movidius web-site stating that it’s discontinued.

    • Adrian Rosebrock January 16, 2020 at 10:21 am #

      Perhaps I’m not understanding your question, but we’re using the NCS2 in this tutorial.

  3. Simon Platten January 7, 2020 at 3:25 am #


    This is interesting I actually have one of these cameras that I purchased for use with my iPAD to scan in 3D.

    Can I use this camera with the Raspberry Pi?

    • Adrian Rosebrock January 16, 2020 at 10:21 am #

      Sorry, I don’t have any experience with Intel’s depth camera.

  4. Simon Platten January 7, 2020 at 7:51 am #

    Would you be interested in a C++ implementation of your code? I’m an experienced software developer with experience dating back to the mid 80’s.

    C++ being much faster than Python.

    • Adrian Rosebrock January 16, 2020 at 10:21 am #

      I primarily cover Python here at PyImageSearch. I don’t do much work in C++.

  5. sg January 7, 2020 at 10:04 pm #

    Dear Adrian,

    I know this is out of the scope question.
    But I still want to ask: what changes do we need to make if we want to run the script on Windows instead of the Raspberry Pi?


    • Adrian Rosebrock January 16, 2020 at 10:22 am #

      Sorry, I don’t support Windows here on the PyImageSearch blog.

  6. TB January 8, 2020 at 8:14 am #

    Hello Adrian.

    First, I want to thank you for your post and your work – I am always amazed by all the projects realized here and the dedication you put in giving it all to the readers.

    I recently got a NCS2 and am trying to play a bit with it, so I came and tried your tutorials.

    The one using OpenVINO to perform real-time object detection went great, so I tried this one too.

    Well, I encountered quite a problem on this one. The error message I get is “Failed to Initialize Inference Engine backend : Device with "CPU" name is not registered in the InferenceEngine in function ‘initPlugin’”.

    At first, I thought I forgot to source setup.sh, but that changed nothing.
    Then I thought my NCS2 was no longer detected, but it shows up if I use "lsusb" in my terminal.
    What bothers me is that when I checked whether this had wrecked everything, it had. I now get the same error message for the previous tutorial (OpenVINO, OpenCV, and Movidius NCS on the Raspberry Pi) that worked before, meaning something has definitely changed. So I tried a lot of things (rebooting, removing and plugging the NCS2 in again, new Python environments for both tutorials, some env variable modifications), but nothing worked: this message pops up every time.

    Do you have any ideas where it could come from or should I go for a full reinstall of the OS and try again ?

    Thanks in advance,

    • David Hoffman January 16, 2020 at 1:47 pm #

      Hi TB! Thanks so much for the detailed comment. Can you please confirm which version of OpenVINO you are running? I tested the code for this blog post today in the following configurations: (1) Pi 3B+, NCS1, Buster, OpenVINO 4.1.1, (2) Pi 3B+, NCS2, Buster, OpenVINO 4.1.1, (3) Pi 4B, NCS1, Buster, OpenVINO 4.1.1, (4) Pi 4B, NCS2, Buster, OpenVINO 4.1.1. I used our preconfigured Raspbian Buster .img for all testing. The .img saves you a lot of configuration hassle and ensures that you are using the exact same development/deployment environment as us. I recommend using our .img if you will be following along with Raspberry Pi tutorials on our website. If you don’t have it already, you can grab it with a copy of Raspberry Pi for Computer Vision. Of course the preconfigured .img is not required, but it is next to impossible for us to replicate your RPi environment to the T and debug. I would also recommend reading this FAQ about priority customer support.

      • Lorenzo January 16, 2020 at 2:10 pm #

        Hello all,
        I have exactly the same error message as TB: “Failed to Initialize Inference Engine backend : Device with "CPU" name is not registered in the InferenceEngine in function ‘initPlugin’”
        My configuration is Pi 4B, NCS2, Buster, OpenVINO 4.1.2 (not using your .img; I followed your tutorial, fully compiled OpenCV, and everything went well).

        • Adrian Rosebrock January 20, 2020 at 1:00 pm #

          See the following sentence in the tutorial:

          “If you encounter the following error message in the next section, be sure to execute setup.sh”

          You need to run the “setup.sh” script detailed in the “Our Environment Setup Script” section.
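[Editor's note] For readers hitting the same error, the fix is to load the OpenVINO environment before running the Python scripts so OpenCV can find the Inference Engine backend. A minimal sketch of what such a setup.sh typically does — the install path shown here is an assumption; adjust it to wherever OpenVINO lives on your Pi:

```shell
#!/usr/bin/env bash
# Load OpenVINO environment variables so OpenCV's DNN module can
# use the Inference Engine backend (path is an assumption --
# adjust to your OpenVINO install location).
source /opt/intel/openvino/bin/setupvars.sh
```

Remember to run it with `source setup.sh` (not `./setup.sh`) so the environment variables persist in your current shell session.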

  7. Vladimir January 9, 2020 at 2:03 am #

    Hi Adrian! What version of scikit-learn are you using?

    • Adrian Rosebrock January 16, 2020 at 10:22 am #

      I was using v0.21 and v0.22.

      • Vladimir January 16, 2020 at 9:03 pm #

        Due to changes in the scikit-learn version starting from 0.22.1, your projects do not work.

        • Adrian Rosebrock January 20, 2020 at 12:59 pm #

          Can you be more specific? What error are you getting?

          • Vladimir January 22, 2020 at 8:08 pm #

            e.g. AttributeError: ‘SVC’ object has no attribute ‘_n_support’

          • Adrian Rosebrock January 23, 2020 at 9:17 am #

            I haven’t seen that issue before but I’m sure if other PyImageSearch readers encounter it they will be able to reply here. Thanks Vladimir.
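[Editor's note] This error typically appears when a model pickled under one scikit-learn release is unpickled under a different one; scikit-learn does not guarantee pickle compatibility across versions. One workaround is to retrain and re-pickle the SVC under the scikit-learn version you actually have installed. A minimal sketch — the embeddings and labels below are placeholder data, not the tutorial's real 128-d face embeddings:

```python
import pickle
from sklearn.svm import SVC

# Placeholder "embeddings" and names standing in for the real
# 128-d face embeddings extracted by the tutorial's pipeline.
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y = ["alice", "bob", "alice", "bob"]

# Retrain the classifier under the currently installed
# scikit-learn instead of unpickling one trained under an
# older release.
model = SVC(kernel="linear", probability=True)
model.fit(X, y)

# Re-pickle it so the recognition script loads a version-matched model.
with open("recognizer.pickle", "wb") as f:
    pickle.dump(model, f)
```

In other words: rerun the tutorial's training step after upgrading scikit-learn, rather than reusing the old pickle file.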

  8. Curtis Brown January 19, 2020 at 12:16 pm #

    Great tutorial. I started with Python 3.7.6 and had some errors, but when I dropped back to Python 3.6.5, everything works and I get 14.67 fps…

    • Adrian Rosebrock January 20, 2020 at 12:59 pm #

      Great job, Curtis!

  9. Adnan January 20, 2020 at 12:56 am #

    No module sklearn when I run recognise_video.py?

    • Adrian Rosebrock January 20, 2020 at 12:59 pm #

      You need to install the “scikit-learn” library:
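A sketch of the install command (run it inside the same Python environment you use to run the script; if you use virtualenvwrapper as the tutorials recommend, activate the environment first with `workon <env_name>`):

```shell
# Install scikit-learn into the currently active environment
pip install scikit-learn
```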
