Face detection with OpenCV and deep learning

Today I’m going to share a little-known secret with you regarding the OpenCV library:

You can perform fast, accurate face detection with OpenCV using a pre-trained deep learning face detector model shipped with the library.

You may already know that OpenCV ships out-of-the-box with pre-trained Haar cascades that can be used for face detection…

…but I’m willing to bet that you don’t know about the “hidden” deep learning-based face detector that has been part of OpenCV since OpenCV 3.3.

In the remainder of today’s blog post I’ll discuss:

  • Where this “hidden” deep learning face detector lives in the OpenCV library
  • How you can perform face detection in images using OpenCV and deep learning
  • How you can perform face detection in video using OpenCV and deep learning

As we’ll see, it’s easy to swap out Haar cascades for their more accurate deep learning face detector counterparts.

To learn more about face detection with OpenCV and deep learning, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Face detection with OpenCV and deep learning

Today’s blog post is broken down into three parts.

In the first part we’ll discuss the origin of the more accurate OpenCV face detectors and where they live inside the OpenCV library.

From there I’ll demonstrate how you can perform face detection in images using OpenCV and deep learning.

I’ll then wrap up the blog post discussing how you can apply face detection to video streams using OpenCV and deep learning as well.

Where do these “better” face detectors live in OpenCV and where did they come from?

Back in August 2017, OpenCV 3.3 was officially released, bringing with it a highly improved “deep neural networks” ( dnn ) module.

This module supports a number of deep learning frameworks, including Caffe, TensorFlow, and Torch/PyTorch.

The primary contributor to the dnn  module, Aleksandr Rybnikov, has put a huge amount of work into making this module possible (and we owe him a big round of thanks and applause).

And since the release of OpenCV 3.3, I’ve been sharing a number of deep learning OpenCV tutorials.

However, what most OpenCV users do not know is that Rybnikov has included a more accurate, deep learning-based face detector in the official release of OpenCV (although it can be a bit hard to find if you don’t know where to look).

The Caffe-based face detector can be found in the face_detector  sub-directory of the dnn samples:

Figure 1: The OpenCV repository on GitHub has an example of deep learning face detection.

When using OpenCV’s deep neural network module with Caffe models, you’ll need two sets of files:

  • The .prototxt file(s) which define the model architecture (i.e., the layers themselves)
  • The .caffemodel file which contains the weights for the actual layers

Both files are required when using models trained using Caffe for deep learning.

However, you’ll only find the prototxt files here in the GitHub repo.

The weight files are not included in the OpenCV samples  directory, and finding them requires a bit more digging…

Where can I get the more accurate OpenCV face detectors?

For your convenience, I have included both the:

  1. Caffe prototxt files
  2. and Caffe model weight files

…inside the “Downloads” section of this blog post.

To skip to the downloads section, just click here.

How does the OpenCV deep learning face detector work?

Figure 2: Deep Learning with OpenCV’s DNN module.

OpenCV’s deep learning face detector is based on the Single Shot Detector (SSD) framework with a ResNet base network (unlike other OpenCV SSDs that you may have seen which typically use MobileNet as the base network).

A full review of SSDs and ResNet is outside the scope of this blog post, so if you’re interested in learning more about Single Shot Detectors (including how to train your own custom deep learning object detectors), start with this article here on the PyImageSearch blog and then take a look at my book, Deep Learning for Computer Vision with Python, which includes in-depth discussions and code enabling you to train your own object detectors.

Face detection in images with OpenCV and deep learning

In this first example we’ll learn how to apply face detection with OpenCV to single input images.

In the next section we’ll learn how to modify this code and apply face detection with OpenCV to videos, video streams, and webcams.

Open up a new file, name it detect_faces.py , and insert the following code:
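The original listing is not reproduced here, but the imports and argument parsing it describes can be sketched as follows (flag names match the argument list described in this post; the short flags and help strings are my assumption):

```python
# Sketch of the top of detect_faces.py (not the original listing).
# The script also imports numpy and OpenCV, commented out here so the
# snippet stays runnable without cv2 installed:
#   import numpy as np
#   import cv2
import argparse

# construct the argument parser and define the arguments described below
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")

# in the real script the values come from the command line:
# args = vars(ap.parse_args())
```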

Here we are importing our required packages (Lines 2-4) and parsing command line arguments (Lines 7-16).

We have three required arguments:

  • --image : The path to the input image.
  • --prototxt : The path to the Caffe prototxt file.
  • --model : The path to the pretrained Caffe model.

An optional argument, --confidence , can override the default threshold of 0.5 if you wish.

From there, let’s load our model and create a blob from our image:

First, we load our model using our --prototxt  and --model  file paths. We store the model as net  (Line 20).

Then we load the image  (Line 24), extract the dimensions (Line 25), and create a blob  (Lines 26 and 27).

The dnn.blobFromImage  function takes care of pre-processing, which includes setting the blob  dimensions and normalization. If you’re interested in learning more about the dnn.blobFromImage  function, I review it in detail in this blog post.
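As a rough illustration of what that pre-processing does (a conceptual sketch, not OpenCV’s implementation): for this model the blob is a 300×300 image with the per-channel mean (104.0, 177.0, 123.0) subtracted, reordered into a 1×3×300×300 NCHW array:

```python
import numpy as np

# Conceptual stand-in for cv2.dnn.blobFromImage for this model: mean
# subtraction plus a reorder from HxWx3 to a 1x3xHxW blob. The real
# function also handles resizing and optional scaling/channel swapping.
def blob_from_image(image, mean=(104.0, 177.0, 123.0)):
    blob = image.astype("float32") - np.array(mean, dtype="float32")
    return blob.transpose(2, 0, 1)[np.newaxis, ...]   # NCHW ordering

# a flat gray 300x300 test image stands in for the resized input
fake = np.full((300, 300, 3), 128.0, dtype="float32")
blob = blob_from_image(fake)
```

The resulting blob is then set as the network’s input and a forward pass produces the detections.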

Next, we’ll apply face detection:

To detect faces, we pass the blob  through the net  on Lines 32 and 33.

And from there we’ll loop over the detections  and draw boxes around the detected faces:

We begin looping over the detections on Line 36.

From there, we extract the confidence  (Line 39) and compare it to the confidence threshold (Line 43). We perform this check to filter out weak detections.

If the confidence meets the minimum threshold, we proceed to draw a rectangle along with the probability of the detection on Lines 46-56.

To accomplish this, we first calculate the (x, y)-coordinates of the bounding box (Lines 46 and 47).

We then build our confidence text  string (Line 51) which contains the probability of the detection.

In case our text  would go off-image (such as when the face detection occurs at the very top of an image), we shift it down by 10 pixels (Line 52).

Our face rectangle and confidence text  are drawn on the image  on Lines 53-56.

From there we loop back for additional detections, following the process again. When no detections  remain, we’re ready to show our output image  on the screen (Lines 59 and 60).
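Putting the loop’s logic together, here is a pure-Python sketch with the cv2 drawing calls left out (the index layout, confidence at position 2 and the normalized box at positions 3-6, is the standard OpenCV SSD output format):

```python
# Pure-Python sketch of the detection loop (cv2 drawing calls omitted).
# Each SSD detection row is laid out as:
#   [image_id, label, confidence, x1, y1, x2, y2]
# with box coordinates normalized to [0, 1].
def boxes_above_threshold(detections, w, h, min_confidence=0.5):
    results = []
    for det in detections:
        confidence = det[2]
        if confidence < min_confidence:   # filter out weak detections
            continue
        # scale the normalized box back to image coordinates
        start_x, start_y = int(det[3] * w), int(det[4] * h)
        end_x, end_y = int(det[5] * w), int(det[6] * h)
        # shift the label down 10 px if the box starts at the very top
        y = start_y - 10 if start_y - 10 > 10 else start_y + 10
        text = "{:.2f}%".format(confidence * 100)
        results.append(((start_x, start_y, end_x, end_y), y, text))
    return results

# one strong detection and one weak one on a 600x400 image
dets = [
    [0.0, 1.0, 0.90, 0.25, 0.20, 0.75, 0.80],
    [0.0, 1.0, 0.30, 0.00, 0.00, 1.00, 1.00],
]
faces = boxes_above_threshold(dets, 600, 400)
```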

Face detection in images with OpenCV results

Let’s try out the OpenCV deep learning face detector.

Make sure you use the “Downloads” section of this blog post to download:

  • The source code used in this blog post
  • The Caffe prototxt files for deep learning face detection
  • The Caffe weight files used for deep learning face detection
  • The example images used in this post

From there, open up a terminal and execute the following command:

Figure 3: My face is detected in this image with 74% confidence using the OpenCV deep learning face detector.

The above photo is of me during my first trip to Ybor City in Florida, where chickens are allowed to roam free throughout the city. There are even laws protecting the chickens which I thought was very cool. Even though I grew up in rural farmland, I was still totally surprised to see a rooster crossing the road — which of course spawned many “Why did the chicken cross the road?” jokes.

Here you can see my face is detected with 74.30% confidence, even though my face is at an angle. OpenCV’s Haar cascades are notorious for missing faces that are not at a “straight on” angle, but by using OpenCV’s deep learning face detectors, we are able to detect my face.

And now we’ll see how another example works, this time with three faces:

Figure 4: The OpenCV DNN face detector finds all three faces without any trouble.


This photo was taken in Gainesville, FL after one of my favorite bands finished up a show at Loosey’s, a popular bar and music venue in the area. Here you can see my fiancée (left), me (middle), and Jason (right), a member of the band.

I’m incredibly impressed that OpenCV can detect Trisha’s face, despite the lighting conditions and shadows cast on her face in the dark venue (and with 86.81% probability!)

Again, this just goes to show how much better (in terms of accuracy) the deep learning OpenCV face detectors are over their standard Haar cascade counterparts shipped with the library.

Face detection in video and webcam with OpenCV and deep learning

Now that we have learned how to apply face detection with OpenCV to single images, let’s also apply face detection to videos, video streams, and webcams.

Luckily for us, most of our code in the previous section on face detection with OpenCV in single images can be reused here!

Open up a new file, name it detect_faces_video.py , and insert the following code:

Compared to above, we will need to import three additional packages: VideoStream , imutils , and time .

If you don’t have imutils  in your virtual environment, you can install it via pip install imutils .

Our command line arguments are mostly the same, except we do not have an --image  path argument this time. We’ll be using our webcam’s video feed instead.

From there we’ll load our model and initialize the video stream:

Loading the model is the same as above.

We initialize a VideoStream  object, specifying the camera with index zero as the source (in general this would be your laptop’s built-in camera or your desktop’s first detected camera).

A few quick notes here:

  • Raspberry Pi + picamera users can replace Line 25 with vs = VideoStream(usePiCamera=True).start() if you wish to use the Raspberry Pi camera module.
  • If you want to parse a video file (rather than a video stream), swap out the VideoStream  class for FileVideoStream . You can learn more about the FileVideoStream class in this blog post.

We then allow the camera sensor to warm up for 2 seconds (Line 26).
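As an aside, the speedup from VideoStream comes from polling frames on a background thread so that read() returns the latest frame without blocking. A toy stand-in (this is not imutils’ actual implementation) illustrates the pattern:

```python
import threading
import time

# Toy threaded frame poller illustrating the VideoStream idea: a
# background thread keeps self.frame up to date while the main thread
# reads the latest frame without blocking.
class DummyStream:
    def __init__(self, source):
        self.source = source      # any iterable of "frames"
        self.frame = None
        self.stopped = False

    def start(self):
        t = threading.Thread(target=self._update, daemon=True)
        t.start()
        return self

    def _update(self):
        for f in self.source:
            if self.stopped:
                return
            self.frame = f        # overwrite with the newest frame

    def read(self):
        return self.frame         # never blocks

    def stop(self):
        self.stopped = True

vs = DummyStream(range(100)).start()
time.sleep(0.25)                  # let the thread drain the source
latest = vs.read()
vs.stop()
```

imutils’ real VideoStream wraps cv2.VideoCapture (or the picamera module) in the same manner.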

From there we loop over the frames and compute face detections with OpenCV:

This block should look mostly familiar to the static image version in the previous section.

In this block, we’re reading a frame  from the video stream (Line 32), creating a blob  (Lines 37 and 38), and passing the blob  through the deep neural net  to obtain face detections (Lines 42 and 43).

We can now loop over the detections, compare to the confidence threshold, and draw face boxes + confidence values on the screen:

For a detailed review of this code block, please review the previous section where we perform face detection on still, static images. The code here is nearly identical.

Now that our OpenCV face detections have been drawn, let’s display the frame on the screen and wait for a keypress:

We display the frame  on the screen until the “q” key is pressed at which point we break  out of the loop and perform cleanup.

Face detection in video and webcam with OpenCV results

To try out the OpenCV deep learning face detector make sure you use the “Downloads” section of this blog post to grab:

  • The source code used in this blog post
  • The Caffe prototxt files for deep learning face detection
  • The Caffe weight files used for deep learning face detection

Once you have downloaded the files, running the deep learning OpenCV face detector with a webcam feed is easy with this simple command:

Figure 5: Face detection in video with OpenCV’s DNN module.

You can see a full video demonstration, including my commentary, in the following video:


In today’s blog post you discovered a little-known secret about the OpenCV library: OpenCV ships out-of-the-box with a more accurate face detector (as compared to OpenCV’s Haar cascades).

The more accurate OpenCV face detector is deep learning based, and in particular, utilizes the Single Shot Detector (SSD) framework with ResNet as the base network.

Thanks to the hard work of Aleksandr Rybnikov and the other contributors to OpenCV’s dnn  module, we can enjoy these more accurate OpenCV face detectors in our own applications.

The deep learning face detectors can be hard to find in the OpenCV library, so
for your convenience, I have gathered the Caffe prototxt and weight files for you. Just use the “Downloads” form below to download the (more accurate) deep learning-based OpenCV face detector.

See you next week with another great computer vision + deep learning tutorial!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


113 Responses to Face detection with OpenCV and deep learning

  1. Chris Combs February 26, 2018 at 10:50 am #

    How well does the new OpenCV model recognize faces of various skin tones? Do we know how it was trained?

    • Adrian Rosebrock February 26, 2018 at 1:40 pm #

      I have not performed an exhaustive evaluation for various skin tones so I’m not sure. That would make for an interesting article. If I cannot write one I would love to see one written by a PyImageSearch reader. The GitHub repo has more information on the training process.

  2. wildan February 26, 2018 at 10:53 am #

    What’s the difference between this method and Haar cascade face detection?

    • Adrian Rosebrock February 26, 2018 at 1:39 pm #

      This method uses deep learning, in particular a Single Shot Detector (SSD) with ResNet base network architecture. That is the difference.

      • Sampreet Sarkar May 27, 2018 at 1:54 pm #

        Hi Adrian, big fan of your blog. I was amazed when I tried out the results myself! Saved me a lot of hassle. However, I was wondering how I could add a face recognition feature on top of this. Would help a lot if you could explain. Cheers!

        • Adrian Rosebrock May 28, 2018 at 9:38 am #

          Hey Sampreet, I’ll likely be covering facial recognition soon but I would also suggest taking a look at the PyImageSearch Gurus course where I cover facial recognition in depth.

  3. Ratman February 26, 2018 at 10:58 am #

    Sounds nice, but how is the performance? There are a lot of face detection frameworks but they are not even near real time.

    • Adrian Rosebrock February 26, 2018 at 1:39 pm #

      Please see the video where I provide a live demonstration. This face detector can be used in real-time.

      • ratman February 28, 2018 at 3:41 pm #

        Thanks for the reply, Adrian! I think it is still a bit slow for our family album including 40k pictures, but it is worth a trial then 🙂

  4. KK February 26, 2018 at 10:59 am #

    Hi Adrian, nice to know ..is this detector faster than dlib’s detector? thanks much!

    • Adrian Rosebrock February 26, 2018 at 1:38 pm #

      I haven’t tested them side-by-side, but it should be comparably fast to dlib’s HOG + Linear SVM detector.

  5. Curious Observer February 26, 2018 at 12:02 pm #

    Hey Adrian,

    First off, I want to thank you for the great work you’ve done so far. Your blog is the basis for the computer vision startup we’re founding.

    Coming back to this specific blog post, I haven’t tested it yet, but how do you think the speed of this DNN will compare to Haar cascades on a Raspberry Pi? On my computer, I’m seeing about a 25% slowdown.

    Which would you choose for detecting faces on a Pi where both speed and accuracy were equally important?

    • Adrian Rosebrock February 26, 2018 at 1:38 pm #

      Haar cascades will be the fastest here, but the deep learning face detector will give you the most accuracy. As for which one to choose, that really depends on your project. If you’re looking to detect faces that will naturally have more variability in viewing angle, use the deep learning detector. If the faces will almost always be “straight on” then the Haar cascades will likely be sufficient. Again, it really depends on the project.

    • TetsFR March 10, 2018 at 12:42 pm #

      Not sure about Haar cascades but this deepnet runs on my rpi3 at about 1fps, maybe a bit slower. So that is still usable for some projects although difficult for realtime applications. I am trying to control servos of a pan tilt camera mount and there is so much delay in the feedback loop that it is tricky to manage (if you guys have a suggestion of an algo that is robust to such delay I would take it).
      Thanks Adrian for the great tutorial, as always.

  6. Matt February 26, 2018 at 12:45 pm #

    This is awesome to hear OpenCV now ships with the dnn module.

    If you wanted to take this a step further to start recognizing particular faces, would you have to go back to deep learning to actually teach the faces? Would there be a way to leverage this to assist in the collection of labeled samples?


    • Adrian Rosebrock February 26, 2018 at 1:36 pm #

      Facial recognition is a two stage process. The first stage is detecting the presence of a face in an image but not knowing “who” the actual face is. The second stage is taking each detected face and recognizing it. For this, you would need a dedicated facial recognition algorithm. I actually discuss how to create a Python script to assist in collecting labeled faces (as you suggested) inside the PyImageSearch Gurus course. From there you’ll be able to build your own facial recognizer.

  7. swapnil February 26, 2018 at 1:13 pm #

    One of the posts I was eagerly looking for. Thanks Adrian for this post. You are the best when it comes to computer vision.

    • Adrian Rosebrock February 26, 2018 at 1:34 pm #

      Thank you swapnil! 🙂

  8. Steve Cox February 26, 2018 at 9:04 pm #

    Very nice !!!! I look forward to playing with this example.

    Now how do we train this deep model to recognize “Our” faces 😉

    I think this is in the right direction and away from eigenfaces, which I noticed don’t seem to be accurate. (Not an exhaustive test on my part.) I can still see using a Haar cascade in front of this deep learning SSD. Haar is so fast I think the two algos stacked together make sense.

    Thanks again !!!!

    • Adrian Rosebrock February 27, 2018 at 11:33 am #

      There are a bunch of ways to perform face recognition using deep learning. One of my favorites is to use deep learning embeddings of the faces. I’ll cover this as well in the future.

  9. phillity February 26, 2018 at 11:58 pm #

    Hi Adrian. Thanks for the great tutorial!

    • Adrian Rosebrock February 27, 2018 at 11:29 am #

      Thanks so much Phillity, I’m happy you enjoyed it! 🙂

  10. ray February 27, 2018 at 10:38 am #

    great tutorial!!!!

    • Adrian Rosebrock February 27, 2018 at 11:20 am #

      Thank you Ray, I’m glad you enjoyed it! 🙂

  11. Nam Phan February 27, 2018 at 12:53 pm #

    First off, great tut as usual! Excellent!
    I wonder if there are some helper functions accompanying the package that I can use to extract extra information about face parts like eyes, nose, ears, and forehead positions.
    If not, is there any package to do such extraction after detecting with this dnn?

    • Adrian Rosebrock February 28, 2018 at 1:49 pm #

      It sounds like you’re referring to facial landmarks. See this post for more details, including code.

  12. han February 28, 2018 at 2:45 am #

    Thank you for your efforts and sharing.
    I really like to read your posts!!

    • Adrian Rosebrock February 28, 2018 at 1:50 pm #

      Thanks Han, I’m glad you’re enjoying them!

  13. GabriellaK February 28, 2018 at 5:16 am #

    Great, Is there something new for full body detection?

    • Adrian Rosebrock February 28, 2018 at 1:50 pm #

      What specifically regarding full body detection? Detecting the presence of a body in an image? Localizing each of the arms, legs, joints, etc.?

  14. Mark February 28, 2018 at 7:17 am #

    Thanks Adrian,

    Brilliant stuff as usual. I managed to replicate that in C++ but it is still very slow on my RPi.
    Any idea if the Movidius stick can be used to boost the recognition part here?


    • Adrian Rosebrock February 28, 2018 at 1:52 pm #

      I haven’t tried this code on the Movidius, but in the previous post I used Caffe model weights + architecture for a MobileNet + SSD. It stands to reason that a ResNet + SSD would work as well. I would try loading the face detector via the Movidius, but I get the feeling that you might have to work with it a bit.

  15. Chris Burns February 28, 2018 at 11:38 am #

    Adrian, thank you for this post and some excellent insight into OpenCV. Can you elaborate on why you chose to use the VideoStream feature of imutils? I have a non-traditional setup (RPi3 with a custom ARM64 (aarch64) kernel). I’ve compiled OpenCV and everything looks good there, but the imutils vs.read() call is returning null. I was thinking about going to OpenCV’s VideoCapture but thought I would ask the above question before I started. Thanks!

    • Adrian Rosebrock February 28, 2018 at 1:53 pm #

      The VideoStream is my implementation of a faster, threaded frame polling class. It is compatible with both USB/built-in webcams along with the Raspberry Pi camera module. You could use OpenCV’s cv2.VideoCapture function but you’ll get an extra performance boost from VideoStream which is a must when working with the Raspberry Pi. You can read more about the VideoStream class here.

  16. Anish Varsha March 1, 2018 at 12:07 pm #

    Hey Adrian,

    I find the tutorial very useful; the differences between SSD and HOG detection are night and day. Can you suggest where I can find Face Recognition Using Deep Learning in OpenCV? Thanks!

  17. Dominic Pritham March 3, 2018 at 3:41 pm #

    This is super cool Adrian. It is very reliable. In traditional face detection, I have had issues when the face leaves the frame and re-enters. This is so exciting. Thank you so very much for writing this blog.

    • Adrian Rosebrock March 7, 2018 at 9:42 am #

      Thanks Dominic, I’m glad you enjoyed the post! 🙂

  18. Peshmerge March 6, 2018 at 10:36 am #

    Hi Adrian,

    Thanks for your great article. It’s really helpful! I have a couple of notes based on my observations while testing your code.
    I am writing now my thesis at Amsterdam University of Applied Sciences, and it’s about Facial detection and recognition on children. My target group is children aged between 0 and 12 years old.
    I am still busy with researching, but I tried your code just to build a fast proof-of-concept and it didn’t work well in the beginning. I adjusted 3 parameters and then it worked well.
    Those parameters were:
    1) --confidence, which is now 0.40
    2) x and y while calling cv2.dnn.blobFromImage(). In your original code it’s 300*300; in my code I just changed it to be the height of the input image.

    Here is the result of running your code without changing anything

    Here is the result after changing the parameters (the confidence doesn’t matter, 0.4 or 0.5)

    Do you have any explanation?

    Kind regards, Peshmerge

    • Adrian Rosebrock March 7, 2018 at 9:10 am #

      The confidence is the minimum probability. It is used to filter out weak(er) detections. You can set the confidence as you see best fit for your project/application.

      • Peshmerge March 12, 2018 at 7:18 am #

        Thanks for your reply! To be honest, adjusting the x and y just made the difference for me! I wonder why you chose to give it 300*300?

        • Adrian Rosebrock March 14, 2018 at 1:07 pm #

          300×300 is the typical input image size for SSDs and Faster R-CNNs.

  19. Amal March 6, 2018 at 11:02 pm #

    hey Adrian
    I applied the same code on my Raspberry Pi 3 but it works very slowly and reboots after a few seconds each time I run the code

    • Adrian Rosebrock March 7, 2018 at 9:08 am #

      Hey Amal — the OpenCV deep learning face detector will certainly run slow on the Pi. For fast, but less accurate face detection you should use Haar cascades. As for the Pi automatically rebooting, I’m not sure what the problem is. It sounds like a hardware problem or your Pi is overheating.

  20. Ed n. March 7, 2018 at 3:45 pm #

    Hi Adrian,

    Is there a good way to convert Caffe-based code to Keras? Using Caffe in production is kind of a hassle.


    • Adrian Rosebrock March 9, 2018 at 9:26 am #

      I would start by going through this resource which includes a number of deep learning model/weight converters.

  21. saverio March 8, 2018 at 9:24 am #

    I tried to run the compiled graph on the Movidius, but I’m a little bit confused about the returned value from graph.GetResult(): an array of shape (1407,)!

    For sure I made some mistake…

  22. Adanalı March 8, 2018 at 6:03 pm #

    Hi Adrian
    This is a bit off topic, but I was wondering if you would be so kind as to write an article on building a “people counter” with OpenCV — that is, a program that counts people going in and out of a building via a live webcam feed. There are no great resources available online for this, so if you would write one I’m sure it would drive plenty of traffic to your site. It’s a win for both of us!

    • Adrian Rosebrock March 9, 2018 at 8:54 am #

      Thank you for the suggestion. I will certainly consider it for the future.

  23. gopalakrishna March 9, 2018 at 9:26 pm #

    I am new to OpenCV; it would be a great help if you could tell me how to add the path in the argument parser (line 8 in the code)

    • Adrian Rosebrock March 14, 2018 at 1:28 pm #

      Take a look at this post to get started with command line arguments.

  24. lii March 15, 2018 at 9:25 am #

    hi, can someone help me with this error:

    [INFO] loading model…
    [INFO] starting video stream…
    Traceback (most recent call last):

    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock March 19, 2018 at 5:53 pm #

      OpenCV cannot access your webcam. See this post for more details.

  25. Kapil Goyal March 20, 2018 at 4:27 pm #

    This code is not working for a group photo with 50 people.

    • Adrian Rosebrock March 22, 2018 at 10:09 am #

      That’s not too surprising. If there are 50 people in the photo the faces are likely very small. Try increasing the resolution of the image before passing it into the network for detection.

  26. Kapil Goyal March 22, 2018 at 10:14 am #

    I want to create my own training set. How to train my own neural network in python for my college project?

    • Adrian Rosebrock March 22, 2018 at 10:18 am #

      I demonstrate how to train your first Convolutional Neural Network in this post. I would suggest starting there. If you’re interested in a deeper dive and understanding of how to train your own custom networks I would suggest Deep Learning for Computer Vision with Python where I discuss deep learning + computer vision in detail (including how to train your own custom networks). I hope that helps point you in the right direction!

      • Kapil Goyal March 22, 2018 at 12:31 pm #

        Does this mean that these files are not open source, that we can’t generate these files on our own, and that using these files would create copyright issues?

        • Adrian Rosebrock March 27, 2018 at 6:50 am #

          You would need to check the license associated with each model you wanted to use. Some are allowed for open source projects, others are just academic, and some do not allow commercial use. Typically if you wanted to use a model without restrictions you would need to train it yourself.

  27. Luan March 22, 2018 at 1:18 pm #

    Congratulations Adrian, great post.
    I am Brazilian and would like to know if there is a way to decrease the quality of the image, or the frames per second; it was very slow running on the Raspberry Pi

    • Adrian Rosebrock March 27, 2018 at 6:48 am #

      This method really isn’t meant to be run on the Raspberry Pi. You can decrease the resolution of the input image but it will still be too slow. See this post for more details. For the Raspberry Pi you should consider using Haar cascades if you need speed.

  28. Martina Rathna March 26, 2018 at 12:34 pm #

    Can you please tell me what went wrong?
    [INFO] loading model…
    OpenCV(3.4.1) Error: Unspecified error (FAILED: fs.is_open(). Can’t open “res10_300x300_ssd_iter_140000.caffemode”) in ReadProtoFromBinaryFile, file /home/pi/opencv-3.4.1/modules/dnn/src/caffe/caffe_io.cpp, line 1126
    Traceback (most recent call last):
    File “detect_faces.py”, line 23, in
    net = cv2.dnn.readNetFromCaffe(args[“prototxt”], args[“model”])
    (-2) FAILED: fs.is_open(). Can’t open “res10_300x300_ssd_iter_140000.caffemode” in function ReadProtoFromBinaryFile

    • Adrian Rosebrock March 27, 2018 at 6:18 am #

      It looks like your path to the input .caffemodel file is incorrect. This is most likely due to not correctly specifying the command line arguments path. If you’re new to command line arguments, that’s okay, but you should read up on them first.

  29. Abhilash March 26, 2018 at 3:20 pm #

    Please let us know how to add the PROTOTXT and MODEL paths

  30. chopin March 27, 2018 at 2:36 am #

    Hi Adrian, thank you very much for your generosity. I am very fortunate to have met you here.

    I carefully read your blogs about object detection and Pi projects many times. It is undeniable that they have helped me very much and given me a lot of inspiration. I admire your knowledge and ability; I almost became your fan.
    I’m a sophomore and I am really interested in computer vision. I wanted to train my own object detection model for a scene of my own after reading your blog three weeks ago (Real-time object detection with deep learning and OpenCV); it’s great and very funny. So I used these days to search object detection papers and I know YOLO and SSD are great. I successfully configured the environment for caffe-ssd (git clone https://github.com/weiliu89/caffe.git ) on Ubuntu 16.04. It can run ssd_pascal_webcam.py and ssd_pascal_video.py, but when I run examples/ssd/ssd_pascal.py to train Pascal VOC data, I got an error. I spent three days trying to fix this error. I can’t remember how many times it was recompiled, but the problem persists. I asked on GitHub but I didn’t get a reply. (The error issue: https://github.com/weiliu89/caffe/issues/875)

    I remember I received your patient reply. It feels warm. I would like you to take a look at this error and give me some advice on how to fix it, if you have time and would like to. Thanks again.

    • Adrian Rosebrock March 27, 2018 at 6:09 am #

      Hey Chopin — thank you for the kind words, I really appreciate that. Your comment really made my day 🙂

      While I’ve used Caffe quite a bit to train image classification networks, I must admit that I have not used it to build an LMDB database and train an SSD for object detection, so I’m not sure what the exact error is. Most of the work I’ve done with object detection involves either Keras, mxnet, or the TensorFlow Object Detection API (TFOD API). I would recommend starting with the TFOD API to get your feet wet.

  31. Cedric March 28, 2018 at 11:30 am #

    Hi Adrian,
    I tried this code using the Movidius and the Raspberry Pi. I interpreted the output similar to this post of yours:

    Unfortunately the face/faces are not detected in the right positions. The maximum confidence of all bounding boxes is around 40 %.

    Any advice on how I can update the model for usage on the Movidius?

    Thanks a lot!

    • Adrian Rosebrock March 30, 2018 at 7:18 am #

      Hey Cedric — are you confident that it’s the model itself? If the bounding boxes are in an incorrect position there might be a bug in decoding the (x, y)-coordinates from the model.

  32. Martin Faucheux March 29, 2018 at 11:27 am #

    Hey Adrian! Every time I’m looking for some help on a computer vision project, I come back to one of your tutorials. They are just excellent, clear, and complete! Thanks a lot; you’re really helping me!

    • Adrian Rosebrock March 30, 2018 at 6:58 am #

      Thanks so much Martin, I really appreciate that! 🙂

  33. ryan March 29, 2018 at 3:36 pm #

    I tried the script and overall it works well. However, I noticed that when I moved my face very close to my camera, it started drawing a second rectangle adjacent to the correctly identified one. This issue appeared more easily (meaning at a shorter distance between the face and the camera) when I increased the image size of the frame input to cv2.dnn.blobFromImage (e.g. from width=300 to a larger width). Any advice on why it’s happening and how to fix it would be much appreciated!

    • Adrian Rosebrock March 30, 2018 at 6:51 am #

      Object detectors are not perfect, so you are bound to see some false positives. The SSD algorithm works (at a very simplistic level) by dividing your image into boxes and classifying each of them. Since your face takes up most of the frame when it is close to the camera, there are likely a large number of boxes that contain face-like regions. This would explain why you may see a detection adjacent to the real one.
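
      If the duplicate boxes persist, one common remedy is an extra round of non-maximum suppression (NMS) on the returned detections. Below is a minimal sketch of NMS; the (startX, startY, endX, endY) box layout matches this post’s detector output, while the 0.3 overlap threshold is just an illustrative default:

```python
import numpy as np

def non_max_suppression(boxes, scores, overlap_thresh=0.3):
    # boxes: (N, 4) array of (startX, startY, endX, endY); scores: (N,)
    # Returns the indices of the boxes to keep.
    if len(boxes) == 0:
        return []
    boxes = boxes.astype("float")
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    area = (x2 - x1 + 1) * (y2 - y1 + 1)
    idxs = np.argsort(scores)          # ascending by confidence
    keep = []
    while len(idxs) > 0:
        i = idxs[-1]                   # highest-scoring remaining box
        keep.append(int(i))
        # overlap of every other remaining box with box i
        xx1 = np.maximum(x1[i], x1[idxs[:-1]])
        yy1 = np.maximum(y1[i], y1[idxs[:-1]])
        xx2 = np.minimum(x2[i], x2[idxs[:-1]])
        yy2 = np.minimum(y2[i], y2[idxs[:-1]])
        w = np.maximum(0, xx2 - xx1 + 1)
        h = np.maximum(0, yy2 - yy1 + 1)
        overlap = (w * h) / area[idxs[:-1]]
        # drop box i from the pool plus anything overlapping it too much
        idxs = idxs[:-1][overlap <= overlap_thresh]
    return keep

# two heavily overlapping boxes plus one separate box
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]])
scores = np.array([0.9, 0.6, 0.8])
print(non_max_suppression(boxes, scores))  # -> [0, 2]
```

      Any detection whose overlap with a higher-scoring box exceeds the threshold is discarded, so only one box per face survives.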

  34. Martin Faucheux March 30, 2018 at 5:47 am #

    Hey Adrian ! Thanks again for this post, it is great !
    I need to recognize smaller faces on my video stream. Is it possible to adjust some parameters here to fit my problem without training my own model ? Maybe something in the blobToImage function ? I lack time and compute power.

    • Adrian Rosebrock March 30, 2018 at 6:41 am #

      Yes, you’ll want to modify this line:

      And change it to:

      blob = cv2.dnn.blobFromImage(cv2.resize(image, (NEW_WIDTH, NEW_HEIGHT)), 1.0,
      (NEW_WIDTH, NEW_HEIGHT), (104.0, 177.0, 123.0))

      Using larger values for NEW_WIDTH and NEW_HEIGHT.
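
      Note that whatever size you feed the network, the detections it returns are normalized to [0, 1], so mapping them back to pixels only uses the original image dimensions. A quick sketch with a made-up detection row (the array layout follows this post’s sample code):

```python
import numpy as np

# hypothetical single detection row from net.forward():
# [batch_id, class_id, confidence, startX, startY, endX, endY]
detection = np.array([0.0, 1.0, 0.98, 0.25, 0.20, 0.75, 0.90])

(h, w) = (480, 640)  # original image size, NOT the blob size
box = detection[3:7] * np.array([w, h, w, h])
(startX, startY, endX, endY) = box.astype("int")
print(startX, startY, endX, endY)  # -> 160 96 480 432
```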

      • Martin Faucheux March 30, 2018 at 11:02 am #

        Cool, thanks! I also read the blob tutorial but I didn’t really get why you need to resize the image. Also, what is the other size parameter (the provided (300, 300))?
        Should this size match the resized image?

  35. Peshmerge April 3, 2018 at 5:26 am #

    Hi Adrian,

    I have a question! Can I give feedback back to OpenCV to update the model? Let me explain what I mean. For example, I run the program on an image to detect faces, but one of the detected objects isn’t a face; the program just identifies it as one. Is there a way to return a value/parameter back to the program so that it updates its model, learns that the detected object isn’t a face, and corrects itself?

    I hope my question is clear!

    Kind regards,

    • Adrian Rosebrock April 4, 2018 at 12:16 pm #

      There are fewer parameters to tune with CNN-based detectors, as opposed to HOG + Linear SVM or Haar cascades, which is both a good thing and a bad thing. I would suggest trying different image sizes, both smaller and larger, to see if it has an impact on the quality of your detections.

      • Peshmerge April 6, 2018 at 5:49 am #

        Thanks Adrian!

  36. Ajithkumar April 4, 2018 at 2:33 am #

    Drawing multiple boxes for a single face.

  37. A.N. O'Nyme April 11, 2018 at 5:15 am #

    I think that the mean value for the colors should be 104, 117, 123 instead of 104, 177, 123 (it is the mean value used in the training prototxt).

  38. Yunui April 12, 2018 at 12:51 pm #

    Hi Adrian,
    Thanks again for the great post. I just have a simple question.

    Which training dataset is used for this res10_300x300_ssd_iter_140000 model?

    I have searched a lot online but the only thing I have found is this link : https://github.com/opencv/opencv/blob/master/samples/dnn/face_detector/how_to_train_face_detector.txt

    The link says “The model was trained in Caffe framework on some huge and available online dataset.”

    Do you know the dataset in which the model is trained?

    Kind Regards


    • Adrian Rosebrock April 13, 2018 at 6:45 am #

      I do not know off the top of my head. You would need to reach out to Aleksandr Rybnikov, the creator of the model and “dnn” module in OpenCV.

  39. Abdulkadir April 18, 2018 at 3:02 am #

    I am registering faces for face recognition. If more than one person passes in front of the camera, the faces get mixed up. How can I separate them? Please help me.

    • Adrian Rosebrock April 18, 2018 at 2:57 pm #

      You would need to detect both faces in the frame and identify each of them individually. Whatever model you are using for detection should localize each face. If a face is too obfuscated you will not be able to recognize it.

  40. Mat April 19, 2018 at 2:43 pm #

    Hi Adrian!

    Is it possible to count the number of people on the screen at the same time with this code? (Using a webcam)



    • Adrian Rosebrock April 20, 2018 at 9:58 am #

      Yep. You can create a “counter” variable that counts the total number of faces. Something like:

      Would work well.

      If you’re new to working with OpenCV and computer vision for these types of applications, I would suggest reading through Practical Python and OpenCV. Inside you’ll learn the fundamentals of computer vision and image processing; I also include chapters on face counting, which would resolve your exact question.
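
      As a rough sketch of such a counter (the (1, 1, N, 7) detections shape and the confidence column come from this post’s code; the 0.5 threshold is illustrative):

```python
import numpy as np

def count_faces(detections, conf_threshold=0.5):
    # detections: the (1, 1, N, 7) array returned by net.forward();
    # column 2 of each detection row holds the confidence score
    total = 0
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] > conf_threshold:
            total += 1
    return total

# toy example: two confident detections and one weak one
fake = np.zeros((1, 1, 3, 7))
fake[0, 0, :, 2] = [0.9, 0.3, 0.7]
print(count_faces(fake))  # -> 2
```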

  41. Ali Hormati May 10, 2018 at 12:45 pm #

    Hi Adrian

    Thanks for this great post.

    If one wants to combine multiple detectors to get a higher accuracy, what approach do you suggest? Will it be useful?


    • Adrian Rosebrock May 14, 2018 at 12:14 pm #

      I’m not sure what you mean by “combine” in this context. Are you referring to combining Haar cascades, HOG + Linear SVM, and deep learning-based detectors into a sort of “meta” detector?

  42. Raunak May 13, 2018 at 11:24 pm #

    Hi. I’m running the code on a google colab python notebook, with the required files uploaded to my drive. I’m getting the following output on running the code:

    [INFO] loading model…
    [INFO] computing object detections…
    [ INFO:0] Initialize OpenCL runtime…
    : cannot connect to X server

    Please advise.
    Great article, though.

    • Adrian Rosebrock May 14, 2018 at 11:55 am #

      I’m not familiar with the Google cloud setup here, but I assume the Google cloud notebook does not have an X server installed. You won’t be able to access your webcam or display a video stream using the notebook. I would suggest executing the code on your local system.

  43. Pierre May 27, 2018 at 9:28 pm #

    Hello, I need information on facial recognition and not just facial detection.
    Can you help me?

    • Adrian Rosebrock May 28, 2018 at 9:32 am #

      Hey Pierre, thanks for the comment. I cover facial recognition inside the PyImageSearch Gurus course. Be sure to give it a look!

  44. Mukul Sharma May 31, 2018 at 3:09 pm #

    As usual, a very good post. I have a question: if I use MobileNet for the SSD, will it be faster than the one provided by OpenCV, and what are the accuracy tradeoffs of using MobileNet? Since we are pushing towards embedded systems, what, in your opinion, is the best model to run on a Raspberry Pi (with good accuracy)?

    • Adrian Rosebrock June 5, 2018 at 8:34 am #

      I would suggest you read the MobileNet and SSD papers to understand speed/accuracy tradeoffs. The gist is that using MobileNet as a base network to an SSD is typically faster but less accurate. Again, you’ll want to read the papers for more details.

  45. David June 5, 2018 at 2:01 pm #

    Hey Adrian,

    thanks so much for the useful tutorials and code! They helped me a lot. I have a somewhat weird question: is there any way to implement the 5-point landmark detection (from your later post: https://www.pyimagesearch.com/2018/04/02/faster-facial-landmark-detector-with-dlib/) with the deep learning face detection? Because the DL face detection works better with profile views of the face, and this would be something really useful for my research. Thanks for your help! Cheers

    • Adrian Rosebrock June 7, 2018 at 3:19 pm #

      Yes. Once you have the bounding box coordinates of the face you can convert them to a dlib “rectangle” object and then apply the facial landmark detector. This post shows you how to do it but you’ll want to swap out the 68-point detector for the 5-point detector.
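
      A small sketch of that conversion (pure Python so it runs without dlib; the dlib calls are shown only as comments, and the normalized box format matches this post’s detector):

```python
def detection_box_to_corners(box, w, h):
    # box: normalized (startX, startY, endX, endY) from the face
    # detector, scaled to pixels and clamped to the image bounds
    startX = max(0, int(box[0] * w))
    startY = max(0, int(box[1] * h))
    endX = min(w - 1, int(box[2] * w))
    endY = min(h - 1, int(box[3] * h))
    return (startX, startY, endX, endY)

# with dlib installed you would then do (not executed here):
#   rect = dlib.rectangle(left=startX, top=startY, right=endX, bottom=endY)
#   shape = predictor(gray, rect)  # 5- or 68-point landmark predictor
print(detection_box_to_corners((0.25, 0.20, 0.75, 0.90), 640, 480))
```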

  46. dan June 6, 2018 at 5:04 pm #

    Is there some way to specify the camera to be used with the code? I have multiple cameras and want to specify the one for face detection. I am using a udev rule that creates a symlink for each camera, so that there are unique names for them, such as “/dev/vidFaceDetector”

    I noticed that if I put the Ubuntu assigned name, such as “/dev/video1” into line 25, it works:
    vs = VideoStream(‘/dev/video1’).start()

    but putting in the symlink name does not:
    vs = VideoStream(‘/dev/vidFaceDetector’).start()

    but the Ubuntu assigned name changes, so it is no more useful than just the index number.

    • dan June 6, 2018 at 5:09 pm #

      here is the error that occurs when using the symlink:

      [INFO] starting video stream…
      Unable to stop the stream: Inappropriate ioctl for device

  47. dan June 6, 2018 at 6:35 pm #

    Sorry for the string of replies, there does not seem to be a way to edit the original one.

    I tried to see if I could get the ubuntu assigned name from the symlink using

    for camera in glob.glob(“/dev/vid*”):
    print(camera, os.readlink(camera))

    but the output is the bus address:

    /dev/vidFaceDetector bus/usb/001/006

    Is there a way to map the bus address to the corresponding “/dev/videoX” device?

    • Adrian Rosebrock June 7, 2018 at 3:05 pm #

      This is a great question, and I remember another reader asking the same question on another blog post. To be totally candid, I do not know the solution to this problem as I’ve never encountered it, but it apparently does happen. I would suggest either (1) posting on the official OpenCV forums or (2) opening an issue on OpenCV’s GitHub.

      If you do find out the solution can you come back and post it on this thread so others can learn from it as well?

      Thanks so much!
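
      For readers hitting the same problem, one possible (untested) approach is to map the USB bus address behind the udev symlink back to its /dev/videoX node via /sys/class/video4linux; this sketch assumes the standard Linux sysfs layout, and the symlink name comes from the comment above:

```python
import glob
import os

def parse_bus_dev(link_target):
    # e.g. "bus/usb/001/006" -> (1, 6)
    parts = link_target.strip("/").split("/")
    return (int(parts[-2]), int(parts[-1]))

def video_node_for(symlink="/dev/vidFaceDetector"):
    # Resolve a udev symlink that points at a USB bus address back
    # to the /dev/videoX node that OpenCV/V4L2 can actually open.
    bus, dev = parse_bus_dev(os.readlink(symlink))
    for node in glob.glob("/sys/class/video4linux/video*"):
        # .../videoX/device is a symlink to the USB *interface* dir;
        # its parent is the USB *device* dir holding busnum/devnum
        usb_dev = os.path.dirname(os.path.realpath(os.path.join(node, "device")))
        try:
            with open(os.path.join(usb_dev, "busnum")) as f:
                b = int(f.read())
            with open(os.path.join(usb_dev, "devnum")) as f:
                d = int(f.read())
        except (IOError, ValueError):
            continue
        if (b, d) == (bus, dev):
            return "/dev/" + os.path.basename(node)
    return None

print(parse_bus_dev("bus/usb/001/006"))  # -> (1, 6)
```

      The resulting path could then be passed to VideoStream in place of the hard-coded /dev/video1.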

  48. murat June 8, 2018 at 7:49 am #

    I can make your code work by adjusting (300, 300) to the size of the images I have, and it normally works perfectly. However, now I have to use 12 MP (4056×3040) images. I adjusted the size argument in the blobFromImage function the same way I used to, but somehow it is not working anymore. I also tried to adjust the input_shape part in the deploy.prototxt.txt file but I couldn’t get any result.

    Do you have any advice for this problem ?

    Thanks so much

    • Adrian Rosebrock June 13, 2018 at 6:15 am #

      Correct, this network is fully convolutional so you can update the size of the input images and it should still work. As far as your 12MP images go, I’m not sure what the problem is there. What were the previous image sizes you were using that the network was still working?

  49. Aditya Mishra June 13, 2018 at 3:34 am #

    Hi Adrian,

    Thanks for the awesome article! However, I was unable to understand why detect_faces.py would only detect faces. There are only 2 things that seem to do the trick:
    1. deploy.prototxt file
    2. res10_300x300_ssd_iter_140000.caffemodel

    The prototxt file shows the configuration of the model, so I assume the “res10_300x300_ssd_iter_140000.caffemodel” is responsible for the face detection.

    So, I wanted to know whether it is possible to replace the above model weights with other model weights used for detecting other objects (say cars, trees, street lights, etc.), along with the matching prototxt file, follow the rest of the tutorial as-is, and expect it to work just fine?

    Could you point me to some other example for detecting other objects that you know of, following a similar approach?

    • Adrian Rosebrock June 13, 2018 at 5:24 am #

      Correct, the .prototxt file contains the model definition and the .caffemodel contains the actual weights for the model. Together, they are used to detect objects in images — in this case faces. You can replace these files with other models trained on various objects and recognize them as well.

  50. Huzefa June 15, 2018 at 12:30 am #

    Hey, great post! I had one question: is this algorithm cloud-based, or can it also work on the edge?

    • Adrian Rosebrock June 15, 2018 at 12:03 pm #

      This algorithm is not cloud-based. It will run locally.

  51. huzefa June 15, 2018 at 1:53 am #

    Hey Adrian! What is the use of OpenCL in this algorithm?

    • Adrian Rosebrock June 15, 2018 at 12:02 pm #

      Sorry, are you asking how to use OpenCL with this example? Or what OpenCL is?

  52. Eric Nguyen June 18, 2018 at 3:17 pm #

    Hi Adrian,

    Should this work OK with dlib’s 68-point facial landmark predictor? I tried taking the bounding box from this tutorial and passing it to dlib’s keypoint predictor, but it’s really unstable, i.e., when moving my head side to side (pitch-wise) the points are predicted incorrectly and jump everywhere. I even cropped the bounding box to make it square before passing it to dlib, and it still didn’t work well. It seemed like dlib’s HOG face detector worked better. Any idea why? Thanks so much!

    • Adrian Rosebrock June 19, 2018 at 8:37 am #

      OpenCV and dlib order bounding box coordinates differently so I think that might be your issue. Take a look at this blog post where I take the bounding box coordinates from a Haar cascade and construct a dlib rectangle object from it. You should do the same with the deep learning face detection coordinates.
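
      Concretely, Haar cascades return boxes as (x, y, w, h), while dlib rectangles are built from (left, top, right, bottom) corners, which is what this post’s deep learning detector already produces. A tiny sketch of the Haar-style conversion (the dlib call is only a comment, since dlib may not be installed):

```python
def haar_to_corners(x, y, w, h):
    # OpenCV Haar cascade box (x, y, w, h) -> (left, top, right, bottom)
    return (x, y, x + w, y + h)

# with dlib installed (not executed here):
#   rect = dlib.rectangle(*haar_to_corners(x, y, w, h))
print(haar_to_corners(10, 20, 100, 120))  # -> (10, 20, 110, 140)
```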


  1. Python, argparse, and command line arguments - PyImageSearch - March 12, 2018

    […] Adrian, I just downloaded the source code to your deep learning face detection blog post, but when I execute it I get the following […]
