Facial landmarks with dlib, OpenCV, and Python

Last week we learned how to install and configure dlib on our system with Python bindings.

Today we are going to use dlib and OpenCV to detect facial landmarks in an image.

Facial landmarks are used to localize and represent salient regions of the face, such as:

  • Eyes
  • Eyebrows
  • Nose
  • Mouth
  • Jawline

Facial landmarks have been successfully applied to face alignment, head pose estimation, face swapping, blink detection and much more.

In today’s blog post we’ll be focusing on the basics of facial landmarks, including:

  1. Exactly what facial landmarks are and how they work.
  2. How to detect and extract facial landmarks from an image using dlib, OpenCV, and Python.

In the next blog post in this series we’ll take a deeper dive into facial landmarks and learn how to extract specific facial regions based on these facial landmarks.

To learn more about facial landmarks, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Facial landmarks with dlib, OpenCV, and Python

The first part of this blog post will discuss facial landmarks and why they are used in computer vision applications.

From there, I’ll demonstrate how to detect and extract facial landmarks using dlib, OpenCV, and Python.

Finally, we’ll look at some results of applying facial landmark detection to images.

What are facial landmarks?

Figure 1: Facial landmarks are used to label and identify key facial attributes in an image (source).

Detecting facial landmarks is a subset of the shape prediction problem. Given an input image (and normally an ROI that specifies the object of interest), a shape predictor attempts to localize key points of interest along the shape.

In the context of facial landmarks, our goal is to detect important facial structures on the face using shape prediction methods.

Detecting facial landmarks is therefore a two step process:

  • Step #1: Localize the face in the image.
  • Step #2: Detect the key facial structures on the face ROI.

Face detection (Step #1) can be achieved in a number of ways.

We could use OpenCV’s built-in Haar cascades.

We might apply a pre-trained HOG + Linear SVM object detector specifically for the task of face detection.

Or we might even use deep learning-based algorithms for face localization.

In any case, the actual algorithm used to detect the face in the image doesn’t matter. Instead, what’s important is that through some method we obtain the face bounding box (i.e., the (x, y)-coordinates of the face in the image).

Given the face region we can then apply Step #2: detecting key facial structures in the face region.

There are a variety of facial landmark detectors, but all methods essentially try to localize and label the following facial regions:

  • Mouth
  • Right eyebrow
  • Left eyebrow
  • Right eye
  • Left eye
  • Nose
  • Jaw

The facial landmark detector included in the dlib library is an implementation of the One Millisecond Face Alignment with an Ensemble of Regression Trees paper by Kazemi and Sullivan (2014).

This method starts by using:

  1. A training set of labeled facial landmarks on an image. These images are manually labeled, specifying specific (x, y)-coordinates of regions surrounding each facial structure.
  2. Priors, or more specifically, the probability on distance between pairs of input pixels.

Given this training data, an ensemble of regression trees is trained to estimate the facial landmark positions directly from the pixel intensities themselves (i.e., no “feature extraction” is taking place).

The end result is a facial landmark detector that can be used to detect facial landmarks in real-time with high quality predictions.

For more information and details on this specific technique, be sure to read the paper by Kazemi and Sullivan linked to above, along with the official dlib announcement.

Understanding dlib’s facial landmark detector

The pre-trained facial landmark detector inside the dlib library is used to estimate the location of 68 (x, y)-coordinates that map to facial structures on the face.

The indexes of the 68 coordinates can be visualized on the image below:

Figure 2: Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset (higher resolution).

These annotations are part of the 68 point iBUG 300-W dataset which the dlib facial landmark predictor was trained on.

It’s important to note that other flavors of facial landmark detectors exist, including the 194 point model that can be trained on the HELEN dataset.

Regardless of which dataset is used, the same dlib framework can be leveraged to train a shape predictor on the input training data — this is useful if you would like to train facial landmark detectors or custom shape predictors of your own.

In the remainder of this blog post I’ll demonstrate how to detect these facial landmarks in images.

Future blog posts in this series will use these facial landmarks to extract specific regions of the face, apply face alignment, and even build a blink detection system.

Detecting facial landmarks with dlib, OpenCV, and Python

In order to prepare for this series of blog posts on facial landmarks, I’ve added a few convenience functions to my imutils library, specifically inside face_utils.py.

We’ll be reviewing two of these functions inside face_utils.py now and the remaining ones next week.

The first utility function is rect_to_bb, short for “rectangle to bounding box”:
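The listing below is a minimal sketch of this helper, reconstructed from the description that follows — the exact layout in imutils may differ slightly:

```python
def rect_to_bb(rect):
	# take a bounding box predicted by dlib and convert it
	# to the format (x, y, w, h) as we would normally do
	# with OpenCV
	x = rect.left()
	y = rect.top()
	w = rect.right() - x
	h = rect.bottom() - y

	# return a tuple of (x, y, w, h)
	return (x, y, w, h)
```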

This function accepts a single argument, rect, which is assumed to be a bounding box rectangle produced by a dlib detector (i.e., the face detector).

The rect object includes the (x, y)-coordinates of the detection.

However, in OpenCV, we normally think of a bounding box in terms of “(x, y, width, height)”, so as a matter of convenience, the rect_to_bb function takes this rect object and transforms it into a 4-tuple of coordinates.

Again, this is simply a matter of convenience and taste.

Secondly, we have the shape_to_np function:
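A minimal sketch of shape_to_np, reconstructed from the description that follows (the dtype parameter and the hard-coded 68 points are assumptions based on the 68-point predictor used in this post):

```python
import numpy as np

def shape_to_np(shape, dtype="int"):
	# initialize the array of (x, y)-coordinates
	coords = np.zeros((68, 2), dtype=dtype)

	# loop over the 68 facial landmarks and convert them
	# to a 2-tuple of (x, y)-coordinates
	for i in range(0, 68):
		coords[i] = (shape.part(i).x, shape.part(i).y)

	# return the array of (x, y)-coordinates
	return coords
```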

The dlib face landmark detector will return a shape object containing the 68 (x, y)-coordinates of the facial landmark regions.

Using the shape_to_np function, we can convert this object to a NumPy array, allowing it to “play nicer” with our Python code.

Given these two helper functions, we are now ready to detect facial landmarks in images.

Open up a new file, name it facial_landmarks.py, and insert the following code:
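A sketch of the script’s opening, consistent with the line references discussed next (Lines 2-7 are the imports, Lines 10-15 the argument parser; exact spacing and comments are approximate):

```python
# import the necessary packages
from imutils import face_utils
import numpy as np
import argparse
import imutils
import dlib
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
	help="path to facial landmark predictor")
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
args = vars(ap.parse_args())
```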

Lines 2-7 import our required Python packages.

We’ll be using the face_utils submodule of imutils to access our helper functions detailed above.

We’ll then import dlib. If you don’t already have dlib installed on your system, please follow the instructions in my previous blog post to get your system properly configured.

Lines 10-15 parse our command line arguments:

  • --shape-predictor: This is the path to dlib’s pre-trained facial landmark detector. You can download the detector model here or you can use the “Downloads” section of this post to grab the code + example images + pre-trained detector as well.
  • --image: The path to the input image that we want to detect facial landmarks on.

Now that our imports and command line arguments are taken care of, let’s initialize dlib’s face detector and facial landmark predictor:
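Continuing the script, a sketch of this initialization step (consistent with the Line 19 and 20 references that follow; this fragment assumes the imports and args dictionary from earlier in the script):

```python
# initialize dlib's face detector (HOG + Linear SVM based) and then
# load the facial landmark predictor from the supplied model path
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
```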

Line 19 initializes dlib’s pre-trained face detector based on a modification to the standard Histogram of Oriented Gradients + Linear SVM method for object detection.

Line 20 then loads the facial landmark predictor using the path to the supplied --shape-predictor.

But before we can actually detect facial landmarks, we first need to detect the face in our input image:
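A sketch of the loading and detection step (consistent with the Line 23-28 references below; variable names are assumptions, and the fragment continues the script above):

```python
# load the input image, resize it to a width of 500 pixels,
# and convert it to grayscale
image = cv2.imread(args["image"])
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces in the grayscale image, upsampling once
rects = detector(gray, 1)
```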

Line 23 loads our input image from disk via OpenCV, then pre-processes the image by resizing to have a width of 500 pixels and converting it to grayscale (Lines 24 and 25).

Line 28 handles detecting the bounding box of faces in our image.

The first parameter to the detector is our grayscale image (although this method can work with color images as well).

The second parameter is the number of image pyramid layers to apply when upscaling the image prior to applying the detector (this is the equivalent of computing cv2.pyrUp N times on the image).

The benefit of increasing the resolution of the input image prior to face detection is that it may allow us to detect more faces in the image — the downside is that the larger the input image, the more computationally expensive the detection process is.

Given the (x, y)-coordinates of the faces in the image, we can now apply facial landmark detection to each of the face regions:
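A sketch of the landmark-detection loop, consistent with the Line 31-54 references below (drawing colors, circle radius, and text placement are assumptions; the fragment continues the script above):

```python
# loop over the face detections
for (i, rect) in enumerate(rects):
	# determine the facial landmarks for the face region, then
	# convert the landmark (x, y)-coordinates to a NumPy array
	shape = predictor(gray, rect)
	shape = face_utils.shape_to_np(shape)

	# convert dlib's rectangle to an OpenCV-style bounding box
	# [i.e., (x, y, w, h)], then draw the face bounding box
	(x, y, w, h) = face_utils.rect_to_bb(rect)
	cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

	# show the face number above the bounding box
	cv2.putText(image, "Face #{}".format(i + 1), (x - 10, y - 10),
		cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

	# loop over the (x, y)-coordinates for the facial landmarks
	# and draw each of them on the image
	for (x, y) in shape:
		cv2.circle(image, (x, y), 1, (0, 0, 255), -1)

# show the output image with the face detections + facial landmarks
cv2.imshow("Output", image)
cv2.waitKey(0)
```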

We start looping over each of the face detections on Line 31.

For each of the face detections, we apply facial landmark detection on Line 35, giving us the 68 (x, y)-coordinates that map to the specific facial features in the image.

Line 36 then converts the dlib shape object to a NumPy array with shape (68, 2).

Lines 40 and 41 draw the bounding box surrounding the detected face on the image while Lines 44 and 45 draw the index of the face.

Finally, Lines 49 and 50 loop over the detected facial landmarks and draw each of them individually.

Lines 53 and 54 simply display the output image to our screen.

Facial landmark visualizations

Before we test our facial landmark detector, make sure you have upgraded to the latest version of imutils which includes the face_utils.py file:
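Upgrading is a single pip command (the same one quoted in the comments below):

```shell
$ pip install --upgrade imutils
```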

Note: If you are using Python virtual environments, make sure you upgrade the imutils inside the virtual environment.

From there, use the “Downloads” section of this guide to download the source code, example images, and pre-trained dlib facial landmark detector.

Once you’ve downloaded the .zip archive, unzip it, change directory to facial-landmarks, and execute the following command:
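Based on the invocation quoted in the comments below, the command looks like this (paths are relative to the unzipped archive):

```shell
$ python facial_landmarks.py --shape-predictor shape_predictor_68_face_landmarks.dat \
	--image images/example_01.jpg
```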

Figure 3: Applying facial landmark detection using dlib, OpenCV, and Python.

Notice how the bounding box of my face is drawn in green while each of the individual facial landmarks is drawn in red.

The same is true for this second example image:

Figure 4: Facial landmarks with dlib.

Here we can clearly see that the red circles map to specific facial features, including my jawline, mouth, nose, eyes, and eyebrows.

Let’s take a look at one final example, this time with multiple people in the image:

Figure 5: Detecting facial landmarks for multiple people in an image.

For both people in the image (myself and Trisha, my fiancée), our faces are not only detected but also annotated via facial landmarks as well.


Summary

In today’s blog post we learned what facial landmarks are and how to detect them using dlib, OpenCV, and Python.

Detecting facial landmarks in an image is a two step process:

  1. First we must localize the face(s) in an image. This can be accomplished using a number of different techniques, but normally involves either Haar cascades or HOG + Linear SVM detectors (but any approach that produces a bounding box around the face will suffice).
  2. Apply the shape predictor, specifically a facial landmark detector, to obtain the (x, y)-coordinates of the face regions in the face ROI.

Given these facial landmarks we can apply a number of computer vision techniques, including:

  • Face part extraction (i.e., nose, eyes, mouth, jawline, etc.)
  • Facial alignment
  • Head pose estimation
  • Face swapping
  • Blink detection
  • …and much more!

In next week’s blog post I’ll be demonstrating how to access each of the face parts individually and extract the eyes, eyebrows, nose, mouth, and jawline features simply by using a bit of NumPy array slicing magic.

To be notified when this next blog post goes live, be sure to enter your email address in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


126 Responses to Facial landmarks with dlib, OpenCV, and Python

  1. Anto April 3, 2017 at 11:24 am #

    Superrrrrbbbbbbbbbb blog .searched continuosly about facial landmarks.wellll explained.looking forward for face recongnition using facial landmark measurements……please go ahead soon..!!!!
    awesome blog….!!!!!!

    • Adrian Rosebrock April 3, 2017 at 1:50 pm #

      Thanks Anto!

      • elliot August 20, 2017 at 3:06 am #

        thanks for providing such nice code and helping people.
        Can you please guide me how to save extracted landmarks in .mat file so i could use it in matlab.

    • Jaechang Shim June 6, 2017 at 3:15 am #

      It’s great!! Working very well, Thanks a lot!!

  2. Danny April 3, 2017 at 3:51 pm #

    Thank you so much Adrian!!

  3. Mário April 3, 2017 at 8:37 pm #

    Very good job Adrian.
    All of your explanation are very useful.
    This one, in special, is very important for my research.
    Thank you a lot!!!

    • Adrian Rosebrock April 5, 2017 at 12:03 pm #

      Thank you Mário! 🙂 I wish you the best of luck with your research.

  4. Abkul Orto April 4, 2017 at 4:47 am #

    This is a clear, clean, and EXCELLENT demystification of the concept.

    Any plan to include this concept and the Deep learning version of training and implementation in your up coming Deep learning book?

    • Adrian Rosebrock April 5, 2017 at 12:01 pm #

      Thanks Abkul, I’m glad you enjoyed the tutorial!

      I don’t plan on covering how to train custom facial landmark detectors via deep learning inside Deep Learning for Computer Vision with Python, but I will consider it for a future tutorial.

  5. Dimitri April 4, 2017 at 6:33 am #

    This blog is a goldmine. Thank you so much for writing this.

    • Adrian Rosebrock April 5, 2017 at 12:00 pm #

      I’m glad you’re enjoying the blog Dimitri, I’m happy to share 🙂

  6. Thimira Amaratunga April 4, 2017 at 12:23 pm #

    Hi Adrian,

    This is a great article. Cant wait for next week’s article on how to access face features individually.

    After some experimenting (and with your hint on array slicing), I managed to extract the features. Here is the method I used,

    Undoubtedly, this method I used could use more improvements.
    So, waiting for your article 🙂

    • Adrian Rosebrock April 5, 2017 at 11:57 am #

      Nice job Thimira. The method I will demonstrate in next week’s blog post is similar, but uses the face_utils sub-module of imutils for convenience. I’ll also be demonstrating how to draw semi-transparent overlays for each region of the face.

  7. Neeraj Kumar April 4, 2017 at 2:58 pm #

    Hey Adrian,

    I have already configured dlib with your previous week blog and now when i am trying to run “python facial_landmarks.py –shape-predictor shape_predictor_68_face_landmarks.dat \
    –image images/example_01.jpg” command my ubuntu terminal is showing error
    “python: can’t open file ‘facial_landmarks.py’ : [Errno 2] no such file or directory “.

    PS : I have already downloaded your code and files and i am running my code inside that ‘facial-landmarks’ folder. All the files are present as well.

  8. Neeraj Kumar April 4, 2017 at 3:13 pm #

    Dear Adrian,

    Fixed the previous issue by providing the full path of py file. Thanks for this great blog.

    Thanks and Regards
    Neeraj Kumar

    • Rehan Shaikh September 4, 2017 at 2:34 am #

      How did you managed to remove this path issue ?
      Where do we have to specify the path ?

  9. Neeraj Kumar April 4, 2017 at 3:25 pm #

    Dear Adrian,

    I tried working for side faces but its not working, can you please guide what can be the possibilities for side face landmark detection and yes i was also trying working on your example_02.jpg there imutils.resize() method was giving some error.
    Attribute error : ‘NoneType’ object has no attribute ‘shape’.

    Thanks and Regards
    Neeraj Kumar

    • Adrian Rosebrock April 5, 2017 at 11:56 am #

      If you’re getting a “NoneType” error it’s because you supplied an invalid path to the input image. You can read more about NoneType errors in this blog post.

      • Neeraj Kumar April 7, 2017 at 6:02 am #

        Fixed Buddy. Thanks a ton.
        can you please help me out with – how can i detect landmarks in video and compare with existing dataset of images.

        • Adrian Rosebrock April 8, 2017 at 12:51 pm #

          I will be discussing landmark detection in video streams in next weeks blog post.

  10. Manh Nguyen April 5, 2017 at 2:01 am #

    I hope next post you can use infrared camera

    • Adrian Rosebrock April 5, 2017 at 11:49 am #

      I don’t have any plans right now to cover IR cameras, but I’ll add it to my potential list of topics to cover.

  11. Sachin April 5, 2017 at 3:57 am #

    Nice article Adrian! Btw shouldn’t the shape points in Figure 2 be 0 based?

    • Adrian Rosebrock April 5, 2017 at 11:49 am #

      Figure 2 was provided by the creators of the iBUG dataset. They likely used MATLAB which is 1-indexed rather than 0-indexed.

    • Oli April 6, 2017 at 4:06 am #

      I also came across this. I have created an image and printed the index numbers as they are with dlib and python here: http://cvdrone.de/dlib-facial-landmark-detection.html

  12. Parag Jain April 5, 2017 at 10:47 am #

    Isn’t Independent Component Analysis used to find local features of a face? How is that approach different from this? Advantages? Drawbacks?

  13. Mansoor April 5, 2017 at 11:30 am #

    Adrian, i’m a huge fan! i don’t know how to thank you for this.

    I don’t know but i am having trouble running this code. It says that imutils package does not contain face_utils. I think it is not upgrading properly.

    • Adrian Rosebrock April 5, 2017 at 11:46 am #

      Make sure you run:

      $ pip install --upgrade imutils

      To make sure you have the latest version of imutils installed. You can check which version is installed via pip freeze

  14. addouch April 6, 2017 at 3:23 pm #

    amazing adrian

    I hope next time to show us how to recognize emotions on image

  15. tony April 6, 2017 at 3:53 pm #

    Hi Adrian ,thanks for this great post

    how dlib eye landmarks can be used to detect eye blinks ?

    • Adrian Rosebrock April 8, 2017 at 12:53 pm #

      Hi Tony — I’ll be covering how to perform blink detection with facial landmarks and dlib in the next two weeks. Stay tuned!

  16. bumbu April 10, 2017 at 8:59 am #

    May we have a tutorial about apply deep learning(CNN) using keras and tensorflow to classify some dataset, thanks sir, you are super!!!

  17. Rijul Paul April 24, 2017 at 4:38 am #

    Hey Adrian,thanks for this blog post.IS there a way so that we can create our own custom shape predictor?

    • Adrian Rosebrock April 24, 2017 at 9:32 am #

      Yes, but you will have to use the dlib library + custom C++ code to train the custom shape predictor.

  18. Benu April 27, 2017 at 6:14 am #

    I’ve tried my best to play with the parameters of train_shape_predictor in dlib but the result is never as close as shape_predictor_68_face_landmarks.dat. data that I’ve used is custom data of face similar to that of ibug.

  19. wiem April 28, 2017 at 5:37 am #

    Hi Adrean,
    I’m following your post about Facial landmarks with dlib, OpenCV, and Python. It’s really amazing. thank you a lot for such helpful and util code. So, now i’m tring to save all those landmarks in a file ! So that I’m ask you how cando such thing?
    Thank you

    • Adrian Rosebrock April 28, 2017 at 9:13 am #

      Sure, that’s absolutely possible. I would use pickle for this:
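A minimal sketch of what saving the landmarks with pickle could look like (the filename is illustrative, and shape stands in for the (68, 2) NumPy array produced by face_utils.shape_to_np):

```python
import pickle
import numpy as np

# 'shape' is the (68, 2) array of (x, y)-coordinates produced by
# face_utils.shape_to_np; a dummy array stands in for it here
shape = np.zeros((68, 2), dtype="int")

# serialize the landmark coordinates to disk...
with open("landmarks.pickle", "wb") as f:
	pickle.dump(shape, f)

# ...and load them back later
with open("landmarks.pickle", "rb") as f:
	loaded = pickle.load(f)
```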

      This will save the NumPy array shape which contains the (x, y)-coordinates for each facial landmark to disk.

  20. Yang April 29, 2017 at 6:53 am #

    Hello, Adrian ,the blog is very useful, thanks for this great blog.

    • Adrian Rosebrock May 1, 2017 at 1:39 pm #

      Thank you Yang!

  21. Ameer May 7, 2017 at 4:59 pm #

    Hello Adrian
    I was wondering if you did any tut. on face alignment too ? if so may you provide the link for me

    • Adrian Rosebrock May 8, 2017 at 12:21 pm #

      Hi Ameer — I’ll be doing a blog post on facial alignment in late May/early June.

  22. pravallika May 16, 2017 at 6:03 am #

    hey adrian,
    i am trying to achieve real-time face – recognition usind dlib. but when i try using the above code with the .compute_face_descriptor(frame , shape) it gives an error that the arguments are not based on c++.please give me a solution sir

  23. wiem May 16, 2017 at 6:42 am #

    Hi Adrian ,
    Thanks a lot for your explanation. It is very useful. However I’m newer in python and I’m trying to save those face landmarks in matrix so I can manipulate them instead of the original image. Would you give me some suggestion. How can I do such thing ?
    Thank you Adrian

    • Adrian Rosebrock May 17, 2017 at 9:59 am #

      Once you have the shape representing the facial landmarks you can save them to disk via JSON, pickle, HDF5, etc. I would recommend pickle if you are new to Python. I would also suggest working through Practical Python and OpenCV so you can learn the fundamentals of computer vision and OpenCV.

  24. Samuel May 17, 2017 at 3:35 pm #

    Hello, i see you used dlib face/object detector for finding face on image transfer it from dlib.rectangle object to bouding values like your “rect_to_bb” funcition do and then with cv2 draw rectangle, but my problem is i need to use my own haar cascade for finding faces/objects and correct me if i am wrong there i need the exact opposite “bb_to_rect” because landmark detector require this rectangle, but i cant find out which is it datatype and how to reproduce this object, “[(386, 486) (461, 561)]” this sample of the found face its seems like 2 tuples but it doesnt, i cant event find out that while i was examining dlib c++ code, I spent on this problem more than 4 hours and with no result, is there any solution or it is approaching to be impossible?

    • Adrian Rosebrock May 18, 2017 at 11:54 am #

      I will look into this and see what I can find.

    • Vikram Voleti June 30, 2017 at 11:43 am #

      You can (sort of) implement bb_to_rect on your own. I wanted to do the same, and figured it out after some probing:

      For example, if you want to align an image called “frame” whose face you have already detected as a rectangle with (x, y, w, h) values known, you can do it thus:

      fa.align(frame, frame, dlib.rectangle(int(x), int(y), int(x + w), int(y + h)))

      Here, I used a BGR image as “frame”, I had used

      (x, y, w, h) = faceCascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)

      to detect face rectangle, having already defined faceCascade as:

      faceCascade = cv2.CascadeClassifier(‘haarcascade_frontalface_default.xml’)

      • Adrian Rosebrock July 5, 2017 at 6:38 am #

        Thanks for sharing Vikram!

  25. Sai kumar May 23, 2017 at 2:32 am #

    I am student of you rather than saying a huge fan.
    I have a small doubt we are getting coordinates, what are these coordinates represents.
    If i want only a specific points in landmarks how can i get them
    Nose tip
    Left-eye left corner
    Right-eye right corner
    Corner of lips
    These points required to estimate the pose of head,
    Please help me this

    • Adrian Rosebrock May 25, 2017 at 4:30 am #

      If you want only specific facial landmark regions, please refer to this blog post which discusses the indexes of each of the facial landmark points (and how to extract them).

  26. jvf May 23, 2017 at 6:28 pm #

    I decide to run your program again.

    python ‘/home/…/facial_landmarks.py’ –shape-predictor ‘/home/…/shape_predictor_68_face_landmarks.dat’ –image ‘/home/…/30.jpg’

    I have the error again. Now python see this dat file but I have the error anyway:

    Illegal instruction (core dumped)

    What can I do?

    • Adrian Rosebrock May 25, 2017 at 4:23 am #

      It’s hard to say what the exact error is without having physical access to your machine, but I think the issue is that you may have compiled the dlib library against a different Python version than you are importing it into. Are you using Python virtual environments?

      • jvf June 1, 2017 at 3:37 pm #

        Yes, I use virtual environemnts. I reinstalled Ubunt 16.04. But anyway I have this error.

        I have error in this line:

        rects = detector(gray,1)

        Illegal instruction (core dumped)

        What can I do to solve this problem?

        • Adrian Rosebrock June 4, 2017 at 5:46 am #

          It sounds like you might have compiled dlib against a different version of Python than you are importing it into. Try uninstalling and re-installing dlib, taking care to ensure the same Python version is being used.

  27. Jon May 25, 2017 at 4:43 pm #

    Fantastic stuff. Thanks for all you’ve done!
    I am doing face detection / recognition on IR images. This means I cannot use the standard features for detection or recognition. I am trying to build my own detctor using the dlib “train_object_detector.py” and it is working really well – mostly.
    I have a training set that are faces of one size and the detector is getting faces of similar sizes but completely missing smaller face images.

    So my question is how does the system work to detect faces of different sizes. Do you need to have training samples of all the sizes that you expect to be finding? Or does the system take in the training images and resize them?

    If you could clarify how this process works and what kind of training set I need and how it works to find faces of different sizes, I would really appreciate it. I have the recognizer working well, I just need to find the faces.

    I am using the python dlib, specifically:

    Thank you, Jon Hauris

    • Adrian Rosebrock May 28, 2017 at 1:19 am #

      HOG-based object detectors are able to detect objects at various scales (provided the objects across the dataset have the same aspect ratio) using image pyramids. In short, you don’t need to worry about the size of your faces as they will be resized to a constant scale during training and then image pyramids applied when detecting on your testing images.

  28. Moises Rocha May 30, 2017 at 2:21 pm #

    Good afternoon, your tutorials are great.

    I am developing a code for facial recognition but I am having difficulties in the matter of precision.
    To get faster I made 37 proportions based on 12 points. But the variation of the face angle also varies the result. I’m limiting the angle 5 degrees to each side horizontally. I will now try to record the proportions for each angle ie a series of frames for the same face. Thus the comparison would be between several faces at the same angle.

    If you can give me a light, I thank you.

  29. mux June 1, 2017 at 10:25 am #

    I got this error when trying to run your code:

    usage: facial_landmarks.py [-h] -p SHAPE_PREDICTOR -i IMAGE
    facial_landmarks.py: error: argument -p/–shape-predictor is required
    An exception has occurred, use %tb to see the full traceback.

    Can you help me please! thank you

    • Adrian Rosebrock June 4, 2017 at 6:23 am #

      Please read up on command line arguments before continuing. You need to supply the command line arguments to the script.

  30. Jack han June 2, 2017 at 5:02 am #

    How to open shape_predictor_68_landmarks ? Please.

    • Adrian Rosebrock June 4, 2017 at 5:42 am #

      I’m not sure what you mean by “open” the file. You can use the “Downloads” section of this blog post to download the source code + shape_predictor_68_landmarks file.

      • Jack han June 7, 2017 at 5:37 am #

        Hi,Adrian. How to train shape_predictor_68_landmarks model?And do you have train demo?It’s perfect to have directions to train model.I want to train my model.Thanks!

        • Adrian Rosebrock June 9, 2017 at 1:51 pm #

          I don’t have any demos of this, but you would need to refer to the dlib documentation on training a custom shape predictor.

  31. Mayank June 6, 2017 at 10:30 pm #

    Great article, thank you for you efforts.

    Is there any way by which i can get more than 68 points of facial landmark. I see some paper mentioning 83 points or more. Is there any library that can help me find some points on forehead ? I am trying to find golden ratio for a face to score it. Thanks!

    • Adrian Rosebrock June 9, 2017 at 1:56 pm #

      You would need to train your own custom shape predictor from scratch using dlib. The pre-trained facial landmark predictor provided by dlib only returns 68 points. The Helen dataset comes to mind as a good starting point.

    • Caroline Wang September 20, 2017 at 4:00 am #

      Hi Mayank,

      I’m facing the same issue! Would like to train a shape detector for 9 points along hairline. Have you found any solution?


  32. shravankumar June 12, 2017 at 1:30 pm #

    Hey Chief,

    Thank you so much. the post is so clear and works super cool.

  33. Raziye June 20, 2017 at 8:09 pm #

    Hi,do you have MATLAB or c++ cood for your work ?I try that use your code but l could not and I could not solve its error.
    Thanks a lot

    • Adrian Rosebrock June 22, 2017 at 9:34 am #

      I only provide Python code here on the PyImageSearch blog. I do not have any MATLAB or C++ code.

  34. SathyaNK June 23, 2017 at 6:39 am #

    Hi adrian…I’m having problem with the argument constructor, after giving the path to predictor and image when this line ” args = vars(ap.parse_args())” is executed in ipython console it is giving this error
    “In [46]: args = vars(ap.parse_args())
    usage: ipython [-h] -p SHAPE_PREDICTOR -i IMAGE
    ipython: error: argument -p/–shape-predictor is required
    An exception has occurred, use %tb to see the full traceback.

    please help me with this problem

    • Adrian Rosebrock June 27, 2017 at 6:44 am #

      If you’re using ipython I would suggest leaving out the command line arguments and simply hardcoding the path to the shape predictor file.

  35. Pelin GEZER June 25, 2017 at 1:36 pm #

    I tried with the photo of man who has beard. It did not work well. How can we solve?

  36. moises rocha June 26, 2017 at 7:13 pm #

    Hello, how are you?
    I’ve been a big fan of your posts since I started working with python. Even more so when we made the post for Dlib on Raspberry. I was sad because you did not answer me and that you answered all the questions about your post.
    I know I asked a very specific question but could answer me by email if that is the case. Even if it is a negative answer.

    I do biometrics with haarcascade but I’m trying with landmarks.

    I am developing a code for facial recognition but I am having difficulties in the matter of precision.
    To get faster I made 37 proportions based on 12 points. But the variation of the face angle also varies. I’m limiting the angle 5 degrees to each side horizontally. I will now try to record the proportions for each angle ie a series of frames for the same face. Thus the comparison would be between several faces at the same angle.

    Thank you for your attention.

    • Adrian Rosebrock June 27, 2017 at 6:14 am #

      Hi Moises, I do my best to answer as many PyImageSearch comments as I possibly can, but please keep in mind that the PyImageSearch blog is very popular and receives 100’s of emails and comments per day. Again, I do the best I can, but I can’t always respond to them all.

      That said, regarding your specific project (if I understand it correctly), you are performing face recognition but want to use facial landmarks to aid you in the aligning process to obtain better accuracy? Do I understand your question correctly? If so, why not just use the tutorial I provided on face alignment.

      • moises rocha June 29, 2017 at 10:22 pm #

        Thank you for your response.
        I will explain the project better.
        My code computes ratios like this:
        Face 1
        Comparison 1 = straight (point1 to point2) / straight (point4 to point8) = “1,2”
        Comparison 2 = straight (point3 to point4) / straight (point5 to point6) = “0.8”

        Face 2
        Comparison 1 = straight (point1 to point2) / straight (point4 to point8) = “1,6”
        Comparison 2 = straight (point3 to point4) / straight (point5 to point6) = “1,0”

        So if the face is facing forward, the comparison is accurate. However, if the face is turned to the side, it does not work.

        Example of a crooked face facing left:
        .. …..
             | Head straight but face turned to left

        Straight face:
        …. ….
                | Head straight and face straight

        Tilting the head is not a problem because the comparison is made by the ratio of the lines. If the head is tilted but the face is facing forward, the code works well.

        I hope I have explained it better.

        thank you

        • Adrian Rosebrock June 30, 2017 at 8:03 am #

          Keep in mind that most facial landmark and face recognition algorithms assume a frontal view of the person. Yes, facial landmarks can tolerate a small degree of rotation in the face (as it turns from side to side), but at some point the face becomes too angled to apply these techniques. I’m still not sure I understand your question entirely, but I think this might be the situation you are running into.

  37. tarun July 5, 2017 at 9:57 am #

    Hi Adrian,

    Thanks for the wonderful tutorial. However, if I wish to get an ROI composed of the eyelid and eyeball together (for example, in eye localization tasks where the whole continuous region from the eyelids up to the eyebrows is cropped), how do I do that with the facial landmarks code above?

    best regards

    • Adrian Rosebrock July 7, 2017 at 10:11 am #

      I’m not sure I understand your question, but since facial landmarks can localize both eyes and eyebrows, you can simply compute the bounding box of both those regions and extract the ROI.
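      As an illustration of that suggestion, here is a sketch that computes a single box over the eye and eyebrow landmarks. The index ranges follow the standard 68-point annotation used in the post; the padding value is an arbitrary choice:

```python
import numpy as np

# Sketch: given the (68, 2) array from face_utils.shape_to_np(),
# compute one bounding box covering both eyes and both eyebrows.
# Eyebrows are points 17-26 and eyes are points 36-47 in the
# standard 68-point scheme (0-indexed).
def eye_region_bbox(shape, pad=5):
    pts = np.vstack([shape[17:27], shape[36:48]])
    x_min, y_min = pts.min(axis=0) - pad
    x_max, y_max = pts.max(axis=0) + pad
    return int(x_min), int(y_min), int(x_max), int(y_max)

# usage with the tutorial's `image` and `shape`:
# x0, y0, x1, y1 = eye_region_bbox(shape)
# roi = image[max(y0, 0):y1, max(x0, 0):x1]
```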

  38. Avi July 11, 2017 at 3:07 am #

    Great tutorial. Thank a lot!
    However, I have one confounding problem:
    When running the code at:
    rects = detector(gray, 1) I get the following error:

    TypeError: No registered converter was able to produce a C++ rvalue of type char from this Python object of type str
    I investigated the error, upgraded libboost-dev-all (using Ubuntu 17.04), still no resolution

    What confounds me is: in a separate face recognition project, I imported dlib, initialized the detector, and set rects = detector(img, 1), etc.
    It works fine.
    I redid this exercise in the Python console (line by line) and the error did not show up.
    When I ran the program, the error turned up.
    No spelling mistakes…
    Any pointers, anything will help…
    Thanks for your time

    • Adrian Rosebrock July 11, 2017 at 6:26 am #

      Hi Avi — I have to admit that I haven’t encountered this issue before so I’m not sure what the exact issue is. I would suggest posting the issue on the official dlib forums.

  39. wiem July 11, 2017 at 5:10 am #

    Hello Sir!
    Thank you very much. All your explanations in this tutorial are very helpful.
    I am currently studying at the National School of Engineers of Sousse in Tunisia. I use facial landmarks to recognize emotions from images. I am trying to create a model from all the landmarks I extracted from the images in the CK+ dataset, and I plan to use this model to classify the emotion of a face.
    So I wonder if you could help me and guide me on how to save the facial landmarks in a training model and how I could use this model to detect facial emotion from the landmarks.
    Thank you for your help

    • Adrian Rosebrock July 11, 2017 at 6:21 am #

      Is there a particular reason why you are using facial landmarks to predict emotions? As I show in Deep Learning for Computer Vision with Python, CNNs can give higher accuracy for emotion recognition and are much easier to use in this context.

  40. khalid July 16, 2017 at 11:45 am #

    Hi, thanks for this great tutorial. I want to crop the detected face. I tried the crop function but it doesn’t work.
    Please help me if you have any idea.
    Thanks a lot.

    • Adrian Rosebrock July 18, 2017 at 9:59 am #

      You can crop out the face using NumPy array slicing:

      face = image[startY:endY, startX:endX]

      I cover the basics of image processing, computer vision, and OpenCV (including how to crop images), inside my book, Practical Python and OpenCV. I suggest you start there. I hope that helps!
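      To make the slicing concrete, here is a sketch that clamps the box to the image bounds first; the (x, y, w, h) box is assumed to come from something like face_utils.rect_to_bb(rect):

```python
import numpy as np

# Sketch: crop a face ROI with NumPy slicing, clamping the box to
# the image bounds so a detection that spills off the frame does
# not produce an empty slice. (x, y, w, h) is a face bounding box,
# e.g. from face_utils.rect_to_bb(rect).
def crop_face(image, x, y, w, h):
    img_h, img_w = image.shape[:2]
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, img_w), min(y + h, img_h)
    return image[y0:y1, x0:x1]
```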

  41. Julien July 17, 2017 at 11:15 pm #

    Hi Adrian, thanks for a useful website. I just tried your code on a movie in which I want to annotate faces (cf. your real time post, feeding in a video file instead of the webcam input).
    When faces are detected, it works well, however the bottleneck is definitely face detection. Are there other “out of the box” solutions than the pre-trained HOG + Linear SVM object detector? You mention deep learning based approaches, is it something I could quickly deploy (i.e., are there pre-trained weights somewhere, and a pre-built network architecture, which would do a decent job?). Thank you for any hints!

    • Adrian Rosebrock July 18, 2017 at 9:50 am #

      Are you looking to simply increase the speed of the face detection? Use Haar cascades instead. They are less accurate than HOG + Linear SVM detectors, but much faster.

      • Julien July 20, 2017 at 7:06 pm #

        sorry, I wasn’t clear. The issue is that sometimes faces are not detected. I was wondering if there are other methods readily available that may give me hits where HOG+Linear SVM doesn’t. I am not concerned with speed, this is not a real-time project.

        • Adrian Rosebrock July 21, 2017 at 8:50 am #

          Sure, there are many face detectors that you can use. Haar cascades. Deep learning-based face detectors. You could even use a Google or Microsoft face detection API. It really depends on how far down the rabbit hole you want to go. Ideally, I would suggest gathering more training data that reflects the types of faces and environments the faces are in, then train your own custom face detector.

  42. Ankit September 9, 2017 at 2:20 pm #

    Hello Sir,
    I am a huge fan of you.
    You are doing wonderful work.

    This code works very perfectly.

    What I want to know is how I can detect a face if the image is rotated by 90, 180, or 270 degrees.

    And how can I do this in a live video from the camera?

    • Adrian Rosebrock September 11, 2017 at 9:18 am #

      The HOG detector is not invariant to rotation so you’ll need to rotate your image by 90, 180, and 270 degrees and then apply the detection to each of the rotated images.

      I cover how to apply facial landmarks to real-time video in this post.
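      A sketch of that rotate-and-retry idea (np.rot90 keeps the example dependency-free; cv2.rotate does the same job, and the `detect` callable stands in for the dlib detector):

```python
import numpy as np

# Sketch: HOG face detection is not rotation invariant, so try the
# detector on each 90-degree rotation and keep the first orientation
# that yields faces. `detect` is any callable returning a (possibly
# empty) sequence of face rectangles, e.g.
#   detect = lambda img: detector(img, 1)
# (dlib may need np.ascontiguousarray() on the rotated frame).
def detect_any_rotation(image, detect):
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated = np.rot90(image, k)
        rects = detect(rotated)
        if len(rects) > 0:
            return k * 90, rotated, rects
    return None, image, []
```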

  43. Junior September 13, 2017 at 3:39 pm #

    Hi Adrian, thanks for this page.
    How can I implement face detection with dlib on a Raspberry Pi?

    • Adrian Rosebrock September 13, 2017 at 3:40 pm #

      Face detection? Or face recognition? This post already covers how to perform face detection (i.e., detecting the location of a face in a given image).

  44. siva charan September 14, 2017 at 8:02 am #

    Hi Adrian,

    Is it possible to recognize faces in a live camera feed using OpenCV? I need your suggestions. Currently I am working on face recognition.

    • Adrian Rosebrock September 14, 2017 at 1:15 pm #

      Yes, absolutely. I cover face recognition inside the PyImageSearch Gurus course. I would suggest starting there.

  45. wiem September 21, 2017 at 10:12 pm #

    Hi Adrian, thanks for this page. I want to ask if the precision of the detected landmarks is related to the image size, because in this tutorial you resize it to 500?

    • Adrian Rosebrock September 23, 2017 at 10:12 am #

      No, the precision of detected landmarks is not related to the image size. I resized the image because it’s very rare we need to process images larger than 500-700 pixels along their largest dimension. Instead, we reduce the image size so that our algorithms run faster.
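      For reference, a sketch of the arithmetic behind that width-500 resize; this mirrors the output-size computation that imutils.resize(image, width=500) performs before calling cv2.resize (interpolation aside):

```python
# Sketch: resizing to a fixed width while preserving aspect ratio.
# Only the dimension arithmetic is shown here; pass the result to
# cv2.resize (or use imutils.resize directly) on a real image.
def scaled_dims(h, w, new_width=500):
    ratio = new_width / float(w)
    return int(h * ratio), new_width

# e.g. a 1000x1500 image becomes 333x500
```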

  46. Ashish hajagulkar September 24, 2017 at 12:01 am #

    How can I do live facial landmark detection through video analysis?

  47. Tonmoy September 29, 2017 at 2:18 pm #

    Hello Adrian, have you written any blog post on how to estimate head pose using facial landmarks? Please let me know. I can’t seem to find any elegant solution.

    • Adrian Rosebrock October 2, 2017 at 10:08 am #

      Hi Tonmoy — I have not written a post on head pose estimation, but I will consider this for a future blog post topic.

  48. Arfah S September 29, 2017 at 4:28 pm #

    This is one of the best articles I’ve ever come across!

    • Adrian Rosebrock September 30, 2017 at 9:42 am #

      Thanks Arfah!

  49. Darshil October 25, 2017 at 2:03 pm #

    Hi Adrian,
    Thanks for this page. This is a great tutorial. I’m having one error running this code. When I run

    python facial_landmarks.py --shape-predictor shape_predictor_68_face_landmarks.dat --image images/example_01.jpg

    I’m getting

    Illegal instruction (core dumped)

    • Adrian Rosebrock October 26, 2017 at 11:47 am #

      Hi Darshil — I’m sorry it isn’t running on your system. Can you try running this example program to see if it works?

      • Darshil October 29, 2017 at 2:15 am #

        Thank you Adrian for your reply. I tried running the code you sent; it is not giving any error, but it is not giving any output either. I am using Ubuntu 14. Can I record and send you a video somewhere of the error I’m getting?

        • Adrian Rosebrock October 30, 2017 at 3:15 pm #

          Hi Darshil. cv2.imshow() should display the image on your computer screen. Alternatively you could use cv2.imwrite() to write the image to disk.

      • Darshil October 30, 2017 at 11:29 am #

        Hi Adrian, thanks for your reply.

        The code you sent runs without any error but it does not give any output either. I’m using Ubuntu 14. I know where the error occurs, please help me with it. I tried running facial_landmarks.py, video_facial_landmarks.py, detect_blinks.py, and drowsiness_detection.py, and all four have the same error. When execution reaches “rects = detector(gray, 0)” in each script, it stops and shows “Illegal instruction (core dumped)”.

        • Adrian Rosebrock October 30, 2017 at 1:17 pm #

          Hi Darshil — Did the dlib sample code I linked work without a hitch? Without being on your system, it is hard for me to debug from here. Try sending me an email with details about your configuration including OpenCV version, dlib version, and Python version.

          • Darshil November 3, 2017 at 12:41 pm #

            I have dropped you an email. Please help

  50. Eyshika November 3, 2017 at 12:53 pm #

    I am also facing an error in parse_args(). I haven’t left any space in the middle and it still shows ASSERTION ERROR.

    • Adrian Rosebrock November 6, 2017 at 10:46 am #

      Hi Eyshika — please read up on command line arguments. You DO NOT need to edit any of the code.

  51. Abdul hanan November 7, 2017 at 2:28 am #

    Hey there. This is one of the simplest and best written tutorials about facial landmarks. I have a question regarding landmarks.

    Can we find the distance between two landmarks? Suppose we want to find the width of the lips; according to the 68-landmark detection we should find the distance between points 49 and 55. Is there any way to do this?

    • Adrian Rosebrock November 9, 2017 at 7:04 am #

      Compute the Euclidean distance between the two points — this will give you the pixel distance. The distance in a measurable metric (such as millimeters) can be computed provided you’ve done a simple calibration.
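      A sketch of that pixel-distance computation. One caveat: the 68-point figure numbers landmarks from 1, while the array from face_utils.shape_to_np() is 0-indexed, so mouth corners 49 and 55 are shape[48] and shape[54]:

```python
import numpy as np

# Sketch: pixel distance between two facial landmarks. The 68-point
# figure numbers landmarks from 1, but the array returned by
# face_utils.shape_to_np() is 0-indexed, so figure points 49 and 55
# correspond to shape[48] and shape[54].
def landmark_distance(shape, i, j):
    return float(np.linalg.norm(shape[i] - shape[j]))

# usage: mouth_width = landmark_distance(shape, 48, 54)
```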

  52. Jigyasu Bagai November 12, 2017 at 3:10 am #

    Hi Adrian, great work. Can you please suggest a way to quantify how well the classification or lip extraction is doing? I mean, can we plot a confusion matrix for the above script?

    looking for an early reply

    • Adrian Rosebrock November 13, 2017 at 2:02 pm #

      You can evaluate the facial landmarks against a testing set; however, you cannot evaluate the performance without knowing the ground-truth locations of each facial coordinate.

  53. ghazi November 19, 2017 at 3:28 pm #

    Hi Adrian, great work. Thank you for your efforts.

    I have a mini project about lip reading authentication.

    Do you have an idea about extracting letters from an image?


  54. ghazi November 20, 2017 at 10:58 am #

    Hi Adrian great work.
    I have a project about lip reading authentication using the method: follow-up of lip movement by points of interest.
    Your example helped me very much in my project, but do you have any idea about extracting a letter from the picture?

    • Adrian Rosebrock November 20, 2017 at 3:45 pm #

      I don’t have any experience with lip reading, but I would suggest taking a look at this publication which discusses the topic.

  55. Nick December 13, 2017 at 8:51 pm #

    Hi Adrian!

    Great work! Thank you for running this blog!

    I just tried the code and I am getting an error: facial_landmarks.py: error: the following arguments are required: -i/--image

    and then

    facial-landmarks>--image images/example_01.jpg
    '--image' is not recognized as an internal or external command,
    operable program or batch file.

    Would you be able to help me resolve it? I didn’t edit the code.

    • Adrian Rosebrock December 15, 2017 at 8:31 am #

      Hey Nick, please see my reply to “mux” (June 1, 2017). Your issue here is that you are not properly supplying the command line arguments. Open up a command line and then execute the following command, just like I do in the blog post:

      You’ll want to make sure you are in the same directory as the facial_landmarks.py script.

      Take a second to read up on how to use the command line and command line arguments and it will help out dramatically. I hope that helps!


  1. Real-time facial landmark detection with OpenCV, Python, and dlib - PyImageSearch - April 17, 2017

    […] We’ve started off by learning how to detect facial landmarks in an image. […]

  2. Face Alignment with OpenCV and Python - PyImageSearch - May 22, 2017

    […] our series of blog posts on facial landmarks, today we are going to discuss face alignment, the process […]
