Detect eyes, nose, lips, and jaw with dlib, OpenCV, and Python

Today’s blog post is part three in our current series on facial landmark detection and its applications to computer vision and image processing.

Two weeks ago I demonstrated how to install the dlib library which we are using for facial landmark detection.

Then, last week I discussed how to use dlib to actually detect facial landmarks in images.

Today we are going to take the next step and use our detected facial landmarks to help us label and extract face regions, including:

  • Mouth
  • Right eyebrow
  • Left eyebrow
  • Right eye
  • Left eye
  • Nose
  • Jaw

To learn how to extract these face regions individually using dlib, OpenCV, and Python, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Detect eyes, nose, lips, and jaw with dlib, OpenCV, and Python

Today’s blog post will start with a discussion on the (x, y)-coordinates associated with facial landmarks and how these facial landmarks can be mapped to specific regions of the face.

We’ll then write a bit of code that can be used to extract each of the facial regions.

We’ll wrap up the blog post by demonstrating the results of our method on a few example images.

By the end of this blog post, you’ll have a strong understanding of how face regions are (automatically) extracted via facial landmarks and will be able to apply this knowledge to your own applications.

Facial landmark indexes for face regions

The facial landmark detector implemented inside dlib produces 68 (x, y)-coordinates that map to specific facial structures. These 68 point mappings were obtained by training a shape predictor on the labeled iBUG 300-W dataset.

Below we can visualize what each of these 68 coordinates maps to:

Figure 1: Visualizing each of the 68 facial coordinate points from the iBUG 300-W dataset.

Examining the image, we can see that facial regions can be accessed via simple Python indexing (assuming zero-indexing with Python since the image above is one-indexed):

  • The mouth can be accessed through points [48, 68].
  • The right eyebrow through points [17, 22].
  • The left eyebrow through points [22, 27].
  • The right eye using [36, 42].
  • The left eye with [42, 48].
  • The nose using [27, 36].
  • And the jaw via [0, 17].

These mappings are encoded inside the FACIAL_LANDMARKS_IDXS dictionary in the face_utils module of the imutils library:
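The dictionary looks essentially like the following (an approximate reproduction of the imutils source; newer imutils releases also add an "inner_mouth" entry):

```python
# Approximate reproduction of imutils.face_utils.FACIAL_LANDMARKS_IDXS.
# An OrderedDict so the regions are always iterated in the same order.
from collections import OrderedDict
import numpy as np

FACIAL_LANDMARKS_IDXS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 36)),
    ("jaw", (0, 17)),
])

# example: slice the mouth coordinates out of a (68, 2) landmark array
shape = np.zeros((68, 2), dtype="int")      # placeholder landmarks
(j, k) = FACIAL_LANDMARKS_IDXS["mouth"]
mouth_pts = shape[j:k]                      # points 48-67 (20 points)
```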

Using this dictionary we can easily extract the indexes into the facial landmarks array and extract various facial features simply by supplying a string as a key.

Visualizing facial landmarks with OpenCV and Python

A slightly harder task is to visualize each of these facial landmarks and overlay the results on an input image.

To accomplish this, we’ll need the visualize_facial_landmarks  function, already included in the imutils library:

Our visualize_facial_landmarks  function requires two arguments, followed by two optional ones, each detailed below:

  • image : The image that we are going to draw our facial landmark visualizations on.
  • shape : The NumPy array that contains the 68 facial landmark coordinates that map to various facial parts.
  • colors : A list of BGR tuples used to color-code each of the facial landmark regions.
  • alpha : A parameter used to control the opacity of the overlay on the original image.

Lines 45 and 46 create two copies of our input image — we’ll need these copies so that we can draw a semi-transparent overlay on the output image.

Line 50 makes a check to see if the colors  list is None , and if so, initializes it with a preset list of BGR tuples (remember, OpenCV stores colors/pixel intensities in BGR order rather than RGB).

We are now ready to visualize each of the individual facial regions via facial landmarks:

On Line 56 we loop over each entry in the FACIAL_LANDMARKS_IDXS  dictionary.

For each of these regions, we extract the indexes of the given facial part and grab the (x, y)-coordinates from the shape  NumPy array.

Lines 63-69 make a check to see if we are drawing the jaw, and if so, we simply loop over the individual points, drawing a line connecting the jaw points together.

Otherwise, Lines 73-75 handle computing the convex hull of the points and drawing the hull on the overlay.

The last step is to create a transparent overlay via the cv2.addWeighted  function:

After applying visualize_facial_landmarks  to an image and associated facial landmarks, the output would look similar to the image below:

Figure 2: A visualization of each facial landmark region overlaid on the original image.

To learn how to glue all the pieces together (and extract each of these facial regions), let’s move on to the next section.

Extracting parts of the face using dlib, OpenCV, and Python

Before you continue with this tutorial, make sure you have:

  1. Installed dlib according to my instructions in this blog post.
  2. Installed/upgraded imutils to the latest version, ensuring you have access to the face_utils submodule: pip install --upgrade imutils

From there, open up a new file, name it , and insert the following code:

The first code block in this example is identical to the one in our previous tutorial.

We are simply:

  • Importing our required Python packages (Lines 2-7).
  • Parsing our command line arguments (Lines 10-15).
  • Instantiating dlib’s HOG-based face detector and loading the facial landmark predictor (Lines 19 and 20).
  • Loading and pre-processing our input image (Lines 23-25).
  • Detecting faces in our input image (Line 28).

Again, for a more thorough, detailed overview of this code block, please see last week’s blog post on facial landmark detection with dlib, OpenCV, and Python.

Now that we have detected faces in the image, we can loop over each of the face ROIs individually:

For each face region, we determine the facial landmarks of the ROI and convert the 68 points into a NumPy array (Lines 34 and 35).

Then, on Line 38, we loop over each of the face parts individually.

We draw the name/label of the face region on Lines 42 and 43, then draw each of the individual facial landmarks as circles on Lines 47 and 48.

To actually extract each of the facial regions we simply need to compute the bounding box of the (x, y)-coordinates associated with the specific region and use NumPy array slicing to extract it:

Computing the bounding box of the region is handled on Line 51 via cv2.boundingRect .

Using NumPy array slicing we can extract the ROI on Line 52.

This ROI is then resized to have a width of 250 pixels so we can better visualize it (Line 53).

Lines 56-58 display the individual face region to our screen.

Lines 61-63 then apply the visualize_facial_landmarks  function to create a transparent overlay for each facial part.

Face part labeling results

Now that our example has been coded up, let’s take a look at some results.

Be sure to use the “Downloads” section of this guide to download the source code + example images + dlib facial landmark predictor model.

From there, you can use the following command to visualize the results:
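Assuming you named the driver script, for example, detect_face_parts.py (the script and image names here are my placeholders), the invocation looks like:

```shell
$ python detect_face_parts.py \
	--shape-predictor shape_predictor_68_face_landmarks.dat \
	--image images/example_01.jpg
```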

Notice how my mouth is detected first:

Figure 3: Extracting the mouth region via facial landmarks.

Followed by my right eyebrow:

Figure 4: Determining the right eyebrow of an image using facial landmarks and dlib.

Then the left eyebrow:

Figure 5: The dlib library can extract facial regions from an image.

Next comes the right eye:

Figure 6: Extracting the right eye of a face using facial landmarks, dlib, OpenCV, and Python.

Along with the left eye:

Figure 7: Extracting the left eye of a face using facial landmarks, dlib, OpenCV, and Python.

And finally the jawline:

Figure 8: Automatically determining the jawline of a face with facial landmarks.

As you can see, the bounding box of the jawline is my entire face.

The last visualization for this image is our transparent overlay with each facial landmark region highlighted in a different color:

Figure 9: A transparent overlay that displays the individual facial regions extracted via facial landmarks.

Let’s try another example:

This time I have created a GIF animation of the output:

Figure 10: Extracting facial landmark regions with computer vision.

The same goes for our final example:

Figure 11: Automatically labeling eyes, eyebrows, nose, mouth, and jaw using facial landmarks.


Summary

In this blog post I demonstrated how to detect various facial structures in an image using facial landmark detection.

Specifically, we learned how to detect and extract the:

  • Mouth
  • Right eyebrow
  • Left eyebrow
  • Right eye
  • Left eye
  • Nose
  • Jawline

This was accomplished using dlib’s pre-trained facial landmark detector along with a bit of OpenCV and Python magic.

At this point you’re probably quite impressed with the accuracy of facial landmarks — and there are clear advantages of using facial landmarks, especially for face alignment, face swapping, and extracting various facial structures.

…but the big question is:

“Can facial landmark detection run in real-time?”

To find out, you’ll need to stay tuned for next week’s blog post.

To be notified when next week’s blog post on real-time facial landmark detection is published, be sure to enter your email address in the form below!

See you then.


Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


204 Responses to Detect eyes, nose, lips, and jaw with dlib, OpenCV, and Python

  1. Neeraj Kumar April 10, 2017 at 1:53 pm #

    Dear Adrian,

    You have seriously got a fan man, amazing explanations. It’s something like now i do wait for your new upcoming blog posts just as i do wait for GOT new season.
    Actually you are the one who make this computer vision concept very simple, which is otherwise not.

    Thanks and Regards
    Neeraj Kumar

    • Adrian Rosebrock April 12, 2017 at 1:14 pm #

      Thank you Neeraj, I really appreciate that. Comments like these really make my day 🙂

      • Neeraj Kumar April 12, 2017 at 3:16 pm #

        Keep smiling and keep posting awesome blog posts.

        • Adrian Rosebrock April 16, 2017 at 9:09 am #

          Thanks Neeraj!

      • riyaz January 5, 2019 at 2:05 pm #

        hi adrian
        thanks alot for your sweet explanation but if you add maths behind some of technique you are using just like face alignment … i am little bit confused in angle finding

  2. Anthony The Koala April 10, 2017 at 2:31 pm #

    Dear Dr Adrian,
    There is a lot to learn in your blogs and I thank you for these blogs. I hope I am not off-topic. Recently in the news there was a smart phone that could detect whether the image of a face was from a real person or a photograph. If the image was that of a photograph the smart phone would not allow the user to use the phone’s facility.
    To apply my question to today’s blog in detecting eyes, nose and jaw, is there a way to tell whether the elements of the face can be from a real face or a photo of a face?
    Thank you
    Anthony of Sydney NSW

    • Adrian Rosebrock April 12, 2017 at 1:13 pm #

      There are many methods to accomplish this, but the most reliable is to use stereo/depth cameras so you can determine the depth of the face versus a flat 2D space. As for the actual article you’re referring to, I haven’t read it so it would be great if you could link to it.

      • Anthony The Koala April 13, 2017 at 3:23 am #

        Dear Dr Adrian,
        I apologise by forgetting to put the word ‘not’ between. It should read “there was a smart phone that could not detect whether the image of a face was from a real person or a photograph.

        Similar article, with a statement from Samsung saying that facial recognition currently “..cannot be used to authenticate access to Samsung Pay or Secure Folder….”

        Solution may well that your authentication system may well need two cameras for 3D or more ‘clever’ 2D techniques such that the authentication system cannot be ‘tricked’.

        Anthony of Sydney NSW

  3. ulzii April 10, 2017 at 9:50 pm #

    you can detect beautiful lady as well 🙂

    • Adrian Rosebrock April 12, 2017 at 1:12 pm #

      That beautiful lady is my fiancée.

  4. Leena April 10, 2017 at 11:54 pm #

    Excellent . simple and detail code instructions. Can you more details on how to define the facial part. Definition of eye/nose/jaw…..


    • Adrian Rosebrock April 12, 2017 at 1:12 pm #

      Hi Leena — what do you mean by “define the facial part”?

  5. Rencita April 11, 2017 at 2:49 am #

    Hello Adrian,
    I tried your post for detecting facial features, but it gives me a error saying:
    RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat

    where could i have been gone wrong?

    Thanks in advance

    • Adrian Rosebrock April 12, 2017 at 1:10 pm #

      Make sure you use the “Downloads” section of this blog post to download the shape_predictor_68_face_landmarks.dat and source code. From there the example will work.

    • Salma August 11, 2017 at 7:45 am #

      hello, so basically if you downloaded “shape_predictor_68_face_landmarks.dat.bz2” with the wget method, you need to unzip it

      bzip2 -d filename.bz2 or bzip2 -dk filename.bz2 if you want to keep the original archive

    • Bai November 20, 2017 at 2:58 pm #

      After –shape-predictor, you need to type in the path of the .dat file. You can try the absolute path of your .dat file. I met the same problem and it has been solved.

  6. aravind April 12, 2017 at 11:57 am #

    Hi adrian,

    how can this be used for video instead of the image as argument.

    • Adrian Rosebrock April 12, 2017 at 12:54 pm #

      I will be demonstrating how to apply facial landmarks to video streams in next week’s blog post.

  7. Wim Valcke April 17, 2017 at 1:21 pm #

    First of all, nice blog post. Keep up doing this, i learned a lot about computer vision in a limited amount of time. I will buy definitely your new book about deep learning.

    There is a small error in the of the imutils library
    In the definition of FACIAL_LANDMARKS_IDXS

    (“nose”, (27, 35)),

    This should be (“nose”, (27, 36)),

    As the nose should contain 9 points, in the existing implementation this is only 8 points
    This can be seen in the example images too.

    • Adrian Rosebrock April 19, 2017 at 12:56 pm #

      Good catch — thanks Wim! I’ll make sure this change is made in the latest version of imutils and I’ll also get the blog post updated as well.

  8. Taimur Bilal April 17, 2017 at 2:16 pm #

    This is amazing. I just wanted to ask one question.

    If you are running a detector on these images coming from polling a video stream, can we say you are tracking the facial features? Is it true that tracking can be implemented by implementing an “ultra fast” detector on every frame of a video stream.

    Again, thanks a lot for these amazing tutorials.

    • Adrian Rosebrock April 19, 2017 at 12:55 pm #

      “Tracking” algorithms by definition try to incorporate some other extra temporal information into the algorithm so running an ultra fast face detector on a video stream isn’t technically a true tracking algorithm, but it would give you the same result, provided that no two faces overlap in the video stream.

      • Charles November 12, 2019 at 3:56 am #

        Hi Adrian,

        Do you have a knowledge how to detect and change the hair colour?

  9. Dimas April 18, 2017 at 1:23 am #


  10. Anthony The Koala April 24, 2017 at 1:11 pm #

    Dear Dr Adrian,
    In figure 10 “Extracting facial landmark regions with computer vision,” how is it that the program could differentiate between the face of a human and of a non-human. This is contrast to figure 11 where there are two humans where the algorithm could detect two humans.

    Thank you,
    Anthony of Sydney NSW

    • Adrian Rosebrock April 28, 2017 at 10:00 am #

      The best way to handle determining if a face is a “real” face or simply a photo of a face is to apply face detection + use a depth camera so you can compute the depth of the image. If the face is “flat” then you know it’s a photograph.

      • Anthony The Koala April 28, 2017 at 9:49 pm #

        Dear Dr Adrian,
        I think I should have been clearer. The question was not about distinguishing a fake from a real which you addressed earlier.

        I should have been more direct. In figure 10, there is a picture of you with your dog.
        How does the algorithm make the distinction between a human face, yours and a non-human face, the dog.

        Or put it another way, how did the algorithm make the distinction between a human face and a dog. In other words how did the algorithm detect that the dog’s face was not human.

        This is in contrast to figure 11, there is a picture of you with your special lady. The algorithm could detect that there were two human faces present in contrast to figure 10 with one human face and one non-human face.

        I hope I was clearer

        I thank you for your tutorial/blogs,


        Anthony, Sydney NSW
        Anthony from Sydney nSW

        • Adrian Rosebrock May 1, 2017 at 1:46 pm #

          To start, I think it would be beneficial to understand how object detectors work. The dlib library ships with an object detector that is pre-trained to detect human faces. That is why we can detect human faces in the image but not dog faces.

          • Anthony The Koala May 1, 2017 at 8:27 pm #

            Dear Dr Adrian,
            Thank you for the link The article provides a quick review of the various object detection techniques and how the early detection methods for example using Haar wavelets produces a false positive as demonstrated by the soccer player’s face and part of the soccer field’s side advertising being detected as a face.
            The article goes on a brief step-by-step guide on object detection based on papers by Felzenszwalb et al. and Tomasz.
            A lot to learn
            Anthony of Sydney NSW Australia

  11. Hailay Berihu April 25, 2017 at 5:44 am #

    Thank you very much Dr. Adrian! All your blogs are amazing and timely. However, this one is so special to me!! I am enjoying all your blogs! Keep it up!

    • Adrian Rosebrock April 25, 2017 at 11:46 am #

      Thanks Hailay!

  12. Helen April 27, 2017 at 6:06 am #

    Hi,Adrian. Your work is amazing and very useful to me. I’m a undergraduate student and l’m learning things about opencv and computer vision. In my graduation project, I want to finish a program to realize simple virtual makeup. I’m a lazy girl, I want to know what will I look if I put on makeup. I have an idea that I want to draw different colors to different parts of the face,like red color to lips or pink color to cheek or something like that. Now, I can detect 65 points of one face using the realtime camera. I’m writing to ask you, using your way, can I realize my virtual makeup program? And I want to know if you have any good ideas about my virtual makeup program. Your advice will be welcome and appreciate.

    best wishes!

    • Adrian Rosebrock April 28, 2017 at 9:30 am #

      Yes, this is absolutely possible. Once you have detected each facial structure you can apply alpha blending and perhaps even a bit of seamless cloning to accomplish this. It’s not an easy project and will require much research on your part, but again, it’s totally possible.

      • Helen April 28, 2017 at 9:33 pm #

        Thank you Adrian. I’ll try my best to finish it.

      • Surendhar April 9, 2019 at 4:51 am #

        Hi Adrian,

        I am currently working on creating a virtual makeup. I am wondering if you know of any python functions/libraries to achieve a glossy/matte finish on lips.

        Thanks in advance.

  13. Rishabh Gupta April 29, 2017 at 11:43 am #

    I’ve two questions

    #1. While extracting the ROI of the face region as a separate image, on line 52 why have you used
    roi = image[y:y+h, x:x+w] . Shouldn’t it be the reverse ? i.e. roi = image[x:x+w, y:y+h] ??

    #2. What does INTER_CUBIC mean ? I’ve checked the documentation. It says INTER_CUBIC is slow. So, why use it at the first place if you’ve a better alternate(INTER_LINEAR) available ?

    Thanks in advance.

    • Adrian Rosebrock May 1, 2017 at 1:35 pm #

      1. No, images are matrices. We access an individual element of a matrix by supplying the row value first (the y-coordinate) followed by the column number (the x-coordinate). Therefore, roi = image[y:y+h, x:x+w] is correct, although it may feel awkward to write.

      2. This implies that we are doing cubic interpolation, which is indeed slower than linear interpolation, but is better at upsampling images.

      If you’re just getting started with computer vision, image processing, and OpenCV I would definitely suggest reading through Practical Python and OpenCV as this will help you learn the fundamentals quickly. Be sure to take a look!

  14. Anthony The Koala May 11, 2017 at 3:05 am #

    Dear Dr Adrian,
    Suppose a camera was fitted with a fisheye lens. Recall that a fisheye lens produces a wide angled image. As a result the image will be distorted.
    If the image is distorted, is there a way of ‘processing’/’correcting’ the distorted image to a normal image then apply face detection.
    Alternatively, if the camera has a fisheye lens, can a detection algorithm such as yours handle face detection.
    Alternatively, is there a correction algorithm for a fisheye lens.
    Thank you,
    Anthony of Sydney Australia

    • Adrian Rosebrock May 11, 2017 at 8:43 am #

      If you’re using a fisheye lens you can actually “correct” for the fisheye distortion. I would suggest starting here for more details.

  15. Memoona May 13, 2017 at 8:34 am #

    Hi adrian, thanks a lot for this blog and all others too i have learnt a lot from you. I have a few questions please.

    1. If i want to detect a smile on a face by measuring distance between landmarks 49 and 65 by applying simple distance formula where the unit of distance will be pixels. So my question is how can i know x and y coordinates for particular landmarks so i can apply mathematics and compare with database image?
    2. I want to do both face recognition and emotion detection so is there any way i can make it faster? Atleast near to realtime?

    Stay blessed

    • Adrian Rosebrock May 15, 2017 at 8:49 am #

      1. It’s certainly possible to build a smile detector using facial landmarks, but it would be very error prone. Instead, it would be better to train a machine learning classifier on smiling vs. not smiling faces. I cover how to do this inside Deep Learning for Computer Vision with Python.

      2. I also cover real-time facial expression/emotion detection inside Deep Learning for Computer Vision with Python as well.

  16. Arun VIJAY June 15, 2017 at 6:40 am #

    Hi Andrain,

    Is it possible to face verification with your face recognition code like the input is two images one is in ID card of the company which is having my face and the other one is my selfie image i need to compare and find both the person are same

  17. bharath grandhi June 19, 2017 at 7:01 am #

    File “”, line 5, in
    from imutils import face_utils
    ImportError: cannot import name face_utils
    sir can i know solution for this error….

    • Adrian Rosebrock June 20, 2017 at 10:59 am #

      Make sure you update your version of “imutils” to the latest version:

      $ pip install imutils

  18. Valeriano July 4, 2017 at 3:09 pm #

    Dear Adrian. Thanks so much for your clear explanation. This blog is very useful, i’ve learnt about computer vision in a couple days. I have a problem when I’m trying to execute this program in Ubuntu Terminal. The following error appears:

    Illegal instruction (core dumped)

    I’ve read about it and it’s probably a Boost.Python problem. Can you give me some help to solve this problem?

    • Adrian Rosebrock July 5, 2017 at 5:56 am #

      I would insert a bunch of “print” statements to determine which line is throwing the error. My guess is that the error is coming from “import dlib”, in which case you are importing the library into a different version than it was compiled against.

  19. Abhishek Mane July 6, 2017 at 12:45 pm #

    Hey Mr. Adrian, nice tutorial , I wanted to ask can I do this for android since it requires too many libraries? I’m trying to create an augmented reality program for android using unity game engine so can you tell me relative to unity?

    • Adrian Rosebrock July 7, 2017 at 9:53 am #

      I don’t have any experience developing Android applications. There are Java + OpenCV bindings, but you’ll need to look for a Java implementation of facial landmarks.

  20. Arick Chen July 18, 2017 at 10:37 am #

    Dear Adrian,
    Thanks a lot for all your works in this blog and codes. It is really amazing that it can detect eyes more accurate than many other face detection APIs.

    There is a question I want to ask.
    Recently, I am doing a research which needs really accurate eyes landmarks and this tutorial almost meets my needs.
    However, I also need the landmark of pupils. Have you ever done it before? Or how can I get an accurate pupil landmark of the eye.

    • Adrian Rosebrock July 21, 2017 at 9:06 am #

      I personally haven’t done any work in pupil detection, but I’ve heard that others have had good luck with this tutorial.

      • Badhreesh M Rao September 13, 2019 at 2:11 pm #

        Hello Adrian,

        I realize this is a very late response, but thank you so much for your in depth blog post. With regards to finding the pupil landmark, is it possible to infer it by using the two landmarks for each of the eyelids as a sort of bounding box for the pupil and calculate the coordinates of the center? This would theoretically be the pupil landmark. Do you think this is precise enough?

        I am interested in figuring this out because I want to see if I can accurately calculate the pupillary distance of a person this way.

        Do let me known if I am missing anything 🙂

  21. CHIARA ANDREA LANTAJO August 4, 2017 at 7:02 am #

    Thank you so much for this tutorial! It worked perfectly. I have a question though, can this method of detecting facial features work with images that do not contain whole faces? For example, the picture that I’m about to process only contains the nose, eyes and eyebrows (basically zoomed up images). Or does it only work on images with all the facial features specified above?

    I’m actually trying to detect the center of the nose using zoomed up images that only contain the eyes and the nose and a little bit of the eyebrows. If this method of detection will not work, can you please suggest any other method that I can use. Thank you so much. 🙂

    • Adrian Rosebrock August 4, 2017 at 7:13 am #

      If you can detect the face via a Haar cascade or HOG + Linear SVM detector provided by dlib then you can fit the facial landmarks to the face. The problem is that if you do not have the entire face in view, then the landmarks may not be entirely accurate.

      If you are working with zoomed in images, I would suggest training your own custom nose and eye detectors. OpenCV has a number of Haar cascades for this in their GitHub. However, I would suggest training your own custom object detector which is covered in detail inside the PyImageSearch Gurus course. I hope that helps!

      • Shahnawaz Shaikh August 5, 2017 at 10:17 am #

        Hi adrian fantatic post…after using hog i am able to track the landmarks of face in the video.But is it possible to track the face just the way you did for green ball example.So as to track a persons attention.Like if he moves his face up down or sideways there has to be a prompt like subject is distracted…Help much appreciated.

        • Adrian Rosebrock August 10, 2017 at 9:09 am #

          You could monitor the (x, y)-coordinates of the facial landmarks. If they change in direction you can use this to determine if the person is changing the viewing angle of their face.

  22. Abhranil August 7, 2017 at 2:57 pm #

    How will I detect the nose,eyes and other features in the face? I am a beginner.
    Thanks in advance.

    • Adrian Rosebrock August 10, 2017 at 9:00 am #

      Hi Abhranil — I’m not sure what you mean. This blog post explains how to extract the nose, eyes, etc.

  23. Hansani August 14, 2017 at 5:02 am #

    Hello Adrian,
    I need to detect face when eyes are covered with hand using a 2D video. I could’t do this because both eyes and the hands are in skin color. Could you please help me?
    Thank You

    • Adrian Rosebrock August 14, 2017 at 1:07 pm #

      I would suggest using either pre-trained OpenCV Haar cascades for nose/lip detection or training your own classifier here. This will help if the eyes are obstructed.

      • Hansani August 27, 2017 at 2:51 pm #

        Thank you so much for the response. My aim is to detect the situation of hand and face occlusion. I want to check whether a person is covering eyes by his hands or not.
        Thank You

  24. preethi August 22, 2017 at 8:36 am #

    hi sir!!
    thanks a lot for your wonderful blog posts.. i did face recognition api and eye detection from your blog am trying to do eye recognition api i detected pupil and iris but i dont know how to recognize it.. can you please help me!!

  25. David August 31, 2017 at 4:03 am #

    How do I get the all the points and store them in a text file for each image?+

    • Adrian Rosebrock August 31, 2017 at 8:28 am #

      You would simply use your favorite serialization library such as pickle, json, etc. and save the shape object to disk.

  26. Rahul August 31, 2017 at 2:06 pm #

    Hi Adrian,

    Thanks a lot for a wonderful blog, it’s so good to see people sharing the knowledge and motivating people to take up the field more seriously . you have made difficult concepts really easy.
    thanks once again, I am presently working on Lip reading, can you suggest some blogs of yours or repo where I n I can find the work on similar line.

    Thanks in advance

    • Adrian Rosebrock September 1, 2017 at 12:01 pm #

      I don’t have any prior experience with lip reading but I would suggest reading this IEEE article to help you get started.

  27. arash September 18, 2017 at 8:20 am #


    u are publishing cool stuff here
    thank u adrian

    but i believe the image of facial landmarks is not right u started from 1 to 68
    while mentioned the categories 0 to 68 and it does not match the original dlib landmark numbering

    just saying

    thank u again

    • Adrian Rosebrock September 18, 2017 at 1:58 pm #

      Hi Arash — I’ve actually updated the code in the “imutils” library so the indexing is correct.

  28. urameez September 21, 2017 at 8:08 am #


    i love you work on computer vision and deep learning and have been learning a lot form you

    can you make a post regarding face profile detection.
    i have tried using “bob.ip.facelandmarks” but it does not work on windows.

    can you help

    • Adrian Rosebrock September 23, 2017 at 10:14 am #

      I have not worked with profile-based landmark detections but I will consider this for a blog post in the future.

  29. Abhishek Inamdar September 23, 2017 at 1:13 pm #

    Really really amazing article sir. Please provide me the syntax explanation of the code from line number 63 to 69 in visaualize_facial_landmarks() function and why did you find the convex hull?

    • Adrian Rosebrock September 23, 2017 at 2:35 pm #

      Lines 63-69 handle the special case of drawing the jawline. We loop over the points and just “connect the dots” by drawing lines in between each (x, y)-coordinate.

      Otherwise, we compute the convex hull so when we draw the shape it encompasses all the points.

  30. Danny September 24, 2017 at 1:17 pm #

    can this library detect change in landmarks? like when we raise our right eyebrow would this be able to return us the changed position of eyebrow?

  31. Vishnu October 14, 2017 at 12:04 am #

    Dear Adrian,

    Your explanation is fantastic. I learned a lot.

    Sir actually I am trying to develop a project based on face recognition. Some how I want to measure distance between each parts(Mouth, Right eyebrow, Left eyebrow, Right eye, Left eye, Nose, Jawline) of face. Or I want the real position (x,y) of parts of face in the resized image.

    Sir is it possible?
    Any reply will be very helpful.

    Thank you so much.

    • Adrian Rosebrock October 14, 2017 at 10:34 am #

      We typically don’t use facial landmarks to perform face recognition. Algorithms for face recognition include Eigenfaces, Fisherfaces, and LBPs for face recognition. I cover these algorithms inside the PyImageSearch Gurus course.

  32. Suganya Robert October 27, 2017 at 7:56 am #

    Hi Adrian,

    I was seriously following your posts 4-5 months ago. Then I had my own research work in another area. Now I’m back to vision processing and deep learning. I had moved all your mails to a folder, and I have just started to read them one by one. I am really interested in your posts and work. You will see me in frequent comments and queries.

    • Adrian Rosebrock October 27, 2017 at 11:20 am #

      Hi Suganya — thank you for being a PyImageSearch reader — without you, blog posts, books, and courses wouldn’t be possible. Are you done with your research work now? Are you interested in doing research or hobby work in image processing and deep learning?

      • Suganya Robert October 28, 2017 at 1:29 pm #

        That was my Ph.D work. I was busy in writing manuscripts. Now it is over. I was doing my research on Video coding with non-linear representations. I want to enter into the emerging vision tech and deep learning for my future research works. Previously I followed your blogs and posts to complete a project with real time video processing (Motion detection) for remote alert. Now I am back.

  33. SHOBA October 31, 2017 at 1:33 am #

    Hi Adrian,
    Can you help me how to detect the eyes down in face

  34. PRANSHOO VERMA December 26, 2017 at 2:07 am #

    Could finding the distance between landmarks help to recognize a face? Every person’s landmarks are different, so would it be a good approach to recognize faces using this? If so, please give me some hints.

    • Adrian Rosebrock December 26, 2017 at 3:51 pm #

      You would want to use a face embedding which would produce a vector used to quantify the face. The OpenFace library would be a good start. I also cover face recognition inside the PyImageSearch Gurus course.

  35. abdallah gad January 16, 2018 at 3:51 pm #

    Can I ask you about the algorithm behind this code and the reference paper?

  36. Stefano January 22, 2018 at 7:04 am #

    Great tutorial, and maybe you can help me. I am looking for a way to recognize very similar objects while distinguishing them from each other. For example, I’d like to identify different kinds of handmade pots, creating a pattern which can help me recognize the category a pot belongs to. There is another problem: several of these pots are broken, so I need to recognize just a part of them. Sometimes I have just the foot, and sometimes just the head.

    So I need a function lets me create a landmark pattern (like face landmark) to identify the objects.

    Could you tell me the right way to built it? How can I create my own dataset for objects?

    I just need the right approach to start. Thanks a lot

    • Adrian Rosebrock January 22, 2018 at 6:11 pm #

      Hey Stefano — there are a few ways to approach this problem, but I would start with keypoint detection + local invariant features + feature matching. The final chapter of my book, Practical Python and OpenCV, demonstrates how to build a system to recognize the cover of a book; this method could be applied to your problem of recognizing pots/pot categories. To make the system more robust and scalable, take a look at the bag of visual words (BOVW) model. I cover BOVW and how to train classifiers on top of it in-depth inside the PyImageSearch Gurus course. I hope that helps!

  37. Agostina January 23, 2018 at 11:41 am #

    Hey Adrian! Thanks for your great tutorial. I have a question: do you know if it is possible to use dlib’s pre-trained facial landmark detector directly on an eye image, without detecting faces?
    Thanks a lot

    • Adrian Rosebrock January 23, 2018 at 1:53 pm #

      Unfortunately, no. You need to detect the face first in order to localize the eye region. You may consider applying an eye detector, such as OpenCV’s Haar cascades — but then you still need to localize the landmarks of the eye. This would likely entail training your own eye landmark detector.

      • Agostina January 23, 2018 at 2:16 pm #

        OK! Thank you! I already have the eye region localized, so I suppose the only possibility now is to train an eye landmark detector.

  38. Alex February 7, 2018 at 5:37 pm #

    I want to recognize and identify each part of the body so that I can accurately determine that this is the eye of a particular person, how can this be done in real time?

    • Adrian Rosebrock February 8, 2018 at 7:53 am #

      Is there a particular reason you need to identify each part of the body if you’re only interested in the eye?

  39. drviruz February 9, 2018 at 2:01 am #

    Hi. Awesome tutorial. What if I want a particular part, i.e. the eyes?

    • Adrian Rosebrock February 12, 2018 at 6:41 pm #

      I’m not sure what you mean by particular part of the eye?

  40. Sabina February 12, 2018 at 6:06 am #

    Hi, your work is great, thank you for this post. I have a question: I have used dlib for facial landmark detection, and now I want to use the landmark coordinates to detect the cheek and forehead areas and use them as ROIs. How can I do this with dlib?

    • Adrian Rosebrock February 12, 2018 at 6:13 pm #

      You’ll basically want to develop heuristic percentages. For example, if you know the entire bounding box of the face, the forehead region will likely be 30-40% of the face height above the detected face. You could define similar heuristics based on the eyes as well. For the cheeks try computing a rectangle between the left/right boundaries of the face, the lower part of the eye, and the lower part of the face bounding box.
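      A sketch of that kind of heuristic; the 35% figure below is an illustrative guess to tune, not a value from the post's code:

```python
# Heuristic forehead ROI: a band above the detected face bounding box.
# The 35% default is a guess you would tune for your own data.
def forehead_roi(x, y, w, h, pct=0.35):
    """Return (x, y, w, h) of an estimated forehead box above the face box."""
    band = int(h * pct)
    return (x, max(0, y - band), w, band)
```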

      • Rea October 9, 2018 at 5:23 pm #

        To follow up on my comment: If I need the forehead region to be accurately outlined (like the jaw), are you saying that there are no pre-trained models for this?

        • Adrian Rosebrock October 12, 2018 at 9:24 am #

          I personally have not encountered any.

  41. Sabina February 13, 2018 at 12:13 am #

    Thank You

  42. Sekar February 19, 2018 at 8:22 pm #

    Hi Adrian,

    Is there a way to find other facial features like the forehead, right cheek, and left cheek from this?


    • Adrian Rosebrock February 22, 2018 at 9:22 am #

      No, not with the standard facial landmark detector included with dlib.

      That said, see my reply to “Sabina” on February 12, 2018 where I discuss how you can extract the forehead and cheek regions.

  43. abdallah gad February 22, 2018 at 4:54 pm #

    I need to visualize only the lip facial landmarks.
    How can I do it?

    • Adrian Rosebrock February 26, 2018 at 2:17 pm #

      This post demonstrates how you can extract the facial landmarks for the mouth region. Figure 1 in particular shows you the indexes for the upper and lower lip. You can use these indexes to extract or visualize the lip regions.

  44. Dinesh Kumar February 27, 2018 at 11:30 am #

    Hi Adrian,

    I am getting this error. How do I solve it?

    usage: [-h] -p SHAPE_PREDICTOR -i IMAGE error: the following arguments are required: -p/--shape-predictor, -i/--image

    Dinesh Kumar.K

    • Adrian Rosebrock February 27, 2018 at 11:38 am #

      You need to supply the command line arguments when executing the script via the terminal. If you are new to command line arguments that’s okay, but you should spend some time reading up on them before continuing.

      • Dinesh Kumar February 28, 2018 at 9:54 am #

        I got the output. Thank you so much!

        • awais March 5, 2018 at 1:15 am #

          Hi, I am also getting the same error. Could you please help me sort out this issue?

          • awais March 5, 2018 at 1:20 am #

            Thanks, I got the output!

  45. TaeWoo Kim April 3, 2018 at 6:28 am #

    Any idea how I would determine if the mouth landmark points are moving?

    • TaeWoo Kim April 3, 2018 at 6:34 am #

      Sorry.. pressed submit too fast

      I meant that I’d like to determine if the mouth is moving in a VIDEO. Thus the algorithm should work regardless of variance in position, rotation, and scale (i.e., zoom).

      • Adrian Rosebrock April 4, 2018 at 12:12 pm #

        I would suggest computing the centroid of the mouth facial landmark coordinates and then passing it into a deque data structure, similar to what I do in this blog post.
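        A rough sketch of that idea (the fake landmark array below stands in for a real detection; rows 48-67 of the 68-point shape are the mouth):

```python
from collections import deque

import numpy as np

# rolling history of centroids, as in the ball-tracking post
centroids = deque(maxlen=32)

def track_mouth(shape):
    # shape is the (68, 2) landmark array; rows 48-67 are the mouth
    centroid = shape[48:68].mean(axis=0)
    centroids.appendleft(centroid)
    return centroid

# demo with a fake detection where every mouth point sits at (10, 20)
fake = np.zeros((68, 2))
fake[48:68] = (10.0, 20.0)
c = track_mouth(fake)
```

Comparing successive centroids in the deque then gives a motion signal that is translation-tolerant; normalizing by the face box size would also handle scale.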

  46. vishwash bhardwaj April 11, 2018 at 6:10 am #

    Sir, I have an error when I compile the code: “dlib module not found”. How do I fix this error? Please tell me the specific command to install the dlib module on Windows with Anaconda.

    • Adrian Rosebrock April 11, 2018 at 8:58 am #

      You should double-check that dlib successfully installed on your machine. It sounds like you did not install dlib correctly. I haven’t used Windows in a good many years (and only formally support Linux and macOS on this blog) so I need to refer you to the official dlib install instructions. I hope at the very least that helps point you in the right direction.

  47. Vamshi Reddy Pothuganti April 19, 2018 at 10:24 pm #

    You are such a freaking guy. Wherever I go to look for the files, I can’t find anything except the usage of your library with those 68 points. Can you please tell me, in an easier way, how you constructed your library? When I look at your imutils library it has lots of other things which I don’t need; I want to use plain OpenCV to reduce my memory usage. So, can you please guide me on that?

    • Adrian Rosebrock April 20, 2018 at 9:44 am #

      Hey Vamshi:

      First, I’ve replied to your previous comments in the original blog post you commented on. Please read them.

      Secondly, I’ve explained why (or why not) you may want to use imutils in those comments. It’s an open source library. You are free to modify the code. You can strip parts out of it. You can add to it. It’s open source. Have fun with it and learn from it; however, I cannot provide exact code to solve your exact problem. You will need to work on the project yourself. If you’re new to Python, OpenCV, or image processing in general, that’s totally okay — but you will need to educate yourself along the way.

      Thirdly, I do not appreciate your tone, both in this comment and your previous ones. Please stop and be more considerate and professional. I am making these (free) tutorials available for you to learn from. I’m also taking time to help you with your questions. If you cannot do me this courtesy I will not be able to help you. Thank you.

  48. alaa April 22, 2018 at 7:50 pm #

    Great tutorial and very helpful
    I have a question..What is the best way to detect irises?

    • Adrian Rosebrock April 25, 2018 at 6:09 am #

      I do not have any tutorials on iris detection but I know other PyImageSearch readers have tried this method.

  49. Jameshwart Lopez May 10, 2018 at 4:25 am #

    Hi Adrian, do you have a tutorial where i can copy each pixel inside the jaw line and save it to a file while making the other parts transparent?

    • Adrian Rosebrock May 14, 2018 at 12:21 pm #

      I do not, but you can certainly implement that method yourself without too much of an issue. You’ll want to create a mask for the jaw region, apply a bitwise AND, and then save the resulting masked image to disk. If you’re new to masking and image processing basics that’s totally okay but I would recommend learning the fundamentals first — my personal suggestion would be to refer to Practical Python and OpenCV.

  50. lisa bennet May 17, 2018 at 1:51 am #

    I want to draw a curve along the lips but don’t know how to access points 48 to 68. How can I do that?

    • Adrian Rosebrock May 17, 2018 at 6:43 am #

      There are a few ways to do this. The easiest would be to take the raw “shape” array from Line 35 and slice out indexes 48 to 68 (keeping in mind that Python is zero-indexed).

  51. Septian May 17, 2018 at 9:13 am #

    Hey Doctor, how do I detect the eyes of people who wear glasses?

  52. Jhonatan June 14, 2018 at 1:28 am #

    How can I output an overlaid image like your Figure 2?

    • Adrian Rosebrock June 15, 2018 at 12:34 pm #

      The visualize_facial_landmarks function discussed in the blog post will generate the overlaid image for you.

  53. Prashant June 19, 2018 at 1:17 am #

    Can you please explain how to get whole-face landmarks, including the forehead, and how to detect hair?

    • Adrian Rosebrock June 19, 2018 at 8:29 am #

      There are no facial landmarks for the forehead region. You could either train your own facial landmark predictor (which would require labeled data, of course) or try using heuristics, such as “the forehead region is above the eyes and 40% as tall as the rest of the face”. You’ll want to experiment with that, of course.

  54. Ashwin June 19, 2018 at 5:18 am #

    Hi Adrian,
    I was trying to extract eye features, specifically the pupil movement within the eye. To do that I suppose I would have to increase the number of points on the face. Can you tell me a way to do that?

    • Adrian Rosebrock June 19, 2018 at 8:25 am #

      Unfortunately you would need to train your own custom facial landmark detector to include more points. You cannot “add more points” to the current model.

  55. F Ahmad June 22, 2018 at 3:26 am #

    Hey Adrian,

    Thanks a lot for such a wonderful tutorial. Your blogs are the most informative and detailed ones available on the internet today.

    I have a small problem and I hope you can solve it. When I provide images with side faces as input (in which only one eye is visible), the above code generates a wrong ROI for the eye, one that is not even in the frame. Can you please suggest an approach so that I can exclude the ROIs for features which are not in the image and display only the ones that are visible?

    • Adrian Rosebrock June 25, 2018 at 2:04 pm #

      So the face is correctly detected but then the eye location is not even on the frame? That’s really odd as this post is used to detect blinks without a problem using facial landmarks.

  56. Amit June 27, 2018 at 1:58 am #

    Dear Adrian,
    This is an amazing post.
    I have a question: when we detect the eyebrows, nose, and lips, there are sharp edges, specifically on the eyebrows. If you try to change the color you will notice this very easily. So how do we remove these sharp edges?

    • Adrian Rosebrock June 28, 2018 at 8:08 am #

      I’m not sure what you mean by “sharp edges”. Do you have a screenshot you could share?

  57. Vincent Kok July 1, 2018 at 10:02 pm #

    Hi Adrian,
    Fantastic tutorial. I am doing a research on improving the speed for attendance system using facial recognition.

    I wanted to first classify the database into different groups (based on facial features such as the size of the eyes or nose). Hence, in a large group of students, the matching would be faster. Could you advise how I can do that?

    And is there an algorithm to calculate the size of eyes after detecting the eyes?

    Looking forward to talking more with you about this.


    • Adrian Rosebrock July 3, 2018 at 8:30 am #

      That’s likely overkill for a face recognition system and actually likely prone to errors. You should instead leverage a model that can produce robust facial embeddings. See this blog post for more information on face recognition.

  58. Kim July 6, 2018 at 10:13 am #

    How can we do exactly this, but instead of using the 68 landmarks, use the 194 landmarks provided by the HELEN dataset? The OrderedDict specifies where the facial parts are using indexes 1 to 68.

    • Adrian Rosebrock July 10, 2018 at 8:53 am #

      If you are using a different facial landmark predictor you will need to update the OrderedDict code to correctly point to the new coordinates. You’ll want to refer to the documentation of your facial landmark predictor for the coordinates.
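      The shape of that change might look like this; the index ranges below are placeholders, not the real HELEN annotation scheme, which you would take from your predictor's documentation:

```python
from collections import OrderedDict

# Placeholder index ranges for a hypothetical 194-point predictor; the
# real ranges must be taken from that predictor's documentation
FACIAL_LANDMARKS_194_IDXS = OrderedDict([
    ("face_contour", (0, 41)),
    ("nose", (41, 58)),
    ("mouth", (58, 114)),
])

# region lookup then works exactly as with the 68-point mapping
(j, k) = FACIAL_LANDMARKS_194_IDXS["mouth"]
```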

  59. Joel September 12, 2018 at 6:31 am #

    You’re a champ, brother. I’m struggling to learn how to determine whether two given images of the same person match. Can you explain the concept, or if you have already covered it, can you share the blog? Please.

  60. Amir September 21, 2018 at 7:50 am #

    Thanks for your interesting article.
    Why does the imutils library no longer exist on GitHub?

  61. Adrian October 6, 2018 at 7:39 am #

    Hi Adrian,

    I want to do this with a lot of images that are in a directory, all at once. How can I do it?

    Thanks 🙂

    • Adrian Rosebrock October 8, 2018 at 9:45 am #

      You can use the list_images function from the imutils library to loop over all images in your input directory. Refer to this blog post for an example.

  62. Rahil October 8, 2018 at 2:14 pm #

    HI Adrian, thank you for such an amazing tutorial, I’ve learnt a lot from this.
    I have 2 questions,
    First, can I change the dots that detect the eyes to a line that passes through all the dots?

    Second, after detecting the face parts, I’ve extracted only the eyes. However, when using face_utils.visualize_facial_landmarks(image, shape), all the face parts are drawn. So my question is: can we edit the visualize_facial_landmarks function so that it colors only the sliced points (the eyes, for example) and not all the face parts?

    • Adrian Rosebrock October 8, 2018 at 3:04 pm #

      1. Are you referring to a “line of best fit”? If so, take the (x, y)-coordinates for each eye and fit a line to them.

      2. I think this followup post will help you out.
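      The line fit in (1) could be sketched with NumPy like this (made-up eye coordinates; with real landmarks you would slice the six points of one eye from the shape array):

```python
import numpy as np

# made-up coordinates roughly shaped like the six landmarks of one eye
eye = np.array([[36, 21], [38, 19], [41, 19], [43, 21], [41, 23], [38, 23]])

# a degree-1 polynomial fit gives the line of best fit through the points
slope, intercept = np.polyfit(eye[:, 0], eye[:, 1], 1)
```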

      • Rahil October 9, 2018 at 2:32 am #

        Hi Adrian, I’ve checked the post. But I need to use visualize_facial_landmarks function to only colour the eyes in a transparent colour. Could you please help me on that?

        • Adrian Rosebrock October 9, 2018 at 5:59 am #

          Hey Rahil, I’m happy to help out and point you in the right direction but I cannot write the code for you. The original visualize_facial_landmarks function is in my GitHub. You can modify the function to only draw the eyes by using an “if” statement in the function to check if we are investigating the eyes.

          • Rahil October 9, 2018 at 6:22 am #

            Oh great I found my error. Thank you so much

          • Rahil October 9, 2018 at 2:12 pm #

            I found my error, thank you so much

          • Adrian Rosebrock October 12, 2018 at 9:28 am #

            Awesome, congrats on resolving the issue!

  63. Rea October 9, 2018 at 5:18 pm #

    Hi Adrian. Excellent Guide. Would you happen to know of any facial recognition models out there that also extract the hairline (yielding a closed circle for the entire face)? For my application, I need to have the forehead extracted as well and I am having trouble finding trained models with these points extracted. Thank you in advance for your help!

  64. Muskan October 11, 2018 at 9:40 am #

    What if, in the end, I want to color only one feature? For example, just the mouth and nothing else. How could it be done?

    • Adrian Rosebrock October 12, 2018 at 9:01 am #

      Hey Muskan, I’ve addressed this question a number of times in the comments section. Please read through them.

  65. Rahul October 11, 2018 at 3:42 pm #

    Hey Adrian, thank you for such an amazing post, I’ve learned a lot from it. But I need to detect only the mouth and color only the mouth; what changes would be needed in this code?

    • Adrian Rosebrock October 12, 2018 at 8:55 am #

      Please see my previous reply to you — you will need to implement your own custom “visualize_facial_landmarks” function. Again, see my previous reply.

  66. Rahul October 11, 2018 at 11:13 pm #

    Hey Adrian, your post is amazing. can you please tell me how to change the colour of detected parts?

    • Adrian Rosebrock October 12, 2018 at 8:52 am #

      You can pass in your own custom “colors” list to the visualize_facial_landmarks function.

      • Rahul October 12, 2018 at 10:51 am #

        Can you please provide steps for doing it?

      • Rahul October 12, 2018 at 12:17 pm #

        Got it. Thank you so much!

  67. Pradeep Bansal October 20, 2018 at 3:57 am #

    Hello Adrian!
    Amazing post, and thank you for the 17-day crash course. I have registered for it.
    I am working on classifying facial expressions, and for that the region around the mouth is crucial.
    As I can see from this post, we can easily get the “mouth” part.
    Any thoughts on how I can get more of the area around the mouth, closer to the nose but not including it?
    For example, when we laugh, lines develop on the sides of the nose down to the mouth.
    Can we get that?

    • Adrian Rosebrock October 20, 2018 at 7:22 am #

      Hey Pradeep, thanks for the comment! So stoked to hear you enjoyed the crash course!

      As for your question, typically we wouldn’t directly use facial landmarks for emotion/facial expression recognition. It’s possible for sure, but such a system would be fragile. I would instead recommend training a machine learning model on the detected faces themselves. For what it’s worth, I actually do have a chapter dedicated to emotion/facial expression recognition inside Deep Learning for Computer Vision with Python.

  68. lex November 24, 2018 at 12:47 pm #

    Hi Adrian,

    Is it possible to have an overlay for only the lips? That is, the output would add “lipstick” to the lips only, not the entire mouth region. Thank you.

    • Adrian Rosebrock November 25, 2018 at 8:57 am #

      There are a few ways to go about it but you’ll ultimately need alpha blending. Refer to this tutorial for an example.

  69. Ali Ne November 30, 2018 at 5:28 am #

    Thanks, Adrian, for your amazing post!

    • Adrian Rosebrock November 30, 2018 at 8:44 am #

      Thanks Ali!

  70. Marina December 24, 2018 at 7:18 pm #

    Hi, Adrian! Just wanted to let you know that the link to “visualize_facial_landmarks” is broken.

    • Adrian Rosebrock December 27, 2018 at 10:32 am #

      Thanks, I have updated the link.

  71. Debaditya January 3, 2019 at 2:45 pm #

    Hi Adrian,

    Thanks for the wonderful post – Lot to learn – Amazing article.

    Thanks again,

    • Adrian Rosebrock January 5, 2019 at 8:45 am #

      Thanks so much, Deb!

  72. Yuyutsa Shrivastava February 5, 2019 at 6:42 am #

    Hello Adrian,

    What if I need the hair included in the cropped face too?

    • Adrian Rosebrock February 5, 2019 at 9:13 am #

      Are you trying to create a binary mask for the hair? Or just extract the face + forehead region?

      • Yuyutsa Shrivastava February 6, 2019 at 12:43 am #

        No, I am not trying to create a binary mask. I need the cropped face with hair and without any background, not only the face + forehead.

        • Adrian Rosebrock February 7, 2019 at 7:15 am #

          In order to create a cropped face + hair without any background you will need to create a binary mask first. The binary mask is what enables you to segment the background from the foreground.

          • Yuyutsa Shrivastava February 7, 2019 at 11:07 am #

            Can you please help me with that? Any links would be very helpful! Thank you in advance.

          • Adrian Rosebrock February 14, 2019 at 3:06 pm #

            I would suggest taking a look at instance segmentation algorithms such as Mask R-CNN, U-Net, and FCN.

  73. Tian February 6, 2019 at 6:01 am #

    Hi, Mr. Adrian, thanks for sharing. What a cool post!
    I want to ask: how can the face detector read the landmarks if the person in the video is not stable, i.e., moves forward and backward or left and right?
    Thank you so much!

    • Adrian Rosebrock February 7, 2019 at 7:09 am #

      Applying facial landmark prediction is a two step process. First the face must be detected. Then landmarks are predicted. As long as the face is detected you can estimate the landmarks.

  74. Christophe Jacquelin March 4, 2019 at 6:33 am #

    Is it possible to have a drawing of a face with the numbers for the 194 points?
    I want to locate the position of each face feature as a function of the numbers.
    Thank you,

    • Adrian Rosebrock March 5, 2019 at 8:44 am #

      Figure 1 shows the indexes of the facial landmarks.

  75. Namrata Chavan March 12, 2019 at 12:53 am #

    Is it possible to detect sunglasses? If yes then how?

    • Adrian Rosebrock March 13, 2019 at 3:21 pm #

      Detect sunglasses in general? Or eyes behind sunglasses? If you simply want to detect sunglasses you could train an object detector to detect sunglasses. The HOG + Linear SVM detector would be a good first step.

  76. woppaiazz March 18, 2019 at 6:11 am #

    Very helpful, thank you very much!

    • Adrian Rosebrock March 19, 2019 at 10:01 am #

      You are welcome!

  77. Kishan March 21, 2019 at 12:35 am #

    Is it possible to detect only the eyes and save the image of the eyes as another image using a part of the code given?
    Thank you

    • Adrian Rosebrock March 22, 2019 at 8:45 am #

      Yes, it is. See this tutorial as an example of exactly that.

  78. lii April 3, 2019 at 5:47 am #

    Hi Adrian, really good post and very helpful. I am a fan of yours and have been a silent reader for the past 3 years.

    Do you mind sharing some code to do the following sequence?

    I have thousands of image frames with labels: filename.ImageFrame.class.jpg (class is 0 or 1).

    I want to:

    1) Detect the Face, Left_Eye, Right_Eye, and Face_Grid from a list of image frames in a folder.

    2) Create rectangles for the Face, Left_Eye, Right_Eye, and Face_Grid.

    3) Extract the detected Face, Left_Eye, Right_Eye, and Face_Grid as npy arrays (npz file)…. Output as Face.npy, Left_Eye.npy, Right_Eye.npy, Face_Grid.npy and y.npy (label 0 or 1).

    I want to feed these output as to a pre-trained model for classification problem.

    Can someone help?

    Really appreciate your kindness.

    • Adrian Rosebrock April 4, 2019 at 1:23 pm #

      I provide the PyImageSearch blog as an education blog. These tutorials are free for you to use. I would suggest you give it a try yourself. Experiment with the code. Struggle with it. Teach yourself something new. It’s good to push your boundaries, I have faith in you!

  79. Benjamin Netanyahu April 3, 2019 at 10:40 am #

    Hi Adrian,

    The post is very nice and well explained.
    One question I have is regarding missing face regions.
    The predictor always finds all the regions; even side-face images result in two detected eyes.
    It’s like there is a correlation calculation that best fits the regions rather than actually searching for them. What am I missing?

  80. sri April 6, 2019 at 2:57 pm #

    I tried this code, but I got an error installing the dlib package on Windows. Can anyone give me a solution for it?

  81. Ong Chung Yau April 7, 2019 at 10:57 pm #

    Hi, can you recommend an open-source Python code that determines whether the face is occluded or not, or any face quality estimation Python code?
    Nowadays many face recognition models can detect a face even with only the two eyes showing. I’m having difficulty finding a facial landmark detector that doesn’t produce a result when the face is occluded, so that I can identify that the face is blocked.

    • Ong Chung Yau April 7, 2019 at 10:58 pm #

      I found the OpenCV Haar cascade mouth, eye, and nose detectors. The accuracy is very bad.

    • Adrian Rosebrock April 12, 2019 at 12:41 pm #

      Sorry, I do not have any source code for that use case.

  82. Rishab Aggarwal April 13, 2019 at 2:20 am #

    Hey man, can you build a basic lip reader for some lip movements? I am waiting for that.

    • Adrian Rosebrock April 18, 2019 at 7:39 am #

      Thanks for the suggestion and I’ll definitely consider it, although I cannot guarantee if/when I will be able to cover it.

  83. Reynaldo April 24, 2019 at 7:52 am #

    Hello Adrian

    What if I want to detect faces and crop them like this?

    I have tried many ways, but I have not managed to get such results.

    Thank you for the help

  84. mahi May 21, 2019 at 3:36 am #

    Hey, when we run your code we can only detect the mouth. How can we detect both the nose and the eyes?

    • Adrian Rosebrock May 23, 2019 at 9:41 am #

      This code shows you how to detect the nose, eyes, etc. so I’m not sure what your question is?

  85. Rob August 27, 2019 at 3:10 am #

    Hi Adrian, I need to extract the exact shape of the lips from an image. The image will contain only the lips of the user. The user might or might not be smiling, showing teeth, have a different skin tone, etc. Do you think your code will help achieve this?

  86. ITRAJU SANKARA RAO October 1, 2019 at 3:40 am #

    Hi, you have given such a great tutorial for OpenCV, thank you so much.

    Please tell me how to do lip reading using OpenCV and a Raspberry Pi.

    I have been following your facial landmark tutorials, which are awesome.

    thank you

    • Adrian Rosebrock October 3, 2019 at 12:29 pm #

      Sorry, I don’t have any tutorials on lip reading at the moment.

  87. Gunjan October 10, 2019 at 1:32 pm #

    Thank you for this amazing blog. It is really informative.
    Is there any way I can make this run in real time using a webcam?

  88. Jack October 19, 2019 at 12:27 pm #

    If you are working on a Mac, use XQuartz; on Windows, use Xming.

  89. Woojoung January 3, 2020 at 12:53 am #

    Dear Adrian.

    First of all, thank you for your awesome blog posts, from which I have learned a lot.

    I’m new to app development.

    I would like to develop an Android mobile application using OpenCV and the mobile camera to detect eyes, nose, lips, and jaw.

    Could you tell me what I am supposed to do, or suggest some tutorials?

    Kind regards.

    • Adrian Rosebrock January 16, 2020 at 11:02 am #

      Sorry, I don’t cover Android development here. I would suggest looking into using OpenCV’s Java bindings.


  1. Drowsiness detection with OpenCV - PyImageSearch - May 8, 2017

    […] The facial landmarks produced by dlib are an indexable list, as I describe here: […]

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply