Face Alignment with OpenCV and Python

Continuing our series of blog posts on facial landmarks, today we are going to discuss face alignment, the process of:

  1. Identifying the geometric structure of faces in digital images.
  2. Attempting to obtain a canonical alignment of the face based on translation, scale, and rotation.

There are many forms of face alignment.

Some methods try to impose a (pre-defined) 3D model and then apply a transform to the input image such that the landmarks on the input face match the landmarks on the 3D model.

Other, more simplistic methods (like the one discussed in this blog post), rely only on the facial landmarks themselves (in particular, the eye regions) to obtain a normalized rotation, translation, and scale representation of the face.

The reason we perform this normalization is due to the fact that many facial recognition algorithms, including Eigenfaces, LBPs for face recognition, Fisherfaces, and deep learning/metric methods can all benefit from applying facial alignment before trying to identify the face.

Thus, face alignment can be seen as a form of “data normalization”. Just as you may normalize a set of feature vectors via zero centering or scaling to unit norm prior to training a machine learning model, it’s very common to align the faces in your dataset before training a face recognizer.

By performing this process, you’ll enjoy higher accuracy from your face recognition models.

Note: If you’re interested in learning more about creating your own custom face recognizers, be sure to refer to the PyImageSearch Gurus course where I provide detailed tutorials on face recognition.

To learn more about face alignment and normalization, just keep reading.


Face alignment with OpenCV and Python

The purpose of this blog post is to demonstrate how to align a face using OpenCV, Python, and facial landmarks.

Given a set of facial landmarks (the input coordinates) our goal is to warp and transform the image to an output coordinate space.

In this output coordinate space, all faces across an entire dataset should:

  1. Be centered in the image.
  2. Be rotated such that the eyes lie on a horizontal line (i.e., the face is rotated so that the eyes lie along the same y-coordinates).
  3. Be scaled such that the size of the faces are approximately identical.

To accomplish this, we’ll first implement a dedicated Python class to align faces using an affine transformation. I’ve already implemented this FaceAligner class in imutils.

Note: Affine transformations are used for rotating, scaling, translating, etc. We can pack all three of the above requirements into a single cv2.warpAffine call; the trick is creating the rotation matrix, M.

We’ll then create an example driver Python script to accept an input image, detect faces, and align them.

Finally, we’ll review the results from our face alignment with OpenCV process.

Implementing our face aligner

The face alignment algorithm itself is based on Chapter 8 of Mastering OpenCV with Practical Computer Vision Projects (Baggio, 2012), which I highly recommend if you have a C++ background or interest. The book provides open-access code samples on GitHub.

Let’s get started by examining our FaceAligner  implementation and understanding what’s going on under the hood.

Lines 2-5 handle our imports. To read about facial landmarks and our associated helper functions, be sure to check out this previous post.

On Line 7, we begin our FaceAligner  class with our constructor being defined on Lines 8-20.

Our constructor has 4 parameters:

  • predictor : The facial landmark predictor model.
  • desiredLeftEye : An optional (x, y) tuple with the default shown, specifying the desired output left eye position. For this variable, it is common to see percentages within the range of 20-40%. These percentages control how much of the face is visible after alignment. The exact percentages used will vary on an application-to-application basis. With 20% you’ll basically be getting a “zoomed in” view of the face, whereas with larger values the face will appear more “zoomed out.”
  • desiredFaceWidth : Another optional parameter that defines our desired face width in pixels. We default this value to 256 pixels.
  • desiredFaceHeight : The final optional parameter specifying our desired face height value in pixels.

Each of these parameters is set to a corresponding instance variable on Lines 12-15.

Next, let’s decide whether we want a square image of a face or something rectangular. Lines 19 and 20 check if the desiredFaceHeight  is None , and if so, we set it to the desiredFaceWidth , meaning that the face is square. A square image is the typical case. Alternatively, we can specify different values for desiredFaceWidth  and desiredFaceHeight  to obtain a rectangular region of interest.
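For reference, the constructor described above can be sketched as follows. This is a condensed version of the FaceAligner class that ships with imutils (imutils.face_utils); line numbers in the discussion refer to the full source:

```python
class FaceAligner:
	def __init__(self, predictor, desiredLeftEye=(0.35, 0.35),
		desiredFaceWidth=256, desiredFaceHeight=None):
		# store the facial landmark predictor along with the desired
		# output left eye position and output face dimensions
		self.predictor = predictor
		self.desiredLeftEye = desiredLeftEye
		self.desiredFaceWidth = desiredFaceWidth
		self.desiredFaceHeight = desiredFaceHeight

		# if no height was supplied, make the output face square
		if self.desiredFaceHeight is None:
			self.desiredFaceHeight = self.desiredFaceWidth
```

Note the (0.35, 0.35) default for desiredLeftEye, which falls in the 20-40% range discussed above.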

Now that we have constructed our FaceAligner  object, we will next define a function which aligns the face.

This function is a bit long, so I’ve broken it up into 5 code blocks to make it more digestible:

Beginning on Line 22, we define the align function which accepts three parameters:

  • image : The RGB input image.
  • gray : The grayscale input image.
  • rect : The bounding box rectangle produced by dlib’s HOG face detector.

On Lines 24 and 25, we apply dlib’s facial landmark predictor and convert the landmarks into (x, y)-coordinates in NumPy format.

Next, on Lines 28 and 29 we read the left_eye  and right_eye  regions from the FACIAL_LANDMARKS_IDXS  dictionary, found in the helpers.py  script. Each of these values is a 2-tuple containing the starting and ending indices of that eye region’s landmarks.

The leftEyePts  and rightEyePts  are extracted from the shape list using the starting and ending indices on Lines 30 and 31.
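The landmark lookup described above can be sketched as follows. The (42, 48) and (36, 42) index pairs are the standard 68-point dlib model values used by imutils; the shape array here is a dummy placeholder standing in for the output of the landmark predictor:

```python
import numpy as np

# starting/ending indices for each eye region in the 68-point
# dlib model (the values imutils' FACIAL_LANDMARKS_IDXS uses)
FACIAL_LANDMARKS_IDXS = {
	"left_eye": (42, 48),
	"right_eye": (36, 42),
}

# shape: a (68, 2) array of (x, y) landmark coordinates; in the
# real code this comes from converting the dlib predictor output
shape = np.zeros((68, 2), dtype="int")

# slice out the (x, y)-coordinates for each eye
(lStart, lEnd) = FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = FACIAL_LANDMARKS_IDXS["right_eye"]
leftEyePts = shape[lStart:lEnd]
rightEyePts = shape[rStart:rEnd]
```

Each slice contains the six landmark points that outline the corresponding eye.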

Next, let’s compute the center of each eye as well as the angle between the eye centroids.

This angle serves as the key component for aligning our image.

The angle of the green line between the eyes, shown in Figure 1 below, is the one that we are concerned about.

Figure 1: Computing the angle between two eyes for face alignment.

To see how the angle is computed, refer to the code block below:

On Lines 34 and 35 we compute the centroid, also known as the center of mass, of each eye by averaging all (x, y) points of each eye, respectively.

Given the eye centers, we can compute differences in (x, y)-coordinates and take the arc-tangent to obtain the angle of rotation between the eyes.

This angle will allow us to correct for rotation.

To determine the angle, we start by computing the delta in the y-direction, dY . This is done by finding the difference between the rightEyeCenter  and the leftEyeCenter  on Line 38.

Similarly, we compute dX , the delta in the x-direction on Line 39.

Next, on Line 40, we compute the angle of the face rotation. We use NumPy’s arctan2  function with arguments dY  and dX , followed by converting to degrees while subtracting 180 to obtain the angle.
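Here is a minimal sketch of that angle computation. The eye centers below are hypothetical values; in the actual code they come from the centroid step above:

```python
import numpy as np

# hypothetical eye centers (x, y); in the real code these are the
# means of each eye's six landmark points
leftEyeCenter = np.array([130.0, 120.0])
rightEyeCenter = np.array([190.0, 110.0])

# compute the deltas between the eye centers
dY = rightEyeCenter[1] - leftEyeCenter[1]
dX = rightEyeCenter[0] - leftEyeCenter[0]

# arc-tangent of the deltas, converted to degrees, minus 180
angle = np.degrees(np.arctan2(dY, dX)) - 180
```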

In the following code block we compute the desired right eye coordinate (as a function of the left eye placement) as well as calculating the scale of the new resulting image.

On Line 44, we calculate the desired right eye based upon the desired left eye x-coordinate. We subtract self.desiredLeftEye[0]  from 1.0  because the desiredRightEyeX  value should be equidistant from the right edge of the image as the corresponding left eye x-coordinate is from its left edge.

We can then determine the scale  of the face by taking the ratio of the distance between the eyes in the current image to the distance between the eyes in the desired image.

First, we compute the Euclidean distance between the eye centers, dist , on Line 50.

Next, on Line 51, using the difference between the right and left eye x-values we compute the desired distance, desiredDist .

We update the desiredDist  by multiplying it by the desiredFaceWidth  on Line 52. This essentially scales our eye distance based on the desired width.

Finally, our scale is computed by dividing desiredDist  by our previously calculated dist .
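A runnable sketch of the scale computation, reusing the same hypothetical eye centers and the default desiredLeftEye value from the constructor:

```python
import numpy as np

desiredLeftEye = (0.35, 0.35)   # constructor default
desiredFaceWidth = 256

# hypothetical eye centers carried over from the angle sketch
leftEyeCenter = np.array([130.0, 120.0])
rightEyeCenter = np.array([190.0, 110.0])
dY = rightEyeCenter[1] - leftEyeCenter[1]
dX = rightEyeCenter[0] - leftEyeCenter[0]

# desired right eye x-coordinate, equidistant from its edge
desiredRightEyeX = 1.0 - desiredLeftEye[0]

# Euclidean distance between the eyes in the input image
dist = np.sqrt((dX ** 2) + (dY ** 2))

# desired distance between the eyes in the output image, as a
# fraction of the output width, then scaled to pixels
desiredDist = (desiredRightEyeX - desiredLeftEye[0])
desiredDist *= desiredFaceWidth

# ratio of desired to actual eye distance
scale = desiredDist / dist
```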

Now that we have our rotation angle  and scale , we will need to take a few steps before we compute the affine transformation. This includes finding the midpoint between the eyes as well as calculating the rotation matrix and updating its translation component:

On Lines 57 and 58, we compute eyesCenter , the midpoint between the left and right eyes. This will be used in our rotation matrix calculation. In essence, this midpoint lies at the top of the nose and is the point around which we will rotate the face:

Figure 2: Computing the midpoint (blue) between two eyes. This will serve as the (x, y)-coordinate around which we rotate the face.

To compute our rotation matrix, M , we utilize cv2.getRotationMatrix2D  specifying eyesCenter , angle , and scale (Line 61). Each of these three values have been previously computed, so refer back to Line 40, Line 53, and Line 57 as needed.

A description of the parameters to cv2.getRotationMatrix2D  follows:

  • eyesCenter : The midpoint between the eyes is the point around which we will rotate the face.
  • angle : The angle by which we will rotate the face to ensure the eyes lie along the same horizontal line.
  • scale : The percentage that we will scale up or down the image, ensuring that the image scales to the desired size.

Now we must update the translation component of the matrix so that the face is still in the image after the affine transform.

On Line 64, we take half of the desiredFaceWidth  and store the value as tX , the translation in the x-direction.

To compute tY , the translation in the y-direction, we multiply the desiredFaceHeight  by the desired left eye y-value, desiredLeftEye[1] .

Using tX  and tY , we update the translation component of the matrix by shifting it by the difference between each value and the corresponding eyesCenter  coordinate (Lines 66 and 67).
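To make the matrix math concrete, the sketch below builds M with a small NumPy helper that reproduces the formula cv2.getRotationMatrix2D uses (per the OpenCV documentation), so it runs without OpenCV installed; in the actual code you would simply call cv2.getRotationMatrix2D. The eye centers, angle, and scale are hypothetical values carried over from the earlier sketches:

```python
import numpy as np

def rotation_matrix_2d(center, angle, scale):
	# same 2x3 matrix cv2.getRotationMatrix2D builds; angle is in
	# degrees, positive values rotate counter-clockwise
	(cx, cy) = center
	a = scale * np.cos(np.radians(angle))
	b = scale * np.sin(np.radians(angle))
	return np.array([
		[a,  b, (1 - a) * cx - b * cy],
		[-b, a, b * cx + (1 - a) * cy],
	])

# hypothetical values from the earlier sketches
leftEyeCenter = np.array([130.0, 120.0])
rightEyeCenter = np.array([190.0, 110.0])
angle, scale = -189.46, 1.26
desiredFaceWidth = desiredFaceHeight = 256
desiredLeftEye = (0.35, 0.35)

# midpoint between the eyes -- the point we rotate around
eyesCenter = ((leftEyeCenter[0] + rightEyeCenter[0]) / 2,
	(leftEyeCenter[1] + rightEyeCenter[1]) / 2)

# rotation matrix about the eyes midpoint
M = rotation_matrix_2d(eyesCenter, angle, scale)

# update the translation component so the eyes midpoint lands at
# the desired position in the output image
tX = desiredFaceWidth * 0.5
tY = desiredFaceHeight * desiredLeftEye[1]
M[0, 2] += (tX - eyesCenter[0])
M[1, 2] += (tY - eyesCenter[1])
```

After the translation update, applying M to eyesCenter maps it exactly to (tX, tY), which is what keeps the face centered in the output.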

We can now apply our affine transformation to align the face:

For convenience we store the desiredFaceWidth  and desiredFaceHeight  into w  and h  respectively (Line 70).

Then we perform our last step on Lines 70 and 71 by making a call to cv2.warpAffine . This function call requires 3 parameters and 1 optional parameter:

  • image : The face image.
  • M : The translation, rotation, and scaling matrix.
  • (w, h) : The desired width and height of the output face.
  • flags : The interpolation algorithm to use for the warp, in this case INTER_CUBIC . To read about the other possible flags and image transformations, please consult the OpenCV documentation.

Finally, we return the aligned face on Line 75.

Aligning faces with OpenCV and Python

Now let’s put this alignment class to work with a simple driver script. Open up a new file, name it align_faces.py , and let’s get to coding.

On Lines 2-7 we import required packages.

If you do not have imutils  and/or dlib  installed on your system, then make sure you install/upgrade them via pip :
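Since the pip commands themselves are not shown above, they would look something like this:

```shell
pip install --upgrade imutils
pip install --upgrade dlib
```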

Note: If you are using Python virtual environments (as all of my OpenCV install tutorials do), make sure you use the workon  command to access your virtual environment first, and then install/upgrade imutils  and dlib .

Using argparse  on Lines 10-15, we specify 2 required command line arguments:

  • --shape-predictor : The dlib facial landmark predictor.
  • --image : The image containing faces.
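The argument parsing described above can be sketched as follows. Here parse_args is fed a sample argument list so the snippet is self-contained; the real script parses sys.argv, and the filenames are placeholders:

```python
import argparse

# construct the argument parser and define the two required switches
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
	help="path to facial landmark predictor")
ap.add_argument("-i", "--image", required=True,
	help="path to input image")

# in the real script this would be: args = vars(ap.parse_args())
args = vars(ap.parse_args(["-p", "model.dat", "-i", "photo.jpg"]))
```

Note that argparse converts --shape-predictor to the dictionary key "shape_predictor".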

In the next block of code we initialize our HOG-based detector (Histogram of Oriented Gradients), our facial landmark predictor, and our face aligner:

Line 19 initializes our detector object using dlib’s  get_frontal_face_detector .

On Line 20 we instantiate our facial landmark predictor using --shape-predictor , the path to dlib’s pre-trained predictor.

We make use of the FaceAligner  class that we just built in the previous section by initializing an object, fa , on Line 21. We specify a face width of 256 pixels.

Next, let’s load our image and prepare it for face detection:

On Line 24, we load our image specified by the command line argument --image . We resize the image on Line 25 to a width of 800 pixels, maintaining the aspect ratio. We then convert the image to grayscale on Line 26.

Detecting faces in the input image is handled on Line 31 where we apply dlib’s face detector. This function returns rects , a list of bounding boxes around the faces our detector has found.

In the next block, we iterate through rects , align each face, and display the original and aligned images.

We begin our loop on Line 34.

For each bounding box rect  predicted by dlib we convert it to the format (x, y, w, h) (Line 37).

Subsequently, on Line 38 we resize the box to a width of 256 pixels, maintaining the aspect ratio. We store this original, but resized, image as faceOrig .

On Line 39, we align the image, specifying our image, grayscale image, and rectangle.

Finally, Lines 42 and 43 display the original and corresponding aligned face image to the screen in respective windows.

On Line 44, we wait for the user to press a key with either window in focus, before displaying the next original/aligned image pair.

The process on Lines 35-44 is repeated for all faces detected, then the script exits.

To see our face aligner in action, head to the next section.

Face alignment results

Let’s go ahead and apply our face aligner to some example images. Make sure you use the “Downloads” section of this blog post to download the source code + example images.

After unpacking the archive, execute the following command:
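The exact command depends on the filenames in the download, but it would look something like this (the predictor and image filenames here are assumptions):

```shell
python align_faces.py \
	--shape-predictor shape_predictor_68_face_landmarks.dat \
	--image images/example_01.jpg
```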

From there you’ll see the following input image, a photo of myself and my fiancée, Trisha:

Figure 3: An input image to our OpenCV face aligner.

This image contains two faces, therefore we’ll be performing two facial alignments.

The first is seen below:

Figure 4: Aligning faces with OpenCV.

On the left we have the original detected face. The aligned face is then displayed on the right.

Now for Trisha’s face:

Figure 5: Facial alignment with OpenCV and Python.

Notice how after facial alignment both of our faces are the same scale and the eyes appear in the same output (x, y)-coordinates.

Let’s try a second example:

Here I am enjoying a glass of wine on Thanksgiving morning:

Figure 6: An input image to our face aligner.

After detecting my face, it is then aligned as the following figure demonstrates:

Figure 7: Using facial landmarks to align faces in images.

Here is a third example, this one of myself and my father last spring after cooking up a batch of soft shell crabs:

Figure 8: Another example input to our face aligner.

My father’s face is first aligned:

Figure 9: Applying facial alignment using OpenCV and Python.

Followed by my own:

Figure 10: Using face alignment to obtain canonical representations of faces.

The fourth example is a photo of my grandparents the last time they visited North Carolina:

Figure 11: Inputting an image to our face alignment algorithm.

My grandmother’s face is aligned first:

Figure 12: Performing face alignment using computer vision.

And then my grandfather’s:

Figure 13: Face alignment is unaffected by the person in the photo wearing glasses.

Despite both of them wearing glasses the faces are correctly aligned.

Let’s do one final example:

Figure 14: The final example input image to our face aligner.

After applying face detection, Trisha’s face is aligned first:

Figure 15: Facial alignment using facial landmarks.

And then my own:

Figure 16: Face alignment still works even if the input face is rotated.

The rotation angle of my face is detected and corrected, followed by being scaled to the appropriate size.

To demonstrate that this face alignment method does indeed (1) center the face, (2) rotate the face such that the eyes lie along a horizontal line, and (3) scale the faces such that they are approximately identical in size, I’ve put together a GIF animation that you can see below:

Figure 17: An animation demonstrating face alignment across multiple images.

As you can see, the eye locations and face sizes are near identical for every input image.


Summary

In today’s post, we learned how to apply facial alignment with OpenCV and Python. Facial alignment is a normalization technique, often used to improve the accuracy of face recognition algorithms, including deep learning models.

The goal of facial alignment is to transform an input coordinate space to output coordinate space, such that all faces across an entire dataset should:

  1. Be centered in the image.
  2. Be rotated such that the eyes lie on a horizontal line (i.e., the face is rotated so that the eyes lie along the same y-coordinates).
  3. Be scaled such that the size of the faces are approximately identical.

All three goals can be accomplished using an affine transformation. The trick is determining the components of the transformation matrix, M .

Our facial alignment algorithm hinges on knowing the (x, y)-coordinates of the eyes. In this blog post we used dlib, but you can use other facial landmark libraries as well — the same techniques apply.

Facial landmarks tend to work better than Haar cascades or HOG detectors for facial alignment since we obtain a more precise estimation to eye location (rather than just a bounding box).

If you’re interested in learning more about face recognition and object detection, be sure to take a look at the PyImageSearch Gurus course where I have 25+ lessons on these topics.


Downloads

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


91 Responses to Face Alignment with OpenCV and Python

  1. ringlayer May 23, 2017 at 3:30 am #

    nice article as always. I proudly announce that I’m a subscription visitors of this site.
    Hey Adrian , do you have some articles about ground / floor ground recognition or detection ?

    • Adrian Rosebrock May 25, 2017 at 4:29 am #

      Thank you for the kind words, I really appreciate it 🙂

      As for your question on ground/floor recognition, that really depends on the type of application you are building and how you are capturing your images. I would need more details on the project to provide any advice.

  2. achraf robinho May 26, 2017 at 2:10 pm #

    Good Tuto as always Thank’s Adrian !

    • Adrian Rosebrock May 28, 2017 at 1:06 am #

      Thanks Achraf!

  3. ringlayer May 26, 2017 at 3:11 pm #

    Dear Adrian
    floor ground extraction will be used for robot navigation .
    E.g the robot will navigate in this room:

    so the robot will need to extract areas with the carpet

  4. ringlayer May 26, 2017 at 3:22 pm #

    btw my current approach result is very dirty, as you can see here


    it’s histogram back projection as given in this example :

    but the result is dirty and contains unused pixels.

    Do you have suggestion for any better method than histogram back projection
    Best regards

    Ringlayer Robotic

    • Adrian Rosebrock May 28, 2017 at 1:06 am #

      This is a pretty advanced project, one that I wouldn’t necessarily recommend if you are new to computer vision and OpenCV. In either case, I would recommend that you look into stereo vision and depth cameras as they will enable you to better segment the floor from objects in front of you. Basic image processing isn’t going to solve the problem for all possible floors.

      • ringlayer May 31, 2017 at 3:55 pm #

        Thank you for answer Adrian. I just modify my robot vision using different approach, it’s no longer need to extract the floor segment, instead it just detect possible obstacle using combionation of computer vision and ultrasonic sensor.

        Thank you very much for informations

      • kaisar khatak December 25, 2017 at 12:24 am #

        Have you thought about a blog post on monocular SLAM?

  5. PranavAgarwal May 28, 2017 at 11:01 pm #

    Does this face alignment result (output which we get)is applied to the actual image or do we just get the (only)aligned image as a result?

    • Adrian Rosebrock May 31, 2017 at 1:25 pm #

      This method will return the aligned ROI of the face.

      • PranavAgarwal June 1, 2017 at 12:05 am #

        Is there any procedure instead of ROI we get the face aligned on the actual image.

  6. Pelin GEZER July 4, 2017 at 6:47 am #

    I am wondering how to calculate distance between any landmark points. I think this one is easy because eye landmark points are on linear plane. For example, if I want to measure distance between landmarks on jawline [4,9], how to?

    • Adrian Rosebrock July 5, 2017 at 5:59 am #

      You would simply compute the Euclidean distance between your points. If you’ve done a simple camera calibration you can determine the real-world distance as well.

  7. tiffany July 5, 2017 at 8:20 am #

    hi, thanks for you post. i would like to know when computing angle = np.degrees(np.arctan2(dY, dX)) - 180. why subtracting 180?

  8. Abder-Rahman July 28, 2017 at 12:29 pm #

    Thanks for the nice post. Does the method work with other images than faces?

    • Adrian Rosebrock August 1, 2017 at 9:53 am #

      This method was designed for faces, but I suppose if you wanted to align an object in an image based on two reference points it would still work. But again, this method was intended for faces.

  9. lastfiddler July 29, 2017 at 8:07 pm #

    Nice article Adrian , I need your help in license plate recognition in the localisation of the plate any help please !!?

  10. John August 5, 2017 at 1:12 pm #

    Hey how to center the face on the image? Basically I want to divide the image in half so that it divides right through the center of the nose bridge. But as of now, when I run the image through the face aligner, the nose bridge is not really in the center. Thanks so much! I need help ASAP I have a project due tomorrow ahahah.

  11. Shreyasta Samal August 23, 2017 at 9:59 am #

    Hello Adrian,

    I have read your articles on face recognition and also taken your book Practical Python and OpenCV + Case studies. They are very good and to the point. Is it possible to calculate the distances between nose, lips and eyes all together and mark these points together as shown in this blogpost ?


  12. Sourabh Mane August 29, 2017 at 5:29 am #

    Hello Sir,
    How to detect whether eyes are closed or opened in an image??Because i want only those images to be aligned whose eyes are opened.Sir please help me as I want to implement this in my project

    • Adrian Rosebrock August 31, 2017 at 8:40 am #

      Take a look at this blog post on drowsiness detection. In particular pay attention to the Eye Aspect Ratio (EAR).

  13. Manu BN December 3, 2017 at 11:10 am #

    Hi Adrian,

    Thanks for the awesome tutorial.

    I really hate python and all your tutorials are in python.

    So I have done the alignment in C++.

    Can you please take a look at the code here: https://github.com/ManuBN786/Face-Alignment-using-Dlib-OpenCV

    Thanks in advance.

  14. Tan Nguyen December 11, 2017 at 2:01 pm #

    My result is:
    usage: Face_alignment.py [-h] -p SHAPE_PREDICTOR -i IMAGE
    Face_alignment.py: error: the following arguments are required: -p/--shape-predictor, -i/--image
    [Finished in 0.5s]

    how to fix it?

    • Adrian Rosebrock December 12, 2017 at 9:09 am #

      You need to supply command line arguments to the script, just like I do in the blog post:

      Notice how the script is executed via the command line using the --shape-predictor and --image switches. If you are new to command line arguments, please read up on them.

  15. Luis January 13, 2018 at 12:33 pm #

    Hello, it’s an excellent tutorial.
    You could tell me what command you used to draw the green rect line that is between the eyes of figure one, please.

    • Adrian Rosebrock January 15, 2018 at 9:18 am #

      I drew the circles of the facial landmarks via cv2.circle and then the line between the eye centers was drawn using cv2.line.

  16. Omar January 30, 2018 at 12:33 pm #

    Hi Adrian, excellent tutorial.

    I have this error when defining
    dY = rightEyeCentre[1] - leftEyeCentre[1]

    IndexError: index 1 is out of bounds for axis 0 with size 1

    what am doing wrong here? ( i have the facial landmarks in arrays, i am not using these: ( FACIAL_LANDMARKS_IDXS[“left_eye”] )


    • Adrian Rosebrock January 31, 2018 at 6:51 am #

      I’m a bit confused — is there a particular reason you are not using the FACIAL_LANDMARKS_IDXS to lookup the array slices? I would suggest using my code exactly if your goal is to perform face alignment.

  17. Igor February 15, 2018 at 5:56 am #

    Hello, Adrian. In all samples we see that chin and forehead are little bit croped, how to easy make it full size? Thanks for advice.

    • Adrian Rosebrock February 18, 2018 at 10:01 am #

      You would typically take a heuristic approach and extend the bounding box coordinates by N% where “N” is a manually tuned value to give you a good approximation and accuracy on your dataset.

  18. Igor February 15, 2018 at 6:45 am #

    And also a queshion. When I send video to this process, I’ve got a very different frames in output, very noisy in ouput video, even the face dosent move in original video, like in grid corpus.

  19. Jamie March 1, 2018 at 9:18 pm #

    What Cascade Classifier are you using when ingesting this data into an application and what is the application used for?

    I’ve implemented this as a subprocess within a larger training prototype. I’ve yet to receive a 0.0 confidence using the lbpcascade_frontalface cascade while streaming video over a WiFi network.

    What I have found is that when using this method of alignment too much of the background is contained within the aligned image. Sample images that contain a similar amount of background information are recognized at lower confidence scores than the training data. I’m assuming this is an error on my part, but that seems to be the only common denominator.

    I’d like to experiment by changing how tightly faces are cropped as illustrated here: https://docs.opencv.org/2.4/modules/contrib/doc/facerec/tutorial/facerec_gender_classification.html#aligning-face-images (scroll to the bottom)

    What are your thoughts?

  20. Dorin April 1, 2018 at 8:27 am #

    What if the face is rotated in 3D – is LPB happy ?

    I tested this algoritm and it aligned all the detected faces in the 2D section plan of the standard camera (It did not detect all the faces and I did not found your threshold parameter, that you used in other projects, to lower it, to accept more faces)
    (I wrote “standard camera” because Intel is working now with simultan connected multi cameras that can give you “any” angle – filmed or computed)
    If the subject is looking at 45 degrees of the camera, eyes are closer than the front view, one ear become visible, one ear is hidden
    I suppose that the LPB is not very happy about that so there is one more step
    – rotate the face in one more plan ?
    – rotate LPB templates ?
    What should we do next (except detecting the 45 degree angle which is another step 🙁 )?

    Keep writing !

    • Adrian Rosebrock April 4, 2018 at 12:34 pm #

      Sorry, are you asking about using LBPs specifically for face recognition? Or using LBPs for face alignment?

  21. Yoni Keren May 7, 2018 at 10:53 am #

    Hi everyone!

    Does anyone get lines 64-67?

    Why was the matrix changed like that? What ate the elements which were changed?
    And why is Tx half of desiredFaceWidth?!

    • Adrian Rosebrock May 9, 2018 at 10:01 am #

      In order to apply an affine transformation we need to compute the matrix used to perform the transformation. Make sure you read up on the components of this matrix. I would also suggest taking a look at Practical Python and OpenCV where I discuss the fundamentals of image processing (including transformations) using OpenCV.

  22. Farhan May 10, 2018 at 7:15 pm #

    Hello Adrian, great tutorial. Do you have any tutorial on text localization in a video?

    • Adrian Rosebrock May 14, 2018 at 12:13 pm #

      I do not. The closest tutorial I would have is on Tesseract OCR.

  23. Ankit June 26, 2018 at 4:54 am #

    Hello Master,
    Again Awesome tutorial from your side.
    I want to do this thing in real time video/ camera.
    Can you please guide me for that?

    What I wanted is, from the video it will crop the frontal face and do alignment process and save it to one folder.

    waiting for you reply guru.

    • Adrian Rosebrock June 28, 2018 at 8:21 am #

      If you are new to working with OpenCV and video streams I would recommend reading this blog post. From there you should consider working through Practical Python and OpenCV to help you learn the fundamentals of the library. I hope that helps point you in the right direction!

  24. Shreyasta Samal August 17, 2018 at 6:23 am #

    Hi Adrian,

    Nice article, I wanted to know up to what extent of variations in the horizontal or vertical axis does the Dlib detect the face and annotate it with landmarks?

    Best regards,

    • Adrian Rosebrock August 17, 2018 at 7:11 am #

      Hey Shreyasta — I’m not sure what you mean by extent of variations in the horizontal and vertical directions. I would suggest you download the source code and test it for your own applications.

  25. holger August 22, 2018 at 11:13 am #

    Thank you for this article and contribution to imutils.

  26. Bosman August 29, 2018 at 3:51 am #

    Dear Dr Adrian,

    Where do i save the newly created “pysearchimage” module on my system? I’m using window 10 and running the code on Spyder IDE. Thank you

    • Adrian Rosebrock August 30, 2018 at 9:01 am #

      Make sure you use the “Downloads” section of the blog post to download the “pyimagesearch” module. From there, you can import the module into your IDE. Alternatively, you could simply execute the script from the command line.

  27. Weng Siang October 2, 2018 at 9:46 am #

    Hi Dr Adrian, first of all this is a very good and detailed tutorial, i really like it very much!
    I would like you ask you a question. I had modified the code to run in a real time environment using video stream (webcam), but the result of the alignment seems to be “flickering” or “shaking”. I would like to for your opinion is there any solution that able to solve this issue ? Thank you very much!

    • Adrian Rosebrock October 8, 2018 at 10:34 am #

      The flickering or shaking may be due to slight variations in the positions of the facial landmarks themselves. You might try to smooth them a bit with optical flow.

      • Weng Siang October 9, 2018 at 1:27 am #

        is it possible if I implement video stabilization technique to stabilize it ?

        • Adrian Rosebrock October 9, 2018 at 6:02 am #

          Video stabilization operates on the video/frame itself, not the facial landmarks, so no, a video stabilization algorithm wouldn’t help much here unless the video you are working with is very “bouncy” and unstabilized.

          • Weng Siang October 9, 2018 at 1:01 pm #

            The frame was like keep changing its height and width even though I used imutils.resize() to resize it. Is there any other way to restrict the height and width of the frame?

          • Weng Siang October 10, 2018 at 6:10 am #

            Thank you very much for the information, Dr Adrian !
            Really learnt a lot of knowledge from you !
            Appreciate it !!!

  28. Seung Min November 5, 2018 at 11:36 am #

    Hi Adrian,

    I’m Korean and my English isn’t very good.

    I want to train on the aligned images to increase recognition accuracy in face recognition.

    How can I apply face alignment to face recognition?

    First, I want to save the images after face alignment to another folder.

    Thank you !

    • Seung Min November 5, 2018 at 11:41 am #

      And is there any way to apply face alignment to all images in the ‘images’ folder at once?

      • Adrian Rosebrock November 6, 2018 at 1:14 pm #

        Absolutely. You can use imutils’ paths.list_images function to loop over all images in an input directory. An example of using the function can be found in this tutorial.
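        For reference, a minimal, dependency-free stand-in for that helper looks like this (imutils’ paths.list_images does essentially the same thing):

```python
import os

# Minimal stand-in for imutils' paths.list_images: recursively walk a
# directory and yield paths whose extension looks like an image file.
def list_images(directory, exts=(".jpg", ".jpeg", ".png", ".bmp")):
    for root, _, files in os.walk(directory):
        for name in sorted(files):
            if name.lower().endswith(exts):
                yield os.path.join(root, name)
```

        You would then loop over `list_images("images")`, align each face, and write the result to disk.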

  29. Sarnath November 8, 2018 at 12:57 pm #

    Thanks for this awesome work! And of course, sharing all your knowledge with us!

    For me, this worked perfectly (I use a Haar-based detector, though). The only thing I had to change was removing the 180-degree subtraction; it was not necessary, as it was completely flipping my image. Otherwise, this code is just a gem!

    • Adrian Rosebrock November 10, 2018 at 10:07 am #

      Thanks Sarnath! And congratulations on a successful project.

  30. Naman Dosi November 14, 2018 at 9:17 pm #

    I want to perform face recognition with face alignment. So first I performed face alignment and got the aligned, cropped images. Now, when I try to apply face recognition to these using a Haar cascade or even LBP, the face is not getting detected, whereas before face alignment it was. Please help as soon as possible, and thanks a lot for a wonderful tutorial.

  31. Al November 19, 2018 at 10:43 am #

    Hi Adrian, thanks for your amazing tutorial.

    Everything works fine, just one dumb question: how do I save the result?

    Thank you again

    • Adrian Rosebrock November 19, 2018 at 12:21 pm #

      You can use the cv2.imwrite function to write an image to disk.

  32. Kai Xin December 2, 2018 at 10:08 am #

    Hi Adrian, greeting from Malaysia!

    I would like to know: can I perform face alignment on video?

    Thanks in advance =)

    • Adrian Rosebrock December 4, 2018 at 10:09 am #

      It’s the exact same technique; you just apply it to every frame of the video. To get started you need to access your webcam. I suggest using the cv2.VideoCapture function or my VideoStream class.
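      The per-frame pattern can be sketched generically like this; `read_frame` and `align_face` are placeholder callables (in practice, `read_frame` would wrap `cv2.VideoCapture(0).read()` or the VideoStream class, and `align_face` would run detection plus the FaceAligner from this post):

```python
# Sketch of the per-frame loop: face alignment on video is just the
# single-image pipeline applied to each frame in turn. `read_frame`
# returns the next frame, or None at the end of the stream;
# `align_face` processes a single frame.
def process_stream(read_frame, align_face):
    aligned = []
    while True:
        frame = read_frame()
        if frame is None:
            break
        aligned.append(align_face(frame))
    return aligned
```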

  33. Gunya December 3, 2018 at 10:34 pm #

    Thanks for this amazing post.

    But the script gives a NoneType error in imutils’ convenience.py (line 69) when running recognize.
    I have gone through your other posts, including the one on resolving NoneType errors, but I couldn’t come up with a solution.

    • Adrian Rosebrock December 4, 2018 at 9:44 am #

      Are you referring to the cv2.warpAffine call? If so, what is the output of:


  34. mincas December 13, 2018 at 10:58 am #

    Hi, I am planning to use this face alignment concept in my face recognition. May I know roughly how the process can be done?

    • Adrian Rosebrock December 18, 2018 at 9:33 am #

      Are you following one of my face recognition tutorials? If so, align the faces first and then extract the 128-d embeddings used to quantify each face.

  35. Yong Shen December 18, 2018 at 12:54 am #

    Hi Adrian, how can I configure the code to process a dataset of pictures instead of the single picture in this tutorial? Thanks in advance 🙂

    • Adrian Rosebrock December 18, 2018 at 8:48 am #

      How is your dataset stored? Is it just a directory of images on disk? If so, just use my paths.list_images function. An example of using the function can be found here.

  36. Anonymous December 27, 2018 at 9:11 pm #

    Hi Adrian, How can I save the aligned images into a file path/folder?

    • Adrian Rosebrock January 2, 2019 at 9:43 am #

      You can use the “cv2.imwrite” function.

  37. Nel January 10, 2019 at 5:14 pm #

    Thanks for your amazing article.

    I am new to Python and just working with Jupyter notebooks. I am going to use alignment for video files and run your code on each frame.
    When I run your code, an error relating to argparse is shown. From what I have googled, argparse is not compatible with Jupyter notebooks. I would really appreciate it if you could help me out.
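    A common workaround is to pass an explicit argument list to parse_args() instead of letting argparse read sys.argv (which Jupyter populates with kernel-specific flags). The sketch below assumes the post’s --shape-predictor and --image flags and uses illustrative file paths:

```python
import argparse

# Workaround for argparse inside Jupyter: supply the argument list
# explicitly rather than letting parse_args() read sys.argv.
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to facial landmark predictor")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
args = vars(ap.parse_args([
    "--shape-predictor", "shape_predictor_68_face_landmarks.dat",
    "--image", "images/example_01.jpg",
]))
```

    The rest of the script can then use `args["shape_predictor"]` and `args["image"]` unchanged.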

  38. Alexander January 28, 2019 at 9:43 am #

    Thanks a lot for this module. But there is a problem: I’m trying to use it for batch processing many images in a loop, and it looks like I’m getting memory leaks; memory usage grows with every image.
    Deleting the image variables doesn’t help. Could you please give a direction on how to solve this?

    • Adrian Rosebrock January 28, 2019 at 5:49 pm #

      Have you tried using Python’s debugger (pdb) to help debug the problem? That should help you determine where the memory consumption is coming from.
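      Another option (my own suggestion, not mentioned above) is Python’s built-in tracemalloc module, which reports which source lines allocate the most memory and can be more direct for hunting leaks in an image-processing loop:

```python
import tracemalloc

# Compare memory snapshots around one iteration of the batch loop to see
# which source lines account for the growth.
tracemalloc.start()
before = tracemalloc.take_snapshot()

# ... run one iteration of the image loop here; this allocation is a
# stand-in for the real work ...
leak = [bytearray(10_000) for _ in range(5)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")
for stat in top[:3]:
    print(stat)  # biggest allocation deltas first
tracemalloc.stop()
```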

  39. Victoria J April 30, 2019 at 1:52 pm #

    Hey Adrian,

    I’m attempting to use this to improve the accuracy of the OpenCV facial recognition. I got the face recognition to work great, but I’m hoping to combine the two pieces of code so that it will align the face in the photo and then attempt to recognize the face. I plan on working on that on my end, but my question is: what command can I use to save the resulting file? I was planning on running my whole database through this program, and I was hoping to have it automatically save the resulting file, but I’m having trouble finding a command to do that. I can screenshot it if need be, but it will make my life easier, as I update the database quite a bit to test different things. If you can think of a command that will make it go through them all automatically (i.e., replace the original and move on automatically to the next one so I don’t have to manually run it for every photo), let me know, but I already have a few ideas about that part. Thanks in advance!

    • Adrian Rosebrock May 1, 2019 at 11:28 am #

      Are you referring to saving the cropped face to disk? If so, use “cv2.imwrite”. I would also suggest you read through Practical Python and OpenCV first. Take the time to learn the basics of OpenCV, walk before you run. Learn the fundamentals and you’ll be able to improve your face recognition system.

  40. Raja Babu Jha May 3, 2019 at 7:59 am #

    Please, sir, write an article on estimating head pose (left or right) using a web camera or mobile device.

    • Adrian Rosebrock May 8, 2019 at 1:35 pm #

      Thanks for the suggestion. I’m not sure if/when I would be able to cover the topic but I’ll consider it.

  41. Rod May 16, 2019 at 3:14 pm #

    Hey, I’m loving your tutorials. I’ve aligned the faces of my dataset, and the resulting aligned images were used as the dataset for the ‘opencv-face-recognition’ tutorial, but most faces are ignored when extracting the embeddings.
    Am I doing something wrong?
    I suspect that, having aligned the faces, there are some steps in the face recognition tutorial I have to either skip or adapt, but I can’t figure it out.

    • Adrian Rosebrock May 23, 2019 at 10:19 am #

      Did you save the aligned face ROIs to disk? Or did you rotate the original image and then save it?

  42. Nick May 21, 2019 at 3:14 am #

    Thank you for your wonderful article.
    I have a problem: the aligned face includes a bit too much border. Although it can be detected again, it may take too much time, so I would like to leave a smaller border around the face. Is there any way to do that? Can you teach me how to adjust that part?

  43. Luis Tripa June 26, 2019 at 7:41 am #

    Is there a way to do this for faces turned sideways? I mean, placing all the face landmarks as if the person were looking at you instead of looking at something beside you?

  44. Siddhardha September 17, 2019 at 8:57 am #

    Hi Adrian,
    How does your alignment method differ from the alignment provided by dlib? (dlib.get_face_chip method also aligns the face).

    • Adrian Rosebrock September 17, 2019 at 9:00 am #

      I believe the face chip function is also used to perform data augmentation/jittering when training the face recognizer, but you should consult the dlib documentation to confirm.

  45. Dinesh October 11, 2019 at 3:06 am #

    Hi Adrian, how do I get the face aligned on the actual/original image, not just the face?

  46. Bambi Haber November 9, 2019 at 11:51 am #

    This is super interesting and useful. I have played with this example and am trying to align a face without cropping it; it seems like information in the photo gets lost.

    How can I align an image according to a face within its predefined size (even if I end up with “black” pixels in places where information is missing, at the edges of the image)?

  47. hira kshatriya November 14, 2019 at 6:14 am #

    Hey Adrian, thanks for this script,
    but for some images it does not detect the face or eye positions.


  1. (Faster) Facial landmark detector with dlib - PyImageSearch - April 2, 2018

    […] The most appropriate use case for the 5-point facial landmark detector is face alignment. […]

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer’s code is a very time-consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply