Eye blink detection with OpenCV, Python, and dlib

In last week’s blog post, I demonstrated how to perform facial landmark detection in real-time in video streams.

Today, we are going to build upon this knowledge and develop a computer vision application that is capable of detecting and counting blinks in video streams using facial landmarks and OpenCV.

To build our blink detector, we’ll be computing a metric called the eye aspect ratio (EAR), introduced by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink Detection Using Facial Landmarks.

Unlike traditional image processing methods for computing blinks which typically involve some combination of:

  1. Eye localization.
  2. Thresholding to find the whites of the eyes.
  3. Determining if the “white” region of the eyes disappears for a period of time (indicating a blink).

The eye aspect ratio is instead a much more elegant solution that involves a very simple calculation based on the ratio of distances between facial landmarks of the eyes.

This method for eye blink detection is fast, efficient, and easy to implement.

To learn more about building a computer vision system to detect blinks in video streams using OpenCV, Python, and dlib, just keep reading.

Eye blink detection with OpenCV, Python, and dlib

Our blink detection blog post is divided into four parts.

In the first part we’ll discuss the eye aspect ratio and how it can be used to determine if a person is blinking or not in a given video frame.

From there, we’ll write Python, OpenCV, and dlib code to (1) perform facial landmark detection and (2) detect blinks in video streams.

Based on this implementation we’ll apply our method to detecting blinks in example webcam streams along with video files.

Finally, I’ll wrap up today’s blog post by discussing methods to improve our blink detector.

Understanding the “eye aspect ratio” (EAR)

As we learned from our previous tutorial, we can apply facial landmark detection to localize important regions of the face, including eyes, eyebrows, nose, ears, and mouth:

Figure 1: Detecting facial landmarks in a video stream in real-time.

This also implies that we can extract specific facial structures by knowing the indexes of the particular face parts:

Figure 2: Applying facial landmarks to localize various regions of the face, including eyes, eyebrows, nose, mouth, and jawline.

In terms of blink detection, we are only interested in two sets of facial structures — the eyes.

Each eye is represented by 6 (x, y)-coordinates, starting at the left corner of the eye (as if you were looking at the person), and then working clockwise around the remainder of the region:

Figure 3: The 6 facial landmarks associated with the eye.

Based on this image, we should take away one key point:

There is a relation between the width and the height of these coordinates.

Based on the work by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink Detection using Facial Landmarks, we can then derive an equation that reflects this relation called the eye aspect ratio (EAR):

Figure 4: The eye aspect ratio equation.
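For reference, the equation shown in Figure 4 is:

EAR = (||p2 − p6|| + ||p3 − p5||) / (2 ||p1 − p4||)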

Where p1, …, p6 are 2D facial landmark locations.

The numerator of this equation computes the distance between the vertical eye landmarks while the denominator computes the distance between horizontal eye landmarks, weighting the denominator appropriately since there is only one set of horizontal points but two sets of vertical points.

Why is this equation so interesting?

Well, as we’ll find out, the eye aspect ratio is approximately constant while the eye is open, but will rapidly fall to zero when a blink is taking place.

Using this simple equation, we can avoid image processing techniques and simply rely on the ratio of eye landmark distances to determine if a person is blinking.

To make this more clear, consider the following figure from Soukupová and Čech:

Figure 5: Top-left: A visualization of eye landmarks when the eye is open. Top-right: Eye landmarks when the eye is closed. Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio indicates a blink (Figure 1 of Soukupová and Čech).

On the top-left we have an eye that is fully open — the eye aspect ratio here would be large(r) and relatively constant over time.

However, once the person blinks (top-right) the eye aspect ratio decreases dramatically, approaching zero.

The bottom figure plots a graph of the eye aspect ratio over time for a video clip. As we can see, the eye aspect ratio is constant, then rapidly drops close to zero, then increases again, indicating a single blink has taken place.

In our next section, we’ll learn how to implement the eye aspect ratio for blink detection using facial landmarks, OpenCV, Python, and dlib.

Detecting blinks with facial landmarks and OpenCV

To get started, open up a new file and name it detect_blinks.py. From there, insert the following code:
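The original listing is not reproduced here; the import block would look roughly like this (the line numbers cited in the text below refer to the full original script, not this sketch):

```python
# import the necessary packages
from scipy.spatial import distance as dist
from imutils.video import FileVideoStream
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
```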

To access either our video file on disk ( FileVideoStream ) or built-in webcam/USB camera/Raspberry Pi camera module ( VideoStream ), we’ll need to use my imutils library, a set of convenience functions to make working with OpenCV easier.

If you do not have imutils  installed on your system (or if you’re using an older version), make sure you install/upgrade using the following command:
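On most systems this is a single pip command:

```shell
pip install --upgrade imutils
```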

Note: If you are using Python virtual environments (as all of my OpenCV install tutorials do), make sure you use the workon  command to access your virtual environment first and then install/upgrade imutils .

Otherwise, most of our imports are fairly standard — the exception is dlib, which contains our implementation of facial landmark detection.

If you haven’t installed dlib on your system, please follow my dlib install tutorial to configure your machine.

Next, we’ll define our eye_aspect_ratio  function:
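A sketch of the function; it assumes eye holds the six landmark coordinates from Figure 3, ordered p1 through p6 (zero-indexed in Python):

```python
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # compute the euclidean distances between the two sets of
    # vertical eye landmarks: (p2, p6) and (p3, p5)
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])

    # compute the euclidean distance between the horizontal
    # eye landmarks: (p1, p4)
    C = dist.euclidean(eye[0], eye[3])

    # combine the numerator and denominator per Figure 4
    ear = (A + B) / (2.0 * C)
    return ear
```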

This function accepts a single required parameter, the (x, y)-coordinates of the facial landmarks for a given eye.

Lines 16 and 17 compute the distance between the two sets of vertical eye landmarks while Line 21 computes the distance between horizontal eye landmarks.

Finally, Line 24 combines both the numerator and denominator to arrive at the final eye aspect ratio, as described in Figure 4 above.

Line 27 then returns the eye aspect ratio to the calling function.

Let’s go ahead and parse our command line arguments:
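A sketch of the argument parsing block, using the standard argparse module:

```python
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to facial landmark predictor")
ap.add_argument("-v", "--video", type=str, default="",
    help="path to input video file")
args = vars(ap.parse_args())
```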

Our detect_blinks.py  script requires a single command line argument, followed by a second optional one:

  • --shape-predictor : This is the path to dlib’s pre-trained facial landmark detector. You can download the detector along with the source code + example videos to this tutorial using the “Downloads” section at the bottom of this blog post.
  • --video : This optional switch controls the path to an input video file residing on disk. If you instead want to work with a live video stream, simply omit this switch when executing the script.

We now need to set two important constants that you may need to tune for your own implementation, along with initializing two other important variables, so be sure to pay attention to this explanation:
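A sketch of those constants and counters:

```python
# two constants you may need to tune: the EAR threshold that
# indicates a blink, and the number of consecutive frames the
# eye must stay below that threshold
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 3

# initialize the frame counter and the total number of blinks
COUNTER = 0
TOTAL = 0
```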

When determining if a blink is taking place in a video stream, we need to calculate the eye aspect ratio.

If the eye aspect ratio falls below a certain threshold and then rises above the threshold, then we’ll register a “blink” — the EYE_AR_THRESH  is this threshold value. We default it to a value of 0.3  as this is what has worked best for my applications, but you may need to tune it for your own application.

We then have an important constant, EYE_AR_CONSEC_FRAMES — this value is set to 3 to indicate that three successive frames with an eye aspect ratio less than EYE_AR_THRESH must happen in order for a blink to be registered.

Again, depending on the frame processing throughput rate of your pipeline, you may need to raise or lower this number for your own implementation.

Lines 44 and 45 initialize two counters. COUNTER  is the total number of successive frames that have an eye aspect ratio less than EYE_AR_THRESH  while TOTAL  is the total number of blinks that have taken place while the script has been running.

Now that our imports, command line arguments, and constants have been taken care of, we can initialize dlib’s face detector and facial landmark detector:
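A sketch of the initialization, assuming args was populated by the argument parser described above:

```python
import dlib

# initialize dlib's HOG + Linear SVM face detector, then load
# the facial landmark predictor from the --shape-predictor path
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
```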

The dlib library uses a pre-trained face detector which is based on a modification to the Histogram of Oriented Gradients + Linear SVM method for object detection.

We then initialize the actual facial landmark predictor on Line 51.

You can learn more about dlib’s facial landmark detector (i.e., how it works, what dataset it was trained on, etc.) in this blog post.

The facial landmarks produced by dlib follow an indexable list, as I describe in this tutorial:

Figure 6: The full set of facial landmarks that can be detected via dlib (higher resolution).

We can therefore determine the starting and ending array slice index values for extracting (x, y)-coordinates for both the left and right eye below:

Using these indexes we’ll be able to extract eye regions effortlessly.

Next, we need to decide if we are working with a file-based video stream or a live USB/webcam/Raspberry Pi camera video stream:
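A sketch of the stream setup; the commented-out lines correspond to the webcam and Raspberry Pi alternatives discussed below:

```python
from imutils.video import FileVideoStream
from imutils.video import VideoStream
import time

# start the video stream thread (file-based by default)
print("[INFO] starting video stream thread...")
vs = FileVideoStream(args["video"]).start()
fileStream = True
# vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
# fileStream = False
time.sleep(1.0)
```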

If you’re using a file video stream, then leave the code as is.

Otherwise, if you want to use a built-in webcam or USB camera, uncomment Line 62.

For a Raspberry Pi camera module, uncomment Line 63.

If you have uncommented either Line 62 or Line 63, then uncomment Line 64 as well to indicate that you are not reading a video file from disk.

Finally, we have reached the main loop of our script:
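A sketch of the top of the main loop, assuming the setup described so far (vs, fileStream, detector) is in place:

```python
# loop over frames from the video stream
while True:
    # if this is a file video stream, we need to check if there
    # are any more frames left in the buffer to process
    if fileStream and not vs.more():
        break

    # grab the frame, resize it, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame
    rects = detector(gray, 0)
```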

On Line 68 we start looping over frames from our video stream.

If we are accessing a video file stream and there are no more frames left in the video, we break from the loop (Lines 71 and 72).

Line 77 reads the next frame from our video stream, followed by resizing it and converting it to grayscale (Lines 78 and 79).

We then detect faces in the grayscale frame on Line 82 via dlib’s built-in face detector.

We now need to loop over each of the faces in the frame and then apply facial landmark detection to each of them:
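Continuing inside the frame loop, a sketch of the per-face processing:

```python
    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert them to a NumPy array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates, then use them
        # to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)

        # average the eye aspect ratios together for both eyes
        ear = (leftEAR + rightEAR) / 2.0
```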

Line 89 determines the facial landmarks for the face region, while Line 90 converts these (x, y)-coordinates to a NumPy array.

Using our array slicing techniques from earlier in this script, we can extract the (x, y)-coordinates for both the left and right eye, respectively (Lines 94 and 95).

From there, we compute the eye aspect ratio for each eye on Lines 96 and 97.

Following the suggestion of Soukupová and Čech, we average the two eye aspect ratios together to obtain a better blink estimate (making the assumption that a person blinks both eyes at the same time, of course).

Our next code block simply handles visualizing the facial landmarks for the eye regions themselves:
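A sketch of that visualization, drawing the convex hull of each eye region:

```python
        # compute the convex hull for each eye, then visualize it
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
```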

You can read more about extracting and visualizing individual facial landmark regions in this post.

At this point we have computed our (averaged) eye aspect ratio, but we haven’t actually determined if a blink has taken place — this is taken care of in the next section:
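The thresholding logic can be sanity-checked outside the video pipeline with a toy sequence of EAR values (the values here are made up for illustration):

```python
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 3
COUNTER = 0
TOTAL = 0

# open eyes (~0.35), a four-frame "blink" (~0.1), then open again
for ear in [0.35, 0.34, 0.10, 0.09, 0.10, 0.11, 0.33, 0.35]:
    if ear < EYE_AR_THRESH:
        # below the threshold: count the consecutive closed frames
        COUNTER += 1
    else:
        # eye re-opened: register a blink only if it stayed closed
        # for enough consecutive frames, then reset the counter
        if COUNTER >= EYE_AR_CONSEC_FRAMES:
            TOTAL += 1
        COUNTER = 0

print(TOTAL)  # 1
```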

Line 111 makes a check to see if the eye aspect ratio is below our blink threshold — if it is, we increment the number of consecutive frames that indicate a blink is taking place (Line 112).

Otherwise, Line 116 handles the case where the eye aspect ratio is not below the blink threshold.

In this case, we make another check on Line 119 to see if a sufficient number of consecutive frames contained an eye aspect ratio below our pre-defined threshold.

If the check passes, we increment the TOTAL  number of blinks (Line 120).

We then reset the consecutive frame counter COUNTER (Line 123).

Our final code block simply handles drawing the number of blinks on our output frame, as well as displaying the current eye aspect ratio:
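A sketch of the drawing code, plus the usual display and cleanup at the bottom of the script:

```python
        # draw the total number of blinks along with the computed
        # eye aspect ratio on the frame
        cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # show the frame and check if the `q` key was pressed
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```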

To see our eye blink detector in action, proceed to the next section.

Blink detection results

Before executing any of these examples, be sure to use the “Downloads” section of this guide to download the source code + example videos + pre-trained dlib facial landmark predictor. From there, you can unpack the archive and start playing with the code.

Over this past weekend I was traveling out to Las Vegas for a conference. While I was waiting for my plane to board, I sat at the gate and put together the code for this blog post — this involved recording a simple video of myself that I could use to evaluate the blink detection software.

To apply our blink detector to the example video, just execute the following command:
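Assuming the example video from the “Downloads” archive is named blink_detection_demo.mp4 (the filename here is an assumption):

```shell
python detect_blinks.py --shape-predictor shape_predictor_68_face_landmarks.dat \
    --video blink_detection_demo.mp4
```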

And as you’ll see, we can successfully count the number of blinks in the video using OpenCV and facial landmarks:

Later, at my hotel, I recorded a live stream of the blink detector in action and turned it into a screencast.

To access my built-in webcam I executed the following command (taking care to uncomment the correct VideoStream  class, as detailed above):
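Omitting the --video switch:

```shell
python detect_blinks.py --shape-predictor shape_predictor_68_face_landmarks.dat
```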

Here is the output of the live blink detector along with my commentary:

Improving our blink detector

This blog post focused solely on using the eye aspect ratio as a quantitative metric to determine if a person has blinked in a video stream.

However, due to noise in a video stream, subpar facial landmark detections, or fast changes in viewing angle, a simple threshold on the eye aspect ratio could produce a false-positive detection, reporting that a blink had taken place when in reality the person had not blinked.

To make our blink detector more robust to these challenges, Soukupová and Čech recommend:

  1. Computing the eye aspect ratio for the N-th frame, along with the eye aspect ratios for N – 6 and N + 6 frames, then concatenating these eye aspect ratios to form a 13 dimensional feature vector.
  2. Training a Support Vector Machine (SVM) on these feature vectors.
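To make the second step concrete, here is a toy sketch using scikit-learn and synthetic EAR windows (the data and class layout are invented for illustration; this is not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# synthetic 13-dim EAR windows: "open" windows hover near 0.3,
# while "blink" windows dip toward zero in the middle frames
open_windows = rng.normal(0.3, 0.02, size=(50, 13))
blink_windows = rng.normal(0.3, 0.02, size=(50, 13))
blink_windows[:, 4:9] = rng.normal(0.08, 0.02, size=(50, 5))

X = np.vstack([open_windows, blink_windows])
y = np.array([0] * 50 + [1] * 50)  # 0 = no blink, 1 = blink

# train a linear SVM to separate the two window types
clf = SVC(kernel="linear").fit(X, y)
```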

Soukupová and Čech report that the combination of the temporal-based feature vector and SVM classifier helps reduce false-positive blink detections and improves the overall accuracy of the blink detector.


In this blog post I demonstrated how to build a blink detector using OpenCV, Python, and dlib.

The first step in building a blink detector is to perform facial landmark detection to localize the eyes in a given frame from a video stream.

Once we have the facial landmarks for both eyes, we compute the eye aspect ratio for each eye, which gives us a singular value, relating the distances between the vertical eye landmark points to the distances between the horizontal landmark points.

Once we have the eye aspect ratio, we can threshold it to determine if a person is blinking — the eye aspect ratio will remain approximately constant when the eyes are open and then will rapidly approach zero during a blink, then increase again as the eye opens.

To improve our blink detector, Soukupová and Čech recommend constructing a 13-dim feature vector of eye aspect ratios (N-th frame, N – 6 frames, and N + 6 frames), followed by feeding this feature vector into a Linear SVM for classification.

Of course, a natural extension of blink detection is drowsiness detection which we’ll be covering in the next two weeks here on the PyImageSearch blog.

To be notified when the drowsiness detection tutorial is published, be sure to enter your email address in the form below!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


113 Responses to Eye blink detection with OpenCV, Python, and dlib

  1. Marc Boudreau April 24, 2017 at 12:39 pm #

    Hi Adrian,
    Looks very interesting as usual!

    I saw on Twitter that you got dlib working on Raspi.
    Are you planning a tutorial on installing dlib on Raspi?

    • Adrian Rosebrock April 25, 2017 at 11:52 am #

      Correct — the dlib + Raspberry Pi install blog post will go live next week (May 1st, 2017).

  2. Eric Sobczak April 24, 2017 at 4:01 pm #

    This looks great. I like the fact that you explain the science behind it all. Should we be using Python 2.7 or 3.0?

    • Adrian Rosebrock April 25, 2017 at 11:51 am #

      You can use either Python 2.7 or Python 3.

  3. RAVIVARMAN RAJENDIRAN April 25, 2017 at 4:44 am #

    Hi, Thanks for the code.
    When i run the code, i get following two lines in output and not opening any video.

    [INFO] loading facial landmark predictor…
    [INFO] starting video stream thread…

    • Adrian Rosebrock April 25, 2017 at 11:48 am #

      What type of camera are you using?

      • haili April 26, 2017 at 3:42 am #

        Hi Adrian,Very thanks for you code.
        But when I run the code, I also can’t open any video. I use a Logitech USB camera…

        • Adrian Rosebrock April 26, 2017 at 6:49 am #

          It sounds like your version of OpenCV was compiled without video support. I would suggest re-compiling and re-installing OpenCV using one of my tutorials.

      • Jorge April 26, 2017 at 9:40 pm #

        Hi Adrian. I have the same issue with the blink detection:
        [INFO] loading facial landmark predictor…
        [INFO] starting video stream thread…
        and then the prompt.
        (The code for “real-time-facial-landmarks” works fine)

        • Adrian Rosebrock April 28, 2017 at 9:41 am #

          Hi Jorge — please check my reply to “haili” above. You’ll want to compile OpenCV with video support so you can access your webcam.

          • Jorge April 30, 2017 at 2:35 am #

            Hi Adrian. Thanks for your great job!!
            I found the cause of the issue. I missed to uncomment two lines of code:
            66: vs = VideoStream(src=0).start()
            68: fileStream = False
            (So we can use the built-in webcam or USB cam, as you say in the blog; in your instructions these are lines 63 and 64, but in the downloaded code they are 66 and 68.)
            I Hope this could help HAILI and RAVIVARMAN RAJENDIRAN
            Thanks a lot for this blog

          • Adrian Rosebrock May 1, 2017 at 1:26 pm #

            Congrats on resolving the issue Jorge, thank you for sharing.

  4. Nurulhasan April 25, 2017 at 8:00 am #

    Really great job..
    Face recognition also possible??.

    • Adrian Rosebrock April 25, 2017 at 11:46 am #

      You typically wouldn’t use facial landmarks directly for face recognition. Instead you would try Eigenfaces, Fisherfaces, and LBPs for face recognition (covered inside the PyImageSearch Gurus course). Otherwise, you would look into OpenFace.

  5. Christian April 25, 2017 at 9:12 am #

    Very cool! Great post. Thanks!!

    • Adrian Rosebrock April 25, 2017 at 11:45 am #

      Thanks Christian, I’m glad you enjoyed it!

  6. JBeale April 25, 2017 at 7:32 pm #

    Great article, this is really impressive. In my case, my eye aspect ratio never goes much above 0.3 no matter how wide I open my eyes. Also, I missed some blinks with the 3 frame setting, maybe your frame rate is higher than mine. This is what works better on my system:

    EYE_AR_THRESH = 0.23 # was 0.3

    • Adrian Rosebrock April 25, 2017 at 8:59 pm #

      Thanks for sharing! As I mentioned in the post, it might take a little tweaking depending on the frame processing rate of the system.

  7. JBeale April 25, 2017 at 8:03 pm #

    It is interesting to note that the green outlines around my eye always seem to show both eyes are roughly the same amount open, even when one eye is completely wide open and the other eye is entirely shut. Looks like the facial landmark detector (HOG) is making some assumption that the face should be symmetric, so both eyes should be about the same.

    • Adrian Rosebrock April 25, 2017 at 9:01 pm #

      The face detector is HOG-based. The facial landmark predictor is NOT HOG-based. Instead it interprets these landmarks as probabilities and attempts to fit a model to it. You can read more about the facial landmark detector here.

  8. Arzoo April 26, 2017 at 4:21 am #

    Hi Adrian,
    thanks for the detailed tutorial!
    I downloaded the zip file and executed the required command in terminal.
    I got this error:
    RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat
    I’m using a macOS Sierra.
    can you please help me figure this out.

    • Adrian Rosebrock April 26, 2017 at 6:49 am #

      Make sure you are executing your Python script from the same directory as the .dat file. Based on your error message, it seems like your paths are incorrect.

      • Arzoo May 6, 2017 at 7:41 am #

        Got it. Thank you!

      • Jack November 1, 2017 at 9:50 am #

        Hi Adrian
        thanks for the detailed tutorial!
        I have the same error as Arzoo, but I work on Win10, dlib 18.17, OpenCV 3.1, Python 3.4.5.
        Now I do not know how to solve it.
        Can you please help me solve the problem?

        • Adrian Rosebrock November 2, 2017 at 2:24 pm #

          Hi Jack — I don’t officially support Windows here on the PyImageSearch blog, I highly recommend you use a Unix-based environment such as Linux or macOS to study computer vision. In either case it looks like your path to the shape prediction model is incorrect. Double-check your paths and ensure you are using Windows path separators (‘\’) instead of Unix path separators (‘/’).

  9. wallace April 26, 2017 at 11:34 pm #

    Hi Adrian Rosebrock
    Thanks for your great work. But I’m using a windows OS. So how can I install the dlib library correctly for the facial landmark detection?

    • Adrian Rosebrock April 28, 2017 at 9:38 am #

      Hi Wallace — I don’t cover Windows here on the PyImageSearch blog. I recommend using Unix-based operating systems such as macOS and Ubuntu for computer vision development. If you would like to install dlib on Windows, please refer to the official dlib site.

    • Tarun April 29, 2017 at 3:13 am #

      Hi wallace,

      I used pip install dlib and that worked. I am able to use dlib in my code

    • Tarun April 29, 2017 at 3:19 am #

      Hi Adrian,

      Firstly, thank you so much. Your blog is of immense help for a computer vision enthusiast like me.

      A bit off topic – now days I am playing with YOLO to get real time object detection. I am trying to implement this in Python but without any success. Do you plan to cover this up?

      • Adrian Rosebrock May 1, 2017 at 1:41 pm #

        I’ll be covering YOLO along with Faster R-CNNs and SSDs inside Deep Learning for Computer Vision with Python. Object detection with deep learning is still a very volatile field of research. I’ll be discussing how these frameworks work and how to use them, but a pure Python implementation will be outside the scope of the book.

        Keep in mind that many state-of-the-art deep learning frameworks for object detections are based on forks of libraries like Caffe or mxnet. The authors then implement custom layers. It will likely be a few years until we see these types of object detectors (or even the building blocks) naturally existing in a stable state inside Keras, mxnet, etc.

  10. Gökhan Aras April 30, 2017 at 3:52 am #

    Thanks Adrian,

    you are a wonderful man

    this blog very good

    • Adrian Rosebrock May 1, 2017 at 1:25 pm #

      Thank you Gökhan, I really appreciate that! 🙂

  11. jon May 1, 2017 at 5:03 am #


    could you please make this clear :

    A = dist.euclidean(eye[1], eye[5])

    why eye[1], eye[5] ? in dlib eye landmarks to which eye[1], eye[5] is referring ?

    • Adrian Rosebrock May 1, 2017 at 1:19 pm #

      These are the individual indexes of the (x, y)-coordinates of the eyes. These indexes map to the equation in Figure 4 above (keep in mind that the equation is one-indexed while Python is zero-indexed). Furthermore, you can read more about the individual facial landmarks in this post.

  12. Shivani Junawane May 2, 2017 at 12:25 pm #

    vs.show() no such function found..

    This is the error i am getting.. I am able to run the code with built in video.. but not using laptop camera.. please resolve this issue..

    • Adrian Rosebrock May 3, 2017 at 5:46 pm #

      There is no function called vs.show() anywhere in this blog post. Did you mean vs.stop()?

  13. Firatov May 4, 2017 at 6:35 am #

    Hi Adrian,

    Great post! Tested this one with mobile phone (iPhone) and it works okay. dlib detection is a bit problematic on iphone camera so sometimes it doesn’t detect blinks because of bad lighting.

    I like the simple idea behind it. I tried to apply this idea to “eyebrow raise” detection mechanism but the ratio is not changing as drastically as eye ratio. Do you maybe have suggestion or idea to apply the same idea to eyebrow raising or any other facial gesture?

    • Adrian Rosebrock May 4, 2017 at 12:30 pm #

      Keep in mind that the eyebrow facial landmarks are only represented by 5 points each — they don’t surround the eyebrow like the facial landmarks do for the eye so the aspect ratio doesn’t have much meaning here. I would monitor the (x, y)-coordinates of the eyebrows, but otherwise you might want to look into other facial landmark models that can detect more points and encapsulate the entire eyebrow.

      • Roy Gustafson June 6, 2017 at 3:08 pm #

        Hi Adrian, thanks for this guide. As of now, I’m trying to use this system for wink detection. Right now I’m gathering data, but what I’ve determined is that both EAR’s decrease by something like 40% no matter which eye winks. Then I determine which EAR decreased more, and which decreased less. I may hard code it, if I can generalize it to a ratio indicating a wink, then a difference indicating WHICH eye is winking. From there, I need to make sure it doesn’t give me false positives for blinks and squints (although squints might be impossible to rule out).

        But this project has given me a lot to go off of! Thanks so much

  14. Jinwoo May 7, 2017 at 3:40 am #

    Hi Adrian,

    I’m having a problem running this code. It says
    “detect_blinks.py: error: the following arguments are required: -p/–shape-predictor”
    Can you please help me solve this error?

    • Adrian Rosebrock May 8, 2017 at 12:27 pm #

      Hi Jinwoo — I would suggest that you read up on command line arguments before continuing.

      • Mohammed Salman September 10, 2017 at 11:38 am #

        Do you know what is the issue here ?

        I’ve read what you have linked.. Don’t seem to find the solution here.

        • Adrian Rosebrock September 11, 2017 at 9:09 am #

          Please read the article again. It provides a discussion on command line arguments and how to use them. The problem is that you’re not specifying the --shape-predictor argument to your script.

  15. Ketan Vaidya May 8, 2017 at 12:24 pm #

    Hi! Great tutorials on the site! Just to give you a suggestion;
    Could you also use an IR lighting rig to light up the subject at night time? Because most webcams have the capability to detect IR.

  16. Ayush Karapagale May 28, 2017 at 1:11 am #

    it’s very slow, isn’t there any way to speed it up??…..

    • Adrian Rosebrock May 28, 2017 at 1:28 am #

      What are the specs of the computer you are using to execute the code? This code can easily run on modern-day laptops/desktops.

      • ayush karapagale May 28, 2017 at 2:25 am #

        i have apple macbook air… but i am doing a project that requires raspberry pi only because i have to make the device portable… i have tried increasing the GPU memory to 256 MB but it’s still the same

        • Adrian Rosebrock May 31, 2017 at 1:32 pm #

          I will be writing an updated blog post that provides a number of optimizations for blink detection on the Raspberry Pi within the next couple of weeks. Stay tuned!

      • ayush karapagale May 28, 2017 at 2:26 am #

        in the video you are able to run the program fast… can u
        please tell how ??

  17. reza June 4, 2017 at 8:02 pm #

    Hi .I want to write a program that count people(footfall counting).can you help me?tnx

  18. Dheeraj June 13, 2017 at 7:10 am #

    Hi Adrian,

    It’s not accurate at all; even if I move my eyes and don’t blink, it counts it as a blink. Why?

    • Adrian Rosebrock June 13, 2017 at 10:51 am #

      It sounds like you need to adjust the EYE_AR_THRESH variable as discussed in the post.

  19. zjfsharp June 20, 2017 at 3:12 am #

    It’s amazing! Thank you, Adrian Rosebrock! I want to use this to my fatigue detection experience.

    • Adrian Rosebrock June 20, 2017 at 10:46 am #

      Thanks, I’m happy to hear you found the project helpful! 🙂 Best of luck on your fatigue detection work.

  20. shubhank rahangdale June 23, 2017 at 3:10 pm #

    Hi Adrian

    if fileStream and not vs.more(VideoStream(src=0).start()):
    TypeError: more() takes exactly 1 argument (2 given)

    This is the error i am getting.. I am able to run the code with built in video.. but not using laptop camera.. please resolve this issue..

    • shubhank rahangdale June 23, 2017 at 3:23 pm #

      Thanks, I found the bug..

      • Adrian Rosebrock June 27, 2017 at 6:40 am #

        Congrats on resolving the issue Shubhank!

        • shubhank November 12, 2017 at 10:28 am #

          hi… Adrian
          Could you share your contact number so that I can contact you for my project details and idea?

  21. Adrian Perez June 28, 2017 at 9:32 am #

    Hello! Adrian, thanks for share your experience.

    how do you add a graphic or plot with the eye blink data?

    could you share this code to?


    • Adrian Rosebrock June 30, 2017 at 8:19 am #

      Are you trying to plot the EAR over time? If so, I would suggest using matplotlib, Bokeh, or whatever plotting library you feel comfortable with.

  22. Tommy Tang July 13, 2017 at 5:32 pm #

    Hi Adrian,

    Thank you so much for sharing and I followed all your steps to create a blink rate monitoring as well as a head tilting monitoring using some other data among the 68 points. All works pretty well but I found that the predictor will fail to predict the accurate eye contour when the object is wearing optics. Do you have any suggestion for that? Cause I know in openCV haar cascade there seems to be a specific classifier for eyes with glasses. Will there be a specific predictor built for this case?

    • Adrian Rosebrock July 14, 2017 at 7:23 am #

      That’s a great question. If the driver is wearing glasses and the eyes cannot be localized properly, you would likely need to create a specific predictor. Another option might be to try an IR camera, but again, that would also imply having to train a custom predictor. I’ve never tried this method with the user wearing glasses.

  23. Johan July 18, 2017 at 6:10 am #

    HI Adrian, I want to convert this to C++ code, can you please point me right direction, Thank you

  24. Chamath August 11, 2017 at 3:25 am #

    Hi thank for your amazing tutorial. But i got an error and i can’t resolve it. can you please help me.

    [usage: faceland.py [-h] -p SHAPE_PREDICTOR_68_FACE_LANDMARKS.DAT
    faceland.py: error: the following arguments are required: -p/–shape_predictor_68_face_landmarks.dat]

    Thank You.

    • Adrian Rosebrock August 14, 2017 at 1:20 pm #

      Please read the comments before you post. I have already answered this question in reply to “Jinwoo” above.

  25. Arighi August 16, 2017 at 3:31 pm #

    Hey Adrian, I really love your tutorial. It helped me a lot in finishing my project. But I have a problem making it automatically set the threshold based on the person's default open-eye EAR (different races have different eye aspect ratios).

    Like, I set it to 0.2 and it worked great for me, but not for my Chinese friend; it detected his eyes as closed. So I have to manually edit the threshold. Is there any way to make it more dynamic?

    • Adrian Rosebrock August 17, 2017 at 9:07 am #

      In short, not easily. I would suggest collecting as much EAR data as possible across a broad range of ethnicities and then using that as training data. Secondly, keep in mind what I said in the "Improving our blink detector" section: you can treat the EAR values as a feature vector and feed them into an SVM for better accuracy.

  26. Jack August 17, 2017 at 4:42 am #

    Hello there:
    I have a quick question about the facial landmark detection: if only a partial face is presented, is it possible to detect any part of the face elements, e.g. only one eye? Or do we need to retrain the data set to just detect an eye? Thanks.

    • Adrian Rosebrock August 17, 2017 at 9:02 am #

      Keep in mind that facial landmark detection is a two-phase process. First we must localize the face in an image. This is normally done using HOG + Linear SVM or Haar cascades. Only after the face is localized can we detect facial landmarks. In short, you need to be able to detect the face first.

      • Jack August 23, 2017 at 2:30 pm #

        Thanks for the reply. If we know that the image is only for an eye area, is it possible to use facial landmark to just detect the feature or outline of the eye? Thanks.

        • Adrian Rosebrock August 23, 2017 at 3:39 pm #

          Unfortunately, no. The facial landmark detector assumes you are working with the entire face. If you had just eye images and wanted to localize the eye you would need an eye detector + a trained shape predictor for the eye region.

  27. J-F Duquette August 18, 2017 at 9:28 am #

    Hi Adrian,

    Many thanks for sharing your knowledge. I'm trying to use your video file and everything loads without errors, but the video doesn't show at all. I have followed your tutorial on how to install OpenCV and I have installed the video libraries. I found on the web that it could be something with FFMPEG?

    Thanks for your help

    • Adrian Rosebrock August 21, 2017 at 3:48 pm #

      It’s hard to say what the exact issue could be, although it sounds likely that your system does not have the proper video codecs installed to read your video. I would suggest installing FFMPEG then re-compiling and re-installing OpenCV.

  28. Fernando J. August 19, 2017 at 6:52 pm #

    Hi Adrian, thank you very much for the wonderful tutorials and courses that you present each time.

    First question: Could this blink detector be used as liveness detection in a facial recognition system?

    Second question: Do you plan to prepare a tutorial on an effective mechanism for liveness detection in a facial recognition system?

    Thank you!

    • Adrian Rosebrock August 21, 2017 at 3:42 pm #

      1. Yes, this could be used for liveness detection, although I would recommend using a depth camera as well.

      2. I’ll add liveness detection as a future potential blog post, thank you for the suggestion.

  29. Ankit September 8, 2017 at 1:12 pm #

    Line 116 gives an invalid syntax error. Please help. I am using this code on Windows, with my PC's webcam.

    • Adrian Rosebrock September 11, 2017 at 9:24 am #

      What is the exact error you are getting? Did you use the “Downloads” section of this tutorial to download the code instead of copying and pasting it?

  30. Mohammed Salman September 9, 2017 at 9:29 pm #

    Hi Adrian,

    I’m an Engineering student. I am getting the following errors after de-commenting Line 62:

    usage: detect_blinks.py [-h] -p SHAPE_PREDICTOR [-v VIDEO]
    detect_blinks.py: error: the following arguments are required: -p/–shape-predictor

    Please help! (I am going to take your 21 day crash course, very excited sir!)

    Best Regards.

    • Adrian Rosebrock September 11, 2017 at 9:16 am #

      Hi Mohammed — please see my previous reply to your comment. You need to read up on command line arguments. The issue is you’re not supplying the --shape-predictor switch.

  31. Carlos September 19, 2017 at 1:22 am #

    Hi Adrian, I installed OpenCV from this link, following all the steps: https://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/ and I got this right:
    $ source ~/.profile
    $ workon cv
    $ python
    >>> import cv2
    >>> cv2.__version__
    But with this code I get an error: ImportError: No module named scipy (and imutils and cv2)
    I made a test: when I run source ~/.profile -> workon cv -> python -> import cv2, I don't get any error, but when I write import cv2 in the Python shell (3.4.2) I get the error No module named cv2
    (I'm sorry for my English)
    Good job and thanks so much

    • Adrian Rosebrock September 20, 2017 at 7:18 am #

      I'm a bit confused. You were able to install OpenCV 3.1 on your Raspberry Pi and access it via the command line. But then you try to import SciPy and you cannot access it? It sounds like you didn't install SciPy on your Raspberry Pi.

      • Carlos September 21, 2017 at 7:34 pm #

        Adrian, thanks for your reply. I installed SciPy. When I open the terminal and I run source ~/.profile -> workon cv -> python -> import cv2 there isn't any error, but when I open the Python shell and I write import cv2, it shows "No module named cv2". I get this error only in the Python shell, not in the terminal. It's like:
        $ source ~/.profile
        $ workon cv
        $ python
        >>> import cv2
        >>> (Here there’s no problem)

        Python Shell: (3.4.2)
        >>>import cv2
        “No module named cv2”

        Thank you Adrian, regards!!!

        • Adrian Rosebrock September 22, 2017 at 8:56 am #

          Can you clarify what you mean by “Python shell”? Are you referring to the GUI version of Python IDLE? If so, the GUI version of IDLE does not support Python virtual environments and it will not work. Please use the terminal or use Jupyter Notebooks.

          • Carlos September 22, 2017 at 8:57 pm #

            Yeah, exactly! Now I understand. I'll use Jupyter then.
            Again, thanks so much Adrian,

  32. Jack September 26, 2017 at 7:52 pm #

    Hey! How do you compute the EAR if the framework returns 8 points for the eye?

    • Adrian Rosebrock September 27, 2017 at 6:45 am #

      You might not be able to. How many landmarks are detected around each eye?
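For reference, the EAR calculation in this post assumes the six eye landmarks (p1 through p6) from the 68-point dlib model, which is why a different landmark layout may not map onto it directly. A minimal, self-contained version of the computation, with made-up example coordinates, looks like:

```python
import math

def eye_aspect_ratio(eye):
    """Compute the EAR from the six (x, y) eye landmarks used by
    Soukupova and Cech: p1..p6, ordered around the eye contour."""
    # Two vertical distances between the upper and lower eyelid landmarks...
    A = math.dist(eye[1], eye[5])  # p2 - p6
    B = math.dist(eye[2], eye[4])  # p3 - p5
    # ...and one horizontal distance between the eye corners.
    C = math.dist(eye[0], eye[3])  # p1 - p4
    return (A + B) / (2.0 * C)

# Hypothetical open-eye landmarks: wide horizontally, tall vertically.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
ear = eye_aspect_ratio(open_eye)  # about 0.67 here; drops toward 0 as the eye closes
```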

  33. Reza October 1, 2017 at 3:52 am #

    Your code can't work for multiple faces at the same time, right?
    The blink counter gets corrupted.

    • Adrian Rosebrock October 2, 2017 at 9:46 am #

      Correct, this code is intended for a single face. You can update it to work with multiple faces by tracking each face and associating a counter with each face.
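One rough sketch of that per-face bookkeeping, using a naive nearest-centroid match to associate detections across frames. A real system would use a proper tracker (e.g. centroid tracking or dlib's correlation tracker); the function names and the 50-pixel matching distance here are illustrative only:

```python
def centroid(rect):
    # rect is (x, y, w, h) as returned by a face detector
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def match_face(rect, states, max_dist=50.0):
    """Return the ID of the tracked face closest to rect, or a new ID.
    Each face's state carries its own blink counter and total."""
    cx, cy = centroid(rect)
    best_id, best_d = None, max_dist
    for face_id, state in states.items():
        px, py = state["centroid"]
        d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        if d < best_d:
            best_id, best_d = face_id, d
    if best_id is None:
        # No existing face is close enough: register a new one.
        best_id = max(states, default=-1) + 1
        states[best_id] = {"centroid": (cx, cy), "counter": 0, "blinks": 0}
    else:
        states[best_id]["centroid"] = (cx, cy)
    return best_id

states = {}
# Two detections in frame 1, then the first face moves slightly in frame 2.
a = match_face((10, 10, 40, 40), states)
b = match_face((200, 10, 40, 40), states)
c = match_face((14, 12, 40, 40), states)  # should match face `a` again
```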

  34. Carlos October 8, 2017 at 1:12 am #

    Hi Adrian, At first thank so much, you’re the best.
    I have a question, when I execute the program, I get this message (After INFO)


    ** (Frame:32335): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files

    I received this warning after install opencv 3.3, before, I was executing the program with opencv 3.1 and I never received any warning, do you know what is it about?
    Thank you

    • Adrian Rosebrock October 9, 2017 at 12:30 pm #

      It's important to understand that this is a warning and not an error. The message has no impact on your ability to execute the code or obtain correct results.

      To make the message go away run:

      $ sudo apt-get install libcanberra-gtk*

      Then re-compile and re-install OpenCV.

      • Carlos October 9, 2017 at 10:48 pm #

        Ok, thank you

  35. Reza Ghoddoosian October 13, 2017 at 6:46 pm #

    Hi, and thanks for the content.
    I get this error:
    Unable to stop the stream: Inappropriate ioctl for device
    Do you know how I should solve it?

    • Reza Ghoddoosian October 13, 2017 at 7:05 pm #

      By the way, it does not work when reading from a file; the webcam version works.

      • Adrian Rosebrock October 14, 2017 at 10:36 am #

        Unfortunately I’m not sure what the error is here. Can you try using the FileVideoStream class?

  36. Chris October 29, 2017 at 8:26 am #

    Hi Adrian thanks for the content!

    I have this error when running in my command prompt:
    Unable to open shape_predictor_68_face_landmarks.dat

    Both my script and the .dat file are in the same directory (desktop). Do you know how to make it work?

    • Adrian Rosebrock October 30, 2017 at 2:59 pm #

      Hi Chris, you must specify the proper path to the file. You also must make sure permissions for files are set.

      • Chris November 1, 2017 at 10:18 am #

        Hi Adrian, thanks for the reply, the error has been resolved.
        Thanks again for the great content!

  37. Salman November 22, 2017 at 12:37 pm #

    Thank you for this great article.
    I'm working on images of eyes (just the eyes, no face) to detect drowsiness. Is it possible to use dlib in the case where the eyes are cropped out of the face? Or do you have a better suggestion?

    Thank you

    • Adrian Rosebrock November 25, 2017 at 12:45 pm #

      Hey Salman — you would need to train your own custom shape predictor for just the eyes. The method outlined in this blog post requires the entire face to be detected, which in turn allows the eyes to be localized.

  38. Daniel Obeng December 6, 2017 at 12:50 pm #

    Hello Adrian, this works amazingly well. I’m currently using it for a project in college where we need to map an eye blink to the time within the video that it occurred. Do you think you can help me modify the code so that I can also log the time (i.e the time within the video) that the blink occurred? Thanks!

    • Daniel Obeng December 6, 2017 at 12:53 pm #

      Also, I was hoping you can show me how to modify the code so that it doesn’t show the video but just works in the background.

      • Adrian Rosebrock December 8, 2017 at 5:06 pm #

        Hey Daniel, can you elaborate more on what you mean by “time within the video”? Secondly, while I’m happy to help point you in the right direction and provide suggestions please keep in mind that I publish all tutorials here on PyImageSearch free of cost. I’m simply too busy to take on additional customizations. I hope you understand.
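For what it's worth, for a video file with a fixed frame rate, one common way to approximate the time within the video is to divide the running frame index by the FPS (which OpenCV exposes via the CAP_PROP_FPS property of a VideoCapture). A hypothetical helper along those lines, assuming the main loop maintains its own frame counter:

```python
def frame_timestamp(frame_index, fps):
    """Approximate time in seconds of a given frame within a video file.
    fps would come from the video itself, e.g. OpenCV's CAP_PROP_FPS."""
    return frame_index / float(fps)

# If a blink is registered on frame 450 of a 30 FPS video:
t = frame_timestamp(450, 30)
minutes, seconds = divmod(int(t), 60)
stamp = "%02d:%02d" % (minutes, seconds)  # "00:15"
```

For live webcam streams there is no fixed timeline, so wall-clock time (e.g. `time.time()` at the moment a blink is counted) is the more natural choice.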

  39. Narmi December 19, 2017 at 3:57 am #

    Sir, your explanation is easily understandable. Can this be implemented using C++ with OpenCV?

    • Adrian Rosebrock December 19, 2017 at 4:15 pm #

      You can use this method in any programming language provided you can localize the eye region and apply the EAR algorithm.

  40. Andrea December 20, 2017 at 4:55 pm #

    Hey Adrian, just a quick note to say thanks: we have used this as a base to build upon for a blink-based lie detection system for a project at university (we have credited you, of course!).
    We found that blink detection improves a lot if the EAR threshold is dynamically set to respond to the mean of recent EAR values. Also, getting some nice graphical output tracking the EAR and counting blinks really helped with algorithm tuning.
    Anyway, thanks for the super-readable code; I enjoyed playing around with it. 🙂
    Take care x

    • Adrian Rosebrock December 22, 2017 at 7:04 am #

      Congrats on the successful project, Andrea! The blink-based lie detection system sounds very interesting. Do you have a writeup of the report so I can learn more about it?
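A minimal sketch of the dynamic threshold idea Andrea describes, assuming the blink threshold is set to a fraction of the rolling mean EAR. The window size, 0.75 factor, and fallback value below are guesses that would need tuning, not values from the post or the paper:

```python
from collections import deque
from statistics import mean

class AdaptiveEARThreshold:
    """Track recent EAR values and derive a per-person blink threshold
    as a fraction of the rolling mean EAR."""

    def __init__(self, window=100, factor=0.75, fallback=0.2):
        self.history = deque(maxlen=window)  # recent EAR values
        self.factor = factor                 # fraction of the mean to blink below
        self.fallback = fallback             # static default until we have data

    def update(self, ear):
        self.history.append(ear)

    @property
    def threshold(self):
        if len(self.history) < 10:  # not enough data yet; use the static value
            return self.fallback
        return self.factor * mean(self.history)

thresh = AdaptiveEARThreshold()
for ear in [0.30] * 20:  # a person whose open-eye EAR sits around 0.30
    thresh.update(ear)
```

Note that closed-eye frames fed into the buffer will drag the mean down, so in practice you might only update the history when the EAR is above the current threshold, or use a median rather than the mean.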

  41. Ashish Sharma January 6, 2018 at 3:56 am #

    I am using windows
    after running this code i got an error
    detector = dlib.get_frontal_face_detector()
    AttributeError: 'module' object has no attribute 'get_frontal_face_detector'

    Please help

    • Adrian Rosebrock January 8, 2018 at 2:55 pm #

      That is odd…which version of dlib are you using?

      • ASHISH Sharma January 9, 2018 at 1:36 am #

        We are using dlib 18.17.100

        • Adrian Rosebrock January 10, 2018 at 1:04 pm #

          Thanks for sharing the version information. Unfortunately I’m not sure what the error would be in this case. I would suggest posting on the official dlib forums. Sorry I couldn’t be of more help here!

  42. Hadi January 7, 2018 at 11:25 am #

    Hi Adrian
    I need a .exe of the program. Can you give it to me?


  1. Detecting eye blinks with Python – Full-Stack Feed - April 24, 2017

    […] Learn how to detect blinks, count blinks, and recognize blinks in video streams using OpenCV and Python. Read more […]

  2. Drowsiness detection with OpenCV - PyImageSearch - May 8, 2017

    […] weeks ago I discussed how to detect eye blinks in video streams using facial […]
