Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

Today’s blog post is the long-awaited tutorial on real-time drowsiness detection on the Raspberry Pi!

Back in May I wrote a (laptop-based) drowsiness detector that could be used to detect if the driver of a motor vehicle was getting tired and potentially falling asleep at the wheel.

The driver drowsiness detector project was inspired by a conversation I had with my Uncle John, a long haul truck driver who has witnessed more than a few accidents due to fatigued drivers.

The post was really popular and a lot of readers got value out of it…

…but the method was not optimized for the Raspberry Pi!

Since then, readers have been requesting a follow-up blog post covering the optimizations necessary to run the drowsiness detector on the Raspberry Pi.

I caught up with my Uncle John a few weeks ago and asked him what he would think of a small computer that could be mounted inside his truck cab to help determine if he was getting tired at the wheel.

He wasn’t crazy about the idea of being monitored by a camera his entire work day (and I don’t necessarily blame him — I wouldn’t want to be monitored all the time either). But he did eventually concede that a device like this, ideally a less invasive one, would certainly help avoid accidents due to fatigued drivers.

To learn more about these facial landmark optimizations and how to run our drowsiness detector on the Raspberry Pi, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

Today’s tutorial is broken into four parts:

  1. Discussing the tradeoffs between Haar cascades and HOG + Linear SVM detectors.
  2. Examining the TrafficHAT used to create the alarm that will sound if a driver/user gets tired.
  3. Implementing dlib facial landmark optimizations so we can deploy our drowsiness detector to the Raspberry Pi.
  4. Viewing the results of our optimized driver drowsiness detection algorithm on the Raspberry Pi.

Before we get started I would highly encourage you to read through my previous tutorial on Drowsiness detection with OpenCV.

While I’ll be reviewing the code in its entirety here, you should still read the previous post as I discuss the actual Eye Aspect Ratio (EAR) algorithm in more detail.

The EAR algorithm is responsible for detecting driver drowsiness.

Haar cascades: less accurate, but faster than HOG

The major optimization we need to run our driver drowsiness detection algorithm on the Raspberry Pi is to swap out the default dlib HOG + Linear SVM face detector and replace it with OpenCV’s Haar cascade face detector.

While HOG + Linear SVM detectors tend to be significantly more accurate than Haar cascades, Haar cascades are also much faster than HOG + Linear SVM detection algorithms.

A complete review of how both HOG + Linear SVM and Haar cascades work is outside the scope of this blog post, but I would encourage you to:

  1. Read this post on Histogram of Oriented Gradients and Object Detection where I discuss the pros and cons of HOG + Linear SVM and Haar cascades.
  2. Work through the PyImageSearch Gurus course where I demonstrate how to implement your own custom HOG + Linear SVM object detectors from scratch.

The Raspberry Pi TrafficHAT

In our previous tutorial on drowsiness detection I used my laptop to execute driver drowsiness detection code — this enabled me to:

  1. Ensure the drowsiness detection algorithm would run in real-time due to the faster hardware.
  2. Use the laptop speaker to sound an alarm by playing a .WAV file.

The Raspberry Pi does not have a speaker so we cannot play any loud alarms to wake up the driver…

…but the Raspberry Pi is a highly versatile piece of hardware that includes a large array of hardware add-ons.

One of my favorites is the TrafficHAT:

Figure 1: The Raspberry Pi 3 with TrafficHAT board containing button, buzzer, and lights.

The TrafficHAT includes:

  • Three LED lights
  • A button
  • A loud buzzer (which we’ll be using as our alarm)

This kit is an excellent starting point for getting some exposure to GPIO. If you’re just getting started with GPIO as well, be sure to take a look at the TrafficHAT.

You don’t have to use the TrafficHAT of course; any other piece of hardware that emits a loud noise will do.

Another approach I like is to plug a 3.5mm audio cable into the audio jack and then set up text-to-speech using espeak (a package available via apt-get). Using this method you could have your Pi say “WAKEUP WAKEUP!” when you’re drowsy. I’ll leave this as an exercise for you to implement if you so choose.

However, for the sake of this tutorial I will be using the TrafficHAT. You can buy your own TrafficHAT here.

From there you can install the Python packages required to use the TrafficHAT via pip. But first, ensure you’re in the appropriate virtual environment on your Pi. I have a thorough explanation of virtual environments in this previous post.

Here are the installation steps upon opening a terminal or SSH connection:
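The exact commands depend on your setup; assuming a virtualenvwrapper environment named cv (the one referenced later in this post), the installation looks something like this:

```shell
# activate the virtual environment first (the name "cv" comes from
# this post's earlier setup -- substitute your own environment name)
workon cv

# install the GPIO packages used by the TrafficHAT
pip install RPi.GPIO
pip install gpiozero
```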

From there, if you want to check that everything is installed properly in your virtual environment you may run the Python interpreter directly:
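The original check simply imported each package at the Python prompt; a small variant of that idea (using importlib so it reports missing packages instead of raising) is:

```python
# report whether each required package resolves in the current environment
import importlib.util

packages = ["numpy", "dlib", "cv2", "imutils", "gpiozero", "RPi.GPIO"]
results = {}
for name in packages:
    try:
        results[name] = importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # the parent package (e.g. RPi) is missing entirely
        results[name] = False

for name, ok in results.items():
    print("{}: {}".format(name, "OK" if ok else "MISSING"))
```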

Note: I’ve made the assumption that the virtual environment you are using already has the above packages installed in it. My cv virtual environment has NumPy, dlib, OpenCV, and imutils already installed, so by using pip to install the RPi.GPIO and gpiozero packages, I’m able to access all six libraries from within the same environment. You may pip install each of the packages (except for OpenCV). To install an optimized OpenCV on your Raspberry Pi, just follow this previous post. If you are having trouble getting dlib installed, please follow this guide.

The driver drowsiness detection algorithm is identical to the one we implemented in our previous tutorial.

To start, we will apply OpenCV’s Haar cascades to detect the face in an image, which boils down to finding the bounding box (x, y)-coordinates of the face in the frame.

Given the bounding box of the face we can apply dlib’s facial landmark predictor to obtain 68 salient points used to localize the eyes, eyebrows, nose, mouth, and jawline:

Figure 2: Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset.

As I discuss in this tutorial, dlib’s 68 facial landmarks are indexable which enables us to extract the various facial structures using simple Python array slices.

Given the facial landmarks associated with an eye, we can apply the Eye Aspect Ratio (EAR) algorithm, which was introduced by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink Detection Using Facial Landmarks:

Figure 3: Top-left: A visualization of eye landmarks when the eye is open. Top-right: Eye landmarks when the eye is closed. Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio indicates a blink (Image credit: Figure 1 of Soukupová and Čech).

On the top-left we have an eye that is fully open and the eye facial landmarks plotted. Then on the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time. As we can see, the eye aspect ratio is constant (indicating that the eye is open), then rapidly drops to close to zero, then increases again, indicating a blink has taken place.

You can read more about the blink detection algorithm and the eye aspect ratio in this post dedicated to blink detection.

In our drowsiness detector case, we’ll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the driver/user has closed their eyes.

Once implemented, our algorithm will start by localizing the facial landmarks and extracting the eye regions:

Figure 4: Me with my eyes open — I’m not drowsy, so the Eye Aspect Ratio (EAR) is high.

We can then monitor the eye aspect ratio to determine if the eyes are closed:

Figure 5: The EAR is low because my eyes are closed — I’m getting drowsy.

And then finally raising an alarm if the eye aspect ratio is below a pre-defined threshold for a sufficiently long amount of time (indicating that the driver/user is tired):

Figure 6: My EAR has been below the threshold long enough for the drowsiness alarm to come on.

In the next section, we’ll implement the optimized drowsiness detection algorithm detailed above on the Raspberry Pi using OpenCV, dlib, and Python.

A real-time drowsiness detector on the Raspberry Pi with OpenCV and dlib

Open up a new file in your favorite editor or IDE and name it pi_drowsiness_detection.py . From there, let’s get started coding:
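The line numbers quoted throughout this section refer to my original file, so treat the listings below as close approximations rather than exact reproductions. The imports look something like this sketch:

```python
# import the necessary packages (sketch -- ordering approximates the
# original file's Lines 1-9)
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
```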

Lines 1-9 handle our imports — make sure you have each of these installed in your virtual environment.

From there let’s define a distance function:
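A minimal sketch of that helper, using NumPy:

```python
import numpy as np

def euclidean_dist(ptA, ptB):
    # compute and return the euclidean distance between the two
    # (x, y)-points, i.e. the straight-line distance between them
    return np.linalg.norm(ptA - ptB)
```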

On Lines 11-14 we define a convenience function for calculating the Euclidean distance using NumPy. Euclidean distance is arguably the most well known and most used distance metric; it is normally described as the distance between two points “as the crow flies”.

Now let’s define our Eye Aspect Ratio (EAR) function which is used to compute the ratio of distances between the vertical eye landmarks and the distances between the horizontal eye landmarks:
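Following Soukupová and Čech’s formulation, the function computes EAR = (||p2 − p6|| + ||p3 − p5||) / (2 ||p1 − p4||); a sketch:

```python
import numpy as np

def eye_aspect_ratio(eye):
    # eye is a 6x2 array of (x, y) eye landmark coordinates, ordered
    # p1..p6 as in Soukupová and Čech's paper
    # compute the distances between the two sets of vertical landmarks
    A = np.linalg.norm(eye[1] - eye[5])
    B = np.linalg.norm(eye[2] - eye[4])
    # compute the distance between the horizontal landmarks
    C = np.linalg.norm(eye[0] - eye[3])
    # compute and return the eye aspect ratio
    return (A + B) / (2.0 * C)
```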

The return value will be approximately constant when the eye is open and will decrease towards zero during a blink. If the eye is closed, the eye aspect ratio will remain constant at a much smaller value.

From there, we need to parse our command line arguments:
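A sketch of the argument parsing (the short flag names are my assumption; this sketch also passes an example argv so it runs standalone, whereas the real script calls parse_args() with no arguments):

```python
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to the Haar cascade XML file for face detection")
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to dlib's facial landmark predictor file")
ap.add_argument("-a", "--alarm", type=int, default=0,
    help="boolean used to indicate if the TrafficHAT buzzer should be used")

# the real script calls ap.parse_args() with no arguments; an example
# argv is supplied here so the sketch is self-contained
args = vars(ap.parse_args([
    "--cascade", "haarcascade_frontalface_default.xml",
    "--shape-predictor", "shape_predictor_68_face_landmarks.dat",
]))
```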

We have defined two required arguments and one optional one on Lines 33-40:

  • --cascade : The path to the Haar cascade XML file used for face detection.
  • --shape-predictor : The path to the dlib facial landmark predictor file.
  • --alarm : A boolean to indicate if the TrafficHat buzzer should be used when drowsiness is detected.

Both the --cascade  and --shape-predictor  files are available in the “Downloads” section at the end of the post.

If the --alarm  flag is set, we’ll set up the TrafficHat:
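A sketch of the conditional import (gpiozero ships a TrafficHat class; the stand-in args dictionary replaces the parsed command line arguments):

```python
args = {"alarm": 0}  # stand-in for the parsed command line arguments

# if the alarm flag is set, import and initialize the TrafficHAT
# (gpiozero's TrafficHat class requires a Raspberry Pi with the HAT attached)
if args["alarm"] > 0:
    from gpiozero import TrafficHat
    th = TrafficHat()
    print("[INFO] using TrafficHat alarm...")
```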

As shown on Lines 43-46, if the argument supplied is greater than 0, we’ll import the TrafficHat class to handle our buzzer alarm.

Let’s also define a set of important configuration variables:
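A sketch with values I typically reach for (treat the exact numbers as assumptions and tune them for your camera and mounting position):

```python
# eye aspect ratio threshold indicating closed eyes, and the number of
# consecutive frames the EAR must stay below it before firing the alarm
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 16

# frame counter and a boolean used to indicate if the alarm is going off
COUNTER = 0
ALARM_ON = False
```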

The two constants on Lines 52 and 53 define the EAR threshold and the number of consecutive frames the eyes must be closed for the driver to be considered drowsy, respectively.

Then we initialize the frame counter and a boolean for the alarm (Lines 57 and 58).

From there we’ll load our Haar cascade and facial landmark predictor files:
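A sketch of the two initializations (assuming the args dictionary from the argument parsing step):

```python
import cv2
import dlib

# load OpenCV's Haar cascade for face detection (faster, but less
# accurate, than dlib's built-in HOG + Linear SVM detector), then
# create dlib's facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = cv2.CascadeClassifier(args["cascade"])
predictor = dlib.shape_predictor(args["shape_predictor"])
```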

Line 64 differs from the face detector initialization from our previous post on drowsiness detection — here we use a faster detection algorithm (Haar cascades) while sacrificing accuracy. Haar cascades are faster than dlib’s face detector (which is HOG + Linear SVM-based) making it a great choice for the Raspberry Pi.

There are no changes to Line 65 where we load up dlib’s shape_predictor  while providing the path to the file.

Next, we’ll initialize the indexes of the facial landmarks for each eye:
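imutils’ face_utils.FACIAL_LANDMARKS_IDXS dictionary supplies these slices by name; hardcoding the well-known 68-point indexes gives the same result:

```python
# in the 68-point iBUG 300-W layout, the right eye spans points 36-41
# and the left eye spans points 42-47 (end indexes are exclusive);
# face_utils.FACIAL_LANDMARKS_IDXS["left_eye"] etc. return these tuples
(lStart, lEnd) = (42, 48)  # left eye
(rStart, rEnd) = (36, 42)  # right eye
```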

Here we supply array slice indexes in order to extract the eye regions from the set of facial landmarks.

We’re now ready to start our video stream thread:
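A sketch of the stream setup (the two VideoStream calls correspond to Lines 74 and 75 referenced below):

```python
from imutils.video import VideoStream
import time

# start the video stream thread
print("[INFO] starting video stream thread...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()  # uncomment for the PiCamera
time.sleep(1.0)
```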

If you are using the PiCamera module, be sure to comment out Line 74 and uncomment Line 75 to switch the video stream to the Raspberry Pi camera. Otherwise if you are using a USB camera, you can leave this unchanged.

We sleep for one second so the camera sensor can warm up.

From there let’s loop over the frames from the video stream:
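A sketch of the top of the loop (vs and detector come from the earlier steps; the detectMultiScale parameters are typical starting values, so treat them as tunable assumptions):

```python
import imutils
import cv2

# loop over frames from the video stream
while True:
    # grab the frame, resize it, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame via the Haar cascade
    rects = detector.detectMultiScale(gray, scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)
```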

The beginning of this loop should look familiar if you’ve read the previous post. We read a frame, resize it (for efficiency), and convert it to grayscale (Lines 83-85).

Then we detect faces in the grayscale image with our detector on Lines 88-90.

Now let’s loop over the detections:
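Continuing inside the frame loop, a sketch of the conversion from Haar bounding box to dlib rectangle and on to the landmarks:

```python
# loop over the face detections (this runs inside the while loop above)
for (x, y, w, h) in rects:
    # construct a dlib rectangle object from the Haar cascade
    # bounding box
    rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))

    # determine the facial landmarks for the face region, then
    # convert the landmark (x, y)-coordinates to a NumPy array
    shape = predictor(gray, rect)
    shape = face_utils.shape_to_np(shape)
```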

Line 93 begins a lengthy for-loop which is broken down into several code blocks here. First we extract the (x, y)-coordinates and the width + height of each detection in rects. Then, on Lines 96 and 97, we construct a dlib rectangle object using the information extracted from the Haar cascade bounding box.

From there, we determine the facial landmarks for the face region (Line 102) and convert the facial landmark (x, y)-coordinates to a NumPy array.

Given our NumPy array, shape , we can extract each eye’s coordinates and compute the EAR:
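Factored out of the loop as a self-contained sketch with synthetic landmarks (in the script, shape comes from dlib and eye_aspect_ratio is the function defined earlier):

```python
import numpy as np

def eye_aspect_ratio(eye):  # as defined earlier in the post
    A = np.linalg.norm(eye[1] - eye[5])
    B = np.linalg.norm(eye[2] - eye[4])
    C = np.linalg.norm(eye[0] - eye[3])
    return (A + B) / (2.0 * C)

# stand-in 68x2 landmark array; in the script this comes from dlib
shape = np.zeros((68, 2), dtype=float)
open_eye = np.array([(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)], dtype=float)
(lStart, lEnd), (rStart, rEnd) = (42, 48), (36, 42)
shape[lStart:lEnd] = open_eye
shape[rStart:rEnd] = open_eye + (10, 0)  # shift the second eye over

# extract each eye's coordinates, then compute the EAR for both eyes
leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
leftEAR = eye_aspect_ratio(leftEye)
rightEAR = eye_aspect_ratio(rightEye)

# average the two eye aspect ratios for a more stable estimate
ear = (leftEAR + rightEAR) / 2.0
```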

Utilizing the indexes of the eye landmarks, we can slice the shape array to obtain the (x, y)-coordinates of each eye (Lines 107 and 108).

We then calculate the EAR for each eye on Lines 109 and 110.

Soukupová and Čech recommend averaging both eye aspect ratios together to obtain a better estimation (Line 113).

This next block is strictly for visualization purposes:
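A sketch of the drawing calls (leftEye, rightEye, and frame come from the surrounding loop):

```python
# still inside the face-detection loop: compute the convex hull of each
# eye region and draw it on the frame for visualization/debugging
leftEyeHull = cv2.convexHull(leftEye)
rightEyeHull = cv2.convexHull(rightEye)
cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
```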

We can visualize each of the eye regions on our frame by using cv2.drawContours  and supplying the cv2.convexHull  calculation of each eye (Lines 117-120). These few lines are great for debugging our script but aren’t necessary if you are making an embedded product with no screen.

From there, we will check our Eye Aspect Ratio ( ear ) and frame counter ( COUNTER ) to see if the eyes are closed, while sounding the alarm to alert the drowsy driver if needed:
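The gist of the check, refactored here into a pure function for illustration (the actual script mutates the global COUNTER and ALARM_ON, and triggers the TrafficHAT buzzer where fire_alarm is set):

```python
def update_drowsiness(ear, counter, alarm_on,
                      ear_thresh=0.3, consec_frames=16):
    # per-frame drowsiness check: returns the updated counter, the
    # alarm state, and whether the alarm should fire on this frame
    fire_alarm = False
    if ear < ear_thresh:
        # eyes are closed this frame, so increment the counter
        counter += 1
        # eyes closed for enough consecutive frames: sound the alarm
        # (only once, when it first switches on)
        if counter >= consec_frames and not alarm_on:
            alarm_on = True
            fire_alarm = True  # in the script: buzz the TrafficHAT here
    else:
        # eyes are open: reset the counter and switch the alarm off
        counter = 0
        alarm_on = False
    return counter, alarm_on, fire_alarm
```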

On Line 124 we check the ear  against the EYE_AR_THRESH  — if it is less than the threshold (eyes are closed), we increment our COUNTER  (Line 125) and subsequently check it to see if the eyes have been closed for enough consecutive frames to sound the alarm (Line 129).

If the alarm isn’t on, we turn it on for a few seconds to wake up the drowsy driver. This is accomplished on Lines 136-138.

Optionally (if you’re implementing this code with a screen), you can draw the alarm on the frame as I have done on Lines 141 and 142.

That brings us to the case where the ear  wasn’t less than the EYE_AR_THRESH  — in this case we reset our COUNTER  to 0 and make sure our alarm is turned off (Lines 146-148).

We’re almost done — in our last code block we’ll draw the EAR on the frame , display the frame , and do some cleanup:
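A sketch of the closing block (ear, frame, and vs come from the surrounding script; the break is only valid inside the full script’s while loop):

```python
# still inside the frame loop: draw the computed EAR on the frame
# for debugging and threshold tuning
cv2.putText(frame, "EAR: {:.3f}".format(ear), (300, 30),
    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

# show the frame and grab any keypress
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF

# if the `q` key was pressed, break from the loop
if key == ord("q"):
    break  # exits the while loop in the full script

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```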

If you’re integrating with a screen or debugging you may wish to display the computed eye aspect ratio on the frame as I have done on Lines 153 and 154. The frame is displayed to the actual screen on Lines 157 and 158.

The program is stopped when the ‘q’ key is pressed on the keyboard.

You might be thinking, “I won’t have a keyboard hooked up in my car!” Well, if you’re debugging using your webcam and your computer at your desk, you certainly do. If you want to use the button on the TrafficHAT to turn on/off the drowsiness detection algorithm, that is perfectly fine — the first reader to post the solution in the comments to using the button to turn on and off the drowsiness detector with the Pi deserves an ice cold craft beer or a hot artisan coffee.

Finally, we clean up by closing any open windows and stopping the video stream (Lines 165 and 166).

Drowsiness detection results

To run this program on your own Raspberry Pi, be sure to use the “Downloads” section at the bottom of this post to grab the source code, face detection Haar cascade, and dlib facial landmark detector.

I didn’t have enough time to wire everything up in my car and record the screen while driving as I did previously. It would have been quite challenging to record the Raspberry Pi screen while driving anyway.

Instead, I’ll demonstrate at my desk — you can then take this implementation and use it inside your own car for drowsiness detection as you see fit.

You can see an image of my setup below:

Figure 7: My desk setup for coding, testing, and debugging the Raspberry Pi Drowsiness Detector.

To run the program, simply execute the following command:
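Assuming the standard file names (OpenCV’s frontal face cascade and dlib’s 68-point predictor, both included in the downloads), the command looks like:

```shell
python pi_drowsiness_detection.py \
	--cascade haarcascade_frontalface_default.xml \
	--shape-predictor shape_predictor_68_face_landmarks.dat \
	--alarm 1
```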

I have included a video of myself demoing the real-time drowsiness detector on the Raspberry Pi below:

Our Raspberry Pi 3 is able to accurately determine if I’m getting “drowsy”. We were able to accomplish this using our optimized code.

Disclaimer: I do not advise that you rely upon the hobbyist Raspberry Pi and this code to keep you awake at the wheel if you are in fact drowsy while driving. The best thing to do is to pull over and rest; walk around; or have a coffee/soda. Have fun with this project and show it off to your friends, but do not risk your life or that of others.

How do I run this program automatically when the Pi boots up?

This is a common question I receive. I have a blog post covering the answer here: Running a Python + OpenCV script on reboot.


Summary

In today’s blog post, we learned how to optimize facial landmark detection on the Raspberry Pi by swapping out a HOG + Linear SVM-based face detector for a Haar cascade.

Haar cascades, while less accurate, are significantly faster than HOG + Linear SVM detectors.

Given the detections from the Haar cascade we were able to construct a dlib.rectangle object corresponding to the bounding box (x, y)-coordinates in the image. This object was fed into dlib’s facial landmark predictor, which in turn gave us the set of localized facial landmarks on the face. From there, we applied the same algorithm we used in our previous post to detect drowsiness in a video stream.

I hope you enjoyed this tutorial!

To be notified when new blog posts are published here on the PyImageSearch blog, be sure to enter your email address in the form below — I’ll be sure to notify you when new content is released!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


234 Responses to Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

  1. Mike October 23, 2017 at 11:07 am #

    Great article! Do you plan an article (or series) on low light environment face/eye blink detection. Followed your guide recently, but really excited to know how to raise the detection quality on low-light environments/low quality video stream.

    • Adrian Rosebrock October 23, 2017 at 12:23 pm #

      It’s always easier to write code for (reliable) computer vision algorithms for higher quality video streams than try to write code that compensates for a poor environment. If you’re running into situations where you are considering writing code for a poor environment I would encourage you to first examine the environment and see if you can update to make it higher quality.

      • Mike October 24, 2017 at 7:57 am #

        Due to business requirements I can’t force our clients to shot themselves only in good-to-process conditions. They could be using our software anywhere they want, so… I’d like to read of any approaches available to solve this problem. Just as an idea for you for future publications.

        • Adrian Rosebrock October 24, 2017 at 10:33 am #

          Other readers have suggested infrared cameras and infrared lights. I would expect that solution to solve the problem when it is dark outside. There are other “poor conditions” such as reflection and glare which you would need to overcome too. This blog post will get you started but it isn’t intended to be a solution that you can sell.

          • kaisar khatak November 5, 2017 at 9:21 pm #

            I would suggest taking a look at iPhone X, Intel Realsense F200/R200, Logitech C922 and the Structure sensor (structure.io) to name a few. Also take a look at how Google Tango approaches depth for AR. I personally think everything (apps and sensors) are moving to 3D now…

      • zz November 7, 2017 at 8:05 am #

        hi,I am chinese,I like your essay.

        Do you know TrafficHAT,Buy link invalidation.
        Can you use raspberry pi to write an article about face recognition using tensorflow, opencv, Dlib?

      • sudhir kumar April 3, 2019 at 4:00 am #

        sir, im getting problem with — cascade path… plz resolve issue asap.

        • Adrian Rosebrock April 4, 2019 at 1:25 pm #

          It sounds like you’re struggling with command line arguments. Make sure you read this post first.

    • Petri K. October 24, 2017 at 1:47 am #

      Raspberry Pi has “night vision” camera boards. They have IR LED spotlights and some of the cameras come without IR filter. Your eyes are not able to see the infrared light, but the camera is. Add light to low light and create higher quality video stream…

      There is also IR webcams available and it is possible to use infrared light with some of the standard non IR webcam. Most of the webcams have IR blocking filter, but some of them doesn’t filter properly. (And it is possible to remove the filters in some cases. Use google for this.)

      Maybe this could help you?

      (The articles are excellent! Thank you Adrian!)

  2. fariborz October 23, 2017 at 11:31 am #


    That is great

    now this is what i need

    very very thank you Adrian

    • Adrian Rosebrock October 23, 2017 at 12:21 pm #

      Thanks Fariborz, I’m glad you enjoyed the tutorial 🙂

  3. Some Guy October 23, 2017 at 11:39 am #

    Hi Dr. Rosebrock, great article as usual! Thank you for the good consistent content. I’m learning a lot 🙂

    • Adrian Rosebrock October 23, 2017 at 12:21 pm #

      Thank you 🙂

  4. rohit October 23, 2017 at 9:37 pm #

    Hi Adrian,
    Thanks for posting this.
    In this post from May ’17 about running dlib on a raspberry pi, you mention that a Raspberry Pi3 is not fast enough to do dlib’s face landmark detection in realtime.


    Since the drowsiness detection also uses dlib’s face landmarks, does it have similar performance issues as you mention in your older post? Or have you figured out some optimizations for RPi3 to improve performance?


    • Adrian Rosebrock October 24, 2017 at 7:17 am #

      Hi Rohit — please see the section entitled “Haar cascades: less accurate, but faster than HOG”. This is where our big speedup comes from.

  5. pochao October 23, 2017 at 10:14 pm #

    Logitech webcam is better than Pi camera?

    • Adrian Rosebrock October 24, 2017 at 7:16 am #

      It depends on how you define “better”. What is your use case? How do you intend on deploying it? Both cameras can be good for different reasons. The Raspberry Pi camera module is cheaper but the Logitech C920 is technically “better” for many uses. It is nice being able to connect the camera directly to the Pi though.

  6. arash allahari October 24, 2017 at 1:40 am #

    oh Come on man i just wrote this idea two weeks ago in C++
    obviously ideas could go beyond pacific through continents

    but good news for me is i optimized it with an awesome idea and now i can process drowsiness with almost 30 frame per second from 1 megapixel image stream in Raspberry Pi

    I beg u Dr. Rosebrock do not publish such ideas, image processing fans and researchers will get it with just a hint

    • Adrian Rosebrock October 24, 2017 at 7:14 am #

Hey Arash — I actually wrote the original drowsiness detection tutorial way back in May. Secondly, I tend to write blog posts 2-3 weeks ahead of time before they are actually published. I’m not sure what your point is — you would prefer I not publish tutorials?

    • jamhan November 3, 2017 at 11:48 pm #

      Where can i see your blogs?

    • HanSol February 11, 2018 at 1:07 pm #

      Hi Arash,

      Can you share the source codes ? is this c++ using dlib in Rapberry Pi and 30 fps ? is it around 1280×960 . resolution ?

      I would love to discuss you if you have contact address


  7. Melrick Nicolas October 24, 2017 at 3:25 am #

    How to download updated imutils?

    • Adrian Rosebrock October 24, 2017 at 7:08 am #

      I would suggest using “pip”:

      $ pip install --upgrade imutils

      If you are using a Python virtual environment please make sure you activate it before installing/upgrading.

  8. Marcus Souza October 24, 2017 at 11:07 am #

    Hey Adrian,

    Thanks for sharing!

    As always a great job !!
    I tested with webcom and verified a great performance in the identification of drowsiness, with a processing load of 70%. Perhaps there is something that can be improved to reduce PLOAD, perhaps by altering Haarcascade, perhaps by using the one haarcascade_eye.xml or similar, targeting only the eye area. I wanted you to share your opinion with us. Can you comment on the subject?

    Thanks for all help, Adrian

    • Adrian Rosebrock October 25, 2017 at 8:23 am #

      In order to apply drowsiness detection we need to detect the entire face — this enables us to localize the eyes. We could use a Haar cascade to detect eyes but the problem is that we need to train a facial landmark detector for just the eyes. That wouldn’t do much to improve processing speed.

  9. Raghu October 24, 2017 at 11:32 am #

    Hi Adrian,

    I’m impressed with the tutorial!

    Please let me know what Operating System used in the Raspberry Pi 3.

    • Adrian Rosebrock October 24, 2017 at 2:50 pm #

      Hi Raghu — Raspbian is the official operating system and the one used. You can download it here.

  10. Marcus Souza October 24, 2017 at 1:29 pm #

    Hey Adrian,

    First thank you for sharing this great edition !!

    Doing some tests I found the following error in the code, when I used the “PICamera”, I got the following TypeError:

    vs = VideoStream(usePicamera=True).start()

    vs = VideoStream(usePiCamera=True).start()

    This corrects the following failure:

    vs = VideoStream(usePicamera=True).start()
    TYpeError: __init__() got an unexpected keyword argument ‘usePicamera’


    • Adrian Rosebrock October 24, 2017 at 2:34 pm #

Thank you Marcus — you are correct. I’ve updated the post, and I’ll update the download soon. Thanks for bringing this to my attention.

      • Adrian Rosebrock October 31, 2017 at 8:54 am #

        I have now updated the code download as well. Thanks again!

  11. Yoni October 25, 2017 at 6:59 am #


    So there’s something which bothers me here:

    Your original article used something like HOG+SVM and a sliding window for detection.
    I got to say, that face detector that you have provided does work most of the time (~75%).
    However, doesn’t RCNN (or faster RCNN etc,whatever you get the point) just work better than pre-deep learning techniques? I mean,that’s what Justin from stanford claims (https://www.youtube.com/watch?v=nDPWywWRIRo&index=11&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&t=1950s).

    Is that really the case? If so, when should I NOT prefer RCNN & Why?

    • Adrian Rosebrock October 25, 2017 at 8:21 am #

      RCNN-based methods will be more accurate than both Haar cascades and/or HOG + Linear SVM (provided the network is properly trained and deployed). The problem can be speed — we need to achieve that balance on the Raspberry Pi.

      • Yoni October 26, 2017 at 10:49 am #

        Cool, thx for the answer.

        Just another thing: Does RCNN require more training data as well?
        I mean, it requires a bounding box for each object for each picture.
        HOG+SVM requires negative and positive examples, and for the false positives, we need to manually tell the learning algo that those are false positives.

        So,in your experience, which learning algo requires more training data to work decently?

        • Adrian Rosebrock October 31, 2017 at 8:13 am #

          It will vary on a dataset to dataset basis, but in general, you can assume that your CNN will need more example images.

  12. Muhammad Zohair October 26, 2017 at 8:23 am #

    Hello Adrian,
    Just started Image processing and sounds like fun but really tired of installing libraries, I have been “setting” up my pi for about a week now.
    Stuck on pip install scipy.
    running setup.py bdist_wheel for scipy … takes forever.
    Any tips?


    • Adrian Rosebrock October 26, 2017 at 11:27 am #

      Hi Muhammad — yes setting up the Pi can be quite frustrating. For some of the PIP installs you must be patient and let the Pi finish. If you’re interested in a pre-configured and pre-installed Raspbian image, it comes with the Quickstart and Hardcopy Bundles of my book, Practical Python and OpenCV + Case Studies.

  13. majid azimi October 26, 2017 at 6:42 pm #

    Hi Adrian,

    I think it is now time to use cnn based algorithms for face detection part. Is it slow?! not anymore. You can make an awesome binarization model method tutorial in your website which face detector part would be more accurate and faster. Let me know if you need help.

    Best and Greeting from Venice ICCV17,

    • Adrian Rosebrock October 27, 2017 at 11:01 am #

      Hi Majid — thanks for the suggestion. Enjoy your conference!

    • Manus S. May 20, 2018 at 2:09 pm #

      Hi Majid,

      I am a university student (not in computer field) and I have interest in face detection with many methods but I have a less information about cnn-based methods.Would you mind to show me the name of the paper about cnn-based for face detection in ICCV17 (or maybe not in that conference) or relate paper in this topic.

  14. Suganya Robert October 30, 2017 at 12:36 am #

    Hi Adrian

    Recently I came to know about thin client. Can you please tell me the difference between thin client and a Pi with a SD card. Is there any additional memory support?. Is it possible to connect a thin client with a portable display?(7”). Please reply.

    • Adrian Rosebrock October 30, 2017 at 1:45 pm #

      Hi Suganya, see this information about thin clients. Basically thin clients rely on a server for storage and applications. You don’t store or process much locally on a thin client. A Raspberry Pi is not a thin client, but I suppose you could make it into one. Raspberry Pis (at least the Raspbian OS), allow for processing and storage on the device — it’s a fully functional small computer. Yes, you can attach a display to a thin client.

  15. Fred Laganiere November 6, 2017 at 4:34 pm #

    Hello Adrian,

    I’m a high school student and I would like to reproduce your project for my science class and try some variables of my own. I wonder what camera and other equipments did you use for this experiment. Would it be possible to specify?

    Thank you in advance.

    If you want, I can share with you the results of my experiment at the end of my project.

    best regards


    • Adrian Rosebrock November 9, 2017 at 7:15 am #

      Hi Fred, thanks for the comment. It’s great to hear you are interested in computer vision! I was in high school as well when I first got into image processing.

      The camera for this tutorial doesn’t matter a whole lot. I like the Raspberry Pi camera module but it might be easier for you to use the Logitech C920 which is plug-and-play compatible with the Raspberry Pi.

      For this specific blog post I used the Logitech C920.

      • Fred Laganiere November 18, 2017 at 5:50 pm #

        Thank you so much,

        I’ll give it a try and let you know how far I can get.



      • Fred Laganiere December 2, 2017 at 3:38 pm #

        Hi Adrian,

        in which folded should I extract the zip file?

        Thank you for you help

        I think I got everything else ready now for my testing.

        Thank you so much


        • Adrian Rosebrock December 5, 2017 at 7:51 am #

          It doesn’t matter where you download and extract the .zip file. Extract it, change directory into it, and execute the script.

  16. Hien November 10, 2017 at 11:50 am #

    Hi Adrian, i run the code, but its very slowly. What is the problem ?

    • Adrian Rosebrock November 13, 2017 at 2:10 pm #

      Hi Hien — what type of system are you executing the code on? What are the specs of the machine?

  17. Angelo November 16, 2017 at 9:35 pm #

    Hi Adrian. If I use a night vision cam, do I need to change the code?

    • Adrian Rosebrock November 18, 2017 at 8:17 am #

      You might have to. I would verify that faces can still be detected and the facial landmarks localized when switching over to the night vision cam.

  18. Liz November 17, 2017 at 6:14 am #

    Hello Adrian. I am a student and want to make this my project. The TrafficHat is not available, so I'm planning on using the 3.5mm audio jack for playing the alarm. I'm really a newbie in image processing. The part of the code for replacing the alarm really confuses me. Can you help me out in replacing the code instead of using the TrafficHat? Thank you.

    • Adrian Rosebrock November 18, 2017 at 8:13 am #

      Hi Liz — congrats on working on your project, that’s fantastic. I haven’t used the audio jack or associated audio libraries on a Raspberry Pi so unfortunately I can’t give any direct advice. But in general you’ll need to remove all TrafficHat imports and then play your audio file on Lines 136-138.

      If you’re new to computer vision and OpenCV I would suggest you work through Practical Python and OpenCV. I created this book to help beginners and it would certainly help you get quickly up to speed and complete your project.
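      As a rough sketch of the swap described above (this is not code from the post; `aplay`, which ships with Raspbian for WAV playback, and the `sound_alarm` helper name are assumptions):

```python
import subprocess

def sound_alarm(wav_path, dry_run=False):
    # Build the playback command; "aplay" ships with Raspbian and plays WAV files.
    cmd = ["aplay", wav_path]
    if dry_run:
        # Return the command instead of playing it (useful for testing off the Pi).
        return cmd
    # Popen rather than run() so the detection loop is not blocked during playback.
    return subprocess.Popen(cmd)
```

      Calling `sound_alarm("alarm.wav")` where the TrafficHat buzzer was triggered would play the file through whatever audio output the Pi is configured to use.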

  19. yorulmaz December 4, 2017 at 4:59 pm #

    Hi, Dr. Rosebrock. Your work is very good and thank you for sharing. I just started with the Raspberry Pi. I installed dlib and OpenCV and ran the code on the Raspberry Pi. How can I run this project as soon as the Raspberry Pi boots? And how can I add the .xml and .dat files to the code? Thanks in advance

    • Adrian Rosebrock December 5, 2017 at 7:29 am #

      Hello, thanks for the comment. Can you be a bit more specific when you say “run the project when Raspberry is opened”? Are you referring to running the project on reboot? Secondly, I’m not sure what you mean by “add .xml and .dat files to code”? You are trying to hardcode the paths to the files in the code?

  20. yorulmaz December 5, 2017 at 8:01 am #

    Thank you, Dr. Your reply made me very happy.
    Yes, I want to run the project on reboot. I would also like to hardcode the paths of the .xml and .dat files. Finally, I use a buzzer instead of the TrafficHat and I did not get the sound output. My goal is just to learn something … thanks …

    • Adrian Rosebrock December 8, 2017 at 5:20 pm #

      If you are using a buzzer you should read up on GPIO and the Raspberry Pi. You should also consult the manual/documentation for your particular buzzer. You can hardcode the paths to the XML file if you so wish. Just create a variable that points to the paths. Or you can execute the script at boot and include the full paths to the XML files as command line arguments. Either method will work. For more information on running a script on reboot, take a look at this blog post.
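      As a minimal sketch of the hardcoding approach (the dictionary keys mirror the script's argument names; the paths below are placeholders, not the actual locations on any particular Pi):

```python
# Hardcoded model paths replacing the command line arguments.
# The paths below are examples only -- adjust them to wherever you
# extracted the download on your own Pi.
args = {
    "cascade": "/home/pi/pi-drowsiness-detection/haarcascade_frontalface_default.xml",
    "shape_predictor": "/home/pi/pi-drowsiness-detection/shape_predictor_68_face_landmarks.dat",
    "alarm": 1,  # non-zero enables the alarm branch
}

# The rest of the script can then read args["cascade"], args["shape_predictor"],
# and args["alarm"] exactly as it did when argparse supplied them.
```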

  21. yorulmaz December 9, 2017 at 3:38 am #

    Dr. Rosebrock, thank you so much. I got the project running. Your article was very useful.
    (For the buzzer: buzzer + pin = Raspberry Pi pin 29, buzzer - pin = Raspberry Pi pin 25 GND.) I can send a video of it working.

    • Swathi March 11, 2018 at 3:14 pm #

      Yorulmaz can you please send me the code with this buzzer implementation.

      • SADASIVA April 4, 2019 at 10:15 am #

        Swathi can you please send me the buzzer implemented code.

  22. kaisar khatak December 24, 2017 at 11:03 pm #

    Would testing for a yawn follow a similar approach??? Thanks.

    • Adrian Rosebrock December 26, 2017 at 4:03 pm #

      Yes, monitoring the aspect ratio of the mouth would be a reasonable method to detect a yawn.
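      As a rough illustration (not code from the post), a mouth aspect ratio can be defined by analogy with the eye aspect ratio; the landmark indices below assume the 68-point model's mouth points 48-67 and are one reasonable choice, not the only one:

```python
from math import dist  # Python 3.8+

def mouth_aspect_ratio(mouth):
    # mouth: the 20 (x, y) points with indices 48-67 of the 68-point model,
    # passed here so that mouth[0] is landmark 48. Two vertical distances
    # over the horizontal corner-to-corner distance, by analogy with EAR.
    a = dist(mouth[2], mouth[10])   # landmarks 50 and 58
    b = dist(mouth[4], mouth[8])    # landmarks 52 and 56
    c = dist(mouth[0], mouth[6])    # landmarks 48 and 54 (mouth corners)
    return (a + b) / (2.0 * c)
```

      A yawn would show up as a sustained spike in this ratio, the mirror image of the EAR dropping for a closed eye.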

      • kaisar khatak January 28, 2018 at 9:58 pm #

        The only problem is occlusion (when the hand moves in front of the mouth) or if the user is singing a song. I think one might need to use a deep learning training and classification approach. Thoughts?

        • Adrian Rosebrock January 30, 2018 at 10:18 am #

          Deep learning might be helpful but it could also be overkill. If a hand, coffee cup, or breakfast sandwich moves in front of the mouth, I'm not sure that matters provided it's only an occlusion for a short period of time. I doubt many people yawn once and then immediately fall asleep unless they have a specific condition. A more robust drowsiness detector should involve sensor fusion, such as body temperature, heart rate, oxygen levels, etc.

  23. arman December 25, 2017 at 9:48 am #

    This error is shown when I run it, please help… I did not change anything in the code.

    File “pi_detect_drowsiness.py”, line 145
    cv2.putText(frame, “DROWSINESS ALERT!”, (10, 30),
    IndentationError: expected an indented block

    • Adrian Rosebrock December 26, 2017 at 4:03 pm #

      Make sure you use the “Downloads” section of this blog post to download the source code. It looks like you formatted the code incorrectly when copying and pasting.

  24. Raghu January 2, 2018 at 4:46 am #

    Hi Adrian,

    I’m impressed with your Drowsiness Detection algorithm for Raspberry Pi.

    Why don't you develop the algorithm for iOS and Android phones? That would avoid the cost of buying a Raspberry Pi.

  25. glev January 15, 2018 at 4:32 am #

    What is the dlib face landmark detection speed on the Raspberry Pi when the number of people is large (about 10)?

    • Adrian Rosebrock January 15, 2018 at 9:09 am #

      The facial landmark detector is extremely fast, it’s the face detection that tends to be slow. It really depends on what your goal is. Are you trying to apply drowsiness detection to all ten people in the input frame?

  26. Gabriel January 16, 2018 at 9:51 pm #

    Hi Dr Rosebrock.
    I have a question. I ran your code on my Raspberry Pi and get roughly 5 frames per second, but in your video the frames update much faster. Is there some way to get more frames per second?

    • Adrian Rosebrock January 17, 2018 at 10:14 am #

      Just to clarify — did you use my code exactly (downloaded via the “Downloads” form of this blog post)? Did you make any modifications? It would also be helpful to know which model of the Raspberry Pi you are using.

  27. akalya January 24, 2018 at 12:24 pm #

    I tried to use the same code as above, but I have a problem installing dlib on my Windows machine. Can you please tell me how to install it on Windows? I downloaded the dlib package directly from the net but it's not working.

  28. zjfsharp January 26, 2018 at 2:41 am #

    Hi Adrian, Thank you very much.
    I ran the code downloaded from this blog on a Raspberry Pi 3 Model B (Raspbian Stretch), but it's very slow. What is the problem?
    I followed your blogs to install OpenCV 3 and dlib on my Raspberry Pi 3 (optimizing OpenCV on the Raspberry Pi, and install dlib (the easy, complete guide)).

    • Adrian Rosebrock January 26, 2018 at 10:05 am #

      Can you elaborate on what you mean by “slowly”? Are you using a Raspberry Pi camera module or a USB camera? Additionally, how large are the input frames that you are processing? Make sure you are using the Haar cascades for face detection rather than the HOG + Linear SVM face detector provided by dlib. This will give you additional speed as we do in this blog post.

      • Charlie January 30, 2018 at 12:41 am #

        Hi Adrian,
        Thanks for your sharing. I have the same problem as zjfsharp. I did exactly as the post describes (with the optimized OpenCV install) and successfully ran the downloaded, unchanged code on my Raspberry Pi 3 Model B. But the FPS is around 4. The operating system is Raspbian Stretch Lite with a GUI. While running the code, the CPU runs at 600MHz (half of 1.2GHz). Memory usage is about 40 percent. The result is far less smooth than your video shown above.

        • Adrian Rosebrock January 30, 2018 at 10:06 am #

          Hey Charlie — just to clarify, how are you accessing your Raspberry Pi? Via SSH or VNC? Or via a standard keyboard + HDMI monitor setup? Additionally, are you using a USB webcam or a Raspberry Pi camera module?

          • Charlie January 30, 2018 at 10:15 pm #

            I'm using a USB cam (Logitech EM2500) via a standard keyboard+HDMI setup. Low FPS seems to have nothing to do with CPU frequency (boosted to 1.2GHz), CPU and memory usage, or power supply (5V, 2A).

          • Adrian Rosebrock January 31, 2018 at 6:40 am #

            Thanks for sharing the hardware setup, Charlie. Are you using Python 2.7 or Python 3?

          • Charlie February 1, 2018 at 6:03 am #

            Python 3. I’m still stuck here. Do you have any idea? Thanks for your reply.

          • Adrian Rosebrock February 3, 2018 at 11:05 am #

            I know Python 3 handles threading and queuing slightly different than Python 2. Would you be able to try Python 2 and see if you have the same results?

        • ibrahim April 15, 2018 at 8:54 am #

          hello did you solve your problem charlie ?

  29. Emre Osma January 30, 2018 at 1:48 am #

    Hello Adrian,
    The OS is Raspbian Stretch and the hardware is a Raspberry Pi 2.
    OpenCV and all other imports are OK,
    but the result is "AttributeError: 'NoneType' object has no attribute 'shape'" 🙂
    Any comments?
    thanks in advance

    $ python pi_detect_drowsiness.py –cascade haarcascade_frontalface_default.xml –shape-predictor shape_predictor_68_face_landmarks.dat –alarm 1
    [INFO] using TrafficHat alarm…
    [INFO] loading facial landmark predictor…
    [INFO] starting video stream thread…
    Traceback (most recent call last):
    File “pi_detect_drowsiness.py”, line 88, in
    frame = imutils.resize(frame, width=450)

    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock January 30, 2018 at 10:03 am #

      If you are getting a “NoneType” error than OpenCV cannot read the frame from your Raspberry Pi camera module or USB webcam. Double-check that OpenCV can access your Raspberry Pi camera by following this post. Additionally, you should read up on NoneType errors and how to debug them here.

      • Emre February 2, 2018 at 5:00 am #

        installing cam driver solved my problem 🙂
        sudo modprobe bcm2835-v4l2
        thanks a lot

      • muhammmad January 20, 2019 at 1:57 pm #

        Hi Adrian, I have confirmed that the camera captures video and photos, as I followed your previous camera setup tutorial.
        However, I'm having exactly the same error as Emre. I'm using the Raspberry Pi camera for my project. What should I do? Any other ideas?

        • muhammmad January 20, 2019 at 2:20 pm #

          emre’s solution also worked for me. now working alhamdulillah

  30. Emre Osma January 30, 2018 at 1:49 am #

    using “Raspberry Pi camera module” not a USB one.

  31. Henrick February 1, 2018 at 6:40 am #

    This is awesome. Can I execute this program on boot, though? I tried using your tutorial on crontab, and instead of "python pi_reboot_alarm.py" I replaced it with this:
    python pi_detect_drowsiness.py –cascade haarcascade_frontalface_default.xml \
    –shape-predictor shape_predictor_68_face_landmarks.dat –alarm 1
    But it did not work at all. Can you help me out?

    • Adrian Rosebrock February 3, 2018 at 11:01 am #

      See this tutorial on running a script on reboot.

      You’ll either need to access your Python virtual environment and then execute the script (best accomplished via a shell script) or supply the full path to the Python binary (which I think is a bit easier).
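      As an illustration, a crontab entry along these lines could work (all paths below are placeholders; adjust them to your own virtual environment and download location):

```shell
# Edit the crontab with `crontab -e` and add one @reboot line.
# Using the full path to the virtualenv's Python binary avoids having to
# activate the environment first; redirecting output to a log file helps
# debug failures at boot.
@reboot /home/pi/.virtualenvs/cv/bin/python /home/pi/pi-drowsiness-detection/pi_detect_drowsiness.py --cascade /home/pi/pi-drowsiness-detection/haarcascade_frontalface_default.xml --shape-predictor /home/pi/pi-drowsiness-detection/shape_predictor_68_face_landmarks.dat --alarm 1 >> /home/pi/drowsiness.log 2>&1
```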

  32. nandhu February 6, 2018 at 1:14 pm #

    Hello Adrian,
    I have a doubt: I am using a Raspberry Pi controller and I coded in Python. Does the laptop need to stay attached to the module, or can the coded program be uploaded to the controller and the laptop detached? Can you explain?
    I am a very beginner.

    • Adrian Rosebrock February 8, 2018 at 8:46 am #

      If you’re a beginner I would suggest coding directly on the Raspberry Pi, that way you won’t be confused on which system the code is executing on.

  33. JP February 15, 2018 at 5:22 am #

    Can I put the code inside the Python shell in the virtual environment? I am having big trouble with how to start scripting, what IDE I should use, and how to run it. I am very sorry if I look dumb, but I am really new to this kind of tech. Can someone help me?

    • Adrian Rosebrock February 18, 2018 at 10:01 am #

      Instead of trying to use the Python shell or an IDE to run the code, simply open up a terminal, access your Python virtual environment via the "workon" command, and execute the script from the terminal. There is no need to launch a shell or IDE.

  34. quantumboy February 25, 2018 at 4:40 am #

    I am facing this error. Please help.

    [INFO] loading facial landmark predictor…
    [INFO] starting video stream thread…

    from picamera.array import PiRGBArray
    ImportError: No module named ‘picamera’

    • Adrian Rosebrock February 26, 2018 at 1:55 pm #

      You need to install the picamera library with NumPy array support:

      $ pip install "picamera[array]"

  35. Swathi March 5, 2018 at 12:23 pm #

    I'm facing an issue with the buzzer. Can you help me fix it and also share code that runs with the buzzer? It would be a great help. Thank you.

    • Adrian Rosebrock March 7, 2018 at 9:23 am #

      What is the exact error/issue you are having?

      • Swathi March 11, 2018 at 2:59 pm #

        Hi Adrian,

        I'm unable to implement the buzzer in my code. Can you please share code with a buzzer? eSpeak is not working properly and I'm unable to hear any sound from the buzzer; if you could provide correct code it would be really helpful. Thank you.

        Also, I need to know which GPIO pin I should connect the buzzer to.

  36. rishabh rathod March 13, 2018 at 7:53 am #

    I used this code with a Logitech C270, but processing is slow on the Raspberry Pi 3B model.
    Can you help increase the speed?
    The delay is 1.5 - 2 seconds.
    The same code works very smoothly on a laptop.

    • Adrian Rosebrock March 14, 2018 at 12:44 pm #

      How are you accessing your Raspberry Pi? Via HDMI monitor and keyboard? Over SSH? Over VNC? It sounds like you may be using SSH or VNC.

  37. Habib ali March 14, 2018 at 6:57 am #

    Hi Adrian , thank you for this tutorial it’s really helpful !

    I want to extract the x and y for a specific point on the face, for example eye[2]. How can I do that please? Thanks in advance 🙂

    • Adrian Rosebrock March 14, 2018 at 12:29 pm #

      Take a look at this post as it demonstrates how to extract the various facial features. Once you have them you can extract individual (x, y)-coordinates as well.

  38. Himadri Chowdhury March 27, 2018 at 4:20 pm #

    Hello sir, I have installed all the dependencies and OpenCV properly but cannot run the file from the command prompt. Here is the path that I have included:

    ap.add_argument(“-c”, “-/home/pi/Downloads/pi-drowsiness-detection/haarcascade_frontalface_default.xml”, required=True,
    help = “/home/pi/Downloads/pi-drowsiness-detection”)
    ap.add_argument(“-p”, “-/home/pi/Downloads/pi-drowsiness-detection/shape_predictor_68_face_landmarks.dat”, required=True,

    Is it ok?
    When I go to run the file from the cmd shell, it gives me a "no such file or directory" error.

  39. liu hengli March 30, 2018 at 4:45 am #

    could you write a post about gaze tracking

    • Adrian Rosebrock March 30, 2018 at 6:40 am #

      Sure, I will consider this for a future tutorial.

  40. CS March 30, 2018 at 2:04 pm #

    I am getting a black frame on executing the program.
    The Pi camera LED is on, but the frame window is black.

    • CS March 30, 2018 at 2:23 pm #

      I am using the picamera module; it's a high-resolution picamera module.

      • Adrian Rosebrock April 4, 2018 at 12:51 pm #

        This is likely a firmware and/or picamera version issue. I discuss how to resolve the problem in this blog post.

  42. Rohit Thakur April 12, 2018 at 1:41 am #

    Hi Adrian, I am wondering: can we use dlib's 5-point face landmark detector here instead of the 68-point one? If so, what are the necessary changes I would have to make in this code to run it on the Pi? Can you please specify or advise? Also, how much do you think it would help in improving performance in terms of speed, accuracy, and memory size?

    • Adrian Rosebrock April 13, 2018 at 6:51 am #

      No, you cannot use the 5-point model for drowsiness detection (at least in terms of this code). I discuss why you cannot use the 5-point model inside the 5-point facial landmark post. Be sure to give it a read.

  43. cyrille April 23, 2018 at 4:55 am #

    Good morning. I have tried to implement this algorithm in Python 3.6.4.
    Lines 50 and 66 are giving errors and there are some libraries that do not work.
    What can I do to solve these problems?

    • Adrian Rosebrock April 25, 2018 at 6:05 am #

      What are the exact errors that you are getting? Keep in mind that if I, or other PyImageSearch readers, do not know what problems or errors you are having we will be unable to help.

  44. Mario April 27, 2018 at 8:58 am #

    Hi Adrian
    When I run your code via python 3 on terminal, it gives me an attribute error on line 69.

    predictor = dlib.shape_predictor(args[“shape-predictor”])
    AttributeError: ‘module’ object has no attribute ‘shape_predictor’

    Could you please help me out ?

    • Adrian Rosebrock April 28, 2018 at 6:07 am #

      Hey Mario — what version of dlib are you using?

      • Mario May 1, 2018 at 10:29 pm #

        Hi Adrian

        I installed OpenCV and all the requirements from scratch by following your tutorial again, and now everything works like a charm. Thanks anyway!

        • Adrian Rosebrock May 3, 2018 at 9:38 am #

          Awesome, I’m glad to hear it Mario! Congrats on getting OpenCV installed!

      • Abdul Hadi December 9, 2018 at 10:37 am #

        Hi Adrian, I have the same error.

  45. Eric Nguyen April 29, 2018 at 2:10 am #

    Is there a reason you chose to use VideoStream over VideoCapture to get frames?

    I'm trying to add code to record the video to a file using a VideoWriter object. I am having trouble getting the video to record at regular speed, though; it seems to be in slow motion. I think it has to do with the VideoStream object getting frames. I tried recording with a VideoCapture object (in a test script) and it runs much faster. Any advice would be great! Thanks so much!

    • Adrian Rosebrock April 30, 2018 at 12:53 pm #

      Hi Eric. VideoStream is part of my own threaded implementation in imutils. I also have FileVideoStream, which uses VideoCapture. Check out the source on GitHub.

  46. AM April 29, 2018 at 6:08 am #

    Hi Adrian,
    When I run this code on my Raspberry Pi 3, the results I get have a delay of 8 to 10 seconds. Can you please suggest something to make this faster?

    • Adrian Rosebrock April 30, 2018 at 12:50 pm #

      Dlib now has a 5-point facial landmark detector that will be significantly faster than the 68-point one. Please see this blog post introducing the 5-point detector.

      • AM May 2, 2018 at 7:59 am #

        But as you have mentioned in the blog, with the 5-point facial landmark detector we get only two points per eye and cannot calculate the EAR…
        I have to detect drowsiness on the RPi 3. Please help me find a solution.

        • Adrian Rosebrock May 3, 2018 at 9:33 am #

          You cannot use the 5-point facial landmark detector to compute EAR. The 5-point facial landmark detector cannot be used for drowsiness detection.
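          For reference, the EAR computation that requires all six eye landmarks from the 68-point model can be sketched as follows (a minimal sketch, not the post's exact code):

```python
from math import dist  # Python 3.8+

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks for one eye, ordered p1..p6 as in the
    # 68-point model. Two vertical distances over the horizontal one;
    # the ratio collapses toward zero as the eye closes.
    a = dist(eye[1], eye[5])  # p2 - p6
    b = dist(eye[2], eye[4])  # p3 - p5
    c = dist(eye[0], eye[3])  # p1 - p4
    return (a + b) / (2.0 * c)
```

          The 5-point model only gives the two eye corners (p1 and p4), so the vertical terms in the numerator cannot be computed at all.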

  47. Sa-rang June 6, 2018 at 8:19 am #

    Hey Adrian, I've followed your guide and everything is finished except the sound. As I don't use the TrafficHat, I think I should change the code on lines 40-44:

    # check to see if we are using GPIO/TrafficHat as an alarm
    if args["alarm"] > 0:
        from gpiozero import TrafficHat
        th = TrafficHat()
        print("[INFO] using TrafficHat alarm...")

    I want to make the sound come from the 3.5mm speaker output. Should I change or delete these lines?
    I really appreciate your help.

    Best Regards, Sa-rang.

    • Adrian Rosebrock June 7, 2018 at 3:10 pm #

      If you have no intention of using TrafficHat then delete all TrafficHat code from the file.

  48. lalo June 17, 2018 at 3:14 pm #

    Hello Adrian, I have the same problem as Charlie. I have a Raspberry Pi 3 with a Logitech C920 webcam, and I run it with Python 3 but it gives me 5 fps.

    • Adrian Rosebrock June 19, 2018 at 8:53 am #

      What about Python 2.7? Does it give you the same FPS as well?

  49. aashish July 2, 2018 at 3:21 am #

    I need the source code for the Raspberry Pi.

    • Adrian Rosebrock July 3, 2018 at 7:31 am #

      You can use the “Downloads” section of this blog post to download the source code.

  50. allrightsreversed August 4, 2018 at 2:32 am #

    May I ask what the limitations of this device are?

    • allrightsreversed August 4, 2018 at 2:40 am #

      And what Python version are you using here?

      • Adrian Rosebrock August 7, 2018 at 6:58 am #

        I’m not sure what you mean by “limitations”, you’ll need to be more specific as I don’t know if you’re referring to computational limitations, deployment, etc. To address your second question, I used Python 3 but this code will also work with Python 2.7.

        • allrightsreversed August 7, 2018 at 9:13 pm #

          I mean, at night time when the light is very minimal, is it able to detect the eyes and perform its function? And does the design of the device not block the view of the driver? Thanks sir

          • Adrian Rosebrock August 9, 2018 at 3:01 pm #

            Provided you can detect the face and facial landmarks this method will work. If you cannot detect the face, such as if the face is obscured, it will not work.

  51. rabbani August 7, 2018 at 2:41 am #

    Is it possible for the drowsiness alert to be sent to a mobile or web page?

    • Adrian Rosebrock August 7, 2018 at 6:29 am #

      Technically yes, but you would need to modify the code to upload the alert to a web server first. Again, 100% possible but you would need to decide which web service you are using and then read the corresponding documentation.

  52. try August 10, 2018 at 11:38 pm #

    Is it okay to use the latest Python version here (Python 3.7)?

    • Adrian Rosebrock August 15, 2018 at 9:15 am #

      Yes, it should be okay.

  53. Test August 20, 2018 at 9:45 am #

    Sir, if I use the Raspberry Pi Model 3 B+ here, are there any changes in terms of performance?

    • Adrian Rosebrock August 22, 2018 at 9:53 am #

      The Pi 3B+ is slightly faster so you will see a small increase in speed but not a massive amount.

  54. Rohan September 14, 2018 at 1:41 pm #

    Sir I need to implement this in dark as my project. Can you help me out with the code changes as I am not able to get the changes done.

    • Adrian Rosebrock September 17, 2018 at 2:59 pm #

      It’s unfortunately not as simple as changing a few lines of code here and there. What have you already done to get this project working in low light or no light conditions? What camera are you using?

  55. Muhammad September 17, 2018 at 9:03 pm #

    Thanks Adrian. I really enjoy this tutorial.

    1. Can I upload this system's information to a database using the Raspberry Pi? If yes, which database do you recommend (Firebase, SQL, or…)?

    2. If I intend to use a buzzer instead of the TrafficHat, would there be a significant difference in the code you provided?

    • Adrian Rosebrock September 18, 2018 at 7:19 am #

      1. Exactly which database you use is really dependent on your project specifications. You should do your own research there. But typically a good first start is a SQL-based database and then go from there.

      2. No, there would not be a significant change. Just swap out the TrafficHat code for the GPIO code specific to your buzzer.

  56. Prerna September 23, 2018 at 11:54 am #


    I am new to image processing and Python as well… can you give detailed information regarding the installations needed to carry out this project?

    Thanks in advance.

  57. Rendhel Ricohermoso October 9, 2018 at 10:18 am #

    Good Day Doctor Adrian!

    Your project is amazing! I would like to ask if you have already tried using an IR camera in low light conditions for detecting drowsiness? Your reply will be highly appreciated. How about using the Pi NoIR camera, sir?

    Thanks and Regards

    • Adrian Rosebrock October 12, 2018 at 9:29 am #

      Hey there Rendhel, I have not tried the code directly with a Pi Noir camera.

  58. Muhammad October 9, 2018 at 12:59 pm #

    Hi Adrian, I'm new to IoT and need your clarification. As for the TrafficHat and the Raspberry Pi 3, are they two different things or the same (Pi combined with TrafficHat)? You made them sound like two, but according to your image it's one? So which is which?

    • Adrian Rosebrock October 12, 2018 at 9:28 am #

      The TrafficHat is a component that connects to the Raspberry Pi itself. They are two different pieces of hardware that connect together.

  59. Muhammad October 10, 2018 at 11:23 am #

    hi adrian,
    thanks for taking your time to reply to us.
    If I plan to use a night vision camera for this program, does any of the code need to be modified, or can I use the same code to detect faces in low light/darkness?

    • Adrian Rosebrock October 12, 2018 at 9:14 am #

      Potentially, but I would start by trying with the night vision camera first before you plan on making any changes.

  60. Moh October 11, 2018 at 12:03 pm #

    hi adrian.
    Is it possible to control the volume of the TrafficHat buzzer? As in, start from low to high according to the frequency of eye closure: in other words, the deeper the sleep, the louder the sound.

    pls reply sir

    • Adrian Rosebrock October 12, 2018 at 8:58 am #

      As far as I know it's not possible, but you should reach out to the creators of the TrafficHat to verify.

  61. Tim October 15, 2018 at 2:46 pm #

    Hi Adrian!

    Great work as you do!

    But I have the same problem with Charlie.

    I'm using a USB cam (Logitech C920) via a standard keyboard+HDMI setup. Unfortunately, when I ran this script using the Haar cascade and 68 facial landmarks with Python 2.7, I got bad performance: the FPS is low and the video stream is slow. How can I get better performance?

    Thanks in advance.

    • Adrian Rosebrock October 16, 2018 at 8:25 am #

      Hey Tim — I haven’t been able to replicate the problem that both you and Charlie have had, unfortunately. Try to debug the issue by writing a separate Python script that only pulls frames from your camera and display them to your screen. Is the lag as bad? If so, probably an OpenCV or hardware issue. If not, then you can further debug which part of the code is really slowing you down.

  62. sajid rehman October 19, 2018 at 7:59 am #

    I want to add a condition to this code so that if no eyes are detected it also starts the alarm (e.g., the driver is sleeping and has slumped out of the camera's view). Where should I apply the if condition and on which value? Please help.

    • Adrian Rosebrock October 20, 2018 at 7:31 am #

      I would instead modify the code that if “no face is detected for N frames”, where N is a value you define, then you sound the alarm.
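      A minimal sketch of that counter logic (the threshold of 48 frames and the helper name are arbitrary examples, not values from the post):

```python
NO_FACE_FRAMES = 48  # N: consecutive face-less frames before the alarm (example value)

def update_no_face_counter(counter, face_detected):
    # Returns (new_counter, alarm_on). The counter resets as soon as a
    # face is seen again, so brief detection dropouts don't trigger the alarm.
    if face_detected:
        return 0, False
    counter += 1
    return counter, counter >= NO_FACE_FRAMES
```

      In the main loop this would be called once per frame with the result of the face detector, sounding the alarm whenever `alarm_on` comes back True.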

  63. Moh October 23, 2018 at 9:33 pm #

    Using the same system, is it possible to emit a vibration when drowsiness is detected? What extra hardware do I need to attach?

    • Adrian Rosebrock October 29, 2018 at 2:05 pm #

      You would need a hat for your Pi that has a vibration functionality.

  64. Rendhel Ricohermoso October 27, 2018 at 10:30 am #

    Sir Adrian,

    Does the camera you use cut infrared? I am planning to use the NoIR camera so it can be applied in low light conditions. Thanks for the reply.


    • Adrian Rosebrock October 29, 2018 at 1:33 pm #

      No, I used a standard USB camera for this project but you could use a NoIR camera if you wished.

      • Rendhel Ricohermoso October 29, 2018 at 9:07 pm #

        Will there be anything to change in your setup, sir Adrian?

        • Adrian Rosebrock November 2, 2018 at 8:36 am #

          I would suggest you try and see. It’s nearly impossible for me to predict without seeing your actual environment and where you intend on deploying it. The best way to learn is to learn by doing — it is now your turn 🙂

  65. Akter Ahsan November 12, 2018 at 1:03 am #

    Hi Adrian!

    Great work! I ran the project and it's working fine, but I don't have the TrafficHat. Instead of the TrafficHat I want to use GPIO directly to operate a relay module. Please suggest which section of the code I should change, and what code I should put there.


    • Adrian Rosebrock November 13, 2018 at 4:47 pm #

      Anywhere you see TrafficHat code you’ll want to swap that out for your GPIO code. Exactly what that code looks like is 100% dependent on which relay module you’re using, which GPIO pins you’re using, etc.

  66. Pacquier November 20, 2018 at 3:37 am #

    Hi. I just read this article and would like to know if it would be possible to create a program that lets the RPi automatically select between the RPi cam or a webcam when both are simultaneously connected to the Raspberry Pi, depending on which of the two cameras detects a face.
    I'm trying to create a drowsiness system like yours, only with multiple cameras attached (probably 2 or 3). The cameras would only start capturing depending on which of them detects a face.

    • Adrian Rosebrock November 20, 2018 at 9:06 am #

      It's absolutely possible. Start by reading this tutorial on multiple cameras with the Pi. Each frame will need to be read and face detection applied. If a face is found, hand off the face to a separate process to perform recognition, then highlight and show the stream of that camera on your desktop.
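      A rough sketch of the camera-selection logic (a hypothetical helper, not code from the post; it assumes face detection has already been run on each camera's latest frame):

```python
def select_active_camera(face_counts, current=None):
    # face_counts: number of faces detected in the latest frame of each
    # camera. Stay locked on the current camera while it still sees a
    # face; otherwise switch to the first camera that does.
    if current is not None and face_counts[current] > 0:
        return current
    for idx, count in enumerate(face_counts):
        if count > 0:
            return idx
    return None  # no camera currently sees a face
```

      The "lock while a face is visible" rule prevents the stream from flickering between cameras when more than one sees the driver at once.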

      • Pacquier March 2, 2019 at 9:49 am #

        Hi. I tried using three cameras. I used if-else statements to switch between whichever camera is detecting a face and then lock onto that face as long as that specific camera still detects it. I ran dlib facial detection on each camera's frames. However, I noticed a decrease in the speed of the captured frames: the alarm triggers a little later than usual after detecting closed eyes. Is it because of my code, or because the facial detection function is running on 3 different camera feeds? I'd like to know your thoughts.

        • Adrian Rosebrock March 5, 2019 at 9:01 am #

          You’re running face detection + recognition on 3 separate cameras? If so, that’s the issue. The Raspberry Pi just isn’t powerful enough for that.

          • Pacquier March 6, 2019 at 6:21 am #

            When you said in your previous comment that what I want to do is absolutely possible, did that include the slowing down of the frame capture? Or does what you had in mind not really match what I'm doing with the code?
            I would just like to know if there is a way to optimize my code so it can use three separate cameras to capture the driver's face without really slowing the RPi very much.

          • Adrian Rosebrock March 8, 2019 at 5:43 am #

            Hey Pacquier — I would suggest keeping an eye on the PyImageSearch blog for my upcoming Computer Vision + Raspberry Pi book. I'll be covering how to optimize the Pi for computer vision applications, covering both the hardware and software side of things. It's too much for me to address in a single comment on this post, so I hope you'll take a look at the book. I'll be sharing more details soon.

  67. Abdul Hadi December 9, 2018 at 4:08 am #

    Hi Adrian,
    I am facing two errors in the drowsiness detection program using facial landmarks:
    1) ImportError: imutils
    No module named imutils
    2) ImportError: dlib
    No module named dlib
    I have installed both dlib and imutils via pip install in a virtual environment.

    • Adrian Rosebrock December 11, 2018 at 12:54 pm #

      You may be forgetting to access your Python virtual environment before executing the script:

      $ workon your_env_name

  68. Florin December 11, 2018 at 6:16 am #

    Hi Adrian, I have a fuzzy error, can you help me please:

    ImportError: No module named imutils.video

    • Adrian Rosebrock December 11, 2018 at 12:32 pm #

      You need to install the “imutils” library:

      $ pip install imutils

  69. junhao wu December 15, 2018 at 4:53 am #

    Hi Adrian, I know that the Raspberry Pi can’t achieve very good real-time performance. Can you recommend a development board with better real-time performance?

    • Adrian Rosebrock December 18, 2018 at 9:17 am #

      It actually depends on what you’re trying to do. For some applications of computer vision the Raspberry Pi can run in real-time. And in other cases you should just use the Movidius NCS to speed it up. Otherwise I recommend the Jetson TX2 if you’re interested in embedded deep learning.

  70. Bilal December 25, 2018 at 12:10 pm #

    Hi sir, can it detect the eyes at night, or when a person is wearing eyeglasses?

    • Adrian Rosebrock December 27, 2018 at 10:24 am #

      No, this method is intended for use when you can clearly detect the eye regions of the user.

      • Bilal January 1, 2019 at 11:27 am #

        If we use a night vision camera, is it possible to do it at night?

  71. Max January 6, 2019 at 10:04 pm #

    Sir Adrian,
    Thank you for your amazing introduction and idea. I’ve set up my Pi and run the code you provided. However, the IDE says that there is an error:
    pi_detect_drowsiness.py: error: the following arguments are required: -c/--cascade, -p/--shape-predictor

    It may be an easy problem to deal with, but I just have no idea how to fix it.
    Would you please help me?
    Once again, thanks a lot!

    • Max January 6, 2019 at 10:40 pm #

      Sir Adrian,
      I found that the code should be run in the shell, and it says:
      ImportError: No module named imutils.video

      But I have installed imutils, so how did this happen?

      • Adrian Rosebrock January 8, 2019 at 6:55 am #

        According to your error it sounds like imutils is not actually installed. You can install it via:

        $ pip install imutils

    • Adrian Rosebrock January 8, 2019 at 6:54 am #

      If you’re new to command line arguments and argparse, that’s okay, but you need to read this tutorial first. Once you read the guide you will understand how to supply the proper command line arguments to the script.
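
For readers hitting this error, here is a minimal sketch of the script’s two required arguments (the file names below are illustrative placeholders; supply the paths to your own cascade and landmark predictor on the command line):

```python
import argparse

# Sketch of the two required arguments the error message refers to.
# Normally you would supply them on the command line, e.g.:
#   $ python pi_detect_drowsiness.py --cascade face.xml --shape-predictor landmarks.dat
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to the Haar cascade used for face detection")
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to the dlib facial landmark predictor")

# Parsing a sample argument list for illustration; note that argparse
# maps --shape-predictor to the dictionary key "shape_predictor".
args = vars(ap.parse_args([
    "--cascade", "face.xml",
    "--shape-predictor", "landmarks.dat"]))
```

If either flag is missing, argparse exits with exactly the error shown above.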

  72. Kyle January 7, 2019 at 1:05 am #

    Hi Adrian, I’m from the Philippines and I love your work on this. But I need help on how to sound the alarm using speakers via the audio jack, since the TrafficHAT isn’t available here in our country. Thanks!

    • Adrian Rosebrock January 8, 2019 at 6:53 am #

      I demonstrate how to use a speaker with the Raspberry Pi in this guide.

  73. smiley_bin January 16, 2019 at 9:07 pm #

    Hi Adrian, Amazing article!
    If I want to use the Pi camera, do I just comment out Line 74 and uncomment Line 75 to switch the video stream to the Raspberry Pi camera?
    Also, if I use an infrared camera, can I detect drowsiness at night? Thanks :)

    • Adrian Rosebrock January 22, 2019 at 9:58 am #

      You are correct on both counts. An infrared camera will help with drowsiness detection at night.

  74. Muhammad January 27, 2019 at 1:21 pm #

    Hi Adrian, the alarm does not sound accurately upon eye closure. Sometimes I close my eyes and the alarm is not sounded (drowsiness is not detected), like the accuracy is very low. What should I do to make it work in real time?
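
The sensitivity at issue here is governed by two constants in the detector: the eye aspect ratio (EAR) threshold and the number of consecutive low-EAR frames required before the alarm fires. A minimal sketch of that counter logic follows; the two values shown are illustrative, not the post’s exact defaults, and lowering the consecutive-frame count makes the alarm fire sooner on a low-FPS Pi.

```python
# Illustrative values -- tune them for your camera and frame rate.
EYE_AR_THRESH = 0.30       # eyes count as "closed" below this EAR
EYE_AR_CONSEC_FRAMES = 16  # closed frames needed before the alarm fires

def update_alarm(counter, ear):
    """Advance the per-frame counter; return (counter, alarm_on)."""
    if ear < EYE_AR_THRESH:
        counter += 1
    else:
        counter = 0  # eyes opened: reset immediately
    return counter, counter >= EYE_AR_CONSEC_FRAMES
```

Because the counter is measured in frames, a slower pipeline needs a smaller `EYE_AR_CONSEC_FRAMES` to represent the same real-world duration of eye closure.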

  75. Rendhel Ricohermoso February 11, 2019 at 8:56 pm #

    Hi Sir Adrian,

    Thank you for your project. May I ask how you power up your Raspberry Pi in your car? If by power bank, what brand of power bank do you use? A reply is very much appreciated.


  76. Mohammad Almas Rizwan February 14, 2019 at 12:52 am #

    Hello Sir,
    I need to know if this project can be implemented on the Raspberry Pi Zero (1 GHz processor and 512 MB RAM)?

    • Adrian Rosebrock February 14, 2019 at 12:47 pm #

      The Raspberry Pi Zero will unfortunately be too slow for this project. I would highly recommend you use a Pi 3.

  77. sai krishna February 20, 2019 at 1:38 am #

    How do I connect the Raspberry Pi to a laptop, sir? Please reply soon.

    • Adrian Rosebrock February 20, 2019 at 12:03 pm #

      Unless I’m misunderstanding your question, typically we just SSH into our Raspberry Pi via a laptop/desktop:

      $ ssh pi@your_ip_address

  78. Rohit February 28, 2019 at 8:54 pm #

    I am getting an error when I run your code, sir.
    It is “No module named cv2”. But we have installed OpenCV using your tutorial. What should we do to clear this error, sir?

    • Adrian Rosebrock March 1, 2019 at 5:29 am #

      Unfortunately it sounds like you do not have OpenCV properly installed. You should refer to my OpenCV install guides. Which one did you follow? Make sure you refer to the “FAQ” section at the bottom of each post which explains common errors such as yours.

  79. Salman Khalid March 17, 2019 at 1:46 pm #

    It is not working when I put on glasses, so kindly tell me what I can do?

    • Adrian Rosebrock March 19, 2019 at 10:06 am #

      This method will not work reliably with glasses.

  80. Rizal March 20, 2019 at 9:12 am #

    Hello Sir Adrian, can I get the full code for this project?

    • Adrian Rosebrock March 22, 2019 at 8:57 am #

      You can use the “Downloads” section of this post to download the source code.

  81. students March 26, 2019 at 2:27 am #

    Hi. Thanks to this post, I am a student who has completed the detection of the eyes. Thank you first.

    Can I ask you a few questions?

    I am currently using a Raspberry Pi 3B+ and the Pi camera.

    First, the provided source code uses Haar cascades. Is there source code using HOG + Linear SVM? I’m having a problem with accuracy.

    Second, if I use the infrared Pi camera, can I proceed with the same source code?

    I’ll wait for your answers. Thank you.

    • Adrian Rosebrock March 27, 2019 at 8:45 am #

      HOG + Linear SVM will be very slow on the Pi. If you want to try you can follow this tutorial.

  82. NullNull March 26, 2019 at 9:56 pm #

    Hello! This is a wonderful post!

    I have a few questions.
    First, I want to use an infrared pi camera. Is there anything to modify in the source code section?

    Second, is there any code for HOG + Linear SVM?

    • Adrian Rosebrock March 27, 2019 at 8:31 am #

      1. I haven’t tested this code with an infrared camera. You would need to test it and see.

      2. You mean HOG + Linear SVM for face detection? Or arbitrary object detection?

  83. Abubakar April 1, 2019 at 10:52 am #

    Please, does anyone have an idea of how I can change the video source to be streamed in a Python GUI? I have already created the GUI, but I cannot stream the video to the GUI interface. I’m using a GUI because I added some features that need to be in the GUI.

    • Adrian Rosebrock April 2, 2019 at 5:50 am #

      Have you tried this tutorial?

      • Abubakar April 4, 2019 at 3:22 pm #

        Thanks, exactly what I’m looking for.

  84. Raspberry April 3, 2019 at 2:14 am #


    I implemented the project according to the post, but the FPS is about 5-7, so I’m going to add a Movidius NCS.

    The Movidius NCS supports Caffe and TensorFlow models; can it help the project in this post?

    I have purchased a Movidius NCS and I do not know how to do the initial setup and installation.

    I would appreciate your help.

  85. Muhammad April 11, 2019 at 1:26 pm #

    Hi guys,

    I tried adding some functions in the for loop (the main loop of the program). I even tried using threading (declaring functions outside the loop and using threading to call them) in the for loop so as to keep the video stream from slowing down. However, with all these precautions, the streaming is quite slow. What do you suggest I do in order to add some functions in the for loop and at the same time maintain a normal streaming speed?

    I would rather say the for loop is quite fragile. Your suggestions will really help.


  86. Fadil April 23, 2019 at 12:01 pm #

    Hey, can I use an infrared web camera instead of the camera used here?

    • Adrian Rosebrock April 25, 2019 at 8:46 am #

      I haven’t tried this code with an infrared camera. Give it a try and see! I would love to know.

  87. sumanth April 29, 2019 at 2:52 pm #

    If I don’t use the TrafficHAT, what should I do to sound an alarm to the driver, and what are the updates to the code? Please, can you give any suggestion?

    • Adrian Rosebrock May 1, 2019 at 11:41 am #

      Have you tried using a speaker instead?
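
For a speaker on the Pi’s audio jack, one hedged option is shelling out to `aplay` from a background thread so the detection loop is never blocked while the sound plays. The WAV filename and the player command below are assumptions — adjust them for your own setup:

```python
import subprocess
import threading

def sound_alarm(cmd=("aplay", "alarm.wav")):
    """Run the alarm command in a daemon thread so the video loop
    keeps grabbing frames while the sound plays.

    cmd -- the player command as a tuple; "aplay alarm.wav" is an
           assumption for a Pi with ALSA and a WAV file on disk.
    """
    t = threading.Thread(target=subprocess.call, args=(list(cmd),))
    t.daemon = True
    t.start()
    return t
```

You would call `sound_alarm()` at the point where the TrafficHAT buzzer is triggered in the original code.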

  88. Shubhanker May 2, 2019 at 1:10 am #

    I am using a simple 5V piezo buzzer instead of the TrafficHAT. Please help me out: in this case, which lines do I need to change in the code?

    • Adrian Rosebrock May 8, 2019 at 1:46 pm #

      Sorry, I am not familiar with that buzzer. You should refer to the documentation associated with your buzzer.

  89. Fadil May 5, 2019 at 5:58 pm #

    Hey, I’m getting an error: the following arguments are required: -c/--cascade, -p/--shape-predictor. I’ve read your tutorial on Python, argparse, and command line arguments. Both files are in the same directory as the others, but I’m still getting this error. Please help me out.

  90. Praveen Andhale August 8, 2019 at 2:40 pm #

    Fantastic Article !
    Thanks a lot Adrian.

    • Adrian Rosebrock August 16, 2019 at 6:01 am #

      I’m glad you enjoyed it 🙂

  91. Josue August 14, 2019 at 10:37 pm #

    Dear Dr. Adrian,
    Your projects are very, very amazing and important. Thank you for sharing your knowledge.

    Right now, my drowsiness detector is working on my Raspberry Pi in real time. It was difficult to implement some libraries, but in the end it’s working very well.

    Could you enlighten me: how can I detect the eyes through sunglasses? Help me please!!!

    Greetings and hugs from Ecuador

  92. Lee September 24, 2019 at 11:39 pm #

    This might be a basic question, but how can I check my Pi camera’s frame rate?
    I also saw the “Drowsiness detection with OpenCV” article, but I still don’t know my Pi camera’s frame rate. Where can I read it from? Everyone here seems to know it, but I do not. Can you please help me?
    Also, I tried the Haar cascade method from your instructions, and it sped up! (But I still have no idea about the frame rate.) However, it detects my eyes and nose as faces too, so is there any solution for that?

    • Adrian Rosebrock September 25, 2019 at 10:32 am #

      See this tutorial on measuring (and improving) your FPS throughput rate.
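
A quick back-of-the-envelope FPS estimate needs only the standard library; `read_frame` below is a hypothetical stand-in for your frame-grab call (e.g., `vs.read`):

```python
import time

def measure_fps(read_frame, num_frames=100):
    """Time `num_frames` consecutive reads and return frames per second."""
    start = time.time()
    for _ in range(num_frames):
        read_frame()
    elapsed = time.time() - start
    return num_frames / elapsed if elapsed > 0 else float("inf")
```

Calling it as `measure_fps(vs.read)` after the stream has warmed up gives a rough throughput number; a larger `num_frames` smooths out per-frame jitter.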

  93. Dalia Ibrahim September 28, 2019 at 7:53 am #

    Greatest projects ever!

    I was asking how I can run both object detection and drowsiness detection at the same time. Is it possible to use one Pi camera to run the two scripts that you have discussed in the tutorials? And if it is not possible, what should I do to integrate these two functions to run together?

  94. divyaa October 3, 2019 at 5:41 am #

    I need to know how to connect a vibration motor to the Raspberry Pi 3.

    Can you give me a solution ASAP?

  95. lee November 12, 2019 at 9:16 pm #

    Hi Adrian,
    I’m doing your tutorial, but it’s not complete because I don’t have a TrafficHAT, so I’m trying to use a piezo buzzer.
    Does it matter if I use a piezo buzzer in the code to make a sound when drowsiness is detected?
    I’ll wait for your reply.
    Thanks ~

    • Adrian Rosebrock November 14, 2019 at 9:19 am #

      Sorry, I don’t have any experience with the piezo buzzer. I would suggest referring to the documentation for it.

  96. Khayam khan December 20, 2019 at 4:34 am #

    Is there any website from where I can download video datasets with ground truth given?
    I need to verify my network on the basis of that ground truth,
    like videos of a drowsy person (eyelid closure, eye blinking rate), etc.

    • Adrian Rosebrock December 26, 2019 at 10:02 am #

      Sorry, I don’t think there is such a dataset.

  97. joseph January 23, 2020 at 3:39 am #

    Sir, is it possible to do this project on a Raspberry Pi 3 without the TrafficHAT?

    • Adrian Rosebrock January 23, 2020 at 9:11 am #

      Yes, you can use an RPi 3.
