Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

Today’s blog post is the long-awaited tutorial on real-time drowsiness detection on the Raspberry Pi!

Back in May I wrote a (laptop-based) drowsiness detector that detects whether the driver of a motor vehicle is getting tired and potentially falling asleep at the wheel.

The driver drowsiness detector project was inspired by a conversation I had with my Uncle John, a long haul truck driver who has witnessed more than a few accidents caused by fatigued drivers.

The post was really popular and a lot of readers got value out of it…

…but the method was not optimized for the Raspberry Pi!

Since then readers have been requesting that I write a follow-up blog post covering the optimizations necessary to run the drowsiness detector on the Raspberry Pi.

I caught up with my Uncle John a few weeks ago and asked him what he would think of a small computer that could be mounted inside his truck cab to help determine if he was getting tired at the wheel.

He wasn’t crazy about the idea of being monitored by a camera his entire work day (and I don’t blame him — I wouldn’t want to be monitored all the time either). But he did eventually concede that a device like this, ideally a less invasive one, would certainly help avoid accidents caused by fatigued drivers.

To learn more about these facial landmark optimizations and how to run our drowsiness detector on the Raspberry Pi, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

Today’s tutorial is broken into four parts:

  1. Discussing the tradeoffs between Haar cascades and HOG + Linear SVM detectors.
  2. Examining the TrafficHAT used to create the alarm that will sound if a driver/user gets tired.
  3. Implementing dlib facial landmark optimizations so we can deploy our drowsiness detector to the Raspberry Pi.
  4. Viewing the results of our optimized driver drowsiness detection algorithm on the Raspberry Pi.

Before we get started I would highly encourage you to read through my previous tutorial on Drowsiness detection with OpenCV.

While I’ll be reviewing the code in its entirety here, you should still read the previous post, as I discuss the actual Eye Aspect Ratio (EAR) algorithm in more detail there.

The EAR algorithm is responsible for detecting driver drowsiness.

Haar cascades: less accurate, but faster than HOG

The major optimization we need to run our driver drowsiness detection algorithm on the Raspberry Pi is to swap out the default dlib HOG + Linear SVM face detector and replace it with OpenCV’s Haar cascade face detector.

While HOG + Linear SVM detectors tend to be significantly more accurate than Haar cascades, Haar cascades are also much faster to evaluate than HOG + Linear SVM detection algorithms.

A complete review of how both HOG + Linear SVM and Haar cascades work is outside the scope of this blog post, but I would encourage you to:

  1. Read this post on Histogram of Oriented Gradients and Object Detection where I discuss the pros and cons of HOG + Linear SVM and Haar cascades.
  2. Work through the PyImageSearch Gurus course where I demonstrate how to implement your own custom HOG + Linear SVM object detectors from scratch.

The Raspberry Pi TrafficHAT

In our previous tutorial on drowsiness detection I used my laptop to execute driver drowsiness detection code — this enabled me to:

  1. Ensure the drowsiness detection algorithm would run in real-time due to the faster hardware.
  2. Use the laptop speaker to sound an alarm by playing a .WAV file.

The Raspberry Pi does not have a speaker so we cannot play any loud alarms to wake up the driver…

…but the Raspberry Pi is a highly versatile piece of hardware that supports a large array of hardware add-ons.

One of my favorites is the TrafficHAT:

Figure 1: The Raspberry Pi 3 with TrafficHat board containing button, buzzer, and lights.

The TrafficHAT includes:

  • Three LED lights
  • A button
  • A loud buzzer (which we’ll be using as our alarm)

This kit is an excellent starting point for getting some exposure to GPIO. If you’re just getting started with GPIO programming, the TrafficHAT is well worth a look.

You don’t have to use the TrafficHAT of course; any other piece of hardware that emits a loud noise will do.

Another approach I like is to plug a 3.5mm audio cable into the audio jack and then set up text-to-speech using espeak (a package available via apt-get). Using this method you could have your Pi say “WAKEUP WAKEUP!” when you’re drowsy. I’ll leave this as an exercise for you to implement if you so choose.

However, for the sake of this tutorial I will be using the TrafficHAT. You can buy your own TrafficHAT here.

From there you can install the Python packages required to use the TrafficHAT via pip. But first, ensure you’re in the appropriate virtual environment on your Pi — I have a thorough explanation of virtual environments in this previous post.

Here are the installation steps upon opening a terminal or SSH connection:
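Assuming the cv virtual environment from my earlier tutorials (adjust the environment name to match your own), the installs look something like this:

```shell
$ workon cv
$ pip install RPi.GPIO
$ pip install gpiozero
```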

From there, if you want to check that everything is installed properly in your virtual environment you may run the Python interpreter directly:
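A quick interpreter session along these lines will confirm that the packages resolve (assuming all six libraries discussed in the note below are installed in the environment):

```shell
$ workon cv
$ python
>>> import RPi.GPIO
>>> import gpiozero
>>> import numpy
>>> import dlib
>>> import cv2
>>> import imutils
>>>
```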

Note: I’ve made the assumption that the virtual environment you are using already has the above packages installed in it. My cv virtual environment has NumPy, dlib, OpenCV, and imutils already installed, so by using pip to install RPi.GPIO and gpiozero, I’m able to access all six libraries from within the same environment. You can pip install each of the packages except for OpenCV — to install an optimized OpenCV on your Raspberry Pi, just follow this previous post. If you are having trouble getting dlib installed, please follow this guide.

The driver drowsiness detection algorithm is identical to the one we implemented in our previous tutorial.

To start, we will apply OpenCV’s Haar cascades to detect the face in an image, which boils down to finding the bounding box (x, y)-coordinates of the face in the frame.

Given the bounding box of the face, we can apply dlib’s facial landmark predictor to obtain 68 salient points used to localize the eyes, eyebrows, nose, mouth, and jawline:

Figure 2: Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset.

As I discuss in this tutorial, dlib’s 68 facial landmarks are indexable which enables us to extract the various facial structures using simple Python array slices.

Given the facial landmarks associated with an eye, we can apply the Eye Aspect Ratio (EAR) algorithm, which was introduced by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink Detection using Facial Landmarks:

Figure 3: Top-left: A visualization of eye landmarks when the eye is open. Top-right: Eye landmarks when the eye is closed. Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio indicates a blink (Image credit: Figure 1 of Soukupová and Čech).

On the top-left we have an eye that is fully open and the eye facial landmarks plotted. Then on the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time. As we can see, the eye aspect ratio is constant (indicating that the eye is open), then rapidly drops to close to zero, then increases again, indicating a blink has taken place.

You can read more about the blink detection algorithm and the eye aspect ratio in this post dedicated to blink detection.

In our drowsiness detector case, we’ll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the driver/user has closed their eyes.

Once implemented, our algorithm will start by localizing the facial landmarks and extracting the eye regions:

Figure 4: Me with my eyes open — I’m not drowsy, so the Eye Aspect Ratio (EAR) is high.

We can then monitor the eye aspect ratio to determine if the eyes are closed:

Figure 5: The EAR is low because my eyes are closed — I’m getting drowsy.

And finally, we raise an alarm if the eye aspect ratio is below a pre-defined threshold for a sufficiently long amount of time (indicating that the driver/user is tired):

Figure 6: My EAR has been below the threshold long enough for the drowsiness alarm to come on.

In the next section, we’ll implement the optimized drowsiness detection algorithm detailed above on the Raspberry Pi using OpenCV, dlib, and Python.

A real-time drowsiness detector on the Raspberry Pi with OpenCV and dlib

Open up a new file in your favorite editor or IDE and name it pi_drowsiness_detection.py. From there, let’s get started coding:
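The import block at the top of the script looks something like this sketch, reconstructed to match the description that follows (the exact listing is in the “Downloads” section):

```python
# import the necessary packages
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
```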

Lines 1-9 handle our imports — make sure you have each of these installed in your virtual environment.

From there let’s define a distance function:
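A minimal version of that helper, using NumPy (the exact implementation in the download may differ slightly):

```python
import numpy as np

def euclidean_dist(ptA, ptB):
    # compute and return the Euclidean distance between two points,
    # i.e. the straight-line ("as the crow flies") distance
    return np.linalg.norm(ptA - ptB)
```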

On Lines 11-14 we define a convenience function for calculating the Euclidean distance using NumPy. The Euclidean distance is arguably the most well-known and most-used distance metric; it is normally described as the distance between two points “as the crow flies”.

Now let’s define our Eye Aspect Ratio (EAR) function which is used to compute the ratio of distances between the vertical eye landmarks and the distances between the horizontal eye landmarks:
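Here is a self-contained sketch of the EAR computation over the six eye landmarks (the numbering follows Soukupová and Čech; the helper name is illustrative):

```python
import numpy as np

def euclidean_dist(ptA, ptB):
    return np.linalg.norm(ptA - ptB)

def eye_aspect_ratio(eye):
    # eye is a 6x2 array of (x, y) landmark coordinates
    # vertical eye landmark distances
    A = euclidean_dist(eye[1], eye[5])
    B = euclidean_dist(eye[2], eye[4])
    # horizontal eye landmark distance
    C = euclidean_dist(eye[0], eye[3])
    # ratio of the vertical distances to the horizontal distance
    return (A + B) / (2.0 * C)
```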

The return value will be approximately constant when the eye is open and will decrease towards zero during a blink. If the eye is closed, the eye aspect ratio will remain constant at a much smaller value.

From there, we need to parse our command line arguments:
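The argument parsing likely mirrors the earlier drowsiness post; I pass a sample argv here so the sketch is self-contained — the real script calls parse_args() on the actual command line:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to the face detector Haar cascade")
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to the dlib facial landmark predictor")
ap.add_argument("-a", "--alarm", type=int, default=0,
    help="boolean used to indicate if the TrafficHat buzzer should be used")

# sample argv for illustration; the real script uses ap.parse_args()
args = vars(ap.parse_args(["--cascade", "cascade.xml",
    "--shape-predictor", "landmarks.dat", "--alarm", "1"]))
```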

We have defined two required arguments and one optional one on Lines 33-40:

  • --cascade : The path to the Haar cascade XML file used for face detection.
  • --shape-predictor : The path to the dlib facial landmark predictor file.
  • --alarm : A boolean to indicate if the TrafficHat buzzer should be used when drowsiness is detected.

Both the --cascade and --shape-predictor files are available in the “Downloads” section at the end of the post.

If the --alarm flag is set, we’ll set up the TrafficHat:
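Importing and initializing the hat conditionally keeps the script runnable on machines without the hardware; a sketch (gpiozero provides the TrafficHat class):

```python
# check to see if we are using the TrafficHat buzzer as the alarm
if args["alarm"] > 0:
    from gpiozero import TrafficHat
    th = TrafficHat()
    print("[INFO] using TrafficHat alarm...")
```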

As shown on Lines 43-46, if the supplied argument is greater than 0, we’ll import the TrafficHat class to handle our buzzer alarm.

Let’s also define a set of important configuration variables:
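A sketch of those variables — the threshold and frame-count values shown are assumptions, so tune them for your camera and frame rate:

```python
# EAR values below this threshold count as "eye closed" (assumed value)
EYE_AR_THRESH = 0.3

# number of consecutive frames the EAR must stay below the threshold
# before we fire the alarm (assumed value; depends on frame rate)
EYE_AR_CONSEC_FRAMES = 16

# frame counter for consecutive below-threshold frames, and a boolean
# tracking whether the alarm is currently sounding
COUNTER = 0
ALARM_ON = False
```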

The two constants on Lines 52 and 53 define the EAR threshold and the number of consecutive frames the eyes must be closed to be considered drowsy, respectively.

Then we initialize the frame counter and a boolean for the alarm (Lines 57 and 58).

From there we’ll load our Haar cascade and facial landmark predictor files:
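A sketch of the initialization, assuming the file paths come from the command line arguments above:

```python
# load OpenCV's Haar cascade for face detection (faster, but less
# accurate, than dlib's HOG + Linear SVM detector), then load dlib's
# facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = cv2.CascadeClassifier(args["cascade"])
predictor = dlib.shape_predictor(args["shape_predictor"])
```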

Line 64 differs from the face detector initialization in our previous post on drowsiness detection — here we use a faster detection algorithm (Haar cascades) while sacrificing some accuracy. Haar cascades are faster than dlib’s face detector (which is HOG + Linear SVM-based), making them a great choice for the Raspberry Pi.

There are no changes to Line 65, where we load dlib’s shape_predictor while providing the path to the file.

Next, we’ll initialize the indexes of the facial landmarks for each eye:
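In the 68-point iBUG 300-W layout the left eye occupies points 42-47 and the right eye points 36-41; the script obtains these slices from imutils’ face_utils.FACIAL_LANDMARKS_IDXS, but they are hardcoded here for illustration (end indexes are exclusive):

```python
# indexes of the facial landmarks for the left and right eye; the
# script gets these via face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
# and face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(lStart, lEnd) = (42, 48)
(rStart, rEnd) = (36, 42)
```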

Here we supply array slice indexes in order to extract the eye regions from the set of facial landmarks.

We’re now ready to start our video stream thread:
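A sketch of the stream setup (VideoStream is from imutils; swap the commented lines for the Pi camera module):

```python
# start the video stream thread and warm up the camera sensor
print("[INFO] starting video stream thread...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(1.0)
```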

If you are using the PiCamera module, be sure to comment out Line 74 and uncomment Line 75 to switch the video stream to the Raspberry Pi camera. Otherwise if you are using a USB camera, you can leave this unchanged.

We sleep for one second so the camera sensor can warm up.

From there let’s loop over the frames from the video stream:
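The top of the loop grabs, resizes, and converts each frame, then runs the Haar cascade; a sketch (the detectMultiScale parameters shown are typical values, not necessarily the exact ones from the download):

```python
# loop over frames from the video stream
while True:
    # grab the frame, resize it for efficiency, and convert to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame via the Haar cascade
    rects = detector.detectMultiScale(gray, scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)
```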

The beginning of this loop should look familiar if you’ve read the previous post. We read a frame, resize it (for efficiency), and convert it to grayscale (Lines 83-85).

Then we detect faces in the grayscale image with our detector on Lines 88-90.

Now let’s loop over the detections:
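The Haar cascade returns plain (x, y, w, h) tuples, so each one is converted into a dlib.rectangle before being handed to the landmark predictor; a sketch:

```python
    # loop over the face detections
    for (x, y, w, h) in rects:
        # construct a dlib rectangle object from the Haar cascade
        # bounding box
        rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))

        # determine the facial landmarks for the face region, then
        # convert the (x, y)-coordinates to a NumPy array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)
```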

Line 93 begins a lengthy for-loop which is broken down into several code blocks here. First we extract the coordinates plus the width and height of each detection in rects. Then, on Lines 96 and 97, we construct a dlib rectangle object using the information extracted from the Haar cascade bounding box.

From there, we determine the facial landmarks for the face region (Line 102) and convert the facial landmark (x, y)-coordinates to a NumPy array.

Given our NumPy array, shape, we can extract each eye’s coordinates and compute the EAR:
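The slicing itself is plain NumPy; here is a self-contained sketch using a stand-in landmark array and the EAR function described earlier (the index values assume the 68-point layout):

```python
import numpy as np

def eye_aspect_ratio(eye):
    # ratio of the vertical landmark distances to the horizontal one
    A = np.linalg.norm(eye[1] - eye[5])
    B = np.linalg.norm(eye[2] - eye[4])
    C = np.linalg.norm(eye[0] - eye[3])
    return (A + B) / (2.0 * C)

# stand-in for the 68 (x, y)-coordinates returned by dlib's predictor
shape = np.arange(136, dtype=float).reshape(68, 2)

(lStart, lEnd) = (42, 48)  # left eye slice in the 68-point layout
(rStart, rEnd) = (36, 42)  # right eye slice

leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
leftEAR = eye_aspect_ratio(leftEye)
rightEAR = eye_aspect_ratio(rightEye)

# average the two ratios together, per Soukupová and Čech
ear = (leftEAR + rightEAR) / 2.0
```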

Utilizing the indexes of the eye landmarks, we can slice the shape array to obtain the (x, y)-coordinates of each eye (Lines 107 and 108).

We then calculate the EAR for each eye on Lines 109 and 110.

Soukupová and Čech recommend averaging both eye aspect ratios together to obtain a better estimation (Line 113).

This next block is strictly for visualization purposes:
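A sketch of the overlay code (the color and line thickness are my choices for illustration):

```python
        # compute the convex hull for each eye, then draw its outline
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
```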

We can visualize each of the eye regions on our frame by using cv2.drawContours and supplying the cv2.convexHull calculation of each eye (Lines 117-120). These few lines are great for debugging our script but aren’t necessary if you are making an embedded product with no screen.

From there, we will check our Eye Aspect Ratio (ear) and frame counter (COUNTER) to see if the eyes are closed, while sounding the alarm to alert the drowsy driver if needed:
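The per-frame decision logic can be restated as a pure function to make the thresholding behavior easy to test — the real script mutates COUNTER and ALARM_ON in the main loop and fires the TrafficHat buzzer instead of returning a flag:

```python
EYE_AR_THRESH = 0.3        # assumed threshold
EYE_AR_CONSEC_FRAMES = 16  # assumed consecutive-frame count

def update_drowsiness_state(ear, counter, alarm_on):
    """One frame of the drowsiness state machine.

    Returns (counter, alarm_on, fire_alarm), where fire_alarm is True
    only on the frame where the alarm should start sounding.
    """
    fire_alarm = False
    if ear < EYE_AR_THRESH:
        # eyes look closed: count consecutive below-threshold frames
        counter += 1
        if counter >= EYE_AR_CONSEC_FRAMES and not alarm_on:
            alarm_on = True
            fire_alarm = True
    else:
        # eyes open again: reset the counter and silence the alarm
        counter = 0
        alarm_on = False
    return counter, alarm_on, fire_alarm
```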

On Line 124 we check the ear against the EYE_AR_THRESH — if it is less than the threshold (the eyes are closed), we increment our COUNTER (Line 125) and subsequently check it to see if the eyes have been closed for enough consecutive frames to sound the alarm (Line 129).

If the alarm isn’t on, we turn it on for a few seconds to wake up the drowsy driver. This is accomplished on Lines 136-138.

Optionally (if you’re implementing this code with a screen), you can draw the alarm on the frame as I have done on Lines 141 and 142.

That brings us to the case where the ear wasn’t less than the EYE_AR_THRESH — in this case we reset our COUNTER to 0 and make sure our alarm is turned off (Lines 146-148).

We’re almost done — in our last code block we’ll draw the EAR on the frame, display the frame, and do some cleanup:
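A sketch of that closing block (the window name and text position are illustrative):

```python
        # draw the computed EAR on the frame for debugging/tuning
        cv2.putText(frame, "EAR: {:.3f}".format(ear), (300, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # show the frame and break out of the loop if `q` is pressed
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```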

If you’re integrating with a screen or debugging you may wish to display the computed eye aspect ratio on the frame as I have done on Lines 153 and 154. The frame is displayed to the actual screen on Lines 157 and 158.

The program is stopped when the ‘q’ key is pressed on a keyboard.

You might be thinking, “I won’t have a keyboard hooked up in my car!” Well, if you’re debugging with your webcam and your computer at your desk, you certainly do. If you want to use the button on the TrafficHAT to turn the drowsiness detection algorithm on and off, that is perfectly fine — the first reader to post a working solution in the comments deserves an ice cold craft beer or a hot artisan coffee.

Finally, we clean up by closing any open windows and stopping the video stream (Lines 165 and 166).

Drowsiness detection results

To run this program on your own Raspberry Pi, be sure to use the “Downloads” section at the bottom of this post to grab the source code, face detection Haar cascade, and dlib facial landmark detector.

I didn’t have enough time to wire everything up in my car and record the screen while driving as I did previously. It would have been quite challenging to record the Raspberry Pi screen while driving anyway.

Instead, I’ll demonstrate at my desk — you can then take this implementation and use it inside your own car for drowsiness detection as you see fit.

You can see an image of my setup below:

Figure 7: My desk setup for coding, testing, and debugging the Raspberry Pi Drowsiness Detector.

To run the program, simply execute the following command:
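Assuming the cascade and predictor files from the “Downloads” keep their standard names, the invocation looks like:

```shell
$ python pi_drowsiness_detection.py \
    --cascade haarcascade_frontalface_default.xml \
    --shape-predictor shape_predictor_68_face_landmarks.dat \
    --alarm 1
```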

I have included a video of myself demoing the real-time drowsiness detector on the Raspberry Pi below:

Using our optimized code, our Raspberry Pi 3 is able to accurately determine if I’m getting “drowsy” in real time.

Disclaimer: I do not advise that you rely upon the hobbyist Raspberry Pi and this code to keep you awake at the wheel if you are in fact drowsy while driving. The best thing to do is to pull over and rest, walk around, or have a coffee or soda. Have fun with this project and show it off to your friends, but do not risk your life or that of others.

How do I run this program automatically when the Pi boots up?

This is a common question I receive. I have a blog post covering the answer here: Running a Python + OpenCV script on reboot.


Summary

In today’s blog post, we learned how to optimize facial landmark detection on the Raspberry Pi by swapping out a HOG + Linear SVM-based face detector for a Haar cascade.

Haar cascades, while less accurate, are significantly faster than HOG + Linear SVM detectors.

Given the detections from the Haar cascade, we were able to construct a dlib.rectangle object corresponding to the bounding box (x, y)-coordinates in the image. This object was fed into dlib’s facial landmark predictor, which in turn gives us the set of localized facial landmarks on the face. From there, we applied the same algorithm we used in our previous post to detect drowsiness in a video stream.

I hope you enjoyed this tutorial!

To be notified when new blog posts are published here on the PyImageSearch blog, be sure to enter your email address in the form below — I’ll be sure to notify you when new content is released!


If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


84 Responses to Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib

  1. Mike October 23, 2017 at 11:07 am #

    Great article! Do you plan an article (or series) on low light environment face/eye blink detection. Followed your guide recently, but really excited to know how to raise the detection quality on low-light environments/low quality video stream.

    • Adrian Rosebrock October 23, 2017 at 12:23 pm #

      It’s always easier to write code for (reliable) computer vision algorithms for higher quality video streams than try to write code that compensates for a poor environment. If you’re running into situations where you are considering writing code for a poor environment I would encourage you to first examine the environment and see if you can update to make it higher quality.

      • Mike October 24, 2017 at 7:57 am #

Due to business requirements I can’t force our clients to shoot themselves only in good-to-process conditions. They could be using our software anywhere they want, so… I’d like to read of any approaches available to solve this problem. Just as an idea for you for future publications.

        • Adrian Rosebrock October 24, 2017 at 10:33 am #

          Other readers have suggested infrared cameras and infrared lights. I would expect that solution to solve the problem when it is dark outside. There are other “poor conditions” such as reflection and glare which you would need to overcome too. This blog post will get you started but it isn’t intended to be a solution that you can sell.

          • kaisar khatak November 5, 2017 at 9:21 pm #

            I would suggest taking a look at iPhone X, Intel Realsense F200/R200, Logitech C922 and the Structure sensor (structure.io) to name a few. Also take a look at how Google Tango approaches depth for AR. I personally think everything (apps and sensors) are moving to 3D now…

      • zz November 7, 2017 at 8:05 am #

Hi, I am Chinese and I like your essay.

Do you know the TrafficHAT buy link is invalid?
Can you use the Raspberry Pi to write an article about face recognition using TensorFlow, OpenCV, and dlib?

    • Petri K. October 24, 2017 at 1:47 am #

      Raspberry Pi has “night vision” camera boards. They have IR LED spotlights and some of the cameras come without IR filter. Your eyes are not able to see the infrared light, but the camera is. Add light to low light and create higher quality video stream…

      There is also IR webcams available and it is possible to use infrared light with some of the standard non IR webcam. Most of the webcams have IR blocking filter, but some of them doesn’t filter properly. (And it is possible to remove the filters in some cases. Use google for this.)

      Maybe this could help you?

      (The articles are excellent! Thank you Adrian!)

  2. fariborz October 23, 2017 at 11:31 am #


    That is great

    now this is what i need

    very very thank you Adrian

    • Adrian Rosebrock October 23, 2017 at 12:21 pm #

      Thanks Fariborz, I’m glad you enjoyed the tutorial 🙂

  3. Some Guy October 23, 2017 at 11:39 am #

    Hi Dr. Rosebrock, great article as usual! Thank you for the good consistent content. I’m learning a lot 🙂

    • Adrian Rosebrock October 23, 2017 at 12:21 pm #

      Thank you 🙂

  4. rohit October 23, 2017 at 9:37 pm #

    Hi Adrian,
    Thanks for posting this.
    In this post from May ’17 about running dlib on a raspberry pi, you mention that a Raspberry Pi3 is not fast enough to do dlib’s face landmark detection in realtime.


    Since the drowsiness detection also uses dlib’s face landmarks, does it have similar performance issues as you mention in your older post? Or have you figured out some optimizations for RPi3 to improve performance?


    • Adrian Rosebrock October 24, 2017 at 7:17 am #

      Hi Rohit — please see the section entitled “Haar cascades: less accurate, but faster than HOG”. This is where our big speedup comes from.

  5. pochao October 23, 2017 at 10:14 pm #

    Logitech webcam is better than Pi camera?

    • Adrian Rosebrock October 24, 2017 at 7:16 am #

      It depends on how you define “better”. What is your use case? How do you intend on deploying it? Both cameras can be good for different reasons. The Raspberry Pi camera module is cheaper but the Logitech C920 is technically “better” for many uses. It is nice being able to connect the camera directly to the Pi though.

  6. arash allahari October 24, 2017 at 1:40 am #

    oh Come on man i just wrote this idea two weeks ago in C++
    obviously ideas could go beyond pacific through continents

    but good news for me is i optimized it with an awesome idea and now i can process drowsiness with almost 30 frame per second from 1 megapixel image stream in Raspberry Pi

    I beg u Dr. Rosebrock do not publish such ideas, image processing fans and researchers will get it with just a hint

    • Adrian Rosebrock October 24, 2017 at 7:14 am #

Hey Arash — I actually wrote the original drowsiness detection tutorial way back in May. Secondly, I tend to write blog posts 2-3 weeks ahead of time before they are actually published. I’m not sure what your point is — you would prefer I not publish tutorials?

    • jamhan November 3, 2017 at 11:48 pm #

      Where can i see your blogs?

    • HanSol February 11, 2018 at 1:07 pm #

      Hi Arash,

      Can you share the source codes ? is this c++ using dlib in Rapberry Pi and 30 fps ? is it around 1280×960 . resolution ?

      I would love to discuss you if you have contact address


  7. Melrick Nicolas October 24, 2017 at 3:25 am #

    How to download updated imutils?

    • Adrian Rosebrock October 24, 2017 at 7:08 am #

      I would suggest using “pip”:

      $ pip install --upgrade imutils

      If you are using a Python virtual environment please make sure you activate it before installing/upgrading.

  8. Marcus Souza October 24, 2017 at 11:07 am #

    Hey Adrian,

    Thanks for sharing!

    As always a great job !!
    I tested with webcom and verified a great performance in the identification of drowsiness, with a processing load of 70%. Perhaps there is something that can be improved to reduce PLOAD, perhaps by altering Haarcascade, perhaps by using the one haarcascade_eye.xml or similar, targeting only the eye area. I wanted you to share your opinion with us. Can you comment on the subject?

    Thanks for all help, Adrian

    • Adrian Rosebrock October 25, 2017 at 8:23 am #

      In order to apply drowsiness detection we need to detect the entire face — this enables us to localize the eyes. We could use a Haar cascade to detect eyes but the problem is that we need to train a facial landmark detector for just the eyes. That wouldn’t do much to improve processing speed.

  9. Raghu October 24, 2017 at 11:32 am #

    Hi Adrian,

    I’m impressed with the tutorial!

    Please let me know what Operating System used in the Raspberry Pi 3.

    • Adrian Rosebrock October 24, 2017 at 2:50 pm #

      Hi Raghu — Raspbian is the official operating system and the one used. You can download it here.

  10. Marcus Souza October 24, 2017 at 1:29 pm #

    Hey Adrian,

    First thank you for sharing this great edition !!

    Doing some tests I found the following error in the code, when I used the “PICamera”, I got the following TypeError:

    vs = VideoStream(usePicamera=True).start()

    vs = VideoStream(usePiCamera=True).start()

    This corrects the following failure:

    vs = VideoStream(usePicamera=True).start()
TypeError: __init__() got an unexpected keyword argument ‘usePicamera’


    • Adrian Rosebrock October 24, 2017 at 2:34 pm #

Thank you Marcus — you are correct. I’ve updated the post, and I’ll update the download soon. Thanks for bringing this to my attention.

      • Adrian Rosebrock October 31, 2017 at 8:54 am #

        I have now updated the code download as well. Thanks again!

  11. Yoni October 25, 2017 at 6:59 am #


    So there’s something which bothers me here:

    Your original article used something like HOG+SVM and a sliding window for detection.
    I got to say, that face detector that you have provided does work most of the time (~75%).
    However, doesn’t RCNN (or faster RCNN etc,whatever you get the point) just work better than pre-deep learning techniques? I mean,that’s what Justin from stanford claims (https://www.youtube.com/watch?v=nDPWywWRIRo&index=11&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&t=1950s).

    Is that really the case? If so, when should I NOT prefer RCNN & Why?

    • Adrian Rosebrock October 25, 2017 at 8:21 am #

      RCNN-based methods will be more accurate than both Haar cascades and/or HOG + Linear SVM (provided the network is properly trained and deployed). The problem can be speed — we need to achieve that balance on the Raspberry Pi.

      • Yoni October 26, 2017 at 10:49 am #

        Cool, thx for the answer.

        Just another thing: Does RCNN require more training data as well?
        I mean, it requires a bounding box for each object for each picture.
        HOG+SVM requires negative and positive examples, and for the false positives, we need to manually tell the learning algo that those are false positives.

        So,in your experience, which learning algo requires more training data to work decently?

        • Adrian Rosebrock October 31, 2017 at 8:13 am #

          It will vary on a dataset to dataset basis, but in general, you can assume that your CNN will need more example images.

  12. Muhammad Zohair October 26, 2017 at 8:23 am #

    Hello Adrian,
    Just started Image processing and sounds like fun but really tired of installing libraries, I have been “setting” up my pi for about a week now.
    Stuck on pip install scipy.
    running setup.py bdist_wheel for scipy … takes forever.
    Any tips?


    • Adrian Rosebrock October 26, 2017 at 11:27 am #

      Hi Muhammad — yes setting up the Pi can be quite frustrating. For some of the PIP installs you must be patient and let the Pi finish. If you’re interested in a pre-configured and pre-installed Raspbian image, it comes with the Quickstart and Hardcopy Bundles of my book, Practical Python and OpenCV + Case Studies.

  13. majid azimi October 26, 2017 at 6:42 pm #

    Hi Adrian,

    I think it is now time to use cnn based algorithms for face detection part. Is it slow?! not anymore. You can make an awesome binarization model method tutorial in your website which face detector part would be more accurate and faster. Let me know if you need help.

    Best and Greeting from Venice ICCV17,

    • Adrian Rosebrock October 27, 2017 at 11:01 am #

      Hi Majid — thanks for the suggestion. Enjoy your conference!

  14. Suganya Robert October 30, 2017 at 12:36 am #

    Hi Adrian

    Recently I came to know about thin client. Can you please tell me the difference between thin client and a Pi with a SD card. Is there any additional memory support?. Is it possible to connect a thin client with a portable display?(7”). Please reply.

    • Adrian Rosebrock October 30, 2017 at 1:45 pm #

      Hi Suganya, see this information about thin clients. Basically thin clients rely on a server for storage and applications. You don’t store or process much locally on a thin client. A Raspberry Pi is not a thin client, but I suppose you could make it into one. Raspberry Pis (at least the Raspbian OS), allow for processing and storage on the device — it’s a fully functional small computer. Yes, you can attach a display to a thin client.

  15. Fred Laganiere November 6, 2017 at 4:34 pm #

    Hello Adrian,

    I’m a high school student and I would like to reproduce your project for my science class and try some variables of my own. I wonder what camera and other equipments did you use for this experiment. Would it be possible to specify?

    Thank you in advance.

    If you want, I can share with you the results of my experiment at the end of my project.

    best regards


    • Adrian Rosebrock November 9, 2017 at 7:15 am #

      Hi Fred, thanks for the comment. It’s great to hear you are interested in computer vision! I was in high school as well when I first got into image processing.

      The camera for this tutorial doesn’t matter a whole lot. I like the Raspberry Pi camera module but it might be easier for you to use the Logitech C920 which is plug-and-play compatible with the Raspberry Pi.

      For this specific blog post I used the Logitech C920.

      • Fred Laganiere November 18, 2017 at 5:50 pm #

        Thank you so much,

        I’ll give it a try and let you know how far I can get.



      • Fred Laganiere December 2, 2017 at 3:38 pm #

        Hi Adrian,

        in which folded should I extract the zip file?

        Thank you for you help

        I think I got everything else ready now for my testing.

        Thank you so much


        • Adrian Rosebrock December 5, 2017 at 7:51 am #

          It doesn’t matter where you download and extract the .zip file. Extract it, change directory into it, and execute the script.

  16. Hien November 10, 2017 at 11:50 am #

    Hi Adrian, i run the code, but its very slowly. What is the problem ?

    • Adrian Rosebrock November 13, 2017 at 2:10 pm #

      Hi Hien — what type of system are you executing the code on? What are the specs of the machine?

  17. Angelo November 16, 2017 at 9:35 pm #

    Hi adrian. If i use a night vision cam, do i need change the code?

    • Adrian Rosebrock November 18, 2017 at 8:17 am #

      You might have to. I would verify that faces can still be detected and the facial landmarks localized when switching over to the night vision cam.

  18. Liz November 17, 2017 at 6:14 am #

    Hello Adrian. I am a student and want to make this as my project .Traffic hat is not available so I’m planning on using the 3.5mm audio jack on playing the alarm. Im really a newbie in image processing. The part of the codes in replacing the alarm really confuses me. Can you help me out in replacing the codes instead of using traffic hat? Thank you.

    • Adrian Rosebrock November 18, 2017 at 8:13 am #

      Hi Liz — congrats on working on your project, that’s fantastic. I haven’t used the audio jack or associated audio libraries on a Raspberry Pi so unfortunately I can’t give any direct advice. But in general you’ll need to remove all TrafficHat imports and then play your audio file on Lines 136-138.
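
      As a rough sketch of that replacement (my own untested suggestion, not code from this post: it assumes an alarm.wav file sits next to the script and that aplay, which ships with Raspbian, is configured to output to the 3.5mm jack):

```python
# Hypothetical TrafficHat replacement: play a WAV file through the
# Pi's 3.5mm jack using aplay (installed by default on Raspbian).
import subprocess

def build_alarm_command(wav_path="alarm.wav"):
    # -q silences aplay's status output
    return ["aplay", "-q", wav_path]

def sound_alarm(wav_path="alarm.wav"):
    # launch in the background so the detection loop is not blocked
    return subprocess.Popen(build_alarm_command(wav_path))
```

      You would then call sound_alarm() at the point where the TrafficHat buzzer is switched on.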

      If you’re new to computer vision and OpenCV I would suggest you work through Practical Python and OpenCV. I created this book to help beginners and it would certainly help you get quickly up to speed and complete your project.

  19. yorulmaz December 4, 2017 at 4:59 pm #

    Hi, Dr. Rosebrock. Your work is very good and thank you for sharing it. I just started with the Raspberry Pi. I installed dlib and OpenCV and ran the code on the Raspberry. How can I just run this project when the Raspberry is opened? How can I add the .xml and .dat files to the code? Thanks in advance

    • Adrian Rosebrock December 5, 2017 at 7:29 am #

      Hello, thanks for the comment. Can you be a bit more specific when you say “run the project when Raspberry is opened”? Are you referring to running the project on reboot? Secondly, I’m not sure what you mean by “add .xml and .dat files to code”? You are trying to hardcode the paths to the files in the code?

  20. yorulmaz December 5, 2017 at 8:01 am #

    Thank you, Dr. Your reply made me very happy:
    Yes, I want to run the project on reboot. I would also like to hardcode the paths to the .xml and .dat files. Finally, I used a buzzer instead of the TrafficHat and I did not get any sound output. My goal is just to learn something… thanks…

    • Adrian Rosebrock December 8, 2017 at 5:20 pm #

      If you are using a buzzer you should read up on GPIO and the Raspberry Pi. You should also consult the manual/documentation for your particular buzzer. You can hardcode the paths to the XML file if you so wish. Just create a variable that points to the paths. Or you can execute the script at boot and include the full paths to the XML files as command line arguments. Either method will work. For more information on running a script on reboot, take a look at this blog post.
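
      For example, a minimal sketch of the hardcoding approach (the absolute paths below are hypothetical, adjust them to wherever you stored the files) is to give argparse defaults so the command line flags become optional:

```python
import argparse

# Hypothetical paths -- change these to your own file locations
CASCADE_PATH = "/home/pi/drowsiness/haarcascade_frontalface_default.xml"
PREDICTOR_PATH = "/home/pi/drowsiness/shape_predictor_68_face_landmarks.dat"

ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", default=CASCADE_PATH,
    help="path to the Haar cascade XML file")
ap.add_argument("-p", "--shape-predictor", default=PREDICTOR_PATH,
    help="path to the dlib facial landmark predictor")
# pass [] here to show the defaults kick in; in the real script you
# would call ap.parse_args() so the flags can still override them
args = vars(ap.parse_args([]))
```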

  21. yorulmaz December 9, 2017 at 3:38 am #

    Dr. Rosebrock, thank you so much. I ran the project. Your article was very useful.
    (For the buzzer: buzzer + pin = Raspberry pin 29, buzzer - pin = Raspberry pin 25 GND.) I can send a video of it working.

  22. kaisar khatak December 24, 2017 at 11:03 pm #

    Would testing for a yawn follow a similar approach? Thanks.

    • Adrian Rosebrock December 26, 2017 at 4:03 pm #

      Yes, monitoring the aspect ratio of the mouth would be a reasonable method to detect a yawn.
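
      As a sketch (my own, not from the original post), the mouth aspect ratio can mirror the eye aspect ratio using the inner-lip landmarks, points 60-67 of dlib’s 68-point model, which sit at indices 12-19 once you slice out the 20 mouth points (48-67):

```python
import numpy as np

def mouth_aspect_ratio(mouth):
    # mouth: 20x2 array of the (x, y) mouth landmarks (points 48-67)
    # vertical distances between the inner-lip pairs 61-67, 62-66, 63-65
    A = np.linalg.norm(mouth[13] - mouth[19])
    B = np.linalg.norm(mouth[14] - mouth[18])
    C = np.linalg.norm(mouth[15] - mouth[17])
    # horizontal distance between the inner-lip corners 60-64
    D = np.linalg.norm(mouth[12] - mouth[16])
    return (A + B + C) / (3.0 * D)
```

      When the mouth opens during a yawn the vertical distances grow relative to the horizontal one, so the ratio rises; you could flag a yawn when it stays above a tuned threshold for several consecutive frames, mirroring the consecutive-frame logic used for the eyes.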

      • kaisar khatak January 28, 2018 at 9:58 pm #

        The only problem is occlusion (when the hand moves in front of the mouth) or if the user is singing a song. I think one might need to use a deep learning training and classification approach. Thoughts?

        • Adrian Rosebrock January 30, 2018 at 10:18 am #

          Deep learning might be helpful but it could also be overkill. If a hand, coffee cup, or breakfast sandwich moves in front of the mouth, I’m not sure that matters provided it’s only an occlusion for a short period of time. I doubt many people yawn once and then immediately fall asleep unless they have a specific condition. A more robust drowsiness detector should involve sensor fusion, such as body temperature, heart rate, oxygen levels, etc.

  23. arman December 25, 2017 at 9:48 am #

    This error is shown when I run it, please help… I did not change anything in the code.

    File “pi_detect_drowsiness.py”, line 145
    cv2.putText(frame, “DROWSINESS ALERT!”, (10, 30),
    IndentationError: expected an indented block

    • Adrian Rosebrock December 26, 2017 at 4:03 pm #

      Make sure you use the “Downloads” section of this blog post to download the source code. It looks like you formatted the code incorrectly when copying and pasting.

  24. Raghu January 2, 2018 at 4:46 am #

    Hi Adrian,

    I’m impressed with your Drowsiness Detection algorithm for Raspberry Pi.

    Why don’t you develop the algorithm for iOS and Android phones? That would remove the cost of buying a Raspberry Pi.

  25. glev January 15, 2018 at 4:32 am #

    What is the dlib facial landmark detection speed on the Raspberry Pi when the number of people is large (about 10)?

    • Adrian Rosebrock January 15, 2018 at 9:09 am #

      The facial landmark detector is extremely fast; it’s the face detection that tends to be slow. It really depends on what your goal is. Are you trying to apply drowsiness detection to all ten people in the input frame?

  26. Gabriel January 16, 2018 at 9:51 pm #

    Hi Dr. Rosebrock.
    I have a question. I ran your code on my Raspberry Pi and get roughly 5 frames per second, but in your video the frames update much faster. Is there some way to get more frames per second?

    • Adrian Rosebrock January 17, 2018 at 10:14 am #

      Just to clarify — did you use my code exactly (downloaded via the “Downloads” form of this blog post)? Did you make any modifications? It would also be helpful to know which model of the Raspberry Pi you are using.

  27. akalya January 24, 2018 at 12:24 pm #

    I tried to use the same code as above, but I have a problem installing dlib on my Windows machine. Can you please tell me how to install it on Windows? I downloaded the dlib package directly from the net but it’s not working.

  28. zjfsharp January 26, 2018 at 2:41 am #

    Hi Adrian, thank you very much.
    I ran the code downloaded from this blog on a Raspberry Pi 3 Model B (Raspbian Stretch), but it runs very slowly. What is the problem?
    I followed your blog posts to install OpenCV 3 and dlib on my Raspberry Pi 3 (optimizing OpenCV on the Raspberry Pi, and install dlib (the easy, complete guide)).

    • Adrian Rosebrock January 26, 2018 at 10:05 am #

      Can you elaborate on what you mean by “slowly”? Are you using a Raspberry Pi camera module or a USB camera? Additionally, how large are the input frames that you are processing? Make sure you are using the Haar cascades for face detection rather than the HOG + Linear SVM face detector provided by dlib. This will give you additional speed as we do in this blog post.

      • Charlie January 30, 2018 at 12:41 am #

        Hi Adrian,
        Thanks for sharing. I have the same problem as zjfsharp. I did exactly as the post describes (with the optimized OpenCV installed) and successfully ran the downloaded, unchanged code on my Raspberry Pi 3 Model B. But the FPS is around 4. The operating system is Raspbian Stretch Lite with a GUI. While running the code, the CPU runs at 600MHz (half of 1.2GHz) and memory usage is about 40 percent. The result is far less smooth than your video shown above.

        • Adrian Rosebrock January 30, 2018 at 10:06 am #

          Hey Charlie — just to clarify, how are you accessing your Raspberry Pi? Via SSH or VNC? Or via a standard keyboard + HDMI monitor setup? Additionally, are you using a USB webcam or a Raspberry Pi camera module?

          • Charlie January 30, 2018 at 10:15 pm #

            I’m using a USB cam (Logitech EM2500) accessed via a standard keyboard + HDMI setup. The low FPS seems to have nothing to do with the CPU frequency (boosted to 1.2GHz), CPU and memory usage, or the power supply (5V, 2A).

          • Adrian Rosebrock January 31, 2018 at 6:40 am #

            Thanks for sharing the hardware setup, Charlie. Are you using Python 2.7 or Python 3?

          • Charlie February 1, 2018 at 6:03 am #

            Python 3. I’m still stuck here. Do you have any idea? Thanks for your reply.

          • Adrian Rosebrock February 3, 2018 at 11:05 am #

            I know Python 3 handles threading and queuing slightly differently than Python 2. Would you be able to try Python 2 and see if you have the same results?

  29. Emre Osma January 30, 2018 at 1:48 am #

    Hello Adrian,
    The OS is Raspbian Stretch and the hardware is a Raspberry Pi 2.
    OpenCV and all other imports are OK,
    but the result is “AttributeError: ‘NoneType’ object has no attribute ‘shape’” 🙂
    Any comments?
    Thanks in advance

    $ python pi_detect_drowsiness.py --cascade haarcascade_frontalface_default.xml --shape-predictor shape_predictor_68_face_landmarks.dat --alarm 1
    [INFO] using TrafficHat alarm…
    [INFO] loading facial landmark predictor…
    [INFO] starting video stream thread…
    Traceback (most recent call last):
    File “pi_detect_drowsiness.py”, line 88, in
    frame = imutils.resize(frame, width=450)

    (h, w) = image.shape[:2]
    AttributeError: ‘NoneType’ object has no attribute ‘shape’

    • Adrian Rosebrock January 30, 2018 at 10:03 am #

      If you are getting a “NoneType” error then OpenCV cannot read the frame from your Raspberry Pi camera module or USB webcam. Double-check that OpenCV can access your Raspberry Pi camera by following this post. Additionally, you should read up on NoneType errors and how to debug them here.
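
      As an illustrative addition (my own guard, not part of the original script), you can turn that cryptic crash into an explicit error:

```python
def ensure_frame(frame):
    # VideoStream.read() returns None when no frame could be grabbed;
    # fail loudly here instead of crashing later on frame.shape
    if frame is None:
        raise RuntimeError("camera returned no frame: check the camera "
                           "connection and that the V4L2 driver is loaded")
    return frame
```

      Calling frame = ensure_frame(vs.read()) before the imutils.resize call makes the root cause obvious the moment the camera stops delivering frames.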

      • Emre February 2, 2018 at 5:00 am #

        Installing the camera driver solved my problem 🙂
        sudo modprobe bcm2835-v4l2
        thanks a lot

  30. Emre Osma January 30, 2018 at 1:49 am #

    using “Raspberry Pi camera module” not a USB one.

  31. Henrick February 1, 2018 at 6:40 am #

    This is awesome. Can I execute this program on boot though? I tried using your tutorial on crontab, and instead of using “python pi_reboot_alarm.py” I replaced it with this:
    python pi_detect_drowsiness.py --cascade haarcascade_frontalface_default.xml \
    --shape-predictor shape_predictor_68_face_landmarks.dat --alarm 1
    But it did not work at all. Can you help me out?

    • Adrian Rosebrock February 3, 2018 at 11:01 am #

      See this tutorial on running a script on reboot.

      You’ll either need to access your Python virtual environment and then execute the script (best accomplished via a shell script) or supply the full path to the Python binary (which I think is a bit easier).

  32. nandhu February 6, 2018 at 1:14 pm #

    Hello Adrian…
    I have a doubt: I am using a Raspberry Pi and I coded the project in Python.
    Does the laptop need to stay attached to the module, or can the program be uploaded to the Pi so the laptop can be detached?
    Can you explain this to me?
    I am a very beginner.

    • Adrian Rosebrock February 8, 2018 at 8:46 am #

      If you’re a beginner I would suggest coding directly on the Raspberry Pi, that way you won’t be confused on which system the code is executing on.

  33. JP February 15, 2018 at 5:22 am #

    Can I put the code inside the Python shell in the virtual environment? I am having big trouble with how to start scripting, what IDE I should use, and how to run it. I am very sorry if I look dumb, but I am really new to this kind of tech. Can someone help me?

    • Adrian Rosebrock February 18, 2018 at 10:01 am #

      Instead of trying to use the Python shell or an IDE to run the code, simply open up a terminal, access your Python virtual environment via the “workon” command, and execute the script from the terminal. There is no need to launch a shell or IDE.

Leave a Reply