Human Activity Recognition with OpenCV and Deep Learning

In this tutorial you will learn how to perform Human Activity Recognition with OpenCV and Deep Learning.

Our human activity recognition model can recognize over 400 activities with 78.4-94.5% accuracy (depending on the task).

A sample of the activities can be seen below:

  1. archery
  2. arm wrestling
  3. baking cookies
  4. counting money
  5. driving tractor
  6. eating hotdog
  7. flying kite
  8. getting a tattoo
  9. grooming horse
  10. hugging
  11. ice skating
  12. juggling fire
  13. kissing
  14. laughing
  15. motorcycling
  16. news anchoring
  17. opening present
  18. playing guitar
  19. playing tennis
  20. robot dancing
  21. sailing
  22. scuba diving
  23. snowboarding
  24. tasting beer
  25. trimming beard
  26. using computer
  27. washing dishes
  28. welding
  29. yoga
  30. …and more!

Practical applications of human activity recognition include:

  • Automatically classifying/categorizing a dataset of videos on disk.
  • Training and monitoring a new employee to correctly perform a task (e.g., the proper steps and procedures when making a pizza, including rolling out the dough, heating the oven, putting on sauce, cheese, toppings, etc.).
  • Verifying that a food service worker has washed their hands after visiting the restroom or handling food that could cause cross-contamination (e.g., raw chicken and salmonella).
  • Monitoring bar/restaurant patrons and ensuring they are not over-served.

To learn how to perform human activity recognition with OpenCV and Deep Learning, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.


In the first part of this tutorial we’ll discuss the Kinetics dataset, the dataset used to train our human activity recognition model.

From there we’ll discuss how we can extend ResNet, which typically uses 2D kernels, to instead leverage 3D kernels, enabling us to include a spatiotemporal component used for activity recognition.

We’ll then implement two versions of human activity recognition using the OpenCV library and the Python programming language.

Finally, we’ll wrap up the tutorial by looking at the results of applying human activity recognition to a few sample videos.

The Kinetics Dataset

Figure 1: The pre-trained human activity recognition deep learning model used in today’s tutorial was trained on the Kinetics 400 dataset.

The dataset our human activity recognition model was trained on is the Kinetics 400 Dataset.

This dataset consists of:

  • 400 human activity recognition classes
  • At least 400 video clips per class (downloaded via YouTube)
  • A total of 300,000 videos

You can view the full list of classes the model can recognize here.

To learn more about the dataset, including how it was curated, be sure to refer to Kay et al.’s 2017 paper, The Kinetics Human Action Video Dataset.

3D ResNet for Human Activity Recognition

Figure 2: Deep neural network advances on image classification with ImageNet have also led to success in deep learning activity recognition (i.e. on videos). In this tutorial, we perform deep learning activity recognition with OpenCV. (image source: Figure 1 from Hara et al.)

The model we’re using for human activity recognition comes from Hara et al.’s 2018 CVPR paper, Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?

In this work the authors explore how existing state-of-the-art 2D architectures (such as ResNet, ResNeXt, DenseNet, etc.) can be extended to video classification via 3D kernels.

The authors argue:

  • These architectures have been successfully applied to image classification.
  • The large-scale ImageNet dataset is what allowed such models to be trained to high accuracy.
  • The Kinetics dataset is also sufficiently large.

…and therefore, these architectures should be able to perform video classification by (1) changing the input volume shape to include spatiotemporal information and (2) utilizing 3D kernels inside of the architecture.

The authors were in fact correct!

By modifying both the input volume shape and the kernel shape, the authors obtained:

  • 78.4% accuracy on the Kinetics test set
  • 94.5% accuracy on the UCF-101 test set
  • 70.2% accuracy on the HMDB-51 test set

These results are similar to rank-1 accuracies reported on state-of-the-art models trained on ImageNet, thereby demonstrating that these model architectures can be utilized for video classification simply by including spatiotemporal information and swapping 2D kernels for 3D ones.

For more information on our modified ResNet architecture, experiment design, and final accuracies, be sure to refer to the paper.

Downloading the Human Activity Recognition Model for OpenCV

Figure 3: Files required for human activity recognition with OpenCV and deep learning.

To follow along with the rest of this tutorial you’ll need to download the:

  1. Human activity model
  2. Python + OpenCV source code
  3. Example video for classification

You can use the “Downloads” section of this tutorial to download a .zip containing all three.

Once downloaded, continue on with the rest of this tutorial.

Project structure

Let’s inspect our project files:
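Based on the files described in this section, the project layout looks like this:

```
├── action_recognition_kinetics.txt
├── resnet-34_kinetics.onnx
├── example_activities.mp4
├── human_activity_reco.py
└── human_activity_reco_deque.py
```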

Our project consists of three auxiliary files:

  • action_recognition_kinetics.txt : The class labels for the Kinetics dataset.
  • resnet-34_kinetics.onnx : Hara et al.’s pre-trained and serialized human activity recognition convolutional neural network trained on the Kinetics dataset.
  • example_activities.mp4 : A compilation of clips for testing human activity recognition.

We will review two Python scripts, each of which accepts the above three files as input:

  • human_activity_reco.py : Our human activity recognition script which samples N frames at a time to make an activity classification prediction.
  • human_activity_reco_deque.py : A similar human activity recognition script that implements a rolling average queue. This script is slower to run; however, I’m providing the implementation so that you can learn from and experiment with it.

Implementing Human Activity Recognition with OpenCV

Let’s go ahead and implement human activity recognition with OpenCV. Our implementation is based on OpenCV’s official example; however, I’ve provided additional changes (both in this example and the next) along with additional commentary/detailed explanations on what the code is doing.

Open up the human_activity_reco.py file in your project structure and insert the following code:

We begin with imports on Lines 2-6. For today’s tutorial you need OpenCV 4 and imutils installed. Visit my pip install opencv instructions to install OpenCV on your system if you have not done so already.

Lines 10-16 parse our command line arguments:

  • --model : The path to the trained human activity recognition model.
  • --classes : The path to the activity recognition class labels file.
  • --input : An optional path to your input video file. If this argument is not included on the command line, your webcam will be invoked.
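A minimal sketch of that argument parsing (a reconstruction based on the description above, not necessarily identical to the downloadable script):

```python
# Sketch of the command line argument parsing described above.
import argparse

def parse_args(argv=None):
    # --model and --classes are required; --input is optional
    ap = argparse.ArgumentParser()
    ap.add_argument("-m", "--model", required=True,
        help="path to trained human activity recognition model")
    ap.add_argument("-c", "--classes", required=True,
        help="path to class labels file")
    ap.add_argument("-i", "--input", type=str, default="",
        help="optional path to video file (omit to use the webcam)")
    return vars(ap.parse_args(argv))

# example invocation using the file names from this tutorial's project
args = parse_args(["--model", "resnet-34_kinetics.onnx",
                   "--classes", "action_recognition_kinetics.txt"])
```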

From here we’ll perform initializations:

Line 21 loads our class labels from the text file.

Lines 22 and 23 define the sample duration (i.e. the number of frames for classification) and sample size (i.e. the spatial dimensions of the frame).
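Those initializations can be sketched as follows; the constant and function names follow the description above and are my assumptions, not guaranteed to match the downloadable script:

```python
# Sketch of the initializations: load the Kinetics class labels and
# define the two sampling constants described above.
SAMPLE_DURATION = 16   # number of frames per classification batch
SAMPLE_SIZE = 112      # spatial dimension (width/height) of each frame

def load_class_labels(path):
    # the Kinetics labels file stores one activity label per line
    with open(path) as f:
        return f.read().strip().split("\n")
```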

Next, we’ll load and initialize our human activity recognition model:

Line 27 uses OpenCV’s DNN module to read the PyTorch pre-trained human activity recognition model.

Line 31 then instantiates our video stream using either a video file or webcam.

We’re now ready to begin looping over frames and performing human activity recognition:

Line 34 begins a loop over our frames, where we first initialize the batch of frames that will be passed through the neural network (Line 37).

From there, Lines 40-53 populate the batch of frames directly from our video stream. Line 52 resizes each frame to a width of 400 pixels while maintaining aspect ratio.

Let’s construct our blob of input frames which we will soon pass through the human activity recognition CNN:

Lines 56-60 construct a blob from our input frames list.

Notice that we’re using the blobFromImages (i.e. plural) rather than the blobFromImage (i.e. singular) function — the reason here is that we’re building a batch of multiple images to be passed through the human activity recognition network, enabling it to take advantage of spatiotemporal information.

If you were to insert a print(blob.shape) statement into your code you would notice that the blob has the following dimensionality:

(1, 3, 16, 112, 112)

Let’s unpack this dimensionality a bit more:

  • 1: The batch dimension. Here we have only a single data point that is being passed through the network (a “data point” in this context means the N frames that will be passed through the network to obtain a single classification).
  • 3: The number of channels in our input frames.
  • 16: The total number of frames in the blob.
  • 112 (first occurrence): The height of the frames.
  • 112 (second occurrence): The width of the frames.

At this point, we’re ready to perform human activity recognition inference followed by annotating the frame with the predicted label and showing the prediction to our screen:

Lines 64 and 65 pass the blob through the network, obtaining a list of outputs, the predictions.

We then grab the label of the highest-probability prediction for the blob (Line 66).

Using the label, we can then draw the prediction on each and every frame in the frames list (Lines 69-73), displaying the output frames until the q key is pressed, at which point we break and exit.
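The label lookup itself is just an argmax over the network’s output scores; here is a toy sketch with fabricated scores and a truncated class list:

```python
import numpy as np

# A toy stand-in for the network's output: one row of class scores.
# (The real model produces 400 scores, one per Kinetics class.)
CLASSES = ["archery", "arm wrestling", "yoga"]  # truncated for illustration
outputs = np.array([[0.1, 0.2, 0.7]])           # fabricated scores

# the predicted label is the class with the highest score
label = CLASSES[int(np.argmax(outputs))]
```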

An Alternate Human Activity Implementation Using a Deque Data Structure

Inside our human activity recognition script from the previous section, you’ll notice the following lines:

This implementation implies that:

  • We read a total of SAMPLE_DURATION frames from our input video.
  • We pass those frames through our human activity recognition model to obtain the output.
  • And then we read another SAMPLE_DURATION frames and repeat the process.

Thus, our implementation is not a rolling prediction.

Instead, it’s simply grabbing a sample of frames, classifying them, and moving on to the next batch — any frames from the previous batch are discarded.

The reason we do this is for speed.

If we classified each individual frame it would take longer for the script to run.

That said, using rolling frame prediction via a deque data structure can lead to better results as it does not discard all of the previous frames — rolling frame prediction only discards the oldest frame in the list, making room for the newest frame.

To see how rolling prediction affects inference speed, let’s suppose there are N total frames in a video file:

  • If we do use rolling frame prediction, we perform N classifications, one for each frame (once the deque data structure is filled, of course)
  • If we do not use rolling frame prediction, we only have to perform N / SAMPLE_DURATION classifications, thus reducing the amount of time it takes to process a video stream significantly.
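The arithmetic is easy to sketch for a hypothetical video (the numbers below are made up for illustration):

```python
# Back-of-the-envelope comparison of the two strategies.
SAMPLE_DURATION = 16
N = 1600  # total frames in a hypothetical video

# batch prediction: one classification per non-overlapping window
batch_predictions = N // SAMPLE_DURATION

# rolling prediction: one classification per frame once the deque is
# full, i.e. approximately N classifications
rolling_predictions = N - SAMPLE_DURATION + 1
```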

Figure 4: Rolling prediction (blue) uses a fully populated FIFO queue window to make predictions. Batch prediction (red) does not “roll” from frame to frame. Rolling prediction requires more computational horsepower but leads to better results for human activity recognition with OpenCV and deep learning.

Given that OpenCV’s dnn module does not support most GPUs (including NVIDIA GPUs), I would recommend you do not use rolling frame prediction for most applications.

That said, inside the .zip file for today’s tutorial (found in the “Downloads” section of the post) you’ll find a file named human_activity_reco_deque.py — this file contains an implementation of Human Activity Recognition that performs rolling frame prediction.

The script is very similar to the previous one, but I’m including it here for you to experiment with:

Imports are the same, with the exception of Python’s built-in deque implementation from the collections module (Line 2).

On Line 28, we initialize the FIFO frames  queue with a maximum length equal to our sample duration. Our “first-in, first-out” (FIFO) queue will automatically pop out old frames and accept new ones. We’ll perform rolling inference on the queue of frames.
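The queue’s FIFO behavior is easy to demonstrate in isolation; integers stand in for frame arrays in this sketch:

```python
from collections import deque

SAMPLE_DURATION = 16
frames = deque(maxlen=SAMPLE_DURATION)  # FIFO queue, as in the script

# simulate 20 incoming frames (indices stand in for actual frames)
for i in range(20):
    frames.append(i)

# the four oldest "frames" (0-3) have been popped out automatically,
# leaving the 16 most recent indices (4-19) in the queue
```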

All other lines above are the same, so let’s now inspect our frame processing loop:

Lines 41-57 are different than in our previous script.

Previously, we sampled a batch of SAMPLE_DURATION frames and would later perform inference on that batch.

In this script, we still perform inference in batches; however, it is now a rolling batch. The difference is that we add frames to our FIFO queue on Line 52. Again, this queue has a maxlen of our sample duration, and the head of the queue will always be the current frame of our video stream. Once the queue fills up, old frames are popped out automatically by the deque’s FIFO implementation.

The result of this rolling implementation is that once the queue is full, any given frame (with the exception of the very first frame) will be “touched” (i.e. included in the rolling batch) more than once. This method is less efficient; however, it leads to more accurate activity recognition, especially when the video/scene’s activities change periodically.

Lines 56 and 57 allow our frames queue to fill up (i.e., to 16 frames, shown in blue in Figure 4) prior to any inference being performed.

Once the queue is full, we will perform a rolling human activity recognition prediction:

This code block contains lines of code identical to our previous script. Here we:

  • Construct a blob from our queue of frames.
  • Perform inference and grab the highest-probability prediction for the blob.
  • Annotate and display the current frame with the resulting label of the rolling average human activity recognition.
  • Exit upon the q key being pressed.

Human Activity Recognition Results

Let’s see the results of our human activity recognition code in action!

Use the “Downloads” section of this tutorial to download the pre-trained human activity recognition model, Python + OpenCV source code, and example demo video.

From there, open up a terminal and execute the following command:

Please note that our Human Activity Recognition model requires at least OpenCV 4.1.2.

If you are running an older version of OpenCV you will receive the following error:

If you receive that error you need to upgrade your OpenCV install to at least OpenCV 4.1.2.

Below is an example of our model correctly labeling an input video clip as “yoga”:

Notice how the model waffles back and forth between “yoga” and “stretching leg” — both are technically correct here as in a downward dog position you are, by definition, doing yoga, but also stretching your legs at the same time.

In the next example our human activity recognition model correctly predicts this video as “skateboarding”:

You can see why the model also predicted “parkour” as well — the skater is jumping over a railing which is similar to an action that a parkourist may perform.

Anyone hungry?

If so, you might be interested in “making pizza”:

But before you eat, make sure you’re “washing hands” before you sit down to eat:

If you choose to indulge in “drinking beer” you better watch how much you’re drinking — the bartender might cut you off:

As you can see, our human activity recognition model, while not perfect, is still performing quite well given the simplicity of our technique (converting ResNet to handle 3D inputs versus 2D ones).

Human activity recognition is far from solved, but with deep learning and Convolutional Neural Networks, we’re making great strides.

Credits

The videos on this page, including the ones in the example_activities.mp4 file found in the “Downloads” of this guide, come from the following sources:

Summary

In this tutorial you learned how to perform human activity recognition using OpenCV and Deep Learning.

To accomplish this task, we leveraged a human activity recognition model pre-trained on the Kinetics dataset, which includes 400-700 human activities (depending on which version of the dataset you’re using) and over 300,000 video clips.

The model we utilized was ResNet, but with a twist — the model architecture had been modified to utilize 3D kernels rather than the standard 2D filters, enabling the model to include a temporal component for activity recognition.

You can read more about the model in Hara et al.’s 2018 paper, Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?

Finally, we implemented human activity recognition using OpenCV and Hara et al.’s PyTorch implementation which we loaded via OpenCV’s dnn module.

Based on our results, we can see that while not perfect, our human activity recognition model is performing quite well!

To download the source code and pre-trained human activity recognition model (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


66 Responses to Human Activity Recognition with OpenCV and Deep Learning

  1. Walid November 25, 2019 at 10:59 am #

    Well, I am speechless with such a great post.
    You make me hope that every weekend will finish early so that I can learn from your articles on Monday.
    Figure 4 is worth a thousand words.

    • Adrian Rosebrock November 25, 2019 at 2:00 pm #

      Thanks Walid!

  2. Dave Xanatos November 25, 2019 at 11:01 am #

    As usual, this is fantastic! Thank you very much & I hope you have a happy Thanksgiving!

    Dave

    • Adrian Rosebrock November 25, 2019 at 1:59 pm #

      Thanks Dave! Have a Happy Thanksgiving as well.

    • Frederik November 25, 2019 at 6:15 pm #

      How well does it perform on unknown labels, let’s say activities that haven’t been trained on?

      • Adrian Rosebrock November 27, 2019 at 11:22 am #

        It can’t predict activities it was never trained on nor does the model have an “unknown/ignore” class (which I think is a bit unfortunate).

  3. Walid November 25, 2019 at 11:12 am #

    Hi Adrian
    I am having the following error

    cv2.error: OpenCV(4.0.0) C:\projects\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:215: error: (-215:Assertion failed) attribute_proto.ints_size() == 2 in function ‘

    Can you please help?

    • Adrian Rosebrock November 25, 2019 at 1:59 pm #

      Make sure you are using at least OpenCV version 4.1.2.

      • olof November 25, 2019 at 3:20 pm #

        Hi Adrian,
        I’m using your gurus image but also got the same error as Walid.

        • Adrian Rosebrock November 27, 2019 at 11:21 am #

          Hi Olof, use OpenCV 4.1.2 and it will work.

      • Matt November 25, 2019 at 9:49 pm #

        Hello Adrian. I am getting the same error and am using 4.1.0:
        cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:245: error: (-215:Assertion failed) attribute_proto.ints_size() == 2 in function ‘cv::dnn::dnn4_v20190122::ONNXImporter::getLayerParams’

        • Matt November 25, 2019 at 10:21 pm #

          Scratch that you have to have OpenCV 4.1.2 the newest version and it works fine

      • Rohit November 26, 2019 at 6:14 am #

        Hi Adrian,

        I am using opencv 4.1.0. I am still facing the same error.

        Can you please help?

        • Adrian Rosebrock November 27, 2019 at 11:21 am #

          You need at least OpenCV 4.1.2 to run this example.

      • Zaigham Abbas Randhawa November 26, 2019 at 7:39 am #

        Hey Adrain, I was having the same issue.

        And I also have openCV 4.1.0 installed.

        Do you know of any other thing that we should be aware of?

        • Adrian Rosebrock November 27, 2019 at 11:24 am #

          You need at least OpenCV 4.1.2.

  4. Mkhuseli November 25, 2019 at 12:08 pm #

    Hi Adrian, great blog. One question: how can I use my own dataset with this model, or how do I prepare my own training dataset?

    • Adrian Rosebrock November 25, 2019 at 1:58 pm #

      I’ll be doing a separate tutorial on that in the future.

      • Philippe November 27, 2019 at 10:14 am #

        Can’t wait for that one, as I also have my own dataset I would like to use. How can I tempt you to change ‘future’ into ‘near future’? 🙂

  5. Nick November 26, 2019 at 12:53 am #

    Thanks for the tutorial Adrian!

    I would like to apply “activity recognition” to my own dataset. Will this be taught in the new edition of DL4CV coming out on the 28th?

    Kind Regards,

    Nick

    • Adrian Rosebrock November 27, 2019 at 11:22 am #

      Not in the 3rd edition of DL4CV but it will be taught in the 4th edition of DL4CV coming out in 2020. If you purchase a copy now you will receive a free update to the 4th edition when it is released.

  6. Todorov November 26, 2019 at 1:49 am #

    opencv-python 4.1.1.26 works

    • Adrian Rosebrock November 27, 2019 at 11:21 am #

      Thanks for sharing, Todorov!

  7. Kiran Prakash Kamble November 26, 2019 at 7:00 am #

    Hello Adrian,

    Can you please elaborate a bit more on SAMPLE_SIZE?

    • Adrian Rosebrock November 27, 2019 at 11:26 am #

      This model requires multiple input frames passed to the network in a single batch when making a prediction. SAMPLE_DURATION controls the number of frames in that batch, while SAMPLE_SIZE controls the spatial dimensions (width and height) of each frame.

  8. Abkul November 26, 2019 at 9:00 am #

    Excellent work.

    I would like to train model for doing the same on phone, Kindly cover the option of mobile phone based human activity recognition procedures.

    Keep it up.

    • Adrian Rosebrock November 27, 2019 at 11:26 am #

      I’ll consider it for a future topic but cannot guarantee if/when that may be.

  9. Zheng Li November 26, 2019 at 10:30 am #

    Hi,Adrian:

    How to train the model with my own dataset? In general, how many video clips should be provided for every class?

    • Adrian Rosebrock November 27, 2019 at 11:27 am #

      I’ll be covering that in a future blog post/tutorial.

  10. Mats Önnerby November 26, 2019 at 5:01 pm #

    Hi Adrian
    Great post as always. I have built a human-sized InMoov robot. I have noticed that people very often wave their hands to get the robot’s attention. Is there any neural network that has been trained to recognize “waving hands”? It would be awesome to be able to make the robot turn its attention to that person. I’m using MyRobotLab to control the robot and OpenCV to do face recognition already.
    Thanks in advance
    /Mats

    • Adrian Rosebrock November 27, 2019 at 11:27 am #

      That’s an interesting insight that people wave their hands to get the robot’s attention. I don’t know of a “waving hands” dataset or existing model though.

  11. Clark November 26, 2019 at 5:36 pm #

    This is fantastic! My colleagues are working on a similar project to detect passengers falling down on escalators. There are challenges with both processing speed and model reliability, and they don’t have a good idea of how to set a target precision. I think this post is good for understanding the state-of-the-art method.
    One point: if we have multiple bodies in the video, do you have any tutorial on pre-processing before feeding to the model? That was one question I had when reading your books.
    Thanks, Adrian,

    BR

    Clark

    • Adrian Rosebrock November 27, 2019 at 11:28 am #

      I would apply object detection to find all people in the input frame and then apply activity recognition to each person.

  12. HJYOO November 26, 2019 at 10:07 pm #

    Thank you, Adrian.
    I showed and explained this code to my students.
    I am sure that it’s helpful for them.

    • Adrian Rosebrock November 27, 2019 at 11:28 am #

      Thanks so much!

  13. Yaser Sakkaf November 27, 2019 at 2:46 am #

    Hi Adrian,
    I was hoping to work on this use case.

    Verifying that a food service worker has washed their hands after visiting the restroom or handling food that could cause cross-contamination (e.g., raw chicken and salmonella).

    I figured out that I will have to combine face identification with this blog’s model (to see whether some worker out of multiple ones has washed their hands or not) to get the resulting output.

    Can you hand out some more tips?

    • Adrian Rosebrock November 27, 2019 at 11:29 am #

      Hey Yaser, you basically have the general idea of the project. For each input frame:

      1. Run face recognition
      2. Run activity recognition

      You’ll then be able to know who was performing what activity.

      • Yaser Sakkaf November 28, 2019 at 2:33 am #

        Thanks for the advice.

  14. Pranav November 29, 2019 at 5:59 am #

    Cheers Adrian,
    Thanks for the wonderful tutorial.

    however, I am facing an error as mentioned below:

    the following arguments are required: -m/–model, -c/–classes
    An exception has occurred, use %tb to see the full traceback.

    I did also check the link you mentioned:
    https://www.pyimagesearch.com/2018/03/12/python-argparse-command-line-arguments/

    However, I am not able to move forward in the above tutorial without tackling the error. I just can’t get past it to run the other half after

    args = vars(ap.parse_args())

    please do let me know how do I go and what code has to be used to move on

    Regards and best wishes

    • Adrian Rosebrock December 5, 2019 at 10:09 am #

      Hey Pranav — what have you tried thus far? Are you trying to execute the code via command line?

  15. David November 30, 2019 at 2:14 pm #

    Amazing Adrian, I have no words to thank you enough all the work you are sharing.

    Listen, I’ve been reading the Guru course and the different Bundles. I’m quite interested in human activity recognition. Which product of yours do you recommend for me?

    All the best,

    Dave

    • Adrian Rosebrock December 5, 2019 at 10:09 am #

      Hey Dave — I would recommend the Deep Learning for Computer Vision with Python. The next edition of that book will cover how to train human activity recognition models from scratch. If you purchase now you’ll get the next update for free.

  16. aashu December 1, 2019 at 4:14 am #

    Hi mate,
    great blog… but when I execute this the video runs very slowly and lags. What is the issue?

    • Adrian Rosebrock December 5, 2019 at 10:07 am #

      Which method are you using to run the script? And what are the specs of your machine?

      • Max December 5, 2019 at 11:36 pm #

        Hi Adrian, great article! I’m experiencing the same problem. Running the script from the Pycharm IDE. I’m using a laptop with i7 quad core, 16GB RAM, 64bit. Closed all other programs too.

        What would be the recommended specs?

        Cheers!

  17. Zayne December 2, 2019 at 4:57 am #

    Hi Adrian,
    What should I do if my training dataset is extremely imbalanced? For example, 500,000 samples for one label (named “others”) and 10,000 to 20,000 samples for each of the remaining categories. I know data augmentation may be the first choice, but how can we improve things at the algorithm level (the loss function, maybe)?

  18. Engr Don December 2, 2019 at 9:28 pm #

    How can we train from scratch?

    • Adrian Rosebrock December 5, 2019 at 10:08 am #

      I’ll be covering how to train the model from scratch in a separate tutorial.

      • Hassan December 5, 2019 at 11:04 pm #

        This will be an extremely interesting tutorial on how to train the model with your own data. In addition, transfer learning might be very useful for an advanced tutorial.

  19. Ranga priyan V December 3, 2019 at 1:01 am #

    Hi Adrian,
    Can it be used in real time ?? Is there a way to train a dataset with a single particular activity ??

    Thank you.

    • Adrian Rosebrock December 5, 2019 at 10:07 am #

      1. Yes, it can run in real-time but you will need a GPU.

      2. I’ll be doing a separate blog post on training on specific activities.

      • Ranga priyan V December 9, 2019 at 12:28 am #

        Hey, thanks for the reply.
        My machine has an NVIDIA GTX 1650 GPU (4GB). Can I run the same code in real time by making a few changes?

        • Adrian Rosebrock December 12, 2019 at 9:52 am #

          No, OpenCV’s “dnn” module does not yet support NVIDIA GPUs.

  20. Mohd Aman December 5, 2019 at 3:24 pm #

    Hi Adrian,

    Thanks for excellent blog on human activity recognition.
    Human activity recognition is my master’s project. Will you please give me some ideas about this project, such as how to train my own model from scratch on another dataset? Can an LSTM be used on top of this network? I read some papers in which the authors used 3D CNN + LSTM for spatio-temporal features.

    • Adrian Rosebrock December 12, 2019 at 9:54 am #

      Please refer to the previous comments. I’ll be doing a separate tutorial on training human activity recognition models.

  21. Walid December 5, 2019 at 5:57 pm #

    Hi Adrian

    The extension of the model file is .onnx and not .pth.
    Is the model a PyTorch model, or is it in ONNX, the open ecosystem for interchangeable AI models?

    Thanks a lot

    • Adrian Rosebrock December 12, 2019 at 9:53 am #

      The model has been converted to ONNX format from a PyTorch model.

  22. Rachita December 6, 2019 at 1:32 am #

    Hey Adrian,

    Amazing post. I was wondering how you downloaded the Kinetics dataset. I’ve been having problems doing that.

  23. Tom December 9, 2019 at 7:13 am #

    Hi Adrian, another great post.

    I tried this on a random video of me cooking some sausages, and it did a pretty good job; however, on occasional sections it decided I was changing a wheel instead. I can understand why it got confused: a big dark circle (the pan/wheel) with a metal object working around the center (cooking tongs/tire iron). The use case I’m looking at doesn’t require real-time predictions, so I was wondering if there is a good approach to “smoothing” the predictions to give a more accurate overall classification?

    I’ve had 2 thoughts on possible ways to address this. One is to construct a domain-specific transition matrix for each of the model states; for example, if I determine that there is a very low probability that subsequent frames will be “cooking sausages” and “changing a wheel”, then I generate a random number between 0 and 1 and only accept the model’s change in state if the random number is below the relevant threshold. I quite like this, but with over 400 action labels, that’s quite a big matrix that would need to be defined for each possible domain. Although, for certain domains, large portions of the matrix will be irrelevant.

    My second idea was to do a simple look forward/backward approach, if prediction X is “changing a wheel” but predictions x-1 and x+1 are “cooking sausages” then I might choose to modify X to be the same as those either side of it. Obviously the window size either side of the prediction could be varied.

    I’m planning on working through each of the above when I get a chance, and maybe even some hybrid of the 2, but wanted to ask if you had any tricks up your sleeve, or thoughts on the above?

    Thanks

  24. praduman December 12, 2019 at 1:08 am #

    Hey, great blog!!
    I wanted to know how to enable GPU.

    • Adrian Rosebrock December 12, 2019 at 9:51 am #

      Unfortunately you cannot (yet). OpenCV’s “dnn” module does not yet support many GPUs for deep learning inference.

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer’s code is a very time-consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.
