Object detection and image classification with Google Coral USB Accelerator

A few weeks ago I published a tutorial on how to get started with the Google Coral USB Accelerator. That tutorial was meant to help you configure your device and run your first demo script.

Today we are going to take it a step further and learn how to utilize the Google Coral in your own custom Python scripts!

Inside today’s tutorial you will learn:

  • Image classification with the Coral USB Accelerator
  • Image classification in video with the Google Coral Accelerator
  • Object detection with the Google Coral
  • Object detection in video with the Coral USB Accelerator

After reading this guide, you will have a strong understanding of how to utilize the Google Coral for image classification and object detection in your own applications.

To learn how to perform image classification and object detection with the Google Coral USB Accelerator, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Object detection and image classification with Google Coral USB Accelerator

For this guide I will be making the following assumptions:

  1. You already own a Google Coral USB Accelerator.
  2. You have followed my previous tutorial on how to install and configure Google Coral.

If you haven’t followed my install guide, please refer to it before continuing. Finally, I’ll note that I’m connecting my Google Coral USB Accelerator to my Raspberry Pi to gather results — I’m doing this for two reasons:

  1. I’m currently writing a book on using the Raspberry Pi for Computer Vision which will also cover the Google Coral.
  2. I cover the Raspberry Pi quite often on the PyImageSearch blog and I know many readers are interested in how they can leverage it for computer vision.

If you don’t have a Raspberry Pi but still want to use your Google Coral USB Accelerator, that’s okay, but make sure you are running a Debian-based OS.

Again, refer to my previous Google Coral getting started guide for more information.

Project structure

Let’s review the project included in today’s “Downloads”:

Today we will be reviewing four Python scripts:

  1. classify_image.py  – Classifies a single image with the Google Coral.
  2. classify_video.py  – Real-time classification of every frame from a webcam video stream using the Coral.
  3. detect_image.py  – Performs object detection using Google’s Coral deep learning coprocessor.
  4. detect_video.py  – Real-time object detection using Google Coral and a webcam.

We have three pre-trained TensorFlow Lite models + labels available in the “Downloads”:

  • Classification (trained on ImageNet):
    • inception_v4/  – The Inception V4 classifier.
    • mobilenet_v2/  – MobileNet V2 classifier.
  • Object detection (trained on COCO):
    • mobilenet_ssd_v2/  – MobileNet V2 Single Shot Detector (SSD).

If you are curious about how to train your own classification and object detection models, be sure to refer to Deep Learning for Computer Vision with Python.

For both classify_image.py  and detect_image.py , I’ve provided two testing images in the “Downloads”:

  • janie.jpg  – My adorable beagle.
  • thanos.jpg  – Character from Avengers: Endgame.

For the classify_video.py  and detect_video.py  scripts, we’ll be capturing frames directly from a camera connected to the Raspberry Pi. You can use one of the following with today’s example scripts:

  • PiCamera V2 – The official Raspberry Pi Foundation camera.
  • USB Webcam – Any USB camera that supports V4L will work, such as a Logitech branded webcam.

Image classification with the Coral USB Accelerator

Figure 1: Image classification using Python with the Google Coral TPU USB Accelerator and the Raspberry Pi.

Let’s get started with image classification on the Google Coral!

Open up the classify_image.py  file and insert the following code:

We start off by importing packages. Most notably, we are importing ClassificationEngine  from edgetpu  on Line 2.

From there we’ll parse three command line arguments via Lines 10-17:

  • --model : The path to our TensorFlow Lite classifier.
  • --labels : Class labels file path associated with our model.
  • --image : Our input image path.

Using these three command line arguments, our script will be able to handle compatible pre-trained models and any image you throw at it, all from the command line. Command line arguments are one of the most common problems readers e-mail me about, so be sure to review my tutorial on argparse and command line arguments if you need a refresher.
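Since the original code block isn’t reproduced here, below is a minimal sketch of this portion of classify_image.py , assuming the legacy edgetpu  Python API plus OpenCV/imutils for image handling (the line numbers referenced in this walkthrough refer to the full script in the “Downloads”, not to this sketch):

# import the necessary packages
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
	help="path to TensorFlow Lite classification model")
ap.add_argument("-l", "--labels", required=True,
	help="path to class labels file")
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
args = vars(ap.parse_args())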

Let’s go ahead and load the labels :

Lines 21-27 facilitate loading class labels  from a text file into a Python dictionary. Later on, the Coral API will return the predicted classID  (an integer). We can then take that integer class ID and look up the associated label  value in this dictionary.

Moving on, now let’s load our classification model  with the edgetpu  API:

Our pre-trained TensorFlow Lite classification model  is instantiated via the ClassificationEngine  class (Line 31) where we pass in the path to our model via command line argument.
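As a rough sketch (assuming each row of the labels file is formatted as “<classID> <label>”), the label parsing and model instantiation might look like this:

# load the class labels into a dictionary, assuming each row of the
# labels file is formatted as "<classID> <label>"
print("[INFO] parsing class labels...")
labels = {}
for row in open(args["labels"]):
	(classID, label) = row.strip().split(maxsplit=1)
	labels[int(classID)] = label.strip()

# load the Google Coral classification model
print("[INFO] loading Coral model...")
model = ClassificationEngine(args["model"])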

Let’s go ahead and load + preprocess our image :

Our image  is loaded (Line 34) and then preprocessed (Lines 35-42).

Take note that we made a copy of the image (orig ) — we’ll be annotating this copy with the output predictions later in the script.
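Here is a sketch of the loading and preprocessing step (the 500px resize width is an assumption; the edgetpu  API expects a PIL image in RGB order, so we convert from OpenCV’s BGR NumPy array):

# load the input image, resize it, and make a copy for annotation
image = cv2.imread(args["image"])
image = imutils.resize(image, width=500)
orig = image.copy()

# convert from BGR to RGB channel ordering, then from a NumPy array
# to PIL image format as expected by the edgetpu API
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = Image.fromarray(image)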

How easy is it to perform classification inference on an image with the Google Coral Python API?

Let’s find out now:

On Line 47, we make classification predictions on the input image using the ClassifyWithImage  function (a super easy one-liner function call). I really like how the edgetpu  API allows us to specify that we only want the top results with the top_k  parameter.

Timestamps are sandwiched around this classification call and the elapsed time is then printed via Lines 49 and 50.
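Roughly, that step looks like this (the top_k  value of 5 is an assumption about how many top predictions to return):

# make predictions on the input image and time how long it takes
print("[INFO] making predictions...")
start = time.time()
results = model.ClassifyWithImage(image, top_k=5)
end = time.time()
print("[INFO] classification took {:.4f} seconds...".format(end - start))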

From here we’ll process the results :

Looping over the results  (Line 53), we first find the top result and annotate the image with the label and percentage score (Lines 56-60).

For good measure, we’ll also print the other results and scores (but only in our terminal) via Lines 63 and 64.

Finally, the annotated original (OpenCV format) image is displayed to the screen (Lines 67 and 68).
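A sketch of the results loop (ClassifyWithImage  returns (classID, score) pairs; the text position and colors are assumptions):

# loop over the classification results
for (i, (classID, score)) in enumerate(results):
	# if this is the top result, draw the label on the image
	if i == 0:
		text = "{}: {:.2f}%".format(labels[classID], score * 100)
		cv2.putText(orig, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
			0.8, (0, 0, 255), 2)

	# display the classification result in the terminal
	print("{}. {}: {:.2f}%".format(i + 1, labels[classID], score * 100))

# show the annotated output image
cv2.imshow("Image", orig)
cv2.waitKey(0)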


That was straightforward. Let’s put our classification script to the test!

To see image classification in action with the Google Coral, make sure you use the “Downloads” section of this guide to download the code + pre-trained models — from there, execute the following command:
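For example, using the MobileNet V2 classifier (the exact .tflite and labels filenames below are assumptions — check the folders in your download for the precise names):

$ python classify_image.py --model mobilenet_v2/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
	--labels mobilenet_v2/imagenet_labels.txt --image janie.jpg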

The output of the image classification script can be seen in Figure 1 at the top of this section.

Here you can see that Janie, my dog, is correctly classified as “beagle”.

Image classification in video with the Google Coral Accelerator

Figure 2: Real-time classification with the Google Coral TPU USB Accelerator and Raspberry Pi using Python. OpenCV was used for preprocessing, annotation, and display.

In the previous section, we learned how to perform image classification on a single image — but what if we wanted to perform image classification on a video stream?

I’ll be showing you how to accomplish exactly that.

Open up a new file, name it classify_video.py  and insert the following code:

There are two differences in our first code block for real-time classification compared to our previous single image classification script:

  1. On Line 2 we’ve added the VideoStream  import for working with our webcam.
  2. We no longer have a --image  argument since by default we will be using our webcam.

Just as before, let’s load the labels  and model , but now we also need to instantiate our VideoStream :

Lines 19-31 are identical to our previous script where we load our class labels and store them in a dictionary.

On Line 35 we instantiate our VideoStream  object so that we can read frames from our webcam (covered in the next code block). A 2.0  second sleep is added so our camera has time to warm up (Line 37).

Note: By default, this script will use a USB webcam. If you would like to use a Raspberry Pi camera module, simply comment out Line 35 and uncomment Line 36.
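A minimal sketch of the stream setup (the labels and model loading is identical to classify_image.py  and is omitted here):

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)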

Let’s begin our loop:

We start looping on Line 40.

Line 43 grabs a frame  from the threaded video stream.

We go ahead and preprocess it exactly as we did in the previous script (Lines 44-51).

With the frame  in the correct PIL format, now we can make predictions and draw our annotations:

Just as before, Line 55 performs inference.

From there, the top result is extracted and the classification label + score  is annotated on the orig  frame (Lines 59-66).

The frame is displayed on the screen (Line 69).

If the "q"  key is pressed, we’ll break from the loop and clean up (Lines 70-78).


Let’s give image classification in video streams with the Google Coral a try!

Make sure you use the “Downloads” section of this guide to download the code + pre-trained models — from there, execute the following command:
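For example (again, the model and labels filenames are assumptions based on the mobilenet_v2/  folder in the download):

$ python classify_video.py --model mobilenet_v2/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
	--labels mobilenet_v2/imagenet_labels.txt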

An example of real-time image classification can be seen above in Figure 2.

Using the Google Coral USB Accelerator, the MobileNet classifier (trained on ImageNet) is fully capable of running in real-time on the Raspberry Pi.

Object detection with the Google Coral

Figure 3: Deep learning-based object detection of an image using Python, Google Coral, and the Raspberry Pi.

We’ve already learned how to apply image classification with the Google Coral — but what if we not only wanted to classify an object in an image but also detect where in the image the object is?

Such a task is called object detection, a technique I’ve covered quite a few times on the PyImageSearch blog (refer to this deep learning-based object detection guide if you are new to the concept).

Open up the detect_image.py  file and let’s get coding:

Our packages are imported on Lines 2-7. For Google Coral object detection with Python, we use the DetectionEngine  from the edgetpu  API.

Our command line arguments are similar to the classify_image.py  script with one exception — we’re also going to supply a --confidence  argument representing the minimum probability to filter out weak detections (Lines 17 and 18).
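A sketch of the imports and argument parsing (the default confidence of 0.3 is an assumption):

# import the necessary packages
from edgetpu.detection.engine import DetectionEngine
from PIL import Image
import argparse
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
	help="path to TensorFlow Lite object detection model")
ap.add_argument("-l", "--labels", required=True,
	help="path to class labels file")
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
ap.add_argument("-c", "--confidence", type=float, default=0.3,
	help="minimum probability to filter weak detections")
args = vars(ap.parse_args())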

Now we’ll load the labels in the same manner as in our classification scripts:

And from there we’ll load our object detection  model :

We can now load our input image and perform preprocessing:

After preprocessing, it is time to perform object detection inference:

Lines 49 and 50 use Google Coral’s object detection API to make predictions.

Being able to pass our confidence threshold (via the threshold  parameter) is extremely convenient in this API. Honestly, I wish OpenCV’s DNN API would follow suit. It saves an if-statement later on, as you can imagine.
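Sketched out, the preprocessing and inference step might look like this (keep_aspect_ratio=True  and relative_coord=False  are assumptions; the latter returns bounding boxes in absolute pixel coordinates):

# load the input image, resize it, make a copy for annotation, and
# convert to RGB PIL format for the edgetpu API
image = cv2.imread(args["image"])
image = imutils.resize(image, width=500)
orig = image.copy()
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = Image.fromarray(image)

# perform object detection, passing our minimum confidence directly
# to the API via the threshold parameter
print("[INFO] making predictions...")
start = time.time()
results = model.DetectWithImage(image, threshold=args["confidence"],
	keep_aspect_ratio=True, relative_coord=False)
end = time.time()
print("[INFO] object detection took {:.4f} seconds...".format(end - start))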

Let’s process our results :

Looping over the results  (Line 56), we first extract the bounding box  coordinates (Lines 58 and 59). Conveniently, the box  is already scaled relative to our input image dimensions (from any behind-the-scenes resizing the API does to fit the image into the CNN).

From there we can easily extract the class label  via Line 60.

Next, we draw the bounding box rectangle (Lines 63 and 64) and draw the predicted object  text  on the image (Lines 65-68).

Our orig  image (with object detection annotations) is then displayed via Lines 71 and 72.
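A sketch of the results loop (each detection exposes bounding_box , label_id , and score ; the colors and text placement are assumptions):

# loop over the detection results
for r in results:
	# extract the bounding box coordinates (already in pixel units)
	box = r.bounding_box.flatten().astype("int")
	(startX, startY, endX, endY) = box
	label = labels[r.label_id]

	# draw the bounding box and predicted label on the image
	cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 255, 0), 2)
	y = startY - 15 if startY - 15 > 15 else startY + 15
	text = "{}: {:.2f}%".format(label, r.score * 100)
	cv2.putText(orig, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX,
		0.5, (0, 255, 0), 2)

# show the annotated output image
cv2.imshow("Image", orig)
cv2.waitKey(0)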


Let’s put object detection with the Google Coral USB Accelerator to the test!

Use the “Downloads” section of this tutorial to download the source code + pre-trained models.

From there, open up a terminal and execute the following command:
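For example, using the MobileNet SSD V2 model included in the mobilenet_ssd_v2/  folder:

$ python detect_image.py \
	--model mobilenet_ssd_v2/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
	--labels mobilenet_ssd_v2/coco_labels.txt --image thanos.jpg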

Just for fun, I decided to apply object detection to a screen capture from Avengers: Endgame (don’t worry, there aren’t any spoilers!).

Here we can see that Thanos, a character from the film, is detected (Figure 3)…although I’m not sure he’s an actual “person” if you know what I mean.

Object detection in video with the Coral USB Accelerator

Figure 4: Real-time object detection with Google’s Coral USB deep learning coprocessor, the perfect companion for the Raspberry Pi.

Our final script will cover how to perform object detection in real-time video with the Google Coral.

Open up a new file, name it detect_video.py , and insert the following code:

To start, we import our required packages and parse our command line arguments (Lines 2-8). Again, we’re using VideoStream  so we can access our webcam (since we’re performing object detection on webcam frames, we don’t have a --image  command line argument).

Next, we’ll load our labels  and instantiate both our model  and video stream:

From there, we’ll loop over frames from the video stream:

Our frame processing loop begins on Line 41. We proceed to:

  • Grab and preprocess our frame (Lines 44-52).
  • Perform object detection inference with the Google Coral (Lines 56 and 57).

From there we’ll process the results and display our output:

Here we loop over each of the detected objects, grab the bounding box + class label, and annotate the frame (Lines 61-73).

The frame (with object detection annotations) is displayed via Line 76.

We’ll continue to process more frames unless the "q"  (quit) key is pressed at which point we break and clean up (Lines 77-85).
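Here is a rough sketch of that loop, under the same assumptions as the previous scripts (the labels, model, and video stream setup above are omitted):

# loop over frames from the video stream
while True:
	# grab a frame, resize it, and convert it to RGB PIL format
	frame = vs.read()
	frame = imutils.resize(frame, width=500)
	orig = frame.copy()
	frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
	frame = Image.fromarray(frame)

	# perform object detection on the frame
	results = model.DetectWithImage(frame, threshold=args["confidence"],
		keep_aspect_ratio=True, relative_coord=False)

	# loop over the detections and annotate the frame
	for r in results:
		box = r.bounding_box.flatten().astype("int")
		(startX, startY, endX, endY) = box
		label = labels[r.label_id]
		cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 255, 0), 2)
		y = startY - 15 if startY - 15 > 15 else startY + 15
		text = "{}: {:.2f}%".format(label, r.score * 100)
		cv2.putText(orig, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX,
			0.5, (0, 255, 0), 2)

	# show the output frame and check for the quit key
	cv2.imshow("Frame", orig)
	key = cv2.waitKey(1) & 0xFF
	if key == ord("q"):
		break

# clean up
cv2.destroyAllWindows()
vs.stop()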


Let’s put this Python + Coral object detection script to work!

To perform video object detection with the Google Coral, make sure you use the “Downloads” section of the guide to download the code + pre-trained models.

From there you can execute the following command to start the object detection script:
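For example, with the MobileNet SSD V2 model and COCO labels from the download:

$ python detect_video.py \
	--model mobilenet_ssd_v2/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
	--labels mobilenet_ssd_v2/coco_labels.txt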

For our final example of applying real-time object detection with the Google Coral, I decided to let Janie in my office for a bit as I recorded a demo (and even decided to sing her a little song) — you can see the result in Figure 4 above.

The problem with the Raspberry Pi 3B+ and Google Coral USB Accelerator

Figure 5: USB 3.0 is much faster than USB 2.0. To take full advantage of Google Coral’s deep learning capabilities a USB 3.0 port is required, however, the Raspberry Pi 3B+ does not include USB 3.0 capability. (image source)

You might have noticed that our inference results are pretty similar to what we obtain with the Movidius NCS — doesn’t Google advertise the Coral USB Accelerator as being faster than the NCS?

What’s the problem here?

Is it the Google Coral?

Is it our code?

Is our device configured incorrectly?

Actually, it’s none of the above.

The problem here is the Raspberry Pi 3B+ only supports USB 2.0.

The bottleneck is the I/O taking place from the CPU, to USB, to the Coral USB Accelerator, and back.

Inference speed will dramatically improve once the Raspberry Pi 4 is released (which will certainly support USB 3, giving us the fastest possible inference speeds with the Coral USB Accelerator).

What about custom models?

This tutorial has focused on state-of-the-art deep learning models that have been pre-trained on popular image datasets, including ImageNet (for classification) and COCO (for object detection).

But what if you wanted to run your own pre-trained models on the Google Coral?

Is this possible?

And if so, how can we do it?

I’ll be answering that exact question inside my upcoming book, Raspberry Pi for Computer Vision.

The book will be released in Autumn 2019, but if you pre-order your copy now, you’ll be getting a discount (the price of the book will increase once it officially releases later this year).

If you’re interested in computer vision + deep learning on embedded devices such as the:

  • Raspberry Pi
  • Movidius NCS
  • Google Coral
  • Jetson Nano

…then you should definitely pre-order your copy now.

Summary

In this tutorial, you learned how to utilize your Google Coral USB Accelerator for:

  1. Image classification
  2. Image classification in video
  3. Object detection
  4. Object detection in video

Specifically, we used pre-trained deep learning models, including:

  • Inception V4 (trained on ImageNet)
  • MobileNet V2 (trained on ImageNet)
  • MobileNet SSD V2 (trained on COCO)

Our results were far, far better than trying to use the Raspberry Pi CPU alone for deep learning inference.

Overall, I was very impressed with how easy it is to use the Google Coral and the edgetpu  library in my own custom Python scripts.

I’m looking forward to seeing how the package develops (and hope they make it this easy to convert and run custom deep learning models on the Coral).

To download the source code and pre-trained models for this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


28 Responses to Object detection and image classification with Google Coral USB Accelerator

  1. Naufil Hassan May 13, 2019 at 10:31 am #

    Hi Adrian,

    Great work. I have been following your tutorials and they really are helpful to me.
    I went through your Dlib tracker tutorial and was wondering if it is possible to run that tracker on Google Coral.

    Also, would you recommend YOLOv3-tiny or MobileNet SSD for real-time tasks?

    Keep up the good work !

    Regards,
    Naufil

    • Adrian Rosebrock May 13, 2019 at 2:09 pm #

      The tracker itself will not be able to run on the Coral USB Accelerator as dlib’s correlation tracker is not a deep learning model.

      As far as YOLO versus MobileNet + SSD goes, that really depends on your application. I personally find that MobileNet + SSD tends to perform better than YOLO (less false-positives). I’ve also found that MobileNet + SSD tends to be a bit easier to train. I don’t typically use YOLO unless I have a very specific reason to do so.

      • Muhammad Maaz May 14, 2019 at 1:27 am #

        Hi Adrian,

        Just wanted to push my thoughts into it. What about writing the dlib’s correlation tracker into TensorFlow in order to run it on Coral? Also as dlib’s GPU compile is available, so we can run this tracker on GPU.

        • Adrian Rosebrock May 15, 2019 at 2:43 pm #

          Dlib’s correlation tracker is not a deep learning model. As far as I understand, it was never meant to run on a GPU.

  2. martin May 13, 2019 at 11:02 am #

    Nice article! It seems quite simple to implement tf models. I’m curious about the FPS compared to the movidius NCS1

    • Adrian Rosebrock May 13, 2019 at 2:08 pm #

      Thanks Martin. I’ll be doing a full comparison between the embedded devices, including the NCS, Coral, and Nano, at a later date.

  3. Oscar May 13, 2019 at 11:35 am #

    Hi Adrian,

    Thanks for another great post, but I was asking my self, there are only three pre-trained models. can you use other pre-trained models or train your own and then download them into the board?

    Thanks.

    • Adrian Rosebrock May 13, 2019 at 2:08 pm #

      Great question. I’ll be covering how to train your own custom models and then convert + deploy them to the Google Coral USB Accelerator inside my upcoming book, Raspberry Pi for Computer Vision.

  4. wally May 13, 2019 at 1:42 pm #

    Another very timely tutorial for me!

    I added your fps counter from other tutorials and ran this:

    python3 detect_video.py --model mobilenet_ssd_v2/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --labels mobilenet_ssd_v2/coco_labels.txt

    on my Odroid XU-4 running Ubuntu Mate16, The Odroid has USB3 ports.

    Doing everything via ssh -X with an HP 1080p USB webcam I got:
    [INFO] starting video stream…
    [INFO] starting the FPS counter …
    [INFO] Run elapsed time: 44.93
    [INFO] AI processing approx. FPS: 19.83
    [INFO] Frames processed by AI system: 891

    Repeating the test on my Pi3B+, same modified code, Coral stick and USB webcam, again via ssh -X I got:
    [INFO] starting video stream…
    [INFO] starting the FPS counter …
    [INFO] Run elapsed time: 114.73
    [INFO] AI processing approx. FPS: 6.52
    [INFO] Frames processed by AI system: 748

    The Odroid gets ~3X the performance of the Pi3B+ on this code.

    I think things like the Odroid XU-4 (octacore, with GigE, and USB3) should raise the bar for what we could hope to expect from the Pi4.

    The Odroid XU-4 (straight from Korea) runs about ~$25-30 more (board, case, power supply) than the Pi3B+ if you ignore the shipping cost (~$20) which is zero here for the Pi as I can pick them up at my local “maker” supply store.

    But even so 2X the cost for 3X the performance is usually a win.

    • Adrian Rosebrock May 13, 2019 at 2:07 pm #

      Thanks for sharing all this, Wally!

      • wally May 15, 2019 at 11:39 am #

        Another followup to get an idea of the speed of the Coral, I ran this tutorial code on an i3-4025 “Industrial Mini PC” running Ubuntu Mate 16.04

        With everything run as ssh -X I get:
        [INFO] starting video stream…
        [INFO] starting the FPS counter …
        [INFO] Run elapsed time: 87.51
        [INFO] AI processing approx. FPS: 46.36
        [INFO] Frames processed by AI system: 4057

        I’m impressed!

        Running the openvino_real_time_object_detection.py tutorial code using Movidius NCS2 on this system I get:
        [INFO] starting video stream…
        [INFO] elapsed time: 174.52
        [INFO] approx. FPS: 19.30

        Using the original NCS:
        [INFO] starting video stream…
        [INFO] elapsed time: 84.82
        [INFO] approx. FPS: 9.82

        • wally May 17, 2019 at 9:12 pm #

          I think the Coral TPU is the bang/buck leader at the moment.
          Running this sample code on my i7-8750H notebook I get 70 fps with the same webcam I used on the i3-4025 system above.

          On this system the OpenVINO with NCS2 gets 23.7 fps on their Security Barrier demo, but 101 fps with CPU instead of MYRIAD.

          Your OpenVINO tutorial on this system gets 20.2 fps with NCS2

          The Coral on a <$100 Odroid XU-4 is getting about the same frame rate as the NCS2 on a $300+ i3 system!

  5. Jerome May 13, 2019 at 1:58 pm #

    Very nice post,

    thanks again 🙂

    maybe the next step could be using the Jetson Nano and Coral USB Accelerator (there is a USB 3 port …)

    • Adrian Rosebrock May 13, 2019 at 2:07 pm #

      Thanks Jerome, although if you are using a Jetson Nano there wouldn’t be much point in using the Coral USB Accelerator since the Nano has a GPU for inference.

      • Matthew Pottinger May 15, 2019 at 9:33 am #

        It could make sense, depending on the situation. The Coral accelerator is faster for object detection.

        Also there are other non-deep learning algorithms that use CUDA such as pose tracking/SLAM. That is one thing I will be trying, so the GPU on the Nano will be all used for mapping while the TPU is used for object detection.

        Or just multiple deep learning models at once.

      • wally May 15, 2019 at 11:50 am #

        I think there still could be a point to running both.

        I’ve verified that OpenVINO and Coral USB accelerators can be run simultaneously, so Coral and Jetson or Movidius and Jetson could be useful if, like me, you haven’t yet gotten into training models and are thus confined to using only publicly available models — not all models are available for every TPU, at least for now.

        It’s easy to envision a pipeline of detection and classification models using the “best” TPU for each model.

        I think the hope is the Jetson Nano GPU will do better with YOLO models than the Movidius and Coral.

  6. toz May 13, 2019 at 3:54 pm #

    at the right time the right information, as always. good work, keep it up.

    maybe it’s of help to resize images not just to 500px, but directly to the input tensor size of the Coral coprocessor?

    like its done here: https://github.com/freedomtan/edge_tpu_python_scripts/blob/master/object_detection_coral.py

    check it out, if you can improve your examples…

    thanks, keep on going..

  7. KH May 13, 2019 at 9:54 pm #

    Are the pretrained models able to detect drones as a unique object class or will it classify them as aircraft or birds or something else?

    • Adrian Rosebrock May 15, 2019 at 2:47 pm #

      There is not a drone/quadcopter object class. It would either classify them as something else or fail to detect them entirely.

  8. Steff May 16, 2019 at 3:28 am #

    It’s a nice post!

    Have you ever had experience detecting camera occlusion, or do you have some resources worth referencing?
    I have a problem in this regard and don’t know how to approach it.

  9. Rob Jones May 17, 2019 at 10:04 am #

    Hi Adrian

    Could you post how you record your video clips ? That would be very useful.

    Also, with regard to the PI, have you had any success running them headless and sending video to another desktop over VNC ? vino only works if there is a running X server and desktop on the Pi and I’ve not had any luck with other alternatives on a truly headless Pi.

    • Adrian Rosebrock May 23, 2019 at 10:10 am #

      The video clip demos you mean? If so, I use VNC to stream to my Mac where I use Camtasia to record my screen.

  10. Paul Versteeg May 22, 2019 at 10:03 am #

    I think I just fell in love with the Coral.

    I was struggling to get the people counter program to work for my application, which is to watch over my 93 year old dad who still lives alone. I wanted to recognize a certain amount of movement within a certain period, by him crossing an imaginary (horizontal) line. If he does, fine, if he doesn’t, I will get an SMS as a warning.

    I’m using an RPi M3+, and I just could not get the frame rate high enough to have a reliable recognition/tracking.

    So I purchased the Coral USB stick.
    It arrived today, and I quickly went through the demo programs. They all worked very well, as usual. I was a little apprehensive that I would be disappointed, but my hopes were getting up.

    When I modified the detect video demo program to make it do what I needed, I was amazed about the throughput.

    No more detection every 40 frames, centroids and complicated tracking to get a reasonable fps (between 6 and 7) at best, and still having major stuttering and losing track of objects.

    Even with recognition now going at every frame, I get between 9 and 10 fps, and with that, the video is not stuttering at all and tracking is perfect. I’m delighted!

    Thank you for bringing this subject to the masses!

    Keep it up!

    Paul

    • Adrian Rosebrock May 23, 2019 at 9:32 am #

      Thanks for the comment Paul, I’m happy the Coral is working out for you!

  11. John-Paul Perron June 5, 2019 at 12:02 pm #

    Hey Adrian,

    Not sure if this has been mentioned, apologies if it has. But do you plan on testing the google tpu Dev Board? Interested to see if there is better performance there.

    Thanks!

    • Adrian Rosebrock June 6, 2019 at 6:42 am #

      I hope to but I’m not sure when that may be.

  12. Willie Nelson June 5, 2019 at 12:12 pm #

    Could you please add steps to this tutorial to install OpenCV within the coral virtual environment? I tried one of your previous tutorials for OpenCV 4.0 and failed to import package cv2. I might be using the wrong paths for RPi.

    • Adrian Rosebrock June 6, 2019 at 6:42 am #

      Make sure you are in the “coral” Python virtual environment and then install OpenCV:

      There will be a few other packages you may need to install so make sure you refer to this tutorial.
