Getting started with Google Coral’s TPU USB Accelerator

In this tutorial, you will learn how to configure your Google Coral TPU USB Accelerator on Raspberry Pi and Ubuntu. You’ll then learn how to perform classification and object detection using Google Coral’s USB Accelerator.

A few weeks ago, Google released “Coral”, a super fast, “no internet required” development board and USB accelerator that enables deep learning practitioners to deploy their models “on the edge” and “closer to the data”.

Using Coral, deep learning developers no longer need an internet connection: the Coral TPU is fast enough to perform inference directly on the device rather than sending the image/frame to the cloud for inference and prediction.

The Google Coral comes in two flavors:

  1. A single-board computer with an onboard Edge TPU. The dev board can be thought of as an “advanced Raspberry Pi for AI” or a competitor to NVIDIA’s Jetson Nano.
  2. A USB accelerator that plugs into a device (such as a Raspberry Pi). The USB stick includes an Edge TPU built into it. Think of Google’s Coral USB Accelerator as a competitor to Intel’s Movidius NCS.

Today we’ll be focusing on the Coral USB Accelerator as it’s easier to get started with (and it fits nicely with our theme of Raspberry Pi-related posts the past few weeks).

To learn how to configure your Google Coral USB Accelerator (and perform classification + object detection), just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Getting started with Google Coral’s TPU USB Accelerator

Figure 1: The Google Coral TPU Accelerator adds deep learning capability to resource-constrained devices like the Raspberry Pi (source).

In this post I’ll be assuming that you have:

  • Your Google Coral USB Accelerator stick
  • A fresh install of a Debian-based Linux distribution (e.g., Raspbian or Ubuntu)
  • An understanding of basic Linux commands and file paths

If you don’t already own a Google Coral Accelerator, you can purchase one via Google’s official website.

I’ll be configuring the Coral USB Accelerator on Raspbian, but again, provided that you have a Debian-based OS, these commands will still work.

Let’s get started!

Downloading and installing Edge TPU runtime library

If you are using a Raspberry Pi, you first need to install feh, used by the Edge TPU runtime example scripts to display output images:
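
On Raspbian and other Debian-based systems, feh can be installed straight from apt:

$ sudo apt-get install feh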

The next step is to download the Edge TPU runtime and Python library. The easiest way to download the package is to simply use the command line + wget:
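
Google has moved this archive around over time, so treat the URL below as representative and grab the current link from the Coral getting started guide if it no longer resolves:

$ cd ~
$ wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz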

Now that the TPU runtime has been downloaded, we can extract it, change directory into python-tflite-source, and then install it (notice that sudo permissions are not required):
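
Assuming the archive was saved as edgetpu_api.tar.gz (adjust the filename if yours differs), the extraction and install look like this:

$ tar xzf edgetpu_api.tar.gz
$ cd python-tflite-source
$ bash ./install.sh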

During the install you’ll be prompted “Would you like to enable the maximum operating frequency?” — be careful with this setting!

According to Google’s official getting started guide, enabling this option will:

  1. Improve your inference speed…
  2. …but cause the USB Accelerator to become very hot.

If you were to touch it/brush up against the USB stick, it may burn you, so be careful with it!

My personal recommendation is to select N (for “No, I don’t want maximum operating frequency”), at least for your first install. You can always increase the operating frequency later.

Secondly, it’s important to note that you need at least Python 3.5 for the Edge TPU runtime library.

You cannot use Python 2.7 or any Python 3 version below Python 3.5.

The install.sh script assumes you’re using Python 3.5, so if you’re not, you’ll want to open up the install.sh script, scroll down to the final line of the file (i.e., the setup.py call), where you’ll see this line:
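
In the copy of install.sh I downloaded, that final line looked similar to the following (your version of the script may differ slightly):

python3.5 setup.py develop --user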

If you’re using Python 3.6 you’ll simply want to change the Python version number:
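
For Python 3.6, the line would simply become:

python3.6 setup.py develop --user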

After that, you’ll be able to successfully run the install.sh script.

Overall, the entire install process on a Raspberry Pi took just over one minute. If you’re using a more powerful system than the RPi then the install should be even faster.

Classification, object detection, and face detection using the Google Coral USB Accelerator

Now that we’ve installed the TPU runtime library, let’s put the Coral USB Accelerator to the test!

First, make sure you are in the python-tflite-source/edgetpu directory. If you followed my instructions and put python-tflite-source in your home directory then the following command will work for you:
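
$ cd ~/python-tflite-source/edgetpu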

The next step is to download the pre-trained classification and object detection models. The full list of pre-trained models Google provides can be found here, including:

  • MobileNet V1 and V2 trained on ImageNet, iNat Insects, iNat Plants, and iNat Birds
  • Inception V1, V2, V3, and V4, all trained on ImageNet
  • MobileNet + SSD V1 and V2 trained on COCO
  • MobileNet + SSD V2 for face detection

Again, refer to this link for the pre-trained models Google Coral provides.

For the sake of this tutorial, we’ll be using the following models:

  1. MobileNet V2 trained on ImageNet
  2. MobileNet + SSD V2 for face detection
  3. MobileNet + SSD V2 trained on COCO

You can use the following commands to download the models and follow along with this tutorial:
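
The model filenames below match the names on Google’s pre-trained model page; the dl.google.com/coral/canned_models host is an assumption based on where the models lived at the time of writing, so fall back to the model page linked above if a download fails:

$ mkdir -p ~/edgetpu_models
$ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite
$ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/imagenet_labels.txt
$ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
$ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
$ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/coco_labels.txt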

For convenience, I’ve included all models + example images used in this tutorial in the “Downloads” section — I would recommend using the downloads to ensure you can follow along with the guide.

Again, notice how the models are downloaded to the ~/edgetpu_models directory — that is important as it ensures the paths used in the examples below will work out of the box for you.

Let’s start by performing a simple image classification example:
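
Running the bundled classification demo looks like this (the demo/ and test_data/ paths are where my copy of the library keeps its scripts and example images, so adjust them if your layout differs):

$ python3 demo/classify_image.py \
    --model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
    --label ~/edgetpu_models/imagenet_labels.txt \
    --image test_data/parrot.jpg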

Figure 2: The Google Coral has made a deep learning classification inference on a Macaw/parrot.

As you can see, MobileNet (trained on ImageNet) has correctly labeled the image as “Macaw”, a type of parrot.

Note: If you are using a Python virtual environment (covered below) you would want to use python rather than python3 as the Python binary.

Now let’s try performing face detection using the Google Coral USB Accelerator:
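
The command is nearly identical; we simply swap in the face detection model and the bundled object detection demo script (verify the script path and flag names against your copy of the library with --help if they differ):

$ python3 demo/object_detection.py \
    --model ~/edgetpu_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
    --input test_data/face.jpg \
    --output ~/face_detection_results.jpg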

Figure 3: Deep learning face detection with the Google Coral and Raspberry Pi.

Here the MobileNet + SSD face detector was able to detect all four faces in the image. This is especially impressive given the poor lighting conditions and the partially obscured face on the far right.

The next example shows how to perform object detection using a MobileNet + SSD trained on the COCO dataset:
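
This time we point the same object detection demo at the COCO model and labels (the bird.jpg filename below is a placeholder for the bird image included with this post’s “Downloads”):

$ python3 demo/object_detection.py \
    --model ~/edgetpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
    --label ~/edgetpu_models/coco_labels.txt \
    --input bird.jpg \
    --output ~/bird_detection_results.jpg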

Figure 4: Deep learning object detection with the Raspberry Pi and Google Coral.

Notice there are three detections but only one bird in the image — why is that?

The reason is that the object_detection.py script is not filtering on a minimum probability. You could easily modify the script to ignore detections with < 50% probability (I’ll leave that as an exercise to you, the reader, to implement).

For fun, I decided to try an image that was not included in the example TPU runtime library demos.

Here’s an example of applying the face detector to a custom image:
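
The face detection command is unchanged except for the input image (the filename below is just a placeholder; substitute your own photo):

$ python3 demo/object_detection.py \
    --model ~/edgetpu_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
    --input ~/my_photo.jpg \
    --output ~/my_photo_face_results.jpg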

Figure 5: Testing face detection (using my own face) with the Google Coral and Raspberry Pi.

Sure enough, my face is detected!

Finally, here’s an example of running the MobileNet + SSD on the same image:
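
And the same photo through the COCO-trained detector (again using the placeholder filename from above):

$ python3 demo/object_detection.py \
    --model ~/edgetpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
    --label ~/edgetpu_models/coco_labels.txt \
    --input ~/my_photo.jpg \
    --output ~/my_photo_coco_results.jpg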

Figure 6: An example of running the MobileNet SSD object detector on the Google Coral + Raspberry Pi.

Again, we can improve results by filtering on a minimum probability to remove the extraneous detections. Doing so would leave only two detections: person (87.89%) and dog (58.20%).

Installing the edgetpu runtime into Python virtual environments

Figure 7: Importing edgetpu in Python inside of my coral virtual environment on the Raspberry Pi.

It’s a best practice to use Python virtual environments for development, and as you know, we make heavy use of Python virtual environments on the PyImageSearch blog.

Installing the edgetpu library into a Python virtual environment definitely requires a few more steps, but is well worth it to ensure your libraries are kept in sequestered, independent environments.

The first step is to install both virtualenv and virtualenvwrapper:
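
$ sudo pip3 install virtualenv virtualenvwrapper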

You’ll notice that I’m using sudo here — this is super important because the install.sh script created a ~/.local directory when we installed the TPU runtime. If we were to install virtualenv and virtualenvwrapper via pip without sudo, they would actually go into the ~/.local/bin directory (which is not what we want).

The solution is to use sudo with pip3 (like we did above) so virtualenv and virtualenvwrapper end up in /usr/local/bin.

The next step is to open our ~/.bashrc file:
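
I’ll use nano here, but any text editor works:

$ nano ~/.bashrc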

Then, scroll down to the bottom and insert the following lines to ~/.bashrc:
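
These are the standard virtualenvwrapper settings (the /usr/bin/python3 path is the usual location; point VIRTUALENVWRAPPER_PYTHON at whichever Python 3 you installed the Edge TPU runtime against):

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh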

You can then re-load the .bashrc using source:
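
$ source ~/.bashrc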

We can now create our Python 3 virtual environment:
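
$ mkvirtualenv coral -p python3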

I’m naming my virtual environment coral but you can call it whatever you like.

Finally, sym-link in the edgetpu library to your Python virtual environment:
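
The site-packages path below assumes Python 3.5 inside the coral environment; check the lib/ directory of your virtual environment and adjust the version number accordingly:

$ cd ~/.virtualenvs/coral/lib/python3.5/site-packages/
$ ln -s ~/python-tflite-source/edgetpu edgetpu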

Assuming you followed my exact instructions, your path to the edgetpu directory should match mine. If you didn’t follow my exact instructions then you’ll want to double-check and triple-check your paths.

As a sanity test, let’s try to import the edgetpu library into our Python virtual environment:
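
Activate the environment and try the import from a Python shell:

$ workon coral
$ python
>>> import edgetpu
>>>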

As you can see, everything is working and we can now execute the demo scripts above using our Python virtual environment!

What about custom models on Google’s Coral?

You’ll notice that I’m only using pre-trained deep learning models on the Google Coral in this post — what about custom models that you train yourself?

Google does provide some documentation on that but it’s much more advanced, far too much for me to include in this blog post.

If you’re interested in learning how to train your own custom models for Google’s Coral I would recommend you take a look at my upcoming book, Raspberry Pi for Computer Vision where I’ll be covering the Google Coral in detail.

How do I use Google Coral’s Python runtime library in my own custom scripts?

Using the edgetpu library in conjunction with OpenCV and your own custom Python scripts is outside the scope of this post.

I’ll be covering how to use Google Coral in your own Python scripts in a future blog post as well as in my Raspberry Pi for Computer Vision book.

Thoughts, tips, and suggestions when using Google’s TPU USB Accelerator

Overall, I really liked the Coral USB Accelerator. I thought it was super easy to configure and install, and while not all the demos ran out of the box, with some basic knowledge of file paths, I was able to get them running in a few minutes.

In the future, I would like to see the Google TPU runtime library more compatible with Python virtual environments.

Technically, I could create a Python virtual environment and then edit the install.sh script to install into that virtual environment, but editing the install.sh script shouldn’t be a strict requirement — instead, I’d like to see that script detect my Python binary/environment and then install for that specific Python binary.

I’ll also add that inference on the Raspberry Pi is a bit slower than what’s advertised by the Google Coral TPU Accelerator — that’s actually not a problem with the TPU Accelerator, but rather the Raspberry Pi.

What do I mean by that?

Keep in mind that the Raspberry Pi 3B+ only has USB 2.0 ports, while Google recommends USB 3 for optimal inference speed with the Coral USB Accelerator.

Since the RPi 3B+ doesn’t have USB 3, there’s not much we can do about that until the RPi 4 comes out — once it does, we’ll have even faster inference on the Pi using the Coral USB Accelerator.

Finally, I’ll note that once or twice during the object detection examples it appeared that the Coral USB Accelerator “locked up” and wouldn’t perform inference (I think it got “stuck” trying to load the model), forcing me to ctrl + c out of the script.

Killing the script must have prevented a critical “shut down” routine from running on the Coral — any subsequent executions of the demo Python scripts would result in an error.

To fix the problem I had to unplug the Coral USB accelerator and then plug it back in. Again, I’m not sure why that happened and I couldn’t find any documentation on the Google Coral site that referenced the issue.

Interested in using the Google Coral in your own projects?

I bet you’re just as excited about the Google Coral as I am. Along with the Movidius NCS and Jetson Nano, these devices are bringing computer vision and deep learning to resource-constrained systems such as embedded devices and the Raspberry Pi.

In my opinion, embedded CV and DL is the next big wave in the AI community. It’s so big that it may even be a tsunami — will you be riding that wave?

To help you get your start in embedded Computer Vision and Deep Learning, I have decided to write a brand new book — Raspberry Pi for Computer Vision.

Inside this book you will learn how to:

  • Build practical, real-world computer vision applications on the Pi
  • Create computer vision and Internet of Things (IoT) projects and applications with the RPi
  • Optimize your OpenCV code and algorithms on the resource constrained Pi
  • Perform Deep Learning on the Raspberry Pi (including utilizing the Movidius NCS and OpenVINO toolkit)
  • Configure your Google Coral, perform image classification and object detection, and even train + deploy your own custom models to the Coral Edge TPU!
  • Utilize the NVIDIA Jetson Nano to run multiple deep neural networks on a single board, including image classification, object detection, segmentation, and more!

I’m running a Kickstarter campaign to fund the creation of the new book, and to celebrate, I’m offering 25% OFF my existing books and courses if you pre-order a copy of RPi for CV.

In fact, the Raspberry Pi for Computer Vision book is practically free if you pre-order it with Deep Learning for Computer Vision with Python or the PyImageSearch Gurus course.

The clock is ticking and these discounts won’t last — the Kickstarter pre-sale shuts down on May 10th at 10AM EDT, after which I’m taking the deals down.

Reserve your pre-sale book now and while you are there, grab another course or book at a discounted rate.

Summary

In this tutorial, you learned how to get started with the Google Coral USB Accelerator.

We started by installing the Edge TPU runtime library on your Debian-based operating system (we specifically used Raspbian for the Raspberry Pi).

After that, we learned how to run the example demo scripts included in the Edge TPU library download.

We also learned how to install the edgetpu library into a Python virtual environment (that way we can keep our packages/projects nice and tidy).

We wrapped up the tutorial by discussing some of my thoughts, feedback, and suggestions when using the Coral USB Accelerator (be sure to refer to them first if you have any questions).

I hope you enjoyed this tutorial!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


48 Responses to Getting started with Google Coral’s TPU USB Accelerator

  1. Jaiyam April 22, 2019 at 10:13 am #

    Thank you for the great tutorial. Can you share how much time it took for the edgetpu to perform inference for mobilenet and SSD models? How does that compare with NCS1 and NCS2?

    • Adrian Rosebrock April 22, 2019 at 2:02 pm #

      I’ll be doing more tutorials on the Google Coral, including comparisons with the NCS1 and NCS2, in future tutorials. Stay tuned!

  2. wally April 22, 2019 at 11:49 am #

    Very timely post! I just got mine installed following Google’s instructions, yours are clearer!

    If you hack on the install.sh script to make 32-bit Ubuntu-Mate on the Odroid XU-4 think it’s a Raspberry Pi 3B+, it works. Although for Mate18 (current default if you buy it pre-loaded) you’ll need to “sideload” Python 3.5.6, since it defaults to Python 3.6 which I couldn’t get to work, and install to a virtual environment.

    If one wants a headstart using Coral USB TPU with OpenCV take a look at this:
    https://github.com/PINTO0309/TPU-MobilenetSSD

    • Adrian Rosebrock April 22, 2019 at 2:02 pm #

      Awesome, thanks for sharing Wally!

  3. Andrey Cheremskoy April 22, 2019 at 12:20 pm #

    How it compares to Intel Movidius?
    Thanks.

    • Adrian Rosebrock April 22, 2019 at 2:01 pm #

      I’ll be covering a comparison of the Google Coral to Intel Movidius NCS in a future post, stay tuned!

      • Jay Johnson May 28, 2019 at 6:02 am #

        I’m curious if you have also taken a look at Gyrfalcon’s PLAI plug or the Orange Pi AI stick both of which are iterations of the Lightspeeur 2801s.

        • Adrian Rosebrock May 30, 2019 at 9:15 am #

          I have not but thanks for mentioning it.

    • Tinus May 20, 2019 at 8:24 am #

      See https://qengineering.eu/deep-learning-with-raspberry-pi-and-alternatives.html
      At the end you find the comparison.

      Tinus

  4. David Bonn April 22, 2019 at 12:20 pm #

    Adrian,

    Thanks for another great post!

    I agree that edge computing is going to be the Next Big Thing for deep learning applications. Maybe we should call it AIoT? 🙂

    I was surprised at how inexpensive the USB Coral module is. Even at that low price I suspect that for large-scale deployment at low cost you’ll be designing and building a custom single-board computer. At some point I expect that there will be a system-on-a-chip that will include TPUs (or their NVIDIA equivalent).

    When you talk about having to unplug things from the Pi to get them working again — that seems to be a recurring theme with any kind of fairly exotic hardware and the Pi. I suspect a lot of the problem is that the drivers (or whatever you’d call them) on the Pi are rather primitive and don’t handle all error cases very well.

  5. atomek April 22, 2019 at 2:33 pm #

    Hi Adrian,
    What’s the inference time per frame when using SSD Mobilenet COCO?
    Thanks,
    Tom

    • Adrian Rosebrock April 25, 2019 at 8:55 am #

      I’ll be sharing a full benchmark in a separate post. I’m still gathering my results. This is just a getting started guide.

  6. wally April 22, 2019 at 4:04 pm #

    I just followed these instructions and installed the Coral edgetpu support onto a Pi3 system after I’d made a test install of OpenVINO 2019R1

    The coral test code runs successfully while the openvino real-time tutorial is also running. Nice that they don’t interfere!

    A typo crept in on the parrot example command:
    –model ~/edgetpu_models/ mobilenet_v2_1.0_224_quant_edgetpu.tflite \

    Had to remove the space to make it:
    –model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \

    • Adrian Rosebrock April 25, 2019 at 8:54 am #

      Awesome, thanks for sharing Wally!

  7. sina April 22, 2019 at 11:34 pm #

    Adrian,

    Thank you very much for all your hard works and awesome posts.

    I just have a question regarding the processing speed. You mentioned in your post that it is not as fast as, what Google claims on its website because of USB-2 speed limitations. Have you done any testing of your own to get an approximation of how much slower?

    • Adrian Rosebrock April 25, 2019 at 8:53 am #

      I have done my own testing. I’ll be sharing my results in a separate tutorial.

  8. Tim U April 22, 2019 at 11:41 pm #

    Any comment on how well we can follow your tutorials, here and in the forthcoming book, if we use the Dev board (SOM) instead of the USB device? e.g. to overcome the USB 2.0 performance hit.

    • Adrian Rosebrock April 25, 2019 at 8:53 am #

      Once I get my hands on the Dev Board I’ll be doing tutorials on those as well.

  9. Tim U April 22, 2019 at 11:45 pm #

    Is there a significant advantage to using a virtual environment on a RPi dedicated to these exercises?
    (I usually just swap out SD cards to change config/etc)

    • Adrian Rosebrock April 25, 2019 at 8:52 am #

      The advantage is that you don’t have to swap SD cards. Manually swapping is tedious and unnecessary. Virtual environments help you overcome that limitation.

  10. Lu April 23, 2019 at 12:19 am #

    Can the USB Coral run on an x86 host rather than a Raspberry Pi?

    • Adrian Rosebrock April 25, 2019 at 8:51 am #

      As long as it’s Debian-based, yes.

  11. DeepNet April 23, 2019 at 2:08 am #

    Hi Adrian,
    Thanks a lot for the awesome post again,
    Can you explain how to convert the models to tflite?

  12. Victor April 23, 2019 at 4:31 am #

    I’m very interested in this stick, especially compared to the NCS2. Thanks for your work!

    • Adrian Rosebrock April 25, 2019 at 8:50 am #

      Thanks Victor, I’m glad you enjoyed it!

  13. Brad April 23, 2019 at 7:31 am #

    What was the average frame rate using the coral usb accelerator on a usb2.0 port of the pi?

    Thanks for the great post! I’m fairly new to deep learning on the pi and your content has been extremely valuable.

    • Adrian Rosebrock April 25, 2019 at 8:49 am #

      This is just a getting started guide. I’ll be sharing benchmarks in a future tutorial.

  14. Srikanth Anantharam April 23, 2019 at 11:08 am #

    What is the average time it takes to run inference on a single frame for the various models that you have evaluated?

    • Adrian Rosebrock April 25, 2019 at 8:47 am #

      I’ll be providing a more thorough evaluation in a separate tutorial (this is just a getting started guide).

  15. wally April 24, 2019 at 3:19 pm #

    Any idea of where this error comes from, and how to fix?

    7044 package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10)

    Everything seems to work OK.

    • Adrian Rosebrock April 25, 2019 at 8:39 am #

      I saw that as well. I’m not sure what caused it but as you noted, everything seems to work fine.

  16. phil April 30, 2019 at 5:40 am #

    I am running the classify_image.py example on my RPi 3B board with Python 3.5 and I continue to get an error stating “ImportError: No module named ‘edgetpu.swig.edgetpu_cpp_wrapper'”

    I have a feeling this is related to the _edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so file not being renamed to something like _edgetpu_cpp_wrapper.so but I have also tried this.

    Any ideas on what the issue could be?

    • Adrian Rosebrock May 1, 2019 at 11:34 am #

      That sounds like it could be the issue. Did you try using virtual environments? Or using the standard install instructions without virtual environments?

  17. phil May 1, 2019 at 12:13 pm #

    Hi Adrian, thanks for the response. So it turns out that after following the Google instructions I had to run the demo’s as root (feedback provided by Google support).

    However, I have since then setup a symlink into my VirtualEnvironment for the library and this has solved the problem.

    As an aside, do you know (post install) how I would go about enabling maximum frequency on the unit?

    • Adrian Rosebrock May 8, 2019 at 1:51 pm #

      During the install of the “edgetpu” library it will ask you if you want to enable maximum frequency. The easiest way would be to re-install it. I personally haven’t tried to re-enable it post-install.

  18. Crsitian Benglenok May 10, 2019 at 10:17 am #

    Is there something negative? I think it’s almost perfect

  19. Matthew Pottinger May 11, 2019 at 8:16 pm #

    I am more interested in handheld/portable applications and you can’t get more portable than a smartphone.

    I just want to put it out there that I tested this edge TPU with a rooted samsung s7, with Linux installed via the ‘Linux Deploy’ app and it works.

    It could probably also work with a non-rooted Android phone, with Linux installed via UserLand. However to get it to work it would require writing a libusb wrapper library that forwards the few libusb function calls the api makes to libusb on Android.

    Just putting that out there in case other people are interested. I was curious and I tried it.

  20. JP Cassar May 22, 2019 at 7:09 am #

    Thank you for the great tutorial that makes me think to use a spare FireFly RK3328-CC running Ubuntu 19.04 on an aarch64 architecture. The nice thing is that the board got a USB 3.0. After installing the newest Edge TPU API version 1.9.2 and patch everything is running fine on python 3.6.
    $ lsusb
    Bus 005 Device 002: ID 18d1:9302 Google Inc.
    Bus 005 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

    $ python3 classify_image.py \
    –model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
    –label ~/edgetpu_models/imagenet_labels.txt \
    –image parrot.jpg
    —————————
    macaw
    Score : 0.99609375

    Putting the above in case anyone is trying to do the same.

  21. Paul Versteeg May 24, 2019 at 10:21 am #

    Adrian, something must have changed recently, because the first demo program no longer works because of a typo.

    There is a space in this address:
    –model ~/edgetpu_models/ mobilenet_v2_1.0_224_quant_edgetpu.tflite \

    Can you please fix this?

    Tks,

    Paul

    • Adrian Rosebrock May 30, 2019 at 9:34 am #

      Thanks Paul, I’ve updated the post to remove the space. Thanks for bringing it to my attention!

  22. Paul June 25, 2019 at 9:26 am #

    Hi Adrian, thank you very much for the tutorial and your content, which is great!

    We just got the new Raspberry Pi 4 and the installation of the edge TPU runtime library doesn’t work for us as you described. Running the install.sh script leads to the error: “Platform not supported”. Do you know a workaround or what causes the issue? We already updated the file to python3.7, which is the standard python version on the newest raspbian OS version.

    Thank you in advance!

    Paul

  23. Patrick June 26, 2019 at 8:42 am #

    “…..Since the RPi 3B+ doesn’t have USB 3, that’s not much we can do about that until the RPi 4 comes out — once it does, we’ll have even faster inference on the Pi using the Coral USB Accelerator….”

    I guess that you will have to revisit some of these blogs now….. or should I say the new book as well ?

    • Adrian Rosebrock June 26, 2019 at 11:14 am #

      I’ll be doing an updated blog post with the RPi v4 (now that it has USB 3). Results reported in the new Raspberry Pi for Computer Vision book will also use the RPi 4.

  24. Mark June 27, 2019 at 11:02 am #

    Hi QQ – now we have Rp 4 🙂 🙂 Do you know how to get the edge software to install on the board?

    Currently it’s complaining (./install.sh) “Your Platform Is Not Supported”

    Thanks

    Mark

    • Adrian Rosebrock July 4, 2019 at 10:54 am #

      I don’t have a RPi 4 yet, but once I do I’ll be doing an updated tutorial for the Google Coral USB Accelerator (and sharing some benchmark information).
