Getting started with the Intel Movidius Neural Compute Stick

Let me ask you three questions:

  1. What if you could run state-of-the-art neural networks on a USB stick?
  2. What if you could see over 10x performance on this USB stick compared to your CPU?
  3. And what if this entire device costs under $100?

Sound interesting?

Enter Intel’s Movidius Neural Compute Stick (NCS).

Raspberry Pi users will especially welcome the device as it can dramatically improve image classification and object detection speeds and capabilities. You may find that the Movidius is “just what you needed” to speed up network inference time in a small form factor and at a good price.

Inside today’s post we’ll discuss:

  • What the Movidius Neural Compute Stick is capable of
  • If you should buy one
  • How to quickly and easily get up and running with the Movidius
  • Benchmarks comparing network inference times on a MacBook Pro and Raspberry Pi

Next week I’ll provide additional benchmarks and object detection scripts using the Movidius as well.

To get started with the Intel Movidius Neural Compute Stick and to learn how you can deploy a CNN model to your Raspberry Pi + NCS, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Getting started with the Intel Movidius Neural Compute Stick

Today’s blog post is broken into five parts.

First, I’ll answer:

What is the Intel Movidius Neural Compute Stick and should I buy one?

From there I’ll explain the workflow of getting up and running with the Movidius Neural Compute Stick. The entire process is relatively simple, but it needs to be spelled out so that we understand how to work with the NCS.

We’ll then set up our Raspberry Pi with the NCS in API-only mode. We’ll also do a quick sanity check to ensure we have communication with the NCS.

Next up, I’ll walk through my custom Raspberry Pi + Movidius NCS image classification benchmark script. We’ll be using SqueezeNet, GoogLeNet, and AlexNet.

We’ll wrap up the blog post by comparing benchmark results.

What is the Intel Movidius Neural Compute Stick?

Intel’s Neural Compute Stick is a USB-thumb-drive-sized deep learning machine.

You can think of the NCS like a USB powered GPU, although that is quite the overstatement — it is not a GPU, and it can only be used for prediction/inference, not training.

I would actually classify the NCS as a coprocessor. It’s got one purpose: running (forward-pass) neural network calculations. In our case, we’ll be using the NCS for image classification.

The NCS should not be used for training a neural network model; rather, it is designed for deployable models. Since the device is meant to be used on single board computers such as the Raspberry Pi, the power draw is minimal, making it inappropriate for actually training a network.

So now you’re wondering: Should I buy the Movidius NCS?

At only $77, the NCS packs a punch. You can buy the device on Amazon or at any of the retailers listed on Intel’s site.

Under the hood of the NCS is a Myriad 2 processor (28 nm) capable of 80-150 GFLOPS performance. This processor is also known as a Vision Processing Unit (or vision accelerator) and it consumes only 1W of power (for reference, the Raspberry Pi 3 B consumes 1.2W with HDMI off, LEDs off, and WiFi on).

Whether buying the NCS is worth it to you depends on the answers to several questions:

  1. Do you have an immediate use case or do you have $77 to burn on another toy?
  2. Are you willing to deal with the growing pains of joining a young community? While certainly effective, we don’t know if these “vision processing units” are here to stay.
  3. Are you willing to devote a machine (or VM) to the SDK?
  4. Pi users: Are you willing to devote a separate Pi, or at least a separate microSD card, to the NCS? Are you aware that, due to its form factor, the device will block three USB ports unless you connect it via an extension cable?

Question 1 is up to you.

The reason I’m asking question 2 is because Intel is notorious for poor documentation and even discontinuing their products as they did with the Intel Galileo.

I’m not suggesting that either will occur with the NCS. The NCS is in the deep learning domain which is currently heading full steam ahead, so the future of this product does look bright. It also doesn’t hurt that there aren’t too many competing products.

Questions 3 and 4 (and their answers) are related. In short, you can’t isolate the development environment with virtual environments, and the installer actually removes previous installations of OpenCV from your system. For this reason, you should not let the installer scripts anywhere near your current projects and working environments. I learned the hard way. Trust me.

Hopefully I haven’t scared you off — that is not my intention. Most people will be purchasing the Movidius NCS to pair with a Raspberry Pi or other single board computer.

Question 4 is for Pi users. When it comes to the Pi, if you’ve been following any other tutorials on PyImageSearch.com, you’re aware that I recommend Python virtual environments to isolate your Python projects and associated dependencies. Python virtual environments are a best practice in the Python community.

One of my biggest gripes with the Neural Compute Stick is that Intel’s install scripts will actually make your virtual environments nearly unusable. The installer downloads packages from the Debian/Ubuntu apt repositories and changes your PYTHONPATH system variable.

It gets really messy really quickly, so to be on the safe side, you should use a fresh microSD card (purchase a 32GB 98MB/s microSD on Amazon) with Raspbian Stretch. You might even buy another Pi to marry to the NCS if you’re working on a deployable project.

When I received my NCS I was excited to plug it into my Pi…but unfortunately I was off to a rough start.

Check out the image below.

I found out that with the NCS plugged in, it blocks all three other USB ports on my Pi. I can’t even plug my wireless keyboard/mouse dongle into another port!

Now, I understand that the NCS is meant to be used with devices other than the Raspberry Pi, but given that the Raspberry Pi is one of the most used single board systems, I was a bit surprised that Intel didn’t consider this. Perhaps it’s because the device consumes a lot of power and they want you to think twice about plugging additional peripherals into your Pi.

Figure 1: The Intel Movidius NCS blocks the 3 other USB ports from easy access.

This is very frustrating. The solution is to buy a 6in USB 3.0 extension such as this one:

Figure 2: Using a 6in USB extension dongle with the Movidius NCS and Raspberry Pi allows access to the other 3 USB ports.

With those considerations in mind, the Movidius NCS is actually a great device at a good value. So let’s dive into the workflow.

Movidius NCS Workflow

Figure 3: The Intel Movidius NCS workflow (image credit: Intel)

Working with the NCS is quite easy once you understand the workflow.

The bottom line is that you need a graph file to deploy to the NCS. This graph file can live in the same directory as your Python script if you’d like — it will get sent to the NCS using the NCS API.

In general, the workflow of using the NCS is:

  1. Use a pre-trained TensorFlow/Caffe model or train a network with TensorFlow/Caffe on Ubuntu or Debian.
  2. Use the NCS SDK toolchain to generate a graph file.
  3. Deploy the graph file and NCS to your single board computer running a Debian flavor of Linux. I used a Raspberry Pi 3 B running Raspbian (Debian based).
  4. With Python, use the NCS API to send the graph file to the NCS and request predictions on images. Process the prediction results and take an (arbitrary) action based on the results.

Today, we’ll set up the Raspberry Pi with the NCS API-only mode toolchain. This setup also provides the bare-minimum SDK tools to create graph files. It does not install Caffe, TensorFlow, etc.

For the sake of simplicity, we’ll be using a pre-trained Caffe model and prototxt files that come from the Movidius GitHub page (indirectly a Makefile downloads them from the DeepScale GitHub repo).

Executing the Makefile will:

  1. Download the Caffe files
  2. Use the NCS SDK toolchain to generate a graph file

From there, we’ll try out the Movidius using their example script + a single static image.

Finally, we’ll create our own custom image classification benchmarking script. You’ll notice that this script is based heavily on a previous post on Deep learning with the Raspberry Pi and OpenCV.

First, let’s prepare our Raspberry Pi.

Setting up your Raspberry Pi and the NCS in API-only mode

By reading some sparse documentation, I learned the hard way that the Raspberry Pi can’t handle the full SDK (what was I thinking?).

I later started from square one and found better documentation that instructed me to set up my Pi in API-only mode (now this makes sense). I was quickly up and running in this fashion and I’ll show you how to do the same thing.

For your Pi, I recommend that you install the SDK in API-only mode on a fresh installation of Raspbian Stretch.

To install the Raspbian Stretch OS on your Pi, grab the Stretch image here and then flash the card using these instructions.

From there, boot up your Pi and connect to WiFi. You can complete all of the following actions over an SSH connection, or using a monitor + keyboard/mouse (with the 6in extension dongle listed above, as the USB ports are blocked by the NCS) if you prefer.

Let’s update the system:
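Assuming a fresh Raspbian Stretch install, the standard pair of update commands applies:

```shell
# refresh the package index and upgrade installed packages
sudo apt-get update && sudo apt-get upgrade
```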

Then let’s install a bunch of packages:
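The authoritative dependency list lives in the NCSDK install scripts, so defer to those if they differ; a representative set along these lines gets installed:

```shell
# representative NCSDK dependencies -- the exact list is in the
# NCSDK install scripts, so treat this as illustrative
sudo apt-get install -y build-essential git cmake libusb-1.0-0-dev \
    libprotobuf-dev protobuf-compiler libopencv-dev libatlas-base-dev \
    python3-dev python3-pip python3-numpy python3-scipy \
    python3-matplotlib python3-h5py python3-protobuf
```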

Notice that we’ve installed libopencv-dev from the Debian repositories. This is the first time I’m ever recommending it and hopefully the last time as well. Installing OpenCV via apt-get (1) installs an older version of OpenCV, (2) does not install the full version of OpenCV, and (3) does not take advantage of various system optimizations. Again, I do not recommend this method of installing OpenCV.

Additionally, you can see we’re installing a whole bunch of packages that I generally prefer to manage inside Python virtual environments with pip. Be sure you are using a fresh memory card so you don’t mess up other projects you’ve been working on with your Pi.

From there, let’s make a workspace directory and clone the NCSDK:
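The commands below follow the NCSDK GitHub repo (the `~/workspace` location is just a convention; adjust to taste):

```shell
# create a workspace directory and clone the NCSDK into it
mkdir -p ~/workspace
cd ~/workspace
git clone https://github.com/movidius/ncsdk
```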

And while we’re at it, let’s clone down the NC App Zoo as we’ll want it for later.
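Again assuming the `~/workspace` convention from above:

```shell
# clone the Neural Compute App Zoo alongside the NCSDK
cd ~/workspace
git clone https://github.com/movidius/ncappzoo
```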

And from there, navigate into the following directory:
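This is the API directory inside the NCSDK clone (path per the NCSDK repo layout; adjust if you cloned elsewhere):

```shell
cd ~/workspace/ncsdk/api/src
```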

In that directory, we’ll use the Makefile to install the SDK in API-only mode:
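A minimal invocation, assuming the NCSDK repo layout described above:

```shell
# build and install the NCS API (API-only mode, no full SDK/toolchain)
make
sudo make install
```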

Test the Raspberry Pi installation on the NCS

Let’s test the installation by using code from the NC App Zoo. Be sure that the NCS is plugged into your Pi at this point.
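A quick sanity check (assuming the ncappzoo clone from earlier) is the hello_ncs_py app, which simply opens and closes the device:

```shell
# open and close the NCS via the Python API as a communication test
cd ~/workspace/ncappzoo/apps/hello_ncs_py
make run
```

If the NCS is detected, the app reports that the device opened and closed normally.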

You should see output reporting that the device was opened and closed successfully.

Generating your Movidius NCS neural network

SqueezeNet is included with the NC App Zoo and it is easy to generate the required graph file. The Makefile does it all for us:
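Assuming the ncappzoo clone from earlier, the following downloads the Caffe files and generates the graph:

```shell
# download the SqueezeNet Caffe files and compile the NCS graph file
cd ~/workspace/ncappzoo/caffe/SqueezeNet
make
```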

Classification with the Movidius NCS

If you open up the run.py  file that we just created with the Makefile, you’ll notice that most inputs are hardcoded and that the file is ugly in general.

Instead, we’re going to create our own file for classification and benchmarking.

In a previous post, Deep learning on the Raspberry Pi with OpenCV, I described how to use OpenCV’s DNN module to perform object classification.

Today, we’re going to modify that exact same script to make it compatible with the Movidius NCS.

If you compare both scripts you’ll see that they are nearly identical. For this reason, I’ll simply be pointing out the differences, so I encourage you to refer to the previous post for full explanations.

Each script is included in the “Downloads” section of this blog post, so be sure to grab the zip and follow along.

Let’s review the differences in the modified file named  pi_ncs_deep_learning.py :

Here we are importing our packages — the only difference is on Line 2 where we import the mvncapi as mvnc . This import is for the NCS API.

From there, we need to parse our command line arguments:

In this block I’ve removed two arguments ( --prototxt  and --model ) while adding two arguments ( --graph  and --dim ).

The --graph  argument is the path to our graph file — it takes the place of the prototxt and model.

Graph files can be generated via the NCS SDK, which we’ll cover in next week’s blog post. I’ve included the graph files for this week in the “Downloads” section for convenience. In the case of Caffe, the graph is generated from the prototxt and model files with the SDK.

The --dim  argument simply specifies the pixel dimensions of the (square) image we’ll be sending through the neural network. Dimensions of the image were hardcoded in the previous post.
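Putting the argument descriptions above together, a hedged sketch of the parser looks like this (the actual script is in the “Downloads”; the sample values below are hypothetical):

```python
# Sketch of the argument parsing described above; argument names match
# the post, sample values are purely illustrative.
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-g", "--graph", required=True,
    help="path to the graph file generated by the NCS SDK")
ap.add_argument("-d", "--dim", type=int, required=True,
    help="(square) pixel dimension of the network's input image")
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
ap.add_argument("-l", "--labels", required=True,
    help="path to the class labels file (e.g., synset_words.txt)")

# simulate a command line invocation for demonstration purposes
args = vars(ap.parse_args([
    "--graph", "graphs/squeezenetgraph",
    "--dim", "227",
    "--image", "images/barbershop.png",
    "--labels", "synset_words.txt"]))
print(args["dim"])
```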

Next, we’ll load the class labels and input image from disk:

Here we’re loading the class labels from  synset_words.txt  with the same method as previously.

Then, we utilize OpenCV to load the image.

One slight change is that we’re making a copy of the original image on Line 26. We need two copies — one for preprocessing/normalization/classification and one for displaying to our screen later on.

Line 27 resizes our image and you’ll notice that we’re using args["dim"]  — our command line argument value.

Common choices for width and height image dimensions inputted to Convolutional Neural Networks include 32 × 32, 64 × 64, 224 × 224, 227 × 227, 256 × 256, and 299 × 299. Your exact image dimensions will depend on which CNN you are using.

Line 28 converts the image array data to float32  format which is a requirement for the NCS and the graph files we’re working with.

Next, we perform mean subtraction, but we’ll do it in a slightly different way this go around:

We load the ilsvrc_2012_mean.npy  file on Line 31. This comes from the ImageNet Large Scale Visual Recognition Challenge and can be used for SqueezeNet, GoogLeNet, AlexNet, and typically all other networks trained on ImageNet that utilize mean subtraction (we hardcode the path for this reason).

The image mean subtraction is computed on Lines 32-34 (using the same method shown in the Movidius example scripts on GitHub).
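The approach can be sketched with stand-in data (the real script loads the actual image and the 3 x 256 x 256 `ilsvrc_2012_mean.npy` array from disk; the values below are illustrative):

```python
# Sketch of the channel-wise mean subtraction described above,
# using stand-in data instead of the real image and .npy file.
import numpy as np

dim = 227  # e.g., SqueezeNet/AlexNet input size
# stand-in for the resized image cast to float32
image = np.random.rand(dim, dim, 3).astype(np.float32) * 255.0

# stand-in for np.load("ilsvrc_2012_mean.npy"), a 3 x 256 x 256 array
# of per-pixel channel means (approximate ImageNet values used here)
ilsvrc_mean_array = np.zeros((3, 256, 256), dtype=np.float32)
ilsvrc_mean_array[0] = 104.0
ilsvrc_mean_array[1] = 117.0
ilsvrc_mean_array[2] = 123.0

# collapse to one mean per channel, then subtract channel-wise
# (the same approach used in the Movidius example scripts)
ilsvrc_mean = ilsvrc_mean_array.mean(1).mean(1)
image[:, :, 0] -= ilsvrc_mean[0]
image[:, :, 1] -= ilsvrc_mean[1]
image[:, :, 2] -= ilsvrc_mean[2]
```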

From there, we need to establish communication with the NCS and load the graph into the NCS:

As you can tell, the above code block is completely different because last time we didn’t use the NCS at all.

Let’s walk through it — it’s actually very straightforward.

In order to prepare to use a neural network on the NCS we need to perform the following actions:

  1. List all connected NCS devices (Line 38).
  2. Break out of the script altogether if there’s a problem finding one NCS (Lines 41-43).
  3. Select and open device0  (Lines 48 and 49).
  4. Load the graph file into Raspberry Pi memory so that we can transfer it to the NCS with the API (Lines 53 and 54).
  5. Load/allocate the graph on the NCS (Line 58).

The Movidius developers certainly got this right — their API is very easy to use!
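For reference, the five steps above map roughly to the NCSDK v1 Python API as follows. This sketch requires the NCS hardware and the API-only toolchain, so treat it as illustrative; the graph path is hypothetical:

```python
# Rough sketch of the NCSDK v1 Python API calls (requires NCS hardware).
import numpy as np
from mvnc import mvncapi as mvnc

# 1. list all connected NCS devices
devices = mvnc.EnumerateDevices()

# 2. bail out if no NCS is found
if len(devices) == 0:
    raise SystemExit("[INFO] no NCS devices found")

# 3. select and open the first device
device = mvnc.Device(devices[0])
device.OpenDevice()

# 4. read the graph file into Raspberry Pi memory (hypothetical path)
with open("graphs/squeezenetgraph", mode="rb") as f:
    graph_in_memory = f.read()

# 5. load/allocate the graph on the NCS
graph = device.AllocateGraph(graph_in_memory)

# prediction is a two-step LoadTensor/GetResult action
image = np.zeros((227, 227, 3), dtype=np.float16)  # preprocessed image goes here
graph.LoadTensor(image, "user object")
(preds, userobj) = graph.GetResult()

# housekeeping: free the graph memory and close the device
graph.DeallocateGraph()
device.CloseDevice()
```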

In case you missed it above, it is worth noting here that we are loading a pre-trained graph. The training step has already been performed on a more powerful machine and the graph was generated by the NCS SDK. Training your own network is outside the scope of this blog post, but covered in detail in both PyImageSearch Gurus and Deep Learning for Computer Vision with Python.

You’ll recognize the following block if you read the previous post, but you’ll notice three changes:

Here we will classify the image with the NCS and the API.

Using our graph object, we call graph.LoadTensor  to make a prediction and graph.GetResult  to grab the resulting predictions. This is a two-step action, where before we simply called net.forward  on a single line.

We time these actions to compute our benchmark while displaying the result to the terminal just like previously.

We perform our housekeeping duties next by clearing the graph memory and closing the connection to the NCS on Lines 69 and 70.

From there we’ve got one remaining block to display our image to the screen (with a very minor change):

In this block, we draw the highest prediction and probability on the top of the image. We also print the top-5 predictions + probabilities in the terminal.

The very minor change in this block is that we’re drawing the text on image_orig  rather than image .

Finally, we display the output image_orig  on the screen. If you are using SSH to connect with your Raspberry Pi this will only work if you supply the  -X  flag for X11 forwarding when SSH’ing into your Pi.

To see the results of applying deep learning image classification on the Raspberry Pi using the Intel Movidius Neural Compute Stick and Python, proceed to the next section.

Raspberry Pi and deep learning results

For this benchmark, we’re going to compare using the Pi CPU to using the Pi paired with the NCS coprocessor.

Just for fun, I also threw in the results from using my MacBook Pro with and without the NCS (which requires an Ubuntu 16.04 VM that we’ll be building and configuring next week).

We’ll be using three models:

  1. SqueezeNet
  2. GoogLeNet
  3. AlexNet

Just to keep things simple, we’ll be running the classification on the same image each time — a barber chair:

Figure 4: A barber chair in a barbershop is our test input image for deep learning on the Raspberry Pi with the Intel Movidius Neural Compute Stick.

Since the terminal output results are quite long, I’m going to leave them out of the following blocks. Instead, I’ll be sharing a table of the results for easy comparison.

Here are the CPU commands (you can actually run this on your Pi or on your desktop/laptop despite pi  in the filename):
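A representative invocation (the model/image filenames are illustrative; the actual files ship in the “Downloads”):

```shell
# classify with the CPU via OpenCV's DNN module
# (script from the previous post; paths are illustrative)
python pi_deep_learning.py --prototxt models/bvlc_googlenet.prototxt \
    --model models/bvlc_googlenet.caffemodel \
    --labels synset_words.txt --image images/barbershop.png
```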

Note:  In order to use the OpenCV DNN module, you must have OpenCV 3.3 at a minimum. You can install an optimized OpenCV 3.3 on your Raspberry Pi using these instructions.

And here are the NCS commands using the new modified script that we just walked through above (you can actually run this on your Pi or on your desktop/laptop despite pi  in the filename):
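A representative invocation of the modified script (graph/image filenames are illustrative; the actual files ship in the “Downloads”):

```shell
# classify with the NCS using a precompiled graph file
# (paths are illustrative)
python pi_ncs_deep_learning.py --graph graphs/googlenetgraph \
    --dim 224 --labels synset_words.txt --image images/barbershop.png
```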

Note: In order to use the NCS, you must have a Raspberry Pi loaded with a fresh install of Raspbian (Stretch preferably) and the NCS API-only mode toolchain installed as per the instructions in this blog post. Alternatively you may use an Ubuntu machine or VM.

Please pay attention to both Notes above. You’ll need two separate microSD cards to complete these experiments. The NCS API-only mode toolchain uses OpenCV 2.4 and therefore does not have the new DNN module. You cannot use virtual environments with the NCS, so you need completely isolated systems. Do yourself a favor and get a few spare microSD cards — I like the 32 GB 98MB/s cards. Dual booting your Pi might be an option, but I’ve never tried it and don’t want to deal with the hassle of partitioned microSD cards.

Now for the results summarized in a table:

Figure 5: Intel NCS with the Raspberry Pi benchmarks. I compared classification using the Pi CPU and using the Movidius NCS. On ImageNet, the NCS achieves a 395 to 545% speedup.

The NCS is clearly faster on the Pi when compared to using the Pi’s CPU for classification achieving a 6.45x speedup (545%) on GoogLeNet. The NCS is sure to bring noticeable speed to the table on larger networks such as the three compared here.

Note: The results gathered on the Raspberry Pi used my optimized OpenCV install instructions. If you are not using the optimized OpenCV install, you would see speedups in the range of 10-11x!

When comparing execution on my MacBook Pro with and without the NCS (the NCS running via the Ubuntu SDK VM), performance with the NCS is worse. This is expected for a number of reasons.

For starters, my MBP has a much more powerful CPU. It turns out that it’s faster to run the full inference on the CPU versus the added overhead of moving the image from the CPU to the NCS and then pulling the results back.

Second, there is USB overhead when conducting USB passthrough to the VM. USB 3 isn’t supported via the VirtualBox USB passthrough either.

It is worth noting that the Raspberry Pi 3 B has USB 2.0. If you really want speed for a single board computer setup, select a machine that supports USB 3.0. The difference in data transfer speed alone will be apparent if you are benchmarking.

Next week’s results will make the difference even more evident when we compare real-time video FPS benchmarks, so be sure to check back on Monday.

Where to from here?

I’ll be back soon with another blog post to share with you how to generate your own custom graph files for the Movidius NCS.

I’ll also be describing how to perform object detection in realtime video using the Movidius NCS — we’ll benchmark and compare the FPS speedup and I think you’ll be quite impressed.

In the meantime, be sure to check out the Movidius blog and TopCoder Competition.

Movidius blog on GitHub

Intel and Movidius are keeping their blog up to date on GitHub. Be sure to bookmark their page and/or subscribe to RSS:

developer.movidius.com/blog

You might also want to sign into GitHub and click the “watch” button on the Movidius repos:

TopCoder Competition

Figure 6: Earn up to $8,000 with the Movidius NCS on TopCoder.

Are you interested in pushing the limits of the Intel Movidius Neural Compute Stick?

Intel is sponsoring a competition on TopCoder.

There are $20,000 in prizes up for grabs (first place wins $8,000)!

Registration and submission close on February 26, 2018.

Keep track of the leaderboard and standings!

Summary

Today we explored Intel’s new Movidius Neural Compute Stick. My goal today was to expose you to this new deep learning device (which we’ll be using in future blog posts as well). I also demonstrated how to use the NCS workflow and API.

In general, the NCS workflow involves:

  1. Training a network with TensorFlow or Caffe using a machine running Ubuntu/Debian (or using a pre-trained network).
  2. Using the NCS SDK to generate a graph file.
  3. Deploying the graph file and NCS to your single board computer running a Debian flavor of Linux. We used a Raspberry Pi 3 B running Raspbian (Debian based).
  4. Performing inference, classification, object detection, etc.

Today, we skipped Steps 1 and 2. Instead I am providing graph files which you can begin using on your Pi immediately.

Then, we wrote our own classification benchmarking Python script and analyzed the results, which demonstrate a significant speedup on the Raspberry Pi (6.45x with the optimized OpenCV install, roughly 10x against a non-optimized install).

I’m quite impressed with the NCS capabilities so far — it pairs quite well with the Raspberry Pi and I think it is a great value if (1) you have a use case for it or (2) you just want to hack and tinker.

I hope you enjoyed today’s introductory post on Intel’s new Movidius Neural Compute Stick!

To stay informed about PyImageSearch blog posts, sales, and events such as PyImageConf, be sure to enter your email address in the form below.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


34 Responses to Getting started with the Intel Movidius Neural Compute Stick

  1. Alexander Sack February 12, 2018 at 10:57 am #

    Adrian, how many custom graphs have you actually tried with the NCS?

    I’ve been an early adopter of this stick and even run it under docker (you have to edit the install process a bit to get it to go but it will work that way quite happily).

    I’ve found that Intel’s support for Tensorflow is woefully lacking, and the procedure you have to do to edit your graph to get it ready to compile is tedious and error prone. My biggest complaint hands down is that you really need to know Caffe to use it effectively (and I’m not a Caffe kinda guy, I prefer hot chocolate) and I have still not been able to do Keras->TF->Caffe->NCS or even PyTorch->Caffe->NCS (though I am still experimenting with the latter).

    • Adrian Rosebrock February 12, 2018 at 4:39 pm #

      Great question, thanks for asking Alexander.

      To be honest, the number of custom graphs I’ve played around with is pretty small (6-7 tops). About half of them required some sort of change to the Caffe prototxt model definition file. I have not tried any TensorFlow models yet.

      I hope Intel decides to support Keras inside their SDK in the future. Trying to train via Keras, export into TensorFlow, and then edit the model graph for the NCS sounds like a rat’s nest of issues. Even though you’re not a fan of Caffe, it’s likely the easiest way to go from trained model to NCS.

  2. Philipp February 12, 2018 at 11:22 am #

    I love the NCS and really liked how easy it was to set up, even for me.
    I am still desperately waiting for the ability to run Tensorflow MobileNets SSD models for Object Detection as this is still not possible…
    Have a custom trained model that is just waiting for it to get on that stick!!
    🙂

    • Adrian Rosebrock February 12, 2018 at 4:37 pm #

      Hey Philipp — I’ll be covering Caffe SSD + MobileNets on the NCS next week. If I have time I’ll try to get it to work with a TensorFlow SSD + MobileNet. Is there a particular SSD + MobileNet you are trying to work with?

      • Philipp February 13, 2018 at 2:10 pm #

        Oh that would be wonderful!
        Like many others who just got into Object Detection I am working with the SSD Mobilenet V1 Coco (11_06_2017) from the Tensorflow Object Detection Model Zoo
        As far as I know Tensorflow SSD Models are not supported by the NCSDK

  3. Seamus February 12, 2018 at 12:01 pm #

    Thanks Adrian for another great blog. I just bought one for reasons 1 through 4. 🙂

    • Adrian Rosebrock February 12, 2018 at 4:36 pm #

      Enjoy it Seamus, let me know how it works for you 🙂

  4. wally February 12, 2018 at 12:45 pm #

    How does this compare to the Google AIY Vision Kit co-processor board? There are initial issues with the AIY Vision kit too, such that Microcenter gave me my money back, essentially issuing a recall.

    Obvious differences are the AIY kit board inputs direct from a Pi2 camera module and does a pass through to the PiZero camera port — this was the issue as the supplied cable didn’t fit the connector on the AIY board 🙁

    I can afford a $77 “toy” so I will be ordering one and look forward to part 2.

    • Adrian Rosebrock February 12, 2018 at 4:35 pm #

      I have a Google AIY Vision Kit but I haven’t yet played around with it. I’m planning on doing a tutorial on the Google AIY later this month/early next and then I’ll be able to better discuss the differences between the two.

  5. Sergei February 12, 2018 at 1:09 pm #

    Arr, this morning I’ve met this amazing stick, and this evening you published a manual. That’s destiny!

    • Adrian Rosebrock February 12, 2018 at 4:35 pm #

      I’m glad the post was timely, Sergei! 🙂

  6. Ella February 12, 2018 at 2:17 pm #

    Thank you so much for the amazing post Adrian!!! Would you consider doing another tutorial on chaining multiple NCS’s in the future for video processing?

    • Adrian Rosebrock February 12, 2018 at 4:34 pm #

      I’ll be doing another blog post next week that covers video processing. It doesn’t cover using multiple NCS, but it does help you get started on the video processing route. Be on the look out for it!

  7. Philippe Rivest February 12, 2018 at 3:53 pm #

    Hi!
    Great guide 😀

    Is it complicated to create my own models? For instance a model that classifies musical instruments.

    Thank you

    • Adrian Rosebrock February 12, 2018 at 4:34 pm #

      Keep in mind that the Movidius is currently only supporting Caffe and TensorFlow models. Depending on how complex your model is and any type of special layers you use, it could be non-trivial to convert the model using the Movidius SDK.

      It sounds like you’re interested in studying the basics of deep learning, which by definition, includes training your own models. I have a number of tutorials that cover this on the PyImageSearch blog. You should also take a look at Deep Learning for Computer Vision with Python, where I discuss deep learning in detail.

      I hope that helps!

  8. simon February 12, 2018 at 7:43 pm #

    Hi, I have the same USB stick and wonder if it is able to run some custom DNN like SSD or YOLO?

    • Adrian Rosebrock February 13, 2018 at 9:32 am #

      Yep! It’s absolutely possible to run SSD and YOLO on the Movidius. I’ll be demonstrating how to use the Movidius for object detection in next week’s post. Stay tuned.

  9. Jason Hoffman February 12, 2018 at 9:00 pm #

    Adrian, I had forgotten what a fantastic writer and teacher you are. Every couple of months I check back in, and it makes me wish I had a project to put your wisdom to use on. Keep up the great work, doc!

    • Adrian Rosebrock February 13, 2018 at 9:31 am #

      Thank you for the kind words Jason 🙂

  10. Foggy February 14, 2018 at 6:31 am #

    Can this device be used to speed up your home surveilance rpi security cam?

    • Adrian Rosebrock February 15, 2018 at 8:57 am #

      No, the home surveillance Raspberry Pi security cam is not using any deep learning. The Movidius NCS is meant to be used for running networks at a faster speed.

  11. JBeale February 14, 2018 at 10:18 am #

    Looking at one of the Movidius github examples, https://github.com/movidius/ncappzoo/blob/master/caffe/SSD_MobileNet/run.py
    it looks to me like they have some bugs in their code they are trying to work around (and even a typo in the comment explaining it- they mean “non-finite” I think, not “non infinite”):

    # boxes with non infinite (inf, nan, etc) numbers must be ignored
    print('box at index: ' + str(box_index) + ' has nonfinite data, ignoring it')

    • Adrian Rosebrock February 15, 2018 at 8:57 am #

      Getting the SSD + MobileNet detector to run was a bit of a process. I’ll be discussing it in next weeks post.

  12. kaisar khatak February 14, 2018 at 11:47 pm #

    Adding compute to the PI via USB? Very cool, even despite the USB 2.0 HW constraint. Nvidia TX1/TX2 systems still preferred though…

    Have you tried running the video face matcher from the app zoo? It looks like code depends on opencv 3.3 built from source. Apparently, the NCS supports newer versions of opencv, though I did see “-D BUILD_opencv_cnn_3dobj=OFF \ -D BUILD_opencv_dnn_modern=OFF ” in the install-opencv-from_source.sh script.

    https://github.com/movidius/ncappzoo/tree/master/apps/video_face_matcher

    Cheers.

    • Adrian Rosebrock February 15, 2018 at 8:59 am #

      I have not tried to run the face matcher. When running the install scripts for the Movidius it forcibly uninstalled previous versions of OpenCV on my system and then installed OpenCV 2.4. Hacking the make script to compile OpenCV 3 instead might be possible but it’s not something I’ve tried.

      The TX1 and TX2 are great devices but they also have a heftier price tag. It’s hard to say which one is “preferable” as they both have their use cases. I would likely recommend on a case-by-case basis rather than saying “always use this one”.

  13. Peter van Lith February 15, 2018 at 11:20 am #

    Hi Adrian.
    While playing around with the Movidius I am running into a problem with the first SqueezeNet example. First of all in the previous block you are using python3, in this one it says python. Isn’t that calling python2 instead of python3?
    When I use python3 it starts executing the make file but fails because it cannot find mvNCProfile. It seems as if the api-only install is missing something or is there perhaps a problem with the PYTHONPATH ?

  14. John Beale February 17, 2018 at 7:46 pm #

    One errata: I followed these instructions from a fresh Raspbian install, but I found there was one item missing. I also had to do: sudo apt-get install python-opencv
    after that, I was able to run the SqueezeNet ‘run.py’ example and see electric guitar 99.12%

    • Adrian Rosebrock February 18, 2018 at 9:40 am #

      Thanks for sharing, John!

  15. Raghvendra February 18, 2018 at 3:01 am #

    Hi,

    The “Test the Raspberry Pi installation on the NCS” step was a SUCCESS. But after that, when I try “Generating your Movidius NCS neural network”, it gives me the following error. What did I do wrong?

    making prototxt
    Prototxt file already exists

    making profile
    mvNCProfile deploy.prototxt -s 12
    make: mvNCProfile: Command not found
    Makefile:73: recipe for target 'profile' failed
    make: *** [profile] Error 127

    • Colin February 19, 2018 at 5:41 am #

      I run into the same problem. Looks like we ran into the same problem at the same time.

    • Adrian Rosebrock February 19, 2018 at 8:40 am #

      Hi Raghvendra — I think you are using the Makefile from Movidius. Is that correct? If so, the first message is that the prototxt has already been formatted and created. I’m not sure why the second message says mvNCProfile failed. Which model are you building, and can you include a link to the GitHub page if that’s where it came from?

  16. Sandor Seres February 18, 2018 at 3:47 pm #

    Hi,
    I have tried to move my own trained network (written in Keras with a TensorFlow backend) to the stick (12 layers, plus some Dropout).
    I already found
    https://github.com/ardamavi/Intel-Movidius-NCS-Keras to use, and it seems to mostly work.

    But I still have an issue with the Dropout layers…

    tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'dropout_5/keras_learning_phase' with dtype bool
    [[Node: dropout_5/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

    I am thinking what to do?
    – remove the Dropouts from the final model (but how..)
    – retrain the whole model again without Dropout ( +10 hour run on AWS p2.xlarge)

    Anyone had similar problem?
    S&|

    • David Hoffman February 19, 2018 at 8:26 am #

      Sandor — I find it hard to believe (and quite frustrating) that mvNCCompile doesn’t support dropout regularization (or just doesn’t play nice with it). It appears that other users on the Movidius Forums (such as Yang) have experienced your exact problem. The solution mentioned there is to pass a constant of 1.0 from the dropout nodes. Dropout with the Movidius NCS might be a future blog post idea for Adrian. Report back if you are able to overcome this hurdle.

Trackbacks/Pingbacks

  1. Real-time object detection on the Raspberry Pi with the Movidius NCS - PyImageSearch - February 19, 2018

    […] I’m enjoying your blog and I especially liked last week’s post about image classification with the Intel Movidius NCS. […]
