Real-time object detection on the Raspberry Pi with the Movidius NCS

Real-time object detection on the Raspberry Pi + Movidius NCS GIF

Today’s post is inspired by Danielle, a PyImageSearch reader who emailed me last week and asked:

Hi Adrian,

I’m enjoying your blog and I especially liked last week’s post about image classification with the Intel Movidius NCS.

I’m still considering purchasing an Intel Movidius NCS for a personal project.

My project involves object detection with the Raspberry Pi where I’m using my own custom Caffe model. The benchmark scripts you supplied for applying object detection on the Pi’s CPU were too slow and I need faster speeds.

Would the NCS be a good choice for my project and help me achieve a higher FPS?

Great question, Danielle. Thank you for asking.

The short answer is yes, you can use the Movidius NCS for object detection with your own custom Caffe model. You’ll even achieve high frame rates if you’re processing live or recorded video.

…but there’s a catch.

I told Danielle that she’ll need the full-blown Movidius SDK installed on her (Ubuntu 16.04) machine. I also mentioned that generating graph files from Caffe models isn’t always straightforward.

Inside today’s post you will learn how to:

  • Install the Movidius SDK on your machine
  • Generate an object detection graph file using the SDK
  • Write a real-time object detection script for the Raspberry Pi + NCS

After going through the post you’ll have a good understanding of the Movidius NCS and whether it’s appropriate for your Raspberry Pi + object detection project.

To get started with real-time object detection on the Raspberry Pi, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Real-time object detection on the Raspberry Pi

Today’s blog post is broken into five parts.

First, we’ll install the Movidius SDK and then learn how to use the SDK to generate the Movidius graph files.

From there, we’ll write a script for real-time object detection with the Intel Movidius Neural Compute Stick that can be used with the Pi (or an alternative single board computer with minor modifications).

Next, we’ll test the script + compare results.

In a previous post, we learned how to perform real-time object detection in video on the Raspberry Pi using the CPU and the OpenCV DNN module. We achieved approximately 0.9 FPS which serves as our benchmark comparison. Today, we’re going to see how the NCS paired with a Pi performs against the Pi CPU using the same model.

And finally, I’ve captured some Frequently Asked Questions (FAQs). Refer to this section often — I expect it to grow as I receive comments and emails.

Installing the Intel Movidius SDK

Figure 1: The Intel Movidius NCS workflow (image credit: Intel)

Last week, I reviewed the Movidius Workflow. The workflow has four basic steps:

  1. Train a model using a full-size machine
  2. Convert the model to a deployable graph file using the SDK and an NCS
  3. Write a Python script which deploys the graph file and processes the results
  4. Deploy the Python script and graph file to your single board computer equipped with an Intel Movidius NCS

In this section we’ll learn how to install the SDK which includes TensorFlow, Caffe, OpenCV, and the Intel suite of Movidius tools.

Requirements:

  • Stand-alone machine or VM. We’ll install Ubuntu 16.04 LTS on it
  • 30-60 minutes of time depending on download speed and machine capability
  • Movidius NCS USB stick

I highlighted “Stand-alone” as it’s important that this machine only be used for Movidius development.

In other words, don’t install the SDK on a “daily development and productivity use” machine where you might have Python Virtual Environments and OpenCV installed. The install process is not entirely isolated and can/will change existing libraries on your system.

However, there is an alternative:

Use a VirtualBox Virtual Machine (or other virtualization system) and run an isolated Ubuntu 16.04 OS in the VM.

The advantage of a VM is that you can install it on your daily use machine and still keep the SDK isolated. The disadvantage is that you won’t have access to a GPU via the VM.

Danielle wants to use a Mac and VirtualBox works well on macOS, so let’s proceed down that path. Note that you could also run VirtualBox on a Windows or Linux host which may be even easier.

Before we get started, I want to bring attention to non-standard VM settings we’ll be making. We’ll be configuring USB settings which will allow the Movidius NCS to stay connected properly.

As far as I can tell from the forums, these are Mac-specific VM USB settings (but I’m not certain). Please share your experiences in the comments section.

Download Ubuntu and Virtualbox

Let’s get started.

First, download the Ubuntu 16.04 64-bit .iso image from the official Ubuntu 16.04.3 LTS download page. You can grab the .iso directly, or use the torrent for a faster download.

While Ubuntu is downloading, if you don’t have Oracle VirtualBox, grab the installer that is appropriate for your OS (I’m running macOS). You can download VirtualBox here.

Non-VM users: If you aren’t going to be installing the SDK on a VM, then you can skip downloading/installing VirtualBox. Instead, scroll down to “Install the OS” but ignore the information about the VM and the virtual optical drive — you’ll probably be installing with a USB thumb drive.

After you’ve got VirtualBox downloaded, and while the Ubuntu .iso continues to download, you can install VirtualBox. Installation is incredibly easy via the wizard.

From there, since we’ll be using USB passthrough, we need the Extension Pack.

Install the Extension Pack

Let’s navigate back to the VirtualBox download page and download the Oracle VM Extension Pack if you don’t already have it.

The version of the Extension Pack must match the version of VirtualBox you are using. If you have any VMs running, you’ll want to shut them down before installing the Extension Pack. Installing the Extension Pack is a breeze.

Create the VM

Once the Ubuntu 16.04 image is downloaded, fire up VirtualBox, and create a new VM:

Figure 2: Creating a VM for the Intel Movidius SDK.

Give your VM reasonable settings:

  • I chose 2048MB of memory for now.
  • I selected 2 virtual CPUs.
  • I set up a 40GB dynamically allocated VDI (VirtualBox Disk Image).

The first two settings are easy to change later for best performance of your host and guest OSes.

As for the third setting, it is important to give your system enough space for the OS and the SDK. If you run out of space, you could always “connect” another virtual disk and mount it, or you could expand the OS disk (advanced users only).

USB passthrough settings

A VM, by definition, runs as software. Inherently, this means it does not have access to hardware unless you specifically grant it permission. This includes cameras, USB devices, disks, etc.

This is where I had to do some digging on the Intel forums to ensure that the Movidius would work with macOS (because originally it didn’t work on my setup).

Ramana @ Intel provided “unofficial” instructions on how to set up USB over on the forums. Your mileage may vary.

In order for the VM to access the USB NCS, we need to alter settings.

Go to the “Settings” for your VM and edit “Ports > USB” to reflect a “USB 3.0 (xHCI) Controller”.

You need to set USB2 and USB3 Device Filters for the Movidius to seamlessly stay connected.

To do this, click the “Add new USB Filter” icon as is marked in this image:

Figure 3: Adding a USB Filter in VirtualBox settings to accommodate the Intel Movidius NCS on macOS.

From there, you need to create two USB Device Filters. Most of the fields can be left blank. I just gave each a Name and provided the Vendor ID.

  1. Name: Movidius1, Vendor ID: 03e7, Other fields: blank
  2. Name: Movidius2, Vendor ID: 040e, Other fields: blank

Here’s an example for the first one:

Figure 4: Two VirtualBox USB device filters are required for the Movidius NCS to work in a VM on macOS.

Be sure to save these settings.

Install the OS

To install the OS, “insert” the .iso image into the virtual optical drive. To do this, go to “Settings”, then under “Storage” select “Controller: IDE > Empty”, and click the disk icon (marked by the red box).  Then find and select your freshly downloaded Ubuntu .iso.

Figure 5: Inserting an Ubuntu 16.04 .iso file into a Virtualbox VM.

Verify all settings and then boot your machine.

Follow the prompts to “Install Ubuntu”. If you have a fast internet connection, you can select “Download updates while installing Ubuntu”.  I did not select the option to “Install third-party software…”.

The next step is to “Erase disk and install Ubuntu” — this is a safe action because we just created the empty VDI disk. From there, set up system name and a username + password.

Once you’ve been instructed to reboot and have removed the virtual optical disk, you’re nearly ready to go.

First, let’s update our system. Open a terminal and type the following to update your system:
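The exact commands aren’t reproduced in this post; on Ubuntu 16.04 the standard apt update/upgrade would look like this:

```shell
# Refresh the package lists, then upgrade any outdated packages
sudo apt-get update
sudo apt-get upgrade
```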

Install Guest Additions

Non-VM users: You should skip this section.

From there, since we’re going to be using a USB device (the Intel NCS), let’s install guest additions. Guest additions also allows for bidirectional copy/paste between the VM and the host amongst other nice sharing utilities.

Guest additions can be installed by going to the Devices menu of VirtualBox and clicking “Insert Guest Additions CD Image…”:

Figure 6: Virtualbox Guest Additions for Ubuntu has successfully been installed.

Follow the prompt to press “Return to close this window…” which completes the install.

Take a snapshot

Non-VM users: You can skip this section or make a backup of your desktop/laptop via your preferred method.

From there, I like to reboot followed by taking a “snapshot” of my VM.

Rebooting is important because we just updated and installed a lot of software and want to ensure the changes take effect.

Additionally, a snapshot will allow us to roll back if we make any mistakes or have problems during the install — as we’ll find out, there are some gotchas along the way that can trip you up with the Movidius SDK, so this is a worthwhile step.

Definitely take the time to snapshot your system. Go to the VirtualBox menubar and press “Machine > Take Snapshot”.

You can give the snapshot a name such as “Installed OS and Guest Additions” as is shown below:

Figure 7: Taking a snapshot of the Movidius SDK VM prior to actually installing the SDK.

Installing the Intel Movidius SDK on Ubuntu

This section assumes that you either (a) followed the instructions above to install Ubuntu 16.04 LTS on a VM, or (b) are working with a fresh install of Ubuntu 16.04 LTS on a Desktop/Laptop.

Intel makes the process of installing the SDK very easy. Cheers to that!

But like I said above, I wish there was an advanced method. I like easy, but I also like to be in control of my computer.

Let’s install Git from a terminal:
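The command itself isn’t shown in this version of the post; on Ubuntu 16.04 installing Git is a one-liner:

```shell
# Install Git from the Ubuntu package repositories
sudo apt-get install -y git
```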

From there, let’s follow Intel’s instructions very closely so that there are hopefully no issues.

Open a terminal and follow along:

Now that we’re in the workspace, let’s clone down the NCSDK and the NC App Zoo:

And from there, you should navigate into the ncsdk  directory and install the SDK:
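A sketch of those three steps follows. The `~/workspace` directory name and the repository URLs follow Intel’s published install instructions; treat the exact paths as assumptions:

```shell
# Create a workspace and clone the NCSDK and the NC App Zoo
mkdir -p ~/workspace
cd ~/workspace
git clone https://github.com/movidius/ncsdk.git
git clone https://github.com/movidius/ncappzoo.git

# Navigate into the ncsdk directory and install the SDK
# (this is the ~15 minute step)
cd ~/workspace/ncsdk
make install
```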

You might want to go outside for some fresh air or grab yourself a cup of coffee (or beer depending on what time it is). This process will take about 15 minutes depending on the capability of your host machine and your download speed.

Figure 8: The Movidius SDK has been successfully installed on our Ubuntu 16.04 VM.

VM Users: Now that the installation is complete, it would be a good time to take another snapshot so we can revert in the future if needed. You can follow the same method as above to take another snapshot (I named mine “SDK installed”). Just remember that snapshots require adequate disk space on the host.

Connect the NCS to a USB port and verify connectivity

This step should be performed on your desktop/laptop.

Non-VM users: You can skip this step because you’ll likely not have any USB issues. Instead, plug in the NCS and scroll to “Test the SDK”.

First, connect your NCS to the physical USB port on your laptop or desktop.

Note: Given that my Mac has Thunderbolt 3 / USB-C ports, I initially plugged in Apple’s USB-C Digital AV Multiport Adapter which has a USB-A and HDMI port. This didn’t work. Instead, I elected to use a simple adapter, but not a USB hub. Basically you should try to eliminate the need for any additional required drivers if you’re working with a VM.

From there, we need to make the USB stick accessible to the VM. Since we have Guest Additions and the Extension Pack installed, we can do this from the VirtualBox menu. In the VM menubar, click “Devices > USB > ‘Movidius Ltd. Movidius MA2X5X’” (or a device with a similar name). It’s possible that the Movidius already has a checkmark next to it, indicating that it is connected to the VM.

In the VM open a terminal. You can run the following command to verify that the OS knows about the USB device:
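The command referenced in Figure 9 is dmesg; for example:

```shell
# Show the most recent kernel log messages and look for a "Movidius" entry
dmesg | tail -n 20
```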

You should see that the Movidius is recognized by reading the most recent 3 or 4 log messages as shown below:

Figure 9: Running the dmesg command in a terminal allows us to see that the Movidius NCS is associated with the OS.

If you see the Movidius device then it’s time to test the installation.

Test the SDK

This step should be performed on your desktop/laptop.

Now that the SDK is installed, you can test the installation by running the pre-built examples:
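This is the `make examples` target mentioned in the FAQ below; assuming the SDK was cloned into `~/workspace/ncsdk` as above:

```shell
# Build and run the pre-built NCSDK examples (downloads models, runs mvNCCompile)
cd ~/workspace/ncsdk
make examples
```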

This may take about five minutes to run and you’ll see a lot of output (not shown in the block above).

If you don’t see error messages while all the examples are running, that is good news. You’ll notice that the Makefile has executed code to go out and download models and weights from GitHub, and from there it runs mvNCCompile. We’ll learn about mvNCCompile in the next section. I’m impressed with the effort put into the Makefiles by the Movidius team.

Another check (this is the same one we did on the Pi last week):

This test ensures that the links to your API and connectivity to the NCS are working properly.

If you’ve made it this far without too much trouble, then congratulations!

Generating Movidius graph files from your own Caffe models

This step should be performed on your desktop/laptop.

Generating graph files is made quite easy by Intel’s SDK. In some cases you can actually compute the graph using a Pi. Other times, you’ll need a machine with more memory to accomplish the task.

There’s one main tool that I’d like to share with you: mvNCCompile.

This command line tool supports both TensorFlow and Caffe. It is my hope that Keras will be supported in the future by Intel.

For Caffe, the command line arguments are in the following format (TensorFlow users should refer to the documentation which is similar):
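Putting the arguments together, the general form of the command is:

```shell
mvNCCompile network.prototxt -w network.caffemodel \
    -s MaxNumberOfShaves -in InputNodeName -on OutputNodeName \
    -is InputWidth InputHeight -o OutputGraphFilename
```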

Let’s review the arguments:

  • network.prototxt : path/filename of the network file
  • -w network.caffemodel : path/filename of the caffemodel file
  • -s MaxNumberOfShaves : SHAVEs (1, 2, 4, 8, or 12) to use for network layers (I think the default is 12, but the documentation is unclear)
  • -in InputNodeName : you may optionally specify a specific input layer (it should match the name in the prototxt file)
  • -on OutputNodeName : by default the network is processed through the output tensor and this option allows a user to select an alternative end point in the network
  • -is InputWidth InputHeight : the input shape is very important and should match the design of your network
  • -o OutputGraphFilename : if no file/path is specified, this defaults to the very ambiguous filename, graph, in the current working directory

Where’s the batch size argument?

The batch size for the NCS is always 1 and the number of color channels is assumed to be 3.

If you provide command line arguments to mvNCCompile  in the right format with an NCS plugged in, then you’ll be on your way to having a graph file rather quickly.

There’s one caveat (at least from my experience thus far with Caffe files). The mvNCCompile  tool requires that the prototxt be in a specific format.

You might have to modify your prototxt to get the mvNCCompile  tool to work. If you’re having trouble, the Movidius forums may be able to guide you.

Today we’re working with a MobileNet Single Shot Detector (SSD) trained with Caffe. The GitHub user chuanqui305 gets credit for training the model on the MS-COCO dataset. Thank you chuanqui305!

I have provided chuanqui305’s files in the “Downloads” section. To compile the graph you should execute the following command:
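The exact command isn’t reproduced here; with chuanqui305’s files it looks roughly like the following. The .prototxt/.caffemodel filenames and output path are assumptions based on that repository, and the 300 300 input shape matches the SSD input size used later in this post:

```shell
mvNCCompile models/MobileNetSSD_deploy.prototxt \
    -w models/MobileNetSSD_deploy.caffemodel \
    -s 12 -is 300 300 -o graphs/mobilenetgraph
```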

You should expect the copyright message and possibly additional information or a warning like I encountered above. I proceeded to ignore the warning without any trouble.

Object detection with the Intel Movidius Neural Compute Stick

You can write this code on your desktop/laptop or your Pi; however, you’ll run it on your Pi in the next section.

Let’s write a real-time object detection script. The script very closely aligns with the non-NCS version that we built in a previous post.

You can find today’s script and associated files in the “Downloads” section of this blog post. I suggest you download the source code and model file if you wish to follow along.

Once you’ve downloaded the files, open ncs_realtime_objectdetection.py:

We import our packages on Lines 2-8, taking note of mvncapi, which is the Movidius NCS Python API package.

From there we’ll perform initializations:

Our class labels and associated random colors (one random color per class label) are initialized on Lines 12-16.

Our MobileNet SSD requires dimensions of 300×300, but we’ll be displaying the video stream at 900×900 to better visualize the output (Lines 19 and 20).

Since we’re changing the dimensions of the image, we need to calculate the scalar value to scale our object detection boxes (Line 23).

From there we’ll define a preprocess_image  function:

The actions made in this pre-process function are specific to our MobileNet SSD model. We resize, perform mean subtraction, scale the image, and convert it to float16  format (Lines 27-30).

Then we return the preprocessed  image to the calling function (Line 33).
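As a rough, dependency-light sketch of what such a pre-processing function does: the mean value of 127.5 and scale factor of 0.007843 below are the values commonly used with this MobileNet SSD (stated here as assumptions, not quoted from the script), and the cv2.resize step is assumed to have already produced a 300×300 frame.

```python
import numpy as np

# Sketch of the MobileNet SSD pre-processing described above.
# Assumptions: mean subtraction of 127.5 per channel and a scale
# factor of 0.007843 (~1/127.5); the resize to 300x300 is assumed
# to have happened already (the real script uses cv2.resize).
def preprocess_image(resized):
    preprocessed = resized.astype(np.float32)
    preprocessed = preprocessed - 127.5      # mean subtraction
    preprocessed = preprocessed * 0.007843   # scale to roughly [-1, 1]
    return preprocessed.astype(np.float16)   # the NCS API expects float16

# A uniform gray frame maps to all zeros after centering
frame = np.full((300, 300, 3), 127.5, dtype=np.float32)
out = preprocess_image(frame)
print(out.dtype, out.shape)
```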

To learn more about pre-processing for deep learning, be sure to refer to my book, Deep Learning for Computer Vision with Python.

From there we’ll define a predict  function:

This predict  function applies to users of the Movidius NCS and it is largely based on the Movidius NC App Zoo GitHub example — I made a few minor modifications.

The function requires an image  and a graph  object (which we’ll instantiate later).

First we pre-process the image (Line 37).

From there, we run a forward pass through the neural network utilizing the NCS while grabbing the predictions (Lines 41 and 42).

Then we extract the number of valid object predictions ( num_valid_boxes ) and initialize our predictions  list (Lines 46 and 47).

From there, let’s loop over the valid results:

Okay, so the above code might look pretty ugly. Let’s take a step back. The goal of this loop is to append prediction data to our predictions  list in an organized fashion so we can use it later. This loop just extracts and organizes the data for us.

But what in the world is the base_index ?

Basically, all of our data is stored in one long array/list ( output ). Using the box_index , we calculate our base_index  which we’ll then use (with more offsets) to extract prediction data.

I’m guessing that whoever wrote the Python API/bindings is a C/C++ programmer. I might have opted for a different way to organize the data such as a list of tuples like we’re about to construct.

Why are we ensuring values are finite on Lines 55-62?

This ensures that we have valid data. If it’s invalid we continue  back to the top of the loop (Line 63) and try another prediction.

What is the format of the output  list?

The output list has the following format:

  1. output[0] : we extracted this value on Line 46 as num_valid_boxes
  2. output[base_index + 1] : prediction class index
  3. output[base_index + 2] : prediction confidence
  4. output[base_index + 3] : object boxpoint x1 value (it needs to be scaled)
  5. output[base_index + 4] : object boxpoint y1 value (it needs to be scaled)
  6. output[base_index + 5] : object boxpoint x2 value (it needs to be scaled)
  7. output[base_index + 6] : object boxpoint y2 value (it needs to be scaled)

Lines 68-82 handle building up a single prediction tuple. The prediction consists of: (pred_class, pred_conf, pred_boxpts)  and we append the prediction  to the predictions  list on Line 83.

After we’re done looping through the data, we return  the predictions  list to the calling function on Line 86.
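To make the indexing concrete, here is a self-contained toy version of that loop. The synthetic output array is invented for illustration, and the formula base_index = 7 + box_index * 7 follows the NC App Zoo example (an assumption, not quoted from this post):

```python
import numpy as np

# Synthetic flat output array holding one detection, laid out
# as described above: output[0] = num_valid_boxes, then seven
# floats per box starting at base_index.
output = np.zeros(14, dtype=np.float32)
output[0] = 1                                       # num_valid_boxes
base = 7                                            # first box starts here
output[base + 1] = 8                                # class index
output[base + 2] = 0.9                              # confidence
output[base + 3:base + 7] = (0.1, 0.2, 0.5, 0.6)    # x1, y1, x2, y2 (normalized)

num_valid_boxes = int(output[0])
predictions = []
for box_index in range(num_valid_boxes):
    base_index = 7 + box_index * 7
    # skip the box if any of its values is not finite (the Lines 55-63 check)
    if not np.all(np.isfinite(output[base_index:base_index + 7])):
        continue
    pred_class = int(output[base_index + 1])
    pred_conf = float(output[base_index + 2])
    pred_boxpts = tuple(float(v) for v in output[base_index + 3:base_index + 7])
    predictions.append((pred_class, pred_conf, pred_boxpts))

print(predictions)
```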

From there, let’s parse our command line arguments:

We parse our three command line arguments on Lines 89-96.

We require the path to our graph file. Optionally we can specify a different confidence threshold or display the image to the screen.

Next, we’ll connect to the NCS and load the graph file onto it:

The above block is identical to last week, so I’m not going to review it in detail. Essentially we’re checking that we have an available NCS, connecting, and loading the graph file on it.

The result is a graph  object which we use in the predict function above.

Let’s kick off our video stream:

We start the camera VideoStream, allow our camera to warm up, and instantiate our FPS counter.

Now let’s process the camera feed frame by frame:

Here we’re reading a frame from the video stream, making a copy (so we can draw on it later), and resizing it (Lines 135-137).

We then send the frame through our object detector which will return predictions  to us.

Let’s loop over the predictions  next:

Looping over the predictions , we first extract the class, confidence and boxpoints for the object (Line 145).

If the confidence  is above the threshold, we print the prediction to the terminal and check if we should display the image on the screen:

If we’re displaying the image, we first build a label  string which will contain the class name and confidence in percentage form (Lines 160-161).

From there we extract the corners of the rectangle and calculate the position for our label  relative to those points (Lines 164-168).

Finally, we display the rectangle and text label on the screen. If there are multiple objects of the same class in the frame, the boxes and labels will have the same color.

From there, let’s display the image and update our FPS counter:

Outside of the prediction loop, we again make a check to see if we should display the frame to the screen. If so, we show the frame (Line 181) and wait for the “q” key to be pressed if the user wants to quit (Lines 182-186).

We update our frames per second counter on Line 189.

From there, we’ll most likely continue to the top of the frame-by-frame loop to complete the process again.

If the user happened to press “ctrl+c” in the terminal or if there’s a problem reading a frame, we break out of the loop.

This last code block handles some housekeeping (Lines 200-211) and finally prints the elapsed time and the frames per second pipeline information to the screen. This information allows us to benchmark our script.

Movidius NCS object detection results

This step should be performed on your Raspberry Pi + NCS with an HDMI cable + screen hooked up. You’ll also need a keyboard and mouse, and as I described in my previous tutorial (see its Figure 2), you may need a dongle extension cable to make room for a USB keyboard/mouse. It’s also possible to run this step on a desktop/laptop, but the NCS is likely to be slower than a desktop-class CPU.

Let’s run our real-time object detector with the NCS using the following command:
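The command isn’t reproduced here; given the three arguments parsed earlier (the required graph path plus optional confidence and display flags), it looks like the following. The flag names and graph path are assumptions:

```shell
python ncs_realtime_objectdetection.py --graph graphs/mobilenetgraph \
    --display 1
```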

Prediction results will be printed in the terminal and the image will be displayed on our Raspberry Pi monitor.

Below I have included an example GIF animation of shooting a video with a smartphone and then post-processing it on the Raspberry Pi:

Real-time object detection on the Raspberry Pi + Movidius NCS GIF

Along with the full example video of clips:

Thank you to David McDuffee for shooting these example clips so I could include them!

Here’s an example video of the system in action recorded with a Raspberry Pi:

A big thank you to David Hoffman for demoing the Raspberry Pi + NCS in action.

Note: As some of you know, this past week I was taking care of a family member who is recovering from emergency surgery. While I was able to get the blog post together, I wasn’t able to shoot the example videos. A big thanks to both David Hoffman and David McDuffee for gathering great examples making today’s post possible!

And here’s a table of results:

Figure 10: Object detection results on the Intel Movidius Neural Compute Stick (NCS) compared to the Pi CPU. The NCS helps the Raspberry Pi achieve a ~6.88x speedup.

The Movidius NCS can propel the Pi to a ~6.88x speedup over the standard CPU object detection! That’s progress.

I reported results with the display option being “on” as well as “off”. As you can see, displaying on the screen slows down the FPS by about 1 FPS due to the OpenCV drawing text/boxes as well as highgui overhead. The reason I reported both this week is so that you’ll have a better idea of what to expect if you’re using this platform and performing object detection without the need for a display (such as in a robotics application).

Note: Optimized OpenCV 3.3+ (with the DNN module) installations will have faster FPS on the Pi CPU (I reported 0.9 FPS previously). To install OpenCV with NEON and VFP3 optimizations just read this previous post. I’m not sure if the version of OpenCV 2.4 that gets installed with the Movidius toolchain contains these optimizations which is one reason why I reported the non-optimized 0.49 FPS metric in the table.

I’ll wrap up this section by saying that it is possible to give the illusion of faster FPS with threading if you so wish. Check out this previous post and implement the strategy in the ncs_realtime_objectdetection.py script that we reviewed today.

Frequently asked questions (FAQs)

In this section I detail the answers to Frequently Asked Questions regarding the NCS.

Why does my Movidius NCS continually disconnect from my VM? It appears to be connected, but then when I run ‘make examples’ as instructed above, I see connectivity error messages. I’m running macOS and using a VM.

You must use the VirtualBox Extension Pack and add two USB device filters specifically for the Movidius. Please refer to the USB passthrough settings above.

No predictions are being made on the video — I can see the video on the screen, but I don’t see any error messages or stacktrace. What might be going wrong?

This is likely due to an error in pre-processing.

Be sure your pre-processing function is correctly performing resizing and normalization.

First, the dimensions of the pre-processed image must match the model exactly. For the MobileNet SSD that I’m working with, it is 300×300.

Second, you must normalize the input via mean subtraction and scaling.

I just bought an NCS and want to run the example on my Pi using my HDMI monitor and a keyboard/mouse. How do I access the USB ports that the NCS is blocking?

It seems a bit of poor design that the NCS blocks adjacent USB ports. The only solution I know of is to buy a short extension cable such as this 6in USB 3.0 compatible cable on Amazon — this will give more space around the other three USB ports.

Of course, you could also take your NCS to a machine shop and mill down the heatsink, but that wouldn’t be good for your warranty or cooling purposes.

How do I install the Python bindings to the NCS SDK API in a virtual environment?

Quite simply: you can’t.

Install the SDK on an isolated computer or VM.

For your Pi, install the SDK in API-only mode on a separate microSD card from the one you currently use for everyday work.

I have errors when running ‘mvNCCompile’ on my models. What do you recommend?

The Movidius graph compiling tool, mvNCCompile, is very particular about the input files. Oftentimes for Caffe, you’ll need to modify the .prototxt file. For TensorFlow I’ve seen that the filenames themselves need to be in a particular format.

Generally it is a simple change that needs to be made, but I don’t want to lead you in the wrong direction. The best resource right now is the Movidius Forums.

In the future, I may update these FAQs and the Generating Movidius graph files from your own Caffe models section with guidelines or a link to Intel documentation.

I’m hoping that the Movidius team at Intel can improve their graph compiler tool as well.

What’s next?

If you’re looking to perform image classification with your NCS, then refer to last week’s blog post.

Let me know what you’re looking to accomplish with a Movidius NCS and maybe I’ll turn the idea into a blog post.

Be sure to check out the Movidius blog and TopCoder Competition as well.

Movidius blog and GitHub

The Movidius team at Intel has a blog where you’ll find additional information:

developer.movidius.com/blog

The GitHub community surrounding the Movidius NCS is growing. I recommend that you search for Movidius projects using the GitHub search feature.

Two official repos that you should watch (click the “watch” button to be informed of updates):

TopCoder Competition

Figure 11: Earn up to $8,000 with the Movidius NCS on TopCoder.

Are you interested in earning up to $8,000?

Intel is sponsoring a competition on TopCoder.

There are $20,000 in prizes up for grabs (first place wins $8,000)!

Registration and submission closes on February 26, 2018. That is next Monday, so don’t waste any time!

Keep track of the leaderboard and standings!

Summary

Today, we answered PyImageSearch reader Danielle’s questions. We learned how to:

  • Install the SDK in a VM so she can use her Mac.
  • Generate Movidius graph files from Caffe models.
  • Perform object detection with the Raspberry Pi and NCS.

We saw that MobileNet SSD is >6.8x faster on a Raspberry Pi when using the NCS.

The Movidius NCS is capable of running many state-of-the-art networks and is a great value at less than $100 USD. You should consider purchasing one if you want to deploy it in a project or if you’re just yearning for another device to tinker with. I’m loving mine.

There is a learning curve, but the Movidius team at Intel has done a decent job breaking down the barrier to entry with working Makefiles on GitHub.

There is of course room for improvement, but nobody said deep learning was easy.

I’ll wrap today’s post by asking a simple question:

Are you interested in learning the fundamentals of deep learning, how to train state-of-the-art networks from scratch, and discovering my handpicked best practices?

If that sounds good, then you should definitely check out my latest book, Deep Learning for Computer Vision with Python. It is jam-packed with practical information and deep learning code that you can use in your own projects.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


179 Responses to Real-time object detection on the Raspberry Pi with the Movidius NCS

  1. Dougi February 19, 2018 at 10:50 am #

    This looks like a great article and I look forward to digging into it properly.

    I am currently working on a similar thing and am thinking of having a go with DarkNet’s YOLO.

    Do you know if that would work on a Raspberry Pi too? If so, an article on that would be awesome!

    Thanks so much.

    • Adrian Rosebrock February 19, 2018 at 12:26 pm #

      Movidius supports a variation of YOLO based on a Caffe port of the DarkNet method. I would suggest using the Movidius version of YOLO or finding a Caffe version that can be directly imported to OpenCV’s “dnn” module. See this blog post for more information.

    • John Jamieson February 19, 2018 at 9:33 pm #

      Have you had a look at https://github.com/gudovskiy/yoloNCS ?

  2. haixun February 19, 2018 at 10:56 am #

    Well done, Adrian! First such detailed post on this. Thanks!

    • Adrian Rosebrock February 19, 2018 at 12:24 pm #

      Thanks haixun!

  3. Steve Cox February 19, 2018 at 1:36 pm #

    Great article. If anyone has hands on experience taking a re-trained tensorflow object detection model and running it on OpenCV 3.3 DNN api just like you do with caffe models I would greatly appreciate the help.

    I can’t seem to find the secret sauce to take a TensorFlow model and get it loaded in OpenCV DNN. I realize everyone has great experience with Caffe models, but I want to stick with the TensorFlow/Keras framework.

    Thanks !!!

    • Adrian Rosebrock February 19, 2018 at 3:10 pm #

      Hey Steve! TensorFlow models can be a real pain to work with when it comes to loading their serialized weights. This is true for both OpenCV DNN and the NCS. If I can figure out the “secret sauce” I’ll absolutely be doing a blog post on it.

    • Dmitry February 24, 2018 at 2:39 pm #

      There is a way to create a supporting text definition of a TensorFlow graph. As you know, Caffe models are represented by .caffemodel and .prototxt files. Both are actually protocol buffers, but the latter has no weights and is easy to edit. For example, you can add extra layers without weights, like SoftMax. The script mentioned on the wiki page https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API (tf_text_graph_ssd.py) creates a .pbtxt file that OpenCV can use to help it import the TensorFlow graph. Note that this script works only for SSD-based object detection models.

  4. Ali February 19, 2018 at 2:01 pm #

    Very well detailed tutorial as always, Adrian! Looking forward to getting my Intel NCS! Meanwhile I’m trying to use real-time object detection along with some OpenCV image processing for a project. Which framework and model implementation would you suggest to achieve a decent frame rate (around 20 – 30 FPS) on a GPU? Thanks.

    • Adrian Rosebrock February 19, 2018 at 3:11 pm #

      Hey Ali — a Single Shot Detector (SSD) + MobileNet would get you in the range of 30-50 FPS on a GPU.

  5. braca February 19, 2018 at 2:59 pm #

    Thanks Adrian, very cool post! I think that you skipped the section on installing pip3, and one could encounter problems while trying to complete all the steps.

    • Adrian Rosebrock February 19, 2018 at 3:12 pm #

      Hi braca — just to clarify, are you referring to installing pip on the VM or the Raspberry Pi?

    • braca February 19, 2018 at 3:19 pm #

      Ignore my previous comment; I restarted the system and it worked!!!

      • braca February 19, 2018 at 3:19 pm #

        I’m using a VM!! Thanks Adrian!!

      • Adrian Rosebrock February 19, 2018 at 5:08 pm #

        Awesome, I’m glad it’s now working for you, Braca 🙂

    • David Hoffman February 19, 2018 at 4:33 pm #

      Hi Braca,

      I used the system pip. The only packages I installed are imutils and picamera[array].

      $ pip install imutils
      $ pip install “picamera[array]”

      You may need to use pip3.

      • braca February 19, 2018 at 8:12 pm #

        Thanks David!! It helped!!

  6. Alvin February 20, 2018 at 3:41 am #

    Hi Adrian, I always have a problem when running the make examples on the NCSDK. It says there is a syntax error in protobuf. It appears that protobuf 2.6.1 is not compatible with Python 3, and the Movidius make examples use Python 3. How did you get around this problem? Thanks

    • Adrian Rosebrock February 22, 2018 at 11:25 am #

      Hi Alvin — I suggest you double check the formatting in your prototxt and then post in the Movidius Forums if you’re still having trouble.

  7. Al Bee February 20, 2018 at 4:29 am #

    Hey Adrian,

    Have you tried YOLO? It uses a totally different approach by applying a single neural network to the full image.

    https://pjreddie.com/darknet/yolo/

    • Adrian Rosebrock February 20, 2018 at 7:36 am #

      I have used YOLO for different projects, yes. It really depends on the project, but I tend to find SSDs provide a better balance between speed and accuracy. I have yet to try YOLO on the NCS though. That will be for another project 🙂

      • Al Bee February 20, 2018 at 11:10 am #

        For others reference, here is the link to the instructions to install yoloNCS. Cheers!

        https://github.com/gudovskiy/yoloNCS/blob/master/README.md

        • Adrian Rosebrock February 21, 2018 at 9:34 am #

          Thanks for sharing!

        • Achintya Kumar June 23, 2018 at 4:25 am #

          Does yoloNCS work accurately with the Neural Compute Stick and Raspberry Pi?

  8. iamtodor February 20, 2018 at 8:30 am #

    Hello Adrian Rosebrock,

    I have found your blog post https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/ very useful. I want to say thank you.
    The only thing I haven’t figured out is how to use range-detector: https://github.com/jrosebr1/imutils/blob/master/bin/range-detector
    I see a lot of people have that problem, and you replied that you might publish a whole article about that utility. Unfortunately, I can’t find it.
    Can you please help me run that script? I have an image and a specific object – an orange. I want to know the upper and lower boundaries.
    Thanks in advance

    • Adrian Rosebrock February 21, 2018 at 9:36 am #

      I had not written the article on it yet, I’ve been busy with a few other projects and writing up these deep learning tutorials. I’ll make a note to write an article on it soon.

  9. Marc A Getter February 20, 2018 at 11:39 am #

    After the make install for ncsdk runs for about 10 minutes, my vm is crashing with a red and purple screen with a combination of characters. Has anyone else encountered this?

    • David Hoffman February 22, 2018 at 11:23 am #

      Which OS is your Host, which OS is your Guest, and which version of VirtualBox are you running? For reference, I’m on macOS OSX 10.13.3, my Guest VM is Ubuntu 16.04, and VirtualBox is 5.2.6.

  10. Jim February 20, 2018 at 2:02 pm #

    I followed all the instructions (I think!) and get:
    (cv) $ python ncs_realtime_objectdetection.py --graph graph --display 1
    Traceback (most recent call last):
    File “ncs_realtime_objectdetection.py”, line 6, in
    from mvnc import mvncapi as mvnc
    ImportError: No module named mvnc

    I did install ncsdk and it talks to my NCS
    $ python hello_ncs.py
    Hello NCS! Device opened normally.
    Goodbye NCS! Device closed normally.
    NCS device working.

    • Adrian Rosebrock February 22, 2018 at 9:12 am #

      Hey Jim — unfortunately you will not be able to use Python virtual environments with the NCS. I discuss this more in the previous post.

    • FanWah May 21, 2018 at 2:35 am #

      May I know the solution for this issue? I’m having it as well.

  11. abdbaddude February 21, 2018 at 2:13 am #

    Wondering why you keep using print(“INFO” …)? I guess the Python logging facility could be used. Or is there a performance gain to consider on the Raspberry Pi?

    • Adrian Rosebrock February 21, 2018 at 9:34 am #

      You could use logging if you wanted, there is no problem with that. I just used “print” as some readers may not be comfortable or used to the logging features with Python. It’s pretty trivial to swap out “print” for “logging” so feel free to use whichever one you are comfortable with.

  12. Prubio February 21, 2018 at 4:26 am #

    Hi!
    I downloaded your code and I had some errors. I share it here in case can help others.

    The first error I encountered was: numpy.float16 cannot be interpreted as an integer.
    To solve that, in line 50:
    num_valid_boxes=output[0].astype(int)

    The second error: DISP_MULTIPLIER is not defined
    In lines 169 and 170, change DISP_MULTIPLIER to DISPLAY_MULTIPLIER.

    Great job Adrian, congrats!
    I am trying to apply the NCS to object detection with TensorFlow for my own models, but I am having problems. If you have time, I’d really appreciate an explanation of that.

    Cheers!

    • Adrian Rosebrock February 21, 2018 at 2:59 pm #

      Hi Prubio — thanks for the catch about the variable name mismatch. This was a mistake while putting the post together and it has now been corrected. I didn’t encounter the same issue with needing to force the type of output[0] to an int, but I’m glad that you got yours working! What is your question about NCS object detection with tensorflow?

      • Prubio February 22, 2018 at 6:42 am #

        I trained ssd_mobilenet_v1 (pre-trained on the COCO dataset) for object detection with LabelImg and the TensorFlow Object Detection API. I obtained the frozen_inference_graph.pb and it’s working perfectly, but I can’t convert it with the mvNC toolkit for use with the Movidius NCS. I have read that ssd_mobilenet_v1 is not supported, but I have the same problem with ssd_inception_v2.
        How can I reach my goal of training a network with the Object Detection API and obtaining a graph to apply inference with the Movidius NCS?

        Thanks Adrian.
        Cheers!

        • Adrian Rosebrock February 22, 2018 at 11:17 am #

          Prubio — I’ve had success using the models that Movidius has provided. Using models of my own I’ve had limited success and have been referring to the Movidius Forums and searching on GitHub. That’s the first place I’ve been going for support and that seems to be where the “experts” are. Nobody is truly an expert yet (short of the ones that coded the compile tool at Movidius) as this product is still in its infancy. Early adopters have definitely been struggling along, but it will get better I hope.

    • Leo April 18, 2018 at 7:18 am #

      I had the same errors. Thanks a lot!!

  13. simon February 21, 2018 at 6:49 pm #

    Hi,

    Thanks for your amazing post. I have tested the same script on my workstation (i7 CPU, 64GB RAM, NVIDIA 980 Ti) and the FPS is around 10, so I guess it depends on CPU/RAM performance.

    Since it seems to depend on CPU/RAM performance, I wish to monitor system CPU and RAM usage when more than one NCS is doing a job. I got a second NCS and wonder how to assign different work to the second stick while the first stick works in another script.

    Thanks,

    • simon February 21, 2018 at 6:52 pm #

      So this
      “device = mvnc.Device(devices[0])” should be
      “device = mvnc.Device(devices[1])” on the second stick?

      Thanks,

      • simon February 21, 2018 at 7:08 pm #

        I have tested it and it works fine.
        The second one shows the same performance.

        CPU usages on both: 9%
        RAM usage on both: about 130 MB.
        FPS: around 10 FPS.

        Thanks,

        • Adrian Rosebrock February 22, 2018 at 8:49 am #

          Excellent. Thanks for sharing.

      • Adrian Rosebrock February 22, 2018 at 8:48 am #

        If you have two NCS devices plugged in, that’s my understanding as to how it would work.

    • Adrian Rosebrock February 22, 2018 at 8:47 am #

      Hi Simon — I’d recommend a threaded approach if you’re putting multiple NCS devices to work. Here’s a relevant blog post to get you started. Populate a queue as frames come in. Then have a thread to preprocess and assign the images to the available NCSs (in separate threads). The trick would be getting the detections back into the right order before displaying since you’ve got multiple worker threads. To accommodate, I’d recommend including additional information with each frame from the start (frame count number). This might be a future blog post idea.
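      The suggestion above can be sketched with stdlib threads and queues. This is a hypothetical skeleton, not code from the post: predict() is a stand-in for the real NCS preprocessing and inference call, and the worker count of two simply assumes two sticks.

      ```python
      import queue
      import threading

      def predict(frame_no, frame):
          # placeholder for preprocessing plus the NCS inference call
          # (graph.LoadTensor()/graph.GetResult()) on one device
          return "detections-%d" % frame_no

      def worker(in_q, out_q):
          # each worker thread would own one NCS device
          while True:
              item = in_q.get()
              if item is None:        # sentinel: tells the worker to shut down
                  break
              frame_no, frame = item
              out_q.put((frame_no, predict(frame_no, frame)))

      in_q, out_q = queue.Queue(), queue.Queue()
      workers = [threading.Thread(target=worker, args=(in_q, out_q))
                 for _ in range(2)]   # e.g. two NCS sticks
      for w in workers:
          w.start()

      # tag each incoming frame with a frame count number, as suggested above
      for frame_no in range(5):
          in_q.put((frame_no, "frame-data"))
      for _ in workers:               # one sentinel per worker
          in_q.put(None)
      for w in workers:
          w.join()

      # detections can come back out of order; sort by frame number before display
      results = sorted(out_q.get() for _ in range(5))
      ```

      Sorting by the frame tag is what restores display order, which is exactly the tricky part mentioned above.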

  14. Michael February 21, 2018 at 9:34 pm #

    Adrian,

    When I download the code from your email, I get the following error:

    This XML file does not appear to have any style information associated with it. The document tree is shown below.

    Access Denied
    E33023FCA7F14778

    • Adrian Rosebrock February 22, 2018 at 8:57 am #

      Hi Michael — thank you for bringing this to my attention. I uploaded a new version of the NCS code yesterday and forgot to set the permissions to make the file downloadable. It is fixed now and the code can be downloaded.

  15. Jussi February 22, 2018 at 9:47 am #

    VM dont recognize my movidius stick.
    I have installed extension and insert guest additions.

    • David Hoffman February 22, 2018 at 11:05 am #

      Hi Jussi, I definitely struggled with USB too, so you’re not alone. Which OS is your Host, which OS is your Guest, and which version of VirtualBox are you running? For reference, I’m on macOS OSX 10.13.3, my Guest VM is Ubuntu 16.04, and VirtualBox is 5.2.6.

      • Jussi February 23, 2018 at 9:01 am #

        Hi David, I use Ubuntu 17.10, my Guest is 16.04, and VirtualBox is 5.2.6. I am really stuck with this. Maybe I’ll also try with my Mac.

        • David Hoffman February 23, 2018 at 8:59 pm #

          Hi Jussi, I would suggest that you post in the VirtualBox forums. They’ll be able to help you and their community is very active. Feel free to post a linkback to your VirtualBox forum post so that other PyImageSearch readers can see any potential solutions. I’m also curious — did you experience a VirtualBox error, or was the Stick just not able to stay connected (an NCS error in the terminal of the VM itself)?

          • Jussi February 28, 2018 at 10:44 am #

            Ok. I will ask about that on the VM forum. Ubuntu did not recognize the USB stick even though the filters had the right setup. I changed to my Mac and the stick was found right away ;).

          • David Hoffman March 1, 2018 at 2:57 pm #

            Thanks for sharing. I’m sorry you’re having trouble with an Ubuntu host — I don’t currently know a solution for you. If you figure it out, then please share to benefit the community.

    • Don February 23, 2018 at 11:38 pm #

      Don’t forget to add the 2 filters described above in “USB passthrough settings”. I struggled with this problem for days, until I came to this article. (thumbs up Adrian) Intel’s team gives NO instructions on this, and users in the forums state you must enter “Product ID” and “Vendor ID” in the filter. THIS WILL FAIL!! (at least for me) ONLY enter the “Vendor ID” ‘040e’ and ’03e7′ for each.

      You can see if it’s connected by looking at Devices > USB in the VM window. If there’s a check next to the Movidius, then Ubuntu can see it, and it should auto-reconnect during initialization.

      • Adrian Rosebrock February 26, 2018 at 10:21 am #

        Thanks for sharing, Don. I’m glad this blog post was able to help you!

  16. Raghvendra Jain February 23, 2018 at 12:56 am #

    Hi, thank you for the great blog! My question is: can we also use Caffe2 on this stick? If so, it will be very easy to write code using PyTorch, use ONNX to convert a model defined in PyTorch into the ONNX format, and then load it into Caffe2. Thank you very much.

    • Adrian Rosebrock February 23, 2018 at 8:55 pm #

      As far as I know, Caffe2 isn’t supported yet — there’s a Movidius Forum topic question and nobody from Intel has responded. I’m sure it is on their roadmap (quite honestly it would be nice if they share their roadmap on the Movidius Blog).

      • Raghvendra Jain February 24, 2018 at 8:06 am #

        Thank you for the reply!

  17. Pongrut February 24, 2018 at 1:58 am #

    Hi, I always appreciate your dedication to your blog.
    I have a question about the process of creating a graph. I tried downloading the MobileNet_deploy.caffemodel and MobileNet_deploy.prototxt files from chuanqi305’s GitHub, and of course, I failed to generate graph files.

    With your provided files I can create your graph file, as you showed in the blog, so I want to ask what needs to be done to generate the graph file successfully.

  18. Andy February 25, 2018 at 12:54 pm #

    Just an update for anyone trying to use the NCS on a VirtualBox VM on Windows 10…
    It looks like there are various issues with setting this up depending on your PC. I’m on an ACER with 2 x USB3.0 and 1 x USB2.0 ports on Windows 10 Home and running a VirtualBox VM with Ubuntu 16.04.
    I’ve gone through various combinations with plugging the NCS into all three different USB ports and the USB2.0 seemed to be the most stable. The device shows up in the device list in the VM USB menu as Movidius MA2X5X with a Vendor ID of 03e7 and Product ID of 2150. It’s worth noting that the movidius forum have indicated that it’s not worth pursuing using the VM route as it does seem fraught with challenges but Adrian has summarised the various steps worth trying.
    For my part, I would add that since I tried the stick in the USB3.0 port first which didn’t work, it seemed to leave residual devices in the system that were being picked up by VirtualBox (I had an unknown device 03e7:2150 and a Movidius LSC (or VSC maybe as it’s now gone) on 03e7:2150 too) .
    The issue of the device being dropped from the VM system is still present (you’ll see this when you run ‘make examples’ in the ncsdk folder ) and there was some traffic about this on the forums but the ncappzoo hello_ncs_py works as a single test so persevere but be patient.

    Thanks again to Adrian for this great write-up with all the supporting detail.

    • Adrian Rosebrock February 26, 2018 at 10:24 am #

      Andy, thanks for sharing your experience with the VM on Windows 10.

    • Niklas March 1, 2018 at 7:56 pm #

      Hi Andy,

      quick question: is the
      [Error 7] Toolkit Error: USB Failure. Code: No device found

      what you’re referring to here?
      I got that error message myself when I try to launch the examples… (Win10)

    • John Jamieson March 5, 2018 at 10:08 pm #

      Hi Andy, I use VirtualBox 5.2.8 on Windows 10 FCU. The VM is Ubuntu 16.04.5. The NCS is plugged into a USB 3 port. I set the USB in VirtualBox to the USB 3.0 xHCI controller. I then add two USB devices, the 1st one with “Vendor 03E7, Product 2150” and the 2nd one with “Vendor 03E7, Product F63B”. Note the Vendor ID for both. I did not test this combo on a USB 2 port yet, but will test it when I get a chance to put the VM onto another machine (my laptop only has 3.0). I am able to run every single NCS example (I only have 1 stick) in the ncappzoo without any problems, except maybe the webcam – it struggles somewhat. This includes the Python and compiled C examples. I used the minhoolee/install-opencv-3.0.0 script off GitHub for OpenCV. I don’t get any dropouts with the NCS, even when I had it plugged into a TB powered USB hub.

  19. han February 28, 2018 at 2:43 am #

    Thanks for sharing this cool information

    • Adrian Rosebrock March 1, 2018 at 2:53 pm #

      I’m glad you enjoyed it Han. Do you have an NCS or are you considering purchasing one?

  20. Jiang Chuan February 28, 2018 at 4:48 am #

    Hi Adrian,
    I have a Movidius but I do not have a Raspberry Pi, so I am trying to run your sample from my Ubuntu host. When I run “python ncs_realtime_objectdetection.py --graph graphs/mobilenetgraph”, it fails as follows:
    [INFO] finding NCS devices…
    [INFO] found 1 devices. device0 will be used. opening device0…
    [INFO] loading the graph file into RPi memory…
    [INFO] allocating the graph on the NCS…
    [INFO] starting the video stream and FPS counter…
    Traceback (most recent call last):
    File “ncs_realtime_objectdetection.py”, line 144, in
    predictions = predict(frame, graph)
    File “ncs_realtime_objectdetection.py”, line 54, in predict
    for box_index in range(num_valid_boxes):
    TypeError: ‘numpy.float16’ object cannot be interpreted as an integer

    Do you have any suggestion about how to fix the error?

    Thanks,
    Jiang Chuan.

    • Adrian Rosebrock March 1, 2018 at 2:55 pm #

      Hi Jiang, it looks like num_valid_boxes is being reported as a float16. Try casting it to an int.

      • roshan July 30, 2018 at 6:17 am #

        How can I cast it to an int? Can you please explain clearly?

        • Adrian Rosebrock July 31, 2018 at 9:50 am #

          Just call: int(num_valid_boxes)
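          A minimal reproduction of the fix; the NumPy array here is a hypothetical stand-in for the actual graph.GetResult() output:

          ```python
          import numpy as np

          # the NCS returns its predictions as float16, so the box count in
          # output[0] must be cast before it can be used with range()
          output = np.array([3.0, 0.1, 0.9], dtype=np.float16)

          num_valid_boxes = int(output[0])   # cast the float16 scalar to a Python int
          boxes_seen = [box_index for box_index in range(num_valid_boxes)]
          ```

          Without the cast, range(output[0]) raises the TypeError reported above.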

  21. Jiang Chuan February 28, 2018 at 4:53 am #

    In order to get video from camera, I changed the following code:

    vs = VideoStream(usePiCamera=True).start()

    to

    vs = VideoStream(0).start()

    • Eswar Sai Krishna G May 2, 2018 at 9:14 am #

      I changed it, but I am still getting the elapsed time and approx. FPS as 0 when I try to run on my laptop’s camera.

      Do you have any other suggestion or idea about the problem?

      Thanks,
      Eswar.

  22. Jussi February 28, 2018 at 11:58 am #

    Hi, what is (in this case) the best way, and on which line, to flip the video horizontally? My Pi camera is upside down.

    • Adrian Rosebrock March 1, 2018 at 2:41 pm #

      You can use the “cv2.flip” function.
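      For reference, here is what cv2.flip does, demonstrated with the equivalent NumPy slices so the sketch runs without OpenCV; the flipCode values follow OpenCV’s convention:

      ```python
      import numpy as np

      # cv2.flip(frame, flipCode) is equivalent to these NumPy slices:
      #   flipCode  1 -> frame[:, ::-1]    horizontal flip (around the y-axis)
      #   flipCode  0 -> frame[::-1, :]    vertical flip (around the x-axis)
      #   flipCode -1 -> frame[::-1, ::-1] both axes, i.e. a 180-degree
      #                                    rotation for an upside-down camera
      frame = np.arange(6).reshape(2, 3)

      flipped_h = frame[:, ::-1]        # what cv2.flip(frame, 1) returns
      flipped_180 = frame[::-1, ::-1]   # what cv2.flip(frame, -1) returns
      ```

      You would apply the flip right after the frame is read from the video stream, before any preprocessing.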

  23. Lee Mewshaw March 1, 2018 at 11:04 am #

    Hi Adrian,
    I’m trying to follow your steps, and I’m not clear on when I switch the Movidius from Ubuntu to the Raspberry Pi. I have successfully made it through “Test the SDK” on the Ubuntu machine, and I’m getting an error on mvNCCompile. I’ll work through the forums to figure that out, but my question for you is: after running that mvNCCompile command, do I move the NCS over to a USB port on the Raspberry Pi, or before? I can’t tell from your steps when you actually write something to the Movidius and should then move it to the Pi.

    Thanks in advance for any help!
    Lee

    • Adrian Rosebrock March 1, 2018 at 2:59 pm #

      Hi Lee. I’m sorry if this was unclear. Please review the workflow image above. You need the NCS to generate the graph. You also need the NCS to deploy the graph. Does this make sense?

      • David Ramírez April 20, 2018 at 11:37 pm #

        Hello Adrian, I’m trying to follow your tutorial step by step but I don’t understand this part either. So, just to be clear, should I connect the NCS to my Raspberry Pi just before the “Object detection with the Intel Movidius Neural Compute Stick” section, or when exactly? Also, I’m having errors while running the mvNCCompile command; it says: “[Error 9] Argument Error : Network weight cannot be found.” I believe it is because I don’t have the downloaded files on the VM, but I don’t know how to put them there or in which folder I should save them. Can you please tell me how to solve this or where to find information about it?

        Thanks a lot. David.

        • Adrian Rosebrock April 25, 2018 at 9:03 am #

          Hi David.

          Thank you for the feedback.

          (1) I urge you to read the first Movidius tutorial from the prior week first: Getting started with the Intel Movidius Neural Compute Stick.

          (2) I edited this post with information in italics at the top of several of the sections so that the instructions are more clear, however you should really read the first tutorial from top to bottom before moving forward with this post.

          (3) To move files between the VM and your host, you should make use of SCP (Secure Copy). Explaining SCP is not appropriate for this forum, but I will say that you need a “host-only” network adapter for your VM and you need openssh-server installed on the Ubuntu VM to make it work. See the following two links: host-only network adapter and how to SCP files.

  24. Christoph Viehoff March 3, 2018 at 10:02 pm #

    I followed the VM installation and the SDK installed successfully. All tests pass. When I run the mvNCCompile command from the Real-time-object-detection folder on my VM I get the following error:

    mvNCCompile V02.00, Copyright @ Movidius Ltd 2016

    Error importing caffe

    • Adrian Rosebrock March 9, 2018 at 10:29 am #

      Hi Christoph — try opening a fresh terminal and/or check your PYTHONPATH environment variable. For further information, please see the response from Tome at Intel on this direct forum link. Let me know if that works.

    • yang July 3, 2018 at 4:23 am #

      Hi Christoph,
      I met the same problem like you.
      Would you please tell me how did you solve that?
      I’ve changed my PYTHONPATH to where I installed Caffe, but it doesn’t work.
      Many thanks

  25. Zimeng March 5, 2018 at 1:02 pm #

    Hi Adrian, after building my CV environment on a Raspberry Pi (Jessie + OpenCV 3.3), I want to update the system to Stretch (recommended in your tutorial on using the Movidius NCS). Will it affect my current CV environment?

    • Adrian Rosebrock March 7, 2018 at 9:21 am #

      Updating the actual OS is notorious for breaking development environments. I do not recommend it. But if you would like to try, backup your .img file on your desktop and then try the upgrade.

  26. Kevin March 8, 2018 at 1:28 pm #

    Hi Adrian. Thank you for this amazing tutorial. Everything worked perfectly. Now I’m trying to make a gender detection in real-time with the Movidius. Do you have any recommendations of a model to use? I’ve found a classification model in https://gist.github.com/GilLevi/c9e99062283c719c03de, but I would like to make a detection. Can the classification be used inside this detection code?

    Thank You,
    Kevin

    • Adrian Rosebrock March 9, 2018 at 9:02 am #

      I don’t think there is a need for a detection model. Use a face detector to detect the face. Extract the ROI. Then pass the ROI into a classification model.
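      A sketch of that pipeline; detect_faces and classify_gender are hypothetical stubs standing in for, say, an OpenCV face detector and a gender classification model:

      ```python
      import numpy as np

      def detect_faces(frame):
          # stand-in for e.g. cv2.CascadeClassifier.detectMultiScale()
          return [(30, 40, 80, 80)]        # hypothetical (x, y, w, h) boxes

      def classify_gender(face_roi):
          # stand-in for a forward pass through a gender classification model
          return "female"

      frame = np.zeros((240, 320, 3), dtype=np.uint8)

      labels = []
      for (x, y, w, h) in detect_faces(frame):
          roi = frame[y:y + h, x:x + w]    # extract the face ROI via slicing
          labels.append(classify_gender(roi))
      ```

      The classifier only ever sees the cropped face region, so no detection model is needed for the gender step itself.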

      • Kevin March 9, 2018 at 3:40 pm #

        Ohhh. Thank you Adrian 😀

  27. schamarti March 9, 2018 at 1:34 am #

    Hi Adrian, I have installed the SDK and I am able to run the example graph provided in the download section. When converting the model on the Raspberry Pi, I am getting the error “mvNCCompile: command not found”. Any hints?

    • Adrian Rosebrock March 9, 2018 at 10:32 am #

      Hey schamarti — run mvNCCompile on the Ubuntu machine or VM where you installed the full SDK. The tool isn’t available on the Pi.

  28. Hein March 12, 2018 at 1:26 am #

    Hi Adrian, I am confused about this article.

    Is the tutorial you provided meant to run on Ubuntu or the Raspberry Pi?

    • Adrian Rosebrock March 14, 2018 at 1:09 pm #

      The Raspberry Pi runs Raspbian but I needed an Ubuntu VM to develop the code and create the deep learning model that later runs on the Pi.

  29. monsour March 17, 2018 at 11:08 pm #

    Hi Adrian, can I ask for some help? Do you have an XML file for garbage detection? Training my own Haar cascade is a very long process.

    thank you for the help

    • Adrian Rosebrock March 19, 2018 at 5:18 pm #

      I do not have any pre-trained models for garbage detection. You would need to train your own.

  30. bob March 23, 2018 at 2:45 am #

    Hi Adrian,

    Thanks for the tutorial! I was able to run the demo but I’m now interested to use different graph.
    There are 2 magic numbers in the code that I’m not sure.

    preprocessed = preprocessed – 127.5
    preprocessed = preprocessed * 0.007843

    Could you explain why you chose these numbers?
    Thanks

    • Adrian Rosebrock March 27, 2018 at 6:38 am #

      These numbers are used to perform mean subtraction and scaling. See this post for more details.
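      Concretely, subtracting 127.5 centers the 0–255 pixel range at zero, and 0.007843 is approximately 1/127.5, so the two steps together scale pixels to roughly [-1, 1], the range this MobileNet SSD was trained on. A quick check:

      ```python
      import numpy as np

      # the extremes of the 8-bit pixel range, plus the midpoint
      pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)

      # the same two preprocessing steps as in the post
      preprocessed = (pixels - 127.5) * 0.007843   # 0.007843 ~= 1 / 127.5
      ```

      A pixel of 0 maps to roughly -1, 127.5 maps to 0, and 255 maps to roughly +1.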

  31. Fabian April 12, 2018 at 11:33 pm #

    Hi Adrian, great tutorial as always!! I was wondering, is there a way to improve the FPS performance? Would you recommend another board rather than the Raspberry Pi to work on? I was looking at some options, like the UP board, but I am scared of making a mistake buying it… I would appreciate some suggestions, please… I also don’t know if some kind of rack of Raspberry Pis could be used to improve FPS performance…

    • Adrian Rosebrock April 13, 2018 at 6:39 am #

      I would recommend NVIDIA Jetson TX1 or TX2.

  32. Schwarz April 17, 2018 at 12:53 am #

    Hi Adrian, I happened to stumble upon your page months ago when I was searching for answers for my project, I am very very grateful for all the insightful blog posts that I have been following since.

    I am currently doing a similar project which performs object detection on a camera stream. The thing is, I want to detect objects only in specific parts of the stream (let’s say only the right-hand corner). Is there a way to specify the ROI in Python?

    • Adrian Rosebrock April 17, 2018 at 9:24 am #

      Hey Schwarz, it’s wonderful to hear you are enjoying the PyImageSearch blog 🙂 There are a few ways you can accomplish your goal:

      1. Manually use array slicing to extract the ROI and only pass the ROI through the network for detection
      2. Perform object detection on the entire image, but when you loop over the results, discard any where the bounding box would not fall into your ROI coordinates

      Exactly which method will work better really depends on your project and dataset so give both a try.
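      The second option can be sketched like this; the frame size, ROI coordinates, and detection tuples are all hypothetical:

      ```python
      # keep only detections whose box center falls inside the right-hand
      # corner ROI of a 640x480 frame
      roi_x1, roi_y1, roi_x2, roi_y2 = 320, 0, 640, 240

      detections = [(350, 50, 450, 150),   # inside the corner ROI
                    (10, 10, 100, 100)]    # outside -> discarded

      kept = []
      for (x1, y1, x2, y2) in detections:
          cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # box center
          if roi_x1 <= cx <= roi_x2 and roi_y1 <= cy <= roi_y2:
              kept.append((x1, y1, x2, y2))
      ```

      For the first option you would instead slice the ROI out of the frame (e.g. frame[0:240, 320:640]) and remember to add the slice offsets back onto any returned box coordinates.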

  33. Vivek April 18, 2018 at 11:52 am #

    I am trying to run the above-mentioned installs on a Raspberry Pi instead of a VM, and have been stuck at the “make install” step for a while now. I realized that the previous tutorial on getting started with the Movidius on a Raspberry Pi lacked the required steps for mvNCCompile that this tutorial covers, so I went ahead and attempted to follow the steps mentioned here on my RPi to make sure all the dependencies are looked after.

    Is this feasible? Is there a different set of dependencies that needs to be run if I am using a Raspberry Pi instead of a VM?

    Thanks

    • Adrian Rosebrock April 19, 2018 at 3:08 pm #

      Hi Vivek — were you planning on putting the full SDK on the Pi? If so, that’s not possible/recommended. Instead, you should put the full SDK on a capable full size computer and then put the API-only mode software on the Pi. I tried to make this clear in the blog post, but I understand that it is confusing in general. Can you please let me know what your intentions are?

      • Vivek April 19, 2018 at 3:19 pm #

        Hi Adrian,

        Thank you for getting back! I realized (through experience) that getting the SDK onto the Pi is not feasible. On giving your post another read I also caught on to the very concisely laid out pipeline which clearly mentions loading the SDK on a VM.
        I made the necessary changes and got this up and running 🙂
        I am now working to convert my own custom SSD TensorFlow model into an NCS graph.
        Thank you for the work you do here!

        • Adrian Rosebrock April 20, 2018 at 9:57 am #

          Congrats on resolving the issue, Vivek!

  34. Rodolfo April 19, 2018 at 10:06 am #

    Hi Adrian, thanks for tutorial.

    I created a script that uses multiple streams with a single Movidius stick. The script is working fine, but do you think this could cause any problems?

    In it I’m also using the Caffe deployment of MobileNet provided by https://github.com/chuanqi305/MobileNet-SSD

    • David Hoffman April 19, 2018 at 3:16 pm #

      I don’t see any problems with this approach — just remember that the Movidius is fast but there will be a delay. It’s also possible to hook up multiple Movidius sticks to your Pi where each could support a different stream, or a few sticks could work in tandem to support one stream provided that you handle the overhead well.

      • Rodolfo April 23, 2018 at 3:49 pm #

        Thanks for the help David.

  35. Vivek April 19, 2018 at 1:59 pm #

    Hi Adrian,

    Thank you so much for this post. I was able to successfully execute the entire tutorial and run the model on my RPi.

    How would you approach converting a TensorFlow model into this format? I have a custom SSD model trained in TensorFlow that I was trying to run on the Neural Compute Stick.

    Thanks

  36. Rabbani April 25, 2018 at 1:50 am #

    Hi Adrian,
    Would you mind sharing a program that detects mobile phones?

    • Adrian Rosebrock April 25, 2018 at 5:21 am #

      This post demonstrates how to run a deep neural network on a mobile device.

      • rabbani April 26, 2018 at 3:49 am #

        Hi Adrian,
        Sir, I want to detect illegal use of a mobile phone while driving a car.
        Would you mind helping me with this concept?

        • Adrian Rosebrock April 28, 2018 at 6:15 am #

          You would need to research “activity recognition with deep learning”. It’s not an easy project. Good luck with it!

  37. Simeon May 7, 2018 at 2:43 pm #

    Hi Adrian. Great blog post! I was wondering if the fps can be increased on the new 3B+ version of the pi? The project is an autonomous mobile security robot. The fps needs to be fast as I am streaming live video with the pi to a control room. Also the robot is in motion whilst the real-time object detection is being done. Budget is limited so a Movidius stick isn’t an option nor is getting an alternative board as the prices are too steep. I’m hoping the new 3B+ pi would be enough for a better framerate…?

    Thanks for any input you may have.

    • Adrian Rosebrock May 9, 2018 at 9:54 am #

      The Pi 3B+ is ~17% faster than the original Pi 3. That will certainly lead to a bit faster inference but it’s not going to even remotely compare to the NCS.

  38. Angelo May 9, 2018 at 5:43 am #

    — Generating Movidius graph files from your own models —

    Hi, I have a TF model that has 7 output nodes. Is there a way to generate the graph without changing the model’s code, or does the NCS allow only one input and one output? Has anyone else had the same problem?

    Thanks a lot

  39. Angelo Tartaglia May 16, 2018 at 5:42 am #

    Movidius software has been upgraded: NCAPIv1 to NCAPIv2.
    I’ve tried to modify your script with the new commands, and I used a newly generated graph file (built from the .prototxt with the mvNCCompile command), but it doesn’t work.

    • Adrian Rosebrock May 18, 2018 at 9:35 am #

      Thanks for sharing, Angelo. I will certainly look into the new API. I haven’t decided if I’ll make a new blog post or if I’ll update this one to be compatible. In the meantime I suggest you use the old API to work with my blog post. Is there a particular feature in the new API that you need right now?

  40. Amare May 29, 2018 at 11:17 am #

    Hi Adrian, thank you for introducing me to the new Intel processor!

    I have seen the video at the start of the page and it is very fast in real time. Does that mean if I buy this Movidius NCS it can run a trained SSD model with a dlib tracker like a GPU-accelerated desktop computer?

    • Adrian Rosebrock May 31, 2018 at 5:18 am #

      The NCS is certainly faster than the CPU of a Raspberry Pi but don’t expect it to run as fast as a desktop GPU like a Titan X.

  41. Shuhei Kawamoto May 31, 2018 at 3:00 am #

    Nice to meet you, Adrian.

    I am a Japanese college student researching machine learning.
    In my study, I would like to recognize objects in real time using the source code you created.
    I tried it right away, but this error occurred:

    File “realtime-object-detection/ncs_realtime_objectdetection.py”, line 54, in predict for box_index in range(num_valid_boxes):
    TypeError: ‘numpy.float16’ object cannot be interpreted as an integer

    Would you please help me, if you do not mind?
    I am waiting for your reply.

    • Adrian Rosebrock May 31, 2018 at 8:32 am #

      Hi Shuhei, it’s nice to meet you as well. I haven’t experienced this problem, but I think another reader sent me an email. See Line 30 in the blog post where it is shown how to convert a NumPy datatype. You can use a similar method to convert to an int as needed.
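      To make the fix concrete, here is a minimal sketch of the cast (the `output` array is a hypothetical stand-in for the float16 buffer the NCS returns):

```python
import numpy as np

# hypothetical stand-in for the float16 buffer returned by graph.GetResult()
output = np.array([2.0, 0.88, 0.65], dtype=np.float16)

# range() refuses a numpy.float16, so cast the box count to a Python int first
num_valid_boxes = int(output[0])

for box_index in range(num_valid_boxes):
    print("processing box", box_index)
```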

  42. Annie June 1, 2018 at 1:32 am #

    Hi, What if I want more classes to be added so that it will recognize more objects?

    • Adrian Rosebrock June 5, 2018 at 8:27 am #

      You would need to either:

      1. Train your own model from scratch
      2. Apply fine-tuning to a pre-trained model

  43. Suman Ghimire June 4, 2018 at 1:46 pm #

    Wow, this is an interesting article, and the way you explained it made it so straightforward to implement that it just worked. I want to put this article to real use by counting the number of people moving through my lab. I am wondering how I can modify this Python script to display the total number of people for the day, updating the count in real time in the same video (top left corner) each time it detects a person.

    Any suggestions, Adrian? Thanks again for posting such an informative video.

    Cheers.

    • Adrian Rosebrock June 5, 2018 at 7:50 am #

      Hey Suman, I’m happy to hear you enjoyed the post! Unfortunately, building a system to detect and count flows is a bit more involved, certainly more than what I can cover in a comment. I’ll be sure to add this to my list of ideas to cover in a future post.

  44. Mehrzad Mehrabipour June 6, 2018 at 6:19 pm #

    Hello Adrian,

    Thanks a lot for this great post.
    I have created a traffic sensor using your instructions. However, I also want to track vehicles for a couple of seconds, so I need to assign a specific ID to each detected vehicle. I was wondering if this is possible.
    I need a text file as output for further analysis, as follows:

    Vehicles ID (a specified number), coordinates, time

    Thanks a lot,

    • Adrian Rosebrock June 7, 2018 at 3:06 pm #

      What you are referring to is called “object tracking”. Take a look at correlation trackers.
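      Correlation trackers (such as dlib’s) are the standard tool here; as a simpler illustration of the ID-assignment idea, the hypothetical sketch below matches each new detection centroid to the nearest centroid from the previous frame and issues a fresh ID when nothing is close enough. All names are illustrative, not from the post’s script.

```python
import math

next_id = 0
tracks = {}  # ID -> last known centroid (x, y)

def assign_ids(centroids, max_dist=50.0):
    """Greedily match new centroids to existing track IDs by distance."""
    global next_id
    assigned = {}
    for (x, y) in centroids:
        best_id, best_d = None, max_dist
        for tid, (tx, ty) in tracks.items():
            d = math.hypot(x - tx, y - ty)
            if d < best_d and tid not in assigned:
                best_id, best_d = tid, d
        if best_id is None:  # no track nearby: start a new one
            best_id = next_id
            next_id += 1
        assigned[best_id] = (x, y)
    tracks.clear()
    tracks.update(assigned)
    return assigned

# frame 1: two vehicles appear and receive fresh IDs
print(assign_ids([(10, 10), (200, 40)]))
# frame 2: both move slightly and keep their original IDs
print(assign_ids([(14, 12), (205, 38)]))
```

      Logging `(ID, coordinates, timestamp)` to a text file on each frame would then give the output Mehrzad describes.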

  45. Ahmed June 9, 2018 at 10:06 am #

    Hi Adrian,
    How can I create my own Caffe model, and where can I learn to do so?
    I searched for a tutorial about creating a Caffe model, but I couldn’t find one.
    Best regards

    • Adrian Rosebrock June 13, 2018 at 6:04 am #

      You can train your own custom Caffe models but you’ll need experience in computer vision, machine learning, and deep learning. If you’re interested, I discuss how to train your own custom Caffe models inside the PyImageSearch Gurus course.

  46. michell June 13, 2018 at 2:54 pm #

    Hi Adrian,

    I’m doing a project and I need to detect foods, fruit for example. Do you know of any model that detects the object and draws a box around it, for the Raspberry Pi?

    • Adrian Rosebrock June 15, 2018 at 12:37 pm #

      What you are referring to is called object detection. I have an introduction to deep learning object detection which you can read here.

  47. vahid June 14, 2018 at 4:13 am #

    Hi Adrian, thanks for the great post.
    I installed mvnc on my Pi and tested it using the method below, and got the same result as you:
    $ cd ~/workspace/ncsdk/examples/apps
    $ make all
    $ cd hello_ncs_py
    $ python hello_ncs.py
    Hello NCS! Device opened normally.
    Goodbye NCS! Device closed normally.
    NCS device working

    Also, when I import mvnc in my program I don’t get any error, but when I run mvNCCompile on the command line to build the graph file I get the error below:
    bash: mvNCCompile: command not found
    Please help me.

    • Adrian Rosebrock June 18, 2018 at 9:01 am #

      Hi Vahid, thanks for your comment. Just checking — are you using mvNCCompile on the Pi? You shouldn’t run that command on the Pi. Instead, you should execute that command on a capable desktop computer. The instructions show you how to run it in a VM but a VM isn’t necessary if you have Ubuntu.

      • vahid June 19, 2018 at 12:33 am #

        Dear Adrian, thanks for your answer. I should say that I used mvNCCompile on Ubuntu on my PC but unfortunately could not compile. Please tell me more about it if you can. Thank you very much.

        • Adrian Rosebrock June 19, 2018 at 9:12 am #

          Double check that you didn’t install Movidius SDK version 2. This blog post was written before version 2 was released. The other thing you could check is your PATH to make sure that the directory housing the binary is properly added.

  48. Joseph Palermo June 23, 2018 at 10:46 am #

    Installation failed for me because of “No matching distribution found for tensorflow==1.4.0”. Is the TensorFlow version important? I may try simply installing the latest version.

    • Adrian Rosebrock June 25, 2018 at 1:54 pm #

      Try with the latest version of TensorFlow — it should now be pip installable:

      $ pip install tensorflow

  49. Joseph Palermo June 23, 2018 at 11:53 am #

    Also, “Insert Guest Additions CD image…” is a checkbox in the User Interface tab. Do I have to do something in addition to checking the box? For instance, Adrian, in Figure 6 you had the Guest Additions installer running in your terminal on the Ubuntu guest OS. How did you get that to happen?

    • Adrian Rosebrock June 25, 2018 at 2:28 pm #

      Hi Joseph, when you have your VM running, go to the VirtualBox menu bar under Devices and you’ll see an option to “Insert Guest Additions CD Image…”. Click that and the installer will start automatically. If it is already checked, you might see a CD icon on the desktop of the VM. You should be able to launch the autorun executable from the CD if it didn’t start automatically. Refer to Figure 6 — that’s what it will look like when the installation is complete. Don’t forget to restart!

  50. Aske June 27, 2018 at 9:20 am #

    Great and interesting tutorial!
    Have you tried installing the OpenVINO toolkit and optimizing the model? I’m not sure if an Intel CPU-based host is required or if it is possible to use the Pi 3 + NCS.

    • Adrian Rosebrock June 28, 2018 at 8:06 am #

      I have not tried the OpenVINO toolkit. I’ll have to look into it.

  51. Sam Pawall June 27, 2018 at 2:06 pm #

    Hi Adrian – thanks for the amazing post. I don’t have any experience with Deep learning or object detection. I will likely be purchasing one of your books. I want to use it to build a model to detect if screws have been installed on a part. So imagine a part is on a moving conveyor belt. This part should have 5 screws on it. I want to detect if all 5 screws are present and if missing, which screws (location and quantity) are missing. Can I do that with this stick?

    Also, as the part moves through different stations on the conveyor belt, I imagine I can calculate time spent at each station (how long it took to complete an operation at a station) as well, right?

    I would appreciate your thoughts, also would appreciate if you can suggest which one of your blogs and books and other resources would be helpful.

    Thanks.

    • Adrian Rosebrock June 28, 2018 at 8:06 am #

      Hey Sam, are you intending to deploy your trained model to the Raspberry Pi? Keep in mind that even with the NCS the Pi will likely achieve 4-10 FPS at the very max, depending on your model. If your conveyor belt is moving slow enough this should be fine but if it’s a fast moving conveyor belt it may be problematic.

      As far as suggestions go, if you are serious about studying deep learning and object detection you should absolutely go with my book, Deep Learning for Computer Vision with Python.

      • Sam July 11, 2018 at 2:14 pm #

        Hi Adrian – thanks for your reply. Yes. Can I use my trained model on the Movidius + Raspberry Pi?

        • David Hoffman July 13, 2018 at 9:28 am #

          If you’ve already trained a model and created a graph file, then yes — you can run it on your Movidius + Pi. I’d also like to add that you might consider traditional (non-deep-learning) image processing approaches to identify the screws, as you might achieve a higher FPS, especially on the Pi.

  52. Michael June 28, 2018 at 7:50 am #

    Hey guys,

    So I’m experiencing more of an annoyance than a problem. I’m working on a Raspberry Pi by SSH’ing into it with one keyboard/mouse, but I’m still looking at the Pi’s screen. However, when I run the code on the Pi via SSH I get the following:

    $ python ncs_video.py --graph mobilenetgraph --display 1

    (Output:1160): Gtk-WARNING **: cannot open display:

    Now if I connect my keyboard/mouse directly to the Pi and run it, it works fine. Slow, but it works.

    Any suggestions on how to get this to work via SSH?

    • Adrian Rosebrock June 28, 2018 at 7:55 am #

      You need to enable X11 forwarding when you SSH into your Pi:

      $ ssh -X pi@your_ip_address

      • michael June 28, 2018 at 10:19 am #

        Adrian,

        Thank you sir!!

  53. Sai Teja July 5, 2018 at 2:23 pm #

    Hi Adrian,

    It was a really good tutorial. I was wondering how you got the results that you displayed in the table comparing the Pi with and without the Movidius. How can we calculate the FPS?

    • Sai Teja July 5, 2018 at 6:12 pm #

      I am sorry, please ignore this question

  54. Sai Teja July 5, 2018 at 6:11 pm #

    Hi Adrian,

    Can I use GoogLeNet or AlexNet instead of MobileNet?

  55. hiankun July 7, 2018 at 10:19 am #

    Hi, the default number of SHAVEs is 1, which can be found in the (maybe newer?) documentation: https://movidius.github.io/ncsdk/tools/compile.html

    I recently encountered a situation in which I didn’t assign the SHAVEs value, and the final graph ran very slowly. After asking my question in the NCS forum and getting a suggestion, I realized the importance of the -s option. :-p

    • Adrian Rosebrock July 10, 2018 at 9:20 am #

      Thanks for sharing Hiankun. Yes, the default is one and the more you can allocate the better.
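      For reference, the `-s` flag is passed at compile time. A hedged example follows; the file names are placeholders, so substitute your own deploy .prototxt and .caffemodel:

```shell
# request the maximum of 12 SHAVE cores when compiling the graph;
# deploy.prototxt / mobilenet_ssd.caffemodel are illustrative file names
mvNCCompile deploy.prototxt -w mobilenet_ssd.caffemodel -s 12 -o graph
```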

  56. Amare Mahtsentu July 7, 2018 at 4:04 pm #

    Hi Adrian!!!
    It is very clear from your post that the Movidius is great for fast processing.
    You have shown SSD with the MobileNet architecture written in the Caffe framework, and it is very nice. Does the NCS support SSD MobileNet written in TensorFlow?

    • Adrian Rosebrock July 10, 2018 at 9:28 am #

      Hi Amare! The Movidius is a good device to augment SBCs, but it would never replace a fully capable GPU or even a high-end laptop CPU. Please refer to the previous post (Figure 5) where a benchmark was made that includes my Macbook Pro. As the figure demonstrates, the NCS + laptop is actually slower than the laptop itself (and that’s even in a VM on the laptop — running on bare metal the numbers would be even worse).

      Check this link for SSD MobileNet with Tensorflow: Movidius MobileNets.

  57. Bitbitbit July 8, 2018 at 2:28 am #

    Hello Adrian, thanks a lot for the tutorial.
    I saw that Intel has updated the NCAPI from version 1 to version 2, and I noticed that your tutorial uses version 1. Will the same Python code from your download work with NCAPI version 2? Can you guide us in making it work with NCAPI version 2?
    Thanks again.

    • Adrian Rosebrock July 10, 2018 at 9:29 am #

      From what I understand, it will not work with NCSDKv2. You’ll have to figure out some modifications to the scripts. I may do a new blog post in the future.

  58. Wicus Van der Westhuizen July 10, 2018 at 1:17 pm #

    Hi Adrian,

    Great tutorial and I love your blog. Keep up the good work.

    Is there a way to display the video feed from the camera at a higher framerate than the FPS of the processed feed? I don’t know if that makes any sense at all. So you would basically have a smooth video feed at, say, 30 FPS with the classification boxes updating at, for instance, 4 FPS.

  59. Sai Teja July 12, 2018 at 6:48 pm #

    Hi Adrian,

    Thanks for the post. Can you guide me in training a custom model for object detection?

    • Adrian Rosebrock July 13, 2018 at 4:52 am #

      Before you get too far down the rabbit hole on training your own model I would read this getting started guide for object detection. Inside the guide I help you develop a foundation of what object detection is, how it works from a deep learning perspective, and then provide suggestions for how to train your models.

  60. Pravesh Bawangade July 14, 2018 at 2:46 pm #

    Can we cluster two or more Raspberry Pis to make processing faster? Can you make a tutorial on that? Thank you.

  61. Mark West July 19, 2018 at 5:42 am #

    Hello Adrian!

    First off, great blog. I’ve really struggled getting the Movidius up and running and your instructions really helped.

    I’m trying to adapt the example from this blog post to use Graphs generated from other models. However when I try using a Graph based on a Tiny Yolo Caffe Model I’m just getting an array of NaN’s back from graph.GetResult().

    My guess is that this has something to do with both the image dimensions and the preprocess_image function. Do you have any tips that may help me progress further here?

    Thanks again for all your work!

    • Adrian Rosebrock July 20, 2018 at 7:13 am #

      Hey Mark — are you using float16 datatypes? If so, read this Movidius thread and specifically Tome’s January 17 post. That’s the only thing I can think of. Typically when I’ve run models without proper preprocessing it just yields unfavorable results, but not NaNs. Definitely triple check your image dimensions!

      • Mark West July 22, 2018 at 12:11 pm #

        Thanks for the tip! I’m off on holiday now but will get back to this in a week or so.

        Thanks again!

  62. Sai Teja July 20, 2018 at 11:58 am #

    HI Adrian,

    Thanks for the tutorial. Can you guide me on how to give a recorded stream as input to the code?

  63. Kashyap Nishtala July 21, 2018 at 3:31 am #

    Hi Adrian!

    Can you please tell me how to use the detected objects in the video to perform any physical action (robot arm) through the raspberry pi? Can I write code for the motors along with the training models and form a graph file?
    Thank you

    • Adrian Rosebrock July 21, 2018 at 9:11 am #

      You certainly can take actions based on objects detected but exactly what those actions are and how you perform them are entirely up to you. For a robot you would want to look at the physical hardware you are using as well as read the documentation on how to use it. The documentation of your robot/servo/etc. will instruct you on which libraries to use.

  64. Michael July 25, 2018 at 5:05 am #

    Hi Adrian, could you recommend a Raspberry Pi alternative that can do detections at around 50+ FPS using a CNN? I was hoping the NCS and Raspberry Pi 3B might manage it, but after reading your article I see that it can’t.

    • Adrian Rosebrock July 25, 2018 at 7:54 am #

      Have you taken a look at the NVIDIA Jetson lineup? That would be my suggested hardware.

  65. tuxkart August 8, 2018 at 7:38 am #

    Hi Adrian,
    Although this post is getting on a bit, I still hope for your help anyway.
    Actually, I had been quite happy with my laptop’s onboard graphics until I found that I needed more, and this neural stick seems like a good choice. Do you have any advice for my situation?

    • Adrian Rosebrock August 9, 2018 at 2:54 pm #

      It really depends on what your application is. What do you hope to accomplish with the NCS?

      • tuxkart August 9, 2018 at 9:30 pm #

        Oh, my mistake in posting.
        Actually, I’m a “big fan” of your posts; they’re really cool to me. However, when I run your object detection code sample on my laptop, the FPS is quite low, and with some other samples I cloned from GitHub (YOLO, for example), the results are even worse. The Movidius NCS, which possibly speeds things up about 10× as shown above, may be a good choice for me. But I still hope for more options, and I look forward to your suggestion. Thanks so much, Adrian.

        • Adrian Rosebrock August 10, 2018 at 6:09 am #

          If you take a look at this post you’ll find a comparison of the NCS speeds on a MacBook Pro. In general, the speed is actually worse than using the CPU itself. The NCS works great on speeding up inference on resource constrained devices such as the Pi but it won’t do much for your laptop (provided you are running a modern laptop). I would instead invest in a good GPU.

          • tuxkart August 10, 2018 at 6:17 am #

            Oh, I see, Adrian. That’s what I need.
            Thanks so much!

  66. hashir August 10, 2018 at 4:46 am #

    How can I get output only within a predefined confidence range, with the rest of the predictions discarded?

    • Adrian Rosebrock August 10, 2018 at 6:05 am #

      On Line 149 you could modify the “if” statement to be something like:

      if pred_conf > min_conf and pred_conf < max_conf
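      Spelled out, the band filter might look like this hypothetical sketch (the detection list and the `min_conf`/`max_conf` names are illustrative, not taken from the post’s script):

```python
# hypothetical detections as (label, confidence) pairs
predictions = [("person", 0.95), ("dog", 0.72), ("cat", 0.31)]

min_conf, max_conf = 0.5, 0.9

# keep only detections whose confidence falls inside the band
kept = [(label, conf) for (label, conf) in predictions
        if min_conf < conf < max_conf]

print(kept)  # only the 0.72 "dog" detection survives the band filter
```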

  67. senay August 16, 2018 at 8:26 pm #

    Hi Adrian!!
    I have tested the NCS and it works fine at 8 FPS…
    but I still need more speed!
    What will happen if I use two NCSs for the same model and the same video sample? Will it give a speed of around 16 FPS between the two?
    I am using a Raspberry Pi camera module for the live video.

    • Adrian Rosebrock August 17, 2018 at 7:16 am #

      Unfortunately no, using more than one NCS is not going to increase your FPS. You should look into faster embedded devices that are designed to run inference with deep learning models. The Jetson TX2 would be my first suggestion.

Leave a Reply