Getting started with the NVIDIA Jetson Nano

In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including:

  • First boot
  • Installing system packages and prerequisites
  • Configuring your Python development environment
  • Installing Keras and TensorFlow on the Jetson Nano
  • Changing the default camera
  • Classification and object detection with the Jetson Nano

I’ll also provide my commentary along the way, including what tripped me up when I set up my Jetson Nano, ensuring you avoid the same mistakes I made.

By the time you’re done with this tutorial, your NVIDIA Jetson Nano will be configured and ready for deep learning!

To learn how to get started with the NVIDIA Jetson Nano, just keep reading!

Getting started with the NVIDIA Jetson Nano

Figure 1: In this blog post, we’ll get started with the NVIDIA Jetson Nano, an AI edge device capable of 472 GFLOPS of computation. At around $100 USD, the device is packed with capability including a Maxwell architecture 128 CUDA core GPU covered up by the massive heatsink shown in the image. (image source)

In the first part of this tutorial, you will learn how to download and flash the NVIDIA Jetson Nano .img file to your micro-SD card. I’ll then show you how to install the required system packages and prerequisites.

From there you will configure your Python development environment and learn how to install the Jetson Nano-optimized version of Keras and TensorFlow on your device.

I’ll then show you how to access the camera on your Jetson Nano and even perform image classification and object detection on the Nano as well.

We’ll then wrap up the tutorial with a brief discussion on the Jetson Nano — a full benchmark and comparison between the NVIDIA Jetson Nano, Google Coral, and Movidius NCS will be published in a future blog post.

Before you get started with the Jetson Nano

Before you can even boot up your NVIDIA Jetson Nano you need three things:

  1. A micro-SD card (minimum 16GB)
  2. A 5V 2.5A MicroUSB power supply
  3. An ethernet cable

I really want to stress that 16GB is the bare minimum. The first time I configured my Jetson Nano I used a 16GB card, and that space was eaten up fast, particularly once I installed the Jetson Inference library, which downloads a few gigabytes of pre-trained models.

I, therefore, recommend a 32GB micro-SD card for your Nano.

Secondly, when it comes to your 5V 2.5A MicroUSB power supply, in their documentation NVIDIA specifically recommends this one from Adafruit.

Finally, you will need an ethernet cable when working with the Jetson Nano which I find really, really frustrating.

The NVIDIA Jetson Nano is marketed as being a powerful IoT and edge computing device for Artificial Intelligence…

…and if that’s the case, why is there not a WiFi adapter on the device?

I don’t understand NVIDIA’s decision there and I don’t believe it should be up to the end user of the product to “bring their own WiFi adapter”.

If the goal is to bring AI to IoT and edge computing then there should be WiFi.

But I digress.

You can read more about NVIDIA’s recommendations for the Jetson Nano here.

Download and flash the .img file to your micro-SD card

Before we can get started installing any packages or running any demos on the Jetson Nano, we first need to download the Jetson Nano Developer Kit SD Card Image from NVIDIA’s website.

NVIDIA provides documentation for flashing the .img file to a micro-SD card for Windows, macOS, and Linux — you should choose the flash instructions appropriate for your particular operating system.

First boot of the NVIDIA Jetson Nano

After you’ve downloaded and flashed the .img file to your micro-SD card, insert the card into the micro-SD card slot.

I had a hard time finding the card slot — it’s actually underneath the heatsink, right where my finger is pointing:

Figure 2: Where is the microSD card slot on the NVIDIA Jetson Nano? The microSD receptacle is hidden under the heatsink as shown in the image.

I think NVIDIA could have made the slot a bit more obvious, or at least better documented it on their website.

After sliding the micro-SD card home, connect your power supply and boot.

Assuming your Jetson Nano is connected to an HDMI output, you should see the following (or similar) displayed to your screen:

Figure 3: To get started with the NVIDIA Jetson Nano AI device, just flash the .img (preconfigured with Jetpack) and boot. From here we’ll be installing TensorFlow and Keras in a virtual environment.

The Jetson Nano will then walk you through the install process, including setting your username/password, timezone, keyboard layout, etc.

Installing system packages and prerequisites

In the remainder of this guide, I’ll be showing you how to configure your NVIDIA Jetson Nano for deep learning, including:

  • Installing system package prerequisites.
  • Installing TensorFlow and Keras on the Jetson Nano.
  • Installing the Jetson Inference engine.

Let’s get started by installing the required system packages:
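# the exact prerequisite list may vary slightly with your JetPack version
$ sudo apt-get update
$ sudo apt-get install git cmake libatlas-base-dev gfortran
$ sudo apt-get install libhdf5-serial-dev hdf5-tools python3-dev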

Provided you have a good internet connection, the above commands should only take a few minutes to finish up.

Configuring your Python environment

The next step is to configure our Python development environment.

Let’s first install pip, Python’s package manager:
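$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ rm get-pip.py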

We’ll be using Python virtual environments in this guide to keep our Python development environments independent and separate from each other.

Using Python virtual environments is a best practice and will help you avoid having to maintain a micro-SD card for each development environment you want to use on your Jetson Nano.

To manage our Python virtual environments we’ll be using virtualenv and virtualenvwrapper which we can install using the following command:
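$ sudo pip install virtualenv virtualenvwrapper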

Once we’ve installed virtualenv and virtualenvwrapper we need to update our ~/.bashrc file. I’m choosing to use nano but you can use whatever editor you are most comfortable with:
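$ nano ~/.bashrc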

Scroll down to the bottom of the ~/.bashrc file and add the following lines:
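# virtualenv and virtualenvwrapper (paths assume the sudo pip install above)
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh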

After adding the above lines, save and exit the editor.

Next, we need to reload the contents of the ~/.bashrc file using the source command:
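$ source ~/.bashrc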

We can now create a Python virtual environment using the mkvirtualenv command — I’m naming my virtual environment deep_learning, but you can name it whatever you would like:
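$ mkvirtualenv deep_learning -p python3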

Installing TensorFlow and Keras on the NVIDIA Jetson Nano

Before we can install TensorFlow and Keras on the Jetson Nano, we first need to install NumPy.

First, make sure you are inside the deep_learning virtual environment by using the workon command:
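$ workon deep_learning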

From there, you can install NumPy:
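$ pip install numpy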

Installing NumPy on my Jetson Nano took ~10-15 minutes as it had to be compiled on the system (there are currently no pre-built versions of NumPy for the Jetson Nano).

The next step is to install Keras and TensorFlow on the Jetson Nano. You may be tempted to do a simple pip install tensorflow-gpu. Do not do this!

Instead, NVIDIA has provided an official release of TensorFlow for the Jetson Nano.

You can install the official Jetson Nano TensorFlow by using the following command:
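# the JetPack tag (jp/v42) and version pin were current when this post was
# written; check NVIDIA's release notes for the values matching your JetPack
$ pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3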

Installing NVIDIA’s tensorflow-gpu package took ~40 minutes on my Jetson Nano.

The final step here is to install SciPy and Keras:
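$ pip install scipy
$ pip install keras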

These installs took ~35 minutes.

Compiling and installing Jetson Inference on the Nano

The Jetson Nano .img already has JetPack installed so we can jump immediately to building the Jetson Inference engine.

The first step is to clone down the jetson-inference repo:
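$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init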

We can then configure the build using cmake.
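$ mkdir build
$ cd build
$ cmake ../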

There are two important things to note when running cmake:

  1. The cmake command will ask for root permissions so don’t walk away from the Nano until you’ve provided your root credentials.
  2. During the configure process, cmake will also download a few gigabytes of pre-trained sample models. Make sure you have a few GB to spare on your micro-SD card! (This is also why I recommend a 32GB microSD card instead of a 16GB card).

After cmake has finished configuring the build, we can compile and install the Jetson Inference engine:
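$ make
$ sudo make install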

Compiling and installing the Jetson Inference engine on the Nano took just over 3 minutes.

What about installing OpenCV?

I decided to cover installing OpenCV on a Jetson Nano in a future tutorial. There are a number of cmake configurations that need to be set to take full advantage of OpenCV on the Nano, and frankly, this post is long enough as is.

Again, I’ll be covering how to configure and install OpenCV on a Jetson Nano in a future tutorial.

Running the NVIDIA Jetson Nano demos

When using the NVIDIA Jetson Nano you have two options for input camera devices:

  1. A CSI camera module, such as the Raspberry Pi camera module (which is compatible with the Jetson Nano, by the way)
  2. A USB webcam

I’m currently using all of my Raspberry Pi camera modules for my upcoming book, Raspberry Pi for Computer Vision, so I decided to use my Logitech C920, which is plug-and-play compatible with the Nano (you could use the newer Logitech C960 as well).

The examples included with the Jetson Nano Inference library can be found in jetson-inference:

  • detectnet-camera: Performs object detection using a camera as an input.
  • detectnet-console: Also performs object detection, but using an input image rather than a camera.
  • imagenet-camera: Performs image classification using a camera.
  • imagenet-console: Classifies an input image using a network pre-trained on the ImageNet dataset.
  • segnet-camera: Performs semantic segmentation from an input camera.
  • segnet-console: Also performs semantic segmentation, but on an image.
  • A few other examples are included as well, including deep homography estimation and super resolution.

However, in order to run these examples, we need to slightly modify the source code for the respective cameras.

In each example you’ll see that the DEFAULT_CAMERA value is set to -1, implying that an attached CSI camera should be used.

However, since we are using a USB camera, we need to change the DEFAULT_CAMERA value from -1 to 0 (or whatever the correct /dev/video V4L2 camera is).

Luckily, this change is super easy to do!

Let’s start with image classification as an example.

First, change directory into ~/jetson-inference/imagenet-camera:
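$ cd ~/jetson-inference/imagenet-camera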

From there, open up imagenet-camera.cpp:
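$ nano imagenet-camera.cpp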

You’ll then want to scroll down to approximately Line 37 where you’ll see the DEFAULT_CAMERA value:
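#define DEFAULT_CAMERA -1   // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)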

Simply change that value from -1 to 0:
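#define DEFAULT_CAMERA 0    // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0)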

From there, save and exit the editor.

After editing the C++ file, you will need to recompile the example, which is as simple as:
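$ cd ~/jetson-inference/build
$ make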

Keep in mind that make is smart enough to not recompile the entire library. It will only recompile files that have changed (in this case, the ImageNet classification example).

Once compiled, change to the aarch64/bin directory and execute the imagenet-camera binary:
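$ cd aarch64/bin
$ ./imagenet-camera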

Here you can see that GoogLeNet is loaded into memory, after which inference starts:

Image classification is running at ~10 FPS on the Jetson Nano at 1280×720.

IMPORTANT: If this is the first time you are loading a particular model then it could take 5-15 minutes to load the model.

Internally, the Jetson Nano Inference library is optimizing and preparing the model for inference. This only has to be done once so subsequent runs of the program will be significantly faster (in terms of model loading time, not inference).

Now that we’ve tried image classification, let’s look at the object detection example on the Jetson Nano which is located in ~/jetson-inference/detectnet-camera/detectnet-camera.cpp.

Again, if you are using a USB webcam you’ll want to edit approximately Line 39 of detectnet-camera.cpp and change DEFAULT_CAMERA from -1 to 0 and then recompile via make (again, only necessary if you are using a USB webcam).

After compiling you can find the detectnet-camera binary in ~/jetson-inference/build/aarch64/bin.

Let’s go ahead and run the object detection demo on the Jetson Nano now:
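$ cd ~/jetson-inference/build/aarch64/bin
$ ./detectnet-camera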

Here you can see that we are loading a model named ped-100 used for pedestrian detection (I’m actually not sure what the specific architecture is as it’s not documented on NVIDIA’s website — if you know what architecture is being used, please leave a comment on this post).

Below you can see an example of myself being detected using the Jetson Nano object detection demo:

According to the output of the program, we’re obtaining ~5 FPS for object detection on 1280×720 frames when using the Jetson Nano. Not too bad!

How does the Jetson Nano compare to the Movidius NCS or Google Coral?

This tutorial is simply meant to be a getting started guide for your Jetson Nano — it is not meant to compare the Nano to the Coral or NCS.

I’m in the process of comparing each of the respective embedded systems and will be providing a full benchmark/comparison in a future blog post.

In the meantime, take a look at the following guides to help you configure your embedded devices and start running benchmarks of your own:

How do I deploy custom models to the Jetson Nano?

One of the benefits of the Jetson Nano is that once you compile and install a library with GPU support (compatible with the Nano, of course), your code will automatically use the Nano’s GPU for inference.

For example:

Earlier in this tutorial, we installed Keras + TensorFlow on the Nano. Any Python scripts that leverage Keras/TensorFlow will automatically use the GPU.

And similarly, any pre-trained Keras/TensorFlow models we use will also automatically use the Jetson Nano GPU for inference.

Pretty awesome, right?
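If you’d like to convince yourself the GPU is actually being used, here is a minimal sanity check (a sketch assuming the TensorFlow 1.x install from earlier in this guide):

from tensorflow.python.client import device_lib

# list every device TensorFlow can access; the Nano's 128-core Maxwell GPU
# should appear as a "GPU" device alongside the CPU
print(device_lib.list_local_devices())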

Provided the Jetson Nano supports a given deep learning library (Keras, TensorFlow, Caffe, Torch/PyTorch, etc.), we can easily deploy our models to the Jetson Nano.

The problem here is OpenCV.

OpenCV’s Deep Neural Network (dnn) module does not support NVIDIA GPUs, including the Jetson Nano.

OpenCV is working to provide NVIDIA GPU support for their dnn module. Hopefully, it will be released by the end of the summer/autumn.

But until then we cannot leverage OpenCV’s easy-to-use cv2.dnn functions.

If using the cv2.dnn module is an absolute must for you right now, then I would suggest taking a look at Intel’s OpenVINO toolkit, the Movidius NCS, and their other OpenVINO-compatible products, all of which are optimized to work with OpenCV’s deep neural network module.

If you’re interested in learning more about the Movidius NCS and OpenVINO (including benchmark examples), be sure to refer to this tutorial.

Interested in using the NVIDIA Jetson Nano in your own projects?

I bet you’re just as excited about the NVIDIA Jetson Nano as I am. In contrast to pairing the Raspberry Pi with either the Movidius NCS or Google Coral, the Jetson Nano has it all built right in (minus WiFi), letting you powerfully conduct computer vision and deep learning at the edge.

In my opinion, embedded CV and DL is the next big wave in the AI community. It’s so big that it may even be a tsunami — will you be riding that wave?

To help you get your start in embedded Computer Vision and Deep Learning, I have decided to write a brand new book — Raspberry Pi for Computer Vision.

I’ve chosen to focus on the Raspberry Pi as it is the best entry-level device for getting started in the world of computer vision for IoT.

But I’m not stopping there. Inside the book, we’ll:

  • Augment the Raspberry Pi with the Google Coral and Movidius NCS coprocessors.
  • Apply the same skills we learn with the RPi to a device with more horsepower: NVIDIA’s Jetson Nano.

Additionally, you’ll learn how to:

  • Build practical, real-world computer vision applications on the Pi.
  • Create computer vision and Internet of Things (IoT) projects and applications with the RPi.
  • Optimize your OpenCV code and algorithms on the resource-constrained Pi.
  • Perform Deep Learning on the Raspberry Pi (including utilizing the Movidius NCS and OpenVINO toolkit).
  • Configure your Google Coral, perform image classification and object detection, and even train + deploy your own custom models to the Coral Edge TPU!
  • Utilize the NVIDIA Jetson Nano to run multiple deep neural networks on a single board, including image classification, object detection, segmentation, and more!

I’m running a Kickstarter campaign to fund the creation of the new book, and to celebrate, I’m offering 25% OFF my existing books and courses if you pre-order a copy of RPi for CV.

In fact, the Raspberry Pi for Computer Vision book is practically free if you pre-order it with Deep Learning for Computer Vision with Python or the PyImageSearch Gurus course.

The clock is ticking and these discounts won’t last — the Kickstarter pre-sale shuts down this Friday (May 10th) at 10AM EDT, after which I’m taking the deals down.

Reserve your pre-sale book now and while you are there, grab another course or book at a discounted rate.

Summary

In this tutorial, you learned how to get started with the NVIDIA Jetson Nano.

Specifically, you learned how to install the required system packages, configure your development environment, and install Keras and TensorFlow on the Jetson Nano.

We wrapped up by learning how to change the default camera and perform image classification and object detection on the Jetson Nano using the pre-supplied scripts.

I’ll be providing a full comparison and benchmarks of the NVIDIA Jetson Nano, Google Coral, and Movidius NCS in a future tutorial.

To be notified when future tutorials are published here on PyImageSearch (including the Jetson Nano vs. Google Coral vs. Movidius NCS benchmark), just enter your email address in the form below!


152 Responses to Getting started with the NVIDIA Jetson Nano

  1. Yurii Chernyshov May 6, 2019 at 10:20 am #

    “I decided to cover installing OpenCV on a Jetson Nano in a future tutorial”.

    Looking forward 🙂 !

    P.S.
    It seems like I always looking forward with this site for the last 5 years already.
    Thanks to keep me motivated for such a long time.

    • Adrian Rosebrock May 6, 2019 at 2:19 pm #

      Thanks Yurii 🙂

    • Satyajith May 10, 2019 at 7:49 am #

      A Tegra optimised version of OpenCV is a part of Jet Pack, right?

  2. al krinker May 6, 2019 at 10:37 am #

    what a timing… i just finished listening to the podcast interview on super data science with you where your made your predictions about next best thing and you mentioned NVIDIA Jetson Nano…. it def blows rasberrypi out of the water!

    guess. i will be shelling $99 soon to get on the wagon soon

    • Adrian Rosebrock May 6, 2019 at 2:20 pm #

      If you are interested in embedding computer vision/deep learning I think it’s worth the investment. I also think NVIDIA did a pretty good job nailing the price point as well.

  3. wally May 6, 2019 at 10:38 am #

    Wow another timely post!

    My Jetson Nano is still on backorder, but this will get me jumpstarted when it finally arrives.

    • Adrian Rosebrock May 6, 2019 at 2:20 pm #

      I hope you enjoy hacking with it when it arrives!

  4. David Bonn May 6, 2019 at 10:49 am #

    Great post, Adrian.

    I too am really excited about embedded computer vision and deep learning. There will likely be explosive growth and some amazing opportunities in the very near future.

    Now I’m trying to decide whether to buy a Jetson or a Google Coral! Or both…

    One interesting point you hit upon — current camera libraries aren’t very flexible if you want to use different cameras (or even the full capabilities of a given USB camera) and will reduce you to tears if you (for example) have more than one USB camera connected to your system at a time. I’d love to see a better solution.

    • Adrian Rosebrock May 6, 2019 at 2:19 pm #

      OpenCV does a fairly reasonable job for at least accessing the camera. The problem becomes programmatically adjusting any settings on the camera sensor. OpenCV provides “.get” and “.set” methods for various parameters but not all of them will be supported by a given camera. It can be a royal pain to work with.

  5. AR May 6, 2019 at 10:55 am #

    Hi Adrian

    Thanks for this blog, its really helpful. I have the kit for like 1 week now and was struggling to get started with it. I have few questions.

    1. How can we write our own code to run the detection process.?
    2. I see that all the demos are in cpp, so no support for python.?
    3. Are you planning to write other blog on writing python code for detection.?

    Thanks

    • Adrian Rosebrock May 6, 2019 at 2:18 pm #

Hey AR, I’m actually covering all of those topics in Raspberry Pi for Computer Vision. If you’d like to learn how to use your Nano from within Python projects (image classification, object detection, etc.), I would definitely suggest pre-ordering a copy.

  6. issaiass May 6, 2019 at 11:32 am #

    You probably like to add these things to complete the “on the edge module”
    Intel Dual Band Wireless-Ac 8265 BT/WF module
    Noctua NF-A4x20 5V PWM, Premium Quiet Fan, 4-Pin, 5V Version (40x20mm, Brown)
    5V 5A AC to DC Power Supply Adapter Converter Charger 5.5×2.1mm
    2 x 6dBi RP-SMA Dual Band 2.4GHz 5GHz + 2 x 35cm M.2(NGFF)Cable Antenna Mod Kit
    128GB Ultra microSDXC UHS-I Memory Card with Adapter – C10 (at least 64GB)
    4 x M3-0.6x16mm or 4x 13mm

  7. Stewart Christie May 6, 2019 at 11:49 am #

    Thanks for documenting this, I have my devices on order from SEEED, hopefully arriving soon. Do you plan on a similar getting started guide for Coral and the TPU?

    Regarding the lack of Wi-Fi , the issue is certification, and every country has different regulations. Therefore the systems that do ship with Wi-Fi have soldered down parts, and on board antennae, similar to the newer pi and pi-zeros’. in the US there’s also FCC restrictions on number of “prototype” and “pre-production” devices a company can ship, usually in the hundreds, with labelling issues, and $$$ fines for non-compliance So that further complicates issues, and gaining w-fi certification takes time.

    As an independent user I can add a wifi card, and any old antennae to a system, and even if my unshielded rf-emitting device now causes interference, its not as big an issue.

    I fully expect any real edge device, built using this system would be in a case, shielded, with Wi-FI, and a totally seperate certification.

  8. Rean May 6, 2019 at 11:59 am #

    Hi Adrian,

    Cool tutorial! Great Job!

    I just porting the ZMQ host to Jetson nano(without python virtualenv due to opencv issue), I can’t wait your new tutorials in the future, maybe one day we can receive images from Pi through internet and inference using the CUDA resource on nano.

    Thanks a lot!!

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Thanks Rean, enjoy hacking with your Nano!

  9. Joseph May 6, 2019 at 12:19 pm #

    Awesome as always Adrian!

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Thanks Joseph!

  10. Karthik May 6, 2019 at 1:11 pm #

    Pls Im waiting for the Jetson nano to perform a decent face recognition and use its full potential as so far dlib is not working, opencv dnn is performing bad, their inbuilt face inference is a joke… please help, and great work adrian

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Hey Karthik — I don’t have any tutorials right now for face recognition on the Nano. I’ll consider it though, thanks for the suggestion!

  11. Jerome May 6, 2019 at 1:29 pm #

    thanks a lot 🙂
    Nano is ever on my desk 🙂

    • Adrian Rosebrock May 6, 2019 at 2:14 pm #

      You’re welcome, Jerome! Enjoy the guide and let me know if you have any issues with it 🙂

  12. Dustin Franklin May 6, 2019 at 3:58 pm #

    “I’m actually not sure what the specific architecture is as it’s not documented on NVIDIA’s website — if you know what architecture is being used, please leave a comment on this post”

    It is using the DetectNet model, which is trained using DIGITS/Caffe in the full tutorial which covers training + inferencing: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-training.md

    You can read more about the DetectNet architecture here: https://devblogs.nvidia.com/detectnet-deep-neural-network-object-detection-digits/

    • Adrian Rosebrock May 7, 2019 at 9:02 am #

      Awesome, thank you for clarifying the model architecture Dustin! 🙂

  13. Andrew May 6, 2019 at 4:06 pm #

    I love my Nano and NVIDIA has code and plans for two ground robots you can make: JetBot and Kaya. I built and installed OpenCV 4.1.0, by using the following cmake command:

    cmake -D CMAKE_BUILD_TYPE=RELEASE
    -D CMAKE_INSTALL_PREFIX=/usr/local
    -D INSTALL_PYTHON_EXAMPLES=ON
    -D INSTALL_C_EXAMPLES=OFF
    -D OPENCV_ENABLE_NONFREE=ON
    -D OPENCV_EXTRA_MODULES_PATH=~/path to your opencv contrib modules folder
    -D PYTHON_EXECUTABLE=~/.virtualenvs/cv/bin/python
    -D BUILD_EXAMPLES=ON ..

    make -j4
    sudo make install
    sudo ldconfig

    • Adrian Rosebrock May 7, 2019 at 9:02 am #

      Thanks for sharing the install configuration, Andrew!

    • Rich May 10, 2019 at 7:12 pm #

      I used a different one to get everything running with CUDA.

      cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 \
      -D WITH_GSTREAMER=ON \
      -D WITH_LIBV4L=ON \
      -D OPENCV_ENABLE_NONFREE=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D BUILD_EXAMPLES=OFF ..

      It required a 64GB card minimum for me, and I had to use a swap usb drive since the memory requirements redlined. Jetsonhacks has a good video on the swap drive. Installing OpenCV required 17.8GB of space beyond the opencv zip download, it was incredible.

      Final symbolic link was different too:

      ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so cv2.so

      • Adrian Rosebrock May 15, 2019 at 3:09 pm #

        Wow, thanks for sharing this Rich!

  14. Prem May 6, 2019 at 4:48 pm #

    Great! yes looking forward for more NANO articles, particularly using OpenCV DNN for CUDA, as I see lot of OpenCV in your articles, but using it in Jetson platform will be a challenge due to the lack of support, as your code runs on frugal powered Pi, I can’t imagine what you can achieve with NANO.

    • Adrian Rosebrock May 7, 2019 at 9:04 am #

      One of the problems you’ll run into is that the “dnn” (i.e., Deep Neural Network) module of OpenCV does not yet support CUDA. Ideally it will soon but it’s one of the current limitations of OpenCV. You’ll need to use whatever library the model was trained on to access the GPU and then just use OpenCV to facilitate reading frames/images, image processing, etc.

      That’s one aspect that the Movidius NCS really has going for it — basically one line of code added to the script and you’re able to perform faster inference. (tutorial here if you’re interested)

  15. Rodolfo May 6, 2019 at 5:46 pm #

    Hello Adrian,
    I wonder how difficult is it to setup OpenVINO with the Jetson Nano. I’d like to do some work with the emotion recognition sample available for OpenVINO. Any suggestions about emotion recognition would be greatly appreciated.

    Thank you for your tutorial!

  16. Mehmet Ali Anil May 6, 2019 at 8:28 pm #

    The renders of the module show that there is a WiFi on the module, but not populated. Therefore we will see a connected version of it, just not on the dev board for now.

    • Adrian Rosebrock May 7, 2019 at 9:05 am #

      Got it, that makes sense.

  17. Lucas May 6, 2019 at 11:18 pm #

    Thanks a lot. Adrian. There will be a new “toy” for me. Ha Ha!

    • Adrian Rosebrock May 7, 2019 at 9:06 am #

      Thanks Lucas.

  18. DeepNet May 7, 2019 at 5:54 am #

    Hi Adrian,
    Thanks a lot for the new post, that’s excellent, please continue this post.
    I want to deploy ssd_mobilenet on Jetson, and I want to use TensorRT for this, what do I do? how convert .pb file Tensorflow object detection API file to TesorRT format? In this code you use TensorRT library, right? you write TensoRT codes for this or cv.dnn by default use from TensorRT?

  19. Mehdi May 7, 2019 at 7:52 am #

    I am really impressed that you keep your blog updated so that new technologies are always covered. Thank you Adrian, and keep going, please :).

    • Adrian Rosebrock May 7, 2019 at 9:07 am #

      Thanks Mehdi, I really appreciate that 🙂

  20. Paawan Sharma May 7, 2019 at 9:12 am #

    Hi Adrian,
    A great article. I tested it on my Jetson nano and its fantastic. Thank you for making things easy. Eagerly waiting for content on jetson nano in RPi for CV ebook.
    Kudos.
    Paawan.

    • Adrian Rosebrock May 8, 2019 at 12:55 pm #

      Thanks Paawan — and congrats on getting the example to run on your Nano 🙂

  21. JoeV May 7, 2019 at 12:53 pm #

    I love you man
    Your tutorials are always
    in the right place
    and
    always at the right time

    • Adrian Rosebrock May 8, 2019 at 12:53 pm #

      Thanks, I’m glad you’re enjoying the guides 🙂

  22. zeev May 7, 2019 at 1:47 pm #

    WOW!

    becuase of you i bought today the jetson nano, and started to play with it.
    first of all thanks a lot for you job, and for your help.

    one question, how can i connect a rtsp camera rather then home camera.
    i would like to play with home security camera rather then room camera.

    and thanks a lot for your efforts

    • Adrian Rosebrock May 8, 2019 at 12:52 pm #

      Thanks Zeev, I’m glad you liked the guide. I hope you enjoy hacking with the Nano!

      As far as RTSP goes, I don’t really like working with that protocol. See this tutorial for my suggested way to stream frames and consume them.

      • zeev May 12, 2019 at 12:17 pm #

        about the openCV issue.
        on nano you should install openCV 4 (what a lucky i bought your e-book for that)

        for the install script, just use the following link:
        https://github.com/AastaNV/JEP/tree/master/script

        • wally May 29, 2019 at 5:06 pm #

          OpenCV should be able to decode rtsp streams using:

          vs=cv2.VideoCapture(rtspURL)
          ret, frame = vs.read()

          As long as the rtspURL plays in VLC.

          On a Pi3B+ with NCS using rtsp streams drops my fps from about 6.5 to 4.5 fps. Perhaps the nano can do better.

          On various systems I’ve used OpenCV versions 3.4.2 to 4.1.0, although certainly not all the variations in between 🙂

          My project is here: https://github.com/wb666greene/AI_enhanced_video_security/blob/master/README.md

        • wally May 30, 2019 at 4:45 pm #

          Any reason that script can’t be modified to build 4.1.0 instead of 4.0.0?

          I try to avoid building OpenCV whenever possible, but key parts (for us) of OpenVINO have been Open Sourced by Intel (announced today https://github.com/opencv/dldt/blob/2019/inference-engine/README.md ) so I’m looking into the possibility of changing the x86-64 build to try compiling for arm64 of the Jetson Nano.

          Cmake projects generaly give me the fits, so I’m not really expecting success but feel its worth a try tomorrow while I’m stuck home waiting for package deliveries.

          • blackantywj September 27, 2019 at 10:13 pm #

            Excuse me, have you built the dldt project in jetson nano successfully? Can you please tell me how you solve the problem of opencv

  23. S@g@r May 8, 2019 at 6:55 am #

    Hi, Adrian very good article. its really helpful.
    I have some questions.
    I am using a Logitech c170 camera, is it compatible with the jetson nano?
    Error message: “fail to capture”.
    Can you please help me with that

    • Adrian Rosebrock May 8, 2019 at 12:48 pm #

I haven’t tried the Logitech C170 so I’m not sure. I used a Logitech C920 and it worked out of the box.

  24. Rich May 8, 2019 at 7:51 pm #

Is there anyone out there that actually understands how to use the TensorRT Python API so that this awesome tut would actually be more helpful to the real world? I blame nVidia for the poor docs, but there doesn’t seem to be hardly any support for non-C++ coders.

    You couldn’t actually use this to generate predictions that are usable in your python code, like, take this bounding box and send it somewhere. You could do this if it was all python code.

    Any ideas Adrian or others how to do this?

    • Adrian Rosebrock May 15, 2019 at 3:25 pm #

      Hey Rich — all of those questions are being addressed inside Raspberry Pi for Computer Vision. I’ll likely do TensorRT + Python blog posts as well.

  25. Troy Zuroske May 9, 2019 at 10:43 am #

    Does the Jetson Nano not come with OpenCV preinstalled? I have a Jetson Xavier and using the Jetpack sdk installer, it was one of the additional libraries pushed to the device upon flashing it. If it does come preinstalled on the Nano, why can’t you use that version? I was able to get this real time object detection library to work on my Jetson with the preinstalled openCV: https://github.com/ayooshkathuria/pytorch-yolo-v3. There was some custom stuff I had to do because I am using some Jetson specific cameras (https://www.e-consystems.com/nvidia-cameras/jetson-agx-xavier-cameras/four-synchronized-4k-cameras.asp) but it is performing well with GPU support. Maybe if does not come preinstalled you could leverage some help from this post: https://www.jetsonhacks.com/2018/11/08/build-opencv-3-4-on-nvidia-jetson-agx-xavier-developer-kit/ which you may already be aware of.

    • Phil May 29, 2019 at 2:59 pm #

      It does come pre-installed. If you want to use a virtualenv though, I believe you have to build all non-core modules yourself (though you might be able to symlink your install ?)

      • wally May 30, 2019 at 10:18 am #

        Mine seems to have come with OpenCV 3.3.1, which is close to the minimum version required to support the dnn module.

        I plan to try and install OpenVINO later today, which will install a “local” OpenCV 4.1.0 with OpenVINO mods to the dnn module.

        I tried installing the Coral TPU and all looked to go fine, but running the parrot demo fails with:

        File “/usr/local/lib/python3.6/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py”, line 18, in swig_import_helper
        fp, pathname, description = imp.find_module(‘_edgetpu_cpp_wrapper’, [dirname(__file__)])
        File “/usr/lib/python3.6/imp.py”, line 297, in find_module
        raise ImportError(_ERR_MSG.format(name), name=name)
        ImportError: No module named ‘_edgetpu_cpp_wrapper’

        I’m going to poke around a bit more before giving up and trying OpenVINO.

      • wally May 30, 2019 at 11:12 am #

        I’ve got the Coral TPU runing on my Jetson, found the solution via Google:

        cd /usr/local/lib/python3.6/dist-packages/edgetpu/swig/

        sudo cp _edgetpu_cpp_wrapper.cpython-35m-aarch64-linux-gnu.so _edgetpu_cpp_wrapper.so

        Details were here: https://github.com/f0cal/google-coral/issues/7

        Seems the install script fails to create the required symlink for arm64.

  26. Marco Miglionico May 10, 2019 at 1:58 pm #

    Hi Adrian. As always, thanks fo the amazing tutorial. I am building a Computer Vision application based on Face Detection, so I really need the cv2.dnn module. As you mentioned, this won’t be available on NVIDIA GPU until the end of the summer. I also have an Intel NCS, but not a Raspberry py. So I was thinking if it is possible to use the Jetson Nano + NCS to use the cv2.dnn module?
    If this is not possible I will start using a Raspberry pi + NCS .
    Thanks

    • Adrian Rosebrock May 15, 2019 at 3:10 pm #

      That may be possible but I have not tried.

    • Phil May 29, 2019 at 3:00 pm #

      … why use the way overpowered cv2.dnn when you can use a simple Haar Cascade for face detection. Perhaps you’re talking about facial recognition?

  27. Alex Grutter May 11, 2019 at 8:00 am #

    Hey Adrian!

    I found this script for building OpenCV on Jetson Nano. The NVIDIA peeps linked to it. Have you had a chance to try it already?

    https://devtalk.nvidia.com/default/topic/1049972/opencv-cuda-python-with-jetson-nano/

    https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.0.0_Nano.sh

    Thanks!

    Alex

    • Adrian Rosebrock May 15, 2019 at 3:06 pm #

      I have not tried yet. It’s on my todo list though 🙂

  28. Edward Li May 11, 2019 at 4:16 pm #

    Hi Adrian! Thanks for the post – it was very helpful when setting up my Jetson Nano

    I had a few questions:
    – My Jetson froze a few times while running install scripts for scipy and keras. I think it’s a RAM issue. Do you know if it’s possible/what the best way is to enable a swap file?
    – When deploying custom models, do you know if it is faster to use TensorRT to accelerate the model rather than simply running it on Keras/Tensorflow?
    – Finally, is there a reason why you use get-pip.py over apt-get install python3-pip?

    Thanks so much for the post!

    • Adrian Rosebrock May 15, 2019 at 3:05 pm #

      See the other comments regarding SciPy. Try creating a swapfile.

      TensorRT should theoretically be faster but I haven’t confirmed that yet.

  29. jay abrams May 11, 2019 at 8:14 pm #

    hi Adrian,
    is there anything one can do if jetson nano keeps hangin on ‘running setup.py install for scipy’??
    it hangs on that for some time and then the whole thing freezes up. Am i supposed to do something with a swapfile or something?

    thank you!

    • Adrian Rosebrock May 15, 2019 at 3:03 pm #

      See the other comments — try creating a swapfile.

  30. Thanhlong May 12, 2019 at 1:07 am #

    Hi Adrian, after read a lots of article and your post about jetson nano, i very confuse should i buy a rasberry to develop or should i save money for the jetson nano. I afraid that the community support for jetson not large like rasberry.
    Thank for all your article, i learn from it a lots:))

    • Adrian Rosebrock May 15, 2019 at 3:03 pm #

      It really depends on what you are trying to build and the goals of the project. Could you describe what you would like the RPi or Nano to be doing? What types of models do you want to run?

  31. Bog Flap May 12, 2019 at 7:35 am #

    Hi. Great article. However a point you may have overlooked. In order to get the scipy install to work I had to install a swapfile first. Part of the install uses all the avaliable ram (about 110MB of extra ram is required). Otherwise my nano crashed and not very gracefully.

    • Adrian Rosebrock May 15, 2019 at 3:00 pm #

      Thanks for sharing. I didn’t need a swapfile though, strange!

  32. Jorge Paredes May 13, 2019 at 6:27 pm #

    Hello

    Just in case of someone have the same problem that me.

    I followed the tutorial and when I tried to install scipy:

    $ pip install scipy

    My Nano hangs after 10 minutes, more less .

    I see that my Nano has no configured swapfile, so I created one swap file and try to install scipy again and then it worked. Scipy is installed.

    Do you think that I must remove the swapfile after install everything? I know it runs on the SD Card….

    • Adrian Rosebrock May 15, 2019 at 2:48 pm #

      Thanks for sharing, Jorge! How large was your swapfile?

  33. Mel May 14, 2019 at 7:44 am #

    Hi Adrian,

    is the Jetson nano meant to be only used for computer vision tasks or can I use it in place of a GPU to train models on? Thanks for the tutorials, always learning something new

    • Adrian Rosebrock May 15, 2019 at 2:39 pm #

      No, you don’t use the Jetson Nano for training, only for inference.

  34. Marco Miglionico May 14, 2019 at 11:19 am #

    Hi Adrian, I am having a problem during the scipy installation. It get stuck forever, do you know what can be the problem?
    Thanks

    • Adrian Rosebrock May 15, 2019 at 2:36 pm #

      It’s not stuck. SciPy will take awhile to compile and install.

  35. Bob O May 14, 2019 at 1:14 pm #

    I am working my way through the installation (for the second day) because the system keeps locking up. It just freezes, even the System Monitor. I am running it low power, with one browser page, one terminal, and one System Monitor window open. It eventually recovers, but this happens so often the system is almost useless. Nothing seems to interrupt it, but if the screen saver comes on, the picture will swap every couple minutes (this happens when I am away) but getting the prompt to unlock it is hit and miss. When I pull the plug to hard reboot, the package has finished installing.

    One different thing is that I am using a larger memory card (200 GB) but nothing warns me about that being a problem.

    So far, system updated, scipy and TF installed, and watching nothing happen for the last 20 minutes while installing scipy.

    • Bob O May 14, 2019 at 5:01 pm #

      Update, it appears that running at 4k UHD is what the problem was. When I downgraded resolution to 1920, it became completely stable and let me finish the install.

      • Adrian Rosebrock May 15, 2019 at 2:34 pm #

        Thanks for sharing, Bob!

  36. Rob Jones May 14, 2019 at 5:05 pm #

    In your examples you run the binaries from ~/jetson-inference/build/aarch64/bin

    But sudo make install puts copies of them into /usr/local/bin and so you can just run them from anywhere

    The problem is you need to tell them where to get the libjetson-inference.so library

    Add this to your ~/.bashrc file

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

    • Adrian Rosebrock May 15, 2019 at 2:34 pm #

      Nice, thanks for sharing Rob!

  37. Gavin Huang May 16, 2019 at 1:49 pm #

    My jetson nano would stuck in ‘pip install scipy’ command,
    does anyone has the same problem?

    • Adrian Rosebrock May 23, 2019 at 10:20 am #

      Make sure you read the other comments on the post. Try creating a swap file as other readers suggested.

  38. Andres May 21, 2019 at 12:14 pm #

    hello adrian thank you for everything you do, I have a problem I installed on a 64GB card but when I connect the nano does not turn on, the screen stays black and only the green light of the led on stays on. I used a 64GB kingston card and followed the nvidia instructions

    • Adrian Rosebrock May 23, 2019 at 9:39 am #

      I have not encountered that error before so I unfortunately do not know. Try reflashing the .img file. I would also post the issue on the official Jetson Nano forums.

  39. Bingxin Hou May 23, 2019 at 3:01 am #

    Hi Adrian,

    I have the same question, which one to purchase. Jetson nano or RPI+accelorator.

    1. I’m doing school IOT project on real-time video object detection, especially on instance segmentation and would like to put mask R-CNN or other instance segmentation model for this purpose.

    Would like to measure the speed and accuracy as a reference /comparison with my other method implemented on IOT device.

    So I hope it can handle my task.

    2. Do you think it’ll be very hard for me to use Jetson because the community for Jetson is smaller than Raspberry Pi?

    3. If I purchase Raspberry Pi 3 B+, which one do you recommend, Google coral or Intel NCS?

    Thank you very much!


    • Adrian Rosebrock May 23, 2019 at 9:27 am #

The Jetson community is smaller, but keep in mind that while the RPi community is huge, not all of the RPi community performs CV and DL. My recommendation for you would probably be a Jetson Nano. It’s faster and has a dedicated onboard GPU.

      • Tim U May 27, 2019 at 11:33 pm #

        I am a Raspberry Pi enthusiast strictly with python. (I tried an Orange Pi for better hardware, but quit in frustration due to the incompatibility with the RPi packages; ex GPIO for LED and servos – aka robotics)

        Any insight for running RPI code, unmodified, on the Jetson Nano?

        Seems like I’ll run into the same limitations.
        https://github.com/NVIDIA/jetson-gpio/issues/1

        (note: maybe this will be in your upcoming comparison between Raspberry, Coral, and Jetson? – I’ll wait)

        • Adrian Rosebrock May 30, 2019 at 9:18 am #

          What do you mean by “unmodified” RPi code?

    • uday as June 18, 2019 at 1:57 am #

      Hey Bingxin Hou ,

      i have similar task to do , just wanted to know if you are able to run instance segmentation model in jetson nano

  40. Derek Decker May 23, 2019 at 11:02 pm #

    Thanks Adrian!

    I’m still try to follow your instructions above. The “nano ~/.bashrc” returned “bash: nano: command not found”. Also, I am still searching for bashrc, can’t find it anywhere.

    Your thoughts?
    Derek

    • Adrian Rosebrock May 30, 2019 at 9:42 am #

      That’s definitely not correct — try re-flashing your Nano .img file to your micro-SD card, I think you may have messed up your paths.

  41. Kurt May 25, 2019 at 11:54 am #

    I have to use “sudo -H” for the pip install jumpy and I still get an error stating “Error: uff 0.5.5 ..protobuf≥3.3.0 required”

    • Adrian Rosebrock May 30, 2019 at 9:29 am #

Is that a typo? Do you mean “numpy”?

  42. wally May 29, 2019 at 4:47 pm #

    My Jetson arrived yesterday, without this great tutorial I’d have been getting nowhere fast!

    Very smooth installation and setup with your instructions.

    However ~5fps “person detection” with C++ code is most unimpressive compared to what I’ve gotten on Pi3B+ with NCS and the overhead of servicing multiple cameras, or your Coral tutorial, both running Python code.

    Maybe Jetson does better on other networks, but this is my main concern at the moment.

  43. wally May 30, 2019 at 12:00 pm #

    I’m sorry I think I posted this in the Coral TPU tutorial by mistake, although it is relevant their as well.

    I am less than impressed with the Jetson Nano Cuda/GPU “person detection” in this tutorial, ~5fps seems hardly worthwhile compared to previous tutorials on the Pi3B+ with Coral TPU or OpenVINO

    But the Jetson CPU makes a great host for the Coral TPU!

    Running your Coral tutorial detect_video.py modified for timing and using a USB webcam I get the following results with my HP HD usb webcam:

    Pi3B+ ~6.5 fps (Raspbian, 32-bit, USB2)
    Odroid XU-4 ~19.8 fps (Mate16, 32-bit, USB3)
    i3-4025 ~46.4 fps (Mate16, 32-bit, USB3)
    Nano ~44.5 fps (Ubuntu18, 64-bit, USB3)

    I know my HP webcam is not doing 40+ fps, imutils is returning duplicate frames, but the inference loop is processing that many images per second.

    The Tegra arm64 might just be the best bang/buck “sleeper” among the small, low power Pi-like computers. My i3 Mini-PC needs a 60W power supply the others are under 25W.

    • Adrian Rosebrock June 6, 2019 at 8:48 am #

      Thanks for sharing these numbers, Wally! I really appreciate it (as I’m sure other readers do as well).

  44. Adithya May 31, 2019 at 8:58 am #

    Hi Adrian, going forward in jetson nano setup, I am stuck on scipy installation. I followed your instructions, successfully installed numpy, tensorflow, but now stuck on scipy. I am including the error below. Any help will be very useful.

    ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly

    • Komms June 25, 2019 at 6:43 am #

      Hi Adithya, did you solve it?

      • Hamza El Hanbali August 29, 2019 at 2:24 pm #

        Have figured out how to solve that problem? ’cause i’m having the same here and don’t know how to deal with it.

        I already tried to pip install pep517 and still not working..+

    • Jose Luis July 4, 2019 at 4:00 pm #

      Hi, I do not know if you solved the problem, but I had the same error and I could install it in the following way

$ pip install --no-binary :all: scipy

      source: https://github.com/pypa/pip/issues/6222

    • Hamza El Hanbali August 29, 2019 at 3:44 pm #

      Pip install PEP517

      then sudo apt upgrade

      It worked for me, i had the same error.

  45. Dagson June 5, 2019 at 11:07 pm #

    Great Adrian tutorial, congratulations on your work.

    I think of taking a master’s degree in computer vision and image processing, do you accept being my teacher advisor? It would be like winning a prize at the mega sena in Brazil. lol

    I look forward to your tutorial by installing Opencv on a Jetson Nano.

    • Adrian Rosebrock June 6, 2019 at 6:40 am #

      I’m flattered that you would want me to be your advisor; however, I am not affiliated with any university so I cannot take on MSc or PhD students.

      • Dagson June 19, 2019 at 8:42 am #

        OK Adrian, understood!

  46. Hassan June 8, 2019 at 4:41 am #

    Hi Adrian,

    Great article again. Thanks and congrats.

    I got a problem after cmake has finished configuring the build. I typed “make” but I got the following error:

    make: *** No targets specified and no makefile found. Stop.

    When I checked the content of the build folder, I realized that there is no makefile indeed. What may be the possible cause and what should I do?

    • Adrian Rosebrock June 12, 2019 at 1:51 pm #

      The “cmake” step failed. Go back to the cmake step and examine it for errors.

      • Gopal Holla June 13, 2019 at 5:22 am #

        Thanks Adrian. I was able to resolve this

        • Adrian Rosebrock June 13, 2019 at 9:31 am #

          Congrats on resolving the issue, Gopal!

  47. Gopal Holla June 13, 2019 at 5:21 am #

    Hello Adrian, I was successfully able to install Keras as per your instructions. However, when I ran the classify.py (this I took from one of your cnn-keras classifier), i get the following error on jetson nano:”… tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized” the error is quite long.. but have pasted the main part here”. Am completely lost here. Fromm google search, the information is GPU memory insufficient. Please help how to overcome this..

  48. Vignesh June 14, 2019 at 2:18 pm #

    This is great article, easy to follow, helped with the initial setup. Waiting for the opencv article.

    • Adrian Rosebrock June 19, 2019 at 2:18 pm #

      Thanks Vignesh!

  49. Sathish Krishnamoorthi June 21, 2019 at 9:01 am #

    Hi Adrian Rosebrock
    I follow all your article. It is crisp and clear. It helped me a lot with my projects. Thanks !!!

    Recently, I bought the Jetson developer kit and sadly I used 16GB and ran out of memory. Haha, I should have gone through your article first.

    So, what should I do to increase the memory?

    1. I understand that my kernel is built on the present SD card (16GB). How can I use a new SD card with more memory? Should I reflash and what are the step involved to use a new SD card? Because, I tried flashing a 512gb SD card and inserted the SD card into the Nano board (which was already flashed by 16 GB card), but reinstallation didn’t happen?
    2. What are the requirements should I consider while buying a new SD card (like storage, type)?

    • Adrian Rosebrock June 26, 2019 at 1:45 pm #

      It sounds like you weren’t too far along in the install/config process so I would recommend starting with a new, larger SD card and then a fresh install.

  50. Jeff June 25, 2019 at 5:56 pm #

    Thanks so much. I’m a hw guy but was able to make my way through and am up and running.

    • Adrian Rosebrock June 26, 2019 at 11:19 am #

      Thanks Jeff, I’m glad you liked the tutorial!

  51. izack t June 27, 2019 at 5:52 pm #

    Wow! as usual amazing tutorial and just on time!

    Thank you so much Adrian!

    p.s.
    maybe it will help someone,
    I had permission problems with all the installations in the virtual env and to fix it I just has to add “sudo” before each one and then the installations went perfect 🙂

    • izack t June 27, 2019 at 5:56 pm #

      forgot to add to the fix – a “sudo apt-get update” was needed just after creating the virtual env and before installing the rest of its packages.

  52. Camill Trueb July 13, 2019 at 4:04 pm #

    I installed Pytorch using the prebuilt wheel provided by NVIDIA https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/. When I move my models & tensors to the GPU, predictions become very slow. Should I switch to TF / Keras? Or am I missing something…

    • Adrian Rosebrock July 25, 2019 at 10:07 am #

      Sorry, I haven’t used PyTorch with the Nano so I’m not sure what the issue is there.

  53. James July 17, 2019 at 1:13 am #

    Hey Adrian, what exactly does the system package pre-reqs do ?

  54. Subhidh Agarwal July 19, 2019 at 6:39 am #

    As the camera screen opens, the nano shuts down automatically.
    Please help

    • TBS Dr. J September 19, 2019 at 3:12 pm #

      When I had this problem, it was due to not enough current coming off the power supply. A 5v, 2A USB power supply will not be enough to power the GPUs, and, when it tries, the entire power will cut out. I switched to a 5V, 4A supply through the barrel jack and had no more problems.

  55. TBS Dr. J July 24, 2019 at 3:23 am #

    I ran through all of this tutorial and hooked up a PiCamera (v2) instead of a USB webcam, and got 40-55 fps at 1360×768 running object detection (using unmodified imagenet-camera)

    Thanks for your clear instructions and many tutorials.

  56. praveen July 24, 2019 at 5:10 am #

    Hi Adrian ,

    Thanks for the nice post. I have done installation successfully as per your steps and when i started running

    ./detectnet-camera

    jetson nano board gets restarted and even i tried swap memory also ,nothing helps me.

    I am using CSI camera [pi camera – v2]. Please share your thoughts on this

  57. marcel July 27, 2019 at 1:30 am #

    I think this part is wrong…
    “#define DEFAULT_CAMERA -1 // -1 for onboard camera, or change to index of /dev/video V4L2 camera (>=0) ” you say we can find it in .cpp file but this code is python code.. C doesnt have “#” coments, also I was looking in that imagenet-camera.cpp file (which is btw. in /jetson-inference/examples) and there was nothing about default camera 0. But It was in the file with same name .py in python folder

    • Adrian Rosebrock August 7, 2019 at 1:08 pm #

      The “#” is not a comment. It’s part of the “#define” statement. Give the file another look.

      • Steven Griset August 12, 2019 at 9:04 pm #

        Adrian

        The latest version of the inference engine code does not have #define camera statement and also has python examples. You can set the camera input now for either python example or c++ example from the command line . I might be mistaken but I think that is the confusion here.

        • Adrian Rosebrock August 16, 2019 at 5:42 am #

          Ah okay, that makes sense. Thanks for the clarification Steven!

        • Robert August 23, 2019 at 10:33 am #

          Thanks Steven for the clarification, I had the same confusion, now with the latest release of the code you don’t have to change the code to change cameras.

          The command (for USB camera) from ~/jetson-inference/build/aarch64/bin is:

./imagenet-camera --camera /dev/video0

  58. Marcin August 16, 2019 at 6:38 pm #

    Yo Adrian. Check out Nvidia Deepstream 4.0. A tutorial on installing that on the Nano would be really helpful. With Deepstream, uou can do 4k at 60fps with the Jetson Nano while detecting.

    • Hariharan.M September 27, 2019 at 7:31 am #

      yess!! Adrian! Please the nano and on Ubuntu as well!! this will be a life saver for the community!

  59. Robert August 22, 2019 at 12:09 pm #

    Hi Adrian, thanks for this tutorial!! I’m looking forward to integrating this into my robot.
    Do you have any plans for writing tutorials or a book for integrating this into ROS or other robot control systems? I’m planning on taking Udacity’s robotics nanodegree but I don’t know how good it is and I already know your books are awesome, so I’ll hold off if you’re going to release one soon.
    Thanks again,
    Robert

    • Adrian Rosebrock September 5, 2019 at 9:51 am #

      I’ve considered doing something related to ROS but haven’t committed yet. Maybe in the future but I cannot guarantee it.

  60. Dayos August 25, 2019 at 1:55 pm #

    Hello adrian,thank you for your awesome tutorial.
    I have a question,could i use gstreamer in opencv install?i have a 2080 ti gpu and i want to use gstreamer for accelerating multiple cameras frame grabbing and processing in my gpu.
    How i can use hardware decoding using opencv and python?
    For example i have 10 cameras 4k resolution.

    • Adrian Rosebrock September 5, 2019 at 9:50 am #

      Yes, you can use gstreamer with OpenCV. I don’t have any tutorials on it at the moment but I’ll definitely consider it for a future post.

  61. James P Noon August 27, 2019 at 10:47 am #

    Hi Adrian,

    Awhile ago, I had a Lego League Robotics Team to whom I wrote weekly notes on AI. The notes were all conceptual. We did no programming except in the simply Mindstorms Robotics language. Later on, I got Mindsensors kit, and I did the simply Python programming necessary to get a robot to follow a wall or a black line on the floor, but nothing more sophisticated, but nothing more. I would have liked to build a robot with a simple visual or IR based navigation system, but that was way beyond my knowledge level.

    So now, Nvidia has launched the Nano, and that appears to be a robust platform on which to build a robot using the Mindstorms mechanical kit and the Nano brain. The only problem is,”How to program it?…(that seems to be your expertise).”

    The Mindstorms kit came with fairly specific programming instructions for the beginner. I think any directed effort created by you would need that level of detail. Let me know if you are willing to collaborate in this project.

    Jim Noon

    • Adrian Rosebrock September 5, 2019 at 9:50 am #

Thanks for the comment. I don’t have any experience with the Mindstorms kits so unfortunately don’t have any input there.

  62. Alibek Jakupov August 28, 2019 at 6:29 am #

    Hi Adrian, thank you very much. Just a little precision, after the last update there is no constant for camera id, it must now be set via command line arguments.

    For instance, if you are using a USB cam, the command should be:

./imagenet-camera --camera /dev/video0

    Again thank you for the great tutorial

  63. Adrian Valle September 1, 2019 at 7:23 pm #

    Hello Adrian,

    I have found your tutorial on Jetson nano very well explained, I just have purchased a Jetson nano module to learn/study machine learning /deep learning, and I happened see your offer, and books.
    Question: your course includes practical applications with focus on jetson nano and RB-pi ?
    or do I have to learn to use the jetson module in order to implement your class examples?

    Thank you,

    • Adrian Rosebrock September 5, 2019 at 9:49 am #

Hey Adrian — yes, Raspberry Pi for Computer Vision is compatible with both the RPi and Jetson Nano. I’ve included instructions on how to use the Nano for CV and DL. Code is included as well.

  64. jisoo yu September 9, 2019 at 2:44 am #

    Thank you for the excellent tutorial. I bumped into an error while was following your instructions. Right after the command, ./imagenet-camera, it pops up the error, ‘imagenet-camera: camera open for streaming
    GST_ARGUS: Creating output stream
    CONSUMER: Waiting until producer is connected…
    GST_ARGUS: Available Sensor modes :
    Segmentation fault (core dumped).’ Please help me fix the error. Thank you.

    • jisoo yu September 10, 2019 at 1:02 am #

I found the solution. Type ./imagenet-camera --camera /dev/video0 and this will make it work.

      • Adrian Rosebrock September 12, 2019 at 11:13 am #

        Congrats on resolving the issue!

      • Gabriel September 19, 2019 at 7:40 pm #

        didn’t solve 🙁

        • Gabriel September 19, 2019 at 8:36 pm #

./imagenet-camera --camera /dev/video0

“--”

          now it worked

  65. Arun K Soman September 25, 2019 at 2:11 pm #

    Hi Adrian can I use Jetson Nano for training DNN model if dataset is small say 500mb?

    • Adrian Rosebrock October 3, 2019 at 12:48 pm #

      Technically yes but I really don’t recommend it. The Nano isn’t meant for training anything substantial, it really should be used for inference.
