Getting started with the NVIDIA Jetson Nano

In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including:

  • First boot
  • Installing system packages and prerequisites
  • Configuring your Python development environment
  • Installing Keras and TensorFlow on the Jetson Nano
  • Changing the default camera
  • Classification and object detection with the Jetson Nano

I’ll also provide my commentary along the way, including what tripped me up when I set up my Jetson Nano, ensuring you avoid the same mistakes I made.

By the time you’re done with this tutorial, your NVIDIA Jetson Nano will be configured and ready for deep learning!

To learn how to get started with the NVIDIA Jetson Nano, just keep reading!

Getting started with the NVIDIA Jetson Nano

Figure 1: In this blog post, we’ll get started with the NVIDIA Jetson Nano, an AI edge device capable of 472 GFLOPS of computation. At around $100 USD, the device is packed with capability including a Maxwell architecture 128 CUDA core GPU covered up by the massive heatsink shown in the image. (image source)

In the first part of this tutorial, you will learn how to download and flash the NVIDIA Jetson Nano .img file to your micro-SD card. I’ll then show you how to install the required system packages and prerequisites.

From there you will configure your Python development environment and learn how to install the Jetson Nano-optimized versions of Keras and TensorFlow on your device.

I’ll then show you how to access the camera on your Jetson Nano and even perform image classification and object detection on the Nano as well.

We’ll then wrap up the tutorial with a brief discussion on the Jetson Nano — a full benchmark and comparison between the NVIDIA Jetson Nano, Google Coral, and Movidius NCS will be published in a future blog post.

Before you get started with the Jetson Nano

Before you can even boot up your NVIDIA Jetson Nano you need three things:

  1. A micro-SD card (minimum 16GB)
  2. A 5V 2.5A MicroUSB power supply
  3. An ethernet cable

I really want to stress that a 16GB micro-SD card is the bare minimum. The first time I configured my Jetson Nano I used a 16GB card, but that space was eaten up fast, particularly when I installed the Jetson Inference library, which downloads a few gigabytes of pre-trained models.

I therefore recommend a 32GB micro-SD card for your Nano.

Second, when it comes to your 5V 2.5A MicroUSB power supply, NVIDIA specifically recommends this one from Adafruit in their documentation.

Finally, you will need an ethernet cable when working with the Jetson Nano, a requirement I find really, really frustrating.

The NVIDIA Jetson Nano is marketed as being a powerful IoT and edge computing device for Artificial Intelligence…

…and if that’s the case, why is there not a WiFi adapter on the device?

I don’t understand NVIDIA’s decision there and I don’t believe it should be up to the end user of the product to “bring their own WiFi adapter”.

If the goal is to bring AI to IoT and edge computing then there should be WiFi.

But I digress.

You can read more about NVIDIA’s recommendations for the Jetson Nano here.

Download and flash the .img file to your micro-SD card

Before we can get started installing any packages or running any demos on the Jetson Nano, we first need to download the Jetson Nano Developer Kit SD Card Image from NVIDIA’s website.

NVIDIA provides documentation for flashing the .img file to a micro-SD card for Windows, macOS, and Linux — you should choose the flash instructions appropriate for your particular operating system.

First boot of the NVIDIA Jetson Nano

After you’ve downloaded and flashed the .img file to your micro-SD card, insert the card into the micro-SD card slot.

I had a hard time finding the card slot — it’s actually underneath the heatsink, right where my finger is pointing:

Figure 2: Where is the microSD card slot on the NVIDIA Jetson Nano? The microSD receptacle is hidden under the heatsink as shown in the image.

I think NVIDIA could have made the slot a bit more obvious, or at least better documented it on their website.

After sliding the micro-SD card home, connect your power supply and boot.

Assuming your Jetson Nano is connected to an HDMI output, you should see the following (or similar) displayed to your screen:

Figure 3: To get started with the NVIDIA Jetson Nano AI device, just flash the .img (preconfigured with Jetpack) and boot. From here we’ll be installing TensorFlow and Keras in a virtual environment.

The Jetson Nano will then walk you through the install process, including setting your username/password, timezone, keyboard layout, etc.

Installing system packages and prerequisites

In the remainder of this guide, I’ll be showing you how to configure your NVIDIA Jetson Nano for deep learning, including:

  • Installing system package prerequisites.
  • Installing TensorFlow and Keras on the Jetson Nano.
  • Installing the Jetson Inference engine.

Let’s get started by installing the required system packages:
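The exact package list may differ slightly depending on your JetPack version, but these are the build tools and libraries I needed:

$ sudo apt-get update
$ sudo apt-get install git cmake
$ sudo apt-get install libatlas-base-dev gfortran
$ sudo apt-get install libhdf5-serial-dev hdf5-tools
$ sudo apt-get install python3-dev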

Provided you have a good internet connection, the above commands should only take a few minutes to finish up.

Configuring your Python environment

The next step is to configure our Python development environment.

Let’s first install pip, Python’s package manager:
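I used the standard get-pip.py bootstrap script:

$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py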

We’ll be using Python virtual environments in this guide to keep our Python development environments independent and separate from each other.

Using Python virtual environments is a best practice and will help you avoid having to maintain a separate micro-SD card for each development environment you want to use on your Jetson Nano.

To manage our Python virtual environments we’ll be using virtualenv and virtualenvwrapper which we can install using the following command:
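$ sudo pip install virtualenv virtualenvwrapper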

Once we’ve installed virtualenv and virtualenvwrapper we need to update our ~/.bashrc file. I’m choosing to use nano but you can use whatever editor you are most comfortable with:
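$ nano ~/.bashrc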

Scroll down to the bottom of the ~/.bashrc file and add the following lines:
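These are the standard virtualenv/virtualenvwrapper settings; double-check that the virtualenvwrapper.sh path below matches where pip placed it on your system:

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh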

After adding the above lines, save and exit the editor.

Next, we need to reload the contents of the ~/.bashrc file using the source command:
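$ source ~/.bashrc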

We can now create a Python virtual environment using the mkvirtualenv command — I’m naming my virtual environment deep_learning, but you can name it whatever you would like:
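The -p python3 switch ensures the virtual environment is created with Python 3:

$ mkvirtualenv deep_learning -p python3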

Installing TensorFlow and Keras on the NVIDIA Jetson Nano

Before we can install TensorFlow and Keras on the Jetson Nano, we first need to install NumPy.

First, make sure you are inside the deep_learning virtual environment by using the workon command:
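$ workon deep_learning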

From there, you can install NumPy:
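$ pip install numpy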

Installing NumPy on my Jetson Nano took ~10-15 minutes as it had to be compiled on the system (there are currently no pre-built versions of NumPy for the Jetson Nano).

The next step is to install TensorFlow and Keras on the Jetson Nano. You may be tempted to do a simple pip install tensorflow-gpu. Do not do this!

Instead, NVIDIA has provided an official release of TensorFlow for the Jetson Nano.

You can install the official Jetson Nano TensorFlow by using the following command:
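At the time of writing, the command for JetPack 4.2 looked like the following (be sure to check NVIDIA’s Jetson documentation for the index URL and version string matching your JetPack release):

$ pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3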

Installing NVIDIA’s tensorflow-gpu package took ~40 minutes on my Jetson Nano.

The final step here is to install SciPy and Keras:
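$ pip install scipy
$ pip install keras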

These installs took ~35 minutes.

Compiling and installing Jetson Inference on the Nano

The Jetson Nano .img already has JetPack installed so we can jump immediately to building the Jetson Inference engine.

The first step is to clone down the jetson-inference repo:
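The repo pulls in a few dependencies as git submodules, so initialize those as well:

$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init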

We can then configure the build using cmake.
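From inside the jetson-inference directory, create a build directory and run cmake from it:

$ mkdir build
$ cd build
$ cmake ../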

There are two important things to note when running cmake:

  1. The cmake command will ask for root permissions so don’t walk away from the Nano until you’ve provided your root credentials.
  2. During the configure process, cmake will also download a few gigabytes of pre-trained sample models. Make sure you have a few GB to spare on your micro-SD card! (This is also why I recommend a 32GB microSD card instead of a 16GB card).

After cmake has finished configuring the build, we can compile and install the Jetson Inference engine:
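$ make
$ sudo make install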

Compiling and installing the Jetson Inference engine on the Nano took just over 3 minutes.

What about installing OpenCV?

I decided to cover installing OpenCV on a Jetson Nano in a future tutorial. There are a number of cmake configurations that need to be set to take full advantage of OpenCV on the Nano, and frankly, this post is long enough as is.

Again, I’ll be covering how to configure and install OpenCV on a Jetson Nano in a future tutorial.

Running the NVIDIA Jetson Nano demos

When using the NVIDIA Jetson Nano you have two options for input camera devices:

  1. A CSI camera module, such as the Raspberry Pi camera module (which is compatible with the Jetson Nano, by the way)
  2. A USB webcam

I’m currently using all of my Raspberry Pi camera modules for my upcoming book, Raspberry Pi for Computer Vision, so I decided to use my Logitech C920, which is plug-and-play compatible with the Nano (you could use the newer Logitech C960 as well).

The examples included with the Jetson Nano Inference library can be found in jetson-inference:

  • detectnet-camera: Performs object detection using a camera as an input.
  • detectnet-console: Also performs object detection, but using an input image rather than a camera.
  • imagenet-camera: Performs image classification using a camera.
  • imagenet-console: Classifies an input image using a network pre-trained on the ImageNet dataset.
  • segnet-camera: Performs semantic segmentation from an input camera.
  • segnet-console: Also performs semantic segmentation, but on an image.
  • A few other examples are included as well, including deep homography estimation and super resolution.

However, in order to run these examples with our camera, we need to slightly modify the source code of each respective example.

In each example you’ll see that the DEFAULT_CAMERA value is set to -1, implying that an attached CSI camera should be used.

However, since we are using a USB camera, we need to change the DEFAULT_CAMERA value from -1 to 0 (or whatever the correct /dev/video V4L2 camera is).

Luckily, this change is super easy to do!

Let’s start with image classification as an example.

First, change directory into ~/jetson-inference/imagenet-camera:
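$ cd ~/jetson-inference/imagenet-camera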

From there, open up imagenet-camera.cpp:
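$ nano imagenet-camera.cpp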

You’ll then want to scroll down to approximately Line 37 where you’ll see the DEFAULT_CAMERA value:
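In my copy of the repo the line looked like this (the exact line number and comment text may shift as the repo is updated):

#define DEFAULT_CAMERA -1   // -1 for onboard CSI camera, or index of /dev/video V4L2 camera (>= 0)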

Simply change that value from -1 to 0:
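#define DEFAULT_CAMERA 0    // use /dev/video0, our USB webcam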

From there, save and exit the editor.

After editing the C++ file you will need to recompile the example which is as simple as:
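$ cd ~/jetson-inference/build
$ make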

Keep in mind that make is smart enough to not recompile the entire library. It will only recompile files that have changed (in this case, the ImageNet classification example).

Once compiled, change to the aarch64/bin directory and execute the imagenet-camera binary:
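$ cd ~/jetson-inference/build/aarch64/bin
$ ./imagenet-camera googlenet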

Here you can see that the GoogLeNet model is loaded into memory, after which inference starts:

Image classification is running at ~10 FPS on the Jetson Nano at 1280×720.

IMPORTANT: If this is the first time you are loading a particular model then it could take 5-15 minutes to load the model.

Internally, the Jetson Nano Inference library is optimizing and preparing the model for inference. This only has to be done once so subsequent runs of the program will be significantly faster (in terms of model loading time, not inference).

Now that we’ve tried image classification, let’s look at the object detection example on the Jetson Nano which is located in ~/jetson-inference/detectnet-camera/detectnet-camera.cpp.

Again, if you are using a USB webcam you’ll want to edit approximately Line 39 of detectnet-camera.cpp and change DEFAULT_CAMERA from -1 to 0 and then recompile via make (again, only necessary if you are using a USB webcam).

After compiling you can find the detectnet-camera binary in ~/jetson-inference/build/aarch64/bin.

Let’s go ahead and run the object detection demo on the Jetson Nano now:
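I simply ran the binary with no arguments, which loads its default detection model:

$ cd ~/jetson-inference/build/aarch64/bin
$ ./detectnet-camera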

Here you can see that we are loading a model named ped-100 used for pedestrian detection (I’m actually not sure what the specific architecture is as it’s not documented on NVIDIA’s website — if you know what architecture is being used, please leave a comment on this post).

Below you can see an example of myself being detected using the Jetson Nano object detection demo:

According to the output of the program, we’re obtaining ~5 FPS for object detection on 1280×720 frames when using the Jetson Nano. Not too bad!

How does the Jetson Nano compare to the Movidius NCS or Google Coral?

This tutorial is simply meant to be a getting started guide for your Jetson Nano — it is not meant to compare the Nano to the Coral or NCS.

I’m in the process of comparing each of the respective embedded systems and will be providing a full benchmark/comparison in a future blog post.

In the meantime, take a look at the following guides to help you configure your embedded devices and start running benchmarks of your own:

How do I deploy custom models to the Jetson Nano?

One of the benefits of the Jetson Nano is that once you compile and install a library with GPU support (compatible with the Nano, of course), your code will automatically use the Nano’s GPU for inference.

For example:

Earlier in this tutorial, we installed Keras + TensorFlow on the Nano. Any Python scripts that leverage Keras/TensorFlow will automatically use the GPU.

And similarly, any pre-trained Keras/TensorFlow models we use will also automatically use the Jetson Nano GPU for inference.
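If you want to sanity check that TensorFlow can see the Nano’s GPU, a quick test from the Python REPL inside your deep_learning environment is (tf.test.is_gpu_available is part of the TF 1.x API installed above):

$ workon deep_learning
$ python
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()   # should return True on the Nano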

Pretty awesome, right?

Provided the Jetson Nano supports a given deep learning library (Keras, TensorFlow, Caffe, Torch/PyTorch, etc.), we can easily deploy our models to the Jetson Nano.

The problem here is OpenCV.

OpenCV’s Deep Neural Network (dnn) module does not support NVIDIA GPUs, including the one on the Jetson Nano.

OpenCV is working to provide NVIDIA GPU support for their dnn module. Hopefully, it will be released by the end of the summer/autumn.

But until then we cannot leverage OpenCV’s easy-to-use cv2.dnn functions.

If using the cv2.dnn module is an absolute must for you right now, then I would suggest taking a look at Intel’s OpenVINO toolkit, the Movidius NCS, and their other OpenVINO-compatible products, all of which are optimized to work with OpenCV’s deep neural network module.

If you’re interested in learning more about the Movidius NCS and OpenVINO (including benchmark examples), be sure to refer to this tutorial.

Interested in using the NVIDIA Jetson Nano in your own projects?

I bet you’re just as excited about the NVIDIA Jetson Nano as I am. In contrast to pairing the Raspberry Pi with either the Movidius NCS or Google Coral, the Jetson Nano has it all built right in (minus WiFi), making it a powerful device for computer vision and deep learning at the edge.

In my opinion, embedded CV and DL is the next big wave in the AI community. It’s so big that it may even be a tsunami — will you be riding that wave?

To help you get your start in embedded Computer Vision and Deep Learning, I have decided to write a brand new book — Raspberry Pi for Computer Vision.

I’ve chosen to focus on the Raspberry Pi as it is the best entry-level device for getting started in the world of computer vision for IoT.

But I’m not stopping there. Inside the book, we’ll:

  • Augment the Raspberry Pi with the Google Coral and Movidius NCS coprocessors.
  • Apply the same skills we learn with the RPi to a device with more horsepower: NVIDIA’s Jetson Nano.

Additionally, you’ll learn how to:

  • Build practical, real-world computer vision applications on the Pi.
  • Create computer vision and Internet of Things (IoT) projects and applications with the RPi.
  • Optimize your OpenCV code and algorithms on the resource-constrained Pi.
  • Perform Deep Learning on the Raspberry Pi (including utilizing the Movidius NCS and OpenVINO toolkit).
  • Configure your Google Coral, perform image classification and object detection, and even train + deploy your own custom models to the Coral Edge TPU!
  • Utilize the NVIDIA Jetson Nano to run multiple deep neural networks on a single board, including image classification, object detection, segmentation, and more!

I’m running a Kickstarter campaign to fund the creation of the new book, and to celebrate, I’m offering 25% OFF my existing books and courses if you pre-order a copy of RPi for CV.

In fact, the Raspberry Pi for Computer Vision book is practically free if you pre-order it with Deep Learning for Computer Vision with Python or the PyImageSearch Gurus course.

The clock is ticking and these discounts won’t last — the Kickstarter pre-sale shuts down this Friday (May 10th) at 10AM EDT, after which I’m taking the deals down.

Reserve your pre-sale book now and while you are there, grab another course or book at a discounted rate.

Summary

In this tutorial, you learned how to get started with the NVIDIA Jetson Nano.

Specifically, you learned how to install the required system packages, configure your development environment, and install Keras and TensorFlow on the Jetson Nano.

We wrapped up learning how to change the default camera and perform image classification and object detection on the Jetson Nano using the pre-supplied scripts.

I’ll be providing a full comparison and benchmarks of the NVIDIA Jetson Nano, Google Coral, and Movidius NCS in a future tutorial.

To be notified when future tutorials are published here on PyImageSearch (including the Jetson Nano vs. Google Coral vs. Movidius NCS benchmark), just enter your email address in the form below!


77 Responses to Getting started with the NVIDIA Jetson Nano

  1. Yurii Chernyshov May 6, 2019 at 10:20 am #

    “I decided to cover installing OpenCV on a Jetson Nano in a future tutorial”.

    Looking forward 🙂 !

    P.S.
    It seems like I’ve been looking forward to posts on this site for the last 5 years already.
    Thanks for keeping me motivated for such a long time.

    • Adrian Rosebrock May 6, 2019 at 2:19 pm #

      Thanks Yurii 🙂

    • Satyajith May 10, 2019 at 7:49 am #

      A Tegra-optimised version of OpenCV is a part of JetPack, right?

  2. al krinker May 6, 2019 at 10:37 am #

    what timing… i just finished listening to the podcast interview on Super Data Science with you where you made your predictions about the next best thing and you mentioned the NVIDIA Jetson Nano…. it def blows the raspberry pi out of the water!

    guess i will be shelling out $99 soon to get on the wagon

    • Adrian Rosebrock May 6, 2019 at 2:20 pm #

      If you are interested in embedding computer vision/deep learning I think it’s worth the investment. I also think NVIDIA did a pretty good job nailing the price point as well.

  3. wally May 6, 2019 at 10:38 am #

    Wow another timely post!

    My Jetson Nano is still on backorder, but this will get me jumpstarted when it finally arrives.

    • Adrian Rosebrock May 6, 2019 at 2:20 pm #

      I hope you enjoy hacking with it when it arrives!

  4. David Bonn May 6, 2019 at 10:49 am #

    Great post, Adrian.

    I too am really excited about embedded computer vision and deep learning. There will likely be explosive growth and some amazing opportunities in the very near future.

    Now I’m trying to decide whether to buy a Jetson or a Google Coral! Or both…

    One interesting point you hit upon — current camera libraries aren’t very flexible if you want to use different cameras (or even the full capabilities of a given USB camera) and will reduce you to tears if you (for example) have more than one USB camera connected to your system at a time. I’d love to see a better solution.

    • Adrian Rosebrock May 6, 2019 at 2:19 pm #

      OpenCV does a fairly reasonable job for at least accessing the camera. The problem becomes programmatically adjusting any settings on the camera sensor. OpenCV provides “.get” and “.set” methods for various parameters but not all of them will be supported by a given camera. It can be a royal pain to work with.

  5. AR May 6, 2019 at 10:55 am #

    Hi Adrian

    Thanks for this blog, it’s really helpful. I have had the kit for about a week now and was struggling to get started with it. I have a few questions.

    1. How can we write our own code to run the detection process?
    2. I see that all the demos are in C++, so is there no support for Python?
    3. Are you planning to write another blog post on writing Python code for detection?

    Thanks

    • Adrian Rosebrock May 6, 2019 at 2:18 pm #

      Hey AR, I’m actually covering all of those topics in Raspberry Pi for Computer Vision. If you’d like to learn how to use your Nano from within Python projects (image classification, object detection, etc.), I would definitely suggest pre-ordering a copy.

  6. issaiass May 6, 2019 at 11:32 am #

    You’d probably like to add these things to complete the “on the edge” module:
    Intel Dual Band Wireless-Ac 8265 BT/WF module
    Noctua NF-A4x20 5V PWM, Premium Quiet Fan, 4-Pin, 5V Version (40x20mm, Brown)
    5V 5A AC to DC Power Supply Adapter Converter Charger 5.5×2.1mm
    2 x 6dBi RP-SMA Dual Band 2.4GHz 5GHz + 2 x 35cm M.2(NGFF)Cable Antenna Mod Kit
    128GB Ultra microSDXC UHS-I Memory Card with Adapter – C10 (at least 64GB)
    4 x M3-0.6x16mm or 4x 13mm

  7. Stewart Christie May 6, 2019 at 11:49 am #

    Thanks for documenting this, I have my devices on order from SEEED, hopefully arriving soon. Do you plan on a similar getting started guide for Coral and the TPU?

    Regarding the lack of Wi-Fi , the issue is certification, and every country has different regulations. Therefore the systems that do ship with Wi-Fi have soldered down parts, and on board antennae, similar to the newer pi and pi-zeros’. in the US there’s also FCC restrictions on number of “prototype” and “pre-production” devices a company can ship, usually in the hundreds, with labelling issues, and $$$ fines for non-compliance So that further complicates issues, and gaining w-fi certification takes time.

    As an independent user I can add a wifi card, and any old antennae to a system, and even if my unshielded rf-emitting device now causes interference, its not as big an issue.

    I fully expect any real edge device, built using this system would be in a case, shielded, with Wi-FI, and a totally seperate certification.

  8. Rean May 6, 2019 at 11:59 am #

    Hi Adrian,

    Cool tutorial! Great Job!

    I just ported the ZMQ host to the Jetson Nano (without a Python virtualenv due to an OpenCV issue). I can’t wait for your future tutorials; maybe one day we can receive images from a Pi over the internet and run inference using the CUDA resources on the Nano.

    Thanks a lot!!

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Thanks Rean, enjoy hacking with your Nano!

  9. Joseph May 6, 2019 at 12:19 pm #

    Awesome as always Adrian!

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Thanks Joseph!

  10. Karthik May 6, 2019 at 1:11 pm #

    Pls, I’m waiting for the Jetson Nano to perform decent face recognition and use its full potential, as so far dlib is not working, OpenCV dnn is performing badly, and their inbuilt face inference is a joke… please help, and great work Adrian

    • Adrian Rosebrock May 6, 2019 at 2:15 pm #

      Hey Karthik — I don’t have any tutorials right now for face recognition on the Nano. I’ll consider it though, thanks for the suggestion!

  11. Jerome May 6, 2019 at 1:29 pm #

    thanks a lot 🙂
    Nano is ever on my desk 🙂

    • Adrian Rosebrock May 6, 2019 at 2:14 pm #

      You’re welcome, Jerome! Enjoy the guide and let me know if you have any issues with it 🙂

  12. Dustin Franklin May 6, 2019 at 3:58 pm #

    “I’m actually not sure what the specific architecture is as it’s not documented on NVIDIA’s website — if you know what architecture is being used, please leave a comment on this post”

    It is using the DetectNet model, which is trained using DIGITS/Caffe in the full tutorial which covers training + inferencing: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-training.md

    You can read more about the DetectNet architecture here: https://devblogs.nvidia.com/detectnet-deep-neural-network-object-detection-digits/

    • Adrian Rosebrock May 7, 2019 at 9:02 am #

      Awesome, thank you for clarifying the model architecture Dustin! 🙂

  13. Andrew May 6, 2019 at 4:06 pm #

    I love my Nano and NVIDIA has code and plans for two ground robots you can make: JetBot and Kaya. I built and installed OpenCV 4.1.0, by using the following cmake command:

    cmake -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D INSTALL_PYTHON_EXAMPLES=ON \
        -D INSTALL_C_EXAMPLES=OFF \
        -D OPENCV_ENABLE_NONFREE=ON \
        -D OPENCV_EXTRA_MODULES_PATH=~/<path to your opencv_contrib modules folder> \
        -D PYTHON_EXECUTABLE=~/.virtualenvs/cv/bin/python \
        -D BUILD_EXAMPLES=ON ..

    make -j4
    sudo make install
    sudo ldconfig

    • Adrian Rosebrock May 7, 2019 at 9:02 am #

      Thanks for sharing the install configuration, Andrew!

    • Rich May 10, 2019 at 7:12 pm #

      I used a different one to get everything running with CUDA.

      cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 \
      -D WITH_GSTREAMER=ON \
      -D WITH_LIBV4L=ON \
      -D OPENCV_ENABLE_NONFREE=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D BUILD_EXAMPLES=OFF ..

      It required a 64GB card minimum for me, and I had to use a swap usb drive since the memory requirements redlined. Jetsonhacks has a good video on the swap drive. Installing OpenCV required 17.8GB of space beyond the opencv zip download, it was incredible.

      Final symbolic link was different too:

      ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so cv2.so

      • Adrian Rosebrock May 15, 2019 at 3:09 pm #

        Wow, thanks for sharing this Rich!

  14. Prem May 6, 2019 at 4:48 pm #

    Great! Yes, looking forward to more Nano articles, particularly using OpenCV DNN with CUDA, as I see a lot of OpenCV in your articles, but using it on the Jetson platform will be a challenge due to the lack of support. Your code already runs on the frugally powered Pi, so I can’t imagine what you can achieve with the Nano.

    • Adrian Rosebrock May 7, 2019 at 9:04 am #

      One of the problems you’ll run into is that the “dnn” (i.e., Deep Neural Network) module of OpenCV does not yet support CUDA. Ideally it will soon but it’s one of the current limitations of OpenCV. You’ll need to use whatever library the model was trained on to access the GPU and then just use OpenCV to facilitate reading frames/images, image processing, etc.

      That’s one aspect that the Movidius NCS really has going for it — basically one line of code added to the script and you’re able to perform faster inference. (tutorial here if you’re interested)

  15. Rodolfo May 6, 2019 at 5:46 pm #

    Hello Adrian,
    I wonder how difficult it is to set up OpenVINO with the Jetson Nano. I’d like to do some work with the emotion recognition sample available for OpenVINO. Any suggestions about emotion recognition would be greatly appreciated.

    Thank you for your tutorial!

  16. Mehmet Ali Anil May 6, 2019 at 8:28 pm #

    The renders of the module show that there is a spot for WiFi on the module, but it is not populated. Therefore we will see a connected version of it, just not on the dev board for now.

    • Adrian Rosebrock May 7, 2019 at 9:05 am #

      Got it, that makes sense.

  17. Lucas May 6, 2019 at 11:18 pm #

    Thanks a lot. Adrian. There will be a new “toy” for me. Ha Ha!

    • Adrian Rosebrock May 7, 2019 at 9:06 am #

      Thanks Lucas.

  18. DeepNet May 7, 2019 at 5:54 am #

    Hi Adrian,
    Thanks a lot for the new post, that’s excellent, please keep them coming.
    I want to deploy ssd_mobilenet on the Jetson and I want to use TensorRT for this. What do I do? How do I convert a TensorFlow Object Detection API .pb file to TensorRT format? In this code you use the TensorRT library, right? Did you write TensorRT code for this, or does cv2.dnn use TensorRT by default?

  19. Mehdi May 7, 2019 at 7:52 am #

    I am really impressed that you keep your blog updated so that new technologies are always covered. Thank you Adrian, and keep going, please :).

    • Adrian Rosebrock May 7, 2019 at 9:07 am #

      Thanks Mehdi, I really appreciate that 🙂

  20. Paawan Sharma May 7, 2019 at 9:12 am #

    Hi Adrian,
    A great article. I tested it on my Jetson Nano and it’s fantastic. Thank you for making things easy. Eagerly waiting for the Jetson Nano content in the RPi for CV ebook.
    Kudos.
    Paawan.

    • Adrian Rosebrock May 8, 2019 at 12:55 pm #

      Thanks Paawan — and congrats on getting the example to run on your Nano 🙂

  21. JoeV May 7, 2019 at 12:53 pm #

    I love you man
    Your tutorials are always
    in the right place
    and
    always at the right time

    • Adrian Rosebrock May 8, 2019 at 12:53 pm #

      Thanks, I’m glad you’re enjoying the guides 🙂

  22. zeev May 7, 2019 at 1:47 pm #

    WOW!

    because of you i bought the jetson nano today, and started to play with it.
    first of all thanks a lot for your work, and for your help.

    one question, how can i connect an rtsp camera rather than a home camera?
    i would like to play with a home security camera rather than a room camera.

    and thanks a lot for your efforts

    • Adrian Rosebrock May 8, 2019 at 12:52 pm #

      Thanks Zeev, I’m glad you liked the guide. I hope you enjoy hacking with the Nano!

      As far as RTSP goes, I don’t really like working with that protocol. See this tutorial for my suggested way to stream frames and consume them.

  23. S@g@r May 8, 2019 at 6:55 am #

    Hi Adrian, very good article. It’s really helpful.
    I have some questions.
    I am using a Logitech C170 camera. Is it compatible with the Jetson Nano?
    Error message: “fail to capture”.
    Can you please help me with that?

    • Adrian Rosebrock May 8, 2019 at 12:48 pm #

      I haven’t tried the Logitech C170 so I’m not sure. I used a Logitech C920 and it worked out of the box.

  24. Rich May 8, 2019 at 7:51 pm #

    Is there anyone out there who actually understands how to use the TensorRT Python API so that this awesome tut would actually be more helpful in the real world? I blame NVIDIA for the poor docs, but there hardly seems to be any support for non-C++ coders.

    You couldn’t actually use this to generate predictions that are usable in your python code, like, take this bounding box and send it somewhere. You could do this if it was all python code.

    Any ideas Adrian or others how to do this?

    • Adrian Rosebrock May 15, 2019 at 3:25 pm #

      Hey Rich — all of those questions are being addressed inside Raspberry Pi for Computer Vision. I’ll likely do TensorRT + Python blog posts as well.

  25. Troy Zuroske May 9, 2019 at 10:43 am #

    Does the Jetson Nano not come with OpenCV preinstalled? I have a Jetson Xavier and using the Jetpack sdk installer, it was one of the additional libraries pushed to the device upon flashing it. If it does come preinstalled on the Nano, why can’t you use that version? I was able to get this real time object detection library to work on my Jetson with the preinstalled openCV: https://github.com/ayooshkathuria/pytorch-yolo-v3. There was some custom stuff I had to do because I am using some Jetson specific cameras (https://www.e-consystems.com/nvidia-cameras/jetson-agx-xavier-cameras/four-synchronized-4k-cameras.asp) but it is performing well with GPU support. Maybe if does not come preinstalled you could leverage some help from this post: https://www.jetsonhacks.com/2018/11/08/build-opencv-3-4-on-nvidia-jetson-agx-xavier-developer-kit/ which you may already be aware of.

  26. Marco Miglionico May 10, 2019 at 1:58 pm #

    Hi Adrian. As always, thanks for the amazing tutorial. I am building a Computer Vision application based on Face Detection, so I really need the cv2.dnn module. As you mentioned, this won’t be available on NVIDIA GPUs until the end of the summer. I also have an Intel NCS, but not a Raspberry Pi. So I was wondering if it is possible to use the Jetson Nano + NCS to run the cv2.dnn module?
    If this is not possible I will start using a Raspberry Pi + NCS.
    Thanks

    • Adrian Rosebrock May 15, 2019 at 3:10 pm #

      That may be possible but I have not tried.

  27. Alex Grutter May 11, 2019 at 8:00 am #

    Hey Adrian!

    I found this script for building OpenCV on Jetson Nano. The NVIDIA peeps linked to it. Have you had a chance to try it already?

    https://devtalk.nvidia.com/default/topic/1049972/opencv-cuda-python-with-jetson-nano/

    https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.0.0_Nano.sh

    Thanks!

    Alex

    • Adrian Rosebrock May 15, 2019 at 3:06 pm #

      I have not tried yet. It’s on my todo list though 🙂

  28. Edward Li May 11, 2019 at 4:16 pm #

    Hi Adrian! Thanks for the post – it was very helpful when setting up my Jetson Nano

    I had a few questions:
    – My Jetson froze a few times while running install scripts for scipy and keras. I think it’s a RAM issue. Do you know if it’s possible/what the best way is to enable a swap file?
    – When deploying custom models, do you know if it is faster to use TensorRT to accelerate the model rather than simply running it on Keras/Tensorflow?
    – Finally, is there a reason why you use get-pip.py over apt-get install python3-pip?

    Thanks so much for the post!

    • Adrian Rosebrock May 15, 2019 at 3:05 pm #

      See the other comments regarding SciPy. Try creating a swapfile.

      TensorRT should theoretically be faster but I haven’t confirmed that yet.

  29. jay abrams May 11, 2019 at 8:14 pm #

    hi Adrian,
    is there anything one can do if the jetson nano keeps hanging on ‘running setup.py install for scipy’??
    it hangs on that for some time and then the whole thing freezes up. Am i supposed to do something with a swapfile or something?

    thank you!

    • Adrian Rosebrock May 15, 2019 at 3:03 pm #

      See the other comments — try creating a swapfile.

  30. Thanhlong May 12, 2019 at 1:07 am #

    Hi Adrian, after reading a lot of articles and your posts about the Jetson Nano, I’m very confused about whether I should buy a Raspberry Pi to develop on or save money for the Jetson Nano. I’m afraid the community support for the Jetson isn’t as large as the Raspberry Pi’s.
    Thanks for all your articles, I learn a lot from them :))

    • Adrian Rosebrock May 15, 2019 at 3:03 pm #

      It really depends on what you are trying to build and the goals of the project. Could you describe what you would like the RPi or Nano to be doing? What types of models do you want to run?

  31. Bog Flap May 12, 2019 at 7:35 am #

    Hi. Great article. However, a point you may have overlooked: in order to get the scipy install to work I had to create a swapfile first. Part of the install uses all the available RAM (about 110MB of extra RAM is required). Otherwise my Nano crashed, and not very gracefully.

    • Adrian Rosebrock May 15, 2019 at 3:00 pm #

      Thanks for sharing. I didn’t need a swapfile though, strange!

  32. Jorge Paredes May 13, 2019 at 6:27 pm #

    Hello

    Just in case someone has the same problem that I did.

    I followed the tutorial, and when I tried to install scipy:

    $ pip install scipy

    my Nano hung after 10 minutes, more or less.

    I saw that my Nano had no swapfile configured, so I created one, tried to install scipy again, and then it worked. SciPy is installed.

    Do you think I should remove the swapfile after installing everything? I know it runs on the SD card…

    • Adrian Rosebrock May 15, 2019 at 2:48 pm #

      Thanks for sharing, Jorge! How large was your swapfile?

  33. Mel May 14, 2019 at 7:44 am #

    Hi Adrian,

    is the Jetson nano meant to be only used for computer vision tasks or can I use it in place of a GPU to train models on? Thanks for the tutorials, always learning something new

    • Adrian Rosebrock May 15, 2019 at 2:39 pm #

      No, you don’t use the Jetson Nano for training, only for inference.

  34. Marco Miglionico May 14, 2019 at 11:19 am #

    Hi Adrian, I am having a problem during the scipy installation. It get stuck forever, do you know what can be the problem?
    Thanks

    • Adrian Rosebrock May 15, 2019 at 2:36 pm #

      It’s not stuck. SciPy will take awhile to compile and install.

  35. Bob O May 14, 2019 at 1:14 pm #

    I am working my way through the installation (for the second day) because the system keeps locking up. It just freezes, even the System Monitor. I am running it in low-power mode, with one browser page, one terminal, and one System Monitor window open. It eventually recovers, but this happens so often the system is almost useless. Nothing seems to interrupt it, but if the screen saver comes on, the picture will swap every couple of minutes (this happens when I am away), and getting the prompt to unlock it is hit and miss. When I pull the plug to hard reboot, the package has finished installing.

    One different thing is that I am using a larger memory card (200 GB) but nothing warns me about that being a problem.

    So far, system updated, scipy and TF installed, and watching nothing happen for the last 20 minutes while installing scipy.

    • Bob O May 14, 2019 at 5:01 pm #

      Update: it appears that running at 4K UHD was the problem. When I downgraded the resolution to 1920, it became completely stable and let me finish the install.

      • Adrian Rosebrock May 15, 2019 at 2:34 pm #

        Thanks for sharing, Bob!

  36. Rob Jones May 14, 2019 at 5:05 pm #

    In your examples you run the binaries from ~/jetson-inference/build/aarch64/bin

    But sudo make install puts copies of them into /usr/local/bin and so you can just run them from anywhere

    The problem is you need to tell them where to get the libjetson-inference.so library

    Add this to your ~/.bashrc file

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

    • Adrian Rosebrock May 15, 2019 at 2:34 pm #

      Nice, thanks for sharing Rob!
