How to install TensorFlow 2.0 on Ubuntu

In this tutorial, you will learn to install TensorFlow 2.0 on your Ubuntu system either with or without a GPU.

There are a number of important updates in TensorFlow 2.0, including eager execution, automatic differentiation, and better multi-GPU/distributed training support, but the most important update is that Keras is now the official high-level deep learning API for TensorFlow.

In short: you should be using the Keras implementation inside TensorFlow 2.0 (i.e., tf.keras) when training your own deep neural networks. The official Keras package will still receive bug fixes, but all new features and development will land in tf.keras.

Both Francois Chollet (the creator of Keras) and the TensorFlow developers and maintainers recommend that you use tf.keras moving forward.

Furthermore, if you own a copy of my book, Deep Learning for Computer Vision with Python, you should use this guide to install TensorFlow 2.0 on your Ubuntu system.

Inside this tutorial, you’ll learn how to install TensorFlow 2.0 on Ubuntu.

Alternatively, click here for my macOS + TensorFlow 2.0 installation instructions.

To learn how to install TensorFlow 2.0 on Ubuntu, just keep reading.

How to install TensorFlow 2.0 on Ubuntu

In the first part of this tutorial we’ll discuss the pre-configured deep learning development environments that are a part of my book, Deep Learning for Computer Vision with Python.

From there, you’ll learn why you should use TensorFlow 2.0, including the Keras implementation inside of TensorFlow 2.0.

We’ll then configure and install TensorFlow 2.0 on our Ubuntu system.

Let’s begin.

Pre-configured deep learning environments

Figure 1: My deep learning Virtual Machine with TensorFlow, Keras, OpenCV, and all other Deep Learning and Computer Vision libraries you need, pre-configured and pre-installed.

When it comes to working with deep learning and Python I highly recommend that you use a Unix-based environment.

Deep learning tools can be more easily configured and installed on Linux, allowing you to develop and run neural networks quickly.

Of course, configuring your own deep learning + Python + Linux development environment can be quite the tedious task, especially if you are new to Linux, a beginner at working with the command line/terminal, or a novice at compiling and installing packages by hand.

In order to help you jump start your deep learning + Python education, I have created two pre-configured environments:

  1. Pre-configured VirtualBox Ubuntu Virtual Machine (VM) with all necessary deep learning libraries you need to be successful (including Keras, TensorFlow, scikit-learn, scikit-image, OpenCV, and others) pre-configured and pre-installed.
  2. Pre-configured Deep Learning Amazon Machine Image (AMI) which runs on Amazon Web Services' (AWS) Elastic Compute Cloud (EC2) infrastructure. This environment is free for anyone on the internet to use, regardless of whether you are a DL4CV customer of mine (cloud/GPU fees apply). Deep learning libraries are pre-installed, including those listed in #1 as well as the TFOD API, Mask R-CNN, RetinaNet, and mxnet.

I strongly urge you to consider using my pre-configured environments if you are working through my books. Using a pre-configured environment is not cheating; it simply allows you to focus on learning rather than on the job of a system administrator.

If you are more familiar with Microsoft Azure’s infrastructure, be sure to check out their Data Science Virtual Machine (DSVM), including my review of the environment. The Azure team maintains a great environment for you and I cannot speak highly enough about the support they provided while I ensured that all of my deep learning chapters ran successfully on their system.

That said, pre-configured environments are not for everyone.

In the remainder of this tutorial, we will serve as the “deep learning systems administrators” installing TensorFlow 2.0 on our bare metal Ubuntu machine.

Why TensorFlow 2.0 and where is Keras?

Figure 2: Keras and TensorFlow have a complicated history together. When installing TensorFlow 2.0 on Ubuntu, keep in mind that Keras is the official high-level API built into TensorFlow.

It seems like every day there is a war on Twitter over the best deep learning framework. The problem is that these discussions are a waste of everyone's time.

What we should be talking about is your new model architecture and how you’ve applied it to solve a problem.

That said, I use Keras as my daily deep learning library and as the primary teaching tool on this blog.

If you can pick up Keras, you’ll be perfectly comfortable in TensorFlow, PyTorch, mxnet, or any other similar framework. They are all just different ratcheting wrenches in your toolbox that can accomplish the same goal.

Francois Chollet (chief maintainer/developer of Keras), committed his first version of Keras to his GitHub on March 27th, 2015. Since then, the software has undergone many changes and iterations.

Back in 2018, the tf.keras submodule was introduced into TensorFlow v1.10.0.

Now with TensorFlow 2.0, Keras is the official high-level API of TensorFlow.

The keras package will only receive bug fixes from here forward. If you want to use the latest Keras features, you need to use tf.keras inside TensorFlow 2.0.

To learn more about the marriage of Keras and TensorFlow, be sure to read my previous article.

TensorFlow 2.0 has a bunch of new features, including:

  • The integration of Keras into TensorFlow via tf.keras
  • Eager execution by default (replacing the old Session-based API)
  • Automatic differentiation
  • Model and layer subclassing
  • Better multi-GPU/distributed training support
  • TensorFlow Lite for mobile/embedded devices
  • TensorFlow Extended for deploying production models

Long story short — if you would like to use Keras for deep learning, then you need to install TensorFlow 2.0 going forward.

Configuring your TensorFlow 2.0 + Ubuntu deep learning system

The following instructions for installing TensorFlow 2.0 on your machine assume:

  • You have administrative access to your system
  • You can open a terminal, or you have an active SSH connection to the target machine
  • You know how to operate the command line

Let’s get started!

Step #1: Install Ubuntu + TensorFlow 2.0 deep learning dependencies

This step is for both GPU users and non-GPU users.

Our Ubuntu install instructions assume you are working with Ubuntu 18.04 LTS. These instructions are tested on 18.04.3.

We’ll begin by opening a terminal and updating our system:
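On Ubuntu 18.04 the standard apt commands for this look like the following:

```shell
$ sudo apt-get update
$ sudo apt-get upgrade
```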

From there we’ll install compiler tools:
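A minimal set of build tools (package names as found in Ubuntu 18.04's repositories) would be:

```shell
$ sudo apt-get install build-essential cmake unzip pkg-config
```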

And then we’ll install screen, a tool used for multiple terminals in the same window — I often use it for remote SSH connections:
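The screen package installs directly from Ubuntu's repositories:

```shell
$ sudo apt-get install screen
```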

From there we’ll install X windows libraries and OpenGL libraries:
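A typical set of X windows and OpenGL development packages (names assume Ubuntu 18.04) would be:

```shell
$ sudo apt-get install libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev
```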

Along with image and video I/O libraries:
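The usual image and video I/O development packages on Ubuntu 18.04 are along these lines:

```shell
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev
```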

Next, we’ll install optimization libraries:
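Common choices here are OpenBLAS, ATLAS, LAPACK, and a Fortran compiler (package names assume Ubuntu 18.04):

```shell
$ sudo apt-get install libopenblas-dev libatlas-base-dev liblapack-dev gfortran
```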

And HDF5 for working with large datasets:
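On Ubuntu the serial HDF5 development package covers this:

```shell
$ sudo apt-get install libhdf5-serial-dev
```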

We also need our Python 3 development libraries including TK and GTK GUI support:
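The package names below assume Ubuntu 18.04's repositories; if apt cannot find one of them, search for the closest match with apt-cache search:

```shell
$ sudo apt-get install python3-dev python3-tk python-imaging-tk
$ sudo apt-get install libgtk-3-dev
```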

If you have a GPU, continue to Step #2.

Otherwise, if you do not have a GPU, skip to Step #3.

Step #2 (GPU-only): Install NVIDIA drivers, CUDA, and cuDNN

Figure 3: How to install TensorFlow 2.0 for a GPU machine.

This step is only for GPU users.

In this step, we will install NVIDIA GPU drivers, CUDA, and cuDNN for TensorFlow 2.0 on Ubuntu.

We need to add an apt-get repository so that we can install NVIDIA GPU drivers. This can be accomplished in your terminal:
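The graphics-drivers PPA is the usual source of current NVIDIA driver packages on Ubuntu:

```shell
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
```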

Go ahead and install your NVIDIA graphics driver:
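This tutorial's recap later lists the v418 drivers, so that is the package shown here; check the PPA for the latest version recommended for your card:

```shell
$ sudo apt-get install nvidia-driver-418
```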

And then issue the reboot command and wait for your system to restart:
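```shell
$ sudo reboot now
```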

Once you are back at your terminal/SSH connection, run the nvidia-smi command to query your GPU and check its status:
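```shell
$ nvidia-smi
```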

The nvidia-smi command output is useful to see the health and usage of your GPU.

Let’s go ahead and download CUDA 10.0. I’m recommending CUDA 10.0 from this point forward as it is now very reliable and mature.

The following commands will both download and install CUDA 10.0 right from your terminal:

Note: As you follow these commands take note of the line-wrapping due to long URLs/filenames.
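A sketch of the download and install, assuming NVIDIA's runfile URL for the CUDA 10.0.130 release is still live (verify the current link on NVIDIA's CUDA downloads page before running):

```shell
$ cd ~
$ mkdir installers
$ cd installers/
$ wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux
$ mv cuda_10.0.130_410.48_linux cuda_10.0.130_410.48_linux.run
$ chmod +x cuda_10.0.130_410.48_linux.run
$ sudo ./cuda_10.0.130_410.48_linux.run --override
```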

You will be prompted to accept the End User License Agreement (EULA). During the process, you may encounter the following error:

You may safely ignore this error message.

Now let’s update our bash profile using nano (you can use vim or emacs if you are more comfortable with them):
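```shell
$ nano ~/.bashrc
```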

Insert the following lines at the bottom of the profile:
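Assuming CUDA 10.0 installed to its default location of /usr/local/cuda-10.0:

```shell
# NVIDIA CUDA Toolkit
export PATH=/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
```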

Save the file (ctrl + x, y, enter) and exit to your terminal.

Figure 4: How to install TensorFlow 2.0 on Ubuntu with an NVIDIA CUDA GPU.

Then, source the profile:
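```shell
$ source ~/.bashrc
```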

From here we’ll query CUDA to ensure that it is successfully installed:
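The CUDA compiler reports its version, which confirms the toolkit is on your PATH:

```shell
$ nvcc -V
```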

If your output shows that CUDA is installed, then you're ready to install cuDNN, NVIDIA's CUDA-accelerated deep neural network library.

Go ahead and download cuDNN v7.6.4 for CUDA 10.0 from the following link:

Make sure you select:

  1. Download cuDNN v7.6.4 (September 27, 2019), for CUDA 10.0
  2. cuDNN Library for Linux
  3. And then allow the .zip file to download (you may need to create an account on NVIDIA’s website to download the cuDNN files)

You may then need to SCP (secure copy) it from your home machine to your remote deep learning box:
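Replace the username and IP address below with your own values; the archive name shown assumes the v7.6.4 build for CUDA 10.0 (check the exact filename of the file you downloaded):

```shell
$ scp ~/Downloads/cudnn-10.0-linux-x64-v7.6.4.38.tgz username@your_ip_address:~
```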

Back on your GPU development system, let’s install cuDNN:
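A typical cuDNN install simply extracts the archive and copies the libraries and headers into the CUDA toolkit directories (again, match the archive name to the file you actually downloaded):

```shell
$ cd ~
$ tar -zxf cudnn-10.0-linux-x64-v7.6.4.38.tgz
$ cd cuda
$ sudo cp -P lib64/* /usr/local/cuda/lib64/
$ sudo cp -P include/* /usr/local/cuda/include/
$ cd ~
```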

At this point, we have installed:

  • NVIDIA GPU v418 drivers
  • CUDA 10.0
  • cuDNN 7.6.4 for CUDA 10.0

The hard part is certainly behind us now — GPU installations can be challenging. Great job setting up your GPU!

Continue on to Step #3.

Step #3: Install pip and virtual environments

This step is for both GPU users and non-GPU users.

In this step, we will set up pip and Python virtual environments.

We will use the de facto Python package manager, pip.

Note: While you are welcome to opt for Anaconda (or alternatives), I’ve still found pip to be more ubiquitous in the community. Feel free to use Anaconda if you so wish, just understand that I cannot provide support for it.

Let’s download and install pip:
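The standard bootstrap is to fetch get-pip.py from pypa.io and run it with your system Python 3:

```shell
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
```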

To complement pip, I recommend using both virtualenv and virtualenvwrapper to manage virtual environments.

Virtual environments are a best practice when it comes to Python development. They allow you to test different versions of Python libraries in sequestered development and production environments. I use them daily and you should too for all Python development.

In other words, do not install TensorFlow 2.0 and associated Python packages directly to your system environment. It will only cause problems later.

Let’s install my preferred virtual environment tools now:
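Both tools install from PyPI:

```shell
$ pip3 install virtualenv virtualenvwrapper
```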

Note: Your system may require that you use the sudo  command to install the above virtual environment tools. This will only be required once — from here forward, do not use sudo .

From here, we need to update our bash profile to accommodate virtualenvwrapper . Open up the ~/.bashrc  file with Nano or another text editor:
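```shell
$ nano ~/.bashrc
```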

And insert the following lines at the end of the file:
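The virtualenvwrapper.sh path below assumes pip placed the script in /usr/local/bin; if `which virtualenvwrapper.sh` reports a different location on your system, use that path instead:

```shell
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
```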

Save the file (ctrl + x, y, enter) and exit to your terminal.

Go ahead and source/load the changes into your profile:
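```shell
$ source ~/.bashrc
```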

Output will be displayed in your terminal indicating that virtualenvwrapper is installed. If you encounter errors here, you need to address them before moving on. Usually, errors at this point are due to typos in your ~/.bashrc  file.

Now we’re ready to create your Python 3 deep learning virtual environment named dl4cv:
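The mkvirtualenv command comes from virtualenvwrapper; the -p flag selects the Python interpreter for the new environment:

```shell
$ mkvirtualenv dl4cv -p python3
```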

You can create similar virtual environments with different names (and packages therein) as needed. On my personal system, I have many virtual environments. For developing and testing software for my book, Deep Learning for Computer Vision with Python, I like to name (or precede the name of) the environment with dl4cv . That said, feel free to use the nomenclature that makes the most sense to you.

Great job setting up virtual environments on your system!

Step #4: Install TensorFlow 2.0 into your dl4cv virtual environment

This step is for both GPU users and non-GPU users.

In this step, we’ll install TensorFlow 2.0 with pip.

Ensure that you are still in your dl4cv  virtual environment (typically the virtual environment name precedes your bash prompt). If not, no worries. Simply activate the environment with the following command:
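```shell
$ workon dl4cv
```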

A prerequisite of TensorFlow 2.0 is NumPy for numerical processing. Go ahead and install NumPy and TensorFlow 2.0 using pip:
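Pinning the TensorFlow version to 2.0.0 keeps the install matched to this tutorial (GPU users: see the note that follows about the package name):

```shell
$ pip install numpy
$ pip install tensorflow==2.0.0
```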

To install TensorFlow 2.0 for a GPU be sure to replace tensorflow with tensorflow-gpu.

You should NOT have both installed. Use either tensorflow for a CPU install or tensorflow-gpu for a GPU install, not both!

Great job installing TensorFlow 2.0!

Step #5: Install TensorFlow 2.0 associated packages into your dl4cv virtual environment

Figure 5: A fully-fledged TensorFlow 2.0 + Ubuntu deep learning environment requires additional Python libraries as well.

This step is for both GPU users and non-GPU users.

In this step, we will install additional packages needed for common deep learning development with TensorFlow 2.0.

Ensure that you are still in your dl4cv  virtual environment (typically the virtual environment name precedes your bash prompt). If not, no worries. Simply activate the environment with the following command:
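```shell
$ workon dl4cv
```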

We begin by installing standard image processing libraries including OpenCV:
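A reasonable set of pip packages for this (opencv-contrib-python is the pip wheel for OpenCV with the contrib modules; imutils is my own convenience library):

```shell
$ pip install opencv-contrib-python
$ pip install scikit-image
$ pip install pillow
$ pip install imutils
```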

These image processing libraries will allow us to perform image I/O, various preprocessing techniques, as well as graphical display.

From there, let’s install machine learning libraries and support libraries, the most notable two being scikit-learn and matplotlib:
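Beyond the two notables, the extras below are support libraries I commonly reach for; trim the list to your needs:

```shell
$ pip install scikit-learn
$ pip install matplotlib
$ pip install progressbar2
$ pip install beautifulsoup4
$ pip install pandas
```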

Scikit-learn is an especially important library when it comes to machine learning. We will use a number of features from this library including classification reports, label encoders, and machine learning models.

Great job installing associated image processing and machine learning libraries.

Step #6: Test your TensorFlow 2.0 install

This step is for both GPU users and non-GPU users.

As a quick sanity test, we’ll test our TensorFlow 2.0 install.

Fire up a Python shell in your dl4cv environment and ensure that you can import the following packages:
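A sample session (the TensorFlow version string matches the pinned 2.0.0 install from the earlier step):

```shell
$ workon dl4cv
$ python
>>> import tensorflow as tf
>>> tf.__version__
'2.0.0'
>>> import tensorflow.keras
>>> import cv2
>>>
```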

If you configured your system with an NVIDIA GPU, be sure to check if TensorFlow 2.0’s installation is able to take advantage of your GPU:
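In TensorFlow 2.0 the tf.test module provides a GPU availability check:

```shell
$ python
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
True
```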

Great job testing your TensorFlow 2.0 installation on Ubuntu.

Accessing your TensorFlow 2.0 virtual environment

At this point, your TensorFlow 2.0 dl4cv  environment is ready to go. Whenever you would like to execute TensorFlow 2.0 code (such as from my deep learning book), be sure to use the workon  command:
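```shell
$ workon dl4cv
```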

Your bash prompt will be preceded with (dl4cv)  indicating that you are “inside” the TensorFlow 2.0 virtual environment.

If you need to get back to your system-level environment, you can deactivate the current virtual environment:
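```shell
$ deactivate
```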

Frequently Asked Questions (FAQ)

Q: These instructions seem really complicated. Do you have a pre-configured environment?

A: Yes, the instructions can be daunting. I recommend brushing up on your Linux command line skills prior to following these instructions. I do offer two pre-configured environments for my book:

  1. Pre-configured Deep Learning Virtual Machine: My VirtualBox VM is included with your purchase of my deep learning book. Just download the VirtualBox and import the VM into VirtualBox. From there, boot it up and you’ll be running example code in a matter of minutes.
  2. Pre-configured Amazon Machine Image (EC2 AMI): Free for everyone on the internet. You can use this environment with no strings attached even if you don’t own my deep learning book (AWS charges apply, of course). Again, compute resources on AWS are not free — you will need to pay for cloud/GPU fees but not the AMI itself. Arguably, working on a deep learning rig in the cloud is cheaper and less time-consuming than keeping a deep learning box on-site. Free hardware upgrades, no system admin headaches, no calls to hardware vendors about warranty policies, no power bills, pay only for what you use. This is the best option if you have a few one-off projects and don’t want to drain your bank account with hardware expenses.

Q: Why didn’t we install Keras?

A: Keras has been officially part of TensorFlow since v1.10.0. By installing TensorFlow 2.0, the Keras API is installed along with it.

Keras has been deeply embedded into TensorFlow and tf.keras  is the primary high-level API in TensorFlow 2.0. The legacy functions that come with TensorFlow play nicely with tf.keras  now.

In order to understand the difference between Keras and tf.keras  in a more detailed manner, check out my recent blog post.

You may now import Keras using the following statement in your Python programs:
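For example, from a Python shell inside your dl4cv environment (tf.keras ships inside the tensorflow package, so no separate keras install is needed):

```shell
$ python
>>> from tensorflow import keras
>>>
```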

Q: Which version of Ubuntu should I use?

A: Ubuntu 18.04.3 is a "Long Term Support" (LTS) release and is perfectly appropriate. Plenty of legacy systems still run Ubuntu 16.04, but if you are building a new system, I recommend Ubuntu 18.04.3 at this point. I currently advise against Ubuntu 19.04; when a new Ubuntu release first ships, apt package conflicts are common.

Q: I’m really stuck. Something is not working. Can you help me?

A: I really love helping readers and I would love to help you configure your deep learning development environment.

That said, I receive 100+ emails and blog post comments per day, and I simply don't have the time to get to them all.

Customers of mine receive support priority over non-customers due to the number of requests myself and my team receive. Please consider becoming a customer by browsing my library of books and courses.

My personal recommendation is that you grab a copy of Deep Learning for Computer Vision with Python. That book includes access to my pre-configured deep learning development environments with TensorFlow, Keras, OpenCV, etc. pre-installed, so you'll be up and running in a matter of minutes.

What’s next?

Figure 6: My deep learning book, Deep Learning for Computer Vision with Python, is trusted by employees and students of top institutions. It is regularly updated to keep pace with the fast-moving AI industry. The book is ready to go for TensorFlow 2.0.

The 3rd edition release of Deep Learning for Computer Vision with Python (DL4CV) includes TensorFlow 2.0 support!

DL4CV has taught thousands of PyImageSearch readers how to successfully apply Computer Vision and Deep Learning to their own projects.

Francois Chollet, Google AI Researcher and creator of the Keras deep learning library had this to say about the book:

This book is a great, in-depth dive into practical deep learning for computer vision. I found it to be an approachable and enjoyable read: explanations are clear and highly detailed. You’ll find many practical tips and recommendations that are rarely included in other books. I highly recommend it, both to practitioners and beginners.

My complete, self-study deep learning book is trusted by members of top machine learning schools, companies, and organizations, including Microsoft, Google, Stanford, MIT, CMU, and more!

And what’s more is that my readers and customers (just like you) have gone on to win Kaggle competitions, secure academic grants, and start careers in CV and DL using the knowledge they gained through study and practice.

Be sure to take a look — and while you’re at it, don’t forget to grab your (free) table of contents + sample chapters.


Summary

In this tutorial, you learned how to install TensorFlow 2.0 on Ubuntu (either with or without GPU support).

Now that your TensorFlow 2.0 + Ubuntu deep learning rig is configured, I would suggest picking up a copy of Deep Learning for Computer Vision with Python. You’ll be getting a great education and you’ll learn how to successfully apply Deep Learning to your own projects.

To be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!


42 Responses to How to install TensorFlow 2.0 on Ubuntu

  1. Oscar Rangel December 9, 2019 at 11:41 am #

    great tutorial! I had a small problem but I was able to fix it.

    got an unexpected keyword argument “serialized_options”

    Fix :
    pip install -U protobuf

    • Adrian Rosebrock December 9, 2019 at 2:09 pm #

      Hi Oscar — when did you receive that error?

  2. Wim Valcke December 9, 2019 at 12:12 pm #

    Hi Adrian,

    i tested the tensorflow 2.0 via pip install tensorflow-gpu. (Linux)
    This works but this version needs cudnn 7.6.x, i tested with cudnn7.6.5.
    Otherwise you can get problems with cnn layers when training with Keras (my cudnn version was 7.5.0)
    I made a reply as i saw that the blog mentions cudnn 7.4.2, but i doubt this will work.
    Many thanks for the DL python books upgrade to V3.0. It is a pleasure to see that you release free updates for your books.

    • Adrian Rosebrock December 9, 2019 at 2:08 pm #

      Thanks Wim. What version/flavor of Linux were you using when you tested?

  3. David Bonn December 9, 2019 at 12:57 pm #

    As always, a very helpful and complete post!

    One word of warning: if you have installed previous versions of tensorflow and keras and installed earlier versions of CUDA and CUDNN for them they might not work after you install the newer versions of CUDA and CUDNN. So I recommend exercising some caution and don’t assume that you won’t break anything by installing them.

    • Adrian Rosebrock December 9, 2019 at 2:07 pm #

      Great point, thanks for sharing David!

      • Ran Fang December 10, 2019 at 4:32 am #

        Would you mind including instructions on how to uninstall prev versions too? I was following your previous installation guide and then came here after that was done.

        • Adrian Rosebrock December 10, 2019 at 6:44 am #

          Realistically, no. It can be challenging to configure a deep learning dev environment from scratch, especially if you intend on using a GPU as well. It’s near impossible for me to support all the different combinations of OS versions, CUDA/cuDNN versions, TensorFlow versions, and whatever else you have installed on your machine.

          I therefore assume you are starting with a fresh install.

  4. Hervé December 10, 2019 at 2:26 am #

    You prefer pip to anaconda and it was the same for me… Before. Actually, installing TF with anaconda (community version) only needs ONE line since it already contains the appropriate cuda and cudnn.

    conda create -n tf_gpu tensorflow-gpu

    It may nevertheless require to install the nvidia driver before (I don’t know). Anyway, it seems much simpler than the pip version.

    • Adrian Rosebrock December 10, 2019 at 6:56 am #

      I’ve seen people successfully install TensorFlow with GPU support with Anaconda. I’ve also seen the opposite where their dev environments get hosed. It could be any number of factors that caused the issue, including Anaconda but certainly not limited to it either.

      My goal of providing these instructions is for readers to see what’s going on under the hood, and more importantly, if a command errors out, they’ll know the exact step that caused the problem, enabling them to better research the issue and ultimately resolve it.

  5. Saif Ansari December 10, 2019 at 7:09 am #

    Failed to initialize NVML: Driver/library version mismatch
    This is what i get when i tried the procedure on a g4dn aws instance using the tutorial.

  6. Muhammad Abubakar December 10, 2019 at 10:07 am #

    hi adrian, everything is going well and is similar to the outputs that you’ve shown, but this only allows to run python line by line, i want to know where to write all the python program and then run it (such as we used spyder in anaconda)

  7. Douglas Jones December 10, 2019 at 12:59 pm #

    Thanks, Adrian! Great tutorial as usual. Looks like I am up clean on a dual GPU Ubuntu 18.04 system. One general question. Nvidia provides Tensorflow and other frameworks etc. For those of us running GPUs, is there any advantage or benefit to using the tools provided by Nvidia vs. the tools you suggest?


    • Adrian Rosebrock December 12, 2019 at 10:01 am #

      I personally prefer to configure my systems using the official repos unless I have a very good reason to use a non-official one. A good example being NVIDIA’s TensorFlow install for the Jetson Nano. You can and should be using that install for the Nano but I wouldn’t bother if you’re configuring a standard deep learning rig.

  8. Constantin December 10, 2019 at 1:16 pm #

    Thanks for the post! As I am just starting this seems super useful.

    I am a bit hesitant to start as I have already installed nvidia-driver-435. Thus my questions:

    1. Do i need to change to nvidia-driver-418 or can I keep 435?
    2. Can I jump between driver versions post installation process?

    • Adrian Rosebrock December 12, 2019 at 10:00 am #

      You should be able to keep your current driver version, just make sure it’s compatible with the latest CUDA release. I also would not suggest jumping between driver versions, that’s a good way to hose your DL install.

  9. Andrew Baker December 10, 2019 at 1:57 pm #

    Great tutorial as always. Lately I’ve been working on Google Colab which operates like a Jupyter notebook and runs entirely in the cloud. The GPU instance uses a K80. The best part is this is free.

  10. okorie emmanuel December 12, 2019 at 4:28 am #

    Very instructive as other tutorials. Please, can you show how to install tensorflow2 on a raspberry pi? Thanks

    • Adrian Rosebrock December 12, 2019 at 9:59 am #

      Thanks for the suggestion.

  11. B. Damdinsuren December 13, 2019 at 5:55 am #

    Thanks a lot

    • Adrian Rosebrock December 18, 2019 at 9:36 am #

      You are welcome!

  12. Subhendu Sinha Chaudhuri December 15, 2019 at 10:28 pm #

    How to install tensorflow 2.0 in windows 10 64 Bit. I had followed the guide. It is getting installed. But import tensorflow command is giving dynamic library error. I have visual studio 2019

  13. Balaji December 16, 2019 at 12:43 pm #

    Thanks, nice tutorial
    I have RTX series GPU will the above steps will work ?
    And want to know what do you suggest for better usage of RTX series GPU.

  14. shaheen December 17, 2019 at 1:29 am #

    Thank you very much for your efforts MAY GOD BLESS you, will you pleas explain for us how to install Tensorflo2.0 CPU AND GPU in anaconda .

  15. Greg Chapman December 18, 2019 at 9:57 pm #

    Complete success. I finally gave up banging my head on anaconda — I have a new Ubuntu 19.10 box with a Ryzen 3800x and a RTX2700 — and with one minor change to the intsructions (pip/pip3 via apt), I was up and running quickly with only one small change to the .bashrc instructions.


    • Adrian Rosebrock December 26, 2019 at 9:42 am #

      Congrats on getting your deep learning rig configured, Greg!

    • Paul January 1, 2020 at 5:08 pm #

      Hi Greg!
      I also have Ubuntu 19.20 but GeForce GTX 1050, Could you please walk me through the changes you made to enable me setup mine. Thanks!

  16. RexBarker December 19, 2019 at 4:15 pm #

    In case someone else runs into the same error, I had troubles installing the Nvidia drivers from the run file in step #2 ("sudo ./ --override"). The script would fail after the license agreement listing, and nothing was installed.

    After looking in the .log file, it turns out that the nvidia-drm install was being blocked since I was installing it directly on the machine from using GUI interface. I had previous installed cudaDNN 9.2 last year for Tensorflow 1.x, and it had installed the Nvidia Xwindows drivers at the same time….so the graphics drivers were in use for my current login.

    The solution was to log out. At the prompt screen, enter into command line mode with CTRL-ALT-F2. Then disable the GUI with

    # systemctl isolate

    (login in now, and install the file as instructed above)

    Re-enable the GUI after you’re done:

    # systemctl start

    Basic instructions are here:

    Restart, and should be able to move on to the rest of step 2 & 3

    • Adrian Rosebrock December 26, 2019 at 9:42 am #

      Fantastic, thanks for sharing this!

    • sharonwoo January 19, 2020 at 8:36 am #

      Hi, thanks for sharing as I encountered this error and am trying it out. Will update if it doesn’t work!

  17. VIKAS BHANGDIYA December 21, 2019 at 8:38 pm #

    Hi Adrian

    with above instruction my CV2 version is 4.1.1.

    I tried to update CV2 4.1.2 with following command but not succeed

    sudo -H pip install opencv-python==4.1.2

    sudo pip3 install opencv-python==4.1.2

    pip3 install opencv-python==4.1.2

    can u suggest how to update cv2 4.1.2

    • Adrian Rosebrock December 26, 2019 at 9:41 am #

      I would suggest you create a new Python virtual environment and install OpenCV there:

  18. Roberto GV December 22, 2019 at 12:29 pm #

    Thanks a lot for the post Adrian. It’s once again very detailed and useful. I have followed the steps for installation of Tensorflow with GPU on my Acer predator Helios 300 with Ubuntu 18.04. It worked perfect but I got the following error when trying to train my model:

    Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [Op:Conv2D]

    I only needed to change to cudnn 7.6.4 and add the following lines in my code:

    config = tf.compat.v1.ConfigProto(log_device_placement=True)
    config.gpu_options.per_process_gpu_memory_fraction=0.8 # don’t hog all vRAM
    config.operation_timeout_in_ms=15000 # terminate on long hangs
    sess = tf.compat.v1.InteractiveSession(“”, config=config)

    Just posting here in case is useful for someone else. Thanks a lot for having this aweosome blog!


    • Adrian Rosebrock December 26, 2019 at 9:42 am #

      Thank you for sharing, Roberto!

  19. uniu December 27, 2019 at 6:35 am #

    Hi Adrian,

    finally I did it. Just some of my hardest experiments:

    – Tensorflow 2.0 only support Cuda 10.0 (maybe cuda 10.1 but I didnt try)
    – If you run this command “sudo apt-get install nvidia-driver-418” you will get the version 418, not this one: “Driver Version: 430.50”
    – The easyest way to install Cuda 10 is install the nvidia driver version 410 with this command: “sudo apt-get install nvidia-driver-410”
    – One possible Cuda downgrade will be painful without the skip “Driver” option.

    • Adrian Rosebrock January 2, 2020 at 8:40 am #

      Thank you for sharing!

  20. Wanderson January 17, 2020 at 10:10 am #

    Hi Adrian,
    Is there any special reason why you prefer virtualenv rather than Docker?


    • Adrian Rosebrock January 23, 2020 at 9:35 am #

      Virtualenv and Docker are two completely different things. Docker is more of a “lightweight container/VM” while virtualenv creates Python virtual environments. You could actually run virtualenv/virtualenvwrapper on your own Docker instance if you wanted to.

  21. fly January 21, 2020 at 8:39 am #

    Hi Adrian another great post, i would like to add a few things i was using an old comp it didn’t have avx instructions on the cpu which is needed. 9th of jan 2020 tensorflow updated to 2.1 which wont work with the other packages.Pip install tensorflow-gpu==2.0.0-rc2 did the trick for me.

    • Adrian Rosebrock January 23, 2020 at 9:25 am #

      Thanks, Fly. It seems like there is a bug in TensorFlow 2.1. It’s recommended that readers use TensorFlow 2.0 until v2.2 is released.
