My Top 9 Favorite Python Deep Learning Libraries


So you’re interested in deep learning and Convolutional Neural Networks. But where do you start? Which library do you use? There are just so many!

Inside this blog post, I detail 9 of my favorite Python deep learning libraries.

This list is by no means exhaustive; it's simply a list of libraries that I've used in my computer vision career and found particularly useful at one time or another.

Some of these libraries I use more than others — specifically, Keras, mxnet, and sklearn-theano.

Others, I use indirectly, such as Theano and TensorFlow (which libraries like Keras, deepy, and Blocks build upon).

And even others, I use only for very specific tasks (such as nolearn and their Deep Belief Network implementation).

The goal of this blog post is to introduce you to these libraries. I encourage you to read up on each of them individually to determine which one will work best for you in your particular situation.

My Top 9 Favorite Python Deep Learning Libraries

Again, I want to reiterate that this list is by no means exhaustive. Furthermore, since I am a computer vision researcher and actively work in the field, many of these libraries have a strong focus on Convolutional Neural Networks (CNNs).

I’ve organized this list of deep learning libraries into three parts.

The first part details popular libraries that you may already be familiar with. For each of these libraries, I provide a very general, high-level overview. I then detail some of my likes and dislikes about each library, along with a few appropriate use cases.

The second part dives into my personal favorite deep learning libraries that I use heavily on a regular basis (HINT: Keras, mxnet, and sklearn-theano).

Finally, I provide a “bonus” section for libraries that I have (1) not used in a long time, but still think you may find useful or (2) libraries that I haven’t tried yet, but look interesting.

Let’s go ahead and dive in!

For starters:

1. Caffe

It’s pretty much impossible to mention “deep learning libraries” without bringing up Caffe. In fact, since you’re on this page right now reading up on deep learning libraries, I’m willing to bet that you’ve already heard of Caffe.

So, what is Caffe exactly?

Caffe is a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC). It's modular. Extremely fast. And it's used by academics and industry alike in state-of-the-art applications.

In fact, if you were to go through the most recent deep learning publications (that also provide source code), you'd more than likely find Caffe models in their associated GitHub repositories.

While Caffe itself isn’t a Python library, it does provide bindings into the Python programming language. We typically use these bindings when actually deploying our network in the wild.

The reason I've included Caffe in this list is because it's used nearly everywhere. You define your model architecture and solver method in plaintext, JSON-like files called .prototxt configuration files. The Caffe binaries take these .prototxt files and train your network. After Caffe is done training, you can take your network and classify new images through the Caffe binaries, or better yet, through the Python or MATLAB APIs.
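
To give a flavor of the format, here is a hedged sketch of what a single convolutional layer definition might look like in a .prototxt file (the layer name and parameter values are made up for illustration):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
  }
}
```

A full network is just a long stack of blocks like this one, which is exactly why the files become tedious as architectures grow deeper.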

While I love Caffe for its performance (it can process 60 million images per day on a K40 GPU), I don’t like it as much as Keras or mxnet.

The main reason is that constructing an architecture inside the .prototxt files can become quite tedious and tiresome. And more to the point, tuning hyperparameters with Caffe cannot be (easily) done programmatically. Because of these two reasons, I tend to lean towards libraries that allow me to implement the end-to-end network (including cross-validation and hyperparameter tuning) in a Python-based API.
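
To make the contrast concrete, this is the kind of programmatic hyperparameter sweep I have in mind, sketched with scikit-learn's GridSearchCV. The features and labels below are random placeholders, not real image data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# random stand-in data -- in practice these would be image features
rng = np.random.RandomState(0)
X = rng.randn(60, 10)
y = rng.randint(0, 2, 60)

# sweep the regularization strength programmatically, with cross-validation
grid = GridSearchCV(LogisticRegression(), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)
```

This "define the sweep in Python, let the library handle the rest" workflow is what editing .prototxt files by hand cannot give you.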

2. Theano

Let me start by saying that Theano is beautiful. Without Theano, we wouldn't have anywhere near the number of deep learning libraries (specifically in Python) that we do today. Just as we couldn't have SciPy, scikit-learn, and scikit-image without NumPy, the same can be said about Theano and higher-level abstractions of deep learning.

At the very core, Theano is a Python library used to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays. Theano accomplishes this via tight integration with NumPy and transparent use of the GPU.

While you can build deep learning networks in Theano, I tend to think of Theano as the building blocks for neural networks, in the same way that NumPy serves as the building blocks for scientific computing. In fact, most of the libraries I mention in this blog post wrap around Theano to make it more convenient and accessible.

Don’t get me wrong, I love Theano — I just don’t like writing code in Theano.

While not a perfect comparison, building a Convolutional Neural Network in Theano is like writing a custom Support Vector Machine (SVM) in native Python with only a sprinkle of NumPy.
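
To give a sense of what that means in practice, here is a minimal sketch of a single fully connected layer's forward pass written against raw NumPy. In Theano, this is roughly the level of detail you are responsible for yourself (plus the symbolic graph and gradient computations on top of it):

```python
import numpy as np

def dense_forward(x, W, b):
    """One fully connected layer with a ReLU activation, by hand."""
    return np.maximum(0.0, x.dot(W) + b)

rng = np.random.RandomState(1)
x = rng.randn(4, 8)          # a batch of 4 inputs with 8 features each
W = rng.randn(8, 16) * 0.1   # weight matrix
b = np.zeros(16)             # bias vector

out = dense_forward(x, W, b)
print(out.shape)  # (4, 16)
```

Now multiply that bookkeeping across convolution, pooling, and backpropagation, and you can see why higher-level wrappers exist.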

Can you do it?

Sure, absolutely.

Is it worth your time and effort?

Eh, maybe. It depends on how low-level you want to go and what your application requires.

Personally, I’d rather use a library like Keras that wraps Theano into a more user-friendly API, in the same way that scikit-learn makes it easier to work with machine learning algorithms.

3. TensorFlow

Similar to Theano, TensorFlow is an open source library for numerical computation using data flow graphs (which is all that a Neural Network really is). Originally developed by the researchers on the Google Brain Team within Google’s Machine Intelligence research organization, the library has since been open sourced and made available to the general public.

A primary benefit of TensorFlow (as compared to Theano) is distributed computing, particularly across multiple GPUs (although this is something Theano is working on).

Other than swapping out the Keras backend to use TensorFlow (rather than Theano), I don’t have much experience with the TensorFlow library. Over the next few months, I expect this to change, however.

4. Lasagne

Lasagne is a lightweight library used to construct and train networks in Theano. The key term here is lightweight — it is not meant to be a heavy wrapper around Theano like Keras is. While this leads to your code being more verbose, it does free you from any restraints, while still giving you modular building blocks based on Theano.

Simply put: Lasagne functions as a happy medium between the low-level programming of Theano and the higher-level abstractions of Keras.

My Go-To’s:

5. Keras

If I had to pick a favorite deep learning Python library, it would be hard for me to pick between Keras and mxnet — but in the end, I think Keras might win out.

Really, I can’t say enough good things about Keras.

Keras is a minimalist, modular neural network library that can use either Theano or TensorFlow as a backend. The primary motivation behind Keras is that you should be able to experiment fast and go from idea to result as quickly as possible.

Architecting networks in Keras feels easy and natural. It includes some of the latest state-of-the-art algorithms for optimizers (Adam, RMSProp), normalization (BatchNorm), and activation layers (PReLU, ELU, LeakyReLU).
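
As a rough illustration (using the Sequential API as it looked around this era; exact import paths and defaults may differ across Keras versions), a small fully connected network can be stood up in a handful of lines:

```python
from keras.models import Sequential
from keras.layers import Dense

# a small fully connected network: 784 inputs -> 64 hidden units -> 10 classes
model = Sequential()
model.add(Dense(64, activation="relu", input_shape=(784,)))
model.add(Dense(10, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")
```

Compare that with hand-editing .prototxt files and the appeal is obvious.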

Keras also places a heavy focus on Convolutional Neural Networks, something very near to my heart. Whether this was done intentionally or unintentionally, I think this is extremely valuable from a computer vision perspective.

More to the point, you can easily construct both sequence-based networks (where the inputs flow linearly through the network) and graph-based networks (where inputs can “skip” certain layers, only to be concatenated later). This makes implementing more complex network architectures such as GoogLeNet and SqueezeNet much easier.
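
Here is a sketch of the graph-based style, written against the Keras 2-style functional API (which may differ from the API current when this post was written). The raw input "skips" the hidden layer and is concatenated back in before the output:

```python
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

inputs = Input(shape=(32,))
hidden = Dense(16, activation="relu")(inputs)

# the raw input bypasses the hidden layer and is merged back in later
merged = Concatenate()([hidden, inputs])
outputs = Dense(1, activation="sigmoid")(merged)

model = Model(inputs=inputs, outputs=outputs)
```

The same pattern, repeated with convolutional layers, is the building block behind the "skip" and "inception"-style branches in architectures like GoogLeNet.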

My only problem with Keras is that it does not support multi-GPU environments for training a network in parallel. This may or may not be a deal breaker for you.

If I want to train a network as fast as possible, then I'll likely use mxnet. But if I'm tuning hyperparameters, I'm likely to set up four independent experiments with Keras (one running on each of my Titan X GPUs) and evaluate the results.

6. mxnet

My second favorite deep learning Python library (again, with a focus on training image classification networks) would undoubtedly be mxnet. While it can take a bit more code to stand up a network in mxnet, what it does give you is an incredible number of language bindings (C++, Python, R, JavaScript, etc.).

The mxnet library really shines for distributed computing, allowing you to train your network across multiple CPU/GPU machines, and even in AWS, Azure, and YARN clusters.

Again, it takes a little more code to get an experiment up and running in mxnet (as compared to Keras), but if you’re looking to distribute training across multiple GPUs or systems, I would use mxnet.

7. sklearn-theano

There are times where you don’t need to train a Convolutional Neural Network end-to-end. Instead, you need to treat the CNN as a feature extractor. This is especially useful in situations where you don’t have enough data to train a full CNN from scratch. Instead, just pass your input images through a popular pre-trained architecture such as OverFeat, AlexNet, VGGNet, or GoogLeNet, and extract features from the FC layers (or whichever layer you decide to use).

In short, this is exactly what sklearn-theano allows you to do. You can’t train a model from scratch with it — but it’s fantastic for treating networks as feature extractors. I tend to use this library as my first stop when evaluating whether a particular problem is suitable for deep learning or not.
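
The workflow looks roughly like this. Here the CNN features are random stand-ins (in practice they would come from a pre-trained network, e.g., via sklearn-theano's transformers), and a simple scikit-learn classifier is trained on top:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# stand-in for FC-layer activations extracted from a pre-trained CNN:
# 100 "images", each represented by a 4096-dim feature vector
rng = np.random.RandomState(42)
features = rng.randn(100, 4096)
labels = rng.randint(0, 2, 100)

# train a simple linear model on top of the frozen CNN features
clf = LogisticRegression().fit(features, labels)
print(clf.score(features, labels))
```

If a linear model on top of extracted features already performs well, that's a strong signal the problem is amenable to deep learning before you invest in training end-to-end.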

8. nolearn

I’ve used nolearn a few times already on the PyImageSearch blog, mainly when performing some initial GPU experiments on my MacBook Pro and performing deep learning on an Amazon EC2 GPU instance.

While Keras wraps Theano and TensorFlow into a more user-friendly API, nolearn does the same — only for Lasagne. Furthermore, all code in nolearn is compatible with scikit-learn, a huge bonus in my book.

I personally don't use nolearn for Convolutional Neural Networks (CNNs), although you certainly could (I prefer Keras and mxnet for CNNs). Instead, I mainly use nolearn for its implementation of Deep Belief Networks (DBNs).


9. DIGITS

Alright, you got me.

DIGITS isn’t a true deep learning library (although it is written in Python). DIGITS (Deep Learning GPU Training System) is actually a web application used for training deep learning models in Caffe (although I suppose you could hack the source code to work with a backend other than Caffe, but that sounds like a nightmare).

If you've ever worked with Caffe before, then you know it can be quite tedious to define your .prototxt files, generate your image dataset, run your network, and babysit your network training, all via your terminal. DIGITS aims to fix this by allowing you to do most of these tasks in your browser.

Furthermore, the user interface is excellent, providing you with valuable statistics and graphs as your model trains. I also like that you can easily visualize activation layers of the network for various inputs. Finally, if you have a specific image that you would like to test, you can either upload the image to your DIGITS server or enter the URL of the image and your Caffe model will automatically classify the image and display the result in your browser. Pretty neat!


10. Blocks

I'll be honest: I've never used Blocks before, although I do want to give it a try (hence why I'm including it in this list). Like many of the other libraries in this list, Blocks builds on top of Theano, exposing a much more user-friendly API.

11. deepy

If you were to guess which library deepy wraps around, what would your guess be?

That’s right, it’s Theano.

I remember using deepy a while ago (during one of its initial commits), but I haven't touched it in a good 6-8 months. I plan on giving it another try in future blog posts.

12. pylearn2

I feel compelled to include pylearn2 in this list for historical reasons, even though I don't actively use it anymore. Pylearn2 is more than a general machine learning library (it is similar to scikit-learn in that respect); it also includes implementations of deep learning algorithms.

The biggest concern I have with pylearn2 is that (as of this writing), it does not have an active developer. Because of this, I’m hesitant to recommend pylearn2 over more maintained and active libraries such as Keras and mxnet.

13. Deeplearning4j

This is supposed to be a Python-based list, but I thought I would include Deeplearning4j in here, mainly out of the immense respect I have for what they are doing — building an open source, distributed deep learning library for the JVM.

If you work in enterprise, you likely have a basement full of servers you use for Hadoop and MapReduce. Maybe you’re still using these machines. Maybe you’re not.

But what if you could use these same machines to apply deep learning?

It turns out you can — you just need Deeplearning4j.

Take a deep dive into Deep Learning and Convolutional Neural Networks

Figure 1: Learn how to utilize Deep Learning and Convolutional Neural Networks to classify the contents of images inside the PyImageSearch Gurus course.


Curious about deep learning?

I’m here to help.

Inside the PyImageSearch Gurus course, I've created 21 lessons covering 256 pages of tutorials on Neural Networks, Deep Belief Networks, and Convolutional Neural Networks, allowing you to get up to speed quickly and easily.

To learn more about the PyImageSearch Gurus course (and grab 10 FREE sample lessons), just click the button below:

Click here to learn more about PyImageSearch Gurus!


Summary

In this blog post, I reviewed some of my favorite libraries for deep learning and Convolutional Neural Networks. This list was by no means exhaustive and was certainly biased towards deep learning libraries that focus on computer vision and Convolutional Neural Networks.

All that said, I do think this is a great list to utilize if you’re just getting started in the deep learning field and looking for a library to try out.

In my personal opinion, I find it hard to beat Keras and mxnet. The Keras library sits on top of computational powerhouses such as Theano and TensorFlow, allowing you to construct deep learning architectures in remarkably few lines of Python code.

And while it may take a bit more code to construct and train a network with mxnet, you gain the ability to distribute training across multiple GPUs easily and efficiently. If you’re in a multi-GPU system/environment and want to leverage this environment to its full capacity, then definitely give mxnet a try.

Before you go, be sure to sign up for the PyImageSearch Newsletter using the form below to be notified when new deep learning posts are published (there will be a lot of them in the coming months!)


34 Responses to My Top 9 Favorite Python Deep Learning Libraries

  1. joe minichino June 27, 2016 at 11:07 am #

    Hi Adrian,

    How's it going? What is your opinion on tflearn (formerly scikit flow)? Seems interesting!

    • Adrian Rosebrock June 28, 2016 at 10:53 am #

      As far as I understand, scikit-flow has been moved into TensorFlow, starting with v0.8+.

  2. Abhishek Mishra June 27, 2016 at 11:37 am #

    This is a great list. Thanks for putting this together.

    For convolutional networks, being “fast” really helps. Some of the networks that won the ImageNet challenge in the last few years were more than 100 layers deep. This is where GPUs and distributed training become very useful.

    • Adrian Rosebrock June 28, 2016 at 10:53 am #

      ResNet, the architecture that won the most recent ImageNet challenge, was massive. In fact, it took over 3 weeks to train using 8 GPUs! There is a mantra in the deep learning world that “the deeper, the better”. But I think ResNet, while winning the ImageNet challenge, also demonstrated there is a point of diminishing returns as the network gets deeper.

  3. Kenny June 27, 2016 at 12:18 pm #

    As always, awesome post Adrian 😉 Wonderful and splendid!

    • Adrian Rosebrock June 28, 2016 at 10:51 am #

      Thanks for the kind words Kenny 🙂

  4. Sahil Dadia June 27, 2016 at 2:02 pm #

    Finally, a concise answer on how to get started with deep learning. Are you going to post tutorials on Keras? If so, I will start with Keras. I really like your posts. Keep posting!

    • Adrian Rosebrock June 28, 2016 at 10:51 am #

      Correct, most tutorials will use either Keras or mxnet. The ones I have planned for the near future will be using Keras.

  5. Keith Prisbrey June 27, 2016 at 4:57 pm #

    Incredibly useful. Thank you very much.

    • Adrian Rosebrock June 28, 2016 at 10:50 am #

      Thanks Keith 🙂

  6. Jason June 27, 2016 at 9:06 pm #

    Nice summary… I notice that today the Tensorflow team have released 0.9 which includes support for running it on the Raspberry Pi, Android and iOS (amongst other things)…

    It seems to be gaining a lot of momentum in the ML community.

    • Adrian Rosebrock June 28, 2016 at 10:51 am #

      TensorFlow will only continue to grow. I personally haven’t used it as much as I would like, mainly because Theano does such a good job as a backend for Keras already, but I’m really looking forward to giving the multi-GPU variant of TensorFlow a try.

  7. Linus July 8, 2016 at 8:07 am #

    Unfortunately, I didn't get the last two posts mailed 🙁
    But nice to see you starting with deep learning and Neural Networks!

    • Adrian Rosebrock July 8, 2016 at 9:44 am #

      Thanks for letting me know Linus, I’ll be sure to look into this.

  8. Amit July 18, 2016 at 11:55 am #


    Thank you for a very useful post.
    I suggest having a look at chainer. It is pure python (with cython bindings to cuda/cuDNN). On the one hand it allows easy construction and training of deeplearning networks. On the other hand it offers low level programming of custom layers. It also supports multi-GPU.

    • Adrian Rosebrock July 18, 2016 at 5:08 pm #

      Thanks for sharing Amit, I’ll be sure to take a look at chainer.

      • chugmagaga September 11, 2016 at 9:20 pm #

        Just curious to see if you had time to look at chainer.

        • Adrian Rosebrock September 12, 2016 at 12:44 pm #

          I honestly haven’t had a chance to play with it at all. I’ve been mainly using Keras and mxnet for my recent projects.

  9. Gerard July 20, 2016 at 2:36 am #

    installing and configuring theano gives me a whole deal of a headache >:(

    • Adrian Rosebrock July 20, 2016 at 2:35 pm #

      Hey Gerard — installing and configuring Theano isn’t too hard once you’ve seen it done before. See this blog post where I install Keras (which has Theano as a pre-requisite). As you’ll see, it’s not too bad.

  10. JC August 10, 2016 at 3:39 pm #

    This morning Intel bought Nervana, which backs neon, the fastest framework today. What's your comment on the neon framework?

  11. Neerja October 25, 2016 at 1:53 am #

    Hey Adrian, which deep learning libraries can be integrated with OpenCV, other than Caffe?

    • Adrian Rosebrock November 1, 2016 at 9:41 am #

      Any Python deep learning library can be used with OpenCV. Keep in mind that OpenCV represents images as NumPy arrays just like deep learning libraries do. Therefore, if you can represent an image with a NumPy array it can be easily used in other libraries.

  12. Ajeet February 6, 2017 at 8:31 am #

    Excellent summary, very useful for someone like me who has just started learning deep learning. Do you recommend any book on deep learning?

    • Adrian Rosebrock February 7, 2017 at 9:12 am #

      If you’re interested in deep learning for the specific use of computer vision, I’m actually writing a book on that very topic.

      • Bob June 16, 2017 at 2:15 pm #

        Hi Adrian,

        Can computer vision solve problems such as “do the persons in the 2 photos look alike”, or “do the 2 houses in the 2 pictures have a similar style”?


        • Adrian Rosebrock June 20, 2017 at 11:18 am #

          It depends on what you define as “similar”, as “similarity” is often a subjective concept. If you can formalize what “similar” means, then in some cases, yes, computer vision can be used with very high accuracy for this.

  13. Vincent April 22, 2017 at 12:24 am #

    Time to update this blog post to include PyTorch B)

    • Adrian Rosebrock April 24, 2017 at 9:48 am #

      I haven’t tried PyTorch, but I’ll certainly check it out.

  14. Juraj January 31, 2018 at 7:22 am #

    Hi Adrian, please update information about multi-GPU support in Keras. You can insert there a link to your article:

  15. Priyanka October 12, 2018 at 2:26 pm #

    Actually, I am new to deep learning and very confused by all these libraries. Can you please tell me whether I should go for TensorFlow or Keras (or both) for video classification and action recognition in video? Also, in which format should my video dataset be, and which video formats are supported by these libraries?

    • Adrian Rosebrock October 16, 2018 at 8:52 am #

      Hey Priyanka — I actually just authored a blog post on Keras vs. TensorFlow. Give it a look as it should help start to address your concerns.

  16. Fadi January 9, 2019 at 2:55 am #

    Thank you Adrian, very useful post. I'm new to deep learning, and I want to work on object detection using satellite imagery. Could you please give me some advice on how to start (which framework is better to use? which CNN model to use as a baseline? ...)? I will use the XView satellite dataset. Thank you in advance.

    • Adrian Rosebrock January 11, 2019 at 9:52 am #

      Hey Fadi — it’s awesome that you are interested in studying deep learning and object detection. I would suggest using Keras and TensorFlow. You should also utilize object detectors such as Faster R-CNN, Single Shot Detector (SSDs), and RetinaNet. I cover all of these object detectors inside Deep Learning for Computer Vision with Python. Be sure to take a look, I believe it will really help you.
