In today’s blog post, I interview arguably one of the most important researchers and practitioners in modern-day deep learning, Francois Chollet.
Francois is not only the creator of the Keras deep learning library, but also a Google AI researcher. He will also be speaking at PyImageConf 2018 in August of this year.
Inside this interview Francois discusses:
- His inspiration to study AI
- The reason why he created Keras
- How deep learning will eventually be prevalent in every industry, every business, and every non-profit
- Problems and challenges in the AI industry (and how we can help solve them)
- What he would change about the deep learning field, and how the research community may be in trouble
- His advice to you, as a PyImageSearcher, on the optimal way to study deep learning
Please, join me in welcoming Francois to the PyImageSearch blog — it is truthfully a privilege to have him here.
An interview with Francois Chollet
Adrian: Hi Francois! I know you are very busy with your work at Google AI and on the Keras library — I really appreciate you taking the time to do this interview. It’s quite the honor to have you on the PyImageSearch blog! For people who don’t know you, who are you and what do you do?
Francois: I work as a software engineer at Google in Mountain View, CA. I develop Keras, a deep learning library. I started it in 2015 as a side-project, and over time it has become bigger than intended — over 250,000 users now, a good chunk of the deep learning world. I also do AI research on a number of topics, including computer vision and program synthesis.
Adrian: What inspired you to start working in the machine learning, deep learning, and computer vision field?
Francois: I’ve been into AI for a long time. I was originally approaching it from a philosophical angle — I wanted to understand how intelligence works, what is the nature of consciousness, that sort of thing. I started by reading up on neuropsychology, which from a distance looked like the field that should be able to answer these questions. I learned everything I could, but it turned out that neuropsychology didn’t have any real answers. That was a big disappointment.
So I moved on to AI — the idea being to try to understand minds by trying to create them from first principles, bottom up, very much the reverse approach to neuropsychology. Of course, most of AI at that time wasn’t at all concerned about minds and how they might work, so I ended up in the one corner of AI that seemed most relevant to my interests: developmental cognitive robotics, which is about using robots and AI to test models of human cognitive development. Then, because I’m not very good at doing the same thing for a long time, I eventually branched out into more applied subfields, such as computer vision and natural language processing.
Adrian: Tell us a bit about the Keras deep learning library. Why did you create it and what gap does it fill in the set of existing ML/DL libraries and packages?
Francois: I created Keras around February / March 2015. Deep learning was a very different field back then. First, it was smaller. There might have been 10,000 people doing deep learning at the time. It’s closer to one million now.
In terms of tools, you didn’t have many options. You could use Caffe, which was popular in computer vision, but only worked for fairly narrow use cases (convnets) and wasn’t very extensible. You could use Torch 7, which was a pretty good choice, but that meant you had to code in Lua, which doesn’t have any of the goodies of the Python data science ecosystem. Any data format you wanted to load — you had to hack together your own parser from scratch in Lua, because you weren’t going to find one on GitHub. And then there was Theano, a Python library that was very much the spiritual ancestor to TensorFlow. I liked Theano a lot, it felt like the future, but it was very low-level, pretty difficult to use. You had to write everything from scratch.
At the time I was doing research on applying deep learning to natural language processing, with a focus on question-answering. Support for RNNs in the existing tool ecosystem was nearly nonexistent. So I decided to make my own Python library, on top of Theano, borrowing some ideas from the parts of the Scikit-Learn API and Torch API that I liked. When I launched, the main value proposition was that Keras was the first deep learning library for Python that offered support for both RNNs and convnets at the same time. It also had the first reusable open-source implementation of an LSTM, to the best of my knowledge (previously available implementations were essentially research code). And it was pretty easy to use.
Keras started getting users from day one, and it has been a nonstop development marathon since.
Adrian: Why might a deep learning researcher, practitioner, or developer choose Keras over other libraries/frameworks such as PyTorch, Caffe, or even just strict TensorFlow?
Francois: I think what makes Keras stand out in the deep learning framework landscape today is its focus on the user experience. Everything in the Keras API is designed to follow best practices for reducing cognitive load, being more accessible, and being more productive. I think that’s the main reason why Keras has reached this level of adoption, even though Torch and Caffe had a big head start. You can’t overstate the importance of ease-of-use and productivity, whether for practitioners or researchers. Going from idea to results as fast as possible, in a tight iteration loop, is key to doing great research or building a great product.
Also, here’s one thing about Keras and TensorFlow. There’s no “Keras or TensorFlow” choice to make. Keras is the official high-level interface to TensorFlow. It comes packaged with TensorFlow as the tf.keras module. You can think of Keras as a frontend for deep learning, that’s tuned for ease-of-use and productivity, and that you can run on top of different backend engines, TensorFlow being the main one.
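As a quick illustration of that packaging, here is a minimal sketch of defining a small model through the tf.keras module. This assumes a TensorFlow 2.x install; the layer sizes are arbitrary example values, not anything Francois prescribes:

```python
# Keras shipped inside TensorFlow as the tf.keras module --
# no separate "keras" package is needed here.
import tensorflow as tf

# A tiny fully-connected classifier: 32 input features, 10 output classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# The same high-level compile/fit workflow as standalone Keras.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

The same model code runs unchanged whether you import Keras standalone or through TensorFlow, which is the "frontend on top of backend engines" idea in practice.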
Adrian: One of the most exciting aspects about open source is seeing how your work is used by others. What are some of the more interesting, and even surprising, ways you’ve seen Keras used?
Francois: One thing that’s really fascinating about our field is the sheer diversity of problems that you can solve with our techniques and our tools. I’ve seen Keras being used for so many problems I didn’t even know existed. Like optimizing the operation of a salmon farm. Allocating micro-loans in developing countries. Building automated checkout systems for brick-and-mortar stores. In general, there seems to be a gap between the set of problems that people here in Silicon Valley are aware of, and all the problems that people are facing out there and that could be solved with these technologies.
That’s a big reason why focusing on accessibility is so important: Silicon Valley on its own is never going to solve every problem that can be solved. There will not be a Silicon Valley-based “deep learning industry” that would have a monopoly on deep learning expertise and that would sell consulting services and software to everyone else. Instead, deep learning is going to be in every industry, in every business and non-profit, a tool in everyone’s hands. Making frameworks like Keras and TensorFlow free to use and as accessible as possible is a way to initiate a kind of large-scale distributed wave of problem-solving: it’s the people who understand the target domains that are going to be building the solutions, on their own, using our tools, having 100x the impact that we alone could have.
And I think Keras has done a good job at being accessible to everyone, compared to other frameworks that only aim at being used by expert researchers and other insiders. When I talk to people doing deep learning who are outside the usual research and industry circles, it’s generally Keras they’re using.
Adrian: What are your favorite open source libraries, excluding Keras and/or TensorFlow?
Francois: I really like Scikit-Learn, it has been hugely impactful in the scientific Python ecosystem. It’s a very user-centric and well-designed library. I’m generally a big fan of user-centric design. Libraries like Requests and Flask are some good examples as well.
Also, talking about deep learning frameworks, I can’t overstate the importance that Theano has had for the deep learning world. It had its issues, but it was really visionary in many ways.
Adrian: Sometimes even the best intentions from well-minded people can have disastrous consequences — this logic extends to machine learning and AI as well. Would you agree with that statement? And if so, what can we as a ML/DL community do to help ensure we’re not causing more problems than we’re solving?
Francois: Yes, definitely. Applying machine learning inappropriately can potentially lead to simplistic, inaccurate, unaccountable, un-auditable decision systems getting deployed in serious situations and negatively affecting people’s lives. And if you look at some of the ways companies and governments are using machine learning today, it’s not a hypothetical risk, it’s a pressing concern.
Thankfully, I think there’s been a positive trend in the machine learning community recently. People are getting increasingly aware of these issues. One example is algorithmic bias, which is the fact that machine learning systems reflect in their decisions the biases inherent to their training data, whether it’s due to biased data sampling, biased annotations, or the fact that the real world is biased in various ways. A year ago this important issue was pretty much off the radar. Now it’s something that most big companies doing machine learning are looking into. So at least in terms of awareness of some of these issues, we’re making progress. But that’s just the first step.
Adrian: If you could change one thing about the deep learning industry, what would it be?
Francois: I think applied deep learning in the industry is generally doing well, except for a general tendency to oversell the capabilities of current technology, and be overly optimistic about the near future (radiologists will definitely still have a job in five years). The way I see it, it’s the research community that’s in trouble. There are many things I would change on that front.
First, we should attempt to fix the broken incentives in the research community. Currently we have a number of incentives that go against the scientific method and scientific rigor. It’s easier to publish at deep learning conferences when you over-claim and under-investigate, while obfuscating your methodology. People gravitate towards incremental architecture tricks that kinda seem to work if you don’t test them adversarially. They use weak baselines, they overfit to the validation set of their benchmarks. Few people do ablation studies (attempting to verify that your empirical results are actually linked to the idea you’re advancing), do rigorous validation of their models (instead of using the validation set as a training set for hyperparameters), or do significance testing.
Then, we have the problem of PR-driven research. Science-fiction narratives and neuroscience terminology have given the field of AI a special kind of aura, when it’s really a crossover subfield at the intersection of mathematics and computer science. Some well-known labs pick their research projects specifically for PR, disregarding the question of what can be learned from the project and what useful knowledge can be gained. We should remember that the purpose of research is to create knowledge. It’s not to get media coverage, nor is it to publish papers to get a promotion.
Also, I’m sad about our broken reviewing process. The field of deep learning has gone from a few hundred people to tens of thousands in less than 5 years. Most of them are young and inexperienced, often having unrealistic ideas about the field and no real experience with the scientific method. They don’t just write papers, they also review them, and that’s why you end up with the first problem I mentioned — a lack of rigor.
Adrian: You published a book, Deep Learning with Python, in 2017 — congrats on the release! What does your book cover and who is the target audience?
Francois: It’s a deep learning curriculum written for developers. It takes you from the basics (understanding what tensors are, what machine learning is about, and so on) to being able to handle relatively advanced problems on your own, such as image classification, timeseries prediction, text classification, and so on. It’s written with a focus on being accessible and to-the-point. One thing I’ve tried to do is to convey all mathematical concepts using code rather than mathematical notation. Mathematical notation can be a huge accessibility barrier, and it isn’t at all a requirement to understand deep learning clearly. Code can, in many cases, be a very intuitive medium for working with mathematical ideas.
Adrian: What advice would you give to PyImageSearch readers who are interested in studying deep learning? Would you suggest a “theory first” approach, a “hands-on” approach, or some balance between the two?
Francois: I would definitely recommend a hands-on approach. Theory is a higher-level framework to help you make sense of the experience you’ve gathered so far. In the absence of experience, theory makes no sense, and focusing on it too early might lead you to build misleading mental models about what you will be doing later.
Adrian: Francois, you’re a successful AI researcher, you’re highly regarded in the open source DL/ML community, you’re a writer, and you’re an artist. You’re clearly a person who enjoys the act of creating and bringing new ideas, concepts, and creative works into the world. I can absolutely appreciate and relate to this drive to create. However, when we create, whether in terms of ideas or works of art, we’re bound to encounter “haters”. How would you advise someone to handle these types of people who are overly critical/just plain disrespectful of what we create?
Francois: I think different people can behave like trolls for different reasons. But trolls seem to follow the same playbook in every field, whether art or software engineering or science. You see the same patterns across the board. The higher-profile ones seem to be playing status games, attacking someone to gather attention and elevate their own status in the eyes of any audience they might have. The anonymous ones tend to be insecure personalities who cope with themselves by playing gatekeepers, hating on “the noobs” and on outsiders, and who vent their frustration by showing cruelty towards the people or groups who most remind them of their own personal failings.
My advice is to ignore the trolls. Don’t engage with them. Don’t talk to them and don’t talk about them — don’t give them a platform. There’s nothing to be gained from engaging with people who act in bad faith and aim at being hurtful (it’s just stressful). And it deprives the trolls of the attention they seek.
Adrian: You’ll be speaking at PyImageConf this year — we’re super excited and lucky to have you there. Can you tell us a bit more about what you’ll be talking about?
Francois: I’ll be talking about some of my previous research in computer vision, in particular using depthwise separable convolutions in convnet architectures. It’s a really underrated pattern in my opinion. It’s basically a set of priors about the structure of the visual space that enable you to simultaneously build much smaller models, that run faster, and that generalize better. In the same way that the translation-invariance prior leveraged by convnets is a considerable improvement compared to fully-connected networks, I think the depthwise separability prior in convnet features is strictly superior to regular convolutions when it comes to processing 2D images or continuous 3D spaces.
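For readers curious why separability yields "much smaller models," the parameter counts are easy to work out by hand. The sketch below compares a regular convolution with a depthwise separable one (depthwise filtering followed by a pointwise 1×1 convolution); the kernel size and channel counts are arbitrary illustrative values, and bias terms are omitted for clarity:

```python
# Weight counts for a regular vs. a depthwise separable convolution layer,
# mapping c_in input channels to c_out output channels with a k x k kernel.

def conv_params(k, c_in, c_out):
    # A regular convolution learns one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel,
    # then a pointwise 1x1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

regular = conv_params(3, 256, 256)              # 589,824 weights
separable = separable_conv_params(3, 256, 256)  # 2,304 + 65,536 = 67,840 weights
print(regular, separable, round(regular / separable, 1))
```

At these sizes the separable layer uses roughly 8.7× fewer weights than the regular convolution, which is where the smaller, faster models come from.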
In today’s blog post, we interviewed Francois Chollet, Google AI researcher and creator of the popular Keras deep learning library.
Please, take the time to leave a comment on this post and thank Francois for taking the time out of his busy day to join us on PyImageSearch for this interview. We are truthfully privileged and lucky to have him here.
Thank you, Francois!