3 ways to create a Keras model with TensorFlow 2.0 (Sequential, Functional, and Model Subclassing)

Keras and TensorFlow 2.0 provide you with three methods to implement your own neural network architectures:

  1. Sequential API
  2. Functional API
  3. Model subclassing

Inside of this tutorial you’ll learn how to utilize each of these methods, including how to choose the right API for the job.

To learn more about Sequential, Functional, and Model subclassing with Keras and TensorFlow 2.0, just keep reading!

In the first half of this tutorial, you will learn how to implement sequential, functional, and model subclassing architectures using Keras and TensorFlow 2.0. I’ll then show you how to train each of these model architectures.

Once our training script is implemented we’ll then train each of the sequential, functional, and subclassing models, and review the results.

Furthermore, all code examples covered here will be compatible with Keras and TensorFlow 2.0.

Project structure

Go ahead and grab the source code to this post by using the “Downloads” section of this tutorial. Then extract the files and inspect the directory contents with the tree command:

Our models.py contains three functions to build Keras/TensorFlow 2.0 models using the Sequential, Functional, and Model subclassing APIs, respectively.

The training script, train.py, will load a model depending on the provided command line arguments. The model will be trained on the CIFAR-10 dataset. An accuracy/loss curve plot will be output to a .png file in the output directory.

Implementing a Sequential model with Keras and TensorFlow 2.0

Figure 1: The “Sequential API” is one of the 3 ways to create a Keras model with TensorFlow 2.0.

A sequential model, as the name suggests, allows you to create models layer-by-layer in a step-by-step fashion.

The Keras Sequential API is by far the easiest way to get up and running with Keras, but it’s also the most limited — you cannot create models that:

  • Share layers
  • Have branches (at least not easily)
  • Have multiple inputs
  • Have multiple outputs

Examples of seminal sequential architectures that you may have already used or implemented include:

  • LeNet
  • AlexNet
  • VGGNet

Let’s go ahead and implement a basic Convolutional Neural Network using TensorFlow 2.0 and Keras’ Sequential API.

Open up the models.py file in your project structure and insert the following code:

Notice how all of our Keras imports on Lines 2-13 come from tensorflow.keras (also known as tf.keras).

To implement our sequential model, we need the Sequential import on Line 3. Let’s go ahead and create the sequential model now:

Line 15 defines the shallownet_sequential model builder method.

Notice how on Line 18, we initialize the model as an instance of the Sequential class. We’ll then add each layer to the Sequential class, one at a time.

ShallowNet contains one CONV => RELU layer followed by a softmax classifier (Lines 22-29). Notice on each of these lines of code that we call model.add to assemble our CNN with the appropriate building blocks. Order matters — you must call model.add in the order in which you want to insert layers, normalization methods, softmax classifiers, etc.

Once your model has all of the components you desire, you can return the object so that it can be compiled later.

Line 32 returns our sequential model (we will use it in our training script).
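As a quick reference, here is a minimal sketch of the shallownet_sequential builder walked through above; the 32-filter, 3×3 CONV layer is the standard ShallowNet configuration, but treat the exact values as assumptions rather than a copy of the downloaded code:

```python
# Sketch of the shallownet_sequential builder (filter count and kernel
# size are assumptions based on the standard ShallowNet configuration)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Conv2D, Dense, Flatten, Input

def shallownet_sequential(width, height, depth, classes):
    # initialize the model and declare the "channels last" input shape
    model = Sequential()
    model.add(Input(shape=(height, width, depth)))

    # the sole CONV => RELU layer set
    model.add(Conv2D(32, (3, 3), padding="same"))
    model.add(Activation("relu"))

    # softmax classifier
    model.add(Flatten())
    model.add(Dense(classes))
    model.add(Activation("softmax"))

    return model
```

Calling shallownet_sequential(32, 32, 3, 10) yields a model ready to be compiled and trained on CIFAR-10.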

Creating a Functional model with Keras and TensorFlow 2.0

Figure 2: The “Functional API” is one of the 3 ways to create a Keras model with TensorFlow 2.0.

Once you’ve had some practice implementing a few basic neural network architectures using Keras’ Sequential API, you’ll then want to gain experience working with the Functional API.

Keras’ Functional API is easy to use and is typically favored by most deep learning practitioners who use the Keras deep learning library.

Using the Functional API you can:

  • Create more complex models.
  • Have multiple inputs and multiple outputs.
  • Easily define branches in your architectures (ex., an Inception block, ResNet block, etc.).
  • Design directed acyclic graphs (DAGs).
  • Easily share layers inside the architecture.

Furthermore, any Sequential model can be implemented using Keras’ Functional API.

Examples of models that have Functional characteristics (such as layer branching) include:

  • ResNet
  • GoogLeNet/Inception
  • Xception
  • SqueezeNet

To gain experience using TensorFlow 2.0 and Keras’ Functional API, let’s implement MiniGoogLeNet, which includes a simplified version of the Inception module from Szegedy et al.’s seminal Going Deeper with Convolutions paper:

Figure 3: The “Functional API” is the best way to implement GoogLeNet to create a Keras model with TensorFlow 2.0. (image source)

As you can see, there are three modules inside the MiniGoogLeNet architecture:

  1. conv_module: Performs convolution on an input volume, utilizes batch normalization, and then applies a ReLU activation. We define this module out of simplicity and to make it reusable, ensuring we can easily apply a “convolutional block” inside our architecture using as few lines of code as possible, keeping our implementation tidy, organized, and easier to debug.
  2. inception_module: Instantiates two conv_module objects. The first CONV block applies 1×1 convolution while the second block performs 3×3 convolution with “same” padding, ensuring the output volume sizes for the 1×1 and 3×3 convolutions are identical. The output volumes are then concatenated together along the channel dimension.
  3. downsample_module: This module is responsible for reducing the size of an input volume. Similar to the inception_module two branches are utilized here. The first branch performs 3×3 convolution but with (1) 2×2 stride and (2) “valid” padding, thereby reducing the volume size. The second branch applies 3×3 max-pooling with a 2×2 stride. The output volume dimensions for both branches are identical so they can be concatenated together along the channel axis.

Think of each of these modules as Legos — we implement each type of Lego and then stack them in a particular manner to define our model architecture.

Legos can be organized and fit together in a near-infinite number of ways; however, since form defines function, we need to take care and consider how these Legos should fit together.

Note: If you would like a detailed review of each of the modules inside the MiniGoogLeNet architecture, be sure to refer to Deep Learning for Computer Vision with Python where I cover them in detail.

As an example of piecing our “Lego modules” together, let’s go ahead and implement MiniGoogLeNet now:

Line 34 defines the minigooglenet_functional model builder method.

We’re going to define three reusable modules which are part of the GoogLeNet architecture:

  • conv_module
  • inception_module
  • downsample_module

Be sure to refer to the detailed descriptions of each above.

Defining the modules as sub-functions like this allows us to reuse the structure and save on lines of code, not to mention making it easier to read and make modifications.

Line 35 defines the conv_module and its parameters. The most important parameter is x — the input to this module. The other parameters pass through to Conv2D and BatchNormalization.

Lines 37-39 build a set of CONV => BN => RELU layers.

Notice that the beginning of each line starts with x = and the end of each line finishes with (x). This style is representative of the Keras Functional API. Layers are appended to one another, where x acts as the input to subsequent layers. This functional style will be present throughout the minigooglenet_functional method.

Line 42 returns the built conv_module to the caller.

Let’s create our inception_module, which consists of two convolution modules:

Line 44 defines our inception_module and its parameters.

The inception module contains two branches of the conv_module that are concatenated together:

  1. In the first branch, we perform 1×1 convolutions (Line 47).
  2. In the second branch, we perform 3×3 convolutions (Line 48).

The call to concatenate on Line 49 brings the module branches together across the channel dimension. Since the padding is “same” for both branches, the output volume dimensions are equal. Thus, they can be concatenated along the channel dimension.

Line 51 returns the inception_module block to the caller.

Finally, we’ll implement our downsample_module:

Line 54 defines our downsample_module and its parameters. The downsample module is responsible for reducing the input volume size, and it also utilizes two branches:

  1. The first branch performs 3×3 convolution with a 2×2 stride (Lines 57 and 58).
  2. The second branch performs 3×3 max-pooling with a 2×2 stride (Line 59).

The outputs of the branches are then stacked along the channel dimension via a call to concatenate (Line 60).

Line 63 returns the downsample block to the caller.

With each of our modules defined, we can now use them to build the entire MiniGoogLeNet architecture using the Functional API:

Lines 67-71 set up our inputs to the CNN.

From there, we use the Functional API to assemble our model:

  1. First, we apply a single conv_module (Line 72).
  2. Two inception_module blocks are then stacked on top of each other before using the downsample_module to reduce volume size (Lines 75-77).
  3. We then deepen the network by applying four inception_module blocks before reducing volume size via the downsample_module (Lines 80-84).
  4. We then stack two more inception_module blocks before applying average pooling and constructing the fully-connected layer head (Lines 87-90).
  5. A softmax classifier is then applied (Lines 93-95).
  6. Finally, the fully constructed Model is returned to the calling function (Lines 98-101).
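Putting the three modules and the assembly steps above together, a sketch of minigooglenet_functional looks like the following; the specific filter counts are the ones commonly used for MiniGoogLeNet on CIFAR-10 and should be treated as assumptions, not a copy of the downloaded code:

```python
# Sketch of minigooglenet_functional; filter counts are assumptions
# based on the standard MiniGoogLeNet configuration for CIFAR-10
from tensorflow.keras.layers import (Activation, AveragePooling2D,
    BatchNormalization, Conv2D, Dense, Dropout, Flatten, Input,
    MaxPooling2D, concatenate)
from tensorflow.keras.models import Model

def minigooglenet_functional(width, height, depth, classes):
    def conv_module(x, K, kX, kY, stride, chanDim, padding="same"):
        # CONV => BN => RELU
        x = Conv2D(K, (kX, kY), strides=stride, padding=padding)(x)
        x = BatchNormalization(axis=chanDim)(x)
        x = Activation("relu")(x)
        return x

    def inception_module(x, numK1x1, numK3x3, chanDim):
        # two branches (1x1 and 3x3 CONV), concatenated across channels
        conv_1x1 = conv_module(x, numK1x1, 1, 1, (1, 1), chanDim)
        conv_3x3 = conv_module(x, numK3x3, 3, 3, (1, 1), chanDim)
        return concatenate([conv_1x1, conv_3x3], axis=chanDim)

    def downsample_module(x, K, chanDim):
        # a strided CONV branch and a max-pooling branch, both halving
        # the spatial dimensions, concatenated across channels
        conv_3x3 = conv_module(x, K, 3, 3, (2, 2), chanDim,
                               padding="valid")
        pool = MaxPooling2D((3, 3), strides=(2, 2))(x)
        return concatenate([conv_3x3, pool], axis=chanDim)

    chanDim = -1  # channels-last ordering
    inputs = Input(shape=(height, width, depth))

    # single CONV module
    x = conv_module(inputs, 96, 3, 3, (1, 1), chanDim)
    # two inception modules, then downsample
    x = inception_module(x, 32, 32, chanDim)
    x = inception_module(x, 32, 48, chanDim)
    x = downsample_module(x, 80, chanDim)
    # four inception modules, then downsample
    x = inception_module(x, 112, 48, chanDim)
    x = inception_module(x, 96, 64, chanDim)
    x = inception_module(x, 80, 80, chanDim)
    x = inception_module(x, 48, 96, chanDim)
    x = downsample_module(x, 96, chanDim)
    # two final inception modules, average pooling, FC head
    x = inception_module(x, 176, 160, chanDim)
    x = inception_module(x, 176, 160, chanDim)
    x = AveragePooling2D((7, 7))(x)
    x = Dropout(0.5)(x)
    x = Flatten()(x)
    x = Dense(classes)(x)
    x = Activation("softmax")(x)

    return Model(inputs, x, name="minigooglenet")
```

Notice how each nested module is just a function that accepts a tensor x and returns a tensor, which is what lets us stack them like Legos.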

Again, notice how we are using the Functional API in comparison to the Sequential API discussed in the previous section.

For a more detailed discussion of how to utilize Keras’ Functional API to implement your own custom model architectures, be sure to refer to my book, Deep Learning for Computer Vision with Python, where I discuss the Functional API in more detail.

Additionally, I want to give credit to Zhang et al. who originally proposed the MiniGoogLeNet architecture in a beautiful visualization in their paper, Understanding deep learning requires rethinking generalization.

Model subclassing with Keras and TensorFlow 2.0

Figure 4: “Model Subclassing” is one of the 3 ways to create a Keras model with TensorFlow 2.0.

The third and final method to implement a model architecture using Keras and TensorFlow 2.0 is called model subclassing.

Inside of Keras the Model class is the root class used to define a model architecture. Since Keras utilizes object-oriented programming, we can actually subclass the Model class and then insert our architecture definition.

Model subclassing is fully-customizable and enables you to implement your own custom forward-pass of the model.

However, this flexibility and customization come at a cost — model subclassing is considerably harder to utilize than the Sequential or Functional API.

So, if the model subclassing method is so hard to use, why bother utilizing it at all?

Exotic architectures or custom layer/model implementations, especially those utilized by researchers, can be extremely challenging, if not impossible, to implement using the standard Sequential or Functional APIs.

Instead, researchers wish to have control over every nuance of the network and training process — and that’s exactly what model subclassing provides them.

Let’s look at a simple example implementing MiniVGGNet, an otherwise sequential model, but converted to a model subclass:

Line 103 defines our MiniVGGNetModel class, followed by Line 104 which defines our constructor.

Line 106 calls our parent constructor using the super keyword.

From there, our layers are defined as instance attributes, each with its own name (Lines 110-137). Attributes in Python use the self keyword and are typically (but not always) defined in a constructor. Let’s review them now:

  • The first (CONV => RELU) * 2 => POOL layer set (Lines 110-116).
  • The second (CONV => RELU) * 2 => POOL layer set (Lines 120-126).
  • Our fully-connected network head (Dense) with "softmax" classifier (Lines 129-138).

Notice how each layer is defined inside the constructor — this is on purpose!

Let’s say we had our own custom layer implementation that performed an exotic type of convolution or pooling. That layer could be defined elsewhere in the MiniVGGNetModel and then instantiated inside the constructor.

Once our Keras layers and custom implemented layers are defined, we can then define the network topology/graph inside the call function which is used to perform a forward-pass:
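Here is a sketch of the MiniVGGNetModel subclass, combining the constructor described above with the call method; the filter counts, the 512-node Dense layer, and the dropout rate are assumptions based on the standard MiniVGGNet configuration:

```python
# Sketch of the MiniVGGNetModel subclass; filter counts, Dense size,
# and dropout rate are assumptions (standard MiniVGGNet values)
import tensorflow as tf
from tensorflow.keras.layers import (Activation, BatchNormalization,
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D)

class MiniVGGNetModel(tf.keras.Model):
    def __init__(self, classes, chanDim=-1):
        super(MiniVGGNetModel, self).__init__()

        # first (CONV => RELU) * 2 => POOL layer set
        self.conv1A = Conv2D(32, (3, 3), padding="same")
        self.act1A = Activation("relu")
        self.bn1A = BatchNormalization(axis=chanDim)
        self.conv1B = Conv2D(32, (3, 3), padding="same")
        self.act1B = Activation("relu")
        self.bn1B = BatchNormalization(axis=chanDim)
        self.pool1 = MaxPooling2D(pool_size=(2, 2))

        # second (CONV => RELU) * 2 => POOL layer set
        self.conv2A = Conv2D(64, (3, 3), padding="same")
        self.act2A = Activation("relu")
        self.bn2A = BatchNormalization(axis=chanDim)
        self.conv2B = Conv2D(64, (3, 3), padding="same")
        self.act2B = Activation("relu")
        self.bn2B = BatchNormalization(axis=chanDim)
        self.pool2 = MaxPooling2D(pool_size=(2, 2))

        # fully-connected head with softmax classifier
        self.flatten = Flatten()
        self.dense3 = Dense(512)
        self.act3 = Activation("relu")
        self.do3 = Dropout(0.5)
        self.dense4 = Dense(classes)
        self.softmax = Activation("softmax")

    def call(self, inputs):
        # the forward pass: wire together the layers defined above
        x = self.conv1A(inputs)
        x = self.act1A(x)
        x = self.bn1A(x)
        x = self.conv1B(x)
        x = self.act1B(x)
        x = self.bn1B(x)
        x = self.pool1(x)

        x = self.conv2A(x)
        x = self.act2A(x)
        x = self.bn2A(x)
        x = self.conv2B(x)
        x = self.act2B(x)
        x = self.bn2B(x)
        x = self.pool2(x)

        x = self.flatten(x)
        x = self.dense3(x)
        x = self.act3(x)
        x = self.do3(x)
        x = self.dense4(x)
        return self.softmax(x)
```

Note how the constructor only instantiates layers, while call wires them together to define the forward pass.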

Notice how this model is essentially a Sequential model; however, we could just as easily define a model with multiple inputs/outputs, branches, etc.

The majority of deep learning practitioners won’t have to use the model subclassing method, but just know that it’s available to you if you need it!

Implementing the training script

Our three model architectures are implemented, but how are we going to train them?

The answer lies inside train.py — let’s take a look:

Lines 2-24 import our packages:

  • For matplotlib, we set the backend to "Agg" so that we can export our plots to disk as .png files (Lines 2 and 3).
  • We import logging and set the log level to ignore anything but critical errors (Lines 10 and 11). TensorFlow was reporting (irrelevant) warning messages when training a model using Keras’ model subclassing feature, so I updated the logging to only report critical messages. I believe the warnings themselves are a bug inside TensorFlow 2.0 that will likely be removed in the next release.
  • Our three CNN models are imported: (1) MiniVGGNetModel, (2) minigooglenet_functional, and (3) shallownet_sequential (Lines 14-16).
  • We import our CIFAR-10 dataset (Line 21).

From here we’ll go ahead and parse command line arguments:

Our two command line arguments include:

  • --model: Which model architecture to load; the value must be one of choices=["sequential", "functional", "class"].
  • --plot: The path to the output plot image file. You may store your plots in the output/ directory as I have done.
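The argument parsing logic can be sketched as follows; the short -m/-p flag names and the default value are assumptions:

```python
# Sketch of the command line argument parsing in train.py;
# short flag names and the default are assumptions
import argparse

def build_argument_parser():
    ap = argparse.ArgumentParser()
    ap.add_argument("-m", "--model", type=str, default="sequential",
        choices=["sequential", "functional", "class"],
        help="type of model architecture to build")
    ap.add_argument("-p", "--plot", type=str, required=True,
        help="path to output accuracy/loss plot (.png)")
    return ap

if __name__ == "__main__":
    args = vars(build_argument_parser().parse_args())
```

With choices set, argparse automatically rejects any --model value outside the three valid options.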

From here we’ll (1) initialize a number of hyperparameters, (2) prepare our data, and (3) construct our data augmentation object:

In this code block, we:

  • Initialize the (1) learning rate, (2) batch size, and (3) number of training epochs (Lines 37-39).
  • Set the CIFAR-10 labelNames, then load and preprocess the dataset (Lines 42-51).
  • Binarize our labels (Lines 54-56).
  • Instantiate our data augmentation object with settings for random rotations, zooms, shifts, shears, and flips (Lines 59-61).
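A sketch of this data preparation step is below. The hyperparameter values are assumptions, and tf.keras.utils.to_categorical stands in for the scikit-learn LabelBinarizer used in the original script (both produce one-hot encoded labels):

```python
# Sketch of the hyperparameter/data-preparation step of train.py;
# hyperparameter values are assumptions
import numpy as np
import tensorflow as tf

INIT_LR = 1e-2
BATCH_SIZE = 128
NUM_EPOCHS = 60

labelNames = ["airplane", "automobile", "bird", "cat", "deer",
              "dog", "frog", "horse", "ship", "truck"]

def preprocess(trainX, trainY, testX, testY, num_classes=10):
    # scale pixel intensities to the range [0, 1]
    trainX = trainX.astype("float32") / 255.0
    testX = testX.astype("float32") / 255.0
    # one-hot encode the integer labels (the original script uses
    # scikit-learn's LabelBinarizer to the same effect)
    trainY = tf.keras.utils.to_categorical(trainY, num_classes)
    testY = tf.keras.utils.to_categorical(testY, num_classes)
    return trainX, trainY, testX, testY

if __name__ == "__main__":
    # load CIFAR-10 (downloads on first use), then preprocess and
    # construct the data augmentation object
    ((trainX, trainY), (testX, testY)) = \
        tf.keras.datasets.cifar10.load_data()
    trainX, trainY, testX, testY = preprocess(trainX, trainY,
                                              testX, testY)
    aug = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=18, zoom_range=0.15, width_shift_range=0.2,
        height_shift_range=0.2, shear_range=0.15,
        horizontal_flip=True, fill_mode="nearest")
```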

The heart of the script lies in this next code block where we instantiate our model:

Here we check which of the Sequential, Functional, or Model Subclassing architectures should be instantiated. Based on the --model command line argument, the if/elif statements initialize the appropriate model.
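That if/elif dispatch can also be expressed as a small lookup, sketched here with the model builders passed in as zero-argument callables; this is a restructuring of the original script's logic for illustration, not a copy of it:

```python
# Sketch of the model-selection logic as a lookup table; `factories`
# maps each valid --model choice to a zero-argument callable that
# builds the corresponding architecture
def select_model(name, factories):
    if name not in factories:
        raise ValueError(
            "--model must be one of {}".format(sorted(factories)))
    return factories[name]()
```

In train.py this would be invoked with something like select_model(args["model"], {"sequential": lambda: shallownet_sequential(32, 32, 3, 10), "functional": lambda: minigooglenet_functional(32, 32, 3, 10), "class": lambda: MiniVGGNetModel(10)}).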

From there, we are ready to compile the model and fit to our data:

All of our models are compiled with Stochastic Gradient Descent (SGD) and learning rate decay (Lines 82-85).

Lines 88-93 kick off training using Keras’ .fit_generator method to handle data augmentation. You may read about the .fit_generator method in depth in this article.
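A sketch of the compile-and-train step is below. Because newer Keras releases removed the optimizer's decay argument used in the TF 2.0-era script, an InverseTimeDecay schedule (the same 1 / (1 + decay * t) rule) stands in for it here, and .fit replaces the now-deprecated .fit_generator:

```python
# Sketch of the compile/train step; the original script calls
# SGD(lr=INIT_LR, momentum=0.9, decay=INIT_LR / NUM_EPOCHS) and
# model.fit_generator(...) instead
import tensorflow as tf
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.optimizers.schedules import InverseTimeDecay

def compile_and_train(model, train_data, val_data, init_lr=1e-2,
                      num_epochs=60, steps_per_epoch=None):
    # InverseTimeDecay reproduces the old `decay` behavior of SGD
    schedule = InverseTimeDecay(init_lr,
                                decay_steps=steps_per_epoch or 1,
                                decay_rate=init_lr / num_epochs)
    opt = SGD(learning_rate=schedule, momentum=0.9)
    model.compile(loss="categorical_crossentropy", optimizer=opt,
                  metrics=["accuracy"])
    # modern .fit accepts generators and tf.data datasets directly;
    # in train.py, `train_data` would be
    # aug.flow(trainX, trainY, batch_size=BATCH_SIZE)
    return model.fit(train_data, validation_data=val_data,
                     steps_per_epoch=steps_per_epoch,
                     epochs=num_epochs, verbose=0)
```

The returned History object carries the per-epoch loss/accuracy values used for plotting later in the script.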

We’ll wrap up by evaluating our model and plotting the training history:

Lines 97-99 make predictions on the test set and evaluate the network. A classification report is printed to the terminal.

Lines 102-117 plot the training accuracy/loss curves and output the plot to disk.
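The plotting step can be sketched as follows; the history key names ("accuracy", "val_accuracy") follow TF 2.0's Keras naming, and the styling choices are assumptions:

```python
# Sketch of the training-history plotting step; styling is an assumption
import matplotlib
matplotlib.use("Agg")  # write plots to disk without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_training(history, num_epochs, plot_path):
    # `history` is the H.history dict returned by model.fit
    N = np.arange(0, num_epochs)
    plt.style.use("ggplot")
    plt.figure()
    plt.plot(N, history["loss"], label="train_loss")
    plt.plot(N, history["val_loss"], label="val_loss")
    plt.plot(N, history["accuracy"], label="train_acc")
    plt.plot(N, history["val_accuracy"], label="val_acc")
    plt.title("Training Loss and Accuracy")
    plt.xlabel("Epoch #")
    plt.ylabel("Loss/Accuracy")
    plt.legend()
    plt.savefig(plot_path)
    plt.close()
```

The evaluation step itself uses scikit-learn's classification_report on the test-set predictions, as described above.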

Keras Sequential model results

We are now ready to use Keras and TensorFlow 2.0 to train our Sequential model!

Take a second now to use the “Downloads” section of this tutorial to download the source code to this guide.

From there, open up a terminal and execute the following command to train and evaluate a Sequential model:

Figure 5: Using TensorFlow 2.0’s Keras Sequential API (one of the 3 ways to create a Keras model with TensorFlow 2.0), we have trained ShallowNet on CIFAR-10.

Here we are obtaining 59% accuracy on the CIFAR-10 dataset.

Looking at our training history plot in Figure 5, we notice that our validation loss is less than our training loss for nearly the entire training process, a sign that the model is underfitting; we can improve our accuracy by increasing model complexity, which is exactly what we’ll do in the next section.

Keras Functional model results

Our Functional model implementation is far deeper and more complex than our Sequential example.

Again, make sure you’ve used the “Downloads” section of this guide to download the source code.

Once you have the source code, execute the following command to train our Functional model:

Figure 6: Using TensorFlow 2.0’s Keras Functional API (one of the 3 ways to create a Keras model with TensorFlow 2.0), we have trained MiniGoogLeNet on CIFAR-10.

This time we’ve been able to boost our accuracy all the way up to 87%!

Keras Model subclassing results

Our final experiment evaluates our implementation of model subclassing using Keras.

The model we’re using here is a variation of VGGNet — an essentially sequential model consisting of 3×3 CONVs and 2×2 max-pooling for volume dimension reduction.

We used Keras model subclassing here (rather than the Sequential API) as a simple example of how you may take an existing model and convert it to subclassed architecture.

Note: Implementing your own custom layer types and training procedures for the model subclassing API is outside the scope of this post but I will cover it in a future guide.

To see Keras model subclassing in action make sure you’ve used the “Downloads” section of this guide to grab the code — from there you can execute the following command:

Figure 7: Using TensorFlow 2.0’s Keras Subclassing (one of the 3 ways to create a Keras model with TensorFlow 2.0), we have trained MiniVGGNet on CIFAR-10.

Here we obtain 73% accuracy, not quite as good as our MiniGoogLeNet implementation, but it still serves as an example of how to implement an architecture using Keras’ model subclassing feature.

In general, I do not recommend using Keras’ model subclassing:

  • It’s harder to use.
  • It adds more code complexity.
  • It’s harder to debug.

…but it does give you full control over the model.

Typically I would only recommend you use Keras’ model subclassing if you are a:

  • Deep learning researcher implementing custom layers, models, and training procedures.
  • Deep learning practitioner trying to replicate the results of a researcher/paper.

The majority of deep learning practitioners are not going to need Keras’ model subclassing feature.

How can I learn Deep Learning?

Figure 8: My deep learning book, Deep Learning for Computer Vision with Python, is trusted by employees and students of top institutions. It is regularly updated to keep pace with the fast-moving AI industry.

If you’re interested in diving head-first into the world of computer vision/deep learning and discovering how to:

  • Design and train Convolutional Neural Networks for your project on your own custom datasets.
  • Learn deep learning fundamentals, rules of thumb, and best practices.
  • Replicate the results of state-of-the-art papers, including ResNet, SqueezeNet, VGGNet, and others.
  • Train your own custom Faster R-CNN, Single Shot Detectors (SSDs), and RetinaNet object detectors.
  • Use Mask R-CNN to train your own instance segmentation networks.

…then be sure to take a look at my book, Deep Learning for Computer Vision with Python!

My complete, self-study deep learning book is trusted by members of top machine learning schools, companies, and organizations, including Microsoft, Google, Stanford, MIT, CMU, and more!

Readers of my book have gone on to win Kaggle competitions, secure academic grants, and start careers in CV and DL using the knowledge they gained through study and practice.

My book not only teaches the fundamentals, but also teaches advanced techniques, best practices, and tools to ensure that you are armed with practical knowledge and proven coding recipes to tackle nearly any computer vision and deep learning problem presented to you in school, research, or the modern workforce.

Be sure to take a look — and while you’re at it, don’t forget to grab your (free) table of contents + sample chapters.

Summary

In this tutorial you learned the three ways to implement a neural network architecture using Keras and TensorFlow 2.0:

  • Sequential: Used for implementing simple layer-by-layer architectures without multiple inputs, multiple outputs, or layer branches. Typically the first model API you use when getting started with Keras.
  • Functional: The most popular Keras model implementation API. Allows everything inside the Sequential API, but also facilitates substantially more complex architectures which include multiple inputs and outputs, branching, etc. Best of all, the syntax for Keras’ Functional API is clean and easy to use.
  • Model subclassing: Utilized when a deep learning researcher/practitioner needs full control over model, layer, and training procedure implementation. Code is verbose, harder to write, and even harder to debug. Most deep learning practitioners won’t need to subclass models using Keras, but if you’re doing research or custom implementation, model subclassing is there if you need it!

If you’re interested in learning more about the Sequential, Functional, and Model Subclassing APIs, be sure to refer to my book, Deep Learning for Computer Vision with Python, where I cover them in more detail.

I hope you enjoyed today’s tutorial!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


16 Responses to 3 ways to create a Keras model with TensorFlow 2.0 (Sequential, Functional, and Model Subclassing)

  1. Thang Hoang October 28, 2019 at 12:49 pm #

    Excellent tutorial! But in the line 150 of models.py, do you mean x = self.conv2A(x)?

    • Adrian Rosebrock October 30, 2019 at 9:23 am #

      You are correct, thank you for catching that typo! I’ve updated the code and blog post.

  2. AMIT PANDEY October 28, 2019 at 3:21 pm #

    Thanks for this tutorial, Adrian.
    I recently used Sonnet along with TF1.14. There I was able to implement FasterRCNN referring to the one open-sourced by Luminoth. With Sonnet, I was able to do model subclassing and get full control of the model. I am happy to see that in TF2.0. thanks for sharing the dos and don’ts it really helps.

    • Adrian Rosebrock October 30, 2019 at 9:21 am #

      Thanks Amit!

  3. david October 28, 2019 at 9:40 pm #

    Thank you very much for your blog. I have benefited a lot. May I ask, is there any plan to launch a tensorflow2.0 blog?

    • Adrian Rosebrock October 30, 2019 at 9:21 am #

      What do you mean by “launch a TensorFlow 2.0 blog”?

      I’ll be updating all popular Keras deep learning tutorials to TensorFlow 2.0 shortly.

  4. Toshio Futami October 28, 2019 at 11:01 pm #

    Thank you very much for your blog post always.
    I tried three models and succeeded the sequential and the functional but I had error on the model class. The error comments is as following.

    AttributeError:’MiniVGGNet’ object has no attribute ‘total_loss’

    Please let me know how to recover it.

    • Adrian Rosebrock October 30, 2019 at 9:20 am #

      What version of TensorFlow were you using?

  5. Nicholas Yafremau October 29, 2019 at 3:29 am #

    Hello. Thank you for this update considering Tensorflow 2.0 APIs. I was eager to see what has changed in this new version. I concluded for myself that Functional API is still the most balanced and perspective API to use and work with, as I preferred it in Tensorflow 1.0. Even though, as you said Subclass API allows for full control, personally in scientific projects it’s a little bit too much for me :). Great overview, looking forward to new updates!

    • Adrian Rosebrock October 30, 2019 at 9:20 am #

      I agree, the Functional API tends to be the best in the majority of situations. You can easily implement a Sequential model using the Functional API, with the added benefit of adding in branching if you need to.

  6. Yaser Sakkaf October 30, 2019 at 1:39 am #

    Hi Adrian,

    In train.py, I see that on line 88, in the model.fit_generator method, you have used the same data for training and validation.

    Isn’t that incorrect?

    • Adrian Rosebrock October 30, 2019 at 9:19 am #

      We are not using the same data for training and validation. The actual training data is specified via:

      aug.flow(trainX, trainY, batch_size=BATCH_SIZE),

      While the validation data is supplied as:

      validation_data=(testX, testY),

  7. maoqiu November 1, 2019 at 7:51 am #

    Hi Adrian,
    I am trying to use Keras to train a model with a large amount of data. When I use data.append(image) to load images, the memory fills up and the Linux terminal says Killed. How can I solve it?

    • Adrian Rosebrock November 7, 2019 at 10:02 am #

      Your machine is running out of RAM and cannot fit the entire dataset into memory. You should use either “.flow_from_directory” or follow Deep Learning for Computer Vision with Python to learn how to work with datasets too large to fit into RAM.

  8. cherif November 6, 2019 at 5:13 am #

    Hello

    I thank you for this tutorial.
    should I install Keras and tensorflow for these scripts
    if yes, should I install Keras and tensorflow in cv?

    Thank you

    • Adrian Rosebrock November 7, 2019 at 10:02 am #

      Are you referring to Python virtual environments? If so, yes, but just install TensorFlow as Keras is now part of the “tf.keras” submodule of TensorFlow.
