A scalable Keras + deep learning REST API

In today’s blog post we are going to create a deep learning REST API that wraps a Keras model in an efficient, scalable manner.

Our Keras + deep learning REST API will be capable of batch processing images, scaling to multiple machines (including multiple web servers and Redis instances), and round-robin scheduling when placed behind a load balancer.

To accomplish this we will be using:

  • Keras
  • Redis (an in-memory data structure store)
  • Flask (a micro web framework for Python)
  • Message queuing and message broker programming paradigms

This blog post is a bit more advanced than other tutorials on PyImageSearch and is intended for readers:

  • Who are familiar with the Keras deep learning library
  • Who have an understanding of web frameworks and web services (and ideally coded a simple website/web service before)
  • Who understand basic data structures, such as hash tables/dictionaries, lists, along with their associated asymptotic complexities

For a simpler Keras + deep learning REST API, please refer to the guest post I wrote on the official Keras.io blog.

To learn how to create your own scalable Keras + deep learning REST API, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

A scalable Keras + deep learning REST API

Today’s tutorial is broken into multiple parts.

We’ll start with a brief discussion of the Redis data store and how it can be used to facilitate message queuing and message brokering.

From there, we’ll configure our Python development environment by installing the required Python packages to build our Keras deep learning REST API.

Once we have our development environment configured, we can implement our actual Keras deep learning REST API using the Flask web framework. After implementing it, we’ll start the Redis and Flask servers, followed by submitting inference requests to our deep learning API endpoint using both cURL and Python.

Finally, we’ll end with a short discussion on the considerations you should keep in mind when building your own deep learning REST API.

A short introduction to Redis as a REST API message broker/message queue

Figure 1: Redis can be used as a message broker/message queue for our deep learning REST API

Redis is an in-memory data store. It is different from a simple key/value store (such as memcached) in that it can store actual data structures.

Today we’re going to utilize Redis as a message broker/message queue. This involves:

  • Running Redis on our machine
  • Queuing up data (images) to our Redis store to be processed by our REST API
  • Polling Redis for new batches of input images
  • Classifying the images and returning the results to the client

To read more about Redis, I encourage you to review this short introduction.

Configuring and installing Redis for our Keras REST API

Redis is very easy to install. Below you’ll find the commands to download, extract, and install Redis on your system:
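
The commands below are a minimal sketch following the standard Redis quickstart; the exact version you download may differ:

    $ wget http://download.redis.io/redis-stable.tar.gz
    $ tar xvzf redis-stable.tar.gz
    $ cd redis-stable
    $ make
    $ sudo make install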

To start the Redis server, use the following command:
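
    $ redis-server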

Leave this terminal open to keep the Redis data store running.

In another terminal, you can validate Redis is up and running:
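
    $ redis-cli ping
    PONG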

Provided that you get a PONG back from Redis, you’re ready to go.

Configuring your Python development environment to build a Keras REST API

I recommend that you work on this project inside of a Python virtual environment so that it does not impact your system-level Python install or other projects.

To do this, you’ll need to install pip, virtualenv, and virtualenvwrapper (provided you haven’t already):
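
If pip is missing on your system, grab it via get-pip.py first; either way, the installs look something like this:

    $ wget https://bootstrap.pypa.io/get-pip.py
    $ sudo python get-pip.py
    $ sudo pip install virtualenv virtualenvwrapper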

You’ll also need to edit your ~/.bashrc  (or ~/.bash_profile  on macOS) to include the following lines:
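
These are the standard virtualenvwrapper settings; note that the path to virtualenvwrapper.sh can vary by system, so adjust it to wherever pip installed the script:

    # virtualenv and virtualenvwrapper
    export WORKON_HOME=$HOME/.virtualenvs
    source /usr/local/bin/virtualenvwrapper.sh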

Then, simply source the file in the terminal depending on your OS:

Ubuntu
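
    $ source ~/.bashrc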

macOS
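
    $ source ~/.bash_profile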

From there, you can create a Python virtual environment specifically for this project:
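
Something like the following will do; the environment name keras_flask is just my suggestion here:

    $ mkvirtualenv keras_flask -p python3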

And once your environment is ready and activated, let’s install the necessary packages for our Keras REST API into the environment:
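
This list is my reconstruction based on the imports this post uses, so treat it as a starting point rather than a canonical requirements file:

    $ pip install numpy h5py
    $ pip install tensorflow keras
    $ pip install flask requests redis
    $ pip install Pillow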

That’s it — and notice that we don’t actually need OpenCV for this project because we’ll be making use of PIL/Pillow.

Implementing a scalable Keras REST API

Figure 2: Our deep learning Keras + Redis + Flask REST API data flow diagram

Let’s get started building our server script. For convenience I’ve implemented the server in a single file; however, it can be modularized as you see fit.

For best results and to avoid copy/paste errors, I encourage you to use the “Downloads” section of this blog post to grab the associated scripts and images.

Let’s open up run_keras_server.py  and walk through it together:
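
Note that the Line N references throughout this walkthrough refer to the full script from the “Downloads” section; the snippets below are sketches reconstructed from the discussion, so their line numbers won’t match exactly. The imports should look something like this:

    # import the necessary packages
    from keras.applications import ResNet50
    from keras.applications import imagenet_utils
    from keras.preprocessing.image import img_to_array
    from threading import Thread
    from PIL import Image
    import numpy as np
    import base64
    import flask
    import redis
    import uuid
    import time
    import json
    import io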

There are quite a few imports listed above, notably ResNet50 , flask , and redis .

For the sake of simplicity, we’ll be using ResNet pre-trained on the ImageNet dataset. I’ll point out where you can swap out ResNet for your own models.

The flask  module contains the Flask library (used to build our web API). The redis  module will enable us to interface with the Redis data store.

From there, let’s initialize constants which will be used throughout run_keras_server.py :
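
A sketch of those constants, using the values discussed below (names such as IMAGE_QUEUE and IMAGE_WIDTH are my shorthand and may be spelled differently in the downloadable script):

    # initialize constants used to control image spatial dimensions
    # and data type
    IMAGE_WIDTH = 224
    IMAGE_HEIGHT = 224
    IMAGE_CHANS = 3
    IMAGE_DTYPE = "float32"

    # initialize constants used for server queuing
    IMAGE_QUEUE = "image_queue"
    BATCH_SIZE = 32
    SERVER_SLEEP = 0.25
    CLIENT_SLEEP = 0.25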

We’ll be passing float32  images to the server with dimensions of 224 x 224 and containing 3  channels.

Our server can handle a BATCH_SIZE of 32. If you have GPU(s) on your production system, you’ll want to tune BATCH_SIZE for optimal performance.

I’ve found that setting both SERVER_SLEEP  and CLIENT_SLEEP  to 0.25  seconds (the amount of time the server and client will pause before polling Redis again, respectively) will work well on most systems. Definitely adjust these constants if you’re building a production system.

Let’s kick off our Flask app and Redis server:
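
A sketch of the initialization:

    # initialize our Flask application, Redis store connection, and
    # the Keras model placeholder
    app = flask.Flask(__name__)
    db = redis.StrictRedis(host="localhost", port=6379, db=0)
    model = None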

Here you can see how easy it is to start Flask.

I’ll assume that your Redis server is running before you launch this script. Our Python script connects to the Redis store on localhost on port 6379 (the default host and port values for Redis).

Don’t forget to initialize a global Keras  model  to None here as well.

From there let’s handle serialization of images:
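
A sketch of the two helpers (the isinstance check is my stand-in for the Python 3 string/bytes handling):

    def base64_encode_image(a):
        # base64 encode the raw bytes of the input NumPy array
        return base64.b64encode(a).decode("utf-8")

    def base64_decode_image(a, dtype, shape):
        # if this is a str (Python 3), encode it as a byte object
        if isinstance(a, str):
            a = bytes(a, encoding="utf-8")

        # convert the string back to a NumPy array using the
        # supplied data type and target shape
        a = np.frombuffer(base64.b64decode(a), dtype=dtype)
        a = a.reshape(shape)

        # return the decoded image
        return a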

Redis will act as our temporary data store on the server. Images will come in to the server via a variety of methods such as cURL, a Python script, or even a mobile app.

Furthermore, images could come in only every once in a while (a few every few hours or days) or at a very high rate (multiple per second). We need to put the images somewhere as they queue up prior to being processed. Our Redis store will act as that temporary storage.

In order to store our images in Redis, they need to be serialized. Since images are just NumPy arrays, we can utilize base64 encoding to serialize the images. Using base64 encoding also has the added benefit of allowing us to use JSON to store additional attributes with the image.

Our base64_encode_image  function handles the serialization and is defined on Lines 35-37.

Similarly, we need to deserialize our images prior to passing them through our model. This is handled by the base64_decode_image function on Lines 39-51.

Let’s pre-process our image:
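
A sketch of prepare_image using the standard Keras ResNet50 preprocessing helpers:

    def prepare_image(image, target):
        # if the image mode is not RGB, convert it
        if image.mode != "RGB":
            image = image.convert("RGB")

        # resize the input image and preprocess it the same way the
        # Keras ResNet50 implementation expects
        image = image.resize(target)
        image = img_to_array(image)
        image = np.expand_dims(image, axis=0)
        image = imagenet_utils.preprocess_input(image)

        # return the processed image
        return image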

On Line 53, I’ve defined a prepare_image function which pre-processes our input image for classification using the ResNet50 implementation in Keras. When utilizing your own models I would suggest modifying this function to perform any required pre-processing, scaling, or normalization.

From there we’ll define our classification method:
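
The top of classify_process should look something like this:

    def classify_process():
        # load the pre-trained Keras model (here we use ResNet50
        # pre-trained on ImageNet, but you can substitute your own
        # network just as easily)
        print("* Loading model...")
        model = ResNet50(weights="imagenet")
        print("* Model loaded")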

The classify_process  function will be kicked off in its own thread as we’ll see in __main__  below. This function will poll for image batches from the Redis server, classify the images, and return the results to the client.

Line 72 loads the model. I’ve sandwiched this action with terminal print messages — depending on the size of your Keras model, loading could be nearly instantaneous or it could take a few seconds.

Loading the model happens only once when this thread is launched — it would be terribly slow if we had to load the model each time we wanted to process an image and furthermore it could lead to a server crash due to memory exhaustion.

After loading the model, this thread will continually poll for new images and then classify them:
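
A sketch of that polling loop, continuing inside classify_process:

        # continually poll for new images to classify
        while True:
            # attempt to grab a batch of images from the queue
            queue = db.lrange(IMAGE_QUEUE, 0, BATCH_SIZE - 1)
            imageIDs = []
            batch = None

            # loop over the queue
            for q in queue:
                # deserialize the object and obtain the input image
                q = json.loads(q.decode("utf-8"))
                image = base64_decode_image(q["image"], IMAGE_DTYPE,
                    (1, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANS))

                # if the batch is None, initialize it with this
                # image; otherwise, stack the new image onto it
                if batch is None:
                    batch = image
                else:
                    batch = np.vstack([batch, image])

                # update the list of image IDs
                imageIDs.append(q["id"])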

Here we’re first using the Redis database’s lrange  function to get, at most, BATCH_SIZE  images from our queue (Line 79).

From there we initialize our imageIDs  and batch  (Lines 80 and 81) and begin looping over the queue  beginning on Line 84.

In the loop, we first decode the object and deserialize it into a NumPy array, image  (Lines 86-88).

Next, on Lines 90-96, we’ll add the image  to the batch  (or if the batch  is currently None  we just set the batch  to the current image ).

We also append the id  of the image to imageIDs  (Line 99).

Let’s finish out the loop and function:
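
And a sketch of the batch classification and queue cleanup:

            # check to see if we need to process the batch
            if len(imageIDs) > 0:
                # classify the batch and decode the ImageNet labels
                preds = model.predict(batch)
                results = imagenet_utils.decode_predictions(preds)

                # loop over the image IDs and their corresponding
                # set of results from our model
                for (imageID, resultSet) in zip(imageIDs, results):
                    # append each label and probability to the
                    # output list
                    output = []

                    for (imagenetID, label, prob) in resultSet:
                        output.append({"label": label,
                            "probability": float(prob)})

                    # store the output predictions in the database,
                    # using the image ID as the key
                    db.set(imageID, json.dumps(output))

                # remove the set of images we just classified from
                # the queue
                db.ltrim(IMAGE_QUEUE, len(imageIDs), -1)

            # sleep before polling Redis again
            time.sleep(SERVER_SLEEP)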

In this code block, we check if there are any images in our batch (Line 102).

If we have a batch of images, we make predictions on the entire batch by passing it through the model (Line 105).

From there, we loop over the imageIDs and corresponding prediction results (Lines 110-122). These lines append labels and probabilities to an output list and then store the output in the Redis database using the imageID as the key (Lines 116-122).

We remove the set of images that we just classified from our queue using ltrim  on Line 125.

And finally, we sleep for the set SERVER_SLEEP  time and await the next batch of images to classify.

Let’s handle the /predict  endpoint of our REST API next:
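
A sketch of the endpoint, up to the point where the image is pushed onto the Redis queue (the walkthrough below covers the C-contiguous copy and the uuid-based id):

    @app.route("/predict", methods=["POST"])
    def predict():
        # initialize the data dictionary that will be returned to
        # the client
        data = {"success": False}

        # ensure an image was properly uploaded to our endpoint
        if flask.request.method == "POST":
            if flask.request.files.get("image"):
                # read the image in PIL format and preprocess it
                image = flask.request.files["image"].read()
                image = Image.open(io.BytesIO(image))
                image = prepare_image(image,
                    (IMAGE_WIDTH, IMAGE_HEIGHT))

                # ensure our NumPy array is C-contiguous, otherwise
                # we won't be able to serialize it
                image = image.copy(order="C")

                # generate an ID for the classification, then add
                # the ID + base64-encoded image to the queue
                k = str(uuid.uuid4())
                d = {"id": k, "image": base64_encode_image(image)}
                db.rpush(IMAGE_QUEUE, json.dumps(d))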

As you’ll see later, when we POST to the REST API, we’ll be using the /predict  endpoint. Our server could, of course, have multiple endpoints.

We use the @app.route  decorator above our function in the format shown on Line 130 to define our endpoint so that Flask knows what function to call. We could easily have another endpoint which uses AlexNet instead of ResNet and we’d define the endpoint with associated function in a similar way. You get the idea, but for our purposes today, we just have one endpoint called /predict .

Our predict  method defined on Line 131 will handle the POST requests to the server. The goal of this function is to build the JSON data  that we’ll send back to the client.

If the POST data contains an image (Lines 137 and 138) we convert the image to PIL/Pillow format and preprocess it (Lines 141-143).

While developing this script, I spent considerable time debugging my serialization and deserialization functions, only to figure out that I needed Line 147 to convert the array to C-contiguous ordering (which is something you can read more about here). Honestly, it was a pretty big pain in the ass to figure out, but I hope it helps you get up and running quickly.

If you were wondering about the id  mentioned back on Line 99, it is actually generated here using uuid , a universally unique identifier, on Line 151. We use a UUID to prevent hash/key conflicts.

Next, we append the id  as well as the base64  encoding of the image  to the d  dictionary. It’s very simple to push this JSON data to the Redis db  using rpush  (Line 153).

Let’s poll the server to return the predictions:
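
A sketch of that polling logic, which finishes out predict:

                # keep looping until our model server returns the
                # output predictions
                while True:
                    # attempt to grab the output predictions
                    output = db.get(k)

                    # check to see if our model has classified the
                    # input image
                    if output is not None:
                        # add the output predictions to our data
                        # dictionary so we can return them to the
                        # client
                        output = output.decode("utf-8")
                        data["predictions"] = json.loads(output)

                        # delete the result from the database and
                        # break from the polling loop
                        db.delete(k)
                        break

                    # sleep to give the model a chance to classify
                    # the input image
                    time.sleep(CLIENT_SLEEP)

                # indicate that the request was a success
                data["success"] = True

        # return the data dictionary as a JSON response
        return flask.jsonify(data)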

We’ll loop continuously until the model server returns the output predictions. We start an infinite loop and attempt to get the predictions on Lines 157-159.

From there, if the output  contains predictions, we deserialize the results and add them to data  which will be returned to the client.

We also delete the result from the db (since we have pulled the results from the database and no longer need to store them) and break out of the loop (Lines 163-172).

Otherwise, we don’t have any predictions and we need to sleep and continue to poll (Line 176).

If we reach Line 179, we’ve successfully obtained our predictions. In this case we add a success value of True to the client data (Line 179).

Note: For this example script, I didn’t bother adding timeout logic in the above loop which would ideally add a success  value of False  to the data. I’ll leave that up to you to handle and implement.

Lastly we call flask.jsonify  on data  and return it to the client (Line 182). This completes our predict function.

To demo our Keras REST API, we need a __main__  function to actually start the server:
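
A sketch of that entry point:

    # if this is the main thread of execution, first start our
    # model server thread and then run the Flask app
    if __name__ == "__main__":
        # launch classify_process in a daemon thread so it polls
        # Redis in the background
        print("* Starting model service...")
        t = Thread(target=classify_process, args=())
        t.daemon = True
        t.start()

        # start the web server
        print("* Starting web service...")
        app.run()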

Lines 186-196 define the __main__  function which will kick off our classify_process  thread (Lines 190-192) and run the Flask app (Line 196).

Starting the scalable Keras REST API

To test our Keras deep learning REST API, be sure to download the source code + example images using the “Downloads” section of this blog post.

From there, let’s start the Redis server if it isn’t already running:
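
    $ redis-server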

Then, in a separate terminal, let’s start our REST API Flask server:
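
    $ python run_keras_server.py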

Additionally, I would suggest waiting until your model is loaded completely into memory before submitting requests to the server.

Now we can move on to testing the server with both cURL and Python.

Using cURL to access our Keras REST API

Figure 3: Using cURL to test our Keras REST API server. Pictured is my family beagle, Jemma. She is classified as a beagle with 94.6% confidence by our ResNet model.

The cURL tool is available pre-installed on most (Unix-based) operating systems. We can POST an image file to our deep learning REST API at the /predict  endpoint by using the following command:
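
    $ curl -X POST -F image=@jemma.png 'http://localhost:5000/predict'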

You’ll receive the predictions back in JSON format right in your terminal:
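
The response has the following shape; the beagle score shown here matches Figure 3, and the remaining top-5 entries are elided:

    {
      "predictions": [
        {
          "label": "beagle",
          "probability": 0.9462
        },
        ...
      ],
      "success": true
    }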

Let’s try passing another image, this time a space shuttle:
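
    $ curl -X POST -F image=@space_shuttle.png 'http://localhost:5000/predict'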

The results of which can be seen below:

Figure 4: Submitting an input image to our Keras REST API and obtaining the prediction results.

Once again our Keras REST API has correctly classified the input image.

Using Python to submit requests to the Keras REST API

As you can see, verification using cURL was quite easy. Now let’s build a Python script that will POST an image and parse the returning JSON programmatically.

Let’s review simple_request.py :
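
The top of the script should look something like this:

    # import the necessary packages
    import requests

    # initialize the Keras REST API endpoint URL along with the
    # input image path
    KERAS_REST_API_URL = "http://localhost:5000/predict"
    IMAGE_PATH = "jemma.png"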

We use Python requests  in this script to handle POSTing data to the server.

Our server is running on the localhost  and can be accessed on port 5000  with the endpoint /predict  as is specified by the KERAS_REST_API_URL  variable (Line 6). If the server is running remotely or on a different machine, be sure to specify the appropriate domain/ip, port, and endpoint.

We also define an IMAGE_PATH (Line 7). In this case, jemma.png  is in the same directory as our script. If you want to test with other images, be sure to specify the full path to your input image.

Let’s load the image and send it off to the server:
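
A sketch of the request logic:

    # load the input image and construct the payload for the request
    image = open(IMAGE_PATH, "rb").read()
    payload = {"image": image}

    # submit the request and grab the JSON response
    r = requests.post(KERAS_REST_API_URL, files=payload).json()

    # ensure the request was successful
    if r["success"]:
        # loop over the predictions and display them
        for (i, result) in enumerate(r["predictions"]):
            print("{}. {}: {:.4f}".format(i + 1, result["label"],
                result["probability"]))

    # otherwise, the request failed
    else:
        print("Request failed")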

We read the image on Line 10 in binary mode and put it into the payload dictionary.

The payload is POST’ed to the server with requests.post  on Line 14.

If we get a success  message, we can loop over the predictions and print them to the terminal. I made this script simple, but you could also draw the highest prediction text on the image using OpenCV if you want to get fancy.

Running the simple request script

Putting the script to work is easy. Open up a terminal and execute the following command (provided both our Flask server and Redis server are running, of course).
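
    $ python simple_request.py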

Figure 5: Using Python to programmatically consume the results of our Keras deep learning REST API.

For the space_shuttle.png , simply modify the IMAGE_PATH  variable:
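
    IMAGE_PATH = "space_shuttle.png"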

And from there, run the script again:
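
    $ python simple_request.py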

Figure 6: A second example of programmatically consuming our Keras deep learning REST API. Here a space shuttle is classified with 99% confidence by ResNet + Keras REST API.

Considerations when scaling your deep learning REST API

If you anticipate heavy load for extended periods of time on your deep learning REST API, you may want to consider a load balancing algorithm such as round-robin scheduling to help evenly distribute requests across multiple GPU machines and Redis servers.

Keep in mind that Redis is an in-memory data store, so we can only store as many images in the queue as we have available memory.

A single 224 x 224 x 3 image with a float32 data type will consume 224 x 224 x 3 x 4 = 602,112 bytes of memory (each float32 value takes 4 bytes).

Assuming a server with a modest 16GB of RAM, this implies that we can hold approximately 26,500 images in our queue, but at that point we likely would want to add more GPU servers to burn through the queue faster.

However, there is a subtle problem…

Depending on how you deploy your deep learning REST API, there is a subtle problem with keeping the classify_process  function in the same file as the rest of our web API code.

Most web servers, including Apache and nginx, allow for multiple client threads.

If you keep classify_process  in the same file as your predict  view, then you may load multiple models if your server software deems it necessary to create a new thread to serve the incoming client requests — for every new thread, a new view will be created, and therefore a new model will be loaded.

The solution is to move classify_process  to an entirely separate process and then start it along with your Flask web server and Redis server.

In next week’s blog post I’ll build on today’s solution, show how to resolve this problem, and demonstrate:

  • How to configure the Apache web server to serve our deep learning REST API
  • How to run classify_process  as an entirely separate Python script, avoiding “multiple model syndrome”
  • How to stress test our deep learning REST API, confirming and verifying that it can scale under heavy load

What now?

If you’re interested in taking a deeper dive into deep learning and discovering how to:

  • Train Convolutional Neural Networks on your own custom datasets
  • Study advanced deep learning techniques, including object detection, multi-GPU training, transfer learning, and Generative Adversarial Networks (GANs)
  • Replicate the results of state-of-the-art papers, including ResNet, SqueezeNet, VGGNet, and others

…then be sure to take a look at my new book, Deep Learning for Computer Vision with Python!

My complete, self-study deep learning book is trusted by members of top machine learning schools, companies, and organizations, including Microsoft, Google, Stanford, MIT, CMU, and more!

Be sure to take a look  — and while you’re at it, don’t forget to grab your (free) table of contents + sample chapters.

Summary

In today’s blog post we learned how to build a scalable Keras + deep learning REST API.

To accomplish this, we:

  • Built a simple Flask app to load our Keras model into memory and accept incoming requests.
  • Utilized Redis to act as an in-memory message queue/message broker.
  • Utilized threading to batch process input images, write the predictions back to the message queue, and then return the results to the client.

This method can scale to multiple machines, including multiple web servers and Redis instances.

I hope you enjoyed today’s blog post!

Be sure to enter your email address in the form below to be notified when future tutorials are published here on PyImageSearch!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


28 Responses to A scalable Keras + deep learning REST API

  1. Siva January 29, 2018 at 3:46 pm #

    Hi Adrian,

    Thank you for the wonderful post! I was wondering if the architecture could have been simplified by replacing the Flask / Redis stack with a single Twisted server. What are your thoughts?

    • Adrian Rosebrock January 29, 2018 at 5:28 pm #

      Hey Siva — I’ve only used Twisted once so my knowledge on the library is pretty limited so I’m probably not the best person to address that question.

      In any case, are you referring specifically to the polling of images when they are in the queue? If so, yes, the event-driven nature of Twisted would help with that. However, there is a problem when you consider the image queue:

      1. CNNs are most efficient when processing images in batches. If you use Twisted for single events (such as a new image entering the queue) it won’t help as much since we would rather wait a tiny bit more of time to more efficiently (and quickly) classify a larger batch.

      2. Redis is an incredibly fast in memory data store which allows us to batch queue our images. This batching goes back to point 1. It actually helps for speed.

      • Siva January 29, 2018 at 6:50 pm #

        Hi Adrian – yes, I was referring to the polling of the images. And, now the Flask / Redis architecture makes sense if it’s better to batch the image requests. Thanks!

      • TaeWoo Kim May 10, 2018 at 11:13 am #

        Hey Adrian. Is there any benefit to batch prediction if I am using CPU only? I ran a test of image classification on video, where I’m trying to classify each frame, under these scenarios:

        – 1 image at a time
        – batch of ALL images at once
        – few batches of 32 frames/images
        – many batches of 4 frames/images

        Using CPU only, there was no real benefit to using batched predictions (batched were all 90+ seconds on a test video, whereas 1 image/frame at a time took 85 seconds).

        • Adrian Rosebrock May 14, 2018 at 12:16 pm #

          You’ll see more benefit of batched prediction on your GPU rather than CPU.

  2. Flo February 1, 2018 at 6:23 am #

    Hey Adrian,

    like always a wonderful post!

    I have not worked with redis before, but from what I could glance from the documentation it looks to me that your keras – classify_process() will not scale well.
    It first retrieves a new batch of images, processes them, and then removes them from the queue. Assuming all workers access the same Redis instance (the same image queue), that would mean a second worker could load the same batch of images while the first one is processing them. Not only would those images get analyzed twice, but the slower worker would remove a batch of pictures without them having been seen by any model.

    The StrictRedis docstrings mention two functions that could help:

    – lock() – which supposedly “mimics the behavior of threading.Lock”. The solution would be to lock before reading and to release after deleting from the queue
    – lpop() – “Remove[s] and return[s] the first item of the list”, so you would need a loop (and multiple round trips to redis) to get a batch

    Let me know what you think

    • Adrian Rosebrock February 3, 2018 at 11:04 am #

      Hey Flo — I discuss this in next week’s blog post as well, but the point of this method is to have one image queue per GPU. If you have multiple GPUs you’ll want to create a separate queue, for example image_queue_0, image_queue_1, image_queue_N for each of your N GPUs. This will prevent any issues with multiple GPUs processing the same batch.

      Additionally, Redis is single threaded so if you use a different image queue name for each GPU you will not run into any batches being processed multiple times.

      Again, make sure you read next week’s blog post so this point becomes more clear.

  3. Akash February 2, 2018 at 3:18 am #

    Hi Adrian! Thanks for this deeply informative post . Could we do the same for text recognition from images?

    • Adrian Rosebrock February 3, 2018 at 10:40 am #

      Provided you have trained your model to perform text recognition you can swap in your model (instead of ResNet) and use it as an API in the exact same manner we have done in this blog post.

  4. Casey February 14, 2018 at 10:11 am #

    Wow this is very informative. I have your ImageNet Bundle and have yet to start it but if the quality is even close to this (which knowing your previous content it will be) you should have charged more. Excellent post!

    • Adrian Rosebrock February 18, 2018 at 10:09 am #

      Thanks Casey 🙂 The ImageNet Bundle of Deep Learning for Computer Vision with Python is even more in-depth and high quality than this blog post. Enjoy it and please do reach out if you have any questions on it.

  5. Charlie March 8, 2018 at 3:08 pm #

    Hi Adrian,

    Why are you using threading instead of multiprocessing?

    Thanks

    • Adrian Rosebrock March 9, 2018 at 9:01 am #

      There isn’t a need for multiprocessing here. Threading is typically used for I/O bound tasks while multiprocessing is used for CPU heavy tasks.

  6. Damon Wang March 14, 2018 at 3:45 am #

    Hi Adrian,

    Could you teach me how to serialize and deserialize the videos(e.g. MP4 videos)? Cause I want to use your blog to classify videos with other DL models.

    Thanks

    • Adrian Rosebrock March 14, 2018 at 12:35 pm #

      There are a few ways to approach this. Is your goal to feed the video, one frame at a time, through the DL model?

      • Damon Wang March 15, 2018 at 4:10 am #

        Thank you so much for your reply.
        My goal is to feed the whole video to the DL model.
        The steps of my project(based on Flask) includes:
        1.feed the videos into Redis
        2.Get the videos from Redis, extract the video frames, feed the frames into DL models, get the prediction of the video.
        But I wonder whether I should serialize and deserialize the videos before feed videos into Redis.
        Thanks

        • Adrian Rosebrock March 19, 2018 at 5:01 pm #

          Video files are significantly larger than images. I wouldn’t recommend putting the video itself into Redis as Redis is an in-memory file store. You could technically feed each frame, one-by-one, from the client to the server but that’s likely wasteful.

          Instead, you should consider modifying this code so:

          1. The video file is uploaded to the server and saved to disk
          2. The video file is processed by the server (again, from disk, not via Redis)
          3. The resulting video file or results are returned to the client

          Again, I really do not recommend trying to store video files in Redis as you could quickly run out of memory.

          • TaeWoo Kim May 10, 2018 at 11:27 am #

            In the case of video (i.e. classifying each frame), would redis even be needed at all?

          • TaeWoo Kim May 10, 2018 at 11:34 am #

            In other words, for running image classification on videos, would your original post on keras blog (https://blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html) would suffice, no?

          • Adrian Rosebrock May 14, 2018 at 12:16 pm #

            I wouldn’t use this method if you want to process entire video files. Video files are significantly larger and your system would quickly run out of RAM. I would use a hybrid approach where a video is uploaded, saved to disk, and a new job is kicked off that runs in the background to process all frames of the video.

  7. Prakruti June 20, 2018 at 8:23 am #

    Hi Adrian,

    Is it possible to deploy a flask api as a service without being bounded to wsgi and apache ?
    Can’t one just execute the Python file with the API and use it from localhost:5000? A user without sudo rights would need something like this, right? Because one does not have access to the Apache config or rights to start the Apache server.
    Also, What if I want to just call this api from another java wrapper ?

    • Adrian Rosebrock June 20, 2018 at 4:01 pm #

      1. Be careful if you use the Flask server for this. It’s not threaded as I discuss in both this post and this one. Even though your model will be loaded properly using the Flask testing server won’t use more than one thread so it defeats the purpose.

      2. If you would like to call the API from Java you should look into the HTTP request libraries available for Java (I’m not familiar with them) but it’s 100% possible, just do your research and you’ll be fine 🙂

  8. Regis Amichia June 21, 2018 at 8:18 am #

    Hi Adrian,

    First of all thanks a lot for this post.
    I have an issue following your methods. I trained my model offline, saved it in an .h5 file, and I would like to know how to upload it to Redis and then load it in my code.

    Thanks a lot for your answer

    • Adrian Rosebrock June 21, 2018 at 9:14 am #

      I think you’re confusing what Redis does. Redis does not hold your model, the server does. Redis only holds the images in the queue. You can modify the classify_process function to load your own model using Keras’ load_model function. Be sure to refer to the Keras docs if you have never used this function before. I would also recommend reading through Deep Learning for Computer Vision with Python so you can study deep learning in more detail as well.

  9. Slim Frikha July 3, 2018 at 8:22 am #

    Hi Adrian,

    First, thanks for this great article with thorough explanations!
    I noticed in this example that you actually programmed the scheduler logic with the 2 while True loops and what it basically does is the following:
    – every X seconds, the server wakes up to check if there are images to predict in the Redis queue
    – every Y seconds, the server wakes up to see if results are ready in the Redis queue

    I was wondering if you maybe tried or tested the same stack but with Celery as a task scheduler to avoid doing so.

    Thanks!

    • Adrian Rosebrock July 3, 2018 at 8:42 am #

      Celery would be a feasible solution as well. I wanted to keep this solution as a template for others to build off though. You can add any other bells and whistles you see fit.

Trackbacks/Pingbacks

  1. Deep learning in production with Keras, Redis, Flask, and Apache - PyImageSearch - February 5, 2018

    […] part two we demonstrated how to leverage Redis along with message queueing/message brokering paradigms to […]

  2. Deep learning in production with Keras, Redis, Flask, and Apache – InsideNothing - February 5, 2018

    […] part two we demonstrated how to leverage Redis along with message queueing/message brokering paradigms to […]
