Fire and smoke detection with Keras and Deep Learning

In this tutorial, you will learn how to detect fire and smoke using Computer Vision, OpenCV, and the Keras Deep Learning library.

Today’s tutorial is inspired by an email I received last week from PyImageSearch reader, Daniel.

Daniel writes:

Hey Adrian, I’m not sure if you’ve seen the news, but my home state of California has been absolutely ravaged by wildfires over the past few weeks.

My family lives in the Los Angeles area, not too far from the Getty fire. It’s hard not to be concerned about our home and our safety.

It’s a scary situation and it got me thinking:

Do you think computer vision could be used to detect wildfires? What about fires that start in people’s homes?

If you could write a tutorial on the topic I would appreciate it. I’d love to learn from it and do my part to help others.

The short answer is, yes, computer vision and deep learning can be used to detect wildfires:

  • IoT/Edge devices equipped with cameras can be deployed strategically throughout hillsides, ridges, and high elevation areas, automatically monitoring for signs of smoke or fire.
  • Drones and quadcopters can be flown above areas prone to wildfires, strategically scanning for smoke.
  • Satellites can be used to take photos of large acreage areas while computer vision and deep learning algorithms process these images, looking for signs of smoke.

That’s all fine and good for wildfires — but what if you wanted to monitor your own home for smoke or fire?

The answer there is to augment existing sensors to aid in fire/smoke detection:

  • Existing smoke detectors use a photoelectric sensor and a light source to detect whether the light is being scattered by airborne particles (implying smoke is present).
  • You could then distribute temperature sensors around the house to monitor the temperature of each room.
  • Cameras could also be placed in areas where fires are likely to start (kitchen, garage, etc.).
  • Each individual sensor could be used to trigger an alarm or you could relay the sensor information to a central hub that aggregates and analyzes the sensor data, computing a probability of a home fire.

Unfortunately, that’s all easier said than done.

While there are 100s of computer vision/deep learning practitioners around the world actively working on fire and smoke detection (including PyImageSearch Gurus member, David Bonn), it’s still an open-ended problem.

That said, today I’ll help you get your start in smoke and fire detection — by the end of this tutorial, you’ll have a deep learning model capable of detecting fire in images (I’ve even included my pre-trained model to get you up and running immediately).

To learn how to create your own fire and smoke detector with Computer Vision, Deep Learning, and Keras, just keep reading!


Fire and smoke detection with Keras and Deep Learning

Figure 1: Wildfires can quickly become out of control and endanger lives in many parts of the world. In this article, we will learn to conduct fire and smoke detection with Keras and deep learning.

In the first part of this tutorial we’ll discuss the two datasets we’ll be using for fire and smoke detection.

From there we’ll review our directory structure for the project and then implement FireDetectionNet, the CNN architecture we’ll be using to detect fire and smoke in images/video.

Next, we’ll train our fire detection model and analyze the classification accuracy and results.

We’ll wrap up the tutorial by discussing some of the limitations and drawbacks of the approach, including how you can improve and extend the method.

Our fire and smoke dataset

Figure 2: Today’s fire detection dataset is curated by Gautam Kumar and pruned by David Bonn (both of whom are PyImageSearch readers). We will put the dataset to work with Keras and deep learning to create a fire/smoke detector.

The dataset we’ll be using for fire and smoke examples was curated by PyImageSearch reader, Gautam Kumar.

Gautam gathered a total of 1,315 images by searching Google Images for queries related to the terms “fire”, “smoke”, etc.

However, the original dataset has not been cleansed of extraneous, irrelevant images that are not related to fire and smoke (e.g., examples of famous buildings before a fire occurred).

Fellow PyImageSearch reader, David Bonn, took the time to manually go through the fire/smoke images and identify ones that should not be included.

Note: I took the list of extraneous images identified by David and then created a shell script to delete them from the dataset. The shell script can be found in the “Downloads” section of this tutorial.

The 8-scenes dataset

Figure 3: We will combine Gautam’s fire dataset with the 8-scenes natural image dataset so that we can classify Fire vs. Non-fire using Keras and deep learning.

The dataset we’ll be using for Non-fire examples is called 8-scenes as it contains 2,688 image examples belonging to eight natural scene categories (all without fire):

  1. Coast
  2. Mountain
  3. Forest
  4. Open country
  5. Street
  6. Inside city
  7. Tall buildings
  8. Highways

The dataset was originally curated by Oliva and Torralba in their 2001 paper, Modeling the shape of the scene: a holistic representation of the spatial envelope.

The 8-scenes dataset is a natural complement to our fire/smoke dataset as it depicts natural scenes as they should look without fire or smoke present.

While this dataset has 8 unique classes, we will consider the dataset as a single Non-fire class when we combine it with Gautam’s Fire dataset.

Project structure

Figure 4: The project structure for today’s tutorial on fire and smoke detection with deep learning using the Keras/TensorFlow framework.

Go ahead and grab today’s .zip containing the source code and pre-trained model from the “Downloads” section of this blog post.

From there you can unzip it on your machine and your project will look like Figure 4. There is an exception: neither dataset .zip (white arrows) will be present yet. We will download, extract, and prune the datasets in the next section.

Our output/  directory contains:

  • Our serialized fire detection model. We will train the model today with Keras and deep learning.
  • The Learning Rate Finder plot will be generated and inspected for the optimal learning rate prior to training.
  • A training history plot will be generated upon completion of the training process.
  • The  examples/  subdirectory will be populated by predict_fire.py  with sample images that will be annotated for demonstration and verification purposes.

Our pyimagesearch  module holds:

  • config.py : Our customizable configuration.
  • FireDetectionNet : Our Keras Convolutional Neural Network class designed specifically for detecting fire and smoke.
  • LearningRateFinder : A Keras class for assisting in the process of finding the optimal learning rate for deep learning training.

The root of the project contains three scripts:

  • prune.sh : A simple bash script that removes irrelevant images from Gautam’s fire dataset.
  • train.py : Our Keras deep learning training script. This script has two modes of operation: (1) Learning Rate Finder mode, and (2) training mode.
  • predict_fire.py : A quick and dirty script which samples images from our dataset, generating annotated Fire/Non-fire images for verification.

Let’s move on to preparing our Fire/Non-fire dataset in the next section.

Preparing our Fire and Non-fire combined dataset

Preparing our Fire and Non-fire dataset involves a four-step process:

  1. Step #1: Ensure you followed the instructions in the previous section to grab and unzip today’s files from the “Downloads” section.
  2. Step #2: Download and extract the fire/smoke dataset into the project.
  3. Step #3: Prune the fire/smoke dataset for extraneous, irrelevant files.
  4. Step #4: Download and extract the 8-scenes dataset into the project.

The result of Steps #2-4 will be a dataset consisting of two classes:

  • Fire
  • Non-fire

Combining datasets is a tactic I often use. It saves valuable time and often leads to a great model.

Let’s begin putting our combined dataset together.

Step #2: Download and extract the fire/smoke dataset into the project.

Download the fire/smoke dataset using this link. Store the .zip in the keras-fire-detection/  project directory that you extracted in the last section.

Once downloaded, unzip the dataset:

Step #3: Prune the dataset for extraneous, irrelevant files.

Execute the prune.sh script to delete the extraneous, irrelevant files from the fire dataset:

At this point, we have Fire data. Now we need Non-fire data for our two-class problem.

Step #4: Download and extract the 8-scenes dataset into the project.

Download the 8-scenes dataset using this link. Store the .zip in the keras-fire-detection/ project directory alongside the Fire dataset.

Once downloaded, navigate to the project folder and unarchive the dataset:

Review Project + Dataset Structure

At this point, it is time to inspect our directory structure once more. Yours should be identical to mine:

Ensure your dataset is pruned (i.e. the Fire/  directory should have exactly 1,315 entries and not the previous 1,405 entries).

Our configuration file

This project will span multiple Python files that will need to be executed, so let’s store all important variables in a single config.py file.

Open up config.py now and insert the following code:

We’ll use the os  module for combining paths (Line 2).

Lines 5-7 contain paths to our (1) Fire images, and (2) Non-fire images.

Line 10 is a list of our two class names.
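Note: The exact config.py listing (with the line numbers referenced above) is included in the “Downloads”. If you want a feel for it before grabbing the files, here is a minimal sketch of this portion; the variable names FIRE_PATH, NON_FIRE_PATH, and CLASSES, as well as the dataset directory names, are assumptions that may differ slightly from the downloadable code:

# import the necessary packages
import os

# initialize the paths to the fire and non-fire dataset directories
# (directory names are assumptions -- point them at wherever you
# extracted the two datasets)
FIRE_PATH = os.path.sep.join(["fire_dataset", "fire_images"])
NON_FIRE_PATH = "8-scenes"

# initialize the class labels in the dataset
CLASSES = ["Non-Fire", "Fire"]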

Let’s set a handful of training parameters:

Lines 13 and 14 define the size of our training and testing dataset splits.

Lines 17-19 contain three hyperparameters — the initial learning rate, batch size, and number of epochs to train for.
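A sketch of these lines might look like the following. The 75%/25% split and the INIT_LR value come straight from this post, while the BATCH_SIZE and NUM_EPOCHS values shown here are representative choices you can adjust for your own hardware:

# define the size of the training and testing split
TRAIN_SPLIT = 0.75
TEST_SPLIT = 0.25

# define the initial learning rate, batch size, and number of epochs
INIT_LR = 1e-2
BATCH_SIZE = 64
NUM_EPOCHS = 50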

From here, we’ll define a few paths:

Lines 22-27 include paths to:

  • Our yet-to-be-trained serialized fire detection model.
  • The Learning Rate Finder plot which we will analyze to set our initial learning rate.
  • A training accuracy/loss history plot.

To wrap up our config we’ll define settings for prediction spot-checking:

Our prediction script will sample and annotate images using our model.

Lines 32 and 33 include the path to the output directory where we’ll store the annotated classification results and the number of images to sample.
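Here is a sketch of the remainder of the configuration, covering both the output paths and the prediction settings (the file names and the OUTPUT_IMAGE_PATH/SAMPLE_SIZE variable names are assumptions):

# set the paths to the serialized model, the Learning Rate Finder
# plot, and the training history plot (file names are assumptions)
MODEL_PATH = os.path.sep.join(["output", "fire_detection.model"])
LRFIND_PLOT_PATH = os.path.sep.join(["output", "lrfind_plot.png"])
TRAINING_PLOT_PATH = os.path.sep.join(["output", "training_plot.png"])

# define the path to the output directory that will store our final
# annotated images along with the number of images to sample
OUTPUT_IMAGE_PATH = os.path.sep.join(["output", "examples"])
SAMPLE_SIZE = 25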

Implementing our fire detection Convolutional Neural Network

Figure 5: FireDetectionNet is a deep learning fire/smoke classification network built with the Keras deep learning framework.

In this section we’ll implement FireDetectionNet, a Convolutional Neural Network used to detect smoke and fire in images.

This network utilizes depthwise separable convolution rather than standard convolution as depthwise separable convolution:

  • Is more efficient, as Edge/IoT devices will have limited CPU and power draw.
  • Requires less memory, as again, Edge/IoT devices have limited RAM.
  • Requires less computation, as we have limited CPU horsepower.
  • Can perform better than standard convolution in some cases, which can lead to a better fire/smoke detector.

Let’s get started implementing FireDetectionNet — open up the firedetectionnet.py file now and insert the following code:

Our TensorFlow 2.0 Keras imports span from Lines 2-9. We will use Keras’ Sequential API to build our fire detection CNN.

Line 11 defines our FireDetectionNet  class. We begin by defining the  build method on Line 13.

The build  method accepts parameters including dimensions of our images ( width , height , depth ) as well as the number of classes  we will be training our model to recognize (i.e. this parameter affects the softmax classifier head shape).

We then initialize the model  and inputShape  (Lines 16-18).

From here we’ll define our first set of CONV => RELU => POOL  layers:

These layers use a larger kernel size to both (1) reduce the input volume spatial dimensions faster, and (2) detect larger color blobs that contain fire.

We’ll then define more CONV => RELU => POOL  layer sets:

Lines 34-40 allow our model to learn richer features by stacking two sets of CONV => RELU  before applying a POOL .

From here we’ll create our fully-connected head of the network:

Lines 43-53 add two sets of FC => RELU  layers.

Lines 56 and 57 append our Softmax classifier prior to Line 60 returning the model .
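To make the architecture concrete, here is a condensed sketch consistent with the description above: depthwise separable convolutions throughout, a larger 7×7 kernel in the first block, two stacked CONV => RELU layers before the second POOL, two FC => RELU layer sets, and a softmax head. The filter counts and dropout rates are assumptions; refer to the “Downloads” for the exact listing.

# import the necessary packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SeparableConv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Dense

class FireDetectionNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape
        model = Sequential()
        inputShape = (height, width, depth)

        # CONV => RELU => POOL -- the larger 7x7 kernel reduces the
        # spatial dimensions quickly and captures large color blobs
        model.add(SeparableConv2D(16, (7, 7), padding="same",
            input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))

        # (CONV => RELU) * 2 => POOL -- stacking two CONV layers
        # before pooling lets the network learn richer features
        model.add(SeparableConv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(SeparableConv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))

        # first FC => RELU layer set with dropout
        model.add(Flatten())
        model.add(Dense(128))
        model.add(Activation("relu"))
        model.add(Dropout(0.5))

        # second FC => RELU layer set with dropout
        model.add(Dense(128))
        model.add(Activation("relu"))
        model.add(Dropout(0.5))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model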

Creating our training script

Our training script will be responsible for:

  1. Loading our Fire and Non-fire combined dataset from disk.
  2. Instantiating our FireDetectionNet architecture.
  3. Finding our optimal learning rate by using our LearningRateFinder class.
  4. Taking the optimal learning rate and training our network for the full set of epochs.

Let’s get started!

Open up the train.py file in your directory structure and insert the following code:

Lines 1-19 handle our imports:

  • matplotlib : For generating plots with Python. Line 3 sets the backend so we can save our plots as image files.
  • tensorflow.keras : Our TensorFlow 2.0 imports including data augmentation, stochastic gradient descent optimizer, and one-hot label encoder.
  • sklearn : Two imports for dataset splitting and classification reporting.
  • LearningRateFinder : A class we will use for finding an optimal learning rate prior to training. When we operate our script in this mode, it will generate a plot for us to (1) manually inspect and (2) insert the optimal learning rate into our configuration file.
  • FireDetectionNet : The fire/smoke Convolutional Neural Network (CNN) that we built in the previous section.
  • config : Our configuration file of settings for this training script (it also contains settings for our prediction script).
  • paths : Contains functions from my imutils package to list images in a directory tree.
  • argparse : For parsing command line argument flags.
  • cv2 : OpenCV is used for loading and preprocessing images.

Now that we’ve imported packages, let’s define a reusable function to load our dataset:

Our load_dataset  helper function assists with loading, preprocessing, and preparing both the Fire and Non-fire datasets.

Line 21 defines the function which accepts a path to the dataset.

Line 24 grabs all image paths in the dataset.

Lines 28-35 loop over the imagePaths . Images are loaded, resized to 128×128 dimensions, and added to the data  list.

Line 38 returns the data  in NumPy array format.
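A sketch of load_dataset, assuming paths (from imutils), cv2, and NumPy were imported at the top of train.py as described above:

def load_dataset(datasetPath):
    # grab the paths to all images in the dataset directory, then
    # initialize our list of images
    imagePaths = list(paths.list_images(datasetPath))
    data = []

    # loop over the image paths
    for imagePath in imagePaths:
        # load the image and resize it to a fixed 128x128 pixels,
        # ignoring aspect ratio
        image = cv2.imread(imagePath)
        image = cv2.resize(image, (128, 128))

        # add the image to the data list
        data.append(image)

    # return the data list as a NumPy array
    return np.array(data, dtype="float32")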

We’ll now parse a single command line argument:

The --lr-find  flag sets the mode for our script. If the flag is set to 1 , then we’ll be in our learning rate finder mode, generating a learning rate plot for us to inspect. Otherwise, our script will operate in training mode and train the network for the full set of epochs (i.e. when the --lr-find  flag is not present).
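A minimal sketch of that argument parser (the --lr-find switch is the only command line argument this script needs):

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--lr-find", type=int, default=0,
    help="whether or not to find optimal learning rate")
args = vars(ap.parse_args())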

Let’s go ahead and load our data  now:

Lines 48 and 49 load and resize the Fire and Non-fire images.

Lines 52 and 53 construct labels for both classes ( 1 for Fire and  0 for Non-fire).

Subsequently, we stack the data  and labels  into a single NumPy array (i.e. combine the datasets) via Lines 57 and 58.

Line 59 scales pixel intensities to the range [0, 1].

We have three more steps to prepare our data:

First, we perform one-hot encoding on our labels  (Line 63).

Then, we account for skew in our dataset (Lines 64 and 65). To do so, we compute the classWeight to weight Fire images more heavily than Non-fire images during the gradient update (as we have over 2x more Non-fire images than Fire images).

Lines 68 and 69 construct training and testing splits based on our config (in my config I have the split set to 75% training/25% testing).
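Putting those steps together, the data preparation might look like the sketch below. The config attribute names (FIRE_PATH, NON_FIRE_PATH, TEST_SPLIT) follow the earlier config sketch, to_categorical and train_test_split come from the imports described above, and the class weight computation shown is one common way to up-weight the smaller Fire class:

# load the fire and non-fire images, then construct the class labels
# (1 for Fire, 0 for Non-fire)
print("[INFO] loading data...")
fireData = load_dataset(config.FIRE_PATH)
nonFireData = load_dataset(config.NON_FIRE_PATH)
fireLabels = np.ones((fireData.shape[0],))
nonFireLabels = np.zeros((nonFireData.shape[0],))

# stack the data and labels, then scale pixel intensities to [0, 1]
data = np.vstack([fireData, nonFireData])
labels = np.hstack([fireLabels, nonFireLabels])
data /= 255.0

# one-hot encode the labels, then compute class weights so the
# under-represented Fire class contributes more to the gradient update
labels = to_categorical(labels, num_classes=2)
classTotals = labels.sum(axis=0)
classWeight = {i: classTotals.max() / classTotals[i]
    for i in range(len(classTotals))}

# construct the training and testing split
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=config.TEST_SPLIT, random_state=42)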

Next, we’ll initialize data augmentation and compile our FireDetectionNet  model:

Lines 74-79 instantiate our data augmentation object.

We then build and compile our FireDetectionNet model (Lines 83-88). Note that our initial learning rate and decay are set as we initialize our SGD optimizer.
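As a sketch, the augmentation and compile step might look like this. The augmentation parameters are representative values, and with a two-class softmax head and one-hot labels, categorical cross-entropy is a reasonable loss choice:

# initialize the training data augmentation object
aug = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# initialize the optimizer (the initial learning rate and decay are
# baked in here; the `decay` kwarg reflects the TF 2.0-era API) and
# compile the model
opt = SGD(learning_rate=config.INIT_LR, momentum=0.9,
    decay=config.INIT_LR / config.NUM_EPOCHS)
model = FireDetectionNet.build(width=128, height=128, depth=3,
    classes=len(config.CLASSES))
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])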

Let’s handle our Learning Rate Finder mode:

Line 92 checks to see if we should attempt to find optimal learning rates (see the sketch after this list). Assuming so, we:

  • Initialize LearningRateFinder  (Line 96).
  • Start training with a 1e-10  learning rate and exponentially increase it until we hit 1e+1  (Lines 97-103).
  • Plot the loss vs. learning rate and save the resulting figure (Lines 107 and 108).
  • Gracefully exit the script after printing a couple of messages to the user (Line 115).
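Here is a sketch of that branch. LearningRateFinder is the helper class bundled with the “Downloads”, so treat the find/plot_loss method names and their arguments as assumptions based on how the class is described; sys, NumPy, and matplotlib are assumed to be imported at the top of train.py.

# check to see if we are attempting to find an optimal learning rate
# before training for the full number of epochs
if args["lr_find"] > 0:
    # initialize the learning rate finder and then train with learning
    # rates ranging from 1e-10 to 1e+1 (method names and arguments
    # follow the LearningRateFinder class in the downloads -- treat
    # them as assumptions)
    print("[INFO] finding learning rate...")
    lrf = LearningRateFinder(model)
    lrf.find(
        aug.flow(trainX, trainY, batch_size=config.BATCH_SIZE),
        1e-10, 1e+1,
        stepsPerEpoch=np.ceil(trainX.shape[0] / float(config.BATCH_SIZE)),
        epochs=20,
        batchSize=config.BATCH_SIZE,
        classWeight=classWeight)

    # plot the loss for the various learning rates and save the figure
    lrf.plot_loss()
    plt.savefig(config.LRFIND_PLOT_PATH)

    # gracefully exit the script so we can inspect the plot, update
    # INIT_LR in config.py, and then re-run the script in training mode
    print("[INFO] learning rate finder complete")
    print("[INFO] examine plot and adjust learning rates before training")
    sys.exit(0)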

After this code executes we now need to:

  1. Step #1: Manually inspect the generated learning rate plot.
  2. Step #2: Update config.py with our INIT_LR (i.e., the optimal learning rate we determined by analyzing the plot).
  3. Step #3: Train the network on our full dataset.

Assuming we have completed Step #1 and Step #2, let’s now handle Step #3, where our initial learning rate has been determined and updated in the config. It is time to handle training mode in our script:

Lines 119-125 train our fire detection model  using data augmentation and our skewed dataset class weighting. Be sure to review my .fit_generator tutorial.
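A sketch of the training call (on more recent TensorFlow releases you can pass the same generator directly to model.fit, as fit_generator has since been deprecated):

# train the network for the full set of epochs
print("[INFO] training network...")
H = model.fit_generator(
    aug.flow(trainX, trainY, batch_size=config.BATCH_SIZE),
    validation_data=(testX, testY),
    steps_per_epoch=trainX.shape[0] // config.BATCH_SIZE,
    epochs=config.NUM_EPOCHS,
    class_weight=classWeight,
    verbose=1)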

Finally, we’ll evaluate the model, serialize it to disk, and plot the training history:

Lines 129-131 make predictions on test data and print a classification report in our terminal.

Line 135 serializes the model and saves it to disk. We’ll load the model again in our prediction script.

Lines 138-149 generate a historical plot of accuracy/loss curves during training. We will inspect this plot for overfitting or underfitting.
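A sketch of the evaluation, serialization, and plotting code. The history keys “accuracy” and “val_accuracy” assume the TF 2.0 naming; older Keras versions used “acc” and “val_acc”:

# evaluate the network and show a classification report
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=config.BATCH_SIZE)
print(classification_report(testY.argmax(axis=1),
    predictions.argmax(axis=1), target_names=config.CLASSES))

# serialize the model to disk
print("[INFO] serializing network to '{}'...".format(config.MODEL_PATH))
model.save(config.MODEL_PATH)

# construct and save a plot of the training loss and accuracy
N = np.arange(0, config.NUM_EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.plot(N, H.history["accuracy"], label="train_acc")
plt.plot(N, H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(config.TRAINING_PLOT_PATH)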

Training the fire detection model with Keras

Training our fire detection model is broken down into three steps:

  1. Step #1: Run the train.py script with the --lr-find command line argument to find our optimal learning rate.
  2. Step #2: Update Line 17 of our configuration file ( config.py ) to set our INIT_LR value as the optimal learning rate.
  3. Step #3: Execute the train.py script again, but this time let it train for the full set of epochs.

Start by using the “Downloads” section of this tutorial to download the source code.

From there you can perform Step #1 by executing the following command:

Figure 6: Analyzing our learning rate finder plot to determine the optimal initial learning rate. We will use that learning rate to train a fire/smoke detector using Keras and Python.

Examining Figure 6 above you can see that our network is able to gain traction and start to learn around 1e-5 .

The lowest loss can be found between 1e-2 and 1e-1; however, at 1e-1 we can see loss starting to increase sharply, implying that the learning rate is too large and training has become unstable.

To be safe we should use an initial learning rate of 1e-2.

Let’s now move on to Step #2.

Open up config.py and scroll to Lines 16-19 where we set our training hyperparameters:

Here we see our initial learning rate ( INIT_LR) value — we need to set this value to 1e-2 (as our code indicates).

The final step (Step #3) is to train FireDetectionNet for the full set of NUM_EPOCHS:

Figure 7: Accuracy/loss curves for training a fire and smoke detection deep learning model with Keras and Python.

Learning is a bit volatile here but you can see that we are obtaining 92% accuracy.

Making predictions on fire/non-fire images

Given our trained fire detection model, let’s now learn how to:

  1. Load the trained model from disk.
  2. Sample random images from our dataset.
  3. Classify each input image using our model.

Open up predict_fire.py and insert the following code:

Lines 2-9 handle our imports, namely load_model , so that we can load our serialized TensorFlow/Keras model from disk.

Let’s grab 25 random images from our combined dataset:

Lines 17 and 18 grab image paths from our combined dataset while Lines 22-24 sample 25 random image paths.

From here, we’ll loop over each of the individual image paths and perform fire detection inference:

Line 27 begins a loop over our sampled image paths (a full sketch of the script follows this list):

  • We load and preprocess the image just as in training (Lines 29-35).
  • Make predictions and grab the highest probability label (Lines 38-40).
  • Annotate the label in the top corner of the image (Lines 43-46).
  • Save the output image to disk (Lines 49-51).
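Pulling the whole script together, here is a sketch of predict_fire.py. The config attribute names follow the earlier config sketch, and the annotation details (font, colors, output file names) are assumptions:

# import the necessary packages
from tensorflow.keras.models import load_model
from pyimagesearch import config
from imutils import paths
import numpy as np
import random
import cv2
import os

# load the trained model from disk
print("[INFO] loading model...")
model = load_model(config.MODEL_PATH)

# grab the image paths from both the Fire and Non-fire datasets,
# then randomly sample a handful of them
firePaths = list(paths.list_images(config.FIRE_PATH))
nonFirePaths = list(paths.list_images(config.NON_FIRE_PATH))
imagePaths = firePaths + nonFirePaths
random.shuffle(imagePaths)
imagePaths = imagePaths[:config.SAMPLE_SIZE]

# make sure the output directory for annotated images exists
os.makedirs(config.OUTPUT_IMAGE_PATH, exist_ok=True)

# loop over the sampled image paths
for (i, imagePath) in enumerate(imagePaths):
    # load the image, clone it for annotation, then preprocess it
    # exactly as we did during training (resize + scale to [0, 1])
    image = cv2.imread(imagePath)
    output = image.copy()
    image = cv2.resize(image, (128, 128))
    image = image.astype("float32") / 255.0

    # make predictions on the image and grab the label with the
    # largest corresponding probability
    preds = model.predict(np.expand_dims(image, axis=0))[0]
    label = config.CLASSES[np.argmax(preds)]

    # draw the label in the top corner of the output image and
    # write the annotated image to disk
    color = (0, 0, 255) if label == "Fire" else (0, 255, 0)
    cv2.putText(output, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
        1.0, color, 2)
    outputPath = os.path.sep.join([config.OUTPUT_IMAGE_PATH,
        "{}.png".format(i)])
    cv2.imwrite(outputPath, output)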

Fire detection results

To see our fire detector in action make sure you use the “Downloads” section of this tutorial to download the source code and pre-trained model.

From there you can execute the following command:

Figure 8: Fire and smoke detection with Keras, deep learning, and Python.

I’ve included a sample set of results in Figure 8 — notice how our model was able to correctly predict “fire” and “non-fire” in each of them.

Limitations and drawbacks

Our results are not perfect, however. Here are a few examples of incorrect classifications:

Figure 9: Examples of incorrect fire/smoke detection.

The image on the left in particular is troubling — a sunset will cast shades of reds and oranges across the sky, creating an “inferno” like effect. It appears that in those situations our fire detection model will struggle considerably.

So, where are these incorrect classifications coming from?

The answer lies in the dataset itself.

To start, we only worked with raw image data.

Smoke and fire can be better detected with video as fires start off as a smolder, slowly build to a critical point, and then erupt into massive flames. Such a pattern is better detected in video streams rather than images.

Secondly, our datasets are quite small.

Combining the two datasets we only had a total of 4,003 images. Fire and smoke datasets are hard to come by, making it extremely challenging to create high accuracy models.

Finally, our datasets are not necessarily representative of the problem.

Many of the example images in our fire/smoke dataset contained examples of professional photos captured by news reports. Fires don’t look like that in the wild.

In order to improve our fire and smoke detection model, we need better data.

Future efforts in fire/smoke detection research should focus less on the actual deep learning architectures/training methods and more on the actual dataset gathering and curation process, ensuring the dataset better represents how fires start, smolder, and spread in natural scene images.

Where can I learn more about Deep Learning and Computer Vision?

Figure 10: My deep learning book is the go-to resource for deep learning developers, students, researchers, and hobbyists, alike. Use the book to build your skillset from the bottom up, or read it to gain a deeper understanding. Don’t be left in the dust as the fast paced AI revolution continues to accelerate.

Today’s tutorial helped us solve a real-world classification problem for classifying fire and smoke images.

Such an application could be:

  • Deployed on radio towers to warn nearby residents with sirens and cell phone alerts.
  • Utilized by park rangers to monitor for wildfires.
  • Employed in cities to detect smoke in buildings and other areas.
  • Used by television news companies to sort their archives of images and videos.

If you have your own real-world project you’re trying to solve, you need a strong deep learning foundation.

To jumpstart your education, including discovering my tips, suggestions, and best practices when training deep neural networks, be sure to refer to my book, Deep Learning for Computer Vision with Python.

Inside the book I cover:

  1. Deep learning fundamentals and theory without unnecessary mathematical fluff. I present the basic equations and back them up with code walkthroughs that you can implement and easily understand. You don’t need a degree in advanced mathematics to understand this book.
  2. More details on learning rates, tuning them, and how a solid understanding of the concept dramatically impacts the accuracy of your model.
  3. How to spot underfitting and overfitting on-the-fly, saving you days of training time.
  4. My tips/tricks, suggestions, and best practices for training CNNs.

To learn more about the book, and grab the table of contents + free sample chapters, just click here!

Summary

In this tutorial, you learned how to create a smoke and fire detector using Computer Vision, Deep Learning, and the Keras library.

To build our smoke and fire detector we utilized two datasets:

  • A fire/smoke dataset curated by PyImageSearch reader Gautam Kumar (1,315 images after pruning).
  • The 8-scenes dataset of natural, non-fire scenes from Oliva and Torralba (2,688 images).

We then designed a FireDetectionNet — a Convolutional Neural Network for smoke and fire detection. This network was trained on our two datasets. Once our network was trained we evaluated it on our testing set and found that it obtained 92% accuracy.

However, there are a number of limitations and drawbacks to this approach:

  • First, we only worked with image data. Smoke and fire can be better detected with video as fires start off as a smolder, slowly build to a critical point, and then erupt into massive flames.
  • Secondly, our datasets are small. Combining the two datasets we only had a total of 4,003 images. Fire and smoke datasets are hard to come by, making it extremely challenging to create high accuracy models.

Building on the previous point, our datasets are not necessarily representative of the problem. Many of the example images in our fire/smoke dataset are of professional photos captured by news reports. Fires don’t look like that in the wild.

The point is this:

Fire and smoke detection is a solvable problem…but we need better datasets.

Luckily, PyImageSearch Gurus member David Bonn is actively working on this problem and discussing it in the PyImageSearch Gurus Community forums. If you’re interested in learning more about his project, be sure to connect with him.

I hope you enjoyed this tutorial!

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


15 Responses to Fire and smoke detection with Keras and Deep Learning

  1. Toby Breckon November 18, 2019 at 11:23 am #

    Very interesting indeed – also see our experimentally defined approach, large dataset and example inference code + pre-trained models here: https://github.com/tobybreckon/fire-detection-cnn

    • Adrian Rosebrock November 21, 2019 at 8:58 am #

      This is fantastic work, thanks for sharing Toby! I really love the usage of superpixel segmentation as well!

  2. David Bonn November 18, 2019 at 11:36 am #

    Hi Adrian. Great post!

    You are very right that solving this problem is very much about curating a great dataset. My advice for the practitioner that wants to curate that great dataset would be to go outside and shoot video of fires. Since fire is self-similar on different scales even a small campfire should produce representative images that would detect larger fires. Since fire is very active and changes constantly you could literally produce hundreds of thousands of training images in a weekend.

    If you restrict the problem to just detecting flames, I’d also consider applying a simple color filter to the datasets and your input images. My own experiments have shown that is good for a few percentage points improvement in accuracy.

    One path to very high accuracy on this problem is to use other techniques to identify candidate regions, curate your datasets using those same techniques, and only apply a Deep Learning model to those candidate regions rather than the whole image. I’ve achieved around 99.5% accuracy using that technique on just the Deep Learning part of the system.

    This is a serious and very large problem. There are roughly 50 million homes in the United States vulnerable to wildfire, and around 6 million of those homes are at extreme wildfire risk.

    • Adrian Rosebrock November 21, 2019 at 8:58 am #

      Thanks David!

      Also be sure to refer to Toby’s comment, I think you’ll really enjoy it 🙂

    • Arian Hs December 4, 2019 at 10:55 am #

      Hi David,
      Curious what architecture you used for this higher accuracy? My target is explosion detection. And sometimes explosion is non-orange, like a huge dust pile in deserts or plasma explosion in movies which is blue!!

  3. Debal B November 18, 2019 at 11:58 am #

    Hi Adrian
    Thanks for a great tutorial once again.
    And you have put the problem rightly, the image dataset used for fire detection needs to be curated carefully, maybe capturing fire at different stages as it grows.
    I was expecting that certain pictures of sunset/dusk may be mistakenly interpreted by the model as fire due to the similarity in color spreads.
    When you say that detection is easier through videos, I am curious to know how the model is trained in such a case.
    Does it require building some sort of time context while parsing the video frames?

    • Adrian Rosebrock November 21, 2019 at 8:57 am #

      Yes, a spatiotemporal approach will help dramatically here. LSTM networks would be a good first start.

  4. Averustin November 19, 2019 at 3:56 am #

    Hey Adrian you’re 2 yrs late from my thesis project LOL 🙂
    Still thankfull though,

    • Adrian Rosebrock November 21, 2019 at 8:57 am #

      Haha, thanks Averustin! 🙂

  5. Falahgs November 19, 2019 at 4:19 am #

    Hi Adrian …
    thanks for great tutorial
    my problem is that
    why i see folder fire_detetcion.model in output folder and not see model file h5 ….?
    and contains some subfolders and file …
    like assest subfolder and variabels subfolder
    and file save_model.pb it is tensorflow format extension
    and i couldn’t load_model from folder (fire_detetcion.model)
    and i got error …

    Thanks for help
    and sharing your knowledge

    • Adrian Rosebrock November 21, 2019 at 8:56 am #

      This tutorial assumes you are using TensorFlow 2.0 which will generate a directory of files rather than single HDF5 file. If you’re not using TF 2.0 you should retrain the model.

  6. Arian Hs December 4, 2019 at 10:52 am #

    Thanks for the good tutorial.
    Given that your model is not very deep, do you think larger dataset, especially adding those sunset photos would help with higher accuracy?
    And one question: Real-time is not an issue for me. Shall I expect better accuracy if I replace separableConv2D with just conv2D?

    • Adrian Rosebrock December 5, 2019 at 10:14 am #

      1. A larger dataset is the most important aspect here. You can change your architecture based on the size of your dataset.

      2. Yes, you could use either Separable Convolution or standard convolution.

      • Arian Hs December 9, 2019 at 11:23 am #

        Thanks Adrian. On 1, you mean the larger the dataset, the deeper the model should be? I have around 8K-10K images (3K positive, and 7K negatives). How deep do you expect the architecture be? I didn’t get very high precision with ResNet-50! So thinking of a shallower one.

        • Adrian Rosebrock December 12, 2019 at 9:51 am #

          If you need help with suggestions and best practices on developing your own CNNs I would recommend you read Deep Learning for Computer Vision with Python. That book contains my tips, suggestions, and best practices.
