OpenCV – Stream video to web browser/HTML page

In this tutorial you will learn how to use OpenCV to stream video from a webcam to a web browser/HTML page using Flask and Python.

Ever have your car stolen?

Mine was stolen over the weekend. And let me tell you, I’m pissed.

I can’t share too many details as it’s an active criminal investigation, but here’s what I can tell you:

My wife and I moved to Philadelphia, PA from Norwalk, CT about six months ago. I have a car, which I don’t drive often, but still keep just in case of emergencies.

Parking is hard to find in our neighborhood, so I was in need of a parking garage.

I heard about a garage, signed up, and started parking my car there.

Fast forward to this past Sunday.

My wife and I arrive at the parking garage to grab my car. We were about to head down to Maryland to visit my parents and have some blue crab (Maryland is famous for its crabs).

I walked to my car and took off the cover.

I was immediately confused — this isn’t my car.

Where the #$&@ is my car?

After a few short minutes, reality set in — my car had been stolen.

Over the past week, my work on my upcoming Raspberry Pi for Computer Vision book was interrupted — I’ve been working with the owner of the parking garage, the Philadelphia Police Department, and the GPS tracking service on my car to figure out what happened.

I can’t publicly go into any details until it’s resolved, but let me tell you, there’s a whole mess of paperwork, police reports, attorney letters, and insurance claims that I’m wading neck-deep through.

I’m hoping that this issue gets resolved in the next month — I hate distractions, especially distractions that take me away from what I love doing the most — teaching computer vision and deep learning.

I’ve managed to use my frustrations to inspire a new security-related computer vision blog post.

In this post, we’ll learn how to stream video to a web browser using Flask and OpenCV.

You will be able to deploy the system on a Raspberry Pi in less than 5 minutes:

  • Simply install the required packages/software and start the script.
  • Then open your computer/smartphone browser to navigate to the URL/IP address to watch the video feed (and ensure nothing of yours is stolen).

There’s nothing like a little video evidence to catch thieves.

While I continue to do paperwork with the police, insurance, etc, you can begin to arm yourself with Raspberry Pi cameras to catch bad guys wherever you live and work.

To learn how to use OpenCV and Flask to stream video to a web browser HTML page, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

OpenCV – Stream video to web browser/HTML page

In this tutorial we will begin by discussing Flask, a micro web framework for the Python programming language.

We’ll learn the fundamentals of motion detection so that we can apply it to our project. We’ll proceed to implement motion detection by means of a background subtractor.

From there, we will combine Flask with OpenCV, enabling us to:

  1. Access frames from RPi camera module or USB webcam.
  2. Process the frames and apply an arbitrary algorithm (here we’ll be using background subtraction/motion detection, but you could apply image classification, object detection, etc.).
  3. Stream the results to a web page/web browser.

Additionally, the code we’ll be covering will be able to support multiple clients (i.e., more than one person/web browser/tab accessing the stream at once), something the vast majority of examples you will find online cannot handle.

Putting all these pieces together results in a home surveillance system capable of performing motion detection and then streaming the video result to your web browser.

Let’s get started!

The Flask web framework

Figure 1: Flask is a micro web framework for Python (image source).

In this section we’ll briefly discuss the Flask web framework and how to install it on your system.

Flask is a popular micro web framework written in the Python programming language.

Along with Django, Flask is one of the most common web frameworks you’ll see when building web applications using Python.

However, unlike Django, Flask is very lightweight, making it super easy to build basic web applications.

As we’ll see in this section, we’ll only need a small amount of code to facilitate live video streaming with Flask — the rest of the code either involves (1) OpenCV and accessing our video stream or (2) ensuring our code is thread safe and can handle multiple clients.

If you ever need to install Flask on a machine, it’s as simple as the following command:
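Flask installs with pip:

```shell
$ pip install flask
```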

While you’re at it, go ahead and install NumPy, OpenCV, and imutils:
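The exact pip package name for OpenCV is an assumption here (opencv-contrib-python is the common choice):

```shell
$ pip install numpy
$ pip install opencv-contrib-python
$ pip install imutils
```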

Note: If you’d like the full-install of OpenCV including “non-free” (patented) algorithms, be sure to compile OpenCV from source.

Project structure

Before we move on, let’s take a look at our directory structure for the project:
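Based on the files described below, the project layout looks roughly like this (the __init__.py files and the templates/ folder are inferred, since Flask looks for templates there by default):

```
.
├── pyimagesearch
│   ├── __init__.py
│   └── motion_detection
│       ├── __init__.py
│       └── singlemotiondetector.py
├── templates
│   └── index.html
└── webstreaming.py
```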

To perform background subtraction and motion detection we’ll be implementing a class named SingleMotionDetector — this class will live inside the singlemotiondetector.py file found in the motion_detection submodule of pyimagesearch.

The webstreaming.py file will use OpenCV to access our web camera, perform motion detection via SingleMotionDetector, and then serve the output frames to our web browser via the Flask web framework.

In order for our web browser to have something to display, we need to populate the contents of index.html with HTML used to serve the video feed. We’ll only need to insert some basic HTML markup — Flask will handle actually sending the video stream to our browser for us.

Implementing a basic motion detector

Figure 2: Video surveillance with Raspberry Pi, OpenCV, Flask and web streaming. By use of background subtraction for motion detection, we have detected motion where I am moving in my chair.

Our motion detection algorithm will detect motion via background subtraction.

Most background subtraction algorithms work by:

  1. Accumulating the weighted average of the previous N frames
  2. Taking the current frame and subtracting it from the weighted average of frames
  3. Thresholding the output of the subtraction to highlight the regions with substantial differences in pixel values (“white” for foreground and “black” for background)
  4. Applying basic image processing techniques such as erosions and dilations to remove noise
  5. Utilizing contour detection to extract the regions containing motion

Our motion detection implementation will live inside the SingleMotionDetector class which can be found in singlemotiondetector.py.

We call this a “single motion detector” as the algorithm itself is only interested in finding the single, largest region of motion.

We can easily extend this method to handle multiple regions of motion as well.

Let’s go ahead and implement the motion detector.

Open up the singlemotiondetector.py file and insert the following code:

Lines 2-4 handle our required imports.

All of these are fairly standard, including NumPy for numerical processing, imutils for our convenience functions, and cv2 for our OpenCV bindings.

We then define our SingleMotionDetector class on Line 6. The class accepts an optional argument, accumWeight, which is the factor used when computing our accumulated weighted average.

The larger accumWeight is, the less the background (bg) will be factored in when accumulating the weighted average.

Conversely, the smaller accumWeight is, the more the background bg will be considered when computing the average.

Setting accumWeight=0.5 weights both the background and foreground evenly — I often recommend this as a starting point value (you can then adjust it based on your own experiments).

Next, let’s define the update method which will accept an input frame and compute the weighted average:

In the case that our bg frame is None (implying that update has never been called), we simply store the input frame as our initial background (Lines 15-18).

Otherwise, we compute the weighted average between the input frame, the existing background bg, and our corresponding accumWeight factor.

Given our background bg we can now apply motion detection via the detect method:

The detect method requires a single parameter along with an optional one:

  • image: The input frame/image that motion detection will be applied to.
  • tVal: The threshold value used to mark a particular pixel as “motion” or not.

Given our input image we compute the absolute difference between the image and the bg (Line 27).

Any pixel locations that have a difference > tVal are set to 255 (white; foreground), otherwise they are set to 0 (black; background) (Line 28).

A series of erosions and dilations are performed to remove noise and small, localized areas of motion that would otherwise be considered false-positives (likely due to reflections or rapid changes in light).

The next step is to apply contour detection to extract any motion regions:

Lines 37-39 perform contour detection on our thresh image.

We then initialize two sets of bookkeeping variables to keep track of the location where any motion is contained (Lines 40 and 41). These variables will form the “bounding box” which will tell us the location of where the motion is taking place.

The final step is to populate these variables (provided motion exists in the frame, of course):

On Lines 43-45 we make a check to see if our contours list is empty.

If that’s the case, then there was no motion found in the frame and we can safely ignore it.

Otherwise, motion does exist in the frame so we need to start looping over the contours (Line 48).

For each contour we compute the bounding box and then update our bookkeeping variables (Lines 47-53), finding the minimum and maximum (x, y)-coordinates where all motion has taken place.

Finally, we return the bounding box location to the calling function.

Combining OpenCV with Flask

Figure 3: OpenCV and Flask (a Python micro web framework) make the perfect pair for web streaming and video surveillance projects involving the Raspberry Pi and similar hardware.

Let’s go ahead and combine OpenCV with Flask to serve up frames from a video stream (running on a Raspberry Pi) to a web browser.

Open up the webstreaming.py file in your project structure and insert the following code:

Lines 2-12 handle our required imports:

  • Line 2 imports our SingleMotionDetector class which we implemented above.
  • The VideoStream class (Line 3) will enable us to access our Raspberry Pi camera module or USB webcam.
  • Lines 4-6 handle importing our required Flask packages — we’ll be using these packages to render our index.html template and serve it up to clients.
  • Line 7 imports the threading library to ensure we can support concurrency (i.e., multiple clients, web browsers, and tabs at the same time).

Let’s move on to performing a few initializations:

First, we initialize our outputFrame on Line 17 — this will be the frame (post-motion detection) that will be served to the clients.

We then create a lock on Line 18 which will be used to ensure thread-safe behavior when updating the outputFrame (i.e., ensuring that one thread isn’t trying to read the frame as it is being updated).

Line 21 initializes our Flask app itself while Lines 25-27 access our video stream:

  • If you are using a USB webcam, you can leave the code as is.
  • However, if you are using a RPi camera module you should uncomment Line 25 and comment out Line 26.
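A sketch of these initializations (the VideoStream lines are shown commented out since they require an attached camera):

```python
import threading

from flask import Flask
# from imutils.video import VideoStream  # used in the original script

# frame served to clients (after motion detection has been applied)
outputFrame = None

# lock guarding reads/writes of outputFrame across threads
lock = threading.Lock()

# initialize the Flask app
app = Flask(__name__)

# access the video stream; uncomment ONE of the two lines below:
# vs = VideoStream(usePiCamera=1).start()  # RPi camera module
# vs = VideoStream(src=0).start()          # USB webcam
# time.sleep(2.0)                          # allow the sensor to warm up
```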

The next function, index, will render our index.html template and serve up the output video stream:

This function is quite simplistic — all it’s doing is calling the Flask render_template on our HTML file.

We’ll be reviewing the index.html file in the next section so we’ll hold off on a further discussion on the file contents until then.
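A sketch of the index route (Flask looks for index.html inside the templates/ directory by default):

```python
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # return the rendered template to the client
    return render_template("index.html")
```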

Our next function is responsible for:

  1. Looping over frames from our video stream
  2. Applying motion detection
  3. Drawing any results on the outputFrame

And furthermore, this function must perform all of these operations in a thread safe manner to ensure concurrency is supported.

Let’s take a look at this function now:

Our detect_motion function accepts a single argument, frameCount, which is the minimum number of required frames to build our background bg in the SingleMotionDetector class:

  • If we don’t have at least frameCount frames, we’ll continue to compute the accumulated weighted average.
  • Once frameCount is reached, we’ll start performing background subtraction.

Line 37 grabs global references to three variables:

  • vs: Our instantiated VideoStream object
  • outputFrame: The output frame that will be served to clients
  • lock: The thread lock that we must obtain before updating outputFrame

Line 41 initializes our SingleMotionDetector class with a value of accumWeight=0.1, implying that the bg value will be weighted higher when computing the weighted average.

Line 42 then initializes the total number of frames read thus far — we’ll need to ensure a sufficient number of frames have been read to build our background model.

From there, we’ll be able to perform background subtraction.

With these initializations complete, we can now start looping over frames from the camera:

Line 48 reads the next frame from our camera while Lines 49-51 perform preprocessing, including:

  • Resizing to have a width of 400px (the smaller our input frame is, the less data there is, and thus the faster our algorithms will run).
  • Converting to grayscale.
  • Gaussian blurring (to reduce noise).

We then grab the current timestamp and draw it on the frame (Lines 54-57).

With one final check, we can perform motion detection:

On Line 62 we ensure that we have read at least frameCount frames to build our background subtraction model.

If so, we apply the .detect method of our motion detector, which returns a single variable, motion.

If motion is None, then we know no motion has taken place in the current frame. Otherwise, if motion is not None (Line 67), then we need to draw the bounding box coordinates of the motion region on the frame.

Line 76 updates our motion detection background model while Line 77 increments the total number of frames read from the camera thus far.

Finally, Line 81 acquires the lock required to support thread concurrency while Line 82 sets the outputFrame.

We need to acquire the lock to ensure the outputFrame variable is not accidentally being read by a client while we are trying to update it.

Our next function, generate, is a Python generator used to encode our outputFrame as JPEG data — let’s take a look at it now:

Line 86 grabs global references to our outputFrame and lock, similar to the detect_motion function.

Then generate starts an infinite loop on Line 89 that will continue until we kill the script.

Inside the loop, we:

  • First acquire the lock (Line 91).
  • Ensure the outputFrame is not empty (Line 94), which may happen if a frame is dropped from the camera sensor.
  • Encode the frame as a JPEG image on Line 98 — JPEG compression is performed here to reduce load on the network and ensure faster transmission of frames.
  • Check to see if the success flag has failed (Lines 101 and 102), implying that the JPEG compression failed and we should ignore the frame.
  • Finally, serve the encoded JPEG frame as a byte array that can be consumed by a web browser.

That was quite a lot of work in a short amount of code, so definitely make sure you review this function a few times to ensure you understand how it works.

The next function, video_feed, calls our generate function:

Notice how this function has the app.route signature, just like the index function above.

The app.route signature tells Flask that this function is a URL endpoint and that data is being served from http://your_ip_address/video_feed.

The output of video_feed is the live motion detection output, encoded as a byte array via the generate function. Your web browser is smart enough to take this byte array and display it in your browser as a live feed.
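A sketch of the video_feed route (a trivial placeholder generator stands in for the real one so the snippet runs on its own):

```python
from flask import Flask, Response

app = Flask(__name__)

def generate():
    # hypothetical placeholder; the real generator yields
    # JPEG-encoded frames in multipart byte format
    yield b''

@app.route("/video_feed")
def video_feed():
    # return the generated output along with the specific media
    # (MIME) type for a multipart stream of JPEG frames
    return Response(generate(),
        mimetype="multipart/x-mixed-replace; boundary=frame")
```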

Our final code block handles parsing command line arguments and launching the Flask app:

Lines 118-125 handle parsing our command line arguments.

We need three arguments here, including:

  • --ip: The IP address of the system you are launching the webstreaming.py file from.
  • --port: The port number that the Flask app will run on (you’ll typically supply a value of 8000 for this parameter).
  • --frame-count: The number of frames used to accumulate and build the background model before motion detection is performed. By default, we use 32 frames to build the background model.
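The three arguments can be sketched with argparse as follows (wrapped in a hypothetical build_argparser helper so it can be exercised without launching the app; the short flags are assumptions):

```python
import argparse

def build_argparser():
    # construct the argument parser for the three arguments above
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--ip", type=str, required=True,
        help="ip address of the device the server runs on")
    ap.add_argument("-o", "--port", type=int, required=True,
        help="port number the Flask app will run on (e.g. 8000)")
    ap.add_argument("-f", "--frame-count", type=int, default=32,
        help="# of frames used to build the background model")
    return ap
```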

Lines 128-131 launch a thread that will be used to perform motion detection.

Using a thread ensures the detect_motion function can safely run in the background — it will be constantly running and updating our outputFrame so we can serve any motion detection results to our clients.
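The thread launch pattern, with a trivial stand-in for detect_motion so the snippet runs on its own:

```python
import threading
import time

def detect_motion(frameCount):
    # stand-in for the real motion detection loop
    time.sleep(0.05)

# start the motion detection loop in a background daemon thread, so
# it is killed automatically when the main (Flask) thread exits
t = threading.Thread(target=detect_motion, args=(32,))
t.daemon = True
t.start()
```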

Finally, Lines 134 and 135 launch the Flask app itself.

The HTML page structure

As we saw in webstreaming.py, we are rendering an HTML template named index.html.

The template itself is populated by the Flask web framework and then served to the web browser.

Your web browser then takes the generated HTML and renders it to your screen.

Let’s inspect the contents of our index.html file:
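A minimal index.html consistent with the description (the title and heading text are assumptions):

```
<html>
  <head>
    <title>Pi Video Surveillance</title>
  </head>
  <body>
    <h1>Pi Video Surveillance</h1>
    <!-- Flask expands url_for('video_feed') into the route's URL -->
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>
```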

As we can see, this is a super basic web page; however, pay close attention to Line 7 — notice how we are instructing Flask to dynamically render the URL of our video_feed route.

Since the video_feed function is responsible for serving up frames from our webcam, the src of the image will be automatically populated with our output frames.

Our web browser is then smart enough to properly render the webpage and serve up the live video stream.

Putting the pieces together

Now that we’ve coded up our project, let’s put it to the test.

Open up a terminal and execute the following command:
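Assuming you are running the script on the device itself, the launch command looks like this (the IP and port values are examples):

```shell
$ python webstreaming.py --ip 0.0.0.0 --port 8000
```

From another machine on the same network, open a browser to http://<device-ip>:8000 to view the stream.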

As you can see in the video, I opened connections to the Flask/OpenCV server from multiple browsers, each with multiple tabs. I even pulled out my iPhone and opened a few connections from there. The server didn’t skip a beat and continued to serve up frames reliably with Flask and OpenCV.

Join the embedded computer vision and deep learning revolution!

I first started playing guitar twenty years ago when I was in middle school. I wasn’t very good at it and I gave it up only a couple years after. Looking back, I strongly believe the reason I didn’t stick with it was because I wasn’t learning in a practical, hands-on manner.

Instead, my music teacher kept trying to drill theory into my head — but as an eleven year old kid, I was just trying to figure out whether I even liked playing guitar, let alone if I wanted to study the theory behind music in general.

About a year and a half ago I decided to start taking guitar lessons again. This time, I took care to find a teacher who could blend theory and practice together, showing me how to play songs or riffs while at the same time learning a theoretical technique.

The result? My finger speed is now faster than ever, my rhythm is on point, and I can annoy my wife to no end rocking Sweet Child of Mine on my Les Paul.

My point is this — whenever you are learning a new skill, whether it’s computer vision, hacking with the Raspberry Pi, or even playing guitar, one of the fastest, fool-proof methods to pick up the technique is to design (small) real-world projects around the skill and try to solve it.

For guitar, that meant learning short riffs that not only taught me parts of actual songs but also gave me a valuable technique (such as mastering a particular pentatonic scale, for instance).

In computer vision and image processing, your goal should be to brainstorm mini-projects and then try to solve them. Don’t get too complicated too quickly, that’s a recipe for failure.

Instead, grab a copy of my Raspberry Pi for Computer Vision book, read it, and use it as a launchpad for your personal projects.

When you’re done reading, go back to the chapters that inspired you the most and see how you can extend them in some manner (even if it’s just applying the same technique to a different scenario).

Solving the mini-projects you brainstorm will not only keep you interested in the subject (since you personally thought of them), but they’ll teach you hands-on skills at the same time.

Today’s tutorial — motion detection and streaming to a web browser — is a great starting point for such a mini-project. I hope that now that you’ve gone through this tutorial, you have brainstormed ideas on how you may extend this project to your own applications.

But, if you’re interested in learning more…

My new book, Raspberry Pi for Computer Vision, has over 40 projects related to embedded computer vision + Internet of Things (IoT). You can build upon the projects in the book to solve problems around your home, business, and even for your clients. Each of these projects has an emphasis on:

  • Learning by doing.
  • Rolling up your sleeves.
  • Getting your hands dirty in code and implementation.
  • Building actual, real-world projects using the Raspberry Pi.

A handful of the highlighted projects include:

  • Daytime and nighttime wildlife monitoring
  • Traffic counting and vehicle speed detection
  • Deep Learning classification, object detection, and instance segmentation on resource constrained devices
  • Hand gesture recognition
  • Basic robot navigation
  • Security applications
  • Classroom attendance
  • …and many more!

The book also covers deep learning using the Google Coral and Intel Movidius NCS coprocessors (Hacker + Complete Bundles). We’ll also bring in the NVIDIA Jetson Nano to the rescue when more deep learning horsepower is needed (Complete Bundle).

In case you missed the Kickstarter, you may wish to watch my announcement video:

Are you ready to join me to learn about computer vision and how to apply embedded devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano?

If so, take a look at the book using the link below!

Pre-order my Raspberry Pi for Computer Vision book!

Summary

In this tutorial you learned how to stream frames from a server machine to a client web browser. Using this web streaming we were able to build a basic security application to monitor a room of our house for motion.

Background subtraction is an extremely common method utilized in computer vision. Typically, these algorithms are computationally efficient, making them suitable for resource-constrained devices, such as the Raspberry Pi.

After implementing our background subtractor, we combined it with the Flask web framework, enabling us to:

  1. Access frames from RPi camera module/USB webcam.
  2. Apply background subtraction/motion detection to each frame.
  3. Stream the results to a web page/web browser.

Furthermore, our implementation supports multiple clients, browsers, or tabs — something that you will not find in most other implementations.

Whenever you need to stream frames from a device to a web browser, definitely use this code as a template/starting point.

To download the source code to this post, and be notified when future posts are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


62 Responses to OpenCV – Stream video to web browser/HTML page

  1. auraham September 2, 2019 at 10:14 am #

    Great tutorial, as always! Although, sorry for your car.

    • Adrian Rosebrock September 5, 2019 at 10:19 am #

      Thanks Auraham.

  2. Tom September 2, 2019 at 10:51 am #

    if you’re getting:

    TypeError: only integer scalar arrays can be converted to a scalar index

    add “.tobytes()” in line 108:

    yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + bytearray(encodedImage.tobytes()) + b'\r\n')

    • Adrian Rosebrock September 5, 2019 at 10:19 am #

      I think you’re using Python 2.7. The code is only compatible with Python 3.

      • Tom September 6, 2019 at 6:01 am #

        3.6
        great tutorial ofc, guru!

        • Adrian Rosebrock September 12, 2019 at 11:26 am #

          Hmm, I’m honestly not sure what the issue may be then. Perhaps share your OS as well just in case any other readers know what the problem may be.

  3. Rajat September 2, 2019 at 10:55 am #

    Hello Adrian. Great tutorial once again !
    My query is: Can I view the cam feed from a device which not connected to same network ? Say if I am at my office can I view it from there ?

    • Adrian Rosebrock September 5, 2019 at 10:19 am #

      You would need to utilize port forwarding on your router.

  4. Samjith September 2, 2019 at 11:29 am #

    Hi Adrian ,
    Are u using CCTV camera to record videos ? How will u connect it into raspberry ?

    • Samjith September 2, 2019 at 11:58 am #

      Actually I’m using a analog cctv camera , which didn’t have usb output .. How to convert this into usb output and connect to raspberry pi usb port ?

    • Adrian Rosebrock September 5, 2019 at 10:18 am #

      Sorry, I don’t have any guides with CCTV.

  5. Imam Ferianto September 2, 2019 at 12:51 pm #

    This detection process is only run when someone acces the server via browser. The correct method I belive is run the detection on background process and pipe stream via ffmpeg to youtube or other rtmp, then display stream on html for public view.

    • Adrian Rosebrock September 5, 2019 at 10:27 am #

      Actually, the detection method will run in the background *regardless* of whether or not your web browser is open 🙂

  6. Mattia September 2, 2019 at 12:54 pm #

    Hi Adrian,
    I noticed that you used flask to post the web page, the official documentation states that flask is not suitable for production (I usually use gunicorn as a production webserver), do you think it is safe to use Flask directly?.

    • Adrian Rosebrock September 5, 2019 at 10:17 am #

      I’ve addressed that question a few times in the comments already. Please refer to my replies.

  7. Vincent PINTE DEREGNAUCOURT September 2, 2019 at 1:05 pm #

    when the bundle will be ready (I see pre order : ok, but the book + code + … are for sept, nov, january ?)
    Thx

    • Adrian Rosebrock September 5, 2019 at 10:17 am #

      Chapters will start to release in September 2019. If you have pre-ordered a copy you will receive an email with a link to download the files.

  8. Scott Adrian Braconnier September 2, 2019 at 1:20 pm #

    Adrian,

    So sorry to hear about your vehicle. “It’s a horrible feeling” to have your personal space violated like that.

    Finding a way to share your store in a positive way is truly a gift and shows a lot about your character as a person.

    Karma is all around us and it will prevail.

    • Adrian Rosebrock September 5, 2019 at 10:17 am #

      Thanks Scott 🙂

  9. Brian September 2, 2019 at 2:09 pm #

    Awesome content!

    Wouldn’t it be great to have car-cams recording each other in parking lots…communicating via dynamic mesh networks…ultimately uploading relevant footage to owners of stolen cars…?! This avoids the problem of the dvr footage lost with the stolen vehicle, as nearby cars capture and save the action.

    (My car was once stolen from a hospital parking lot while I visited a friend in the hospital. It was recovered a few weeks later.)

    • Adrian Rosebrock September 5, 2019 at 10:16 am #

      That would be pretty neat!

  10. Pero September 2, 2019 at 4:59 pm #

    Thanks for the great post, Adrian, I tried it out immediately as I got your newsletter 😀 However, it seems that imutils package (0.4.5) has no grab_contours() method..or at least I’m getting that error message. I also can’t find it in the site-packages. any idea what could’ve gone wrong?

    • Adrian Rosebrock September 5, 2019 at 10:16 am #

      It sounds like you have an old version of imutils installed. You should upgrade it via:

      $ pip install --upgrade imutils

  11. Rahmoun September 2, 2019 at 5:00 pm #

    Thanks a lot Dear Adrian. Pretty good projects, my students and I love what you are doing: Thanks for sharing. My hat is off. BRAVO

    • Adrian Rosebrock September 5, 2019 at 10:16 am #

      Thanks Rahmoun!

  12. Soïta September 2, 2019 at 6:32 pm #

    Hi Adrian,

    Another great post here !!

    How about a production deployment with Flask ? This is a development one.

    Not so secure…..

    • Adrian Rosebrock September 5, 2019 at 10:16 am #

      This a computer vision blog. I teach CV algorithms and techniques, not web development ones. There are many books and courses on Flask, feel free to refer to them to add any other bells and whistles.

  13. Alan McDonley September 3, 2019 at 1:20 am #

    @Adrian So sorry to hear about the theft of your car, and your father – yikes.

    I typed it all in and it worked!
    (with the addition to detect_motion() after the vs.read(), if frame is None: continue.)

    Actually, it worked my non-aspirated RPi 3B with PiCam v1.3 to 70% load and 75 degC. I added a time.sleep(0.1) to the end of detect_motion() which dropped the load to 33% and keeps the temp at at a “cool” 60 degC.

    • Adrian Rosebrock September 5, 2019 at 10:15 am #

      Thanks Alan. And congrats on getting the script to run on your RPi!

  14. Falahgs September 3, 2019 at 2:35 am #

    I am very sad about stealing your car…..you are a great person
    I am a follower of all your publications..All wonderful
    I hope to will you find your car as soon as possible
    I wish you the best luck…
    this is a great post

    thanks so much for knowledge sharing

    • Adrian Rosebrock September 5, 2019 at 10:15 am #

      Thank you for the kind words, Falahgs.

  15. Mae September 3, 2019 at 6:27 am #

    Shucks, I’m sorry to hear about your car.

    Great article by the way! Easy to read and understand.

    Question: How would we add user authentication to this?

    Cheers!

    • Adrian Rosebrock September 5, 2019 at 10:15 am #

      That’s really up to you. Flask is a popular web framework. There are many books and courses on Flask, feel free to refer to them — those are additional bells and whistles outside of the concept taught in this post.

  16. Pedro September 3, 2019 at 7:00 am #

    Thanks, really interesting. Did you think in recording the video to visualize it after?

    • Adrian Rosebrock September 5, 2019 at 10:14 am #

      Yes, you can use my KeyClipWriter.

      • Pedro September 6, 2019 at 5:26 pm #

        Thanks!

  17. Danial September 4, 2019 at 1:41 am #

    Hi Adrian,

    It is another great post and I learned a lot from this.

    Please tell me how to access this html page from other networks?

    • Adrian Rosebrock September 5, 2019 at 10:14 am #

      You would need to update your router settings to perform port forwarding.

  18. Tomal September 4, 2019 at 7:50 am #

    Great work!
    Really love it.

    I just have a little question. Is it possible to continue run the background thread for motion detection without running (opening) any browser window. But I can see the result any time (or whenever required) in the browser.

    Appreciate your feedback.

  19. Peter September 4, 2019 at 8:14 am #

    Hallo. thanks for your help, its always been a life saver each time. I know you do so much but i do need a little help. Been trying to detect objects using a remote camera while streaming on the web but i havent been quit successfull. is there anything anything i should realy do to better my suituation.

  20. Tomal September 5, 2019 at 2:16 am #

    I tried it today, but whenever I open the stream in a browser window it just locks up my PC and I have to force a power off to get it back. It seems to eat up all my resources.

    By the way, it is an i3 PC with Nvidia 670/2GB and OpenCV 4.0.

    Regards

    • Adrian Rosebrock September 5, 2019 at 10:12 am #

      What operating system are you using?

      • Tomal September 5, 2019 at 1:17 pm #

        Thanks for your response.
        My OS: Ubuntu 18.04

        • Tomal September 5, 2019 at 1:19 pm #

          I forgot to tell you one thing: I tried it with an IP cam, not a USB one.

          • Adrian Rosebrock September 12, 2019 at 11:26 am #

            That could be it. Try using a USB webcam and see if that resolves the issue.

  21. Carlos Urteaga September 6, 2019 at 2:16 pm #

    Amazing page and work, congrats, and sorry about your car.
    I was wondering if you know how to publish, with the same Flask app, a list of pictures/descriptions on the right side, like a history board of previous movements?

    • Adrian Rosebrock September 12, 2019 at 11:25 am #

      Thanks Carlos, I’m glad you enjoyed the project.

      As for your question, I suggest you look into some basic web development, specifically HTML, JavaScript, and CSS. That’s really more of a web dev/GUI-related question than it is CV.

      Otherwise, you might like this tutorial on saving key events.

  22. Mark September 6, 2019 at 7:29 pm #

    Hi, is it possible for a website to embed the live video feed? And how do I do that? Thanks

    • Adrian Rosebrock September 12, 2019 at 11:23 am #

      This code already shows you how to embed the live video stream so I’m not really sure what you’re asking.

  23. Umut Alihan Dikel September 7, 2019 at 4:25 am #

    Hi Adrian,

    thank you very much for such a precious contribution, and I wish that you find your car asap!!

    Your code works seamlessly and I learned a lot on the way. I am trying to add the feature to turn on and off the camera controlled from index.html (client side). Could you please give a hint or show the way how it should be done?

    I have tried removing the “<img src” element by clicking a button, but that only removes the image on the client side. I would like to control the stream itself by starting/stopping it.

    Best regards,
    Alihan

    • Adrian Rosebrock September 12, 2019 at 11:24 am #

      I would suggest you look into basic web development. Learn some basic HTML and JavaScript and you’ll be able to make quick work of the project.

  24. Muhammad Najib September 8, 2019 at 5:41 am #

    Hi Adrian,
    it’s a great project, thanks, and I am sorry about your car.

    • Adrian Rosebrock September 12, 2019 at 11:23 am #

      Thanks Muhammad!

  25. Carlos Córdoba Ruiz September 8, 2019 at 7:57 am #

    Hi Adrian, nice post. I am thinking about how to consume the stream generated by OpenCV from another computer (maybe using VLC). I made some examples consuming the video from OpenCV’s video capture and the Flask URL, but they didn’t work. Do you have any idea how?
    Regards

    • Adrian Rosebrock September 12, 2019 at 11:22 am #

      Sorry, I don’t have any code or tutorials for taking the output of an OpenCV script and streaming it to VLC.

  26. Adam September 8, 2019 at 4:06 pm #

    Hey Adrian, as usual – great post and sorry for your loss (been there a few years ago!!!)

    Can you possibly give me a hint if you had multiple cameras in action and wanted to stream all of them into a browser?

    I have followed your tutorial using a message queue. I am using Flask/Socket.IO on a central server to stream base64-encoded frames from 3 R-Pis to clients in the browser (which update the image source for each frame), but I have a feeling it’s not optimal. It works like butter from localhost (the central server), but mobiles and tablets are getting massive lag and socket.io is not catching up!

    It’s no worries if you have no interest in this, I understand it’s an OpenCV blog. An awesome OpenCV blog 😉

    • Adrian Rosebrock September 12, 2019 at 11:22 am #

      Are all the cameras on a single RPi? Keep in mind that you really can’t use more than two cameras on a RPi, it will be far too slow.

      Otherwise, I would suggest you:

      1. Access each individual camera in a single Python script
      2. Grab new frames from each camera
      3. Stack them together (either horizontally or vertically)
      4. Then output that to your web browser

      That should reduce latency. Otherwise it sounds like you’re trying to stream two separate sets of frames out which will certainly slow down the system.
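      The four numbered steps above can be sketched in a few lines. The frames below are hypothetical placeholders (solid black and solid white images); in the real script they would come from `cv2.VideoCapture.read()` or `VideoStream.read()` calls, one per camera:

      ```python
      import numpy as np

      # Hypothetical stand-ins for the latest frame grabbed from each camera
      # (steps 1 and 2: access each camera and grab its newest frame).
      frame_a = np.zeros((240, 320, 3), dtype=np.uint8)   # camera 1: black frame
      frame_b = np.full((240, 320, 3), 255, dtype=np.uint8)  # camera 2: white frame

      # Step 3: stack the frames side by side into one combined image.
      # Use np.vstack instead for a vertical layout.
      combined = np.hstack([frame_a, frame_b])

      # Step 4 would JPEG-encode `combined` (e.g. cv2.imencode(".jpg", combined))
      # and yield it to the browser as a single MJPEG stream.
      print(combined.shape)
      ```

      Stacking requires the frames to share the same height (for `hstack`) or width (for `vstack`), so resize them to a common size first if the cameras differ.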

  27. Ujjawal Singh September 11, 2019 at 2:36 am #

    Great post Adrian sir.
    I want to take an integer from a POST request in video_feed(), pass that integer as an argument to a utility function (the generator() function in your case), and then show the result as you did.

    • Adrian Rosebrock September 12, 2019 at 11:20 am #

      I’m not sure I fully understand. You’re saying a video is uploaded via a POST request?

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time-consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.
