Long exposure with OpenCV and Python

One of my favorite photography techniques is long exposure, the process of creating a photo that shows the effect of passing time, something that traditional photography does not capture.

When applying this technique, water becomes silky smooth, stars in a night sky leave light trails as the earth rotates, and car headlights/taillights illuminate highways in a single band of continuous motion.

Long exposure is a gorgeous technique, but in order to capture these types of shots, you need to take a methodical approach: mounting your camera on a tripod, applying various filters, computing exposure values, etc. Not to mention, you need to be a skilled photographer!

As a computer vision researcher and developer, I know a lot about processing images — but let’s face it, I’m a terrible photographer.

Luckily, there is a way to simulate long exposures by applying image/frame averaging. By averaging the images captured from a mounted camera over a given period of time, we can (in effect) simulate long exposures.

And since videos are just a series of images, we can easily construct long exposures from the frames by averaging all frames in the video together. The result is stunning long exposure-esque images, like the one at the top of this blog post.

To learn more about long exposures and how to simulate the effect with OpenCV, just keep reading.


Long exposure with OpenCV and Python

This blog post is broken down into three parts.

In the first part of this blog post, we will discuss how to simulate long exposures via frame averaging.

From there, we’ll write Python and OpenCV code that can be used to create long exposure-like effects from input videos.

Finally, we’ll apply our code to some example videos to create gorgeous long exposure images.

Simulating long exposures via image/frame averaging

The idea of simulating long exposures via averaging is hardly a new idea.

In fact, if you browse popular photography websites, you’ll find tutorials on how to manually create these types of effects using your camera and tripod (a great example of such a tutorial can be found here).

Our goal today is to simply implement this approach so we can automatically create long exposure-like images from an input video using Python and OpenCV. Given an input video, we’ll average all frames together (weighting them equally) to create the long exposure effect.

Note: You can also create this long exposure effect using multiple images, but since a video is just a sequence of images, it’s easier to demonstrate this technique using video. Just keep this in mind when applying the technique to your own files.

As we’ll see, the code itself is quite simple but has a beautiful effect when applied to videos captured using a tripod, ensuring there is no camera movement in between frames.

Implementing long exposure simulation with OpenCV

Let’s begin by opening a new file named long_exposure.py and inserting the following code:

Lines 2-4 handle our imports — you’ll need imutils and OpenCV.

In case you don’t already have imutils installed in your environment, simply use pip:
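The install command (shown with the same $ prompt convention used in the comments below) is:

```shell
$ pip install imutils
```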

If you don’t have OpenCV configured and installed, head on over to my OpenCV 3 tutorials page and select the guide that is appropriate for your system.

We parse our two command line arguments on Lines 7-12:

  • --video : The path to the video file.
  • --output : The path + filename to the output “long exposure” file.
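As a sketch of that parsing step (the short flag names and help strings are assumptions based on the descriptions above):

```python
import argparse

# construct the argument parser for the two arguments described above
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", required=True,
    help="path to input video file")
ap.add_argument("-o", "--output", required=True,
    help="path to output 'long exposure' image")

# in the script itself this is followed by:
# args = vars(ap.parse_args())
```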

Next, we will perform some initialization steps:

On Line 16 we initialize RGB channel averages which we will later merge into the final long exposure image.

We also initialize a count of the total number of frames on Line 17.

For this tutorial, we are working with a video file that contains all of our frames, so it is necessary to open a file pointer to the video capture stream  on Line 21.

Now let’s begin our loop which will calculate the average:

In our loop, we grab frames from the stream (Line 27) and split each frame into its respective BGR channels (Line 35). Notice the exit condition squeezed in between: if a frame is not grabbed from the stream, we have reached the end of the video file and we break out of the loop (Lines 31 and 32).

In the remainder of the loop we’ll perform the running average computation:
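The update itself is plain arithmetic, which we can verify on synthetic data (one channel shown; the names are illustrative):

```python
import numpy as np

# two synthetic 2x2 "red channels" standing in for frames of a video
R1 = np.full((2, 2), 10.0)
R2 = np.full((2, 2), 30.0)

(rAvg, total) = (None, 0)
for R in (R1, R2):
    if rAvg is None:
        # first frame: the average is just the frame itself
        rAvg = R
    else:
        # weight the existing average by the frame count, add the new
        # channel, and divide by the updated count
        rAvg = ((total * rAvg) + R) / (total + 1.0)
    total += 1

print(rAvg[0, 0])  # the mean of 10.0 and 30.0, i.e. 20.0
```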

If this is the first iteration, we set the initial RGB averages to the corresponding first frame channels grabbed on Lines 38-41 (it is only necessary to do this on the first go-around, hence the if-statement).

Otherwise, we’ll compute the running average for each channel on Lines 45-48. The averaging computation is quite simple — we take the total number of frames times the channel-average, add the respective channel, and then divide that result by the floating point total number of frames (we add 1 to the total in the denominator because this is a fresh frame). We store the calculations in respective RGB channel average arrays.

Finally, we increment the total number of frames, allowing us to maintain our running average (Line 51).

Once we have looped over all frames in the video file, we can merge the (average) channels into one image and write it to disk:

On Line 54, we utilize the handy cv2.merge function while specifying each of our channel averages in a list. Since these arrays contain floating point numbers (as they are averages across all frames), we tack on astype("uint8") to convert the pixels back to integers in the range [0, 255].

We write the avg image to disk on Line 55 using the command line argument path + filename. We could also display the image on screen via cv2.imshow, but since processing a video file can take a lot of CPU time, we simply save the image to disk in case we want to use it as a desktop background or share it with friends on social media.

The final step in this script is to perform cleanup by releasing our video stream pointer (Line 58).

Long exposure and OpenCV results

Let’s see our script in action by processing three sample videos. Note that each video was captured by a camera mounted on a tripod to ensure stability.

Note: The videos I used to create these examples do not belong to me and were licensed from the original creators; therefore, I cannot include them along with the source code download of this blog post. Instead, I have provided links to the original videos should you wish to replicate my results.

Our first example is a 15-second video of water rushing over rocks — I have included a sample of frames from the video below:

Figure 1: Sample frames of river water rushing over rocks.

To create our long exposure effect, just execute the following command:
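Assuming hypothetical file names (the original videos aren’t included, as noted above), the invocation looks like:

```shell
$ python long_exposure.py --video videos/river_01.mov --output river_01.png
```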

Figure 2: Long exposure of 15 seconds of river water rushing over rocks, constructed by averaging frames with Python and OpenCV.

Notice how the water has been blended into a silky form due to the averaging process.

Let’s continue with a second example of a river, again, with a montage of frames displayed below:

Figure 3: Sample frames of another river.

The following command was used to generate the long exposure image:
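As before, with hypothetical file names:

```shell
$ python long_exposure.py --video videos/river_02.mov --output river_02.png
```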

Figure 4: Long exposure of a second river which makes the water look silky smooth (created using OpenCV).

Notice how the stationary rocks remain unchanged, but the rushing water is averaged into a continuous sheet, thus mimicking the long exposure effect.

This final example is my favorite as the color of the water is stunning, giving a stark contrast between the water itself and the forest:

Figure 5: Sample frames of a rushing river through a forest.

Generating the long exposure with OpenCV gives the output a surreal, dreamlike feeling:

Figure 6: A dream-like long exposure of a river through a forest created using Python and OpenCV.

Different outputs can be constructed by sampling the frames from the input video at regular intervals rather than averaging all frames. This is left as an exercise for you, the reader, to implement.

Summary

In today’s blog post we learned how to simulate long exposure images using OpenCV and image processing techniques.

To mimic a long exposure, we applied frame averaging, the process of averaging a set of images together. We made the assumption that our input images/video were captured using a mounted camera (otherwise the resulting output image would be distorted).

While this is not a true “long exposure”, the effect is quite (visually) similar. And more importantly, this allows you to apply the long exposure effect without (1) being an expert photographer or (2) having to invest in expensive cameras, lenses, and filters.

If you enjoyed today’s blog post, be sure to enter your email address in the form below to be notified when future tutorials are published!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!


33 Responses to Long exposure with OpenCV and Python

  1. Sam August 14, 2017 at 12:23 pm #

    Awesome! Thank you for sharing this technique

  2. Dana E Moore August 14, 2017 at 12:48 pm #

    Terrific. The code sample also illustrated band separation, which I actually needed for some overhead imagery work where the inputs have three visible bands (RGB) and a near-infrared band. It saved me a bit of poking around to understand how to split the channels.

    • Adrian Rosebrock August 14, 2017 at 1:00 pm #

      Thanks Dana, I’m happy you enjoyed the post! 🙂

  3. pavan August 14, 2017 at 2:16 pm #

    nice work bro

    • Adrian Rosebrock August 14, 2017 at 5:04 pm #

      Thanks Pavan.

  4. Chris August 14, 2017 at 2:52 pm #

    Using video was a good idea. Now I’m going to have to figure out how to apply this to video output. Maybe every output frame is an average of N frames.

    • Adrian Rosebrock August 14, 2017 at 5:02 pm #

      Are you trying to apply long exposure to a live video stream? If so I would do a rolling average every N frames.

  5. Hans Middelbeek August 14, 2017 at 2:52 pm #

    Perfect instructions, thanks for that! I am very interested to know whether a comparable technique can be used for a video of e.g. a bee hive, where the final image only shows the stationary background and not the moving bees.

    • Adrian Rosebrock August 14, 2017 at 5:03 pm #

      Yes, this would be a great method for showing stationary objects. The individual bees will get lost in the average (provided the bees aren’t overly dense, of course).

      • Hans Middelbeek August 15, 2017 at 5:57 am #

        Hi! I tested the program on a short and on a long MOV video, but in both cases I get “Floating point exception” as soon as the program is started and has responded with
        “[INFO] computing frame averages (this will take awhile)…”

        What am I doing wrong?

        • Adrian Rosebrock August 17, 2017 at 9:22 am #

          Hi Hans — that is very strange, I’m not sure what could be causing that. Can you validate that the frames are being successfully read from the input video?

    • Or Shur August 15, 2017 at 2:48 am #

      Depending on the exact scenario, you’d might want to consider calculating the median instead of the average in this case (imagine you have 3 frames with one of them containing a bee. The averaged image might contain a blurred “stain” in the bee location as the average is 2/3 background and 1/3 bee).

  6. Linus August 14, 2017 at 5:31 pm #

    Woah… This is awesome! Thanks for the post, Adrian, gonna try this tomorrow! 😀

    Especially the last sentence is true:

    > And more importantly, this allows you to apply the long exposure effect without (1) being an expert photographer or (2) having to invest in expensive cameras, lenses, and filters.

    Even though I’m not a beginner photographer and have a quite good camera, I’ve not yet managed achieving this effect properly.

    That’s one of the many reasons I like CV for. Pretty impressive, this diversity of use-cases 🙂

    – Linus

  7. chiranjit August 15, 2017 at 1:53 am #

    You are the boss!! I am following you for few months now. Do u sell anything ? You free materials are so good.

  8. Vijay August 15, 2017 at 10:01 am #

    Hi Adrian,

    Thanks for the tutorial! I made a minor change and used the function cv2.accumulateWeighted to calculate the weighted average as following

    cv2.accumulateWeighted(R, rAvg, 1 / ( total ) )
    cv2.accumulateWeighted(G, gAvg, 1 / ( total ) )
    cv2.accumulateWeighted(B, bAvg, 1 / ( total ) )

    Also initialise total to 1.0 before the loop starts (the original code starts with 0).

    I got same (or similar) results with execution time cut down to approx 55% of the original!

    please share your thoughts.

    Vijay

  9. Martin August 15, 2017 at 4:22 pm #

    I tested your long_exposure code and found the following:
    the mp4 video from my Sony camera has a resolution of 1440×1080 but an aspect ratio of 16:9,
    so the video has to be horizontally scaled.
    But the aspect ratio is ignored in the code.

    • Adrian Rosebrock August 17, 2017 at 9:13 am #

      I’m not sure what you mean by the aspect ratio is ignored in the code. We don’t change, modify, or even resize the frames in this example.

  10. Martin August 18, 2017 at 12:46 pm #

    This is interesting: https://watermark-cvpr17.github.io/
    It is about removing watermarks from pictures.
    When we take Adrian’s program and feed it with many pictures, in the end we should get the watermark only.

  11. Pit August 19, 2017 at 8:30 am #

    I wonder what the advantage is of normalizing the average in every step. Or rather what the disadvantage is of doing it only once at the end (other than that it’s much faster)?

    • Adrian Rosebrock August 21, 2017 at 3:44 pm #

      We’re not normalizing the average, we’re computing the weighted average (perhaps that is what you meant)? The problem with doing it once at the end is that you end up with a lot of frames stored in memory. You could easily exceed your RAM and the program would crash. To speed up the program you could batch-compute the weighted average.

    • Martin Schlatter September 13, 2017 at 1:49 pm #

      This is not normalizing nor is it a weighted average. It is a kind of running average.
      total * rAvg is the same as rSum. Adding (1*R) is the same as rSum+R. And dividing by (total+1) gives the new average, as in rSumNew/totalNew.

      You could achieve the same by doing:

      while True:
          rSum += R
          gSum += G
          bSum += B
          total += 1

      # finished loop:
      if total > 0:
          rAvg = rSum / total
          gAvg = gSum / total
          bAvg = bSum / total

  12. KelvinS August 21, 2017 at 1:26 pm #

    Hi, I could not find where the imutils package is used in the code. Is some imutils function really being used? Thanks for the great tutorial.

    • Adrian Rosebrock August 21, 2017 at 3:34 pm #

      You need to install “imutils” via pip:

      $ pip install imutils

  13. wjb711 August 22, 2017 at 4:31 am #

    Hello, Adrian Rosebrock
    from my understanding, it’s like
    frame+frame+frame+frame+… / (frame times)
    is it?

    • wjb711 August 22, 2017 at 4:47 am #

      if so,
      another way is to save frame to a dict{}, then sum dict{} items and do average 😀

      • Adrian Rosebrock August 22, 2017 at 10:44 am #

        Please see my reply to “Pit” above. You wouldn’t want to store all those images in a single dictionary or array. RAM/memory would become a concern at that point. You could do “batches” of frames, but don’t store the entire set of frames in memory.

  14. Pablo Rogina August 23, 2017 at 3:19 pm #

    Adrian, great post, although just one detail: when you say “something that traditional photography does not capture”, it looks like that technique was used with film cameras for a long time (I remember being a kid fascinated with those pictures of red lights of cars on highways…)

    Here’s an example http://www.instructables.com/id/Long-exposure-photography-1/

    Regards.

  15. Álvaro October 5, 2017 at 3:25 am #

    So nice! I’m enjoying it a lot.

    Just one thing, have you realized (I guess you did) that this technique is also a ‘objects-in-motion’ supressor? First time I applied it was over a highway with vehicles constantly passing through; I was looking for the effect on a waving flag in the foreground with the vehicles in the background. The result was: no vehicles at all and an amazing effect on the flag.

    Cheers!

    • Adrian Rosebrock October 6, 2017 at 5:07 pm #

      Hi Álvaro — you’re absolutely correct. Computing the average of a large set of frames will “average out” the moving objects.

  16. gal brandwine October 27, 2017 at 1:09 pm #

    hey Adrian,
    same as usual, another perfect tutorial.

    a question:
    is there a reason why you used cv2.split instead of NumPy indexing?

    I’ve tried them both, and with NumPy indexing, it’s more than 50% faster!

    line 35:
    instead of cv2.split
    i used :
    B = frame[:, :, 0]
    G = frame[:, :, 1]
    R = frame[:, :, 2]

    thanks for your awesome tutorials!

    • Adrian Rosebrock October 30, 2017 at 2:03 pm #

      Thanks for the kind words. That’s interesting that NumPy indexing is faster. Thanks for the tip — I’ll try it out.

  17. stephan schulz October 31, 2017 at 1:07 pm #

    I was wondering how your approach differs from this openCv way that uses accumulate() and convertTo()

    http://answers.opencv.org/question/111867/how-to-take-average-every-x-frames/?answer=111972#post-id-111972

    thanks for your great work.
