Holistically-Nested Edge Detection with OpenCV and Deep Learning

In this tutorial, you will learn how to apply Holistically-Nested Edge Detection (HED) with OpenCV and Deep Learning. We’ll apply Holistically-Nested Edge Detection to both images and video streams, followed by comparing the results to OpenCV’s standard Canny edge detector.

Edge detection enables us to find the boundaries of objects in images and was one of the first applied use cases of image processing and computer vision.

When it comes to edge detection with OpenCV you’ll most likely utilize the Canny edge detector; however, there are a few problems with the Canny edge detector, namely:

  1. Setting the lower and upper values for hysteresis thresholding is a manual process that requires experimentation and visual validation.
  2. Hysteresis thresholding values that work well for one image may not work well for another (this is nearly always true for images captured in varying lighting conditions).
  3. The Canny edge detector often requires a number of preprocessing steps (i.e. conversion to grayscale, blurring/smoothing, etc.) in order to obtain a good edge map.

Holistically-Nested Edge Detection (HED) attempts to address the limitations of the Canny edge detector through an end-to-end deep neural network.

This network accepts an RGB image as an input and then produces an edge map as an output. Furthermore, the edge map produced by HED does a better job preserving object boundaries in the image.

To learn more about Holistically-Nested Edge Detection with OpenCV, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Holistically-Nested Edge Detection with OpenCV and Deep Learning

In this tutorial we will learn about Holistically-Nested Edge Detection (HED) using OpenCV and Deep Learning.

We’ll start by discussing the Holistically-Nested Edge Detection algorithm.

From there we’ll review our project structure and then utilize HED for edge detection in both images and video.

Let’s go ahead and get started!

What is Holistically-Nested Edge Detection?

Figure 1: Holistically-Nested Edge Detection with OpenCV and Deep Learning (source: 2015 Xie and Tu Figure 1)

The algorithm we’ll be using here today is from Xie and Tu’s 2015 paper, Holistically-Nested Edge Detection, or simply “HED” for short.

The work of Xie and Tu describes a deep neural network capable of automatically learning rich hierarchical edge maps that are capable of determining the edge/object boundary of objects in images.

This edge detection network is capable of obtaining state-of-the-art results on the Berkeley BSDS500 and NYU Depth datasets.

A full review of the network architecture and algorithm is outside the scope of this post, so please refer to the official publication for more details.

Project structure

Go ahead and grab today’s “Downloads” and unzip the files.

From there, you can inspect the project directory with the following command:

Our HED Caffe model is included in the hed_model/  directory.

I’ve provided a number of sample images in the images/ directory, including one of myself, my dog, and a sample cat image I found on the internet.

Today we’re going to review the detect_edges_image.py  and detect_edges_video.py  scripts. Both scripts share the same edge detection process, so we’ll be spending most of our time on the HED image script.

Holistically-Nested Edge Detection in Images

The Python and OpenCV Holistically-Nested Edge Detection example we are reviewing today is very similar to the HED example in OpenCV’s official repo.

My primary contribution here is to:

  1. Provide some additional documentation (when appropriate)
  2. And most importantly, show you how to use Holistically-Nested Edge Detection in your own projects.

Let’s go ahead and get started — open up the detect_edges_image.py file and insert the following code:

Our imports are handled on Lines 2-4. We’ll be using argparse to parse command line arguments. OpenCV functions and methods are accessed through the cv2  import. Our os  import will allow us to build file paths regardless of operating system.

This script requires two command line arguments:

  • --edge-detector : The path to OpenCV’s deep learning edge detector. The path contains two Caffe files that will be used to initialize our model later.
  • --image : The path to the input image for testing. Like I said previously — I’ve provided a few images in the “Downloads”, but you should try the script on your own images as well.
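The imports and argument parsing described above can be sketched as follows (the exact script ships in the “Downloads”; the sample argv here is purely illustrative, and the cv2/os imports are used later in the script):

```python
import argparse

# construct the argument parser for the two command line arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--edge-detector", type=str, required=True,
    help="path to OpenCV's deep learning edge detector")
ap.add_argument("-i", "--image", type=str, required=True,
    help="path to input image")

# parse a sample argv so the sketch runs without a real model on disk
args = vars(ap.parse_args(["--edge-detector", "hed_model",
    "--image", "images/cat.jpg"]))
```

In the real script you would call `ap.parse_args()` with no arguments so the values come from the command line.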

Let’s define our CropLayer  class:

In order to utilize the Holistically-Nested Edge Detection model with OpenCV, we need to define a custom layer cropping class — we appropriately name this class CropLayer .

In the constructor of this class, we store the starting and ending (x, y)-coordinates of the crop (Lines 15-21).

The next step when applying HED with OpenCV is to define the getMemoryShapes function, the method responsible for computing the volume size of the inputs :

Line 27 derives the shape of the input volume as well as the target shape.

Line 28 extracts the batch size and number of channels from the inputs as well.

Finally, Line 29 extracts the height and width of the target shape, respectively.

Given these variables, we can compute the starting and ending crop (x, y)-coordinates on Lines 32-35.

We then return the shape of the volume to the calling function on Line 39.

The final method we need to define is the forward function. This function is responsible for performing the crop during the forward pass (i.e., inference/edge prediction) of the network:

Lines 43 and 44 take advantage of Python and NumPy’s convenient list/array slicing syntax.
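Putting the three methods together, the complete class looks like the following sketch, which mirrors the structure of OpenCV’s official HED sample (the line numbers in the text refer to the downloadable script, not this block):

```python
class CropLayer(object):
    def __init__(self, params, blobs):
        # initialize the starting and ending (x, y)-coordinates of the crop
        self.startX = 0
        self.startY = 0
        self.endX = 0
        self.endY = 0

    def getMemoryShapes(self, inputs):
        # the crop layer receives two inputs: the volume to crop and the
        # target shape; the output volume matches the target's spatial size
        (inputShape, targetShape) = (inputs[0], inputs[1])
        (batchSize, numChannels) = (inputShape[0], inputShape[1])
        (H, W) = (targetShape[2], targetShape[3])

        # compute the starting and ending crop coordinates
        self.startX = int((inputShape[3] - targetShape[3]) / 2)
        self.startY = int((inputShape[2] - targetShape[2]) / 2)
        self.endX = self.startX + W
        self.endY = self.startY + H

        # return the shape of the output volume
        return [[batchSize, numChannels, H, W]]

    def forward(self, inputs):
        # use array slicing to perform the crop during the forward pass
        return [inputs[0][:, :, self.startY:self.endY,
                self.startX:self.endX]]
```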

Given our CropLayer class we can now load our HED model from disk and register CropLayer with the net:

Our prototxt path and model path are built up using the --edge-detector  command line argument available via args["edge_detector"]  (Lines 48-51).

From there, both the protoPath  and modelPath  are used to load and initialize our Caffe model on Line 52.
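As a sketch of that path construction (the Caffe filenames match the hed_model/ directory from the “Downloads”; the hard-coded args dict stands in for the parsed command line):

```python
import os

# stand-in for the parsed --edge-detector command line argument
args = {"edge_detector": "hed_model"}

# build paths to the Caffe model definition and pretrained weights
protoPath = os.path.sep.join([args["edge_detector"],
    "deploy.prototxt"])
modelPath = os.path.sep.join([args["edge_detector"],
    "hed_pretrained_bsds.caffemodel"])

# with the paths in hand, the model would be loaded and the custom crop
# layer registered (requires OpenCV 3.4.4+):
#   net = cv2.dnn.readNetFromCaffe(protoPath, modelPath)
#   cv2.dnn_registerLayer("Crop", CropLayer)
```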

Let’s go ahead and load our input image :

Our original image  is loaded and spatial dimensions (width and height) are extracted on Lines 58 and 59.

We also compute the Canny edge map (Lines 64-66) so we can compare our edge detection results to HED.

Finally, we’re ready to apply HED:

To apply Holistically-Nested Edge Detection (HED) with OpenCV and deep learning, we:

  • Construct a blob  from our image (Lines 70-72).
  • Pass the blob through the HED net, obtaining the hed  output (Lines 77 and 78).
  • Resize the output to our original image dimensions (Line 79).
  • Scale our image pixels back to the range [0, 255] and ensure the type is "uint8"  (Line 80).

Finally, we’ll display:

  1. The original input image
  2. The Canny edge detection image
  3. Our Holistically-Nested Edge detection results

Image and HED Results

To apply Holistically-Nested Edge Detection to your own images with OpenCV, make sure you use the “Downloads” section of this tutorial to grab the source code, trained HED model, and example image files. From there, open up a terminal and execute the following command:

Figure 2: Edge detection via the HED approach with OpenCV and deep learning (input image source).

On the left we have our input image.

In the center we have the Canny edge detector.

And on the right is our final output after applying Holistically-Nested Edge Detection.

Notice how the Canny edge detector is not able to preserve the object boundary of the cat, mountains, or the rock the cat is sitting on.

HED, on the other hand, is able to preserve all of those object boundaries.

Let’s try another image:

Figure 3: Me playing guitar in my office (left). Canny edge detection (center). Holistically-Nested Edge Detection (right).

In Figure 3 above we can see an example image of myself playing guitar. With the Canny edge detector there is a lot of “noise” caused by the texture and pattern of the carpet; HED, by contrast, has no such noise.

Furthermore, HED does a better job of capturing the object boundaries of my shirt, my jeans (including the hole in my jeans), and my guitar.

Let’s do one final example:

Figure 4: My beagle, Janie, undergoes Canny and Holistically-Nested Edge Detection (HED) with OpenCV and deep learning.

There are two objects in this image: (1) Janie, the dog, and (2) the chair behind her.

The Canny edge detector (center) does a reasonable job highlighting the outline of the chair but isn’t able to properly capture the object boundary of the dog, primarily due to the light/dark and dark/light transitions in her coat.

HED (right) is able to capture the entire outline of Janie more easily.

Holistically-Nested Edge Detection in Video

We’ve applied Holistically-Nested Edge Detection to images with OpenCV — is it possible to do the same for videos?

Let’s find out.

Open up the detect_edges_video.py file and insert the following code:

Our video script requires three additional imports:

  • VideoStream : Reads frames from an input source such as a webcam, video file, or another source.
  • imutils : My package of convenience functions that I’ve made available on GitHub and PyPi. We’re using my resize  function.
  • time : This module allows us to place a sleep command to allow our video stream to establish and “warm up”.

The two command line arguments on Lines 10-15 are quite similar:

  • --edge-detector : The path to OpenCV’s HED edge detector.
  • --input : An optional path to an input video file. If a path isn’t provided then the webcam will be used.

Our CropLayer  class is identical to the one we defined previously:

After defining our identical CropLayer  class, we’ll go ahead and initialize our video stream and HED model:

Whether we elect to use our webcam  or a video file, the script will dynamically work for either (Lines 51-62).

Our HED model is loaded and the CropLayer  is registered on Lines 65-73.

Let’s acquire frames in a loop and apply edge detection!

We begin looping over frames on Lines 76-80. If we reach the end of a video file (which happens when a frame is None ), we’ll break from the loop (Lines 84 and 85).

Lines 88 and 89 resize our frame so that it has a width of 500 pixels. We then grab the dimensions of the frame after resizing.

Now let’s process the frame exactly as in our previous script:

Canny edge detection (Lines 93-95) and HED edge detection (Lines 100-106) are computed over the input frame.

From there, we’ll display the edge detection results:

Our three output frames are displayed on Lines 110-112: (1) the original, resized frame, (2) the Canny edge detection result, and (3) the HED result.

Keypresses are captured via Line 113. If "q"  is pressed, we’ll break from the loop and cleanup (Lines 116-128).

Video and HED Results

So, how does Holistically-Nested Edge Detection perform in real-time with OpenCV?

Let’s find out.

Be sure to use the “Downloads” section of this blog post to download the source code and HED model.

From there, open up a terminal and execute the following command:

In the short GIF demo above you can see a demonstration of the HED model in action.

Notice in particular how the boundary of the lamp in the background is completely lost when using the Canny edge detector; however, when using HED the boundary is preserved.

In terms of performance, I was using my 3GHz Intel Xeon W when gathering the demo above. We are obtaining close to real-time performance on the CPU using the HED model.

To obtain true real-time performance you would need to utilize a GPU; however, keep in mind that GPU support for OpenCV’s “dnn” module is particularly limited (specifically NVIDIA GPUs are not currently supported).

In the meantime, you may want to consider using the Caffe + Python bindings if you need real-time performance.

Summary

In this tutorial, you learned how to perform Holistically-Nested Edge Detection (HED) using OpenCV and Deep Learning.

Unlike the Canny edge detector, which requires preprocessing steps, manual tuning of parameters, and often does not perform well on images captured using varying lighting conditions, Holistically-Nested Edge Detection seeks to create an end-to-end deep learning edge detector.

As our results show, the output edge maps produced by HED do a better job of preserving object boundaries than the simple Canny edge detector. Holistically-Nested Edge Detection can potentially replace Canny edge detection in applications where the environment and lighting conditions are potentially unknown or simply not controllable.

The downside is that HED is significantly more computationally expensive than Canny. The Canny edge detector can run in super real-time on a CPU; however, real-time performance with HED would require a GPU.

I hope you enjoyed today’s post!

To download the source code to this guide, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Sound good? If so, enter your email address and I’ll send you the code immediately!


75 Responses to Holistically-Nested Edge Detection with OpenCV and Deep Learning

  1. Huguens Jean March 4, 2019 at 10:54 am #

    Hot Damn!

  2. Aiden March 4, 2019 at 11:01 am #

    Cool stuff once again Adrian. I need to use something like this in the future to measure the size difference between one edge compared to another. Luckily I can use an image rather than a video.

  3. Amit March 4, 2019 at 11:50 am #

    Adrian, great demo as usual. I have few questions –
    1. What are the usecases according to you?
    2. How would you do this using custom CNN i.e training our own CNN?

    • Adrian Rosebrock March 5, 2019 at 8:43 am #

      1. Building a document scanner is a great example of how edge detection can be applied.
      2. I’m not sure what you’re asking here. Are you asking how to train your own custom HED network? If so, refer to the paper publication.

  4. Gary March 4, 2019 at 12:00 pm #

    Many thanks for this new tutorial Adrian and the new beard looks good on you.

    Is it possible to get an array of objects that have closed / curved line structure to extract or measure the size in real-time or offline?

  5. Peter March 4, 2019 at 1:37 pm #

    An obvious difference I see between Canny and HED is that Canny (by design) has sharper edges, where HED appears to yield burry boundaries. How would you recommend cleaning the HED results up? Run Canny as a second step?

    • Adrian Rosebrock March 5, 2019 at 8:41 am #

      There are a number of steps you could do. Thresholding would be a super simple method to completely binarize the image. I would suggest starting there.

  6. Dexter March 4, 2019 at 3:06 pm #

    Good day, Dr. Adrian. Thanks for this cool tutorial!
    Just have a couple of quick questions:
    1. How do I get the bounding box / region of interest (roi) for each object whose edges have been completely detected?
    — My idea is to use the startX / endX and startY / endY coordinates; but
    — Do I need to get the lowest startX / startY and the highest endX / endY to do this?
    2. Would it be likely to work if I pass each roi (will save each roi in an individual image file) to an object detection model like SSD trained on Coco?
    Please advise. Thanks in advance! 🙂

    • Adrian Rosebrock March 5, 2019 at 8:39 am #

      1. You would apply contour detection. See this tutorial for an example.

      2. You wouldn’t pass the ROI through an object detector. You would just apply the object detector to the entire image and skip edge detection.

  7. Mohammad March 4, 2019 at 3:57 pm #

    Hi Adrian,
    Thanks for this tutorials,I have a GTX 1080 TI? Is Opencv’s dnn support of this GPU? if so, for active this capability ,what do i do ?
    If currently not supported of Nvidia GPU’s , is that possible to use the capability of OpenCL this GPU?

    • Adrian Rosebrock March 5, 2019 at 8:35 am #

      At the present, OpenCV’s “dnn” module does not support NVIDIA GPUs. I know they are working on it, specifically during this year’s Google Summer of Code, but you cannot use your GPU for the “dnn” module yet.

  8. Rui Albuquerque March 4, 2019 at 3:58 pm #

    Nice tutorial allready know well hed maybe you can write a tuto for scanning using hed ( just like the tuto for scan receipt using canny)
    🙂

    • Adrian Rosebrock March 5, 2019 at 8:34 am #

      Thanks Rui!

  9. Andy Lucny March 4, 2019 at 5:19 pm #

    Since the hed does not provide binary output, it would be more correct to compare it wit Sobel operator. More over hed causes headache when you try to apply it on larger image, where there are e.g. a lot of cats and pure resizing of image would lost too much of information. I would be really interested in how hed can be applied in such case.

  10. Jacob Dallas March 4, 2019 at 6:16 pm #

    Great Post! Seems to be a little CPU heavy. Do you have any suggestions on a good GPU implementation? I’m currently using one I found that’s implemented in Tensorflow, https://github.com/moabitcoin/holy-edge. I didn’t know if you had a preference?

    • Adrian Rosebrock March 5, 2019 at 8:33 am #

      The HED model is a Caffe model so you can use “pycaffe” with GPU access enabled to run HED on your GPU.

  11. Israfil March 4, 2019 at 7:39 pm #

    As usual, Very interesting article Adrian.

    • Adrian Rosebrock March 5, 2019 at 8:33 am #

      Thanks Israfil!

  12. Chinmay Kodaganur March 5, 2019 at 1:26 am #

    Thank you so much for keeping us updated on new and better CV technologies. A lot of your work has been helpful for me to get into computer vision.
    Also, can you recommend forums and blogs to follow like yours that would help me know more about the latest trends in computer vision?

    • Adrian Rosebrock March 5, 2019 at 8:29 am #

      The PyImageSearch Gurus course has dedicated forums. I spend more time in there than in the blog post comments. I would definitely take a look if you’re interested in more targeted advice for your projects.

      • Chinmay Kodaganur March 7, 2019 at 6:27 am #

        The model that you have given was run by me on a Macbook Air and Pro. It turned out to be very heavy on processing. Is it expected to behave like this? How can we train the model to make it lighter on processing? My use-case is an edge detector which can crop an identity card from an image. I have already tried purely machine vision based detection (Gray > Blur > Canny > Hough Lines > Find Contours) without any model with poor results across different use-cases. What approach would you recommend for this?

        • Adrian Rosebrock March 7, 2019 at 4:18 pm #

          Yes, it is expected to be very computationally heavy. Make sure you are reading the entire blog post as I discuss the benefits and tradeoffs of HED vs. traditional image processing edge detection. The final section of the real-time edge detection and “Summary” section should be required reading to address your question.

  13. Peter March 5, 2019 at 4:37 am #

    Adrian,
    I have a failure on
    cv2.dnn_registerLayer(“Crop”, CropLayer)
    module ‘cv2’ has no attribute ‘dnn_registerLayer’

    Python 3.7

    • Adrian Rosebrock March 5, 2019 at 8:26 am #

      It’s not Python that’s causing the issue, it’s OpenCV. What version of OpenCV are you using?

      • Peter Manley-Cooke March 5, 2019 at 9:10 am #

        Well, I used your blog to find that out.
        It now prints the version just before the offending line.
        OpenCV version 3.4.1

        Thanks for the rapid response, I knew I would have to wait until you got up as I am on UK time!

        • Adrian Rosebrock March 5, 2019 at 9:51 am #

          Upgrade to OpenCV 3.4.4 or OpenCV 4+. That should resolve the error (v3.4.1 is pretty “old” in terms of deep learning model support).

          • Peter March 5, 2019 at 10:33 am #

            Thanks Adrian, you are brilliant.
            What is the exact string for that as neither conda nor pip can find OpenCV4?

          • Peter March 6, 2019 at 11:41 am #

            OK I used
            pip install opencv-python==
            to give a list of versions available
            (from versions: 3.4.2.16, 3.4.2.17, 3.4.3.18, 3.4.4.19, 3.4.5.20, 4.0.0.21)
            Then copied the version I wanted after the ==
            Works fine with 4.0.0.21.

          • Adrian Rosebrock March 8, 2019 at 5:34 am #

            You should use “opencv-python-contrib” for the reasons I suggest in this post. Otherwise, you should be all set.

      • Greace March 5, 2019 at 10:23 am #

        I also have a failure,

        opencv-python 3.3.0.10

        • Adrian Rosebrock March 8, 2019 at 5:59 am #

Just saw your comment. Your OpenCV version is too old. You need OpenCV 3.4 or OpenCV 4+.

  14. pnsan March 5, 2019 at 5:34 am #

    issue: Inferences images have these border around it, basically at left and top side.

  15. Hossein hazrati March 5, 2019 at 6:52 am #

    Hi there,

    Thank you for your helpful blog,

    However, I can not run the code on the spyder.

    I was wondering If you could give me some advice.

    Bests,
    Hossein.

    • Adrian Rosebrock March 5, 2019 at 8:25 am #

      I suggest you execute the code via the command line instead of an IDE. If you are using an IDE you likely need to set the command line arguments.

  16. hamze March 5, 2019 at 7:59 am #

    Hi Adrian,
    thanks for this great post.
    As I tested, hed’s output is clearer than canny, specially when the background is complex, like with existing grass, trees,….. I am curious to know how this can be used to implement better object detection (not recognition) algorithms, or can it help to have better background subtraction,….?
    Do you have any efficient conventional object detection (ignoring recognition) method using this?

    • Adrian Rosebrock March 5, 2019 at 8:21 am #

      You wouldn’t use HED naturally for object detection. For object detection you would use Faster R-CNN, YOLO, SSD, RetinaNet, etc. For instance segmentation you would use Mask R-CNN. All of these are covered inside Deep Learning for Computer Vision with Python.

  17. Greace March 5, 2019 at 10:43 am #

    Hi adrian,

    thanks for great post, I have failure
    ;Segmentation fault (core dumped)’

    Do you know how to solve it?

    • Adrian Rosebrock March 8, 2019 at 5:58 am #

      Can you debug by using “pdb” or “print” to determine exactly which line of code is causing the segfault? Unfortunately without knowing more information I cannot provide any suggestions.

  18. Malik Moh March 5, 2019 at 10:45 am #

    Thank you Adrian…interesting as always.
    Can we make this detector detects only vertical or horizontal edges like what Sobel does?

  19. Leonardo March 5, 2019 at 4:15 pm #

    Thank Adrian for this blogp post!
    I have some aerial image (video in a second step) of a small river (more like a “stern” in fact) from a dron. In the image I can see booth coast and the water. I need to get the coastline, can I use this method to get a better segmentation?
    The coastline will be used to perform some autonomous navigation of a boat.

    • Adrian Rosebrock March 8, 2019 at 5:56 am #

      Hey Leonardo — I would suggest giving it a try and seeing for yourself. Use the source code and run it on the input image/video. What do your results look like?

  20. mungmelia March 5, 2019 at 7:12 pm #

    MasyaAllah,,,your article always great. It’s really helping me for learning deep learning. May God Always Bless You.

  21. Sarvagya Gupta March 6, 2019 at 2:21 am #

    Hey, I want to know if there’s any image size limit to this? It seems to be working fine with the default images but as soon as I use a larger image, like 1080, 1920, 3, it seems to throw a ‘Segmentation fault (core dumped)’ error.

    Can you tell me why that would happen?

    • Adrian Rosebrock March 8, 2019 at 5:51 am #

      Most likely the system does not have enough memory to process the large image and hold the deep learning model. Check your memory usage during execution and see if it spikes.

  22. Mozart March 6, 2019 at 3:30 am #

    Hello Adrian,
    Thanks for perfect work share. But I can’t find dnn_registerLayer function alternative for android opencv 3.4.3. Do u have any information about it ?

    Thanks.

    • Adrian Rosebrock March 8, 2019 at 5:49 am #

      You need to use OpenCV 3.4.4 or OpenCV 4.

    • Leks April 6, 2019 at 7:04 am #

      Hi Mozart, I’m also trying to implement the same with Android, Any success implementation from your side?

      Thanks.

  23. elena paglia March 6, 2019 at 4:49 pm #

    Hi Adrian,

    Thank you for your tutorials! After walking through one I always learn something useful that I can apply in a number of projects.

    I do have one question on this post. What is the CropLayer Class doing ? I can print out the inputShape and targetShapes and see its calculating starting and ending points plus returning (N,C,H,W) but can’t quite wrap my mind around what is being cropped and how its impacting the final image ?

  24. Joseph March 7, 2019 at 4:24 am #

    Hi Adrian, very appreciate your post.
    As mentioned above, HED method can help detect edge in an image. I am wondering whether it is possible to use HED for blur dectection. For example, when I got the hed after this code:
    hed = net.forward()

    i wanna use it to calculate the std of it as the bluriness score of the image.

    • Adrian Rosebrock March 7, 2019 at 4:21 pm #

      Have you tried following my blur detection post here? You could try using the HED as a replacement for the Laplacian of the image. That might require some tuning though.

  25. Russell March 8, 2019 at 6:48 pm #

    Hi Adrian how did you know how/what to implement for the custom CropLayer class? If I were to start from this point https://github.com/s9xie/hed how would I able to get to the OpenCV implementation?

  26. Dee March 9, 2019 at 11:04 am #

    Hi Adrian,

    Nice work here. But can you tell me how I could use this when I want to detect an object? I am making a project wherein I have to detect a specific leaf out of many different kinds of leaves in the background. Please help. Thank you

    • Adrian Rosebrock March 13, 2019 at 3:53 pm #

      That sounds more like an object detection or instance segmentation problem rather than an edge detection one. Is there anything “different” about that one leaf that would make it different than the rest of the background?

      • Nick August 26, 2019 at 3:39 am #

        Hi Adrian,

        If I want to blur out a face in a photo with two faces in close proximity facing each other, could I used a combo of edge detection and skin color detection to designate the area to blur out? Ie. how can I get the x,y coordinates of boundaries of the detected edges? It’s a close up photo so I’m not sure if Mask-RCNN will work.

        • Adrian Rosebrock September 5, 2019 at 10:48 am #

          I would still give Mask R-CNN a try. The results will be more aesthetically pleasing. The other option is to fit a minimum enclosing circle to the detected face bounding box and then only blur the circular region.

  27. Jaime March 11, 2019 at 10:14 am #

    Hi Adrian!
    I’m starting research on these topics and manage to replicate your results. But there’s another project and I can’t replicate their results using your code and an input figure (composed by a triangle, a circle and a rectangle) that is in the other project:

    https://github.com/ashislaha/Shape-Detection-in-AR/blob/master/README.md

    I obtain some artifacts on the black region… Should I do a threshold to obtain the black and white effect? Thanks!

  28. knoname March 11, 2019 at 4:12 pm #

    Adrian, the algorithm has magic numbers 104.00698793, 116.66876762, 122.67891434 — what are they? you didn’t mention them and I looked at the original paper without seeing them mentioned (perhaps I need more caffeine and re-read?).

    Is this the mean RGB values of the image? Should it be calculated on each frame to improve accuracy?

    • Adrian Rosebrock March 13, 2019 at 3:28 pm #

      They are the mean subtraction values used to normalize the data. You typically compute the RGB mean over the training set and subtract it prior to training, ultimately leading to higher accuracy. Mean subtraction, using the same values, must be performed on testing images as well. If you’d like to learn more about mean resizing, including training your own deep learning models, I would definitely suggest taking a look at my book, Deep Learning for Computer Vision with Python.

  29. Allen Ding March 12, 2019 at 6:47 am #

    Hi Adrian,

    Thank you for your work. I want to detect building’s skyline in the photo, could you give me some advice?

    Thanks a lot !

    • Adrian Rosebrock March 13, 2019 at 3:18 pm #

      How has HED worked for that task? Have you given it a try?

  30. Selman March 13, 2019 at 2:51 pm #

    Suppose that I have two lanes to detect. It seems that HED would do it successfully. But I need to get the coordinates of these lines. How can I do it? You mentioned about contour detection which you demonstrated before. However it was about finding corners. In my situation I will have two main lines. Nothing more. The only need is to find out their x (or) y coordinates. Beginning and ending coordinates will be ok as well.

    • Adrian Rosebrock March 13, 2019 at 3:03 pm #

      You mentioned performing lane detection so I assume you’re interested in self-driving cars. In that case semantic segmentation would likely be a better approach for you.

  31. Thomas March 15, 2019 at 10:46 pm #

    Sir, i have problem
    how can i get hed_model?
    i mean how can i get :
    1. deploy.prototxt
    2. hed_pretrained_bsds.caffemodel

    • Thomas March 15, 2019 at 11:09 pm #

      i got it, at download section. Thankyou

  32. Trenton Carr April 18, 2019 at 3:15 am #

    Hi, how many fps were you able to get please?

  33. Luca May 9, 2019 at 4:59 am #

    Thanks for the code! This is incredibly easy and straightforward to run compared to other HED implementations.
    Sorry for the dumb question, how can I use this to batch process all pictures inside a folder and/or frames from a video and save the results? Currently I can only see the results but not save them, but even if I could, it would take a lot of time to do it one by one.
    Thanks for your kind attention!

    • Adrian Rosebrock May 15, 2019 at 3:20 pm #

      You can use the “list_images” function in the imutils library to loop over all images in a file. The “cv2.VideoCapture” function can be used to access your webcam, then apply the edge detector to each frame of the video. I would also suggest you read Practical Python and OpenCV so you can better learn how to use the functions.

  34. John May 13, 2019 at 1:01 am #

    Thank you very much for your work and source code . But, in the code , It need the path of “OpenCV’s deep learning edge detector”, I don’t know what that is and where to download it. Thanks again

    • Adrian Rosebrock May 15, 2019 at 2:55 pm #

      You can use the “Downloads” section of the post to download the source code and deep learning edge detector.

  35. johan May 27, 2019 at 5:19 pm #

    Hi!
    Thanks for a good tutorial.
    However, when I try using my own png-image, it starts and after 10-20 seconds it says “KILLED”. Any ideas what goes wrong?

    • Adrian Rosebrock May 30, 2019 at 9:18 am #

      It sounds like your machine is running out of RAM. Check your RAM usage when the script is running to verify.

  36. jimi June 24, 2019 at 10:55 pm #

    For some reflective objects, or in the case of exposure, are there any algorithms to remove these bright spots? I now encounter a problem, through the way of word bags to carry out image search, but the image similarity is very high, for this kind of object with high similarity, is there any good way to solve it?

  37. Kaisar Khatak August 13, 2019 at 12:15 pm #

    Could HED be used to for barcode detection?

    • Adrian Rosebrock August 16, 2019 at 5:38 am #

      Potentially but may not just use a dedicated barcode detection library like ZBar?

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmers’ code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.
