An interview with Paul Lee – Doctor, Cardiologist and Deep Learning Researcher

In today’s blog post, I interview Dr. Paul Lee, a PyImageSearch reader and interventional cardiologist affiliated with NY Mount Sinai School of Medicine.

Dr. Lee recently presented his research at the prestigious American Heart Association Scientific Session in Philadelphia, PA where he demonstrated how Convolutional Neural Networks can:

  • Automatically analyze and interpret coronary angiograms
  • Detect blockages in patient arteries
  • And ultimately help reduce and prevent heart attacks

Furthermore, Dr. Lee has demonstrated that the automatic angiogram analysis can be deployed to a smartphone, making it easier than ever for doctors and technicians to analyze, interpret, and understand heart attack risk factors.

Dr. Lee’s work is truly remarkable and paves the way for Computer Vision and Deep Learning algorithms to help reduce and prevent heart attacks.

Let’s give a warm welcome to Dr. Lee as he shares his research.

Adrian: Hi Paul! Thank you for doing this interview. It’s a pleasure to have you on the PyImageSearch blog.

Paul: Thank you for inviting me.


Figure 1: Dr. Paul Lee, an interventional cardiologist affiliated with NY Mount Sinai School of Medicine, along with his family.

Adrian: Tell us a bit about yourself — where do you work and what is your job?

Paul: I am an interventional cardiologist affiliated with NY Mount Sinai School of Medicine. I have a private practice in Brooklyn.


Figure 2: Radiologists may one day be replaced by Computer Vision, Deep Learning, and Artificial Intelligence.

Adrian: How did you first become interested in computer vision and deep learning?

Paul: In a 2017 New Yorker article titled A.I. Versus M.D.: What Happens When Diagnosis Is Automated?, Geoffrey Hinton commented that "they should stop training radiologists now". I realized that one day AI would replace me. I wanted to be the person controlling the AI, not the one being replaced.


Adrian: You recently presented your work on automatic coronary angiogram analysis at the American Heart Association. Can you tell us about it?

Paul: After starting your course two years ago, I became comfortable with computer vision techniques. I decided to apply what you taught to cardiology.

As a cardiologist, I perform coronary angiography to diagnose whether my patients have blockages in the arteries of the heart that can cause heart attacks. I wondered whether I could apply AI to interpret coronary angiograms.

Despite many difficulties, thanks to your ongoing support, the neural networks learned to interpret these images reliably.

I was invited to present my research at the American Heart Association Scientific Session in Philadelphia this year. This is the most important research conference for cardiologists. My poster is titled Convolutional Neural Networks for Interpretation of Coronary Angiography (CathNet).

(Circulation. 2019;140:A12950; https://ahajournals.org/doi/10.1161/circ.140.suppl_1.12950). The poster is available here: https://github.com/AICardiologist/Poster-for-AHA-2019


Figure 3: Normal coronary angiogram (left) and stenotic coronary artery (right). Interpretation of angiograms can be subjective and difficult. Computer vision algorithms can be used to make these analyses more accurate.

Adrian: Can you tell us a bit more about cardiac coronary angiograms? How are these images captured and how can computer vision/deep learning algorithms better/more efficiently analyze these images (as compared to humans)?

Paul: For definitive diagnosis and treatment of coronary artery disease (for example, during a heart attack), cardiologists perform a coronary angiogram to determine the anatomy and the extent of the stenosis. During the procedure, cardiologists insert a narrow catheter through the wrist or the leg. Through the catheter, we inject contrast into the coronary arteries, and the images are captured by X-ray. However, the interpretation of the angiogram is sometimes difficult: computer vision has the potential to make these determinations more objective and accurate.

Figure 3 (left) shows a normal coronary angiogram while Figure 3 (right) shows a stenotic coronary artery.


Adrian: What was the most difficult aspect of your research and why?

Paul: I only had around 5000 images.

At first, we did not know why we had so much trouble getting high accuracy. We thought our images were not preprocessed properly, or some of the images were blurry.

Later, we realized there was nothing wrong with our images: the problem was that ConvNets require lots of data to learn something simple to our human eyes.

Determining whether there is a stenosis in a coronary arterial tree in an image is computationally complex. Since the required sample size grows with classification complexity, we struggled. We had to find a way to train ConvNets with very limited samples.


Adrian: How long did it take for you to train your models and perform your research?

Paul: It took more than one year. Half the time was spent on gathering and preprocessing data, half the time on training and tuning the model. I would gather data, train and tune my models, gather more data or process the data differently, improve my previous models, and keep repeating this cycle.


Figure 4: Utilizing curriculum learning to improve model accuracy.

Adrian: If you had to pick the most important technique you applied during your research, what would it be?

Paul: I scoured PyImageSearch for technical tips on training ConvNets with a small number of samples: transfer learning, image augmentation, using SGD instead of Adam, learning rate schedules, and early stopping.
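The techniques Dr. Lee lists can be combined in a short tf.keras sketch. This is a minimal illustration, not his actual training code: the input size, DenseNet backbone, and hyperparameters are assumptions (in practice you would load pretrained ImageNet weights; `weights=None` is used here only to keep the sketch self-contained).

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Transfer learning: a frozen DenseNet121 backbone feeding a small classifier head.
# (Use weights="imagenet" in practice; weights=None here avoids a download.)
base = keras.applications.DenseNet121(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the backbone; only train the new head

model = keras.Sequential([
    base,
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary: stenosis vs. no stenosis
])

# SGD instead of Adam, with a decaying learning rate schedule.
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9),
    loss="binary_crossentropy", metrics=["accuracy"])

# Image augmentation and early stopping round out the small-data toolkit.
aug = keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15, zoom_range=0.1, horizontal_flip=True)
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)

# Training would then look like (X_train, y_train, X_val, y_val are hypothetical):
# model.fit(aug.flow(X_train, y_train), validation_data=(X_val, y_val),
#           epochs=100, callbacks=[early_stop])
```

Each piece addresses the same underlying problem: squeezing more signal out of a few thousand labeled images.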

Each technique contributed a small improvement in F1 score, but I only reached about 65% accuracy.

I looked at Kaggle contest solutions for more technical tips. The biggest breakthrough came from a technique called "curriculum learning." I first trained DenseNet to interpret something very simple: "Is there a narrowing in this short, straight segment of artery?" That only took around a hundred samples.

Then I trained this pre-trained network on longer segments of arteries with more branches. The curriculum gradually built up complexity until the network learned to interpret stenosis in the context of complicated figures. This approach dramatically improved our test accuracy to 82%. Perhaps the pre-training steps reduced computational complexity by priming information into the neural network.

"Curriculum learning" in the literature actually means something different: it generally refers to splitting the training samples based on error rates and then sequencing the training batches in order of increasing error rate. In contrast, I actually created learning materials for the ConvNet, rather than just re-arranging batches based on error rate. I got this idea from my experience of learning a foreign language, not from the computer science literature. At the beginning, I struggled to understand newspaper articles written in Japanese. As I progressed through beginner, then intermediate, and finally advanced-level Japanese curricula, I could finally understand those articles.
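Mechanically, a curriculum of this kind is just a sequence of training stages on the same network, ordered from simple to complex, with the weights carried over between stages. The sketch below uses a toy model and random stand-in arrays for the stage datasets (the real stages would be cropped artery segments of increasing complexity):

```python
import numpy as np
from tensorflow import keras

# A small stand-in classifier (the real work used DenseNet).
model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical curriculum stages, ordered simple -> complex:
#   stage 1: short straight artery segments
#   stage 2: longer segments with branches
#   stage 3: full angiogram frames
rng = np.random.default_rng(0)
curriculum = [
    (rng.random((8, 64, 64, 1)), rng.integers(0, 2, 8)),  # random stand-in data
    (rng.random((8, 64, 64, 1)), rng.integers(0, 2, 8)),
    (rng.random((8, 64, 64, 1)), rng.integers(0, 2, 8)),
]

for stage, (X, y) in enumerate(curriculum, start=1):
    # The same weights are fine-tuned at every stage, so knowledge from the
    # simpler stages primes the network for the harder ones.
    model.fit(X, y, epochs=1, verbose=0)
```

The key design choice is that each stage fine-tunes the previous stage's weights rather than restarting training, which is what distinguishes this from simply training on the hardest data directly.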


Figure 5: Example screenshots from the CathNet iPhone app.

Adrian: What are your computer vision and deep learning tools, libraries, and packages of choice?

Paul: I am using standard packages: Keras, TensorFlow, and OpenCV 4.

I use Photoshop to clean up the images and to create the curricula.

Initially I was using cloud instances [for training], but I found that my 4x RTX 2080 Ti workstation is much more cost-effective. The "global warming" from the GPUs killed my wife's plants, but it dramatically sped up model iteration.

We converted our TensorFlow models into an iPhone app using Core ML, just as you did for your Pokemon identification app.

Our demonstration video for our app is here:


Adrian: What advice would you give to someone who wants to perform computer vision/deep learning research but doesn’t know how to get started?

Paul: When I first started two years ago, I did not even know Python. After completing a beginner Python course, I jumped into Andrew Ng's deep learning course. Because I needed more training, I began the PyImageSearch Gurus course. The materials from Stanford's CS231n are great for surveying the "big picture," but the PyImageSearch course materials are immediately actionable for someone like me without a computer science background.


Adrian: How did the PyImageSearch Gurus course and Deep Learning for Computer Vision with Python book prepare you for your research?

Paul: The PyImageSearch courses and books armed me with OpenCV and TensorFlow skills. I continuously return to the materials for technical tips and updates. Your advice really motivated me to push forward despite obstacles.


Adrian: Would you recommend the PyImageSearch Gurus course or Deep Learning for Computer Vision with Python to other budding researchers, students, or developers trying to learn computer vision + deep learning?

Paul: Without reservation. The course converted me from a Python beginner to a published computer vision practitioner. If you are looking for the most cost- and time-efficient way to learn Computer Vision, and if you are really serious, I wholeheartedly recommend PyImageSearch courses.


Adrian: What’s next for your research?

Paul: My next project is to bring computer vision to the bedside. Currently, clinicians are spending too much time on their desktop computers during office visits and hospital rounds. I hope our project will empower clinicians to do what they do best: spending time at the bedside caring for patients.


Adrian: If a PyImageSearch reader wants to chat about your work and research, what is the best place to connect with you?

Paul: I can be reached at my LinkedIn account and I look forward to hearing from your readers.

Summary

In this blog post, we interviewed Dr. Paul Lee (MD), an interventional cardiologist and Computer Vision/Deep Learning practitioner.

Dr. Lee recently presented a poster at the prestigious American Heart Association Scientific Session in Philadelphia, PA where he demonstrated how Convolutional Neural Networks can:

  • Automatically analyze and interpret coronary angiograms
  • Detect blockages in patient arteries
  • Help reduce and prevent heart attacks

The primary motivation for Dr. Lee's work was his realization that one day radiologists may be replaceable by Artificial Intelligence.

Instead of simply accepting that fate, Dr. Lee decided to take matters into his own hands: he strove to be the person building that AI, not the one being replaced by it.

Dr. Lee not only achieved his goal, but was able to present his work at a distinguished conference, proof that dedication, a strong will, and the proper education are all you need to be successful in Computer Vision and Deep Learning.

If you want to follow in Dr. Lee’s footsteps, be sure to pick up a copy of Deep Learning for Computer Vision with Python (DL4CV) and join the PyImageSearch Gurus course.

Using these resources you can:

  1. Perform research worthy of being published in reputable journals and conferences
  2. Obtain the knowledge necessary to finish your MSc or PhD
  3. Switch careers and obtain a CV/DL position at a respected company/organization
  4. Successfully apply deep learning and computer vision to your own projects at work
  5. Complete your hobby CV/DL projects you’re hacking on over the weekend

I hope you’ll join myself, Dr. Lee, and thousands of other PyImageSearch readers who have not only mastered computer vision and deep learning, but have taken that knowledge and used it to change their lives.

I’ll see you on the other side.

15 Responses to An interview with Paul Lee – Doctor, Cardiologist and Deep Learning Researcher

  1. Rita November 22, 2019 at 9:16 am #

    Wow! I can’t think of higher satisfaction than knowing the material you created Adrian was used in this amazing piece of work! And kudos for Dr Paul for keeping at it for more than a year and getting published! (I would have packed it in after 2 weeks of pre-processing!)

    Congratulations to you both in your respective specialisms and contributions!

    If this isn’t nod to Adrian’s work, i don’t know what is! I’m buying his books right now!!!

    • Adrian Rosebrock November 23, 2019 at 9:05 am #

      Thanks so much Rita, your comment truly made my day and put a smile on my face 🙂

  2. khan November 22, 2019 at 9:29 am #

Thanks Adrian, I want to bring your attention to how to visualize the convolutions and activations of any trained network. Please throw some light on this area too. Thanks.

  3. Zubair Ahmed November 22, 2019 at 10:21 am #

    Great interview, I haven’t read a PyImageSearch.com blog post so fast as it’s published like I read this one.

    It’s heartening to see real-life research using Deep Learning and Computer Vision to save lives.

    • Adrian Rosebrock November 23, 2019 at 9:05 am #

      Thanks Zubair!

  4. Raymond KUDJIE November 22, 2019 at 4:38 pm #

    This is awesome

    • Adrian Rosebrock November 23, 2019 at 9:04 am #

      Thanks Raymond, I’m glad you enjoyed the interview!

  5. Hendriyawan Achmad November 22, 2019 at 5:48 pm #

Wonderful and awesome sharing, thanks PyImageSearch and Dr. Paul

    • Adrian Rosebrock November 23, 2019 at 9:04 am #

      Thanks Hendriyawan 🙂

  6. Mohanad November 23, 2019 at 12:29 am #

    Thank you very much Adrian, we really enjoyed the real practical conversation about Deep Learning in Computer Vision. I wish to accomplish more in this field and get to have interview like Dr. Paul.

    • Adrian Rosebrock November 23, 2019 at 9:05 am #

      Thanks Mohanad, Paul and I really appreciate your comment 🙂

  7. Luc J. Vermeersch November 24, 2019 at 10:41 pm #

    Really enjoyed this interview with Dr. Paul Lee.
    I learned to be more patient in training networks. Thanks.

    • Adrian Rosebrock December 5, 2019 at 10:11 am #

      Thanks Luc, I’m glad you enjoyed it!

  8. Anthony The Koala November 26, 2019 at 3:06 am #

    Dear Dr Adrian and Dr Lee,
    This is not so much about programming.

If automatic angiogram analysis will replace manual examination of images, won't this
(a) liberate the physician from spending time interpreting the image, allowing them to assist the patient more?
(b) I have a problem that if angiogram image analysis takes over the manual analysis and interpretation of images, it may stop further research into cardiac issues.

    My fear is that if you just rely on automated analysis, there may be undiscovered cardiac issues not yet documented that will not be updated to the cardiac image database.

    I know of a cardio-thoracic surgeon who is an academic surgeon and another interventionalist cardiologist who has a PhD. And their research continues. Isn’t there a risk that their research may be deemed unnecessary as if automated machine learning analysis of cardiac images may be regarded as the pinnacle/be-all-and-end-all of cardiac speciality?

    Summary:
I see machine learning of cardiac images as 'liberating' the physician for better patient care, but at the same time, am concerned that the reliability and accuracy of machine learning diagnosis may lead to a false sense of security that the cardio-thoracic world has reached the pinnacle of research.

    Thank you,
    Anthony of Sydney

    • Adrian Rosebrock December 5, 2019 at 10:13 am #

      I don’t see that as an issue. The way I look at it is that it will change the field in a better way. The physician will not have to spend as much time on an arduous, potentially error prone task and can lean on AI algorithms. They could use that time to reinvest back into their patients or perform their own research.

      Furthermore, the field won’t stagnate — you’ll see more computer scientists and doctors working together to improve the diagnostic algorithms.

      Sure, the field will change, but it won’t end research, it will facilitate it.
