An interview with Kwabena Agyeman, co-creator of OpenMV and microcontroller expert

After publishing last week’s blog post on reading barcodes with Python and OpenMV, I received a lot of emails from readers asking questions about embedded computer vision and how microcontrollers can be used for computer vision.

Instead of trying to address these questions myself, I thought it would be best to bring in a true expert — Kwabena Agyeman, co-founder of OpenMV, a small, affordable, and expandable embedded computer vision device.

Kwabena is modest in this interview and denies being an expert (a true testament to his kind character and demeanor), but trust me, meeting him and chatting with him is a humbling experience. His knowledge of embedded programming and microcontroller design is incredible. I could listen to him talk about embedded computer vision all day.

I don’t cover embedded devices here on PyImageSearch often, so it’s a true treat to have Kwabena here today.

Join me in welcoming Kwabena Agyeman to the PyImageSearch blog. And to learn more about embedded computer vision, just keep reading.


Figure 1: The OpenMV camera is a powerful embedded camera board that runs MicroPython.

Adrian: Hey Kwabena, thanks for doing this interview! It’s great to have you on the PyImageSearch blog. For people who don’t know you and OpenMV, who are you and what do you do?

Kwabena: Hi Adrian, thanks for having me today. My co-founder Ibrahim and I created the OpenMV Cam and run the OpenMV project.

OpenMV is a focused effort on making embedded computer/machine vision more accessible. The ultimate goal of the project is to bring machine vision to far more embedded devices than have it today.

For example, let’s say you want to add a face detection sensor to your toaster. This is probably overkill for any application, but bear with me.

First, you can’t just go out today and buy a $50 face detection sensor. Instead, you’re looking at setting up at least a Single-Board Computer (SBC) Linux system running OpenCV. This means adding face detection to your toaster just became a whole new project.

If your goal is just to detect whether there’s a face in view, and then toggle a wire to release the toast when you look at the toaster, you don’t necessarily want to go down the SBC path.

Instead, what you really want is a microcontroller that can accomplish the goal of detecting faces out-of-the-box and toggling a wire with minimal setup.

So, the OpenMV project is basically about providing high-level machine-vision functionality out of the box for a variety of tasks, for developers who want to add powerful features to their projects without having to focus on all the details.

Figure 2: The CMUcam4 is a fully programmable embedded computer vision sensor developed by Kwabena Agyeman while at Carnegie Mellon University.

Adrian: That’s a great point regarding having to set up an SBC Linux system, install OpenCV, and write the code just to achieve a tiny bit of functionality. I don’t do much work with embedded devices, so it’s insightful to see it from a different perspective. What inspired you to start working in the computer vision, machine learning, and embedded fields?

Kwabena: Thanks for asking, Adrian. I got into machine vision back at Carnegie Mellon University, working under Anthony Rowe, who created the CMUcam 1, 2, and 3. While I was a student there, I created the CMUcam4 for simple color-tracking applications.

While limited, the CMUcams were able to do their jobs of tracking colors quite well (if deployed in a constant lighting environment). I really enjoyed working on the CMUcam4 because it blended board design, microcontroller programming, GUI development, and data-visualization in one project.

Figure 3: A small, affordable, and expandable embedded computer vision device.

Adrian: Let’s get into more detail about OpenMV and the OpenMV Cam. What exactly is the OpenMV Cam and what is it used for?

Kwabena: So, the OpenMV Cam is a low-power machine-vision camera. Our current model is the OpenMV Cam M7, which is powered by a 216 MHz Cortex-M7 processor that can execute two instructions per clock, making it about half as fast compute-wise (single-threaded, no SIMD) as the Raspberry Pi Zero.

The OpenMV Cam is also a MicroPython board, which means you program it in Python 3. Note that this doesn’t mean desktop Python libraries are available. But if you can program in Python, you can program the OpenMV Cam, and you’ll feel at home using it.

What’s cool, though, is that we’ve built a number of high-level machine-vision algorithms into the OpenMV Cam’s firmware (which is written in C; Python just lets you glue your vision logic together, like you do with OpenCV’s Python bindings).

In particular, we’ve got:

  • Multi-color blob tracking
  • Face detection
  • AprilTag tracking
  • QR Code, Barcode, Data Matrix detection and decoding
  • Template matching
  • Phase-correlation
  • Optical-flow
  • Frame differencing
  • and more, all built-in.

Basically, it’s like OpenCV with Python bindings running on a low-power microcontroller (it runs off a USB port).

Anyway, our goal is to wrap as much functionality as possible into easy-to-use function calls. For example, we have a “find_blobs()” method which returns a list of color blob objects found in the image. Each blob object has a centroid, a bounding box, a pixel count, a rotation angle, and more. The function call automatically segments an image (RGB or grayscale) by a list of color thresholds, finds all blobs (connected components), merges overlapping blobs based on their bounding boxes, and additionally computes each blob’s centroid, rotation angle, and so on. Subjectively, using our “find_blobs()” is a lot more straightforward than finding color blobs with OpenCV if you’re a beginner. That said, our algorithm is also less flexible if you need to do something we didn’t think of. So, there’s a trade-off.
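To make the blob-finding steps above concrete, here is a toy sketch of the connected-components part in plain Python, on a tiny pre-thresholded binary grid. This is an illustration only, not the OpenMV implementation: the real firmware is written in C, works on camera frames, and also merges overlapping bounding boxes and computes rotation angles.

```python
# Toy connected-components blob finder on a binary grid.
# Each blob is reported as (pixel_count, centroid, bounding_box),
# mirroring the kind of per-blob statistics described above.
from collections import deque

def find_blobs(grid):
    """Return (pixel_count, (cy, cx), (x, y, w, h)) for each 4-connected blob."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # Flood-fill one connected component.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                centroid = (sum(ys) / len(pixels), sum(xs) / len(pixels))
                bbox = (min(xs), min(ys),
                        max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
                blobs.append((len(pixels), centroid, bbox))
    return blobs

grid = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
print(find_blobs(grid))  # two blobs: one of 3 pixels, one of 2
```

The trade-off Kwabena mentions shows up even here: a fixed pipeline like this is simple to call, but every design decision (4- vs. 8-connectivity, merge rules) is baked in.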

Moving on, sensing is just one part of the problem. Once you detect something, you need to act. Because the OpenMV Cam is a microcontroller, you can toggle I/O pins, control SPI/I2C buses, send UART data, control servos, and more, all from the same script that holds your vision logic. With the OpenMV Cam you sense, plan, and act from one short Python script.
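The sense-plan-act loop described above can be sketched in a few lines. This is plain Python with the hardware calls stubbed out; the function names here are illustrative, not the actual OpenMV API.

```python
# Shape of a sense-plan-act script: grab a frame, decide, drive a pin.
# snapshot() and set_pin() are stand-ins for camera and GPIO calls.
def snapshot():
    # Stub: pretend we grabbed a frame and ran face detection on it.
    return ["face"]

def set_pin(state):
    # Stub: on a microcontroller this would drive a GPIO pin.
    print("pin ->", "HIGH" if state else "LOW")

def loop_once():
    faces = snapshot()         # sense: detect faces in the frame
    release = len(faces) > 0   # plan: decide whether to act
    set_pin(release)           # act: toggle the output wire
    return release

loop_once()
```

On real hardware the whole loop, vision included, lives in one script that runs at boot.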

Adrian: Great explanation. Can you elaborate more on the target market for the OpenMV? If you had to describe your ideal end user who absolutely had to have an OpenMV, who would they be?

Kwabena: Right now we’re targeting the hobbyist market with the system. Hobbyists have been our biggest buyers so far and helped us sell over five thousand OpenMV Cam M7s last year. We’ve also got a few companies buying the cameras too.

Anyway, as our firmware gets more mature we hope to sell more cameras to more companies building products.

Right now we’re still rapidly building out our firmware to more or less complement OpenCV for basic image-processing functionality. We’ve already got a lot on board, but we’re trying to make sure you have every tool you might need, such as shadow removal with inpainting to create a shadow-free background for frame-differencing applications.

Figure 4: An example of AprilTags (Image credit: MIT).

Adrian: Shadow removal, that’s fun. So, what was the most difficult feature or aspect that you had to wrangle with when putting together OpenMV?

Kwabena: Porting AprilTags to the OpenMV Cam was the most challenging algorithm to get running onboard.

I started with the AprilTag 2 source code meant for the PC. To get it running on the OpenMV Cam M7, which has only 512 KB of RAM versus the gigabytes on a desktop PC, I had to go through all 15K+ lines of code and redo how memory allocations worked to make them more efficient.

Sometimes this was as simple as moving large array allocations from malloc to a dedicated stack. Sometimes I had to change how an algorithm worked to make it more efficient.

For example, AprilTags computes a lookup table of every possible Hamming code word with 0, 1, 2, etc. bit errors when trying to match detected tag bit patterns against a tag dictionary. This lookup table (LUT) can be over 30 MB for some tag dictionaries! Sure, indexing a LUT is fast, but a linear search through the tag dictionary for a matching tag works too.
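The trade-off described above, a linear search over Hamming distance instead of a giant precomputed error table, can be sketched as follows. The 8-bit code words here are toy values; real AprilTag dictionaries use longer codes.

```python
# Match a detected tag code against a dictionary by linear search,
# trading the memory of a "every code word with 0..N bit errors" LUT
# for a small amount of extra compute per detection.
def hamming(a, b):
    """Number of differing bits between two code words."""
    return bin(a ^ b).count("1")

def match_tag(detected, dictionary, max_errors=2):
    """Return (tag_id, distance) of the best match, or None if too noisy."""
    tag_id, code = min(enumerate(dictionary), key=lambda t: hamming(detected, t[1]))
    dist = hamming(detected, code)
    return (tag_id, dist) if dist <= max_errors else None

dictionary = [0b10110100, 0b01001011, 0b11100001]
print(match_tag(0b10110110, dictionary))  # detected code with 1 bit flipped -> (0, 1)
```

For a dictionary of a few hundred tags, this scan is cheap per frame, while the LUT approach would be impossible in 512 KB of RAM.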

Anyway, after porting the algorithm to the OpenMV Cam M7, it can run AprilTags at 160×120 at 12 FPS. This lets you detect tags printed on 8”x11” paper from about 8” away with a microcontroller that can run off of your USB port.

Adrian: Wow! Having to manually go through all 15K lines of code and re-implement certain pieces of functionality must have been quite the task. I hear there are going to be some really awesome new OpenMV features in the next release. Can you tell us about them?

Kwabena: Yes, our next product, the OpenMV Cam H7, powered by the STM32H7 processor, will double our performance. In fact, its CoreMark score is on par with the 1 GHz Raspberry Pi Zero (2020.55 versus 2060.98). That said, the Cortex-M7 core doesn’t have NEON or a GPU, but we should be able to keep up on CPU-limited algorithms.

However, the big feature add is removable camera module support. This allows us to offer the OpenMV Cam H7 with an inexpensive rolling-shutter camera module like we do now. But for more professional users, we’ll have a global-shutter option for folks trying to do machine vision in high-speed applications, like taking pictures of products moving on a conveyor belt. Better yet, we’re also planning to support FLIR Lepton thermal sensors for machine vision too. Best of all, each camera module will use the same “sensor.snapshot()” construct we use to take pictures now, allowing you to switch out one module for another without changing your code.

Finally, thanks to ARM, you can now run neural networks on the Cortex-M7. Here’s a video of the OpenMV Cam running a CIFAR-10 network onboard:

We’re going to be building out this support for the STM32H7 processor so that you can run NNs trained on your laptop to do things like detect when people enter rooms. The STM32H7 should be able to run a variety of simple NNs for many of the common detection tasks folks want an embedded system to do.
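The ARM NN kernels mentioned above (CMSIS-NN) operate on 8-bit fixed-point values rather than floats, which is part of what makes microcontroller inference feasible. A minimal sketch of that style of quantization, with an illustrative scaling scheme rather than any particular network's exact one:

```python
# Quantize floats to signed 8-bit "q7" fixed point and back.
# q7 represents values in roughly [-1, 1) with 7 fractional bits,
# saturating anything outside the int8 range.
def to_q7(x, frac_bits=7):
    """Quantize a float to a signed 8-bit fixed-point value."""
    q = int(round(x * (1 << frac_bits)))
    return max(-128, min(127, q))  # saturate to int8

def from_q7(q, frac_bits=7):
    """Recover the approximate float value."""
    return q / (1 << frac_bits)

weights = [0.5, -0.25, 0.99, -1.0]
quantized = [to_q7(w) for w in weights]
print(quantized)                         # [64, -32, 127, -128]
print([from_q7(q) for q in quantized])   # approximate originals
```

Storing weights this way cuts memory 4x versus float32 and lets the Cortex-M7 use fast integer instructions, at the cost of a small quantization error.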

We’ll be running a Kickstarter for the next-generation OpenMV Cam this year. Sign up for our email list here and follow us on Twitter to stay up-to-date on when we launch the Kickstarter.

Adrian: Global shutter and thermal imaging support is awesome! Theoretically, could I turn an OpenMV Cam with a global shutter sensor into a webcam for use with my Raspberry Pi 3? Inexpensive global shutter sensors are hard to find.

Kwabena: Yes, the OpenMV Cam can be used as a webcam. Our USB speed is limited to 12 Mb/s, though, so you’ll want to stream JPEG-compressed images. You can also connect the OpenMV Cam to your Raspberry Pi via SPI for a faster 54 Mb/s transfer rate. Since the STM32H7 has a hardware JPEG encoder onboard, the OpenMV Cam H7 should be able to provide a nice high-FPS, precisely triggered frame stream to your Raspberry Pi.
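A quick back-of-the-envelope check on why JPEG compression matters for those link speeds. The frame size below is an assumption (JPEG size varies with scene content), and these are raw-bandwidth upper bounds; protocol overhead lowers them in practice.

```python
# Upper-bound frame rate for fixed-size frames over a serial link.
def max_fps(link_mbps, frame_kb):
    """Frames per second a link can carry, ignoring protocol overhead."""
    bytes_per_sec = link_mbps * 1_000_000 / 8
    return bytes_per_sec / (frame_kb * 1000)

frame_kb = 20  # assumed size of one JPEG-compressed QVGA frame
print(max_fps(12, frame_kb))  # 12 Mb/s USB full speed -> 75.0
print(max_fps(54, frame_kb))  # 54 Mb/s SPI            -> 337.5
```

An uncompressed RGB565 QVGA frame is 150 KB, so without JPEG the same USB link would top out at 10 FPS before overhead.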

Figure 5: Using OpenMV to build DIY robocar racers.

Adrian: Cool, let’s move on. One of the most exciting aspects of developing a new tool, library, or piece of software is to see how your work is used by others. What are some of the more surprising ways you’ve seen OpenMV used?

Kwabena: For hobbyists, our biggest feature has been color tracking. We do that very well, at above 50 FPS, with our current OpenMV Cam M7. I think this has been the main attraction for a lot of customers. Color tracking has historically been the only thing you could do on a microcontroller, so it makes sense.

QR Code, Barcode, Data Matrix, and AprilTag support have also been selling points.

For example, we’ve had quadcopter folks start using the OpenMV Cam pointed down at giant AprilTags printed on the ground for precision landing. You can have one AprilTag inside another, and as the quadcopter gets closer to the ground, the control algorithm tries to keep the copter centered on the tag in view.
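The centering idea can be sketched as a proportional controller that turns the tag's pixel offset from image center into corrections. The gain, frame size, and output convention below are illustrative, not taken from any particular flight stack.

```python
# Proportional centering on a detected tag: the farther the tag's
# centroid is from image center, the larger the corrective command.
def center_on_tag(tag_cx, tag_cy, img_w=160, img_h=120, gain=0.01):
    """Return (roll, pitch) corrections that push the tag toward center."""
    err_x = tag_cx - img_w / 2   # positive: tag is right of center
    err_y = tag_cy - img_h / 2   # positive: tag is below center
    return (-gain * err_x, -gain * err_y)

print(center_on_tag(100, 60))  # tag right of center -> negative roll
```

Nesting a smaller tag inside a larger one keeps this error signal available as the larger tag grows past the field of view on descent.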

However, what’s tickled me the most is doing DIY Robocar racing with the OpenMV Cam, and having some of my customers beat me in races with their own OpenMV Cams.

Adrian: If a PyImageSearch reader would like to get their own OpenMV camera, where can they purchase one?

Kwabena: We just finished another production run of 2,500 OpenMV Cams, and you can buy them online on our webstore now. We’ve also got lens accessories, shields for controlling motors, and more.

Adrian: Most people don’t know this, but you and I ran a Kickstarter campaign at the same time back in 2015! Mine was for the PyImageSearch Gurus course while yours was for the initial release and manufacturing of the OpenMV Camera. OpenMV’s Kickstarter easily beat my own, which just goes to show you how interested the embedded community was in the product — fantastic job and congrats on the success. What was running your first Kickstarter campaign like?

Kwabena: Running that Kickstarter campaign was stressful. We’ve come a long, long, long way since then. I gave a talk a few years back that more or less summarizes my experience:

Anyway, it’s a lot of work and a lot of stress.

For our next Kickstarter we’re trying to prepare as much as possible beforehand so it’s more of a turnkey operation. We’ve got our website, online shop, forums, documentation, shipping, etc. all set up now, so we don’t have to build the business and the product at the same time anymore. This will let us focus on delivering the best experience for our backers.

Adrian: I can attest to the fact that running a Kickstarter campaign is incredibly stressful. It’s easily a full-time job for roughly three months as you prepare the campaign, launch it, and fund it. And once it’s funded, you then need to deliver on your promise! It’s a rewarding experience and I wouldn’t discourage others from doing it, but be prepared to be really stressed out for three to four months. Shifting gears a bit: as an expert in your field, who do you talk to when you get “stuck” on a hard problem?

Kwabena: I wouldn’t say I’m an expert. I’m learning computer vision like everyone else. Developing the OpenMV Cam has actually been a great way to learn how algorithms work. For example, I learned a lot porting the AprilTag code. There’s a lot of magic in that C code. I’m also quite excited to start adding more machine-learning features to the OpenMV Cam using ARM’s CMSIS-NN library for the Cortex-M7.

Anyway, to answer where I go for help… The internet! And research papers! Lots of research papers. I do the reading so my OpenMV Cam users don’t have to.

Adrian: From your LinkedIn I know you have a lot of experience in hardware design languages. What advice do you have for programmers interested in using FPGAs for computer vision?

Kwabena: Hmm… You can definitely get a lot of performance out of FPGAs. However, it’s definitely a pay-to-play market. You’re going to need a serious budget to get access to any of the high-end hardware and/or intellectual property. That said, if you’ve got an employer willing to spend, there’s a lot of development going on that will let you run very large deep neural networks on FPGAs. It’s definitely sweet to get a logic pipeline up and running that can process gigabytes of data a second.

Now, there’s also a growing medium-end FPGA market that’s affordable to play in if you don’t have a large budget. Intel (previously Altera) sells an FPGA called the Cyclone that’s more or less affordable if you’re willing to pay for the hardware. You can interface the Cyclone to your PC via PCIe using Xillybus IP, which exposes FIFOs on your FPGA as Linux device files on your PC. This makes it super easy to move data over to the FPGA. Furthermore, Intel offers DDR memory-controller IP for free, so you can get some RAM buffers up and running. Finally, you just need to add a camera module and you can start developing.

But… that said, you’re going to run into a rather unpleasant brick wall: learning how to write Verilog code and having to pay for the toolchains. The hardware-design world is not really open source, nor will you find lots of Stack Overflow threads about how to do things. Did I mention there’s no vision library for hardware available? You’ve got to roll everything yourself!

Adrian: When you’re not working at OpenMV, you’re at Planet Labs in San Francisco, CA, which is where PyImageConf, PyImageSearch’s very own computer vision and deep learning conference, will be held. Will you be at PyImageConf this year (August 26-28th)? The conference will be a great place to show off OpenMV’s functionality. I know attendees would enjoy it.

Kwabena: Yes, I’ll be in town and present for the conference. I live in SF now.

Adrian: Great to hear! If a PyImageSearch reader wants to chat, what is the best place to connect with you?

Kwabena: Email us at or comment on our forum. Additionally, please follow us on Twitter, our YouTube channel, and sign-up on our mailing list.


In today’s blog post I interviewed Kwabena Agyeman, co-founder of OpenMV.

If you have any questions for Kwabena, be sure to leave a comment on this post! Kwabena is a regular PyImageSearch reader and is active in the comments.

And if you enjoyed today’s post and want to be notified when future blog posts are published here on PyImageSearch, be sure to enter your email address in the form below!


