Live video streaming over network with OpenCV and ImageZMQ

In today’s tutorial, you’ll learn how to stream live video over a network with OpenCV. Specifically, you’ll learn how to implement Python + OpenCV scripts to capture and stream video frames from a camera to a server.

Every week or so I receive a comment on a blog post or a question over email that goes something like this:

Hi Adrian, I’m working on a project where I need to stream frames from a client camera to a server for processing using OpenCV. Should I use an IP camera? Would a Raspberry Pi work? What about RTSP streaming? Have you tried using FFMPEG or GStreamer? How do you suggest I approach the problem?

It’s a great question — and if you’ve ever attempted live video streaming with OpenCV then you know there are a ton of different options.

You could go with the IP camera route. But IP cameras can be a pain to work with. Some IP cameras don’t even allow you to access the RTSP (Real-time Streaming Protocol) stream. Other IP cameras simply don’t work with OpenCV’s cv2.VideoCapture  function. An IP camera may be too expensive for your budget as well.

In those cases, you are left with using a standard webcam — the question then becomes, how do you stream the frames from that webcam using OpenCV?

Using FFMPEG or GStreamer is definitely an option. But both of those can be a royal pain to work with.

Today I am going to show you my preferred solution using message passing libraries, specifically ZMQ and ImageZMQ, the latter of which was developed by PyImageConf 2018 speaker, Jeff Bass. Jeff has put a ton of work into ImageZMQ and his effort really shows.

As you’ll see, this method of OpenCV video streaming is not only reliable but incredibly easy to use, requiring only a few lines of code.

To learn how to perform live network video streaming with OpenCV, just keep reading!

Looking for the source code to this post?
Jump right to the downloads section.

Live video streaming over network with OpenCV and ImageZMQ

In the first part of this tutorial, we’ll discuss why, and under which situations, we may choose to stream video with OpenCV over a network.

From there we’ll briefly discuss message passing along with ZMQ, a library for high performance asynchronous messaging for distributed systems.

We’ll then implement two Python scripts:

  1. A client that will capture frames from a simple webcam
  2. And a server that will take the input frames and run object detection on them

We'll be using Raspberry Pis as our clients to demonstrate how cheaper hardware can be used to build a distributed network of cameras capable of piping frames to a more powerful machine for additional processing.

By the end of this tutorial, you’ll be able to apply live video streaming with OpenCV to your own applications!

Why stream videos/frames over a network?

Figure 1: A great application of video streaming with OpenCV is a security camera system. You could use Raspberry Pis and a library called ImageZMQ to stream from the Pi (client) to the server.

There are a number of reasons why you may want to stream frames from a video stream over a network with OpenCV.

To start, you could be building a security application that requires all frames to be sent to a central hub for additional processing and logging.

Or, your client machine may be highly resource constrained (such as a Raspberry Pi) and lack the computational horsepower required to run expensive algorithms (such as deep neural networks).

In these cases, you need a method to take input frames captured from a webcam with OpenCV and then pipe them over the network to another system.

There are a variety of methods to accomplish this task (discussed in the introduction of the post), but today we are going to specifically focus on message passing.

What is message passing?

Figure 2: The concept of sending a message from a process, through a message broker, to other processes. With this method/concept, we can stream video over a network using OpenCV and ZMQ with a library called ImageZMQ.

Message passing is a programming paradigm/concept typically used in multiprocessing, distributed, and/or concurrent applications.

Using message passing, one process can communicate with one or more other processes, typically using a message broker.

Whenever a process wants to communicate with another process, or with all other processes, it must first send its request to the message broker.

The message broker receives the request and then handles sending the message to the other process(es).

If necessary, the message broker also sends a response to the originating process.

As an example of message passing let’s consider a tremendous life event, such as a mother giving birth to a newborn child (process communication depicted in Figure 2 above). Process A, the mother, wants to announce to all other processes (i.e., the family), that she had a baby. To do so, Process A constructs the message and sends it to the message broker.

The message broker then takes that message and broadcasts it to all processes.

All other processes then receive the message from the message broker.

These processes want to show their support and happiness to Process A, so they construct a message saying their congratulations:

Figure 3: Each process sends an acknowledgment (ACK) message back through the message broker to notify Process A that the message is received. The ImageZMQ video streaming project by Jeff Bass uses this approach.

These responses are sent to the message broker which in turn sends them back to Process A (Figure 3).

This example is a dramatic simplification of message passing and message broker systems but should help you understand the general algorithm and the type of communication the processes are performing.

You can very easily get into the weeds studying these topics, including various distributed programming paradigms and types of messages/communication (1:1 communication, 1:many, broadcasts, centralized, distributed, broker-less etc.).

As long as you understand the basic concept that message passing allows processes to communicate (including processes on different machines) then you will be able to follow along with the rest of this post.

What is ZMQ?

Figure 4: The ZMQ library serves as the backbone for message passing in the ImageZMQ library. ImageZMQ is used for video streaming with OpenCV. Jeff Bass designed it for his Raspberry Pi network at his farm.

ZeroMQ, or simply ZMQ for short, is a high-performance asynchronous message passing library used in distributed systems.

RabbitMQ and ZeroMQ are two of the most widely used message passing systems.

However, ZeroMQ specifically focuses on high throughput and low latency applications — which is exactly how you can frame live video streaming.

When building a system to stream live videos over a network using OpenCV, you would want a system that focuses on:

  • High throughput: There will be new frames from the video stream coming in quickly.
  • Low latency: We'll want the frames distributed to all nodes on the system as soon as they are captured from the camera.

ZeroMQ also has the benefit of being extremely easy to both install and use.

Jeff Bass, the creator of ImageZMQ (which builds on ZMQ), chose to use ZMQ as the message passing library for these reasons — and I couldn’t agree with him more.

The ImageZMQ library

Figure 5: The ImageZMQ library is designed for streaming video efficiently over a network. It is a Python package and integrates with OpenCV.

Jeff Bass is the owner of Yin Yang Ranch, a permaculture farm in Southern California. He was one of the first people to join PyImageSearch Gurus, my flagship computer vision course. In the course and community he has been an active participant in many discussions around the Raspberry Pi.

Jeff has found that Raspberry Pis are perfect for computer vision and other tasks on his farm. They are inexpensive, readily available, and astoundingly resilient/reliable.

At PyImageConf 2018 Jeff spoke about his farm and more specifically about how he used Raspberry Pis and a central computer to manage data collection and analysis.

The heart of his project is a library that he put together called ImageZMQ.

ImageZMQ solves the problem of real-time streaming from the Raspberry Pis on his farm. It is based on ZMQ and works really well with OpenCV.

Plain and simple, it just works. And it works really reliably.

I’ve found it to be more reliable than alternatives such as GStreamer or FFMPEG streams. I’ve also had better luck with it than using RTSP streams.

You can learn the details of ImageZMQ by studying Jeff’s code on GitHub.

Jeff’s slides from PyImageConf 2018 are also available here.

In a few days, I’ll be posting my interview with Jeff Bass on the blog as well.

Let’s configure our clients and server with ImageZMQ and put them to work!

Configuring your system and installing required packages

Figure 6: To install ImageZMQ for video streaming, you’ll need Python, ZMQ, and OpenCV.

Installing ImageZMQ is quite easy.

First, let’s pip install a few packages into your Python virtual environment (assuming you’re using one):
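The exact commands aren't shown here, but a typical set looks like this (package names are my assumption; imagezmq itself is handled in the next step):

```shell
# Run these inside your virtual environment (e.g., workon py3cv4)
pip install opencv-contrib-python
pip install pyzmq
pip install imutils
```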

From there, clone the imagezmq  repo:
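```shell
git clone https://github.com/jeffbass/imagezmq.git
```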

You may then (1) copy or (2) sym-link the source directory into your virtual environment site-packages.

Let’s go with the sym-link option:
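Something along these lines (the site-packages path is illustrative; adjust the Python version and virtual environment name to match your system):

```shell
cd ~/.virtualenvs/py3cv4/lib/python3.5/site-packages
ln -s ~/imagezmq/imagezmq imagezmq
```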

Note: Be sure to use tab completion to ensure that paths are correctly entered.

As a third alternative to the two options discussed, you may place imagezmq  into each project folder in which you plan to use it.

Preparing clients for ImageZMQ

ImageZMQ must be installed on each client and the central server.

In this section, we’ll cover one important difference for clients.

Our code is going to use the hostname of the client to identify it. You could use the IP address in a string for identification, but setting a client’s hostname allows you to more easily identify the purpose of the client.

In this example, we’ll assume you are using a Raspberry Pi running Raspbian. Of course, your client could run Windows Embedded, Ubuntu, macOS, etc., but since our demo uses Raspberry Pis, let’s learn how to change the hostname on the RPi.

To change the hostname on your Raspberry Pi, fire up a terminal (this could be over an SSH connection if you’d like).

Then run the raspi-config  command:
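```shell
sudo raspi-config
```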

You’ll be presented with this terminal screen:

Figure 7: Configuring a Raspberry Pi hostname with raspi-config. Shown is the raspi-config home screen.

Navigate to “2 Network Options” and press enter.

Figure 8: Raspberry Pi raspi-config network settings page.

Then choose the option “N1 Hostname”.

Figure 9: Setting the Raspberry Pi hostname to something easily identifiable/memorable. Our video streaming with OpenCV and ImageZMQ script will use the hostname to identify Raspberry Pi clients.

You can now change your hostname and select “<Ok>”.

You will be prompted to reboot — a reboot is required.

I recommend naming your Raspberry Pis like this: pi-location . Here are a few examples:

  • pi-garage
  • pi-frontporch
  • pi-livingroom
  • pi-driveway
  • …you get the idea.

This way when you pull up your router page on your network, you’ll know what the Pi is for and its corresponding IP address. On some networks, you could even connect via SSH without providing the IP address like this:
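For example (hostname taken from the naming scheme above; whether this works depends on your network's mDNS/DNS setup):

```shell
ssh pi@pi-frontporch
```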

As you can see, it will likely save some time later.

Defining the client and server relationship

Figure 10: The client/server relationship for ImageZMQ video streaming with OpenCV.

Before we actually implement network video streaming with OpenCV, let’s first define the client/server relationship to ensure we’re on the same page and using the same terms:

  • Client: Responsible for capturing frames from a webcam using OpenCV and then sending the frames to the server.
  • Server: Accepts frames from all input clients.

You could argue back and forth as to which system is the client and which is the server.

For example, a system that is capturing frames via a webcam and then sending them elsewhere could be considered a server — the system is undoubtedly serving up frames.

Similarly, a system that accepts incoming data could very well be the client.

However, we are assuming:

  1. There is at least one (and likely many more) system responsible for capturing frames.
  2. There is only a single system used for actually receiving and processing those frames.

For these reasons, I prefer to think of the system sending the frames as the client and the system receiving/processing the frames as the server.

You may disagree with me, but that is the client-server terminology we’ll be using throughout the remainder of this tutorial.

Project structure

Be sure to grab the “Downloads” for today’s project.

From there, unzip the files and navigate into the project directory.

You may use the tree  command to inspect the structure of the project:
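The output would look roughly like this. The two Caffe filenames are the standard MobileNet SSD release names; the two script names are placeholders of my own, since the actual names ship with the "Downloads":

```shell
$ tree
.
├── MobileNetSSD_deploy.caffemodel
├── MobileNetSSD_deploy.prototxt
├── client.py
└── server.py
```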

Note: If you’re going with the third alternative discussed above, then you would need to place the imagezmq  source directory in the project as well.

The first two files listed in the project are the pre-trained Caffe MobileNet SSD object detection files. The server ( ) will take advantage of these Caffe files using OpenCV’s DNN module to perform object detection.

The  script will reside on each device which is sending a stream to the server. Later on, we’ll upload  onto each of the Pis (or another machine) on your network so they can send video frames to the central location.

Implementing the client OpenCV video streamer (i.e., video sender)

Let’s start by implementing the client which will be responsible for:

  1. Capturing frames from the camera (either USB or the RPi camera module)
  2. Sending the frames over the network via ImageZMQ

Open up the  file and insert the following code:

We start off by importing packages and modules on Lines 2-6:

  • Pay close attention here to see that we’re importing imagezmq  in our client-side script.
  • VideoStream  will be used to grab frames from our camera.
  • Our argparse  import will be used to process a command line argument containing the server’s IP address ( --server-ip  is parsed on Lines 9-12).
  • The socket  module of Python is simply used to grab the hostname of the Raspberry Pi.
  • Finally, time  will be used to allow our camera to warm up prior to sending frames.

Lines 16 and 17 simply create the imagezmq  sender  object and specify the IP address and port of the server. The IP address will come from the command line argument that we already established. I’ve found that port 5555  doesn’t usually have conflicts, so it is hardcoded. You could easily turn it into a command line argument if you need to as well.

Let’s initialize our video stream and start sending frames to the server:

Now, we’ll grab the hostname, storing the value as rpiName  (Line 21). Refer to “Preparing clients for ImageZMQ” above to set your hostname on a Raspberry Pi.

From there, our VideoStream  object is created to grab frames from our PiCamera. Alternatively, you can use any USB camera connected to the Pi by commenting Line 22 and uncommenting Line 23.

This is the point where you should also set your camera resolution. We are just going to use the maximum resolution so the argument is not provided. But if you find that there is a lag, you are likely sending too many pixels. If that is the case, you may reduce your resolution quite easily. Just pick from one of the resolutions available for the PiCamera V2 here: PiCamera ReadTheDocs. The second table is for V2.

Once you’ve chosen the resolution, edit Line 22 like this:
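A hypothetical sketch of that edit (the resolution tuple below is just one of the PiCamera V2 modes; shown as a comment since the line only runs on a Pi with a camera attached):

```python
# Hypothetical edit of Line 22, adding an explicit resolution keyword
# (imutils' VideoStream accepts a resolution tuple for the PiCamera):
#
#     vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start()
```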

Note: The resolution argument won’t make a difference for USB cameras since they are all implemented differently. As an alternative, you can insert a frame = imutils.resize(frame, width=320)  between Lines 28 and 29 to resize the frame  manually.

From there, a warmup sleep time of 2.0  seconds is set (Line 24).

Finally, our while  loop on Lines 26-29 grabs and sends the frames.
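Putting the walkthrough above together, a reconstruction of the client might look like this. The actual script is in the "Downloads"; the filename client.py and the sender_address helper are my own additions, while the ImageSender and VideoStream calls reflect the public imagezmq and imutils APIs:

```python
# Reconstructed sketch of the client (the real script ships with the
# post's "Downloads"). sender_address is an illustrative helper.
import argparse
import socket
import time


def sender_address(server_ip, port=5555):
    # imagezmq expects a ZMQ-style tcp:// connect string
    return "tcp://{}:{}".format(server_ip, port)


def main():
    # hardware-dependent imports live here so the helper above can be
    # used without OpenCV, imutils, or imagezmq installed
    import imagezmq
    from imutils.video import VideoStream

    ap = argparse.ArgumentParser()
    ap.add_argument("-s", "--server-ip", required=True,
                    help="ip address of the server to which the client will connect")
    args = vars(ap.parse_args())

    # create the sender object, connecting to the server on port 5555
    sender = imagezmq.ImageSender(connect_to=sender_address(args["server_ip"]))

    # the hostname identifies this Raspberry Pi to the server
    rpiName = socket.gethostname()

    # grab frames from the PiCamera; swap the two lines for a USB camera
    vs = VideoStream(usePiCamera=True).start()
    # vs = VideoStream(src=0).start()
    time.sleep(2.0)  # allow the camera sensor to warm up

    # grab and send frames forever
    while True:
        frame = vs.read()
        sender.send_image(rpiName, frame)


# On the Pi itself you would finish the script with:
#   if __name__ == "__main__":
#       main()
```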

As you can see, the client is quite simple and straightforward!

Let’s move on to the actual server.

Implementing the OpenCV video server (i.e., video receiver)

The live video server will be responsible for:

  1. Accepting incoming frames from multiple clients.
  2. Applying object detection to each of the incoming frames.
  3. Maintaining an “object count” for each of the frames (i.e., count the number of objects).

Let’s go ahead and implement the server — open up the  file and insert the following code:

On Lines 2-8 we import packages and libraries. In this script, most notably we’ll be using:

  • build_montages : To build a montage of all incoming frames.
  • imagezmq : For streaming video from clients. In our case, each client is a Raspberry Pi.
  • imutils : My package of OpenCV and other image processing convenience functions available on GitHub and PyPi.
  • cv2 : OpenCV’s DNN module will be used for deep learning object detection inference.

Are you wondering where  is? We usually use my VideoStream  class to read frames from a webcam. However, don’t forget that we’re using imagezmq  for streaming frames from clients. The server doesn’t have a camera directly wired to it.

Let’s process five command line arguments with argparse:

  • --prototxt : The path to our Caffe deep learning prototxt file.
  • --model : The path to our pre-trained Caffe deep learning model. I’ve provided MobileNet SSD in the “Downloads” but with some minor changes, you could elect to use an alternative model.
  • --confidence : Our confidence threshold to filter weak detections.
  • --montageW : This is not width in pixels. Rather, it is the number of columns for our montage. We’re going to stream from four Raspberry Pis today, so you could do 2×2, 4×1, or 1×4. You could also do, for example, 3×3 for nine clients, but five of the boxes would be empty.
  • --montageH : The number of rows for your montage. See the --montageW  explanation.

Let’s initialize our ImageHub  object along with our deep learning object detector:

Our server needs an ImageHub  to accept connections from each of the Raspberry Pis. It essentially uses sockets and ZMQ for receiving frames across the network (and sending back acknowledgments).

Our MobileNet SSD object CLASSES  are specified on Lines 29-32. If you aren’t familiar with the MobileNet Single Shot Detector, please refer to this blog post or Deep Learning for Computer Vision with Python.

From there we’ll instantiate our Caffe object detector on Line 36.

Initializations come next:

In today’s example, I’m only going to CONSIDER  three types of objects from the MobileNet SSD list of CLASSES . We’re considering (1) dogs, (2) persons, and (3) cars on Line 40.

We’ll soon use this CONSIDER  set to filter out other classes that we don’t care about such as chairs, plants, monitors, or sofas which don’t typically move and aren’t interesting for this security type project.

Line 41 initializes a dictionary for our object counts to be tracked in each video feed. Each count is initialized to zero.

A separate dictionary, frameDict  is initialized on Line 42. The frameDict  dictionary will contain the hostname key and the associated latest frame value.

Lines 47 and 48 are variables which help us determine when a Pi last sent a frame to the server. If it has been a while (i.e. there is a problem), we can get rid of the static, out of date image in our montage. The lastActive  dictionary will have hostname keys and timestamps for values.

Lines 53-55 are constants which help us to calculate whether a Pi is active. Line 55 computes our activity check period of 40  seconds. You can reduce this period of time by adjusting ESTIMATED_NUM_PIS  and ACTIVE_CHECK_PERIOD  on Lines 53 and 54.

Our mW  and mH  variables on Lines 59 and 60 represent the width and height (columns and rows) for our montage. These values are pulled directly from the command line args  dictionary.

Let’s loop over incoming streams from our clients and process the data!

We begin looping on Line 65.

Lines 68 and 69 grab an image from the imageHub  and send an ACK message. imageHub.recv_image  returns rpiName  (in our case, the hostname) and the video frame itself.
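That receive-and-ACK cycle can be sketched like this. The serve_frames wrapper is my own illustration; recv_image and send_reply are imagezmq's actual ImageHub API:

```python
# Sketch of the server's receive loop. On the real server, `hub` would be
# an imagezmq.ImageHub() and `handle` a callback that runs detection on
# each incoming frame.
def serve_frames(hub, handle):
    while True:
        # recv_image blocks until a client sends (rpiName, frame)
        (rpiName, frame) = hub.recv_image()
        # ACK so the client's REQ/REP cycle can continue
        hub.send_reply(b"OK")
        handle(rpiName, frame)
```

On the server this would be driven by something like `imageHub = imagezmq.ImageHub()` followed by `serve_frames(imageHub, process_frame)`.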

It is really as simple as that to receive frames from an ImageZMQ video stream!

Lines 73-78 perform housekeeping duties to determine when a Raspberry Pi was lastActive .

Let’s perform inference on a given incoming frame :

Lines 82-90 perform object detection on the frame :

From there, on Line 93 we reset the object counts to zero (we will be populating the dictionary with fresh count values shortly).

Let’s loop over the detections with the goal of (1) counting, and (2) drawing boxes around objects that we are considering:

On Line 96 we begin looping over each of the detections . Inside the loop, we proceed to:

  • Extract the object confidence  and filter out weak detections (Lines 99-103).
  • Grab the label idx  (Line 106) and ensure that the label is in the CONSIDER  set (Line 110). For each detection that has passed the two checks ( confidence  threshold and in CONSIDER ), we will:
    • Increment the objCount  for the respective object (Line 113).
    • Draw a rectangle  around the object (Lines 117-123).
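The filter-and-count portion of that loop can be sketched with plain Python. The (idx, confidence) tuples below are stand-ins for the values pulled out of the net's detections array (the real code also carries a bounding box for cv2.rectangle), and count_objects is a wrapper of my own:

```python
# Stand-in for the filtering/counting step of the detection loop.
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]
CONSIDER = set(["dog", "person", "car"])


def count_objects(detections, conf_threshold=0.4):
    # one zeroed counter per considered class, reset for every frame
    objCount = {obj: 0 for obj in CONSIDER}
    for (idx, confidence) in detections:
        # filter out weak detections
        if confidence < conf_threshold:
            continue
        # only count the classes we care about
        if CLASSES[idx] in CONSIDER:
            objCount[CLASSES[idx]] += 1
    return objCount
```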

Next, let’s annotate each frame with the hostname and object counts. We’ll also build a montage to display them in:

On Lines 126-133 we make two calls to cv2.putText  to draw the Raspberry Pi hostname and object counts.

From there we update our frameDict  with the frame  corresponding to the RPi hostname.

Lines 139-144 create and display a montage of our client frames. The montage will be mW  frames wide and mH  frames tall.

Keypresses are captured via Line 147.

The last block is responsible for checking our lastActive  timestamps for each client feed and removing frames from the montage that have stalled. Let’s see how it works:

There’s a lot going on in Lines 151-162. Let’s break it down:

  • We only perform a check if at least ACTIVE_CHECK_SECONDS  have passed (Line 151).
  • We loop over each key-value pair in lastActive  (Line 153):
    • If the device hasn’t been active recently (Line 156) we need to remove data (Lines 158 and 159). First we remove ( pop ) the rpiName  and timestamp from lastActive . Then the rpiName  and frame are removed from the frameDict .
  • The lastActiveCheck  is updated to the current time on Line 162.

Effectively this will help us get rid of expired frames (i.e. frames that are no longer real-time). This is really important if you are using the ImageHub server for a security application. Perhaps you are saving key motion events like a Digital Video Recorder (DVR). The worst thing that could happen if you don’t get rid of expired frames is that an intruder kills power to a client and you don’t realize the frame isn’t updating. Think James Bond or Jason Bourne sort of spy techniques.
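The cleanup logic can be sketched in isolation like this. The function wrapper is my own; the two dictionaries mirror lastActive and frameDict from the walkthrough:

```python
# Isolated sketch of the stale-feed cleanup described above
def purge_stale_feeds(lastActive, frameDict, now, active_seconds):
    # iterate over a snapshot of the items since we mutate the dicts
    for (rpiName, ts) in list(lastActive.items()):
        if now - ts > active_seconds:
            print("[INFO] lost connection to {}".format(rpiName))
            lastActive.pop(rpiName)
            frameDict.pop(rpiName, None)
```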

Last in the loop is a check to see if the "q"  key has been pressed — if so we break  from the loop and destroy all active montage windows (Lines 165-169).

Streaming video over network with OpenCV

Now that we’ve implemented both the client and the server, let’s put them to the test.

Make sure you use the “Downloads” section of this post to download the source code.

From there, upload the client to each of your Pis using SCP:
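For example (script name and addresses are illustrative; substitute your own):

```shell
scp client.py pi@192.168.1.10:~
scp client.py pi@192.168.1.11:~
```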

In this example, I’m using four Raspberry Pis, but four aren’t required — you can use more or fewer. Be sure to use applicable IP addresses for your network.

You also need to follow the installation instructions to install ImageZMQ on each Raspberry Pi. See the “Configuring your system and installing required packages” section in this blog post.

Before we start the clients, we must start the server. Let’s fire it up with the following command:
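Something like the following (the script and model filenames are assumed; the 2×2 montage matches the four Pis used in this example):

```shell
python server.py --prototxt MobileNetSSD_deploy.prototxt \
    --model MobileNetSSD_deploy.caffemodel --montageW 2 --montageH 2
```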

Once your server is running, go ahead and start each client pointing to the server. Here is what you need to do on each client, step-by-step:

  1. Open an SSH connection to the client: ssh pi@
  2. Start screen on the client: screen
  3. Source your profile: source ~/.profile
  4. Activate your environment: workon py3cv4
  5. Install ImageZMQ using instructions in “Configuring your system and installing required packages”.
  6. Run the client: python --server-ip
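With illustrative values filled in, the session on one client would look roughly like this (the addresses, environment name, and client.py filename are placeholders; substitute your own):

```shell
ssh pi@192.168.1.10
screen
source ~/.profile
workon py3cv4
python client.py --server-ip 192.168.1.5
```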

As an alternative to these steps, you may start the client script on reboot.

Automagically, your server will start bringing in frames from each of your Pis. Each frame that comes in is passed through the MobileNet SSD. Here’s a quick demo of the result:

A full video demo can be seen below:

What’s next?

Is your brain spinning with new Raspberry Pi project ideas right now?

The Raspberry Pi is my favorite community driven product for Computer Vision, IoT, and Edge Computing.

The possibilities with the Raspberry Pi are truly endless:

  • Maybe you have a video streaming idea based on this post.
  • Or perhaps you want to learn about deep learning with the Raspberry Pi.
  • Interested in robotics? Why not build a small computer vision-enabled robot or self-driving RC car?
  • Face recognition, classroom attendance, and security? All possible.

I’ve been so excited about the Raspberry Pi that I decided to write a book with over 40 practical, hands-on chapters that you’ll be able to learn from and hack with.

Inside the book, I’ll be sharing my personal tips and tricks for working with the Raspberry Pi (you can apply them to other resource-constrained devices too). You can view the full Raspberry Pi for Computer Vision table of contents here.

The book is currently in development. That said, you can reserve your copy by pre-ordering now and get a great deal on my other books/courses.

The pre-order sale ends on Friday, May 10th, 2019 at 10:00AM EDT. Don’t miss out on these huge savings!

Summary


In this tutorial, you learned how to stream video over a network using OpenCV and the ImageZMQ library.

Instead of relying on IP cameras or FFMPEG/GStreamer, we used a simple webcam and a Raspberry Pi to capture input frames and then stream them to a more powerful machine for additional processing using a distributed system concept called message passing.

Thanks to Jeff Bass’ hard work (the creator of ImageZMQ) our implementation required only a few lines of code.

If you are ever in a situation where you need to stream live video over a network, definitely give ImageZMQ a try — I think you’ll find it super intuitive and easy to use.

I’ll be back in a few days with an interview with Jeff Bass as well!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!




139 Responses to Live video streaming over network with OpenCV and ImageZMQ

  1. wally April 15, 2019 at 11:15 am #

    How does this compare with MQTT, performance wise?

    I use paho-mqtt MQTT in Python and mosquitto MQTT broker running locally on the Pi to pass images among separate processes or to processes running on different machines.

    MQTT has the “virtue” that you can buy “cloud broker” services from IBM, HiveMQ, etc. and get remote access without needing to run your own servers exposed to the internet

    In any event, thanks for posting this. Looking at the imagezmq GitHub, it looks to be potentially very useful. I’d encourage Jeff to get it pip install-able for wider usage.

    If I can free up some time, I might try to take this sample and modify it to use MQTT for comparison.

    • Adrian Rosebrock April 15, 2019 at 1:22 pm #

      ImageZMQ was designed with ZMQ in mind. It would definitely require some updating to get it to run on MQTT and gather comparisons.

      • wally April 15, 2019 at 4:19 pm #

        I have MQTT sending and receiving jpg images. I was thinking of “dropping” my MQTT code into the timing test code on the ImageZMQ GitHub to compare.

        I’ve downloaded the ImageZMQ, just need to find some time to do it.

        Sorry I wasn’t clearer, your previous OpenVINO tutorial solved some problems for me that has kept me busy implementing the solution and testing.

    • Walter Krämbring April 17, 2019 at 4:13 am #

      Hi Wally,
      I did some performance tests this morning comparing MQTT (that I use for the same purpose as you, sending images across my network) with imageZMQ, in this case using send_jpg instead of send_image since I needed to send the image as a buffer

      In the tests I sent the same 95k jpg image ten times as fast as possible from one Pi to another over my wireless lan connections for both

      The results show that there was no significant difference at all; it took around 3 seconds for both technologies

      mqtt DONE! 2.3724958896636963
      mqtt DONE! 3.370811939239502
      mqtt DONE! 4.014646530151367
      mqtt DONE! 2.674704074859619
      mqtt DONE! 3.1588287353515625

      zmq DONE! 2.7648768424987793
      zmq DONE! 5.127021312713623
      zmq DONE! 3.3753623962402344
      zmq DONE! 2.6726326942443848
      zmq DONE! 3.2702481746673584

      • Adrian Rosebrock April 17, 2019 at 2:01 pm #

        Thank you for sharing, Walter!

    • giuseppe April 17, 2019 at 5:32 pm #

      same concepts. mqtt can transport whatever you want. the core part is the broker (there are also a lot of mqtt brokers out there). you could use an mqtt broker instead of zmq. apache has many adapters to support any kind of protocol for the broker.

  2. xiu April 15, 2019 at 11:16 am #

    Thanks for your great work!

  3. Emmanuel April 15, 2019 at 12:01 pm #

    Awesome content Adrian. What machine did you use for processing on the server side? Was it GPU enabled because running object detection on frames from 4 cameras can be quite computationally expensive.

    • Adrian Rosebrock April 15, 2019 at 1:21 pm #

      It was actually a MacBook Pro running YOLO via the CPU!

    • Marshall September 24, 2019 at 2:10 pm #

      I know you asked Adrian, but wanted to share this is working on a Raspberry Pi CM3 on ethernet as well as a Raspberry Pi 3A+ over wifi. I’m getting about 1 frame every 4 seconds with each pi camera streaming at max resolution like Adrian has.

      It’ll max out all 4 cores of the cpu and use about 400MB of RAM.

      • Adrian Rosebrock September 25, 2019 at 10:33 am #

        Thanks for sharing, Marshall!

  4. David Bonn April 15, 2019 at 12:11 pm #

    Great post, Adrian,

    I loved the Jeff Bass’ presentation at PyImageConf.

    One of these days I am going to finally get furiously angry with opencv’s USB Camera interface and start using libuvc. The Pi Camera interface by comparison is very clean and straightforward and consistent.

    • Adrian Rosebrock April 15, 2019 at 1:21 pm #

      Thanks David. I’ll have an interview publishing with Jeff on Wednesday as well 🙂

  5. Jorge Rubi Capaceti April 15, 2019 at 12:28 pm #

    Is it possible to send it via http to see it on a web page?

    • Adrian Rosebrock April 15, 2019 at 1:20 pm #

      I’ll be covering that in my upcoming Raspberry Pi for Computer Vision book, stay tuned!

  6. Jainal April 15, 2019 at 1:25 pm #

    Hi i want to ask you a question say i implemented this project now what i want to do is store the output to a database or simply say i am running open cv face recognition on client and when a person is recognized i want a json of his name in real time streaming. So what should i do?

    • Adrian Rosebrock April 18, 2019 at 7:09 am #

      You could simply encode the name of the person as a JSON string. That’s not really a computer vision question though. I would instead suggest you read up on Python programming basics, including JSON serialization and database fundamentals. That will better enable you to complete your project.
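      As a sketch of that idea (the field names and the `name_to_json` helper here are my own, hypothetical choices), serializing a recognized name is a couple of lines with Python's standard library:

```python
import json
import time

def name_to_json(name):
    # package the recognized person's name plus a timestamp as a JSON string,
    # ready to be stored in a database or sent alongside the stream
    return json.dumps({"name": name, "timestamp": time.time()})

payload = name_to_json("adrian")  # a JSON string
record = json.loads(payload)      # round-trip back to a Python dict
```

      On the receiving end, `json.loads` recovers the dict so you can insert it into whatever database you choose.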

  7. alba April 15, 2019 at 1:42 pm #

    Very nice project, but I have an error message: module ‘imagezmq’ has no attribute ‘ImageHub’ when I launch the server.

    Do you have any idea?

    • Adrian Rosebrock April 18, 2019 at 7:08 am #

      It sounds like you have not properly installed the “imagezmq” library. Double-check your sym-links or simply put the “imagezmq” Python module in the working directory of your project.

      • Dave A April 21, 2019 at 3:26 pm #

        I was having the same issue running the server. Putting the imagezmq folder in the working folder solved the issue.

      • venkat November 21, 2019 at 7:30 am #

        Please let me know how to install and configure imagezmq.

    • Hans M. April 19, 2019 at 11:03 pm #

      I had the same problem; I solved it in the following way:
      1. Copy the imagezmq folder inside the project directory.
      2. In the file, change:

      import imagezmq

      to:

      from imagezmq import imagezmq

      I hope this is helpful.

      • Marshall September 20, 2019 at 2:44 pm #

        I had the same issue and this fixed mine as well.

    • Geoff April 25, 2019 at 4:05 am #

      Alba, did you resolve this? I have the same issue.

      • Geoff April 25, 2019 at 7:46 pm #

        It’s ok — Hans M’s solution worked for me.

        • Adrian Rosebrock May 1, 2019 at 12:06 pm #

          Awesome, glad to hear it Geoff!

  8. Ehsan April 15, 2019 at 3:19 pm #

    Awesome Adrian. It’s great!
    I have two questions about it.
    1. Is it possible to share multiple IP cameras from one Raspberry Pi? My memory is only 1GB.
    2. Can I put a password on my app and share it on the Internet?

    • Adrian Rosebrock April 18, 2019 at 7:05 am #

      1. You mean have two cameras on a single Pi? Yes, absolutely. You would just have two client scripts running on the Pi, each accessing its respective camera.
      2. Yes, but you would need to code any authentication yourself.

    • metimmee April 19, 2019 at 5:44 am #

      I can confirm this works, I am streaming from a picam and a USB camera.

    • metimmee April 19, 2019 at 8:09 am #

      I should add that I changed the client code very slightly to:

      while True:
          # read the frame from the camera and send it to the server
          frame = vs.read()
          if frame is not None:
              sender.send_image(rpiName, frame)
          else:
              print("no frame to send {}".format(time.time()))

      I added a 2nd USB camera along with the picamera, 3 cameras in total, which was mostly fine, but I ran into what was either something specific to the USB camera I was using or USB bus limits. I suspect it was the former. By confirming there is a frame, the client doesn’t crash when there is no frame ready.

      I also added time to the frame overlay on the server so it is more obvious when new frames come in on static scenes.

      I suspect if hosting multiple cameras, it’d be better to read and send them consecutively from a single process rather than have multiple instances running. However, this is a great tutorial to build from.

  9. Rasoul April 15, 2019 at 3:46 pm #

    Hi Adrian and thanks for your Awesome website, I have a question:
    Is it possible to reduce the bandwidth of the video stream by applying some compression to it? I mean choosing, for example, MJPEG or MPEG4 or some other format for the streamed video.
    thanks a lot

    • Adrian Rosebrock April 18, 2019 at 7:04 am #

      Yes, smaller frames (in terms of spatial dimensions), combined with compression, will reduce network load and therefore latency. You just need to be careful that the compression can still happen in real time on the client.

  10. wally April 15, 2019 at 4:21 pm #

    I have MQTT sending and receiving jpg images. I was thinking of “dropping” my MQTT code into the timing test code on the ImageZMQ GitHub to compare.

    I’ve downloaded the ImageZMQ, just need to find some time to do it.

    Sorry I wasn’t clearer; your previous OpenVINO tutorial solved some problems for me that have kept me busy implementing and testing the solution.

  11. Brett Andersson April 15, 2019 at 7:27 pm #

    Thanks for the great article! I’m looking at using WebRTC and the open source Kurento server to stream content from a laptop camera, apply an OpenCV filter on the server side, then stream the results out to a browser endpoint. You mentioned you had some trouble with RTSP? Was this just due to some cameras not being able to publish on that protocol? Are there other hurdles with RTSP / RTC that are important to consider?

    • Adrian Rosebrock April 17, 2019 at 2:13 pm #

      Hey Brett — see my reply to Huguens Jean.

  12. Jason April 15, 2019 at 7:35 pm #

    Any suggestions on making the pi weatherproof for outdoor applications?

  13. Anthony The Koala April 15, 2019 at 10:30 pm #

    I tried to install imagezmq from your colleague’s site using the git command and got an error.

    Any idea?

    Thank you,
    Anthony of Sydney

  14. Tomas April 16, 2019 at 12:53 am #

    This is a very useful tool!

    One question: if the processing speed on the server is slow, will it take the latest frame from the queue on the next loop? Or the second one?

    • Adrian Rosebrock April 17, 2019 at 2:11 pm #

      Sorry, I’m not sure what you mean? Could you elaborate?

  15. Li Hsing April 16, 2019 at 3:12 am #

    I do agree with you after my work with FFMPEG and GStreamer.
    Transmitting OpenCV frames over the internet works very nicely and will greatly decrease hardware cost.
    Thank you so much, great Adrian.

    • Adrian Rosebrock April 17, 2019 at 2:11 pm #

      You are welcome, I’m glad you enjoyed the tutorial!

  16. Walter Krämbring April 16, 2019 at 6:18 am #

    Dear Adrian,

    This works basically as you have presented it… but it is really loading the Pi heavily; the CPU load goes through the roof.

    And still, OpenCV seems unable to keep the frame rate at the same level as the other video software (Motion) I’m currently using in my solution. With Motion, the CPU load on the Pi is just around 15% at a frame rate of 10 fps, with motion detection enabled.

    Motion -> MQTT -> my python dnn analyzer server

    To send images via MQTT, I just send them as byte arrays and it seems to be very fast

    I’m wondering if there is a performance advantage in using ImageZMQ instead of pure MQTT. I noticed that ZMQ uses Cython, which I believe should be great for performance; I’m not sure if MQTT does the same.

    Anyway, my best regards & thanks for all great writing & sharing


    • Adrian Rosebrock April 17, 2019 at 2:10 pm #

      Hey Walter, I see you already replied to the thread with Wally regarding MQTT. Thanks for doing that.

      As far as CPU usage, are you sure it’s just not the threading of the VideoStream? You can reduce load by using “cv2.VideoCapture” and only polling frames when you want them.

  17. Salazar April 16, 2019 at 7:21 am #

    Why not use IP cameras? They are cheap and compact, and don’t require a Raspberry Pi on the far end.
    I’ve just checked my IP cam module (ONVIF) with your code from the face-detection post.
    It works like a charm.

    Thank you.

    • Adrian Rosebrock April 17, 2019 at 2:08 pm #

      Take a look at the intro of the post. If you can use an IP camera, great, but sometimes it’s not possible.

    • anoruo peter September 5, 2019 at 4:32 pm #

      I’ve been trying to do the same. Got it to work, but the image displayed on the web just freezes and only updates after minutes. How were you able to keep your web display updating, if you managed it?

  18. Huguens Jean April 16, 2019 at 8:41 am #

    Hey Adrian,

    What were some of the issues you were seeing when you tried streaming with RTSP? I’m assuming this should also work on a TX2 or Nano.

    • Adrian Rosebrock April 17, 2019 at 2:08 pm #

      Yes, this will also work on the TX2, Nano, Coral, etc.

      As for RTSP, gstreamer, etc., you end up going down a rabbit hole of trying to get the correct parameters to work, having them work one day, and fail the next. I hated debugging it. ImageZMQ makes it far easier and reliable.

  19. Yohakim Samponu April 16, 2019 at 10:15 am #

    Hi Adrian, thanks for this post. But how about forwarding the montage or cv2.imshow output, including the detected rectangles, from the server to a website (like custom HTML with a dashboard and video player)?

  20. Abkul April 16, 2019 at 1:03 pm #

    Inspiring post.

    I want to get images from camera located at a remote site over a GSM enabled router. Will it work? What is needed?

    Kindly shed more light on the configuration of the server side.

    Which Python night vision library can you advise? Will you be covering this in the “wildlife monitoring” section of the book?

    • Adrian Rosebrock April 17, 2019 at 2:06 pm #

      1. Yes, I will be covering wildlife detection inside the book, including wildlife at night. If you haven’t pre-ordered your copy yet, you can use this page to do so.

      2. As for the GSM router, as long as you have a publicly accessible IP address it will work.

  21. leonard bogdonoff April 16, 2019 at 1:13 pm #

    Hey Adrian! Thank you for continuing to write articles like this!

    Do you think it would be possible for a Raspberry Pi with the Movidius to have enough power to process the inputs from the streaming cameras?

    In your article on OpenVino, it seems you are processing a similar payload as would be received in this article.

    • Adrian Rosebrock April 17, 2019 at 2:05 pm #

      That really depends on:

      1. How computationally expensive the model is
      2. The # of incoming streams

      If you’re going to use the Raspberry Pi + NCS I guess my question would be why not just run inference there on the Pi instead of sending it over the network?

  22. graham April 16, 2019 at 2:18 pm #

    Any idea how to stop the server hanging at (rpiName, frame) = hub.recv_image()?

    • Adrian Rosebrock April 17, 2019 at 2:04 pm #

      Double-check that your server is running ZMQ. Secondly, make sure your IP address and port numbers are correct.

  23. leo April 16, 2019 at 3:49 pm #

    Hello, I’ve been reading your tutorials and they really helped me learn to use OpenCV. Could you suggest a way of streaming the captured video to a webpage? Thanks in advance.

    • Adrian Rosebrock April 17, 2019 at 2:04 pm #

      I’m actually covering that exact topic in my Raspberry Pi for Computer Vision book. You can find the Kickstarter page for the book here if you would like to pre-order a copy.

  24. Matthew Pottinger April 16, 2019 at 7:33 pm #

    Thanks for this. It is interesting to me not just for the video streaming part, but I had never heard of zeromq and libraries like it before.

    Whenever I wanted to do something like this, my preferred solution was to use Redis, which is also very simple and reliable. Not sure if it is as fast.

    Also, the smoothest, high fps video for opencv setup I have ever tried was with Webrtc.

    However, to take advantage of hardware acceleration, etc., it was a very ugly, complicated setup using JavaScript in the browser on both ends: a node server for WebRTC, a node server to receive frames out of the browser on the server end, and a Redis server. Very ugly and brittle, but the video was super smooth.

    I am reading up as much as possible on zeromq and nanomsg, thanks to you, because tcp can be a pain to work with directly.

    • Adrian Rosebrock April 17, 2019 at 2:03 pm #

      Redis is amazing, I’m a HUGE fan of Redis. You can even design entire message passing/broker libraries around Redis as well.

  25. Vinay April 17, 2019 at 12:27 am #

    Hello Adrian,

    Thanks for the post, very helpful.

    Is it possible to send video from a laptop camera to AWS EC2 for processing and show the processed video back on the same laptop?


    • Adrian Rosebrock April 17, 2019 at 2:02 pm #

      Yes, you can use this exact method, actually. Just supply the IP address of your AWS server and ensure the ZMQ server is running on it.

  26. waked April 17, 2019 at 12:55 am #

    Awesome work Adrian! My project is the same, but I’m using an IP camera. May I use the same code as the Raspberry Pi, and if so, what changes must be made?

    • Adrian Rosebrock April 17, 2019 at 2:01 pm #

      How do you typically access your IP camera? Do you/have you used OpenCV to access it before?

  27. Kurt April 17, 2019 at 6:59 am #

    I failed at the sym-link option

    cd ~/.virtualenvs/lib/python3.5/site-packages

    No such file or directory

    • Kurt April 17, 2019 at 7:23 am #

      My mistake,

      I needed to add the “cv” (my name for my virtual environment) in the path


      …still learning…

      • Adrian Rosebrock April 17, 2019 at 2:00 pm #

        Congrats on resolving the issue, Kurt!

      • Anurag May 5, 2019 at 7:14 am #

        Please explain in detail as I am stuck with the same error.

  28. Tin W Lam April 17, 2019 at 11:11 am #

    This is really cool, thanks for the article. I am also wondering: what is the best way to push frames from ZMQ to web browsers? Any recommendations? Thank you!

    • Adrian Rosebrock April 17, 2019 at 2:00 pm #

      I’m actually covering that exact project in my Raspberry Pi for Computer Vision book. You can find the Kickstarter page for the book here.

  29. zac April 17, 2019 at 6:52 pm #

    Hi Adrian, thanks for the tutorial. I’m planning to do something similar, but streaming iPhone video at very high fps to my laptop so I can prototype algorithms faster. Do you have any experience with, or comparison to, LCM? It seems it targets real-time scenarios and uses UDP multicast, so theoretically it should have lower latency.

    • Adrian Rosebrock April 18, 2019 at 6:29 am #

      Sorry, I do not have any experience with that library so I can’t really comment there.

  30. abdessamad gori April 17, 2019 at 8:58 pm #

    Thanks for your great work!
    I tried this work and the result was great, but only when the client and the server were implemented on one device, i.e. on one computer. The process did not work for me on the Raspberry Pi. I even experimented with an Ubuntu computer and a Windows computer and it did not work.
    But if you run them together, they work on either Ubuntu or Windows.
    I have a problem with the communication.

  31. Steve Silvi April 20, 2019 at 4:29 pm #

    Hi Adrian,
    I’ve managed to get the RPi client and Ubuntu 18.04 server communicating properly, but my Pi camera is mounted upside down and I’m not sure if I need to import the PiCamera module and use the –vflip switch. Would this be a way to flip the video image? If so, where in the script should I place the code? Thanks.

    • Adrian Rosebrock April 25, 2019 at 9:14 am #

      You could control it directly via the “vflip” switch OR you could use the “cv2.flip” function. Either will work.

  32. Anthony Muse April 25, 2019 at 8:06 pm #

    where do i find the stuff?

    • Adrian Rosebrock May 1, 2019 at 12:05 pm #

      You install it via pip:

      $ pip install imutils

  33. Rean April 30, 2019 at 2:20 am #

    Hi Adrian,

    Thanks for your tutorial! I made it success in my Env.(Raspberry Pi 3 B+ and up-board square), I would like to ask you several questions.

    1. What is the bottleneck causing the low frame rate?
    2. If I use an AI accelerator (like the NCS or Edge TPU) for inference, can I get a higher frame rate?
    3. Is the Jetson Nano a good platform for the host?

    Thanks and regards,


    • Adrian Rosebrock May 1, 2019 at 11:36 am #

      The bottleneck here is running the object detector on the CPU. If your network is poor then latency could become the bottleneck. If you’re streaming to a central device you wouldn’t use a NCS, Edge TPU, or Nano. You would just perform inference on the device itself.

  34. Madel May 4, 2019 at 7:03 am #

    Great tutorial
    but I have a question
    what is the video format (codec) used?

  35. Sam May 15, 2019 at 12:40 am #

    Hey Adrian, as always, a great tutorial!

    I am working on a project in which I need to live stream the feed from an RTSP IP cam to the browser. I am also performing some recognition and detection on the feed through OpenCV on the backend. The problem is I have to put this code on a VM and show these feeds to clients in their browsers. I tried different players, but none is able to recognize the format of the stream. The only option I found was to convert the stream using FFMPEG and then display it in the browser. But doing this process simultaneously is something I am stuck at. Do you have any suggestions?

  36. Froohoo May 15, 2019 at 4:46 pm #

    Your tutorial is awesome. Thanks Adrian, you are a prince. This one helped me tremendously in getting my automated aircraft annotator up and working in only a week. I’d probably still be figuring out step 1 if it wasn’t for your help.

    • Adrian Rosebrock May 23, 2019 at 10:28 am #

      Holy cow. This is one of the neatest projects I have seen. Congratulations on a successful project!

  37. John May 15, 2019 at 8:08 pm #

    Hi Adrian,

    Having some trouble running the code. I first run the server script as instructed, followed by running the client script on the Pi. I am using a Windows 10 machine as the server. The problem is that after running the client script, nothing is being returned on the server side.

    The script appears to hang at the command: “sender.send_image(rpiName, frame)”

    I have checked that the server IP is entered correctly. Any suggestions on how to debug?

    Thanks in advance!

  38. Jason May 17, 2019 at 5:17 am #

    Hi. Just wondering if there is a way to add basic error handling to these scripts. I’ve noticed that if the client is started before the server it will simply block on send_image() until the server is up and will work fine.

    However if the server is killed (for whatever reason) and then restarted it no longer works without killing all the clients and restarting them as well.

    I guess I can have a script running on the server – monitoring the status of the script from this blog post and if something goes wrong ssh into each client and kill it and restart.

    It seems to be an issue in imageHub.recv_image() that stops the server working after a restart?! Any ideas on how to make this a robust solution?

    • Adrian Rosebrock May 23, 2019 at 10:15 am #

      Hey Jason — we’re actually including a few updates to the ImageZMQ library inside Raspberry Pi for Computer Vision to make it a bit more robust.

  39. Miguel May 19, 2019 at 12:54 pm #

    Hi ! Awesome tutorial!

    What is the difference between this and using the Motion project? Does that still use message passing?

    • Adrian Rosebrock May 23, 2019 at 9:50 am #

      Which motion project are you referring to?

  40. Rean May 20, 2019 at 11:09 pm #

    Hi Adrian,

    Is it possible to use a Raspberry Pi Zero W as the camera?

    Or, what is the minimum requirement for a video streamer in the Pi series?

    Thanks and Regards,


    • Adrian Rosebrock May 23, 2019 at 9:43 am #

      You can technically use a Pi Zero W but streaming would be a bit slower. I recommend a 3B+.

      • Rean May 24, 2019 at 5:25 am #

        understood, thanks!

  41. Mostafa May 21, 2019 at 4:47 pm #

    Thanks for that great work. Could I use that code inside a commercial product?

  42. Kevin May 23, 2019 at 6:16 pm #

    I have been having this issue no matter what I try to do with sending video from one machine to another machine, and I have not seen this asked in the comments section. Whenever I run the client and server on the same machine I have no problems the image frame opens up and displays the video. However every time I try to host the server on one machine and run the client on another machine the image frame never shows up, and in this case the server doesn’t even see the client connecting.

    I am entering the correct IP of the server in the arguments, what am I doing wrong?

    Thank you in advance.

    • Adrian Rosebrock May 30, 2019 at 9:44 am #

      Are both machines on the same network? And have you launched ImageZMQ on the machines?

  43. Nando Junior May 24, 2019 at 2:14 pm #

    Hello, Adrian. How are you? I really liked this project. One question: if I wanted to use an IP camera, how would I configure this client code? Another question: is it possible to use the people counting system in this example too? Thank you.

    • Adrian Rosebrock May 30, 2019 at 9:33 am #

      I don’t have any examples of IP cameras yet. But you’ll be able to use people counting inside my book, Raspberry Pi for Computer Vision.

      • Nando Junior June 5, 2019 at 9:40 am #

        How do I keep the counter of people and objects from resetting to zero?

  44. Phil June 1, 2019 at 7:11 pm #

    I personally do not like the low framerate associated with ZeroMQ.

    In my own work, I’ve found that ROS+raspicam_node has been a very frictionless system that offers pretty low latency, and will support native 30fps framerates. The only headache I guess will be using ROS if you’re not already familiar.

    • Adrian Rosebrock June 6, 2019 at 8:31 am #

      ROS can be quite a pain to work with. Do you happen to know how ROS is working with the RPi camera? Is it a native integration or something involving a service like gstreamer?

  45. Jahir Mo June 13, 2019 at 2:40 pm #

    Great project, my friend. This helped me out of my issue with recognizing faces faster on the Raspberry Pi 3B+. Thank you! My best wishes!

    • Adrian Rosebrock June 19, 2019 at 2:24 pm #

      Thanks Jahir, I’m glad it helped!

  46. Jahir Moreno June 17, 2019 at 7:49 pm #

    Hi Adrian, thanks for the tutorial. I have a problem sending the frames from the Pi to my server. When I first run my server, it displays “starting video stream…” in the terminal; then I run the client script on the Pi, but my server never says “receiving video from…”. It works on Windows, but not on Ubuntu 18.04. D:

    • Adrian Rosebrock June 19, 2019 at 1:58 pm #

      Try checking the IP address of the server as it sounds like you’re using two different systems.

  47. Olouge July 2, 2019 at 7:49 pm #

    Hello, thank you for a great solution. I wish to ask: how can one stream these videos to an online web page or shared hosting server, where someone can log in and see live video feeds of some remote place by opening a web page displaying the feeds?

    Any ideas please?

  48. jenny July 3, 2019 at 1:18 pm #

    Hello Adrian, thank you for your tutorial. I have a face recognition project and I want to use 15 cameras. My system info is: GPU 1080 Ti, CPU Core i7, 16GB memory.
    My goal is to have at least 10 cameras doing real-time face recognition, and recognition speed in this project is very important.
    Is this system sufficient for this project? What is the minimum system I need?
    Are ZMQ and 10 Raspberry Pis able to give me good performance?
    I’m using IP cameras.

    • Adrian Rosebrock July 4, 2019 at 10:15 am #

      Hey Jenny — I’d be happy to help and provide suggestions for this project, but I would definitely suggest you pickup a copy of Raspberry Pi for Computer Vision first. That book will teach you how to build such a project and it will enable me to provide you with targeted help on your project. Thank you!

      • jenny October 10, 2019 at 4:30 am #

        I could only add 4 cameras and CPU usage is 80%. Do I need more CPUs, or is it my programming problem? Please help me.

  49. Fabio July 17, 2019 at 1:33 am #

    Hi Adrian,
    I’ve found your post and what you’ve done is amazing! I’m researching in the field of computer vision and looking for some way to use OpenPose for tracking body movements. I have been facing some problems processing images with this library, as it takes 20 sec to process each frame. I would like to set up a GPU in the cloud and then be able to run it in real time. Is that possible using imagezmq?
    Cheers Fabio

    • Adrian Rosebrock July 25, 2019 at 9:52 am #

      Yes, basically you would use ImageZMQ to stream frames from your webcam to a system running a GPU. The GPU machine will then run the OpenPose models.

  50. vaibhav July 30, 2019 at 4:34 am #

    Hey Adrian, is it possible to send results back from the server to a client (suppose I want to send the number of persons in the room back to the client)? How can I do that?

  51. Christopher Impey July 30, 2019 at 6:17 pm #

    I have a question. I am trying to integrate this with the Kivy toolset. I know how to get a screen to display an OpenCV video stream; however, when trying to work with this library I get a bunch of errors. Is there any advice you might be able to offer with this?

    • Adrian Rosebrock August 7, 2019 at 12:50 pm #

      Sorry, I don’t do much work with Kivy. Good luck resolving the issue!

  52. Ashu August 5, 2019 at 3:53 am #

    Dear Adrian,

    Thanks a lot for making this tutorial. I am able to use this program without any problem. I have a few questions to ask.

    i) How can I set up the server and client if they are not connected to the same WiFi network? For example, my client is an RPi3 and the server is AWS EC2. Which IP of my AWS instance should go into --server-ip?
    ii) Do I need to open port 5555?


    • Adrian Rosebrock August 7, 2019 at 12:26 pm #

      1. You’ll want to check your EC2 dashboard to find the IP address of your EC2 instance.

      2. Make sure the port is available for use. Check your security configurations and make sure port 5555 is accessible.

      • Navaneeth October 9, 2019 at 5:44 am #

        Hi Adrian, I have done both as well but still couldn’t get the data streaming over the network. What have I done wrong?

  53. Ashu August 8, 2019 at 5:12 am #

    Dear Adrian,

    I have another question.
    Suppose for some reason, I stop my server in between but my client is running. When I restart the server, it cannot receive frames from the client. Do you have any idea about that? If I restart the server, I also have to restart the client.
    How to deal with this problem?
    Thank you in advance.

  54. Ollie Graham August 21, 2019 at 10:05 am #

    If anyone is looking for a way to send images to a web page / server rather than to a Python client (as I was for an application I was working on), then I highly recommend checking out the Starlette Python web framework.

    This framework is asynchronous, which makes it very simple to create a frame generator (such as receiving jpegs from imagezmq’s ImageHub) and pass it to Starlette’s StreamingResponse module to feed the ‘src’ tag of an html img with the frames from any imagezmq client.

    Lots of possibilities!

    • Adrian Rosebrock September 5, 2019 at 10:58 am #

      Thanks for sharing!

  55. Sambhavi September 4, 2019 at 2:17 am #

    Hello Adrian,

    Interesting post, just what I needed. Thanks much. I was using regular VideoCapture and when the number of cameras increased, I could feel the pain. They are IP cameras and I use the RTSP protocol to get the feed. Let me try using ImageZMQ and see how it goes; I will share what I find soon.

  56. Miguel September 11, 2019 at 8:49 am #

    Thanks for the detailed post!
    Two questions, if I may ask.

    1. Did you test the latency for your set-up?

    2. I want to do a similar project but I want the server to be outside the “home network”, so that anyone in the world with an internet connection can see the streaming. What would I need to change?


  57. Rizwan Ishaq September 20, 2019 at 12:10 pm #

    Hi Adrian,

    Thanks for such a good post.

    I have a question about the comparison of gRPC and ZMQ: which one do you prefer?

  58. Andy Woods September 25, 2019 at 5:28 am #

    Hi Adrian,

    thanks for this great tutorial.

    Any tips on sharing data over a shared-internet (USB) connection? I’ve posted this question on

    best, Andy.

    • Adrian Rosebrock September 25, 2019 at 8:51 am #

      Sorry, I haven’t tried using a shared USB connection. As long as you know the IP address of both machines it should still work without an issue. Just make sure the shared connection has its own IP address on the network.

  59. Hind October 11, 2019 at 7:45 am #


    Thanks for your great work!

    I tried this code using an external webcam (a Logitech C920) and it worked fine! Then, unfortunately, it started using the laptop’s built-in camera, ignoring the Logitech.
    Do you have any idea how to force it to use the Logitech only?

    Best, Hind

  60. Robin October 30, 2019 at 3:47 am #

    Inspired by this article, I implemented similar functionality with a few changes:

    * MQTT – for broad IOT compatibility
    * Browser viewing using streamlit

    The repo is on

    • Adrian Rosebrock November 7, 2019 at 10:34 am #

      Thanks for sharing, Robin!

  61. venkat November 29, 2019 at 4:42 am #

    In this source code, is a Raspberry Pi compulsorily required or not? Please let me know.

    • Adrian Rosebrock December 5, 2019 at 10:36 am #

      No, you could use whatever OS you like provided you have ImageZMQ properly installed.

  62. Gaureesh December 5, 2019 at 2:12 am #

    Hi Adrian,

    This is a great resource.

    One question: instead of a Raspberry Pi as the client and a Mac as the server, can this be run on a Windows client (laptop) and a Windows server (virtual machine)?

    • Adrian Rosebrock December 5, 2019 at 10:26 am #

      Yes, provided you have ImageZMQ installed you can use whatever OS you want.

  63. Meg December 25, 2019 at 3:22 am #

    Hi guys, I’m having a little problem when I run the script.

    It says: AttributeError: module ‘imagezmq.imagezmq’ has no attribute ‘ImageSender’.

    Can someone help me resolve this, please?

    • Adrian Rosebrock December 26, 2019 at 9:51 am #

      Hey Meg, refer to the comments to this post as that’s already been addressed 🙂

  64. al costa December 30, 2019 at 5:13 am #

    This is great, and it would be even better if we could use regular webpages instead of Raspberries, as that would mean being able to send data from any webpage or even cell phone to be analyzed in real time and get a bounding box back via ZMQ.

Before you leave a comment...

Hey, Adrian here, author of the PyImageSearch blog. I'd love to hear from you, but before you submit a comment, please follow these guidelines:

  1. If you have a question, read the comments first. You should also search this page (i.e., ctrl + f) for keywords related to your question. It's likely that I have already addressed your question in the comments.
  2. If you are copying and pasting code/terminal output, please don't. Reviewing another programmer's code is a very time consuming and tedious task, and due to the volume of emails and contact requests I receive, I simply cannot do it.
  3. Be respectful of the space. I put a lot of my own personal time into creating these free weekly tutorials. On average, each tutorial takes me 15-20 hours to put together. I love offering these guides to you and I take pride in the content I create. Therefore, I will not approve comments that include large code blocks/terminal output as it destroys the formatting of the page. Kindly be respectful of this space.
  4. Be patient. I receive 200+ comments and emails per day. Due to spam, and my desire to personally answer as many questions as I can, I hand moderate all new comments (typically once per week). I try to answer as many questions as I can, but I'm only one person. Please don't be offended if I cannot get to your question.
  5. Do you need priority support? Consider purchasing one of my books and courses. I place customer questions and emails in a separate, special priority queue and answer them first. If you are a customer of mine you will receive a guaranteed response from me. If there's any time left over, I focus on the community at large and attempt to answer as many of those questions as I possibly can.

Thank you for keeping these guidelines in mind before submitting your comment.

Leave a Reply