
imagezmq's People

Contributors

bigdaddymax, fjolublar, jeffbass, philipp-schmidt, retro-node, timsears, ynx0


imagezmq's Issues

Send receive images back to client from server

Hi @jeffbass, first of all I would like to thank you for writing such a wonderful library. I'm currently working on a POC where a video stream is captured from a webcam and sent to a cloud server (AWS) for some kind of face recognition. The processed frame then has to be sent back to the client for display. The flow is shown below:
(attached flow diagram: pubsub)

Questions:
1. Does the current implementation handle this?
2. If yes, can you please provide some pseudocode or point to existing examples?
Thanks for the reply in advance.
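A minimal sketch of one way this could work with imagezmq's existing REQ/REP pattern: the hub's reply (which send_image() returns on the client) is used to carry the processed frame back as JPEG bytes. The addresses and the processing step are placeholders, so treat this as an illustration rather than a ready-made example:

# client side -- send a frame, decode the processed frame that comes back in the reply
import socket
import cv2
import numpy as np
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://AWS_SERVER_IP:5555')  # placeholder address
cam_name = socket.gethostname()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    reply = sender.send_image(cam_name, frame)           # send_image() returns the hub's reply bytes
    processed = cv2.imdecode(np.frombuffer(reply, dtype='uint8'), -1)
    cv2.imshow('processed', processed)
    cv2.waitKey(1)

# server side -- receive, run recognition, reply with the processed frame as JPEG bytes
import cv2
import imagezmq

hub = imagezmq.ImageHub()
while True:
    name, frame = hub.recv_image()
    result = frame  # replace with your face-recognition step
    ok, jpg = cv2.imencode('.jpg', result)
    hub.send_reply(jpg.tobytes())                         # the reply carries the processed frame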

Fail to receive image on server, Works well on local machine

Hi! This is not an issue with your implementation.
I am trying to send frames from my Pi camera to a Google Cloud compute engine GPU instance.

Client running on the Pi:

import socket
import time
from imutils.video import VideoStream
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://192.168.0.254:5555')  # for the local machine, this works for me
sender = imagezmq.ImageSender(connect_to='tcp://GPU_IP:9090')  # for the GPU, this does not receive frames
rpi_name = socket.gethostname()  # send RPi hostname with each image
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images as stream until Ctrl-C
    image = picam.read()
    sender.send_image(rpi_name, image)

The server receiving frames on the local machine and the one on the GPU instance both run the same snippet:

import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
i = 0
while True:  # show streamed images until Ctrl-C
    i = i + 1
    rpi_name, image = image_hub.recv_image()
    # cv2.imshow(rpi_name, image)  # 1 window for each RPi
    # cv2.waitKey(1)
    # img = cv2.imread(image, 0)
    cv2.imwrite("hello.png", image)
    image_hub.send_reply(b'OK')

The server does not seem to receive the frames even though the local machine does receive the same frames.

I have checked that the ports are open on the server side. Is there some way I could debug this issue? I am using an office internet connection.
I would like to know your thoughts on how this could be debugged.
Thanks for your hard work on this repo!

Streaming Video Slow over the network

Hi author,

Thank you for your great work!

I've tested PUB/SUB with a threading mechanism. The publisher sends a lot of frames, but the receiver displays the returned images extremely slowly.

I have no issues when deploying locally; everything runs smoothly. But when I expose the port globally so it can be accessed via the Internet, I encounter this problem.

How to stream two videos on one device

Hi,

Thank you for the great project. I would like to stream two videos from one Pi device, but I am not sure how to do that. Any thoughts?

Thank you in advance.
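One possible approach, sketched below on the assumption that both cameras are readable from the same Pi: reuse a single ImageSender and give each frame a distinct message name so the receiving hub can tell the two streams apart (the address and camera indices are placeholders):

import socket
import cv2
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://HUB_IP:5555')  # placeholder address
host = socket.gethostname()
cam0 = cv2.VideoCapture(0)  # first camera
cam1 = cv2.VideoCapture(1)  # second camera (index is an assumption)

while True:
    ok0, frame0 = cam0.read()
    ok1, frame1 = cam1.read()
    if ok0:
        sender.send_image(host + ' cam0', frame0)  # the message name identifies the stream
    if ok1:
        sender.send_image(host + ' cam1', frame1)

Running two separate sender scripts (one per camera) pointed at the same hub would also work, since the hub tells streams apart by the message name.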

Can I change the image size?

Hello @jeffbass !

I saved the image after adding cv2.imwrite.

The saved image size is 320x240 which is too small for my project.

The cv2.resize call makes the image blurry.

My camera module can save a clean image with the camera.resolution = (2560, 1936) setting, but I can't figure out how to change the image size in the imagezmq library...

Can I resize the image without the blurring issue?

Thank you
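imagezmq just transmits whatever array it is given, so the size is decided where the frame is captured; capturing at the resolution you need avoids upscaling (and the blur) later. A minimal sketch, assuming the imutils VideoStream wrapper around the Pi camera; the resolution value is only an example:

from imutils.video import VideoStream
import socket
import time
import imagezmq

# capture at the desired resolution instead of resizing afterwards
picam = VideoStream(usePiCamera=True, resolution=(1280, 720), framerate=10).start()
time.sleep(2.0)

sender = imagezmq.ImageSender(connect_to='tcp://HUB_IP:5555')  # placeholder address
rpi_name = socket.gethostname()
while True:
    image = picam.read()
    sender.send_image(rpi_name, image)

Keep in mind that larger frames mean more bytes per message, so the frame rate over the network will drop accordingly.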

Provision for authentication

Hi

Just wanted to check: is there any built-in provision for authentication? Any advice on how to secure the setup in general?

I would like to set up some form of basic authentication where the server only processes frames received from a legitimate client that I have deployed.
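As far as I can tell, imagezmq itself does not expose any authentication options, so this has to be handled at the ZMQ layer (or by tunnelling, e.g. over SSH or a VPN). As one illustration only, the sketch below shows pyzmq's CurveZMQ encryption/authentication on plain REQ/REP sockets; it is not something imagezmq wires up for you, and it requires pyzmq built with Curve support:

import zmq

# generate long-term keypairs once and store the secret keys safely
server_public, server_secret = zmq.curve_keypair()
client_public, client_secret = zmq.curve_keypair()

ctx = zmq.Context()

# server (REP) side: announces itself as a Curve server
server = ctx.socket(zmq.REP)
server.curve_secretkey = server_secret
server.curve_publickey = server_public
server.curve_server = True
server.bind('tcp://*:5555')

# client (REQ) side: presents its own keypair plus the server's public key
client = ctx.socket(zmq.REQ)
client.curve_secretkey = client_secret
client.curve_publickey = client_public
client.curve_serverkey = server_public
client.connect('tcp://localhost:5555')

Restricting which client keys are accepted additionally needs a ZAP authenticator (the zmq.auth module); applying any of this to imagezmq would mean configuring its underlying sockets or patching the library.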

Use ZMQ on android along with ImageZmq on RPI

I would like to use imagezmq on an RPi with some OpenCV processing and then stream those images to an Android phone/tablet for further action.
Can I use a ZMQ library for Android along with imagezmq?

Show the live image from Raspberry Pi to Flask server

Hello, I have this code for showing a frame-by-frame video feed from a Raspberry Pi on a Flask server, but it doesn't work and I don't understand why. Can you help me, please?
# imports assumed elsewhere in the app: from flask import Response; import time, cv2, imagezmq

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


def gen_frames():  # generate frame by frame from camera
    imageHub = imagezmq.ImageHub(open_port='tcp://*:3001')
    time.sleep(2)
    while True:
        # Capture frame-by-frame
        (rpiName, frame) = imageHub.recv_image()
        imageHub.send_reply(b'OK')
        # recv_image() returns a numpy array, so it must be JPEG-encoded before yielding
        ok, jpg = cv2.imencode('.jpg', frame)
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg.tobytes() + b'\r\n')

Reading video of different pi

Hi, thanks for this library; it makes things easier.
I have a question: we can see that it is possible to receive the Raspberry Pi video and display it in different windows on the server, but is it possible to manipulate each video stream from the different cameras individually?

Let's say I have a camera in the kitchen and another one in the living room, and I want to apply facial recognition to the kitchen and object detection to the living room. What is the method to access each location's video channel individually and manipulate them separately?

Thank you :)
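A minimal sketch of one way to do this with the existing API: the first value returned by recv_image() is whatever name the sender passed to send_image(), so the hub can branch on it. The camera names and the processing functions below are placeholders:

import imagezmq

def run_face_recognition(frame):   # placeholder for the kitchen pipeline
    return frame

def run_object_detection(frame):   # placeholder for the living-room pipeline
    return frame

hub = imagezmq.ImageHub()
while True:
    sender_name, frame = hub.recv_image()
    if sender_name == 'kitchen':
        result = run_face_recognition(frame)
    elif sender_name == 'living_room':
        result = run_object_detection(frame)
    hub.send_reply(b'OK')

On each Pi, the sender would then label its stream with the matching name, e.g. sender.send_image('kitchen', frame).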

Suggestion: Add PUB/SUB mode for sender/receiver

Imagezmq is a pretty slick and robust library that helped me a lot with running live video streams from different webcams and other sources (not only Raspberry) to my home server.

In my project, I also needed to implement a monitoring functionality - I needed to create a socket I could occasionally connect to and receive a copy of a video stream that is transmitted across my application (for example, between preprocessor and object detecting module or between object detector and file writer).

With the current implementation of imagezmq this is impossible: sending an image over a REQ/REP socket is a blocking operation, so the publishing side would freeze until the receiving side is up and connected.

The solution was adding a non-blocking PUB/SUB mode. In this case the publishing side does not block and keeps sending images across the application regardless of the state of the receiver.

The nice thing about the client is that it can connect to multiple publishers, so I could create a single client that monitors multiple cameras.

If this is something you'd like to add to this project, I could put together a pull request.
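For reference, this mode is available in current imagezmq releases via the REQ_REP=False flag; a minimal sketch (addresses are placeholders) in which the publisher binds and the subscriber connects:

# publisher (binds): keeps sending whether or not anyone is listening
import socket
import cv2
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://*:5555', REQ_REP=False)
name = socket.gethostname()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if ok:
        sender.send_image(name, frame)

# subscriber (connects): no send_reply() is needed in PUB/SUB mode
import cv2
import imagezmq

hub = imagezmq.ImageHub(open_port='tcp://PUBLISHER_IP:5555', REQ_REP=False)
while True:
    name, frame = hub.recv_image()
    cv2.imshow(name, frame)
    cv2.waitKey(1)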

(python: num_process): GStreamer-CRITICAL **: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Jeff, all: since this is my first issue post on GitHub, I hope I'm posting it correctly without bothering you all.

When I run the sending part with a Logitech USB cam, I get the error "(python: num_process): GStreamer-CRITICAL **: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed" at vs = VideoStream(src=0).start(), and the process hangs.

Then, when I unplug the USB cam, the process gets out of the hung state and returns the following messages:

VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
VIDEOIO ERROR: V4L: can't open camera by index 0
VIDIOC_REQBUFS: No such device
Traceback (most recent call last):
File "test_2_rpi_send_images.py", line 30, in
sender.send_image(rpi_name, image)
File "/home/pi/imagezmq.py", line 52, in send_image
if image.flags['C_CONTIGUOUS']:
AttributeError: 'NoneType' object has no attribute 'flags'

Any clue where the issue is coming from?
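The AttributeError at the end suggests the camera never produced a frame, so VideoStream.read() returned None and send_image() was handed a None image. A small guard like the sketch below at least makes that failure explicit while the underlying GStreamer/V4L2 device problem is debugged (the address is a placeholder):

from imutils.video import VideoStream
import socket
import time
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://HUB_IP:5555')  # placeholder address
rpi_name = socket.gethostname()
vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
    image = vs.read()
    if image is None:  # the camera did not deliver a frame (device failed to open or was unplugged)
        print('no frame from camera; check the V4L2/GStreamer device')
        break
    sender.send_image(rpi_name, image)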

is it possible to directly send video clips?

Thanks for this great repo! It works very nicely for sending images.
I wonder whether it is possible to directly send video clips to a remote server, instead of sending frames to the server and then combining them into a video.
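imagezmq's send_jpg()/recv_jpg() pair moves a named binary buffer, so a short clip's file bytes can be pushed through in a single message, under the assumption that send_jpg() accepts any bytes-like buffer (it does not inspect the contents). Very large files in one ZMQ message will use a lot of memory, so this is only a sketch for small clips; the file names and address are placeholders:

# sender: ship the whole clip as one binary buffer
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://SERVER_IP:5555')  # placeholder address
with open('clip.mp4', 'rb') as f:
    sender.send_jpg('clip.mp4', f.read())

# receiver: write the buffer back out to disk
hub = imagezmq.ImageHub()
name, buffer = hub.recv_jpg()
with open(name, 'wb') as f:
    f.write(buffer)
hub.send_reply(b'OK')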

Question about MQTT

Hi Jeff,
this isn't an issue, but I am interested to know why you chose ZMQ when MQTT is very common for IoT. There are a few comments in this article, but I am interested to hear your views.
Many thanks
Robin

zmq.error.ZMQError : Address already in use

Great job, Jeff. Love this.
As I'm using this in my scenario, I may sometimes need to deallocate the streamer and reallocate it depending on the circumstances. I get this failure:

Traceback (most recent call last):
  File "src/detector.py", line 718, in async_stream_video
    REQ_REP=False)
  File "/usr/local/lib/python3.6/dist-packages/imagezmq/imagezmq.py", line 53, in __init__
    self.init_pubsub(connect_to)
  File "/usr/local/lib/python3.6/dist-packages/imagezmq/imagezmq.py", line 75, in init_pubsub
    self.zmq_socket.bind(address)
  File "zmq/backend/cython/socket.pyx", line 550, in zmq.backend.cython.socket.Socket.bind
  File "zmq/backend/cython/checkrc.pxd", line 26, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Address already in use

when I run this portion of my code more than once :

while True:
    video_streamer = imagezmq.ImageSender(connect_to=f'tcp://*:{target_port}', REQ_REP=False)
    # Do some work...
    video_streamer = None

Whenever it hits the second line again, I get the traceback above. Any idea how to explicitly free this? (I'm not familiar with pyzmq myself.) I would have imagined some kind of context management:

with imagezmq.ImageSender(...) as sender:
   # do stuff

I'm not seeing this in the source code; if there's missing cleanup code, we would need to implement that too.
Thoughts? Did I miss something? Is there an easy way to clean up the sockets from this object?
thanks!
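One workaround sketch: reassigning the variable to None does not close the bound socket, so the port stays in use until garbage collection happens to run. Explicitly closing the underlying socket (held in the zmq_socket attribute, as the traceback shows) before creating a new sender avoids the bind conflict. If your imagezmq version provides a close() method or context-manager support, prefer that instead; the LINGER option below is a plain pyzmq setting, not an imagezmq feature:

import zmq
import imagezmq

target_port = 5555  # example port

video_streamer = imagezmq.ImageSender(connect_to=f'tcp://*:{target_port}', REQ_REP=False)
# ... do some work ...

# free the bound port before re-creating the sender
video_streamer.zmq_socket.setsockopt(zmq.LINGER, 0)  # drop unsent messages immediately
video_streamer.zmq_socket.close()
video_streamer = imagezmq.ImageSender(connect_to=f'tcp://*:{target_port}', REQ_REP=False)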

Possible bug setting "connect_to='localhost':5555" on ImageSender

Python 3.8.2 (Win10 x64)
Imagezmq V 1.1.1

I noticed that when using:

sender = imagezmq.ImageSender(connect_to='tcp://localhost:5555', REQ_REP=False)

it fails with:

zmq.error.ZMQError: No such device.

With REQ_REP enabled, it works like a charm using "localhost":

sender = imagezmq.ImageSender(connect_to='tcp://localhost:5555')

Using 127.0.0.1:5555 or *:5555 with PUB/SUB also runs with no problems.

** I'm not a developer, so I'm sorry if this kind of reporting is not appropriate.

Looking for Advice

Jeff, love your project. I am working on a project that uses multiple RPis with cameras (they could be webcams as well). The hub program starts and waits for the (currently three) sender RPis to send an image. Each image is saved with a sequence number. The hub then sends each sender a timestamp for the next time-lapse shot to be taken; this is identical for each sender. The hub then stitches the images into a pano image, saves it with the sequence number, and increments for the next cycle. This is all working well, although I had to make custom camera mounts out of foam board, a small block of wood, and a short dowel.

The problem I have is that the hub must start before the senders. I want to automate the system, so I was thinking of setting up a watch program on each sender that listens on another port. When the hub is started or restarted, it sends each sender's watch program a new Python configuration file of settings (e.g. resolution) via ZMQ. The watch program saves the file as config.py, then starts/restarts the sender program in the background via subprocess.Popen. The sender then restarts and reads the new config.py as an import. Still working on this.

Would it be possible to send a text file and return a confirming text message between the hub and sender, rather than a JPEG and text as imagezmq does? Currently I was looking at just using basic ZMQ commands to the hub and the watch program on the senders, but it would be nicer if this were possible with imagezmq. I could not find this feature in the imagezmq code, but thought I would drop you a line.

Note: I had to modify the https://github.com/ppwwyyxx/OpenPano C++ code to get it to accept an output file path for the pano jpg/png, since it would only generate a fixed out.jpg file in the same folder. My version is here: https://github.com/pageauc/OpenPano. I created curl bash scripts for easier installation. I could have used Adrian's OpenCV image stitching, but I prefer the self-contained OpenPano approach since users would not need the latest/greatest opencv-contrib version.

FYI, I have attached my camera holder template in PDF format. When the pipano project is ready I will post it on my GitHub repo. It is still a work in progress; there are lots of issues to work out. For one, the cropping of the panos is not consistent, so doing a time-lapse video would need some image stabilization during video editing, or pruning of some images. If lighting is stable then most of the panos crop pretty consistently, but low light can throw things off easily. I had to build the stands to allow accurate pointing. I don't want to use a pan/tilt mechanism because the images would not be synchronized with the same timestamp. I might add pano stitching to my robot as a feature, since the full view would be in one stitched image.

My pano stand: I currently have three on a board, with various holes spaced around for the stand dowels to fit into. You rotate (pan) via the dowel point and tilt via the side lower-hinge leg screws. This works quite well and is easy to mount on a fence or even a vertical surface, as long as you don't mind a small hole at the mounting point.

[cam-stand.pdf](https://github.com/jeffbass/imagezmq/files/5180357/cam-stand.pdf)

Excuse me if I got a little chatty
Claude ...

Missing files in sdist

It appears that the manifest is missing at least one file necessary to build
from the sdist for version 1.0.1. You're in good company, about 5% of other
projects updated in the last year are also missing files.

+ /tmp/venv/bin/pip3 wheel --no-binary imagezmq -w /tmp/ext imagezmq==1.0.1
Looking in indexes: http://10.10.0.139:9191/root/pypi/+simple/
Collecting imagezmq==1.0.1
  Downloading http://10.10.0.139:9191/root/pypi/%2Bf/97e/3368e445993b2/imagezmq-1.0.1.tar.gz (14 kB)
    ERROR: Command errored out with exit status 1:
     command: /tmp/venv/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-f0tgka7t/imagezmq/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-f0tgka7t/imagezmq/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-wheel-f0tgka7t/imagezmq/pip-egg-info
         cwd: /tmp/pip-wheel-f0tgka7t/imagezmq/
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-wheel-f0tgka7t/imagezmq/setup.py", line 21, in <module>
        with open('PyPI_README.rst', 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'PyPI_README.rst'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Unable to establish the connection again after disconnection from server

Hi Jeff,

After establishing the connection between the server and the client: when the client goes down (or I disconnect the client) while the server is running, I can re-establish the connection between the server and the client. But when the same happens with the server (say I disconnect the server) while the client is running, I cannot re-establish the connection by bringing the server up again; it does not receive any images.

What is the cause of this problem?

Thank you.

Converting to usb camera not pi camera

Hi everyone, I'm currently running into an issue where I cannot get more than just the first frame; the displayed frame freezes and stops responding.
I get a still image and not a video. Is this because I'm using one machine? I'm currently too broke to get a Raspberry Pi, so should I use an emulator? Can Raspberry Pi emulators grab hold of my PC camera as picamera, or is it cv2.VideoCapture(0)?

import cv2
import time
import socket
from imutils.video import VideoStream
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://192.168.56.1:55555', REQ_REP=False)

# Open the device at ID 0
cap = cv2.VideoCapture(0)
currentFrame = 0
# Check whether the user-selected camera opened successfully.
if not cap.isOpened():
    print('Could not open video device')

rpi_name = socket.gethostname()  # send RPi hostname with each image
# picam = VideoStream(usePiCamera=True).start()
# cap = VideoStream(usePiCamera=False).start()
# frame = VideoStream(usePiCamera=False).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images as stream until Ctrl-C
    ret, frame = cap.read()  # cv2.VideoCapture.read() returns (ret, frame)
    if not ret:
        continue
    sender.send_image(rpi_name, frame)
    print(rpi_name)
    # the original `break` here ended the loop after the first frame,
    # which is why only a single still image was ever sent


Problem running on FileVideoStream

Hi, I'm having a problem displaying the video on another machine. I tried to modify the code in test_2_rpi_send_images.py into something like this:

sender = imagezmq.ImageSender(connect_to='my ip')
rpi_name = socket.gethostname()
vid = FileVideoStream(path='D:/Hrnet/ufc.gif').start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images as stream until Ctrl-C
    vid_read = vid.read()
    sender.send_image(rpi_name, vid_read)

After I run the code, it can only display about 2 seconds of the video before it fails:

Traceback (most recent call last):
File "test_2_rpi_send_images.py", line 32, in
sender.send_image(rpi_name, vid_read)
File "C:\Users\Bolt\Anaconda3\lib\site-packages\imagezmq\imagezmq.py", line 106, in send_image_reqrep
if image.flags['C_CONTIGUOUS']:
AttributeError: 'NoneType' object has no attribute 'flags'

Can you help me solve this issue?
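The traceback says vid.read() returned None (the file ran out of frames, or the reader thread's queue was momentarily empty), and send_image() then tried to inspect a None image. A small guard using FileVideoStream's more() helper avoids passing None along; the address and path are placeholders:

from imutils.video import FileVideoStream
import socket
import time
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://RECEIVER_IP:5555')  # placeholder address
rpi_name = socket.gethostname()
vid = FileVideoStream(path='D:/Hrnet/ufc.gif').start()
time.sleep(2.0)

while vid.more():            # stop once the file reader has no frames left
    vid_read = vid.read()
    if vid_read is None:     # the queue can momentarily be empty
        continue
    sender.send_image(rpi_name, vid_read)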

Fails when run inside a class

Hi, this is a bit of an emergency. When I run it the normal way (like the example in the README), it runs fine, but whenever I put it inside a class, it runs for about a second and then crashes with this error:

Traceback (most recent call last):
  File "cserver.py", line 16, in <module>
    Cserver()
  File "cserver.py", line 6, in __init__
    self.stream()
  File "cserver.py", line 11, in stream
    pi_nem, image = image_hub.recv_image()
  File "/home/user/Desktop/imagezmq.py", line 119, in recv_image
    msg, image = self.zmq_socket.recv_array(copy=False)
  File "/home/user/Desktop/imagezmq.py", line 216, in recv_array
    md = self.recv_json(flags=flags)
  File "/usr/local/lib/python3.6/dist-packages/zmq/sugar/socket.py", line 668, in recv_json
    msg = self.recv(flags)
  File "zmq/backend/cython/socket.pyx", line 788, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 824, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 191, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Operation cannot be accomplished in current state

My code is:

import cv2
import imagezmq

class Cserver:
    def __init__(self):
        self.stream()

    def stream(self):
        image_hub = imagezmq.ImageHub()
        while True:
            pi_nem, image = image_hub.recv_image()
            cv2.imshow(pi_nem, image)
            cv2.waitKey(1)
            image_hub.send_reply(b'OK')  # REQ/REP requires a reply after every receive

if __name__ == "__main__":
    Cserver()

I would appreciate any help you can provide as soon as possible

Issue streaming and displaying with cv2.imshow()

Hi,

I have a pretty consistent issue with streaming and displaying with cv2.imshow. The tl;dr is that the server and client seem to be communicating fine, but displaying the frames with cv2.imshow keeps running into problems.

Here's an example of my code.

Server:

image_hub = imagezmq.ImageHub()

count = 0
while True:  # show streamed images until Ctrl-C
    try:
        rpi_name, buffer = image_hub.recv_jpg()
        frame = cv2.imdecode(np.frombuffer(buffer, dtype='uint8'), -1)
        cv2.imshow('image', frame)  # 1 window for each RPi
        cv2.waitKey(int(1/15*100))
        image_hub.send_reply(b'OK')
        count += 1
        print(count)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

Client:

sender = imagezmq.ImageSender(connect_to='tcp://my.ip.addr:5555')

rpi_name = socket.gethostname()  

cap = cv2.VideoCapture(0)

time.sleep(2.0)  

count = 0
while True:  
    ret, frame = cap.read()  # read from the capture opened above (the original referenced an undefined `threadcap`)
    if not ret:
        continue
    (flags, buffer) = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 95])
    sender.send_jpg(rpi_name, buffer)
    count += 1
    print(count)

As you can see, I'm printing the "count" on both the client and server.

(screenshot attached)

By printing the count on both sides, I can confirm that there were no issues with sending or receiving images between the server and client, and they appear to be in sync. But without fail, cv2.imshow seems to stall or freeze after a while, and I'm not sure how to fix this. It appears to be more an issue with cv2.imshow() syncing with the stream than an issue with imagezmq, but since imshow is frequently used by imagezmq users to display video streams, I'd like to know if:

  1. Others have used different approaches to display the video stream
  2. If there are any known workarounds for this imshow() issue.

I did research this issue before posting here, including: https://stackoverflow.com/questions/37038606/opencv-imshow-freezes-when-updating

Thanks in advance!

to node in one element or another

Heya, first off thanks for teaching me ZMQ and sharing the fruits of your many hours of work.

I wanted a more robust alternative to MJPEG and this delivers, but my clients run in Electron apps, so I had to tinker a bit to get the buffer array into a blob before passing it as the src of an img element. That approach turned into a leaky, queued solution. I tried a lot of different approaches to pass it directly to a video player or a canvas, but it would not work. I don't know enough about OpenCV, JPEGs and BGR to figure it out, and I'm wondering if you or anyone else has a nifty solution up their sleeves.

Improve jpg conversion speed a bit

This is NOT an issue with the imagezmq library, but I wanted to share a suggestion to improve speed in the use cases where this library is used.

Instead of using OpenCV to encode/decode video frames, it might be better to use another library: simplejpeg.
In some testing I did, JPEG conversion was 10 to 30% faster. In my case, I had to optimize every step of the transmission.

I have no affiliation with the library, just want to spread some knowledge.
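A rough sketch of the swap being suggested, wrapped around imagezmq's send_jpg()/recv_jpg(); the simplejpeg parameter names here are from its documentation as I recall it, so verify them against the version you install:

import cv2
import simplejpeg
import imagezmq

# sender side: grab a BGR frame and encode with simplejpeg instead of cv2.imencode
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
jpg_buffer = simplejpeg.encode_jpeg(frame, quality=85, colorspace='BGR')
sender = imagezmq.ImageSender(connect_to='tcp://HUB_IP:5555')  # placeholder address
sender.send_jpg('camera1', jpg_buffer)

# receiver side: decode back to a BGR numpy array instead of cv2.imdecode
hub = imagezmq.ImageHub()
name, buffer = hub.recv_jpg()
image = simplejpeg.decode_jpeg(buffer, colorspace='BGR')
hub.send_reply(b'OK')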

zmq.error.ZMQError: Permission denied on ImageSender(REQ_REP = False)

Running on Windows with python 3.7.0

Code:

sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(args["server_ip"]), REQ_REP = False)

Error:

Traceback (most recent call last):
  File ".\client.py", line 128, in <module>
    sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(args["server_ip"]), REQ_REP = False)
  File "C:\Users\jquin\Desktop\SOSAFE\SoSaFe Logic\surveillance_client\imagezmq\imagezmq.py", line 55, in __init__
    self.init_pubsub(connect_to)
  File "C:\Users\jquin\Desktop\SOSAFE\SoSaFe Logic\surveillance_client\imagezmq\imagezmq.py", line 77, in init_pubsub
    self.zmq_socket.bind(address)
  File "zmq\backend\cython\socket.pyx", line 550, in zmq.backend.cython.socket.Socket.bind
  File "zmq\backend\cython\checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Permission denied

Getting FPS of around 3

Hi, I am seeing behaviour where we get only 3 frames per second over the network.
Is this normal behaviour?
Is there any way I could pump this up? Please point me in a direction.
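One commonly used lever is to send JPEG-compressed frames with send_jpg() instead of raw arrays with send_image(), which cuts the bytes per message substantially; lowering the capture resolution helps in the same way. A minimal sketch (the quality value and address are examples):

import socket
import cv2
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://HUB_IP:5555')  # placeholder address
name = socket.gethostname()
cap = cv2.VideoCapture(0)
jpeg_quality = 70  # lower quality -> smaller messages -> higher FPS over the network

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    ok, jpg = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    sender.send_jpg(name, jpg)

# hub side: name, buffer = hub.recv_jpg(); frame = cv2.imdecode(np.frombuffer(buffer, dtype='uint8'), -1)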

ImageHub object receiving too many (duplicate) image messages from a client?

I am following the tutorial by Adrian Rosebrock: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

My issue is with how imagezmq receives image messages once ImageHub is initialized. For example, on the client side (client.py) I check that the webcam FPS is 30, so I assume 30 image messages are being sent every second.
But counting the number of images received by the server (server.py), I get a variable rate of anywhere between 90 and 160 received images per second. All images are coming from the same camera, and the count is restarted approximately every second.
Of course, I expect some variability, but not this much. Has anyone tried to record the incoming frames, and have they noticed anything similar? Perhaps my method for counting the incoming images is incorrect?

(attached screenshots: server_received_images, manual_fps_method, client_fps_count)

ImageHub with non-default open_port parameter

I have two machines: my laptop and a remote server. They are on the same network (my laptop is connected via VPN), and I want to set the ImageHub to receive images only from my laptop, so I set the open_port parameter to tcp://my_laptop_ip:5555, but it doesn't work for me. When I set open_port to the default (receive from all IPs), it works fine. There are no examples of ImageHub with a non-default open_port in the repository, nor have I found any on the internet, so I'm wondering if this is an issue or if I'm missing something.

[PUB/SUB] Subscriber slow motion video (queue keeps growing)

Hi there,

Thanks for your work on ImageZMQ, it's been very useful for me!

I'm using a project setup which has:

  • A server that sends processed opencv frames via ImageZMQ to a receiver
  • A pi that shows the received frames on screen

I've successfully implemented the REQ/REP pattern in my project and it works well. The only issue for me is that the REQ/REP pattern blocks the server from processing as many images as it can, because it waits for the receiver's OK reply at every frame.

This is when I started trying the PUB/SUB pattern. For the server this works great. However, when I use PUB/SUB the video plays in slow motion on the receiver. By slow motion I mean that it queues all the frames it gets, but probably isn't fast enough to display every frame it receives from the server. This creates an ever-growing queue of images. I've also tried it on a stronger machine (a MacBook), but the result is the same.

Any tips or ideas on how I could solve my issue? Any help is much appreciated!

Edit
I've changed my code a bit by re-instantiating the ImageHub object every loop iteration (instead of just once before the while True loop), and it seems to get rid of the queue problem. It doesn't play in slow motion anymore! However, I wonder if this is really the best solution, because re-instantiating ImageHub every loop doesn't seem like the most efficient way.

Before (queue/latency growing):

imageHub = imagezmq.ImageHub(open_port='tcp://{}:5555'.format(args["server"]), REQ_REP=False)
while True:
    rpiName, frame = imageHub.recv_image()
    cv2.imshow("Window", frame)

After (steady latency):

while True:
    imageHub = imagezmq.ImageHub(open_port='tcp://{}:5555'.format(args["server"]), REQ_REP=False)
    rpiName, frame = imageHub.recv_image()
    cv2.imshow("Window", frame)

Sending images to a server not connected to same router

I am a bit new to networking. On the client side we specify the IP address and the port number. How can we send the images from the client to a server or computer that is not connected to the same router (maybe EC2, or my personal laptop)?
What things need to be taken care of? Any help will be greatly appreciated.

Thank you!

AttributeError: module 'imagezmq' has no attribute 'ImageSender'

Sorry this must be pretty basic.
When I tried to run client.py I got
AttributeError: module 'imagezmq' has no attribute 'ImageSender'

As to where to place imagezmq, I've tried both (2) and (3) as prescribed in https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

WORK-AROUND:
I worked around it by soft-linking imagezmq.py into the directory where client.py lives and is invoked. I would still like to know the proper way to do it, though.

advanced_http.py not working

There is a connection problem when running the above script. I was unable to receive the image stream even on localhost. Any help is much appreciated.

client to server timeout

Is it possible to set a waiting time on the client when sending video frames to the server with
sender.send_jpg("example", frame)?
If the server is unavailable for any reason, the client currently stays stuck in its connection loop to the server.

Could a connection-attempt timeout of, say, 5 seconds be established, so that if no connection can be made, the call is aborted or another process is executed?

I have seen that it could be implemented using the select module.
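One way to get this behaviour without the select module is to set send/receive timeouts on the sender's underlying ZMQ socket (held in the zmq_socket attribute) and catch zmq.error.Again. This pokes at imagezmq's internals rather than using a documented option, so treat it as an assumption; also note that after a timeout a REQ socket is left in a broken state, so in practice the sender has to be closed and recreated before retrying:

import cv2
import numpy as np
import zmq
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://SERVER_IP:5555')  # placeholder address
sender.zmq_socket.setsockopt(zmq.SNDTIMEO, 5000)  # give up sending after 5 s
sender.zmq_socket.setsockopt(zmq.RCVTIMEO, 5000)  # give up waiting for the reply after 5 s
sender.zmq_socket.setsockopt(zmq.LINGER, 0)

frame = cv2.imencode('.jpg', np.zeros((240, 320, 3), dtype='uint8'))[1]  # dummy JPEG buffer
try:
    sender.send_jpg("example", frame)
except zmq.error.Again:
    # server unreachable within 5 s: abort, recreate the sender, or run another process here
    pass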

Hangs at `image_hub.recv_image()` if the stream was started before the image_hub was initialized

Hi Jeff,

Dave from PyImageSearch here. I'm loving imagezmq...it will make appearances in Adrian's book.

I'm wondering how we can improve the ImageHub so that if a stream server is already running on a Pi, and yet the image_hub client (on a Mac or other box) is restarted, it will pick up seamlessly.

Currently, my client code on a Mac looks like your example:

image_hub = imagezmq.ImageHub()

while True:
	# read the next frame from the stream
	_, frame = image_hub.recv_image()
	image_hub.send_reply(b'OK')

	# Do something

It hangs at image_hub.recv_image() if I were to add/fix functionality to my program and restart it.

Do you have any ideas, or is it better for the Pi to timeout when it doesn't receive the ack "OK"?

All the best,
Dave Hoffman

Cloud Google

Hi Jeff,
I'm loving this library and am experimenting with it using OpenCV.
I'm able to run it locally; however, when I tried to connect to a server on Google Cloud receiving images from my local camera, the server was not able to receive the stream. I believe that I set everything up properly (firewall rules, etc.), but the communication between the peers doesn't work.

Any ideas?
Regards,
Fabio

Running test_2_mac_receive_images.py on Windows 7 and test_2_rpi_send_images.py on a Raspberry Pi: no response on the server side

Dear Jeff,

I encountered a problem.
I tried to run test_2_mac_receive_images.py on Windows 7 and test_2_rpi_send_images.py on a Raspberry Pi, but there is no response on the server side.

On the Raspberry Pi side, running "lsof | grep 5555" shows a lot of these messages:
python3 658 672 python3 pi 14u IPv4 16265 0t0 TCP 192.168.34.143:54214->192.168.34.93:5555 (SYN_SENT)
python3 658 673 python3 pi 14u IPv4 16265 0t0 TCP 192.168.34.143:54214->192.168.34.93:5555 (SYN_SENT)
python3 658 674 python3 pi 14u IPv4 16265 0t0 TCP 192.168.34.143:54214->192.168.34.93:5555 (SYN_SENT)
python3 658 675 python3 pi 14u IPv4 16265 0t0 TCP 192.168.34.143:54214->192.168.34.93:5555 (SYN_SENT)

On the Windows 7 side, running "netstat -an | grep 5555" showed only:
TCP 0.0.0.0:5555 0.0.0.0:0 LISTENING

Can you tell what the problem is?

send message to client from server

Hello again.
Following your examples, we see that the client (Raspberry Pi) sends the server (Mac) two pieces of data: the client's name and the video frame. My question now is: is it possible to send a response to the client? For example, I would like to send a "restart" response to the client to be able to reboot the Raspberry Pi. In a part of the code I see that the server returns an "OK" to the client, but I did not understand how to recover that message.

Thank you
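A tiny sketch of how the existing reply channel could carry that: send_reply() accepts arbitrary bytes, and on the client the reply comes back as the return value of send_image() (this assumes the default REQ/REP mode; the address and the frame below are placeholders):

import socket
import numpy as np
import imagezmq

# hub (Mac): reply with a command instead of the default b'OK'
image_hub = imagezmq.ImageHub()
name, frame = image_hub.recv_image()
image_hub.send_reply(b'restart')

# client (Raspberry Pi): recover the reply from send_image()'s return value
sender = imagezmq.ImageSender(connect_to='tcp://MAC_IP:5555')   # placeholder address
frame = np.zeros((240, 320, 3), dtype='uint8')                  # stand-in for a camera frame
reply = sender.send_image(socket.gethostname(), frame)
if reply == b'restart':
    pass  # act on the command, e.g. trigger a reboot of the Pi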

How to scale ImageHub to work for 200+ cameras

Hi @jeffbass,

We are working on a video surveillance project where we need to feed 200+ camera streams into multiple computer vision algorithms. This project looks great, but can you suggest the best way to scale the ZMQ setup?

Thanks,
Uday

Upload to pypi.org?

@jeffbass thanks for this very nice library!
It is worth uploading to PyPI. I am willing to add the necessary files, test, and make a pull request if you want, but the upload should probably come from you.

Is PUB/SUB pattern compatible with send_jpg() ?

@jeffbass I need your help here.
I got errors while trying to transport pictures in JPG format with the PUB/SUB message pattern.

Traceback (most recent call last):
  File "./send.py", line 9, in <module>
    sender = imagezmq.ImageSender(connect_to='tcp://58.87.115.128:5555',REQ_REP=False)
  File "/usr/local/lib/python2.7/dist-packages/imagezmq-1.1.1-py2.7.egg/imagezmq/imagezmq.py", line 53, in __init__
    self.init_pubsub(connect_to)
  File "/usr/local/lib/python2.7/dist-packages/imagezmq-1.1.1-py2.7.egg/imagezmq/imagezmq.py", line 75, in init_pubsub
    self.zmq_socket.bind(address)
  File "zmq/backend/cython/socket.pyx", line 550, in zmq.backend.cython.socket.Socket.bind
  File "zmq/backend/cython/checkrc.pxd", line 26, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Cannot assign requested address

sender.py is as follows (X.X.X.X is the IP of the server):

#!/usr/bin/python
import sys
import socket
import time
import cv2
import imagezmq

# use either of the formats below to specify the address of the display computer
sender = imagezmq.ImageSender(connect_to='tcp://X.X.X.X:5555', REQ_REP=False)
# sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.190:5555')

rpi_name = socket.gethostname()  # send RPi hostname with each image
cap = cv2.VideoCapture(0)
time.sleep(2.0)  # allow camera sensor to warm up
jpeg_quality = 60  # 0 to 100, higher is better quality, 95 is cv2 default
while True:  # send images as stream until Ctrl-C
    ret,image = cap.read()
    ret_code, jpg_buffer = cv2.imencode(
        ".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    sender.send_jpg(rpi_name, jpg_buffer)
    time.sleep(1)

How to re-eastablish connection?

Hello,
I have known about the project for a long time; only now have I tried it.

However, I found that if the imagehub (receiver) is restarted, the sender stalls, probably waiting for the 'OK' response. Is there a way to time out the sender?

Pointing to an IP from the client and shooting an image is very convenient, but the interruption means a kill and restart on the client...

thank you
jaro

Add Compression?

It looks like the image format is up to the video camera codec, and that imagezmq simply passes the raw data, which may be JPEG or may not, depending on whether the PiCam is used. Is this true? Would it be more accurate to refer to the image as binary data rather than JPEG in the code, since I see nothing specific to images... or am I missing something? Thanks! Great library, btw.
