I want to stream a high-resolution video (W: 3289, H: 2464) with GStreamer as raw frames, but I found out the network can't carry raw video at this resolution.
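A quick back-of-the-envelope check (assuming I420 at 1.5 bytes per pixel and 30 fps): 3289 × 2464 × 1.5 ≈ 12.2 MB per frame, × 30 ≈ 365 MB/s ≈ 2.9 Gbit/s, which is roughly three times what the Jetson Nano's gigabit Ethernet can carry, so raw is out.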
So I encode it to H.264 and send it to another PC over TCP (I can't risk packet loss, so I can't use UDP), and it works fairly well. The problem is latency: the video on the receiver side is 2 or 3 seconds behind. I used the latency tracer on the transmitter (a Jetson Nano) and found that the H.264 encoder takes much more time than any other plugin in the pipeline.
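For context, this is a single-encoder version of what I'm running, with the low-latency settings I've been trying (preset-level=0 and insert-sps-pps=true are the omxh264enc properties as I understand them from NVIDIA's accelerated GStreamer guide; host and port are placeholders):

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! \
omxh264enc bitrate=8000000 preset-level=0 insert-sps-pps=true ! \
'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! mpegtsmux ! \
tcpserversink host=0.0.0.0 port=5000

On the receiver I set sync=false on the sink so it doesn't add its own delay on top:

gst-launch-1.0 tcpclientsrc host=<jetson-ip> port=5000 ! tsdemux ! h264parse ! \
avdec_h264 ! videoconvert ! autovideosink sync=false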
My question is: is there any way to run two or three H.264 encoders on the live camera feed in parallel and put the encoded frames on a queue so TCP can send them? Some pipeline with a tee element, or something else?
Like this pipeline:
gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! tee name=streams \
streams. ! omxh264enc bitrate=8000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=testA.h264 \
streams. ! omxh264enc bitrate=8000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=testB.h264
But with that pipeline the camera feed gets saved to local storage, whereas I want to send it over TCP, and I don't know how to do that with a dual encoder.
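The closest I've come up with is replacing each filesink with a tcpserversink on its own port (with queue elements after the tee so the branches don't block each other; the muxer choice and the ports 5000/5001 are just my guesses):

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! tee name=streams \
streams. ! queue ! omxh264enc bitrate=8000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! mpegtsmux ! tcpserversink host=0.0.0.0 port=5000 \
streams. ! queue ! omxh264enc bitrate=8000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! mpegtsmux ! tcpserversink host=0.0.0.0 port=5001

But that just gives me two copies of the same stream on two ports; what I actually want is for the parallel encoders to share the load so the end-to-end latency drops, and I don't see how to recombine their outputs into one TCP stream.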