G’day all,
I’ve been building an application that is heavily dependent on low latency, and I’m looking to cut it down wherever I can. To that end I’ve been trying to get the hardware encoders/decoders working in my GStreamer pipelines, with no luck so far.
Help refining the pipelines to lower latency, alongside getting the hardware engines working, would be greatly appreciated.
Pipeline 1 (RTSP server pipeline):
factory.set_launch('( udpsrc port=5004 ! application/x-rtp,encoding-name=H264 ! rtph264depay ! h264parse ! rtph264pay name=pay0 )')
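One thing worth checking (a guess, since I can't see the rest of your server code): with only `encoding-name=H264` in the caps, the depayloader sometimes fails to negotiate. A common fix is to spell out the full RTP caps — `clock-rate=90000` is the standard H.264 RTP clock and `payload=96` matches the default dynamic payload type `rtph264pay` uses on the sending side. A sketch of the launch string as a small helper (the function name is mine):

```python
def rtsp_launch(port=5004):
    # Hypothetical variant of the set_launch string with explicit RTP caps
    # so rtph264depay can negotiate without guessing.
    return (
        f"( udpsrc port={port} "
        "! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 "
        "! rtph264depay ! h264parse ! rtph264pay name=pay0 )"
    )
```

Then `factory.set_launch(rtsp_launch())` as before.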
Pipeline 2 (Camera in pipeline):
def gstreamer_pipeline_in():
    return (
        "nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12, framerate=30/1 ! "
        # "nvv4l2h264enc ! "
        "nvvidconv ! "
        "video/x-raw, format=RGBA ! "
        "videoconvert ! appsink"
    )
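On the capture side, one latency source I'd suspect is frames queueing up in the appsink if your processing loop falls behind. Setting `max-buffers=1 drop=true` keeps only the newest frame, and `sync=false` stops the sink from waiting on the pipeline clock — all three are standard appsink/basesink properties. A sketch of your function with those tweaks (parameters added by me for convenience; resolution and framerate carried over from your post):

```python
def gstreamer_pipeline_in(sensor_id=0, width=3840, height=2160, fps=30):
    # Same capture chain as the original, but the appsink drops stale
    # frames instead of buffering them, and doesn't sync to the clock.
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        "nvvidconv ! "
        "video/x-raw, format=RGBA ! "
        "videoconvert ! appsink max-buffers=1 drop=true sync=false"
    )
```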
Pipeline 3 (Processed frame out pipeline):
def gstreamer_pipeline_out():
    return (
        "appsrc ! queue ! videoconvert ! video/x-raw,format=I420 ! "
        "x264enc key-int-max=30 insert-vui=1 tune=zerolatency ! "
        "queue ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5004"
    )
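For the outgoing pipeline, `x264enc` is the software encoder, which is probably why you see no hardware utilization. On a Jetson (assuming JetPack's GStreamer plugins are installed) the hardware path is `nvv4l2h264enc`, which wants NVMM NV12 input via `nvvidconv`. `insert-sps-pps=true` makes the stream joinable mid-stream, `iframeinterval=30` plays the role of your `key-int-max=30`, and `maxperf-enable=true` unlocks the encoder clocks. A sketch, not tested on your setup:

```python
def gstreamer_pipeline_out(host="127.0.0.1", port=5004):
    # Hardware-encoder variant of the original x264enc pipeline.
    # The leaky queue drops old frames rather than adding latency.
    return (
        "appsrc ! queue max-size-buffers=1 leaky=downstream ! "
        "videoconvert ! nvvidconv ! "
        "video/x-raw(memory:NVMM), format=NV12 ! "
        "nvv4l2h264enc insert-sps-pps=true iframeinterval=30 maxperf-enable=true ! "
        "h264parse ! rtph264pay config-interval=1 ! "
        f"udpsink host={host} port={port} sync=false"
    )
```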
Screenshot of program running w/ hardware utilization:
Cheers.
