Could you review this GStreamer pipeline?

Hello,

I have tested the GStreamer pipelines below:

UDP Server :
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! 'video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,framerate=30/1' ! tee name=t t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink t. ! queue ! nvvidconv ! x264enc key-int-max=30 insert-vui=1 tune=zerolatency ! h264parse config-interval=1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=192.168.42.8 port=5004

UDP Client :
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,media=video,encoding-name=MP2T,clock-rate=90000,payload=33 ! rtpjitterbuffer latency=300 ! rtpmp2tdepay ! tsdemux ! h264parse ! splitmuxsink location=./segment%05d.mp4 max-size-time=100000000000

The H/W configurations :

  • Jetson Orin EVK + Nilecam camera (runs the UDP server and camera)
  • Desktop PC with an NVIDIA 2080 dGPU (runs the UDP client)

On the server side, the ‘fakesink’ will be switched to ‘appsink drop=1’, and the BGR images will be used with the OpenCV APIs.
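For reference, here is a minimal sketch of what that capture branch might look like once fakesink is swapped for appsink. This is an assumption, not a command from the post: the string stored in PIPELINE is the form that would be handed to OpenCV, and the appsink properties shown (drop=1, max-buffers=1) are illustrative choices for keeping only the latest frame.

```shell
# Hypothetical capture-branch pipeline string for OpenCV (sketch, not from the post).
# fakesink is replaced by appsink; drop=1 with max-buffers=1 discards stale frames
# so a slow OpenCV consumer does not stall the camera.
PIPELINE="nvv4l2camerasrc device=/dev/video0 ! \
video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,framerate=30/1 ! \
nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! \
appsink drop=1 max-buffers=1"
echo "$PIPELINE"
```

In Python, this string would then be opened with cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER), assuming OpenCV was built with GStreamer support.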

What I’d like you to check is the GStreamer pipelines above, for both the server and the client.

  • Are they already well optimized?
  • Can I make better use of NVIDIA’s H/W accelerators?
  • Are there any unnecessary elements or configurations?

If you can recommend better pipelines, it would be much appreciated!

Thank you very much!

Hi,
You can replace x264enc with nvv4l2h264enc to use the hardware encoder. Also, OpenCV requires the BGR format, so the pipeline has to copy the BGRx frames from the NVMM buffer to a CPU buffer and then convert them to BGR. This conversion runs on the CPU, so please execute $ sudo nvpmodel -m 0 and $ sudo jetson_clocks to run the CPU cores at their maximum clock for optimal throughput.
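To make the encoder swap concrete, one possible rewrite of the server pipeline is sketched below. This is an untested sketch, not a verified command: the nvv4l2h264enc properties shown (insert-sps-pps, insert-vui, iframeinterval, maxperf-enable) and the NV12 conversion caps are assumptions based on the Jetson encoder's documented options and may differ across L4T releases.

```shell
# Hypothetical server pipeline with nvv4l2h264enc in place of x264enc (untested sketch).
# The hardware encoder consumes NVMM buffers directly, so the encode branch stays in
# device memory until mpegtsmux; only the OpenCV branch is copied to system memory.
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! \
  'video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,framerate=30/1' ! tee name=t \
  t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink \
  t. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc insert-sps-pps=1 insert-vui=1 iframeinterval=30 maxperf-enable=1 ! \
  h264parse config-interval=1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=192.168.42.8 port=5004
```

Since this is a hardware-dependent pipeline configuration, it can only be validated on the Jetson itself with the camera attached.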


Hi, thank you for the helpful advice!
