Question about RTSP udpsink using nvv4l2h264enc

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) questions

Hi, I’m testing two pipelines based on the nvds-analytics-test sources.
This is for the RTSP sink, and the relevant source code is as follows.

nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");
x264enc = gst_element_factory_make ("x264enc", "h264 encoder");
filter = gst_element_factory_make ("capsfilter", "filter");
caps4 = gst_caps_from_string ("video/x-raw, format=I420");
g_object_set (G_OBJECT (filter), "caps", caps4, NULL);
gst_caps_unref (caps4);

if (!nvvidconv1 || !x264enc || !filter) {
  g_printerr ("One element could not be created. %p,%p,%p, Exiting.\n",
              nvvidconv1, x264enc, filter);
  return -1;
}

guint udp_port = 5400;
rtppay = gst_element_factory_make ("rtph264pay", "rtp-payer");
sink = gst_element_factory_make ("udpsink", "udp-sink");

gst_bin_add_many (GST_BIN (pipeline), queue1, pgie, queue2, nvtracker,
    queue4, tiler,
    queue5, nvvidconv,
    queue6, nvosd,
    queue7, nvvidconv1,
    filter, x264enc, rtppay,
    sink,
    NULL);

nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");
x264enc = gst_element_factory_make ("nvv4l2h264enc", "h264 encoder");
filter = gst_element_factory_make ("capsfilter", "filter");
caps4 = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
g_object_set (G_OBJECT (filter), "caps", caps4, NULL);
gst_caps_unref (caps4);

if (!nvvidconv1 || !x264enc || !filter) {
  g_printerr ("One element could not be created. %p,%p,%p, Exiting.\n",
              nvvidconv1, x264enc, filter);
  return -1;
}

guint udp_port = 5400;
rtppay = gst_element_factory_make ("rtph264pay", "rtp-payer");
sink = gst_element_factory_make ("udpsink", "udp-sink");

gst_bin_add_many (GST_BIN (pipeline), queue1, pgie, queue2, nvtracker,
    queue4, tiler,
    queue5, nvvidconv,
    queue6, nvosd,
    queue7, nvvidconv1,
    filter, x264enc, rtppay,
    sink,
    NULL);

The only differences between the first and the second are the encoder element and caps4.

*First*
x264enc = gst_element_factory_make ("x264enc", "h264 encoder");
caps4 = gst_caps_from_string ("video/x-raw, format=I420");

*Second*
x264enc = gst_element_factory_make ("nvv4l2h264enc", "h264 encoder");
caps4 = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");

Both pipelines run successfully on their own.
However, with the second one I cannot receive the RTSP stream (VLC on another PC gets no video).

Since the first pipeline does not use NVENC, its CPU usage is too high for practical use. (With it, I can play the video in VLC on another PC.)

In conclusion,
can you help me find which part of the second pipeline is the problem?
Thank you as always.

Could you try setting some parameters for the plugin? Please refer to the link below:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py#L191
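For reference, the property setup in that Python sample translates roughly to the following C calls. This is only a sketch: the property names are those of nvv4l2h264enc and udpsink used by the linked sample on Jetson, and the concrete values (bitrate, multicast address) are assumptions taken from it, not from this thread's code.

```c
/* Sketch of the encoder/sink configuration from the linked sample.
 * Bitrate and multicast address are illustrative values. */
g_object_set (G_OBJECT (x264enc), "bitrate", 4000000, NULL);
g_object_set (G_OBJECT (x264enc), "insert-sps-pps", 1, NULL); /* lets clients sync mid-stream */
g_object_set (G_OBJECT (x264enc), "bufapi-version", 1, NULL); /* DeepStream buffer API (Jetson) */
g_object_set (G_OBJECT (sink),
    "host", "224.224.255.255", /* multicast address used by the sample */
    "port", udp_port,
    "async", FALSE,
    "sync", 1,
    NULL);
```

Without insert-sps-pps, a client such as VLC that joins after the stream has started may never receive the SPS/PPS headers it needs to decode, which matches the "pipeline runs but no video arrives" symptom.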

Thank you!
I set the properties of nvv4l2h264enc and changed the host address, and it works well now.

I have one more question.
I’m using two or more cameras connected to the Jetson,
and I want to stream each camera to a different RTSP URL.
Is there any Git repository or code I can refer to for this?
(deepstream-app is too costly to adapt to my product because its code is so different.)

You can use the nvstreamdemux plugin to separate the multiple sources. Please refer to https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreamdemux.html
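As a rough illustration, nvstreamdemux splits the batched stream back into one src pad per source, and each pad can feed its own encode/payload/udpsink branch (one per RTSP mount point). The gst-launch sketch below assumes two placeholder camera URIs and ports; adapt the elements and parameters to your actual pipeline.

```shell
# Sketch: two sources batched by nvstreammux, split by nvstreamdemux into
# separate encode/UDP branches. URIs, resolutions, and ports are placeholders.
gst-launch-1.0 \
  uridecodebin uri=rtsp://camera0/stream ! mux.sink_0 \
  uridecodebin uri=rtsp://camera1/stream ! mux.sink_1 \
  nvstreammux name=mux batch-size=2 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer.txt ! \
  nvstreamdemux name=demux \
  demux.src_0 ! nvvideoconvert ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5400 \
  demux.src_1 ! nvvideoconvert ! nvv4l2h264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5401
```

Each udpsink can then back its own RTSP mount (e.g. two GstRtspServer factories listening on ports 5400 and 5401), giving every camera its own URL.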


Your answer was helpful.
Thank you !

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.