Encoding more than 3 streams using nvv4l2h264enc for RTSP output streams

Please provide complete information as applicable to your setup.

• Hardware Platform: NVIDIA GeForce GTX 1650 dGPU
• DeepStream Version: 6.1
• TensorRT Version: 8.4.2.1 GA
• NVIDIA GPU Driver Version: 510.85.02
• Issue Type: Question

My requirement is to create an application with DeepStream that produces about six RTSP output streams. I run inference, tracking, and analytics before splitting the outputs with the nvstreamdemux plugin. I then have to encode each of the six resulting streams before passing them through an RTP payloader to the RTSP sink. I know it is a hardware limitation of my GPU that I cannot encode more than 3 streams. Is there a workaround for this, such as software encoding? The following is the pipeline I'm currently using:

nvstreammux → queue → pgie → queue → nvtracker → queue → nvdsanalytics → nvstreamdemux → queue → nvvideoconvert → nvosd → nvvideoconvert → capsfilter → nvv4l2h264enc → rtph264pay → nvegltransform (Jetson) → udpsink
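For reference, one demuxed branch of the pipeline above looks roughly like this in gst-launch notation (the pad name, caps, and property values are illustrative, not the exact command I run):

```shell
# Sketch of a single nvstreamdemux output branch feeding the hardware
# encoder; bitrate, host, and port values are illustrative placeholders.
demux.src_0 ! queue ! nvvideoconvert ! \
  "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! \
  "video/x-raw(memory:NVMM), format=NV12" ! nvv4l2h264enc bitrate=4000000 ! \
  rtph264pay ! udpsink host=127.0.0.1 port=5400 sync=0
```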

The application works fine for three or fewer streams, but the FPS is lower than when using nvmultistreamtiler. Can I get multiple RTSP outputs without depending on nvv4l2h264enc, or can I use it before nvstreamdemux?

An update on this: I managed to use x264enc, but it makes the output video very laggy (4 FPS on average) and sometimes the RTSP link does not start streaming at all.
I also tried nvh264enc from the NVIDIA Video Codec SDK, but it does not output any frames, even though the performance measurement from nvdsanalytics shows the streams running at ~27 FPS. So I think nvh264enc can do the job; I'm just not sure how to integrate it into my pipeline properly to get output. The relevant part of my pipeline is:

nvstreamdemux → nvvideoconvert → nvosd → nvvideoconvert → capsfilter (x-raw, format=NV12) → nvh264enc → rtph264pay → udpsink

Please let me know if I’m missing something…
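On the x264enc lag: its defaults favor quality over latency, so a low-latency configuration may behave very differently. A sketch of the kind of tuning I mean (property values are illustrative guesses, not verified settings for my setup):

```shell
# x264enc buffers aggressively at default settings; zerolatency/ultrafast
# trade quality for speed. Bitrate is in kbit/s; values are examples only.
... ! nvvideoconvert ! "video/x-raw, format=I420" ! \
  x264enc tune=zerolatency speed-preset=ultrafast bitrate=4000 key-int-max=30 ! \
  h264parse ! rtph264pay ! udpsink ...
```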

#encoder type 0=Hardware 1=Software
enc-type=0
As the configuration shows, you can set three output streams to hardware encoding and the other three streams to software encoding.
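For example, in the deepstream-app configuration the encoder type is selected per sink group, so a mixed setup could look roughly like this (only the relevant keys are shown; section numbers, ports, and values are illustrative):

```ini
# Illustrative deepstream-app sink groups: the first sink uses the
# hardware encoder (enc-type=0), the second the software encoder (enc-type=1).
[sink1]
enable=1
# type 4 = RTSP output, codec 1 = H.264
type=4
codec=1
enc-type=0
rtsp-port=8554

[sink2]
enable=1
type=4
codec=1
enc-type=1
rtsp-port=8555
```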

Thanks for the reply. My requirement also includes adding and removing sources dynamically, so I cannot hard-code the sources in the config file. Could you suggest a software encoder fast enough for my use case? I have tried x264enc, which is too slow, and NVENC (nvh264enc) does not output any frames at all.

Is this the correct pipeline?

Which DeepStream sample are you testing? You can refer to the deepstream-app sample. Please check why nvh264enc does not output any frames.
Please refer to this command:
gst-launch-1.0 -e nvstreammux name=mux batch-size=2 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=2 ! nvstreamdemux name=demux filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_1 demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4 demux.src_1 ! queue ! nveglglessink

I'm actually testing a combination of the nvdsanalytics, runtime add/delete streams, and multi-in/multi-out samples. I want to let users add/remove cameras at runtime without restarting the pipeline, then run analytics and display the streams in a web-based application. nvmultistreamtiler would work if there were only a single user, but with multiple users, switching streams on the tiler switches them for all users. Also, removing a stream leaves a blank spot in place of the deleted stream, which is why I don't want to use the tiler plugin. I'm trying to resolve this by using an RTSP output for each stream.

I can't quite understand why I'm not getting any frames from it. I've tried adding a capsfilter before the encoder to enforce video/x-raw with format RGBA, but it seems to me that NVMM memory is not supported by the nvh264enc plugin. FYI, the nvh264enc plugin was NOT included with the default DeepStream installation; I installed it by following a link in this forum:
Nvidia Gstreamer plugins (nvenc, nvdec) decode H265 video error

This pipeline works fine with nvv4l2h264enc as well as nvh264enc. I’m not sure what I’m doing wrong in my deepstream pipeline.
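For what it's worth, my current guess is that the demux branch has to drop out of NVMM memory before nvh264enc, something like this sketch (the caps, pad name, and port are assumptions I still need to verify):

```shell
# nvh264enc (from gst-plugins-bad / Video Codec SDK) advertises plain
# video/x-raw caps, so the second nvvideoconvert must copy out of NVMM.
# All property values here are illustrative.
demux.src_0 ! queue ! nvvideoconvert ! \
  "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert ! \
  "video/x-raw, format=NV12" ! nvh264enc ! h264parse ! \
  rtph264pay ! udpsink host=127.0.0.1 port=5400
```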

Do you still have any DeepStream issues?

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.