Jetson ai_nvr sink to rtsp

Hi,

I am trying to view the RTSP stream coming out of the ai_nvr example containers in the NVIDIA Jetson platform example workflows. I run the containers and they work perfectly: the tripwires and ROIs are working. But I can't seem to get the RTSP stream from the address rtsp://localhost:8555 or from any other active port.

I looked in the config files, but I can't figure out how to configure the service_maker config files to initiate an RTSP sink from DeepStream. Can anybody guide me on how to add an RTSP sink in the service_maker config files?
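For context, the service-maker YAML configs describe the pipeline as a list of nodes plus edges. A rough, unverified sketch of what an RTSP output might look like is below; the element name `rtspsinkbin` is taken from later in this thread, and the property names (`port`, `bitrate`) and edge wiring are assumptions to be checked against the JPS FAQ, not confirmed syntax:

```yaml
# Sketch only: element/property names are assumptions; see the JPS FAQ
# for the exact service-maker RTSP-out configuration.
deepstream:
  nodes:
    # ... existing nodes (sources, inference, tracker, converter) ...
    - type: rtspsinkbin      # hypothetical RTSP output element
      name: rtsp_out
      properties:
        port: 8555           # port the stream is expected on
        bitrate: 2000000     # SW-encoder bitrate in bit/s
  edges:
    converter1: rtsp_out     # link the last converter to the new sink
```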

Please refer here for enabling RTSP out in AI_NVR: JPS FAQ

Thanks for the timely reply.

I have edited the ds-config_nano.yaml in the service-maker directory, but it is still failing. I am getting the following errors in the ai_nvr DeepStream container logs.

This is the command that I am running in my DeepStream compose_nano.yaml:

command: sh -c '/opt/nvidia/deepstream/deepstream/service-maker/sources/apps/cpp/deepstream_test5_app/build/deepstream-test5-app -s /ds-config-files/pn26/service-maker/source-list_nano.yaml -c /ds-config-files/pn26/service-maker/ds-config_nano.yaml -l /ds-config-files/pn26/labels.txt --perf-measurement-interval-sec 5 2>&1 | grep --line-buffered . | tee -a /log/deepstream.log'

deepstream_logs.txt (81.3 KB)

********

LINKING: Source: tee Target: queue1
LINKING: Source: tee Target: queue2
LINKING: Source: queue1 Target: converter1
LINKING: Source: queue2 Target: msgconv
LINKING: Source: converter1 Target: sink
Unable to create video pipelineFailed to create on-request sinkpad. Exiting.
YAML Parsing Error: deepstream::Pipeline::Pipeline(const char*, const string&) Element creation failed
***********

Can you enable more logs to debug the issue, e.g. "export GST_DEBUG=4"? Also note that the Orin Nano has no HW video encoder; it needs a SW video encoder.
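As a concrete sketch of the suggestion above: `GST_DEBUG` is the standard GStreamer debug-level variable, and it can be exported before the app is launched (for example, inside the `sh -c` command in compose_nano.yaml). The log file path here is just an example:

```shell
# GST_DEBUG=4 raises GStreamer logging to INFO level (1=ERROR ... 9=MEMDUMP).
export GST_DEBUG=4
# Optional: send GStreamer debug output to its own file instead of stderr.
export GST_DEBUG_FILE=/log/gst-debug.log   # example path
echo "GST_DEBUG is set to $GST_DEBUG"
```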

Hi, I got it working, but there is an issue. The quality is very bad: the RTSP video stream lags a lot, with many artifacts. Is there any way to make it better? Another strange thing happens: when I start the DeepStream container and inference is running in the background, my remote SSH connections drop automatically, and I am also unable to access any remote services like VST or eMDX. Why is this happening? This only started after I added the RTSP sink on the Jetson Orin Nano.

What is the system load like? The SW video encoder causes high CPU load and may be the bottleneck.

I have checked the system load and it is only about 30 to 40 percent. It is more of an issue with the Jetson Orin Nano's network bandwidth. Adding a bitrate to the rtspsinkbin and lowering it to 200000 made it work.
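For anyone following along, a hedged sketch of that change in the service-maker YAML is below; the `bitrate` property name is assumed (it is the convention used by the DeepStream encoder elements), so verify it against the actual rtspsinkbin documentation:

```yaml
# Sketch: cap the SW-encoder bitrate on the RTSP sink so the stream
# fits the Wi-Fi uplink; 200000 bit/s = 200 kbit/s.
- type: rtspsinkbin
  name: rtsp_out
  properties:
    bitrate: 200000   # assumed property name; default is much higher
```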

I am using two networks at the same time: one is Wi-Fi and the other is Ethernet, which is connected to the camera. I am accessing the Jetson remotely over the local Wi-Fi network.

Any suggestions on how it can be better, or am I stuck with low network bandwidth? Currently I am only running PeopleNet, but I plan to run multiple inference models in parallel on multiple streams coming into the Jetson Orin Nano.

Glad to know you fixed the RTSP output issue for AI_NVR. Regarding the Wi-Fi network issue, please submit a new topic so the different issues can be tracked separately. Thanks.