I was trying to run deepstream-test1-rtsp on a Jetson Orin Nano. I am using x264 (software encoding) and everything works fine, but I am not getting correct output over RTSP. The following image shows the decoded frames using OpenCV on another machine (I have also used other software to read the RTSP stream). I have used the same code on the original Jetson Nano (the older model with a hardware encoder) and I didn’t have any problem. I am using the following container: nvcr.io/nvidia/deepstream:6.4-triton-multiarch
• Hardware Platform: Jetson Orin Nano • DeepStream Version: 6.4 • JetPack Version (valid for Jetson only): 6
I also tested it directly on the host (no Docker) and the result is the same.
I installed DeepStream and the Python bindings. When I run test1 it is able to detect objects, but the RTSP output shows the same corruption.
OK. Let’s narrow it down. Did you use our demo code deepstream-test1-rtsp-out?
Could you attach your modifications and try using gst-launch-1.0 to run the pipeline and save the output to an mp4 file?
Yes, I used the demo code deepstream-test1-rtsp-out without modification. I used gst-launch-1.0 to record the video, and the output is the same. I have attached the video. I use the following command to start it:
python3 deepstream_test1_rtsp_out.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 -e 1
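For the save-to-mp4 check suggested above, a pipeline along the following lines exercises the same software-encoder path outside the demo (a sketch, not the exact command used in this thread; the encoder settings and output path are assumptions):

```shell
# Decode the stock sample clip, re-encode with the x264 software encoder,
# and mux to MP4. If this file also shows corruption, the encoder path
# rather than the RTSP server is at fault. -e sends EOS on Ctrl-C so the
# MP4 is finalized correctly.
gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
  ! h264parse ! nvv4l2decoder \
  ! nvvideoconvert ! video/x-raw,format=I420 \
  ! x264enc speed-preset=ultrafast tune=zerolatency bitrate=3000 \
  ! h264parse ! qtmux ! filesink location=/tmp/out.mp4
```

Note that x264enc consumes system-memory buffers, so the caps after nvvideoconvert request plain video/x-raw rather than video/x-raw(memory:NVMM).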
There are some issues with this demo in DeepStream 6.4, which we will analyze and fix as soon as possible.
Currently, you can try to modify the code below: deepstream_test1_rtsp_out.py.
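One commonly suggested direction for that modification (a hedged sketch, not the official patch) is to request system-memory caps after the converter when the software encoder is selected, since x264enc cannot consume Jetson NVMM buffers; the helper name below is illustrative, only the enc_type convention (0 = hardware, 1 = software) comes from the demo:

```python
# Caps for the capsfilter that feeds the encoder in the RTSP-out pipeline.
# nvv4l2h264enc (hardware) reads NVMM buffers; x264enc (software) needs
# plain system memory, so the NVMM memory feature must be dropped.
HW_CAPS = "video/x-raw(memory:NVMM), format=I420"  # hardware encoder path
SW_CAPS = "video/x-raw, format=I420"               # software encoder path

def caps_for_encoder(enc_type: int) -> str:
    # enc_type follows the demo's -e flag: 0 = hardware, 1 = software
    return SW_CAPS if enc_type == 1 else HW_CAPS

print(caps_for_encoder(1))  # video/x-raw, format=I420
```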
I have a similar problem where the RTSP output is just a green screen.
• Hardware Platform Jetson Orin NX • DeepStream Version 6.4 • JetPack Version (valid for Jetson only) 6.0
I do not run the Python code; I used the deepstream-test3 C code and added the RTSP sink. I have checked all the parameters used in the Python example, and there is no deviation.
After adding this parameter, the frame rate drops to 0 (no image processing?):
The Orin NX has a hardware encoder, so you should be able to use the examples as-is.
I had problems with the software encoder (Orin Nano); this pipeline solved the issue:
self.launch_string = (
    'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME '
    'caps=video/x-raw,format=BGR,width={},height={},framerate={}/1 '
    '! videoconvert ! video/x-raw,format=I420 '
    '! x265enc speed-preset=ultrafast tune=zerolatency bitrate=3000 '
    '! rtph265pay config-interval=1 name=pay0 pt=96'
).format(opt.image_width, opt.image_height, self.fps)
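For reference, the same launch string can be assembled as a standalone helper (a sketch; the 1280x720 at 30 fps values are placeholders for opt.image_width, opt.image_height and self.fps from the surrounding class):

```python
# Self-contained version of the appsrc -> x265enc -> RTP launch string
# used above, with straight quotes and explicit parenthesized
# concatenation so the string literals join correctly.
def build_launch_string(width, height, fps):
    return (
        'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME '
        'caps=video/x-raw,format=BGR,width={},height={},framerate={}/1 '
        '! videoconvert ! video/x-raw,format=I420 '
        '! x265enc speed-preset=ultrafast tune=zerolatency bitrate=3000 '
        '! rtph265pay config-interval=1 name=pay0 pt=96'
    ).format(width, height, fps)

print(build_launch_string(1280, 720, 30))
```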
This may be a different problem; theoretically, configuring parameters on nvvidconv_postosd should not affect nvinfer. You can file a new topic.