•Hardware Platform (Jetson / GPU): GPU (T4)
•DeepStream Version: nvcr.io/nvidia/deepstream:6.1.1-devel
•NVIDIA GPU Driver Version (valid for GPU only): 515
•Issue Type (questions, new requirements, bugs): Question
I’m running a version of the deepstream-test3 pipeline that, instead of rendering the video feed, saves the results to a file using NvDsFileOut and also publishes an RTSP stream using NvDsRtspOut. The pipeline can be seen below:
The first issue: the RTSP stream does not load at all in VLC, and only shows something when using Ubuntu’s “Video” application, but it is heavily corrupted, see below:
The second issue: the mp4 file saved by NvDsFileOut is not playable. VLC gives the error “this file contains no playable streams.” Similarly, ffmpeg gives the error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55f05912f6c0] moov atom not found
output.mp4: Invalid data found when processing input
This is caused by packets lost during RTSP transfer. You can change the “iframeinterval” parameter of the NvDsRtspOut extension to a very small value (such as 5) to improve recovery from packet loss.
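In a Graph Composer graph, that suggestion would amount to editing the NvDsRtspOut component’s parameters in the graph YAML. A hedged sketch, assuming the usual Graph Composer component layout (the component name “rtsp_out” is made up for illustration, and parameter names may vary between versions):

```yaml
# Illustrative Graph Composer component entry; "rtsp_out" is a
# hypothetical name, not from the original pipeline.
- name: rtsp_out
  type: nvidia::deepstream::NvDsRtspOut
  parameters:
    # Force a keyframe every 5 frames so decoders can
    # resynchronize quickly after a dropped packet.
    iframeinterval: 5
```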
Then it seems like I should be able to get at least 5 FPS in the outgoing RTSP stream, no? The 6 incoming RTSP streams are 320x240 running at 6 FPS. Could this be due to an issue with the incoming RTSP streams? I saw a lot of RTSP-related properties in the input nodes, such as latency on the NvDsMultiSrcInput node, as well as live-source and max-latency on the NvDsStreamMux node. Do you have any suggestions on what to set these values to? I also have sync disabled for all nodes; should it be enabled?
Thanks
EDIT:
The mp4 file is now playable, but shows the same artifacts, see screenshot below:
EDIT 2:
A local run with a 2070 Super and 32x the ‘samples/streams/sample_qHD.mp4’ file produces a perfectly fine mp4 and RTSP stream with the exact same pipeline.
Have you set " live-source" parameter of NvDsStreamMux extension to “true”? Have you set " batch-size" parameter of NvDsStreamMux extension to the number of your rtsp streams(for the case you post, it is 8)?
Do you have any experience with c/c++ DeepStream APIs before?
Yes, live-source is set to true. batch-size matches the number of input streams for both the NvDsStreamMux and the NvDsInferVideo nodes.
For testing, I also just replaced the 8 RTSP stream URLs with 8x file:///opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_qHD.mp4 and everything runs totally fine, including the file output and the outgoing RTSP stream.
I also just tested the pipeline again using a local RTSP server, and everything runs fine as well, so I think the issue lies with the configuration and throughput of the RTSP streams I’m using it with.
I haven’t used the lower level C++ APIs yet, so far I’ve only used the graph composer.
This is caused by packets lost during RTSP transfer. You can change the “iframeinterval” parameter of the NvDsRtspOut extension to a very small value (such as 5) to improve recovery from packet loss. It can only be improved, not fixed, since it is a network issue rather than an application issue.
For future reference, I managed to fix this issue by setting the ‘select-rtp-protocol’ property to 4, which forces TCP instead of UDP and completely eliminated the artifacts caused by packet loss.
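For anyone applying the same fix in a Graph Composer graph, a hedged sketch of the input component (the component name “multi_src_input” and the exact placement are illustrative; I’m assuming select-rtp-protocol sits on the RTSP input side, e.g. NvDsMultiSrcInput):

```yaml
# Illustrative Graph Composer component entry for the RTSP input.
- name: multi_src_input
  type: nvidia::deepstream::NvDsMultiSrcInput
  parameters:
    # 4 forces RTP over TCP instead of UDP, avoiding the
    # packet loss that corrupted the decoded frames.
    select-rtp-protocol: 4
```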