Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX Developer Kit
• DeepStream Version:
• JetPack Version (valid for Jetson only): 4.5.2 [L4T 32.5.1]
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration-file contents, the command line used, and other details for reproducing.)
We are running inference on RTSP streams, and when certain objects are detected we trigger smart record to capture a clip of the RTSP stream.
This worked without issue with our initial model, with the pipeline running at 5.07 FPS.
We then deployed a new model, which reduced the pipeline rate to 4.23 FPS. The smart record clips for that camera then showed grey frames: the video would go grey for periods throughout each clip.
It is my understanding that smart record runs separately from the computer-vision pipeline, so increased inference run-time should not affect smart record performance.
Our specific questions:
Is smart record performance/video quality affected by inference/pipeline runtime?
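For context on how we trigger recording: if you are using deepstream-app, smart record is configured per source in the config file. A minimal fragment might look like the following (values and paths are placeholders, not our exact setup; key names follow the DeepStream deepstream-app reference):

```ini
[source0]
enable=1
# type 4 = RTSP source
type=4
uri=rtsp://CAMERA_IP/stream
# 1 = record on cloud message only, 2 = cloud message + local events
smart-record=2
smart-rec-dir-path=/tmp/recordings
smart-rec-file-prefix=cam0
# seconds of video cached ahead of the trigger event
smart-rec-cache=20
# 0 = mp4 container
smart-rec-container=0
smart-rec-default-duration=10
```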
Hi,
Are you able to try DeepStream 6? It looks like you are on DeepStream 5. If possible, we would suggest upgrading to the latest version and giving it a try.
Please check whether you observe the same symptom; there are bug fixes between DS 5 and DS 6. If you have another Xavier NX devkit, we would suggest flashing JetPack 4.6 + DS 6 for a try.
Setup 1: Pedestrian-only model with fast inference speed:
Pipeline runs at 24 fps
No grey artifacts in the smart record clips. All high quality. Sample screenshot attached.
Setup 2: Multi-class model with slow inference speed:
Pipeline runs at 3 fps
Grey artifacts observed on almost all smart record clips. Sample screenshot attached.
We ran an additional experiment where, from the CLI (on the host device, outside the container), we recorded multiple clips from the RTSP stream with ffmpeg, for 3 cases:
deepstream app with slow model
deepstream app with fast model
no deepstream app deployed
and in each case the clips were all artifact-free, with no grey blocks.
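For reproducibility, here is a sketch that assembles the kind of ffmpeg baseline command we describe (the RTSP URL, duration, and output path are placeholders, not our exact invocation):

```python
# Sketch of the baseline test: record the RTSP stream directly with ffmpeg,
# bypassing DeepStream entirely. URL, duration, and output are placeholders.
rtsp_url = "rtsp://CAMERA_IP/stream"

cmd = [
    "ffmpeg",
    "-rtsp_transport", "tcp",   # TCP transport avoids UDP loss skewing the test
    "-i", rtsp_url,
    "-c", "copy",               # no re-encode: any artifacts come from the stream
    "-t", "30",                 # 30-second clip
    "baseline.mp4",
]
print(" ".join(cmd))
# To execute: subprocess.run(cmd, check=True)
```

Because `-c copy` writes the compressed stream as-is, an artifact-free clip here means the stream arriving at the device is clean.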
Hi,
Please adjust this property in nvstreammux according to the frame rate:
batched-push-timeout: Timeout in microseconds to wait after the first buffer is available
to push the batch even if the complete batch is not formed.
Set to -1 to wait infinitely
flags: readable, writable
Integer. Range: -1 to 2147483647. Default: -1
The default setting in the config files assumes a 30 fps source. You would need to change it if your source is not 30 fps.
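The rule of thumb is that batched-push-timeout should roughly match one frame interval of the source, expressed in microseconds. A quick arithmetic check (no DeepStream needed; the helper name is ours):

```python
# batched-push-timeout should be about one source-frame interval, in microseconds.
def push_timeout_us(source_fps: int) -> int:
    return int(1_000_000 / source_fps)

print(push_timeout_us(30))  # ~33333 us for a 30 fps source
print(push_timeout_us(25))  # 40000 us for a 25 fps source
```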
Thanks for the suggestion. Our setting for each of these experiments used a stable 30 FPS RTSP camera source; the frame rate from the camera did not vary during the experiment. For both experiments the parameter was set to:
batched-push-timeout=40000
What did vary was the model, and therefore the pipeline performance (as measured by the “perf_cb” function).
Does batched-push-timeout need to be adjusted according to pipeline performance?
Hi,
The source RTSP stream runs at 25 FPS. This frame rate does not vary, so even when we use the model that slows the pipeline to 3 fps, the frame rate from the source device does not change.
With regard to “batched-push-timeout”, the grey-frame behaviour was the same in all cases:
40000: fast model had no grey frames; slow model, all grey frames.
50000: fast model had no grey frames; slow model, all grey frames.
250000 (to be closer to the slow pipeline's fps): fast model had no grey frames; slow model, all grey frames.
To return to an earlier question:
Is smart record performance/video quality affected by inference/pipeline runtime?
This experiment would suggest that it is. Is that the case?
Please check whether you can reproduce this by running the sample app. If you can, please share the steps with us so that we can set up and try.
Hi,
We have checked and confirmed this output is expected. The DS pipeline does not alter the stream, but if the pipeline is slow and the jitter buffer becomes full, packets will be dropped and not written to the output buffer.
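This failure mode can be illustrated with a toy model. This is only an illustration of the mechanism described, not DeepStream's actual buffer implementation, and all the numbers are made up:

```python
# Toy model of a bounded jitter buffer: the camera produces packets at a fixed
# rate while a slow pipeline drains them at its processing rate. Once the
# buffer is full, newly arriving packets are dropped -- the missing data is
# what shows up as grey regions in the recorded clip.
capacity = 100          # buffer size in packets (illustrative)
arrive_per_tick = 25    # camera rate, e.g. a 25 fps source
drain_per_tick = 3      # slow-model pipeline rate, ~3 fps

buf = 0
dropped = 0
for _ in range(100):
    for _ in range(arrive_per_tick):
        if buf < capacity:
            buf += 1
        else:
            dropped += 1        # buffer full: packet lost
    buf -= min(drain_per_tick, buf)

print(f"packets dropped: {dropped}")
```

With `drain_per_tick` raised to match the arrival rate (the fast-model case), the buffer never fills and nothing is dropped, which matches the artifact-free clips seen in Setup 1.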
If you cannot reduce the model's load or the source frame rate, you may implement the recording function yourself instead of using smart record. You may run two pipelines like:
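The reply trails off here. One way to read the "two pipelines" suggestion is to decouple recording from inference entirely, so the slow model cannot starve the recording branch. A hypothetical sketch of the two launch strings (the URI, resolution, config path, and output path are placeholders; run each string with gst-launch-1.0 or Gst.parse_launch):

```python
# Hypothetical sketch: split inference and recording into two independent
# GStreamer pipelines. Element names are standard DeepStream/GStreamer
# plugins; all locations and parameters are placeholders.
RTSP = "rtsp://CAMERA_IP/stream"

# Pipeline 1: decode + inference only; no recording in this branch.
infer = (
    f"rtspsrc location={RTSP} ! rtph264depay ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=model_config.txt ! fakesink"
)

# Pipeline 2: record the compressed stream as-is -- no decode, no inference,
# so pipeline-1 load cannot cause drops here.
record = (
    f"rtspsrc location={RTSP} ! rtph264depay ! h264parse ! "
    "mp4mux ! filesink location=clip.mp4"
)

print(infer)
print(record)
```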