Smart Record Clips Show Grey Frames When Inference Performance Drops

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Xavier NX Developer Kit
• DeepStream Version
• JetPack Version (valid for Jetson only)
4.5.2 [L4T 32.5.1]
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

We are running inference on RTSP streams, and when certain objects are detected, we trigger smart record to record a clip of the RTSP stream.

This was working without issue with our initial model, with the pipeline running at 5.07 FPS.

We deployed a new model, which reduced the pipeline to 4.23 FPS. The smart record video clips for that camera then had an issue with grey frames - the video would go grey for periods throughout each clip.

It is my understanding that the smart record call runs separately from the computer vision pipeline, and so increased inference run-time should not affect smart record performance.

Our specific questions:

  • Is smart record performance/video quality affected by inference/pipeline runtime?
  • If so, is there a way to handle this?

Thanks in advance.

Hi,
Are you able to try DeepStream 6? It looks like you are on DeepStream 5. If possible, we would suggest upgrading to the latest version and giving it a try.

Hi DaneLLL,

For the moment we are restricted to DeepStream 5 and cannot update to DeepStream 6. Is this a known issue in DeepStream 5?

Hi,
There is a topic similar to this:
Smart Record issue - record files have strange grey blocks in the middle

Please check if you observe the same symptom. We have bug fixes from DS 5 to DS 6. If you have another Xavier NX devkit, we would suggest flashing JetPack 4.6 + DS 6 for a try.

Hi,

We will upgrade our system to DS6 and report back with the results.

Hi,

We have upgraded to DeepStream 6 and run a more thorough experiment:

Source: RTSP feed from an NVR. 25 fps
Ffprobe output: Stream #0:0: Video: hevc (Main), yuvj420p(pc), 2688x1520 [SAR 1:1 DAR 168:95], 25 tbr, 90k tbn, 90k tbc

Our deepstream app is deployed in a container based on: nvcr.io/nvidia/deepstream-l4t:6.0-samples

Setup 1: Pedestrian Only Model with Fast Inference Speeds:

  • Pipeline runs at 24fps
  • No grey artifacts in smart record clips. All high quality. Sample screenshot attached.

Setup 2: Multi-class model with slow inference speed:

  • Pipeline runs at 3fps
  • Grey artifacts observed on almost all smart record clips. Sample screenshot attached.

We ran an additional experiment, where from the cli (on host device, outside the container) we recorded multiple clips from the RTSP stream with ffmpeg, for 3 cases:

  • deepstream app with slow model
  • deepstream app with fast model
  • no deepstream app deployed

and in each case the video clips were all artifact free with no grey blocks.


Hi,
Please adjust this property in nvstreammux according to the frame rate:

  batched-push-timeout: Timeout in microseconds to wait after the first buffer is available
                        to push the batch even if the complete batch is not formed.
                        Set to -1 to wait infinitely
                        flags: readable, writable
                        Integer. Range: -1 - 2147483647 Default: -1

The default settings in the config files are for 30 fps. You would need to change it if your source is not 30 fps.
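As a sketch of what that might look like (group and property names follow the standard deepstream-app reference config; the 25 fps value is an assumption - adjust it to your actual source rate):

```ini
[streammux]
# For a 25 fps source: one frame arrives every 1/25 s = 40 000 us,
# so wait at most one frame interval before pushing an incomplete batch.
batched-push-timeout=40000
```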

And also try to set this property.
RTSP latency does not work with NVSTREAMMUX - #40 by DaneLLL

Thanks for the suggestion. Our setup for each of these experiments was a stable 30 FPS RTSP camera source. That frame rate from the camera did not vary during the experiment. For both experiments the parameter was set to:

batched-push-timeout=40000

What did vary was the model, and therefore the pipeline performance (as measured by the “perf_cb” function).

Does the batched-push-timeout need to be adjusted according to pipeline performance ?

Hi,
Seems like the source frame rate is 25 fps (interval = 40 ms) per
Smart Record Clips Show Grey Frames when inference performance drops - #7 by Scriobhneoir

So you may set batched-push-timeout=50000 (50 ms; larger than 40 ms)

Hi,
The source RTSP stream is running at 25 FPS. This frame rate does not vary, even when we use the model that slows the pipeline down to 3 FPS.

With regard to “batched-push-timeout”, the grey-frame behaviour was the same in all cases:
40000: Fast model had no grey frames. Slow model all grey frames.
50000: Fast model had no grey frames. Slow model all grey frames.
250000 (to be closer to the slow pipeline's fps): Fast model had no grey frames. Slow model all grey frames.

To return to an earlier question -

  • Is smart record performance/video quality affected by inference/pipeline runtime

This experiment would suggest that it is. Is that the case?

Hi @DaneLLL ,

Any update on this issue ?

Hi,
Are you able to share a method so that we can reproduce it and debug the issue? There is a test sample for smart recording:

/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-testsr/

Please check if it is possible to reproduce it by running the sample. If you can reproduce the issue with the sample, please share the steps with us so that we can set up and try.

Hi,
We have checked and confirmed this output is expected. The DS pipeline does not alter anything, but if the pipeline is slow and the jitter buffer gets full, packets will be dropped and not written to the output buffer.
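To put rough numbers on that explanation (a back-of-the-envelope sketch using the rates reported earlier in this thread, not a measurement): frames arriving faster than the pipeline consumes them accumulate in the jitter buffer, and the surplus is what eventually gets dropped once the buffer fills.

```python
# Rough illustration of the source-vs-pipeline rate mismatch.
# The numbers come from the experiments reported in this thread.

SOURCE_FPS = 25.0   # camera / NVR output rate

def surplus_frames_per_second(pipeline_fps: float) -> float:
    """Frames per second arriving faster than the pipeline consumes them.

    Anything above 0 accumulates in the RTSP jitter buffer; once the
    buffer is full, packets are dropped, which shows up as grey blocks
    in the recorded clips.
    """
    return max(0.0, SOURCE_FPS - pipeline_fps)

print(surplus_frames_per_second(24.0))  # fast model: 1.0 frame/s backlog
print(surplus_frames_per_second(3.0))   # slow model: 22.0 frames/s backlog
```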

So a possible solution is to reduce the load of the model, so that the model's throughput can catch up with the source frame rate. You can try setting the interval property:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#primary-gie-and-secondary-gie-group
so that not every frame is processed, and use nvtracker to bridge the skipped frames.
The other solution is to reduce the frame rate of the camera to match the fps of the pipeline.
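A sketch of the interval suggestion in deepstream-app config form (group and property names per the reference config linked above; the value 2 is only an example):

```ini
[primary-gie]
enable=1
# Run inference only on every 3rd frame (skip 2 frames between inferences);
# the tracker carries object IDs across the skipped frames.
interval=2

[tracker]
enable=1
```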

If you are not able to reduce the load of the model or the source frame rate, you may implement the function yourself instead of using smart recording, by running two pipelines like:

... ! nvstreammux ! nvinfer ! nvdsosd ! appsink
appsrc ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=sr.mkv

You can receive the frames in appsink. If you would like to encode a frame, you can send it to appsrc.
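The decoupling idea behind the two-pipeline suggestion can be sketched in plain Python, with a bounded queue standing in for the appsink-to-appsrc hand-off (all names here are illustrative, and no actual GStreamer code is shown): the inference side never blocks on recording, and the recording side drains frames at its own pace, so a slow model stalls inference rather than recording.

```python
import queue
import threading

# A bounded queue stands in for the buffer exchange between the two
# GStreamer pipelines suggested above; at 25 fps, maxsize=30 holds
# roughly 1 second of video.
frame_queue: "queue.Queue[bytes]" = queue.Queue(maxsize=30)

def on_new_sample(frame: bytes) -> None:
    """Stand-in for the inference pipeline's appsink callback."""
    try:
        frame_queue.put_nowait(frame)   # never block the inference side
    except queue.Full:
        frame_queue.get_nowait()        # drop the oldest frame instead
        frame_queue.put_nowait(frame)

recorded: list = []

def recorder() -> None:
    """Drains the queue on its own thread, feeding the encoder (appsrc)."""
    while True:
        frame = frame_queue.get()
        if frame == b"EOS":
            break
        recorded.append(frame)          # stand-in for pushing to appsrc

t = threading.Thread(target=recorder)
t.start()
for i in range(5):
    on_new_sample(f"frame-{i}".encode())
on_new_sample(b"EOS")
t.join()
print(len(recorded))  # 5
```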

Hi,

Thank you for the answer, and for confirming that this is the expected output.

Following the suggestion we have increased the interval, and are still experiencing the same grey frame issue:

Interval - FPS - Grey Frames
1 - 6.33 - Yes
2 - 7.98 - Yes
3 - 10.29 - Yes
4 - 12.02 - Yes
5 - 14.75 - Yes
6 - 15.97 - Yes
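For what it's worth, a rough linear extrapolation from the table above (our own back-of-the-envelope estimate, assuming fps keeps growing linearly with interval, which it may well not) suggests how far the interval would have to go before the pipeline keeps up with the 25 fps source:

```python
import math

# Least-squares line through the interval/FPS table above.
intervals = [1, 2, 3, 4, 5, 6]
fps = [6.33, 7.98, 10.29, 12.02, 14.75, 15.97]

n = len(intervals)
mean_x = sum(intervals) / n
mean_y = sum(fps) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(intervals, fps)) / \
        sum((x - mean_x) ** 2 for x in intervals)
intercept = mean_y - slope * mean_x

# Interval at which the fitted line reaches the 25 fps source rate.
needed = math.ceil((25 - intercept) / slope)
print(round(slope, 2), needed)  # ~2.01 fps gained per interval step; interval ~ 11
```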