The inference video frames have residual-image and water-ripple artifacts

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
jetson
• DeepStream Version
6.3
• JetPack Version (valid for Jetson only)
5.1.2
• TensorRT Version
8.5
• NVIDIA GPU Driver Version (valid for GPU only)
11.4
• Issue Type( questions, new requirements, bugs)
question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
When I use DeepStream on Jetson for video stream analysis, I find that the inferred frames occasionally show afterimages and water ripples that last for about 2 seconds. This leads to frequent false alarms during video analysis. Is there a good solution to this problem?
Here are some examples:

Can you provide such information?

I use the deepstream_parallel_inference_app sample for vehicle detection. The frame image is saved through the nvds_obj_enc_process interface, and the detection results are drawn on the image with OpenCV.
When monitoring events on multiple video channels, we found that many abnormal vehicle-stop alarms were caused by frame afterimages and water ripples: the car appears stuck for a short time, yet the video looks normal when viewed in VLC.

Is this your customized app?

What are the input sources, local video files or live streams? What is the output? Is it displayed directly on a monitor connected to your Jetson board? What is the CPU and GPU load when you run your app with this case?


This is the video saved when a parking event is detected.

Is the VLC player running in the same Jetson device in which the DeepStream app is running?

It is a customized app modified from deepstream_parallel_inference_app. The input sources are live RTSP streams. Nothing is displayed on a monitor; only the image and video of each incident are kept.

How was the video saved?

VLC is running on Windows devices.

By calling the smart-record module interface.

How did you use the smart recording APIs?

Are the images in your first post captured from the video, or are they the images saved by OpenCV?

Following deepstream-image-meta-test, the frame image is saved by calling the nvds_obj_enc_process and nvds_obj_enc_finish interfaces. The detection boxes are then drawn on the frame image with OpenCV.
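For reference, the frame-saving path described above looks roughly like this: a pad probe that queues JPEG encodes via the nvds_obj_enc_* interfaces. This is a minimal sketch only, modeled on deepstream-image-meta-test; it assumes DeepStream 6.3 headers, the NvDsObjEncUsrArgs field names (saveImg, fileNameImg) should be checked against nvds_obj_encode.h for your version, and error handling is omitted. It will not compile standalone.

```c
/* Sketch only: requires DeepStream headers (nvds_obj_encode.h, gstnvdsmeta.h). */
static GstPadProbeReturn
encode_frame_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  NvDsObjEncCtxHandle enc_ctx = (NvDsObjEncCtxHandle) user_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  GstMapInfo map;
  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  NvBufSurface *surface = (NvBufSurface *) map.data;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    for (NvDsMetaList *o = frame_meta->obj_meta_list; o; o = o->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) o->data;

      NvDsObjEncUsrArgs args = { 0 };
      args.saveImg = TRUE;            /* write the JPEG to disk */
      args.attachUsrMeta = FALSE;
      snprintf (args.fileNameImg, sizeof (args.fileNameImg),
                "frame_%u_obj_%lu.jpg",
                frame_meta->frame_num, obj_meta->object_id);
      /* Queue this object/frame for JPEG encoding. */
      nvds_obj_enc_process (enc_ctx, &args, surface, obj_meta, frame_meta);
    }
  }
  /* Block until all encodes queued for this batch are done. */
  nvds_obj_enc_finish (enc_ctx);
  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

The saved JPEGs can then be re-opened with OpenCV to draw the detection boxes, as described above.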

Can you answer the questions?

Is the smart recording API used with the input rtsp sources? There is no bbox in the video.

Yes, there is no bbox in the video.

Is it correct that your video is recorded from the RTSP source before inferencing?

The video is recorded by configuring the relevant smart-record parameters. Recording is started by calling the recording interface when an abnormal parking event occurs, so the video is recorded while inferencing.

There are two ways to use the smart recording APIs. The first is to work with the deepstream-app source code by configuring the "smart-record", "smart-rec-dir-path", … properties (see DeepStream Reference Application - deepstream-app — DeepStream documentation).
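For the first method, the smart-record properties go into the per-source group of the deepstream-app configuration file, roughly like this. Treat this as a sketch: the property names follow the deepstream-app documentation, but the paths and values here are placeholders and should be adapted (and checked against the docs for your DeepStream version):

```
[source0]
enable=1
type=4                       # RTSP source
uri=rtsp://<camera-address>
# 1 = start/stop via cloud messages only, 2 = cloud messages + local events
smart-record=2
smart-rec-dir-path=/tmp/sr
smart-rec-file-prefix=parking_event
smart-rec-cache=20           # seconds of video cached before the trigger
smart-rec-container=0        # 0 = mp4, 1 = mkv
smart-rec-default-duration=20
```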

The other way is to call the NvDsSRCreate(), NvDsSRStart(), … APIs directly in the app source code (see Smart Video Record — DeepStream documentation).

Which way are you using?

The second method is used.
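For the second method, the calling sequence is roughly the following. This is a sketch against the Smart Video Record API only: the struct field names (containerType, cacheSize, fileNamePrefix, …) should be verified against gst-nvdssr.h for your DeepStream version, the paths and durations are placeholders, error handling is omitted, and it will not compile standalone.

```c
/* Sketch only: requires DeepStream's gst-nvdssr.h. */
#include "gst-nvdssr.h"

static gpointer
record_done_cb (NvDsSRRecordingInfo *info, gpointer user_data)
{
  /* Invoked when the recording file has been closed. */
  g_print ("Saved %s/%s\n", info->dirpath, info->filename);
  return NULL;
}

static NvDsSRContext *
setup_smart_record (void)
{
  NvDsSRInitParams params = { 0 };
  params.containerType = NVDSSR_CONTAINER_MP4;
  params.cacheSize = 20;           /* seconds cached before the trigger */
  params.defaultDuration = 20;     /* seconds recorded after the trigger */
  params.dirpath = (gchar *) "/tmp/sr";
  params.fileNamePrefix = (gchar *) "parking_event";
  params.callback = record_done_cb;

  NvDsSRContext *ctx = NULL;
  NvDsSRCreate (&ctx, &params);
  /* ctx->recordbin must then be added to the pipeline and fed from a
   * tee on the encoded stream (before the decoder). */
  return ctx;
}

/* Called when an abnormal parking event is detected. */
static void
on_parking_event (NvDsSRContext *ctx)
{
  NvDsSRSessionId session = 0;
  /* startTime = seconds of cached video to include before "now". */
  NvDsSRStart (ctx, &session, 15, 20, NULL);
  /* NvDsSRStop (ctx, session) can end the recording early. */
}
```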

Where did you put the "NvDsSRContext->recordbin" in your pipeline?
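For context, the usual placement (as in the DeepStream reference apps) feeds the recordbin from a tee on the depayloaded, still-encoded stream, before decoding. The linking sketch below assumes the recordbin exposes a "sink" ghost pad as in the reference apps, uses the GStreamer 1.16 request-pad API shipped with JetPack 5.x, and omits error handling; it is not compilable standalone.

```c
/* Typical topology (sketch):
 * rtspsrc -> rtph264depay -> tee -+-> queue -> h264parse -> decoder -> inference ...
 *                                 +-> queue -> ctx->recordbin
 */
gst_bin_add (GST_BIN (pipeline), ctx->recordbin);
GstPad *tee_pad = gst_element_get_request_pad (tee, "src_%u");
GstPad *sr_pad  = gst_element_get_static_pad (ctx->recordbin, "sink");
gst_pad_link (tee_pad, sr_pad);
gst_object_unref (sr_pad);
```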