Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: latest for the NGC container
• NVIDIA GPU Driver Version (valid for GPU only): 515
• Issue Type (questions, new requirements, bugs): help
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I’m having issues with artefacts in my image. My development setup (RTX 3080 + Ryzen 5900) works better than the production setup (a virtual Xeon with a T4), but I see artefacts on both.
I believe the artefacts are introduced during decoding, because the model’s detections appear to be influenced by them, but I don’t know where to start debugging this issue.
For decoding I connect through nvurisrcbin to a server that serves MKV files over RTSP, although the same thing happens with uridecodebin.
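For reference, a stripped-down sketch of how I pull the stream in looks roughly like this (the URL is a placeholder and the inference/OSD elements are omitted, so this is not my exact command):

# simplified sketch: RTSP source through nvurisrcbin, batched by nvstreammux, rendered on screen
gst-launch-1.0 nvurisrcbin uri=rtsp://<server>:8554/<stream> ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! nveglglessink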
Any help here is welcome, many thanks in advance!!
I attached a short screen recording so you can see what these artefacts look like. On the left is the original video, playing from the RTSP server that DeepStream connects to; on the right is what DeepStream outputs.
Also, here is the pipeline I’m using (single camera):
Sorry, I still don’t understand what help you need.
The right side seems to be not as smooth as the left side; is this the issue you are referring to?
I also don’t understand the purpose of the pipeline. The overlay on the right video seems to be counting vehicle/person/crowd counts, but why is the input a video of the earth?
Thanks for reaching out for clarification. Yes, the right-side video has noticeable visual defects, and that is the issue I am referring to. The video is just an example that shows the problem I am encountering.
Regarding the purpose of the pipeline, it is a system for counting vehicles, persons, and crowds. The input in this case is a video of the earth because, while debugging this problem, I found that this particular video reproduces it very well.
I hope this information helps clarify the situation. If you need any further information, please let me know.
Could you try using nvinfer instead of nvinferserver and check the result? If it’s convenient, you can also provide us with your app, model, and config files.
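For example, something roughly along these lines would let you check quickly with nvinfer (only a sketch: the config path assumes the default sample layout inside the DeepStream container, and you would point it at your own model config instead):

# sketch: same RTSP source, but primary inference through nvinfer instead of nvinferserver
gst-launch-1.0 nvurisrcbin uri=rtsp://<server>:8554/<stream> ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink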
After doing some more investigation, I have seen that the problem does not appear with live RTSP cameras; I only hit it when serving files over RTSP (based on very few tests so far).
I have also tried removing everything from the pipeline, leaving just decode and re-encode, and I’m still facing the issue. I’m starting to believe the issue is in the source of the images rather than in DeepStream, but consuming the video through VLC works fine.
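The stripped-down test looked roughly like this (a sketch with a placeholder URL, not my exact command):

# sketch: decode the RTSP stream and immediately re-encode it to a file, no inference at all
gst-launch-1.0 -e nvurisrcbin uri=rtsp://<server>:8554/<stream> ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4
# -e so that Ctrl-C sends EOS and the MP4 is finalized properly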
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
It may be that the native RTSP plugins of GStreamer have problems. You can verify this by using native GStreamer plugins to play the stream. You can also attach your video and describe how to reproduce the problem, and we can help debug it. Thanks.
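For example, a purely software GStreamer playback pipeline along these lines can help rule the NVIDIA decoder in or out (a sketch assuming the stream is H.264; use the matching depayloader/decoder if it is something else, and replace the placeholder URL):

# software-only playback, no NVIDIA decoder involved
gst-launch-1.0 rtspsrc location=rtsp://<server>:8554/<stream> latency=200 ! \
  rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

If the artefacts also show up here, the problem is on the serving/encoding side; if they do not, it points to the decoding path in DeepStream.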