Corrupted (Pink) Images in Redis When Using Multistream Pipeline in DeepStream

  • Hardware Platform (Jetson / GPU) = Jetson Orin Nx
  • DeepStream Version = 6.3.0
  • JetPack Version (valid for Jetson only) = 35.6.0
  • TensorRT Version = 8.5.2.2
  • CUDA Version (valid for GPU only) = 11.4
  • Issue Type( questions, new requirements, bugs) = Question

Hi, I am running the Multistream code with 11 sources. The pipeline for this code is as follows:

Question: We are working on a License Plate Detection and Recognition model.
We’re saving images to Redis, but when running with multiple streams, the saved images appear pink or corrupted.
However, when running the same code with a single stream, the images are perfect and correctly rendered. Examples of both outputs are shown below.
[corrupted example image]

Question:

How can we resolve this issue?
Where should we make changes in the pipeline or image saving logic to fix the corrupted images when using multistream?

The corrupted images are caused by network packet loss during RTSP transfer. Have you measured the GPU and CPU loading when you run the pipeline with multiple streams?
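If it helps, here is a minimal sketch for logging GPU utilization on a Jetson from Python. It assumes JetPack's tegrastats tool is available and that its output contains a GR3D_FREQ field for the GPU; adjust the parsing for your JetPack version.

```python
import re
import subprocess

# Minimal sketch: poll tegrastats once per second and log GPU utilization.
# Assumes tegrastats prints a "GR3D_FREQ <n>%" field; the exact output format
# can differ between JetPack versions.
proc = subprocess.Popen(["tegrastats", "--interval", "1000"],
                        stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        match = re.search(r"GR3D_FREQ (\d+)%", line)
        if match:
            print(f"GPU utilization: {match.group(1)}%")
finally:
    proc.terminate()
```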

During the initial run of the pipeline with multiple RTSP streams, the CPU usage spikes up to around 60%, but it quickly stabilizes at approximately 50% as the pipeline settles. On the other hand, the GPU utilization remains consistently high at 99.9% throughout the runtime.

The GPU loading is high; at some moments it may be overloaded. If the downstream elements are slower than the upstream elements, the RTSP receiving buffer may overflow. Please make sure your RTSP servers support the TCP protocol and configure the rtspsrc in your DeepStream pipeline to use TCP. The rtspsrc “latency” property also needs to be set to a large value (see the rtspsrc documentation).
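As a rough sketch of where this goes in the Python multistream sample (assuming the source bins are built with uridecodebin, as in deepstream-imagedata-multistream; the callback name and the property values below follow that pattern and may differ in your code):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def decodebin_child_added(child_proxy, obj, name, user_data):
    # uridecodebin creates its internal elements on the fly; hook nested decodebins too
    if "decodebin" in name:
        obj.connect("child-added", decodebin_child_added, user_data)
    # For rtsp:// URIs the internal source element is an rtspsrc
    if "source" in name:
        if obj.find_property("protocols") is not None:
            # Force TCP transport so dropped UDP packets cannot corrupt frames
            Gst.util_set_object_arg(obj, "protocols", "tcp")
        if obj.find_property("latency") is not None:
            # Larger jitter buffer (milliseconds) to absorb network bursts
            obj.set_property("latency", 2000)
```

The 2000 ms latency is only an example value; tune it for your network conditions.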

Hi,
I’ve checked the video stream on both the Raspberry Pi camera and the Jetson Orin side. The frames we’re receiving appear to be fine with no visible jitter. I verified this by saving a 1-minute video with ffplay to check frame reception.
Is there anything else I can check that might be related to the DeepStream pipeline?

Please refer to Corrupted (Pink) Images in Redis When Using Multistream Pipeline in DeepStream - #5 by Fiona.Chen

By reducing the camera’s frame rate (previously set to 30 fps), I’m now receiving the correct images without issues. I have two questions:

  1. How can we determine the maximum upstream frame rate that my pipeline can handle without producing corrupted or dropped frames?
  2. Is there a way to optimize the pipeline to support higher input frame rates without modifying the camera’s output FPS?


If you can guarantee that every element (or probe function) in the pipeline finishes processing each frame within 1/FPS seconds, there will be no packet loss caused by the pipeline itself.
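A rough way to check this from the application side is to time your own probe function against the frame budget. This is only a sketch; the 30 fps budget and the probe contents are assumptions to replace with your own values and per-frame logic:

```python
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

SOURCE_FPS = 30.0
FRAME_BUDGET = 1.0 / SOURCE_FPS   # seconds available per frame at 30 fps

def timed_sink_pad_probe(pad, info, u_data):
    start = time.monotonic()
    # ... existing per-frame work goes here (metadata access, image save, Redis write) ...
    elapsed = time.monotonic() - start
    if elapsed > FRAME_BUDGET:
        print(f"Probe took {elapsed * 1000:.1f} ms, budget is {FRAME_BUDGET * 1000:.1f} ms")
    return Gst.PadProbeReturn.OK
```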

The pipeline speed is determined by the slowest element in the pipeline; please identify it and try to make it finish within 1/FPS seconds per frame. The slowest element differs from case to case; we have also seen packet loss caused by the Ethernet card for some users. In your case, the 99.9% GPU loading suggests that your models may be overloading the GPU. You need to either optimize your models or switch to faster ones.
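Two common knobs for lowering GPU load in DeepStream are the nvinfer “interval” property (skip batches between inference runs and let the tracker bridge the gap) and a smaller nvstreammux output resolution. The sketch below shows both; the element names and values are placeholders for whatever your pipeline actually uses:

```python
def reduce_gpu_load(pipeline):
    """Sketch only: element names below are placeholders; adjust to your pipeline."""
    pgie = pipeline.get_by_name("primary-inference")      # nvinfer
    if pgie is not None:
        # interval=1: run inference on every 2nd batch; a tracker can bridge skipped frames
        pgie.set_property("interval", 1)
    streammux = pipeline.get_by_name("stream-muxer")      # nvstreammux
    if streammux is not None:
        # A smaller muxer output resolution lowers the per-frame GPU cost
        streammux.set_property("width", 1280)
        streammux.set_property("height", 720)
```

Whether batch skipping is acceptable depends on your use case, so verify the plate detection and recognition results after changing it.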