Over-current, freezing RTSP streams, and DLA issues

Hello everybody,

We are working on a video analytics project using a Jetson Orin NX 16GB with JetPack 6.0 and DeepStream 6.4. As inputs we have 4 RTSP streams. The GStreamer pipeline is basically:

nvurisrcbin (for each camera) → nvstreammux → queue → nvinfer → queue → nvtracker → queue → nvdsanalytics → queue → nvvideoconvert → capsfilter → queue → nvmultistreamtiler → queue → nvdsosd → queue → nv3dsink
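For reference, a minimal Python sketch of that topology using Gst.parse_launch; only one source is shown, and the URI, config paths, and tiler layout are placeholders, not our exact settings:

```python
# Minimal sketch of the pipeline topology described above. Only one camera is
# shown; in the real app there is one nvurisrcbin per camera, each linked to a
# request pad of nvstreammux. URI and config paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "nvurisrcbin uri=rtsp://user:pass@camera-ip/stream ! m.sink_0 "
    "nvstreammux name=m batch-size=4 width=704 height=576 ! queue "
    "! nvinfer config-file-path=config_infer_primary_yoloV8.txt ! queue "
    "! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/"
    "libnvds_nvmultiobjecttracker.so ! queue "
    "! nvdsanalytics config-file=config_nvdsanalytics.txt ! queue "
    "! nvvideoconvert ! capsfilter caps=video/x-raw(memory:NVMM),format=RGBA "
    "! queue ! nvmultistreamtiler rows=2 columns=2 ! queue "
    "! nvdsosd ! queue ! nv3dsink"
)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```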

The inference model is a YOLOv8m. Each RTSP stream has a resolution of 704x576 pixels at 10 FPS. To convert the YOLO model to TensorRT we use the custom library implementation from the marcoslucianops/DeepStream-Yolo GitHub repo.
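For context, the first step with that repo is exporting the trained weights to ONNX. Below is a minimal sketch using the plain Ultralytics Python API; note that the DeepStream-Yolo repo ships its own export script whose ONNX output layout matches its custom bounding-box parser, so this is only meant to illustrate the PyTorch → ONNX step (file name and image size are placeholders):

```python
# Sketch: export the custom YOLOv8m weights to ONNX with the Ultralytics API.
# The DeepStream-Yolo repo provides its own export script whose output layout
# matches its custom parser; this plain export only illustrates the step.
from ultralytics import YOLO

model = YOLO("yolov8m_custom.pt")  # placeholder: your trained weights
model.export(
    format="onnx",
    imgsz=640,       # must match the network resolution used by nvinfer
    opset=12,
    simplify=True,   # simplify the exported graph
)
```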

Well, the issues we are having:

  1. When the pipeline begins to run, a "System throttled due to over-current" message appears, with or without DLA.
  2. When DLA is active, the video streams seem less smooth than without DLA, even though the GPU load is constant.
  3. When trtexec is used to generate the .engine file with DLA support, many warning messages are printed saying that layers cannot run on DLA and will fall back to the GPU. However, when the pipeline is running we can see the DLA core running at 614 MHz.
  4. Sometimes pixel distortions appear in the image of one or several cameras, as if there were codec problems. We can see that the timestamp embedded in the stream goes backward in time and then returns to the current time; when this happens, the pixel artifacts appear.
  5. The streams suddenly freeze at any time (1, 2, 3 or 4 streams simultaneously). Below is an example of the error message given by the nvurisrcbin element for one camera:

Resetting source rtsp://admin:…
WARNING from element src: Could not write to resource.
Debug info: …/gst/rtsp/gstrtspsrc.c(6607): gst_rtspsrc_try_send (): /GstPipeline:g2f-pipeline/GstBin:source-bin-01/GstDsNvUriSrcBin:uri-decode-bin/GstRTSPSrc:src:
Could not send message. (Received end-of-file)
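For what it's worth, the "Resetting source" line comes from nvurisrcbin's built-in RTSP watchdog, whose behavior can be tuned from the application. A minimal sketch, assuming the reconnect and transport properties documented for nvurisrcbin in DeepStream 6.4 (verify the exact names with gst-inspect-1.0 nvurisrcbin):

```python
# Sketch: tune nvurisrcbin's RTSP watchdog. Property names are assumed from
# the DeepStream 6.4 documentation -- verify with `gst-inspect-1.0 nvurisrcbin`.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

src = Gst.ElementFactory.make("nvurisrcbin", "camera-01")
src.set_property("uri", "rtsp://user:pass@camera-ip/stream")  # placeholder
# Force TCP transport: UDP packet loss is a common cause of the pixel
# distortions and backward-jumping timestamps described above.
src.set_property("select-rtp-protocol", 4)  # 4 = RTP over TCP only
# Reset the source whenever no data arrives for 10 seconds.
src.set_property("rtsp-reconnect-interval", 10)
```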

The questions we have are:

  1. What is the best way to implement YOLOv8 on the Jetson Orin NX? Does NVIDIA have its own implementation of the custom library for YOLOv8?
  2. What is the procedure to convert the YOLOv8 model from PyTorch format to TensorRT with DLA support? (See the sketch after this list.)
  3. What must we do to avoid the over-current message so we can effectively use all the hardware resources on the Jetson Orin NX?
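Regarding question 2, here is a minimal sketch of the ONNX → TensorRT step with DLA enabled, using the TensorRT 8.x Python API that ships with JetPack 6.0 (roughly what trtexec --fp16 --useDLACore=0 --allowGPUFallback does; file paths are placeholders):

```python
# Sketch: build a TensorRT engine from the exported ONNX with DLA enabled.
# DLA requires FP16 or INT8; unsupported layers fall back to the GPU, which
# is what the trtexec warnings are reporting.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("yolov8m_custom.onnx", "rb") as f:   # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)            # DLA needs FP16/INT8
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0                              # or 1 for the second core
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # unsupported layers on GPU

engine_bytes = builder.build_serialized_network(network, config)
with open("yolov8m_dla0.engine", "wb") as f:
    f.write(engine_bytes)
```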

Thanks in advance.

Hi @francisco23

Can you please:

  1. Confirm which Orin NX you are using. Is it a development kit? The issue you describe may be related to the power design of the system you are using.
  2. What model are you using for DLA that generates a lot of errors?
  3. Can you confirm you have run update_rtpmanager.sh as described here?
  4. We do not have a YOLOv8 implementation. You can use the YOLOv5 project as inspiration, as well as the code from our community (which you are already using).

Hi Carlos, thanks for answering:

  1. The Orin system we are using is a reComputer J4012. It is not a development kit from NVIDIA.
  2. The model is a custom YOLOv8m, trained directly with Ultralytics and exported to ONNX and then TensorRT. As I mentioned, to export to ONNX we use the marcoslucianops/DeepStream-Yolo repo.
  3. No, we didn’t know that we had to run update_rtpmanager.sh. Thanks.

New questions:

  1. Can you confirm whether the reComputer from Seeed Studio is well designed in terms of power delivery?
  2. Can we use both DLA cores at the same time within a single DeepStream pipeline, or do we need to split it into two pipelines, each one using a different DLA core? (See the sketch after this list.)
  3. Can the Orin NX handle 4 RTSP streams in real time at 704x576 and 20 FPS each, with inference, tracking, and analytics?
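To illustrate question 2, a single pipeline can hold two nvinfer instances pinned to different DLA cores, splitting the streams between two branches. A hypothetical sketch (the two config file names are assumptions, and would differ only in use-dla-core=0 vs use-dla-core=1 in their [property] section):

```python
# Sketch: one pipeline, two nvinfer branches, each pinned to a DLA core.
# Cameras 0-1 would feed the DLA0 branch and cameras 2-3 the DLA1 branch.
# The two config files are hypothetical and would differ only in:
#   [property]
#   enable-dla=1
#   use-dla-core=0   # (1 in the second file)
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("dual-dla-pipeline")

for core in (0, 1):
    mux = Gst.ElementFactory.make("nvstreammux", f"mux-dla{core}")
    mux.set_property("batch-size", 2)   # two cameras per branch
    mux.set_property("width", 704)
    mux.set_property("height", 576)

    pgie = Gst.ElementFactory.make("nvinfer", f"pgie-dla{core}")
    pgie.set_property("config-file-path", f"config_infer_yolov8_dla{core}.txt")

    pipeline.add(mux)
    pipeline.add(pgie)
    mux.link(pgie)
    # ...sources for this branch are pad-linked into `mux`, and each branch
    # continues with its own tracker/analytics/sink elements (omitted).
```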

Thanks

On the other hand, what are the steps to follow, and what characteristics must the model have, to run fully on DLA? We have not been able to make it run.

Thanks.

Hi,

You can find the model that can fully run on DLA in the GitHub repository below:

Thanks.
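As a side note, a quick way to see which layers of the exported ONNX keep the model from running fully on DLA is to query per-layer DLA support through the TensorRT Python API. A minimal sketch (placeholder ONNX path, same assumptions as the build sketch above):

```python
# Sketch: list which layers of the ONNX model cannot run on DLA.
# The layers printed here are the ones forcing GPU fallback.
import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("yolov8m_custom.onnx", "rb") as f:   # placeholder path
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)          # DLA only supports FP16/INT8
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0

for i in range(network.num_layers):
    layer = network.get_layer(i)
    if not config.can_run_on_DLA(layer):
        print(f"GPU fallback: {layer.name} ({layer.type})")
```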
