Deepstream RTSP streaming distortion

I have installed DeepStream on my laptop using WSL2 and am trying to test my custom YOLOv8 model in a CARLA environment. In this setup, a CARLA RGB camera is attached to a vehicle, and the feed from this camera is sent to DeepStream via RTSP using GStreamer. Previously, I used an ONNX model that was converted to an engine file, following the DeepStream-YOLO procedure, and it worked fine. However, after optimizing the YOLO model and training it with additional datasets, I encountered distortion when using the new model.
I tried reducing the resolution, FPS, and batched-push-timeout, and also tried x265 encoding, but the issue still persists.
I will attach my deepstream_app_config.txt. There is also a Python script for the whole CARLA-side setup, but I am posting only the GStreamer pipeline for the CARLA side below:
deepstream_app_config.txt (830 Bytes)

appsrc name=mysource is-live=true block=true format=GST_FORMAT_TIME
caps=video/x-raw,format=BGR,width={self.width},height={self.height},framerate={self.fps}/1 !
videoconvert !
video/x-raw,format=I420 !
x264enc speed-preset=ultrafast tune=zerolatency !
rtph264pay config-interval=1 pt=96 name=pay0

Here width=1280, height=720, and fps=30.
The CARLA RGB camera also captures at the same width and height.
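
For context, this is roughly how frames end up in that appsrc when the launch string above is used as an RTSP media factory. This is only a minimal sketch of the common GstRtspServer appsrc pattern, not the exact script used here; the class name, the /carla mount point, and the BGRA-to-BGR conversion are assumptions.

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib
import numpy as np

WIDTH, HEIGHT, FPS = 1280, 720, 30

PIPELINE = (
    f"appsrc name=mysource is-live=true block=true format=GST_FORMAT_TIME "
    f"caps=video/x-raw,format=BGR,width={WIDTH},height={HEIGHT},framerate={FPS}/1 ! "
    "videoconvert ! video/x-raw,format=I420 ! "
    "x264enc speed-preset=ultrafast tune=zerolatency ! "
    "rtph264pay config-interval=1 pt=96 name=pay0"
)

class CarlaRtspFactory(GstRtspServer.RTSPMediaFactory):
    """Serves the appsrc pipeline above; frames are pushed from the CARLA camera callback."""

    def __init__(self):
        super().__init__()
        self.appsrc = None
        self.frame_count = 0

    def do_create_element(self, url):
        # Parse the same launch string shown in the post.
        return Gst.parse_launch(PIPELINE)

    def do_configure(self, rtsp_media):
        # Grab the appsrc once a client connects and the media pipeline is built.
        self.appsrc = rtsp_media.get_element().get_child_by_name("mysource")

    def push_carla_image(self, image):
        # CARLA camera callback: drop the alpha channel (BGRA -> BGR) and push one buffer.
        if self.appsrc is None:
            return
        frame = np.frombuffer(image.raw_data, dtype=np.uint8).reshape((HEIGHT, WIDTH, 4))
        data = frame[:, :, :3].tobytes()
        buf = Gst.Buffer.new_allocate(None, len(data), None)
        buf.fill(0, data)
        # Timestamp every buffer so x264enc/rtph264pay pace the stream at 30 fps.
        buf.pts = buf.dts = self.frame_count * Gst.SECOND // FPS
        buf.duration = Gst.SECOND // FPS
        self.frame_count += 1
        self.appsrc.emit("push-buffer", buf)

Gst.init(None)
server = GstRtspServer.RTSPServer()
factory = CarlaRtspFactory()
server.get_mount_points().add_factory("/carla", factory)
server.attach(None)
# camera.listen(factory.push_carla_image)   # attach to the CARLA RGB sensor
# GLib.MainLoop().run()                     # serve rtsp://<host>:8554/carla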

Have you tried the new model with a local video file?

Yes, I have tried the new model with local video files in DeepStream. It works fine; there is no distortion.

What is the CPU loading and GPU loading when you run the case?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

When I checked the CPU usage with the new model I am getting 196%, which is an overload, but when I checked with the old model I am getting 65%.

The CPU loading will impact the Ethernet protocol stack performance; the streaming distortion means a lot of packet loss.

Please debug the CPU loading issue.
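
If it helps, here is a small sketch for logging per-process CPU usage of deepstream-app over time. It assumes psutil is installed and matches processes by name; adjust as needed.

import time
import psutil

# Print CPU usage of every deepstream-app process once per second.
# Note: the first cpu_percent() reading is always 0.0, and values are
# per-core percentages, so they can exceed 100 on multi-core machines.
while True:
    for proc in psutil.process_iter(["pid", "name"]):
        if "deepstream-app" in (proc.info["name"] or ""):
            print(proc.info["pid"], proc.info["name"], proc.cpu_percent(interval=None))
    time.sleep(1.0)

Also worth noting: htop lists userland threads as separate rows by default, so a single deepstream-app process can appear as several high-CPU PIDs.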

When I checked with htop, there are multiple PIDs for the deepstream-app -c deepstream_app_config.txt command. Two PIDs use most of the CPU, almost 93% each; the other PIDs use around 4% and 1%.

Can you post the htop log? Can you provide your config_infer_primary_yoloV8.txt?


top-log.txt (2.7 KB)
config_infer_primary_yoloV8.txt (686 Bytes)

I have provided top-log.txt; is that okay?

It seems that in WSL, DeepStream is not using the GPU, only the CPU. When I checked on a Jetson Nano device there is no distortion, and it uses only around 15% of the CPU. But there is frame drop, which was also there before, when I used the old model.
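
One way to confirm whether the engine is really running on the GPU inside WSL2 is to poll nvidia-smi while deepstream-app is running; a minimal sketch (it assumes nvidia-smi is available in the WSL2 environment):

import subprocess
import time

# Poll GPU utilization and memory every second while deepstream-app runs.
# If utilization stays near 0% and memory barely changes, inference is
# most likely not running on the GPU.
for _ in range(30):
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
    time.sleep(1.0)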

Solved the problem. Previously I had exported the .pt model to ONNX with batch size 32. When I converted it to ONNX with batch size 1, the problem was solved. There is no more distortion, and it also works properly on the Jetson Nano.
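
For anyone else hitting this, a sketch of the re-export with a static batch size of 1. It assumes the standard Ultralytics export API and a weights file named best.pt (both assumptions; DeepStream-Yolo also ships its own export_yoloV8.py utility that can be used instead):

from ultralytics import YOLO

# Re-export the retrained weights to ONNX with a static batch size of 1
# instead of 32, matching batch-size=1 on the DeepStream side.
model = YOLO("best.pt")      # retrained YOLOv8 weights (file name is an assumption)
model.export(
    format="onnx",
    imgsz=640,               # network input resolution (assumption)
    batch=1,                 # static batch of 1
    opset=12,
    simplify=True,
)

After re-exporting, remove the previously generated .engine file (or update model-engine-file in config_infer_primary_yoloV8.txt) so DeepStream rebuilds the TensorRT engine from the new ONNX, and keep batch-size=1 in the nvinfer config.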
