DeepStream bug on JetPack > 5.0

Hi there,

I have a problem on the Jetson Xavier NX 16GB module with JetPack > 5.0 and DeepStream. Sometimes (not related to temperature or runtime) an error shows up, as shown in the attached video. It's a strange error, because sometimes it works without any error for five minutes or so and then the error suddenly appears. I use Marcos Luciano's DeepStream-Yolo repo to run YOLOv5s on the Seeed reComputer J2012 (basically the 16GB NX module plus their carrier board). I tried the same with JP 4.6.x and it works without any problems. The video shows an image from the COCO dataset converted to a video file to work with the deepstream-app, but the error also appears when using the image + GStreamer + DeepStream plugins. Detection is performed with the pretrained YOLOv5s.

Can you replicate the problem?

Best
Paul

Can you remind me what error you are referring to? All persons in the image are recognized, and it seems to be working well.

Yes, but can you see the flickering in some frames? It loses the correct bounding box and shows two BBoxes in the middle of the object. I don't have this behavior on JP 4.6.x; there the BBox is always correct.

Hi @TheJetsonPaul, we have already integrated YOLOv5 in our latest DeepStream version, 6.1.1. You can refer to the links below to check whether the problem still occurs.
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps

https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization

Hi @yuweiw, the pretrained YOLOv5s works with your repo without any errors. But I'm not able to get a custom model to work with your repo (TAO). I converted the custom model in the same environment and in the same way as the pretrained one; I changed the number of classes and made a new label file. It successfully converts the model to an engine, but on inference it does not detect anything.

Do you know what has changed in DeepStream and/or JetPack, and why Marcos Luciano's repo works fine with JP 4.6.x but not with JP 5.x? I also tested DeepStream 6.1 on our workstation and it works, so it must be related to the Jetson devices.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Could you please attach your config file and a demo of when you run it in DeepStream?

Sorry for the late reply; we have since experienced the same issue on 18.04 as well. We suspect the issue is power-related, maybe the board (Seeed A206) or throttling? How do I set the max current to 5000 mA on JP 5.0? During short tests with the TAO implementation the issue is not present.

For the problem with the custom model, I trained the pretrained model again on the COCO dataset, but only for persons, so that it's close to our model. Here is the result:


No detections are displayed (no person detected). With the pretrained model it works, so there seems to be a problem with custom YOLOv5 models in the TAO implementation. Here is the config file (2.0 KB). I changed the number of classes to 1 and the input size to 640 according to the exported ONNX model.
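For reference, those two changes correspond to lines like these in the nvinfer config (standard nvinfer property names; values match my model):

num-detected-classes=1
infer-dims=3;640;640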

Regarding how to improve performance, you can refer to the link below:
Boost the clocks
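For example, the usual commands are as follows (power-mode numbers differ per module, so check /etc/nvpmodel.conf first):

sudo nvpmodel -q          # show the current power mode
sudo nvpmodel -m <mode>   # select the highest-power mode defined in /etc/nvpmodel.conf
sudo jetson_clocks        # lock clocks to maximum for the selected mode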
Comparing the results of our YOLOv5 model with your model, it may be a problem with your model. Your environment may have some differences from the GitHub repo maintainer's. You can open a topic at that GitHub repo.

So basically this is the solution. It seems to be an error in the conversion process to a TensorRT engine. The process is different in the TAO implementation, but the engine files can be used in Marcos' DeepStream-Yolo.

Here is the process to get it working:

git clone https://github.com/ultralytics/yolov5.git
git clone https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization.git
cp -r yolov5_gpu_optimization/* yolov5/
cd yolov5
git checkout a80dd66efe0bc7fe3772f259260d5b7278aab42f

Change the number of classes in 0001-Enable-onnx-export-with-batchNMS-plugin.patch according to your model.
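To locate the class-count value inside the patch before editing it (the exact variable name may differ between revisions), a quick search helps:

grep -n -i class 0001-Enable-onnx-export-with-batchNMS-plugin.patch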

git am 0001-Enable-onnx-export-with-batchNMS-plugin.patch
pip install -r requirement_export.txt
apt update && apt install -y libgl1-mesa-glx
cp <your model>.pt ~/yolov5/<your model>.pt 
python export.py --weights <your model>.pt --include onnx --simplify --dynamic
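Optionally, you can sanity-check the exported ONNX before moving on (this assumes the onnx Python package from requirement_export.txt is installed):

python -c "import onnx; onnx.checker.check_model(onnx.load('<your model>.onnx'))"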

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps
cd deepstream_tao_apps
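# JetPack 5.x ships CUDA 11.4 and JetPack 4.6.x ships CUDA 10.2; if unsure,
# check what is installed:
ls /usr/local | grep cuda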
export CUDA_VER=xy.z    # xy.z is the CUDA version, e.g. 10.2
make 

mkdir -p models/yolov5
cp ~/yolov5/<your model>.onnx ~/deepstream_tao_apps/models/yolov5/<your model>.onnx

Change classes numbers, label file and path according to your model in configs/yolov5_tao/pgie_yolov5_config.txt
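As a rough sketch, the relevant lines in pgie_yolov5_config.txt end up looking like this (the paths, label file name and engine file name below are placeholders for your setup; DeepStream generates the engine file automatically on first run):

onnx-file=../../models/yolov5/<your model>.onnx
model-engine-file=../../models/yolov5/<your model>.onnx_b1_gpu0_fp16.engine
labelfile-path=../../models/yolov5/labels.txt
num-detected-classes=1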

cd apps/tao_detection
./ds-tao-detection -c ../../configs/yolov5_tao/pgie_yolov5_config.txt -i <file:///home/...> -d -l

Use TAO or DeepStream-Yolo. For DeepStream-Yolo:

cp ~/deepstream_tao_apps/models/yolov5/<model>.engine ~/DeepStream-Yolo

Change the model path in your config file.
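That is the model-engine-file line in the nvinfer config used by DeepStream-Yolo (the config file name may differ in your checkout), roughly:

model-engine-file=<model>.engine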

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.