Error Running YoloV7 in DeepStream

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1
• TensorRT Version 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) 515.65.01
• Issue Type (questions, new requirements, bugs) Bug

I am trying to run the YoloV7 model in a simple DeepStream Python pipeline, but the results I obtain are not correct. My environment is a Docker container based on the TensorRT image from NGC, extended to include the DeepStream library and its Python bindings. I can successfully convert the ONNX file to a TensorRT engine file using the export script from the YoloV7 repo on GitHub.
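
For reference, the same conversion can also be done with trtexec, which ships with the TensorRT container. A minimal sketch, with placeholder file names:

# Build a TensorRT engine from the exported ONNX file (add --fp16 for a half-precision engine)
trtexec --onnx=yolov7.onnx --saveEngine=yolov7.engine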

Then, using this engine file, I can run inference on a sample image and get the correct result. At least, it aligns with the results of running the inference with the ONNX file.
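
For anyone reproducing this, the ONNX-side reference check was along these lines. A minimal sketch using onnxruntime; the input name "images" and the 640x640 input size follow the usual YoloV7 export and are assumptions here:

import cv2
import numpy as np
import onnxruntime as ort

# Preprocess: resize, BGR -> RGB, scale to [0, 1], HWC -> CHW, add batch dim.
img = cv2.imread("sample.jpg")
img = cv2.resize(img, (640, 640))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
x = img.astype(np.float32) / 255.0  # note the 1/255 scaling
x = np.transpose(x, (2, 0, 1))[None, ...]

# Run the ONNX model and inspect the raw detection outputs.
sess = ort.InferenceSession("yolov7.onnx")
outputs = sess.run(None, {"images": x})
print([o.shape for o in outputs])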

The issue is when I try to use this engine file in the infer plugin of DeepStream. The results are random and do not make sense: many boxes flash at the top of the image, and the scores and number of detections are also wrong.

I am surprised that I can run it with a TensorRT inference session but not with DeepStream inference; it was my understanding that both use the same low-level libraries.

Any help with this would be greatly appreciated.

I should also add that I tried passing the ONNX file to the DeepStream infer plugin so that it would automatically generate its own engine file, but the results were the same.

I am checking.

I have since resolved my issue. My solution was to include the following line in the pgie config file:

net-scale-factor=0.0039215697906911373

This factor is 1/255, which scales the input pixel data into the range 0 to 1 rather than 0 to 255.
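
For context, the infer plugin preprocesses each pixel as y = net-scale-factor * (x - mean), so with no mean offset this factor maps 0–255 pixel values into the 0–1 range the model was trained on. A minimal sketch of the relevant [property] section; the file paths and parser entries are placeholders for whatever your setup uses:

[property]
gpu-id=0
# 1/255: scale 0-255 pixel values into the 0-1 range the model expects
net-scale-factor=0.0039215697906911373
onnx-file=yolov7.onnx
model-engine-file=yolov7.engine
batch-size=1
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=0
num-detected-classes=80
# custom bounding-box parser for the YoloV7 outputs (placeholder names)
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so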

Glad to know you fixed it, thanks for the update!

Can you share your config.txt and your postprocessing code, such as the yolov7parser?
