Possible Solutions to INT64 clamping accuracy drop

• Hardware Platform (GPU): NVIDIA GeForce RTX 4090
• DeepStream Version: 6.3 (Docker container)
• NVIDIA GPU Driver Version: 525.147.05
• Issue Type: Question

I converted my custom YOLOv5 model to ONNX using https://github.com/marcoslucianops/DeepStream-Yolo. Using it in my DeepStream app has reduced the model's accuracy. I suspect it is due to:

WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

Are there any solutions to work around this to improve the model accuracy?

This is a gap between ONNX and TensorRT. In practice, weight values rarely exceed the 32-bit integer range, so the cast down to INT32 is lossless. This warning can be ignored; it does not cause an accuracy issue.
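If you want to confirm this on your own model, here is a minimal sketch (assuming the onnx Python package; the model filename is taken from the config later in this thread, substitute your own) that scans the initializers for INT64 tensors and checks whether every value fits in the INT32 range:

# Sketch: check whether INT64 weights in the ONNX model fit into INT32,
# i.e. whether TensorRT's cast-down warning can actually lose information.
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("person2560lv1.onnx")  # assumed filename; use your model
i32 = np.iinfo(np.int32)

for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        values = numpy_helper.to_array(init)
        if values.size and (values.min() < i32.min or values.max() > i32.max):
            print(f"{init.name}: values outside INT32 range, cast is lossy")
        else:
            print(f"{init.name}: all values fit in INT32, cast is lossless")

If nothing is reported as lossy, the warning really is benign for this model.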

Then what is the reason behind the accuracy drop from PyTorch's YOLOv5 to the ONNX model in DeepStream?

How did you reach this conclusion? Have you read yolo_deepstream/yolov7_qat at main · NVIDIA-AI-IOT/yolo_deepstream (github.com)?
Can you show how you compared the accuracy of the PyTorch model and the DeepStream app?

I followed https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/yolov7_qat and somehow lost all the detections I was previously getting. I used the qat.onnx file; what could possibly be going wrong here? I was also getting this issue while running the quantize command; is that going to affect anything? @Fiona.Chen
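Before digging into DeepStream, it may be worth confirming that qat.onnx still detects anything at all outside DeepStream. A minimal sanity check with onnxruntime (a sketch; the image path is a placeholder and the preprocessing here is generic YOLO-style, not necessarily identical to your training pipeline):

# Sketch: run qat.onnx directly through onnxruntime on one image and
# print the raw output shapes/values, to confirm the QAT export itself
# still produces plausible outputs before it ever reaches TensorRT.
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("qat.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
h, w = inp.shape[2], inp.shape[3]
if not isinstance(h, int):      # dynamic dims; fall back to an assumed size
    h = w = 640

img = cv2.imread("sample.jpg")  # placeholder: any image with known objects
img = cv2.resize(img, (w, h))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
blob = img.transpose(2, 0, 1)[None]   # HWC -> NCHW

outputs = sess.run(None, {inp.name: blob})
for out, meta in zip(outputs, sess.get_outputs()):
    print(meta.name, out.shape, "max value:", float(out.max()))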

I didn't use any metric as such; I just ran inference with the different models on the same video and compared the results visually. It's quite possible the issue is something else.
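Eyeballing two videos makes it hard to tell whether accuracy genuinely dropped or by how much. A more defensible comparison is to dump the detections from both pipelines on the same frames and match them by IoU. A minimal sketch, assuming both sides write one JSON object mapping frame id to a list of [x1, y1, x2, y2] boxes (the filenames and layout are placeholders):

# Sketch: greedy IoU matching between two per-frame detection dumps,
# reporting how many PyTorch detections the DeepStream run recovered.
import json

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

ref = json.load(open("detections_pytorch.json"))     # {frame_id: [boxes]}
test = json.load(open("detections_deepstream.json"))

matched = total = 0
for frame, ref_boxes in ref.items():
    candidates = list(test.get(frame, []))
    for rb in ref_boxes:
        total += 1
        best = max(candidates, key=lambda tb: iou(rb, tb), default=None)
        if best is not None and iou(rb, best) >= 0.5:
            matched += 1
            candidates.remove(best)

print(f"DeepStream recovered {matched}/{total} PyTorch detections")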

This is the inference output from detect.py in ultralytics' YOLOv5 repo (.pt weights):
External Image

This is the video I saved from the DeepStream app using the ONNX model converted to an FP16 .engine file:
External Image

You can see that the person is hardly detected in the second half of the video. What could be the reason behind it?

This is my config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#onnx-file=megadata_v7_1280l.onnx
onnx-file=person2560lv1.onnx
model-engine-file=model_b1_gpu0_fp16.engine
int8-calib-file=calib.table
labelfile-path=labels_person.txt
batch-size=1
#0-fp32  1-int8  2-fp16
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
workspace-size=6000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
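Two things may be worth double-checking in this config. First, model-engine-file points at a previously built engine; if model_b1_gpu0_fp16.engine already exists from an earlier ONNX, nvinfer can load the stale engine instead of rebuilding from person2560lv1.onnx, so delete the .engine file after changing models. Second, net-scale-factor is the float32 value of 1/255, which matches YOLOv5's 0-1 normalization, but the letterbox padding can still differ: ultralytics pads with gray (114, 114, 114) while nvinfer pads with black. A rough sketch of the preprocessing these settings imply (my reading of the parameters, not DeepStream's actual code; the 2560 input size is an assumption based on the model name):

# Sketch in NumPy/OpenCV of what net-scale-factor=1/255,
# model-color-format=0 (RGB), maintain-aspect-ratio=1 and
# symmetric-padding=1 appear to do, for comparison against
# ultralytics' letterbox().
import cv2
import numpy as np

NET_W, NET_H = 2560, 2560           # assumed network input size
SCALE = 0.0039215697906911373       # net-scale-factor, i.e. float32(1/255)

def nvinfer_like_preprocess(frame_bgr):
    h, w = frame_bgr.shape[:2]
    r = min(NET_W / w, NET_H / h)                 # maintain-aspect-ratio=1
    nw, nh = int(round(w * r)), int(round(h * r))
    resized = cv2.resize(frame_bgr, (nw, nh))
    canvas = np.zeros((NET_H, NET_W, 3), np.uint8)        # black padding
    top, left = (NET_H - nh) // 2, (NET_W - nw) // 2      # symmetric-padding=1
    canvas[top:top + nh, left:left + nw] = resized
    rgb = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB)         # model-color-format=0
    return (rgb.astype(np.float32) * SCALE).transpose(2, 0, 1)[None]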

I later changed these to

nms-iou-threshold=0.60
pre-cluster-threshold=0.001

External Image
and there were still no bounding boxes in the second half of the video. What could be the reason, and how can I solve it?
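To turn "no bounding boxes in the second half" into hard numbers, one option is to log every detection DeepStream produces with a pad probe and dump them to JSON for the comparison sketch above. A sketch based on the pattern used in deepstream_python_apps (the probe must be attached to a pad downstream of nvinfer, e.g. the OSD sink pad; pipeline wiring is omitted):

# Sketch: pad probe that records [x1, y1, x2, y2, confidence] per frame.
# Dump `detections` with json.dump(...) once the pipeline reaches EOS.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

detections = {}

def osd_sink_pad_probe(pad, info, user_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        boxes = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj.rect_params
            boxes.append([r.left, r.top, r.left + r.width,
                          r.top + r.height, obj.confidence])
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        detections[str(frame_meta.frame_num)] = boxes
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK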

Also, is there a way to embed videos here instead of using links like I did?

@Fiona.Chen
Thanks!

Are the .pt weights model and the ONNX model the same model? How did you convert the .pt weights model to ONNX?

Yes, they are the same. I converted it using https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv5.md

@chanduhna

We don't know whether the ONNX generated by the script DeepStream-Yolo/utils/export_yoloV5.py at master · marcoslucianops/DeepStream-Yolo (github.com) is exactly the same as the .pt weights model.

And we don't know whether the preprocessing and postprocessing in ultralytics are exactly the same as the preprocessing and postprocessing in DeepStream.

Please check for yourself.
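One concrete way to check is a numerical parity test between the .pt model and the exported ONNX: feed both the same tensor and compare outputs. A minimal sketch, assuming the checkpoint loads through torch.hub's yolov5 entry point and assuming a 2560 input size (both are guesses; adjust to however your custom model actually loads and was exported):

# Sketch: numerical parity check between the .pt weights and the ONNX
# export. The export script reshapes/concatenates outputs, so compare
# shapes first, then values only on outputs whose shapes line up.
import numpy as np
import onnxruntime as ort
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="person2560lv1.pt")   # assumed checkpoint name
model.eval()

x = torch.rand(1, 3, 2560, 2560)                  # assumed input size
with torch.no_grad():
    pt_out = model(x)

sess = ort.InferenceSession("person2560lv1.onnx",
                            providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})

def shapes(o):
    return [shapes(e) for e in o] if isinstance(o, (list, tuple)) else tuple(o.shape)

print("pt:  ", shapes(pt_out))
print("onnx:", shapes(onnx_out))
# If a pair of outputs has matching shape, compare values, e.g.:
# np.testing.assert_allclose(pt_out[0].numpy(), onnx_out[0], rtol=1e-3, atol=1e-4)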

I see, is there a better alternative?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

They are open source. You can check and implement it yourself.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.