Hello.
We have found that our YOLO-based model produces correct results on the x64 platform with CUDA 10 + cuDNN 7.6 + TensorRT 7.0. However, we cannot get the same results on the AGX with CUDA 10.2 + cuDNN 8.0 + TensorRT 7.1 in INT8 mode; the output is completely wrong. We also found a similar situation when we used CUDA 10.2 + cuDNN 8.0 + TensorRT 7.1 on the x64 platform. Could you provide us with an ARM build of TensorRT 7.0 so that we can run a test? JetPack does not have such a version.
Hi,
Sorry, we don't have a TensorRT v7.0 package for the Jetson platform.
One possible reason is that TensorRT engine files are not portable: an engine is optimized for the specific GPU and TensorRT version it was built with.
Have you created the TensorRT engine file on Xavier directly?
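For reference, building and serializing the engine on the device itself looks roughly like the sketch below (TensorRT 7.x C++ API; the network-definition step is omitted and the file name is illustrative, this is not the project's exact code):

#include <NvInfer.h>
#include <cstdio>
#include <fstream>

// Minimal logger required by the TensorRT builder API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

int main() {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetworkV2(0U);
    // ... populate `network` with the model layers here (omitted) ...

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MB; adjust for the model

    // buildEngineWithConfig() optimizes for the GPU it runs on, which is
    // why an engine built on x64 cannot simply be copied to Xavier.
    auto engine = builder->buildEngineWithConfig(*network, *config);
    auto plan = engine->serialize();

    std::ofstream out("yolov5.engine", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy();
    engine->destroy();
    config->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}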
Thanks.
Yes, we generated the engine file on the AGX, but the result was wrong. Can we build TensorRT v7.0 in a Docker container?
Several groups have also reported this issue. The model works well on TensorRT 7.0 but not on TensorRT 7.1, whether on x64 or Xavier. However, there is no v7.0 for Jetson.
model link: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5
Hi,
Unfortunately, Jetson cannot run desktop-based (x86) containers.
So you would still need an ARM package to build the Docker image.
We are going to reproduce this issue in our environment.
Could you share the test image and result for both TensorRT 7.0 and TensorRT 7.1?
Thanks.
Thanks. I will message you and send you some materials. What kind of sharing link do you support?
Hi,
Any online drive will be fine.
Thanks.
Yes, we have shared a link with you via private message.
Hi,
The link shared in the private message is not working.
Could you help to check it?
Thanks.
We are sorry to hear that. We would like to share the link with you via Google Drive this time, and we hope you can complete the test.
Hi,
Thanks, we can download the file successfully.
We will update you with more information later.
Thanks.
Hi,
We tried to reproduce this in our environment but found that the results are roughly the same between TensorRT 7.0 and TensorRT 7.1.
Could you recheck it?
[TensorRT-7.0]

[TensorRT-7.1]

Thanks.
We want to confirm: was that run in INT8 mode? And have you changed any layers in the model? Several of our groups used the author's project, but the number of output boxes is zero. Could you please give more details about your running platform?
Hi,
Sorry for the missing information.
We use the default commands shared on GitHub:
$ ./yolov5 -s                  # serialize: build the engine from the weights file
$ ./yolov5 -d [image folder]   # deserialize the engine and run detection
It may not run inference in INT8 mode; let us check that first.
We run TensorRT 7.0 on a desktop GPU and TensorRT 7.1 on the Xavier.
Thanks.
Yes. You must use images for calibration and then run in INT8 mode. It works well in FP32.
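In case it helps to reproduce: INT8 mode needs a calibrator that feeds real preprocessed images to the builder. A minimal sketch of such a calibrator (the image-loading helper is a stub, and the names are ours, not the project's):

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Minimal entropy calibrator: feeds preprocessed calibration images to the
// builder so it can choose INT8 scales, and caches the result on disk.
class Int8Calibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    Int8Calibrator(int batchSize, int inputVolume, std::string cacheFile)
        : mBatchSize(batchSize), mInputVolume(inputVolume),
          mCacheFile(std::move(cacheFile)) {
        cudaMalloc(&mDeviceInput, batchSize * inputVolume * sizeof(float));
    }
    ~Int8Calibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override {
        // Load the next batch of calibration images, preprocessed exactly
        // as at inference time (resize, normalize, layout). Stubbed here.
        std::vector<float> batch(mBatchSize * mInputVolume);
        if (!loadNextBatch(batch)) return false;  // false: calibration done
        cudaMemcpy(mDeviceInput, batch.data(), batch.size() * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;
        return true;
    }

    const void* readCalibrationCache(size_t& length) override {
        // Reuse a previous calibration cache so calibration runs only once.
        std::ifstream in(mCacheFile, std::ios::binary);
        if (!in) { length = 0; return nullptr; }
        mCache.assign(std::istreambuf_iterator<char>(in),
                      std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) override {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    // Hypothetical helper: fill `batch` with the next images and return
    // false once all calibration images have been consumed.
    bool loadNextBatch(std::vector<float>&) { return false; }  // stub

    int mBatchSize, mInputVolume;
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};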
Hi,
Do you know the exact command to calibrate and run the sample in INT8 mode?
This information will help us reproduce the issue directly.
Thanks.
The project does not directly support running in INT8 mode, and we cannot provide our final project. We generated the INT8 model following the official NVIDIA sample and then parsed it with the TensorRT API, so this step may require changing some code in the project.
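For context, wiring a calibrator into the builder config follows the pattern of NVIDIA's official INT8 sample; roughly like this (a sketch under that assumption, not our project code; the calibrator is an IInt8EntropyCalibrator2 like the one sketched earlier):

#include <NvInfer.h>

// Sketch: request an INT8 engine. The calibrator supplies the per-tensor
// scales; without it (or a calibration cache) INT8 mode cannot work.
nvinfer1::ICudaEngine* buildInt8Engine(nvinfer1::IBuilder* builder,
                                       nvinfer1::INetworkDefinition* network,
                                       nvinfer1::IInt8Calibrator* calibrator) {
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);           // 256 MB; adjust as needed
    config->setFlag(nvinfer1::BuilderFlag::kINT8);  // request INT8 kernels
    config->setInt8Calibrator(calibrator);          // scales come from calibration

    // Allow FP16 fallback for layers that have no INT8 implementation.
    if (builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16);

    auto engine = builder->buildEngineWithConfig(*network, *config);
    config->destroy();
    return engine;
}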
Hi,
Is it possible to share the sample via private message?
Thanks.