INT8 YOLOv5 on Jetson issue with DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
6.2
• JetPack Version (valid for Jetson only)
5.1 rev 1
• TensorRT Version
8.5.2.2-1+cuda11.4
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Hi, I have been trying to deploy YOLOv5 on the Jetson Orin platform; I am currently using an AGX dev kit emulating an Orin Nano 8GB. I followed the tutorial here https://github.com/ultralytics/yolov5/issues/9627 and was able to get FP32 and FP16 working. However, when I try the INT8 section, it fails to run and gives me a core dumped error related to a CUDA engine creation failure. The log is attached below, and the config files I used with DeepStream are attached as well. Any thoughts on how I might solve this issue?
output.txt (10.2 KB)

config_infer_primary_yoloV5.txt (648 Bytes)
deepstream_app_config.txt (870 Bytes)

From the attached log, there is no calib.table file in your environment. You can also refer to our yolov5 sample: https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization.
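For context, INT8 mode in the nvinfer config requires the calibration table to exist at the configured path. A minimal sketch of the relevant keys (key names are from the gst-nvinfer config reference; the file names here are placeholders, not your actual paths):

```ini
[property]
onnx-file=yolov5s.onnx
model-engine-file=model_b1_gpu0_int8.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# the INT8 engine build fails if this file is missing
int8-calib-file=calib.table
```

With network-mode=1 and no calib.table on disk, TensorRT cannot build the INT8 engine, which matches the engine-creation failure in the log.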

Yeah, I was a bit confused about where to get calib.table because there didn’t seem to be an explanation in that GitHub tutorial. I will take a look at the sample you linked.

Will the linked NVIDIA repo work for YOLOv8?

run.txt (18.3 KB)

I ran the steps in the NVIDIA repo, but I’m stuck at the `pip install -r requirement_export.txt` step; there seems to be an issue installing onnxruntime. Any idea how to proceed?

Did you run it in the nvcr.io/nvidia/pytorch:22.03-py3 container?

I can’t access the link. Can you give me the docker command that uses the container?

You can find the container from the link below: https://catalog.ngc.nvidia.com/containers
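In case it helps, here is a hedged sketch of how such an NGC container is typically launched; the mount path is a placeholder, and this is intended for an x86_64 machine with an NVIDIA GPU (the model-export environment the sample assumes), not for the Jetson itself:

```shell
# Image tag taken from the earlier reply in this thread.
IMAGE=nvcr.io/nvidia/pytorch:22.03-py3

if command -v docker >/dev/null 2>&1; then
  # --gpus all exposes the host GPUs; -v mounts the current
  # directory (placeholder path) into the container workspace.
  docker run --gpus all -it --rm \
    -v "$PWD":/workspace \
    "$IMAGE"
else
  echo "docker not found on this machine"
fi
```

The exact flags (workspace mount, user mapping, shared memory size) vary per setup, so treat this as a starting point rather than the canonical command.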

I tried running the nvcr.io/nvidia/deepstream:6.1.1-devel container for the “prepare for DeepStream inference” step, and it gives me:

“WARNING: The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /usr/bin/bash: exec format error”
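For what it’s worth, the warning itself names the cause: that image is built for linux/amd64, while the Jetson host is arm64, so the binaries inside it cannot execute there. A quick sanity check of the host side (a sketch):

```shell
# "exec format error" means the image targets a different CPU
# architecture than the host. Confirm what the host reports:
host_arch=$(uname -m)
echo "host architecture: $host_arch"
# A Jetson reports aarch64, while nvcr.io/nvidia/deepstream:6.1.1-devel
# is an x86_64 (linux/amd64) image, so an arm64 (l4t) image is needed.
```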

Is there another DeepStream docker container I can use? Would `nvcr.io/nvidia/deepstream-peopledetection:r32.4.2` work?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Since your platform is Jetson, please use the L4T docker image:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream-l4t/tags
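A sketch of launching an l4t container on the Jetson; the tag `6.2-triton` is an assumption to match the DeepStream 6.2 setup in this thread, so pick whatever tag on that page fits your JetPack/DeepStream versions:

```shell
# Tag is an assumption; check the NGC tags page for your versions.
IMAGE=nvcr.io/nvidia/deepstream-l4t:6.2-triton

# Only attempt this on an aarch64 (Jetson) host with docker installed.
if command -v docker >/dev/null 2>&1 && [ "$(uname -m)" = "aarch64" ]; then
  # --runtime nvidia exposes the Jetson GPU inside the container;
  # the X11 socket mount is only needed for on-screen display sinks.
  docker run -it --rm --net=host --runtime nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    "$IMAGE"
else
  echo "run this on the Jetson (aarch64) with docker installed"
fi
```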

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.