When I use yolo_deepstream/tree/main/tensorrt_yolov7 and run a batch detection task with “yolov7QAT”, the following error occurs:
./build/detect --engine=yolov7QAT.engine --img=./imgs/horses.jpg,./imgs/zidane.jpg
Whether “yolov7_qat_640.onnx” is downloaded from “NVIDIA-AI-IOT/yolo_deepstream/yolov7_qat” or self-trained (both show the same structure in Netron), running ./build/detect produces the same error message.
It runs fine with the non-QAT engines “yolov7db4fp32.engine” and “yolov7db4fp16.engine”.
Environment
TensorRT Version: 5.1
GPU Type: Jetson AGX Xavier
Nvidia Driver Version:
CUDA Version: 11.4.315
CUDNN Version: 8.6.0.166
Operating System + Version: 35.2.1 (JetPack 5.1)
Python Version (if applicable): Python 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.0a0+2c916ef.nv22.3
Baremetal or Container (if container which image + tag):
Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the snippet below.
check_model.py:
import onnx

filename = "yourONNXmodel"  # placeholder: path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec “--verbose” log for further debugging.
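For reference, a typical trtexec invocation for this kind of check might look like the following (a sketch; the ONNX filename and log path are placeholders, and the flags are standard trtexec options):

```shell
# Build an engine from the ONNX model with verbose logging captured to a file.
# --int8 and --fp16 are typical for a QAT model; adjust to your workflow.
/usr/src/tensorrt/bin/trtexec --onnx=yolov7_qat_640.onnx \
    --saveEngine=yolov7QAT.engine \
    --int8 --fp16 \
    --verbose 2>&1 | tee trtexec_verbose.log
```

The saved trtexec_verbose.log is what you would attach to the thread for debugging.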
Thanks!
When I run the dpkg -l | grep -i tensor command, I get the following output, so my TensorRT should be 8.5.2.2:
ii graphsurgeon-tf 8.5.2-1+cuda11.4 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 8.5.2-1+cuda11.4 arm64 TensorRT binaries
ii libnvinfer-dev 8.5.2-1+cuda11.4 arm64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.2-1+cuda11.4 all TensorRT samples
ii libnvinfer8 8.5.2-1+cuda11.4 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii libnvparsers8 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii nvidia-tensorrt 5.1-b147 arm64 NVIDIA TensorRT Meta Package
ii nvidia-tensorrt-dev 5.1-b147 arm64 NVIDIA TensorRT dev Meta Package
ii python3-libnvinfer 8.5.2-1+cuda11.4 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.2-1+cuda11.4 arm64 Python 3 development package for TensorRT
ii tensorrt 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT
ii tensorrt-libs 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT runtime libraries
ii uff-converter-tf 8.5.2-1+cuda11.4 arm64 UFF converter for TensorRT package
But when I use the jtop command, I get the message “TensorRT: 5.1”.
Which version do I have?
We are able to successfully build the TensorRT engine on version 8.6.
Please make sure you’re using the latest TensorRT version. For better help, we are moving this post to the Jetson AGX Xavier forum.
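To see which TensorRT version your runtime actually reports, you can query the Python bindings directly (a minimal sketch, assuming the python3-libnvinfer package from the dpkg list above; jtop typically shows the JetPack meta-package version, e.g. the 5.1-b147 of nvidia-tensorrt, rather than the library version):

```python
# Print the TensorRT version seen by the Python bindings;
# falls back gracefully if the bindings are not installed.
try:
    import tensorrt
    version = tensorrt.__version__  # e.g. "8.5.2.2"
except ImportError:
    version = None

if version:
    print("TensorRT:", version)
else:
    print("TensorRT Python bindings not found")
```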
@AastaLLL
I successfully loaded the model with trtexec, explicitly specifying the batch size, and it executed successfully:
/usr/src/tensorrt/bin/trtexec --loadEngine=yolov7QAT.engine --verbose --batch=12
However, when I change the “batch-size” parameter in the DeepStream configuration file “pgie_yolov7_config.txt”:
batch-size=1 — execution succeeds
batch-size=2 — execution fails
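For context, the relevant part of “pgie_yolov7_config.txt” would look roughly like the fragment below (a sketch, not the actual file; key names follow the standard DeepStream nvinfer config format, and the engine filename is taken from the commands above):

```
[property]
gpu-id=0
model-engine-file=yolov7QAT.engine
# batch-size must not exceed the max batch size the engine was built with;
# here batch-size=1 succeeds while batch-size=2 fails
batch-size=2
# network-mode=1 selects INT8 inference in nvinfer
network-mode=1
```

A common cause of the 1-works/2-fails pattern is an engine serialized with max batch size 1 (or with a fixed batch-1 explicit shape), so checking the batch dimension the engine was built with is a reasonable next step.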