I want to measure the mean Average Precision (mAP) of an object detection model (.engine) on a Jetson device. I already have the engine models, and my idea was to use the TensorRT Python API to create a runtime for running them, dump the predictions to text files, and feed those into an mAP calculation with pycocotools (rough sketches of both steps are included below). However, I cannot import the TensorRT API; I get the following message:
Python 3.8

import tensorrt as trt

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.8/dist-packages/tensorrt/__init__.py", line 68, in <module>
    from .tensorrt import *
ImportError: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8: undefined symbol: _ZN5nvdla8IProfile37setCanCompressStructuredSparseWeightsEb
What could the problem be, or would you suggest another approach for getting the mAP?
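For context, this is roughly the first step I had in mind once the import works: deserializing the engine with the TensorRT Python API and running inference through PyCUDA buffers. It is only a minimal sketch under some assumptions: a static-shape engine with a single input (binding 0) and a single output (binding 1), and placeholder values for the engine path and input shape; the raw output would still need the model-specific post-processing/NMS before writing out detections.

```python
# Minimal sketch: deserialize a TensorRT engine and run one inference.
# Assumptions: static shapes, binding 0 = input, binding 1 = output.
# "model.engine" and the (1, 3, 640, 640) input shape are placeholders.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, batch):
    with engine.create_execution_context() as context:
        bindings, host_bufs, dev_bufs = [], [], []
        # Allocate host/device buffers for every binding
        for i in range(engine.num_bindings):
            shape = context.get_binding_shape(i)
            dtype = trt.nptype(engine.get_binding_dtype(i))
            host = cuda.pagelocked_empty(trt.volume(shape), dtype)
            dev = cuda.mem_alloc(host.nbytes)
            bindings.append(int(dev))
            host_bufs.append(host)
            dev_bufs.append(dev)
        # Copy the preprocessed batch into the input buffer and run
        np.copyto(host_bufs[0], batch.ravel())
        cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
        context.execute_v2(bindings)
        cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
        return host_bufs[1]  # raw output, still needs post-processing / NMS

engine = load_engine("model.engine")            # placeholder path
dummy = np.zeros((1, 3, 640, 640), np.float32)  # placeholder input shape
out = infer(engine, dummy)
print(out.shape)
```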
General information
- I created the engine models to work inside the "nvcr.io/nvidia/deepstream-l4t:6.2-base" container, and they run properly with DeepStream
- TensorRT 8.5.2.2 is installed in the container
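And for the mAP step itself, this is the pycocotools part I was planning, assuming I first convert the per-image txt predictions into a COCO-format results JSON (one record per detection with image_id, category_id, bbox as [x, y, w, h], score). The file names are placeholders:

```python
# Sketch of the mAP computation with pycocotools.
# "instances_val.json" and "trt_predictions.json" are placeholder paths.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "instances_val.json"    # COCO-style ground-truth annotations
res_file = "trt_predictions.json"  # detections exported from the engine

coco_gt = COCO(ann_file)
coco_dt = coco_gt.loadRes(res_file)

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[.50:.95], AP50, AP75, etc.
```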