Use TensorRT API to create a runtime for running models on Jetson

I want to find a way to measure the mean average precision (mAP) of an object detection model (.engine) on a Jetson device. I already have the engine models, and my plan was to use the TensorRT API to create a runtime for running them, write the predictions to txt files, and feed those into the mAP calculation with pycocotools. However, I am unable to use the TensorRT API at all, since the import fails with the following message:

Python 3.8

import tensorrt as trt

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.8/dist-packages/tensorrt/__init__.py", line 68, in <module>
    from .tensorrt import *
ImportError: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8: undefined symbol: _ZN5nvdla8IProfile37setCanCompressStructuredSparseWeightsEb
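
For reference, this is roughly the runtime code I had in mind once the import works. It is only a minimal sketch: the engine path is a placeholder, it assumes static input shapes, and it assumes a single input binding (index 0) and a single output binding (index 1), which of course depends on the model:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    """Deserialize a serialized .engine file into a TensorRT engine."""
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("model.engine")  # placeholder path
context = engine.create_execution_context()

# Allocate a pinned host buffer and a device buffer for every binding.
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# One inference pass: copy the preprocessed image in, execute, copy results out
# (assumes binding 0 is the input and binding 1 the output).
image = np.zeros(host_bufs[0].shape, dtype=host_bufs[0].dtype)  # stand-in input
np.copyto(host_bufs[0], image.ravel())
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
# host_bufs[1] now holds the raw detections to decode and dump to a txt file.
```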

What could be the problem, or would you suggest another approach for getting the mAP?
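
As for the mAP step itself, I was expecting a fairly standard pycocotools evaluation along these lines, after converting the per-image prediction txt files into a COCO-format results JSON (the file names here are placeholders):

```python
import json

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Detections in the standard COCO "results" format, one dict per box:
# {"image_id": int, "category_id": int, "bbox": [x, y, w, h], "score": float}
with open("predictions.json") as f:
    detections = json.load(f)

coco_gt = COCO("instances_val.json")   # ground-truth annotations (placeholder)
coco_dt = coco_gt.loadRes(detections)  # register detections against the GT

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[.50:.95], AP@.50, AP@.75, AR, etc.
```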

General information

  • I created the engine models to work in the "nvcr.io/nvidia/deepstream-l4t:6.2-base" container, and they run properly with DeepStream
  • TensorRT 8.5.2.2 is installed in the container (a quick way to double-check the actual library version is sketched below)
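
In case it helps with diagnosing the undefined-symbol error, the version of the installed library can be read without importing the broken tensorrt module. This is just a sketch, and it assumes libnvinfer exports the C function getInferLibVersion(), so it is worth verifying on the device:

```python
import ctypes

# Load the exact library the failing import complained about.
lib = ctypes.CDLL("/usr/lib/aarch64-linux-gnu/libnvinfer.so.8")

# getInferLibVersion() packs the version into one integer,
# e.g. 8502 for TensorRT 8.5.2.
lib.getInferLibVersion.restype = ctypes.c_int32
print("libnvinfer reports version:", lib.getInferLibVersion())
```

If that number does not correspond to the 8.5.2.2 bindings in /usr/lib/python3.8/dist-packages/tensorrt, a version mismatch between the Python bindings and the shared library would be my first suspect.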

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the content of the configuration files, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used
