TensorRT importing issues

## Description

I am trying to use the samples on your GitHub page to run Detectron2 with TensorRT. I am currently using a Jetson TX2 with a Cogswell carrier board. My issue is that despite having TensorRT and CUDA installed, I get an error:

```
  File "build_engine.py", line 24, in <module>
    import tensorrt as trt
ImportError: No module named tensorrt
```
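For reference, a quick way to check whether (and from where) the interpreter can see a module. This is a generic diagnostic sketch, not specific to the TX2: on JetPack the `python3-libnvinfer` package installs the bindings under the system `dist-packages` directory, and the import fails exactly like the trace above when that directory is not on `sys.path` (e.g. inside a virtualenv or a different Python version).

```python
# Diagnostic sketch: report where a module would be imported from, or,
# if it is not importable, dump the interpreter's search path so you can
# see which dist-packages/site-packages directories are actually consulted.
import importlib.util
import sys

def find_module_location(name):
    """Return the file location of module `name`, or None if not importable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None and spec.origin else None

if __name__ == "__main__":
    location = find_module_location("tensorrt")
    if location:
        print("tensorrt found at:", location)
    else:
        print("tensorrt not importable; interpreter search path:")
        for path in sys.path:
            print("  ", path)
```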

## Environment
**Cuda**

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
```

**TensorRT**

```
sudo dpkg -l | grep TensorRT
ii  graphsurgeon-tf                            8.2.1-1+cuda10.2                           arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                             8.2.1-1+cuda10.2                           arm64        TensorRT binaries
ii  libnvinfer-dev                             8.2.1-1+cuda10.2                           arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                             8.2.1-1+cuda10.2                           all          TensorRT documentation
ii  libnvinfer-plugin-dev                      8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                         8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-samples                         8.2.1-1+cuda10.2                           all          TensorRT samples
ii  libnvinfer8                                8.2.1-1+cuda10.2                           arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                       8.2.1-1+cuda10.2                           arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                          8.2.1-1+cuda10.2                           arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                           8.2.1-1+cuda10.2                           arm64        TensorRT parsers libraries
ii  libnvparsers8                              8.2.1-1+cuda10.2                           arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt              8.2.1.8-1+cuda10.2                         arm64        Jetpack TensorRT CSV file
ii  nvidia-tensorrt                            4.6.2-b5                                   arm64        NVIDIA TensorRT Meta Package
ii  python3-libnvinfer                         8.2.1-1+cuda10.2                           arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                     8.2.1-1+cuda10.2                           arm64        Python 3 development package for TensorRT
ii  tensorrt                                   8.2.1.8-1+cuda10.2                         arm64        Meta package of TensorRT
ii  uff-converter-tf                           8.2.1-1+cuda10.2                           arm64        UFF converter for TensorRT package
```


## Steps To Reproduce

Clone the NVIDIA TensorRT repo, then run `python3 build_engine.py --help`.

Sorry if I'm not being very specific and my environment isn't ideal; I hope it's enough to help. If you need more information, feel free to ask and give me steps on how to get it. Thanks in advance. Greetings, Frank

Hi,
Please refer to the links below regarding custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

@NVES so you're basically saying I shouldn't use the samples put out by NVIDIA and should just make my own? The issue I'm having isn't with the NVIDIA code, but with TensorRT not importing properly.

Hi,

It looks like there is an issue with your TensorRT setup.
As you’re using the Jetson platform, we are moving this post to the Jetson TX2 forum to get better help.

Thank you.

Ok, thank you. You don't happen to have any ideas on how to fix this issue?

Dear @frankvanpaassen3,
May I know the JetPack version? Are you able to run any other Python TRT samples on the target?
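Something along these lines can confirm the bindings actually work, not just import. This is a minimal sketch with a hypothetical helper name; it degrades to an explanatory message when the module is absent, so it doubles as a diagnostic:

```python
# Minimal TensorRT sanity check: report the installed version and confirm
# a Builder can be constructed. Falls back to a message (instead of a
# traceback) when the module is not importable.
def trt_sanity_check():
    try:
        import tensorrt as trt
    except ImportError as exc:
        return "tensorrt not importable: %s" % exc
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    return "TensorRT %s, builder OK: %s" % (trt.__version__, builder is not None)

if __name__ == "__main__":
    print(trt_sanity_check())
```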

So I ended up figuring out how to import TensorRT, but now I'm running into the problem of not being able to import other important libraries (NumPy, PyTorch, etc.). But back to your question: I used JetPack 4.6.2 to flash.

Dear @frankvanpaassen3,
> I ended up figuring out how to import TensorRT

Could you share how you fixed it?

What error do you see when you try to import NumPy or PyTorch? How did you install them? Is it related to importing or to version compatibility?

As for how I fixed it: it was just following the steps given and then creating a symlink.
As for the error I got when importing NumPy and PyTorch (I forget what it was exactly), it was something along the lines of "no version found that matches the requirements (from versions: none)". I've also been getting errors with ONNX: `ERROR: Could not build wheels for onnx, which is required to install pyproject.toml-based projects`.
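Roughly, the symlink approach can be sketched like this. The source path is the usual JetPack `dist-packages` default and the helper name is made up for illustration; the destination would be whatever environment's `site-packages` is failing the import:

```python
# Sketch of the symlink fix: the Debian packages drop the tensorrt bindings
# into the system dist-packages directory, but a virtualenv (or a different
# Python) only searches its own site-packages. Linking the package directory
# across makes the module importable without copying it.
import os

def link_module(src_dir, dest_dir, name="tensorrt"):
    """Symlink package `name` from src_dir into dest_dir; return the link path."""
    src = os.path.join(src_dir, name)
    dest = os.path.join(dest_dir, name)
    if not os.path.exists(src):
        raise FileNotFoundError("package not found: %s" % src)
    if not os.path.islink(dest) and not os.path.exists(dest):
        os.symlink(src, dest)
    return dest

# Typical invocation on a JetPack 4.6 TX2 (destination path is hypothetical):
# link_module("/usr/lib/python3.6/dist-packages",
#             "/home/user/venv/lib/python3.6/site-packages")
```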

Dear @frankvanpaassen3,
So the original issue is resolved and now you have issues with setting up/importing pytorch on TX2. Could you please file a new topic?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.