How to use TensorRT in Python on AGX Xavier JetPack 5.1

I can't find the TensorRT package in the dist-packages of the Python that comes with JetPack 5.1. TensorRT was installed into Python after I installed JetPack 5.0.1 previously, but the TensorRT Python package disappeared after I updated my system to JetPack 5.1.
Do I need to install TensorRT again, or is there another way to use TensorRT in Python?
I once tried to download the TensorRT installation package from the official website and install it into my Python. Running dpkg -l | grep TensorRT on the device shows the TensorRT version as 8.5.2-1, but this version is no longer available on the official website.

Hi,

There are some changes in the TensorRT installation.

This will be fixed in our next JetPack revision.
Currently, please run the command in the following link to install the TensorRT Python bindings manually:

No module named 'tensorrt' Jetson AGX Orin

Hi, please try the following command to install the Python bindings for TensorRT: $ sudo apt install python3-libnvinfer* We are checking with our internal team why the Python bindings are not installed by default. We will share more information with you later. Thanks.

Thanks.
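For a quick sanity check after installing, the bindings can be verified from the command line (the apt command is from the linked post above; the import line just confirms the package is visible to Python):

$ sudo apt install python3-libnvinfer*
$ python3 -c "import tensorrt; print(tensorrt.__version__)"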

Thank you for reminding me of this change!


Hi,
The related TensorRT issue should already be fixed in the newly published JetPack 5.1 (rev. 1).

Thanks for your help. I will update my system soon.

Hello, I updated JetPack 5.1 to the latest version and rebooted. The following are the update commands:
$ sudo apt update
$ sudo apt list --upgradable
$ sudo apt upgrade
$ sudo apt install nvidia-jetpack

I ran the command $ cat /etc/nv_tegra_release and got the output:
# R35 (release), REVISION: 2.1, GCID: 32413640, BOARD: t186ref, EABI: aarch64, DATE: Tue Jan 24 23:38:33 UTC 2023

After the update, I can find tensorrt in /usr/lib/python3.8/dist-packages, and I can import the tensorrt package ($ pip install tensorrt) in Python 3.8. However, the following error occurred when using TensorRT for inference:

(pytorchpy38) nvidia@ubuntu:/media/nvidia/SN128/MDCNN-master - debug version$ python 4_build_onnx2trt.py
[03/08/2023-00:08:37] [TRT] [W] Unable to determine GPU memory usage
[03/08/2023-00:08:38] [TRT] [W] Unable to determine GPU memory usage
[03/08/2023-00:08:38] [TRT] [W] CUDA initialization failure with error: 222. Please check your CUDA installation: cuda-installation-guide-linux 12.1 documentation
Traceback (most recent call last):
  File "4_build_onnx2trt.py", line 72, in <module>
    main()
  File "4_build_onnx2trt.py", line 65, in main
    serialized_engine = build_engine_onnx(onnx_model_file)
  File "4_build_onnx2trt.py", line 29, in build_engine_onnx
    builder = trt.Builder(TRT_LOGGER)  # create builder
TypeError: pybind11::init(): factory function returned nullptr
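To check whether CUDA itself initializes outside of TensorRT, a minimal pycuda check can separate a broken CUDA setup from a mismatched tensorrt build (a sketch; it assumes pycuda is installed in the same conda environment, as in the script below):

import pycuda.driver as cuda

cuda.init()  # raises a pycuda error if the CUDA driver cannot initialize
print("driver version:", cuda.get_driver_version())
print("device count:", cuda.Device.count())
print("device 0:", cuda.Device(0).name())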

I guess it is caused by a mismatch between the CUDA version and the TensorRT version.
So I checked the TensorRT version (tensorrt.__version__) in Python 3.8, and it returned '8.5.2.2'. But according to the official website, TensorRT 8.5.2.2 requires CUDA 11.8, while the default CUDA version in JetPack 5.1 is 11.4. I think that is why this error occurs.
Do I need to reinstall CUDA 11.8 to use TensorRT?
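For reference, the CUDA and TensorRT builds actually installed on the device can be cross-checked like this (a sketch; package names are the ones JetPack normally ships, so treat them as assumptions):

$ /usr/local/cuda/bin/nvcc --version
$ dpkg -l | grep nvinfer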

I hope to get your help!

Hi,

Did you build the TensorRT Python bindings on your own, or did you use the default package included in JetPack 5.1?

Thanks.

Hi, this is my code for converting ONNX to an engine; I didn't use trtexec.
Here onnx_model_file = 'weights/exp_3/best_5_onnx.onnx' is newly generated on this device. I convert the PyTorch model to an ONNX model through torch.onnx.export().
I can run all of my conversion code on my other device, an NVIDIA Jetson Xavier NX (JetPack 5.0.1), and run engine inference normally, but generating the engine fails on this NVIDIA Jetson AGX Xavier with JetPack 5.1.
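For context, the export step looks roughly like this (a sketch with a placeholder model; the real network, input shape, and opset are assumptions, not the exact values from my project):

import torch
import torch.nn as nn

# placeholder model standing in for the real network
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

torch.onnx.export(
    model,
    dummy_input,
    "weights/exp_3/best_5_onnx.onnx",  # path used in the script below
    opset_version=13,                  # assumed opset
    input_names=["input"],
    output_names=["output"],
)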

build_onnx2trt.py

import os
try:
    import pycuda.autoprimaryctx
except ModuleNotFoundError:
    import pycuda.autoinit
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_onnx(model_file):
    builder = trt.Builder(TRT_LOGGER)  # create the builder
    print("the DLA num:", builder.num_DLA_cores)
    # create the network (explicit batch)
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    config = builder.create_builder_config()  # builder configuration
    parser = trt.OnnxParser(network, TRT_LOGGER)  # ONNX parser
    config.max_workspace_size = 1 << 30  # 1 GiB
    config.set_flag(trt.BuilderFlag.FP16)
    # Load and parse the ONNX model to populate the TensorRT network.
    with open(model_file, "rb") as model:
        if not parser.parse(model.read()):
            print("ERROR: Failed to parse the ONNX file.")
            for error in range(parser.num_errors):
                print(parser.get_error(error))
            return None
    return builder.build_serialized_network(network, config)

def main():
    trtFile = 'weights/exp_3/best_5_trt.trt'
    onnx_model_file = 'weights/exp_3/best_5_onnx.onnx'
    # Build or load the TensorRT engine.
    if os.path.isfile(trtFile):  # if a serialized engine file exists, read it directly
        with open(trtFile, "rb") as f:
            serialized_engine = f.read()
        if not serialized_engine:
            print("Failed getting serialized engine!")
            return
        engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(serialized_engine)
        print("Succeeded getting serialized trt!")
    else:
        serialized_engine = build_engine_onnx(onnx_model_file)
        if serialized_engine is None:
            print("Failed building serialized engine!")
            return
        engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(serialized_engine)
        with open("weights/exp_3/best_5_trt.engine", "wb") as f:
            f.write(serialized_engine)

if __name__ == "__main__":
    main()
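For completeness, once deserialization succeeds, running inference with the engine looks roughly like this (a minimal sketch for the TensorRT 8.x Python API, reusing trt and pycuda from the script above; it assumes a single input at binding 0 and a single output at binding 1, which may not match the real model):

import numpy as np
import pycuda.driver as cuda

def infer(engine, input_array):
    context = engine.create_execution_context()
    bindings, host_bufs, dev_bufs = [], [], []
    # Allocate a host/device buffer pair for every binding.
    for i in range(engine.num_bindings):
        size = trt.volume(engine.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host = cuda.pagelocked_empty(size, dtype)
        dev = cuda.mem_alloc(host.nbytes)
        host_bufs.append(host)
        dev_bufs.append(dev)
        bindings.append(int(dev))
    np.copyto(host_bufs[0], input_array.ravel())  # binding 0 assumed to be the input
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)                  # synchronous execution
    cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])   # binding 1 assumed to be the output
    return host_bufs[1]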

To solve this problem, I will try installing CUDA 11.8 in my conda environment, which may help.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.