Hi, I’ve been stuck on a problem for two days, and I hope I can get some answers here.
I’m using the “nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04” Docker image to create a container for converting yolov5.pt to a TensorRT engine with tensorrtx, since I’ll later deploy on a Jetson Nano 2GB.
When I install TensorRT 7.1.3 following the Debian installation guide, apt always tries to install versions I don’t want:
Reading package lists... Done
Building dependency tree
Reading state information... Done
cuda-nvrtc-10-2 is already the newest version (10.2.89-1).
The following additional packages will be installed:
cuda-cccl-11-7 cuda-cudart-11-1 cuda-cudart-11-7 cuda-cudart-dev-11-1 cuda-cudart-dev-11-7 cuda-driver-dev-11-1 cuda-driver-dev-11-7 cuda-nvcc-11-1
cuda-toolkit-11-7-config-common cuda-toolkit-11-config-common cuda-toolkit-config-common libcublas-11-7 libcublas-dev-11-7 libnvinfer-bin libnvinfer-dev
libnvinfer-plugin-dev libnvinfer-plugin8 libnvinfer-samples libnvinfer8 libnvonnxparsers-dev libnvonnxparsers8 libnvparsers-dev libnvparsers8
The following NEW packages will be installed:
cuda-cccl-11-7 cuda-cudart-11-1 cuda-cudart-11-7 cuda-cudart-dev-11-1 cuda-cudart-dev-11-7 cuda-driver-dev-11-1 cuda-driver-dev-11-7 cuda-nvcc-11-1
cuda-toolkit-11-7-config-common cuda-toolkit-11-config-common cuda-toolkit-config-common libcublas-11-7 libcublas-dev-11-7 libnvinfer-bin libnvinfer-plugin-dev
libnvinfer-plugin8 libnvinfer-samples libnvinfer8 libnvonnxparsers-dev libnvonnxparsers8 libnvparsers-dev libnvparsers8 tensorrt
The following packages will be upgraded:
libnvinfer-dev
1 upgraded, 23 newly installed, 0 to remove and 6 not upgraded.
Need to get 1280 MB of archives.
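From the log, apt is resolving the TensorRT metapackages against the newer CUDA 11.x repositories. If the goal is to keep the CUDA 10.2 build of TensorRT 7.1.3, one common workaround is to pin the TensorRT packages to that version so apt cannot pull in the CUDA 11.x dependencies. This is a sketch only: the package list and the exact version string `7.1.3-1+cuda10.2` are assumptions, so check the real string with `apt-cache policy libnvinfer7` before using it.

```
Explanation: Hypothetical pin file, e.g. /etc/apt/preferences.d/tensorrt-713
Explanation: Replace the version with the one shown by `apt-cache policy libnvinfer7`
Package: tensorrt libnvinfer7 libnvinfer-dev libnvinfer-plugin7 libnvinfer-plugin-dev libnvonnxparsers7 libnvonnxparsers-dev libnvparsers7 libnvparsers-dev
Pin: version 7.1.3-1+cuda10.2
Pin-Priority: 1001
```

Alternatively, apt accepts explicit versions on the command line (e.g. `sudo apt-get install tensorrt=7.1.3-1+cuda10.2`), which fails with a clear error instead of silently upgrading when the pinned version conflicts with other dependencies.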
Here is the CUDA version from the command nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
Here is the driver version and CUDA version from command nvidia-smi:
NVIDIA-SMI 515.65.01 Driver Version: 516.94 CUDA Version: 11.7
The graphic card is MX250.
I’d like to know how to fix this, because the engine I build with TensorRT 8.4.3 doesn’t work in the Jetbot container on the Jetson Nano 2GB.
I think that may be due to the TensorRT version, so I want to use the “default version (TensorRT 7.1.3)” from Jetbot to convert my yolov5 model.
Hi,
Please refer to the links below for custom plugin implementation and samples:
While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend writing new plugins, or refactoring existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
However, I’m sorry, I’m a newbie in the Jetson world…
I don’t quite understand why I should use a custom plugin implementation.
Do you mean that the incompatibility of the TensorRT 8.4.3 engine on my Jetbot is not caused by the default version installed in Jetbot (7.1.3)?
So I should add custom layers (plugins) to an ONNX graph, convert the PyTorch model to ONNX, and then convert the ONNX model using TensorRT 8.4.3?
It looks like you’re using the Jetson platform; installing TensorRT on Jetson by the method above may not work.
You may need to update the JetPack version.
If you’re interested in using containers, please try the TensorRT NGC container.
If you need further assistance with installation, we are moving this post to the Jetson Nano forum so you can get better help.
Yes, I’m using a Jetbot flashed with the jetbot-043_nano-2gb-jp45 image on a Jetson Nano 2GB.
And I’d like to deploy my custom yolov5 and yolov4 models on it.
But I can’t convert the PyTorch or Darknet models to TensorRT engines following the instructions provided by tensorrtx or tensorrt_demos.
The yolov5 build fails with fatal error: opencv2/dnn/dnn.hpp: No such file or directory, and the yolov4 conversion fails with an out-of-memory error…
So I’d like to build an environment like my Jetbot on another computer for debugging, and that is where I hit the TensorRT 7.1.3 installation problem.
I’ll try the TensorRT container!
Or if you have any other solution, please let me know!
Thanks a lot!
I used the TensorRT NGC container nvcr.io/nvidia/tensorrt:20.07-py3 to convert my custom yolov5 PyTorch model, and it worked successfully!
However, when I use the engine in the Object Following - Live Demo notebook on my Jetbot, it doesn’t work!!!
The following is the error message:
AttributeError    Traceback (most recent call last)
<ipython-input-1-8db3c9432359> in <module>
1 from jetbot import ObjectDetector
2
----> 3 model = ObjectDetector('yolov5s62.engine')
/usr/local/lib/python3.6/dist-packages/jetbot-0.4.3-py3.6.egg/jetbot/object_detection.py in __init__(self, engine_path, preprocess_fn)
27 load_plugins()
28 self.trt_model = TRTModel(engine_path, input_names=[TRT_INPUT_NAME],
---> 29 output_names=[TRT_OUTPUT_NAME, TRT_OUTPUT_NAME + '_1'])
30 self.preprocess_fn = preprocess_fn
31
/usr/local/lib/python3.6/dist-packages/jetbot-0.4.3-py3.6.egg/jetbot/tensorrt_model.py in __init__(self, engine_path, input_names, output_names, final_shapes)
57 with open(engine_path, 'rb') as f:
58 self.engine = self.runtime.deserialize_cuda_engine(f.read())
---> 59 self.context = self.engine.create_execution_context()
60
61 if input_names is None:
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
I wonder why this error happens.
Is it because the conversion was done on my host PC, causing an incompatibility on the Jetbot?
If someone has the answer, please let me know!
Thanks!
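For what it’s worth, deserialize_cuda_engine returns None (rather than raising) when the engine can’t be loaded, which is why the failure only surfaces later as the NoneType AttributeError on create_execution_context. Below is a small sketch of a loader that fails loudly instead; the load_engine helper and its error message are illustrative, not part of the jetbot API:

```python
def load_engine(runtime, engine_path):
    """Deserialize a TensorRT engine, failing loudly instead of returning None.

    `runtime` is a tensorrt.Runtime. Constructing it with a verbose logger,
    e.g. trt.Runtime(trt.Logger(trt.Logger.VERBOSE)), makes TensorRT print
    the actual reason for the failure (typically a version mismatch between
    the TensorRT that serialized the engine and the one deserializing it).
    """
    with open(engine_path, 'rb') as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        raise RuntimeError(
            "Failed to deserialize '%s': it was most likely built with a "
            "different TensorRT version than the one installed here."
            % engine_path)
    return engine
```

Running this on the Jetbot with a verbose logger should print the underlying deserialization error, which would confirm (or rule out) a 8.4.3-vs-7.1.3 version mismatch.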
Please note that there are dependencies between GPU libraries (e.g., TensorRT) and the OS version.
To run TensorRT 7.1.3, please reflash your environment with JetPack 4.5.1.
Since TensorRT engines are not portable, please convert the model on the target device where you will use it.
And please make sure all the libraries (Jetbot and TensorRT) come from the same JetPack release for compatibility.
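The version requirement above is strict: a serialized TensorRT engine is only guaranteed to load under the exact TensorRT version that built it, which is why an engine built with 8.4.3 in the NGC container fails under the 7.1.3 runtime shipped with JetPack 4.5.x. A minimal illustrative check (the helper name is made up; on each machine the installed version can be read with python3 -c "import tensorrt; print(tensorrt.__version__)"):

```python
def engine_compatible(build_trt_version, runtime_trt_version):
    """Return True if an engine serialized by build_trt_version is expected
    to deserialize under runtime_trt_version.

    Classic TensorRT requires an exact version match between the library
    that serialized an engine and the one that deserializes it.
    """
    return build_trt_version == runtime_trt_version


# Engine built in the NGC container vs. the JetPack 4.5 runtime:
engine_compatible("8.4.3", "7.1.3")  # → False: rebuild on the Jetbot
```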